Hi !
I have set up hadoop on my machine as per
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
I am able to run applications with the capacity scheduler by submitting jobs to a
particular queue as hduser, the owner of Hadoop.
I tried this from another user:
1.
You can do it.
If you understand how Hadoop works, then you should realize that it's
a Python question and a Linux question.
Pass the native files via -files and set up environment variables
via mapred.child.env.
I've done a similar thing with Ruby. For Ruby, the environment
variables are PATH,
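A concrete shape for this, sketched as a streaming job (the jar path, file names, and environment values below are placeholders, not taken from this thread):

```shell
# Ship native files to each task's working dir with -files, and set the
# child environment via mapred.child.env. All names here are examples.
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
  -D mapred.child.env="LD_LIBRARY_PATH=.,PATH=.:/usr/bin" \
  -files mapper.py,libnative.so \
  -input /user/arun/input \
  -output /user/arun/output \
  -mapper mapper.py \
  -reducer cat
```

Note the generic options (-D, -files) must come before the streaming-specific options.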
Did you give permissions recursively?
$ sudo chown -R hduser:hadoop hadoop
Regards,
Uma
- Original Message -
From: ArunKumar arunk...@gmail.com
Date: Sunday, September 18, 2011 12:00 pm
Subject: Submitting Jobs from different user to a queue in capacity scheduler
To:
Hello Arun,
On Sun, Sep 18, 2011 at 11:59 AM, ArunKumar arunk...@gmail.com wrote:
Hi !
I have set up hadoop on my machine as per
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
I am able to run applications with the capacity scheduler by submitting jobs to a
Hi !
I gave permissions at the beginning: $ sudo chown -R hduser:hadoop hadoop
I also ran: $ chmod -R 777 hadoop
When I try
arun$ /home/hduser/hadoop203/bin/hadoop jar
/home/hduser/hadoop203/hadoop-examples*.jar pi 1 1
I get
Number of Maps = 1
Samples per Map = 1
Hello Arun,
Now we have reached Hadoop permissions ;)
If you really don't need to worry about permissions, then you can disable them and
proceed (dfs.permissions = false).
Otherwise you can set the required permissions for the user as well; see the HDFS
permissions guide.
Hi Uma !
I have added the following to hdfs-site.xml:
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
and restarted the cluster.
I tried :
arun@arun-Presario-C500-RU914PA-ACJ:~$ /home/hduser/hadoop203/bin/hadoop jar
/home/hduser/hadoop203/hadoop-examples*.jar pi 1 1
Number of
Hi !
I have set up Hadoop in Eclipse as per
http://www.mail-archive.com/common-dev@hadoop.apache.org/msg02531.html
I could run the wordcount example.
I have modified the site XML as necessary for the capacity scheduler.
When I run it
Hi Uma !
I have deleted the data in /app/hadoop/tmp, formatted the namenode, and
restarted the cluster.
I tried
arun$ /home/hduser/hadoop203/bin/hadoop jar
/home/hduser/hadoop203/hadoop-examples*.jar pi 1 1
Number of Maps = 1
Samples per Map = 1
org.apache.hadoop.security.AccessControlException:
Useful contributions. I want to find out one more thing: has Hadoop been
successfully simulated so far? Maybe using OPNET or ns-2?
Regards,
kobina.
On 18 September 2011 03:37, Michael Segel michael_se...@hotmail.com wrote:
Gee Tom,
No disrespect, but I don't believe you have any personal
As hduser, create the /user/arun directory in HDFS. Then change the
ownership of /user/arun to arun.
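Spelled out as commands (a sketch, assuming hduser owns HDFS as in this tutorial setup):

```shell
# Run as hduser, the user that owns HDFS in this setup
hadoop fs -mkdir /user/arun            # create arun's home directory in HDFS
hadoop fs -chown arun:arun /user/arun  # hand ownership to arun
```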
-Joey
On Sep 18, 2011 8:07 AM, ArunKumar arunk...@gmail.com wrote:
Hi Uma !
I have deleted the data in /app/hadoop/tmp, formatted the namenode, and
restarted the cluster.
I tried
arun$
Hi Arun,
Setting the mapreduce.jobtracker.staging.root.dir property value to /user might fix
this issue...
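As a config snippet (a sketch; assuming this property goes in mapred-site.xml in this version):

<property>
  <name>mapreduce.jobtracker.staging.root.dir</name>
  <value>/user</value>
</property>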
Or the other way could be to just execute the command below:
hadoop fs -chmod 777 /
Regards,
Uma
- Original Message -
From: ArunKumar arunk...@gmail.com
Date: Sunday, September 18, 2011 8:38 pm
On Sun, Sep 18, 2011 at 9:35 AM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
or other way could be, just execute below command
hadoop fs -chmod 777 /
I wouldn't do this - it's overkill, and there's no way to go back. Instead,
if you really want to disregard all permissions on
Hi, all
Recently I was hit by a question: how is a Hadoop job divided into 2
phases?
In textbooks, we are told that MapReduce jobs are divided into 2 phases,
map and reduce, and the reduce is further divided into 3 stages:
shuffle, sort, and reduce. But in the Hadoop code, I never think
Hi Nan
I have had the same question for a while. In some research papers, people like
to make the reduce stage slow-start. That way, the map stage and the
reduce stage are easy to differentiate. You can use the number of remaining
unallocated map tasks to detect which stage your job is in.
To
Nan,
The 'phase' is implicitly understood by the 'progress' (value) made by the
map/reduce tasks (see o.a.h.mapred.TaskStatus.Phase).
For example:
Reduce:
0-33% - Shuffle
34-66% - Sort (actually, just 'merge', there is no sort in the reduce since
all map-outputs are sorted)
67-100% -
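The mapping above can be sketched as a tiny helper (the thresholds follow the split described here; the function name is made up for illustration):

```shell
# Infer a reduce task's phase from its reported progress percentage,
# using the 0-33 / 34-66 / 67-100 split described above.
phase_of() {
  pct=$1  # reduce progress as an integer percentage, 0-100
  if [ "$pct" -le 33 ]; then
    echo SHUFFLE
  elif [ "$pct" -le 66 ]; then
    echo SORT    # really just a merge, as noted above
  else
    echo REDUCE
  fi
}

phase_of 20   # SHUFFLE
phase_of 50   # SORT
phase_of 90   # REDUCE
```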
Hi,
these 0-33-66-100% phases are really confusing to beginners. We see that in our
training classes. The output should be more verbose, such as breaking down the
phases into separate progress numbers.
Does that make sense?
Am 19.09.2011 um 06:17 schrieb Arun C Murthy:
Nan,
The 'phase' is
Agreed.
At least, I believe the new web UI for MRv2 is (or will soon be) more verbose
about this.
On Sep 18, 2011, at 9:23 PM, Kai Voigt wrote:
Hi,
this 0-33-66-100% phases are really confusing to beginners. We see that in
our training classes. The output should be more verbose, such as
Agreed.
I suggested the 'dfs.permissions' flag earlier in this thread too. :-)
Regards,
Uma
- Original Message -
From: Aaron T. Myers a...@cloudera.com
Date: Monday, September 19, 2011 7:45 am
Subject: Re: Submitting Jobs from different user to a queue in capacity
scheduler
To:
Hi Arun,
Thanks!
As you explained, in Hadoop we cannot explicitly divide a job into two
phases, map and reduce; only for a reduce task can we judge which stage
it's in (shuffle, sort, reduce) (with 0.23, we can also do it for
mappers),
right?
Nan
On Mon, Sep 19, 2011 at 12:17 PM,
Hi Arun
I have a question. Do you know the reason that Hadoop allows the map
and the reduce stages to overlap? Or does anyone know? Thank you in
advance.
Chen
On Sun, Sep 18, 2011 at 11:17 PM, Arun C Murthy a...@hortonworks.com wrote:
Nan,
The 'phase' is implicitly understood by