Hi, after upgrading to Hadoop 2 (YARN), I found that
'mapred.jobtracker.taskScheduler.maxRunningTasksPerJob' no longer works,
right?
One workaround is to use a queue to limit it, but that is not easy to control
from the job submitter.
Is there any way to limit the concurrent running mappers per job?
If the cluster has enough resources, then more than one job will run at the
same time.
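For what it's worth, a minimal sketch of the queue workaround using the
Capacity Scheduler; the queue name 'limited' and the 20% figures are invented
examples, not values from this thread (capacity-scheduler.xml):

  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,limited</value>
  </property>
  <property>
    <!-- a small queue whose capacity indirectly caps a job's concurrent containers -->
    <name>yarn.scheduler.capacity.root.limited.capacity</name>
    <value>20</value>
  </property>
  <property>
    <!-- hard cap so the queue cannot borrow idle capacity from the rest of the cluster -->
    <name>yarn.scheduler.capacity.root.limited.maximum-capacity</name>
    <value>20</value>
  </property>

The submitter then picks the queue with -Dmapreduce.job.queuename=limited. If I
recall correctly, later releases (2.7+) added mapreduce.job.running.map.limit
for a true per-job cap, but that is not available in 2.6.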
2015-04-18 2:27 GMT+08:00 xeonmailinglist-gmail xeonmailingl...@gmail.com:
Hi,
I have a MapReduce runtime where I run several jobs concurrently. How do I
manage the job scheduler so that it won't run
Have you tried the jps command to check which Hadoop services are running?
On Mon, Apr 20, 2015 at 6:45 PM, Anand Murali anand_vi...@yahoo.com wrote:
Yes. All Hadoop commands. The error message is linked to the IP address, and I
checked the Hadoop wiki; this is a network issue on Ubuntu. Unfortunately, I
Hi
But the Hadoop wiki says this is a network issue, especially with Ubuntu.
Please look at my paste and follow the link.
As regards my temporary solution: I have to remove all the Hadoop files,
re-extract them, and start over. Then it works for a couple of runs before it
starts all over again.
I didn't specify it so it's using the default value (in /tmp)
On Sun, Apr 19, 2015 at 10:21 PM, Drake민영근 drake@nexr.com wrote:
Hi,
I guess the yarn.nodemanager.local-dirs property is the problem. Can you
provide that part of yarn-site.xml?
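For reference, the section in question usually looks something like this; the
path below is just an example (the default is ${hadoop.tmp.dir}/nm-local-dir):

  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/var/hadoop/yarn/local</value>
  </property>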
Thanks.
Drake 민영근 Ph.D
kt NexR
On Mon, Apr
SOLVED...
But this is weird: I added hadoop.tmp.dir in core-site.xml. I was OK with it
writing to the default location (/tmp/hadoop-${user.name}), but I changed it
to /tmp/hadoop instead, and now everything works.
Now I'm wondering why that might have been an issue.
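For anyone following along, the change described above amounts to something
like this in core-site.xml (a sketch reconstructed from the description, not
Fernando's actual file):

  <property>
    <name>hadoop.tmp.dir</name>
    <!-- default was /tmp/hadoop-${user.name}; NodeManager local dirs live under here -->
    <value>/tmp/hadoop</value>
  </property>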
On Mon, Apr 20, 2015 at 8:54 AM,
Have you tried:
hdfs dfs -ls /
*with a slash at the end of the command?
On Mon, Apr 20, 2015 at 5:50 PM, Anand Murali anand_vi...@yahoo.com wrote:
Hi All:
I am using Ubuntu 14.10 desktop and Hadoop 2.6 in pseudo-distributed mode.
start-dfs/stop-dfs work normally. However, after a couple of uses,
when I try
I did set it, so the files should be there for 10 minutes...
<property>
  <name>yarn.nodemanager.delete.debug-delay-sec</name>
  <value>600</value>
</property>
On Mon, Apr 20, 2015 at 8:52 AM, Fernando O. fot...@gmail.com wrote:
I didn't specify it so it's using the default value (in /tmp)
On
Yes. All Hadoop commands. The error message is linked to the IP address, and I
checked the Hadoop wiki; this is a network issue on Ubuntu. Unfortunately, I
don't know much about networks.
Anand Murali, 11/7, 'Anand Vihar', Kandasamy St, Mylapore, Chennai - 600 004,
India. Ph: (044) 28474593 / 43526162
No. I shall try. Can you point me to jps resources?
Thanks
Anand Murali, 11/7, 'Anand Vihar', Kandasamy St, Mylapore, Chennai - 600 004,
India. Ph: (044) 28474593 / 43526162 (voicemail)
On Monday, April 20, 2015 5:50 PM, Himawan Mahardianto
mahardia...@ugm.ac.id wrote:
Have you tried this?
Are you sure the namenode is running well, based on the output of the jps
command?
Have you tried giving your PC an IP other than 127.0.0.1?
And could you paste your /etc/hosts and hadoop_folder/etc/hadoop/slaves
file configurations in your reply?
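A sketch of what /etc/hosts often looks like on a working single-node setup;
the hostname and the second address are invented examples:

  127.0.0.1    localhost
  192.168.1.10 hadoop-node    # the machine's LAN address and hostname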
On Mon, Apr 20, 2015 at 8:10 PM, Anand Murali
Himawan:
jps fails on my laptop although JDK 1.7.0 is installed. There is no etc/hosts,
and the slaves file has one entry called localhost.
Anand Murali, 11/7, 'Anand Vihar', Kandasamy St, Mylapore, Chennai - 600 004,
India. Ph: (044) 28474593 / 43526162 (voicemail)
On Monday, April 20, 2015
You just run jps in your terminal. Here is my jps output on my
namenode:
hadoop@node-17:~$ jps
18487 Jps
18150 NameNode
18385 SecondaryNameNode
hadoop@node-17:~$
From that output I can make sure that my namenode is running well. How
about your namenode, are you sure it's running well?
Hi,
I am able to run a `hadoop fs -ls s3n://my-s3-bucket` from the command line
without issue. I have set some options in hadoop-env.sh to make sure all
the S3 stuff for Hadoop 2.6 is set up correctly. (This was very confusing,
BTW, and there is not enough searchable documentation on changes to the s3
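For anyone searching later: in Hadoop 2.6 the s3/s3n classes moved to the
separate hadoop-aws module, which is not on the default classpath. A hedged
sketch of the hadoop-env.sh change, assuming a stock tarball layout (your
$HADOOP_HOME and jar locations may differ):

  # put the hadoop-aws jar and its AWS SDK dependencies on the classpath
  export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_HOME/share/hadoop/tools/lib/*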
Hi Chris,
Thanks for your insights. Last question: can you tell me the main
differences (from a Hadoop dev point of view) between the public REST API
and the HDFS wire protocol?
My gut feeling tells me HDFS is mainly used in cluster communication and
the public one is, well, for public APIs. But
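For context, the public REST API being discussed here is presumably WebHDFS. A
minimal example call, with an invented hostname (50070 is the default NameNode
HTTP port in Hadoop 2.x):

  curl -i "http://namenode.example.com:50070/webhdfs/v1/user/hadoop?op=LISTSTATUS"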
Hi All:
I am using Ubuntu 14.10 desktop and Hadoop 2.6 in pseudo-distributed mode.
start-dfs/stop-dfs work normally. However, after a couple of uses, when I try
to connect to HDFS, I am refused the connection. Find below:
anand_vihar@Latitude-E5540:~$ ssh localhost
Welcome to Ubuntu 14.10 (GNU/Linux
I appreciate the response. These JAR files aren't third-party. They're
included with the Hadoop distribution, but in Hadoop 2.6 they stopped being
loaded by default and now have to be loaded manually, if needed.
Essentially the problem boils down to:
- need to access s3n URLs
- cannot access
This is an install on a CentOS 6 virtual machine used in our test
environment. We use HDP in staging and production, and we discovered these
issues while trying to build a new cluster using HDP 2.2, which upgrades
from Hadoop 2.4 to Hadoop 2.6.
William Watson
Software Engineer
(904) 705-7056 PCS
One thing I think I most likely missed completely: are you using
an Amazon EMR cluster or something in-house?
---
Regards,
Jonathan Aquilina
Founder Eagle Eye T
On 2015-04-20 16:21, Billy Watson wrote:
I appreciate the response. These JAR files aren't 3rd party. They're included
Sadly, I'll have to pull back; I have only run a Hadoop MapReduce cluster with
Amazon met
Sent from my iPhone
On 20 Apr 2015, at 16:53, Billy Watson williamrwat...@gmail.com wrote:
This is an install on a CentOS 6 virtual machine used in our test
environment. We use HDP in staging and
Great, thanks a lot.
b.
On Mon, Apr 20, 2015 at 7:03 PM, Chris Nauroth cnaur...@hortonworks.com
wrote:
Hi Bram,
Your gut feeling is correct. These 2 properties are used in private
implementation details of cluster communication. I believe these 2
properties are currently the only
We found the correct configs.
This post was helpful, but didn't entirely work for us out of the box since
we are using hadoop-pseudo-distributed.
http://hortonworks.com/community/forums/topic/s3n-error-for-hdp-2-2/
We added a property to the core-site.xml file:
<property>
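The snippet is cut off there in the archive. For illustration only (this is an
assumption, not necessarily the exact property Billy added), the fix usually
cited in that Hortonworks thread binds the s3n scheme to its implementation
class:

  <property>
    <name>fs.s3n.impl</name>
    <value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value>
  </property>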
Hi Bram,
Your gut feeling is correct. These 2 properties are used in private
implementation details of cluster communication. I believe these 2 properties
are currently the only difference compared to the public REST API.
Chris Nauroth
Hortonworks
http://hortonworks.com/
From: Bram
Hi Mahmood,
I tried testing a simplified version of this, and it worked in my environment.
One thing I noticed is that your ls output is for directory /in/in, but the job
execution uses /in for the input path. If the input path is a directory, then
the assumption is that it will be a
Thanks anyway. Has anyone else run into this issue?
William Watson
Software Engineer
(904) 705-7056 PCS
On Mon, Apr 20, 2015 at 11:11 AM, Jonathan Aquilina jaquil...@eagleeyet.net
wrote:
Sadly, I'll have to pull back; I have only run a Hadoop MapReduce cluster
with Amazon met
Sent from my
Hi,
You can build the native library for your platform with Maven:
mvn package -Pnative -DskipTests
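As a hedged note on prerequisites (see BUILDING.txt in the Hadoop source tree
for the authoritative list): the native profile needs a C toolchain, cmake,
zlib, and, for Hadoop 2.x, protobuf 2.5.0. On a Debian/Ubuntu build host that
is roughly:

  sudo apt-get install build-essential cmake zlib1g-dev libssl-dev
  protoc --version    # Hadoop 2.x expects libprotoc 2.5.0
  mvn package -Pnative -DskipTests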
From: Mahmood Naderan [mailto:nt_mahm...@yahoo.com]
Sent: April 19, 2015 2:54
To: User
Subject: Unable to load native-hadoop library
Hi,
Regarding this warning
WARN util.NativeCodeLoader: Unable to load