Recently I installed data nodes on Ubuntu 12.04 and observed failing
M/R jobs with errors like this:
Diagnostics report from attempt_1351154628597_0002_m_00_0: Container
[pid=14529,containerID=container_1351154628597_0002_01_02] is running
beyond virtual memory limits. Current usage:
I could not make sense of MALLOC_ARENA_MAX; setting it in .bashrc and the
other env scripts seemed to have no impact.
I made the jobs work again by setting yarn.nodemanager.vmem-pmem-ratio=10. Now
they probably run with some obscene and unnecessary vmem allocation
(which I read does not come for free
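For reference, a minimal sketch of the matching yarn-site.xml entry (the property name and value are from this thread; placing it inside the <configuration> element on each NodeManager and then restarting the NodeManagers is my assumption about the usual procedure):

<!-- goes inside <configuration> in yarn-site.xml on each NodeManager -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>10</value>
</property>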
Make sure the username on both machines is the same. Also, have you copied the
public key to the slave machine?
Regards,
Mohammad Tariq
On Thu, Oct 25, 2012 at 1:58 PM, yogesh.kuma...@wipro.com wrote:
Hi all,
I am trying to run the command
ssh Master
It runs and shows, after entering
Make sure the username is the same on both machines, and you can copy the key
manually as well.
Regards,
Mohammad Tariq
On Thu, Oct 25, 2012 at 2:46 PM, yogesh.kuma...@wipro.com wrote:
Hi Mohammad,
That was the first issue. I have tried to copy the key by using the command
ssh-copy-id -i
Hi All,
I am trying to copy the public key by this command.
Master:~ mediaadmin$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub pluto@Slave
I have two machines; the Master username is pluto and the Slave has the same name (Admin).
And I got this error, Where I am going wrong?
ssh-copy-id: command not found
Please
Knowing the operating system you are using will be of good help to answer your question.
Normally the command you are looking for is provided by openssh-clients.
Install this package if it is not already installed.
If installed, on a Red Hat system it is normally placed at /usr/bin/ssh-copy-id
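For systems without ssh-copy-id (Mac OS X shipped without it at the time), a minimal manual equivalent, sketched with the pluto@Slave account from this thread:

# Manual stand-in for ssh-copy-id: append the local public key to the
# remote account's authorized_keys and tighten its permissions.
cat ~/.ssh/id_rsa.pub | ssh pluto@Slave 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'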
On Thu, Oct 25, 2012 at 3:24 PM,
http://blog.csdn.net/onlyqi/article/details/6544989
https://issues.apache.org/jira/browse/HDFS-2185
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailability.html
http://blog.csdn.net/chenpingbupt/article/details/7922042
scp file_to_copy u...@remote.server.fi:/path/to/location
Regards,
Mohammad Tariq
On Thu, Oct 25, 2012 at 3:31 PM, Nitin Pawar nitinpawar...@gmail.com wrote:
Knowing the operating system you are using will be of good help to answer your
question.
Normally the command you are looking for is
Thanks All,
The copy has been done, but here comes another horrible issue.
When I log in to Master with ssh Master, it asks for a password:
Master:~ mediaadmin$ ssh Master
Password: abc
Last login: Thu Oct 25 17:13:30 2012
Master:~ mediaadmin$
and for Slave it doesn't ask.
Master:~ mediaadmin$ ssh
Did you mean to mail that to yourself as a means of bookmarks or did
you just want to share this bundle of unrelated links with us?
On Thu, Oct 25, 2012 at 3:43 PM, lei liu liulei...@gmail.com wrote:
http://blog.csdn.net/onlyqi/article/details/6544989
Hi,
I would like to announce the availability of HiBench 2.2 at
https://github.com/intel-hadoop/hibench. Since the release of HiBench 2.1, we
have received much good feedback, and HiBench 2.2 provides an update to v2.1
based on that feedback, including:
1) Build automatic data
Any hints, friends? Will I have to try this with a cluster set up, with the
datanode installed on a different IP address?
On Tue, Oct 23, 2012 at 12:34 PM, Visioner Sadak
visioner.sa...@gmail.com wrote:
My config files already have IPs instead of localhost. Yes, if I copy-paste
the IP into the address bar it
Hi Liu,
Locks are not sufficient, because there is no way to enforce a lock in a
distributed system without unbounded blocking. What you might be referring
to is a lease, but leases are still problematic unless you can put bounds
on the speed with which clocks progress on different machines,
I think the master machine's authorized_keys is missing the key.
Please do the following:
ssh-copy-id -i ~/.ssh/id_rsa.pub {IP of Master machine}
Before starting the cluster, it is better to check whether ssh is enabled by doing
ssh {slave or master IP} from the Master (it should not ask for a password).
Hi Brahma,
I am on Mac OS X; it doesn't have the copy command, i.e.
ssh-copy-id -i
I copied it as
mediaadmin$ cat ~/.ssh/id_rsa.pub | ssh pluto@10.203.33.80 'cat >>
~/.ssh/authorized_keys'
Password:
and did
ssh 10.203.33.80 and it asked for password.
Master:~ mediaadmin$ ssh 10.203.33.80
Password:
Thanks,
Alberto
On 24 October 2012 16:33, Harsh J ha...@cloudera.com wrote:
Using either is fully supported in 2.x+ at least. Neither is
deprecated, but I'd personally recommend using the new API going
forward. There's no known major issues with it.
FWIW, Apache HBase uses the new API for
Yogesh,
Have you figured it out? I had the same issue (needed passwordless ssh) and
managed. Let me know if you are still stuck.
AK47
From: yogesh.kuma...@wipro.com [mailto:yogesh.kuma...@wipro.com]
Sent: Thursday, October 25, 2012 4:28 AM
To: user@hadoop.apache.org
Subject: ERROR:: SSH
Yogesh,
One needs to understand how passwordless ssh works.
Say there is a user “yosh”.
He types ssh localhost and is prompted for a password. This is how to resolve
this:
1.
Type: ssh-keygen -t rsa
-t stands for the key type, and rsa is the encryption type (another type would be dsa).
Well, after you run above
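The message is truncated here; a minimal sketch of how the steps typically continue (localhost follows Andy's example, and the empty passphrase is my assumption):

# 1. Generate the key pair; press Enter at the passphrase prompts so no
#    passphrase is set (assumption: passwordless login is the goal).
ssh-keygen -t rsa
# 2. Append the public key to the authorized keys of the target account
#    (the same machine in this localhost example):
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# 3. Verify: this should no longer prompt for a password.
ssh localhost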
Thanks Andy,
I got your point :-),
What I have done earlier:
I have configured
1) One machine as a Master (plays the role of both NameNode and DataNode)
2) A second as a Slave (DataNode only)
I have given the same username to both machines, and they have Admin access:
pluto (for both Master and Slave).
Yogesh,
If you are asked for a password, your passwordless SSH is not working.
Shoot, forgot one detail. Please remember to set the authorized_keys file to 600
permissions. :)
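A two-line sketch of that fix (the mode on the ~/.ssh directory itself is my addition; sshd is typically strict about both):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys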
From: yogesh dhari [mailto:yogeshdh...@live.com]
Sent: Thursday, October 25, 2012 1:14 PM
To: hadoop helpforoum
Subject: RE:
Guys,
I finally solved ALL the errors in ...datanode*.log after trying to start
the node with service datanode start.
The errors were:
- conflicting NameNode/DataNode namespace IDs - solved by reformatting the NN.
- could not connect to 127.0.0.1:8020 - Connection refused - solved by
correcting a typo
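For reference, a hedged sketch of that reformat step (it erases HDFS metadata; the data directory path is a placeholder, not from this thread):

# WARNING: reformatting destroys existing HDFS metadata. Stop the daemons first.
hadoop namenode -format
# If a DataNode still logs a namespaceID mismatch afterwards, clear its data
# directory (placeholder path; use the dfs.data.dir value from your config):
rm -rf /path/to/dfs/data/*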
Hello Harsh,
I am following steps based on this link:
http://wiki.apache.org/hadoop/AmazonS3
When I run the job, I see that Hadoop places all the jars
required for the job on S3. However, when it tries to run the job, it
complains:
The ownership on the staging directory
On 25 October 2012 20:24, Daniel Käfer d.kae...@hs-furtwangen.de wrote:
Hello all,
I'm looking for a reference architecture for Hadoop. The only result I
found is the Lambda Architecture from Nathan Marz [0].
I quite like the new Hadoop in Practice for a lot of that, especially the
answer to #2,
Hi,
That should be:
1) -files path_to_my_library.so
and to include jars for your MR jobs, you would do:
2) -libjars path_to_my1.jar,path_to_my2.jar
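A sketch of a complete invocation combining both options (the jar, class, and paths are placeholders, and it assumes the job's main class goes through ToolRunner/GenericOptionsParser so the generic options are parsed):

# Hypothetical job submission shipping a native library and extra jars.
hadoop jar my-job.jar com.example.MyJob \
  -files /path/to/libmylibrary.so \
  -libjars /path/to/my1.jar,/path/to/my2.jar \
  input_dir output_dir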
Brock
On Thu, Oct 25, 2012 at 6:10 PM, Dipesh Khakhkhar
dipeshsoftw...@gmail.com wrote:
Hi,
I am a new Hadoop user and have a few very basic
Thanks for answering my query.
1. I have tried -files path_to_my_library.so while invoking my MR
application, but I still get UnsatisfiedLinkError: no mylibrary in
java.library.path
2. I have removed the path to my jar from the Hadoop classpath in hadoop-env.sh and
provided -libjars path_to_myfile.jar and tried
1) Does your local program use the native library before submitting
the job to the cluster?
Here is an example of using native code in MR
https://github.com/brockn/hadoop-thumbnail
2) I thought libjars would work for local classpath issues as well as
remote. However, to add the jar to your local
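The mail is cut off here, but the usual client-side counterpart is the HADOOP_CLASSPATH environment variable (a standard Hadoop setting; the jar path and class are placeholders):

# -libjars ships the jar to the tasks on the cluster; for the local JVM that
# submits the job, put the jar on HADOOP_CLASSPATH as well.
export HADOOP_CLASSPATH=/path/to/myfile.jar:$HADOOP_CLASSPATH
hadoop jar my-job.jar com.example.MyJob -libjars /path/to/myfile.jar input_dir output_dir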
Hi all,
I want to write files to HDFS over Thrift.
If the file is a gzip or tar file, after uploading I find that the file size
changes and it can no longer be extracted with tar xvzf/xvf.
For normal plain text files it works well.
[hadoop@HOST s_cripts]$ echo $LANG
en_US.UTF-8
[hadoop@HOST s_cripts]$
Yes, I am trying to use both (classes from my jar file and the native
library) before submitting the job to the cluster.
Everything works if I put the native library in the lib/native/Linux-amd64-64
folder and add the path to my jar in hadoop-env.sh.
I thought the -files/-archives/-libjars options would be very
I've got MultipleOutputs configured to generate 2 named outputs. I'd like to
send one to s3n:// and one to hdfs://
Is this possible? One is a final summary report, the other is input to the
next job.
Thanks,
David
I am new to Hadoop. When I execute
bin/hadoop jar \
  share/hadoop/mapreduce/hadoop-mapreduce-examples-2.0.2-alpha.jar pi \
  -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory \
  -libjars share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.0.2-alpha.jar \
  16 1
1. I