I have a Map function and a Reduce function outputting key-value pairs
of class Text and IntWritable. This is just the gist of the Map part
in the main function:
TableMapReduceUtil.initTableMapperJob(
    tablename, // input HBase table name
    scan,      // Scan instance to control
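For context, a complete initTableMapperJob call typically looks like the sketch below; MyMapper, the Scan tuning, and the job variable are illustrative assumptions, not from the original post.

```java
// Hedged sketch of a typical HBase table-mapper setup; assumes a Job named
// "job" and a mapper class MyMapper emitting Text/IntWritable pairs.
Scan scan = new Scan();
scan.setCaching(500);        // rows fetched per RPC; tune for scan throughput
scan.setCacheBlocks(false);  // avoid polluting the block cache from MR scans

TableMapReduceUtil.initTableMapperJob(
    tablename,         // input HBase table name
    scan,              // Scan instance to control column selection
    MyMapper.class,    // mapper class
    Text.class,        // mapper output key class
    IntWritable.class, // mapper output value class
    job);
```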
Dear all,
I am trying to run the distributed shell client on a Windows
machine to submit it to a YARN ResourceManager on a Linux box. I am stuck
with the following client error message ...
Unknown method getClusterMetrics called on interface
I have the same confusion; any reply will be much appreciated.
From: Elazar Leibovich elaz...@gmail.com
Reply-To: user@hadoop.apache.org
To: user@hadoop.apache.org
Date: Thursday, July 25, 2013 3:51
One reason is that the lists used to accept or reject DNs contain hostnames. If DNS
temporarily can't resolve an IP, then an unauthorized DN might slip back into
the cluster, or a decommissioning node might go back into service.
Daryn
On Jul 29, 2013, at 8:21 AM, 武泽胜 wrote:
I have the same confusion,
I can third this concern. What purpose does this complexity-increasing
requirement serve? Why not remove it?
Greg Bledsoe
From: 武泽胜 wuzesh...@xiaomi.com
Reply-To: user@hadoop.apache.org
Just for clarity, DNS as a service is NOT required; name resolution is.
I use /etc/hosts files to identify all nodes in my clusters.
One of the reasons for using names over IPs is ease of use. I would much
rather use a hostname in my XML to identify the NN, JT, etc. vs. some random
string of
Ease of use is a reason to support names, not to intentionally disallow raw
IPs. Not using names is convenient if you want to erect a temporary cluster
on a group of machines you don't own.
You have user access, but name resolution is not always defined. As a
user you cannot change /etc/hosts.
I thought some high availability and resource isolation features in
Mesos are more mature. If no one is interested in this topic, MR
should go with YARN.
On Fri, Jul 26, 2013 at 7:14 PM, Harsh J ha...@cloudera.com wrote:
Do we have a good reason to prefer Mesos over YARN for scheduling MR
Actually,
I am interested.
Lots of different Apache top level projects seem to overlap and it can be
confusing.
It's very easy for a good technology to get starved because no one asks how to
combine these features into the framework.
On Jul 29, 2013, at 9:58 AM, Tsuyoshi OZAWA
But even if you have permission to change /etc/hosts, /etc/hosts resolution
seems to introduce instability: the reverse lookup can lead to unpredictable
results. DNS gets used, and if this doesn't match your /etc/hosts file, you
have problems. Or am I missing something?
Greg
From: Chris
Harsh, yes, I know what you mean :-) Never mind. We should discuss
this topic with MR users.
On Tue, Jul 30, 2013 at 12:08 AM, Michael Segel
msegel_had...@hotmail.com wrote:
Actually,
I am interested.
Lots of different Apache top level projects seem to overlap and it can be
confusing.
Its
Hi,
Can you explain the problem you actually face in trying to run the
above setup? Do you also set your reducer output types?
On Mon, Jul 29, 2013 at 4:48 PM, Pavan Sudheendra pavan0...@gmail.com wrote:
I have a Map function and a Reduce function outputting key-value pairs
of class Text and
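On the reducer-output-types question above: when the map output types differ from the final reduce output types, both have to be declared explicitly. A hedged sketch of the relevant job setup (the job variable and the Text/IntWritable choice are assumptions based on the thread):

```java
// Map output and final (reduce) output types must be set separately when
// they differ; otherwise Hadoop assumes the map output types match these.
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
```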
Hi,
I am getting a weird error:
13/07/29 10:50:58 INFO mapred.JobClient: Task Id :
attempt_201307102216_0145_r_16_0, Status : FAILED
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on
I'm getting the exact same error. Only thing is I'm trying to write to a
sequence file.
Regards,
Pavan
On Jul 29, 2013 11:29 PM, jamal sasha jamalsha...@gmail.com wrote:
Hi,
I am getting a weird error:
13/07/29 10:50:58 INFO mapred.JobClient: Task Id :
attempt_201307102216_0145_r_16_0,
Hi All,
When I issue the df -h command on the namenode, I am not able to get back the
result. Is it an issue with the filesystems?
Regards
Sathish
Ok.
A very basic (stupid) question.
I am trying to compute a mean using Hadoop.
So my implementation is like this:
public class Mean {
    public static class Pair {
        // simple class to create an object
    }
    public static class MeanMapper extends Mapper<LongWritable, Text, Text, Pair> {
        // emit(text, pair) where pair is (local sum,
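The (sum, count) pair idea above can be sketched in plain Java without any Hadoop dependencies: each "map" phase emits a partial pair, and the "reduce" phase adds the pairs and divides once at the end. MeanSketch, partial, and mean are illustrative names, not from the original post.

```java
import java.util.Arrays;
import java.util.List;

public class MeanSketch {
    // Simple pair holding a partial sum and the number of values behind it.
    static final class Pair {
        final long sum;
        final long count;
        Pair(long sum, long count) { this.sum = sum; this.count = count; }
    }

    // "Map": one partial (sum, count) pair per input chunk.
    static Pair partial(long[] values) {
        long s = 0;
        for (long v : values) s += v;
        return new Pair(s, values.length);
    }

    // "Reduce": combine all partial pairs, then divide exactly once.
    static double mean(List<Pair> partials) {
        long s = 0, c = 0;
        for (Pair p : partials) { s += p.sum; c += p.count; }
        return (double) s / c;
    }

    public static void main(String[] args) {
        List<Pair> partials = Arrays.asList(
                partial(new long[]{1, 2, 3}),   // (6, 3)
                partial(new long[]{4, 5}));     // (9, 2)
        System.out.println(mean(partials));     // 15 / 5 = 3.0
    }
}
```

Dividing only at the end is what makes the pair emit combiner-friendly: partial means cannot simply be averaged together, but partial (sum, count) pairs can always be added.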
What do you mean by you can't get back a result? Does the command hang,
error out, or give an incorrect result? What does it do? Can you post your
error please?
On Jul 30, 2013 1:31 AM, Sathish Kumar sa848...@gmail.com wrote:
Hi All,
When I issue the df -h command on the namenode, I am not able to get back
You can write custom key/value classes by implementing the
org.apache.hadoop.io.Writable interface for your job.
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/Writable.html
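A minimal sketch of a custom value class implementing the Writable interface linked above; the PairWritable name and its (sum, count) fields are illustrative assumptions, not from the original post.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class PairWritable implements Writable {
    private long sum;
    private long count;

    // Hadoop needs a no-arg constructor to instantiate the class reflectively.
    public PairWritable() {}

    public PairWritable(long sum, long count) {
        this.sum = sum;
        this.count = count;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(sum);
        out.writeLong(count);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // Must read fields in exactly the order write() wrote them.
        sum = in.readLong();
        count = in.readLong();
    }
}
```

A class used as a key (rather than a value) would implement WritableComparable instead, adding a compareTo method for sorting.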
Thanks
Devaraj k
From: jamal sasha [mailto:jamalsha...@gmail.com]
Sent: 30 July 2013 10:27
To:
I have a production Hadoop cluster; I want to know if Kerberos will
cause a big performance impact on my cluster? Thanks, all.
My foundation is more Linux than Hadoop, so I'll support Harsh (like he
needs it) in asking: what's the problem? If you can't df -h, this is
probably an issue at a lower level than Hadoop, and while most Hadoop folks are
willing to help (see the fact that Harsh responded), this is 99.9% likely to
be an EXT4,
Viji R, thank you very much.
2013/7/26 Viji R v...@cloudera.com
Hi,
These are used to keep block IDs that are being verified, i.e., the DN
periodically matches blocks against stored checksums to root out
corrupted or rotted data. They are removed once verification
completes.
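The interval of this periodic block verification is configurable; a hedged hdfs-site.xml fragment (the 504-hour, i.e. three-week, value is the common default in this era of Hadoop, but check your version's hdfs-default.xml):

```xml
<property>
  <name>dfs.datanode.scan.period.hours</name>
  <!-- Block scanner period; semantics of 0/negative values vary by version. -->
  <value>504</value>
</property>
```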
Regards,
Viji
On
Hi,
This is the output message which I got when it failed:
WARN hdfs.DFSClient: DataStreamer Exception:
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease
on /sequenceOutput/_temporary/_attempt_local_0001_r_00_0/part-r-0
File