Using Hadoop 2.4.0. The number of applications running on average is small, ~40-60.
The metrics in Ganglia show around 10-30 apps killed every 5 minutes,
which is very high relative to the apps running at any given time (40-60). The RM
logs, though, show 0 failed apps in the audit logs during that hour.
The RM UI
Hello all.
We periodically scan HBase tables to aggregate statistics,
and store them in MySQL.
We have 3 kinds of CP (a kind of data source); each has one Channel and one
Article table.
(Channel : Article is a 1:N relation.)
Each CP's table schema differs a bit, so in order to
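A minimal sketch of the kind of periodic scan-and-aggregate pass described above, assuming the HBase 1.x client API; the table, column family, and qualifier names here are invented for illustration:
```
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ArticleStats {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table articles = conn.getTable(TableName.valueOf("Article"))) {
      Scan scan = new Scan();
      // Only fetch the column we aggregate on, to keep the scan cheap.
      scan.addColumn(Bytes.toBytes("meta"), Bytes.toBytes("channelId"));
      Map<String, Long> articlesPerChannel = new HashMap<String, Long>();
      try (ResultScanner scanner = articles.getScanner(scan)) {
        for (Result r : scanner) {
          String channel = Bytes.toString(
              r.getValue(Bytes.toBytes("meta"), Bytes.toBytes("channelId")));
          Long c = articlesPerChannel.get(channel);
          articlesPerChannel.put(channel, c == null ? 1L : c + 1L);
        }
      }
      // The aggregated counts would then be written to MySQL, e.g. over JDBC.
    }
  }
}
```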
Hi
Could you give more information: which version of Hadoop are you using?
QueueMetrics.AppsKilled/Failed metrics show much higher numbers, i.e. ~100.
However, RMAuditLogger shows only 1 or 2 apps as killed/failed in the logs.
I suspect the logs might have been rolled over. Do more applications
There are several ways to confirm from YARN the total number of killed/failed
applications in the cluster:
1. Get the lists from the RM web UI, OR
2. As an admin, try this to get the numbers of failed and killed applications:
./yarn application -list -appStates FAILED,KILLED
3. Use the client APIs (a sketch follows below).
Since
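As a minimal sketch of option 3, assuming the Hadoop 2.x YarnClient API (illustrative only, not the poster's code):
```
import java.util.EnumSet;
import java.util.List;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.api.records.YarnApplicationState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class KilledFailedCount {
  public static void main(String[] args) throws Exception {
    YarnClient yarn = YarnClient.createYarnClient();
    yarn.init(new YarnConfiguration());
    yarn.start();
    try {
      // Ask the RM for all applications in the FAILED or KILLED state.
      List<ApplicationReport> apps = yarn.getApplications(
          EnumSet.of(YarnApplicationState.FAILED, YarnApplicationState.KILLED));
      System.out.println("failed/killed applications: " + apps.size());
    } finally {
      yarn.stop();
    }
  }
}
```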
Hi,
I am trying to run |distcp| from a Java class, but I get a class-not-found
error for |DistCpOptions|. I have used the argument |-libjars
./share/hadoop/tools/lib/hadoop-distcp-2.6.0.jar| to pass the jar file,
but it seems that is not right. How do I pass the lib properly?
Output:
I have found the problem. I started to use `webhdfs` and everything is ok.
On 03-02-2015 10:40, xeonmailinglist wrote:
What do you mean by no path is given? Even if I launch this command, I
get the same error… What path should I put here?
|$ hadoop distcp hdfs://hadoop-coc-1:50070/input1
Hi,
Can you please try webhdfs instead of hdfs? (Port 50070 is the NameNode's HTTP
port; plain hdfs:// URIs go to the RPC port, 8020 by default, which is why
webhdfs:// works against 50070.)
- Alexander
On 03 Feb 2015, at 12:05, xeonmailinglist xeonmailingl...@gmail.com wrote:
Maybe this has to do with this error… I can’t do ls to my own machine using
the command below. Can this be related to the other problem? Shouldn't I list
Ah, good. Cross-posting :)
BR,
Alex
On 03 Feb 2015, at 12:41, xeonmailinglist xeonmailingl...@gmail.com wrote:
I have found the problem. I started to use `webhdfs` and everything is ok.
On 03-02-2015 10:40, xeonmailinglist wrote:
What do you mean by no path is given? Even if I launch
Another good option is hftp.
Artem Ervits
On Feb 3, 2015 6:42 AM, xeonmailinglist xeonmailingl...@gmail.com wrote:
I have found the problem. I started to use `webhdfs` and everything is ok.
On 03-02-2015 10:40, xeonmailinglist wrote:
What do you mean by no path is given? Even if I launch
I am converting a secure non-HA cluster into a secure HA cluster. After finishing
the configuration and starting all the JournalNodes, I executed the following
commands on the original NameNode:
1. hdfs namenode -initializeSharedEdits # this step succeeded
2. hadoop-daemon.sh start namenode # this step failed.
Hello,
I was trying to debug the reasons for killed/failed apps and was checking for the
applications that were killed/failed in the RM logs, from RMAuditLogger.
The QueueMetrics.AppsKilled/Failed metrics show much higher numbers, i.e. ~100.
However, RMAuditLogger shows only 1 or 2 apps as killed/failed in the logs. Is
Hi All,
This is with respect to the JIRA defect HADOOP-10774, related to Kerberos
authentication using IBM Java.
It looks like a lot of changes have been made to properly handle Kerberos
authentication under the JIRA defect HADOOP-9446 for IBM Java.
But there are still some
Have you added all host-specific principals to the Kerberos database?
Thanks,
On Tue, Feb 3, 2015 at 7:59 AM, 郝东 donhof...@163.com wrote:
I am converting a secure non-HA cluster into a secure HA cluster. After
finishing the configuration and starting all the JournalNodes, I executed the
following commands
Got it. Here's the solution:
```
vagrant@hadoop-coc-1:~/Programs/hadoop$ export
HADOOP_CLASSPATH=share/hadoop/tools/lib/hadoop-distcp-2.6.0.jar; hadoop
jar wordcount.jar -libjars
$HADOOP_HOME/share/hadoop/tools/lib/hadoop-distcp-2.6.0.jar /input1
/outputmp /output1
```
On 03-02-2015 14:58,
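For reference, once the distcp jar is on the classpath, the same copy can also be driven directly from Java through the org.apache.hadoop.tools API. A hedged sketch against Hadoop 2.6; the webhdfs URIs mirror the hosts used earlier in the thread:
```
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpOptions;

public class RunDistCp {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Source and target paths; webhdfs talks to the NameNode's HTTP port.
    DistCpOptions options = new DistCpOptions(
        Collections.singletonList(new Path("webhdfs://hadoop-coc-1:50070/input1")),
        new Path("webhdfs://hadoop-coc-2:50070/input1"));
    DistCp distCp = new DistCp(conf, options);
    distCp.execute(); // submits the copy job and waits for completion
  }
}
```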
Hi,
I want this because I want to create a dependency between 2 jobs. The first
job executes the wordcount example, and the second job copies the output of
the wordcount to another HDFS.
Therefore, I want to create a job (job 2) that includes the code to copy
data to another HDFS. The code is below.
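(The poster's code did not survive in this digest. As a hypothetical sketch of one way to express that dependency, run the wordcount job and trigger the copy only on success; the paths and names are made up:)
```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChainedJobs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job wordcount = Job.getInstance(conf, "wordcount");
    // ... set jar, mapper, reducer, and key/value classes here ...
    FileInputFormat.addInputPath(wordcount, new Path("/input1"));
    FileOutputFormat.setOutputPath(wordcount, new Path("/outputmp"));
    // Job 2 (the DistCp copy, as sketched earlier) runs only if job 1 succeeds.
    if (wordcount.waitForCompletion(true)) {
      // new DistCp(conf, options).execute();
    }
  }
}
```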
Maybe this has to do with this error… I can’t do |ls| to my own machine
using the command below. Can this be related to the other problem?
Shouldn't I list the files with this command?
|vagrant@hadoop-coc-1:~$ hdfs dfs -ls hdfs://192.168.56.100/
ls: Call From hadoop-coc-1/192.168.56.100 to
What do you mean by no path is given? Even if I launch this command, I
get the same error… What path should I put here?
|$ hadoop distcp hdfs://hadoop-coc-1:50070/input1
hdfs://hadoop-coc-2:50070/input1|
Thanks,
On 02-02-2015 19:59, Alexander Alten-Lorenz wrote:
Have a closer look:
Check http://hadoop.apache.org/mailing_lists.html#User
Regards,
Ramkumar Bashyam
On Wed, Jan 7, 2015 at 7:01 PM, Kiran Prasad Gorigay
kiranprasa...@imimobile.com wrote:
unsubscribe
my cluster A, and cluster B. To upgrade to version 2.6,
in what order should I upgrade?
JournalNode 1 >> JournalNode 2 >> JournalNode 3 >> Standby NameNode
>> Active NameNode >> DataNode ??
Do I also need to upgrade ZooKeeper?
hadoop-2.4.1 : JournalNode, NameNode, DataNode
unsubscribe
Hi,
I have checked my Kerberos database. All the principals are there. By the way,
if I do not enable HA and just enable secure mode, the NameNode can be
started correctly.
At 2015-02-04 01:24:21, Manoj Samel manojsamelt...@gmail.com wrote:
Have you added all host specific principals