Hi
The health script itself should execute successfully. If you need the health
check to mark the node unhealthy, make the script print a line starting with
ERROR to the console; the NodeManager treats only such ERROR output as an
unhealthy report. This is because the health script may fail on its own due to
a syntax error, a command not found (IOException), or several other reasons,
and such script failures do not mark the node unhealthy.
For the health script to work this way, do not report failure through an exit
code (do not add an exit); report it through the ERROR output.
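For example, the script can print a line like the following when it detects a
problem (the message text here is only an illustration):
echo "ERROR: local disk check failed"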
This seems to be a known issue that has already been logged. Please check the
following JIRA; I hope you are also facing the same issue:
https://issues.apache.org/jira/browse/HDFS-4929
Thanks & Regards
Brahma Reddy Battula
From: Lixiang Ao [aolixi...@gmail.com]
Sent:
Hi
I have one node in unhealthy status:
Total Vmem allocated for Containers: 4.20 GB
Vmem enforcement enabled: false
Total Pmem allocated for Containers: 2 GB
Pmem enforcement enabled: false
NodeHealthyStatus: false
LastNodeHealthTime: Wed Mar 19 13:31:24
Hi
There is no relation to the NameNode format.
Is the NodeManager started with the default configuration? If not, is any
NodeManager health script configured?
The suspects can be:
1. /hadoop does not have the right permissions, or
2. the disk is full
Thanks & Regards
Rohith Sharma K S
-----Original Message-----
Thanks, got it to work. In my init script I used the wrong user. It was a
permissions problem, like Rohith said.
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
On
Hi,
I have a folder named INPUT.
Inside INPUT there are 5 resumes.
hduser@localhost:~/Ranjini$ hadoop fs -ls /user/hduser/INPUT
Found 5 items
-rw-r--r-- 1 hduser supergroup 5438 2014-03-18 15:20
/user/hduser/INPUT/Rakesh Chowdary_Microstrategy.txt
-rw-r--r-- 1 hduser supergroup
Thanks for your help, but I still could not solve my problem.
On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi s...@gopivotal.com wrote:
Ah yes, I overlooked this. Then please check whether the files are there or not:
ls /home/hadoop/project/hadoop-data/dfs/name
Regards,
*Stanley Shi,*
On Tue, Mar
Hi all,
is it possible to install MongoDB on the same VM that hosts Hadoop?
--
amiable harsha
I can get the server stacks from the web UI, but I don't know which code
handles that function. How does the web app get the stack information from the JVM?
Certainly it is, and it's quite common, especially if you have some
high-performance machines: they can run as MapReduce slaves and also double as
Mongo hosts. The problem would of course be that when running MapReduce
jobs you might have very slow network bandwidth at times, and if your front
end
Hi,
I'm working on a minimal implementation of JSR 203 to provide access to
HDFS (1.2.1) for a GUI tool needed in my company.
Some features already work, such as creating a directory, deleting something,
and listing the files in a directory.
I would like to know if someone has already worked on something like that. Maybe a
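For context, here is a rough sketch of what the client side of such a provider
looks like through the standard java.nio.file API (the scheme, URI, and paths
are assumptions for illustration, not the poster's actual code):

import java.net.URI;
import java.nio.file.DirectoryStream;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;

public class HdfsNioSketch {
    public static void main(String[] args) throws Exception {
        // Works once a FileSystemProvider registered for the "hdfs" scheme
        // (the minimal JSR 203 implementation described above) is on the classpath.
        FileSystem fs = FileSystems.newFileSystem(
                URI.create("hdfs://namenode:8020/"),
                Collections.<String, Object>emptyMap());
        Path dir = fs.getPath("/user/test/newdir");
        Files.createDirectory(dir);                   // create a directory
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir.getParent())) {
            for (Path entry : entries) {              // list files in a directory
                System.out.println(entry);
            }
        }
        Files.deleteIfExists(dir);                    // delete something
    }
}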
Any thoughts on this? Can anyone confirm or deny that it's an issue, maybe?
On Mon, Mar 17, 2014 at 11:43 AM, Something Something
mailinglist...@gmail.com wrote:
I would like to trigger a few Hadoop jobs simultaneously. I've created a
pool of threads using Executors.newFixedThreadPool. The idea is that if
I would like to trigger a few Hadoop jobs simultaneously. I've created a
pool of threads using Executors.newFixedThreadPool. The idea is that if the
pool size is 2, my code will trigger 2 Hadoop jobs at exactly the same time
using 'ToolRunner.run'. In my testing, I noticed that these 2 threads keep
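For reference, a minimal sketch of that pattern (MyWordCount is a hypothetical
Tool implementation standing in for the actual jobs):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;

public class ParallelJobLauncher {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (final String input : new String[] { "in1", "in2" }) {
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        // Give each thread its own Configuration and Tool instance;
                        // sharing mutable job state across threads is a common
                        // source of the kind of interference described above.
                        ToolRunner.run(new Configuration(), new MyWordCount(),
                                new String[] { input, input + ".out" });
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        pool.shutdown();
    }
}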
Hi all,
I'm running with bundled (OSGi) versions of Hadoop 1.0.4 and HBase 0.94.12
that I built.
Most issues I encountered are related to class loaders.
One of the patterns I noticed in both projects is:
ClassLoader cl = Thread.currentThread().getContextClassLoader();
if (cl == null) {
    // the original snippet was truncated here; the pattern falls back to the
    // defining class's own loader, e.g. in hadoop-common:
    cl = Configuration.class.getClassLoader();
}
Thanks, Jay and Praveen.
I want to use both separately; I don't want to use MongoDB in place of HBase.
On Wed, Mar 19, 2014 at 9:25 PM, Jay Vyas jayunit...@gmail.com wrote:
Certainly it is, and it's quite common, especially if you have some
high-performance machines: they can run as MapReduce
Why not? It's just a matter of installing 2 different packages.
It depends on what you want to use it for; you need to take care of a few
things, but as far as installation is concerned, it should be easily doable.
Regards
Prav
On Wed, Mar 19, 2014 at 3:41 PM, sri harsha rsharsh...@gmail.com
Hi
In the middle of a map-reduce job I get
map 20% reduce 6%
...
The reduce copier failed
map 20% reduce 0%
map 20% reduce 1%
map 20% reduce 2%
map 20% reduce 3%
Does that imply a *retry* process, or do I have to be worried about that message?
Regards,
Mahmood
Hello,
This is hadoop-common functionality. See the StacksServlet class in the
HttpServer2 code:
https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java#L1044
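The servlet is registered at the /stacks path on each daemon's HTTP server, so
you can also fetch the dump directly, e.g. (host and port here are just an
example):
curl http://namenode-host:50070/stacks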
On Wed, Mar 19, 2014 at 9:17 PM, 赵璨 asoq...@gmail.com wrote:
I
While it does mean a retry, if the job eventually fails (after finite
retries all fail as well), then you have a problem to investigate. If
the job eventually succeeded, then this may have been a transient
issue. Worth investigating either way.
On Thu, Mar 20, 2014 at 12:57 AM, Mahmood Naderan
You want to do a word count for each file, but the code gives you a word
count across all the files, right?
=====
word.set(tokenizer.nextToken());
output.collect(word, one);
=====
change it to:
word.set(filename + " " + tokenizer.nextToken());
output.collect(word, one);
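One way to populate that filename variable in the old mapred API is from the
map.input.file property in the mapper's configure method (a sketch added for
completeness; the field name is illustrative):

// requires org.apache.hadoop.fs.Path and org.apache.hadoop.mapred.JobConf
private String filename;

public void configure(JobConf job) {
    // map.input.file holds the path of the file the current split was read from
    filename = new Path(job.get("map.input.file")).getName();
}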
Regards,
*Stanley
Hi
I have a 3-node Hadoop cluster in which I created 3 Data Nodes.
However, I don't have enough space on one of the nodes to cater for another
project's logs. So I decommissioned this node from the Data Node list, but I
could not re-claim the space from it.
Is there a way to get this node to release
Please check my inline comments, which are in blue color...
From: Phan, Truong Q [troung.p...@team.telstra.com]
Sent: Thursday, March 20, 2014 8:04 AM
To: user@hadoop.apache.org
Subject: how to free up space of the old Data Node
Hi
I have 3 nodes Hadoop cluster
I want to access and study the Hadoop cluster's metadata, which is stored in
the fsimage file on the NameNode machine. I came to know that the
OfflineImageViewer is used to do so, but when I try it I get an exception.
/usr/hadoop/hadoop-1.2.1# bin/hadoop oiv -i fsimage -o fsimage.txt
Thanks for the reply.
This Hadoop cluster is our POC and the node has less space compared to the
other two nodes.
How do I change the Replication Factor (RF) from 3 down to 2?
Is this controlled by the dfs.datanode.handler.count parameter?
Thanks and Regards,
Truong Phan
P+ 61 2 8576
You can change the replication factor using the following command:
hdfs dfs -setrep [-R] <rep> <path>
Once this is done, you can re-commission the datanode, and all the
over-replicated blocks will be removed.
If they are not removed, restart the datanode.
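For example, to bring everything under / down to 2 replicas:
hdfs dfs -setrep -R 2 /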
Regards,
Vinayakumar B
From: Phan, Truong Q
Hi
Could you please provide me an alternative link that explains how to change
the final output file name to a desired name, rather than the partitioned
names like part-*?
Can I have sample Python code to run MapReduce Streaming with custom
output file names?
One helper from the
Hi Battula,
I hope Battula is your first name. :P
Here is the output of your suggested commands:
[root@nsda3dmsrpt02] /usr/lib/hadoop-0.20-mapreduce# sudo -u hdfs hadoop fsck /
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Connecting
Please check my inline comments, which are in blue color...
From: Phan, Truong Q [troung.p...@team.telstra.com]
Sent: Thursday, March 20, 2014 10:28 AM
To: user@hadoop.apache.org
Subject: RE: how to free up space of the old Data Node
Thanks for the reply.
This
The following node is down. Please have a look at the datanode logs and try to
bring it back up before taking further action (like decreasing the replication factor).
Dead datanodes:
Name: 172.18.126.99:50010 (nsda3dmsrpt02.internal.bigpond.com)
From: Phan, Truong Q
This seems to be an issue; please file a JIRA:
https://issues.apache.org/jira
From: c.agra...@outlook.com [c.agra...@outlook.com] on behalf of Chetan Agrawal
[cagrawa...@gmail.com]
Sent: Wednesday, March 19, 2014 12:50 PM
To: hadoop users
Subject:
Hi,
If we give the below code,
=====
word.set(filename + " " + tokenizer.nextToken());
output.collect(word, one);
=====
The output is wrong, because it shows:
filename  word    occurrence
vinitha   java    4
vinitha   oracle  3
sony