I am using Cloudera CDH 4, the latest version. I didn't remove anything from
the shell; as far as I can recall, the issue happened after I added some
feature through Cloudera Manager.
Any thoughts?
Best Regards, Joshua Tu
From: bharathvissapragada1...@gmail.com
Date: Fri, 15 Nov 2013 11:41:19 +053
thanks
On Thu, Nov 14, 2013 at 10:18 PM, unmesha sreeveni wrote:
> @chandu banavaram:
> This exception usually happens if HDFS is trying to write to a file
> which no longer exists in HDFS.
>
> I think in my case certain files were not created in my HDFS; the creation
> failed due to some permission issues
@chandu banavaram:
This exception usually happens if HDFS is trying to write to a file which
no longer exists in HDFS.
I think in my case certain files were not created in my HDFS; the creation
failed due to some permission issues.
I am trying it out.
On Wed, Nov 13, 2013 at 9:25 AM, unmesha sreeveni wrote:
What is your Hadoop version? Did you manually delete any files from the NN
edits dir? Do you see this gap in the file listing of the edits directory too?
Ideally all the txids appear consecutive when you do a file listing in that
dir.
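For example, a healthy edits directory typically looks something like the
listing below (the txids here are made up); each segment's first txid follows
the last txid of the previous segment, and a missing range in that sequence is
what produces the "gap in the edit log" error:

edits_0000000000000000001-0000000000000000010
edits_0000000000000000011-0000000000000000025
edits_inprogress_0000000000000000026
fsimage_0000000000000000010
seen_txid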
On Fri, Nov 15, 2013 at 9:44 AM, Joshua Tu wrote:
> Hi there,
>
Hi there,
I deployed a single node for testing. Today the NN stopped and cannot be
started; it fails with the error: There appears to be a gap in the edit log.
2013-11-14 15:00:01,431 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
shutdown complete.
2013-11-14 15:00:01,432 F
Hi again
At the top of my namenode (hadoop-2.2.0) log, it says:
13/11/15 10:03:38 INFO namenode.NameNode: registered UNIX signal handlers
for [TERM, HUP, INT]
Does it mean that I can send UNIX signals (such as TERM, HUP, INT) to it
through the terminal?
What reflection will the namen
I have gone through the QJM design doc, but I still want to make sure about
this particular point.
In an NN HA setup with QJM, I understand that the journal manager class in
the namenode takes care of writing edit logs to all journal nodes, and I am
guessing that this does not impact the NN's performance because NN s
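For reference, the set of journal nodes the NN writes its edits to is the one
configured in hdfs-site.xml, roughly like the sketch below (the host names and
journal ID here are only placeholders):

<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
</property>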
Hi,
We changed the jobtracker name recently in our production cluster and
populated the same value in the conf files across all our worker nodes.
A couple of tasktrackers (at least from what we saw) have started filling up
with the log messages below. It works when I manually do nslookup or dig for
the jobtracke
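For what it's worth, the JobTracker address the tasktrackers try to resolve
comes from the mapred.job.tracker property in mapred-site.xml on each worker
node; roughly like the sketch below, where the host name and port are only
placeholders:

<property>
  <name>mapred.job.tracker</name>
  <value>new-jobtracker-host.example.com:8021</value>
</property>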
On Thu, Nov 14, 2013 at 7:14 AM, Pastrana, Rodrigo (RIS-BCT)
wrote:
> So apparently the v2.2-beta Apache Hadoop 2.2 tar file no longer contains
> a 64-bit version of libhdfs as reported here:
> (https://issues.apache.org/jira/browse/HADOOP-9911).
>
> Does anybody know if there are any plans to includ
Yes, I closed it :(
On Thu, Nov 14, 2013 at 8:51 PM, java8964 java8964 wrote:
> Maybe just a silly guess, did you close your Writer?
>
> Yong
>
> --
> Date: Thu, 14 Nov 2013 12:47:13 +0530
> Subject: Re: Folder not created using Hadoop Mapreduce code
> From: unmeshab...
Hi all
An INFO log line raised my questions:
13/11/15 10:03:40 INFO namenode.FSNamesystem: Finished loading FSImage in
261 msecs
Re-format filesystem in QJM to [10.7.23.122:8485, 10.7.23.125:8485,
10.
Maybe it is not reading the right configuration file.
If possible, make the file malformed and then restart the tasktracker; if it
is really reading that file, the restart must fail.
On Thu, Nov 14, 2013 at 11:53 PM, Vincent Y. Shen
wrote:
> Hi, I tried but it is still not working... 4 nodes, 8 reducers, all values
> are final
>
>
> On
Hi
I have GROUP and FOREACH statements as below:
grouped = GROUP filterdata BY (page_name,web_session_id);
x = foreach grouped {
distinct_web_cookie_id= DISTINCT filterdata.web_cookie_id;
distinct_encrypted_customer_id= DISTINCT filterdata.encrypted_customer_id;
distinct_web_session_id= DISTINC
Yeah, you're right, I meant when locality relaxation is turned off.
I'm not super familiar with the capacity scheduler (more familiar with the
fair scheduler), so maybe someone with knowledge about that can chime in.
On Thu, Nov 14, 2013 at 1:28 PM, Gaurav Gupta wrote:
> Sandy,
>
>
>
> For the
Sandy,
For the first question, you mentioned locality relaxation is turned on; I
assume you mean that relax locality is set to false, right?
For the scheduler, I am using the capacity scheduler and modified the
following property in capacity-scheduler.xml:
yarn.scheduler.capacity.node-locality-delay
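For context, that property lives in capacity-scheduler.xml roughly as in the
sketch below; the value is only an example (it is the number of missed
scheduling opportunities after which the scheduler relaxes to rack-local):

<property>
  <name>yarn.scheduler.capacity.node-locality-delay</name>
  <value>40</value>
</property>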
Great to hear. Other answers inline
On Thu, Nov 14, 2013 at 12:05 PM, Gaurav Gupta wrote:
> Sandy,
>
>
>
> The last trick worked but now I have couple of more questions
>
>
>
> 1. If I don’t request for rack and relax locality is false with
> scheduler delay on , I see that when I pass a w
Sandy,
The last trick worked, but now I have a couple more questions.
1. If I don't request the rack and relax locality is false with the
scheduler delay on, I see that when I pass a wrong host I don't get any
container back. Why is that?
2. I also noticed that with scheduler delay on and
Requesting the rack is not necessary, and is leading to the behavior that
you're seeing.
The documentation states:
* If locality relaxation is disabled, then only within the same request,
* a node and its rack may be specified together. This allows for a specific
* rack with a preference
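To make that concrete, here is a rough, untested sketch against the 2.2
AMRMClient API; the host name, memory size, and priority are placeholders:

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class NodeLocalRequest {
  public static void main(String[] args) {
    AMRMClient<ContainerRequest> amrmClient = AMRMClient.createAMRMClient();
    amrmClient.init(new YarnConfiguration());
    amrmClient.start();
    // registerApplicationMaster(...) and the allocate() loop are omitted here.

    Resource capability = Resource.newInstance(1024, 1);  // 1 GB, 1 vcore
    Priority priority = Priority.newInstance(0);

    // Ask for a specific node only; no separate rack-only request is added
    // when locality relaxation is turned off.
    ContainerRequest req = new ContainerRequest(
        capability,
        new String[] { "worker-node-01" },  // placeholder host name
        null,                               // no extra rack entry
        priority,
        false);                             // relaxLocality disabled
    amrmClient.addContainerRequest(req);
  }
}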
The distributed shell example is the best one that I know of.
John
From: Bill Q [mailto:bill.q@gmail.com]
Sent: Tuesday, November 12, 2013 12:51 PM
To: user@hadoop.apache.org
Subject: Re: Writing an application based on YARN 2.2
Hi Lohit,
Thanks a lot. Are there any updated docs that would se
Hi all,
I have to mount the hadoop.tmp.dir folder into the local file system as the
Tomcat home folder. Please guide me on how to do that.
thanks,
Mallik.
Hi, I tried but it is still not working... 4 nodes, 8 reducers, all values
are final
On Thu, Nov 14, 2013 at 3:02 PM, Dieter De Witte wrote:
> you could make the values final: add <final>true</final> to make sure
> they cannot be overridden.
>
>
> 2013/11/14 Vincent Y. Shen
>
>> Hi Dieter,
>>
>> Thanks a lo
I found it frustrating to edit and view Hadoop configuration files by hand.
I therefore wrote a small utility that enables one to edit and query Hadoop
configuration from the command line.
While this is not particularly useful for a production cluster, where
configuration changes are rare, this could be
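For comparison, reading a single value programmatically only takes a few lines
with org.apache.hadoop.conf.Configuration; the file path and key in this
sketch are just examples:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class GetConfValue {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Point this at whichever config file you want to inspect.
    conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
    // Falls back to "2" if the key is not set anywhere.
    System.out.println(conf.get("mapred.tasktracker.reduce.tasks.maximum", "2"));
  }
}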
Hi
I'm building an Eclipse plugin for hadoop-2.2.0.
It throws a NoClassDefFoundError[1] even though I have put the jar on the
classpath.
Would anybody give some suggestions?
Regard
[1]--
Maybe just a silly guess, did you close your Writer?
Yong
Date: Thu, 14 Nov 2013 12:47:13 +0530
Subject: Re: Folder not created using Hadoop Mapreduce code
From: unmeshab...@gmail.com
To: user@hadoop.apache.org
@rab ra: yes, using FileSystem's mkdirs() we can create folders and we can
also create i
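For what it's worth, a minimal, untested sketch that creates the folder
explicitly and closes the writer; the paths and key/value classes below are
just examples:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class WriteWithMkdirs {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path outDir = new Path("/user/example/output");  // placeholder path
    if (!fs.exists(outDir)) {
      fs.mkdirs(outDir);  // create the folder up front
    }

    SequenceFile.Writer writer = SequenceFile.createWriter(
        fs, conf, new Path(outDir, "part-00000"), Text.class, IntWritable.class);
    try {
      writer.append(new Text("key"), new IntWritable(1));
    } finally {
      writer.close();  // data may not show up in HDFS until the writer is closed
    }
  }
}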
So apparently the v2.2-beta Apache Hadoop 2.2 tar file no longer contains a
64-bit version of libhdfs, as reported here:
(https://issues.apache.org/jira/browse/HADOOP-9911).
Does anybody know if there are any plans to include the 64-bit libs in the
distribution? Any ideas where to search for future p
you could make the values final: add <final>true</final> to the property to
make sure it cannot be overridden.
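In mapred-site.xml that looks roughly like the snippet below (the value itself
is just an example):

<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>8</value>
  <final>true</final>
</property>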
2013/11/14 Vincent Y. Shen
> Hi Dieter,
>
> Thanks a lot for prompt reply! very much appreciated.
>
> Indeed, I did change in the mapred-site.xml in ALL the nodes and restarted
> the cluster 4 times but no luck
Hi Dieter,
Thanks a lot for the prompt reply! Very much appreciated.
Indeed, I did make the change in mapred-site.xml on ALL the nodes and
restarted the cluster 4 times, but no luck...
Any other hints?
On Thu, Nov 14, 2013 at 2:56 PM, Dieter De Witte wrote:
> Dear Vincent,
>
> You have to make sur
Dear Vincent,
You have to make sure that mapred-site.xml is modified on every tasktracker
node, not just the master! mapred-site.xml can be customized if you have a
heterogeneous cluster. I usually start the dfs, then copy all the
mapred-site.xml files to the slave nodes, then run start-mapred.sh and then
start a j
Hi Guys,
I am increasing mapred.tasktracker.reduce.tasks.maximum in the Hadoop
cluster. But no matter what I tried, Hadoop refuses to use my value and
keeps the default (2).
What I did:
===
stop mapred
stop dfs
update the value of mapred.tasktracker.reduce.tasks.maximum
start dfs
sta
Hi
I am doing an upgrade test from 2.0.5-alpha to 2.2.0.
On the original 2.0.5-alpha I have enabled HA, and when I do the upgrade an
exception[1] comes out:
So what are the steps to upgrade with HA enabled? Do I need to disable HA
and do the upgrade on the namenode and the backupnode s