Again I got the same error and it says
The reducer copier failed
...
could not find any valid local directory for file
/tmp/hadoop-hadoop/map_150.out
Searching the web suggests that I have to clean up the /tmp/hadoop-hadoop folder,
but the total size of this folder is 800KB with 1100 files. Doe
Hi Rohit,
How do I enable debug for AM container logs, and to which location are they
written?
I tried changing log4j.properties and can see DEBUG logs for the RM, NM, etc.,
but I don't see AM-related debug logs.
Thanks,
Ashwin
On Fri, Mar 21, 2014 at 3:05 AM, Rohith Sharma K S <
rohithsharm...@huawei.com> wrote:
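[Editor's note: a hedged sketch of one way to do this for MapReduce AMs, assuming Hadoop 2.x. The property below is per-job, so it can also be passed with -D at submit time instead of editing the file.]

```xml
<!-- mapred-site.xml (or -D on the submit command line):
     raise the log level of the MapReduce ApplicationMaster -->
<property>
  <name>yarn.app.mapreduce.am.log.level</name>
  <value>DEBUG</value>
</property>
```

The AM output lands in the container logs (the AM is usually container ..._000001) under yarn.nodemanager.log-dirs on the node that ran it, or can be fetched with `yarn logs -applicationId <appId>` after the application finishes, if log aggregation is enabled.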
Yes, this is the best way to go.
Sent from my iPhone5s
> On Mar 22, 2014, at 3:03, Something Something wrote:
>
> I will be happy to follow all these steps if someone confirms that this is
> the best way to handle it. Seems harmless to me, but just wondering. Thanks.
>
>
>> On Fri, Mar 21, 2
I will be happy to follow all these steps if someone confirms that this is
the best way to handle it. Seems harmless to me, but just wondering.
Thanks.
On Fri, Mar 21, 2014 at 1:26 AM, Bertrand Dechoux wrote:
> JIRA, test, patch and review? I am sure the community would welcome it.
> And if you
Hi,
I'm trying to get all the start and finish times for all the jobs run on
a YARN cluster.
yarn application -list -appStates ALL
Will get me most of the details of the jobs, but not the times. However,
I can parse this for the application ids and then run
yarn application -status $ID
on
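[Editor's note: a hedged sketch of that loop, assuming the 2.x CLI output, where the `-status` report prints `Start-Time` and `Finish-Time` as epoch milliseconds; `ms_to_date` is a helper name of my own.]

```shell
# Convert an epoch-milliseconds value to a readable UTC timestamp.
ms_to_date() {
  date -u -d "@$(( $1 / 1000 ))" '+%Y-%m-%d %H:%M:%S'
}

# Loop over every application id and pull out the time fields.
for id in $(yarn application -list -appStates ALL 2>/dev/null \
              | awk '$1 ~ /^application_/ {print $1}'); do
  yarn application -status "$id" 2>/dev/null \
    | awk -v id="$id" -F' : ' '/Start-Time|Finish-Time/ {print id, $1, $2}'
done
```

The times can then be fed through `ms_to_date` for human-readable output.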
If this parameter is at the job level (i.e. for the whole run) then you
can set this value in the Configuration object to pass it on to the
mappers.
http://www.thecloudavenue.com/2011/11/passing-parameters-to-mappers-and.html
Regards,
Shahab
On Fri, Mar 21, 2014 at 7:08 AM, Ranjini Rathin
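[Editor's note: a hedged sketch of the submit side of this. The generic `-D` option sets the value in the job Configuration, provided the driver uses ToolRunner/GenericOptionsParser; mappers then read it back with `context.getConfiguration().get("my.mapper.langs")`. `wordfilter.jar`, `WordFilter`, and `my.mapper.langs` are made-up names; the wrapper only builds the command string so it can be shown without a live cluster.]

```shell
# Build the submit command that passes a job-level parameter via -D.
build_submit_cmd() {
  echo "hadoop jar wordfilter.jar WordFilter -D my.mapper.langs=$1 $2 $3"
}

build_submit_cmd C,JAVA /user/in /user/out
```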
OK, it seems that there was a "free disk space" issue.
I freed up more space and am running it again.
Regards,
Mahmood
On Friday, March 21, 2014 11:43 AM, shashwat shriparv
wrote:
Check whether the tmp dir, the HDFS remaining space, or the log directories
are filling up while this job runs..
On Fri, Mar 21,
Hi,
Thanks for the great support; I have fixed the issue and now got
the output.
But I have one query: is it possible to give a runtime argument to the
mapper class, for example, passing the value C,JAVA at runtime?
if ((sp[k].equalsIgnoreCase("C"))) {
while (itr.hasMor
Hi
The below stack trace is generic for any AM launcher that failed to launch.
Can you debug with the AM container logs to get the proper stack trace?
Thanks & Regards
Rohith Sharma K S
From: Ashwin Shankar [mailto:ashwinshanka...@gmail.com]
Sent: 21 March 2014 14:02
To: user@hadoop.apache.org
Subject: Job fail
Hi Will,
Take a look at the distributedshell source code located in the Hadoop
source tree:
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell
There are 2 major files:
ApplicationMa
I see, thanks.
2014-03-21 16:20 GMT+08:00 Azuryy Yu :
> It'll be supported in 2.4.
> please look at here:
> https://issues.apache.org/jira/browse/HDFS-5138
>
>
>
> On Fri, Mar 21, 2014 at 3:46 PM, Meng QingPing wrote:
>
>> Hi,
>>
>> Hadoop dfs upgrade fail when HA enabled. Can Hadoop add featur
Hi,
I'm writing a new feature in Fair scheduler and wanted to test it out
by running jobs submitted by different users from my laptop.
My sleep job runs fine as long as the user name is my Mac user name.
If I change my Hadoop user name by setting HADOOP_USER_NAME,
my jobs fail with the exception
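[Editor's note: a hedged illustration of the override mechanism itself, runnable without a cluster. HADOOP_USER_NAME is honored by the Hadoop client only when Kerberos security is off; `submit_as` is a wrapper name of my own. A common cause of failures after switching users is that the new user's staging/home directory in HDFS does not exist or is not writable.]

```shell
# Run any command with the Hadoop client identity overridden.
submit_as() {
  user="$1"; shift
  HADOOP_USER_NAME="$user" "$@"
}

# Demonstrate with a plain shell command instead of a real job submit.
submit_as testuser1 sh -c 'echo "submitting as $HADOOP_USER_NAME"'
```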
JIRA, test, patch and review? I am sure the community would welcome it. And
if you don't, well, it is unlikely to appear in the Hadoop trunk soon.
Bertrand
On Fri, Mar 21, 2014 at 12:49 AM, Something Something <
mailinglist...@gmail.com> wrote:
> Confirmed that ToolRunner is NOT thread-safe:
>
It'll be supported in 2.4.
Please look here:
https://issues.apache.org/jira/browse/HDFS-5138
On Fri, Mar 21, 2014 at 3:46 PM, Meng QingPing wrote:
> Hi,
>
> Hadoop DFS upgrade fails when HA is enabled. Can Hadoop add a feature to
> upgrade DFS automatically based on the HA configuration?
>
> Thanks,
>
Hello Norbert,
In 2.x, the upgrade process on the DN disk layout happens with greater
parallelism, and the overall DN upgrade is also a distributed task, so
you can expect it to complete very fast - certainly less than an hour
tops.
On Fri, Mar 21, 2014 at 12:54 PM, norbi wrote:
> Hi List,
>
> w
Hi,
Hadoop DFS upgrade fails when HA is enabled. Can Hadoop add a feature to
upgrade DFS automatically based on the HA configuration?
Thanks,
Jack
Hi List,
we are using a Hadoop cluster (v0.20.2) and want to upgrade to Cloudera
4.6 (Hadoop 2.0).
Configured Capacity: 969329209806848 (881.6 TB)
Present Capacity: 969146545811532 (881.43 TB)
DFS Remaining: 219694195171526 (199.81 TB)
DFS Used: 749452350640006 (681.62 TB)
DFS Used%: 77.33%
U
Check whether the tmp dir, the HDFS remaining space, or the log directories
are filling up while this job runs..
On Fri, Mar 21, 2014 at 12:11 PM, Mahmood Naderan wrote:
> that imply a *retry* process? Or I have to be wo
Warm Regards,
Shashwat Shriparv
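[Editor's note: a hedged sketch of the local-side checks suggested above; the cluster-side command is left commented out since it needs a running HDFS.]

```shell
# Watch these while the job runs to see what is filling up.
df -h /tmp                                    # free space on the tmp partition
du -sh /tmp/hadoop-* 2>/dev/null || true      # size of Hadoop's local dirs
# hdfs dfsadmin -report | grep -i remaining   # DFS remaining (needs a cluster)
```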