Thanks, Rob Blah
On Fri, Nov 15, 2013 at 6:54 PM, Rob Blah wrote:
> MapReduce
> http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
> "We’ll thus assign 4 GB for Map task Containers, and 8 GB for Reduce tasks
> Containers."
>
> You can try editing configuration file:
> mapreduce.map.memory.mb
> mapreduce.reduce.memory.mb
If you "rollback", you lose all new data.
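For reference, the usual sequence (double-check against the docs for your versions) is roughly:

  start-dfs.sh -upgrade              # start the new version with the upgrade pending
  hadoop dfsadmin -finalizeUpgrade   # once you are satisfied, make it permanent
  start-dfs.sh -rollback             # or start the old version again and discard post-upgrade changes

Until you finalize, the NameNode keeps the pre-upgrade image around, and rolling back restores exactly that image, so anything written after the upgrade is gone.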
On Sat, Nov 16, 2013 at 12:25 AM, krispyjala wrote:
> What happens if you upgrade 1.0.4 to 2.2, run some stuff that puts in new
> data while the upgrade is not finalized, but then revert back to 1.0.4? Will
> the new data also be reverted back to 1.0.4 safely?
From the command line, can you run 'jmap -heap'?
http://download.oracle.com/javase/1.5.0/docs/tooldocs/share/jmap.html
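For example, against the JobTracker process (the pid is a placeholder):

  jmap -heap <jobtracker-pid>

That prints the configured maximum heap and the current usage of each generation.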
On Fri, Nov 15, 2013 at 10:50 AM, Viswanathan J wrote:
> Hi guys,
>
> I had JT OOME in hadoop version 1.2.1 and applied the patch based on the
> fix given by Apache contributors for JIRA issue MAPREDUCE-5508.
I also found this post to be of great help:
http://tzulitai.wordpress.com/2013/08/30/yarn-applications-code-level-breakdown-client
I think he has another post that covers ApplicationMaster code breakdown.
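To make the client side concrete, here is a minimal sketch of what such a client boils down to with the Hadoop 2.2 API (a sketch only; the class name and the /bin/date command are placeholders):

  import java.util.Collections;
  import org.apache.hadoop.yarn.api.records.ApplicationId;
  import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
  import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
  import org.apache.hadoop.yarn.api.records.Resource;
  import org.apache.hadoop.yarn.client.api.YarnClient;
  import org.apache.hadoop.yarn.client.api.YarnClientApplication;
  import org.apache.hadoop.yarn.conf.YarnConfiguration;
  import org.apache.hadoop.yarn.util.Records;

  public class MinimalYarnClient {
    public static void main(String[] args) throws Exception {
      // Connect to the ResourceManager configured in yarn-site.xml.
      YarnClient yarnClient = YarnClient.createYarnClient();
      yarnClient.init(new YarnConfiguration());
      yarnClient.start();

      // Ask the RM for a new application and fill in the submission context.
      YarnClientApplication app = yarnClient.createApplication();
      ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
      ctx.setApplicationName("hello-yarn");

      // Command that launches the ApplicationMaster container.
      ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);
      amContainer.setCommands(Collections.singletonList(
          "/bin/date 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr"));
      ctx.setAMContainerSpec(amContainer);
      ctx.setResource(Resource.newInstance(256, 1)); // 256 MB, 1 vcore for the AM

      ApplicationId appId = yarnClient.submitApplication(ctx);
      System.out.println("Submitted " + appId);
    }
  }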
-- Kris.
What happens if you upgrade 1.0.4 to 2.2, run some stuff that puts in new
data while the upgrade is not finalized, but then revert back to 1.0.4? Will
the new data also be reverted back to 1.0.4 safely?
Hi guys,
I had a JobTracker OOME on Hadoop 1.2.1 and applied the patch based on the
fix given by Apache contributors for JIRA issue MAPREDUCE-5508.
After applying that fix the heap size still grows gradually, and after about
a week the JobTracker slows down and eventually hangs completely, but this
time without an OOME. No error
Hi,
I am trying to run my C++ code on hadoop-1.2.1 using Pipes. The code needs
access to the CLAPACK libraries. My test setup is a single node, with CLAPACK
installed on that node. Everything compiles without any problem, but when I
run a job the mapper first hangs and then fails
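For reference, I submit the job with something like this (paths are placeholders; the two -D options make Pipes use the Java record reader/writer):

  hadoop pipes \
      -D hadoop.pipes.java.recordreader=true \
      -D hadoop.pipes.java.recordwriter=true \
      -input <hdfs-input-dir> -output <hdfs-output-dir> \
      -program <hdfs-path-to-my-binary>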
I'm configuring the Hadoop 2.2.0 stable release with an HA NameNode, but I
don't know how to configure remote access to the cluster.
I have the HA NameNode configured with manual failover, I have defined
dfs.nameservices, and I can access HDFS via the nameservice from all the
nodes inside the cluster, but not from machines outside it.
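For reference, my hdfs-site.xml looks roughly like this (a sketch; the nameservice, NameNode ids and hostnames are placeholders):

  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>host1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>host2:8020</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

My understanding is that a remote client would need the same properties in its own configuration before hdfs://mycluster can resolve, but I have not got that working.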
MapReduce
http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
"We’ll thus assign 4 GB for Map task Containers, and 8 GB for Reduce tasks
Containers."
You can try editing the configuration file (typically mapred-site.xml):
mapreduce.map.memory.mb
mapreduce.reduce.memory.mb
Or set these in a driver.
If y
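Setting them in a driver would look roughly like this (a sketch; the sizes are the ones from the blog post above, with -Xmx kept below the container size as the post recommends):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.mapreduce.Job;

  public class MemoryConfigExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      conf.set("mapreduce.map.memory.mb", "4096");    // map container size (MB)
      conf.set("mapreduce.reduce.memory.mb", "8192"); // reduce container size (MB)
      // The task JVM heap must fit inside the container:
      conf.set("mapreduce.map.java.opts", "-Xmx3072m");
      conf.set("mapreduce.reduce.java.opts", "-Xmx6144m");
      Job job = Job.getInstance(conf, "memory-config-example");
      // ... set mapper, reducer, input and output paths as usual ...
    }
  }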
Question: have you tried giving more memory to the containers?
2013/11/15 unmesha sreeveni
> Yes, found it.
> Tried -D mapred.child.java.opts=-Xmx4096M on the command line:
>
>
> On Fri, Nov 15, 2013 at 3:01 PM, unmesha sreeveni
> wrote:
>
>> hadoop jar /home/my/hadoop2.jar /user/unmesha/inputdata /u
"Can you check the config entry for yarn.scheduler.capacity.resource-
calculator ?
It should point to org.apache.hadoop.yarn.util.resource.
DefaultResourceCalculator"
Answer provided by Ted Yu in thread "DefaultResourceCalculator class not
found, ResourceManager fails to start."
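In capacity-scheduler.xml that entry looks like this (just the property Ted refers to):

  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
  </property>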
regards
2013/1
It was due to permission issues.
http://stackoverflow.com/questions/15941108/hdfs-access-from-remote-host-through-java-api-user-authentication
On Fri, Nov 15, 2013 at 8:34 AM, unmesha sreeveni wrote:
> yes. I closed it :(
>
>
> On Thu, Nov 14, 2013 at 8:51 PM, java8964 java8964
> wrote:
>
>> Maybe j
Yes, found it.
Tried -D mapred.child.java.opts=-Xmx4096M on the command line:
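That is, something like (note the -D option goes before the job arguments):

  hadoop jar /home/my/hadoop2.jar -D mapred.child.java.opts=-Xmx4096M /user/unmesha/inputdata /user/unmesha/out

One caveat: generic -D options are only picked up if the driver goes through ToolRunner/GenericOptionsParser. A minimal sketch of such a driver (the class name is made up):

  import org.apache.hadoop.conf.Configured;
  import org.apache.hadoop.util.Tool;
  import org.apache.hadoop.util.ToolRunner;

  public class MyJob extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
      // getConf() already contains anything passed with -D on the command line.
      // ... build and submit the Job here ...
      return 0;
    }

    public static void main(String[] args) throws Exception {
      System.exit(ToolRunner.run(new MyJob(), args));
    }
  }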
On Fri, Nov 15, 2013 at 3:01 PM, unmesha sreeveni wrote:
> hadoop jar /home/my/hadoop2.jar /user/unmesha/inputdata /user/unmesha/out
>
>
>
> On Fri, Nov 15, 2013 at 2:53 PM, unmesha sreeveni
> wrote:
>
>> When i tried to ex
Hi all,
It's weird: my YARN ResourceManager fails to start with an exception [1].
I also did some googling; others have run into this problem too, but with no
answer posted.
I checked the source, and there is in fact no DefaultResourceCalculator in
the package org.apache.hadoop.yarn.server.resourcemanager
[Solved] It was due to a permission issue.
On Wed, Nov 13, 2013 at 11:27 PM, Rahul Bhattacharjee <rahul.rec@gmail.com> wrote:
> If you have a map only job , then the output of the mappers would be
> written by hadoop itself.
>
> thanks,
> Rahul
>
>
> On Wed, Nov 13, 2013 at 9:50 AM, Sahil Agar
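(A map-only job here simply means the reduce phase is switched off, e.g. job.setNumReduceTasks(0); in the driver; Hadoop then writes each mapper's output directly to the job output directory.)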
hadoop jar /home/my/hadoop2.jar /user/unmesha/inputdata /user/unmesha/out
On Fri, Nov 15, 2013 at 2:53 PM, unmesha sreeveni wrote:
> When I try to execute my program on a 100 MB file I get a Java heap space
> error (OutOfMemoryError)
>
>
> >hadoop jar /home/my/hadoop2.jar /user/unmesha/inputdata
> /user/
When I try to execute my program on a 100 MB file I get a Java heap space
error (java.lang.OutOfMemoryError: Java heap space):
>hadoop jar /home/my/hadoop2.jar /user/unmesha/inputdata /user/inputdata/out
How do I increase the heap size from the command line?
--
*Thanks & Regards*
Unmesha Sreeveni U.B
*Junior Developer*
You're most welcome :)
On Fri, Nov 15, 2013 at 12:46 PM, chandu banavaram <chandu.banava...@gmail.com> wrote:
> thanks
>
>
> On Thu, Nov 14, 2013 at 10:18 PM, unmesha sreeveni
> wrote:
>
>> @chandu banavaram:
>> This exception usually happens if HDFS is trying to write into a file
>> which is no
Hi,
I have a very simple use case: I have an edge list and I am trying to
convert it into an adjacency list. Basically the input is

src  target
a    b
a    c
b    d
b    e
and so on..
What I am trying to build is
a [b,c]
b [d,e]
.. and so on..
But every now and then I hit a super node, which has a huge number of neighbors.
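For the simple case (ignoring the super node for a moment), a minimal sketch of the obvious MapReduce version, assuming whitespace-separated "src target" lines (class names are made up):

  import java.io.IOException;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Reducer;

  public class AdjacencyList {
    public static class EdgeMapper extends Mapper<LongWritable, Text, Text, Text> {
      @Override
      protected void map(LongWritable key, Text value, Context ctx)
          throws IOException, InterruptedException {
        String[] parts = value.toString().trim().split("\\s+");
        if (parts.length == 2) {
          ctx.write(new Text(parts[0]), new Text(parts[1])); // (src, target)
        }
      }
    }

    public static class CollectReducer extends Reducer<Text, Text, Text, Text> {
      @Override
      protected void reduce(Text key, Iterable<Text> targets, Context ctx)
          throws IOException, InterruptedException {
        StringBuilder sb = new StringBuilder("[");
        for (Text t : targets) {
          if (sb.length() > 1) sb.append(',');
          sb.append(t);
        }
        ctx.write(key, new Text(sb.append(']').toString())); // e.g. a [b,c]
      }
    }
  }

The super node then shows up as one reduce key with an enormous value list; the usual workarounds are to salt the key (split "a" into a#0, a#1, ... and concatenate in a second pass) or to write the list out incrementally instead of buffering it in memory.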