Hi,
I need some guidance on making an RMI call from a map/reduce job, basically
connecting to an RMI server on "localhost".
This usually involves setting a security manager and providing a policy
file that allows socket communications.
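Something along these lines is what I have in mind (just a sketch; MyRemoteService
is a placeholder for my actual remote interface):

import java.rmi.Naming;
import java.rmi.RMISecurityManager;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Placeholder for the real remote interface bound on the server.
interface MyRemoteService extends Remote {
    String ping() throws RemoteException;
}

public class RmiFromTask {
    public static MyRemoteService connect(String policyFile) throws Exception {
        // The policy file must grant java.net.SocketPermission for the registry port.
        System.setProperty("java.security.policy", policyFile);
        if (System.getSecurityManager() == null) {
            System.setSecurityManager(new RMISecurityManager());
        }
        // "localhost" resolves to whichever node the task happens to run on.
        return (MyRemoteService) Naming.lookup("rmi://localhost:1099/MyRemoteService");
    }
}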
What is the best approach to do this for a map/reduce job?
Thank you
You should increase the heap size of the child JVM processes that the
tasktracker launches to run tasks, rather than that of the process running
the jobtracker. By default, Hadoop allocates 1000 MB of memory to each
daemon it runs; this is controlled by the HADOOP_HEAPSIZE setting in
hadoop-env.sh. Note that this value is not applied to the child task JVMs,
whose heap is set separately via mapred.child.java.opts.
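To set the child task heap per job, here is a minimal sketch using the old
JobConf API (HeapConfigExample is a placeholder; mapper, reducer and paths
are elided):

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class HeapConfigExample {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(HeapConfigExample.class);
        // Heap for each child task JVM; HADOOP_HEAPSIZE only sizes the daemons.
        conf.set("mapred.child.java.opts", "-Xmx2000m");
        // ... set mapper, reducer, input and output paths here ...
        JobClient.runJob(conf);
    }
}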
Rakesh,
The API DistributedCache.getLocalCacheFiles(conf) returns the local paths
of the files added to the distributed cache.
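For example, in the old API you can pick up the local copy in your mapper's
configure() (just a sketch; the "file.txt" name check is illustrative):

import java.io.IOException;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class CacheMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

    private Path cachedFile;

    @Override
    public void configure(JobConf conf) {
        try {
            // Local on-disk copies of everything added to the distributed cache.
            Path[] cached = DistributedCache.getLocalCacheFiles(conf);
            if (cached != null) {
                for (Path p : cached) {
                    if ("file.txt".equals(p.getName())) {
                        cachedFile = p; // the file shipped with -files
                    }
                }
            }
        } catch (IOException e) {
            throw new RuntimeException("Could not locate cached file", e);
        }
    }

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, Text> out, Reporter reporter)
            throws IOException {
        // Open cachedFile here, e.g. new BufferedReader(new FileReader(cachedFile.toString())).
    }
}

If the job was submitted through ToolRunner with -files, the file should also
be symlinked into the task's working directory under its own name, so opening
"file.txt" relative to the working directory usually works as well.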
More on this topic,
Courtesy: Alexander Behm
http://www.ics.uci.edu/~abehm/hadoop.html#howto_distributed_cache
-Shrijeet
On Tue, Oct 19, 2010 at 1:09 PM, rakesh kothari wrote:
>
> I am using Hadoop 0.20.1.
I am using Hadoop 0.20.1.
-Rakesh
From: rkothari_...@hotmail.com
To: mapreduce-user@hadoop.apache.org
Subject: Accessing files from distributed cache
Date: Tue, 19 Oct 2010 13:03:04 -0700
Hi,
What's the way to access files copied to the distributed cache from the map tasks?
e.g.
if I run my M/R job as $hadoop jar my.jar -files hdfs://path/to/my/file.txt,
how can I access file.txt in my map (or reduce) task?
Thanks,
-Rakesh
Where is it failing exactly? Are the map/reduce tasks failing, or is it something else?
On Tue, Oct 19, 2010 at 9:28 AM, Yin Lou wrote:
> Hi,
>
> You can increase the heap size with -D mapred.child.java.opts="-d64 -Xmx4096m"
>
> Hope it helps.
> Yin
Hi,
You can increase the heap size with -D mapred.child.java.opts="-d64 -Xmx4096m"
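Note that -D is a generic option, so your driver has to go through
ToolRunner/GenericOptionsParser for it to be picked up. A rough sketch
(MyDriver is a placeholder):

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyDriver extends Configured implements Tool {
    public int run(String[] args) throws Exception {
        // getConf() already contains anything passed with -D on the command line.
        JobConf conf = new JobConf(getConf(), MyDriver.class);
        // ... set mapper, reducer, input and output paths here ...
        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips the generic options (-D, -files, ...) before calling run().
        System.exit(ToolRunner.run(new MyDriver(), args));
    }
}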
Hope it helps.
Yin
On Tue, Oct 19, 2010 at 12:03 PM, web service wrote:
> I have a simple map-reduce program which runs fine under Eclipse. However,
> when I execute it using Hadoop, it gives me an out-of-memory error.
I have a simple map-reduce program which runs fine under Eclipse. However,
when I execute it using Hadoop, it gives me an out-of-memory error.
HADOOP_HEAPSIZE is 2000 MB.
Not sure what the problem is.