- Original Message -
From: Arko Provo Mukherjee
Date: Tuesday, November 8, 2011 1:26 pm
Subject: Issues with Distributed Caching
To: mapreduce-user@hadoop.apache.org
> Hello,
>
> I am having the following problem with Distributed Caching.
>
> *In the driver class, I am doing the followi
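[The original driver-class code is cut off above. For context, a typical DistributedCache setup in the old org.apache.hadoop.filecache API (circa Hadoop 0.20) looks roughly like the sketch below; the paths, class names, and job details are placeholders, not the poster's actual code, and it requires the Hadoop client jars on the classpath.]

```java
// Sketch of typical DistributedCache usage in a driver class (old mapred API).
// The HDFS path and symlink name below are placeholders.
import java.net.URI;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.mapred.JobConf;

public class CacheDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(CacheDriver.class);
    // The cached file must already exist in HDFS; "#lookup" requests a
    // local symlink with that name in each task's working directory.
    DistributedCache.addCacheFile(new URI("/user/demo/lookup.txt#lookup"), conf);
    DistributedCache.createSymlink(conf);
    // ... set mapper/reducer classes and input/output paths, then submit
    // the job with JobClient.runJob(conf).
  }
}
```

[In the mapper, the cached copies are then retrieved with DistributedCache.getLocalCacheFiles(conf) inside configure().]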
forwarding to mapreduce
--- Begin Message ---
Am I being completely silly asking about this? Does anyone know?
On Wed, Nov 2, 2011 at 6:27 PM, Meng Mao wrote:
> Is there any mechanism in place to remove failed task attempt directories
> from the TaskTracker's jobcache?
>
> It seems like for
- Original Message -
From: Russell Brown
Date: Friday, November 4, 2011 9:18 pm
Subject: Re: Never ending reduce jobs, error Error reading task
outputConnection refused
To: mapreduce-user@hadoop.apache.org
>
> On 4 Nov 2011, at 15:44, Uma Maheswara Rao G 72686
- Original Message -
From: Russell Brown
Date: Friday, November 4, 2011 9:11 pm
Subject: Re: Never ending reduce jobs, error Error reading task
outputConnection refused
To: mapreduce-user@hadoop.apache.org
>
> On 4 Nov 2011, at 15:35, Uma Maheswara Rao G 72686 wrote:
>
This problem can occur if the host mappings are not configured properly.
Can you check whether your TaskTrackers can ping each other using the
configured hostnames?
Regards,
Uma
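[As a hypothetical illustration of the host mapping Uma refers to: every node should be able to resolve every other node's configured hostname, e.g. via /etc/hosts or DNS. The hostnames and addresses below are placeholders only.]

```
# /etc/hosts (example entries; hostnames and IPs are placeholders)
192.168.1.10   master
192.168.1.11   slave1
192.168.1.12   slave2
```

[A quick check is to run `ping slave1` from each node and confirm the resolved address matches the one the cluster configuration expects.]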
- Original Message -
From: Russell Brown
Date: Friday, November 4, 2011 9:00 pm
Subject: Never ending r
ts more than 4 days I'm working on this issue
> and
> tried different ways, but with no result.^^
>
> BS.
> Masoud
>
> On 11/03/2011 08:34 PM, Uma Maheswara Rao G 72686 wrote:
> > it won't display anything on the console.
> > If you get any error while executing the co
> So is there a client program to call this?
>
> Can one write their own simple client to call this method from all
> disks on the cluster?
>
> How about a map reduce job to collect from all disks on the cluster?
>
> On 10/15/11 4:51 AM, "Uma Maheswara Rao G 7268
/** Return the disk usage of the filesystem, including total capacity,
 *  used space, and remaining space */
public DiskStatus getDiskStatus() throws IOException {
  return dfs.getDiskStatus();
}
DistributedFileSystem has the above API from java API side.
Regards,
Uma
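[To the earlier question about a client program: a minimal standalone client could look like the sketch below. It assumes the old (circa-0.20) DistributedFileSystem.getDiskStatus() API that Uma quotes, needs the Hadoop client jars and cluster configuration on the classpath, and the namenode URI is a placeholder.]

```java
// Hypothetical minimal client for the getDiskStatus() API quoted above.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem.DiskStatus;

public class DiskStatusClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder namenode address; use your cluster's fs.default.name.
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000/"), conf);
    if (fs instanceof DistributedFileSystem) {
      DiskStatus ds = ((DistributedFileSystem) fs).getDiskStatus();
      System.out.println("capacity=" + ds.getCapacity()
          + " used=" + ds.getDfsUsed()
          + " remaining=" + ds.getRemaining());
    }
  }
}
```

[Note that similar capacity/used/remaining numbers, broken down per DataNode, are also printed by the command-line tool `hadoop dfsadmin -report`, which may be simpler than a custom client or a MapReduce job.]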
- Original Mess
Hello Joris,
It looks like you have configured mapred.map.child.java.opts to -Xmx512M;
spawning each child JVM requires that much memory.
Can you check what other processes are occupying memory on your machine? Your
current task is not getting enough memory to initialize. Or try to reduce
th
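[Uma's truncated suggestion is presumably to reduce the child heap. As a hedged illustration, that setting lives in the job configuration or mapred-site.xml; the property name is the one quoted in the message above, and the 256 MB value below is only an example to tune against the node's free memory.]

```
<!-- mapred-site.xml fragment (example value only) -->
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx256m</value>
</property>
```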
umak$
> bin/mumak.sh src/test/data/19-jobs.trace.json.gz
> src/test/data/19-jobs.topology.json.gz
> it gets stuck at some point. Log is here
> <http://pastebin.com/9SNUHLFy>
> Thanks,
> Arun
>
>
>
>
>
> On Wed, Sep 21, 2011 at 2:03 PM, Uma Maheswa