This question is more related to mapreduce.
I put user@hbase in Bcc.
Cheers
On Sun, Mar 31, 2013 at 11:15 AM, tojaneyang wrote:
> Hi Ted,
>
> Do you have any suggestions for this?
>
> I am using the Hadoop that is packaged with HBase 0.94.1; it is Hadoop
> 1.0.3.
>
> Thanks,
>
> Xia
>
>
>
Hi,
This directory is used as part of the 'DistributedCache' feature. (
http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html#DistributedCache).
There is a configuration key "local.cache.size" which controls the amount
of data stored under DistributedCache. The default limit is 10GB.
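For illustration only (this fragment is not from the thread): overriding that default in mapred-site.xml would use the standard Hadoop 1.x property format, with the value given in bytes.

```xml
<!-- Illustrative only: cap the local DistributedCache at roughly 1 GB.
     local.cache.size is interpreted in bytes (1073741824 bytes = 1 GB). -->
<property>
  <name>local.cache.size</name>
  <value>1073741824</value>
</property>
```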
> already goes more than 1G now. Looks like it does not do the work. Could
> you advise if what I did is correct?
>
> local.cache.size = 50
>
> Thanks,
>
> Xia
>
> From: Hemanth Yamijala [mailto:yhema...@thoughtworks.com]
> Sent: Monday, April 08, 2013
in file core-default.xml, which is in hadoop-core-1.0.3.jar
Thanks,
Jane
From: Arun C Murthy [mailto:a...@hortonworks.com]
Sent: Wednesday, April 10, 2013 2:45 PM
To: user@hadoop.apache.org
Subject: Re: How to configure mapreduce archive size?
Ensure no jobs are running (cache limit is only for non-active cache
files), check after a little while.
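A simplified, hypothetical sketch of the rule Arun describes (this is an illustration, not Hadoop's actual TrackerDistributedCacheManager code): when the total cache size exceeds local.cache.size, only files that no running job holds a reference to ("non-active" files) are candidates for deletion.

```java
import java.util.*;

// Hypothetical simulation of the DistributedCache cleanup rule discussed
// above: when the total size exceeds the limit, only files that no
// running job references (refCount == 0) may be deleted.
public class CacheCleanupSketch {
    static final class Entry {
        final String path;
        final long size;
        int refCount; // > 0 means some running job is using the file

        Entry(String path, long size, int refCount) {
            this.path = path;
            this.size = size;
            this.refCount = refCount;
        }
    }

    // Deletes non-active entries (in list order) until the total size is
    // within the limit, or nothing deletable remains. Returns deleted paths.
    static List<String> cleanup(List<Entry> cache, long limitBytes) {
        List<String> deleted = new ArrayList<>();
        long total = cache.stream().mapToLong(e -> e.size).sum();
        Iterator<Entry> it = cache.iterator();
        while (total > limitBytes && it.hasNext()) {
            Entry e = it.next();
            if (e.refCount == 0) { // active files are never deleted
                total -= e.size;
                deleted.add(e.path);
                it.remove();
            }
        }
        return deleted;
    }

    public static void main(String[] args) {
        List<Entry> cache = new ArrayList<>(Arrays.asList(
            new Entry("/cache/a.jar", 600, 0),   // non-active
            new Entry("/cache/b.jar", 600, 2),   // in use by a running job
            new Entry("/cache/c.jar", 600, 0))); // non-active
        // A limit of 700 forces deletion of both non-active entries,
        // but the in-use jar survives even though the limit is exceeded.
        System.out.println(cleanup(cache, 700)); // prints [/cache/a.jar, /cache/c.jar]
    }
}
```

This is why the directory can temporarily exceed the configured limit while jobs are still running, matching the behavior reported in the thread.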
> job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, tableName);
>
> job.setNumReduceTasks(0);
>
> boolean b = job.waitForCompletion(true);
>
> From: Hemanth Yamijala [mailto:yhema...@thoughtworks.com]
> Sent: Thursday, April 11, 2013 9:09 PM
/archive. Could you help?
Thanks.
Xia
From: Hemanth Yamijala [mailto:yhema...@thoughtworks.com]
Sent: Thursday, April 11, 2013 9:09 PM
To: user@hadoop.apache.org
Subject: Re: How to configure mapreduce archive size?
TableMapReduceUtil has APIs like addDependencyJars which will use the DistributedCache.
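As a simplified illustration of that mechanism (a sketch, not HBase's actual implementation): addDependencyJars effectively accumulates jar paths into a single comma-separated configuration value, conventionally "tmpjars", which the job client then ships via the distributed cache. The Conf class below is a stand-in for Hadoop's Configuration, and the jar paths are made up.

```java
import java.util.*;

// Simplified sketch of the "accumulate dependency jars into one
// comma-separated config value" idea behind addDependencyJars.
public class DependencyJarsSketch {
    // Stand-in for Hadoop's Configuration class.
    static final class Conf {
        private final Map<String, String> props = new HashMap<>();
        String get(String k) { return props.get(k); }
        void set(String k, String v) { props.put(k, v); }
    }

    // Appends each jar path to the existing "tmpjars" value, skipping
    // duplicates while preserving insertion order.
    static void addDependencyJars(Conf conf, String... jars) {
        Set<String> merged = new LinkedHashSet<>();
        String existing = conf.get("tmpjars");
        if (existing != null && !existing.isEmpty()) {
            merged.addAll(Arrays.asList(existing.split(",")));
        }
        merged.addAll(Arrays.asList(jars));
        conf.set("tmpjars", String.join(",", merged));
    }

    public static void main(String[] args) {
        Conf conf = new Conf();
        addDependencyJars(conf, "/lib/hbase.jar", "/lib/guava.jar");
        addDependencyJars(conf, "/lib/guava.jar", "/lib/zookeeper.jar");
        System.out.println(conf.get("tmpjars"));
        // prints /lib/hbase.jar,/lib/guava.jar,/lib/zookeeper.jar
    }
}
```

Jars registered this way end up in the TaskTracker's local cache directory, which is what the size limit discussed in this thread governs.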
Subject: RE: How to configure mapreduce archive size?
Hi Hemanth,
I did not explicitly use DistributedCache in my code, and I did not use any
command-line arguments like -libjars either.
Where can I find job.xml? I am using Hbase MapReduce API and not setting any
job.xml.
The key point is I wa
From: bejoy.had...@gmail.com
Date: Tue, 16 Apr 2013 18:05:51
To:
Reply-To: bejoy.had...@gmail.com
Subject: Re: How to configure mapreduce archive size?
You can get the job.xml for each job from the JT web UI. Click on the job;
on the specific job page you'll find it.
Regards
Bejoy
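Since job.xml is just an XML dump of the job's configuration, a property such as local.cache.size can also be pulled out of a saved copy programmatically. A minimal sketch (the XML fragment and the JobXmlSketch class are made up for illustration; the name/value property layout is the standard Hadoop configuration format):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Looks up one property by name in a job.xml-style configuration dump.
public class JobXmlSketch {
    static String lookup(String xml, String wanted) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList props = doc.getElementsByTagName("property");
        for (int i = 0; i < props.getLength(); i++) {
            Element p = (Element) props.item(i);
            String name = p.getElementsByTagName("name").item(0).getTextContent();
            if (wanted.equals(name)) {
                return p.getElementsByTagName("value").item(0).getTextContent();
            }
        }
        return null; // property not present in this job.xml
    }

    public static void main(String[] args) throws Exception {
        // Made-up fragment in the job.xml format; 10737418240 bytes = 10 GB.
        String jobXml =
            "<configuration>" +
            "<property><name>local.cache.size</name>" +
            "<value>10737418240</value></property>" +
            "</configuration>";
        System.out.println(lookup(jobXml, "local.cache.size")); // prints 10737418240
    }
}
```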
I will contact them again.
Thanks,
Jane
From: Hemanth Yamijala [mailto:yhema...@thoughtworks.com]
Sent: Tuesday, April 16, 2013 9:35 PM
To: user@hadoop.apache.org
Subject: Re: How to configure mapreduce archive size?
You can limit the size by setting local.cache.size in the mapred-site.xml (or
core-site.xml)
>
> Thanks,
>
> Jane
>
> From: Hemanth Yamijala [mailto:yhema...@thoughtworks.com]
> Sent: Wednesday, April 17, 2013 9:11 PM
>
> To: user@hadoop.apache.org
> Subject: Re: How to configure mapreduce archive size?