Thanks Allen. I really wish there weren't such a version as 0.21.0. :)
On Mon, Aug 22, 2011 at 12:08 PM, Allen Wittenauer wrote:
>
> On Aug 19, 2011, at 12:39 AM, steven zhuang wrote:
>
> > I updated my Hadoop cluster from 0.20.2 to the higher version
> > 0.21.0 because of MAPREDUCE-1286, and
On Aug 15, 2011, at 9:00 PM, Chris Song wrote:
> Why should Hadoop be built in Java?
http://www.quora.com/Why-was-Hadoop-written-in-Java
> How would it be if Hadoop were implemented in C or Python?
http://www.quora.com/Would-Hadoop-be-different-if-it-were-coded-in-C-C++-instead-of-Java-How
On Aug 17, 2011, at 12:36 AM, Steven Hafran wrote:
>
>
> After reviewing the Hadoop docs, I've tried setting the following properties
> when starting my streaming job; however, they don't seem to have any impact.
> -jobconf mapred.tasktracker.reduce.tasks.maximum=1
"tasktracker" is the
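For context: mapred.tasktracker.reduce.tasks.maximum is a daemon-level limit that the TaskTracker reads from mapred-site.xml at startup, so it cannot be changed per job with -jobconf. A sketch of a streaming invocation that sets the job-level reduce count instead (paths, jar location, and mapper/reducer commands are placeholders):

```shell
# Per-job reduce count for a streaming job (job-level setting).
# Paths and the streaming jar location are placeholders.
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
  -D mapred.reduce.tasks=1 \
  -input /data/in \
  -output /data/out \
  -mapper /bin/cat \
  -reducer /usr/bin/wc

# By contrast, mapred.tasktracker.reduce.tasks.maximum is read from
# mapred-site.xml when each TaskTracker daemon starts and is not a
# per-job setting.
```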
On Aug 17, 2011, at 10:53 AM, Matt Davies wrote:
> Hello,
>
> I'm playing around with the Capacity Scheduler (coming from the Fair
> Scheduler), and it appears that jobs submitted to a queue by the same user
> are treated as FIFO. So, for example, if I submit job1 and job2 to the
> "low" queu
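For reference, in the 0.20-era Capacity Scheduler, ordering within a single queue is FIFO (optionally adjusted by job priority); capacity is divided between queues, not between one user's jobs. A minimal capacity-scheduler.xml sketch for a "low" queue (the queue name and percentage are illustrative, not from the original message):

```xml
<configuration>
  <property>
    <name>mapred.capacity-scheduler.queue.low.capacity</name>
    <!-- percent of cluster capacity for this queue; illustrative value -->
    <value>20</value>
  </property>
  <property>
    <name>mapred.capacity-scheduler.queue.low.supports-priority</name>
    <!-- lets job priority reorder the otherwise-FIFO queue -->
    <value>true</value>
  </property>
</configuration>
```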
On Aug 19, 2011, at 12:39 AM, steven zhuang wrote:
> I updated my Hadoop cluster from 0.20.2 to the higher version
> 0.21.0 because of MAPREDUCE-1286, and now I have a problem running HBase on
> it.
> I saw that the 0.21.0 version is marked as "unstable, unsupported, does
> not include s
On Aug 21, 2011, at 7:17 PM, Michel Segel wrote:
> Avi,
> First, why a 32-bit OS?
> You have a 64-bit processor with 4 hyper-threaded cores, which looks like 8 CPUs.
With only 1.7 GB of memory, there likely isn't much of a reason to use a
64-bit OS. The machines (as you point out) are already tig
Avi,
First, why a 32-bit OS?
You have a 64-bit processor with 4 hyper-threaded cores, which looks like 8 CPUs.
With only 1.7 GB you're going to be limited on the number of slots you can
configure.
I'd say run Ganglia, but that would take resources away from you. It sounds
like the default parameters a
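Slot counts and child heap sizes for a small-memory node are per-TaskTracker settings in mapred-site.xml. A sketch for a 1.7 GB machine (the values are illustrative assumptions, not a recommendation from the thread):

```xml
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <!-- illustrative: only a few map slots fit on a 1.7 GB node -->
    <value>2</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <!-- per-task child heap; keep (slots x heap) + daemons under RAM -->
    <value>-Xmx200m</value>
  </property>
</configuration>
```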
Hi Avi,
I'm also learning Hadoop now. There's a tool named "nmon" that can track
server usage. You can use it to track the memory, CPU, disk, and network
usage of the servers. It's very easy to use, and there's an nmon-analyzer that
can generate Excel diagrams based on the nmon data.
Hop
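For reference, a typical nmon capture run looks like the following (the interval and count are illustrative); it writes a .nmon file that the analyzer spreadsheet loads:

```shell
# -f: file ("spreadsheet") output mode
# -s 30: record a snapshot every 30 seconds (illustrative)
# -c 120: take 120 snapshots (~1 hour total, illustrative)
nmon -f -s 30 -c 120

# Produces hostname_YYMMDD_HHMM.nmon in the current directory,
# which is the input format nmon-analyzer expects.
```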
On Sun, Aug 21, 2011 at 10:22 AM, Joey Echeverria wrote:
> Not that I know of.
>
> -Joey
>
> On Fri, Aug 19, 2011 at 1:16 PM, modemide wrote:
> > Ha, what a silly mistake.
> >
> > Thank you Joey.
> >
> > Do you also happen to know of an easier way to tell which racks the
> > jobtracker/namenode
Hi
I just upgraded from 0.19 to 0.20. Everything seems fine; however, the web
monitoring tool doesn't work any more:
neither http://mydomain.com:50070/webapps/hdfs/dfshealth.jsp
nor http://mydomain.com:50070/dfshealth.jsp
Both give me a 404.
Same stands for the job tracking tool
Any idea where t
Not that I know of.
-Joey
On Fri, Aug 19, 2011 at 1:16 PM, modemide wrote:
> Ha, what a silly mistake.
>
> Thank you Joey.
>
> Do you also happen to know of an easier way to tell which racks the
> jobtracker/namenode think each node is in?
>
>
>
> On 8/19/11, Joey Echeverria wrote:
>> Did you r
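On the question of where the JobTracker/NameNode think each node lives: one quick check on a 0.20-era cluster is the dfsadmin report, which lists each DataNode and, when a topology script is configured, its rack:

```shell
# Lists every DataNode; with rack awareness configured, each entry
# includes a "Rack:" line (e.g. /default-rack when no topology
# script is set). 0.20-era command form.
hadoop dfsadmin -report
```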
> Hi All,
>
> Recently I was working with CompositeInputFormat for a map-side join and
> then tried MultipleInputs for a reduce-side join. The problem I found was
> that both of these methods need you to use the JobConf class (@deprecated), and
> the implementation hasn't been provided in the 0.2
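For context, the MultipleInputs referred to here is the one in the old org.apache.hadoop.mapred.lib package, which takes a JobConf. A sketch of its use for a reduce-side join (MapperA, MapperB, ReduceSideJoin, and the paths are hypothetical names for illustration):

```java
// Old (org.apache.hadoop.mapred) API sketch; not standalone code.
// MapperA, MapperB, ReduceSideJoin, and the input paths are hypothetical.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.lib.MultipleInputs;

JobConf conf = new JobConf(ReduceSideJoin.class);
// Each input path gets its own InputFormat and Mapper; the reducer
// then sees the tagged records from both sides of the join.
MultipleInputs.addInputPath(conf, new Path("/data/a"),
    TextInputFormat.class, MapperA.class);
MultipleInputs.addInputPath(conf, new Path("/data/b"),
    TextInputFormat.class, MapperB.class);
```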