> There is no guarantee that one task will be scheduled on each node;
> it can be, say, 2 in one node and 1 in another.
>
> Regards
> Bejoy KS
>
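For what it's worth, the per-node bound here is the slot count, not any
evenness rule, which is why two tasks can land together on one node. A
minimal Java sketch (standard Hadoop 1.x property names; the class name is
illustrative) that prints the configured slots per TaskTracker:

import org.apache.hadoop.conf.Configuration;

// Prints the per-TaskTracker slot limits; the scheduler fills free slots
// wherever they are, rather than spreading tasks evenly across nodes.
public class SlotCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration(); // loads *-site.xml from the classpath
    System.out.println("map slots per node: "
        + conf.getInt("mapred.tasktracker.map.tasks.maximum", 2));
    System.out.println("reduce slots per node: "
        + conf.getInt("mapred.tasktracker.reduce.tasks.maximum", 2));
  }
}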
--
Steve Sonnenberg
>>
>> --
>> Harsh J
>>
>
>
>
> --
> Harsh J
>
--
Steve Sonnenberg
> ...in the Web UI after configuring HDFS for two nodes and
> configuring MR to use HDFS?
>
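If the web UI is inconvenient, the same node count can be read
programmatically. A rough sketch against the Hadoop 1.x API (it assumes
fs.default.name already points at an hdfs:// URI, so the cast to
DistributedFileSystem succeeds):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

// Asks the namenode for its datanode list; a healthy 2-node setup
// should report both hosts here, matching the web UI count.
public class NodeCount {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    for (DatanodeInfo dn : dfs.getDataNodeStats()) {
      System.out.println(dn.getHostName());
    }
  }
}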
> On Mon, Jul 23, 2012 at 11:23 PM, Steve Sonnenberg
> wrote:
> > Thanks Harsh,
> >
> > 1) I was using NFS
> > 2) I don't believe that anything under /tmp is distributed ev...
> ...not the TT nodes.
>
> Use hdfs:// FS for fully-distributed operation.
>
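Concretely, that means fs.default.name must name an hdfs:// URI (normally
set in core-site.xml). A small sketch to verify the client really talks to
HDFS rather than a local or NFS path; the host and port below are
placeholders, not values from this thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Confirms the default filesystem is HDFS and that the namenode answers.
public class FsCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.default.name", "hdfs://namenode-host:9000"); // placeholder address
    FileSystem fs = FileSystem.get(conf);
    System.out.println("default FS: " + fs.getUri()); // should print the hdfs:// URI
    for (FileStatus st : fs.listStatus(new Path("/"))) {
      System.out.println(st.getPath());
    }
  }
}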
> On Fri, Jul 20, 2012 at 10:06 PM, Steve Sonnenberg
> wrote:
> > I have a 2-node Fedora system, and in cluster mode I have the following
> > issue that I can't resolve.
> >
> > Hadoop 1.0
Sorry, this is my first posting, and I haven't received a copy of it nor any response.
Could someone please respond if you are seeing this?
Thanks,
Newbie
On Fri, Jul 20, 2012 at 12:36 PM, Steve Sonnenberg wrote:
> I have a 2-node Fedora system, and in cluster mode I have the following
> issue that I can't resolve.
at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2440)
at java.lang.Thread.run(Thread.java:636)
On both systems, all files and directories under
/tmp/hadoop-hadoop are owned by user/group hadoop/hadoop.
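One way to narrow this down is to print the directories the daemons
actually resolve and compare ownership against those paths. This is just a
diagnostic sketch using standard Hadoop 1.x keys:

import org.apache.hadoop.conf.Configuration;

// Shows where Hadoop thinks its local state lives, so the ownership of
// /tmp/hadoop-hadoop can be checked against the effective configuration.
public class LocalDirCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    System.out.println("hadoop.tmp.dir   = " + conf.get("hadoop.tmp.dir"));
    System.out.println("mapred.local.dir = " + conf.get("mapred.local.dir"));
    System.out.println("dfs.data.dir     = " + conf.get("dfs.data.dir"));
  }
}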
Any ideas?
Thanks
--
Steve Sonnenberg