Yes Koert! That is correct!
On Thu, May 3, 2012 at 6:08 PM, Koert Kuipers wrote:
> do I understand correctly that with Kerberos enabled, the mappers and
> reducers will be "run as" the actual user that started them, as opposed to
> the user that runs the tasktracker, which is mapred or hadoop?
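For context, on Hadoop 1.x this "run as the submitting user" behavior is provided by the LinuxTaskController, which launches task JVMs as the actual job submitter instead of the mapred user. A minimal mapred-site.xml sketch, assuming a secure-mode cluster where the setuid task-controller binary and taskcontroller.cfg are already in place:

```xml
<!-- mapred-site.xml: launch task JVMs as the submitting user (Hadoop 1.x, secure mode) -->
<property>
  <name>mapred.task.tracker.task-controller</name>
  <value>org.apache.hadoop.mapred.LinuxTaskController</value>
</property>
```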
Ravi Prakash wrote:
Hi Pedro,
I know it's sub-optimal, but you should be able to put in as many
System.out.println / log messages as you want, and you should be able to see
them in the stdout and syslog files. Which version of Hadoop are you using?
On Thu, Mar 29, 2012 at 10:33 AM, Pedro Costa wrote:
> Hi,
>
> I'm try
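As a minimal, self-contained illustration of the two output channels mentioned above: plain System.out ends up in the task attempt's stdout file, while logger output ends up in its syslog file (via Hadoop's log4j setup). The class and record names here are illustrative, and java.util.logging stands in for log4j only to keep the sketch dependency-free:

```java
import java.util.logging.Logger;

// Sketch of the two logging channels available inside a map/reduce task.
public class TaskLogDemo {
    private static final Logger LOG = Logger.getLogger(TaskLogDemo.class.getName());

    static String process(String record) {
        // Inside a real task, this println lands in the attempt's stdout file.
        System.out.println("saw record: " + record);
        // Logger output lands in the attempt's syslog file (log4j in real Hadoop).
        LOG.info("processed one record");
        return record.toUpperCase();
    }

    public static void main(String[] args) {
        process("hello");
    }
}
```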
Sector-Sphere
On Mon, Jan 30, 2012 at 4:24 PM, Ronald Petty wrote:
> R.V.,
>
> Are you looking for the platforms that do distributed computation, or the
> larger ecosystems like programming APIs, etc.?
>
> Here are some platforms:
>
> C-Squared
> Globus
> Condor
>
> Here are some libraries:
>
>
You probably want to add a capacity-scheduler.xml to define the queues.
On Tue, Jan 10, 2012 at 2:46 PM, Ann Pal wrote:
> Hi
> Are the following the only steps needed to turn on the capacity scheduler?
> [1] Edit conf/yarn-site.xml to include:
> yarn.resourcemanager.scheduler.class -
> org.apache.hadoop.yar
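Beyond pointing yarn.resourcemanager.scheduler.class at the CapacityScheduler in yarn-site.xml, the queues themselves are defined in capacity-scheduler.xml. A minimal sketch with one queue added alongside the default; the queue name "research" and the 70/30 split are illustrative (sibling capacities must sum to 100):

```xml
<!-- capacity-scheduler.xml: define the queues under root -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,research</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>70</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.research.capacity</name>
    <value>30</value>
  </property>
</configuration>
```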
From what I know, the number of containers will depend on the amount of
resources your node has. If it has 8 GB of RAM and each container gets 2 GB,
then there'll be a maximum of 4 containers.
On Tue, Jan 10, 2012 at 5:44 AM, raghavendhra rahul <
raghavendhrara...@gmail.com> wrote:
> Hi,
>
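The arithmetic in the reply above can be sketched directly; the 8 GB node and 2 GB container figures match its example, and memory is only an upper bound (other resources can lower the real count):

```java
// Estimate the per-node container cap from memory alone.
public class ContainerCap {
    static int maxContainers(int nodeMemoryMb, int containerMemoryMb) {
        return nodeMemoryMb / containerMemoryMb; // integer division = floor
    }

    public static void main(String[] args) {
        // 8 GB node, 2 GB per container -> at most 4 containers
        System.out.println(maxContainers(8192, 2048));
    }
}
```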
Hi,
Clearly the jar file containing the class
"org/apache/hadoop/conf/Configuration" is not available on the CLASSPATH.
Did you build Hadoop properly? The way I would usually check this is:
1. I have a shell script findclass
#!/bin/sh
LOOK_FOR=$1
if [ -z "$LOOK_FOR" ]; then
echo -e "Usage: