How does Hadoop work with two network cards?

2011-12-18 Thread geyong mao
I run Nutch on my cluster, but during the generate step (and some other
tasks) so much data moves over the network that other processes which need
the network keep failing.
To avoid this problem I configured a second network card, eth1, for Nutch
(Hadoop), but it only works some of the time; at other times Hadoop still
uses eth0, and that causes other processes on the cluster to fail.
Can anyone help me with this problem?
Is the problem caused by Nutch or by Hadoop?
Would it be better to swap eth0 and eth1?
thank you!
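
For what it's worth, 0.20-era Hadoop exposes dns.interface properties that
tell the DataNode and TaskTracker which interface to derive their advertised
hostname from; a minimal hadoop-site.xml sketch, assuming eth1 is the
interface dedicated to Hadoop traffic (file name and values are illustrative):

  <configuration>
    <!-- DataNodes derive their advertised hostname from this interface -->
    <property>
      <name>dfs.datanode.dns.interface</name>
      <value>eth1</value>
    </property>
    <!-- TaskTrackers likewise report over eth1 -->
    <property>
      <name>mapred.tasktracker.dns.interface</name>
      <value>eth1</value>
    </property>
  </configuration>

Note that these settings only control the hostname each daemon advertises;
traffic actually moves onto eth1 only if that hostname resolves to the eth1
address on every node, so /etc/hosts or DNS has to agree with the setting.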


jsp-compile doesn't support the "webxml" attribute

2011-12-18 Thread Abhishek Pratap Singh
Hi All,

I am compiling Hadoop core 0.20.2 and getting this error:
hadoop-common-0.20.2/build.xml:357: jsp-compile doesn't support the "webxml" attribute
I am using Ant 1.8.1. Any pointers on why this is coming up?
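
For context, the failing target defines jsp-compile as an Ant taskdef on
Jasper's JspC and then passes it a webxml attribute, roughly like the sketch
below (paraphrased, not the verbatim 0.20.2 build file):

  <taskdef classname="org.apache.jasper.JspC" name="jsp-compile"
           classpathref="classpath"/>
  ...
  <jsp-compile uriroot="${src.webapps}/task"
               outputdir="${build.src}"
               package="org.apache.hadoop.mapred"
               webxml="${build.webapps}/task/WEB-INF/web.xml"/>

The message usually means the JspC class Ant actually loaded is a different
Jasper version than the build expects (one whose task has no webxml
attribute), so checking which jasper-compiler jar ends up on Ant's classpath
is a reasonable first step.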

Regards,
Abhishek


Re: TestFairScheduler failing - version 0.20-security-204

2011-12-18 Thread Merto Mertek
I figured out that if I run the tests from the console with "ant
test-fairscheduler" (my modification of the test target in
src/contrib/build.xml), all of them pass. If I understand this correctly,
testing is always done through Ant, and the test classes are never meant to
be run from the Eclipse IDE.
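
For readers wondering what such a target might look like, something along
these lines in src/contrib/build.xml would delegate to the fair scheduler's
own contrib build (a hypothetical sketch, not necessarily the actual
modification):

  <!-- Run only the fair scheduler's tests instead of every contrib project -->
  <target name="test-fairscheduler">
    <subant target="test">
      <fileset dir="." includes="fairscheduler/build.xml"/>
    </subant>
  </target>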

Because I am rather new to all of this, I would like to hear how you develop
a new feature and how you test it. In my situation I would do it as follows:
- develop the new feature (make some code modifications)
- build the scheduler with Ant
- write unit tests
- run the test classes from Ant
- deploy the new scheduler jar to the cluster
- try it on a working cluster (roughly the command sequence sketched below)
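
On a 0.20-era tree that cycle could look something like the commands below
(target and jar names are from memory, so adjust them to your checkout):

  ant compile compile-contrib     # build core plus the contrib projects
  ant test-fairscheduler          # the custom target above; a stock tree uses "ant test-contrib"
  # deploy: put the contrib jar on the JobTracker classpath and set
  # mapred.jobtracker.taskScheduler to org.apache.hadoop.mapred.FairScheduler
  cp build/contrib/fairscheduler/hadoop-*-fairscheduler.jar $HADOOP_HOME/lib/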

Is there any other way to try new functionality locally, or in some other
setup? Any comments and suggestions are welcome.
Thank you.




On 17 December 2011 21:58, Merto Mertek  wrote:

> Hi,
>
> I am having some problems with running the following test file
>
> org.apache.hadoop.mapred.TestFairScheduler
>
> Nearly all of the tests fail, most of them with the error
> java.lang.RuntimeException: COULD NOT START JT. Here is a trace
> .
> The code was checked out from the SVN branch, then I ran "ant build" and
> "ant eclipse". The tests were run inside Eclipse.
>
> I would like to solve these problems before modifying the scheduler. Any
> hints are appreciated. Is it perhaps just a configuration issue?
>
> Thank you
>


Re: Regarding a Multi user environment

2011-12-18 Thread alo alt
Hi,

if I understood correctly, you want users other than root to be able to
stop/start the entire cluster?
That is possible via sudoers (visudo). Create a group, add the users (they
must already exist on the system), and give the group the right to run
bin/start.sh and bin/stop.sh.
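
A minimal sudoers sketch of that setup (the group name and script paths are
illustrative, so point them at the actual scripts on your nodes, e.g.
bin/start-all.sh / bin/stop-all.sh):

  # /etc/sudoers -- always edit through visudo
  %hadoopops ALL=(root) NOPASSWD: /usr/local/hadoop/bin/start-all.sh, /usr/local/hadoop/bin/stop-all.sh

Members of the hadoopops group can then run the scripts with sudo without
needing a root shell.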

hope it helps,
 - Alex

On Sat, Dec 17, 2011 at 9:51 AM, ashutosh pangasa wrote:
> I have set up Hadoop for a multi-user environment. Different users are able
> to submit MapReduce jobs on the cluster.
>
> What I am trying to do is to see whether different users can start or stop
> the cluster as well.
>
> Is this possible in Hadoop? If yes, how can we do it?



-- 
Alexander Lorenz
http://mapredit.blogspot.com

Think of the environment: please don't print this email unless you
really need to.