Re: Jobs failing on submit

2011-08-26 Thread John Armstrong
On Fri, 26 Aug 2011 12:20:47 -0700, Ramya Sunil wrote:
> Can you also post the configuration of the scheduler you are using? You
> might also want to check the jobtracker logs. It would help in further
> debugging.

Where would I find the scheduler configuration?  I haven't changed it, so
I assume I'm using the default.
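
(A quick sketch of how I'd check which scheduler class is configured, assuming
the stock 0.20 property name mapred.jobtracker.taskScheduler and an illustrative
path to mapred-site.xml; if the property is unset, the default FIFO scheduler is
in effect:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class SchedulerCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Illustrative path; use whatever mapred-site.xml the jobtracker actually reads
        conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
        // Falls back to the default FIFO scheduler (JobQueueTaskScheduler) if unset
        System.out.println(conf.get("mapred.jobtracker.taskScheduler",
                "org.apache.hadoop.mapred.JobQueueTaskScheduler"));
    }
}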

This is what I see in the jobtracker logs when I submit the job:

2011-08-26 16:11:19,164 INFO org.apache.hadoop.mapred.JobTracker: Job
job_201108261610_0001 added successfully for user 'hdfs' to queue 'default'
2011-08-26 16:11:19,164 INFO org.apache.hadoop.mapred.JobTracker:
Initializing job_201108261610_0001
2011-08-26 16:11:19,164 INFO org.apache.hadoop.mapred.JobInProgress:
Initializing job_201108261610_0001
2011-08-26 16:11:19,165 INFO org.apache.hadoop.mapred.AuditLogger:
USER=hdfs  IP=127.0.0.1  OPERATION=SUBMIT_JOB  TARGET=job_201108261610_0001  RESULT=SUCCESS

Nothing shows up in the tasktracker logs when I submit the job.

> State "4" indicates that the job is still in the PREP state and not a job
> failure. We have seen this kind of error when either the cluster does not
> have tasktrackers to run the tasks or when the queue to which the job is
> submitted does not have sufficient capacity.

So it's possible something has gone wrong with the job queue?  Is it
possible something's stuck in there?  How would I find it/clean it out?

> If you do not see this log message, that implies the cluster does not have
> enough resources, due to which the JT is unable to schedule the tasks.

I do see this line in the TaskTracker logs; it might have something to do
with the problem, but I have no idea how to fix it.

2011-08-26 16:14:41,966 WARN org.apache.hadoop.mapred.TaskTracker:
TaskTracker's totalMemoryAllottedForTasks is -1. TaskMemoryManager is
disabled.

Thanks for the pointers.


Re: Jobs failing on submit

2011-08-26 Thread Ramya Sunil
On Fri, Aug 26, 2011 at 11:50 AM, John Armstrong wrote:

> On Fri, 26 Aug 2011 11:46:42 -0700, Ramya Sunil wrote:
> > How many tasktrackers do you have? Can you check if your tasktrackers are
> > running and the total available map and reduce capacity in your cluster?
>
> In pseudo-distributed there's one tasktracker, which is running, and the
> total map and reduce capacity is reported by the jobtracker at 6 slots
> each.
>
> > Can you also post the configuration of the scheduler you are using? You
> > might also want to check the jobtracker logs. It would help in further
> > debugging.
>
> Any ideas what I should be looking for that could cause a job to list as
> failed before launching any task JVMs and without reporting back to the
> launcher that it's failed?  Am I correct in interpreting "state 4" as
> "failure"?
>

State "4" indicates that the job is still in the PREP state and not a job
failure. We have seen this kind of error when either the cluster does not
have tasktrackers to run the tasks or when the queue to which the job is
submitted does not have sufficient capacity.
In the logs, if you are able to see "Adding task (MAP/REDUCE)
...for tracker 'tracker_'", that means the task was
scheduled to be run on the TT. One can then look at the TT logs to check why
the tasks did not begin execution.
If you do not see this log message, that implies the cluster does not have
enough resources, due to which the JT is unable to schedule the tasks.
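
If it helps, here is a minimal sketch (against the old org.apache.hadoop.mapred
API in 0.20) of how you could confirm the slot capacity and the job state
programmatically; JobStatus.PREP is the constant behind state "4". The class
name is just for illustration.

import org.apache.hadoop.mapred.ClusterStatus;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobStatus;

public class ClusterProbe {
    public static void main(String[] args) throws Exception {
        // Picks up mapred-site.xml from the classpath
        JobClient client = new JobClient(new JobConf());

        // How many tasktrackers have checked in, and how many slots they offer
        ClusterStatus status = client.getClusterStatus();
        System.out.println("tasktrackers = " + status.getTaskTrackers());
        System.out.println("map slots    = " + status.getMaxMapTasks());
        System.out.println("reduce slots = " + status.getMaxReduceTasks());

        // Run state of every job the JT knows about; 4 == JobStatus.PREP
        for (JobStatus job : client.getAllJobs()) {
            System.out.println(job.getJobID() + " state=" + job.getRunState()
                    + " (PREP=" + JobStatus.PREP + ")");
        }
    }
}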

Thanks
Ramya


Re: Hadoop in process?

2011-08-26 Thread Sonal Goyal
Hi Frank,

You can use the ClusterMapReduceTestCase class from org.apache.hadoop.mapred.

Here is an example of adapting it to JUnit4 and running a test DFS and MR
cluster:

https://github.com/sonalgoyal/hiho/blob/master/test/co/nubetech/hiho/common/HihoTestCase.java

And here is a blog post that discusses this in detail:
http://nubetech.co/testing-hadoop-map-reduce-jobs
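
If you only need HDFS to start with (no map/reduce yet), a rough sketch of
spinning up an in-process namenode and datanode with MiniDFSCluster from the
Hadoop test jar might look like the following; the class and path names are
just illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class InProcessHdfsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Starts a namenode plus one datanode inside this JVM, formatting a temporary dir
        MiniDFSCluster cluster = new MiniDFSCluster(conf, 1, true, null);
        try {
            FileSystem fs = cluster.getFileSystem();
            Path p = new Path("/tmp/hello.txt");   // illustrative path
            fs.create(p).close();
            System.out.println("exists: " + fs.exists(p));
        } finally {
            cluster.shutdown();
        }
    }
}

The same FileSystem handle can then be passed to whatever code you want to test.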

Best Regards,
Sonal
Crux: Reporting for HBase
Nube Technologies

On Sat, Aug 27, 2011 at 12:00 AM, Frank Astier wrote:

> Hi -
>
> Is there a way I can start HDFS (the namenode) from a Java main and run
> unit tests against that? I need to integrate my Java/HDFS program into unit
> tests, and the unit test machine might not have Hadoop installed. I’m
> currently running the unit tests by hand with hadoop jar ... My unit tests
> create a bunch of (small) files in HDFS and manipulate them. I use the fs
> API for that. I don’t have map/reduce jobs (yet!).
>
> Thanks!
>
> Frank
>


RE: Hadoop in process?

2011-08-26 Thread GOEKE, MATTHEW (AG/1000)
It depends on what scope you want your unit tests to operate at. There is a
class you might want to look into called MiniMRCluster if you are dead set on
having tests that deep, but you can still cover quite a bit with MRUnit and
JUnit4/Mockito.
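
For the MRUnit route, something along these lines is usually enough; this is a
sketch against the old-API MapDriver, and the mapper and sample record are made
up just to keep it self-contained.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mrunit.MapDriver;
import org.junit.Test;

public class LineCountMapperTest {

    // Trivial old-API mapper, defined inline so the sketch compiles on its own
    static class LineCountMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, LongWritable> {
        public void map(LongWritable offset, Text line,
                        OutputCollector<Text, LongWritable> out,
                        Reporter reporter) throws IOException {
            out.collect(line, new LongWritable(1));
        }
    }

    @Test
    public void emitsOneCountPerLine() throws IOException {
        new MapDriver<LongWritable, Text, Text, LongWritable>(new LineCountMapper())
                .withInput(new LongWritable(0), new Text("hello world"))
                .withOutput(new Text("hello world"), new LongWritable(1))
                .runTest();
    }
}

No cluster is started at all, which keeps these tests fast; MiniMRCluster is
only worth the startup cost when you need to exercise the full job submission
path.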

Matt

-Original Message-
From: Frank Astier [mailto:fast...@yahoo-inc.com] 
Sent: Friday, August 26, 2011 1:30 PM
To: common-user@hadoop.apache.org
Subject: Hadoop in process?

Hi -

Is there a way I can start HDFS (the namenode) from a Java main and run unit 
tests against that? I need to integrate my Java/HDFS program into unit tests, 
and the unit test machine might not have Hadoop installed. I'm currently 
running the unit tests by hand with hadoop jar ... My unit tests create a bunch 
of (small) files in HDFS and manipulate them. I use the fs API for that. I 
don't have map/reduce jobs (yet!).

Thanks!

Frank


Re: Jobs failing on submit

2011-08-26 Thread John Armstrong
On Fri, 26 Aug 2011 11:46:42 -0700, Ramya Sunil wrote:
> How many tasktrackers do you have? Can you check if your tasktrackers are
> running and the total available map and reduce capacity in your cluster?

In pseudo-distributed there's one tasktracker, which is running, and the
total map and reduce capacity is reported by the jobtracker at 6 slots
each.

> Can you also post the configuration of the scheduler you are using? You
> might also want to check the jobtracker logs. It would help in further
> debugging.

Any ideas what I should be looking for that could cause a job to list as
failed before launching any task JVMs and without reporting back to the
launcher that it's failed?  Am I correct in interpreting "state 4" as
"failure"?


Re: Jobs failing on submit

2011-08-26 Thread Ramya Sunil
Hi John,

How many tasktrackers do you have? Can you check if your tasktrackers are
running and the total available map and reduce capacity in your cluster?
Can you also post the configuration of the scheduler you are using? You
might also want to check the jobtracker logs. It would help in further
debugging.

Thanks
Ramya

On Fri, Aug 26, 2011 at 7:50 AM, John Armstrong wrote:

> One of my colleagues has noticed this problem for a while, and now it's
> biting me.  Jobs seem to be failing before ever really starting.  It seems
> to be limited (so far) to running in pseudo-distributed mode, since that's
> where he saw the problem and where I'm now seeing it; it hasn't come up on
> our cluster (yet).
>
> So here's what happens:
>
> $ java -classpath $MY_CLASSPATH MyLauncherClass -conf my-config.xml -D
> extra.properties=extravalues
> ...
> launcher output
> ...
> 11/08/26 10:35:54 INFO input.FileInputFormat: Total input paths to process
> : 2
> 11/08/26 10:35:54 INFO mapred.JobClient: Running job:
> job_201108261034_0001
> 11/08/26 10:35:55 INFO mapred.JobClient:  map 0% reduce 0%
>
> and it just sits there.  If I look at the jobtracker's web view the number
> of submissions increments, but nothing shows up as a running, completed,
> failed, or retired job.  If I use the command line probe I find
>
> $ hadoop job -list
> 1 jobs currently running
> JobId                  State  StartTime      UserName  Priority  SchedulingInfo
> job_201108261034_0001  4      1314369354247  hdfs      NORMAL    NA
>
> If I try to kill this job, nothing happens; it remains in the list with
> state 4 (failed?).  I've tried telling the mapper JVM to suspend so I can
> find it in netstat and attach a debugger from IDEA, but it seems that the
> job never gets to the point of even spinning up a JVM to run the mapper.
>
> Any ideas what might be going wrong?  Thanks.
>


Hadoop in process?

2011-08-26 Thread Frank Astier
Hi -

Is there a way I can start HDFS (the namenode) from a Java main and run unit 
tests against that? I need to integrate my Java/HDFS program into unit tests, 
and the unit test machine might not have Hadoop installed. I’m currently 
running the unit tests by hand with hadoop jar ... My unit tests create a bunch 
of (small) files in HDFS and manipulate them. I use the fs API for that. I 
don’t have map/reduce jobs (yet!).

Thanks!

Frank


Re: Using hadoop 0.20.203.0 single node setup, the root directory is writable for everybody despite my having set its mode to 755 and then even 000 (verified using -ls)

2011-08-26 Thread Stanislaw Adaszewski
Oops, never mind, I had turned off permissions myself :/ Guess that's enough
for Friday evening.
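
For anyone who hits the same thing, a quick sketch of how to double-check
(assuming the 0.20 property name dfs.permissions and an illustrative conf path):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class PermissionsCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Illustrative path; point this at the hdfs-site.xml the namenode actually uses
        conf.addResource(new Path("/usr/local/hadoop/conf/hdfs-site.xml"));
        // dfs.permissions defaults to true; if it is false, any user can write anywhere
        System.out.println("dfs.permissions = " + conf.getBoolean("dfs.permissions", true));
    }
}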

Best regards,

--
S.

On 26 August 2011 18:13, Stanislaw Adaszewski wrote:

> I mean hadoop filesystem of course.
>
>
> On 26 August 2011 18:12, Stanislaw Adaszewski wrote:
>
>> Using hadoop 0.20.203.0 single node setup, the root directory is writable for
>> everybody despite my having set its mode to 755 and then even 000 (verified
>> using -ls)
>>
>> What could be the problem?
>>
>> Best regards,
>>
>> --
>> S.
>>
>
>


Re: Using hadoop 0.20.203.0 single node setup, the root directory is writable for everybody despite my having set its mode to 755 and then even 000 (verified using -ls)

2011-08-26 Thread Stanislaw Adaszewski
I mean hadoop filesystem of course.

On 26 August 2011 18:12, Stanislaw Adaszewski wrote:

> Using hadoop 0.20.203.0 single node setup, the root directory is writable for
> everybody despite my having set its mode to 755 and then even 000 (verified
> using -ls)
>
> What could be the problem?
>
> Best regards,
>
> --
> S.
>


Using hadoop 0.20.203.0 single node setup, the root directory is writable for everybody despite my having set its mode to 755 and then even 000 (verified using -ls)

2011-08-26 Thread Stanislaw Adaszewski
Using hadoop 0.20.203.0 single node setup, the root directory is writable for
everybody despite my having set its mode to 755 and then even 000 (verified
using -ls)

What could be the problem?

Best regards,

--
S.


Exception in thread "main" java.io.IOException: No FileSystem for scheme: file

2011-08-26 Thread Dieter Plaetinck
Hi,
I know this question has been asked before, but I could not find the right
solution.  Maybe because I use hadoop 0.20.2 and some posts assumed older
versions.


My code (relevant chunk):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);

the last line gives:
Exception in thread "main" java.io.IOException: No FileSystem for
scheme: file


I launched this like so:
java -cp /usr/local/hadoop/src/core/:/usr/local/hadoop/conf/ -jar myjar.jar
AFAICT, this should make sure all the configuration can be found and it should 
be able to connect to the filesystem:

dplaetin@n-0:~$ ls -alh /usr/local/hadoop/src/core/ /usr/local/hadoop/conf/
/usr/local/hadoop/conf/:
total 64K
drwxr-xr-x  2 dplaetin Search 4.0K Aug 26 17:21 .
drwxr-xr-x 12 root root   4.0K Feb 19  2010 ..
-rw-rw-r--  1 root root   3.9K Feb 19  2010 capacity-scheduler.xml
-rw-rw-r--  1 root root    535 Feb 19  2010 configuration.xsl
-rw-r--r--  1 dplaetin Search  459 Apr 29 15:06 core-site.xml
-rw-r--r--  1 dplaetin Search 2.3K Apr 11 14:23 hadoop-env.sh
-rw-rw-r--  1 root root   1.3K Feb 19  2010 hadoop-metrics.properties
-rw-rw-r--  1 root root   4.1K Feb 19  2010 hadoop-policy.xml
-rw-r--r--  1 dplaetin Search  490 Apr 11 10:18 hdfs-site.xml
-rw-r--r--  1 dplaetin Search 2.8K Apr 11 14:23 log4j.properties
-rw-r--r--  1 dplaetin Search 1.1K Aug  3 09:49 mapred-site.xml
-rw-rw-r--  1 root root 10 Feb 19  2010 masters
-rw-r--r--  1 dplaetin Search   95 Apr  4 17:17 slaves
-rw-rw-r--  1 root root   1.3K Feb 19  2010 ssl-client.xml.example
-rw-rw-r--  1 root root   1.2K Feb 19  2010 ssl-server.xml.example

/usr/local/hadoop/src/core/:
total 36K
drwxr-xr-x  3 root root 4.0K Aug 24 10:40 .
drwxr-xr-x 15 root root 4.0K Aug 24 10:40 ..
-rw-rw-r--  1 root root  14K Feb 19  2010 core-default.xml
drwxr-xr-x  3 root root 4.0K Feb 19  2010 org
-rw-rw-r--  1 root root 7.9K Feb 19  2010 overview.html

Specifically, you can see I have a core-default.xml and a core-site.xml, which 
should be all that's needed,
according to the org.apache.hadoop.conf.Configuration documentation
(http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/conf/Configuration.html)
I read somewhere there should be a hadoop-default.xml; I thought this was 
deprecated but to be sure I
created the empty file in /usr/local/hadoop/src/core/ , but the error remained 
the same.
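
For completeness, this is a self-contained version of the snippet I am running
(the class name is just for illustration); as I understand it, Configuration
loads core-default.xml and core-site.xml as classpath resources, and the
exception is thrown when fs.file.impl cannot be resolved from them:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsProbe {
    public static void main(String[] args) throws Exception {
        // Loads core-default.xml and core-site.xml from the classpath
        Configuration conf = new Configuration();
        // Throws "No FileSystem for scheme: file" when fs.file.impl is not defined
        FileSystem fs = FileSystem.get(conf);
        System.out.println("default filesystem: " + fs.getUri());
    }
}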

The cluster works fine, I've done tens of jobs on it, but as you can see, 
something fails when I try to interface to it directly.

Thanks in advance for any help,
Dieter




Jobs failing on submit

2011-08-26 Thread John Armstrong
One of my colleagues has noticed this problem for a while, and now it's
biting me.  Jobs seem to be failing before ever really starting.  It seems
to be limited (so far) to running in pseudo-distributed mode, since that's
where he saw the problem and where I'm now seeing it; it hasn't come up on
our cluster (yet).

So here's what happens:

$ java -classpath $MY_CLASSPATH MyLauncherClass -conf my-config.xml -D
extra.properties=extravalues
...
launcher output
...
11/08/26 10:35:54 INFO input.FileInputFormat: Total input paths to process
: 2
11/08/26 10:35:54 INFO mapred.JobClient: Running job:
job_201108261034_0001
11/08/26 10:35:55 INFO mapred.JobClient:  map 0% reduce 0%

and it just sits there.  If I look at the jobtracker's web view the number
of submissions increments, but nothing shows up as a running, completed,
failed, or retired job.  If I use the command line probe I find

$ hadoop job -list
1 jobs currently running
JobId                  State  StartTime      UserName  Priority  SchedulingInfo
job_201108261034_0001  4      1314369354247  hdfs      NORMAL    NA

If I try to kill this job, nothing happens; it remains in the list with
state 4 (failed?).  I've tried telling the mapper JVM to suspend so I can
find it in netstat and attach a debugger from IDEA, but it seems that the
job never gets to the point of even spinning up a JVM to run the mapper.

Any ideas what might be going wrong?  Thanks.


Error while trying to start hadoop on ubuntu lucene first time.

2011-08-26 Thread sean wagner
Can anyone offer me some insight? It may have been due to me trying to run the
start-all.sh script instead of starting the services. Not sure.

Thanks
Sean



/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu-mogile-1/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2-cdh3u1
STARTUP_MSG:   build = 
file:///tmp/nightly_2011-07-18_07-57-52_3/hadoop-0.20-0.20.2+923.97-1~lucid -r 
bdafb1dbffd0d5f2fbc6ee022e1c8df6500fd638; compiled by 'root' on Mon Jul 18 
09:40:01 PDT 2011
************************************************************/
2011-08-25 17:16:48,653 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: 
Initializing JVM Metrics with processName=NameNode, sessionId=null
2011-08-25 17:16:48,655 INFO 
org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing 
NameNodeMeterics using context 
object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
2011-08-25 17:16:48,664 INFO org.apache.hadoop.hdfs.util.GSet: VM type   = 
64-bit
2011-08-25 17:16:48,664 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 
17.77875 MB
2011-08-25 17:16:48,665 INFO org.apache.hadoop.hdfs.util.GSet: capacity  = 
2^21 = 2097152 entries
2011-08-25 17:16:48,665 INFO org.apache.hadoop.hdfs.util.GSet: 
recommended=2097152, actual=2097152
2011-08-25 17:16:48,676 ERROR 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
initialization failed.
java.io.IOException: Expecting a line not the end of stream
    at org.apache.hadoop.fs.DF.parseExecResult(DF.java:117)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:237)
    at org.apache.hadoop.util.Shell.run(Shell.java:182)
    at org.apache.hadoop.fs.DF.getFilesystem(DF.java:63)
    at 
org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirsToCheck(NameNodeResourceChecker.java:87)
    at 
org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.&lt;init&gt;(NameNodeResourceChecker.java:71)
    at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:348)
    at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.&lt;init&gt;(FSNamesystem.java:327)
    at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.&lt;init&gt;(NameNode.java:465)
    at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1224)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1233)
2011-08-25 17:16:48,678 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: 
java.lang.NullPointerException
    at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.close(FSNamesystem.java:560)
    at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.&lt;init&gt;(FSNamesystem.java:330)
    at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.&lt;init&gt;(NameNode.java:465)
    at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1224)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1233)

2011-08-25 17:16:48,678 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu-mogile-1/127.0.1.1
************************************************************/



20$ sudo ls -lR
.:
total 4
drwxrwxrwx 4 root root 4096 2011-08-25 17:15 cache

./cache:
total 8
drwxr-xr-x 3 hdfs   hdfs   4096 2011-08-25 17:15 hdfs
drwxr-xr-x 3 mapred mapred 4096 2011-08-25 17:15 mapred

./cache/hdfs:
total 4
drwxr-xr-x 3 hdfs hdfs 4096 2011-08-25 17:15 dfs

./cache/hdfs/dfs:
total 4
drwx------ 2 hdfs hdfs 4096 2011-08-25 17:15 data

./cache/hdfs/dfs/data:
total 0

./cache/mapred:
total 4
drwxr-xr-x 3 mapred mapred 4096 2011-08-25 17:15 mapred

./cache/mapred/mapred:
total 4
drwxr-xr-x 6 mapred mapred 4096 2011-08-25 17:16 local

./cache/mapred/mapred/local:
total 16
drwxr-xr-x 2 mapred mapred 4096 2011-08-25 17:16 taskTracker
drwxr-xr-x 2 mapred mapred 4096 2011-08-25 17:16 toBeDeleted
drwxr-xr-x 2 mapred mapred 4096 2011-08-25 17:16 tt_log_tmp
drwx------ 2 mapred mapred 4096 2011-08-25 17:16 ttprivate

./cache/mapred/mapred/local/taskTracker:
total 0

./cache/mapred/mapred/local/toBeDeleted:
total 0

./cache/mapred/mapred/local/tt_log_tmp:
total 0

./cache/mapred/mapred/local/ttprivate:
total 0
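
From the stack trace, my read is that NameNodeResourceChecker shells out to df
(via org.apache.hadoop.fs.DF) for each directory it monitors, and the parse
fails when df produces no output. A hypothetical standalone probe of that same
check, in case it helps reproduce (pass the directory your dfs.name.dir points
at):

import java.io.File;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.DF;

public class DfProbe {
    public static void main(String[] args) throws Exception {
        // Runs `df` on the given directory and parses its output, like the namenode does
        DF df = new DF(new File(args[0]), new Configuration());
        System.out.println("filesystem = " + df.getFilesystem());
        System.out.println("available  = " + df.getAvailable() + " bytes");
    }
}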

Re: About avatar patches?

2011-08-26 Thread Linden Hillenbrand
Hi Shanmuganathan,

I am assuming the Facebook team can provide further context here, but from
the github repo that is in the JIRA it looks like "Release 0.20.3 + FB
Changes (Unreleased)" is the version that this was applied to.

You can find the committed changes here:
https://github.com/facebook/hadoop-20-warehouse/tree/master/src/contrib/highavailability

Best,
Linden

On Fri, Aug 26, 2011 at 5:16 AM, shanmuganathan.r <
shanmuganatha...@zohocorp.com> wrote:

> Hi All,
>
> The following patches in the
> https://issues.apache.org/jira/browse/HDFS-976 link belong to which
> version of Hadoop?
>
> 0001-0.20.3_rc2-AvatarNode.patch
>
>  AvatarNode.20.patch
>
> Thanks in Advance .
>
> Regards,
>
> Shanmuganathan
>
>


-- 
Linden Hillenbrand
Customer Operations Engineer

Phone:  650.644.3900 x4946
Email:   lin...@cloudera.com
Twitter: @lhillenbrand
Data:    http://www.cloudera.com


Re: MultipleInputs in hadoop 0.20.2

2011-08-26 Thread praveenesh kumar
>>FWIW the trunk/future-branches have new API MultipleInputs you can
>>pull and include in your project

   Can anyone please tell me how I can do this? How can I pull the
MultipleInputs class from a higher Hadoop version and use it with a lower
Hadoop version?
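
In the meantime, am I right that the old-API MultipleInputs that already ships
in 0.20.2 (org.apache.hadoop.mapred.lib.MultipleInputs) would be used roughly
like this? The mapper classes and paths below are made up for illustration.

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.lib.IdentityReducer;
import org.apache.hadoop.mapred.lib.MultipleInputs;

public class MultiInputDriver {

    // Mapper for plain text input (byte offset, line)
    static class LineMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, LongWritable> {
        public void map(LongWritable offset, Text line,
                        OutputCollector<Text, LongWritable> out, Reporter r) throws IOException {
            out.collect(line, new LongWritable(1));
        }
    }

    // Mapper for tab-separated key/value text input
    static class KvMapper extends MapReduceBase
            implements Mapper<Text, Text, Text, LongWritable> {
        public void map(Text key, Text value,
                        OutputCollector<Text, LongWritable> out, Reporter r) throws IOException {
            out.collect(key, new LongWritable(1));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(MultiInputDriver.class);
        conf.setJobName("multiple-inputs-sketch");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(LongWritable.class);

        // Each input path gets its own InputFormat and Mapper; paths are illustrative
        MultipleInputs.addInputPath(conf, new Path("/data/lines"),
                TextInputFormat.class, LineMapper.class);
        MultipleInputs.addInputPath(conf, new Path("/data/kv"),
                KeyValueTextInputFormat.class, KvMapper.class);

        conf.setReducerClass(IdentityReducer.class);
        FileOutputFormat.setOutputPath(conf, new Path("/data/out"));
        JobClient.runJob(conf);
    }
}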

Thanks

On Wed, Aug 24, 2011 at 5:50 PM, Harsh J  wrote:

> 0.20.x supports the older API, and it has been 're-deemed' as the
> stable one. You shouldn't have any hesitation in using it, as even 0.23
> will carry it (although there it's properly deprecated). There is quite
> some confusion around this, but I guess you still won't have some of the
> old API features in the new one.
>
> FWIW the trunk/future-branches have new API MultipleInputs you can
> pull and include in your project. Also, alternative distributions that
> do stable backports may carry MultipleInputs in the new API (I use
> CDH3 and it does have mapreduce.lib.input.MultipleInputs backported in
> it).
>
> On Wed, Aug 24, 2011 at 2:40 PM, praveenesh kumar 
> wrote:
> > Hello guys,
> >
> > I am looking to use the MultipleInputs.addInputPath() method in hadoop
> > 0.20.2. But when I look at its signature in the API, it is like this:
> >
> >   public static void addInputPath(JobConf conf,
> >                                   Path path,
> >                                   Class<? extends InputFormat> inputFormatClass)
> >
> >   public static void addInputPath(JobConf conf,
> >                                   Path path,
> >                                   Class<? extends InputFormat> inputFormatClass,
> >                                   Class<? extends Mapper> mapperClass)
> >
> > But as far as I know, in hadoop 0.20.2 the JobConf object is deprecated.
> > How can I use MultipleInputs.addInputPath() in hadoop? Is there any other
> > way, or any new class introduced instead of this one?
> >
> > Thanks,
> > Praveenesh
> >
>
>
>
> --
> Harsh J
>


About avatar patches?

2011-08-26 Thread shanmuganathan.r
Hi All,

The following patches in the https://issues.apache.org/jira/browse/HDFS-976
link belong to which version of Hadoop?

0001-0.20.3_rc2-AvatarNode.patch

 AvatarNode.20.patch

Thanks in Advance .

Regards,

Shanmuganathan



Doubt in Avatarnode?

2011-08-26 Thread shanmuganathan.r
Hi All,

  I have a doubt about the AvatarNode setup.

I configured the AvatarNode using the patch from
https://issues.apache.org/jira/browse/HDFS-976

Do I need to configure an NFS filer to share the FSImage file between the
active and standby AvatarNodes?

What other configuration is needed for this setup?

Thanks in advance

Regards,

Shanmuganathan




