Map-Reduce vs. Hadoop Ecosystem

2012-11-07 Thread yogesh.kumar13
Hello Hadoop Champs, please give some suggestions. Since the Hadoop Ecosystem tools (Hive, Pig...) internally run Map-Reduce to do their processing, my questions are: 1) Where do Map-Reduce programs (written in Java, Python, etc.) outperform the Hadoop Ecosystem? 2) Limitations of the Hadoop Ecosystem compared with writing Map-Re
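As a rough, hedged illustration of the trade-off being asked about (the jar, class, and table names below are hypothetical, not from the thread): a hand-written job is submitted as a packaged jar, while Hive expresses the same aggregation declaratively and compiles it to Map-Reduce behind the scenes.

    # hand-written Map-Reduce: you write, package, and tune the job yourself
    hadoop jar my-wordcount.jar com.example.WordCount /input /output

    # Hive: the same kind of aggregation as a query, compiled to Map-Reduce jobs
    hive -e "SELECT word, COUNT(*) AS cnt FROM words GROUP BY word"

Hand-written jobs give finer control over keys, partitioning, and custom input/output formats; Hive and Pig trade some of that control for far less code.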

RE: ERROR:: TaskTracker is not STARTING..

2012-10-31 Thread yogesh.kumar13
Thanks Robert, I got your reference. It's running now :-) Regards Yogesh Kumar From: Robert Molina [rmol...@hortonworks.com] Sent: Wednesday, October 31, 2012 6:50 AM To: user@hadoop.apache.org Subject: Re: ERROR:: TaskTracker is not STARTING.. Hi Yogesh, For the

RE: How to do HADOOP RECOVERY ???

2012-10-29 Thread yogesh.kumar13
Hi Uma, You are correct: when I start the cluster it goes into safe mode, and even if I wait it doesn't come out, so I use the -safemode leave option. Safe mode is ON. The ratio of reported blocks 0.0037 has not reached the threshold 0.9990. Safe mode will be turned off automatically. 379 files and director
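For context, a short sketch of the commands involved on a 0.20-era cluster (nothing here is specific to this thread beyond what the poster already ran):

    hadoop dfsadmin -safemode get     # report whether safe mode is ON
    hadoop dfsadmin -safemode leave   # force the NameNode out of safe mode
    hadoop fsck /                     # then see which files reference missing blocks

The NameNode stays in safe mode on its own here because only a 0.0037 fraction of blocks has been reported, far below the 0.9990 threshold, so forcing it out does not bring the missing blocks back.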

RE: How to do HADOOP RECOVERY ???

2012-10-29 Thread yogesh.kumar13
Thanks Uma, I am using the hadoop-0.20.2 version. The UI shows: Cluster Summary 379 files and directories, 270 blocks = 649 total. Heap Size is 81.06 MB / 991.69 MB (8%). WARNING: There are about 270 missing blocks. Please check the log or run fsck. Configured Capacity: 465.44 GB DFS Used
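To dig into which files the roughly 270 missing blocks belong to, a hedged sketch of the usual inspection commands (paths illustrative):

    hadoop fsck / -files -blocks -locations   # per-file block status and replica locations
    hadoop dfsadmin -report                   # per-DataNode summary matching the web UI numbers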

How to do HADOOP RECOVERY ???

2012-10-29 Thread yogesh.kumar13
Hi All, I ran this command: hadoop fsck -Ddfs.http.address=localhost:50070 / and found that some blocks are missing and corrupted. The results come out like: /user/hive/warehouse/tt_report_htcount/00_0: MISSING 2 blocks of total size 71826120 B.. /user/hive/warehouse/tt_report_perhour_hit/00_
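If the DataNodes that held those blocks are genuinely gone, a common (destructive) follow-up is to let fsck quarantine or drop the affected files; this is a sketch of the standard options, not advice specific to this cluster, and it discards the files' data:

    hadoop fsck / -move     # move files with missing blocks to /lost+found
    hadoop fsck / -delete   # or delete them outright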

ERROR:: TaskTracker is not STARTING..

2012-10-26 Thread yogesh.kumar13
Hi All, I am trying to run a Hadoop cluster but the TaskTracker is not running. I have a cluster of two machines: 1st machine NameNode+DataNode, 2nd machine DataNode. Here is the TaskTracker's log file: 2012-10-26 15:45:35,405 INFO org.apache.hadoop.mapred.TaskTracker: STARTUP_MSG: /*
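A minimal first-pass troubleshooting sketch, assuming the default log directory (the exact log file name depends on the user and hostname):

    jps                                               # is a TaskTracker process listed at all?
    tail -n 100 $HADOOP_HOME/logs/*tasktracker*.log   # look for the first ERROR/FATAL after STARTUP_MSG

On 0.20-era clusters it is also worth confirming that mapred.job.tracker in mapred-site.xml points at the JobTracker's host:port on both machines.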

RE: ERROR: ssh-copy-id: command not found IN HADOOP DISTRIBUTED MODE

2012-10-25 Thread yogesh.kumar13
Hi Brahma, I am on Mac OS X; it doesn't have the copy command, i.e. ssh-copy-id -i. I copied it as mediaadmin$ cat ~/.ssh/id_rsa.pub | ssh pluto@10.203.33.80 'cat >> ~/.ssh/authorized_keys' Password: and then did ssh 10.203.33.80 and it asked for a password. Master:~ mediaadmin$ ssh 10.203.33.80 Password:
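One detail worth adding to the manual copy above, sketched with the same user and host from the thread: sshd ignores authorized_keys unless the remote ~/.ssh directory and the file itself have restrictive permissions, which is a frequent reason the password prompt keeps appearing.

    cat ~/.ssh/id_rsa.pub | ssh pluto@10.203.33.80 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
    ssh pluto@10.203.33.80 'chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'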

ERROR:: Hadoop Installation in distributed mode.

2012-10-25 Thread yogesh.kumar13
Hi All, I am trying to install Hadoop in distributed mode over two Mac OS X 10.6.8 machines. I have configured 1) one as a Master (plays the role of both NameNode and DataNode) and 2) the second as a Slave (only DataNode). I have given the same name to both machines and they have Admin access: pluto (for bot
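As a sketch of the master/slave wiring on a 0.20-era install (file locations assume the usual conf directory; hostnames follow the thread's naming):

    # on the Master, list which hosts run which daemons
    echo Master >  $HADOOP_HOME/conf/masters   # host of the secondary NameNode
    echo Master >  $HADOOP_HOME/conf/slaves    # Master also runs a DataNode here
    echo Slave  >> $HADOOP_HOME/conf/slaves
    # both machines also need consistent /etc/hosts entries for Master and Slave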

RE: ERROR: ssh-copy-id: command not found IN HADOOP DISTRIBUTED MODE

2012-10-25 Thread yogesh.kumar13
Thanks All, the copy has been done, but here comes another horrible issue. When I log in to Master with ssh Master it asks for a password: Master:~ mediaadmin$ ssh Master Password: abc Last login: Thu Oct 25 17:13:30 2012 Master:~ mediaadmin$ whereas for Slave it doesn't ask. Master:~ mediaadmin$ ssh pl
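One plausible explanation, sketched below: the Master's public key was appended to the Slave's authorized_keys but never to the Master's own, so ssh Master (a loopback login) still falls back to password authentication.

    # on the Master itself
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys
    ssh Master    # should no longer prompt for a password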

ERROR: ssh-copy-id: command not found IN HADOOP DISTRIBUTED MODE

2012-10-25 Thread yogesh.kumar13
Hi All, I am trying to copy the public key with this command: Master:~ mediaadmin$ ssh-copy -id -i $HOME/.ssh/id_rsa.pub pluto@Slave I have two machines; the Master's name is pluto and the Slave has the same name (Admin). And I got this error; where am I going wrong? ssh-copy-id: command not found Please s
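For reference, the helper is spelled ssh-copy-id with no space, but stock Mac OS X does not ship it, so the command-not-found error appears even when it is typed correctly. A hedged sketch of one way around that (it assumes Homebrew is available, which the thread never says):

    # install the helper, then use it as on Linux
    brew install ssh-copy-id
    ssh-copy-id -i $HOME/.ssh/id_rsa.pub pluto@Slave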

RE: ERROR:: SSH failure for distributed node hadoop cluster

2012-10-25 Thread yogesh.kumar13
Hi Mohammad, that was the first issue. I tried to copy the key by using the command ssh-copy-id -i $HOME/.ssh/id_rsa.pub pluto@slave but it showed an error: Master:~ mediaadmin$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub pluto@Slave -bash: ssh-copy-id: command not found Why is it so? Regards Yogesh Kumar

ERROR:: SSH failure for distributed node hadoop cluster

2012-10-25 Thread yogesh.kumar13
Hi all, I am trying to run the command ssh Master; it runs and, after I enter the password, shows: Password: abc Last login: Thu Oct 25 13:51:06 2012 from master But ssh for Slave throws an error. ssh Slave asks for the password and denies it: Password: abc Password: abc Password: abc Permission denied
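A hedged debugging sketch for the repeated Permission denied: the verbose flag shows which authentication methods the Slave accepts, and the permission check covers the most common reason a public key is silently rejected.

    ssh -v Slave                          # watch whether publickey is offered and refused
    # on the Slave, as the login user:
    ls -ld ~/.ssh ~/.ssh/authorized_keys  # should be 700 and 600, owned by that user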

RE: removing datanodes from clusters.

2012-09-12 Thread yogesh.kumar13
Thanks Brahmareddy, do we need to create the include and exclude files, and with which extension? Please suggest. Regards Yogesh Kumar From: Brahma Reddy Battula [brahmareddy.batt...@huawei.com] Sent: Wednesday, September 12, 2012 10:16 AM To: user@hadoop.apache.org S
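A sketch of the conventional decommissioning flow on 0.20-era Hadoop: the include/exclude files are plain text with no particular extension, one hostname per line, and dfs.hosts.exclude in the NameNode configuration must already point at the exclude file before the refresh is issued (the path below is illustrative).

    echo datanode-to-remove.example.com >> /path/to/excludes   # plain text, one host per line
    hadoop dfsadmin -refreshNodes                              # NameNode begins decommissioning it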

RE: Sqoop installation

2012-08-21 Thread yogesh.kumar13
Hello Rahul, Follow the steps mentioned in this blog. http://jugnu-life.blogspot.in/search/label/Sqoop Regards Yogesh Kumar Dhari From: rahul p [rahulpoolancha...@gmail.com] Sent: Tuesday, August 21, 2012 4:14 PM To: user@hadoop.apache.org Subject: Sqoop install