Re:

2013-07-19 Thread Anit Alexander
Hello Tariq,
I solved the problem. There must have been some problem in the custom input
format I created, so I took a sample custom input format that was working
in the CDH4 environment and applied the changes as per my requirement. It is
working now. But I haven't tested that code in the Apache Hadoop environment yet
:)
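
For anyone who hits the same issue, a skeleton of this general shape is a
reasonable starting point (a sketch only, against the new
org.apache.hadoop.mapreduce API; MyInputFormat and MyRecordReader are
hypothetical names, not the actual code from this thread):

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

// Skeleton of a custom input format: the format only decides how files are
// split and which reader to use; the RecordReader does the actual parsing.
public class MyInputFormat extends FileInputFormat<LongWritable, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        // true lets the framework split files at block boundaries;
        // return false to hand each whole file to a single mapper.
        return true;
    }

    @Override
    public RecordReader<LongWritable, BytesWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        return new MyRecordReader(); // custom reader, sketched separately below
    }
}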

Regards,
Anit


On Thu, Jul 18, 2013 at 1:22 AM, Mohammad Tariq donta...@gmail.com wrote:

 Hello Anit,

 Could you show me the exact error log?

 Warm Regards,
 Tariq
 cloudfront.blogspot.com


 On Tue, Jul 16, 2013 at 8:45 AM, Anit Alexander anitama...@gmail.com wrote:

 Yes, I did recompile, but I seem to face the same problem. I am running
 the MapReduce job with a custom input format. I am not sure if there is some
 change in the API needed to get the splits correct.

 Regards


 On Tue, Jul 16, 2013 at 6:24 AM, 闫昆 yankunhad...@gmail.com wrote:

 I think you should recompile the program before you run it.


 2013/7/13 Anit Alexander anitama...@gmail.com

 Hello,

 I am encountering a problem in the CDH4 environment.
 I can successfully run the MapReduce job in the Hadoop cluster. But
 when I migrated the same MapReduce job to my CDH4 environment, it produces an
 error stating that it cannot read the next block (each block is 64 MB). Why
 is that so?

 Hadoop environment: Hadoop 1.0.3
 Java version 1.6

 CDH4 environment: CDH4.2.0
 Java version 1.6

 Regards,
 Anit Alexander







Re:

2013-07-15 Thread Anit Alexander
Yes, I did recompile, but I seem to face the same problem. I am running the
MapReduce job with a custom input format. I am not sure if there is some
change in the API needed to get the splits correct.
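
For reference, the boundary bookkeeping in a reader like mine is roughly the
following (a simplified, untested sketch with hypothetical names, not my
actual code; it assumes the new mapreduce API, fixed-length records, and
splits aligned to the record length). The key point is that a reader must
start at split.getStart() and stop at start + split.getLength(), or it will
run past its own block:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class MyRecordReader extends RecordReader<LongWritable, BytesWritable> {
    private static final int RECORD_LEN = 100; // hypothetical record length
    private FSDataInputStream in;
    private long start, end, pos;
    private final LongWritable key = new LongWritable();
    private final BytesWritable value = new BytesWritable();

    @Override
    public void initialize(InputSplit genericSplit, TaskAttemptContext context)
            throws IOException {
        FileSplit split = (FileSplit) genericSplit;
        Configuration conf = context.getConfiguration();
        start = split.getStart();        // first byte of this split
        end = start + split.getLength(); // first byte past this split
        FileSystem fs = split.getPath().getFileSystem(conf);
        in = fs.open(split.getPath());
        pos = start;
    }

    @Override
    public boolean nextKeyValue() throws IOException {
        if (pos + RECORD_LEN > end) {
            return false;                // stop at the split boundary
        }
        byte[] buf = new byte[RECORD_LEN];
        in.readFully(pos, buf);          // positioned read of one record
        key.set(pos);
        value.set(buf, 0, RECORD_LEN);
        pos += RECORD_LEN;
        return true;
    }

    @Override public LongWritable getCurrentKey() { return key; }
    @Override public BytesWritable getCurrentValue() { return value; }
    @Override public float getProgress() {
        return end == start ? 1.0f : (pos - start) / (float) (end - start);
    }
    @Override public void close() throws IOException {
        if (in != null) in.close();
    }
}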

Regards


On Tue, Jul 16, 2013 at 6:24 AM, 闫昆 yankunhad...@gmail.com wrote:

 I think you should recompile the program before you run it.


 2013/7/13 Anit Alexander anitama...@gmail.com

 Hello,

 I am encountering a problem in the CDH4 environment.
 I can successfully run the MapReduce job in the Hadoop cluster. But when
 I migrated the same MapReduce job to my CDH4 environment, it produces an error
 stating that it cannot read the next block (each block is 64 MB). Why is
 that so?

 Hadoop environment: Hadoop 1.0.3
 Java version 1.6

 CDH4 environment: CDH4.2.0
 Java version 1.6

 Regards,
 Anit Alexander





Re: In Windows (Cygwin) DataNode and TaskTracker are not starting

2012-10-11 Thread Anit Alexander
Hi,
Did anyone get a solution to this issue? I'm facing the same at my end too.
I'm currently using Hadoop 1.0.3 and trying to run it on Windows 7 using Cygwin.

I was successful in getting the JobTracker, NameNode, and TaskTracker running
(after verification in the browser),
but jps doesn't show the TaskTracker running.
Currently the DataNode is not starting, and there is no significant error
message in the logs.

Let me know if anyone has found a solution to this situation.
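
In the meantime, I plan to try a generic Hadoop 1.x debugging step (not a
confirmed fix): starting the missing daemons in the foreground, so any startup
failure prints straight to the console instead of the logs:

$ bin/hadoop datanode
$ bin/hadoop tasktracker

If they come up cleanly in the foreground, the problem is likely in how
start-all.sh launches them under Cygwin.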

Thanks
Anit


On Thu, Oct 11, 2012 at 12:16 PM, Visioner Sadak
visioner.sa...@gmail.com wrote:
 I did attach it; wait, I will re-attach it.


 On Wed, Oct 10, 2012 at 10:55 PM, Sujit Dhamale sujitdhamal...@gmail.com
 wrote:

 Hi,
 Can you please attach the document to this mail? :)

 Kind Regards
 Sujit Dhamale
 (+91 9970086652)


 On Wed, Oct 10, 2012 at 4:30 PM, Visioner Sadak visioner.sa...@gmail.com
 wrote:

 Go through the steps mentioned in this doc; it will help you.


 On Wed, Oct 10, 2012 at 4:15 PM, Sujit Dhamale sujitdhamal...@gmail.com
 wrote:

 Hi,
 Please help me out with this.

 Is there any other way to run Hadoop on Windows?


 On Tue, Oct 9, 2012 at 11:00 AM, Sujit Dhamale
 sujitdhamal...@gmail.com wrote:

 Hi,
 I installed Hadoop on Windows with the help of Cygwin.

 The DataNode and TaskTracker are not starting.
 Can someone help me with this?
 I attached the log file with this mail.

 Does anyone have a troubleshooting document for Windows, or a good
 link for running Hadoop on Windows? (Most of the articles available cover
 old Hadoop versions.)



 SUJITD@SUJITD07 /usr/local/hadoop
 $ jps
 5552 Jps

 SUJITD@SUJITD07 /usr/local/hadoop
 $ bin/start-all.sh
 starting namenode, logging to
 /usr/local/hadoop/libexec/../logs/hadoop-SUJITD-namenode-SUJITD07.out
 localhost: starting datanode, logging to
 /usr/local/hadoop/libexec/../logs/hadoop-SUJITD-datanode-SUJITD07.out
 localhost: starting secondarynamenode, logging to
 /usr/local/hadoop/libexec/../logs/hadoop-SUJITD-secondarynamenode-SUJITD07.out
 starting jobtracker, logging to
 /usr/local/hadoop/libexec/../logs/hadoop-SUJITD-jobtracker-SUJITD07.out
 localhost: starting tasktracker, logging to
 /usr/local/hadoop/libexec/../logs/hadoop-SUJITD-tasktracker-SUJITD07.out

 SUJITD@SUJITD07 /usr/local/hadoop
 $ jps
 4132 Jps
 2936 NameNode
 4320 JobTracker


 Kind Regards
 Sujit Dhamale
 (+91 9970086652)







how to skip a mapper

2012-09-10 Thread Anit Alexander
Hello list,

  Is it possible to start the mapper from a particular byte
location in a file that is in HDFS?
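
To make the question concrete, something like the following is what I have in
mind (a rough, untested sketch with hypothetical names, against the new
mapreduce API): drop or trim any split that lies before a start offset taken
from the job configuration. I understand the stock line reader would still
skip ahead to the first newline after a trimmed split's start:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Sketch: only map file content at or after a configured byte offset.
public class OffsetInputFormat extends TextInputFormat {
    public static final String START_OFFSET = "my.start.offset"; // hypothetical key

    @Override
    public List<InputSplit> getSplits(JobContext job) throws IOException {
        long offset = job.getConfiguration().getLong(START_OFFSET, 0L);
        List<InputSplit> kept = new ArrayList<InputSplit>();
        for (InputSplit split : super.getSplits(job)) {
            FileSplit fs = (FileSplit) split;
            long splitEnd = fs.getStart() + fs.getLength();
            if (splitEnd <= offset) {
                continue;                // entirely before the offset: skip it
            }
            if (fs.getStart() < offset) {
                // Trim the split that straddles the offset so it starts there.
                fs = new FileSplit(fs.getPath(), offset,
                        splitEnd - offset, fs.getLocations());
            }
            kept.add(fs);
        }
        return kept;
    }
}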

Regards,
Anit


custom format

2012-09-03 Thread Anit Alexander
Hello users,

I am trying to create a MapReduce program whose splits are
based on a specific length. The content has to be extracted in a way
such that newline (\n), tab (\t), etc. characters are treated
as ordinary bytes and not as record boundaries. Is this possible through a
custom input format? If yes, how would I create a custom file split based on a
specific length value? Any suggestions?
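
To sketch what I am after (hypothetical names, untested, against the new
mapreduce API; later Hadoop releases ship a FixedLengthInputFormat that covers
this ground): if the split size is forced to a whole number of records, every
split starts on a record boundary and no record straddles two splits, and the
reader can then treat \n and \t as plain bytes:

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

// Sketch: fixed-length records, so \n and \t are ordinary bytes.
public class FixedLenInputFormat extends FileInputFormat<LongWritable, BytesWritable> {
    public static final int RECORD_LEN = 100; // hypothetical record length

    @Override
    protected long computeSplitSize(long blockSize, long minSize, long maxSize) {
        long size = super.computeSplitSize(blockSize, minSize, maxSize);
        // Round down to a whole number of records so splits stay aligned.
        return Math.max(RECORD_LEN, (size / RECORD_LEN) * RECORD_LEN);
    }

    @Override
    public RecordReader<LongWritable, BytesWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        return new MyRecordReader(); // fixed-length reader as sketched earlier
    }
}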

Regards,
Anit


Re: custom format

2012-09-03 Thread Anit Alexander
Hi Hemanth,

Thank you for your valuable reply.

Regards,
Anit

On Mon, Sep 3, 2012 at 4:57 PM, Hemanth Yamijala yhema...@gmail.com wrote:
 Hi,

 I found this while trying to see if such a FileFormat or Split already exists:
 http://bitsofinfo.wordpress.com/2009/11/01/reading-fixed-length-width-input-record-reader-with-hadoop-mapreduce/

 I have certainly not tried it myself, hence can't say if it is
 current, etc. But maybe it'll help you in some way.

 Thanks
 Hemanth

 On Mon, Sep 3, 2012 at 4:30 PM, Anit Alexander anitama...@gmail.com wrote:
 Hello users,

 I am trying to create a MapReduce program whose splits are
 based on a specific length. The content has to be extracted in a way
 such that newline (\n), tab (\t), etc. characters are treated
 as ordinary bytes and not as record boundaries. Is this possible through a
 custom input format? If yes, how would I create a custom file split based on a
 specific length value? Any suggestions?

 Regards,
 Anit