Hadoop Rack awareness on virtual system

2013-05-23 Thread Jitendra Yadav
Hi, Can we create and test Hadoop rack awareness functionality in a VirtualBox system (e.g., on a laptop)? Thanks~

Hadoop Installation Mappers setting

2013-05-23 Thread Jitendra Yadav
Hi, While installing a Hadoop cluster, how can we calculate the right number of mapper slots? Thanks~

Re: Hadoop Rack awareness on virtual system

2013-05-23 Thread Jitendra Yadav
PM, Leonid Fedotov wrote: > You definitely can. > Just set rack script on your VMs. > > Leonid > > > On Thu, May 23, 2013 at 2:50 AM, Jitendra Yadav < > jeetuyadav200...@gmail.com> wrote: > >> Hi, >> >> Can we create and test hadoop rack aware
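
A minimal sketch of the rack script Leonid mentions, assuming Hadoop 1.x property names; the VM IPs, rack names and script path are illustrative:

    <!-- core-site.xml -->
    <property>
      <name>topology.script.file.name</name>
      <value>/etc/hadoop/rack-topology.sh</value>
    </property>

    #!/bin/bash
    # /etc/hadoop/rack-topology.sh -- prints one rack name per argument.
    # The IP-to-rack mapping below is illustrative; use your VM addresses.
    for ip in "$@"; do
      case "$ip" in
        192.168.56.101|192.168.56.102) printf '/rack1 ' ;;
        192.168.56.103|192.168.56.104) printf '/rack2 ' ;;
        *)                             printf '/default-rack ' ;;
      esac
    done
    echo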

Re: Hadoop Installation Mappers setting

2013-05-23 Thread Jitendra Yadav
as well as hadoop daemons. >> >> Divide the available memory with child jvm size and that would get the >> max >> num of slots. >> >> Also check whether sufficient number of cores are available as well. >> >> >> Regards >> Bejoy KS >&
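
A worked example of the arithmetic Bejoy describes, with illustrative numbers:

    Node RAM:                         16 GB
    Reserved for OS + Hadoop daemons: ~2 GB
    Available for child JVMs:         14 GB
    Child JVM heap (mapred.child.java.opts=-Xmx1024m): 1 GB
    Max task slots: 14 / 1 = 14, split across
      mapred.tasktracker.map.tasks.maximum    (e.g. 10)
      mapred.tasktracker.reduce.tasks.maximum (e.g. 4)
    Also cap the total near the number of CPU cores, as noted above.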

Re: Error while using the Hadoop Streaming

2013-05-24 Thread Jitendra Yadav
Hi, I have run Michael's Python MapReduce example several times without any issue. I think this issue is related to your file path 'mapper.py'. Are you using the python binary? Try this: hadoop jar /home/yyy/Dropbox/Private/xxx/Projects/task_week_22/hadoop-streaming-1.1.2.jar \ -input /user/yyy/
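
For reference, a complete streaming invocation would look roughly like this (the input/output paths and the reducer are illustrative, not from the message):

    hadoop jar /home/yyy/Dropbox/Private/xxx/Projects/task_week_22/hadoop-streaming-1.1.2.jar \
      -input   /user/yyy/input \
      -output  /user/yyy/output \
      -mapper  mapper.py \
      -reducer reducer.py \
      -file    mapper.py \
      -file    reducer.py

The -file options ship the scripts to every task node, so the shebang line and execute permissions on the scripts matter.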

Re: Error while using the Hadoop Streaming

2013-05-24 Thread Jitendra Yadav
n the very first > line of both scripts: #!/usr/bin/python > > Any ideas? > > > On Fri, May 24, 2013 at 7:41 PM, Jitendra Yadav < > jeetuyadav200...@gmail.com> wrote: > >> Hi, >> >> I have run Michael's python map reduce example several times without any

Re: Install hadoop on multiple VMs in 1 laptop like a cluster

2013-05-31 Thread Jitendra Yadav
Hi, You can create a clone machine from an existing virtual machine in VMware and then run it as a separate virtual machine. http://www.vmware.com/support/ws55/doc/ws_clone_new_wizard.html After installing, you have to make sure that all the virtual machines are set up with correct network set

Re: hi

2013-05-31 Thread Jitendra Yadav
Hi, This executable comes with the JDK bundle. You can find it in your jdk/bin directory. Regards Jitendra On Fri, May 31, 2013 at 5:11 PM, shashwat shriparv < dwivedishash...@gmail.com> wrote: > C:\Program: command not found?? > > From where are you running this command is you hadoop is

Re: New hadoop 1.2 single node installation giving problems

2013-07-23 Thread Jitendra Yadav
Hi, You might have missed some configuration (XML tags). Please check all the conf files. Thanks On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani wrote: > Hi There, > > First of all, sorry if I am asking some stupid question. Myself being new > to the Hadoop environment , am finding it a bit dif
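
For comparison, a minimal well-formed conf file looks like this (the fs.default.name value is illustrative); every <property> needs matching <name> and <value> tags inside a single <configuration> element:

    <!-- core-site.xml -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>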

Re: New hadoop 1.2 single node installation giving problems

2013-07-23 Thread Jitendra Yadav
t; # optional. When running a distributed configuration it is best to > # set JAVA_HOME in this file, so that it is correctly defined on > # remote nodes. > > # The java implementation to use. Required. > export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25 > > # Extra Java CLASSPATH element

Re: New hadoop 1.2 single node installation giving problems

2013-07-23 Thread Jitendra Yadav
: Cannot access .: No such file or directory.* > > > > On Tue, Jul 23, 2013 at 9:42 AM, Jitendra Yadav < > jeetuyadav200...@gmail.com> wrote: > >> Hi Ashish, >> >> Please check in hdfs-site.xml. >> >> It is missing. >> >> Thanks.

Re: datanode error "Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_"

2013-07-30 Thread Jitendra Yadav
Hi, Can you please check the existence/status of any of the mentioned blocks in your HDFS cluster? Command: hdfs fsck / -files -blocks | grep 'blk number' Thanks On 7/30/13, ch huang wrote: > i do not know how to solve this,anyone can help > > 2013-07-30 17:28:40,953 INFO > org.apache.hadoop.hdfs.server.da
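
For example, to look for the block named in the subject (-files, -blocks and -locations are standard fsck options):

    hdfs fsck / -files -blocks -locations | grep blk_7796221171187533460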

Re: datanode error "Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_"

2013-07-31 Thread Jitendra Yadav
y, i the block did not exist ,but why it will missing? > > > On Wed, Jul 31, 2013 at 2:02 AM, Jitendra Yadav < > jeetuyadav200...@gmail.com> wrote: > >> Hi, >> >> Can you please check the existence/status of any of mentioned block >> in your hdfs clu

Re: Many Standby NameNodes for QJM HA

2013-08-01 Thread Jitendra Yadav
Maybe I'm wrong, but I think right now only one standby NN is supported. Thanks On Thu, Aug 1, 2013 at 2:01 PM, lei liu wrote: > I use hadoop-2.0.5 version, and I use QJM for HA. > > I want to there are two Standby NameNodes and one Active NameNode in HDFS > cluster, > > I think

Re: Namenode is failing with expception to join

2013-08-07 Thread Jitendra Yadav
Hi, Did you configure your NameNode to store multiple copies of its metadata? You can recover your NameNode in that situation: #hadoop namenode -recover It will ask you whether you want to continue or not; please follow the instructions. Thanks On Wed, Aug 7, 2013 at 1:44 PM, Manish Bhoge
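
A sketch of the multiple-metadata-copies setup, assuming Hadoop 1.x property names; the paths (one local disk plus one NFS mount) are illustrative:

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.name.dir</name>
      <!-- comma-separated list; the NN writes its metadata to every directory -->
      <value>/data/1/dfs/name,/mnt/nfs/dfs/name</value>
    </property>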

Re: Oozie ssh action error

2013-08-07 Thread Jitendra Yadav
Hi, I hope the points below help you. *Approach 1#* You need to change the sshd_config file on the remote server (probably /etc/ssh/sshd_config). Change the PasswordAuthentication value from 'PasswordAuthentication no' to 'PasswordAuthentication yes', and then restart the SSHD daemon. *Approach 2#* Ch
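
For Approach 1#, something like this on the remote server (assuming a typical Linux layout):

    # flip PasswordAuthentication from no to yes
    sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
    # restart the SSHD daemon
    service sshd restart    # or: /etc/init.d/sshd restart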

Re: Datanode doesn't connect to Namenode

2013-08-07 Thread Jitendra Yadav
Hi, Your logs show that the process is making an IPC call not to the namenode; it is hitting the datanode itself. Could you please check your datanode process status? Regards Jitendra On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez < felipe.o.gutier...@gmail.com> wrote: > Hi everyone, > > My sla

Re: Datanode doesn't connect to Namenode

2013-08-07 Thread Jitendra Yadav
I'm not able to see a tasktracker process on your datanode. On Wed, Aug 7, 2013 at 11:14 PM, Felipe Gutierrez < felipe.o.gutier...@gmail.com> wrote: > yes, in slave I type: > fs.default.name > hdfs://cloud15:54310 > > in master I type: > fs.default.name > hdfs://cloud6:54310 > > If I type cloud6 on

Re: Datanode doesn't connect to Namenode

2013-08-07 Thread Jitendra Yadav
19025 DataNode > 19092 Jps > > > On Wed, Aug 7, 2013 at 2:26 PM, Jitendra Yadav > wrote: > >> Hi, >> >> Your logs showing that the process is creating IPC call not for namenode, >> it is hitting datanode itself. >> >> Check you please check you da

Re: Hadoop upgrade

2013-08-09 Thread Jitendra Yadav
Please refer to the Hadoop 1.1.2 release notes. http://hadoop.apache.org/docs/r1.1.2/releasenotes.html On Fri, Aug 9, 2013 at 5:41 PM, Viswanathan J wrote: > Hi, > > Planning to upgrade hadoop from 1.0.3 to 1.1.2, what are the key features > or advantages. >

getting errors on datanode/tasktracker logs

2013-08-09 Thread Jitendra Yadav
Hi, I'm getting the errors below in the log file while starting the datanode and tasktracker. I'm using Hadoop 1.1.2 and java 1.7.0_21. mmap failed for CEN and END part of zip file mmap failed for CEN and END part of zip file mmap failed for CEN and END part of zip file mmap failed for CEN and END part of zi

Re: getting errors on datanode/tasktracker logs

2013-08-10 Thread Jitendra Yadav
.com > Mobile Tel: +91 (0)9899821370 > > > On Sat, Aug 10, 2013 at 12:17 AM, Jitendra Yadav > > wrote: > >> Hi, >> >> I'm getting below errors in log file while starting datanode and >> tasktracker. I'm using Hadoop 1.1.2 and java 1.7.0_21. >

Re: getting errors on datanode/tasktracker logs

2013-08-10 Thread Jitendra Yadav
you try with 1.6 java also. > > With just starting daemons you are getting this error ? > > I suppose this is your test cluster. > > You seems to be having error similar to below link > https://forums.oracle.com/message/10371413 > On 10/08/2013 4:48 AM, "Jitendra Yadav&
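
One workaround often suggested for this JDK 7 zip/mmap warning (an assumption on my part, not confirmed in this thread) is to disable zip memory mapping in the daemon JVMs via hadoop-env.sh:

    # hadoop-env.sh
    export HADOOP_OPTS="$HADOOP_OPTS -Dsun.zip.disableMemoryMapping=true"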

Re: Discrepancy in the values of consumed disk space by hadoop

2013-08-11 Thread Jitendra Yadav
Hi, I think you are referring to the DFS Used (from the NameNode report) and Total size (from fsck) values, right? *DFS Used:* This is the total HDFS space used on all the connected datanodes, counting every replica; in your case 230296610816 (214.48 GB). *Total Size:* The fsck utility looks for the blocks in the namespace, it
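
The two numbers being compared come from, e.g.:

    hadoop dfsadmin -report   # "DFS Used": disk consumed on all live datanodes, counting every replica
    hadoop fsck /             # "Total size": logical size of the files in the namespace, before replication

So with replication factor 3, DFS Used is expected to be roughly three times the fsck total.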

Re: when Standby Namenode is doing checkpoint, the Active NameNode is slow.

2013-08-13 Thread Jitendra Yadav
Hi, Can you please let me know how you identified the slowness between the primary and standby namenode? Also, please share the network bandwidth between these two servers. Thanks On Tue, Aug 13, 2013 at 11:52 AM, lei liu wrote: > The fsimage file size is 1658934155 > > > 2013/8/13 H

Re: Exceptions in Name node and Data node logs

2013-08-13 Thread Jitendra Yadav
Hi, One of your DNs is marked as dead because the NN is not able to get heartbeat messages from the DN, but the NN is still getting block information from the dead node. This error is similar to a bug *HDFS-1250* reported two years back and fixed in the 0.20 release. Can you please check the status of the DNs in the cluster? #bin

Re: when Standby Namenode is doing checkpoint, the Active NameNode is slow.

2013-08-13 Thread Jitendra Yadav
mage: Transfer took > > 241.45s at 0.00 KB/s > > 2013-08-13 17:53:21,107 INFO > > org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Uploaded image > with > > txid 521186406 to namenode at 10.232.98.77:20021 > > > > > > There are below info in Active NameNode: > >

Re: when Standby Namenode is doing checkpoint, the Active NameNode is slow.

2013-08-15 Thread Jitendra Yadav
Hi, Looks like you got some pace. Did you also try the compression parameter? I think you will get more optimization with it (a sketch follows below). Also, file transfer speed depends on the network bandwidth between PNN/SNN and the network traffic between nodes. What's your network conf? Thanks On Wed, Aug 14, 2013 at 11:39 AM
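
A sketch of the knobs in question (property names from Hadoop 2.x; values illustrative):

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.image.compress</name>
      <value>true</value>   <!-- compress the fsimage before transfer -->
    </property>
    <property>
      <name>dfs.image.transfer.bandwidthPerSec</name>
      <!-- ~10 MB/s cap so the checkpoint upload does not starve the active NN -->
      <value>10485760</value>
    </property>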

Re: mapper and reducer task failure

2013-08-15 Thread Jitendra Yadav
Which version of Hadoop are you using? On Thu, Aug 15, 2013 at 8:05 PM, Pradeep Singh wrote: > Hi RaJ, > I am using ubuntu 13.04 . and running hadoop with root user > Regards > Pradeep Singh > > > On Thu, Aug 15, 2013 at 7:29 PM, Raj K Singh wrote: > >> it seems that you are running hadoop loca

Re: mapper and reducer task failure

2013-08-15 Thread Jitendra Yadav
egards > Pradeep Singh > > > On Thu, Aug 15, 2013 at 8:22 PM, Jitendra Yadav < > jeetuyadav200...@gmail.com> wrote: > >> Which version of hadoop you are using? >> >> >> On Thu, Aug 15, 2013 at 8:05 PM, Pradeep Singh >> wrote: >> >>&g

Re: User level permission

2013-08-15 Thread Jitendra Yadav
Can you please share the configuration in your hdfs-site.xml and core-site.xml files, along with the NN and DN logs? Also please check the permissions at the dfs.data.dir location. Thanks On Thu, Aug 15, 2013 at 9:08 PM, Sriram Balachander < sriram.balachan...@gmail.com> wrote: > Hi All > > I am new to hadoop and

Re: question about data block replicator number

2013-08-16 Thread Jitendra Yadav
Yup, that's the responsibility of the namenode: it handles under- and over-replicated blocks automatically. However, you can run the balancer script any time. Thanks On Fri, Aug 16, 2013 at 2:09 PM, bharath vissapragada < bharathvissapragada1...@gmail.com> wrote: > No, namenode deletes over-replicated blocks
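
For example (the threshold is the allowed per-datanode disk-usage deviation, in percent):

    hadoop balancer -threshold 10
    # or run it in the background:
    start-balancer.sh -threshold 10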

Re: how to cache a remote hadoop file

2013-08-16 Thread Jitendra Yadav
I think an in-memory Hadoop mechanism will fulfill your requirement. Thanks On Fri, Aug 16, 2013 at 2:21 PM, Visioner Sadak wrote: > Hello friends i m using webhdfs to fetch a remote hadoop file in my > browser is there any caching mechanism that you guys know to load this file > faster > > > http:

Re: how to cache a remote hadoop file

2013-08-16 Thread Jitendra Yadav
any configurations in order to implement it > > > On Fri, Aug 16, 2013 at 2:26 PM, Jitendra Yadav < > jeetuyadav200...@gmail.com> wrote: > >> I think Inmemory hadoop mechanism will full fill your requirement. >> >> Thanks >> >> On Fri, Aug 16, 2013 at 2:21

Re: how to cache a remote hadoop file

2013-08-17 Thread Jitendra Yadav
Ok, here you go: Spark is open source and it can be integrated with Hadoop. http://spark-project.org/ Thanks On Sat, Aug 17, 2013 at 3:13 AM, Visioner Sadak wrote: > friends is there any open source caching mechanism for hadoop > > > On Fri, Aug 16, 2013 at 4:56 PM, Ji

Hadoop Error in HA configuration Hadoop 2.0.5

2013-08-17 Thread Jitendra Yadav
Hi, After configuration, I tried to start Hadoop 2.0.5 alpha (HA conf) in a test environment, but I'm getting the errors below again and again. 2013-08-13 06:35:35,276 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /u0_pool/hadoop-hadoop/dfs/name/in_use.lock acquired by nodename 4654@warehou

Re: Hadoop Error in HA configuration Hadoop 2.0.5

2013-08-17 Thread Jitendra Yadav
I'm not using QJM, although I have configured the NFS shared edits option, i.e. "/u1_pool/namenode" Thanks On 8/18/13, Azuryy Yu wrote: > hi, > > did you using QJM ha? > On Aug 18, 2013 3:04 AM, "Jitendra Yadav" > wrote: > >> Hi, >> >> Af

Re: Hadoop Error in HA configuration Hadoop 2.0.5

2013-08-18 Thread Jitendra Yadav
more configuration available with this release i.e. NFS share edits to enable HA. So is it mandatory to use QJM for HA? Thanks On 8/18/13, Roman Shaposhnik wrote: > On Sat, Aug 17, 2013 at 11:34 PM, Jitendra Yadav > wrote: >> I'm not using QJM, although I have configured NFS sh

Re: Hadoop Error in HA configuration Hadoop 2.0.5

2013-08-18 Thread Jitendra Yadav
cy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) Did I miss something? Thanks On 8/18/13, Rajiv Chittajallu wrote: > Did you run 'hdfs namenode -bootstrapStandby' to create metadata in shared > edits dir? > > > On Aug 18, 2013, at 2:35, "Jit

Re: HDFS Startup Failure due to dfs.namenode.rpc-address and Shared Edits Directory

2013-08-27 Thread Jitendra Yadav
Hi, Please follow the HA configuration steps available at the link below. http://hadoop.apache.org/docs/r2.1.0-beta/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html *"dfs.ha.namenodes.[nameservice ID] - unique identifiers for each NameNode in the nameservice * *Configure with a list o
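
A minimal sketch of those two properties ("mycluster", "nn1" and "nn2" are illustrative IDs):

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>
    </property>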

Re: There are 2 datanode(s) running and 2 node(s) are excluded in this operation.

2013-08-28 Thread Jitendra Yadav
Hi, Also, can you please share the dfs health check report of your cluster? Thanks On Wed, Aug 28, 2013 at 3:46 PM, xeon wrote: > Hi, > > I don't have the "dfs.hosts.exclude" property defined, but I still get > the error "There are 2 datanode(s) running and 2 node(s) are excluded in > this opera

Re: metric type

2013-08-30 Thread Jitendra Yadav
Hi, The link below contains the answer to your question. http://hadoop.apache.org/docs/r1.2.0/api/org/apache/hadoop/metrics2/package-summary.html Regards Jitendra On Fri, Aug 30, 2013 at 11:35 AM, lei liu wrote: > I use the metrics v2, there are COUNTER and GAUGE metric type in metrics > v2. > W

Re: Hadoop HA error "JOURNAL is not supported in state standby"

2013-08-30 Thread Jitendra Yadav
Hi, Totally agreed with Jing's reply. I faced the same issue previously while doing a cluster upgrade: I had upgraded all the nodes, but on one of my nodes the hdfs binary was pointing to the previous version, so I changed the PATH and it worked fine for me. Thanks On Fri, Aug 30, 2013 at 2:10 AM,

Re: reduce job hung in pending state: "No room for reduce task"

2013-08-30 Thread Jitendra Yadav
Hi, Did you check the free disk space on the server where your reducer task was running? It needs approx. 264 GB of free disk space to run (as per the logs). Thanks Jitendra On 8/30/13, Jim Colestock wrote: > Hello All, > > We're running into the following 2 bugs again: > https://issues.apache.org/j

Re: hadoop 2.0.5 datanode heartbeat issue

2013-08-30 Thread Jitendra Yadav
Hi, Your conf looks fine, but I would say you should restart your DN once and check your NN web URL. Regards Jitendra On 8/31/13, orahad bigdata wrote: > here is my conf files. > > ---core-site.xml--- > > > fs.defaultFS > hdfs://orahadoop > > > dfs.journaln

Re: metric type

2013-08-30 Thread Jitendra Yadav
of bytes read per second,and display the > result into ganglia, should I use MutableCounterLong or MutableGaugeLong? > > If I want to display current xceiver thread number in datanode into ganglia, > should I use MutableCounterLong or MutableGaugeLong? > > Thanks, > LiuLei >

Re: metric type

2013-08-31 Thread Jitendra Yadav
gt; > 2013/8/31 Jitendra Yadav > >> Hi, >> >> For IO/sec statistics I think MutableCounterLongRate and >> MutableCounterLong more useful than others and for xceiver thread >> number I'm not bit sure right now. >> >> Thanks >> Jiitendra >

Re: hadoop 2.0.5 datanode heartbeat issue

2013-09-01 Thread Jitendra Yadav
s issue occurred? > > Thanks > > > > On Sat, Aug 31, 2013 at 1:24 AM, Jitendra Yadav > wrote: > >> Hi, >> >> However your conf looks fine but I would say that you should restart >> your DN once and check your NN weburl. >> >>

Re: java.net.ConnectException when using Httpfs

2013-09-03 Thread Jitendra Yadav
Can you please share your /etc/hosts file? Thanks Jitendra On Tue, Sep 3, 2013 at 4:53 PM, Visioner Sadak wrote: > i removed 127.0.0.1 references from my etc hosts now its throwing > > {"RemoteException":{"message":"Call From redsigma1.local\/132.168.0.10 to > localhost:8020 failed on connectio

Re: SNN not writing data fs.checkpoint.dir location

2013-09-05 Thread Jitendra Yadav
Please share your Hadoop version and hdfs-site.xml conf. Also, I'm assuming that you already restarted your cluster after changing fs.checkpoint.dir. Thanks On 9/5/13, Munna wrote: > Hi, > > I have configured fs.checkpoint.dir in hdfs-site.xml, but still it was > writing in /tmp location. Please gi
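
For reference, the property should look like this and must be present on the node that runs the SNN (the path is illustrative):

    <!-- hdfs-site.xml -->
    <property>
      <name>fs.checkpoint.dir</name>
      <value>/data/dfs/namesecondary</value>
    </property>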

Re: SNN not writing data fs.checkpoint.dir location

2013-09-05 Thread Jitendra Yadav
o? > please confirm... > > > On Fri, Sep 6, 2013 at 12:10 AM, Jitendra Yadav > wrote: > >> Hi, >> >> If you are running SNN on same node as NN then it's ok otherwise you >> should add these properties at SNN side too. >> >> >> Thanks >> J

Re: SNN not writing data fs.checkpoint.dir location

2013-09-05 Thread Jitendra Yadav
Hi, This means that your specified checkpoint directory has been locked by SNN for use. Thanks Jitendra On 9/6/13, Munna wrote: > "in_use.lock" ? > > > On Fri, Sep 6, 2013 at 12:26 AM, Jitendra Yadav > wrote: > >> Hi, >> >> Well I think you shoul

Re: SNN not writing data fs.checkpoint.dir location

2013-09-05 Thread Jitendra Yadav
ds between two periodic >> > checkpoints >> > >> > >> > I have entered these changes in Namenode only. >> > >> > >> > On Thu, Sep 5, 2013 at 11:47 PM, Jitendra Yadav < >> jeetuyadav200...@gmail.com> >> &g

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Jitendra Yadav
Also, can you please check your masters file content in the hadoop conf directory? Regards Jitendra On Mon, Sep 9, 2013 at 5:11 PM, Olivier Renault wrote: > Could you confirm that you put the hash in front of 192.168.6.10 > localhost > > It should look like > > # 192.168.6.10localhost > > Thanks >

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Jitendra Yadav
I mean your $HADOOP_HOME/conf/masters file content. On Mon, Sep 9, 2013 at 7:52 PM, Jay Vyas wrote: > Jitendra: When you say " check your masters file content" what are you > referring to? > > > On Mon, Sep 9, 2013 at 8:31 AM, Jitendra Yadav > wrote: > &g
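
For example, on a typical Hadoop 1.x setup the file is just hostnames, one per line (the name is illustrative); the start scripts read it to decide where to launch the secondary namenode:

    $ cat $HADOOP_HOME/conf/masters
    master1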

Re: hadoop cares about /etc/hosts ?

2013-09-11 Thread Jitendra Yadav
violet slave >>>> >>>> But when I commented the line appended with hash, >>>> 127.0.0.1 localhost >>>> # >>>> 192.168.6.10localhost >>>> ### >>>> >>>> 192.168.6.10tulip master >>>

Re: Hadoop - Browsefile system error

2013-09-16 Thread Jitendra Yadav
Hi, From where are you accessing the "http://10.108.19.68:50070" URL? Regards Jitendra On Mon, Sep 16, 2013 at 3:22 PM, Manickam P wrote: > Hi, > > I've installed hadoop-2.

Re: Hadoop - Browsefile system error

2013-09-16 Thread Jitendra Yadav
com:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/&nnaddr=10.158.99.68:9000>:50070 URL. I believe you're accessing this URL from your physical box, right? Thanks Jitendra On Mon, Sep 16, 2013 at 3:26 PM, Jitendra Yadav wrote: > Hi, > > From where you are accessing this > &

Re: Hadoop - Browsefile system error

2013-09-16 Thread Jitendra Yadav
here your are accessing > http://10.108.19.68<http://ilab2-hadoop2-vm1.eng.dnb.com:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/&nnaddr=10.158.99.68:9000>:50070 > URL. I believe your accessing this URl from your physical box right? > > Thanks > Jitendra > > > On Mon

Re: Hadoop - Browsefile system error

2013-09-16 Thread Jitendra Yadav
m<http://ilab2-hadoop2-vm1.eng.dnb.com/> > domain and IP entry in your host file from where your are accessing > http://10.108.19.68<http://ilab2-hadoop2-vm1.eng.dnb.com:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/&nnaddr=10.158.99.68:9000>:50070 > URL. I believe

Re: Hadoop - Browsefile system error

2013-09-16 Thread Jitendra Yadav
e you have > lab2-hadoop2-vm1.eng.dnb.com<http://ilab2-hadoop2-vm1.eng.dnb.com/> > domain and IP entry in your host file from where your are accessing > http://10.108.19.68<http://ilab2-hadoop2-vm1.eng.dnb.com:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/&nnaddr=

Re: Uploading a file to HDFS

2013-09-26 Thread Jitendra Yadav
Case 2: While selecting the target DN for write operations, the NN always prefers the first DN to be the same DN from which the client is sending the data; in some cases the NN ignores that DN when there are disk space issues or other health symptoms. The rest of the flow stays the same. Thanks Jitendra On Th

Hadoop Solaris OS compatibility

2013-09-27 Thread Jitendra Yadav
Hi All, For a few years I have been working as a Hadoop admin on the Linux platform, though the majority of our servers run Solaris (Sun SPARC hardware). Many times I have seen Hadoop described as compatible with Linux. Is that right? If yes, then what do I need so that I can run Hadoop on Solaris in

Re: Secondary NameNode doCheckpoint Exception

2013-10-03 Thread Jitendra Yadav
Hi, There is some layoutVersion value issue in your VERSION file. Can you please share the VERSION file content from the NN and SNN? ${dfs.name.dir}/current/VERSION Regards Jitendra On Thu, Oct 3, 2013 at 1:22 PM, Furkan Bıçak wrote: > On 03-10-2013 10:44, Furkan Bıçak wrote: > >> Hi Everyone,
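
For reference, an NN VERSION file looks roughly like this (values illustrative; the layoutVersion number differs per release), and the layoutVersion must agree between the NN and SNN storage directories:

    #Thu Oct 03 10:44:00 EEST 2013
    namespaceID=1234567890
    cTime=0
    storageType=NAME_NODE
    layoutVersion=-41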

Re: Secondary NameNode doCheckpoint Exception

2013-10-03 Thread Jitendra Yadav
Did you upgrade your cluster? Regards Jitendra On Thu, Oct 3, 2013 at 1:22 PM, Furkan Bıçak wrote: > On 03-10-2013 10:44, Furkan Bıçak wrote: > >> Hi Everyone, >> >> I am starting my hadoop cluster manually from Java which works fine until >> some time. When secondary NameNode tries to do me

Re: Secondary NameNode doCheckpoint Exception

2013-10-03 Thread Jitendra Yadav
Can you restart your cluster using the scripts below and check the logs? # stop-all.sh # start-all.sh Regards Jitendra On Thu, Oct 3, 2013 at 2:09 PM, Furkan Bıçak wrote: > No, I started from scratch. > > Thanks, > Frkn. > > > On 03-10-2013 11:32, Jitendra Yadav wrote: >

Re: Hadoop Solaris OS compatibility

2013-10-03 Thread Jitendra Yadav
Thanks Roman, I will keep in touch with you. However, we have faced lots of issues on the Solaris SPARC OS, mostly with the Hadoop 2.x.x versions. Regards Jitendra On Thu, Oct 3, 2013 at 5:04 AM, Roman Shaposhnik wrote: > On Fri, Sep 27, 2013 at 2:42 AM, Jitendra Yadav > wrote: >

Re: Secondary NameNode doCheckpoint Exception

2013-10-03 Thread Jitendra Yadav
ode is started, it tries the merge and then gives that error. > > Thanks, > Frkn. > > On 03-10-2013 12:10, Jitendra Yadav wrote: > > Can you restart your cluster using below scripts and check the logs? > > # stop-all.sh > #start-all.sh > > Regards > Jitendra &

Migrating from Legacy to Hadoop.

2013-10-08 Thread Jitendra Yadav
Hi All, We are planning to consolidate our 3 existing warehouse databases onto a Hadoop cluster. In our testing phase we have designed the target environment and transferred the data from source to target (not in sync, but almost complete). These legacy systems were using traditional ETL/replication

Re: Migrating from Legacy to Hadoop.

2013-10-08 Thread Jitendra Yadav
ic? > > Regards > > Bertrand > > > On Tue, Oct 8, 2013 at 5:47 PM, Jitendra Yadav > wrote: > >> Hi All, >> >> We are planning to consolidate our 3 existing warehouse databases to >> Hadoop cluster, In our testing phase we have designed the target >

Re: Error putting files in the HDFS

2013-10-08 Thread Jitendra Yadav
As per your dfs report, the available DataNode count is ZERO in your cluster. Please check your datanode logs. Regards Jitendra On 10/8/13, Basu,Indrashish wrote: > > Hello, > > My name is Indrashish Basu and I am a Masters student in the Department > of Electrical and Computer Engineering. > >

Re: Error putting files in the HDFS

2013-10-08 Thread Jitendra Yadav
1:30,032 INFO >> org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 >> blocks got processed in 19 msecs >> 2013-10-07 11:31:30,035 INFO >> org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic >> block scanner. >> 2013-10-07 11:41:42,222 INFO >> org.ap

Re: Regarding CDR Data

2013-10-21 Thread Jitendra Yadav
Hi, Due to some security concerns I can't share real-time CDR logs, but as an alternative you can create your own script to generate dummy CDR records for your analysis. The link below might be helpful. http://www.gedis-studio.com/online-call-detail-records-cdr-generator.html Regards Jite

Re: how to schedule the hadoop commands in cron job or any orther way?

2013-11-07 Thread Jitendra Yadav
Hi, If you have a few simple Hadoop jobs, e.g. for data ingestion or manipulation, then you can easily manage them through a shell script run from crontab (see the sketch below). But if you have a bunch of complex, interdependent jobs, then you can use the Oozie tool for configuring, scheduling and monitori
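
A minimal sketch of the crontab approach (the paths, schedule and ingestion command are all illustrative):

    #!/bin/bash
    # /usr/local/bin/daily_ingest.sh -- simple scheduled Hadoop ingestion job
    export HADOOP_HOME=/usr/local/hadoop
    $HADOOP_HOME/bin/hadoop fs -put /data/incoming/*.log /user/etl/incoming/ \
        >> /var/log/daily_ingest.log 2>&1

    # crontab entry: run every day at 01:30
    30 1 * * * /usr/local/bin/daily_ingest.sh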

Re: Working with Capacity Scheduler

2013-11-26 Thread Jitendra Yadav
Try this. - hadoop queue -showacls - hadoop queue -list Regards Jitendra On Tue, Nov 26, 2013 at 6:58 PM, Munna wrote: > Hi Olivier, > > Thank you for your reply. > > As you said, i ran those commands and i am getting following error message > on both the commands. > > > > > > > > > > > *[ro

Re: auto-failover does not work

2013-12-02 Thread Jitendra Yadav
Which fencing method are you using in your configuration? Do you have a correct SSH configuration between your hosts? Regards Jitendra On Mon, Dec 2, 2013 at 5:34 PM, YouPeng Yang wrote: > Hi i > I'm testing the HA auto-failover within hadoop-2.2.0 > > The cluster can be manully failover ,how

Re: auto-failover does not work

2013-12-02 Thread Jitendra Yadav
Are you able to connect to both NN hosts using SSH without a password? Make sure you have the correct SSH keys in the authorized_keys file. Regards Jitendra On Mon, Dec 2, 2013 at 5:50 PM, YouPeng Yang wrote: > Hi Pavan > > > I'm using sshfence > > --core-site.xml- > > > > fs.de
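
A sketch of the sshfence configuration being discussed (the key path is illustrative); using a passphrase-less key for the hadoop user avoids any dependency on ssh-agent during automatic failover:

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
    </property>
    <property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/home/hadoop/.ssh/id_rsa</value>
    </property>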

Re: auto-failover does not work

2013-12-02 Thread Jitendra Yadav
gt; Hi Jitendra > Yes > I'm doubt that it need to enter the ssh-agent bash & ssh-add before I > ssh the NN from each other.Is it an problem? > > Regards > > > > > 2013/12/2 Jitendra Yadav > >> Are you able to connect both NN hosts using SSH without

Re: Strange error on Datanodes

2013-12-02 Thread Jitendra Yadav
Hi, Can you share some more logs from the datanodes? Could you please also share the conf and cluster size? Regards Jitendra On Mon, Dec 2, 2013 at 8:49 PM, Siddharth Tiwari wrote: > Hi team > > I see following errors on datanodes. What is the reason for this and how > can it will be resolved:- >

Re: Strange error on Datanodes

2013-12-02 Thread Jitendra Yadav
Which Hadoop distro are you using? It would be good if you shared the logs from the datanode on which the data block (blk_-2927699636194035560_63092) exists, and from the namenodes as well. Regards Jitendra On Mon, Dec 2, 2013 at 9:13 PM, Siddharth Tiwari wrote: > Hi Jeet > > I have a cluster of size 25, 4

Re: Strange error on Datanodes

2013-12-03 Thread Jitendra Yadav
I did some analysis on the provided logs and confs. Instead of one issue, I believe you may have two issues going on. 1. java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 2. 2013-12-02 13:12:06,58

Re: Strange error on Datanodes

2013-12-03 Thread Jitendra Yadav
:54040 dest: /10.238.10.43:50010 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194) <> Try to increase the dfs.datanode.max.xcievers conf value in the datanode hdfs-site.xml. Regards Jitendra On Tue, Dec 3, 2013 at 3:17 P
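
For example (the value is illustrative; note the property name really is spelled "xcievers"); restart the datanodes after changing it:

    <!-- hdfs-site.xml on the datanodes -->
    <property>
      <name>dfs.datanode.max.xcievers</name>
      <value>8192</value>
    </property>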

Re: Strange error on Datanodes

2013-12-03 Thread Jitendra Yadav
est: /10.238.10.43:50010 java.io.IOException: Premature > EOF from inputStreamat > org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194) > > <> Try to increase the dfs.datanode.max.xcievers conf value in the datanode > hdfs-site.conf > > > Regards > >

Re: get error in running terasort tool

2013-12-04 Thread Jitendra Yadav
Can you check how many healthy datanodes are available in your cluster? Use: #hadoop dfsadmin -report Regards Jitendra On Thu, Dec 5, 2013 at 12:48 PM, ch huang wrote: > hi,maillist: > i try run terasort in my cluster ,but failed ,following > is error ,i do not know why, anyon

Re: Name node and data node replacement

2013-12-11 Thread Jitendra Yadav
Yes, you are right. It periodically checks the under-replicated block information and places those blocks on available datanodes if required. Regards Jitendra On Wed, Dec 11, 2013 at 1:57 PM, oc tsdb wrote: > Hi, > > Thanks for your response. > > You mean NN will always try to maintain nu

Re: About formatting of namenode..

2014-01-31 Thread Jitendra Yadav
Hi Jyoti, That's right, you will lose all the HDFS data; therefore you need to take a backup of your critical data from HDFS to some other place. If you are using logical volumes, then it would be better to add more space to the particular volume/mount point. Regards Jitendra On Fri, Jan 31, 2014 at 4:10

Re: About formatting of namenode..

2014-01-31 Thread Jitendra Yadav
ny problem but I am getting little bit scared. > > Thanks > > > On Fri, Jan 31, 2014 at 4:22 PM, Jitendra Yadav < > jeetuyadav200...@gmail.com> wrote: > >> Hi Jyoti, >> >> That's right you will lose all the HDFS data, therefore you need take >>

Re: java.io.FileNotFoundException: http://HOSTNAME:50070/getimage?getimage=1

2014-01-31 Thread Jitendra Yadav
Hi, Please post the output of the dfs report command; this could help us understand the cluster health. # hadoop dfsadmin -report Thanks Jitendra On Fri, Jan 31, 2014 at 6:44 PM, Stuti Awasthi wrote: > Hi All, > > > > I am suddenly started facing issue on Hadoop Cluster. Seems like HTTP > requ

Re: java.io.FileNotFoundException: http://HOSTNAME:50070/getimage?getimage=1

2014-01-31 Thread Jitendra Yadav
%: 5.59% > > DFS Remaining%: 81.06% > > Last contact: Fri Jan 31 18:55:18 IST 2014 > > > > > > Name: 10.139.9.234:50010 > > Decommission Status : Normal > > Configured Capacity: 82436759552 (76.78 GB) > > DFS Used: 4277760000 (3.98 GB) &g

Re: java.io.FileNotFoundException: http://HOSTNAME:50070/getimage?getimage=1

2014-01-31 Thread Jitendra Yadav
Correcting a typo: dfs.namenode.http-address. Thanks On Fri, Jan 31, 2014 at 7:25 PM, Jitendra Yadav wrote: > Can you please change below property and restart your cluster again? > > FROM: > > dfs.http.address > > > TO: > dfs.namenode.http-addres > &g
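
The corrected property would look like this (host and port illustrative; dfs.http.address is the older name for the same setting):

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.namenode.http-address</name>
      <value>namenode-host:50070</value>
    </property>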

Re: Is it possible to access a hadoop 1 file system (hdfs) via the hadoop 2.2.0 command line tools?

2014-02-01 Thread Jitendra Yadav
In this case I believe the Hadoop client version should be the same as the Hadoop cluster version. Thanks Jitendra On Sat, Feb 1, 2014 at 2:54 PM, Christian Schuhegger < christian.schuheg...@gmx.de> wrote: > Hello all, > > I am trying to access a hadoop 1 installation via the hadoop 2.2.0 command > line to

Re: IOException when using "dfs -put"

2014-04-04 Thread Jitendra Yadav
Can you check the total number of running datanodes in your cluster and also the free HDFS space? Thanks Jitendra On Fri, Apr 4, 2014 at 9:53 AM, Mahmood Naderan wrote: > Hi, > I want to put a file from local FS to HDFS but at the end I get an error > message and the copied file has zero size. Can someone help w

Re: IOException when using "dfs -put"

2014-04-04 Thread Jitendra Yadav
t; Mahmood > > > On Friday, April 4, 2014 9:44 PM, Jitendra Yadav < > jeetuyadav200...@gmail.com> wrote: > >Can you check total running datanodes in your cluster and also > free hdfs space? > > > > > >Thanks > >Jitendra > > > > >

Re: IOException when using "dfs -put"

2014-04-04 Thread Jitendra Yadav
der replicated blocks: 0 > Blocks with corrupt replicas: 0 > Missing blocks: 0 > > - > Datanodes available: 0 (0 total, 0 dead) > > > Regards, > Mahmood > On Friday, April 4, 2014 11:09 PM, Jitendra Yadav < > jeetuyadav200...@gmail.com> wrote: > Y

Re: IOException when using "dfs -put"

2014-04-04 Thread Jitendra Yadav
racker > > > Regards, > Mahmood > > > On Saturday, April 5, 2014 3:39 AM, Chris Mawata > wrote: > >How many machines do you have? This could be because you re - formatted > the Name Node and the >versions are not matching. Your Data Mode would then > be rejected by the Name Node. > >Chris > >On Apr 4, 2014 2:58 PM, "Jitendra Yadav" > wrote: > > >Use jps and check what all processes are running, is this a single node > cluster? > > > >
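
One way to confirm the re-format theory Chris raises is to compare the namespaceID stored on the NN and the DNs (paths illustrative, under your dfs.name.dir and dfs.data.dir):

    grep namespaceID /data/dfs/name/current/VERSION   # on the namenode
    grep namespaceID /data/dfs/data/current/VERSION   # on each datanode
    # If the IDs differ, the DN belongs to a previously formatted namespace;
    # clear the DN data directory (this destroys its blocks) and restart it.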