Re: Avoiding using hostname for YARN nodemanagers

2017-12-05 Thread Vinayakumar B
Hi Alvaro, I think you can configure a custom hostname for docker containers as well. The hostname should be provided during launch of the container using the -h parameter. And with a user-created docker network, DNS resolution of these hostnames among the containers is possible. Provide --network-alias
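A minimal sketch of the launch being described (network, hostname, and image names are hypothetical):

    docker network create hadoop-net
    # -h sets the container hostname; --network-alias adds a DNS name
    # resolvable by other containers on the same user-defined network.
    docker run -d --network hadoop-net \
        -h nm1.example.com --network-alias nm1 \
        my-nodemanager-image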

Re: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Vinayakumar B
I think you might need to change the IP itself. Try something similar to 192.168.1.20 -Vinay On 27 Apr 2017 8:20 pm, "Bhushan Pathak" wrote: > Hello > > I have a 3-node cluster where I have installed hadoop 2.7.3. I have > updated core-site.xml, mapred-site.xml,

[DISCUSS] Retire BKJM from trunk?

2016-07-27 Thread Vinayakumar B
Hi All, BKJM was active and was made quite stable back when NameNode HA was first implemented and QJM did not yet exist. Now QJM is present, is much more stable, and has been adopted by many production environments. I wonder whether it would be a good time to retire BKJM from trunk? Are there

RE: wrong remaining space reported by Data Nodes

2016-07-22 Thread Vinayakumar B
Hi, You might be hitting https://issues.apache.org/jira/browse/HDFS-9530. The fix will arrive in the upcoming 2.7.3 release. ☺ -Vinay From: Ophir Etzion [mailto:op...@foursquare.com] Sent: 22 July 2016 19:03 To: user@hadoop.apache.org Subject: wrong remaining space reported by Data Nodes Hi, I

RE: datanode is unable to connect to namenode

2016-06-30 Thread Vinayakumar B
menode side. -Vinay From: Aneela Saleem [mailto:ane...@platalytics.com] Sent: 30 June 2016 13:24 To: Vinayakumar B <vinayakumar...@huawei.com> Cc: user@hadoop.apache.org Subject: Re: datanode is unable to connect to namenode Thanks Vinayakumar Yes you got it right i was using different pri

RE: datanode is unable to connect to namenode

2016-06-29 Thread Vinayakumar B
Hi Aneela, 1. Looks like you have attached the hdfs-site.xml from the 'hadoop-master' node. For this node the datanode connection is successful, as mentioned in the logs below. 2016-06-29 10:01:35,700 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for

Re: Why do non data nodes need rack awareness?

2016-06-02 Thread Vinayakumar B
The rack awareness feature was introduced to place data blocks distributed among multiple racks, to avoid data loss in case of a whole-rack failure. While reading/writing data blocks, data locality w.r.t. the client is considered to find the closest replica. To know the nearest datanode in terms of
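For reference, a minimal sketch of a rack topology script (the IP-to-rack mapping below is invented), wired in via net.topology.script.file.name in core-site.xml:

    #!/bin/bash
    # Print one rack path per host/IP argument passed in by the NameNode.
    for host in "$@"; do
      case "$host" in
        10.1.1.*) echo "/rack1" ;;
        10.1.2.*) echo "/rack2" ;;
        *)        echo "/default-rack" ;;
      esac
    done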

Re: Eclipse debug HDFS server side code

2016-04-19 Thread Vinayakumar B
un the command? > > If I want to change some code, Could you please explain a little more > about how to debug/run my new modified code? Thanks so much. > > > > On Tue, Apr 19, 2016 at 2:17 PM, Vinayakumar B <vinayakum...@apache.org> > wrote: > >> >>

Fwd: Eclipse debug HDFS server side code

2016-04-19 Thread Vinayakumar B
-Vinay -- Forwarded message -- From: Vinayakumar B <vinayakum...@apache.org> Date: Tue, Apr 19, 2016 at 11:47 PM Subject: Re: Eclipse debug HDFS server side code To: Kun Ren <ren.h...@gmail.com> 1. Since you are debugging remote code, you can't change the code dynami

Re: Eclipse debug HDFS server side code

2016-04-19 Thread Vinayakumar B
Hi Kun Ren, You can follow the steps below. 1. Configure HADOOP_NAMENODE_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=3988" in hadoop-env.sh 2. Start the Namenode 3. The Namenode will now listen on debug port 3988. 4. Configure a remote debug application to connect to :3988 in
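The steps above as a concrete snippet (file locations assume a standard 2.x layout; adjust as needed):

    # In $HADOOP_CONF_DIR/hadoop-env.sh:
    export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS \
      -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=3988"

    # Restart the NameNode, then in Eclipse:
    #   Run > Debug Configurations > Remote Java Application
    #   Host: <namenode-host>  Port: 3988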

[Important] What is the practical maximum HDFS blocksize used in clusters?

2016-02-16 Thread Vinayakumar B
Hi All, Just wanted to know: what are the maximum and practical dfs.block.size values used in production/test clusters? The current default value is 128MB and it can support up to 128TB (yup, right, it's just a configuration value though). I have seen clusters using up to 1G block size for big files.
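For reference, the cluster-wide default is set via dfs.blocksize in hdfs-site.xml (the 1g value is just an illustration; 2.x accepts size suffixes like m and g):

    <property>
      <name>dfs.blocksize</name>
      <value>1g</value>
    </property>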

RE: Trash data after upgrade from 2.7.1 to 2.7.2

2016-02-14 Thread Vinayakumar B
Hi Chef, Can you confirm the points below? 1) Did you upgrade all datanodes to 2.7.2? 2) Did you finalize the upgrade using the following command? Run "hdfs dfsadmin -rollingUpgrade
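The subcommands involved, for reference (the snippet above is cut off; these are the standard rolling-upgrade steps on a 2.7.x cluster):

    hdfs dfsadmin -rollingUpgrade query      # check current upgrade status
    hdfs dfsadmin -rollingUpgrade finalize   # finalize once all nodes run 2.7.2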

RE: Unsubscribe footer for user@h.a.o messages

2015-11-05 Thread Vinayakumar B
+1, Thanks Arpit -Vinay From: Brahma Reddy Battula [mailto:brahmareddy.batt...@hotmail.com] Sent: Friday, November 06, 2015 8:27 AM To: user@hadoop.apache.org Subject: RE: Unsubscribe footer for user@h.a.o messages +1 (non-binding). Nice thought, Arpit. Thanks And Regards Brahma Reddy

Re: Utility to push data into HDFS

2015-11-04 Thread Vinayakumar B
That's cool. -Vinay On Tue, Nov 3, 2015 at 9:34 PM, Shashi Vishwakarma <shashi.vish...@gmail.com > wrote: > Thanks all... It was a cluster issue... It's working for me now :) > On 3 Nov 2015 7:01 am, "Vinayakumar B" <vinayakumar...@huawei.com> wrote: > >>

RE: Authenticating to Kerberos enabled Hadoop cluster using Java

2015-11-02 Thread Vinayakumar B
For simplicity, you can just copy HADOOP_CONF_DIR from one of the cluster's machines and place it in the classpath of the client program. The principal you are using to log in is the client principal. It can be different from the server principal. -Vinay On Nov 2, 2015 22:37, "Vishwakarma, Chhaya" <
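A minimal Java sketch of the client-side login being described (the principal name and keytab path are assumptions):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KerberosClient {
      public static void main(String[] args) throws Exception {
        // Picks up core-site.xml/hdfs-site.xml from the classpath,
        // i.e. the HADOOP_CONF_DIR copied from a cluster machine.
        Configuration conf = new Configuration();
        UserGroupInformation.setConfiguration(conf);
        // Client principal; it need not match any server principal.
        UserGroupInformation.loginUserFromKeytab(
            "client@EXAMPLE.COM", "/etc/security/keytabs/client.keytab");
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Home dir: " + fs.getHomeDirectory());
      }
    }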

RE: Utility to push data into HDFS

2015-11-02 Thread Vinayakumar B
Hi Shashi, Did you copy the conf directory (ex: /etc/hadoop by default) from any of the cluster machines' Hadoop installation, as mentioned in #1 of Andreina's reply below? If the cluster is running successfully with Kerberos enabled, it should have a configuration

RE: DFSClient got deadlock when close file and failed to renew lease

2015-10-16 Thread Vinayakumar B
Looks like this issue is present in the latest code as well. Please file a ticket in Jira, and if you have the fix you can provide the patch as well. -Vinay From: daniedeng(邓飞) [mailto:danied...@tencent.com] Sent: Friday, October 16, 2015 1:15 PM To: hdfs-issues; user@hadoop.apache.org

Re: Problem running example (wrong IP address)

2015-09-28 Thread Vinayakumar B
ISTEN 22944/java >> >> I understand what you're saying about a gateway often existing at that >> address for a subnet. I'm not familiar enough with Vagrant to answer this >> right now, but I will put in a question there. >> >> I can also change the other two IP addres

Re: Problem running example (wrong IP address)

2015-09-28 Thread Vinayakumar B
192.168.51.1 might be the gateway to the 51.* subnet, right? Can you verify whether connections from outside the 51.* subnet to the 51.4 machine use the other subnet's IP as the remote IP? You can create any connection; it need not be namenode-datanode. For example: a connection from the 192.168.52.4 DN to the 192.168.51.4 namenode
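One quick way to check this, independent of Hadoop (the port and tools are assumptions):

    # From the 192.168.52.4 node, open a TCP connection to the NN port:
    nc -v 192.168.51.4 8020
    # On 192.168.51.4, see which source address the connection arrives from:
    netstat -tn | grep 8020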

Simple video for Understanding ErasureCoding

2015-07-28 Thread Vinayakumar B
Sharing a link to a simple video explaining why erasure coding exists and what it is. http://www.intel.com/content/www/us/en/storage/erasure-code-isa-l-solution-video.html Thanks to Intel for such a nice video. Regards, Vinay

RE: how to free up space of the old Data Node

2014-03-19 Thread Vinayakumar B
You can change the replication factor using the following command: hdfs dfs -setrep [-R] <rep> <path>. Once this is done, you can re-commission the datanode; then all the over-replicated blocks will be removed. If they are not removed, restart the datanode. Regards, Vinayakumar B From: Phan, Truong Q
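A concrete instance of the command (the path and replication factor are hypothetical):

    # Reduce replication to 2 recursively under /data:
    hdfs dfs -setrep -R 2 /data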

RE: HA NN Failover question

2014-03-18 Thread Vinayakumar B
Correct, David. sshfence does not handle network unavailability. Since the JournalNodes ensure that only one NN can write, fencing of the old active is handled automatically. So configuring the fence method to shell(/bin/true) should be fine. Regards, Vinayakumar B. From: david marion [mailto:dlmar
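The fencing setting being suggested, as it would appear in hdfs-site.xml:

    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>shell(/bin/true)</value>
    </property>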

RE: File_bytes_read vs hdfs_bytes_read

2014-03-14 Thread Vinayakumar B
It's simple: bytes read from the local file system: File_bytes_read; bytes read from the HDFS file system: hdfs_bytes_read. Regards, Vinayakumar B From: Sai Sai [mailto:saigr...@yahoo.in] Sent: 14 March 2014 14:51 To: user@hadoop.apache.org Subject: File_bytes_read vs hdfs_bytes_read Just wondering what

RE: Reading files from hdfs directory

2014-03-13 Thread Vinayakumar B
Hi Satyam, Check whether your Camel client-side configurations point to the correct NameNode(s). What is the deployment: HA or non-HA? And check whether the same exception is present in the (Active) NameNode logs. If not, then the request is going to some other NameNode. Regards, Vinayakumar B

RE: Wrong FS hdfs://localhost:9000; expected file:///

2014-02-25 Thread Vinayakumar B
in the following way, by constructing a CLASSPATH which includes HADOOP_CONF_DIR: java -cp CLASSPATH MAIN-CLASS args, or simply use hadoop jar test.jar Cheers, Vinayakumar B From: Chris Mawata [mailto:chris.maw...@gmail.com] Sent: 25 February 2014 20:08 To: user@hadoop.apache.org Subject: Re: Wrong
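A sketch of the two launch styles (the class and jar names are hypothetical):

    # Build a classpath that includes HADOOP_CONF_DIR and the Hadoop jars:
    java -cp "$HADOOP_CONF_DIR:$(hadoop classpath):test.jar" com.example.Main args
    # or simply:
    hadoop jar test.jar com.example.Main args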

RE: job failed on hadoop 2

2014-02-24 Thread Vinayakumar B
Hi Anil, I think multiple clients/tasks are trying to write to the same file with overwrite enabled. The second client is overwriting the first client's file, and the first client is getting the below-mentioned exception. Please check. Regards, Vinayakumar B From: AnilKumar B [mailto:akumarb2

RE: job failed on hadoop 2

2014-02-24 Thread Vinayakumar B
. Run the job again, 3. Try to find out the files written by reducers using the hdfs-audit log and find out the exact file which is overwritten before closing. Regards, Vinayakumar B From: AnilKumar B [mailto:akumarb2...@gmail.com] Sent: 24 February 2014 16:15 To: user

RE: No job shown in Hadoop resource manager web UI when running jobs in the cluster

2014-02-23 Thread Vinayakumar B
Send a simple mail to user-unsubscr...@hadoop.apache.org FYI, http://hadoop.apache.org/mailing_lists.html From: Suresh M03 [mailto:suresh@mphasis.com] Sent: 24 February 2014 11:40 To: user@hadoop.apache.org Subject: RE: No job shown in Hadoop

RE: JAVA cannot execute binary file

2014-01-07 Thread Vinayakumar B
/32 bit) as of machine...? Regards, Vinayakumar B From: Mr 0 [mailto:bobwolf...@hotmail.com] Sent: 08 January 2014 10:15 To: user@hadoop.apache.org Subject: RE: JAVA cannot execute binary file I have had, at some point on earlier versions of hadoop: Inside hadoop-env.sh where you set /usr/lib/jvm

RE: How to set hadoop.tmp.dir if I have multiple disks per node?

2013-12-16 Thread Vinayakumar B
configurations are for Hadoop 2.x. Configure different subdirectories if you are using the same disk for multiple processes. Ex: /hadoop/data1/dfs/data and /hadoop/data1/yarn/nm-local-dir Cheers, Vinayakumar B From: Tao Xiao [mailto:xiaotao.cs
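A sketch of that layout (the disk paths are assumptions), split across hdfs-site.xml and yarn-site.xml:

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/hadoop/data1/dfs/data,/hadoop/data2/dfs/data</value>
    </property>

    <!-- yarn-site.xml -->
    <property>
      <name>yarn.nodemanager.local-dirs</name>
      <value>/hadoop/data1/yarn/nm-local-dir,/hadoop/data2/yarn/nm-local-dir</value>
    </property>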

RE: Yarn -- one of the daemons getting killed

2013-12-16 Thread Vinayakumar B
Hi Krishna, Please check the daemons' .out files as well. You may find something there. Cheers, Vinayakumar B From: Krishna Kishore Bonagiri [mailto:write2kish...@gmail.com] Sent: 16 December 2013 16:50 To: user@hadoop.apache.org Subject: Re: Yarn -- one of the daemons getting killed Hi Vinod

RE: MiniDFSCluster setup

2013-12-15 Thread Vinayakumar B
directory in eclipse project. 4. Rebuild hadoop-hdfs and run the test. If any more problems let me know. Cheers, Vinayakumar B From: Karim Awara [mailto:karim.aw...@kaust.edu.sa] Sent: 15 December 2013 22:26 To: user Subject: Re: MiniDFSCluster setup I imported all the projects under the root

RE: two version on the same cluster?

2013-12-11 Thread Vinayakumar B
, YARN_CONF_DIR, YARN_PID_DIR. 3. And start both clusters with different ENV variables set. Thanks and Regards, Vinayakumar B From: Geelong Yao [mailto:geelong...@gmail.com] Sent: 12 December 2013 07:09 To: user@hadoop.apache.org Subject: two version on the same cluster? Hi Everyone
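A sketch of step 3 (the paths are invented): give the second instance its own conf and pid locations before starting its daemons:

    export HADOOP_CONF_DIR=/opt/hadoop-B/etc/hadoop
    export YARN_CONF_DIR=/opt/hadoop-B/etc/hadoop
    export YARN_PID_DIR=/var/run/hadoop-B
    /opt/hadoop-B/sbin/start-yarn.sh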

RE: Compression LZO class not found issue in Hadoop-2.2.0

2013-12-10 Thread Vinayakumar B
Hi Viswa, Sorry for the late reply, Have you restarted NodeManagers after copying the lzo jars to lib? Thanks and Regards, Vinayakumar B From: Viswanathan J [mailto:jayamviswanat...@gmail.com] Sent: 06 December 2013 23:32 To: user@hadoop.apache.org Subject: Compression LZO class not found issue

RE: how to handle the corrupt block in HDFS?

2013-12-10 Thread Vinayakumar B
are killed in between, these files will remain in HDFS showing under-replicated blocks. Thanks and Regards, Vinayakumar B From: ch huang [mailto:justlo...@gmail.com] Sent: 11 December 2013 06:48 To: user@hadoop.apache.org Subject: Re: how to handle the corrupt block in HDFS? By default this higher
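Commands like these (not quoted in the thread, but standard HDFS tooling) help locate such blocks:

    hdfs fsck / -list-corruptfileblocks
    hdfs fsck /path/to/file -files -blocks -locations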

RE: issue about Shuffled Maps in MR job summary

2013-12-10 Thread Vinayakumar B
It looks simple :) Shuffled Maps = Number of Map Tasks * Number of Reducers. Thanks and Regards, Vinayakumar B From: ch huang [mailto:justlo...@gmail.com] Sent: 11 December 2013 10:56 To: user@hadoop.apache.org Subject: issue about Shuffled Maps in MR job summary hi, maillist: i run
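A worked instance of the formula (numbers invented): a job with 10 map tasks and 2 reducers reports Shuffled Maps = 10 * 2 = 20, since every reducer fetches output from every map task.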

RE: error in copy from local file into HDFS

2013-12-05 Thread Vinayakumar B
Hi Ch huang, Please check whether all datanodes in your cluster have enough disk space, and that the number of non-decommissioned datanodes is non-zero. Thanks and regards, Vinayakumar B From: ch huang [mailto:justlo...@gmail.com] Sent: 06 December 2013 07:14 To: user@hadoop.apache.org Subject: error
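A quick way to verify both conditions:

    # Shows capacity/remaining per datanode plus live/decommissioned counts:
    hdfs dfsadmin -report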

RE: Problem viewing a job in hadoop v2 web UI

2013-12-03 Thread Vinayakumar B
should be able to view the Job details in JobHistoryServer once the Job execution is over. Thanks and Regards, Vinayakumar B From: Jian He [mailto:j...@hortonworks.com] Sent: 03 December 2013 12:07 To: user@hadoop.apache.org Subject: Re: Problem viewing a job in hadoop v2 web UI Can you try

RE: Error for larger jobs

2013-11-27 Thread Vinayakumar B
Hi Siddharth, Looks like an issue with one of the machines. Or is it happening on other machines also? I don't think it's a problem with JVM heap memory. I suggest you check this once: http://stackoverflow.com/questions/8384000/java-io-ioexception-error-11 Thanks and Regards, Vinayakumar

Re: Path exception when running from inside IDE.

2013-11-02 Thread Vinayakumar B
Is core-site.xml on your Eclipse classpath? The directory which contains the site XMLs should be on the classpath, not the XML files directly. Make sure fs.defaultFS points to the correct HDFS path. Regards, Vinayakumar B On Nov 2, 2013 5:21 PM, Harsh J ha...@cloudera.com wrote: Your job configuration
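The core-site.xml on that classpath should carry something like this (the host and port are assumptions):

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://namenode-host:8020</value>
    </property>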

Re: UnsupportedOperationException occurs with Hadoop-2.1.0-beta jar files

2013-09-10 Thread Vinayakumar B
and generate code using Protobuf 2.5. Compile and run again. Regards, Vinayakumar B On Sep 10, 2013 8:58 AM, sam liu samliuhad...@gmail.com wrote: This is an env issue. Hadoop-2.1.0-beta upgraded protobuf to 2.5 from 2.4.1, but the version of protobuf in my env is still 2.4.1, so the sqoop unit tests
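A quick sanity check that the protoc on PATH matches what the build expects:

    protoc --version   # expect: libprotoc 2.5.0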

Re: modify hdfs block size

2013-09-10 Thread Vinayakumar B
to decide based on your use case. Regards, Vinayakumar B On Sep 10, 2013 9:02 AM, kun yan yankunhad...@gmail.com wrote: Hi all, can I modify the HDFS data block size to 32MB? I know the default is 64MB. Thanks -- In the Hadoop world, I am just a novice, explore the entire Hadoop ecosystem, I hope one
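Besides changing the cluster default, the block size can be set per file at write time, e.g. (the paths are hypothetical; the value is in bytes, 32 MB shown):

    hdfs dfs -D dfs.blocksize=33554432 -put localfile /data/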

RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

2012-11-16 Thread Vinayakumar B
Hi, If you are moving from non-HA (single master) to HA, then follow the steps below. 1. Configure the other namenode's configuration in the running namenode's and all datanodes' configurations, and configure a logical fs.defaultFS. 2. Configure the shared storage related

RE: how to specify key and value for an input to mapreduce job

2012-02-14 Thread Vinayakumar B
, Vinayakumar B __ From: Vamshi Krishna [vamshi2...@gmail.com] Sent: Tuesday, February 14, 2012 8:28 PM To: mapreduce-user@hadoop.apache.org Subject: how to specify key and value for an input to mapreduce job Hi all, i have a job which read all the rows

RE: Does FileSplit respect the record boundary?

2012-02-10 Thread Vinayakumar B
same as the block size of the input file. If the split size is more than the block size, then the task may need to get the block data from multiple datanodes. Thanks and Regards, Vinayakumar B From: GUOJUN Zhu [mailto:guojun_...@freddiemac.com] Sent: Saturday, February 11, 2012 3:50 AM

RE: Wonky reduce progress

2011-08-19 Thread Vinayakumar B
Please check the defect in the MAPREDUCE Jira: https://issues.apache.org/jira/browse/MAPREDUCE-2264 This happens because compression is enabled for map outputs and the statistics are taken on compressed data instead of the original data. -Original Message- From: Joey Echeverria
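The setting involved, for reference (shown with the 2.x name in mapred-site.xml; the older MR1 name was mapred.compress.map.output):

    <property>
      <name>mapreduce.map.output.compress</name>
      <value>true</value>
    </property>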

Need Help in Setting up the NextGen MapReduce.

2011-08-02 Thread Vinayakumar B
Hi All, I need help in setting up the Next Gen MapReduce. Please provide links to documents/guides, if any, to start setting up the Next Gen MR. Thanks and Regards, Vinayakumar B

RE: Need Help in Setting up the NextGen MapReduce.

2011-08-02 Thread Vinayakumar B
Thanks Praveen. I was able to run a sample Word Count job after reading http://svn.apache.org/repos/asf/hadoop/common/branches/MR-279/mapreduce/INSTALL Thanks and Regards, Vinayakumar B