Re: Hadoop 2.6.0 - No DataNode to stop

2015-03-01 Thread Varun Kumar
1. Stop the service. 2. Change the permissions on the log and pid directories back to the hdfs user. 3. Start the service as hdfs. This will resolve the issue. On Sun, Mar 1, 2015 at 6:40 PM, Daniel Klinger d...@web-computing.de wrote: Thanks for your answer. I put the FQDN of the DataNodes in the
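The three steps above might look like the following on a typical Hadoop 2.6 installation (the hdfs user/group, the /opt/hadoop install path, and the /var/log and /var/run directories are assumptions; adjust to your layout):

```shell
# 1. Stop the datanode (run as root or via sudo)
/opt/hadoop/sbin/hadoop-daemon.sh stop datanode

# 2. Hand the log and pid directories back to the hdfs user
chown -R hdfs:hadoop /var/log/hadoop /var/run/hadoop

# 3. Start the service again as the hdfs user
su - hdfs -c '/opt/hadoop/sbin/hadoop-daemon.sh start datanode'
```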

Re: error: [Errno 113] No route to host cloudera

2015-03-01 Thread Varun Kumar
Stop the iptables service on each datanode. On Sun, Mar 1, 2015 at 12:00 PM, Krish Donald gotomyp...@gmail.com wrote: Hi, I tried hard to debug the issue but nothing worked. I am getting the error [Errno 113] No route to host in the cloudera agent log file. Below are some outputs:
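On the RHEL/CentOS 6 style systems Cloudera Manager of that era typically ran on, that would be (run as root on each datanode):

```shell
# Stop the firewall now and keep it off across reboots
service iptables stop
chkconfig iptables off
```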

Re: java.net.UnknownHostException on one node only

2015-02-22 Thread Varun Kumar
Hi Tariq, This looks like a DNS configuration issue. On Sun, Feb 22, 2015 at 3:51 PM, tesm...@gmail.com wrote: I am getting a java.net.UnknownHostException continuously on one node during Hadoop MapReduce execution. That node is accessible via SSH. This node is shown in yarn node
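A few quick checks for that kind of DNS problem, run on the failing node (the cluster hostnames themselves are deployment-specific):

```shell
# What name does this node think it has?
hostname -f

# Does local resolution work? (consults /etc/hosts and configured resolvers)
getent hosts localhost

# Repeat for each cluster host, e.g.: getent hosts master.example.com
```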

Re: Delete a folder name containing *

2014-08-21 Thread varun kumar
Make sure the namenode is not in safe mode. On Wed, Aug 20, 2014 at 6:53 AM, praveenesh kumar praveen...@gmail.com wrote: Hi team, I am in a weird situation where I have the following HDFS sample folders /data/folder/ /data/folder* /data/folder_day /data/folder_day/monday /data/folder/1
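Checking safe mode, and one way to delete the directory literally named `/data/folder*` without the shell or HDFS expanding the glob (the backslash escape is HDFS path-glob syntax; verify on your version against a harmless path before running this on real data):

```shell
# Confirm the namenode is out of safe mode
hadoop dfsadmin -safemode get

# Quote for the shell AND escape for the HDFS glob matcher,
# so only the directory literally named 'folder*' is removed
hadoop fs -rmr '/data/folder\*'
```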

Re: A Datanode shutdown question?

2014-07-02 Thread varun kumar
Generally a datanode sends its heartbeat to the namenode every 3 seconds. If the namenode stops receiving heartbeats for long enough (about 10.5 minutes with the default settings), it marks the datanode as dead. On Wed, Jul 2, 2014 at 5:03 PM, MrAsanjar . afsan...@gmail.com wrote: If a datanode is shut-downed by calling
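With the default settings the arithmetic works out to roughly 10.5 minutes, not 3 seconds: the namenode declares a datanode dead after 2 × dfs.namenode.heartbeat.recheck-interval + 10 × dfs.heartbeat.interval. A quick sketch:

```shell
# HDFS defaults: recheck interval 300000 ms, heartbeat interval 3 s
recheck_ms=300000
heartbeat_s=3
dead_s=$(( 2 * recheck_ms / 1000 + 10 * heartbeat_s ))
echo "datanode declared dead after ${dead_s} s"   # 630 s = 10.5 min
```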

Re: HDFS undo Overwriting

2014-06-02 Thread varun kumar
Nope. Sorry :( On Mon, Jun 2, 2014 at 1:31 PM, Amjad ALSHABANI ashshab...@gmail.com wrote: Thanks Zesheng, I should admit that I'm not an expert in Hadoop infrastructure, but I have heard my colleagues talking about HDFS replicas. Couldn't those help in retrieving the lost data? Amjad

Re: Hadoop property precedence

2013-07-14 Thread varun kumar
What Shumin said is correct: the Hadoop configuration was overridden by the client application. We faced a similar issue, where the default replication factor was set to 2 in the Hadoop configuration, but whenever the client application wrote a file, it ended up with 3 copies in

Re: Decomssion datanode - no response

2013-07-05 Thread varun kumar
Try giving IPaddressofDatanode:50010. On Fri, Jul 5, 2013 at 12:25 PM, Azuryy Yu azury...@gmail.com wrote: I filed this issue at: https://issues.apache.org/jira/browse/HDFS-4959 On Fri, Jul 5, 2013 at 1:06 PM, Azuryy Yu azury...@gmail.com wrote: The client doesn't have any connection problem.

Re: Hadoop Master node migration

2013-06-26 Thread varun kumar
Hi Manickam, You need to copy the metadata as well. This works. Regards, Varun Kumar.P On Wed, Jun 26, 2013 at 11:47 AM, Manickam P manicka...@outlook.com wrote: Hi, I want to move my master node alone from one server to another server. If I copy all the tmp, data directory and log
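A rough sketch of the move, assuming dfs.name.dir is /opt/hadoop/name and new-master is the target host (both placeholders), with the namenode stopped first:

```shell
# On the old master, with the namenode stopped, archive the metadata dir
tar czf namenode-meta.tar.gz -C /opt/hadoop name

# Copy to the new master and unpack into the same dfs.name.dir path
scp namenode-meta.tar.gz new-master:/opt/hadoop/
ssh new-master 'cd /opt/hadoop && tar xzf namenode-meta.tar.gz'
```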

Re:

2013-06-26 Thread varun kumar
Is your namenode working? On Wed, Jun 26, 2013 at 12:38 PM, ch huang justlo...@gmail.com wrote: hi, I built a new hadoop cluster, but I cannot access HDFS. Why? I use CDH3u4, RedHat 6.2. # hadoop fs -put /opt/test hdfs://192.168.10.22:9000/user/test 13/06/26 15:00:47 INFO ipc.Client:

Re: datanode can not start

2013-06-26 Thread varun kumar
Hi huang, Some other service is running on the port, or you did not stop the datanode service properly. Regards, Varun Kumar.P On Wed, Jun 26, 2013 at 3:13 PM, ch huang justlo...@gmail.com wrote: i have running old cluster datanode,so it exist some conflict, i changed default

Re: Adding new name node location

2013-04-17 Thread varun kumar
Hi Henry, As per your mail, point number 1 is correct. After making these changes, the metadata will be written to the new partition. Regards, Varun Kumar.P On Wed, Apr 17, 2013 at 11:32 AM, Henry Hung ythu...@winbond.com wrote: Hi Everyone, I'm using Hadoop 1.0.4 and only define 1

Re: are we able to decommission multi nodes at one time?

2013-04-01 Thread varun kumar
How many nodes do you have, and what is their replication factor?

Re: For a new installation: use the BackupNode or the CheckPointNode?

2013-03-23 Thread varun kumar
Hope the link below will be useful: http://hadoop.apache.org/docs/stable/hdfs_user_guide.html On Sat, Mar 23, 2013 at 12:29 PM, David Parks davidpark...@yahoo.com wrote: For a new installation of the current stable build (1.1.2), is there any reason to use the CheckPointNode over the

Re: DataXceiver error processing WRITE_BLOCK operation src: /x.x.x.x:50373 dest: /x.x.x.x:50010

2013-03-08 Thread varun kumar
Hi Dhana, Increase the ulimit for all the datanodes. If you are starting the service as the hadoop user, increase the ulimit value for that user. Make the changes in the following file: /etc/security/limits.conf Example: hadoop soft nofile 35000 hadoop hard
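The complete limits.conf entries might look like this (35000 is just the value from the thread; size it to your datanode's block and connection count):

```
# /etc/security/limits.conf — raise open-file limits for the hadoop user
hadoop soft nofile 35000
hadoop hard nofile 35000
```

The new limits take effect on the next login session for that user, so restart the datanode from a fresh session and confirm with `ulimit -n`.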

Re: How to solve : Little bit urgent delivery (cluster-1(Hbase)--- cluster-2(HDFS))

2013-03-01 Thread varun kumar
Use HBase export and import for migrating data from one cluster to another. On Fri, Mar 1, 2013 at 2:36 PM, samir das mohapatra samir.help...@gmail.com wrote: Hi All, Problem Statement: 1) We have two clusters, for example i) cluster-1 ii) cluster-2 There
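A minimal sketch of that workflow, assuming a table named mytable and namenodes reachable on port 8020 (all hypothetical; run against your own endpoints):

```shell
# On cluster-1: dump the table to a sequence-file directory on HDFS
hbase org.apache.hadoop.hbase.mapreduce.Export mytable /backup/mytable

# Copy the dump across clusters
hadoop distcp hdfs://cluster-1:8020/backup/mytable hdfs://cluster-2:8020/backup/mytable

# On cluster-2: create the table with the same column families, then load the dump
hbase org.apache.hadoop.hbase.mapreduce.Import mytable /backup/mytable
```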

Re: Prolonged safemode

2013-01-20 Thread varun kumar
Hi Tariq, When you start your namenode, is it able to come out of safemode automatically? If not, there are under-replicated or corrupted blocks that the namenode is trying to fetch. Try to remove the corrupted blocks. Regards, Varun Kumar.P On Sun, Jan 20, 2013 at 4:05 AM, Mohammad
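Commands for inspecting and clearing that state; note that `fsck -delete` permanently removes the files owning the corrupt blocks, so review the corrupt-file list first:

```shell
# Why is the namenode stuck? Check overall block health first
hadoop fsck /

# List only the files with corrupt or missing blocks
hadoop fsck / -list-corruptfileblocks

# If those files are expendable, delete them and leave safe mode manually
hadoop fsck / -delete
hadoop dfsadmin -safemode leave
```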

Re: On a lighter note

2013-01-18 Thread varun kumar
:) :) On Fri, Jan 18, 2013 at 7:08 PM, shashwat shriparv dwivedishash...@gmail.com wrote: :) ∞ Shashwat Shriparv On Fri, Jan 18, 2013 at 6:43 PM, Fabio Pitzolu fabio.pitz...@gr-ci.com wrote: Someone should make one about unsubscribing from this mailing list! :D *Fabio Pitzolu*

Re: Problems starting secondarynamenode in hadoop 1.0.3

2012-06-26 Thread varun kumar
Hi Jeff, Instead of localhost, mention the hostname of the primary namenode. On Wed, Jun 27, 2012 at 3:46 AM, Jeffrey Silverman jeffsilver...@google.com wrote: I am working with hadoop for the first time, and I am following the instructions at
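For Hadoop 1.0.3 the relevant setting on the secondarynamenode host is fs.default.name in core-site.xml; it should carry the namenode's real hostname rather than localhost (namenode.example.com and port 9000 are placeholders):

```xml
<!-- core-site.xml on the secondarynamenode host -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.example.com:9000</value>
</property>
```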

decommissioning datanodes

2012-06-12 Thread varun kumar
Hi All, I want to remove nodes from my cluster *gracefully*. I added the following lines to my hdfs-site.xml: <property><name>dfs.hosts.exclude</name><value>/opt/hadoop/conf/exclude</value></property> In the exclude file I have mentioned the hostname of the datanode. Then I run 'hadoop dfsadmin
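After the exclude file is in place, the remaining step is to tell the namenode to re-read it; the full sequence looks roughly like this (datanode-03.example.com is a placeholder host):

```shell
# hdfs-site.xml must point dfs.hosts.exclude at this file,
# which lists one datanode hostname per line
echo 'datanode-03.example.com' >> /opt/hadoop/conf/exclude

# Ask the namenode to re-read the exclude list and begin decommissioning
hadoop dfsadmin -refreshNodes

# Watch progress; the node shows "Decommission in progress" until its
# blocks have been re-replicated elsewhere
hadoop dfsadmin -report
```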

HDFS Files Deleted

2012-04-26 Thread varun kumar
Dear All, By mistake I have deleted files from HDFS using the command: hadoop dfs -rmr /* Is there any way to retrieve the deleted data? -- Regards, Varun Kumar.P

Re: HDFS Files Deleted

2012-04-26 Thread varun kumar
If you did not use -skipTrash, the file should be in your trash. Refer to http://hadoop.apache.org/common/docs/current/hdfs_design.html#File+Deletes+and+Undeletes for more information. From: varun kumar varun@gmail.com Reply-To: hdfs-user@hadoop.apache.org Date: Thu, 26
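Restoring from the trash is then just a move back out of the per-user trash directory (the paths below assume the default trash layout and a hypothetical user varun):

```shell
# Deleted files land under /user/<user>/.Trash/Current, keeping their original paths
hadoop fs -ls /user/varun/.Trash/Current

# Move a file back to where it lived before the delete
hadoop fs -mv /user/varun/.Trash/Current/data/report.txt /data/report.txt
```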