Re: Re: unsubscribe

2020-11-13 Thread Varun Kumar
unsubscribe

On Tue, Nov 10, 2020 at 9:22 PM Man-Young Goo  wrote:

> unsubscribe
>
> Thanks.
> 
> Manyoung Goo
>
> E-mail : my...@nate.com
> Tel : +82-2-360-1590
>
> *-- Original Message --*
>
> *Date:* Thursday, Sep 17, 2020 02:11:28 AM
> *From:* "Niketh Nikky" 
> *To:* "Dhiraj Choudhary" 
> *Cc:* 
> *Subject:* Re: unsubscribe
>
> unsubscribe
>
>
>
> Thanks
> Niketh
>
> > On Sep 16, 2020, at 12:10 PM, Dhiraj Choudhary <
> dhiraj.k.choudh...@gmail.com> wrote:
> >
> > 
> >
> >
> > --
> > Dhiraj Choudhary
> > Bangalore, India
> >
>
> -
> To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: user-h...@hadoop.apache.org
>
>
>
>

-- 
Regards,
Varun Kumar.P


Re: Hadoop 2.6.0 - No DataNode to stop

2015-03-01 Thread Varun Kumar
1. Stop the service.

2. Change the ownership of the log and pid directories back to the hdfs user.

3. Start the service as the hdfs user.

This will resolve the issue.
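
A minimal sketch of step 2, assuming the default log and pid locations (adjust
the paths to match your installation):

  # run as root on the affected datanode
  chown -R hdfs:hadoop /var/log/hadoop /var/run/hadoop

  # then start the datanode as the hdfs user
  su - hdfs -c '$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode'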

On Sun, Mar 1, 2015 at 6:40 PM, Daniel Klinger d...@web-computing.de wrote:

 Thanks for your answer.



 I put the FQDN of the DataNodes in the slaves file on each node (one FQDN
 per line). Here’s the full DataNode log after the start (the log of the
 other DataNode is exactly the same):



 2015-03-02 00:29:41,841 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal
 handlers for [TERM, HUP, INT]

 2015-03-02 00:29:42,207 INFO
 org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
 hadoop-metrics2.properties

 2015-03-02 00:29:42,312 INFO
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
 period at 10 second(s).

 2015-03-02 00:29:42,313 INFO
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
 started

 2015-03-02 00:29:42,319 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is
 hadoop.klinger.local

 2015-03-02 00:29:42,327 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with
 maxLockedMemory = 0

 2015-03-02 00:29:42,350 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
 /0.0.0.0:50010

 2015-03-02 00:29:42,357 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
 1048576 bytes/s

 2015-03-02 00:29:42,358 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for
 balancing is 5

 2015-03-02 00:29:42,458 INFO org.mortbay.log: Logging to
 org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
 org.mortbay.log.Slf4jLog

 2015-03-02 00:29:42,462 INFO org.apache.hadoop.http.HttpRequestLog: Http
 request log for http.requests.datanode is not defined

 2015-03-02 00:29:42,474 INFO org.apache.hadoop.http.HttpServer2: Added
 global filter 'safety'
 (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)

 2015-03-02 00:29:42,476 INFO org.apache.hadoop.http.HttpServer2: Added
 filter static_user_filter
 (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
 context datanode

 2015-03-02 00:29:42,476 INFO org.apache.hadoop.http.HttpServer2: Added
 filter static_user_filter
 (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
 context logs

 2015-03-02 00:29:42,476 INFO org.apache.hadoop.http.HttpServer2: Added
 filter static_user_filter
 (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
 context static

 2015-03-02 00:29:42,494 INFO org.apache.hadoop.http.HttpServer2:
 addJerseyResourcePackage:
 packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources,
 pathSpec=/webhdfs/v1/*

 2015-03-02 00:29:42,499 INFO org.mortbay.log: jetty-6.1.26

 2015-03-02 00:29:42,555 WARN org.mortbay.log: Can't reuse
 /tmp/Jetty_0_0_0_0_50075_datanodehwtdwq, using
 /tmp/Jetty_0_0_0_0_50075_datanodehwtdwq_3168831075162569402

 2015-03-02 00:29:43,205 INFO org.mortbay.log: Started HttpServer2$
 SelectChannelConnectorWithSafeStartup@0.0.0.0:50075

 2015-03-02 00:29:43,635 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = hdfs

 2015-03-02 00:29:43,635 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup

 2015-03-02 00:29:43,802 INFO org.apache.hadoop.ipc.CallQueueManager: Using
 callQueue class java.util.concurrent.LinkedBlockingQueue

 2015-03-02 00:29:43,823 INFO org.apache.hadoop.ipc.Server: Starting Socket
 Reader #1 for port 50020

 2015-03-02 00:29:43,875 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /
 0.0.0.0:50020

 2015-03-02 00:29:43,913 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received
 for nameservices: null

 2015-03-02 00:29:43,953 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices
 for nameservices: default

 2015-03-02 00:29:43,973 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool registering
 (Datanode Uuid unassigned) service to hadoop.klinger.local/10.0.1.148:8020
 starting to offer service

 2015-03-02 00:29:43,981 INFO org.apache.hadoop.ipc.Server: IPC Server
 Responder: starting

 2015-03-02 00:29:43,982 INFO org.apache.hadoop.ipc.Server: IPC Server
 listener on 50020: starting

 2015-03-02 00:29:44,620 INFO org.apache.hadoop.hdfs.server.common.Storage:
 DataNode version: -56 and NameNode layout version: -60

 2015-03-02 00:29:44,641 INFO org.apache.hadoop.hdfs.server.common.Storage:
 Lock on /cluster/storage/datanode/in_use.lock acquired by nodename
 1660@hadoop.klinger.local

 2015-03-02 00:29:44,822 INFO org.apache.hadoop.hdfs.server.common.Storage:
 Analyzing storage directories for bpid BP-158097147-10.0.1.148-1424966425688

 2015-03-02 00:29:44,822 INFO org.apache.hadoop.hdfs.server.common.Storage:
 Locking is disabled

 2015-03-02 00:29:44,825 INFO 

Re: error: [Errno 113] No route to host cloudera

2015-03-01 Thread Varun Kumar
Stop the iptables service on each datanode.
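
For example, on a RHEL/CentOS style system (a sketch; run it on every affected
node):

  service iptables stop     # stop the firewall now
  chkconfig iptables off    # keep it off across reboots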

On Sun, Mar 1, 2015 at 12:00 PM, Krish Donald gotomyp...@gmail.com wrote:

 Hi,

 I tried hard to debug the issue but nothing worked.
 I am getting error: [Errno 113] No route to host cloudera in cloudera
 agent log file.

 Below are some output :

 [root@snncloudera cloudera-scm-agent]# more /etc/hosts
 127.0.0.1   localhost localhost.localdomain localhost4
 localhost4.localdomain4
 #::1 localhost localhost.localdomain localhost6
 localhost6.localdomain6
 ipaddress1  nncloudera.my.com nncloudera
 ipaddress2 s1cloudera.my.com s1cloudera
 ipaddress3 s2cloudera.my.com s2cloudera
 ipaddress4 s3cloudera.my.com s3cloudera
 ipaddress5 s4cloudera.my.com s4cloudera
 ipaddress6 hanncloudera.my.com hanncloudera
 ipaddress7 snncloudera.my.com snncloudera

 Here the ipaddresses are the correct IP addresses, which I got from ifconfig.




  more /etc/sysconfig/network
 NETWORKING=yes
 HOSTNAME=snncloudera.my.com


 [root@snncloudera cloudera-scm-agent]# host -v -t A `hostname`
 Trying snncloudera.localdomain
 ;; connection timed out; trying next origin
 Trying snncloudera
 Host snncloudera not found: 3(NXDOMAIN)
 Received 104 bytes from 192.xxx.xx.x#53 in 68 ms


 I am not sure what is wrong.

 Please guide .







-- 
Regards,
Varun Kumar.P


Re: java.net.UnknownHostException on one node only

2015-02-22 Thread Varun Kumar
Hi Tariq,

This looks like a DNS configuration issue.
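
A quick check to run on the failing node (a sketch; the host name 101-master10
is taken from your log):

  getent hosts 101-master10        # does the name resolve at all?
  grep 101-master10 /etc/hosts     # is there an entry on this node?

If the lookup fails, add the host to /etc/hosts (or to your DNS) on all nodes
so that every node can resolve every other node's name.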


On Sun, Feb 22, 2015 at 3:51 PM, tesm...@gmail.com tesm...@gmail.com
wrote:

 I am getting a java.net.UnknownHostException continuously on one node during
 Hadoop MapReduce execution.

 That node is accessible via SSH. This node is shown in the yarn node -list
 and hdfs dfsadmin -report queries.

 Below is the log from execution

 15/02/22 20:17:42 INFO mapreduce.Job: Task Id :
 attempt_1424622614381_0008_m_43_0, Status : FAILED
 Container launch failed for container_1424622614381_0008_01_16 :
 java.lang.IllegalArgumentException: java.net.UnknownHostException:
 101-master10
 at
 org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
 at
 org.apache.hadoop.security.SecurityUtil.setTokenService(SecurityUtil.java:352)
 at
 org.apache.hadoop.yarn.util.ConverterUtils.convertFromYarn(ConverterUtils.java:237)
 at
 org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.newProxy(ContainerManagementProtocolProxy.java:218)
 at
 org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.init(ContainerManagementProtocolProxy.java:196)
 at
 org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy.getProxy(ContainerManagementProtocolProxy.java:117)
 at
 org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl.getCMProxy(ContainerLauncherImpl.java:403)
 at
 org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:138)
 at
 org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.net.UnknownHostException: 101-master10
 ... 12 more



 15/02/22 20:17:44 INFO

 Regards,
 Tariq




-- 
Regards,
Varun Kumar.P


Re: Delete a folder name containing *

2014-08-21 Thread varun kumar
Make sure the namenode is not in safe mode.
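
A sketch of how to check and, if necessary, leave safe mode:

  hadoop dfsadmin -safemode get     # prints whether safe mode is ON or OFF
  hadoop dfsadmin -safemode leave   # force the namenode out of safe mode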


On Wed, Aug 20, 2014 at 6:53 AM, praveenesh kumar praveen...@gmail.com
wrote:

 Hi team

 I am in weird situation where I have  following HDFS sample folders

 /data/folder/
 /data/folder*
 /data/folder_day
 /data/folder_day/monday
 /data/folder/1
 /data/folder/2

 I want to delete /data/folder* without deleting its sub_folders. If I do
 hadoop fs -rmr /data/folder* it will delete everything which I want to
 avoid. I tried with escape character \ but HDFS FS shell is not taking it.
 Any hints/tricks ?


 Regards
 Praveenesh




-- 
Regards,
Varun Kumar.P


Re: A Datanode shutdown question?

2014-07-02 Thread varun kumar
Generally a datanode sends its heartbeat to the namenode every 3 seconds.

If the namenode stops receiving heartbeats from a datanode for longer than the
configured timeout (about 10.5 minutes with the default settings), it marks
that datanode as dead.
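
As a rough sketch of how that timeout is derived (property names as in Hadoop
2.x; treat the exact names and defaults as assumptions for your version):

  dead-node timeout = 2 * dfs.namenode.heartbeat.recheck-interval
                      + 10 * dfs.heartbeat.interval
                    = 2 * 300 s + 10 * 3 s = 630 s  (about 10.5 minutes)

So the namenode does not rely on a shutdown message from the stopped datanode;
it simply notices the missing heartbeats once this window has passed.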


On Wed, Jul 2, 2014 at 5:03 PM, MrAsanjar . afsan...@gmail.com wrote:

 If a datanode is shut down by calling hadoop-daemons.sh stop datanode,
 how does the namenode get notified that the datanode is no longer active?
 Does the datanode send a SHUTDOWN_MSG to the namenode, or does the namenode
 have to wait for the heartbeat timeout?




-- 
Regards,
Varun Kumar.P


Re: HDFS undo Overwriting

2014-06-02 Thread varun kumar
Nope.

Sorry :(


On Mon, Jun 2, 2014 at 1:31 PM, Amjad ALSHABANI ashshab...@gmail.com
wrote:

 Thanx Zesheng,

  I should admit that I'm not an expert in Hadoop infrastructure, but I have
  heard my colleagues talking about HDFS replicas.
  Couldn't those help in retrieving the lost data?

 Amjad


 On Fri, May 30, 2014 at 1:44 PM, Zesheng Wu wuzeshen...@gmail.com wrote:

  I am afraid this cannot be undone; in HDFS, only data that was deleted by
  the DFS client and went into the trash can be recovered.


 2014-05-30 18:18 GMT+08:00 Amjad ALSHABANI ashshab...@gmail.com:

 Hello Everybody,

  I've made a mistake when writing to HDFS. I created a new database in Hive,
  giving it a location on HDFS, but I found that it removed all the other data
  that already existed there.

 =
 before creation, the directory on HDFS contains :
 pns@app11:~$ hadoop fs -ls /user/hive/warehouse
 Found 25 items
 drwxr-xr-x   - user1 supergroup  0 2013-11-20 13:40
  /user/hive/warehouse/dfy_ans_autres
 drwxr-xr-x   - user1 supergroup  0 2013-11-20 13:40
 /user/hive/warehouse/dfy_ans_maillog
 drwxr-xr-x   - user1 supergroup  0 2013-11-20 14:28
 /user/hive/warehouse/dfy_cnx
 drwxr-xr-x   - user2   supergroup  0 2014-05-30 06:05
 /user/hive/warehouse/pns.db
 drwxr-xr-x   - user2  supergroup  0 2014-02-24 17:00
 /user/hive/warehouse/pns_fr_integ
 drwxr-xr-x   - user2  supergroup  0 2014-05-06 15:33
 /user/hive/warehouse/pns_logstat.db
 ...
 ...
 ...


  hive -e "CREATE DATABASE my_stats LOCATION 'hdfs://:9000/user/hive/warehouse/mystats.db'"

 but now I couldn't see the other directories on HDFS:

 pns@app11:~/aalshabani$ hls /user/hive/warehouse
 Found 1 items
 drwxr-xr-x   - user2 supergroup  0 2014-05-30 11:37
  /user/hive/warehouse/mystats.db


 Is there anyway I could restore the other directories??


 Best regards.




 --
 Best Wishes!

 Yours, Zesheng





-- 
Regards,
Varun Kumar.P


Re: Hadoop property precedence

2013-07-14 Thread varun kumar
What Shumin said is correct: the Hadoop configuration was overridden by the
client application.

We have faced a similar issue, where the default replication factor was set to
2 in the Hadoop configuration, but whenever the client application wrote a
file, it ended up with 3 copies in the cluster. On checking the client
application, its default replication factor was 3.
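
As an illustration of the same behaviour for the block size question above (a
sketch; the file names are placeholders, and the property is spelled
dfs.block.size on Hadoop 1.x and dfs.blocksize on 2.x):

  # values passed in the client's configuration win for client-side properties
  hadoop fs -D dfs.block.size=67108864 -D dfs.replication=2 \
      -put localfile.txt /user/test/localfile.txt

Block size and replication factor are read from the configuration of the
process that writes the file, not from the namenode or datanode configuration.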


On Sun, Jul 14, 2013 at 4:51 AM, Shumin Guo gsmst...@gmail.com wrote:

 I Think the client side configuration will take effect.

 Shumin
 On Jul 12, 2013 11:50 AM, Shalish VJ shalis...@yahoo.com wrote:

 Hi,


  Suppose the block size set in the configuration file on the client side is 64MB,
  the block size set in the configuration file on the name node side is 128MB, and
  the block size set in the configuration file on the datanode side is something else.
  Please advise: if the client is writing a file to HDFS, which property
  would take effect?

 Thanks,
 Shalish.




-- 
Regards,
Varun Kumar.P


Re: Decomssion datanode - no response

2013-07-05 Thread varun kumar
Try giving <IP address of the datanode>:50010 in the exclude file.
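
A sketch of what that looks like (the IP address below is only a placeholder):

  # contents of /usr/local/hadoop/conf/dfs_exclude
  192.168.1.101:50010

  # then ask the namenode to re-read the exclude file
  hdfs dfsadmin -refreshNodes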


On Fri, Jul 5, 2013 at 12:25 PM, Azuryy Yu azury...@gmail.com wrote:

 I filed this issue at :
 https://issues.apache.org/jira/browse/HDFS-4959


 On Fri, Jul 5, 2013 at 1:06 PM, Azuryy Yu azury...@gmail.com wrote:

 Client hasn't any connection problem.


 On Fri, Jul 5, 2013 at 12:46 PM, Devaraj k devara...@huawei.com wrote:

  And also could you check whether the client is connecting to NameNode
 or any failure in connecting to the NN.


 Thanks

 Devaraj k


 *From:* Azuryy Yu [mailto:azury...@gmail.com]
 *Sent:* 05 July 2013 09:15

 *To:* user@hadoop.apache.org
 *Subject:* Re: Decomssion datanode - no response


 I added dfs.hosts.exclude before NN started.

  

  and I updated /usr/local/hadoop/conf/dfs_exclude with the new hosts, but it
  doesn't decommission.


 On Fri, Jul 5, 2013 at 11:39 AM, Devaraj k devara...@huawei.com wrote:
 

 When did you add this configuration in NN conf? 

   <property>
     <name>dfs.hosts.exclude</name>
     <value>/usr/local/hadoop/conf/dfs_exclude</value>
   </property>

  

  If you have added this configuration after starting the NN, it won't take
  effect and you need to restart the NN.

  

  If you have added this config with the exclude file before the NN started, you
  can update the file with the new hosts and issue the refreshNodes command;
  the newly added DNs will then be decommissioned.

  

 Thanks

 Devaraj k

  

 *From:* Azuryy Yu [mailto:azury...@gmail.com]
 *Sent:* 05 July 2013 08:48
 *To:* user@hadoop.apache.org
 *Subject:* Re: Decomssion datanode - no response

  

 Thanks Devaraj,

  

  There are no related logs in the NN log or the DN log.

  

 On Fri, Jul 5, 2013 at 11:14 AM, Devaraj k devara...@huawei.com wrote:
 

 Do you see any log related to this in Name Node logs when you issue
 refreshNodes dfsadmin command?

  

 Thanks

 Devaraj k

  

 *From:* Azuryy Yu [mailto:azury...@gmail.com]
 *Sent:* 05 July 2013 08:12
 *To:* user@hadoop.apache.org
 *Subject:* Decomssion datanode - no response

  

 Hi,

 I am using hadoop-2.0.5-alpha, and I added 5 datanodes into dfs_exclude,
 

  

 hdfs-site.xml:

   <property>
     <name>dfs.hosts.exclude</name>
     <value>/usr/local/hadoop/conf/dfs_exclude</value>
   </property>

  

 then:

 hdfs dfsadmin -refreshNodes

  

  but no decommissioning nodes are shown on the web UI, and there are no
  related logs in the datanode log. What's wrong?

  







-- 
Regards,
Varun Kumar.P


Re: Hadoop Master node migration

2013-06-26 Thread varun kumar
Hi Manickam,

You need to copy the metadata also.

This works.
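
A minimal sketch of copying the metadata, assuming dfs.name.dir points at
/home/hadoop/hadoop-data/namenode (adjust to your own configuration):

  # on the old master, with the namenode stopped
  rsync -a /home/hadoop/hadoop-data/namenode/ newmaster:/home/hadoop/hadoop-data/namenode/

Copy the configuration directory as well, keep the same host name as you
planned, and then start the daemons on the new server.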

Regards,
Varun Kumar.P


On Wed, Jun 26, 2013 at 11:47 AM, Manickam P manicka...@outlook.com wrote:

 Hi,

 I want to move my master node alone from one server to another server.
 If i copy all the tmp, data directory and log information everything to
 the new server which has the same host name will it work properly?
 If not how should i do this server movement.
 Please help me to know this.


 Thanks,
 Manickam P




-- 
Regards,
Varun Kumar.P


Re:

2013-06-26 Thread varun kumar
Is your namenode working?
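
A couple of quick checks to run on 192.168.10.22 (a sketch):

  jps                              # a NameNode process should be listed
  netstat -tlnp | grep 9000        # something should be listening on port 9000

If nothing is listening on 9000, check the namenode log and the fs.default.name
value in core-site.xml, and start the namenode before retrying the put.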


On Wed, Jun 26, 2013 at 12:38 PM, ch huang justlo...@gmail.com wrote:

  Hi, I built a new Hadoop cluster, but I cannot access HDFS. Why? I am using
  CDH3u4 and RedHat 6.2.

 # hadoop fs -put /opt/test hdfs://192.168.10.22:9000/user/test
 13/06/26 15:00:47 INFO ipc.Client: Retrying connect to server: /
 192.168.10.22:9000. Already tried 0 time(s).
 13/06/26 15:00:48 INFO ipc.Client: Retrying connect to server: /
 192.168.10.22:9000. Already tried 1 time(s).
 13/06/26 15:00:49 INFO ipc.Client: Retrying connect to server: /
 192.168.10.22:9000. Already tried 2 time(s).
 13/06/26 15:00:50 INFO ipc.Client: Retrying connect to server: /
 192.168.10.22:9000. Already tried 3 time(s).
 13/06/26 15:00:51 INFO ipc.Client: Retrying connect to server: /
 192.168.10.22:9000. Already tried 4 time(s).
 13/06/26 15:00:52 INFO ipc.Client: Retrying connect to server: /
 192.168.10.22:9000. Already tried 5 time(s).
 13/06/26 15:00:53 INFO ipc.Client: Retrying connect to server: /
 192.168.10.22:9000. Already tried 6 time(s).
 13/06/26 15:00:54 INFO ipc.Client: Retrying connect to server: /
 192.168.10.22:9000. Already tried 7 time(s).
 13/06/26 15:00:55 INFO ipc.Client: Retrying connect to server: /
 192.168.10.22:9000. Already tried 8 time(s).
 13/06/26 15:00:56 INFO ipc.Client: Retrying connect to server: /
 192.168.10.22:9000. Already tried 9 time(s).
 put: Call to /192.168.10.22:9000 failed on connection exception:
 java.net.ConnectException: Connection refused




-- 
Regards,
Varun Kumar.P


Re: datanode can not start

2013-06-26 Thread varun kumar
Hi Huang,

Some other service is already running on that port, or the previous datanode
instance was not stopped properly.
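
Your log shows the bind failure happening on the web UI port 50075, which you
did not change. A sketch of how to check and work around it (running a second
datanode on the same host is an assumption based on your description):

  # see which process already owns the port
  netstat -tlnp | grep 50075

If you really need two datanodes on one host, give the second one its own HTTP
and IPC ports in hdfs-site.xml, for example:

  <property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:50076</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:50021</value>
  </property>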
Regards,
Varun Kumar.P


On Wed, Jun 26, 2013 at 3:13 PM, ch huang justlo...@gmail.com wrote:

  I have a datanode from an old cluster still running, so there is a conflict. I
  changed the default port; here is my hdfs-site.xml:


 <configuration>
   <property>
     <name>dfs.name.dir</name>
     <value>/data/hadoopnamespace</value>
   </property>
   <property>
     <name>dfs.data.dir</name>
     <value>/data/hadoopdata</value>
   </property>
   <property>
     <name>dfs.datanode.address</name>
     <value>0.0.0.0:50011</value>
   </property>
   <property>
     <name>dfs.permissions</name>
     <value>false</value>
   </property>
   <property>
     <name>dfs.datanode.max.xcievers</name>
     <value>4096</value>
   </property>
   <property>
     <name>dfs.webhdfs.enabled</name>
     <value>true</value>
   </property>
   <property>
     <name>dfs.http.address</name>
     <value>192.168.10.22:50070</value>
   </property>
 </configuration>


 2013-06-26 17:37:24,923 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
 /
 STARTUP_MSG: Starting DataNode
 STARTUP_MSG:   host = CH34/192.168.10.34
 STARTUP_MSG:   args = []
 STARTUP_MSG:   version = 0.20.2-cdh3u4
 STARTUP_MSG:   build =
 file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r
 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7
 14:03:02 PDT 2012
 /
 2013-06-26 17:37:25,335 INFO
 org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already
 set up for Hadoop, not re-installing.
 2013-06-26 17:37:25,421 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
 FSDatasetStatusMBean
 2013-06-26 17:37:25,429 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
 50011
 2013-06-26 17:37:25,430 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
 1048576 bytes/s
 2013-06-26 17:37:25,470 INFO org.mortbay.log: Logging to
 org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
 org.mortbay.log.Slf4jLog
 2013-06-26 17:37:25,513 INFO org.apache.hadoop.http.HttpServer: Added
 global filtersafety
 (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
 2013-06-26 17:37:25,518 INFO org.apache.hadoop.http.HttpServer: Port
 returned by webServer.getConnectors()[0].getLocalPort() before open() is
 -1. Opening the listener on 50075
 2013-06-26 17:37:25,519 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
 exit, active threads is 0
 2013-06-26 17:37:25,619 INFO
 org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting
 down all async disk service threads...
 2013-06-26 17:37:25,619 INFO
 org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async
 disk service threads have been shut down.
 2013-06-26 17:37:25,620 ERROR
 org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException:
 Address already in use
 at sun.nio.ch.Net.bind(Native Method)
 at
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
 at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
 at
 org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
 at org.apache.hadoop.http.HttpServer.start(HttpServer.java:564)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:505)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.init(DataNode.java:303)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1643)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1583)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1601)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1727)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1744)
 2013-06-26 17:37:25,622 INFO
 org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
 /
 SHUTDOWN_MSG: Shutting down DataNode at CH34/192.168.10.34
 /




-- 
Regards,
Varun Kumar.P


Re: Adding new name node location

2013-04-17 Thread varun kumar
Hi Henry,

As per your mail, point number 1 is correct.

After these changes, the metadata will also be written to the new partition.
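
A sketch of the procedure (the paths are taken from your mail; copying the
existing directory first is a common precaution so that both locations start
out consistent):

  # 1. stop the cluster
  stop-dfs.sh

  # 2. seed the new location with the current metadata
  cp -rp /home/hadoop/hadoop-data/namenode /backup/hadoop/hadoop-data/namenode

  # 3. update dfs.name.dir as you described, then restart
  start-dfs.sh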

Regards,
Varun Kumar.P


On Wed, Apr 17, 2013 at 11:32 AM, Henry Hung ythu...@winbond.com wrote:

  Hi Everyone,


 I’m using Hadoop 1.0.4 and only define 1 location for name node files,
 like this:

   <property>
     <name>dfs.name.dir</name>
     <value>/home/hadoop/hadoop-data/namenode</value>
   </property>


 Now I want to protect my name node files by changing the configuration to:
 

   <property>
     <name>dfs.name.dir</name>
     <value>/home/hadoop/hadoop-data/namenode,/backup/hadoop/hadoop-data/namenode</value>
   </property>


 Where /backup is another mount point. This /backup can be another disk or
 from another NFS server.


 My questions are:

  1. Is my procedure correct: do stop-dfs.sh, then modify the conf,
  and finally start-dfs.sh?

  2. If the answer to no. 1 is no, could you provide the correct
  procedure?

  3. Will the new name node location automatically get a copy of the original
  name node files?


 Best regards,

 Henry

 --
 The privileged confidential information contained in this email is
 intended for use only by the addressees as indicated by the original sender
 of this email. If you are not the addressee indicated in this email or are
 not responsible for delivery of the email to such a person, please kindly
 reply to the sender indicating this fact and delete all copies of it from
 your computer and network server immediately. Your cooperation is highly
 appreciated. It is advised that any unauthorized use of confidential
 information of Winbond is strictly prohibited; and any information in this
 email irrelevant to the official business of Winbond shall be deemed as
 neither given nor endorsed by Winbond.




-- 
Regards,
Varun Kumar.P


Re: are we able to decommission multi nodes at one time?

2013-04-01 Thread varun kumar
How many nodes do you have and replication factor for it.


Re: For a new installation: use the BackupNode or the CheckPointNode?

2013-03-23 Thread varun kumar
Hope the link below will be useful:

http://hadoop.apache.org/docs/stable/hdfs_user_guide.html


On Sat, Mar 23, 2013 at 12:29 PM, David Parks davidpark...@yahoo.comwrote:

 For a new installation of the current stable build (1.1.2 ), is there any
 reason to use the CheckPointNode over the BackupNode? 


 It seems that we need to choose one or the other, and from the docs it
 seems like the BackupNode is more efficient in its processes.




-- 
Regards,
Varun Kumar.P


Re: DataXceiver error processing WRITE_BLOCK operation src: /x.x.x.x:50373 dest: /x.x.x.x:50010

2013-03-08 Thread varun kumar
Hi Dhana,

Increase the ulimit (maximum number of open files) on all the datanodes.

If you are starting the service as the hadoop user, increase the ulimit value
for the hadoop user.

Make the changes in the following file:

/etc/security/limits.conf

Example:
hadoop  soft    nofile  35000
hadoop  hard    nofile  35000
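
A minimal way to verify and apply the new limit (a sketch; it assumes the
datanode runs as the hadoop user):

  # check the soft and hard open-file limits seen by the hadoop user
  su - hadoop -c 'ulimit -Sn; ulimit -Hn'

  # restart the datanode so the new limit takes effect (script path may differ)
  su - hadoop -c 'hadoop-daemon.sh stop datanode'
  su - hadoop -c 'hadoop-daemon.sh start datanode'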

Regards,
Varun Kumar.P

On Fri, Mar 8, 2013 at 1:15 PM, Dhanasekaran Anbalagan
bugcy...@gmail.comwrote:

 Hi Guys

 I am frequently getting this error on my datanodes.

 Please guide what is the exact problem this.

 dvcliftonhera138:50010:DataXceiver error processing WRITE_BLOCK operation 
 src: /172.16.30.138:50373 dest: /172.16.30.138:50010
 java.net.SocketTimeoutException: 7 millis timeout while waiting for 
 channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
 local=/172.16.30.138:34280 remote=/172.16.30.140:50010]


 at 
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
 at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:154)
 at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:127)


 at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:115)
 at java.io.FilterInputStream.read(FilterInputStream.java:66)
 at java.io.FilterInputStream.read(FilterInputStream.java:66)
 at 
 org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:160)


 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:405)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:98)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:66)


 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:189)
 at java.lang.Thread.run(Thread.java:662)


 dvcliftonhera138:50010:DataXceiver error processing WRITE_BLOCK operation 
 src: /172.16.30.138:50531 dest: /172.16.30.138:50010
 java.io.EOFException: while trying to read 65563 bytes


 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:408)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:452)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:511)


 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:748)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:462)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:98)


 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:66)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:189)
 at java.lang.Thread.run(Thread.java:662)




 How to resolve this.

 -Dhanasekaran.

 Did I learn something today? If not, I wasted it.

  --







-- 
Regards,
Varun Kumar.P


Re: How to solve : Little bit urgent delivery (cluster-1(Hbase)--- cluster-2(HDFS))

2013-03-01 Thread varun kumar
Use the HBase Export and Import tools to migrate the data from one cluster to
the other.
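
A sketch of the two steps (table and cluster names are placeholders; Export also
accepts optional version and start/end-time arguments, which is one way to pull
only a specific slice instead of the whole table):

  # on cluster-1: dump the table directly into the destination HDFS
  hbase org.apache.hadoop.hbase.mapreduce.Export 'mytable' \
      hdfs://cluster2-namenode:8020/backup/mytable

  # on cluster-2: create the table with the same column families, then
  hbase org.apache.hadoop.hbase.mapreduce.Import 'mytable' /backup/mytable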

On Fri, Mar 1, 2013 at 2:36 PM, samir das mohapatra samir.help...@gmail.com
 wrote:

 Hi All,

   Problem Statement:
1) We have two cluster   , let for example
  i) cluster-1
 ii) cluster-2


  There is one scenario where we need to pull data from HBase, which is on
  cluster-1, into cluster-2 (HDFS).

 Which approach should I follow? (The volume of the data in HBase is 5TB.)

  I am thinking I will copy the data through distcp. Is this a good
  approach? If yes, then how do I pull only specific data, not the whole data,
  into HDFS?


 Note: Little bit urgent delivery


 regards,
 samir.

 --







-- 
Regards,
Varun Kumar.P


Re: Prolonged safemode

2013-01-20 Thread varun kumar
Hi Tariq,

When you start your namenode, is it able to come out of safe mode
automatically?

If not, there are under-replicated or corrupted blocks that the namenode is
still trying to recover.

Try to remove the corrupted blocks.
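
A sketch of how to find and clear them on hadoop-1.0.4 (be careful: the -delete
option removes the affected files for good):

  hadoop fsck /                     # look for missing or corrupt block reports
  hadoop fsck / -delete             # delete the files with corrupt blocks
  hadoop dfsadmin -safemode leave   # then force the namenode out of safe mode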

Regards,
Varun Kumar.P

On Sun, Jan 20, 2013 at 4:05 AM, Mohammad Tariq donta...@gmail.com wrote:

 Hello list,

    I have a pseudo-distributed setup on my laptop. Everything was
  working fine until now. But lately HDFS has started taking a lot of time
  to leave safe mode. In fact, I have to do it manually most of the time, as
  the TT and HBase daemons get disturbed because of this.

 I am using hadoop-1.0.4. Is it a problem with this version? I have never
 faced any such issue with older versions. Or, is something going wrong on
 my side??

 Thank you so much for your precious time.

 Warm Regards,
 Tariq
 https://mtariq.jux.com/
 cloudfront.blogspot.com




-- 
Regards,
Varun Kumar.P


Re: On a lighter note

2013-01-18 Thread varun kumar
:) :)

On Fri, Jan 18, 2013 at 7:08 PM, shashwat shriparv 
dwivedishash...@gmail.com wrote:

 :)



 ∞
 Shashwat Shriparv



 On Fri, Jan 18, 2013 at 6:43 PM, Fabio Pitzolu fabio.pitz...@gr-ci.comwrote:

  Someone should make one about unsubscribing from this mailing list! :D


 *Fabio Pitzolu*
 Consultant - BI  Infrastructure

 Mob. +39 3356033776
 Telefono 02 87157239
 Fax. 02 93664786

 *Gruppo Consulenza Innovazione - http://www.gr-ci.com*


 2013/1/18 Mohammad Tariq donta...@gmail.com

 Folks quite often get confused by the name. But this one is just
 unbeatable :)

 Warm Regards,
 Tariq
 https://mtariq.jux.com/
 cloudfront.blogspot.com


 On Fri, Jan 18, 2013 at 4:52 PM, Viral Bajaria 
 viral.baja...@gmail.comwrote:

 LOL just amazing... I remember having a similar conversation with
 someone who didn't understand meaning of secondary namenode :-)

 Viral
 --
 From: iwannaplay games
 Sent: 1/18/2013 1:24 AM

 To: user@hadoop.apache.org
 Subject: Re: On a lighter note

 Awesome
 :)



 Regards
 Prabhjot







-- 
Regards,
Varun Kumar.P


Re: Problems starting secondarynamenode in hadoop 1.0.3

2012-06-26 Thread varun kumar
Hi Jeff,

Instead of localhost, mention the hostname of the primary namenode.
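
A sketch of the change in core-site.xml, assuming "master" is the hostname of
the primary namenode in your setup:

  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>

Make sure the same core-site.xml is present on the node where you start the
secondarynamenode, so that it can find the namenode address.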

On Wed, Jun 27, 2012 at 3:46 AM, Jeffrey Silverman jeffsilver...@google.com
 wrote:

 I am working with hadoop for the first time, and I am following
 instructions at
 http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/


 I am having problems starting the secondarynamenode daemon.  The error
 message in
 /var/log/hadoop/hduser/hadoop-hduser-secondarynamenode-master.out  is

 Exception in thread main java.lang.IllegalArgumentException: Does not
 contain a valid host:port authority: file:///
 at
 org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:162)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:198)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:228)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:222)
 at
 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:161)
 at
 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.init(SecondaryNameNode.java:129)
 at
 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:567)



 I googled the error message and came across HDFS-2515, which says that I
 might get that error message if the fs.default.name property name had an
 incorrect value, but I think my value is okay.

 My core-site.xml file is :

  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

  <!-- Put site-specific property overrides in this file. -->

  <configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hduser/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system.  A URI whose
    scheme and authority determine the FileSystem implementation.  The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class.  The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
  </configuration>

 Does anybody have a suggestion for how to further troubleshoot this
 problem, please?


 Thank you,


 Jeff Silverman





-- 
Regards,
Varun Kumar.P


decommissioning datanodes

2012-06-12 Thread varun kumar
Hi All,

I want to remove nodes from my cluster *gracefully*. I added the following
lines to my hdfs-site.xml

<property>
  <name>dfs.hosts.exclude</name>
  <value>/opt/hadoop/conf/exclude</value>
</property>

In the exclude file I have mentioned the hostname of the datanode.

Then I run 'hadoop dfsadmin -refreshNodes'.

On the web interface the node now appears in both the 'Live Nodes' and
'Dead Nodes' lists (but there's nothing in the Decommissioning Nodes list).

What am I missing during the decommission process?


-- 
Regards,
Varun Kumar.P


HDFS Files Deleted

2012-04-26 Thread varun kumar
Dear All,

By mistake I have deleted files from HDFS using the command:

hadoop dfs -rmr /*

Is there any way to retrieve the deleted data.

-- 
Regards,
Varun Kumar.P


Re: HDFS Files Deleted

2012-04-26 Thread varun kumar
Thanks for your quick reply, John.

I hadn't configured fs.trash.interval in my core-site.xml; I have configured it
after this disaster. Is there any other option to retrieve the data
back?
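
For reference, a minimal sketch of the trash setting I have now added to
core-site.xml (1440 minutes keeps deleted files recoverable for one day):

  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>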

Regards,
Varun Kumar.P

On Thu, Apr 26, 2012 at 7:19 PM, John George john...@yahoo-inc.com wrote:

 If you did not use –skipTrash, the file should be in your trash. Refer:
 http://hadoop.apache.org/common/docs/current/hdfs_design.html#File+Deletes+and+Undeletes
  for
 more information.

 From: varun kumar varun@gmail.com
 Reply-To: hdfs-user@hadoop.apache.org hdfs-user@hadoop.apache.org
 Date: Thu, 26 Apr 2012 05:15:10 -0700
 To: common-u...@hadoop.apache.org common-u...@hadoop.apache.org
 Cc: hdfs-user@hadoop.apache.org hdfs-user@hadoop.apache.org
 Subject: HDFS Files Deleted

 Dear All,

 By Mistake i have deleted file in from HDFS using the command:

 hadoop dfs -rmr /*

 Is there any way to retrieve the deleted data.

 --
 Regards,
 Varun Kumar.P




-- 
Regards,
Varun Kumar.P