Re: Re: unsubscribe

2020-11-13 Thread Varun Kumar
unsubscribe

On Tue, Nov 10, 2020 at 9:22 PM Man-Young Goo  wrote:

> unsubscribe
>
> Thanks.
> 
> Manyoung Goo
>
> E-mail : my...@nate.com
> Tel : +82-2-360-1590
>
> *-- Original Message --*
>
> *Date:* Thursday, Sep 17, 2020 02:11:28 AM
> *From:* "Niketh Nikky" 
> *To:* "Dhiraj Choudhary" 
> *Cc:* 
> *Subject:* Re: unsubscribe
>
> unsubscribe
>
>
>
> Thanks
> Niketh
>
> > On Sep 16, 2020, at 12:10 PM, Dhiraj Choudhary <
> dhiraj.k.choudh...@gmail.com> wrote:
> >
> > 
> >
> >
> > --
> > Dhiraj Choudhary
> > Bangalore, India
> >
>
> -
> To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: user-h...@hadoop.apache.org
>
>
>
>

-- 
Regards,
Varun Kumar.P


Re: Hadoop 2.6.0 - No DataNode to stop

2015-03-01 Thread Varun Kumar
1. Stop the DataNode service.

2. Change the ownership of the log and pid directories back to the hdfs user.

3. Start the service as the hdfs user.

This will resolve the issue.
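
For example, something along these lines (the log/pid locations, the hadoop
group and the hadoop-daemon.sh path are assumptions; adjust them to your
installation):

su - hdfs -c "hadoop-daemon.sh stop datanode"
chown -R hdfs:hadoop /var/log/hadoop /var/run/hadoop   # assumed log and pid directories
su - hdfs -c "hadoop-daemon.sh start datanode"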

On Sun, Mar 1, 2015 at 6:40 PM, Daniel Klinger  wrote:

> Thanks for your answer.
>
>
>
> I put the FQDN of the DataNodes in the slaves file on each node (one FQDN
> per line). Here’s the full DataNode log after the start (the log of the
> other DataNode is exactly the same):
>
>
>
> 2015-03-02 00:29:41,841 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal
> handlers for [TERM, HUP, INT]
>
> 2015-03-02 00:29:42,207 INFO
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
>
> 2015-03-02 00:29:42,312 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period at 10 second(s).
>
> 2015-03-02 00:29:42,313 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
> started
>
> 2015-03-02 00:29:42,319 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is
> hadoop.klinger.local
>
> 2015-03-02 00:29:42,327 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with
> maxLockedMemory = 0
>
> 2015-03-02 00:29:42,350 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
> /0.0.0.0:50010
>
> 2015-03-02 00:29:42,357 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
> 1048576 bytes/s
>
> 2015-03-02 00:29:42,358 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for
> balancing is 5
>
> 2015-03-02 00:29:42,458 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
>
> 2015-03-02 00:29:42,462 INFO org.apache.hadoop.http.HttpRequestLog: Http
> request log for http.requests.datanode is not defined
>
> 2015-03-02 00:29:42,474 INFO org.apache.hadoop.http.HttpServer2: Added
> global filter 'safety'
> (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>
> 2015-03-02 00:29:42,476 INFO org.apache.hadoop.http.HttpServer2: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context datanode
>
> 2015-03-02 00:29:42,476 INFO org.apache.hadoop.http.HttpServer2: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context logs
>
> 2015-03-02 00:29:42,476 INFO org.apache.hadoop.http.HttpServer2: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context static
>
> 2015-03-02 00:29:42,494 INFO org.apache.hadoop.http.HttpServer2:
> addJerseyResourcePackage:
> packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources,
> pathSpec=/webhdfs/v1/*
>
> 2015-03-02 00:29:42,499 INFO org.mortbay.log: jetty-6.1.26
>
> 2015-03-02 00:29:42,555 WARN org.mortbay.log: Can't reuse
> /tmp/Jetty_0_0_0_0_50075_datanodehwtdwq, using
> /tmp/Jetty_0_0_0_0_50075_datanodehwtdwq_3168831075162569402
>
> 2015-03-02 00:29:43,205 INFO org.mortbay.log: Started HttpServer2$
> SelectChannelConnectorWithSafeStartup@0.0.0.0:50075
>
> 2015-03-02 00:29:43,635 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = hdfs
>
> 2015-03-02 00:29:43,635 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
>
> 2015-03-02 00:29:43,802 INFO org.apache.hadoop.ipc.CallQueueManager: Using
> callQueue class java.util.concurrent.LinkedBlockingQueue
>
> 2015-03-02 00:29:43,823 INFO org.apache.hadoop.ipc.Server: Starting Socket
> Reader #1 for port 50020
>
> 2015-03-02 00:29:43,875 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /
> 0.0.0.0:50020
>
> 2015-03-02 00:29:43,913 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received
> for nameservices: null
>
> 2015-03-02 00:29:43,953 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices
> for nameservices: 
>
> 2015-03-02 00:29:43,973 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool 
> (Datanode Uuid unassigned) service to hadoop.klinger.local/10.0.1.148:8020
> starting to offer service
>
> 2015-03-02 00:29:43,981 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
>
> 2015-03-02 00:29:43,982 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 50020: starting
>
> 2015-03-02 00:29:44,620 INFO org.apache.hadoop.hdfs.server.common.Storage:
> DataNode version: -56 and NameNode layout version: -60
>
> 2015-03-02 00:29:44,641 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /cluster/storage/datanode/in_use.lock acquired by nodename
> 1660@hadoop.klinger.local
>
> 2015-03-02 00:29:44,822 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Analyzing storage directories for bpid BP-158097147-10.0.1.148-1424966425688
>
> 2015-03-02 00:29:44,822 INFO org.apache.hadoop.hdfs.server.common.Sto

Re: error: [Errno 113] No route to host cloudera

2015-03-01 Thread Varun Kumar
Stop the iptables service on each datanode.
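
For example, on RHEL/CentOS-style hosts (service and command names assumed;
adjust for your OS):

service iptables stop
chkconfig iptables off   # keep it off across reboots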

On Sun, Mar 1, 2015 at 12:00 PM, Krish Donald  wrote:

> Hi,
>
> I tried hard to debug the issue but nothing worked.
> I am getting error: [Errno 113] No route to host cloudera in cloudera
> agent log file.
>
> Below are some output :
>
> [root@snncloudera cloudera-scm-agent]# more /etc/hosts
> 127.0.0.1   localhost localhost.localdomain localhost4
> localhost4.localdomain4
> #::1 localhost localhost.localdomain localhost6
> localhost6.localdomain6
> ipaddress1  nncloudera.my.com nncloudera
> ipaddress2 s1cloudera.my.com s1cloudera
> ipaddress3 s2cloudera.my.com s2cloudera
> ipaddress4 s3cloudera.my.com s3cloudera
> ipaddress5 s4cloudera.my.com s4cloudera
> ipaddress6 hanncloudera.my.com hanncloudera
> ipaddress7 snncloudera.my.com snncloudera
>
> Here ipaddress are correct ip addresses which I got it from ifconfig .
>
>
>
>
>  more /etc/sysconfig/network
> NETWORKING=yes
> HOSTNAME=snncloudera.my.com
>
>
> [root@snncloudera cloudera-scm-agent]# host -v -t A `hostname`
> Trying "snncloudera.localdomain"
> ;; connection timed out; trying next origin
> Trying "snncloudera"
> Host snncloudera not found: 3(NXDOMAIN)
> Received 104 bytes from 192.xxx.xx.x#53 in 68 ms
>
>
> I am not sure what is wrong.
>
> Please guide .
>
>
>
>
>


-- 
Regards,
Varun Kumar.P


Re: java.net.UnknownHostException on one node only

2015-02-25 Thread Varun Kumar
In /etc/hosts
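
For example, every node should be able to resolve the failing hostname
(101-master10 from the stack trace below). An entry like this should be
present in /etc/hosts on all nodes; the IP address here is only a placeholder:

10.0.0.10   101-master10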

Regards,
Varun

On Wed, Feb 25, 2015 at 9:38 AM, tesm...@gmail.com 
wrote:

> Thanks Varun,
>
> Where shall I check to resolve it?
>
>
> Regards,
> Tariq
>
> On Mon, Feb 23, 2015 at 4:07 AM, Varun Kumar  wrote:
>
>> Hi Tariq,
>>
>> Issues looks like DNS configuration issue.
>>
>>
>> On Sun, Feb 22, 2015 at 3:51 PM, tesm...@gmail.com 
>> wrote:
>>
>>> I am getting java.net.UnknownHost exception continuously on one node
>>> Hadoop MApReduce execution.
>>>
>>> That node is accessible via SSH. This node is shown in "yarn node -list"
>>> and "hadfs dfsadmin -report" queries.
>>>
>>> Below is the log from execution
>>>
>>> 15/02/22 20:17:42 INFO mapreduce.Job: Task Id :
>>> attempt_1424622614381_0008_m_43_0, Status : FAILED
>>> Container launch failed for container_1424622614381_0008_01_16 :
>>> java.lang.IllegalArgumentException: *java.net.UnknownHostException:
>>> 101-master10*
>>> at
>>> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
>>> at
>>> org.apache.hadoop.security.SecurityUtil.setTokenService(SecurityUtil.java:352)
>>> at
>>> org.apache.hadoop.yarn.util.ConverterUtils.convertFromYarn(ConverterUtils.java:237)
>>> at
>>> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.newProxy(ContainerManagementProtocolProxy.java:218)
>>> at
>>> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.(ContainerManagementProtocolProxy.java:196)
>>> at
>>> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy.getProxy(ContainerManagementProtocolProxy.java:117)
>>> at
>>> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl.getCMProxy(ContainerLauncherImpl.java:403)
>>> at
>>> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:138)
>>> at
>>> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> at java.lang.Thread.run(Thread.java:745)
>>> *Caused by: java.net.UnknownHostException: 101-master10*
>>> ... 12 more
>>>
>>>
>>>
>>> 15/02/22 20:17:44 INFO
>>>
>>> Regards,
>>> Tariq
>>>
>>
>>
>>
>> --
>> Regards,
>> Varun Kumar.P
>>
>
>


-- 
Regards,
Varun Kumar.P


Re: java.net.UnknownHostException on one node only

2015-02-22 Thread Varun Kumar
Hi Tariq,

It looks like a DNS configuration issue.
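
A quick way to confirm, run from the node that reports the error (the hostname
comes from the stack trace below):

getent hosts 101-master10   # should resolve via /etc/hosts or DNS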


On Sun, Feb 22, 2015 at 3:51 PM, tesm...@gmail.com 
wrote:

> I am getting java.net.UnknownHost exception continuously on one node
> Hadoop MApReduce execution.
>
> That node is accessible via SSH. This node is shown in "yarn node -list"
> and "hadfs dfsadmin -report" queries.
>
> Below is the log from execution
>
> 15/02/22 20:17:42 INFO mapreduce.Job: Task Id :
> attempt_1424622614381_0008_m_43_0, Status : FAILED
> Container launch failed for container_1424622614381_0008_01_16 :
> java.lang.IllegalArgumentException: *java.net.UnknownHostException:
> 101-master10*
> at
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
> at
> org.apache.hadoop.security.SecurityUtil.setTokenService(SecurityUtil.java:352)
> at
> org.apache.hadoop.yarn.util.ConverterUtils.convertFromYarn(ConverterUtils.java:237)
> at
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.newProxy(ContainerManagementProtocolProxy.java:218)
> at
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.(ContainerManagementProtocolProxy.java:196)
> at
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy.getProxy(ContainerManagementProtocolProxy.java:117)
> at
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl.getCMProxy(ContainerLauncherImpl.java:403)
> at
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:138)
> at
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> *Caused by: java.net.UnknownHostException: 101-master10*
> ... 12 more
>
>
>
> 15/02/22 20:17:44 INFO
>
> Regards,
> Tariq
>



-- 
Regards,
Varun Kumar.P


Re: Delete a folder name containing *

2014-08-20 Thread varun kumar
Make sure the namenode is not in safe mode.
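
Standard dfsadmin commands to check, and if necessary leave, safe mode
(use hadoop dfsadmin instead of hdfs dfsadmin on 1.x releases):

hdfs dfsadmin -safemode get
hdfs dfsadmin -safemode leave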


On Wed, Aug 20, 2014 at 6:53 AM, praveenesh kumar 
wrote:

> Hi team
>
> I am in weird situation where I have  following HDFS sample folders
>
> /data/folder/
> /data/folder*
> /data/folder_day
> /data/folder_day/monday
> /data/folder/1
> /data/folder/2
>
> I want to delete /data/folder* without deleting its sub_folders. If I do
> hadoop fs -rmr /data/folder* it will delete everything which I want to
> avoid. I tried with escape character \ but HDFS FS shell is not taking it.
> Any hints/tricks ?
>
>
> Regards
> Praveenesh
>



-- 
Regards,
Varun Kumar.P


Re: A Datanode shutdown question?

2014-07-02 Thread varun kumar
Generally a DataNode sends its heartbeat to the NameNode every 3 seconds.

If a DataNode stops sending heartbeats, the NameNode waits for the heartbeat
timeout to expire and then marks the DataNode as dead.
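
The relevant settings can be checked with hdfs getconf on 2.x releases; the
standard keys are dfs.heartbeat.interval (default 3 seconds) and
dfs.namenode.heartbeat.recheck-interval (default 300000 ms), which together
put the dead-node timeout at roughly 10 minutes:

hdfs getconf -confKey dfs.heartbeat.interval
hdfs getconf -confKey dfs.namenode.heartbeat.recheck-interval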


On Wed, Jul 2, 2014 at 5:03 PM, MrAsanjar .  wrote:

> If a datanode is shut-downed by calling "hadoop-daemons.sh stop datanode",
> how would the namenode gets notified that a datanode is no longer active?
> does datanode send a SHUTDOWN_MSG to the namenode?  does namenode has to
> wait for heartbeat timeout?
>



-- 
Regards,
Varun Kumar.P


Re: HDFS undo Overwriting

2014-06-02 Thread varun kumar
Nope.

Sorry :(


On Mon, Jun 2, 2014 at 1:31 PM, Amjad ALSHABANI 
wrote:

> Thanx Zesheng,
>
> I should admit that I m not an expert in Hadoop infrastructure, but I have
> heard my colleagues talking about HDFS replicas?
> Couldn't that help in retrieving the lost data??
>
> Amjad
>
>
> On Fri, May 30, 2014 at 1:44 PM, Zesheng Wu  wrote:
>
>> I am afraid this cannot undo, in HDFS only the data which is deleted by
>> the dfs client and goes into the trash can be undone.
>>
>>
>> 2014-05-30 18:18 GMT+08:00 Amjad ALSHABANI :
>>
>> Hello Everybody,
>>>
>>> I ve made a mistake when writing to HDFS. I created new database in Hive
>>> giving the location on HDFS but I found that it removed all other data that
>>> exist already.
>>>
>>> =
>>> before creation, the directory on HDFS contains :
>>> pns@app11:~$ hadoop fs -ls /user/hive/warehouse
>>> Found 25 items
>>> drwxr-xr-x   - user1 supergroup  0 2013-11-20 13:40
>>> */user/hive/warehouse/*dfy_ans_autres
>>> drwxr-xr-x   - user1 supergroup  0 2013-11-20 13:40
>>> /user/hive/warehouse/dfy_ans_maillog
>>> drwxr-xr-x   - user1 supergroup  0 2013-11-20 14:28
>>> /user/hive/warehouse/dfy_cnx
>>> drwxr-xr-x   - user2   supergroup  0 2014-05-30 06:05
>>> /user/hive/warehouse/pns.db
>>> drwxr-xr-x   - user2  supergroup  0 2014-02-24 17:00
>>> /user/hive/warehouse/pns_fr_integ
>>> drwxr-xr-x   - user2  supergroup  0 2014-05-06 15:33
>>> /user/hive/warehouse/pns_logstat.db
>>> ...
>>> ...
>>> ...
>>>
>>>
>>> hive -e "CREATE DATABASE my_stats LOCATION 'hdfs://:9000
>>> */user/hive/warehouse/*mystats.db'"
>>>
>>> but now I couldn't see the other directories on HDFS:
>>>
>>> pns@app11:~/aalshabani$ hls /user/hive/warehouse
>>> Found 1 items
>>> drwxr-xr-x   - user2 supergroup  0 2014-05-30 11:37
>>> */user/hive/warehouse*/mystats.db
>>>
>>>
>>> Is there anyway I could restore the other directories??
>>>
>>>
>>> Best regards.
>>>
>>
>>
>>
>> --
>> Best Wishes!
>>
>> Yours, Zesheng
>>
>
>


-- 
Regards,
Varun Kumar.P


Re: Hadoop property precedence

2013-07-14 Thread varun kumar
What Shumin said is correct: the Hadoop configuration has been overridden by
the client application.

We faced a similar issue, where the default replication factor was set to 2 in
the Hadoop configuration, but whenever the client application wrote a file it
ended up with 3 copies in the cluster. On checking the client application, its
default replication factor was 3.
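
As a sketch, the client-side value can be overridden explicitly when writing,
or fixed afterwards with setrep (both are standard FsShell options; the path
below is a placeholder):

hadoop fs -D dfs.replication=2 -put localfile /user/example/
hadoop fs -setrep -w 2 /user/example/localfile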


On Sun, Jul 14, 2013 at 4:51 AM, Shumin Guo  wrote:

> I Think the client side configuration will take effect.
>
> Shumin
> On Jul 12, 2013 11:50 AM, "Shalish VJ"  wrote:
>
>> Hi,
>>
>>
>> Suppose block size set in configuration file at client side is 64MB,
>> block size set in configuration file at name node side is 128MB and block
>> size set in configuration file at datanode side is something else.
>> Please advice, If the client is writing a file to hdfs,which property
>> would be executed.
>>
>> Thanks,
>> Shalish.
>>
>


-- 
Regards,
Varun Kumar.P


Re: Decomssion datanode - no response

2013-07-05 Thread varun kumar
Try specifying the datanode in the exclude file as IPaddressOfDatanode:50010.
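
For example (the IP address is a placeholder; the exclude file path is the one
referenced later in this thread):

echo "10.0.1.51:50010" >> /usr/local/hadoop/conf/dfs_exclude
hdfs dfsadmin -refreshNodes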


On Fri, Jul 5, 2013 at 12:25 PM, Azuryy Yu  wrote:

> I filed this issue at :
> https://issues.apache.org/jira/browse/HDFS-4959
>
>
> On Fri, Jul 5, 2013 at 1:06 PM, Azuryy Yu  wrote:
>
>> Client hasn't any connection problem.
>>
>>
>> On Fri, Jul 5, 2013 at 12:46 PM, Devaraj k  wrote:
>>
>>>  And also could you check whether the client is connecting to NameNode
>>> or any failure in connecting to the NN.
>>>
>>> ** **
>>>
>>> Thanks
>>>
>>> Devaraj k
>>>
>>> ** **
>>>
>>> *From:* Azuryy Yu [mailto:azury...@gmail.com]
>>> *Sent:* 05 July 2013 09:15
>>>
>>> *To:* user@hadoop.apache.org
>>> *Subject:* Re: Decomssion datanode - no response
>>>
>>>  ** **
>>>
>>> I added dfs.hosts.exclude before NN started.
>>>
>>>  
>>>
>>> and I updated /usr/local/hadoop/conf/dfs_exclude whith new hosts, but It
>>> doesn't decomssion. 
>>>
>>> ** **
>>>
>>> On Fri, Jul 5, 2013 at 11:39 AM, Devaraj k  wrote:
>>> 
>>>
>>> When did you add this configuration in NN conf? 
>>>
>>>   
>>> dfs.hosts.exclude
>>> /usr/local/hadoop/conf/dfs_exclude
>>>   
>>>
>>>  
>>>
>>> If you have added this configuration after starting NN, it won’t take
>>> effect and need to restart NN.
>>>
>>>  
>>>
>>> If you have added this config with the exclude file before NN start, you
>>> can update the file with new hosts and refreshNodes command can be issued,
>>> then newly updated the DN’s will be decommissioned.
>>>
>>>  
>>>
>>> Thanks
>>>
>>> Devaraj k
>>>
>>>  
>>>
>>> *From:* Azuryy Yu [mailto:azury...@gmail.com]
>>> *Sent:* 05 July 2013 08:48
>>> *To:* user@hadoop.apache.org
>>> *Subject:* Re: Decomssion datanode - no response
>>>
>>>  
>>>
>>> Thanks Devaraj,
>>>
>>>  
>>>
>>> There are no any releated logs in the NN log and DN log.
>>>
>>>  
>>>
>>> On Fri, Jul 5, 2013 at 11:14 AM, Devaraj k  wrote:
>>> 
>>>
>>> Do you see any log related to this in Name Node logs when you issue
>>> refreshNodes dfsadmin command?
>>>
>>>  
>>>
>>> Thanks
>>>
>>> Devaraj k
>>>
>>>  
>>>
>>> *From:* Azuryy Yu [mailto:azury...@gmail.com]
>>> *Sent:* 05 July 2013 08:12
>>> *To:* user@hadoop.apache.org
>>> *Subject:* Decomssion datanode - no response
>>>
>>>  
>>>
>>> Hi,
>>>
>>> I am using hadoop-2.0.5-alpha, and I added 5 datanodes into dfs_exclude,
>>> 
>>>
>>>  
>>>
>>> hdfs-site.xml:
>>>
>>>   
>>> dfs.hosts.exclude
>>> /usr/local/hadoop/conf/dfs_exclude
>>>   
>>>
>>>  
>>>
>>> then:
>>>
>>> hdfs dfsadmin -refreshNodes
>>>
>>>  
>>>
>>> but there is no decomssion nodes showed on the webUI. and not any
>>> releated logs in the datanode log. what's wrong?
>>>
>>>  
>>>
>>> ** **
>>>
>>
>>
>


-- 
Regards,
Varun Kumar.P


Re: datanode can not start

2013-06-26 Thread varun kumar
Hi Huang,

Some other service is running on the port, or you did not stop the datanode
service properly.
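
The log below shows the bind failure on the HTTP port 50075; note that the
hdfs-site.xml shown only moves dfs.datanode.address to 50011, not
dfs.datanode.http.address. A quick way to see what is holding the port, using
standard tools:

jps                          # look for a DataNode process that is still running
netstat -tlnp | grep 50075   # see which process owns the conflicting port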

Regards,
Varun Kumar.P


On Wed, Jun 26, 2013 at 3:13 PM, ch huang  wrote:

> i have running old cluster datanode,so it exist some conflict, i changed
> default port, here is my hdfs-site.xml
>
>
> 
>
>
>
> dfs.name.dir
>
> /data/hadoopnamespace
>
> 
>
> 
>
> dfs.data.dir
>
> /data/hadoopdata
>
> 
>
> 
>
> dfs.datanode.address
>
> 0.0.0.0:50011
>
> 
>
> 
>
> dfs.permissions
>
> false
>
> 
>
> 
>
> dfs.datanode.max.xcievers
>
> 4096
>
> 
>
> 
>
> dfs.webhdfs.enabled
>
> true
>
> 
>
> 
>
> dfs.http.address
>
> 192.168.10.22:50070
>
> 
>
> 
>
>
> 2013-06-26 17:37:24,923 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = CH34/192.168.10.34
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2-cdh3u4
> STARTUP_MSG:   build =
> file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r
> 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7
> 14:03:02 PDT 2012
> /
> 2013-06-26 17:37:25,335 INFO
> org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already
> set up for Hadoop, not re-installing.
> 2013-06-26 17:37:25,421 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
> FSDatasetStatusMBean
> 2013-06-26 17:37:25,429 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
> 50011
> 2013-06-26 17:37:25,430 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
> 1048576 bytes/s
> 2013-06-26 17:37:25,470 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2013-06-26 17:37:25,513 INFO org.apache.hadoop.http.HttpServer: Added
> global filtersafety
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 2013-06-26 17:37:25,518 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50075
> 2013-06-26 17:37:25,519 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
> exit, active threads is 0
> 2013-06-26 17:37:25,619 INFO
> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting
> down all async disk service threads...
> 2013-06-26 17:37:25,619 INFO
> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async
> disk service threads have been shut down.
> 2013-06-26 17:37:25,620 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException:
> Address already in use
> at sun.nio.ch.Net.bind(Native Method)
> at
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> at
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
> at org.apache.hadoop.http.HttpServer.start(HttpServer.java:564)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:505)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:303)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1643)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1583)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1601)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1727)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1744)
> 2013-06-26 17:37:25,622 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down DataNode at CH34/192.168.10.34
> /
>



-- 
Regards,
Varun Kumar.P


Re:

2013-06-26 Thread varun kumar
Is your namenode working?
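
Quick checks on the NameNode host (192.168.10.22 and port 9000 are taken from
the log below; jps and netstat are standard tools):

jps                         # the NameNode process should be listed
netstat -tlnp | grep 9000   # the RPC port should be listening on 192.168.10.22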


On Wed, Jun 26, 2013 at 12:38 PM, ch huang  wrote:

> hi i build a new hadoop cluster ,but i can not ACCESS hdfs ,why? i use
> CDH3u4 ,redhat6.2
>
> # hadoop fs -put /opt/test hdfs://192.168.10.22:9000/user/test
> 13/06/26 15:00:47 INFO ipc.Client: Retrying connect to server: /
> 192.168.10.22:9000. Already tried 0 time(s).
> 13/06/26 15:00:48 INFO ipc.Client: Retrying connect to server: /
> 192.168.10.22:9000. Already tried 1 time(s).
> 13/06/26 15:00:49 INFO ipc.Client: Retrying connect to server: /
> 192.168.10.22:9000. Already tried 2 time(s).
> 13/06/26 15:00:50 INFO ipc.Client: Retrying connect to server: /
> 192.168.10.22:9000. Already tried 3 time(s).
> 13/06/26 15:00:51 INFO ipc.Client: Retrying connect to server: /
> 192.168.10.22:9000. Already tried 4 time(s).
> 13/06/26 15:00:52 INFO ipc.Client: Retrying connect to server: /
> 192.168.10.22:9000. Already tried 5 time(s).
> 13/06/26 15:00:53 INFO ipc.Client: Retrying connect to server: /
> 192.168.10.22:9000. Already tried 6 time(s).
> 13/06/26 15:00:54 INFO ipc.Client: Retrying connect to server: /
> 192.168.10.22:9000. Already tried 7 time(s).
> 13/06/26 15:00:55 INFO ipc.Client: Retrying connect to server: /
> 192.168.10.22:9000. Already tried 8 time(s).
> 13/06/26 15:00:56 INFO ipc.Client: Retrying connect to server: /
> 192.168.10.22:9000. Already tried 9 time(s).
> put: Call to /192.168.10.22:9000 failed on connection exception:
> java.net.ConnectException: Connection refused
>



-- 
Regards,
Varun Kumar.P


Re: Hadoop Master node migration

2013-06-26 Thread varun kumar
Hi Manickam,

You need to copy the NameNode metadata (the contents of dfs.name.dir) as well.

This works.
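
As a rough sketch, assuming the same hostname and directory layout on the new
server (the paths are placeholders; adjust to your dfs.name.dir, data and tmp
locations):

stop-dfs.sh
rsync -a /home/hadoop/hadoop-data/ newserver:/home/hadoop/hadoop-data/   # data, tmp and metadata
rsync -a $HADOOP_HOME/conf/ newserver:$HADOOP_HOME/conf/
start-dfs.sh   # run on the new master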

Regards,
Varun Kumar.P


On Wed, Jun 26, 2013 at 11:47 AM, Manickam P  wrote:

> Hi,
>
> I want to move my master node alone from one server to another server.
> If i copy all the tmp, data directory and log information everything to
> the new server which has the same host name will it work properly?
> If not how should i do this server movement.
> Please help me to know this.
>
>
> Thanks,
> Manickam P
>



-- 
Regards,
Varun Kumar.P


Re: Adding new name node location

2013-04-16 Thread varun kumar
Hi Henry,

As per your mail, point number 1 is correct.

After these changes, the metadata will also be written to the new partition.
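
A sketch of the procedure, using the paths from your quoted configuration
(seeding the new directory with a copy of the current metadata before the
restart is a common precaution, not something Hadoop requires):

stop-dfs.sh
mkdir -p /backup/hadoop/hadoop-data/namenode
cp -rp /home/hadoop/hadoop-data/namenode/* /backup/hadoop/hadoop-data/namenode/
# update dfs.name.dir in hdfs-site.xml to list both directories
start-dfs.sh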

Regards,
Varun Kumar.P


On Wed, Apr 17, 2013 at 11:32 AM, Henry Hung  wrote:

>  Hi Everyone,
>
> ** **
>
> I’m using Hadoop 1.0.4 and only define 1 location for name node files,
> like this:
>
>   
>
> dfs.name.dir
>
> /home/hadoop/hadoop-data/namenode
>
>   
>
> ** **
>
> Now I want to protect my name node files by changing the configuration to:
> 
>
>   
>
> dfs.name.dir
>
>
> /home/hadoop/hadoop-data/namenode,/backup/hadoop/hadoop-data/namenode
> 
>
>   
>
> ** **
>
> Where /backup is another mount point. This /backup can be another disk or
> from another NFS server.
>
> ** **
>
> My question are:
>
> **1.   **Is my procedure correct: do stop-dfs.sh then modify conf,
> and last start-dfs.sh?
>
> **2.   **If answer to no 1 is no, then could you provide the correct
> procedure?
>
> **3.   **Would the new name node files will auto copy the original
> name node files?
>
> ** **
>
> Best regards,
>
> Henry
>
> --
> The privileged confidential information contained in this email is
> intended for use only by the addressees as indicated by the original sender
> of this email. If you are not the addressee indicated in this email or are
> not responsible for delivery of the email to such a person, please kindly
> reply to the sender indicating this fact and delete all copies of it from
> your computer and network server immediately. Your cooperation is highly
> appreciated. It is advised that any unauthorized use of confidential
> information of Winbond is strictly prohibited; and any information in this
> email irrelevant to the official business of Winbond shall be deemed as
> neither given nor endorsed by Winbond.
>



-- 
Regards,
Varun Kumar.P


Re: are we able to decommission multi nodes at one time?

2013-03-31 Thread varun kumar
How many nodes do you have, and what is the replication factor?


Re: For a new installation: use the BackupNode or the CheckPointNode?

2013-03-23 Thread varun kumar
Hope the link below will be useful:

http://hadoop.apache.org/docs/stable/hdfs_user_guide.html


On Sat, Mar 23, 2013 at 12:29 PM, David Parks wrote:

> For a new installation of the current stable build (1.1.2 ), is there any
> reason to use the CheckPointNode over the BackupNode? 
>
> ** **
>
> It seems that we need to choose one or the other, and from the docs it
> seems like the BackupNode is more efficient in its processes.
>



-- 
Regards,
Varun Kumar.P


Re: DataXceiver error processing WRITE_BLOCK operation src: /x.x.x.x:50373 dest: /x.x.x.x:50010

2013-03-08 Thread varun kumar
Hi Dhana,

Increase the ulimit on all the datanodes.

If you are starting the service as the hadoop user, increase the ulimit value
for the hadoop user.

Make the changes in the following file:

/etc/security/limits.conf

Example:
hadoop  soft  nofile  35000
hadoop  hard  nofile  35000
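
After editing the file, log in again as that user and verify the new limit
(ulimit is a standard shell built-in):

su - hadoop -c 'ulimit -n'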

Regards,
Varun Kumar.P

On Fri, Mar 8, 2013 at 1:15 PM, Dhanasekaran Anbalagan
wrote:

> Hi Guys
>
> I am frequently getting is error in my Data nodes.
>
> Please guide what is the exact problem this.
>
> dvcliftonhera138:50010:DataXceiver error processing WRITE_BLOCK operation 
> src: /172.16.30.138:50373 dest: /172.16.30.138:50010
> java.net.SocketTimeoutException: 7 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/172.16.30.138:34280 remote=/172.16.30.140:50010]
>
>
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:154)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:127)
>
>
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:115)
> at java.io.FilterInputStream.read(FilterInputStream.java:66)
> at java.io.FilterInputStream.read(FilterInputStream.java:66)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:160)
>
>
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:405)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:98)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:66)
>
>
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:189)
> at java.lang.Thread.run(Thread.java:662)
>
>
> dvcliftonhera138:50010:DataXceiver error processing WRITE_BLOCK operation 
> src: /172.16.30.138:50531 dest: /172.16.30.138:50010
> java.io.EOFException: while trying to read 65563 bytes
>
>
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:408)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:452)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:511)
>
>
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:748)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:462)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:98)
>
>
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:66)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:189)
> at java.lang.Thread.run(Thread.java:662)
>
>
>
>
> How to resolve this.
>
> -Dhanasekaran.
>
> Did I learn something today? If not, I wasted it.
>
>  --
>
>
>
>



-- 
Regards,
Varun Kumar.P


Re: How to solve : Little bit urgent delivery (cluster-1(Hbase)---> cluster-2(HDFS))

2013-03-01 Thread varun kumar
Use the HBase Export and Import tools to migrate the data from one cluster to
the other.
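
As a sketch (the table name, paths and NameNode addresses are placeholders;
Export is the standard HBase MapReduce tool, and distcp can move the exported
files between the clusters). Export also accepts optional
versions/starttime/endtime arguments, which helps when only part of the data
is needed:

hbase org.apache.hadoop.hbase.mapreduce.Export 'mytable' /export/mytable
hadoop distcp hdfs://cluster1-nn:8020/export/mytable hdfs://cluster2-nn:8020/export/mytable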

On Fri, Mar 1, 2013 at 2:36 PM, samir das mohapatra  wrote:

> Hi All,
>
>   Problem Statement:
>1) We have two cluster   , let for example
>  i) cluster-1
> ii) cluster-2
>
>
> There is one Scenario where we need to pull data from Hbase which is under
> cluster-1(Hbase)   to  cluster-2 (HDFS) .
>
>Which approach I whould follow , (The volume of the data are 5TB which
> is there in Hbase)
>
> I am thinking , I will copy the data  throught distcp, Is this a good
> approach? If yes then how to pull only specific data not whole date into
> HDFS
>
>
> Note: Little bit urgent delivery
>
>
> regards,
> samir.
>
> --
>
>
>
>



-- 
Regards,
Varun Kumar.P


Re: Prolonged safemode

2013-01-20 Thread varun kumar
Hi Tariq,

When you start your namenode, is it able to come out of safe mode
automatically?

If not, there are under-replicated or corrupted blocks that the namenode is
trying to account for.

Try to remove the corrupted blocks.
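
Standard commands for this on hadoop-1.0.4 (fsck -delete removes the affected
files, so use it with care; safe mode can also be left manually):

hadoop fsck /
hadoop fsck / -delete
hadoop dfsadmin -safemode leave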

Regards,
Varun Kumar.P

On Sun, Jan 20, 2013 at 4:05 AM, Mohammad Tariq  wrote:

> Hello list,
>
>I have a pseudo distributed setup on my laptop. Everything was
> working fine untill now. But lately HDFS has started taking a lot of time
> to leave the safemode. Infact, I have to it manuaaly most of the times as
> TT and Hbase daemons get disturbed because of this.
>
> I am using hadoop-1.0.4. Is it a problem with this version? I have never
> faced any such issue with older versions. Or, is something going wrong on
> my side??
>
> Thank you so much for your precious time.
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>



-- 
Regards,
Varun Kumar.P


Re: On a lighter note

2013-01-18 Thread varun kumar
:) :)

On Fri, Jan 18, 2013 at 7:08 PM, shashwat shriparv <
dwivedishash...@gmail.com> wrote:

> :)
>
>
>
> ∞
> Shashwat Shriparv
>
>
>
> On Fri, Jan 18, 2013 at 6:43 PM, Fabio Pitzolu wrote:
>
>> Someone should made one about unsubscribing from this mailing list ! :D
>>
>>
>> *Fabio Pitzolu*
>> Consultant - BI & Infrastructure
>>
>> Mob. +39 3356033776
>> Telefono 02 87157239
>> Fax. 02 93664786
>>
>> *Gruppo Consulenza Innovazione - http://www.gr-ci.com*
>>
>>
>> 2013/1/18 Mohammad Tariq 
>>
>>> Folks quite often get confused by the name. But this one is just
>>> unbeatable :)
>>>
>>> Warm Regards,
>>> Tariq
>>> https://mtariq.jux.com/
>>> cloudfront.blogspot.com
>>>
>>>
>>> On Fri, Jan 18, 2013 at 4:52 PM, Viral Bajaria 
>>> wrote:
>>>
 LOL just amazing... I remember having a similar conversation with
 someone who didn't understand meaning of secondary namenode :-)

 Viral
 --
 From: iwannaplay games
 Sent: 1/18/2013 1:24 AM

 To: user@hadoop.apache.org
 Subject: Re: On a lighter note

 Awesome
 :)



 Regards
 Prabhjot


>>>
>>
>


-- 
Regards,
Varun Kumar.P


Re: Eclipse plugin not available in contrib folder

2012-08-13 Thread varun kumar
Hi Ananda,

Please download this jar from the URL below and put it into the Eclipse
plugins directory, then try again:

https://dl.dropbox.com/u/19454506/hadoop-eclipse-plugin-0.20.203.0.jar
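
For example (ECLIPSE_HOME is a placeholder for your Eclipse installation
directory; restart Eclipse afterwards so the plugin is picked up):

cp hadoop-eclipse-plugin-0.20.203.0.jar $ECLIPSE_HOME/plugins/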


On Mon, Aug 13, 2012 at 3:06 PM, Chandra Mohan, Ananda Vel Murugan <
ananda.muru...@honeywell.com> wrote:

>  Hi, 
>
> ** **
>
> I could not find Hadoop eclipse plugin in /contrib. folder. I
> have Apache Hadoop 1.0.2 installed. Where can I find plugin jar? Thanks***
> *
>
> ** **
>
> Regards,
>
> Anand.C
>



-- 
Regards,
Varun Kumar.P