unsubscribe
On Tue, Nov 10, 2020 at 9:22 PM Man-Young Goo wrote:
> unsubscribe
>
> Thanks.
>
> Manyoung Goo
>
> E-mail : my...@nate.com
> Tel : +82-2-360-1590
>
> -- Original Message --
>
> Date: Thursday, Sep 17, 2020 02:11:28 AM
> From: "Niketh
1. Stop the service.
2. Change the permissions of the log and pid directories once again to the hdfs user.
3. Start the service as hdfs.
This will resolve the issue.
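For example, on a typical packaged install the fix would look something like
this (service name and directory paths vary by distribution, so treat these
as placeholders):

  service hadoop-hdfs-datanode stop
  chown -R hdfs:hdfs /var/log/hadoop-hdfs /var/run/hadoop-hdfs
  service hadoop-hdfs-datanode start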
On Sun, Mar 1, 2015 at 6:40 PM, Daniel Klinger d...@web-computing.de wrote:
Thanks for your answer.
I put the FQDN of the DataNodes in the
Stop the iptables service on each datanode.
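On RHEL/CentOS-style systems that would be something like (assuming the
stock iptables init scripts):

  service iptables stop
  chkconfig iptables off    # keep it disabled across reboots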
On Sun, Mar 1, 2015 at 12:00 PM, Krish Donald gotomyp...@gmail.com wrote:
Hi,
I tried hard to debug the issue but nothing worked.
I am getting the error [Errno 113] No route to host cloudera in the cloudera
agent log file.
Below is some of the output:
Hi Tariq,
This looks like a DNS configuration issue.
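A quick way to confirm, assuming you can log in to the affected node
(the hostname below is a placeholder):

  hostname -f                 # does the node know its own FQDN?
  getent hosts problem-node   # does the name resolve from the other hosts?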
On Sun, Feb 22, 2015 at 3:51 PM, tesm...@gmail.com tesm...@gmail.com
wrote:
I am getting a java.net.UnknownHostException continuously on one node during
Hadoop MapReduce execution.
That node is accessible via SSH. This node is shown in yarn node
Make sure the namenode is not in safe mode.
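You can check and, if necessary, leave safe mode from the command line
(Hadoop 1.x-era syntax):

  hadoop dfsadmin -safemode get
  hadoop dfsadmin -safemode leave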
On Wed, Aug 20, 2014 at 6:53 AM, praveenesh kumar praveen...@gmail.com
wrote:
Hi team
I am in a weird situation where I have the following HDFS sample folders:
/data/folder/
/data/folder*
/data/folder_day
/data/folder_day/monday
/data/folder/1
Generally the datanode sends a heartbeat to the namenode every 3 seconds.
If the namenode stops receiving heartbeats from a datanode, it eventually
marks that datanode as dead; with the default settings this takes about 10.5
minutes (2 × the 5-minute recheck interval plus 10 × the 3-second heartbeat
interval).
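Both intervals can be tuned in hdfs-site.xml. A sketch with the Hadoop 2.x
property names and their defaults (the heartbeat interval is in seconds, the
recheck interval in milliseconds):

  <property>
    <name>dfs.heartbeat.interval</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.heartbeat.recheck-interval</name>
    <value>300000</value>
  </property>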
On Wed, Jul 2, 2014 at 5:03 PM, MrAsanjar . afsan...@gmail.com wrote:
If a datanode is shut down by calling
Nope.
Sorry :(
On Mon, Jun 2, 2014 at 1:31 PM, Amjad ALSHABANI ashshab...@gmail.com
wrote:
Thanks Zesheng,
I should admit that I'm not an expert in Hadoop infrastructure, but I have
heard my colleagues talking about HDFS replicas.
Couldn't that help in retrieving the lost data?
Amjad
What Shumin said is correct: the Hadoop configuration has been overwritten
by the client application.
We have faced a similar type of issue, where the default replication factor
was set to 2 in the Hadoop configuration. But whenever the client
application wrote a file, it ended up with 3 copies in
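If you hit this, you can re-set the replication of the files that were
already written, e.g. (the path is a placeholder):

  hadoop fs -setrep -R 2 /path/to/data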
Try giving the datanode address as IPAddressOfDatanode:50010.
On Fri, Jul 5, 2013 at 12:25 PM, Azuryy Yu azury...@gmail.com wrote:
I filed this issue at :
https://issues.apache.org/jira/browse/HDFS-4959
On Fri, Jul 5, 2013 at 1:06 PM, Azuryy Yu azury...@gmail.com wrote:
The client doesn't have any connection problem.
Hi Manickam,
You need to copy the metadata as well.
This works.
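Roughly, with the cluster stopped, copy the contents of dfs.name.dir to the
new server, e.g. (paths and hostname are placeholders):

  rsync -a /hadoop/dfs/name/ newmaster:/hadoop/dfs/name/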
Regards,
Varun Kumar.P
On Wed, Jun 26, 2013 at 11:47 AM, Manickam P manicka...@outlook.com wrote:
Hi,
I want to move my master node alone from one server to another server.
If I copy all the tmp, data directory and log
Is your namenode working?
On Wed, Jun 26, 2013 at 12:38 PM, ch huang justlo...@gmail.com wrote:
Hi, I built a new Hadoop cluster, but I cannot access HDFS. Why? I use
CDH3u4 and RedHat 6.2.
# hadoop fs -put /opt/test hdfs://192.168.10.22:9000/user/test
13/06/26 15:00:47 INFO ipc.Client:
Hi Huang,
Some other service is running on the port, or you did not stop the datanode
service properly.
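You can check both quickly on the node (port 9000 taken from the command
above):

  jps                         # which Hadoop daemons are still running?
  netstat -tlnp | grep 9000   # what is bound to the port?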
Regards,
Varun Kumar.P
On Wed, Jun 26, 2013 at 3:13 PM, ch huang justlo...@gmail.com wrote:
I have the old cluster's datanode still running, so there is some conflict. I
changed the default
Hi Henry,
As per your mail, point number 1 is correct.
After making these changes, the metadata will be written to the new partition.
Regards,
Varun Kumar.P
On Wed, Apr 17, 2013 at 11:32 AM, Henry Hung ythu...@winbond.com wrote:
Hi Everyone,
I’m using Hadoop 1.0.4 and only define 1
How many nodes do you have, and what is the replication factor?
Hope the link below will be useful:
http://hadoop.apache.org/docs/stable/hdfs_user_guide.html
On Sat, Mar 23, 2013 at 12:29 PM, David Parks davidpark...@yahoo.com wrote:
For a new installation of the current stable build (1.1.2), is there any
reason to use the CheckPointNode over the
Hi Dhana,
Increase the ulimit for all the datanodes.
If you are starting the service as the hadoop user, increase the ulimit
value for the hadoop user.
Make the changes in the following file:
/etc/security/limits.conf
Example:
hadoop soft nofile 35000
hadoop hard nofile 35000
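After editing the file, log in again as that user and verify the new limit
took effect:

  su - hadoop -c 'ulimit -n'   # should print 35000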
Use HBase export and import for migration of data from one cluster to
another.
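A sketch of the flow (table name and paths are placeholders, and the target
table must already exist with the same schema):

  # on cluster-1: dump the table to HDFS
  hbase org.apache.hadoop.hbase.mapreduce.Export 'mytable' /tmp/mytable-export
  # copy the dump between clusters
  hadoop distcp hdfs://cluster-1/tmp/mytable-export hdfs://cluster-2/tmp/mytable-export
  # on cluster-2: load the dump into the table
  hbase org.apache.hadoop.hbase.mapreduce.Import 'mytable' /tmp/mytable-export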
On Fri, Mar 1, 2013 at 2:36 PM, samir das mohapatra samir.help...@gmail.com
wrote:
Hi All,
Problem Statement:
1) We have two clusters, for example:
i) cluster-1
ii) cluster-2
There
Hi Tariq,
When you start your namenode, is it able to come out of safe mode
automatically?
If not, then there are under-replicated or corrupted blocks that the
namenode is trying to fetch.
Try to remove the corrupted blocks.
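fsck will show which files are affected and can remove the corrupt ones
(use -delete with care, since it deletes the files outright):

  hadoop fsck /             # report corrupt and under-replicated blocks
  hadoop fsck / -delete     # delete files whose blocks are corrupt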
Regards,
Varun Kumar.P
On Sun, Jan 20, 2013 at 4:05 AM, Mohammad
:) :)
On Fri, Jan 18, 2013 at 7:08 PM, shashwat shriparv
dwivedishash...@gmail.com wrote:
:)
∞
Shashwat Shriparv
On Fri, Jan 18, 2013 at 6:43 PM, Fabio Pitzolu fabio.pitz...@gr-ci.com wrote:
Someone should make one about unsubscribing from this mailing list! :D
*Fabio Pitzolu*
Hi Jeff,
Instead of localhost, mention the hostname of the primary namenode.
On Wed, Jun 27, 2012 at 3:46 AM, Jeffrey Silverman jeffsilver...@google.com
wrote:
I am working with hadoop for the first time, and I am following
instructions at
Hi All,
I want to remove nodes from my cluster *gracefully*. I added the following
lines to my hdfs-site.xml
<property>
  <name>dfs.hosts.exclude</name>
  <value>/opt/hadoop/conf/exclude</value>
</property>
In the exclude file I have mentioned the hostname of the datanode.
Then I run 'hadoop dfsadmin
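For reference, the usual command at this point (assuming that is the one
being run) is:

  hadoop dfsadmin -refreshNodes
  # then watch the namenode web UI until the node shows 'Decommissioned'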
Dear All,
By mistake I have deleted files from HDFS using the command:
hadoop dfs -rmr /*
Is there any way to retrieve the deleted data?
--
Regards,
Varun Kumar.P
not use -skipTrash, the file should be in your trash. Refer:
http://hadoop.apache.org/common/docs/current/hdfs_design.html#File+Deletes+and+Undeletes
for
more information.
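With the default trash layout the deleted files land under the user's
.Trash directory, e.g. (the username is a placeholder):

  hadoop fs -ls /user/hdfs/.Trash/Current
  hadoop fs -mv /user/hdfs/.Trash/Current/some/path /some/path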
From: varun kumar varun@gmail.com
Reply-To: hdfs-user@hadoop.apache.org hdfs-user@hadoop.apache.org
Date: Thu, 26