Try increasing the heap size of the client via HADOOP_CLIENT_OPTS. The
default is 128M, IIRC. You can bump it up to 1G, which might improve the
performance.
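For example, on the machine running the hdfs client (a sketch; -Xmx1g is just the suggested ceiling, and the ACL spec and path in the comment are placeholders, not the poster's actual arguments):

```shell
# Raise the client JVM heap for subsequent hadoop/hdfs invocations.
# -Xmx1g is an example value; tune it for your host.
export HADOOP_CLIENT_OPTS="-Xmx1g"
echo "$HADOOP_CLIENT_OPTS"
# then re-run the slow command in the same shell, e.g.:
#   hdfs dfs -setfacl -R -m user:alice:r-x /data   # placeholder user and path
```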
On Tue, Jan 16, 2018 at 10:03 PM, ping wang wrote:
Hi advisers,
We use "hdfs setfacl -R" for file ACL control. As the data directory is
big, with 60,000+ sub-directories and files, the command is very
time-consuming. It seems it cannot finish within hours; we did not expect
this command to take several days.
Are there any settings that can help improve this?
Thanks a
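One workaround to sketch here (assuming the first-level layout under the data directory allows it, that GNU xargs is available, and that /data and the ACL spec are placeholders): since each hdfs client call applies the recursive ACL change serially, you can split the recursion across the top-level subdirectories and run several setfacl processes in parallel.

```shell
# List first-level entries under /data (placeholder path), then run up to
# 8 recursive setfacl processes in parallel, one per subtree.
# 'NR>1' skips the "Found N items" header line that 'hdfs dfs -ls' prints.
hdfs dfs -ls /data | awk 'NR>1 {print $NF}' | \
  xargs -n 1 -P 8 -I {} hdfs dfs -setfacl -R -m user:alice:r-x {}
```

This trades one long-running client for several shorter ones; whether it helps depends on how evenly the files are spread across the subtrees.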
Hi
I checked with the jps command on the namenode:
8426 ResourceManager
23861 Jps
23356 SecondaryNameNode
23029 NameNode
datanode:
25104 NodeManager
25408 Jps
Obviously the datanode was not working; after I formatted HDFS with hadoop
namenode -format, the problem still remains.
The latest log from the fil
It is possible that your datanode daemon has not started yet. Log on to the
datanode and check whether the daemon is running by issuing a jps command.
Another possible reason is that your namenode cannot communicate with the
datanode. Try pinging the datanode from the namenode.
The log files are supposed to be in HADOOP
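A quick checklist along those lines (a sketch; "namenode-host" and the default NameNode RPC port 8020 are assumptions about your setup, so check fs.defaultFS for the real values):

```shell
# On the datanode host: is a DataNode JVM actually running?
jps | grep DataNode || echo "DataNode is not running"

# From the datanode: can we reach the namenode's RPC port?
# (namenode-host and 8020 are placeholders.)
nc -z namenode-host 8020 && echo "namenode reachable"

# Daemon logs usually land under $HADOOP_HOME/logs unless
# HADOOP_LOG_DIR points elsewhere.
ls "${HADOOP_LOG_DIR:-$HADOOP_HOME/logs}"
```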
Where should I find these logs?
I think the problem is mainly on the slaves; where should I find the logs?
2013/12/16 shashwat shriparv
Has your upgrade finished successfully? Check whether the datanode is able
to connect to the namenode, check the datanode logs, and please attach some
log here if you are getting any error and if the datanode is running.
Warm Regards,
Shashwat Shriparv
Big-Data Engineer (HPC)
http://www.linkedin.com/
Now the datanode is not working
[image: inline image 1]
2013/12/16 Geelong Yao
It is the namenode's problem.
How can I fix this problem?
2013/12/16 Shekhar Sharma
Seems like DataNode is not running or went dead
Regards,
Som Shekhar Sharma
+91-8197243810
On Mon, Dec 16, 2013 at 1:40 PM, Geelong Yao wrote:
Hi Everyone
After I upgraded Hadoop to CDH 4.2.0 (Hadoop 2.0.0), I tried to run some
tests.
When I try to upload a file to HDFS, an error comes up:
node32:/software/hadoop-2.0.0-cdh4.2.0 # hadoop dfs -put
/public/data/carinput1G_BK carinput1G
DEPRECATED: Use of this script to execute hdfs command is deprecated.
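In Hadoop 2.x the "hadoop dfs" entry point is deprecated in favour of "hdfs dfs"; the same upload (using the paths from the message above) would be:

```shell
# Equivalent upload via the non-deprecated HDFS CLI entry point.
hdfs dfs -put /public/data/carinput1G_BK carinput1G
```

The deprecation warning itself is harmless, so if the put still fails, the underlying error (likely the datanode issue discussed above) will follow it in the output.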