This might actually be good timing. I'm going to want to upgrade my cluster
soon anyway. When I installed the server the first time, I don't think I got
the option to specify my partition sizes. Is it possible to tell CentOS to just
use the entire disk?

Once I do that, is it possible to tell Hadoop to use more of the disk?

To be clear, when you say reinstall the whole server, do you mean the OS and
everything, or just my Hadoop components?

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.massstreet.net
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData


From: Philippe Kernévez [mailto:pkerne...@octo.com]
Sent: Tuesday, July 18, 2017 3:48 AM
To: Adaryl Wakefield <adaryl.wakefi...@hotmail.com>
Cc: user@hadoop.apache.org
Subject: Re: Disk maintenance

Hi Adaryl,

You have a disk partitioning issue.
The logs are located in /var/log, and that folder lives on the "/" partition;
see the first line of the output: "/dev/mapper/centos-root   50G   31G   20G
61% /". That partition has a total size of only 50 GB.

The largest partition on your disk (866 GB) is mounted on the users' folder
"/home" and is not available for logs.
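
You can check which partition holds any folder by passing its path to df,
which reports the filesystem that contains it:

  df -h /var/log

The "Mounted on" column will show "/", i.e. the 50 GB partition.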

You have two options:
1) Recreate the partitions with fdisk (
http://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html ) or another tool,
give "/" a bigger partition (between 200 and 300 GB), and then *reinstall* the
whole server. => This is the longer but cleaner way.
2) Move the logs to your biggest partition, for example to /home/logs/. For a
production server this is clearly not the recommended approach.
To move the logs you have to change the configuration of *all* the tools, e.g.
the 'hdfs_log_dir_prefix' property in hadoop-env. It's tedious but quicker
than a full reinstallation; a rough sketch is below.
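
A minimal sketch of option 2, assuming /home/logs as the new location and an
Ambari-managed cluster (menu names and service users may differ on your setup):

  # create the new log root on the large /home partition
  sudo mkdir -p /home/logs/hadoop
  sudo chown hdfs:hadoop /home/logs/hadoop

Then, in Ambari (HDFS > Configs > Advanced hadoop-env), change
'hdfs_log_dir_prefix' from /var/log/hadoop to /home/logs/hadoop and restart
HDFS. The other services (YARN, Ambari Infra, and so on) each have their own
log directory property that needs the same change.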

Regards,
Philippe


On Tue, Jul 18, 2017 at 5:47 AM, Adaryl Wakefield
<adaryl.wakefi...@hotmail.com> wrote:
Sorry for the slow response. I have to do this in my off hours. Here is the 
output.

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G   31G   20G  61% /
devtmpfs                  16G     0   16G   0% /dev
tmpfs                     16G  8.0K   16G   1% /dev/shm
tmpfs                     16G   18M   16G   1% /run
tmpfs                     16G     0   16G   0% /sys/fs/cgroup
/dev/sda1                494M  173M  321M  36% /boot
/dev/mapper/centos-home  866G   48M  866G   1% /home
tmpfs                    3.1G     0  3.1G   0% /run/user/1000
tmpfs                    3.1G     0  3.1G   0% /run/user/1006
tmpfs                    3.1G     0  3.1G   0% /run/user/1003
tmpfs                    3.1G     0  3.1G   0% /run/user/1004
tmpfs                    3.1G     0  3.1G   0% /run/user/1016
tmpfs                    3.1G     0  3.1G   0% /run/user/1020
tmpfs                    3.1G     0  3.1G   0% /run/user/1015
tmpfs                    3.1G     0  3.1G   0% /run/user/1021
tmpfs                    3.1G     0  3.1G   0% /run/user/1012
tmpfs                    3.1G     0  3.1G   0% /run/user/1018
tmpfs                    3.1G     0  3.1G   0% /run/user/1002
tmpfs                    3.1G     0  3.1G   0% /run/user/1009

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.massstreet.net
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData


From: Philippe Kernévez [mailto:pkerne...@octo.com]
Sent: Friday, July 14, 2017 3:08 AM

To: Adaryl Wakefield <adaryl.wakefi...@hotmail.com>
Cc: user@hadoop.apache.org
Subject: Re: Disk maintenance

Hi,

Could you run the command 'sudo df -kh' and post the output?

Regards,
Philippe

On Fri, Jul 14, 2017 at 6:40 AM, Adaryl Wakefield
<adaryl.wakefi...@hotmail.com> wrote:
So I ran the first command and did find some offenders:
5.9G    /var/log/ambari-infra-solr
5.9G    /var/log/hadoop

While those are big numbers, they are sitting on a 1TB disk. This is the actual 
message I’m getting:
Capacity Used: [60.52%, 32.5 GB], Capacity Total: [53.7 GB], path=/usr/hdp

I discovered that HDFS isn't actually using the whole disk, which I didn't
know. I've figured out how to expand it, but before I do, I want to know what
is eating my space. I ran your command again with a modification:
sudo du -h --max-depth=1 /usr/hdp

That output is shown here:
395M    /usr/hdp/share
4.8G    /usr/hdp/2.5.0.0-1245
4.0K    /usr/hdp/current
5.2G    /usr/hdp

None of that adds up to 32.5 GB.
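Could the alert be measuring the whole partition that /usr/hdp sits on rather
than the directory itself? I'm guessing something like 'df -h /usr/hdp' would
show which partition that is.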

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.massstreet.net
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData


From: Shane Kumpf [mailto:shane.kumpf.apa...@gmail.com]
Sent: Wednesday, July 12, 2017 7:17 AM
To: Adaryl Wakefield <adaryl.wakefi...@hotmail.com>
Cc: user@hadoop.apache.org
Subject: Re: Disk maintenance

Hello Bob,

It's difficult to say based on the information provided, but I would suspect
the namenode and datanode logs are the culprit. What does "sudo du -h
--max-depth=1 /var/log" return?

If it is not logs, is there a specific filesystem/directory that you see
filling up or alerting? e.g. /, /var, /data? If you are unsure, you can start
at / and track down where the space is going via "sudo du -xm --max-depth=1 /
| sort -rn", then walk the filesystem hierarchy toward the directory using the
most space: change / in the previous command to the directory reported as
using the most space, and repeat until you locate the offending files. An
illustrative walk is sketched below.
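
For example, starting at the top, the walk might look like this (the paths
below are only an illustration; follow whatever directory actually tops your
output):

  sudo du -xm --max-depth=1 / | sort -rn | head
  # suppose /var tops the list:
  sudo du -xm --max-depth=1 /var | sort -rn | head
  # suppose /var/log tops that one:
  sudo du -h --max-depth=1 /var/log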

-Shane

On Tue, Jul 11, 2017 at 9:22 PM, Adaryl Wakefield
<adaryl.wakefi...@hotmail.com> wrote:
I'm running a test cluster that normally has no data in it. Despite that, I've
been getting warnings about disk space usage. Something is growing on disk and
I'm not sure what. Are there scripts I should be running to clean out logs or
something? What is really interesting is that this is only affecting the name
node and one data node. The other data node isn't having a space issue.

I'm running Hortonworks Data Platform 2.5 with HDFS 2.7.3 on CentOS 7. I
thought it might be a Linux issue, but the problem is clearly confined to the
parts of the disk taken up by HDFS.

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.massstreet.net
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData





--
Philippe Kernévez



Directeur technique (Suisse),
pkerne...@octo.com
+41 79 888 33 32

Retrouvez OCTO sur OCTO Talk : http://blog.octo.com
OCTO Technology http://www.octo.ch



--
Philippe Kernévez



Directeur technique (Suisse),
pkerne...@octo.com
+41 79 888 33 32

Retrouvez OCTO sur OCTO Talk : http://blog.octo.com
OCTO Technology http://www.octo.ch
