On Mar 19, 2020, at 12:56, Lana Deere <lana.de...@gmail.com> wrote:

The MDT shows 6% of storage and 9% of inodes in use.  The OST, however, shows
46% of storage and 100% of inodes in use (12 free).  (There is only one OST on
this particular filesystem.)  I suppose the system will recover if a lot of
files are deleted, but then I'm not sure why I couldn't create any files after
deleting a few.

Is there any way to increase the number of inodes on the OST without losing the 
data currently on the filesystem?

That depends on the storage that the OST is on.  You can use "resize2fs" on the
OST if the underlying storage is LVM or something similar that can be resized.
Increasing the OST size adds inodes in proportion to the added capacity.
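
For example, a rough sketch assuming an ldiskfs OST on an LVM logical volume
(the device and mount point names here are made up):

umount /mnt/ost0                    # stop the OST before an offline resize
lvextend -L +2T /dev/vg_ost/ost0    # grow the logical volume (by 2TB here)
e2fsck -f /dev/vg_ost/ost0          # resize2fs requires a clean fsck first
resize2fs /dev/vg_ost/ost0          # grow ldiskfs, and its inode count, to the new LV size
mount -t lustre /dev/vg_ost/ost0 /mnt/ost0

The inode count grows along with the filesystem because ldiskfs allocates
inodes per block group at the ratio fixed when the target was formatted.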

The more common option for adding capacity to Lustre is to add another OST.
Based on your earlier comments, the new OST should probably have double the
inodes per unit of space compared to the first one (i.e. reduce the
bytes-per-inode ratio, like "mkfs.lustre --mkfsoptions='-i 131072' ..." or
similar).  You can work out the average bytes per inode on the existing OST as
(used OST capacity / used inodes).
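
As a hypothetical sketch (the filesystem name, MGS NID, OST index, and target
device below are placeholders; substitute your real values):

# Average bytes per inode on the existing OST, from the "lfs df" numbers:
#   avg = used OST bytes / used OST inodes
# Then format the new OST with roughly half that ratio, e.g. 128KB per inode:
mkfs.lustre --ost --fsname=testfs --index=1 \
    --mgsnode=192.168.1.1@tcp \
    --mkfsoptions='-i 131072' /dev/sdb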

Cheers, Andreas


.. Lana (lana.de...@gmail.com)




On Thu, Mar 19, 2020 at 2:13 PM Degremont, Aurelien <degre...@amazon.com> wrote:
Hi Lana,

Lustre distributes the data across several targets (MDTs and OSTs).  It is
likely that one of these OSTs is full.
To see the usage per target, you should check:

lfs df -h
lfs df -ih

See if this reports that one OST or the MDT is full.
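
For example, a full OST stands out like this (illustrative output, made-up
numbers):

$ lfs df -ih
UUID                 Inodes  IUsed  IFree IUse% Mounted on
testfs-MDT0000_UUID    5.0M   0.5M   4.5M    9% /mnt/lustre[MDT:0]
testfs-OST0000_UUID    3.0M   3.0M     12  100% /mnt/lustre[OST:0]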

Aurélien

From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf of Lana Deere <lana.de...@gmail.com>
Date: Thursday, March 19, 2020 at 19:08
To: "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
Subject: [EXTERNAL] [lustre-discuss] "no space on device"

I have a Lustre 2.12 setup running on CentOS 7.  It had been working fine for
some months, but last night one of my users tried to untar a large file, which
(among other things) created a single directory containing several million
subdirectories.  At that point the untar failed, reporting "no space on
device".  All attempts to create a file on this Lustre filesystem now produce
the same error, but "df" and "df -i" indicate there is plenty of space and
plenty of inodes left.  I checked the mount point on the metadata node and it
appears to have plenty of space left too.

I can list directories and view files on this filesystem.  I can delete files
or directories on it.  But even after removing a few files and a directory, I
still cannot create a new file.

If anyone can offer some help here it would be appreciated.

.. Lana (lana.de...@gmail.com)

--
Andreas Dilger
Principal Lustre Architect
Whamcloud





