[lustre-discuss] Lustre Orphaned Chunks

2016-10-17 Thread DeWitt, Chad
Hi All.

I am still learning Lustre and I have run into an issue.  I have referred
to both the Lustre admin manual and Google, but I've had no luck in finding
the answer.  We are using Lustre 2.8.0.

We had an OST fill due to a single large file.  I took the OST offline via
the lctl deactivate command to prevent new files from being created on the
OST.  While the OST was deactivated, the user deleted the file.  Now it
appears the metadata is gone from the MDS (which makes sense), but the data
chunks on the OST remain even after I reactivated the OST.
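
For reference, the deactivation was done on the MDS along these lines (the device number is whatever lctl dl reports for that OST's osc/osp device on the MDS; <devno> is a placeholder):

# lctl dl | grep OST
# lctl --device <devno> deactivate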

I believe that running lfsck would resolve this issue, but I am not sure
whether I should run it on the MDS or the OST.  If this is the fix, what
options would I need to use?

Thank you in advance,
Chad




Chad DeWitt, CISSP | HPC Storage Administrator

UNC Charlotte | ITS – University Research Computing




Re: [lustre-discuss] Lustre Orphaned Chunks

2016-10-18 Thread DeWitt, Chad
Hi All,

I wanted to follow up and explain how I solved the issue in case anyone
else encounters this situation.  All environments are different, so YMMV.

(Please note that, per my initial email, the problematic OST was marked
active again before performing these steps.)

First, I marked the problematic OST as degraded (on the OSS):
# lctl set_param obdfilter.<fsname>-OST<index>.degraded=1
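
(If you want to confirm the flag is set, something like this should show it; again, adjust the target name to your own OST:
# lctl get_param obdfilter.<fsname>-OST<index>.degraded)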

Second, I kicked off an lfsck for the Lustre filesystem (on the MDS):
# lctl lfsck_start --orphan --device <MDT device>
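
To tell when the layout scan has finished, checking its status on the MDS with something like the following should work (the MDT name is a placeholder):
# lctl get_param -n mdd.<fsname>-MDT0000.lfsck_layout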

Once the lfsck had deleted the orphaned chunks, I marked the OST as normal
(on the OSS):
# lctl set_param obdfilter.<fsname>-OST<index>.degraded=0

In /var/log/messages on the OSS, I could see the orphans were deleted:
kernel: Lustre: <fsname>-OST<index>: deleting orphan objects from
0x0:162957594 to 0x0:162957873
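
A quick lfs df from a client is an easy way to confirm that the space actually came back on that OST (mount point is a placeholder):
# lfs df -h /mnt/<fsname>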

I would like to thank Shawn [Hall] and Bob [Ball] for their responses,
which led me in the right direction.

Thank you,
Chad



Chad DeWitt, CISSP | HPC Storage Administrator

UNC Charlotte | ITS – University Research Computing





[lustre-discuss] Lustre [2.8.0] flock Functionality

2017-03-28 Thread DeWitt, Chad
Good afternoon, All.

We've encountered several programs that require flock, so we are now
investigating enabling flock functionality.  However, the Lustre manual
includes a passage regarding flock that gives us pause:

"Warning
This mode affects the performance of the file being flocked and may affect
stability, depending on the Lustre version used.  Consider using a newer
Lustre version which is more stable. If the consistent mode is enabled and
no applications are using flock, then it has no effect."
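
As I understand it, the consistent mode the manual refers to is simply the flock option on the client mount, with localflock and noflock as the alternatives.  In other words, something along these lines (the MGS node, filesystem name, and mount point below are placeholders):

# mount -t lustre -o flock <mgsnode>@tcp:/<fsname> /mnt/<fsname>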

We are running Lustre 2.8.0 (servers and clients).  I've looked through
Jira, but didn't see anything that looked like a showstopper.

Just curious whether anyone has enabled flock and encountered issues.
Anything in particular to look out for?

Thank you in advance,
Chad



Chad DeWitt, CISSP | HPC Storage Administrator

UNC Charlotte | ITS – University Research Computing

