The flock locks (and regular LDLM locks for Lustre metadata and data extents)
are reconstructed from client state if the MDS or OSS crashes.

Cheers, Andreas

On Nov 26, 2017, at 21:03, John Bent <johnb...@gmail.com> wrote:

How does the lock manager avoid disk IO?  Locks don’t survive MDS0 failure?

On Nov 26, 2017, at 8:29 PM, Dilger, Andreas <andreas.dil...@intel.com> wrote:

The flock functionality only affects applications that are actually using it. 
It does not add any overhead for applications that do not use flock.

There are two flock options:

- localflock, which keeps lock state only on the local client node and is
sufficient for applications that run on a single node
- flock, which adds coherent locking between applications on different clients
mounted with this option. This is needed if you have a distributed application
running on multiple clients that coordinates its file access via flock (e.g.
producer/consumer).
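As a sketch of how the two options are selected at mount time (the MGS
nodename `mgsnode@tcp0`, filesystem name `fsname`, and mount point are
placeholders, not taken from this thread):

```shell
# Hypothetical mount commands -- mgsnode@tcp0:/fsname is a placeholder.
# localflock: flock calls succeed, but locks are visible only on this client.
# Suitable for applications confined to a single node.
mount -t lustre -o localflock mgsnode@tcp0:/fsname /mnt/lustre

# flock: coherent, cluster-wide locking between all clients that
# mount with this option.
mount -t lustre -o flock mgsnode@tcp0:/fsname /mnt/lustre
```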

The overhead itself depends on how much the application is actually using
flock. The lock manager runs on MDS0 and uses Lustre RPCs (which can run at
100k/s or higher); it does not involve any disk IO.
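For illustration only (file names are made up, and this uses the util-linux
`flock(1)` wrapper, which takes the same `flock()` lock an application would
take directly), two writers contending for the same resource can be
serialized like this:

```shell
# Each invocation acquires an exclusive flock() on /tmp/demo.lock
# before running its command, so the two appends cannot interleave.
rm -f /tmp/demo.out
flock /tmp/demo.lock -c 'echo "writer 1" >> /tmp/demo.out'
flock /tmp/demo.lock -c 'echo "writer 2" >> /tmp/demo.out'
cat /tmp/demo.out
```

On a Lustre client mounted with -o flock, the same pattern works across
nodes; with -o localflock it only serializes processes on one client.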

Cheers, Andreas

On Nov 26, 2017, at 12:03, E.S. Rosenberg <esr+lus...@mail.hebrew.edu> wrote:

Hi Torsten,
Thanks that worked!

Do you or anyone on the list know if/how flock affects Lustre performance?

Thanks again,
Eli

On Tue, Nov 21, 2017 at 9:18 AM, Torsten Harenberg
<torsten.harenb...@cern.ch> wrote:
Hi Eli,

On 21.11.17 at 01:26, E.S. Rosenberg wrote:
> So I was wondering would this issue be solved by Lustre bindings for
> Java or is this a way of locking that isn't supported by Lustre?

I know nothing about Elastic Search, but have you tried to mount Lustre
with "flock" in the mount options?

Cheers

 Torsten

--
<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
<>                                                              <>
<> Dr. Torsten Harenberg     torsten.harenb...@cern.ch          <>
<> Bergische Universitaet                                       <>
<> Fakultät 4 - Physik       Tel.: +49 (0)202 439-3521          <>
<> Gaussstr. 20              Fax : +49 (0)202 439-2811          <>
<> 42097 Wuppertal           @CERN: Bat. 1-1-049                <>
<>                                                              <>
<><><><><><><>< Of course it runs NetBSD http://www.netbsd.org ><>

_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
