We've seen exactly this behaviour. Removing and re-adding the LROC NSD device worked for us.
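For anyone wanting to script that remove/re-add cycle: an LROC device is declared as an ordinary NSD with usage=localCache, so the cycle is a delete plus a re-create from a stanza file. A rough sketch only - the device, NSD and node names below are made-up examples, and the exact stanza fields should be checked against the mmcrnsd man page for your Scale release:

```
# lroc.stanza -- hypothetical example; substitute your own device/node names
%nsd:
  device=/dev/sdb
  nsd=node1_lroc_sdb
  servers=node1
  usage=localCache

# Remove the misbehaving LROC NSD, then re-create it:
#   mmdelnsd node1_lroc_sdb
#   mmcrnsd -F lroc.stanza
```

The daemon on the owning node should then pick the device back up as LROC; check with mmdiag --lroc afterwards.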
Simon

________________________________________
From: [email protected] [[email protected]] on behalf of [email protected] [[email protected]]
Sent: 05 June 2017 14:55
To: Oesterlin, Robert
Cc: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] NSD access routes

OK, slightly ignore that last email. It's still not updating the output, but I realise the "Statistics from" line is when the stats started, so it probably won't update. :( Still, nothing seems to be getting cached though.

----------------------------------------------------
Dave Goodbourn
Head of Systems
MILK VISUAL EFFECTS <http://www.milk-vfx.com/>
5th floor, Threeways House, 40-44 Clipstone Street
London, W1W 5DW
Tel: +44 (0)20 3697 8448
Mob: +44 (0)7917 411 069

On 5 June 2017 at 14:49, Dave Goodbourn <[email protected]> wrote:

Thanks Bob,

That pagepool comment has just answered my next question! But it doesn't seem to be working. Here's my mmdiag output:

=== mmdiag: lroc ===
LROC Device(s): '0AF0000259355BA8#/dev/sdb;0AF0000259355BA9#/dev/sdc;0AF0000259355BAA#/dev/sdd;' status Running
Cache inodes 1 dirs 1 data 1  Config: maxFile 0 stubFile 0
Max capacity: 1151997 MB, currently in use: 0 MB
Statistics from: Mon Jun 5 13:40:50 2017

Total objects stored 0 (0 MB) recalled 0 (0 MB)
      objects failed to store 0 failed to recall 0 failed to inval 0
      objects queried 0 (0 MB) not found 0 = 0.00 %
      objects invalidated 0 (0 MB)

  Inode objects stored 0 (0 MB) recalled 0 (0 MB) = 0.00 %
  Inode objects queried 0 (0 MB) = 0.00 % invalidated 0 (0 MB)
  Inode objects failed to store 0 failed to recall 0 failed to query 0 failed to inval 0

  Directory objects stored 0 (0 MB) recalled 0 (0 MB) = 0.00 %
  Directory objects queried 0 (0 MB) = 0.00 % invalidated 0 (0 MB)
  Directory objects failed to store 0 failed to recall 0 failed to query 0 failed to inval 0

  Data objects stored 0 (0 MB) recalled 0 (0 MB) = 0.00 %
  Data objects queried 0 (0 MB) = 0.00 % invalidated 0 (0 MB)
  Data objects failed to store 0 failed to recall 0 failed to query 0 failed to inval 0

  agent inserts=0, reads=0
    response times (usec): insert min/max/avg=0/0/0 read min/max/avg=0/0/0

  ssd writeIOs=0, writePages=0 readIOs=0, readPages=0
    response times (usec): write min/max/avg=0/0/0 read min/max/avg=0/0/0

I've restarted GPFS on that node just in case, but that didn't seem to help.

I have LROC on a node that DOESN'T have direct access to an NSD, so it will hopefully cache files that get requested over NFS.

How often are these stats updated? The Statistics line doesn't seem to update when running the command again.

Dave

On 5 June 2017 at 13:48, Oesterlin, Robert <[email protected]> wrote:

Hi Dave

I've done a large-scale (600-node) LROC deployment here - feel free to reach out if you have questions. mmdiag --lroc is about all there is; it gives you a pretty good idea of how the cache is performing, but you can't tell which files are cached. Also, watch out that the LROC cache will steal pagepool memory (1% of the LROC cache size).

Bob Oesterlin
Sr Principal Storage Engineer, Nuance

From: <[email protected]> on behalf of Dave Goodbourn <[email protected]>
Reply-To: gpfsug main discussion list <[email protected]>
Date: Monday, June 5, 2017 at 7:19 AM
To: gpfsug main discussion list <[email protected]>
Subject: [EXTERNAL] Re: [gpfsug-discuss] NSD access routes

I'm testing out the LROC idea. All seems to be working well, but is there any way to monitor what's cached? How full it might be? The performance, etc.?
I can see some stats in mmfsadm dump lroc, but that's about it.

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
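Since mmdiag --lroc is the only real monitoring hook mentioned in the thread, its output can at least be scraped for trending. A minimal sketch, assuming the field layout shown in Dave's report above (positions may shift between Spectrum Scale releases); the 1% pagepool figure is Bob's rule of thumb, not a measured value:

```shell
# Scrape the headline numbers out of a captured `mmdiag --lroc` report and
# estimate the pagepool overhead (~1% of LROC capacity, per Bob's note).
# The sample text is the idle output from this thread; the awk field
# positions are an assumption and may differ between releases.
report='Max capacity: 1151997 MB, currently in use: 0 MB
Total objects stored 0 (0 MB) recalled 0 (0 MB)'

capacity_mb=$(printf '%s\n' "$report" | awk '/Max capacity/ {print $3}')
in_use_mb=$(printf '%s\n' "$report" | awk '/Max capacity/ {print $(NF-1)}')
stored=$(printf '%s\n' "$report" | awk '/Total objects stored/ {print $4}')
pagepool_mb=$(( capacity_mb / 100 ))   # rule-of-thumb 1% overhead

echo "capacity=${capacity_mb}MB in_use=${in_use_mb}MB stored=${stored} pagepool_overhead~=${pagepool_mb}MB"
```

Feeding periodic live captures through the same awk (e.g. from cron) gives a crude utilisation trend; for Dave's ~1.1 TB cache the 1% rule works out to roughly 11.5 GB of pagepool.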
