On 09-Jul-2015 11:54, James Cuff wrote:

> Super long shot...
>
> http://blog.jcuff.net/2015/04/of-huge-pages-and-huge-performance-hits.html

That is a real possibility, as transparent huge pages are set to "always" on this system. Will test that next.
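For reference, the current THP mode can be checked from sysfs before changing anything (a sketch; the path is standard on recent kernels, and writing to it requires root):

```shell
# The active mode is shown in brackets, e.g. "always [madvise] never".
cat /sys/kernel/mm/transparent_hugepage/enabled

# To test the hypothesis from the blog post, switch away from "always"
# (as root); "madvise" limits THP to processes that ask for it:
# echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
```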

One more really confusing data point: the slowdown isn't caused just by having a lot of file cache in use and little free memory. I reproduced that same memory state in another manner:

echo 3 > /proc/sys/vm/drop_caches   # as root: drop page cache, dentries, and inodes
# For each KTEMP file, emit and run an "echo name; dd" pair, reading the
# whole file through dd so its pages land back in the page cache:
ls -1 KTEMP[123]* \
 | extract -fmt 'echo [1,]; dd if=[1,] of=/dev/null bs=8192' \
 | execinput
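For anyone without the extract/execinput utilities, the same cache-warming pass can be sketched with plain shell (a hypothetical equivalent, not the command actually used):

```shell
# Read every matching KTEMP file once so its pages are loaded
# into the page cache; print each name as it is processed.
for f in KTEMP[123]*; do
    echo "$f"
    dd if="$f" of=/dev/null bs=8192
done
```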

This drove the system to this state:

head -11 /proc/meminfo
MemTotal:       529231456 kB
MemFree:        17662616 kB
Buffers:            9100 kB
Cached:         507229304 kB
SwapCached:          668 kB
Active:         36112532 kB
Inactive:       471255356 kB
Active(anon):       1528 kB
Inactive(anon):   132680 kB
Active(file):   36111004 kB
Inactive(file): 471122676 kB

Run the test and - no problem, the large array loaded quickly.
It stayed fast for a couple more tests on different nodes, which pushed MemFree down to its "floor" as before. Then, mysteriously, it became slow again.
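A quick way to watch the relevant counters while rerunning the test (a sketch; field names are as they appear in /proc/meminfo):

```shell
# Print free memory and the file-cache counters, converted from kB to GB.
awk '/^(MemFree|Cached|Active\(file\)|Inactive\(file\)):/ {printf "%s %.1f GB\n", $1, $2/1048576}' /proc/meminfo
```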

How does this situation differ from the preceding one? Beats the heck out of me. The only real difference I can see is that here the pages were all cached from one CPU, whereas in the generator case they were cached from 40 different CPUs.
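If per-CPU (really per-NUMA-node) locality is the suspect, the distribution of cached pages across nodes can be inspected (a sketch; assumes a NUMA kernel exposing /sys/devices/system/node/node*/meminfo):

```shell
# FilePages shows how much page cache each NUMA node holds (kB).
grep FilePages /sys/devices/system/node/node*/meminfo

# Or sum across all nodes:
awk '/FilePages/ {sum += $4} END {print sum " kB of file pages cached"}' \
    /sys/devices/system/node/node*/meminfo
```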

Thanks,

David Mathog
[email protected]
Manager, Sequence Analysis Facility, Biology Division, Caltech
_______________________________________________
Beowulf mailing list, [email protected] sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
