On 22 October 2013 16:21, Prakash Surya <sur...@llnl.gov> wrote:

> This probably belongs on the Lustre mailing list.
I cross posted :)

> Regardless, I don't think you want to do that (do you?). It'll prevent
> any client side caching, and more importantly, I don't think it's a
> case that's been tested/optimized. What're you trying to achieve?

Sorry, I was not clear: I didn't run this myself, and I can't kill the
process. It seemed to start directly after running:

    FSTYPE=zfs /usr/lib64/lustre/tests/llmount.sh

I have tried to kill it, first with -2 and on up to -9, but the process
will not budge.

Here are the top lines from "perf top":

    37.39%  [osc]     [k] osc_set_info_async
    27.14%  [lov]     [k] lov_set_info_async
     4.13%  [kernel]  [k] kfree
     3.57%  [ptlrpc]  [k] ptlrpc_set_destroy
     3.14%  [kernel]  [k] mutex_unlock
     3.10%  [lustre]  [k] ll_wr_max_cached_mb
     3.00%  [kernel]  [k] mutex_lock
     2.82%  [ptlrpc]  [k] ptlrpc_prep_set
     2.52%  [kernel]  [k] __kmalloc

Thanks,

Andrew

> Also, just curious, where's the CPU time being spent? What process
> and/or kernel thread? What are the top entries listed when you run
> "perf top"?
>
> --
> Cheers, Prakash
>
> On Tue, Oct 22, 2013 at 12:53:44PM +0100, Andrew Holway wrote:
>> Hello,
>>
>> I have just set up a "toy" Lustre system using the guide here:
>> http://zfsonlinux.org/lustre and have this process chewing 100% CPU:
>>
>>     sh -c echo 0 > /proc/fs/lustre/llite/lustre-ffff88006b0c7c00/max_cached_mb
>>
>> Until I get something beefier I am using my desktop machine with KVM,
>> running standard CentOS 6.4 with the latest kernel (2.6.32-358.23.2).
>> My machine has 2 GB of RAM.
>>
>> Any ideas?
>>
>> Thanks,
>>
>> Andrew
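Since even -9 has no effect, the writer is almost certainly stuck inside
the kernel rather than in userspace. A minimal sketch of how one might
confirm that (the pgrep pattern and the use of /proc/<pid>/stack are
assumptions; that file is only present on kernels built with stack
tracing support, and sysrq must be enabled for the last step):

    # Find the shell doing the write (the pattern is illustrative).
    pid=$(pgrep -f max_cached_mb | head -n 1)

    # A state of "D" (uninterruptible sleep), or "R" with all the time
    # spent in kernel functions, means signals cannot be delivered
    # until the write() call returns.
    ps -o pid,stat,wchan:32,cmd -p "$pid"

    # Kernel-side stack of the task, if the kernel exposes it.
    cat /proc/"$pid"/stack

    # Alternatively, dump backtraces of all busy CPUs to the kernel log.
    echo l > /proc/sysrq-trigger
    dmesg | tail -n 50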
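To go one step beyond "perf top" and see the call chains that lead into
osc_set_info_async and lov_set_info_async, a short system-wide capture
with call graphs could help; this is a generic perf recipe rather than
anything Lustre-specific, and the 10-second window is arbitrary:

    # Sample all CPUs with call graphs for 10 seconds, then summarise.
    perf record -a -g -- sleep 10
    perf report --stdio | head -n 60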
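For reference, max_cached_mb is the llite tunable that caps how much
file data the client will cache, which is why Prakash warns that
writing 0 effectively disables client-side caching. On a Lustre 2.x
client the usual interface is lctl rather than echoing into /proc
directly; the 256 MB value below is just an example, not a
recommendation:

    # Read the current limit on every mounted client filesystem.
    lctl get_param llite.*.max_cached_mb

    # Set a modest limit instead of disabling the cache entirely.
    lctl set_param llite.*.max_cached_mb=256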