Hi All,
We are currently doing a zfs send/recv with mbuffer to send incremental
changes across, and it seems to be running quite slowly, with zfs receive
the apparent bottleneck.
The process itself seems to be using almost 100% of a single CPU in sys
time.
Wondering if anyone has any ideas on what might be causing this?
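For reference, the pipeline in question would look roughly like this; the pool/dataset names, snapshot names, remote host, port, and mbuffer sizes below are illustrative placeholders, not details taken from this thread:

```shell
# Incremental send of the delta between two snapshots, buffered on both
# ends so bursts in send/receive speed don't stall the pipe.
# (All names, the port, and the buffer sizes here are assumptions.)
zfs send -i tank/data@snap1 tank/data@snap2 \
  | mbuffer -s 128k -m 1G -O recvhost:9090

# On the receiving host, started first:
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F backup/data
```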
On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
genunix`list_next 5822 3.7%
unix`mach_cpu_idle 150261 96.1%
Rather idle.
Top shows:
PID USERNAME NLWP PRI NICE SIZE RES STATE TIME CPU COMMAND
22945 root
On 12/05/11 10:47, Lachlan Mulcahy wrote:
zfs`lzjb_decompress 10 0.0%
unix`page_nextn 31 0.0%
genunix`fsflush_do_pages 37 0.0%
zfs`dbuf_free_range
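Kernel profiles like the ones quoted above can be gathered with the DTrace profile provider; a minimal sketch (the sampling rate and duration are arbitrary choices, and this needs root on Solaris/illumos):

```shell
# Sample the on-CPU kernel function ~997 times/sec per CPU for 30s,
# then print a count per function, which gives output in the same
# module`function / count style seen above.
dtrace -n 'profile-997hz /arg0/ { @[func(arg0)] = count(); } tick-30s { exit(0); }'
```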
Hi Bob,
On Mon, Dec 5, 2011 at 11:19 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
genunix`list_next 5822 3.7%
unix`mach_cpu_idle 150261 96.1%
Rather idle.
On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
Anything else you suggest I'd check for faults? (Though I'm sort of doubting it
is an issue, I'm happy to be
thorough)
Try running
fmdump -ef
and see if new low-level fault events are coming in during the zfs
receive.
Bob
--
Bob Friesenhahn
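The fmdump invocations would look like this (flags per the Solaris fault-management tooling; run as root):

```shell
# Watch low-level error reports as they arrive
# (-e: read the error log rather than the fault log, -f: follow mode).
fmdump -ef

# Afterwards, a verbose dump of the logged error events can be reviewed with:
fmdump -eV
```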
Hi Bob,
On Mon, Dec 5, 2011 at 12:31 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
Anything else you suggest I'd check for faults? (Though I'm sort of
doubting it is an issue, I'm happy to be
thorough)
Try running
fmdump -ef
Hi All,
Just a follow up - it seems that whatever it was doing, it eventually
finished and the speed picked back up again. The send/recv has now
completed -- I guess I could do with a little patience :)
Lachlan
On Mon, Dec 5, 2011 at 10:47 AM, Lachlan Mulcahy lmulc...@marinsoftware.com