Re: zfs recv hangs in kmem arena

2014-10-26 Thread James R. Van Artsdalen
I was able to complete a ZFS replication by manually intervening each
time zfs recv blocked on kmem arena: running the program at the end
was sufficient to unblock zfs each of the 17 times it stalled.

The program is intended to consume about 24GB of the 32GB of physical
RAM, thereby pressuring the ARC and kernel caches to shrink; when the
program exits it leaves plenty of free RAM for zfs or anything else.
What actually happened is that every time, zfs unblocked while the
program below was still growing: it was never necessary to wait for the
program to exit and free memory before zfs unblocked.

On 10/16/2014 6:25 AM, James R. Van Artsdalen wrote:
 The zfs recv / kmem arena hang happens with -CURRENT as well as
 10-STABLE, on two different systems, with 16GB or 32GB of RAM, from
 memstick or normal multi-user environments.

 Hangs usually seem to happen 1TB to 3TB in, but last night one run hung
 after only 4.35MB.

 On 9/26/2014 1:42 AM, James R. Van Artsdalen wrote:
 FreeBSD BLACKIE.housenet.jrv 10.1-BETA2 FreeBSD 10.1-BETA2 #2 r272070M:
 Wed Sep 24 17:36:56 CDT 2014
 ja...@blackie.housenet.jrv:/usr/obj/usr/src/sys/GENERIC  amd64

 With current STABLE10 I am unable to replicate a ZFS pool using zfs
 send/recv without zfs hanging in state kmem arena, within the first
 4TB or so (of a 23TB Pool).

 The most recent attempt used this command line

 SUPERTEX:/root# zfs send -R BIGTEX/UNIX@syssnap | ssh BLACKIE zfs recv
 -duvF BIGTOX

 though local replications fail in kmem arena too.

 The two machines I've been attempting this on have 16GB and 32GB of RAM
 each and are otherwise idle.

 Any suggestions on how to get around, or investigate, kmem arena?

 # top
 last pid:  3272;  load averages:  0.22,  0.22,  0.23   up 0+08:25:02  01:32:07
 34 processes:  1 running, 33 sleeping
 CPU:  0.0% user,  0.0% nice,  0.1% system,  0.0% interrupt, 99.9% idle
 Mem: 21M Active, 82M Inact, 15G Wired, 28M Cache, 450M Free
 ARC: 12G Total, 24M MFU, 12G MRU, 23M Anon, 216M Header, 47M Other
 Swap: 16G Total, 16G Free

   PID USERNAME   THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
  1173 root         1  52    0 86476K  7780K select  0 124:33   0.00% sshd
  1176 root         1  46    0 87276K 47732K kmem a  3  48:36   0.00% zfs
   968 root        32  20    0 12344K  1888K rpcsvc  0   0:13   0.00% nfsd
  1009 root         1  20    0 25452K  2864K select  3   0:01   0.00% ntpd
 ...

#include <stdlib.h>
#include <string.h>

/* Just under 4GB per allocation; six live allocations total about 24GB. */
long long s = ((long long) 1 << 32) - 65;

int
main(void)
{
  char *p;
  int i;

  for (i = 0; i < 6; i++) {
    p = calloc(s, 1);
    if (p == NULL)
      return (1);
    /* calloc may map zero pages lazily; touch each page to force real use. */
    memset(p, 1, s);
  }
  return (0);
}
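
A usage note with assumed names (the file name eatmem.c is only a
placeholder): build with "cc -o eatmem eatmem.c" and run ./eatmem the
next time zfs recv blocks.  Watching sysctl kstat.zfs.misc.arcstats.size
while the program grows should show the ARC giving memory back under
the pressure.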



Re: zfs recv hangs in kmem arena

2014-10-21 Thread Alan Cox
On Sun, Oct 19, 2014 at 10:30 AM, James R. Van Artsdalen 
james-freebsd-...@jrv.org wrote:

 Removing kern.maxfiles from loader.conf still hangs in kmem arena.

 I tried using a memstick image of -CURRENT made from the release/
 process and this also hangs in kmem arena.

 An uninvolved server of mine hung Friday night in state "kmem arena"
 during periodic's zpool history.  After a reboot it did not hang
 Saturday night.




How up to date is your source tree?  r2720221 is relevant.  Without that
change, there are circumstances in which the code that is supposed to free
space from the kmem arena doesn't get called.
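
For illustration only, here is a minimal userland model of that failure
mode.  It assumes nothing about the real kernel internals; the names are
invented and this is not the FreeBSD vmem code.  It only shows the shape
of the hang: a consumer sleeps until an "arena" has space, and it can
wake only if a separate reclaim path actually runs.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t space_cv = PTHREAD_COND_INITIALIZER;
static size_t arena_free = 0;   /* pretend the arena starts exhausted */

/* Model of the allocator: sleep until enough space is available. */
static void
arena_alloc(size_t size)
{
  pthread_mutex_lock(&lock);
  while (arena_free < size)
    pthread_cond_wait(&space_cv, &lock);  /* the "kmem arena" sleep */
  arena_free -= size;
  pthread_mutex_unlock(&lock);
}

/* Model of the reclaim path: if this never runs, waiters never wake. */
static void
arena_reclaim(size_t size)
{
  pthread_mutex_lock(&lock);
  arena_free += size;
  pthread_cond_broadcast(&space_cv);
  pthread_mutex_unlock(&lock);
}

static void *
consumer(void *arg)
{
  (void)arg;
  arena_alloc(4096);
  printf("allocation satisfied\n");
  return (NULL);
}

int
main(void)
{
  pthread_t t;

  pthread_create(&t, NULL, consumer, NULL);
  sleep(1);             /* let the consumer block first */
  arena_reclaim(8192);  /* remove this call and the program hangs */
  pthread_join(t, NULL);
  return (0);
}

Build with cc -pthread.  With the arena_reclaim() call removed, the
consumer sleeps forever, which is what a process stuck in state
"kmem arena" looks like from userland.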




 On 10/16/2014 11:37 PM, James R. Van Artsdalen wrote:
  On 10/16/2014 11:10 PM, Xin Li wrote:
  On 10/16/14 8:43 PM, James R. Van Artsdalen wrote:
  On 10/16/2014 11:12 AM, Xin Li wrote:
  On 9/26/2014 1:42 AM, James R. Van Artsdalen wrote:
  FreeBSD BLACKIE.housenet.jrv 10.1-BETA2 FreeBSD 10.1-BETA2
  #2 r272070M: Wed Sep 24 17:36:56 CDT 2014
  ja...@blackie.housenet.jrv:/usr/obj/usr/src/sys/GENERIC
  amd64
 
  With current STABLE10 I am unable to replicate a ZFS pool
  using zfs send/recv without zfs hanging in state kmem
  arena, within the first 4TB or so (of a 23TB Pool).
  What does procstat -kk 1176 (or the PID of your 'zfs' process
  that stuck in that state) say?
 
  Cheers,
 
  SUPERTEX:/root# ps -lp 866
  UID PID PPID CPU PRI NI   VSZ   RSS MWCHAN   STAT TT     TIME COMMAND
    0 866  863   0  52  0 66800 29716 kmem are D+    1 57:40.82 zfs recv -duvF BIGTOX
  SUPERTEX:/root# procstat -kk 866
    PID    TID COMM     TDNAME   KSTACK
    866 101573 zfs      -        mi_switch+0xe1 sleepq_wait+0x3a _cv_wait+0x16d
      vmem_xalloc+0x568 vmem_alloc+0x3d kmem_malloc+0x33 keg_alloc_slab+0xcd
      keg_fetch_slab+0x151 zone_fetch_slab+0x7e zone_import+0x40
      uma_zalloc_arg+0x34e arc_get_data_buf+0x31a arc_buf_alloc+0xaa
      dmu_buf_will_fill+0x169 dmu_write+0xfc dmu_recv_stream+0xd40
      zfs_ioc_recv+0x94e zfsdev_ioctl+0x5ca
  Do you have any special tuning in your /boot/loader.conf?
 
  Cheers,
 
  Below.  I had forgotten some of this was there.
 
  After sending the previous message I ran kgdb to see if I could get a
  backtrace with function args.  I didn't see how to do it for this proc,
  but during all this the process un-blocked and started running again.
 
  The process blocked again in kmem arena after a few minutes.
 
 
  SUPERTEX:/root# cat /boot/loader.conf
  zfs_load="YES"                          # ZFS
  vfs.root.mountfrom="zfs:SUPERTEX/UNIX"  # Specify root partition in a way
                                          # the kernel understands
  kern.maxfiles="32K"                     # Set the sys. wide open files limit
  kern.ktrace.request_pool="512"
  #vfs.zfs.debug="1"
  vfs.zfs.check_hostid="0"

  loader_logo="beastie"                   # Desired logo: fbsdbw, beastiebw, beastie, none
  boot_verbose="YES"                      # -v: Causes extra debugging information to be printed
  geom_mirror_load="YES"                  # RAID1 disk driver (see gmirror(8))
  geom_label_load="YES"                   # File system labels (see glabel(8))
  ahci_load="YES"
  siis_load="YES"
  mvs_load="YES"
  coretemp_load="YES"                     # Intel Core CPU temperature monitor
  #console="comconsole"
  kern.msgbufsize="131072"                # Set size of kernel message buffer

  kern.geom.label.gpt.enable="0"
  kern.geom.label.gptid.enable="0"
  kern.geom.label.disk_ident.enable="0"
  SUPERTEX:/root#
 



Re: zfs recv hangs in kmem arena

2014-10-21 Thread Alan Cox
On Tue, Oct 21, 2014 at 5:03 PM, Alan Cox alan.l@gmail.com wrote:

 On Sun, Oct 19, 2014 at 10:30 AM, James R. Van Artsdalen 
 james-freebsd-...@jrv.org wrote:

 Removing kern.maxfiles from loader.conf still hangs in kmem arena.

 I tried using a memstick image of -CURRENT made from the release/
 process and this also hangs in kmem arena.

 An uninvolved server of mine hung Friday night in state "kmem arena"
 during periodic's zpool history.  After a reboot it did not hang
 Saturday night.




 How up to date is your source tree?  r2720221 is relevant.  Without that
 change, there are circumstances in which the code that is supposed to free
 space from the kmem arena doesn't get called.


That should be r272071.





 On 10/16/2014 11:37 PM, James R. Van Artsdalen wrote:
  On 10/16/2014 11:10 PM, Xin Li wrote:
  On 10/16/14 8:43 PM, James R. Van Artsdalen wrote:
  On 10/16/2014 11:12 AM, Xin Li wrote:
  On 9/26/2014 1:42 AM, James R. Van Artsdalen wrote:
  FreeBSD BLACKIE.housenet.jrv 10.1-BETA2 FreeBSD 10.1-BETA2
  #2 r272070M: Wed Sep 24 17:36:56 CDT 2014
  ja...@blackie.housenet.jrv:/usr/obj/usr/src/sys/GENERIC
  amd64
 
  With current STABLE10 I am unable to replicate a ZFS pool
  using zfs send/recv without zfs hanging in state kmem
  arena, within the first 4TB or so (of a 23TB Pool).
  What does procstat -kk 1176 (or the PID of your 'zfs' process
  that stuck in that state) say?
 
  Cheers,
 
  SUPERTEX:/root# ps -lp 866
  UID PID PPID CPU PRI NI   VSZ   RSS MWCHAN   STAT TT     TIME COMMAND
    0 866  863   0  52  0 66800 29716 kmem are D+    1 57:40.82 zfs recv -duvF BIGTOX
  SUPERTEX:/root# procstat -kk 866
    PID    TID COMM     TDNAME   KSTACK
    866 101573 zfs      -        mi_switch+0xe1 sleepq_wait+0x3a _cv_wait+0x16d
      vmem_xalloc+0x568 vmem_alloc+0x3d kmem_malloc+0x33 keg_alloc_slab+0xcd
      keg_fetch_slab+0x151 zone_fetch_slab+0x7e zone_import+0x40
      uma_zalloc_arg+0x34e arc_get_data_buf+0x31a arc_buf_alloc+0xaa
      dmu_buf_will_fill+0x169 dmu_write+0xfc dmu_recv_stream+0xd40
      zfs_ioc_recv+0x94e zfsdev_ioctl+0x5ca
  Do you have any special tuning in your /boot/loader.conf?
 
  Cheers,
 
  Below.  I had forgotten some of this was there.
 
  After sending the previous message I ran kgdb to see if I could get a
  backtrace with function args.  I didn't see how to do it for this proc,
  but during all this the process un-blocked and started running again.
 
  The process blocked again in kmem arena after a few minutes.
 
 
  SUPERTEX:/root# cat /boot/loader.conf
  zfs_load="YES"                          # ZFS
  vfs.root.mountfrom="zfs:SUPERTEX/UNIX"  # Specify root partition in a way
                                          # the kernel understands
  kern.maxfiles="32K"                     # Set the sys. wide open files limit
  kern.ktrace.request_pool="512"
  #vfs.zfs.debug="1"
  vfs.zfs.check_hostid="0"

  loader_logo="beastie"                   # Desired logo: fbsdbw, beastiebw, beastie, none
  boot_verbose="YES"                      # -v: Causes extra debugging information to be printed
  geom_mirror_load="YES"                  # RAID1 disk driver (see gmirror(8))
  geom_label_load="YES"                   # File system labels (see glabel(8))
  ahci_load="YES"
  siis_load="YES"
  mvs_load="YES"
  coretemp_load="YES"                     # Intel Core CPU temperature monitor
  #console="comconsole"
  kern.msgbufsize="131072"                # Set size of kernel message buffer

  kern.geom.label.gpt.enable="0"
  kern.geom.label.gptid.enable="0"
  kern.geom.label.disk_ident.enable="0"
  SUPERTEX:/root#
 



Re: zfs recv hangs in kmem arena

2014-10-21 Thread James R. Van Artsdalen
On 10/21/2014 5:03 PM, Alan Cox wrote:
 How up to date is your source tree? r2720221 is relevant. Without that
 change, there are circumstances in which the code that is supposed to
 free space from the kmem arena doesn't get called.

I've tried HEAD/CURRENT at r272749.

On 10-STABLE through r273364 - I do a nightly build & test.


Re: zfs recv hangs in kmem arena

2014-10-19 Thread James R. Van Artsdalen
Removing kern.maxfiles from loader.conf still hangs in kmem arena.

I tried using a memstick image of -CURRENT made from the release/
process and this also hangs in kmem arena.

An uninvolved server of mine hung Friday night in state "kmem arena"
during periodic's zpool history.  After a reboot it did not hang
Saturday night.

On 10/16/2014 11:37 PM, James R. Van Artsdalen wrote:
 On 10/16/2014 11:10 PM, Xin Li wrote:
 On 10/16/14 8:43 PM, James R. Van Artsdalen wrote:
 On 10/16/2014 11:12 AM, Xin Li wrote:
 On 9/26/2014 1:42 AM, James R. Van Artsdalen wrote:
 FreeBSD BLACKIE.housenet.jrv 10.1-BETA2 FreeBSD 10.1-BETA2
 #2 r272070M: Wed Sep 24 17:36:56 CDT 2014
 ja...@blackie.housenet.jrv:/usr/obj/usr/src/sys/GENERIC
 amd64

 With current STABLE10 I am unable to replicate a ZFS pool
 using zfs send/recv without zfs hanging in state kmem
 arena, within the first 4TB or so (of a 23TB Pool).
 What does procstat -kk 1176 (or the PID of your 'zfs' process
 that stuck in that state) say?

 Cheers,

 SUPERTEX:/root# ps -lp 866
 UID PID PPID CPU PRI NI   VSZ   RSS MWCHAN   STAT TT     TIME COMMAND
   0 866  863   0  52  0 66800 29716 kmem are D+    1 57:40.82 zfs recv -duvF BIGTOX
 SUPERTEX:/root# procstat -kk 866
   PID    TID COMM     TDNAME   KSTACK
   866 101573 zfs      -        mi_switch+0xe1 sleepq_wait+0x3a _cv_wait+0x16d
     vmem_xalloc+0x568 vmem_alloc+0x3d kmem_malloc+0x33 keg_alloc_slab+0xcd
     keg_fetch_slab+0x151 zone_fetch_slab+0x7e zone_import+0x40
     uma_zalloc_arg+0x34e arc_get_data_buf+0x31a arc_buf_alloc+0xaa
     dmu_buf_will_fill+0x169 dmu_write+0xfc dmu_recv_stream+0xd40
     zfs_ioc_recv+0x94e zfsdev_ioctl+0x5ca
 Do you have any special tuning in your /boot/loader.conf?

 Cheers,

 Below.  I had forgotten some of this was there.

 After sending the previous message I ran kgdb to see if I could get a
 backtrace with function args.  I didn't see how to do it for this proc,
 but during all this the process un-blocked and started running again.

 The process blocked again in kmem arena after a few minutes.


 SUPERTEX:/root# cat /boot/loader.conf
 zfs_load="YES"                          # ZFS
 vfs.root.mountfrom="zfs:SUPERTEX/UNIX"  # Specify root partition in a way
                                         # the kernel understands
 kern.maxfiles="32K"                     # Set the sys. wide open files limit
 kern.ktrace.request_pool="512"
 #vfs.zfs.debug="1"
 vfs.zfs.check_hostid="0"

 loader_logo="beastie"                   # Desired logo: fbsdbw, beastiebw, beastie, none
 boot_verbose="YES"                      # -v: Causes extra debugging information to be printed
 geom_mirror_load="YES"                  # RAID1 disk driver (see gmirror(8))
 geom_label_load="YES"                   # File system labels (see glabel(8))
 ahci_load="YES"
 siis_load="YES"
 mvs_load="YES"
 coretemp_load="YES"                     # Intel Core CPU temperature monitor
 #console="comconsole"
 kern.msgbufsize="131072"                # Set size of kernel message buffer

 kern.geom.label.gpt.enable="0"
 kern.geom.label.gptid.enable="0"
 kern.geom.label.disk_ident.enable="0"
 SUPERTEX:/root#




Re: zfs recv hangs in kmem arena

2014-10-16 Thread James R. Van Artsdalen
The zfs recv / kmem arena hang happens with -CURRENT as well as
10-STABLE, on two different systems, with 16GB or 32GB of RAM, from
memstick or normal multi-user environments.

Hangs usually seem to happen 1TB to 3TB in, but last night one run hung
after only 4.35MB.

On 9/26/2014 1:42 AM, James R. Van Artsdalen wrote:
 FreeBSD BLACKIE.housenet.jrv 10.1-BETA2 FreeBSD 10.1-BETA2 #2 r272070M:
 Wed Sep 24 17:36:56 CDT 2014
 ja...@blackie.housenet.jrv:/usr/obj/usr/src/sys/GENERIC  amd64

 With current STABLE10 I am unable to replicate a ZFS pool using zfs
 send/recv without zfs hanging in state kmem arena, within the first
 4TB or so (of a 23TB Pool).

 The most recent attempt used this command line

 SUPERTEX:/root# zfs send -R BIGTEX/UNIX@syssnap | ssh BLACKIE zfs recv
 -duvF BIGTOX

 though local replications fail in kmem arena too.

 The two machines I've been attempting this on have 16GB and 32GB of RAM
 each and are otherwise idle.

 Any suggestions on how to get around, or investigate, kmem arena?

 # top
 last pid:  3272;  load averages:  0.22,  0.22,  0.23   up 0+08:25:02  01:32:07
 34 processes:  1 running, 33 sleeping
 CPU:  0.0% user,  0.0% nice,  0.1% system,  0.0% interrupt, 99.9% idle
 Mem: 21M Active, 82M Inact, 15G Wired, 28M Cache, 450M Free
 ARC: 12G Total, 24M MFU, 12G MRU, 23M Anon, 216M Header, 47M Other
 Swap: 16G Total, 16G Free

   PID USERNAME   THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
  1173 root         1  52    0 86476K  7780K select  0 124:33   0.00% sshd
  1176 root         1  46    0 87276K 47732K kmem a  3  48:36   0.00% zfs
   968 root        32  20    0 12344K  1888K rpcsvc  0   0:13   0.00% nfsd
  1009 root         1  20    0 25452K  2864K select  3   0:01   0.00% ntpd
 ...



Re: zfs recv hangs in kmem arena

2014-10-16 Thread Xin Li

On 10/16/14 4:25 AM, James R. Van Artsdalen wrote:
 The zfs recv / kmem arena hang happens with -CURRENT as well as
 10-STABLE, on two different systems, with 16GB or 32GB of RAM,
 from memstick or normal multi-user environments.

 Hangs usually seem to happen 1TB to 3TB in, but last night one run
 hung after only 4.35MB.
 
 On 9/26/2014 1:42 AM, James R. Van Artsdalen wrote:
 FreeBSD BLACKIE.housenet.jrv 10.1-BETA2 FreeBSD 10.1-BETA2 #2
 r272070M: Wed Sep 24 17:36:56 CDT 2014 
 ja...@blackie.housenet.jrv:/usr/obj/usr/src/sys/GENERIC  amd64
 
 With current STABLE10 I am unable to replicate a ZFS pool using
 zfs send/recv without zfs hanging in state kmem arena, within
 the first 4TB or so (of a 23TB Pool).
 
 The most recent attempt used this command line
 
 SUPERTEX:/root# zfs send -R BIGTEX/UNIX@syssnap | ssh BLACKIE zfs
 recv -duvF BIGTOX
 
 though local replications fail in kmem arena too.
 
 The two machines I've been attempting this on have 16GB and 32GB
 of RAM each and are otherwise idle.
 
 Any suggestions on how to get around, or investigate, kmem
 arena?
 
 # top
 last pid:  3272;  load averages:  0.22,  0.22,  0.23   up 0+08:25:02  01:32:07
 34 processes:  1 running, 33 sleeping
 CPU:  0.0% user,  0.0% nice,  0.1% system,  0.0% interrupt, 99.9% idle
 Mem: 21M Active, 82M Inact, 15G Wired, 28M Cache, 450M Free
 ARC: 12G Total, 24M MFU, 12G MRU, 23M Anon, 216M Header, 47M Other
 Swap: 16G Total, 16G Free

   PID USERNAME   THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
  1173 root         1  52    0 86476K  7780K select  0 124:33   0.00% sshd
  1176 root         1  46    0 87276K 47732K kmem a  3  48:36   0.00% zfs
   968 root        32  20    0 12344K  1888K rpcsvc  0   0:13   0.00% nfsd
  1009 root         1  20    0 25452K  2864K select  3   0:01   0.00% ntpd
 ...

What does procstat -kk 1176 (or the PID of your 'zfs' process that
stuck in that state) say?

Cheers,



Re: zfs recv hangs in kmem arena

2014-10-16 Thread Xin Li

On 10/16/14 8:43 PM, James R. Van Artsdalen wrote:
 On 10/16/2014 11:12 AM, Xin Li wrote:
 On 9/26/2014 1:42 AM, James R. Van Artsdalen wrote:
 FreeBSD BLACKIE.housenet.jrv 10.1-BETA2 FreeBSD 10.1-BETA2
 #2 r272070M: Wed Sep 24 17:36:56 CDT 2014 
 ja...@blackie.housenet.jrv:/usr/obj/usr/src/sys/GENERIC
 amd64
 
 With current STABLE10 I am unable to replicate a ZFS pool
 using zfs send/recv without zfs hanging in state kmem
 arena, within the first 4TB or so (of a 23TB Pool).
 
 What does procstat -kk 1176 (or the PID of your 'zfs' process
 that stuck in that state) say?
 
 Cheers,
 
 SUPERTEX:/root# ps -lp 866
 UID PID PPID CPU PRI NI   VSZ   RSS MWCHAN   STAT TT     TIME COMMAND
   0 866  863   0  52  0 66800 29716 kmem are D+    1 57:40.82 zfs recv -duvF BIGTOX
 SUPERTEX:/root# procstat -kk 866
   PID    TID COMM     TDNAME   KSTACK
   866 101573 zfs      -        mi_switch+0xe1 sleepq_wait+0x3a _cv_wait+0x16d
     vmem_xalloc+0x568 vmem_alloc+0x3d kmem_malloc+0x33 keg_alloc_slab+0xcd
     keg_fetch_slab+0x151 zone_fetch_slab+0x7e zone_import+0x40
     uma_zalloc_arg+0x34e arc_get_data_buf+0x31a arc_buf_alloc+0xaa
     dmu_buf_will_fill+0x169 dmu_write+0xfc dmu_recv_stream+0xd40
     zfs_ioc_recv+0x94e zfsdev_ioctl+0x5ca

Do you have any special tuning in your /boot/loader.conf?

Cheers,
