On Thu, Jun 05, 2008 at 08:27:28AM +0200, Pawel Jakub Dawidek wrote:
> On Thu, Jun 05, 2008 at 01:53:37AM +0800, Tz-Huan Huang wrote:
> > On Thu, Jun 5, 2008 at 12:31 AM, Dag-Erling Smørgrav <[EMAIL PROTECTED]> 
> > wrote:
> > > "Tz-Huan Huang" <[EMAIL PROTECTED]> writes:
> > >> The vfs.zfs.arc_max was set to 512M originally, the machine survived for
> > >> 4 days and panicked this morning. Now the vfs.zfs.arc_max is set to 64M
> > >> by Oliver's suggestion, let's see how long it will survive. :-)
> > >
> > > [EMAIL PROTECTED] ~% uname -a
> > > FreeBSD ds4.des.no 8.0-CURRENT FreeBSD 8.0-CURRENT #27: Sat Feb 23 
> > > 01:24:32 CET 2008     [EMAIL PROTECTED]:/usr/obj/usr/src/sys/ds4  amd64
> > > [EMAIL PROTECTED] ~% sysctl -h vm.kmem_size_min vm.kmem_size_max 
> > > vm.kmem_size vfs.zfs.arc_min vfs.zfs.arc_max
> > > vm.kmem_size_min: 1,073,741,824
> > > vm.kmem_size_max: 1,073,741,824
> > > vm.kmem_size: 1,073,741,824
> > > vfs.zfs.arc_min: 67,108,864
> > > vfs.zfs.arc_max: 536,870,912
> > > [EMAIL PROTECTED] ~% zpool list
> > > NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
> > > raid                   1.45T    435G   1.03T    29%  ONLINE     -
> > > [EMAIL PROTECTED] ~% zfs list | wc -l
> > >     210
> > >
> > > Haven't had a single panic in over six months.
> > 
> > Thanks for the information. The major differences are that we run
> > on 7-stable and that our zfs pool is much bigger.
> 
> I don't think the panics are related to pool size -- more to the load
> and characteristics of your workload.

Not to add superfluous comments, but I agree.  It has little to do with
actual pool size and more with I/O activity.  However, multiple zpools
will very likely make the panic happen sooner, just based on the
"nature of the beast" -- more zpools usually means more "overall" I/O,
because more things are being utilised via ZFS, so you'll exhaust kmem
more quickly.
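As a rough way to gauge how close the ARC is getting to that kmem
ceiling, something like the sketch below could help.  (The helper
itself is just POSIX shell arithmetic; `arc_pct` is a name I made up,
and the sysctl OIDs `kstat.zfs.misc.arcstats.size` and `vm.kmem_size`
are as I understand them on 7.x -- untested, so treat accordingly.)

```shell
#!/bin/sh
# arc_pct: given ARC size and kmem_size in bytes, print the integer
# percentage of the kmem map currently consumed by the ARC.
arc_pct() {
  echo $(( $1 * 100 / $2 ))
}

# On a live FreeBSD box one would feed it the current values, e.g.:
#   arc_pct "$(sysctl -n kstat.zfs.misc.arcstats.size)" \
#           "$(sysctl -n vm.kmem_size)"

# Using the numbers from the sysctl output earlier in this thread
# (512M ARC max against a 1G kmem_size):
arc_pct 536870912 1073741824
```

Watching that percentage climb under heavy I/O is a crude but easy way
to see whether you're headed toward a 'kmem_map too small' panic.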

> beast:root:~# zpool list
> NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
> tank                    732G    604G    128G    82%  ONLINE     -
> 
> but:
> 
> beast:root:~# zfs list | wc -l
>     1932
> 
> No panics.
> 
> PS. I'm quite sure the ZFS version I have in perforce will fix most,
> if not all, of the 'kmem_map too small' panics. It's not committed yet,
> but I do want to MFC it into RELENG_7.

That's great to hear, but the point I made regarding kmem_size not
being able to extend past 2GB (on i386 and amd64) still stands.  I've
looked at the code myself in an attempt to figure out where the actual
limitation is, and it's beyond my understanding.  (It's somewhat
abstracted, but only to those who are completely unfamiliar with the VM
piece -- like me :-) ).
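For anyone following along, the usual workaround until then is to pin
these tunables down in /boot/loader.conf.  The fragment below is only
an illustrative sketch using the values mentioned in this thread (1G
kmem, 64M arc_max per Oliver's suggestion) -- the right numbers depend
entirely on your RAM and workload, so don't copy them blindly:

```shell
# /boot/loader.conf -- example ZFS/kmem tuning (values from this thread,
# not a recommendation; adjust for your own machine)
vm.kmem_size="1G"
vm.kmem_size_max="1G"
vfs.zfs.arc_max="64M"
```

These are loader tunables, so a reboot is needed for them to take
effect.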

-- 
| Jeremy Chadwick                                jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |

_______________________________________________
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers