Re: what's best practice for ZFS on a whole disc these days?

2009-10-27 Thread Daniel O'Connor
On Mon, 26 Oct 2009, Pete French wrote: I'm just about to build a new ZFS-based system, and I was wondering what the recommended way to dedicate a whole disc to ZFS is these days. Should I just give it 'da1', 'da2', etc. as I have done in the past, or is it better to use GPT to create a partition
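The GPT alternative asked about above can be sketched as follows. This is an illustrative sequence, not taken from the thread itself; the disk name (da1), GPT label (disk1), and pool name (tank) are all examples, and the commands destroy any existing data on the disk:

```shell
# Create a GPT scheme on the whole disk (example disk: da1)
gpart create -s gpt da1
# Add a single freebsd-zfs partition covering the disk, with a GPT label
gpart add -t freebsd-zfs -l disk1 da1
# Build the pool on the label rather than the raw device node,
# so the vdev reference survives device renumbering
zpool create tank /dev/gpt/disk1
```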

Re: what's best practice for ZFS on a whole disc these days?

2009-10-27 Thread Artem Belevich
Unfortunately, it appears ZFS doesn't search for GPT partitions, so if you have them and swap the drives around, you need to fix things up manually. When I used raw disks or GPT partitions, if the disk order was changed the pool would come up in 'DEGRADED' or 'UNAVAILABLE' state. Even then, all that had to be
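The manual fix-up alluded to here is typically an export/import cycle, which makes ZFS re-probe device paths. A hedged sketch ("tank" is an example pool name; the thread's snippet is cut off before it names the exact steps):

```shell
# Release the pool so its device bindings are dropped
zpool export tank
# Re-import; ZFS scans available devices and rebinds the vdevs
zpool import tank
# Confirm all vdevs came back ONLINE after the reorder
zpool status tank
```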

Re: what's best practice for ZFS on a whole disc these days?

2009-10-27 Thread Artis Caune
2009/10/27 Daniel O'Connor docon...@gsoft.com.au: Unfortunately it appears ZFS doesn't search for GPT partitions so if you have them and swap the drives around you need to fix it up manually. Every GPT partition has a unique /dev/gptid/uuid; you can find it with: glabel status and
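The gptid approach mentioned above can be sketched like this (the pool name and the UUID placeholders are illustrative; glabel status prints the real UUIDs for your partitions):

```shell
# List label -> provider mappings, including the gptid/<uuid> entries
glabel status
# Build the pool on the UUID-based device nodes so that physical
# reordering of the drives cannot confuse the vdev references
zpool create tank mirror /dev/gptid/<uuid-of-disk1> /dev/gptid/<uuid-of-disk2>
```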

Re: what's best practice for ZFS on a whole disc these days?

2009-10-27 Thread Patrick M. Hausen
Hello, On Tue, Oct 27, 2009 at 09:00:27AM +0200, Artis Caune wrote: 2009/10/27 Daniel O'Connor docon...@gsoft.com.au: Unfortunately it appears ZFS doesn't search for GPT partitions so if you have them and swap the drives around you need to fix it up manually. Every GPT partition have

Re: what's best practice for ZFS on a whole disc these days?

2009-10-27 Thread Daniel O'Connor
On Tue, 27 Oct 2009, Artem Belevich wrote: Unfortunately it appears ZFS doesn't search for GPT partitions so if you have them and swap the drives around you need to fix it up manually. When I used raw disk or GPT partitions, if disk order was changed the pool would come up in 'DEGRADED'

openldap unstable on freebsd

2009-10-27 Thread alexs
Good day. For the last 2 years (perhaps since we began using the bdb backend), we have seen slapd crash under read load. A lightly loaded system, monitored with monit, fails 1-3 times a month. As load goes up, the crash frequency goes up too. Tuning helped, but not much. Load is about 20-30 queries/sec at peak, and then it crashes every hour.

Re: openldap unstable on freebsd

2009-10-27 Thread Oliver Brandmueller
Hi, On Tue, Oct 27, 2009 at 11:25:16AM +0300, al...@ulgsm.ru wrote: For the last 2 years (perhaps since we began using the bdb backend), we have seen slapd crash under read load. A lightly loaded system, monitored with monit, fails 1-3 times a month. As load goes up, the crash frequency goes up too. Tuning helped, but not much.

No sound after update to RC2 from RC1.

2009-10-27 Thread Jakub Lach
Hello. I've lost sound. Dmesg content: hdac0: HDA Driver Revision: 20090624_0136 hdac0: [ITHREAD] Starting default moused . mixer: unknown device: mic (or ogain) hdac0: HDA Codec #0: Conexant CX20561 (Hermosa) pcm0: HDA Conexant CX20561 (Hermosa) PCM #0 Analog at cad 0 nid 1 on hdac0 It was

Re: what's best practice for ZFS on a whole disc these days?

2009-10-27 Thread jfarmer
Quoting Daniel O'Connor docon...@gsoft.com.au: On Tue, 27 Oct 2009, Artem Belevich wrote: Unfortunately it appears ZFS doesn't search for GPT partitions so if you have them and swap the drives around you need to fix it up manually. When I used raw disk or GPT partitions, if disk order was

8.0-RC1 NFS client timeout issue

2009-10-27 Thread Olaf Seibert
I see an annoying behaviour with NFS over TCP. It happens both with nfs and newnfs. This is with FreeBSD/amd64 8.0-RC1 as client. The server is some Linux or perhaps Solaris, I'm not entirely sure. After trying to find something in packet traces, I think I have found something. The scenario

Re: 8.0-RC1 NFS client timeout issue

2009-10-27 Thread Claus Guttesen
I see an annoying behaviour with NFS over TCP. It happens both with nfs and newnfs. This is with FreeBSD/amd64 8.0-RC1 as client. The server is some Linux or perhaps Solaris, I'm not entirely sure. I used nfs with tcp on a 7.2-client without problems on a solaris nfs-server. When I upgraded to

Re: what's best practice for ZFS on a whole disc these days?

2009-10-27 Thread Daniel O'Connor
On Wed, 28 Oct 2009, jfar...@goldsword.com wrote: Check the archives for stable@ and f...@. I believe that there was a thread not that long ago detailing exactly how to do that. IIRC, while it took a bit of work, it wasn't difficult. Hmm do you have any idea what the subject was? I'm

ptrace problem 6.x/7.x - can someone explain this?

2009-10-27 Thread Dorr H. Clark
We believe ptrace has a problem in 6.3; we have not tried other releases. The same code, however, exists in 7.1. The bug was first encountered in gdb... (gdb) det Detaching from program: /usr/local/bin/emacs, process 66217 (gdb) att 66224 Attaching to program: /usr/local/bin/emacs, process

New devices appear in all devfs mounts

2009-10-27 Thread Marcus Reid
Hi, I have devfs mounted in a chroot jail, with just the basic device nodes visible: fstab:/dev/null /usr/data/home/scp/dev devfs rw 0 0 rc.conf: devfs_set_rulesets=/usr/data/home/scp/dev=devfsrules_hide_all /usr/data/home/scp/dev=devfsrules_unhide_basic When a
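The configuration quoted above can be laid out as follows. The fstab and rc.conf lines follow the quoted config; the manual `devfs` invocations and the ruleset numbers (1 for devfsrules_hide_all, 2 for devfsrules_unhide_basic, as shipped in /etc/defaults/devfs.rules) are an assumption, shown to illustrate applying the rulesets without a reboot:

```shell
# /etc/fstab: mount a devfs instance inside the chroot
# devfs  /usr/data/home/scp/dev  devfs  rw  0  0

# /etc/rc.conf: apply rulesets to that mount at boot
devfs_set_rulesets="/usr/data/home/scp/dev=devfsrules_hide_all /usr/data/home/scp/dev=devfsrules_unhide_basic"

# Apply the rulesets by hand (ruleset numbers assumed from /etc/defaults/devfs.rules)
devfs -m /usr/data/home/scp/dev rule -s 1 applyset   # devfsrules_hide_all
devfs -m /usr/data/home/scp/dev rule -s 2 applyset   # devfsrules_unhide_basic
```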

Re: openldap unstable on freebsd

2009-10-27 Thread alexs
* Oliver Brandmueller o...@e-gitt.net [2009-10-27 09:56:48 +0100]: Hi, On Tue, Oct 27, 2009 at 11:25:16AM +0300, al...@ulgsm.ru wrote: For the last 2 years (perhaps since we began using the bdb backend), we have seen slapd crash under read load. A lightly loaded system, monitored with monit, fails 1-3 in