On Mon, 26 Oct 2009, Pete French wrote:
I am just about to build a new ZFS-based system and I was wondering
what the recommended way to dedicate a whole disc to ZFS is
these days. Should I just give it 'da1', 'da2' etc. as I have
done in the past, or is it better to use GPT to create a
partition
Unfortunately it appears ZFS doesn't search for GPT partitions so if you
have them and swap the drives around you need to fix it up manually.
When I used raw disk or GPT partitions, if disk order was changed the
pool would come up in 'DEGRADED' or UNAVAILABLE state. Even then all
that had to be
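For what it's worth, the usual way to recover a pool that comes up DEGRADED or UNAVAILABLE after the drives are reshuffled is to export and re-import it, which makes ZFS re-scan all devices for pool members. A minimal sketch, assuming a pool named 'tank' (the pool name is hypothetical):

```sh
# Export the pool so ZFS forgets the now-stale device paths,
# then import it again; import scans the available devices and
# re-discovers each vdev wherever it has moved to.
zpool export tank
zpool import tank

# Verify that all vdevs are ONLINE again.
zpool status tank
```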
2009/10/27 Daniel O'Connor docon...@gsoft.com.au:
Unfortunately it appears ZFS doesn't search for GPT partitions so if you
have them and swap the drives around you need to fix it up manually.
Every GPT partition has a unique /dev/gptid/&lt;uuid&gt;; you can find it with:
glabel status
and
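To expand on that: if the pool is built on the gptid names rather than the daN names, drive reordering cannot confuse it, because the gptid follows the partition. A sketch for a two-disk mirror, where the pool name and the UUIDs are purely illustrative placeholders:

```sh
# List the GPT ids glabel has discovered; each GPT partition
# appears as gptid/<uuid> no matter which daN slot the disk
# lands in after a reshuffle.
glabel status

# Create the pool on the stable gptid names (UUIDs below are
# placeholders, not real ids).
zpool create tank mirror \
    /dev/gptid/11111111-2222-3333-4444-555555555555 \
    /dev/gptid/66666666-7777-8888-9999-000000000000
```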
Hello,
On Tue, Oct 27, 2009 at 09:00:27AM +0200, Artis Caune wrote:
2009/10/27 Daniel O'Connor docon...@gsoft.com.au:
Unfortunately it appears ZFS doesn't search for GPT partitions so if you
have them and swap the drives around you need to fix it up manually.
Every GPT partition has
On Tue, 27 Oct 2009, Artem Belevich wrote:
Unfortunately it appears ZFS doesn't search for GPT partitions so
if you have them and swap the drives around you need to fix it up
manually.
When I used raw disk or GPT partitions, if disk order was changed the
pool would come up in 'DEGRADED'
Good day.
For the last 2 years (perhaps since we began using the bdb backend), we have
been getting slapd crashes under read load.
Under low load the system runs with monit monitoring and fails 1-3 times a month.
When the load goes up, the crash frequency goes up too.
Tuning helped, but not much.
Load is about 20-30 queries/sec at peak,
and slapd crashes every hour.
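One common mitigation for bdb-backend instability under concurrent reads is an explicit BDB cache and lock configuration via a DB_CONFIG file in the database directory. A sketch only; the sizes below are assumptions to be tuned against the actual dataset, not recommendations:

```
# 256 MB BDB cache in one segment: set_cachesize <GB> <bytes> <nsegments>
set_cachesize 0 268435456 1

# Raise the lock-table limits; running out of locks under
# concurrent read load is a classic cause of bdb failures.
set_lk_max_locks 4096
set_lk_max_objects 4096
set_lk_max_lockers 4096

# Remove transaction log files once they are no longer needed.
set_flags DB_LOG_AUTOREMOVE
```

After changing DB_CONFIG, slapd needs a restart (and `db_recover` may be needed) for the settings to take effect.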
Hi,
On Tue, Oct 27, 2009 at 11:25:16AM +0300, al...@ulgsm.ru wrote:
For the last 2 years (perhaps since we began using the bdb backend), we have
been getting slapd crashes under read load.
Under low load the system runs with monit monitoring and fails 1-3 times a month.
When the load goes up, the crash frequency goes up too.
Tuning helped, but not much.
Hello.
I've lost sound. Dmesg content:
hdac0: HDA Driver Revision: 20090624_0136
hdac0: [ITHREAD]
Starting default moused
.
mixer:
unknown device: mic
(or ogain)
hdac0: HDA Codec #0: Conexant CX20561 (Hermosa)
pcm0: HDA Conexant CX20561 (Hermosa) PCM #0 Analog at cad 0 nid 1 on hdac0
It was
Quoting Daniel O'Connor docon...@gsoft.com.au:
On Tue, 27 Oct 2009, Artem Belevich wrote:
Unfortunately it appears ZFS doesn't search for GPT partitions so
if you have them and swap the drives around you need to fix it up
manually.
When I used raw disk or GPT partitions, if disk order was
I see an annoying behaviour with NFS over TCP. It happens both with nfs
and newnfs. This is with FreeBSD/amd64 8.0-RC1 as client. The server is
some Linux or perhaps Solaris, I'm not entirely sure.
After trying to find something in packet traces, I think I have found
something.
The scenario
I see an annoying behaviour with NFS over TCP. It happens both with nfs
and newnfs. This is with FreeBSD/amd64 8.0-RC1 as client. The server is
some Linux or perhaps Solaris, I'm not entirely sure.
I used NFS over TCP on a 7.2 client without problems against a Solaris
NFS server. When I upgraded to
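When chasing a transport-level NFS problem like this, it can help to mount the same export over TCP and over UDP and compare packet traces. A sketch using mount_nfs(8), with the server and export path as placeholders:

```sh
# Mount over TCP (the transport showing the problem) ...
mount_nfs -o tcp,nfsv3 server:/export /mnt

# ... then remount over UDP to check whether the stall is
# specific to the TCP transport.
umount /mnt
mount_nfs -o udp,nfsv3 server:/export /mnt
```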
On Wed, 28 Oct 2009, jfar...@goldsword.com wrote:
Check the archives for stable@ and f...@. I believe that there was a
thread not that long ago detailing exactly how to do that. IIRC,
while it took a bit of work, it wasn't difficult.
Hmm, do you have any idea what the subject was? I'm
We believe ptrace has a problem in 6.3; we have not tried other
releases. The same code, however, exists in 7.1.
The bug was first encountered in gdb...
(gdb) det
Detaching from program: /usr/local/bin/emacs, process 66217
(gdb) att 66224
Attaching to program: /usr/local/bin/emacs, process
Hi,
I have devfs mounted in a chroot jail, with just the basic device nodes
visible:
fstab:/dev/null /usr/data/home/scp/dev devfs rw 0 0
rc.conf: devfs_set_rulesets="/usr/data/home/scp/dev=devfsrules_hide_all /usr/data/home/scp/dev=devfsrules_unhide_basic"
When a
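As an aside, the rulesets named in rc.conf can also be applied by hand with devfs(8), which is handy for checking what the jail's devfs actually exposes. A sketch assuming the stock ruleset numbers from /etc/defaults/devfs.rules (hide_all is ruleset 1, unhide_basic is ruleset 2):

```sh
# Apply the hide-all ruleset to the jail's devfs mount ...
devfs -m /usr/data/home/scp/dev rule -s 1 applyset

# ... then unhide the basic nodes (null, zero, random, ...).
devfs -m /usr/data/home/scp/dev rule -s 2 applyset

# Inspect what is now visible inside the jail.
ls /usr/data/home/scp/dev
```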
* Oliver Brandmueller o...@e-gitt.net [2009-10-27 09:56:48 +0100]:
Hi,
On Tue, Oct 27, 2009 at 11:25:16AM +0300, al...@ulgsm.ru wrote:
For the last 2 years (perhaps since we began using the bdb backend), we have
been getting slapd crashes under read load.
Under low load the system runs with monit monitoring and fails 1-3 times in