ahmad syarifudin added you as a friend on Boxbe

2009-07-09 Thread ahmad syarifudin

   Boxbe | Contact Request

   I use Boxbe to manage my inbox. I think Boxbe can help you, too!

   Here's the link:

   [1]https://www.boxbe.com/register?tc=211772005_1203235125

   -ahmad

   Please do not reply directly to this email. This message was sent at
   the request of syarifu...@gmail.com. Boxbe will not use your email
   address for any other purpose.

   If you would prefer not to receive any further invitations from Boxbe
   members, [2]click here.

   Boxbe, Inc. | 2390 Chestnut Street #201 | San Francisco, CA 94123

References

   1. https://www.boxbe.com/register?tc=211772005_1203235125
   2. 
https://www.boxbe.com/unsubscribe?email=freebsd-sta...@freebsd.org&tc=211772005_1203235125
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

Re: smbfs panic when lost connection or unmount --force

2009-07-09 Thread Attilio Rao
2009/7/10 Oliver Pinter :
> Hi all!
>
> A kernel panic occurs when force-unmounting the smbfs volume or when the
> connection to the Samba server is lost.
>
> --
> The OS is:
>
>
> kern.ostype: FreeBSD
> kern.osrelease: 7.2-STABLE
> kern.osrevision: 199506
> kern.version: FreeBSD 7.2-STABLE #4: Sat Jun 27 21:44:32 CEST 2009
>r...@oliverp:/usr/obj/usr/src/sys/stable
> kern.osreldate: 702103
>
> --
> make.conf:
>
>
> CPUTYPE?=core2
> CFLAGS= -O2 -fno-strict-aliasing -pipe
> MODULES_OVERRIDE=smbfs libiconv libmchain zfs opensolaris drm cd9660
> cd9660_iconv
>
> --
> panic message:
>
> Jul 10 01:58:39 oliverp syslogd: kernel boot file is /boot/kernel/kernel
> Jul 10 01:58:39 oliverp kernel: kernel trap 12 with interrupts disabled
> Jul 10 01:58:39 oliverp kernel:
> Jul 10 01:58:39 oliverp kernel:
> Jul 10 01:58:39 oliverp kernel: Fatal trap 12: page fault while in kernel mode
> Jul 10 01:58:39 oliverp kernel: cpuid = 2; apic id = 02
> Jul 10 01:58:39 oliverp kernel: fault virtual address   = 0x30
> Jul 10 01:58:39 oliverp kernel: fault code  = supervisor read 
> data,
> page not present
> Jul 10 01:58:39 oliverp kernel: instruction pointer = 
> 0x8:0x80327fd0
> Jul 10 01:58:39 oliverp kernel: stack pointer   = 
> 0x10:0xff8078360940
> Jul 10 01:58:39 oliverp kernel: frame pointer   = 
> 0x10:0xff0004c31390
> Jul 10 01:58:39 oliverp kernel: code segment= base 0x0, limit
> 0xf, type 0x1b
> Jul 10 01:58:39 oliverp kernel: = DPL 0, pres 1, long 1, def32 0, gran 1
> Jul 10 01:58:39 oliverp kernel: processor eflags= resume, IOPL = 0
> Jul 10 01:58:39 oliverp kernel: current process = 60406 (smbiod0)
> Jul 10 01:58:39 oliverp kernel: trap number = 12
> Jul 10 01:58:39 oliverp kernel: panic: page fault
> Jul 10 01:58:39 oliverp kernel: cpuid = 2
> Jul 10 01:58:39 oliverp kernel: Uptime: 6h51m16s
> Jul 10 01:58:39 oliverp kernel: Physical memory: 4087 MB
> Jul 10 01:58:39 oliverp kernel: Dumping 2448 MB:Copyright (c)
> 1992-2009 The FreeBSD Project.

Can you at least produce a backtrace for that?
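
A minimal sketch of getting one from the crash dump with kgdb(1), assuming
savecore(8) saved it to /var/crash -- use kernel.debug from the build
directory instead if it is available, and adjust the vmcore number as needed:

  kgdb /boot/kernel/kernel /var/crash/vmcore.0
  (kgdb) bt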

Thanks,
Attilio


-- 
Peace can only be achieved by understanding - A. Einstein
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


smbfs panic when lost connection or unmount --force

2009-07-09 Thread Oliver Pinter
Hi all!

A kernel panic occurs when force-unmounting the smbfs volume or when the
connection to the Samba server is lost.

--
The OS is:


kern.ostype: FreeBSD
kern.osrelease: 7.2-STABLE
kern.osrevision: 199506
kern.version: FreeBSD 7.2-STABLE #4: Sat Jun 27 21:44:32 CEST 2009
r...@oliverp:/usr/obj/usr/src/sys/stable
kern.osreldate: 702103

--
make.conf:


CPUTYPE?=core2
CFLAGS= -O2 -fno-strict-aliasing -pipe
MODULES_OVERRIDE=smbfs libiconv libmchain zfs opensolaris drm cd9660
cd9660_iconv

--
panic message:

Jul 10 01:58:39 oliverp syslogd: kernel boot file is /boot/kernel/kernel
Jul 10 01:58:39 oliverp kernel: kernel trap 12 with interrupts disabled
Jul 10 01:58:39 oliverp kernel:
Jul 10 01:58:39 oliverp kernel:
Jul 10 01:58:39 oliverp kernel: Fatal trap 12: page fault while in kernel mode
Jul 10 01:58:39 oliverp kernel: cpuid = 2; apic id = 02
Jul 10 01:58:39 oliverp kernel: fault virtual address   = 0x30
Jul 10 01:58:39 oliverp kernel: fault code  = supervisor read data,
page not present
Jul 10 01:58:39 oliverp kernel: instruction pointer = 0x8:0x80327fd0
Jul 10 01:58:39 oliverp kernel: stack pointer   = 
0x10:0xff8078360940
Jul 10 01:58:39 oliverp kernel: frame pointer   = 
0x10:0xff0004c31390
Jul 10 01:58:39 oliverp kernel: code segment= base 0x0, limit
0xf, type 0x1b
Jul 10 01:58:39 oliverp kernel: = DPL 0, pres 1, long 1, def32 0, gran 1
Jul 10 01:58:39 oliverp kernel: processor eflags= resume, IOPL = 0
Jul 10 01:58:39 oliverp kernel: current process = 60406 (smbiod0)
Jul 10 01:58:39 oliverp kernel: trap number = 12
Jul 10 01:58:39 oliverp kernel: panic: page fault
Jul 10 01:58:39 oliverp kernel: cpuid = 2
Jul 10 01:58:39 oliverp kernel: Uptime: 6h51m16s
Jul 10 01:58:39 oliverp kernel: Physical memory: 4087 MB
Jul 10 01:58:39 oliverp kernel: Dumping 2448 MB:Copyright (c)
1992-2009 The FreeBSD Project.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - thanks

2009-07-09 Thread Andrew Snow

Patrick M. Hausen wrote:

You cannot escape the poor write performance of RAID 5 and
comparable setups, with or without hardware. No matter how
much you cache, at some point the blocks must be written to disk.


ZFS RAIDZ works differently: it writes variable-sized blocks to the disks 
based on the incoming data stream, grouped into transactions.


This makes it very efficient for clustering multi-threaded random I/O 
writes together into large physical disk writes.


(The downside is it has to read the entire "stripe" even if you are only 
reading one byte, in order to calculate and verify the checksum.)



- Andrew
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: process stuck in "umtxn"

2009-07-09 Thread Dan Nelson
In the last episode (Jul 09), Mikhail T. said:
> I noticed that my build of KDE4 ports got suspiciously quiet...  Pressing
> Ctrl-T shows:
> 
> load: 0.01  cmd: automoc4 78507 [umtxn] 0.00u 0.00s 0% 3552k
> 
> According to gdb, the process' stack is:
> 
> #0  0x000800d9620a in __error () from /lib/libthr.so.3
> #1  0x000800d95f0c in __error () from /lib/libthr.so.3
> #2  0x000800d911eb in pthread_mutex_getyieldloops_np () from 
> /lib/libthr.so.3
> #3  0x000800f0941b in _malloc_postfork () from /lib/libc.so.7
> #4  0x000800d93c60 in fork () from /lib/libthr.so.3
> #5  0x000800778e4a in QProcessPrivate::startProcess () from 
> /opt/lib/qt4/libQtCore.so.4
> #6  0x00080073f2c6 in QProcess::start () from 
> /opt/lib/qt4/libQtCore.so.4
> 
> 
> My system is 7.2-PRERELEASE/amd64 from April 9th. Please advise.
> Thanks! Yours,

That could be due to the following bug, fixed after 7.2 was released. 
Applying the patch and rebuilding libc should be all you need to fix it:

http://security.freebsd.org/advisories/FreeBSD-EN-09:04.fork.asc
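
Roughly, and assuming /usr/src matches the running 7.2 sources (the patch
file itself is referenced from the advisory):

  cd /usr/src
  patch < /path/to/fork.patch        # patch as distributed with EN-09:04
  cd lib/libc
  make obj && make depend && make && make install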

-- 
Dan Nelson
dnel...@allantgroup.com
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


pppoe on a VLAN interface issues (RELENG_7)

2009-07-09 Thread Mike Tancsa
I wanted to share a DSL modem that is in bridge mode between two 
FreeBSD boxes that make use of many VLANs on a pair of em 
interfaces.  In other words, I can't dedicate a physical interface to 
just the DSL.  Normally, when creating VLANs, I like to create them like so:


/sbin/ifconfig em1.172 create 192.168.1.3/24


em1.172: flags=8843 metric 0 mtu 1500
options=3
ether 00:30:48:d2:d6:11
inet 192.168.1.3 netmask 0xff00 broadcast 192.168.1.255
media: Ethernet autoselect (1000baseTX )
status: active
vlan: 172 parent interface: em1

However, if I try to bring up PPPoE using em1.172 as the PPPoE 
device, it does not work.



Jul  9 14:34:17 fw02 ppp[1484]: tun0: Phase: deflink: closed -> opening
Jul  9 14:34:17 fw02 kernel: ng_ether_attach: can't name node em1.10
Jul  9 14:34:17 fw02 kernel: ng_ether_attach: can't name node em1.172
Jul  9 14:34:17 fw02 kernel: ng_ether_attach: can't name node em1.24
Jul  9 14:34:17 fw02 kernel: WARNING: attempt to 
net_add_domain(netgraph) after domainfinalize()
Jul  9 14:34:17 fw02 kernel: Jul  9 14:34:17 fw02 kernel: 
ng_ether_attach: can't name node em1.10
Jul  9 14:34:17 fw02 kernel: Jul  9 14:34:17 fw02 kernel: 
ng_ether_attach: can't name node em1.172
Jul  9 14:34:17 fw02 kernel: Jul  9 14:34:17 fw02 kernel: 
ng_ether_attach: can't name node em1.24
Jul  9 14:34:17 fw02 ppp[1484]: tun0: Warning: em1.172: Cannot send a 
netgraph message: Invalid argument

Jul  9 14:34:17 fw02 ppp[1484]: tun0: Chat: Failed to open device
Jul  9 14:34:17 fw02 ppp[1484]: tun0: Phase: deflink: Enter pause 
(30) for redialing.

Jul  9 14:34:47 fw02 ppp[1484]: tun0: Chat: deflink: Redial timer expired.
Jul  9 14:34:47 fw02 ppp[1484]: tun0: Warning: em1.172: Cannot send a 
netgraph message: Invalid argument

Jul  9 14:34:47 fw02 ppp[1484]: tun0: Warning: deflink: PPPoE: unknown host
Jul  9 14:34:47 fw02 ppp[1484]: tun0: Warning: deflink: PPPoE: unknown host
Jul  9 14:34:47 fw02 ppp[1484]: tun0: Warning: deflink: Device 
(PPPoE:em1.172) must begin with a '/', a '!' or contain at least one ':'

Jul  9 14:34:47 fw02 ppp[1484]: tun0: Chat: Failed to open device
Jul  9 14:34:47 fw02 ppp[1484]: tun0: Phase: deflink: Enter pause 
(30) for redialing.

Jul  9 14:34:50 fw02 ppp[1484]: tun0: Phase: Signal 15, terminate.


BUT, if I make the vlan device the "old way"

/sbin/ifconfig vlan172 create 192.168.1.3/24 vlandev em1 vlan 172

it works
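
For reference, the relevant /etc/ppp/ppp.conf fragment looks roughly like
this -- the label "myisp" and the credentials are placeholders:

myisp:
 set device PPPoE:vlan172
 set authname myusername
 set authkey mypassword
 set dial
 set login
 add default HISADDR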


It still complains about the other 2 interfaces, but it does not seem 
to interfere with the PPPoE connection


Jul  9 14:48:15 macs-fw02 kernel: ng_ether_attach: can't name node em1.10
Jul  9 14:48:15 macs-fw02 kernel: ng_ether_attach: can't name node em1.24
Jul  9 14:48:15 macs-fw02 kernel: Jul  9 14:48:15 macs-fw02 kernel: 
ng_ether_attach: can't name node em1.10
Jul  9 14:48:15 macs-fw02 kernel: Jul  9 14:48:15 macs-fw02 kernel: 
ng_ether_attach: can't name node em1.24
Jul  9 14:48:16 macs-fw02 kernel: WARNING: attempt to 
net_add_domain(netgraph) after domainfinalize()


Is there some reason this does not work ?

---Mike




Mike Tancsa,  tel +1 519 651 3400
Sentex Communications,  m...@sentex.net
Providing Internet since 1994  www.sentex.net
Cambridge, Ontario Canada www.sentex.net/mike

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


process stuck in "umtxn"

2009-07-09 Thread Mikhail T.
Hello!

I noticed that my build of KDE4 ports got suspiciously quiet...
Pressing Ctrl-T shows:

load: 0.01  cmd: automoc4 78507 [umtxn] 0.00u 0.00s 0% 3552k

According to gdb, the process' stack is:

#0  0x000800d9620a in __error () from /lib/libthr.so.3
#1  0x000800d95f0c in __error () from /lib/libthr.so.3
#2  0x000800d911eb in pthread_mutex_getyieldloops_np () from
/lib/libthr.so.3
#3  0x000800f0941b in _malloc_postfork () from /lib/libc.so.7
#4  0x000800d93c60 in fork () from /lib/libthr.so.3
#5  0x000800778e4a in QProcessPrivate::startProcess () from
/opt/lib/qt4/libQtCore.so.4
#6  0x00080073f2c6 in QProcess::start () from
/opt/lib/qt4/libQtCore.so.4


My system is 7.2-PRERELEASE/amd64 from April 9th. Please advise.
Thanks! Yours,

-mi

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: FreeBSD 8.0-BETA1 Available

2009-07-09 Thread Andreas Tobler

Ken Smith wrote:

On Wed, 2009-07-08 at 22:01 +0200, Andreas Tobler wrote:
I was successful in installing the image, although a few packages did 
not install. I chose the Kern-Developer distribution set. I guess the packages are 
missing from the .img?


Correct, no packages with BETA1.  It's probably the documentation
packages it went looking for and couldn't find.  I'll be providing at
least the docs packages with BETA2.  Not sure if I'll start trying to
provide a larger set of packages for the DVD or wait for BETA3 for that
(leaning towards waiting at the moment).


Ok, that means no doc/dict/docproj/man/ports/src (and the ones I didn't 
choose) in the BETA1?


I did the install again with a real stick and everything went fine 
besides not finding the above packages.


Thanks,
Andreas
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS: zpool scrub lockup

2009-07-09 Thread Thomas Ronner

Andriy Gapon wrote:


For the watchdog to fire it first needs to be enabled, e.g. by starting watchdogd.
Try to run /etc/rc.d/watchdogd onestart before zfs start and then wait for about
16 seconds (default timeout).


I tried this. When only running 'watchdog' (without starting the daemon) 
it enters the debugger in 16 seconds. The only way to continue is 
issuing the 'watchdog' debugger command (I presume this disables the 
watchdog?), followed by 'c'.


But when re-enabling the watchdog by running /etc/rc.d/watchdogd start 
(I already added watchdogd to rc.conf)





If that doesn't help, then it seems that the only option would be debugging via
serial console. Or manually generating NMI, if your system has an NMI
button/switch/jumper.


No, I don't have a manual NMI thingy.

Is this: 
http://www.freebsd.org/doc/en_US.ISO8859-1/books/developers-handbook/kerneldebug-online-gdb.html 
(10.6 On-Line Kernel Debugging Using Remote GDB) the debugging via 
serial console you're referring to?




Thanks,
Thomas
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS: zpool scrub lockup

2009-07-09 Thread Andriy Gapon
on 09/07/2009 20:21 Thomas Ronner said the following:
> I put the following in my kernel config:
> 
> # debugging
> options KDB
> options DDB
> options GDB
> options BREAK_TO_DEBUGGER
> options INVARIANTS
> options INVARIANT_SUPPORT
> options WITNESS
> options WITNESS_KDB
> options DEBUG_VFS_LOCKS
> options DIAGNOSTIC
> options SW_WATCHDOG
> 
> When I send a BREAK from my serial console it enters the debugger, so
> that works. But when I start ZFS (/etc/rc.d/zfs start) it freezes again
> and BREAK doesn't enter the debugger. I'll try playing with the watchdog
> now, but I doubt this will help. Any clues?
> 

For the watchdog to fire it first needs to be enabled, e.g. by starting watchdogd.
Try to run /etc/rc.d/watchdogd onestart before zfs start and then wait for about
16 seconds (default timeout).
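
In other words, roughly:

  /etc/rc.d/watchdogd onestart
  /etc/rc.d/zfs start      # then wait ~16 seconds for the watchdog to fire
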
If that doesn't help, then it seems that the only option would be debugging via
serial console. Or manually generating NMI, if your system has an NMI
button/switch/jumper.

-- 
Andriy Gapon
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS: zpool scrub lockup

2009-07-09 Thread Thomas Ronner

Thomas Ronner wrote:

Hi Andriy,

Andriy Gapon wrote:

on 08/07/2009 23:30 Thomas Ronner said the following:

Hello,

I don't know whether this is the right list; maybe freebsd-fs is more
appropriate. So please redirect me there if this isn't the right place.

My system (i386, Athlon XP) locks hard when scrubbing a certain pool. It
has been doing this for at least a couple of months now. For this reason
I upgraded to 7.2-STABLE recently as this had the latest ZFS bits, but
this doesn't help. It even makes the problem worse: in previous versions
I just hit the reset button and forgot about it, but now it "remembers"
that it was scrubbing (I presume) and tries to resume at the exact same
place, locking up again. This means I haven't been able to mount these
ZFS volumes successfully: the moment I do a /etc/rc.d/zfs start from
single user mode (I have my /, /var and /usr on UFS) it locks up in a
couple of seconds. And by locks up I really mean locks up. No panic,
nothing. Pressing the reset button on the chassis is the only way to
reboot. 


You can try adding the SW_WATCHDOG option to your kernel, which might help 
catching the lockup. Things like INVARIANTS and WITNESS might help the 
debugging too.
Serial console for remote debugging would be very useful too.



I'll definitely try those and report back on this list. Thanks for your 
answer!


I put the following in my kernel config:

# debugging
options KDB
options DDB
options GDB
options BREAK_TO_DEBUGGER
options INVARIANTS
options INVARIANT_SUPPORT
options WITNESS
options WITNESS_KDB
options DEBUG_VFS_LOCKS
options DIAGNOSTIC
options SW_WATCHDOG

When I send a BREAK from my serial console it enters the debugger, so 
that works. But when I start ZFS (/etc/rc.d/zfs start) it freezes again 
and BREAK doesn't enter the debugger. I'll try playing with the watchdog 
now, but I doubt this will help. Any clues?



Thanks,
Thomas
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS: zpool scrub lockup

2009-07-09 Thread Thomas Ronner

Hi Andriy,

Andriy Gapon wrote:

on 08/07/2009 23:30 Thomas Ronner said the following:

Hello,

I don't know whether this is the right list; maybe freebsd-fs is more
appropriate. So please redirect me there if this isn't the right place.

My system (i386, Athlon XP) locks hard when scrubbing a certain pool. It
has been doing this for at least a couple of months now. For this reason
I upgraded to 7.2-STABLE recently as this had the latest ZFS bits, but
this doesn't help. It even makes the problem worse: in previous versions
I just hit the reset button and forgot about it, but now it "remembers"
that it was scrubbing (I presume) and tries to resume at the exact same
place, locking up again. This means I haven't been able to mount these
ZFS volumes successfully: the moment I do a /etc/rc.d/zfs start from
single user mode (I have my /, /var and /usr on UFS) it locks up in a
couple of seconds. And by locks up I really mean locks up. No panic,
nothing. Pressing the reset button on the chassis is the only way to
reboot. 


You can try adding the SW_WATCHDOG option to your kernel, which might help 
catching the lockup. Things like INVARIANTS and WITNESS might help the 
debugging too.
Serial console for remote debugging would be very useful too.



I'll definitely try those and report back on this list. Thanks for your 
answer!



Regards, Thomas
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS: zpool scrub lockup

2009-07-09 Thread Andriy Gapon
on 08/07/2009 23:30 Thomas Ronner said the following:
> Hello,
> 
> I don't know whether this is the right list; maybe freebsd-fs is more
> appropriate. So please redirect me there if this isn't the right place.
> 
> My system (i386, Athlon XP) locks hard when scrubbing a certain pool. It
> has been doing this for at least a couple of months now. For this reason
> I upgraded to 7.2-STABLE recently as this had the latest ZFS bits, but
> this doesn't help. It even makes the problem worse: in previous versions
> I just hit the reset button and forgot about it, but now it "remembers"
> that it was scrubbing (I presume) and tries to resume at the exact same
> place, locking up again. This means I haven't been able to mount these
> ZFS volumes successfully: the moment I do a /etc/rc.d/zfs start from
> single user mode (I have my /, /var and /usr on UFS) it locks up in a
> couple of seconds. And by locks up I really mean locks up. No panic,
> nothing. Pressing the reset button on the chassis is the only way to
> reboot. 

You can try adding the SW_WATCHDOG option to your kernel, which might help 
catching the lockup. Things like INVARIANTS and WITNESS might help the 
debugging too.
Serial console for remote debugging would be very useful too.

-- 
Andriy Gapon
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - thanks

2009-07-09 Thread Steve Bertrand
Patrick M. Hausen wrote:

> So we switched to GEOM for mirroring a long time ago for
> one simple reason: hardware replacement.

Amen.

Steve


smime.p7s
Description: S/MIME Cryptographic Signature


Re: ZFS - thanks

2009-07-09 Thread Patrick M. Hausen
Hello,

On Thu, Jul 09, 2009 at 03:21:48PM +0200, Tonix (Antonio Nati) wrote:

> I see a lot of people advising to use ZFS RAID instead of HW RAID.
> I'm going to use HP duplicated iSCSI subsystems, which have autonomous 
> RAID, so I'm confused about this advice.
> Following the ZFS RAID stream, should I keep each disk separate in iSCSI and 
> let ZFS do the RAID job?
> Shouldn't HW RAID be (a lot) more efficient?

You cannot escape the poor write performance of RAID 5 and
comparable setups, with or without hardware. No matter how
much you cache, at some point the blocks must be written to disk.

And for RAID 1 or 1+0 we found the impact on modern CPUs
negligible.

So we switched to GEOM for mirroring a long time ago for
one simple reason: hardware replacement.

With a "hardware RAID" you need the precise brand and model
of the controller (worst case) to read a disk with valuable
data on it in case of a complete machine failure and replacement.
What if that's not available any more?

With software you just need an arbitrary machine with the
matching HDD interface (P-ATA, S-ATA, SCSI, ...)

This is my first attempt at software RAID-other-than-1,
but I'm really pleased with the results so far.

The system is a Fujitsu (former Fujitsu Siemens) SX 40
JBOD with a SAS host interface and SAS or S-ATA disks.
You can daisy chain 3 of these boxes with twelve disks
each to one server.
The host adapter in my server  doesn't have any RAID functions.
LSI something, easily replaced.

Reasonably priced, nice, scalable solution for our needs.
It's a datastore, so it doesn't do anything but backup
and restore. I would not use ZFS for something that needs
to be "up" 24x7 just yet. We had a couple of crashes before
we changed some memory parameters.

Kind regards,
Patrick
-- 
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
i...@punkt.de   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - thanks

2009-07-09 Thread Tonix (Antonio Nati)

Patrick M. Hausen ha scritto:

Hello,

On Thu, Jul 09, 2009 at 09:17:35AM -0300, Nenhum_de_Nos wrote:

  

So now we have this setup:

NAME   STATE READ WRITE CKSUM
zfsONLINE   0 0 0
  raidz2   ONLINE   0 0 0
label/disk100  ONLINE   0 0 0
label/disk101  ONLINE   0 0 0
label/disk102  ONLINE   0 0 0
label/disk103  ONLINE   0 0 0
label/disk104  ONLINE   0 0 0
label/disk105  ONLINE   0 0 0
  raidz2   ONLINE   0 0 0
label/disk106  ONLINE   0 0 0
label/disk107  ONLINE   0 0 0
label/disk108  ONLINE   0 0 0
label/disk109  ONLINE   0 0 0
label/disk110  ONLINE   0 0 0
label/disk111  ONLINE   0 0 0

which will get another enclosure with 6 750-GB-disks, soon.
  


  

I've always been curious about this. is said not good to have many disks
in one pool. ok then. but this layout you're using in here will have the
same effect as the twelve disks in only one pool ? (the space here is the
sum of both pools ?)



It is not good to have too many disks in one group. What you see
above is one pool with two raidz2 groups.

As far as I understood the documentation after that helpful
comment on this list, this is the recommended configuration.

---
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

"The recommended number of disks per group is between 3 and 9.
 If you have more disks, use multiple groups."
---

The result is, of course, one big pool with lots of storage
space, but the overhead necessary for redundancy is roughly
twice that of my "dangerous" twelve-disk configuration.

So I lost the equivalent of two disks or about 1 TB here.
Fast, reliable, cheap - pick any two ;-)

Kind regards,
Patrick
  


I see a lot of people advising to use ZFS RAID instead of HW RAID.
I'm going to use HP duplicated iSCSI subsystems, which have autonomous 
RAID, so I'm confused about this advice.
Following the ZFS RAID stream, should I keep each disk separate in iSCSI 
and let ZFS do the RAID job?

Shouldn't HW RAID be (a lot) more efficient?
What would be the downside of using HW RAID with ZFS?

Thanks,

Tonino




--

   in...@zioni   Interazioni di Antonio Nati 
   http://www.interazioni.it   to...@interazioni.it   



___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


pw groupadd/useradd fail when the nscd cache is used for name/group resolution

2009-07-09 Thread Vlad Galu
I've stumbled upon this while installing postgres. In
/etc/nsswitch.conf I had "group: cache files compat" and "passwd:
cache files compat". Once I commented them out things started working
again. Before the change, this is how it looked like:

-- cut here --
[r...@vgalu /usr/ports/databases/postgresql84-server]# pw group add pgsql -g 70
pw: group disappeared during update
[r...@vgalu /usr/ports/databases/postgresql84-server]# pw group add pgsql -g 70
pw: group 'pgsql' already exists
[r...@vgalu /usr/ports/databases/postgresql84-server]#
-- and here --

Shouldn't 'files' be used upon a cache miss? If this is a PEBKAC,
sorry for the noise.
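
For reference, the relevant nsswitch.conf lines and the workaround were
roughly:

  # before -- pw group add fails:
  group:  cache files compat
  passwd: cache files compat

  # after -- lines commented out, falling back to the defaults:
  #group:  cache files compat
  #passwd: cache files compat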
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - thanks

2009-07-09 Thread Dan Naumov
On Thu, Jul 9, 2009 at 3:30 PM, Nenhum_de_Nos wrote:
>
> On Thu, July 9, 2009 09:25, Dan Naumov wrote:
>> On Thu, Jul 9, 2009 at 3:17 PM, Nenhum_de_Nos
>> wrote:
>>>
>>> On Thu, July 9, 2009 08:25, Patrick M. Hausen wrote:
 Hi, all,

 I just wanted to say a big big thank you to Kip and all the
 developers who made ZFS on FreeBSD real.

 And to everyone who provided helpful comments in the
 last couple of days.

 I had to delete and rebuild my zpool to switch from a
 12-disk raidz2 to two 6-disk ones, but yesterday I could
 replace the raw devices with glabel devices and practice
 replacing a failed disk at the same time. ;-)

 So now we have this setup:

       NAME               STATE     READ WRITE CKSUM
       zfs                ONLINE       0     0     0
         raidz2           ONLINE       0     0     0
           label/disk100  ONLINE       0     0     0
           label/disk101  ONLINE       0     0     0
           label/disk102  ONLINE       0     0     0
           label/disk103  ONLINE       0     0     0
           label/disk104  ONLINE       0     0     0
           label/disk105  ONLINE       0     0     0
         raidz2           ONLINE       0     0     0
           label/disk106  ONLINE       0     0     0
           label/disk107  ONLINE       0     0     0
           label/disk108  ONLINE       0     0     0
           label/disk109  ONLINE       0     0     0
           label/disk110  ONLINE       0     0     0
           label/disk111  ONLINE       0     0     0

 which will get another enclosure with 6 750-GB-disks, soon.

 I really like the way I can manage storage from the operating
 system without proprietary controller management software or
 even rebooting into the BIOS.

 Kind regards,
 Patrick
>>>
>>> I've always been curious about this. is said not good to have many disks
>>> in one pool. ok then. but this layout you're using in here will have the
>>> same effect as the twelve disks in only one pool ? (the space here is
>>> the
>>> sum of both pools ?)
>>
>> Having an enormous pool consisting of dozens of disks is not the
>> actual problem. Having the pool consist of large (> 9 disks)
>> raidz/raidz2 "groups" is.
>>
>> A single pool consisting of 5 x 8 disk raidz (40 disks total) is fine.
>> A single pool consisting of a 40 (or any amount bigger than 9) disk
>> raidz is not.
>
> thanks. but the final file system in both these cases is the same? (what
> I'll see in df -h).

No.

A single pool consisting of 5 x 8 disk raidz will have 40 disks total,
35 disks worth of space
A single pool consisting of 5 x 8 disk raidz2 will have 40 disks
total, 30 disks worth of space

A single 40 disk raidz (DO NOT DO THIS) will have 40 disks total, 39
disks worth of space and will definitely explode on you sooner rather
than later (probably on the first import, export or scrub).

- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - thanks

2009-07-09 Thread Patrick M. Hausen
Hello,

On Thu, Jul 09, 2009 at 09:17:35AM -0300, Nenhum_de_Nos wrote:

> > So now we have this setup:
> >
> > NAME   STATE READ WRITE CKSUM
> > zfsONLINE   0 0 0
> >   raidz2   ONLINE   0 0 0
> > label/disk100  ONLINE   0 0 0
> > label/disk101  ONLINE   0 0 0
> > label/disk102  ONLINE   0 0 0
> > label/disk103  ONLINE   0 0 0
> > label/disk104  ONLINE   0 0 0
> > label/disk105  ONLINE   0 0 0
> >   raidz2   ONLINE   0 0 0
> > label/disk106  ONLINE   0 0 0
> > label/disk107  ONLINE   0 0 0
> > label/disk108  ONLINE   0 0 0
> > label/disk109  ONLINE   0 0 0
> > label/disk110  ONLINE   0 0 0
> > label/disk111  ONLINE   0 0 0
> >
> > which will get another enclosure with 6 750-GB-disks, soon.

> I've always been curious about this. is said not good to have many disks
> in one pool. ok then. but this layout you're using in here will have the
> same effect as the twelve disks in only one pool ? (the space here is the
> sum of both pools ?)

It is not good to have too many disks in one group. What you see
above is one pool with two raidz2 groups.

As far as I understood the documentation after that helpful
comment on this list, this is the recommended configuration.

---
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

"The recommended number of disks per group is between 3 and 9.
 If you have more disks, use multiple groups."
---

The result is, of course, one big pool with lots of storage
space, but the overhead necessary for redundancy is roughly
twice that of my "dangerous" twelve-disk configuration.

So I lost the equivalent of two disks or about 1 TB here.
Fast, reliable, cheap - pick any two ;-)

Kind regards,
Patrick
-- 
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
i...@punkt.de   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - thanks

2009-07-09 Thread Nenhum_de_Nos

On Thu, July 9, 2009 09:25, Dan Naumov wrote:
> On Thu, Jul 9, 2009 at 3:17 PM, Nenhum_de_Nos
> wrote:
>>
>> On Thu, July 9, 2009 08:25, Patrick M. Hausen wrote:
>>> Hi, all,
>>>
>>> I just wanted to say a big big thank you to Kip and all the
>>> developers who made ZFS on FreeBSD real.
>>>
>>> And to everyone who provided helpful comments in the
>>> last couple of days.
>>>
>>> I had to delete and rebuild my zpool to switch from a
>>> 12-disk raidz2 to two 6-disk ones, but yesterday I could
>>> replace the raw devices with glabel devices and practice
>>> replacing a failed disk at the same time. ;-)
>>>
>>> So now we have this setup:
>>>
>>>       NAME               STATE     READ WRITE CKSUM
>>>       zfs                ONLINE       0     0     0
>>>         raidz2           ONLINE       0     0     0
>>>           label/disk100  ONLINE       0     0     0
>>>           label/disk101  ONLINE       0     0     0
>>>           label/disk102  ONLINE       0     0     0
>>>           label/disk103  ONLINE       0     0     0
>>>           label/disk104  ONLINE       0     0     0
>>>           label/disk105  ONLINE       0     0     0
>>>         raidz2           ONLINE       0     0     0
>>>           label/disk106  ONLINE       0     0     0
>>>           label/disk107  ONLINE       0     0     0
>>>           label/disk108  ONLINE       0     0     0
>>>           label/disk109  ONLINE       0     0     0
>>>           label/disk110  ONLINE       0     0     0
>>>           label/disk111  ONLINE       0     0     0
>>>
>>> which will get another enclosure with 6 750-GB-disks, soon.
>>>
>>> I really like the way I can manage storage from the operating
>>> system without proprietary controller management software or
>>> even rebooting into the BIOS.
>>>
>>> Kind regards,
>>> Patrick
>>
>> I've always been curious about this. is said not good to have many disks
>> in one pool. ok then. but this layout you're using in here will have the
>> same effect as the twelve disks in only one pool ? (the space here is
>> the
>> sum of both pools ?)
>
> Having an enormous pool consisting of dozens of disks is not the
> actual problem. Having the pool consist of large (> 9 disks)
> raidz/raidz2 "groups" is.
>
> A single pool consisting of 5 x 8 disk raidz (40 disks total) is fine.
> A single pool consisting of a 40 (or any amount bigger than 9) disk
> raidz is not.

thanks. but the final file system in both these cases is the same? (what
I'll see in df -h).

matheus

-- 
We will call you cygnus,
The God of balance you shall be

A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

http://en.wikipedia.org/wiki/Posting_style
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - thanks

2009-07-09 Thread Dan Naumov
On Thu, Jul 9, 2009 at 3:17 PM, Nenhum_de_Nos wrote:
>
> On Thu, July 9, 2009 08:25, Patrick M. Hausen wrote:
>> Hi, all,
>>
>> I just wanted to say a big big thank you to Kip and all the
>> developers who made ZFS on FreeBSD real.
>>
>> And to everyone who provided helpful comments in the
>> last couple of days.
>>
>> I had to delete and rebuild my zpool to switch from a
>> 12-disk raidz2 to two 6-disk ones, but yesterday I could
>> replace the raw devices with glabel devices and practice
>> replacing a failed disk at the same time. ;-)
>>
>> So now we have this setup:
>>
>>       NAME               STATE     READ WRITE CKSUM
>>       zfs                ONLINE       0     0     0
>>         raidz2           ONLINE       0     0     0
>>           label/disk100  ONLINE       0     0     0
>>           label/disk101  ONLINE       0     0     0
>>           label/disk102  ONLINE       0     0     0
>>           label/disk103  ONLINE       0     0     0
>>           label/disk104  ONLINE       0     0     0
>>           label/disk105  ONLINE       0     0     0
>>         raidz2           ONLINE       0     0     0
>>           label/disk106  ONLINE       0     0     0
>>           label/disk107  ONLINE       0     0     0
>>           label/disk108  ONLINE       0     0     0
>>           label/disk109  ONLINE       0     0     0
>>           label/disk110  ONLINE       0     0     0
>>           label/disk111  ONLINE       0     0     0
>>
>> which will get another enclosure with 6 750-GB-disks, soon.
>>
>> I really like the way I can manage storage from the operating
>> system without proprietary controller management software or
>> even rebooting into the BIOS.
>>
>> Kind regards,
>> Patrick
>
> I've always been curious about this. is said not good to have many disks
> in one pool. ok then. but this layout you're using in here will have the
> same effect as the twelve disks in only one pool ? (the space here is the
> sum of both pools ?)

Having an enormous pool consisting of dozens of disks is not the
actual problem. Having the pool consist of large (> 9 disks)
raidz/raidz2 "groups" is.

A single pool consisting of 5 x 8 disk raidz (40 disks total) is fine.
A single pool consisting of a 40 (or any amount bigger than 9) disk
raidz is not.

- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - thanks

2009-07-09 Thread Nenhum_de_Nos

On Thu, July 9, 2009 08:25, Patrick M. Hausen wrote:
> Hi, all,
>
> I just wanted to say a big big thank you to Kip and all the
> developers who made ZFS on FreeBSD real.
>
> And to everyone who provided helpful comments in the
> last couple of days.
>
> I had to delete and rebuild my zpool to switch from a
> 12-disk raidz2 to two 6-disk ones, but yesterday I could
> replace the raw devices with glabel devices and practice
> replacing a failed disk at the same time. ;-)
>
> So now we have this setup:
>
>   NAME   STATE READ WRITE CKSUM
>   zfsONLINE   0 0 0
> raidz2   ONLINE   0 0 0
>   label/disk100  ONLINE   0 0 0
>   label/disk101  ONLINE   0 0 0
>   label/disk102  ONLINE   0 0 0
>   label/disk103  ONLINE   0 0 0
>   label/disk104  ONLINE   0 0 0
>   label/disk105  ONLINE   0 0 0
> raidz2   ONLINE   0 0 0
>   label/disk106  ONLINE   0 0 0
>   label/disk107  ONLINE   0 0 0
>   label/disk108  ONLINE   0 0 0
>   label/disk109  ONLINE   0 0 0
>   label/disk110  ONLINE   0 0 0
>   label/disk111  ONLINE   0 0 0
>
> which will get another enclosure with 6 750-GB-disks, soon.
>
> I really like the way I can manage storage from the operating
> system without proprietary controller management software or
> even rebooting into the BIOS.
>
> Kind regards,
> Patrick

I've always been curious about this. is said not good to have many disks
in one pool. ok then. but this layout you're using in here will have the
same effect as the twelve disks in only one pool ? (the space here is the
sum of both pools ?)

thanks,

matheus

-- 
We will call you cygnus,
The God of balance you shall be

A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

http://en.wikipedia.org/wiki/Posting_style
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


ZFS - thanks

2009-07-09 Thread Patrick M. Hausen
Hi, all,

I just wanted to say a big big thank you to Kip and all the
developers who made ZFS on FreeBSD real.

And to everyone who provided helpful comments in the
last couple of days.

I had to delete and rebuild my zpool to switch from a
12-disk raidz2 to two 6-disk ones, but yesterday I could
replace the raw devices with glabel devices and practice
replacing a failed disk at the same time. ;-)
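
Roughly, the pool was recreated along these lines, using the glabel
devices shown below:

  zpool create zfs \
    raidz2 label/disk100 label/disk101 label/disk102 label/disk103 label/disk104 label/disk105 \
    raidz2 label/disk106 label/disk107 label/disk108 label/disk109 label/disk110 label/disk111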

So now we have this setup:

NAME   STATE READ WRITE CKSUM
zfsONLINE   0 0 0
  raidz2   ONLINE   0 0 0
label/disk100  ONLINE   0 0 0
label/disk101  ONLINE   0 0 0
label/disk102  ONLINE   0 0 0
label/disk103  ONLINE   0 0 0
label/disk104  ONLINE   0 0 0
label/disk105  ONLINE   0 0 0
  raidz2   ONLINE   0 0 0
label/disk106  ONLINE   0 0 0
label/disk107  ONLINE   0 0 0
label/disk108  ONLINE   0 0 0
label/disk109  ONLINE   0 0 0
label/disk110  ONLINE   0 0 0
label/disk111  ONLINE   0 0 0

which will get another enclosure with 6 750-GB-disks, soon.

I really like the way I can manage storage from the operating
system without proprietary controller management software or
even rebooting into the BIOS.

Kind regards,
Patrick
-- 
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
i...@punkt.de   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: 8.0-BETA1 Source Upgrade breaks NTP configuration

2009-07-09 Thread Vincent Hoffman
John Marshall wrote:
> Yesterday I source-upgraded a 7.2-RELEASE-p2 test i386 server to
> 8.0-BETA1.  I have just discovered that it broke that server's NTP
> service.
> 
> PROBLEM 1 - Existing /etc/ntp.conf overwritten
> 
> 
> Digging deeper, it looks like it may be due to the fact that this is a
> new supplied file and an entry for /etc/ntp.conf didn't exist in
> /var/db/mergemaster.mtree from the previous (7.2-RELEASE) run.  How
> should this be handled?
You are correct; there was a thread on -CURRENT about this recently, see
http://lists.freebsd.org/pipermail/freebsd-current/2009-July/008968.html
 I think there was discussion of a patch to mergemaster.
> 
> PROBLEM 2 - Default ntp.conf uses LOCAL clock
> 

Again, much discussed; a new, improved version using freebsd.pool.ntp.org
and commenting out the LOCAL option was posted to the freebsd-net list
not long ago by David Malone, hopefully to be applied soon.
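
Illustrative only -- a minimal ntp.conf along those lines, with the local
clock commented out, would look roughly like:

  server 0.freebsd.pool.ntp.org iburst
  server 1.freebsd.pool.ntp.org iburst
  server 2.freebsd.pool.ntp.org iburst
  # server 127.127.1.0            # local clock, disabled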

Vince
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Help With Custom Disk Layout For New Install

2009-07-09 Thread Adrian Wontroba
On Tue, Jul 07, 2009 at 06:53:31PM -0700, Drew Tomlinson wrote:

>  ... gmirror fails silently (i.e. nothing exists in /dev/mirror). ...

I can't speak for the rest of your post but have you got the following
in /boot/loader.conf?

geom_mirror_load="YES"
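
To check without rebooting, you can also load the module and inspect the
mirror by hand, e.g.:

  kldload geom_mirror
  gmirror status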

-- 
Adrian Wontroba
It is far better to be deceived than to be undeceived by those we love.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"