Re: what's best practice for ZFS on a whole disk these days?
2009/10/27 Daniel O'Connor:
> Unfortunately it appears ZFS doesn't search for GPT partitions so if you
> have them and swap the drives around you need to fix it up manually.

Every GPT partition has a unique /dev/gptid/ entry; you can find it with:

  glabel status

and instead of using e.g.:

  zpool create tank mirror ad4p3 ad6p3

you can use:

  zpool create tank mirror \
      gptid/0f32d2e6-c227-11de-8d6c-001708386b68 \
      gptid/bc78a46e-c227-11de-8d6c-001708386b68

and you can swap disks without worries.

-- 
Artis Caune

Everything should be made as simple as possible, but not simpler.

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
Re: what's best practice for ZFS on a whole disk these days?
Hello,

On Tue, Oct 27, 2009 at 09:00:27AM +0200, Artis Caune wrote:
> 2009/10/27 Daniel O'Connor:
> > Unfortunately it appears ZFS doesn't search for GPT partitions so if you
> > have them and swap the drives around you need to fix it up manually.
>
> Every GPT partition has a unique /dev/gptid/ entry; you can find it with:
>   glabel status
>
> and instead of using e.g.:
>   zpool create tank mirror ad4p3 ad6p3
> you can use:
>   zpool create tank mirror
>       gptid/0f32d2e6-c227-11de-8d6c-001708386b68
>       gptid/bc78a46e-c227-11de-8d6c-001708386b68
>
> and you can swap disks without worries

Nice. Is there any reason to prefer GPT labels over glabel on the raw disk,
like so?

  NAME               STATE   READ WRITE CKSUM
  zfs                ONLINE     0     0     0
    raidz2           ONLINE     0     0     0
      label/disk100  ONLINE     0     0     0
      label/disk101  ONLINE     0     0     0
      label/disk102  ONLINE     0     0     0
      label/disk103  ONLINE     0     0     0
      label/disk104  ONLINE     0     0     0
      label/disk105  ONLINE     0     0     0

Could GPT labelled disks be read on a Solaris host without further
modification?

Thanks,
Patrick M. Hausen
Leiter Netzwerke und Sicherheit
-- 
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
i...@punkt.de   http://www.punkt.de
Gf: Jürgen Egeling   AG Mannheim 108285
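[Editor's note: a raw-disk glabel setup like the pool above can be created
roughly as follows. This is a sketch, not taken from the thread; the device
names (da0..da5) and label names are hypothetical examples.]

```shell
# Label each raw disk with a stable GEOM label (glabel writes the label
# metadata to the provider's last sector). Names here are hypothetical.
i=0
for d in da0 da1 da2 da3 da4 da5; do
    glabel label "disk10$i" "/dev/$d"
    i=$((i + 1))
done

# Build the pool from the stable /dev/label/* names instead of daX,
# so the pool is immune to device renumbering:
zpool create zfs raidz2 label/disk100 label/disk101 label/disk102 \
    label/disk103 label/disk104 label/disk105

glabel status   # verify the labels are active
```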
Re: what's best practice for ZFS on a whole disk these days?
On Tue, 27 Oct 2009, Artem Belevich wrote:
> > Unfortunately it appears ZFS doesn't search for GPT partitions so
> > if you have them and swap the drives around you need to fix it up
> > manually.
>
> When I used raw disks or GPT partitions, if the disk order was changed
> the pool would come up in 'DEGRADED' or UNAVAILABLE state. Even then,
> all that had to be done was export/import the pool. After the pool had
> been re-imported it was back to ONLINE.

Hmm OK, I thought it supposedly DTRT for raw disks but apparently not.

> Now I'm using GPT labels (gpart -l) specifically because that avoids
> issues with disk order or driver changes. The pool I've built from GPT
> labels has survived several migrations between different
> controllers/drivers, adX (ata) -> daX (SATA disks on mpt) -> adaX
> (ahci), and multiple drive permutations without any manual
> intervention at all. All that was done on 8-RC1/amd64.
> I have also successfully imported the pool on OpenSolaris and back
> again on FreeBSD.

Damn, if I'd realised I'd have done that :)
Do you know if it's possible to change?

Thanks.
-- 
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
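[Editor's note: the GPT-label scheme Artem describes can be set up with
gpart(8) along these lines. A sketch only; device names (ada0/ada1) and
label names (disk0/disk1) are assumptions, not from the thread.]

```shell
# Create a GPT scheme and a labelled freebsd-zfs partition on each disk.
# Device and label names are hypothetical.
gpart create -s gpt ada0
gpart add -t freebsd-zfs -l disk0 ada0
gpart create -s gpt ada1
gpart add -t freebsd-zfs -l disk1 ada1

# The labels appear under /dev/gpt/ and follow the disks across
# controller/driver changes (adX -> daX -> adaX):
zpool create tank mirror gpt/disk0 gpt/disk1
```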
openldap unstable on freebsd
Good day.

For the last 2 years (maybe since we began using the bdb backend), we have
been getting slapd crashes under read load. The system under low load runs
with monit monitoring and fails 1-3 times a month. When the load goes up,
the crash frequency goes up too.

Tuning helped, but not much.

The load is about 20-30 queries/sec at peak, and then it crashes every hour.

The problem was observed on FreeBSD 7, 7.1 and 7.2, i386 and amd64, with
OpenLDAP 2.3 and 2.4 (bdb and hdb backends), in any combination.

I tested OpenLDAP 2.4 on Debian Lenny; it works under my load without tuning
(once the whole Linux box crashed :), but not slapd).

Maybe some FreeBSD tuning is needed?

Some debug output:

ber_scanf fmt ({m) ber:
ber_dump: buf=0x8037161b0 ptr=0x803716248 end=0x803716274 len=44
      : 30 84 00 00 00 26 04 16 31 2e 32 2e 38 34 30 2e   0....&..1.2.840.
  0010: 31 31 33 35 35 36 2e 31 2e 34 2e 33 31 39 04 0c   113556.1.4.319..
  0020: 30 84 00 00 00 06 02 02 03 e8 04 00               0...........
ber_scanf fmt (m) ber:
ber_dump: buf=0x8037161b0 ptr=0x803716266 end=0x803716274 len=14
      : 00 0c 30 84 00 00 00 06 02 02 03 e8 04 00   ..0...........
=> get_ctrls: oid="1.2.840.113556.1.4.319" (noncritical)
ber_scanf fmt ({im}) ber:
ber_dump: buf=0x803831000 ptr=0x803831000 end=0x80383100c len=12
      : 30 84 00 00 00 06 02 02 03 e8 04 00   0...........
<= get_ctrls: n=1 rc=0 err=""
attrs: cn userPassword memberUid uniqueMember gidNumber
conn=105 op=1 SRCH base="ou=staff,dc=ulgsm,dc=ru" scope=1 deref=0 filter="(&(objectClass=posixGroup))"
conn=105 op=1 SRCH attr=cn userPassword memberUid uniqueMember gidNumber
slap_global_control: unavailable control: 1.2.840.113556.1.4.319
==> limits_get: conn=105 op=1 dn="cn=bind,ou=staff,dc=ulgsm,dc=ru"
=> hdb_search
bdb_dn2entry("ou=staff,dc=ulgsm,dc=ru")
search_candidates: base="ou=staff,dc=ulgsm,dc=ru" (0x0002) scope=1
=> hdb_dn2idl("ou=staff,dc=ulgsm,dc=ru")
=> bdb_filter_candidates AND
=> bdb_list_candidates 0xa0
=> bdb_filter_candidates OR
=> bdb_list_candidates 0xa1
=> bdb_filter_candidates EQUALITY
=> bdb_equality_candidates (objectClass)
=> key_read
zsh: segmentation fault  /usr/local/libexec/slapd -d -1

-- 
Email: al...@ulgsm.ru
Email/Jabber: al...@ulgsm.ru
Tel. +7 951 0985685, ext. 368
Re: openldap unstable on freebsd
Hi,

On Tue, Oct 27, 2009 at 11:25:16AM +0300, al...@ulgsm.ru wrote:
> Last 2 years (maybe when began using bdb backend), we get slapd crash on
> read load.
> System on low load work with monit monitoring and fails 1-3 in month.
> When load up crashes frequency up too.
>
> Tuning helped but not much.
>
> load about 20-30 queryes/sec in peak.
> and crashes every hour.
>
> Problem watched on Freebsd7,7.1,7.2 i386, amd64 and openldap2.3,2.4
> (bdb,hdb backends) in any combinations.
>
> I tested openldap 2.4 on debian lenny, its work under my load without
> tuning (once was crashed whole linux :), but not slapd).
>
> Mybe some freebsd tuning needed?

We have slapd running on several servers with read loads of between 50
and 200 requests per second, and it runs rock stable.

What comes to mind: did your server crash at some point? Have you tried
either doing a db_recover on the database files (while slapd is not
running, of course) or a slapcat/slapadd to rebuild the BDB from scratch?
I get the feeling your BDB is somehow damaged.

- Oliver

-- 
| Oliver Brandmueller          http://sysadm.in/          o...@sysadm.in |
| I am the Internet. So help me God.                                     |
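[Editor's note: the recovery steps Oliver suggests would look roughly like
this. A sketch only; the rc.d script name, database directory, backup path,
and ldap user are typical FreeBSD port defaults and may differ locally.]

```shell
# Stop slapd first -- both procedures require it not to be running.
/usr/local/etc/rc.d/slapd stop

# Option 1: run Berkeley DB recovery in place (db_recover is from the
# Berkeley DB port matching your slapd build; the path may differ).
cd /var/db/openldap-data && db_recover -v

# Option 2: dump and rebuild the database from scratch.
slapcat -l /root/backup.ldif                 # dump current contents to LDIF
mv /var/db/openldap-data /var/db/openldap-data.bad
mkdir /var/db/openldap-data
slapadd -l /root/backup.ldif                 # rebuild fresh BDB files
chown -R ldap:ldap /var/db/openldap-data     # match your slapd user

/usr/local/etc/rc.d/slapd start
```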
No sound after update to RC2 from RC1.
Hello. I've lost sound.

dmesg content:

  hdac0: HDA Driver Revision: 20090624_0136
  hdac0: [ITHREAD]
  Starting default moused.
  mixer: unknown device: mic (or ogain)
  hdac0: HDA Codec #0: Conexant CX20561 (Hermosa)
  pcm0: at cad 0 nid 1 on hdac0

It was previously working with these device.hints:

  hint.hdac.0.cad0.nid22.config="as=1"
  hint.hdac.0.cad0.nid26.config="as=1 seq=1"
  hint.hdac.0.cad0.nid24.config="as=2"
  hint.hdac.0.cad0.nid29.config="as=2 seq=1"

plus sysctl.conf:

  dev.pcm.0.play.vchans=0
  dev.pcm.0.bitperfect=1

- best regards, Jakub Lach
Re: what's best practice for ZFS on a whole disk these days?
Quoting Daniel O'Connor:
> On Tue, 27 Oct 2009, Artem Belevich wrote:
> > > Unfortunately it appears ZFS doesn't search for GPT partitions so
> > > if you have them and swap the drives around you need to fix it up
> > > manually.
> >
> > When I used raw disk or GPT partitions, if disk order was changed the
> > pool would come up in 'DEGRADED' or UNAVAILABLE state. Even then all
> > that had to be done is export/import the pool. After the pool has
> > been re-imported it was back to ONLINE.
>
> Hmm OK, I thought it supposedly DTRT for raw disks but apparently not.
>
> > Now I'm using GPT labels (gpart -l) specifically because that avoids
> > issues with disk order or driver change. The pool I've built from GPT
> > labels has survived several migrations between different
> > controllers/drivers adX (ata) -> daX (SATA disks on mpt) -> adaX
> > (ahci) and multiple drive permutations without any manual
> > intervention at all. All that was done on 8-RC1/amd64.
> > I have also successfully imported the pool on OpenSolaris and back
> > again on FreeBSD.
>
> Damn, if I'd realised I'd have done that :)
> Do you know if it's possible to change?

Check the archives for stable@ and f...@. I believe that there was a
thread not that long ago detailing exactly how to do that. IIRC, while
it took a bit of work, it wasn't difficult.

John

- J. T. Farmer          GoldSword Systems, Knoxville TN
  Coach & Instructor    Consulting, Knoxville Academy of the Blade
  Software Development  Maryville Fencing Club
  Project Management
8.0-RC1 NFS client timeout issue
I see an annoying behaviour with NFS over TCP. It happens both with nfs
and newnfs. This is with FreeBSD/amd64 8.0-RC1 as client. The server is
some Linux or perhaps Solaris, I'm not entirely sure.

After trying to find something in packet traces, I think I have found
something. The scenario seems to be as follows. Sorry for the width of
the lines.

No.   Time         Source          Destination     Protocol  Info
2296  2992.216855  xxx.xxx.31.43   xxx.xxx.16.142  NFS  V3 LOOKUP Call (Reply In 2297), DH:0x3819da36/w
2297  2992.217107  xxx.xxx.16.142  xxx.xxx.31.43   NFS  V3 LOOKUP Reply (Call In 2296) Error:NFS3ERR_NOENT
2298  2992.217141  xxx.xxx.31.43   xxx.xxx.16.142  NFS  V3 LOOKUP Call (Reply In 2299), DH:0x170cb16a/bin
2299  2992.217334  xxx.xxx.16.142  xxx.xxx.31.43   NFS  V3 LOOKUP Reply (Call In 2298), FH:0x61b8eb12
2300  2992.217361  xxx.xxx.31.43   xxx.xxx.16.142  NFS  V3 ACCESS Call (Reply In 2301), FH:0x61b8eb12
2301  2992.217582  xxx.xxx.16.142  xxx.xxx.31.43   NFS  V3 ACCESS Reply (Call In 2300)
2302  2992.217605  xxx.xxx.31.43   xxx.xxx.16.142  NFS  V3 LOOKUP Call (Reply In 2303), DH:0x61b8eb12/w
2303  2992.217860  xxx.xxx.16.142  xxx.xxx.31.43   NFS  V3 LOOKUP Reply (Call In 2302) Error:NFS3ERR_NOENT
2304  2992.318770  xxx.xxx.31.43   xxx.xxx.16.142  TCP  934 > nfs [ACK] Seq=238293 Ack=230289 Win=8192 Len=0 TSV=86492342 TSER=12393434
2306  3011.537520  xxx.xxx.16.142  xxx.xxx.31.43   NFS  V3 GETATTR Reply (Call In 2305) Directory mode:2755 uid:4100 gid:4100
2307  3011.637744  xxx.xxx.31.43   xxx.xxx.16.142  TCP  934 > nfs [ACK] Seq=238429 Ack=230405 Win=8192 Len=0 TSV=86511662 TSER=12395366
2308  3371.534980  xxx.xxx.16.142  xxx.xxx.31.43   TCP  nfs > 934 [FIN, ACK] Seq=230405 Ack=238429 Win=49232 Len=0 TSV=12431366 TSER=86511662

The server decides, for whatever reason, to terminate the connection and
sends a FIN.
2309  3371.535018  xxx.xxx.31.43   xxx.xxx.16.142  TCP  934 > nfs [ACK] Seq=238429 Ack=230406 Win=8192 Len=0 TSV=86871578 TSER=12431366

Client acknowledges this,

2310  3375.379693  xxx.xxx.31.43   xxx.xxx.16.142  NFS  V3 ACCESS Call, FH:0x008002a2

but tries to sneak in another call anyway. [A]

2311  3375.474788  xxx.xxx.16.142  xxx.xxx.31.43   TCP  nfs > 934 [ACK] Seq=230406 Ack=238569 Win=49232 Len=0 TSV=12431760 TSER=86875423

Server ACKs but doesn't send anything else... [B] Time passes...

2312  3675.366081  xxx.xxx.31.43   xxx.xxx.16.142  TCP  934 > nfs [FIN, ACK] Seq=238569 Ack=230406 Win=8192 Len=0 TSV=87175425 TSER=12431760

Client finally decides after 300 secs to close the connection too

2313  3675.366149  xxx.xxx.31.43   xxx.xxx.16.142  TCP  904 > nfs [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=5 TSV=87175425 TSER=0

and to re-open a new one.

2314  3675.366318  xxx.xxx.16.142  xxx.xxx.31.43   TCP  nfs > 934 [ACK] Seq=230406 Ack=238570 Win=49232 Len=0 TSV=12461749 TSER=87175425
2315  3675.366446  xxx.xxx.16.142  xxx.xxx.31.43   TCP  nfs > 904 [SYN, ACK] Seq=0 Ack=1 Win=49232 Len=0 TSV=12461749 TSER=87175425 MSS=1460 WS=0
2316  3675.366483  xxx.xxx.31.43   xxx.xxx.16.142  TCP  904 > nfs [ACK] Seq=1 Ack=1 Win=66592 Len=0 TSV=87175425 TSER=12461749
2317  3675.366506  xxx.xxx.31.43   xxx.xxx.16.142  NFS  V3 ACCESS Call (Reply In 2319), FH:0x008002a2
2318  3675.30      xxx.xxx.16.142  xxx.xxx.31.43   TCP  nfs > 904 [ACK] Seq=1 Ack=141 Win=49092 Len=0 TSV=12461749 TSER=87175425
2319  3675.367356  xxx.xxx.16.142  xxx.xxx.31.43   NFS  V3 ACCESS Reply (Call In 2317)
2320  3675.367425  xxx.xxx.31.43   xxx.xxx.16.142  NFS  V3 GETATTR Call (Reply In 2322), FH:0x170cb16a
2321  3675.367644  xxx.xxx.16.142  xxx.xxx.31.43   TCP  nfs > 904 [ACK] Seq=125 Ack=277 Win=49232 Len=0 TSV=12461749 TSER=87175426
2322  3675.367730  xxx.xxx.16.142  xxx.xxx.31.43   NFS  V3 GETATTR Reply (Call In 2320) Directory mode:2755 uid:4100 gid:4100
2323  3675.367759  xxx.xxx.31.43   xxx.xxx.16.142  NFS  V3 ACCESS Call (Reply In 2325), FH:0x170cb16a

Point [A] seems somewhat worrisome to me: though technically
the connection is closed in one direction only, the intention of the
server seems clear, and it would be better to be careful and make a new
connection right away.

[B] would be a bug in the server, in my opinion. If it ACKs a call, it
should send a reply. And if it can't, it shouldn't.

Please Cc me on replies, I am not subscribed to this list.

-Olaf.
Re: 8.0-RC1 NFS client timeout issue
> I see an annoying behaviour with NFS over TCP. It happens both with nfs
> and newnfs. This is with FreeBSD/amd64 8.0-RC1 as client. The server is
> some Linux or perhaps Solaris, I'm not entirely sure.

I used NFS with TCP on a 7.2 client without problems against a Solaris
NFS server. When I upgraded to RC1 I had 'server not responding - alive
again' messages, so I switched to UDP, which works flawlessly. I haven't
had time to investigate it though.

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.
  Shakespeare

twitter.com/kometen
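[Editor's note: the UDP workaround Claus describes can be applied per
mount. A sketch; the server name and paths are placeholders.]

```shell
# One-off NFS mount over UDP with FreeBSD's mount_nfs
# (server:/export and /mnt are hypothetical):
mount -t nfs -o udp server:/export /mnt

# Or persistently in /etc/fstab:
# server:/export  /mnt  nfs  rw,udp  0  0
```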
Re: what's best practice for ZFS on a whole disk these days?
On Wed, 28 Oct 2009, jfar...@goldsword.com wrote:
> Check the archives for stable@ and f...@. I believe that there was a
> thread not that long ago detailing exactly how to do that. IIRC, while
> it took a bit of work, it wasn't difficult.

Hmm, do you have any idea what the subject was? I'm having trouble
finding it :(

-- 
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
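[Editor's note: one commonly described way to migrate a redundant pool
from raw-disk members to GPT-labelled partitions is to redo one disk at a
time and let ZFS resilver. This is a sketch, not the thread the posters
were looking for; device name ada0 and label disk0 are hypothetical, and
repartitioning destroys that disk's contents, so test on scratch disks
first.]

```shell
# For each member of a redundant (mirror/raidz) pool, one disk at a time:
zpool offline tank ada0                  # take the raw-disk member out
gpart create -s gpt ada0                 # destroys the old on-disk layout
gpart add -t freebsd-zfs -l disk0 ada0   # labelled partition -> /dev/gpt/disk0
zpool replace tank ada0 gpt/disk0        # resilver onto the labelled partition
zpool status tank                        # wait for resilver before the next disk
```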
ptrace problem 6.x/7.x - can someone explain this?
We believe ptrace has a problem in 6.3; we have not tried other releases.
The same code, however, exists in 7.1. The bug was first encountered in
gdb...

  (gdb) det
  Detaching from program: /usr/local/bin/emacs, process 66217
  (gdb) att 66224
  Attaching to program: /usr/local/bin/emacs, process 66224
  Error accessing memory address 0x281ba5a4: Device busy.
  (gdb) det
  Detaching from program: /usr/local/bin/emacs, process 66224
  ptrace: Device busy.
  (gdb) quit        <--- target process 66224 dies here

To isolate this problem, a simple-minded test program was written that
just attaches and detaches. This test program found that even the very
first detach fails with EBUSY (see test source below):

  $ ./test1 -p 66217 -c 1 -d 10
  pid 66217 count 1 delay 10
  Start of pass 0
  Calling PT_ATTACH pid 66217 addr 0x0 sig 0
  Calling PT_DETACH pid 66217 addr 0x sig 0
  Call 0 to PT_DETACH returned -1, errno 16

Once again, the target process died when the ptracing test program
exited, as would be expected if a detach had failed.

The failure return was coming from the following test in kern_ptrace() in
sys_process.c:

    /* not currently stopped */
    if ((p->p_flag & (P_STOPPED_SIG | P_STOPPED_TRACE)) == 0 ||
        p->p_suspcount != p->p_numthreads ||
        (p->p_flag & P_WAITED) == 0) {
            error = EBUSY;
            goto fail;
    }

This is applied to all operations except PT_TRACE_ME, PT_ATTACH, and some
instances of PT_CLEAR_STEP.

P_WAITED is generally not true. In particular, it's not set automatically
when a process is PT_ATTACHed. It is cleared by PT_DETACH and again when
ptrace sends a signal (PT_CONTINUE, PT_DETACH). _But_ it's set in only
two places, and they aren't in ptrace code:

    sys/kern/kern_exit.c       kern_wait          773   p->p_flag |= P_WAITED;
    compat/svr4/svr4_misc.c    svr4_sys_waitsys   1351  q->p_flag |= P_WAITED;

The relevant one is the first one, primarily.
Here's the code:

    mtx_lock_spin(&sched_lock);
    if ((p->p_flag & P_STOPPED_SIG) &&
        (p->p_suspcount == p->p_numthreads) &&
        (p->p_flag & P_WAITED) == 0 &&
        (p->p_flag & P_TRACED || options & WUNTRACED)) {
            mtx_unlock_spin(&sched_lock);
            p->p_flag |= P_WAITED;
            sx_xunlock(&proctree_lock);
            td->td_retval[0] = p->p_pid;
            if (status)
                    *status = W_STOPCODE(p->p_xstat);
            PROC_UNLOCK(p);
            return (0);
    }
    mtx_unlock_spin(&sched_lock);

So it's only set on processes which are already traced, and it's not set
until someone calls wait4() on them, or the equivalent SysV compatibility
routine. Gdb doesn't always wait4() for processes immediately upon
tracing them, and the ptrace man page does not imply this is needed.

Moreover, it's not clear why it should matter. The process needs to be
stopped in order for it to make sense to do most of the things ptrace
does. But why should it need to be waited for? And what kind of sense
does this make to someone writing a debugging tool, where the natural
logic seems to be:

  - attach to process
  - look at some stuff
  - stick in some kind of breakpoint or similar and start it going again
    (or 'step' it)
  - wait for it to stop
  - look at and modify stuff
  - detach, or set it moving again

By way of experiment, the test for P_WAITED was removed. Gdb no longer
had problems, and no new issues with gdb were encountered (although this
was just interactive testing; no "gdb coverage test" was attempted). The
test program also stopped having issues.

    /* not currently stopped */
    if ((p->p_flag & (P_STOPPED_SIG | P_STOPPED_TRACE)) == 0 ||
        p->p_suspcount != p->p_numthreads) {
            error = EBUSY;
            goto fail;
    }

So does anyone know whether it's safe to simply remove that test?

Thanks,
Arlie Stephens
Engineer
Dorr H.
Clark, Advisor
Graduate School of Engineering
Santa Clara University, Santa Clara, CA

- Test program here -

/*
 * experiment with ptrace, try to see which is broken - gdb or ptrace
 */
#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

void
usage(void)
{
    printf("Simple program to play with ptrace\n");
    printf("Usage: test1 -p <pid> -c <count> -d <delay>\n");
    printf("Specify -n for no explicit detach\n");
    printf("Will attach and detach repeatedly from target process\n");
    exit(1);
}

int
main(int argc, char *argv[])
{
    pid_t pid = -1;
    int count = 2;
    int delay = 5;
    int nodetach = 0;
New devices appear in all devfs mounts
Hi,

I have devfs mounted in a chroot jail, with just the basic device nodes
visible:

fstab:
  /dev/null  /usr/data/home/scp/dev  devfs  rw  0  0

rc.conf:
  devfs_set_rulesets="/usr/data/home/scp/dev=devfsrules_hide_all /usr/data/home/scp/dev=devfsrules_unhide_basic"

When a new device is created, such as when adding a new SCSI device or
when enough concurrent logins instantiate new ttys, the new devices
appear in both /dev and /usr/data/home/scp/dev, which is not the intent
for the stripped-down chrooted dev dir.

Is there a way to work around this?

Thanks,
Marcus
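[Editor's note: one thing to check is whether a ruleset is actually bound
to the mount with devfs(8), since applying rules once only filters the
nodes that exist at that moment, while a mount's active ruleset is also
consulted for nodes created later. A sketch; the ruleset numbers 1 and 2
are the stock devfsrules_hide_all / devfsrules_unhide_basic assignments
from /etc/defaults/devfs.rules, so verify them on your system.]

```shell
D=/usr/data/home/scp/dev

# Bind a ruleset to the mount so it also governs future device nodes
# (ruleset 2 = devfsrules_unhide_basic in the stock defaults file):
devfs -m "$D" ruleset 2

# Apply the rules once to the nodes that already exist:
devfs -m "$D" rule -s 1 applyset   # hide everything first
devfs -m "$D" rule -s 2 applyset   # then unhide the basic nodes
```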
Re: openldap unstable on freebsd
* Oliver Brandmueller [2009-10-27 09:56:48 +0100]:
> Hi,
>
> On Tue, Oct 27, 2009 at 11:25:16AM +0300, al...@ulgsm.ru wrote:
> > Last 2 years (maybe when began using bdb backend), we get slapd crash
> > on read load.
> > System on low load work with monit monitoring and fails 1-3 in month.
> > When load up crashes frequency up too.
> >
> > Tuning helped but not much.
> >
> > load about 20-30 queryes/sec in peak.
> > and crashes every hour.
> >
> > Problem watched on Freebsd7,7.1,7.2 i386, amd64 and openldap2.3,2.4
> > (bdb,hdb backends) in any combinations.
> >
> > I tested openldap 2.4 on debian lenny, its work under my load without
> > tuning (once was crashed whole linux :), but not slapd).
> >
> > Mybe some freebsd tuning needed?
>
> We have slapd running on several servers with read loads of between 50
> and 200 requests per second and it runs rock stable.
>
> What comes to mind, did your server crash at some point? Have you tried
> to either do a db_recover on the database files (while slapd is not
> running of course) or slapcat/slapadd to rebuild the BDB from scratch?
> I get the feeling your BDB is somehow damaged.

I reinstalled OpenLDAP, removed all tuning, did slapadd < backup.ldif,
and got about 50 failures during the night. :(

-- 
Email: al...@ulgsm.ru
Email/Jabber: al...@ulgsm.ru