ndis not producing ndis_driver_data.h on freebsd 64
Hello, I tried compiling if_ndis after doing the ndiscvt step. ndiscvt does not produce ndis_driver_data.h, even when I've specified the output file as ndis_driver_data.h. Is this a known issue, and how do I resolve it? Jayton ___ freebsd-stable@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-stable To unsubscribe, send any mail to [EMAIL PROTECTED]
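[Editor's note: for reference, this is the usual ndiscvt invocation for generating the header consumed by the if_ndis build. The .INF/.SYS file names below are placeholders; substitute your vendor's actual driver files.]

```shell
# Hedged sketch of the usual ndiscvt step. "bcmwl5.inf"/"bcmwl5.sys" are
# hypothetical names -- use the Windows driver files for your own NIC.
#   -i : Windows driver .INF file
#   -s : Windows driver .SYS binary
#   -o : output header that the if_ndis module build expects
ndiscvt -i bcmwl5.inf -s bcmwl5.sys -o ndis_driver_data.h

# Confirm the header was actually written before trying to build if_ndis:
ls -l ndis_driver_data.h
```

If ndiscvt prints an error while parsing the .INF file, the header will not be written; that would explain a missing ndis_driver_data.h even with -o specified.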
Re: ata softraid rebuild problem
Hi, list. I found a problem. On Tue, Oct 18, 2005 at 11:37:32AM +0400, Dmitriy Kirhlarov wrote: I use ata soft RAID1 on FreeBSD 5.4-RELEASE-p5. Now my raid is in degraded status and I can't rebuild it.
---
[EMAIL PROTECTED] 11:32$ sudo atacontrol list
ATA channel 0:
    Master: ad0 ST380011A/8.01 ATA/ATAPI revision 6
    Slave: no device present
ATA channel 1:
    Master: acd0 NEC DVD RW ND-3500AG/2.16 ATA/ATAPI revision 0
    Slave: ad3 ST380011A/8.01 ATA/ATAPI revision 6
[EMAIL PROTECTED] 11:32$ sudo atacontrol status ar0
ar0: ATA RAID1 subdisks: ad0 DOWN status: DEGRADED
[EMAIL PROTECTED] 11:32$ sudo atacontrol rebuild ar0
[EMAIL PROTECTED] 11:32$ sudo atacontrol status ar0
ar0: ATA RAID1 subdisks: ad0 DOWN status: DEGRADED
[EMAIL PROTECTED] 11:32$ sudo atacontrol addspare ar0 ad0
atacontrol: ioctl(ATARAIDADDSPARE): Device busy
[EMAIL PROTECTED] 11:33$ dmesg | head -3
ad0: deleted from ar0 disk0
ar0: ERROR - array broken
ad0: WARNING - removed from configuration
[EMAIL PROTECTED] 11:36$ uname -rs
FreeBSD 5.4-RELEASE-p5
---
[EMAIL PROTECTED] 15:09$ sudo grep -iE '(disk|ata).*(disk|ata)' /var/run/dmesg.boot
ata0: channel #0 on atapci0
ata1: channel #1 on atapci0
ar0: 76319MB ATA RAID1 array [9729/255/63] status: DEGRADED subdisks:
    disk0 READY on ad0 at ata0-master
    disk1 DOWN no device found for this disk
It is ad3 that is DOWN, not ad0. Is the atacontrol status output wrong? Why does it report ad0 as down? -- WBR Dmitriy
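[Editor's note: a hedged sketch of the recovery sequence usually attempted for a degraded ar0 on 5.x when addspare reports "Device busy" because the array still holds a stale reference to the disk. The channel and disk names (ata0, ad0) are taken from the transcript above and are illustrative; whether detach/attach clears the stale state depends on the 5.x ATA driver's mood, as this very thread shows.]

```shell
# Hedged sketch, not a guaranteed fix: typical degraded-RAID1 recovery
# attempt with atacontrol on FreeBSD 5.x. Channel/disk names are examples.
atacontrol status ar0          # confirm which subdisk is marked DOWN
atacontrol detach ata0         # drop the channel holding the stale disk...
atacontrol attach ata0         # ...and reattach so the disk is re-probed
atacontrol addspare ar0 ad0    # re-add the disk as a spare
atacontrol rebuild ar0         # kick off the rebuild
atacontrol status ar0          # should now report rebuilding progress
```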
Re: Network performance 6.0 with netperf
Robert Watson wrote: On Fri, 14 Oct 2005, Michael VInce wrote: I've been doing some network benchmarking using netperf and just a simple 'fetch' on a new network setup to make sure I am getting the most out of the router and servers. I thought I would post some results in case someone can help me with my problems, or if others are just interested to see the results. Until recently (or maybe still), netperf was compiled with -DHISTOGRAM by our port/package, which resulted in a significant performance drop. I believe that the port maintainer and others have agreed to change it, but I'm not sure if it's been committed yet, or which packages have been rebuilt. You may want to manually rebuild it to make sure -DHISTOGRAM isn't set. You may want to try setting net.isr.direct=1 and see what performance impact that has for you. Robert N M Watson I reinstalled netperf to make sure it's the latest. I have also decided to upgrade Server-C (the i386 5.4 box) to 6.0RC1 and noticed it gave a large improvement in network performance with an SMP kernel. As for the network setup ( A --- B --- C ) with server B being the gateway: doing a basic 'fetch' from the gateway (B) to the Apache server (C) gives up to 700mbits/sec transfer performance; doing a fetch from server A, thus going through the gateway, gives slower but still decent performance of up to 400mbits/sec.
B fetch -o /dev/null http://server-c/file1gig.iso
- 100% of 1055 MB 69 MBps 00m00s
A fetch -o /dev/null http://server-c/file1gig.iso
- 100% of 1055 MB 39 MBps 00m00s
Netperf from the gateway directly to the apache server (C): 916mbits/sec
B /usr/local/netperf/netperf -l 20 -H server-C -t TCP_STREAM -i 10,2 -I 99,5 -- -m 4096 -s 57344 -S 57344
Elapsed Throughput - 10^6bits/sec: 916.50
Netperf from the client machine through the gateway to the apache server (C): 315mbits/sec
A /usr/local/netperf/netperf -l 10 -H server-C -t TCP_STREAM -i 10,2 -I 99,5 -- -m 4096 -s 57344 -S 57344
Elapsed Throughput - 10^6bits/sec: 315.89
A client-to-gateway netperf test shows the direct connection between these machines is fast: 912mbits/sec
A /usr/local/netperf/netperf -l 30 -H server-B -t TCP_STREAM -i 10,2 -I 99,5 -- -m 4096 -s 57344 -S 57344
Elapsed Throughput - 10^6bits/sec: 912.11
The strange thing now is that in my last post I was able to get faster speeds from server A to C with 'fetch' tests on non-SMP kernels and slower speeds with netperf tests. Now I get somewhat slower speeds with fetch tests but faster netperf speeds, with or without SMP on server-C. I was going to test with 'net.isr.dispatch' but that sysctl doesn't appear to exist; this returns nothing: sysctl -a | grep 'net.isr.dispatch' I also tried polling, but it's as if that doesn't exist either: ifconfig em3 inet 192.168.1.1 netmask 255.255.255.224 polling ifconfig: polling: Invalid argument When doing netperf tests there was high interrupt usage: CPU states: 0.7% user, 0.0% nice, 13.5% system, 70.0% interrupt, 15.7% idle Also, server B is using its last 2 gigabit ethernet ports, which are listed by pciconf -lv as '82547EI Gigabit Ethernet Controller', while the first 2 are listed as 'PRO/1000 P'. Does anyone know if the PRO/1000 P would be better?
[EMAIL PROTECTED]:4:0: class=0x02 card=0x118a8086 chip=0x108a8086 rev=0x03 hdr=0x00
    vendor = 'Intel Corporation'
    device = 'PRO/1000 P'
[EMAIL PROTECTED]:8:0: class=0x02 card=0x016d1028 chip=0x10768086 rev=0x05 hdr=0x00
    vendor = 'Intel Corporation'
    device = '82547EI Gigabit Ethernet Controller'
Cheers, Mike
The network is currently like this, where machines A and B are the Dell 1850s and C is the 2850 with 2 CPUs (server C has the Apache2 worker MPM on it); server B is the gateway and A acts as a client for the fetch and netperf tests. A --- B --- C The 2 1850s (A and B) are running AMD64 FreeBSD 6.0rc1, while C is running 5.4-stable i386 from Oct 12. My main problem is that if I compile SMP into machine C (5.4-stable), the network speed drops to a range between 6mbytes/sec and 15mbytes/sec. If I use the GENERIC kernel, performance goes up to what I have shown below, which is around 65mbytes/sec for a 'fetch' get test from the Apache server and 933mbits/sec for netperf. Does anyone know why network performance would be so bad with SMP? Does anyone think that if I upgrade the i386 SMP server to 6.0RC1 the SMP network performance would improve? This server will be running Java, so I need it to be stable, and that is the reason I am using i386 and Java 1.4. I am happy with the performance of direct machine-to-machine (non-SMP), which is pretty much full 1gigabit/sec speeds. Going through the gateway server-B seems to drop the speed down a bit; for in- and outbound TCP speed tests using netperf I get around 266mbits/sec from server A through
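[Editor's note: the "ifconfig: polling: Invalid argument" above usually means the kernel was built without polling support; on 6.x the per-interface polling flag requires DEVICE_POLLING compiled in. A hedged sketch of what would normally be tried, assuming an em(4) interface:]

```shell
# Hedged sketch: enable interface polling on 6.x. Requires these lines in
# the kernel configuration, then a kernel rebuild and reboot:
#
#   options DEVICE_POLLING
#   options HZ=1000          # a faster clock tick is commonly paired with polling
#
# After booting the new kernel:
ifconfig em3 polling         # should now be accepted instead of EINVAL
sysctl net.isr.direct        # the 6.x sysctl Robert suggested (not net.isr.dispatch)
```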
Problems with PCI SATA controller (bug in ATA driver? both ATAng and ATAmkIII)
(Please CC me in replies as I'm not a subscriber of this list) Hello, I'm having difficulties getting a system with a PCI Serial ATA controller stable under stress. Sorry for the long email, but I'm pretty confused about the problem, so my description won't be as structured as I'd have liked. First, some background about the system. It's an Intel D915GAV mainboard with 1GB RAM and a P4 3GHz (Hyperthreading turned on, both in the BIOS and in FreeBSD). It has four SATA channels via the ICH6 SATA controller and one PATA channel. It has two SATA HDs (mirrored via geom_mirror) for the OS, and three SATA HDs (RAID 5 via gvinum) for data. The system disks work flawlessly, and the system is stable under stress (e.g. a make world). However, after some time of stress on the data RAID array, e.g. copying 35G via rsync over the fxp0 network interface, I get errors about READ_DMA timeouts (exact errors below). The drive that gives the errors eventually disappears, and gvinum takes down the RAID array if two disks do this. This only happens to the disks that are connected to the PCI SATA controller. I tried both a Promise SATAII 150 TX4 controller (PDC40518) and an Adaptec SATA Connect SII3112A-based controller. The SII3112A even gives errors (ad6: TIMEOUT - READ_DMA retrying (2 retries left) LBA=21679174) when I run newfs. I planned to run FreeBSD 5.4 on it. When copying lots of data to the data RAID array, I eventually (sometimes after 10 minutes, sometimes after 2 hours) get these errors (console output was interleaved; untangled here):
ad6: TIMEOUT - READ_DMA retrying (2 retries left) LBA=21679174
ad6: FAILURE - ATA_IDENTIFY timed out
ad6: FAILURE - ATA_IDENTIFY timed out
gvinum: lost drive 'backup2'
ad6: WARNING - removed from configuration
GEOM_VINUM: subdisk backup.p0.s1 state change: up -> down
GEOM_VINUM: plex backup.p0 state change: up -> degraded
ata3-master: FAILURE - READ_DMA timed out
Sometimes it spontaneously rebooted or crashed; in other cases, all processes that tried to access the disks stopped.
This is one of the crashes (console output was interleaved; untangled here):
GEOM_VINUM: plex backup.p0 state change: degraded -> down
WARNING - removed from configuration
GEOM_VINUM: plex backup.p0 state change: degraded -> down
WARNING - removed from configuration
Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address = 0x98
fault code = supervisor write, page not present
instruction pointer = 0x8:0xc04edb6a
stack pointer = 0x10:0xe336bcc8
frame pointer = 0x10:0xe336bcc8
code segment = base 0x0, limit 0xf, type 0x1b = DPL 0, pres 1, def32 1, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 3 (g_up)
trap number = 12
panic: page fault
cpuid = 1
ata5-master: FAILURE - READ_DMA timed out
boot() called on cpu#0
Uptime: 12m57s
GEOM_MIRROR: Device gm0: provider mirror/gm0 destroyed.
ad14: WARNING - WRITE_DMA interrupt was seen but timeout fired LBA=69625709
ad14: WARNING - WRITE_DMA interrupt was seen but timeout fired LBA=69625709
(sorry, no backtrace or dump, I don't have that kernel anymore) ad6 and ad14 were connected to the Promise TX4. I've never seen this error on a disk connected to the ICH6 controller. I also ran the HD manufacturer's test utility (advanced test) and it didn't find any errors, so I assume the HDs are fine. I also replaced the SATA cables. I tried disabling ACPI and hyperthreading. I don't think a SATA disk can run without UDMA (PIO mode); at least atacontrol won't do it. It doesn't matter how many disks are on the PCI controller, so it doesn't seem to have anything to do with concurrent access to multiple channels. It doesn't surprise me that I get these errors with the SII3112A controller, since it's supposed to be pretty crappy, but I expected the Promise controller to be well supported.
I also tried the ATA-mkIII patch from people.freebsd.org/~sos/ATA; the only difference was a different error message:
ad4: WARNING - SETFEATURES SET TRANSFER MODE interrupt was seen but timeout fired
ad4: req=0xc3cdfa28 SETFEATURES SET TRANSFER MODE semaphore timeout !! DANGER Will Robinson !!
(lots of these, and all processes that try to access the FS hang) Same with FreeBSD 6.0-RC1. I've yet to try a different mainboard, but I want to have this server working soon, so maybe I'll just put four disks on the ICH6 controller and forget about the fifth disk until I figure this out. Please let me know if you have any ideas; this certainly looks like a bug in the ATA driver to me. Alson A verbose dmesg from 5.4-RELEASE (most recent RELENG_5_4, -p7 or so) is below (I believe the top part is missing, probably because it didn't fit in the ring buffer; let me know if it's important and I'll try to obtain it).
bus=0, slot=28, func=1
class=06-04-00, hdrtype=0x01, mfdev=1
cmdreg=0x0106, statreg=0x0010, cachelnsz=16 (dwords)
lattimer=0x00 (0 ns), mingnt=0x06 (1500 ns), maxlat=0x00 (0 ns)
intpin=b, irq=255
found-
Re: No SATA disks appear on E7520 with 5.4-RELEASE
On Mon, Oct 17, 2005 at 06:20:59PM -0700, Danny Howard wrote: Yeah just for anyone inclined to puzzle this one ... I pored through the system manual on the way home. Apparently there are TWO SATA controllers on the MB. It would appear that by default, the Marvell controller is used, so maybe the thing is that I can get in tomorrow, swap the cable from the Marvell connector to the ICH5R connectors, and zoom zoom zoom! :) We'll see ... Works great! We DO see, thanks to the T-Mobile Sidekick: http://www.flickr.com/photos/dannyman/54052366/ http://www.flickr.com/photos/dannyman/54052350/ Thanks everyone! Sincerely, -danny -- http://dannyman.toldme.com/
Re: 6.0 release date and stability
On Mon, 17 Oct 2005 15:52:00 -0400 Vivek Khera [EMAIL PROTECTED] wrote: On Oct 16, 2005, at 7:57 AM, dick hoogendijk wrote: The *ONLY* question is: will I need to *recompile* all installed ports if I go from 5.4 to 6.0 release? No, the kernel has COMPAT_FREEBSD5 and COMPAT_FREEBSD4 by default, so just keep those and your shared libs around and you're golden. Of course, ports like lsof which depend on the kernel version will have to be rebuilt, but that's true no matter the version change... I get contradicting advice. You tell me I'm golden 'cause of the compat_xx settings; others tell me it's way better to *recompile* all ports to get the cleanest system. What is the best way to get the cleanest FreeBSD-6.x system without installing from scratch? Recompile each port? Or use the COMPAT_FREEBSD5 layer? -- dick -- http://nagual.st/ -- PGP/GnuPG key: F86289CE ++ Running FreeBSD 4.11-stable ++ FreeBSD 5.4 + Nai tiruvantel ar vayuvantel i Valar tielyanna nu vilja
Re: Network performance 6.0 with netperf
Michael VInce wrote: I reinstalled netperf to make sure it's the latest. I have also decided to upgrade Server-C (the i386 5.4 box) to 6.0RC1 and noticed it gave a large improvement in network performance with an SMP kernel. As for the network setup ( A --- B --- C ) with server B being the gateway: doing a basic 'fetch' from the gateway (B) to the Apache server (C) gives up to 700mbits/sec transfer performance; doing a fetch from server A, thus going through the gateway, gives slower but still decent performance of up to 400mbits/sec. Are you by any chance using PCI NICs? The PCI bus is limited to somewhere around 1 Gbit/s (classic 32-bit/33 MHz PCI peaks at about 133 MB/s, shared by everything on the bus). So consider: theoretical maximum = (1 Gbit/s - PCI overhead). -- Sten Daniel Sørsdal
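[Editor's note: a back-of-envelope check of the "PCI bus ~= 1 Gbit/s" claim above, for classic 32-bit/33 MHz PCI. Note that a NIC routing traffic in and out over the same bus crosses it twice, which roughly halves usable throughput and is consistent with the ~400 Mbit/s routed numbers reported earlier in the thread.]

```shell
# Classic 32-bit/33 MHz PCI moves at most 4 bytes per clock:
echo $((33000000 * 4))       # peak bytes/s  = 132000000  (132 MB/s)
echo $((33000000 * 4 * 8))   # peak bits/s   = 1056000000 (~1.06 Gbit/s)
```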
Re: 6.0 release date and stability
On Wed, 19 Oct 2005 23:10:46 +0200, dick hoogendijk [EMAIL PROTECTED] wrote: On Mon, 17 Oct 2005 15:52:00 -0400 Vivek Khera [EMAIL PROTECTED] wrote: On Oct 16, 2005, at 7:57 AM, dick hoogendijk wrote: The *ONLY* question is: will I need to *recompile* all installed ports if I go from 5.4 to 6.0 release? No, the kernel has COMPAT_FREEBSD5 and COMPAT_FREEBSD4 by default, so just keep those and your shared libs around and you're golden. Of course, ports like lsof which depend on the kernel version will have to be rebuilt, but that's true no matter the version change... I get contradicting advice. You tell me I'm golden 'cause of the compat_xx settings; others tell me it's way better to *recompile* all ports to get the cleanest system. What is the best way to get the cleanest FreeBSD-6.x system without installing from scratch? Recompile each port? Or use the COMPAT_FREEBSD5 layer? You are answering your own question, I think. Does the term COMPAT_FREEBSD5 sound like the 'cleanest FreeBSD-6.x'? No. You get the cleanest system by recompiling all ports. (portupgrade -fa is your friend here.) COMPAT_FREEBSD5 is meant for running FreeBSD-5 binary applications. If you have them, it's ok. If you recompile everything you don't need the COMPAT_FREEBSD5 stuff. If you don't have the source of some of your FreeBSD-5 applications you have to run with COMPAT_FREEBSD5. And the switch to 6 is easier because your 5-applications keep running. Ronald. -- Ronald Klop Amsterdam, The Netherlands
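[Editor's note: a hedged sketch of the force-rebuild route Ronald recommends, for readers following along. Keeping a record of installed packages first is an assumption of good practice, not a step from the thread.]

```shell
# Hedged sketch: the "cleanest system" route after installing the 6.0
# world and kernel.
pkg_info > /root/pkg-before-upgrade.txt   # record what was installed, just in case
portupgrade -fa                           # force-rebuild every installed port
```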
Re: 6.0 release date and stability
On Wed, Oct 19, 2005 at 11:36:35PM +0200, Ronald Klop wrote: On Wed, 19 Oct 2005 23:10:46 +0200, dick hoogendijk [EMAIL PROTECTED] wrote: On Mon, 17 Oct 2005 15:52:00 -0400 Vivek Khera [EMAIL PROTECTED] wrote: On Oct 16, 2005, at 7:57 AM, dick hoogendijk wrote: The *ONLY* question is: will I need to *recompile* all installed ports if I go from 5.4 to 6.0 release? No, the kernel has COMPAT_FREEBSD5 and COMPAT_FREEBSD4 by default, so just keep those and your shared libs around and you're golden. Of course, ports like lsof which depend on the kernel version will have to be rebuilt, but that's true no matter the version change... I get contradicting advice. You tell me I'm golden 'cause of the compat_xx settings; others tell me it's way better to *recompile* all ports to get the cleanest system. What is the best way to get the cleanest FreeBSD-6.x system without installing from scratch? Recompile each port? Or use the COMPAT_FREEBSD5 layer? You are answering your own question, I think. Does the term COMPAT_FREEBSD5 sound like the 'cleanest FreeBSD-6.x'? No. You get the cleanest system by recompiling all ports. (portupgrade -fa is your friend here.) COMPAT_FREEBSD5 is meant for running FreeBSD-5 binary applications. If you have them, it's ok. If you recompile everything you don't need the COMPAT_FREEBSD5 stuff. If you don't have the source of some of your FreeBSD-5 applications you have to run with COMPAT_FREEBSD5. And the switch to 6 is easier because your 5-applications keep running. Yes. As long as you only use your old 5.x applications, you're fine with just the compat. The problem is when you start to link *new* 6.0 applications with *old* 5.x libraries (e.g. by installing a new port, e.g. a new X application, without rebuilding your 5.x X installation first). Thus, unless you upgrade all your 5.x ports (well, actually many, i.e.
only those that provide libraries or shared object modules, but it's easier to just do all of them), you'll end up with 6.0 binaries that are linked to e.g. two versions of libc at once (the 5.x libc and the 6.0 libc), which is a recipe for disaster. Kris
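[Editor's note: a hedged way to spot the "two libcs at once" condition Kris describes. The path is illustrative; the idea is simply to count distinct libc objects that ldd reports per binary.]

```shell
# Hedged sketch: flag binaries that ldd says pull in more than one libc
# (e.g. both the 6.0 libc and a 5.x compat libc). Path is illustrative.
for b in /usr/local/bin/*; do
  n=$(ldd "$b" 2>/dev/null | grep -c 'libc\.so')
  [ "$n" -gt 1 ] && echo "$b links more than one libc"
done
```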
Re: Problems with PCI SATA controller (bug in ATA driver? both ATAng and ATAmkIII)
(Please CC me in replies as I'm not a subscriber of this list) Update: I tested with an i810-based mainboard (Celeron 1GHz, RTL8139 ethernet, Promise SATAII 150 TX4 controller, 3 SATA disks in RAID 5, FreeBSD 6.0-RC1). It remained stable for two hours. I suspect this is because it has far less bandwidth (iostat showed only about 3MB/s to the disks, as opposed to 12MB/s with the i915 mainboard). After I added a dd if=/dev/zero of=foo bs=128k (this increased the bandwidth usage to the disks to about 9MB/s according to iostat), it crashed in about 40 minutes. This suggests that it crashes because of the large amount of I/O. However, it's only about 10MB/s per disk (for three disks), so it doesn't seem that exotic to me. Since this is a completely different mainboard, it seems clearly a software issue to me. The built-in ICH6 controller works fine however, so it may be PDC*0518/SII311* specific (which basically means any PCI SATA controller available locally). Alson
Re: Problems with PCI SATA controller (bug in ATA driver? both ATAng and ATAmkIII)
On Thu, Oct 20, 2005 at 04:30:48AM +0200, Alson van der Meulen wrote: (Please CC me in replies as I'm not a subscriber of this list) Update: I tested with an i810-based mainboard (Celeron 1GHz, RTL8139 ethernet, Promise SATAII 150 TX4 controller, 3 SATA disks in RAID 5, FreeBSD 6.0-RC1). It remained stable for two hours. I suspect this is because it has far less bandwidth (iostat showed only about 3MB/s to the disks, as opposed to 12MB/s with the i915 mainboard). After I added a dd if=/dev/zero of=foo bs=128k (this increased the bandwidth usage to the disk to about 9MB/s according to iostat), it crashed in about 40 minutes. This suggests that it crashes because of the large amount of I/O. However, it's only about 10MB/s per disk (for three disks), so it doesn't seem to be that exotic to me. Since this is a completely different mainboard, it seems clearly a software issue to me. The built-in ICH6 controller works fine however, so it may be PDC*0518/SII311* specific (which basically means any PCI SATA controller available locally). The SII3112 is a piece of crap that won't work reliably. Order something better (Soren recommends Promise cards). -- Brooks
Re: Problems with PCI SATA controller (bug in ATA driver? both ATAng and ATAmkIII)
(Please CC me in replies as I'm not a subscriber of this list) * Brooks Davis [EMAIL PROTECTED] [2005-10-20 06:15]: On Thu, Oct 20, 2005 at 04:30:48AM +0200, Alson van der Meulen wrote: Since this is a completely different mainboard, it seems clearly a software issue to me. The built-in ICH6 controller works fine however, so it may be PDC*0518/SII311* specific (which basically means any PCI SATA controller available locally). The SII3112 is a piece of crap that won't work reliably. Order something better (Soren recommends Promise cards). That's why I ordered a Promise SATAII 150 TX4 in the first place. Only after I encountered the problems described in my emails did I get a controller from the only other brand I could find: SII (which costs about 125 euro because it has an Adaptec sticker on it; this thing is definitely going back), to rule out a broken controller. Most of my testing is done with the PDC40518; I occasionally use the SII3112A for comparison purposes (just an extra data point). Because the Promise cards are supposed to be pretty well supported, I'm surprised to encounter these issues in both ATAng and ATAmkIII. BTW: The ICH6 has now been stress tested for 5 hours and is still working fine, so the rest of the system is definitely stable. Alson
Re: Problems with PCI SATA controller (bug in ATA driver? both ATAng and ATAmkIII)
On Wednesday 19 October 2005 21:15, Brooks Davis wrote: On Thu, Oct 20, 2005 at 04:30:48AM +0200, Alson van der Meulen wrote: (Please CC me in replies as I'm not a subscriber of this list) Update: I tested with an i810-based mainboard (Celeron 1GHz, RTL8139 ethernet, Promise SATAII 150 TX4 controller, 3 SATA disks in RAID 5, FreeBSD 6.0-RC1). It remained stable for two hours. I suspect this is because it has far less bandwidth (iostat showed only about 3MB/s to the disks, as opposed to 12MB/s with the i915 mainboard). After I added a dd if=/dev/zero of=foo bs=128k (this increased the bandwidth usage to the disk to about 9MB/s according to iostat), it crashed in about 40 minutes. This suggests that it crashes because of the large amount of I/O. However, it's only about 10MB/s per disk (for three disks), so it doesn't seem to be that exotic to me. Since this is a completely different mainboard, it seems clearly a software issue to me. The built-in ICH6 controller works fine however, so it may be PDC*0518/SII311* specific (which basically means any PCI SATA controller available locally). The SII3112 is a piece of crap that won't work reliably. Order something better (Soren recommends Promise cards). -- Brooks Why does the SII3112 work better in FreeBSD 5.4 than in 6.0? I have a Highpoint 1820 on order, but it still bugs me having to toss hardware that worked in 5.4 in order to keep current. -Mike