Re: Current status of support for high end SAN hardware
man gmultipath is your friend ;)

From the man page:

MULTIPATH ARCHITECTURE
This is an active/passive multiple path architecture with no device knowledge or presumptions, other than size matching, built in. Therefore the user must exercise some care in selecting providers that do indeed represent multiple paths to the same underlying disk device. The reason for this is that there are several criteria across multiple underlying transport types that can indicate identity, but in all respects such identity can rarely be considered definitive.

Regards, Daniel

Stefan Lambrev wrote:
> Daniel Ponticello wrote:
>> On FreeBSD 7, I'm successfully using QLogic 4Gb Fibre Channel HBAs (isp driver) attached to a Brocade Fibre Channel switch and an IBM DS4700 (14-disk array), with 4-way multipath via gmultipath.
>
> So far the support in gmultipath is active/passive only? I think in RHEL 5 you can have active/active. Am I right?
>
>> Regards, Daniel
>>
>> Andy Kosela wrote:
>>> Hi all, what is the current status of support for high-end SAN hardware in FreeBSD? I'm especially interested in support for HP EVA/XP disk arrays, QLogic HBAs, and multipathing. How does FreeBSD compare in this environment to RHEL 5?

--
WBR, Kind Regards,
Daniel Ponticello, VP of Engineering
Network Coordination Centre of Skytek
For further information about our services, please visit our website at http://www.Skytek.it

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
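The labeling workflow the man page implies can be sketched like this (the device names and label name are placeholders, not taken from the thread):

```shell
# Label two providers that are known to be paths to the same LUN;
# gmultipath writes its metadata to the last sector of the providers.
gmultipath label -v FC_LUN0 /dev/da0 /dev/da2

# The resulting device appears under /dev/multipath/FC_LUN0;
# "status" shows which path is currently active.
gmultipath status
```

Since identity can rarely be considered definitive, double-check (for example via disk size and controller topology) that both providers really are paths to the same LUN before labeling.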
Re: kernel panic on em0/taskq
I just checked the link you reported. It looks like the problem is present only on SMP machines, with both the ULE and 4BSD schedulers. I can confirm that the problem is also present on 6.3-STABLE. Basically, it freezes before collecting a dump and before being able to reboot. I wish I had more information to open a PR.

Thanks, Daniel

Jeremy Chadwick wrote:
> On Sun, Jun 08, 2008 at 03:07:08PM +0200, Daniel Ponticello wrote:
>> Sorry, I did not read your suggestion well ;) Anyway, the system reboots correctly if I issue the reboot command from the command line. Should I adjust those values anyway?
>
> I'd recommend adjusting them and seeing if the bug (not automatically rebooting on panic) changes. I'm guessing it won't (especially if reboot(8) works fine), but it's worth trying. You're the second person I've seen report this problem (FreeBSD not properly rebooting on panic). The other report: http://lists.freebsd.org/pipermail/freebsd-stable/2008-May/042250.html
> I'll add this to my "Commonly reported issues" wiki. As for the problem itself, sorry, I have no idea what's causing it.
Re: kernel panic on em0/taskq
Sorry, I did not read your suggestion well ;) Anyway, the system reboots correctly if I issue the reboot command from the command line. Should I adjust those values anyway?

Thanks, Daniel

Daniel Ponticello wrote:
> Hello, disabling ACPI is not an option, since I'm running SMP. I have several other systems running 7.0-RELEASE without problems, so it might be something in 7-STABLE. Thanks, Daniel
>
> Jeremy Chadwick wrote:
>> On Sat, Jun 07, 2008 at 11:34:41PM +0200, Daniel Ponticello wrote:
>>> Hello, I'm experiencing periodic kernel panics on a server running FreeBSD 7.0-STABLE #0: Tue May 20 19:09:43 CEST 2008. My big problem is that the system is not performing a memory dump and/or an automatic reboot; it just sits there.
>>
>> Try adjusting some of these sysctl values: hw.acpi.disable_on_reboot, hw.acpi.handle_reboot. You're using VMware, which may or may not behave properly anyway.
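For reference, the knobs Jeremy suggested can be inspected and toggled at runtime, and persisted via /etc/sysctl.conf if they turn out to help (the value shown is an example, not a recommendation for this machine):

```shell
# Inspect the current ACPI reboot-handling settings
sysctl hw.acpi.disable_on_reboot hw.acpi.handle_reboot

# Toggle one at runtime (example value only)
sysctl hw.acpi.handle_reboot=1

# Persist across reboots if it helps
echo 'hw.acpi.handle_reboot=1' >> /etc/sysctl.conf
```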
Re: kernel panic on em0/taskq
Hello, disabling ACPI is not an option, since I'm running SMP. I have several other systems running 7.0-RELEASE without problems, so it might be something in 7-STABLE.

Thanks, Daniel

Jeremy Chadwick wrote:
> On Sat, Jun 07, 2008 at 11:34:41PM +0200, Daniel Ponticello wrote:
>> Hello, I'm experiencing periodic kernel panics on a server running FreeBSD 7.0-STABLE #0: Tue May 20 19:09:43 CEST 2008. My big problem is that the system is not performing a memory dump and/or an automatic reboot; it just sits there.
>
> Try adjusting some of these sysctl values: hw.acpi.disable_on_reboot, hw.acpi.handle_reboot. You're using VMware, which may or may not behave properly anyway.
Re: kernel panic on em0/taskq
Hello Jack!

> There was a problem in the watchdog path; I don't recall if it was checked in to STABLE, I will check after the weekend. But there is also the question of why you are in the watchdog path in the first place.

I tried to apply the latest patch (1.184.2.3 2008/05/21 21:34:05), which includes the watchdog fix you mentioned. Let's see if it helps! No idea why I'm in the watchdog path, but I forgot to add that this is a virtualized VM on VMware ESX (it emulates the em interface), so I guess that might be the cause. Do you have any idea why I wasn't able to collect a dump and why the system did not reboot automatically?

Thanks, Daniel
kernel panic on em0/taskq
Hello, I'm experiencing periodic kernel panics on a server running FreeBSD 7.0-STABLE #0: Tue May 20 19:09:43 CEST 2008. My big problem is that the system is not performing a memory dump and/or an automatic reboot; it just sits there. Here's the console output:

em0: watchdog timeout -- resetting
kernel trap 12 with interrupts disabled

Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic_id = 00
fault virtual address = 0x14
fault code = supervisor read, page not present
instruction pointer = 0x20:0xc056e2ce
stack pointer = 0x28:0xe537fc08
frame pointer = 0x28:0xe537fc28
code segment = base 0x0, limit 0xf, type 0x1b
             = DPL 0, pres 1, def32 1, gran 1
processor eflags = resume, IOPL = 0
current process = 29 (em0 taskq)
trap number = 12
panic: page fault
cpuid = 0

It just stays there, unresponsive (no automatic reboot). Any ideas?

Thanks, Daniel
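A panic can only produce a vmcore if a dump device is configured beforehand; a standard rc.conf fragment for that (generic FreeBSD settings, not specific to this machine) is:

```shell
# /etc/rc.conf
dumpdev="AUTO"         # on panic, dump kernel memory to the configured swap device
dumpdir="/var/crash"   # savecore(8) saves the dump here during the next boot
```

Note this won't help if the machine freezes before the dump starts, as described in this thread, but it rules out a missing dump device as the cause.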
Re: Current status of support for high end SAN hardware
On FreeBSD 7, I'm successfully using QLogic 4Gb Fibre Channel HBAs (isp driver) attached to a Brocade Fibre Channel switch and an IBM DS4700 (14-disk array), with 4-way multipath via gmultipath.

Regards, Daniel

Andy Kosela wrote:
> Hi all, what is the current status of support for high-end SAN hardware in FreeBSD? I'm especially interested in support for HP EVA/XP disk arrays, QLogic HBAs, and multipathing. How does FreeBSD compare in this environment to RHEL 5?
lagg interfaces on -stable
Hello, I have configured a lagg interface on two Broadcom NICs (bce0 and bce1). I have tried laggproto lacp (supported by the Nortel switch), as well as fec and failover, but they all show the same symptom: everything works fine until I unplug the cable of the first interface (bce0); then lagg0 also shows "status: no carrier", while bce0 shows no carrier (correct) and bce1 is active. Any ideas?

Thanks! Daniel
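For comparison, a minimal failover lagg configuration in rc.conf looks roughly like this (the IP address is a placeholder; the interface names are the ones from the post):

```shell
# /etc/rc.conf -- sketch of a two-port failover lagg
ifconfig_bce0="up"
ifconfig_bce1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto failover laggport bce0 laggport bce1 192.0.2.10/24"
```

With laggproto failover, traffic should move to bce1 when bce0 loses link; lagg0 reporting "no carrier" while a port is still active is the unexpected part.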
Re: 7-STABLE: bridge and em
Guido Falsi wrote:
> I discovered the same thing while experimenting with qemu and bridging. I think it simply works differently from (for example) Windows bridging, and I think it's meant to be like that. It also seems more logical. I think of the bridge as just a packet router, which routes frames to all the interfaces (physical and virtual alike); so if the bridge intercepts frames addressed to its own address, they obviously can't reach the other interfaces. Maybe I'm wrong here?

No, you are absolutely right! This is how bridging works. It is not necessary for the bridge interface to have its own MAC address, since it will just forward the Ethernet frame to all interfaces that are part of the bridge device, except the interface that received the frame. Later, once the bridge learns the location of a MAC address, it forwards the Ethernet frame only to the correct interface. Therefore, if the frame's destination is the MAC address of the bridge itself, the bridge will not forward it, since it has already arrived at its destination.

Daniel
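The learn-and-flood behaviour described above can be illustrated with a toy model (purely a sketch; the port names and the bridge's MAC address are invented):

```python
# Toy model of a learning bridge: frames are flooded to every member port
# except the ingress port until the destination MAC has been learned, and
# frames addressed to the bridge itself are consumed, not forwarded.
BRIDGE_MAC = "aa:aa:aa:aa:aa:aa"  # hypothetical MAC of the bridge itself

class Bridge:
    def __init__(self, ports, own_mac=BRIDGE_MAC):
        self.ports = set(ports)
        self.own_mac = own_mac
        self.fdb = {}  # forwarding database: MAC -> port it was seen on

    def receive(self, ingress, src, dst):
        """Return the set of ports the frame is forwarded out of."""
        self.fdb[src] = ingress            # learn where src lives
        if dst == self.own_mac:
            return set()                   # arrived at destination: consume
        if dst in self.fdb:
            return {self.fdb[dst]}         # known unicast: one port only
        return self.ports - {ingress}      # unknown: flood, minus ingress

br = Bridge(["em0", "em1", "em2"])
br.receive("em0", "02:00:00:00:00:01", "02:00:00:00:00:02")  # floods em1, em2
br.receive("em1", "02:00:00:00:00:02", "02:00:00:00:00:01")  # now only em0
br.receive("em2", "02:00:00:00:00:03", BRIDGE_MAC)           # consumed, no ports
```

The empty set in the last call is exactly the behaviour discussed here: a frame for the bridge's own address never reaches the member interfaces.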
Re: Disk access/MPT under ESX3.5
Oh, by the way, this makes an improvement only on 7.0-RELEASE and -STABLE using the ULE scheduler:
- Using the 4BSD scheduler improves disk access speed but not CPU usage.
- On FreeBSD 6.2 and 6.3 it makes no difference in CPU usage or speed (about 60MB/sec maximum, using 100% CPU).

Thanks, Daniel

Daniel Ponticello wrote:
> Much better:
>
> endevor# camcontrol negotiate 0:0 -W 16 -O 127 -R 40.000
> Current Parameters:
> (pass0:mpt0:0:0:0): sync parameter: 8
> (pass0:mpt0:0:0:0): frequency: 160.000MHz
> (pass0:mpt0:0:0:0): offset: 127
> (pass0:mpt0:0:0:0): bus width: 16 bits
> (pass0:mpt0:0:0:0): disconnection is enabled
> (pass0:mpt0:0:0:0): tagged queueing is enabled
> endevor# camcontrol inquiry da0
> pass0: Fixed Direct Access SCSI-2 device
> pass0: 80.000MB/s transfers (40.000MHz, offset 127, 16bit), Command Queueing Enabled
> endevor# dd if=/dev/zero of=/var/tmp/dead.file bs=1024k count=1000
> 1048576000 bytes transferred in 20.364577 secs (51490193 bytes/sec)
> endevor# dd if=/dev/zero of=/var/tmp/dead.file bs=1024k count=1000
> 1048576000 bytes transferred in 11.525560 secs (90978311 bytes/sec)
> endevor# dd if=/dev/zero of=/var/tmp/dead.file bs=1024k count=1000
> 1048576000 bytes transferred in 11.833996 secs (88607094 bytes/sec)
> endevor# dd if=/dev/zero of=/var/tmp/dead.file bs=1024k count=1000
> 1048576000 bytes transferred in 12.779979 secs (82048335 bytes/sec)
> endevor# dd if=/dev/zero of=/var/tmp/dead.file bs=1024k count=1000
> 1048576000 bytes transferred in 11.812926 secs (88765137 bytes/sec)
>
> Also, CPU system load is lower: 40% instead of 70% on the dual-CPU machine, and 23% instead of 50% on the quad-CPU machine. Still not as fast as Debian, but really usable now =) How can I make it keep these settings at boot? Also, do you know why the first dd run was slower?
>
> Thanks, Daniel
>
> Shunsuke SHINOMIYA wrote:
>> Hello, Daniel. Can you execute camcontrol with some parameters again? For example, `camcontrol negotiate 0:0 -W 16 -O 127 -R 40.000'.
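One way to re-apply the negotiated parameters at every boot (a sketch, untested on this setup) is to run the same command from /etc/rc.local, which FreeBSD executes late in the boot sequence:

```shell
# /etc/rc.local -- re-apply the SCSI negotiation from this thread at boot
# (bus:target 0:0 is taken from the thread; it may differ on other hosts)
/sbin/camcontrol negotiate 0:0 -W 16 -O 127 -R 40.000
```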
Re: Disk access/MPT under ESX3.5
Much better:

endevor# camcontrol negotiate 0:0 -W 16 -O 127 -R 40.000
Current Parameters:
(pass0:mpt0:0:0:0): sync parameter: 8
(pass0:mpt0:0:0:0): frequency: 160.000MHz
(pass0:mpt0:0:0:0): offset: 127
(pass0:mpt0:0:0:0): bus width: 16 bits
(pass0:mpt0:0:0:0): disconnection is enabled
(pass0:mpt0:0:0:0): tagged queueing is enabled
endevor# camcontrol inquiry da0
pass0: Fixed Direct Access SCSI-2 device
pass0: 80.000MB/s transfers (40.000MHz, offset 127, 16bit), Command Queueing Enabled
endevor# dd if=/dev/zero of=/var/tmp/dead.file bs=1024k count=1000
1048576000 bytes transferred in 20.364577 secs (51490193 bytes/sec)
endevor# dd if=/dev/zero of=/var/tmp/dead.file bs=1024k count=1000
1048576000 bytes transferred in 11.525560 secs (90978311 bytes/sec)
endevor# dd if=/dev/zero of=/var/tmp/dead.file bs=1024k count=1000
1048576000 bytes transferred in 11.833996 secs (88607094 bytes/sec)
endevor# dd if=/dev/zero of=/var/tmp/dead.file bs=1024k count=1000
1048576000 bytes transferred in 12.779979 secs (82048335 bytes/sec)
endevor# dd if=/dev/zero of=/var/tmp/dead.file bs=1024k count=1000
1048576000 bytes transferred in 11.812926 secs (88765137 bytes/sec)

Also, CPU system load is lower: 40% instead of 70% on the dual-CPU machine, and 23% instead of 50% on the quad-CPU machine. Still not as fast as Debian, but really usable now =) How can I make it keep these settings at boot? Also, do you know why the first dd run was slower?

Thanks, Daniel

Shunsuke SHINOMIYA wrote:
> Hello, Daniel. Can you execute camcontrol with some parameters again? For example, `camcontrol negotiate 0:0 -W 16 -O 127 -R 40.000'.
Re: Disk access/MPT under ESX3.5
Hello,

monitor# camcontrol negotiate 0:0 -W 16
Current Parameters:
(pass0:mpt0:0:0:0): sync parameter: 0
(pass0:mpt0:0:0:0): offset: 0
(pass0:mpt0:0:0:0): bus width: 8 bits
(pass0:mpt0:0:0:0): disconnection is enabled
(pass0:mpt0:0:0:0): tagged queueing is enabled
monitor# dd if=/dev/zero of=/var/tmp/dead.file bs=1024k count=1000
1048576000 bytes transferred in 32.421679 secs (32341817 bytes/sec)
monitor# dd if=/dev/zero of=/var/tmp/dead.file bs=1024k count=1000
1048576000 bytes transferred in 20.355797 secs (51512402 bytes/sec)

No improvement. And it looks like it did not renegotiate the transfer rate, which for some odd reason is set to 3.3MB/s instead of 320MB/s. I ran some tests using Linux 2.6.18 (Debian):

debiantest:/home/daniel# uname -a
Linux debiantest 2.6.18-6-686 #1 SMP Sun Feb 10 22:11:31 UTC 2008 i686 GNU/Linux

scsi0 : ioc0: LSI53C1030, FwRev=h, Ports=1, MaxQ=128, IRQ=169
Vendor: VMware  Model: Virtual disk  Rev: 1.0
Type: Direct-Access  ANSI SCSI revision: 02
target0:0:0: Beginning Domain Validation
target0:0:0: Domain Validation skipping write tests
target0:0:0: Ending Domain Validation
target0:0:0: FAST-160 WIDE SCSI 320.0 MB/s DT IU RDSTRM RTI WRFLOW PCOMP (6.25 ns, offset 127)

debiantest:/home/daniel# dd if=/dev/zero of=dead.file bs=1024k count=1000
1048576000 bytes (1.0 GB) copied, 5.01316 seconds, 209 MB/s

=(

Shunsuke SHINOMIYA wrote:
> da0 at mpt0 bus 0 target 0 lun 0
> da0: Fixed Direct Access SCSI-2 device
> da0: 3.300MB/s transfers
> da0: Command Queueing Enabled
> da0: 34816MB (71303168 512 byte sectors: 255H 63S/T 4438C)
>
> Can you re-negotiate the transfer rate using camcontrol? `camcontrol negotiate 0:0 -W 16' might improve those transfer settings. camcontrol needs passthrough device support in the kernel.
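As a sanity check on the numbers above: the rate dd prints is simply bytes divided by elapsed seconds. A quick recomputation (Python, illustrative only):

```python
# Recompute dd's reported transfer rates from the figures quoted above.
def dd_rate(nbytes: int, secs: float) -> float:
    """Bytes per second, as dd computes it."""
    return nbytes / secs

freebsd_first = dd_rate(1048576000, 32.421679)  # first FreeBSD run under ESX
debian_run    = dd_rate(1048576000, 5.01316)    # Debian guest on the same host

# The FreeBSD guest manages roughly 32 MB/s on the first run, the Debian
# guest about 209 MB/s, i.e. around 6.5x faster in this particular test.
print(round(freebsd_first), round(debian_run))
```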
Re: Disk access/MPT under ESX3.5
Clifton Royston wrote:
> If you are accessing a software emulation of a SCSI disk, I would offhand expect the CPU load to go up substantially when you are reading or writing it at the maximum achievable bandwidth. You can't expect normal relative load results under an emulator, and while most application or kernel code runs natively, I/O under VMware will zoom in and out of the emulator a lot. I'm afraid I can't give you a definitive answer, as I have VMware but haven't set up FreeBSD under it yet. -- Clifton

Thanks Clifton. My problem is that the system (console) becomes very unresponsive when I/O is writing at maximum bandwidth. The system is more responsive during I/O when using the ULE scheduler instead of 4BSD. Is there a way to limit the maximum I/O bandwidth used by the controller?

Daniel
Disk access/MPT under ESX3.5
Hello, I'm running some tests with FreeBSD 6.3 and FreeBSD 7-STABLE, using both the amd64 and i386 architectures and both schedulers (ULE and 4BSD), on a VMware ESX 3.5 server. Everything runs almost fine, except for disk access. Performance is quite OK (around 60MB/sec), but when accessing disks the system (kernel) CPU load goes up to 70%. This doesn't look normal. The same behavior is present on all test configurations.

monitor# dmesg | grep mpt
mpt0: port 0x1080-0x10ff mem 0xf481-0xf4810fff irq 17 at device 16.0 on pci0
mpt0: [ITHREAD]
mpt0: MPI Version=1.2.0.0
da0 at mpt0 bus 0 target 0 lun 0
da0: Fixed Direct Access SCSI-2 device
da0: 3.300MB/s transfers
da0: Command Queueing Enabled
da0: 34816MB (71303168 512 byte sectors: 255H 63S/T 4438C)

Any suggestions?

Thanks, Daniel