Re: Clang as default compiler November 4th
As of last week, 4,680 ports out of 23,857 failed to build with clang on 9-amd64. That's almost a 20% failure rate. Until we have better support for building ports with clang, or for the idea of a "ports compiler," this change is premature. The ports are an important part of the FreeBSD Operating _System_, and pulling the trigger on the default compiler before the ports problems are addressed robustly seems like a big fat FU.

That said, I agree that this issue needs to be addressed. In fact, 9 months before the release of 9.0 I said on the internal committers list that there was no point in making a new release until we had thoroughly addressed both the default compiler for the base and the "ports compiler" issue. While there has been some movement on the former, nothing has been done on the latter for years now, even though everyone agrees that it is an important issue.

I'd like to request that rather than moving the default compiler prematurely, you call for volunteers to address the problems with the ports: both fixing more ports to build correctly with clang, and defining a "ports compiler" version of gcc (and the appropriate infrastructure) for those that can't. Once those issues are resolved there would be no further obstacles to moving the default. Until they are, the change is premature.

Doug

On 09/10/2012 14:12, Brooks Davis wrote:
> [Please confine your replies to toolch...@freebsd.org to keep the thread
> on the most relevant list.]
>
> For the past several years we've been working towards migrating from
> GCC to Clang/LLVM as our default compiler. We intend to ship FreeBSD
> 10.0 with Clang as the default compiler on i386 and amd64 platforms. To
> this end, we will make WITH_CLANG_IS_CC the default on i386 and amd64
> platforms on November 4th.
>
> What does this mean to you?
> * When you build world after the default is changed, /usr/bin/cc, cpp, and
>   c++ will be links to clang.
>
> * This means the initial phase of buildworld and "old style" kernel
>   compilation will use clang instead of gcc. This is known to work.
>
> * It also means that ports will build with clang by default. A majority
>   of ports work, but a significant number are broken or blocked by
>   broken ports. For more information see:
>   http://wiki.freebsd.org/PortsAndClang
>
> What issues remain?
>
> * The gcc->clang transition currently requires setting CC, CXX, and CPP
>   in addition to WITH_CLANG_IS_CC. I will post a patch to toolchain@
>   to address this shortly.
>
> * Ports compiler selection infrastructure is still under development.
>
> * Some ports could build with clang with appropriate tweaks.
>
> What can you do to help?
>
> * Switch (some of) your systems. Early adoption can help us find bugs.
>
> * Fix ports to build with clang. If you don't have a clang system, you
>   can use the CLANG/amd64 or CLANG/i386 build environments on
>   redports.org.
>
> tl;dr: Clang will become the default compiler for x86 architectures on
> 2012-11-04
>
> -- Brooks

--
I am only one, but I am one. I cannot do everything, but I can do something. And I will not let what I cannot do interfere with what I can do.
-- Edward Everett Hale, (1822 - 1909)

___ freebsd-current@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-current To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
Re: mfi driver performance
On Mon, Sep 10, 2012 at 7:15 PM, matt wrote:
...
> mfip was necessary, and allowed smartctl to work with '-d sat'
>
> bonnie++ comparison. Run with no options immediately after system boot. In
> both cases the same disks are used, two Seagate Barracuda 1TB 3G/s (twin
> platter) and a Barracuda 500GB 3G/s (single platter) in a zfs triple mirror
> that the system was booted from. All are 7200 RPM drives with 32MB cache,
> and mediocre performance compared to my hitachi 7k3000s or the 15k sas
> cheetahs at work etc. Firmwares were the latest 2108it vs the latest imr_fw
> that work on the 9240/9220/m1015/drake skinny. I wish I had some 6G ssds to
> try!
>
> MPS:
> Version 1.96       --Sequential Output-- --Sequential Input- --Random-
> Concurrency 1      -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
> flatline.local 32G   122  99 71588  24 53293  20   284  90 222157  33 252.6  49
> Latency              542ms     356ms     914ms     991ms     337ms     271ms
> Version 1.96       --Sequential Create-- Random Create
> flatline.local     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                 16 22197  93  9367  27 16821  99 23555  99     + +++ 23717  99
> Latency            31650us     290ms     869us   23036us      66us     131us
> 1.96,1.96,flatline.local,1,1347322810,32G,,122,99,71588,24,53293,20,284,90,222157,33,252.6,49,16,22197,93,9367,27,16821,99,23555,99,+,+++,23717,99,542ms,356ms,914ms,991ms,337ms,271ms,31650us,290ms,869us,23036us,66us,131us
>
> MFI:
> Version 1.96       --Sequential Output-- --Sequential Input- --Random-
> Concurrency 1      -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
> flatline.local 32G   125  99 71443  24 53177  21   317  99 220280  33 255.3  52
> Latency              533ms     566ms    1134ms   86565us     357ms     252ms
> Version 1.96       --Sequential Create-- Random Create
> flatline.local     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                 16 22347  94 12389  30 16804 100 18729  99 27798  99  5317  99
> Latency            33818us     233ms     558us   26581us      75us   12319us
> 1.96,1.96,flatline.local,1,1347329123,32G,,125,99,71443,24,53177,21,317,99,220280,33,255.3,52,16,22347,94,12389,30,16804,100,18729,99,27798,99,5317,99,533ms,566ms,1134ms,86565us,357ms,252ms,33818us,233ms,558us,26581us,75us,12319us
>
> A close race, with some wins for each. Latency on sequential input and
> deleted files per second appear to be the interesting differences.
> A lot of the other stuff is back and forth and probably not statistically
> significant (although not much of a sample set :) ).
>
> I tried to control as many variables as possible, but obviously it's one
> controller in one configuration; Your Mileage May Vary.

Try upping the queue depth (hw.mfi.max_cmds); this is controller dependent.

Cheers,
-Garrett
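Garrett's suggestion refers to a loader tunable, so it would typically be applied in /boot/loader.conf before boot. A minimal sketch; the value shown here is purely illustrative, since (as he notes) the safe maximum is controller dependent:

```shell
# /boot/loader.conf -- raise the mfi(4) command queue depth.
# 128 is an illustrative value, not a recommendation; the controller's
# own limit is visible in the "mfi0: MaxCmd = ..." line printed at attach.
hw.mfi.max_cmds=128
```

After rebooting, rerunning the same bonnie++ comparison would show whether queue depth accounts for any of the latency differences above.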
Re: Clang as default compiler November 4th
On 09/10/12 14:22, Daniel Eischen wrote:

On Mon, 10 Sep 2012, Brooks Davis wrote:

[Please confine your replies to toolch...@freebsd.org to keep the thread on the most relevant list.]

For the past several years we've been working towards migrating from GCC to Clang/LLVM as our default compiler. We intend to ship FreeBSD 10.0 with Clang as the default compiler on i386 and amd64 platforms. To this end, we will make WITH_CLANG_IS_CC the default on i386 and amd64 platforms on November 4th.

What does this mean to you?

* When you build world after the default is changed, /usr/bin/cc, cpp, and c++ will be links to clang.

* This means the initial phase of buildworld and "old style" kernel compilation will use clang instead of gcc. This is known to work.

* It also means that ports will build with clang by default. A majority of ports work, but a significant number are broken or blocked by broken ports. For more information see: http://wiki.freebsd.org/PortsAndClang

What issues remain?

* The gcc->clang transition currently requires setting CC, CXX, and CPP in addition to WITH_CLANG_IS_CC. I will post a patch to toolchain@ to address this shortly.

I assume this will be done, tested and committed before 2012-11-04 (or whenever the switchover date is).

* Ports compiler selection infrastructure is still under development.

This should be a prerequisite before making the switch, given that ports will be broken without a work-around for building them with gcc.

I've been using a somewhat dirty method of doing this by checking for the presence of a file in the port's main directory: e.g. if "basegcc" is present, build with that; if "clang" is present, use it; otherwise default to gcc47. Obviously that configuration is system specific, but the fundamental idea is to look for a file in the port's directory that dictates the compiler. Perhaps even add a "make ccconfig" target.
It works quite nicely because you can resume a portmaster spree without having to suspend and change CC manually, or build all clang ports first, etc. Further, csup doesn't touch files it doesn't know about, so updating the tree (without wiping it out) preserves the fact that you'd prefer or need to build a given port with something else. There are definitely some ports that have been ignoring libmap.conf, which tends to require me to build some of their dependencies with base gcc, but otherwise I've been running this system for a few months and it works quite well... portmaster can upgrade without user intervention, and it's quite easy to add cflags logic. Granted this works for me and is probably not the ideal solution... also hacked on it to post, so probably typos :)

Something like this in make.conf (with -fstack-protector-all for all ports, which works great):

.if !empty(.CURDIR:M/usr/ports/*)
CFLAGS+= -fstack-protector-all
.endif

.if !empty(.CURDIR:M/usr/ports/*) && exists(/usr/local/bin/gcc47) && !exists(basegcc) && !exists(clang)
# this was occasionally necessary
#LDFLAGS+=-lintl
# custom cflags if desired
#CFLAGS+=-custom cflags for gcc47
# custom cputype if desired
CPUTYPE=amdfam10
CC=gcc47
CPP=cpp47
CXX=g++47
.endif

.if !empty(.CURDIR:M/usr/ports/*) && exists(/usr/bin/clang) && exists(clang)
.if !defined(CC) || ${CC} == "cc"
CC=clang
.endif
.if !defined(CXX) || ${CXX} == "c++"
CXX=clang++
.endif
.if !defined(CPP) || ${CPP} == "cpp"
CPP=clang-cpp
.endif
NO_WERROR=
WERROR=
.endif

Usage is as simple as "touch basegcc" in the port dir, or "touch clang", etc., to select the appropriate compiler.

Matt
Re: mfi driver performance
On 09/10/12 11:35, Andrey Zonov wrote: On 9/10/12 9:14 PM, matt wrote: On 09/10/12 05:38, Achim Patzner wrote: Hi! We’re testing a new Intel S2600GL-based server with their recommended RAID adapter ("Intel(R) Integrated RAID Module RMS25CB080”) which is identified as mfi0: port 0x2000-0x20ff mem 0xd0c6-0xd0c63fff,0xd0c0-0xd0c3 irq 34 at device 0.0 on pci5 mfi0: Using MSI mfi0: Megaraid SAS driver Ver 4.23 mfi0: MaxCmd = 3f0 MaxSgl = 46 state = b75003f0 or mfi0@pci0:5:0:0:class=0x010400 card=0x35138086 chip=0x005b1000 rev=0x03 hdr=0x00 vendor = 'LSI Logic / Symbios Logic' device = 'MegaRAID SAS 2208 [Thunderbolt]' class = mass storage subclass = RAID and seems to be doing quite well. As long as it isn’t used… When the system is getting a bit more IO load it is getting close to unusable as soon as there are a few writes (independent of configuration, it is even sucking as a glorified S-ATA controller). Equipping it with an older (unsupported) controller like an SRCSASRB (mfi0@pci0:10:0:0: class=0x010400 card=0x100a8086 chip=0x00601000 rev=0x04 hdr=0x00 vendor = 'LSI Logic / Symbios Logic' device = 'MegaRAID SAS 1078' class = mass storage subclass = RAID) solves the problem but won’t make Intel’s support happy. Has anybody similar experiences with the mfi driver? Any good ideas besides running an unsupported configuration? Achim ___ freebsd-current@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-current To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org" I just set up an IBM m1015 (aka LSI 9240lite aka Drake Skinny) with mfi. Performance was excellent for mfisyspd volumes, as I compared using the same hardware but with firmware (2108it.bin) that attaches under mps. Bonnie++ results on random disks were very close if not identical between mfi and mps. ZFS performance was also identical between a mfisysd JBOD volume and a mps "da" raw volume. 
It was also quite clear mfisyspd volumes are true sector-for-sector pass through devices. However, I could not get smartctl to see an mfisyspd volume (it claimed there was no such file...?) and so I flashed the controller back to mps for now. A shame, because I really like the mfi driver better, and mfiutil worked great (even to flash firmware updates). Have you got /dev/pass* when the controller run under mfi driver? If so, try to run smartctl on them. If not, add 'device mfip' in your kernel config file. mfip was necessary, and allowed smartctl to work with '-d sat' bonnie++ comparison. Run with no options immediately after system boot. In both cases the same disks are used, two Seagate Barracuda 1TB 3G/S (twin platter) and a Barracuda 500G 3G/s (single platter) in a zfs triple mirror that the system was booted from. All are 7200 RPM drives with 32mb cache, and mediocre performance compared to my hitachi 7k3000s or the 15k sas cheetahs at work etc. Firmwares were the latest 2108it vs the latest imr_fw that work on the 9240/9220/m1015/drake skinny. I wish I had some 6g ssds to try! 
MPS:
Version 1.96       --Sequential Output-- --Sequential Input- --Random-
Concurrency 1      -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
flatline.local 32G   122  99 71588  24 53293  20   284  90 222157  33 252.6  49
Latency              542ms     356ms     914ms     991ms     337ms     271ms
Version 1.96       --Sequential Create-- Random Create
flatline.local     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16 22197  93  9367  27 16821  99 23555  99     + +++ 23717  99
Latency            31650us     290ms     869us   23036us      66us     131us
1.96,1.96,flatline.local,1,1347322810,32G,,122,99,71588,24,53293,20,284,90,222157,33,252.6,49,16,22197,93,9367,27,16821,99,23555,99,+,+++,23717,99,542ms,356ms,914ms,991ms,337ms,271ms,31650us,290ms,869us,23036us,66us,131us

MFI:
Version 1.96       --Sequential Output-- --Sequential Input- --Random-
Concurrency 1      -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
flatline.local 32G   125  99 71443  24 53177  21   317  99 220280  33 255.3  52
Latency              533ms     566ms    1134ms   86565us     357ms     252ms
Version 1.96       --Sequential Create-- Random Create
flatline.local     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16 22347  94 12389  30 16804 100 18729  99 27798  99  5317  99
Latency            33818us     233ms     558us   26581us      75us   12319us
1.96,1.96,flatline.local,1,1347329123,32G,,125,99,71443,24,53177,21,317,99,220280,33,255.3,52,16,22347,94,12389,30,16804,100,18729,99,27798,99,5317,99,533ms,566ms,1134ms,86565us,357ms,252ms,33818us,233ms,558us,26581us,75us,12319us

A close race, with some wins for each.
Re: Clang as default compiler November 4th
On Mon, Sep 10, 2012 at 05:22:37PM -0400, Daniel Eischen wrote:
> On Mon, 10 Sep 2012, Brooks Davis wrote:
> > [Please confine your replies to toolch...@freebsd.org to keep the thread
> > on the most relevant list.]
> >
> > For the past several years we've been working towards migrating from
> > GCC to Clang/LLVM as our default compiler. We intend to ship FreeBSD
> > 10.0 with Clang as the default compiler on i386 and amd64 platforms. To
> > this end, we will make WITH_CLANG_IS_CC the default on i386 and amd64
> > platforms on November 4th.
> >
> > What does this mean to you?
> >
> > * When you build world after the default is changed, /usr/bin/cc, cpp, and
> >   c++ will be links to clang.
> >
> > * This means the initial phase of buildworld and "old style" kernel
> >   compilation will use clang instead of gcc. This is known to work.
> >
> > * It also means that ports will build with clang by default. A majority
> >   of ports work, but a significant number are broken or blocked by
> >   broken ports. For more information see:
> >   http://wiki.freebsd.org/PortsAndClang
> >
> > What issues remain?
> >
> > * The gcc->clang transition currently requires setting CC, CXX, and CPP
> >   in addition to WITH_CLANG_IS_CC. I will post a patch to toolchain@
> >   to address this shortly.
>
> I assume this will be done, tested and committed before 2012-11-04
> (or whenever the switchover date is).

Pending review it will be done this week.

> > * Ports compiler selection infrastructure is still under development.
>
> This should be a prerequisite before making the switch, given
> that ports will be broken without a work-around for building
> them with gcc.

We've de facto done that for more than a year. Some progress has resulted, but not enough. I will be helping fix ports and I hope others do as well.

It's worth noting that a switchable compiler isn't a magic bullet. Many ports will need to be patched to support a compiler other than /usr/bin/cc or /usr/bin/gcc.
-- Brooks
Re: Clang as default compiler November 4th
On Mon, 10 Sep 2012, Brooks Davis wrote:

[Please confine your replies to toolch...@freebsd.org to keep the thread on the most relevant list.]

For the past several years we've been working towards migrating from GCC to Clang/LLVM as our default compiler. We intend to ship FreeBSD 10.0 with Clang as the default compiler on i386 and amd64 platforms. To this end, we will make WITH_CLANG_IS_CC the default on i386 and amd64 platforms on November 4th.

What does this mean to you?

* When you build world after the default is changed, /usr/bin/cc, cpp, and c++ will be links to clang.

* This means the initial phase of buildworld and "old style" kernel compilation will use clang instead of gcc. This is known to work.

* It also means that ports will build with clang by default. A majority of ports work, but a significant number are broken or blocked by broken ports. For more information see: http://wiki.freebsd.org/PortsAndClang

What issues remain?

* The gcc->clang transition currently requires setting CC, CXX, and CPP in addition to WITH_CLANG_IS_CC. I will post a patch to toolchain@ to address this shortly.

I assume this will be done, tested and committed before 2012-11-04 (or whenever the switchover date is).

* Ports compiler selection infrastructure is still under development.

This should be a prerequisite before making the switch, given that ports will be broken without a work-around for building them with gcc.

--
DE
Clang as default compiler November 4th
[Please confine your replies to toolch...@freebsd.org to keep the thread on the most relevant list.]

For the past several years we've been working towards migrating from GCC to Clang/LLVM as our default compiler. We intend to ship FreeBSD 10.0 with Clang as the default compiler on i386 and amd64 platforms. To this end, we will make WITH_CLANG_IS_CC the default on i386 and amd64 platforms on November 4th.

What does this mean to you?

* When you build world after the default is changed, /usr/bin/cc, cpp, and c++ will be links to clang.

* This means the initial phase of buildworld and "old style" kernel compilation will use clang instead of gcc. This is known to work.

* It also means that ports will build with clang by default. A majority of ports work, but a significant number are broken or blocked by broken ports. For more information see: http://wiki.freebsd.org/PortsAndClang

What issues remain?

* The gcc->clang transition currently requires setting CC, CXX, and CPP in addition to WITH_CLANG_IS_CC. I will post a patch to toolchain@ to address this shortly.

* Ports compiler selection infrastructure is still under development.

* Some ports could build with clang with appropriate tweaks.

What can you do to help?

* Switch (some of) your systems. Early adoption can help us find bugs.

* Fix ports to build with clang. If you don't have a clang system, you can use the CLANG/amd64 or CLANG/i386 build environments on redports.org.

tl;dr: Clang will become the default compiler for x86 architectures on 2012-11-04

-- Brooks
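Until the patch mentioned above lands, the interim settings the announcement describes could be collected in /etc/make.conf along these lines. This is a sketch using only the variable names from the announcement (and the clang tool names that appear elsewhere in this thread), not an official recipe:

```shell
# /etc/make.conf -- interim settings for the gcc->clang switch (sketch).
# WITH_CLANG_IS_CC makes buildworld install the cc/c++/cpp links as clang;
# CC/CXX/CPP still have to be set by hand until the pending patch lands.
WITH_CLANG_IS_CC=yes
CC=clang
CXX=clang++
CPP=clang-cpp
```

With these set, a subsequent buildworld and port builds pick up clang without further flags.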
Re: mfi driver performance
On 09/10/12 11:35, Andrey Zonov wrote: > On 9/10/12 9:14 PM, matt wrote: >> On 09/10/12 05:38, Achim Patzner wrote: >>> Hi! >>> >>> We’re testing a new Intel S2600GL-based server with their recommended RAID >>> adapter ("Intel(R) Integrated RAID Module RMS25CB080”) which is identified >>> as >>> >>> mfi0: port 0x2000-0x20ff mem >>> 0xd0c6-0xd0c63fff,0xd0c0-0xd0c3 irq 34 at device 0.0 on pci5 >>> mfi0: Using MSI >>> mfi0: Megaraid SAS driver Ver 4.23 >>> mfi0: MaxCmd = 3f0 MaxSgl = 46 state = b75003f0 >>> >>> or >>> >>> mfi0@pci0:5:0:0:class=0x010400 card=0x35138086 chip=0x005b1000 >>> rev=0x03 hdr=0x00 >>> vendor = 'LSI Logic / Symbios Logic' >>> device = 'MegaRAID SAS 2208 [Thunderbolt]' >>> class = mass storage >>> subclass = RAID >>> >>> and seems to be doing quite well. >>> >>> As long as it isn’t used… >>> >>> When the system is getting a bit more IO load it is getting close to >>> unusable as soon as there are a few writes (independent of configuration, >>> it is even sucking as a glorified S-ATA controller). Equipping it with an >>> older (unsupported) controller like an SRCSASRB >>> (mfi0@pci0:10:0:0: class=0x010400 card=0x100a8086 chip=0x00601000 >>> rev=0x04 hdr=0x00 >>> vendor = 'LSI Logic / Symbios Logic' >>> device = 'MegaRAID SAS 1078' >>> class = mass storage >>> subclass = RAID) solves the problem but won’t make Intel’s support >>> happy. >>> >>> Has anybody similar experiences with the mfi driver? Any good ideas besides >>> running an unsupported configuration? >>> >>> >>> Achim >>> >>> ___ >>> freebsd-current@freebsd.org mailing list >>> http://lists.freebsd.org/mailman/listinfo/freebsd-current >>> To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org" >> I just set up an IBM m1015 (aka LSI 9240lite aka Drake Skinny) with mfi. >> Performance was excellent for mfisyspd volumes, as I compared using the >> same hardware but with firmware (2108it.bin) that attaches under mps. 
>> Bonnie++ results on random disks were very close if not identical
>> between mfi and mps. ZFS performance was also identical between a
>> mfisysd JBOD volume and a mps "da" raw volume. It was also quite clear
>> mfisyspd volumes are true sector-for-sector pass through devices.
>>
>> However, I could not get smartctl to see an mfisyspd volume (it claimed
>> there was no such file...?) and so I flashed the controller back to mps
>> for now. A shame, because I really like the mfi driver better, and
>> mfiutil worked great (even to flash firmware updates).
>>
> Have you got /dev/pass* when the controller runs under the mfi driver?
> If so, try to run smartctl on them. If not, add 'device mfip' to your
> kernel config file.

I will try mfi firmware again tonight. With ZFS it seemed happy whether the pool was /dev/da* or /dev/mfisyspd*. Is the mfisyspd device name set in stone? It's quite long!

Matt
Re: mfi driver performance
On 9/10/12 9:14 PM, matt wrote: > On 09/10/12 05:38, Achim Patzner wrote: >> Hi! >> >> We’re testing a new Intel S2600GL-based server with their recommended RAID >> adapter ("Intel(R) Integrated RAID Module RMS25CB080”) which is identified as >> >> mfi0: port 0x2000-0x20ff mem >> 0xd0c6-0xd0c63fff,0xd0c0-0xd0c3 irq 34 at device 0.0 on pci5 >> mfi0: Using MSI >> mfi0: Megaraid SAS driver Ver 4.23 >> mfi0: MaxCmd = 3f0 MaxSgl = 46 state = b75003f0 >> >> or >> >> mfi0@pci0:5:0:0:class=0x010400 card=0x35138086 chip=0x005b1000 >> rev=0x03 hdr=0x00 >> vendor = 'LSI Logic / Symbios Logic' >> device = 'MegaRAID SAS 2208 [Thunderbolt]' >> class = mass storage >> subclass = RAID >> >> and seems to be doing quite well. >> >> As long as it isn’t used… >> >> When the system is getting a bit more IO load it is getting close to >> unusable as soon as there are a few writes (independent of configuration, it >> is even sucking as a glorified S-ATA controller). Equipping it with an >> older (unsupported) controller like an SRCSASRB >> (mfi0@pci0:10:0:0: class=0x010400 card=0x100a8086 chip=0x00601000 >> rev=0x04 hdr=0x00 >> vendor = 'LSI Logic / Symbios Logic' >> device = 'MegaRAID SAS 1078' >> class = mass storage >> subclass = RAID) solves the problem but won’t make Intel’s support >> happy. >> >> Has anybody similar experiences with the mfi driver? Any good ideas besides >> running an unsupported configuration? >> >> >> Achim >> >> ___ >> freebsd-current@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-current >> To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org" > I just set up an IBM m1015 (aka LSI 9240lite aka Drake Skinny) with mfi. > Performance was excellent for mfisyspd volumes, as I compared using the > same hardware but with firmware (2108it.bin) that attaches under mps. > Bonnie++ results on random disks were very close if not identical > between mfi and mps. 
> ZFS performance was also identical between a mfisysd JBOD volume and a
> mps "da" raw volume. It was also quite clear mfisyspd volumes are true
> sector-for-sector pass through devices.
>
> However, I could not get smartctl to see an mfisyspd volume (it claimed
> there was no such file...?) and so I flashed the controller back to mps
> for now. A shame, because I really like the mfi driver better, and
> mfiutil worked great (even to flash firmware updates).

Have you got /dev/pass* when the controller runs under the mfi driver? If so, try to run smartctl on them. If not, add 'device mfip' to your kernel config file.

--
Andrey Zonov
Re: mfi driver performance
On 09/10/12 05:38, Achim Patzner wrote: > Hi! > > We’re testing a new Intel S2600GL-based server with their recommended RAID > adapter ("Intel(R) Integrated RAID Module RMS25CB080”) which is identified as > > mfi0: port 0x2000-0x20ff mem > 0xd0c6-0xd0c63fff,0xd0c0-0xd0c3 irq 34 at device 0.0 on pci5 > mfi0: Using MSI > mfi0: Megaraid SAS driver Ver 4.23 > mfi0: MaxCmd = 3f0 MaxSgl = 46 state = b75003f0 > > or > > mfi0@pci0:5:0:0:class=0x010400 card=0x35138086 chip=0x005b1000 > rev=0x03 hdr=0x00 > vendor = 'LSI Logic / Symbios Logic' > device = 'MegaRAID SAS 2208 [Thunderbolt]' > class = mass storage > subclass = RAID > > and seems to be doing quite well. > > As long as it isn’t used… > > When the system is getting a bit more IO load it is getting close to unusable > as soon as there are a few writes (independent of configuration, it is even > sucking as a glorified S-ATA controller). Equipping it with an older > (unsupported) controller like an SRCSASRB > (mfi0@pci0:10:0:0: class=0x010400 card=0x100a8086 chip=0x00601000 > rev=0x04 hdr=0x00 > vendor = 'LSI Logic / Symbios Logic' > device = 'MegaRAID SAS 1078' > class = mass storage > subclass = RAID) solves the problem but won’t make Intel’s support > happy. > > Has anybody similar experiences with the mfi driver? Any good ideas besides > running an unsupported configuration? > > > Achim > > ___ > freebsd-current@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-current > To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org" I just set up an IBM m1015 (aka LSI 9240lite aka Drake Skinny) with mfi. Performance was excellent for mfisyspd volumes, as I compared using the same hardware but with firmware (2108it.bin) that attaches under mps. Bonnie++ results on random disks were very close if not identical between mfi and mps. ZFS performance was also identical between a mfisysd JBOD volume and a mps "da" raw volume. 
It was also quite clear mfisyspd volumes are true sector-for-sector pass through devices. However, I could not get smartctl to see an mfisyspd volume (it claimed there was no such file...?) and so I flashed the controller back to mps for now. A shame, because I really like the mfi driver better, and mfiutil worked great (even to flash firmware updates).

Matt
buildworld after r240303 wants "make cleandepend" if -DNOCLEAN used
In case others want to save a bit of time by using -DNOCLEAN for buildworld: I found that after r240303, if "make cleandepend" isn't done first, the src/cddl/lib/libnvpair build fails with:

make: don't know how to make /usr/src/cddl/lib/libnvpair/../../../sys/cddl/compat/opensolaris/sys/debug.h. Stop

Running "make cleandepend" first appears to take care of it.

Peace,
david
--
David H. Wolfskill da...@catwhisker.org
Depriving a girl or boy of an opportunity for education is evil.
See http://www.catwhisker.org/~david/publickey.gpg for my public key.
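The workaround above amounts to one extra step before resuming incremental builds. Roughly, as a sketch assuming a stock /usr/src layout:

```shell
# One-time cleanup after updating past r240303, then build as usual.
cd /usr/src
make cleandepend          # drop stale .depend files referencing moved headers
make -DNOCLEAN buildworld # resume the incremental build
```

Subsequent -DNOCLEAN builds should not need the cleandepend step again.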
Raspberry PI gets USB support [FreeBSD 10 current]
Hi,

For those that want to try the Raspberry PI and its USB ports:

Add this to "sys/conf/files":

dev/usb/controller/dwc_otg.c            optional dwcotg
arm/broadcom/bcm2835/dwc_otg_brcm.c     optional dwcotg

And add this to "RPI-B":

device dwcotg
device usb
device umass

Open ISSUE: External USB ports do not enumerate. Set address times out. Reason unknown. Maybe someone out there has some clues?

--HPS
Re: mfi driver performance
Might be worth testing anyway, as that would help show whether it is a controller or a driver issue.

- Original Message -
From: "Achim Patzner"
To: "Steven Hartland"
Cc:
Sent: Monday, September 10, 2012 2:19 PM
Subject: Re: mfi driver performance

On 10.09.2012 at 14:57, Steven Hartland wrote:

How are you intending to use the controller?

As a “launch-and-forget” RAID 1+0 sub-system using UFS (which reminds me to complain about sysinstall on volumes > 2 TB later).

If you're looking to put a load of disks in for, say, ZFS, have you tried flashing to a non-RAID firmware so it uses mps instead of mfi?

That would be wasting a perfectly good brain; the on-board SAS controller would be sufficient for that.

Achim

This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmas...@multiplay.co.uk.
Re: mfi driver performance
On 10.09.2012 at 14:57, Steven Hartland wrote:

> How are you intending to use the controller?

As a “launch-and-forget” RAID 1+0 sub-system using UFS (which reminds me to complain about sysinstall on volumes > 2 TB later).

> If you're looking to put a load of disks in for, say, ZFS, have you tried
> flashing to a non-RAID firmware so it uses mps instead of mfi?

That would be wasting a perfectly good brain; the on-board SAS controller would be sufficient for that.

Achim
Re: mfi driver performance
How are you intending to use the controller?

If you're looking to put a load of disks in for say ZFS, have you tried flashing to a non-RAID firmware so it uses mps instead of mfi?

Regards
Steve

----- Original Message -----
From: "Achim Patzner"
Sent: Monday, September 10, 2012 1:38 PM
Subject: mfi driver performance

> Hi!
>
> We’re testing a new Intel S2600GL-based server with their recommended RAID
> adapter ("Intel(R) Integrated RAID Module RMS25CB080"), which is identified
> as:
>
>   mfi0: port 0x2000-0x20ff mem 0xd0c6-0xd0c63fff,0xd0c0-0xd0c3 irq 34 at device 0.0 on pci5
>   mfi0: Using MSI
>   mfi0: Megaraid SAS driver Ver 4.23
>   mfi0: MaxCmd = 3f0 MaxSgl = 46 state = b75003f0
>
> or
>
>   mfi0@pci0:5:0:0: class=0x010400 card=0x35138086 chip=0x005b1000 rev=0x03 hdr=0x00
>       vendor   = 'LSI Logic / Symbios Logic'
>       device   = 'MegaRAID SAS 2208 [Thunderbolt]'
>       class    = mass storage
>       subclass = RAID
>
> and it seems to be doing quite well. As long as it isn’t used…
>
> When the system gets a bit more IO load it becomes close to unusable as
> soon as there are a few writes (independent of configuration; it even
> sucks as a glorified S-ATA controller). Equipping the machine with an
> older (unsupported) controller like an SRCSASRB
>
>   mfi0@pci0:10:0:0: class=0x010400 card=0x100a8086 chip=0x00601000 rev=0x04 hdr=0x00
>       vendor   = 'LSI Logic / Symbios Logic'
>       device   = 'MegaRAID SAS 1078'
>       class    = mass storage
>       subclass = RAID
>
> solves the problem but won’t make Intel’s support happy.
>
> Has anybody had similar experiences with the mfi driver? Any good ideas
> besides running an unsupported configuration?
>
> Achim
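[Editorial note: before blaming the driver outright, a common first check for slow writes on an mfi(4) volume is the controller's cache policy. The commands below are an illustrative sketch using FreeBSD's mfiutil(8); the volume name `mfid0` is an assumption, and nothing here was suggested in the thread itself. These are hardware-dependent admin commands, not a runnable script.]

```shell
# Hypothetical diagnostic session (assumes FreeBSD with mfiutil in base).
# A volume stuck in write-through mode -- e.g. because no BBU is fitted or
# the battery is reported bad -- often shows exactly this "fine until a few
# writes arrive" behaviour.

# List the volumes the controller exposes.
mfiutil show volumes

# Inspect the cache settings of the first volume (mfid0 is an assumption).
mfiutil cache mfid0

# Check whether a battery/capacitor is present and healthy.
mfiutil show battery

# If the BBU is healthy, enabling the write cache is the usual fix.
mfiutil cache mfid0 enable
```

If the cache policy turns out to be fine, that strengthens the case that the difference between the SAS 2208 and the older SAS 1078 really is in the driver or firmware path, as the thread suspects.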
mfi driver performance
Hi!

We’re testing a new Intel S2600GL-based server with their recommended RAID adapter ("Intel(R) Integrated RAID Module RMS25CB080"), which is identified as:

  mfi0: port 0x2000-0x20ff mem 0xd0c6-0xd0c63fff,0xd0c0-0xd0c3 irq 34 at device 0.0 on pci5
  mfi0: Using MSI
  mfi0: Megaraid SAS driver Ver 4.23
  mfi0: MaxCmd = 3f0 MaxSgl = 46 state = b75003f0

or

  mfi0@pci0:5:0:0: class=0x010400 card=0x35138086 chip=0x005b1000 rev=0x03 hdr=0x00
      vendor   = 'LSI Logic / Symbios Logic'
      device   = 'MegaRAID SAS 2208 [Thunderbolt]'
      class    = mass storage
      subclass = RAID

and it seems to be doing quite well. As long as it isn’t used…

When the system gets a bit more IO load it becomes close to unusable as soon as there are a few writes (independent of configuration; it even sucks as a glorified S-ATA controller). Equipping the machine with an older (unsupported) controller like an SRCSASRB

  mfi0@pci0:10:0:0: class=0x010400 card=0x100a8086 chip=0x00601000 rev=0x04 hdr=0x00
      vendor   = 'LSI Logic / Symbios Logic'
      device   = 'MegaRAID SAS 1078'
      class    = mass storage
      subclass = RAID

solves the problem but won’t make Intel’s support happy.

Has anybody had similar experiences with the mfi driver? Any good ideas besides running an unsupported configuration?

Achim
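[Editorial note: the report doesn't say how the write stalls were measured. A minimal, portable way to quantify the behaviour is a sequential-write probe like the sketch below, with gstat(8) running in another terminal to watch per-device latency. The target path and sizes are illustrative assumptions, not from the thread.]

```shell
#!/bin/sh
# Minimal sequential-write probe (illustrative). Point TARGET at a file on
# the mfi-backed filesystem to test the controller; /tmp is used here only
# so the sketch is self-contained and safe to run anywhere.
TARGET=/tmp/mfi-write-test.bin

# Write 16 MiB in 1 MiB blocks. On the affected controller the reported
# throughput should collapse once its write path is saturated; compare the
# same run against the SAS 1078 setup that reportedly behaves well.
dd if=/dev/zero of="$TARGET" bs=1048576 count=16 2>&1

# Flush outstanding writes so the numbers aren't purely page-cache speed,
# then clean up.
sync
rm -f "$TARGET"
echo "done"
```

Repeating the run while `gstat` (or `iostat -x 1`) is open makes it easy to see whether the stall is queueing in the driver or latency at the device.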