Re: [zfs-discuss] SPARC SATA, please.
> > So if you get such a board be sure to avoid Samsung 750GB and
> > 1TB disks. Samsung never acknowledged the bug, nor have they released
> > a firmware update. And nVidia never said anything about it either.
[...]
> I'm a Hitachi disk user myself, and they work swell. The Seagates I have
> in my X2200 M2 seem to work fine, as well.

Yes, all HGST disks I've tried so far work just fine.

> I've not tried any SSDs yet with the MCP55 - since they're heavily
> Samsung under the hood (regardless of whose name is on the outside), I
> _hope_ it was just a HD-specific firmware bug.

I think it is quite HD-specific. I have another, slightly older, 160GB
Samsung disk that worked fine as root disk in the X2200 M2. If you do
try an SSD please let us know. :-)

Regards -- Volker
--
Volker A. Brandt                 Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH                  WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim                   Email: v...@bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513             Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] SPARC SATA, please.
Volker A. Brandt wrote:
> > The MCP55 is the chipset currently in use in the Sun X2200 M2 series
> > of servers.
>
> ... which has big problems with certain Samsung SATA disks. :-(
>
> So if you get such a board be sure to avoid Samsung 750GB and 1TB
> disks. Samsung never acknowledged the bug, nor have they released a
> firmware update. And nVidia never said anything about it either.
>
> Of course I only found out about it after buying lots of Samsung disks
> for our X2200s. Sigh...

That is true, and it slipped my mind. Thanks for reminding me, Volker.

I'm a Hitachi disk user myself, and they work swell. The Seagates I have
in my X2200 M2 seem to work fine, as well.

I've not tried any SSDs yet with the MCP55 - since they're heavily
Samsung under the hood (regardless of whose name is on the outside), I
_hope_ it was just a HD-specific firmware bug.

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Re: [zfs-discuss] SPARC SATA, please.
> The MCP55 is the chipset currently in use in the Sun X2200 M2 series
> of servers.

... which has big problems with certain Samsung SATA disks. :-(

So if you get such a board be sure to avoid Samsung 750GB and 1TB disks.
Samsung never acknowledged the bug, nor have they released a firmware
update. And nVidia never said anything about it either.

Of course I only found out about it after buying lots of Samsung disks
for our X2200s. Sigh...

Regards -- Volker
Re: [zfs-discuss] SPARC SATA, please.
Simon Breden wrote:
> Miles, thanks for helping clear up the confusion surrounding this
> subject!
>
> My decision is now as above: for my existing NAS, to leave the pool
> as-is and seek a 2+ port SATA card for the 2-drive mirror of 2 x 30GB
> SATA boot SSDs that I want to add.
>
> For the next NAS build later this summer, I will go for an LSI
> 1068-based SAS/SATA configuration in a PCIe expansion slot rather than
> the ageing PCI-X slots. Using PCIe instead of PCI-X also opens up many
> more possible motherboards, although since I want ECC support this
> still limits the choice of mobos.
>
> I was thinking of using something like a Xeon E5504 (Nehalem) in the
> new NAS, and I've been hunting for a good, highly compatible mobo that
> will give the least aggro (trouble) with OpenSolaris. This one looks
> good: it's pretty much all Intel chipsets, it has an LSI SAS1068E
> (which I trust should be supported by Solaris), additional PCIe slots
> for future expansion, a basic onboard graphics chip, and dual Intel
> GbE NICs:
>
> SuperMicro X8STi-3F:
> http://www.supermicro.com/products/motherboard/Xeon3000/X58/X8STi-3F.cfm
>
> Any comments on this mobo welcome, plus suggestions for a possible
> PCIe-based 2+ port SATA card that is reliable and has a solid driver.
>
> Simon

Note that the X8STi-3F requires an L-bracket riser card to use both the
PCI-E x16 and the x8 slot, which will be mounted horizontally (and,
likely, limited to low-profile cards). You'd likely have to use a custom
Supermicro case for this to work. Otherwise, you're limited to the PCI-E
x16 slot, in a standard vertical orientation. The board does have an
IPMI-based KVM ethernet port, but I have no idea if it's supported under
Solaris. Also, remember that you'll have to order a Xeon CPU with this,
NOT the i7 CPU, in order to get ECC memory support.
Personally, I'd go for an AMD-based system, which is about the same cost
and a much better board:
http://www.supermicro.com/Aplus/motherboard/Opteron2000/MCP55/H8DM3-2.cfm
(comes with a 1068E SAS controller AND the nVidia MCP55-based 6-port
SATA controller, so no need for any more PCI cards, and it supports the
add-in card for remote KVM console; it's a dual-socket, Extended ATX
size, though).

The MCP55 is the chipset currently in use in the Sun X2200 M2 series of
servers.
Re: [zfs-discuss] SPARC SATA, please.
Simon Breden wrote:
> > I think the confusion is because the 1068 can do "hardware" RAID, it
> > can and does write its own labels, as well as reserve space for
> > replacements of disks with slightly different sizes. But that is
> > only one mode of operation.
>
> So, it sounds like if I use a 1068-based device, and I *don't* want it
> to write labels to the drives, to allow easy portability of drives to
> a different controller, then I need to avoid the "RAID" mode of the
> device and instead force it to use JBOD mode. Is this easily
> selectable? I guess you just avoid the "Use RAID mode" option in the
> controller's BIOS or something?

In the Sun onboard version of the 1068, the "JBOD" mode is the default.
I don't know about the add-in cards, but I suspect it's the same. Worst
case, you push Ctrl-L (or whatever it prompts you for) at BIOS
initialization and remove any RAID devices it has configured. With no
RAID devices configured, it runs as a pure HBA (i.e. in JBOD mode).
Re: [zfs-discuss] SPARC SATA, please.
OK, thanks James.
--
This message posted from opensolaris.org
Re: [zfs-discuss] SPARC SATA, please.
On Thu, 25 Jun 2009 16:11:04 -0700 (PDT)
Simon Breden wrote:
> That sounds even better :)
>
> So what's the procedure to create a zpool using the 1068?

same as any other device:

# zpool create poolname vdev vdev vdev

> Also, any special 'tricks / tips' / commands required for using a
> 1068-based SAS/SATA device?

no

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel
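For example, a mirrored pool on two disks behind a 1068 looks the same
as on any other controller. A minimal sketch (the c1t0d0/c1t1d0 device
names and the pool name "tank" are placeholders; use whatever `format`
lists on your box):

```shell
# List the disks the mpt driver presents; pick the ones on the 1068.
format < /dev/null

# Create a mirrored pool from two of them (hypothetical device names).
zpool create tank mirror c1t0d0 c1t1d0

# Verify the pool is healthy.
zpool status tank
```

Nothing 1068-specific is needed; ZFS just sees plain disks.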
Re: [zfs-discuss] SPARC SATA, please.
That sounds even better :)

So what's the procedure to create a zpool using the 1068?

Also, any special 'tricks / tips' / commands required for using a
1068-based SAS/SATA device?

Simon
Re: [zfs-discuss] SPARC SATA, please.
On Fri, Jun 26 at 8:55, James C. McPherson wrote:
> On Thu, 25 Jun 2009 15:43:17 -0700 (PDT)
> Simon Breden wrote:
> > So, it sounds like if I use a 1068-based device, and I *don't* want
> > it to write labels to the drives, to allow easy portability of
> > drives to a different controller, then I need to avoid the "RAID"
> > mode of the device and instead force it to use JBOD mode. Is this
> > easily selectable? I guess you just avoid the "Use RAID mode"
> > option in the controller's BIOS or something?
>
> It's even simpler than that with the 1068 - just don't use raidctl or
> the bios to create raid volumes and you'll have a bunch of plain
> disks. No forcing required.

Exactly. Worked as such out-of-the-box with no forcing of any kind for
me.

--eric

--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
Re: [zfs-discuss] SPARC SATA, please.
On Thu, 25 Jun 2009 15:43:17 -0700 (PDT)
Simon Breden wrote:
> > I think the confusion is because the 1068 can do "hardware" RAID,
> > it can and does write its own labels, as well as reserve space for
> > replacements of disks with slightly different sizes. But that is
> > only one mode of operation.
>
> So, it sounds like if I use a 1068-based device, and I *don't* want it
> to write labels to the drives, to allow easy portability of drives to
> a different controller, then I need to avoid the "RAID" mode of the
> device and instead force it to use JBOD mode. Is this easily
> selectable? I guess you just avoid the "Use RAID mode" option in the
> controller's BIOS or something?

It's even simpler than that with the 1068 - just don't use raidctl or
the bios to create raid volumes and you'll have a bunch of plain disks.
No forcing required.

James C. McPherson
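If you want to double-check that no firmware RAID volumes exist before
handing the disks to ZFS, something along these lines should do it (a
sketch only; raidctl's output and the volume name c1t0d0 below vary by
controller and Solaris release):

```shell
# List RAID volumes known to raidctl; ideally none are defined.
raidctl -l

# If a volume such as c1t0d0 does show up, delete it so the disks go
# back to being plain JBOD devices (this destroys the volume's data!).
# raidctl -d c1t0d0

# ZFS then sees ordinary, unlabeled disks.
zpool status
```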
Re: [zfs-discuss] SPARC SATA, please.
> I think the confusion is because the 1068 can do "hardware" RAID, it
> can and does write its own labels, as well as reserve space for
> replacements of disks with slightly different sizes. But that is only
> one mode of operation.

So, it sounds like if I use a 1068-based device, and I *don't* want it
to write labels to the drives, to allow easy portability of drives to a
different controller, then I need to avoid the "RAID" mode of the
device and instead force it to use JBOD mode. Is this easily
selectable? I guess you just avoid the "Use RAID mode" option in the
controller's BIOS or something?
Re: [zfs-discuss] SPARC SATA, please.
Miles, thanks for helping clear up the confusion surrounding this
subject!

My decision is now as above: for my existing NAS, to leave the pool
as-is and seek a 2+ port SATA card for the 2-drive mirror of 2 x 30GB
SATA boot SSDs that I want to add.

For the next NAS build later this summer, I will go for an LSI
1068-based SAS/SATA configuration in a PCIe expansion slot rather than
the ageing PCI-X slots. Using PCIe instead of PCI-X also opens up many
more possible motherboards, although since I want ECC support this
still limits the choice of mobos.

I was thinking of using something like a Xeon E5504 (Nehalem) in the
new NAS, and I've been hunting for a good, highly compatible mobo that
will give the least aggro (trouble) with OpenSolaris. This one looks
good: it's pretty much all Intel chipsets, it has an LSI SAS1068E
(which I trust should be supported by Solaris), additional PCIe slots
for future expansion, a basic onboard graphics chip, and dual Intel GbE
NICs:

SuperMicro X8STi-3F:
http://www.supermicro.com/products/motherboard/Xeon3000/X58/X8STi-3F.cfm

Any comments on this mobo welcome, plus suggestions for a possible
PCIe-based 2+ port SATA card that is reliable and has a solid driver.

Simon
Re: [zfs-discuss] SPARC SATA, please.
Miles Nordin wrote:
> sb> The situation regarding lack of open source drivers for these
> sb> LSI 1068/1078-based cards is quite scary.
>
> meh I dunno. The amount of confusion is a little scary, I guess.
>
> sb> And did I understand you correctly when you say that these LSI
> sb> 1068/1078 drivers write labels to drives,
>
> no incorrect. I'm using a 1068 (``closed-source / proprietary
> driver''), and it doesn't write such labels. I think the confusion is
> because the 1068 can do "hardware" RAID, it can and does write its
> own labels, as well as reserve space for replacements of disks with
> slightly different sizes. But that is only one mode of operation.

Nit: the definition of "proprietary" is "relating to ownership." One
could argue that Linus still "owns" Linux, since he has such strong
control over what is accepted into the Linux kernel :-) Similarly, one
could argue that a forker would own the fork. In other words, "open
source" and "proprietary" are not mutually exclusive, nor is "closed
source" a synonym for "proprietary." You say tomato, I say 'mater.

-- richard
Re: [zfs-discuss] SPARC SATA, please.
> "sb" == Simon Breden writes:

sb> The situation regarding lack of open source drivers for these
sb> LSI 1068/1078-based cards is quite scary.

meh I dunno. The amount of confusion is a little scary, I guess.

sb> And did I understand you correctly when you say that these LSI
sb> 1068/1078 drivers write labels to drives,

no incorrect. I'm using a 1068 (``closed-source / proprietary
driver''), and it doesn't write such labels. The firmware piece is big,
so not all 1068s are necessarily the same: I think some are capable of
RAID0/RAID1, but so far I've not heard of a 1068 demanding LSI labels,
and mine doesn't.

The LSI 1078 (PERC) with the open-source x86-only driver is the one
with the big ``closed-source / proprietary'' firmware blob running on
the card itself. Others have reported this blob demands LSI labels on
the disks. I don't have one. Who knows, maybe you can cross-flash some
weird firmware from some strange variant of card that doesn't need LSI
labels on each disk, or maybe some binary blob config tool will flip a
magic undocumented switch inside the card to make it JBOD-able. I
don't like to deal in such circus-hoop messes unless someone else can
do the work and tell me exactly how.

sb> go for the non-LSI controllers -- e.g. the AOC-SAT2-MV8

no, you misunderstood, because there are two kinds of LSI card with two
different drivers. Compared to Marvell, the LSI 1068 has a cheaper bus
(PCIe), performs better, and seems to have fewer bugs (ex. 6787312 is a
duplicate of a secret Marvell bug), and its proprietary driver includes
a SPARC object. The Marvell controller is still a ``closed-source /
proprietary'' driver (the Linux driver for the same chip: open source),
so you gain nothing there. The one thing Marvell might gain you is
that it uses the SATA framework, so smartctl/hd may be closer to
working. On Linux both cards use the uniform SCSI framework, so
smartctl works.

I have both AOC-SAT2-MV8 and AOC-USAS-L8i and suggest the latter.
You have to unscrew the reverse-polarity card-edge bracket and buy some
octopus cables from thenerds.net or adaptec or similar, is all.
AOC-USAS-L8i works with these cables among others:
http://www.thenerds.net/3WARE.AMCC_Serial_Attached_SCSI_SAS_Internal_Cable.CBLSFF8087OCF10M.html
Re: [zfs-discuss] SPARC SATA, please.
The situation regarding lack of open source drivers for these LSI
1068/1078-based cards is quite scary.

And did I understand you correctly when you say that these LSI
1068/1078 drivers write labels to drives, meaning you can't move drives
from an LSI-controlled array to another arbitrary array due to these
labels?

If this is the case, then surely my best bet would be to go for the
non-LSI controllers -- e.g. the AOC-SAT2-MV8 instead, which I presume
does not write labels to the array drives?

Please correct me if I have misunderstood.

Cheers,
Simon
Re: [zfs-discuss] SPARC SATA, please.
> "jl" == James Lever writes:

jl> I thought they were both closed source

yes, both are closed source / proprietary. If you are really confused
and not just trying to pick a dictionary fight, I can start saying
``closed source / proprietary'' on Solaris lists from now on. On Linux
lists, ``proprietary'' is clear enough, but maybe the people around
here are different.

jl> and that the LSI chipset specifications were proprietary.

I don't know about specifications, but I do know that Linux has an open
source driver for the 1068, and Solaris has an open source driver for
the 1078. Getting source without specifications is a problem, though,
yes, if you want to track down a bug in the driver or write a driver
for another OS.

The other problem is, with both chips but especially with the 1078, it
sounds like these cards are very ``firmware'' heavy, and the firmware
is proprietary. This causes the complaints here that 'hd' (smartctl
equivalent) doesn't work, and that with PERC/1078 they have to make
RAID0's of each disk with LSI labels on the disk, which blocks moving
the disk from one controller to another---meaning a broken controller
could potentially toast your whole zpool no matter what disk redundancy
you had, unless you figure out some way to escape the trap. If not for
the ``closed-source / proprietary'' firmware, these two problems could
never persist.

so, there is still no SATA driver for Solaris that:

* is open-source. Like a fully-open stack, not just ``here look! here
  is some source. is that a rabbit over there?'' Open-source meaning I
  can add smartctl or DVD writer or NCQ support without bumping into
  some strange blob that stops me. Open-source meaning I can swap out a
  disk without having to run any proprietary code to ``bless'' the disk
  first. No BIOS bluescreen garbage either.
* supports NCQ and hotplug

* performs well and doesn't have a lot of bugs, like ``freezes'' and so
  on

* works on x86 and SPARC

* comes in card form so it can achieve high port density

On Linux, both the Marvell and LSI 1068 drivers come close to or meet
all of these. (smartctl DOES work with Linux's open source 1068
driver.) Sun has more leverage with LSI than Linux, not less, because
they are an actual customer of LSI's chips for the hardware they
sell---they even ditched Marvell for LSI!---yet they do worse at
negotiating driver openness, then try to blame LSI's whim, and tell the
random schmuck user to ``go complain to LSI'' when we are not LSI's
customer, Sun is.

The issue gets more complicated, but not better, IMHO.
Re: [zfs-discuss] SPARC SATA, please.
Miles Nordin wrote:
> There's also been talk of two tools, MegaCli and lsiutil, which are
> both binary only and exist for both Linux and Solaris, and I think
> are used only with the 1078 cards but maybe not.

lsiutil works with LSI chips that use the Fusion-MPT interface (SCSI,
SAS, and FC), including the 1068. I've used it with both the mpt and
itmpt drivers.

MegaCLI appears to be for MegaRAID SAS and SATA II controllers (using
the mega_sas driver), including the 1078. I've never used it.

-- Carson
Re: [zfs-discuss] SPARC SATA, please.
On 25/06/2009, at 5:16 AM, Miles Nordin wrote:
> and mpt is the 1068 driver, proprietary, works on x86 and SPARC.
>
> then there is also itmpt, the third-party-downloadable closed-source
> driver from LSI Logic, dunno much about it but someone here used it.

I'm confused. Why do you say the mpt driver is proprietary and the LSI
provided tool is closed source? I thought they were both closed source
and that the LSI chipset specifications were proprietary.
Re: [zfs-discuss] SPARC SATA, please.
> "jr" == Jacob Ritorto writes:

jr> I think this is the board that shipped in the original
jr> T2000 machines before they began putting the sas/sata onboard:
jr> LSISAS3080X-R
jr> Can anyone verify this?

can't verify, but FWIW I fucked it up:

> I thought the LSI 1068 do not work with SPARC (mfi driver, x86 only).
  ^ me. this is wrong.

mega_sas, the open source driver for the 1078/PERC, is x86-only.
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-March/027338.html

and mpt is the 1068 driver, proprietary, works on x86 and SPARC.

mfi is some other (abandoned?) random third-party open-source driver
for some of these cards that no one's mentioned using yet, at
https://svn.itee.uq.edu.au/repo/mfi/

then there is also itmpt, the third-party-downloadable closed-source
driver from LSI Logic, dunno much about it but someone here used it.

sorry.

There's also been talk of two tools, MegaCli and lsiutil, which are
both binary only and exist for both Linux and Solaris, and I think are
used only with the 1078 cards but maybe not.
Re: [zfs-discuss] SPARC SATA, please.
I think this is the board that shipped in the original T2000 machines
before they began putting the sas/sata onboard: LSISAS3080X-R

Can anyone verify this?

Justin Stringfellow wrote:
> Richard Elling wrote:
> > Sun has been using the LSI 1068[E] and its cousin, 1064[E], in
> > SPARC machines for many years. In fact, I can't think of a SPARC
> > machine in the current product line that does not use either 1068
> > or 1064 (I'm sure someone will correct me, though ;-)
>
> Might be worth having a look at the T1000 to see what's in there. We
> used to ship those with SATA drives in.
Re: [zfs-discuss] SPARC SATA, please.
Richard Elling wrote:
> Miles Nordin wrote:
> > ave> The LSI SAS controllers with SATA ports work nicely with
> > ave> SPARC.
> >
> > I think what you mean is ``some LSI SAS controllers work nicely
> > with SPARC''. It would help if you tell exactly which one you're
> > using. I thought the LSI 1068 do not work with SPARC (mfi driver,
> > x86 only).
>
> Sun has been using the LSI 1068[E] and its cousin, 1064[E], in SPARC
> machines for many years. In fact, I can't think of a SPARC machine in
> the current product line that does not use either 1068 or 1064 (I'm
> sure someone will correct me, though ;-)

Might be worth having a look at the T1000 to see what's in there. We
used to ship those with SATA drives in.

cheers,
--justin
Re: [zfs-discuss] SPARC SATA, please.
> "vab" == Volker A Brandt writes:

>> I thought the LSI 1068 do not work with SPARC (mfi driver, x86
>> only). I thought the 1078 are supposed to work with SPARC
>> (mega_sas).

vab> uname -a
vab> SunOS shelob 5.10 Generic_137111-02 sun4v sparc SUNW,Sun-Fire-T1000
vab> man mpt
[...]
vab> DESCRIPTION The mpt host bus adapter driver is a SCSA
vab> compliant nexus driver that supports the LSI 53C1030 SCSI,
vab> SAS1064, SAS1068 and Dell SAS 6i/R controllers. ...

damnit. I guess I got it backwards.

mega_sas and the 1078 are x86-only, need a lsiutil/MegaCli/whatever
blob, and use LSI-labeled disks that don't move between controllers
easily, but the driver comes with source.

mpt and the 1068 are x86/SPARC and use plain moveable disks, not LSI
RAID0's, but have a proprietary driver.

I'm sorry---I'm only making it worse.
Re: [zfs-discuss] SPARC SATA, please.
Miles Nordin wrote:
> ave> The LSI SAS controllers with SATA ports work nicely with
> ave> SPARC.
>
> I think what you mean is ``some LSI SAS controllers work nicely with
> SPARC''. It would help if you tell exactly which one you're using. I
> thought the LSI 1068 do not work with SPARC (mfi driver, x86 only).

Sun has been using the LSI 1068[E] and its cousin, 1064[E], in SPARC
machines for many years. In fact, I can't think of a SPARC machine in
the current product line that does not use either 1068 or 1064 (I'm
sure someone will correct me, though ;-)

-- richard
Re: [zfs-discuss] SPARC SATA, please.
> I thought the LSI 1068 do not work with SPARC (mfi driver, x86 only).
> I thought the 1078 are supposed to work with SPARC (mega_sas).

Hmmm...

uname -a
SunOS shelob 5.10 Generic_137111-02 sun4v sparc SUNW,Sun-Fire-T1000

man mpt
Devices                                                    mpt(7D)

NAME
     mpt - SCSI host bus adapter driver

SYNOPSIS
     s...@unit-address

DESCRIPTION
     The mpt host bus adapter driver is a SCSA compliant nexus driver
     that supports the LSI 53C1030 SCSI, SAS1064, SAS1068 and Dell SAS
     6i/R controllers.
...

:-)

Regards -- Volker
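Along the same lines, a quick way to see which driver actually bound to
the controller on a given box is to grep the device tree for the driver
names discussed here (a sketch; exact output varies by Solaris
release):

```shell
# Show the device tree with bound driver names; look for mpt or mega_sas.
prtconf -D | grep -i -e mpt -e mega_sas

# Confirm the corresponding module is loaded.
modinfo | grep -i mpt
```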
Re: [zfs-discuss] SPARC SATA, please.
> "ave" == Andre van Eyssen writes:
> "et" == Erik Trimble writes:
> "ea" == Erik Ableson writes:
> "edm" == "Eric D. Mudama" writes:

ave> The LSI SAS controllers with SATA ports work nicely with
ave> SPARC.

I think what you mean is ``some LSI SAS controllers work nicely with
SPARC''. It would help if you tell exactly which one you're using. I
thought the LSI 1068 do not work with SPARC (mfi driver, x86 only). I
thought the 1078 are supposed to work with SPARC (mega_sas).

ave> Oh, and if you do grab the LSI card, don't let James catch you
ave> using the itmpt driver or lsiutils ;-)

What's the itmpt driver? closed-source sparc driver? Does it work with
1068-based cards?
http://www.lsi.com/DistributionSystem/AssetDocument/itmpt_sparc_5.07.04.txt

edm> We bought a Dell T610 as a fileserver, and it comes with an
edm> LSI 1068E based board (PERC6/i SAS).

carton> pciids.sourceforge.net says this is a 1078 board, not a 1068
carton> board.

edm> it's called the "Dell SAS6i/R" while they reserve the PERC name
edm> for the ones with cache. I had understood that they were
edm> basically identical except for the cache, but maybe not.

edm> (driver name: pcie_pci)
edm> pci1028,1f10, instance #0 (driver name: mpt)

ah, I am guessing not identical. mpt = 1068. I found this in pciids:

1000  LSI Logic / Symbios Logic
        0058  SAS1068E PCI-Express Fusion-MPT SAS
                1028 1f10  SAS 6/iR Integrated RAID Controller

pciids.sourceforge.net doesn't always specify 1068 vs 1078, but in this
case it does. so, AIUI, this card will not work on SPARC. but maybe
with this other proprietary driver itmpt it will?

ea> Just a side note on the PERC labelled cards: they don't have a
ea> JBOD mode so you _have_ to use hardware RAID.

This is not true of my AOC-USAS-L8i (1068) with the proprietary mpt
driver---it uses unlabeled disks. so, I bet it's not true of the Dell
no-battery-nvram cards either. so possibly, the Dell PERC cards with a
battery/cache will work in SPARC. Has anyone tried?
also:

et> I have an AOC-SAT2-MV8 in an older Opteron-based system. [...]
et> Also, it's a 3.3v card (won't work in 5v slots). None
et> of this should be a problem in any modern motherboard/case
et> setup, only in really old stuff.

first, I think that's not true, because I have that card working in a
5V 32-bit slot. second, if you really did have a 3.3V-only card, a
modern system would not make it magically okay. It would be a serious
inconvenience. Slots are either 5V or 3.3V, not both. There's such a
thing as a dual-voltage card, but there is no such thing as a
dual-voltage slot, and most 32-bit slots are 5V (they have the key
farther from the external-connector face of the card). I think the
reason there cannot be a dual-voltage slot is that the slot's on a bus
shared with other cards, so all cards on the same bus must agree on the
same voltage.
Re: [zfs-discuss] SPARC SATA, please.
Volker A. Brandt schrieb:
>>>> 2) disks that were attached once leave a stale /dev/dsk entry
>>>> behind that takes a full 7 seconds to stat() with the kernel
>>>> running at 100%.
>>>
>>> Such entries should go away with an invocation of "devfsadm -vC".
>>> If they don't, it's a bug IMHO.
>>
>> yes, they go away. But the problem is when you do this and replug
>> the disks they don't show up again... And that's even worse IMO...
>
> So you want such disks to behave more like USB sticks? If there was
> a good way to mark certain devices or a device tree as "volatile",
> then this would be an interesting RFE. I would certainly not want
> *all* of my disks to "come and go as they please". :-)
>
> I am not sure how feasible an implementation would be though.

yes - that's my usage scenario. Or to be more precise, I have a small
chassis with two disks, which I only want to attach for backup
purposes. I just send/receive from my active pool to the backup pool,
and then detach the backup pool. I just like having the backup disks
physically detached when not in use. Like this, nothing can really
screw them up but a fire in the room...

I thought SAS/SATA was supposed to be hot-pluggable - so what's the
difference between USB's hot-plug feature and that of SAS/SATA, other
than that USB is handled by the volume manager? So, yes, it would be
nice if one could assign a SATA disk to the volume manager.

- Thomas
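The attach/backup/detach cycle described above can be sketched roughly
like this (the pool names "tank" and "backup" and the snapshot dates
are made up for illustration):

```shell
# Bring the backup chassis online and import its pool.
zpool import backup

# Snapshot the active pool and send everything since the last backup
# snapshot as a recursive incremental replication stream.
zfs snapshot -r tank@2009-06-26
zfs send -R -I tank@2009-06-19 tank@2009-06-26 | zfs receive -d backup

# Cleanly detach the pool before powering the chassis off.
zpool export backup
```

The `zpool export` before unplugging is what keeps the backup pool
importable later without any forced-import games.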
Re: [zfs-discuss] SPARC SATA, please.
>>> 2) disks that were attached once leave a stale /dev/dsk entry behind
>>> that takes a full 7 seconds to stat() with the kernel running at 100%.
>>
>> Such entries should go away with an invocation of "devfsadm -vC".
>> If they don't, it's a bug IMHO.
>
> yes, they go away. But the problem is when you do this and replug the
> disks they don't show up again... And that's even worse IMO...

So you want such disks to behave more like USB sticks? If there was a good way to mark certain devices or a device tree as "volatile", then this would be an interesting RFE. I would certainly not want *all* of my disks to "come and go as they please". :-)

I am not sure how feasible an implementation would be, though.

Regards -- Volker
--
Volker A. Brandt              Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH                WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim            Email: v...@bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt
Re: [zfs-discuss] SPARC SATA, please.
Volker A. Brandt wrote:
>> 2) disks that were attached once leave a stale /dev/dsk entry behind
>> that takes a full 7 seconds to stat() with the kernel running at 100%.
>
> Such entries should go away with an invocation of "devfsadm -vC".
> If they don't, it's a bug IMHO.
>
> Regards -- Volker

Yes, they go away. But the problem is that when you do this and replug the disks, they don't show up again... And that's even worse IMO...

- Thomas
Re: [zfs-discuss] SPARC SATA, please.
> 2) disks that were attached once leave a stale /dev/dsk entry behind
> that takes a full 7 seconds to stat() with the kernel running at 100%.

Such entries should go away with an invocation of "devfsadm -vC". If they don't, it's a bug IMHO.

Regards -- Volker
--
Volker A. Brandt              Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH                WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim            Email: v...@bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt
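For anyone following along in the archive, the cleanup Volker suggests would be run as root on the Solaris/OpenSolaris host; this is just a sketch of the standard invocation:

```shell
# Remove dangling /dev links for devices that are no longer present
# (-C cleans up stale links, -v reports every change made)
devfsadm -Cv

# After re-attaching the disks, recreate the links for the new devices
devfsadm -v
```

The second invocation is the step that, per Thomas's follow-up, did not always bring replugged disks back.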
Re: [zfs-discuss] SPARC SATA, please.
On Tue, 23 Jun 2009, Thomas Maier-Komor wrote:

> 1) Once the disks spin down due to idleness it can become impossible to
> reactivate them without doing a full reboot (i.e. hot plugging won't help)

That's a good point - I don't think a second goes by without at least a little I/O on those disks, so they've probably spun down twice since installation - for two other hardware upgrades.

--
Andre van Eyssen.
mail: an...@purplecow.org          jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix             http://pix.purplecow.org
Re: [zfs-discuss] SPARC SATA, please.
Andre van Eyssen wrote:
> On Mon, 22 Jun 2009, Jacob Ritorto wrote:
>
>> Is there a card for OpenSolaris 2009.06 SPARC that will do SATA
>> correctly yet? Need it for a super cheapie, low expectations,
>> SunBlade 100 filer, so I think it has to be notched for 5v PCI slot,
>> iirc. I'm OK with slow -- main goals here are power saving (sleep all
>> 4 disks) and 1TB+ space. Oh, and I hate to be an old head, but I
>> don't want a peecee. They still scare me :) Thinking root pool on
>> 16GB ssd, perhaps, so the thing can spin down the main pool and idle
>> *really* cheaply..
>
> The LSI SAS controllers with SATA ports work nicely with SPARC. I have
> one in my V880. On a Blade-100, however, you might have some issues due
> to the craptitude of the PCI slots.
>
> To be honest, the Grover was a fun machine at the time, but I think that
> time may have passed.
>
> Oh, and if you do grab the LSI card, don't let James catch you using the
> itmpt driver or lsiutils ;-)

I'm also using an LSI SAS card for attaching SATA disks to a Blade 2500. In my experience there are some severe problems:

1) Once the disks spin down due to idleness, it can become impossible to reactivate them without doing a full reboot (i.e. hot plugging won't help).

2) Disks that were attached once leave a stale /dev/dsk entry behind that takes a full 7 seconds to stat() with the kernel running at 100%.

Apart from that it works fine.

- Thomas
Re: [zfs-discuss] SPARC SATA, please.
On Mon, 22 Jun 2009, Jacob Ritorto wrote:

> Is there a card for OpenSolaris 2009.06 SPARC that will do SATA
> correctly yet? Need it for a super cheapie, low expectations,
> SunBlade 100 filer, so I think it has to be notched for 5v PCI slot,
> iirc. I'm OK with slow -- main goals here are power saving (sleep all
> 4 disks) and 1TB+ space. Oh, and I hate to be an old head, but I
> don't want a peecee. They still scare me :) Thinking root pool on
> 16GB ssd, perhaps, so the thing can spin down the main pool and idle
> *really* cheaply..

The LSI SAS controllers with SATA ports work nicely with SPARC. I have one in my V880. On a Blade-100, however, you might have some issues due to the craptitude of the PCI slots.

To be honest, the Grover was a fun machine at the time, but I think that time may have passed.

Oh, and if you do grab the LSI card, don't let James catch you using the itmpt driver or lsiutils ;-)

--
Andre van Eyssen.
mail: an...@purplecow.org          jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix             http://pix.purplecow.org