Re: [zfs-discuss] LUN expansion choices

2012-11-13 Thread Karl Wagner
On 2012-11-13 17:42, Peter Tribble wrote: Given storage provisioned off a SAN (I know, but sometimes that's what you have to work with), what's the best way to expand a pool? Specifically, I can either grow existing LUNs, and/or add new LUNs. As an example, if I have 24x 2TB LUNs, and
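
In zpool terms, the two choices look roughly like this (a sketch; "tank" and the device names are placeholders, not from the thread):

  # Choice 1: grow an existing LUN on the SAN side, then have ZFS pick up the new size
  zpool set autoexpand=on tank
  zpool online -e tank c0t0d0    # expand into the enlarged LUN

  # Choice 2: leave the existing LUNs alone and add a new one as another vdev
  zpool add tank c0t24d0

Either way the pool grows; the trade-offs (redundancy layout, SAN-side effort) are what the thread is about.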

Re: [zfs-discuss] LUN expansion choices

2012-11-13 Thread Brian Wilson
Not sure if this will make it to the list, but I'll try... On 11/13/12, Peter Tribble wrote: Given storage provisioned off a SAN (I know, but sometimes that's what you have to work with), what's the best way to expand a pool? Specifically, I can either grow existing LUNs, and/or add new

[zfs-discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Peter Tripp
Hi folks, I'm in the market for a couple of JBODs. Up until now I've been relatively lucky with finding hardware that plays very nicely with ZFS. All my gear currently in production uses LSI SAS controllers (3801e, 9200-16e, 9211-8i) with backplanes powered by LSI SAS expanders (Sun x4250,

Re: [zfs-discuss] [discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Rocky Shek
Peter, you may consider DataON JBODs. Lots of users are using DataON JBODs for ZFS storage. Yes, an LSI SAS HBA is the best choice for ZFS. DataON DNS-1600 4U 24-bay 6G SAS JBOD: http://dataonstorage.com/dns-1600 DataON DNS-1640 2U 24-bay 6G SAS JBOD: http://dataonstorage.com/dns-1640 DataON DNS-1660

Re: [zfs-discuss] [discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Sašo Kiselkov
We've got a SC847E26-RJBOD1. It takes a bit of getting used to that you have to wire it yourself (plus you need to buy a pair of internal SFF-8087 cables to connect the back and front backplanes; incredibly, SuperMicro doesn't provide those out of the box), but other than that, we've never had a problem

Re: [zfs-discuss] [discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Schweiss, Chip
I've had perfect success with the SuperMicro SC847E26-RJBOD1. It has two backplanes and holds 45 3.5" drives. It's built as a JBOD, so it has everything that is needed. On Tue, Nov 13, 2012 at 2:08 PM, Peter Tripp pe...@psych.columbia.edu wrote: Hi folks, I'm in the market for a couple of

Re: [zfs-discuss] [discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Ray Van Dolson
On Tue, Nov 13, 2012 at 03:08:04PM -0500, Peter Tripp wrote: Hi folks, I'm in the market for a couple of JBODs. Up until now I've been relatively lucky with finding hardware that plays very nicely with ZFS. All my gear currently in production uses LSI SAS controllers (3801e, 9200-16e,

Re: [zfs-discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Cedric Tineo
Peter, could you please give info or links to clarify the SATA Tunneling Protocol (STP) nonsense issues? Also, have you considered Supermicro's 45-drive 4U enclosure, the SC847E26-RJBOD1? It's cheap too, at around $2000, and based on LSI backplanes anyway. SGI has a nice and crazy 81(!) 3.5" disks in

Re: [zfs-discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Edmund White
I'm quite happy with my HP D2700 and D2600 enclosures. I'm using them with LSI 9205-8e controllers and NexentaStor, but MPxIO definitely works. You will need to find HP drive trays… They're available on eBay. -- Edmund White On 11/13/12 2:08 PM, Peter Tripp pe...@psych.columbia.edu wrote:
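
On illumos/Solaris, enabling and then verifying MPxIO looks roughly like this (a sketch; NexentaStor may wrap these steps in its own tooling):

  # enable multipathing for supported controllers (requires a reboot)
  stmsboot -e
  # afterwards, list multipathed logical units and inspect their paths
  mpathadm list lu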

Re: [zfs-discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Edmund White
Also consider looking at the HP MDS600 enclosure. http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/12169-304616-3930445-3930445-3930445-3936271.html They're available for $1000US on eBay, fully ZFS-friendly, and hold 70 x 3.5" disks. The only downside is that they were introduced as 3G SAS units. I

Re: [zfs-discuss] Intel DC S3700

2012-11-13 Thread Mauricio Tavares
Trying again: Intel just released those drives. Any thoughts on how nicely they will play in a zfs/hardware raid setup?

Re: [zfs-discuss] Intel DC S3700

2012-11-13 Thread Jim Klimov
On 2012-11-13 22:56, Mauricio Tavares wrote: Trying again: Intel just released those drives. Any thoughts on how nicely they will play in a zfs/hardware raid setup? Seems interesting: fast, assumed reliable, consistent in its IOPS (according to the marketing talk), and it addresses power loss

Re: [zfs-discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Richard Elling
On Nov 13, 2012, at 12:08 PM, Peter Tripp pe...@psych.columbia.edu wrote: Hi folks, I'm in the market for a couple of JBODs. Up until now I've been relatively lucky with finding hardware that plays very nicely with ZFS. All my gear currently in production uses LSI SAS controllers

Re: [zfs-discuss] Intel DC S3700

2012-11-13 Thread Freddie Cash
Anandtech.com has a thorough review of it. Performance is consistent (within 10-15% on IOPS) across the lifetime of the drive, it has capacitors to flush the RAM cache to disk on power loss, and it doesn't store user data in the cache. It's also cheaper per GB than the 710 it replaces. On 2012-11-13 3:32 PM, Jim Klimov
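
That capacitor-backed cache is what makes a drive like this a plausible dedicated log (SLOG) device; attaching one would look roughly like this (a sketch; pool and device names are placeholders):

  # mirrored log devices, since losing an unmirrored SLOG at the wrong moment is painful
  zpool add tank log mirror c1t0d0 c1t1d0
  # or as L2ARC, where power-loss protection matters much less
  zpool add tank cache c1t2d0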

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-13 Thread Dan Swartzendruber
Well, I think I give up for now. I spent quite a few hours over the last couple of days trying to get gnome desktop working on bare-metal OI, followed by virtualbox. Supposedly that works in headless mode with RDP for management, but nothing but fail for me. Found quite a few posts on various
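
For reference, the headless-with-RDP setup being attempted is roughly this (a sketch; "oi-guest" is a placeholder VM name):

  # enable the VRDE (RDP) server on the VM, then start it with no local display
  VBoxManage modifyvm oi-guest --vrde on
  VBoxManage startvm oi-guest --type headless
  # then connect an RDP client to the host (default port 3389)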

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-13 Thread Ian Collins
On 11/14/12 15:20, Dan Swartzendruber wrote: Well, I think I give up for now. I spent quite a few hours over the last couple of days trying to get gnome desktop working on bare-metal OI, followed by virtualbox. Supposedly that works in headless mode with RDP for management, but nothing but

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-13 Thread Jim Klimov
On 2012-11-14 03:20, Dan Swartzendruber wrote: Well, I think I give up for now. I spent quite a few hours over the last couple of days trying to get gnome desktop working on bare-metal OI, followed by virtualbox. Supposedly that works in headless mode with RDP for management, but nothing but

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-13 Thread Edmund White
What was wrong with the suggestion to use VMware ESXi and Nexenta or OpenIndiana to do this? -- Edmund White On 11/13/12 8:20 PM, Dan Swartzendruber dswa...@druber.com wrote: Well, I think I give up for now. I spent quite a few hours over the last couple of days trying to get gnome

Re: [zfs-discuss] LUN expansion choices

2012-11-13 Thread Fajar A. Nugraha
On Wed, Nov 14, 2012 at 1:35 AM, Brian Wilson brian.wil...@doit.wisc.edu wrote: So it depends on your setup. In your case, if it's at all painful to grow the LUNs, what I'd probably do is allocate new 4TB LUNs and replace your 2TB LUNs with them one at a time with zpool replace, and wait for
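
That one-at-a-time cycle would look roughly like this (a sketch; pool and device names are placeholders, and each resilver must finish before the next replace):

  zpool set autoexpand=on tank
  zpool replace tank c0t0d0 c0t24d0   # swap a 2TB LUN for a new 4TB LUN
  zpool status tank                   # watch until the resilver completes
  # repeat for each remaining LUN; the extra capacity becomes usable once
  # every device in the vdev has been replaced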