> The testing was utilizing a portion of our drives; we have 120 x 750
> SATA drives in J4400s, dual pathed. We ended up with 22 vdevs, each a
> raidz2 of 5 drives, with one drive in each of the J4400s, so we can
> lose two complete J4400 chassis and not lose any data.
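For anyone trying to picture that layout, here is a rough sketch of the zpool create shape (pool and device names are invented placeholders, not the poster's actual config) - each raidz2 vdev takes one disk from each of five chassis, so losing a whole chassis only ever removes one disk per vdev:

  # one disk from each of five J4400s per 5-disk raidz2 vdev (hypothetical names)
  zpool create tank \
    raidz2 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
    raidz2 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0
    # ...and so on for the remaining vdevs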
Thanks pk.
You know I
Thanks Trond.
I am aware of this, but to be honest I will not be upgrading very often (my
current WHS setup has lasted 5 years without a single change!) and certainly
not to each iteration of TB size increase, so by the time I do upgrade, say in
the next 5 years, PCIe will have probably been replaced.
On Tue, Jul 5, 2011 at 12:54 PM, Lanky Doodle wrote:
> OK, I have finally settled on hardware;
> 2x LSI SAS3081E-R controllers
Beware that this controller does not support drives larger than 2TB.
--
Trond Michelsen
On Tue, 5 Jul 2011, Lanky Doodle wrote:
I am still undecided as to how to group the disks. I have read
elsewhere that raid-z1 is best suited with either 3 or 5 disks and
raid-z2 is better suited with 6 or 10 disks - is there any truth in
this, although I think this was in reference to 4K sector drives?
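(For what it's worth, the reasoning usually given for those widths - my own summary, not something stated in this thread - is that the number of data disks should be a power of two, so the default 128K record splits into a whole number of sectors per disk:

  raidz1 with 3 or 5 disks  -> 2 or 4 data disks -> 128K/4 = 32K per disk
  raidz2 with 6 or 10 disks -> 4 or 8 data disks -> 128K/8 = 16K per disk

Both are exact multiples of a 4K sector; something like 7 data disks gives 128K/7, which is not, so part of each stripe goes to padding.)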
On Tue, Jul 5, 2011 at 7:47 AM, Lanky Doodle wrote:
> Thanks.
>
> I ruled out the SAS2008 controller as my motherboard is only PCIe 1.0, so I
> would not have been able to make the most of the increased bandwidth.
Only PCIe 1.0? What chipset is that based on? Might be worthwhile to
On Tue, Jul 5, 2011 at 6:54 AM, Lanky Doodle wrote:
> OK, I have finally settled on hardware;
>
> 2x LSI SAS3081E-R controllers
> 2x Seagate Momentus 5400.6 rpool disks
> 15x Hitachi 5K3000 'data' disks
>
> I am still undecided as to how to group the disks. I have read elsewhere that
> raid-z1 is best suited with either 3 or 5 disks and raid-z2 is better suited
> with 6 or 10 disks.
Thanks.
I ruled out the SAS2008 controller as my motherboard is only PCIe 1.0, so I would
not have been able to make the most of the increased bandwidth.
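(Back-of-the-envelope numbers, mine rather than anything from the thread: PCIe 1.0 is roughly 250 MB/s per lane, so an x8 card tops out around 2 GB/s, while PCIe 2.0 doubles that to about 4 GB/s in the same slot. Fifteen 5400 rpm drives at very roughly 100-130 MB/s each comes to 1.5-2 GB/s, so a 1.0 x8 slot is about at its limit for sequential work, though a home server will usually sit well below that.)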
I can't see myself upgrading every few months (my current WHS build has lasted
over 4 years without a single change) so by the time
The LSI2008 chipset is supported and works very well.
I would actually use 2 vdevs; 8 disks in each. And I would configure each vdev
as raidz2. Maybe use one hot spare.
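As a concrete illustration of that shape (pool and device names are made-up placeholders, and with only 15 data disks on order you would be a couple short of two full 8-disk vdevs plus a spare):

  zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
    spare c3t0d0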
And I also have personal, subjective reasons: I like to use the number 8 in
computers. 7 is an ugly number. Everything is b
OK, I have finally settled on hardware;
2x LSI SAS3081E-R controllers
2x Seagate Momentus 5400.6 rpool disks
15x Hitachi 5K3000 'data' disks
I am still undecided as to how to group the disks. I have read elsewhere that
raid-z1 is best suited with either 3 or 5 disks and raid-z2 is better suited
with 6 or 10 disks - is there any truth in this?
Sorry to pester, but is anyone able to say if the Marvell 9480 chip is now
supported in Solaris?
The article I read saying it wasn't supported was dated May 2010, so over a year
ago.
Thanks for all the replies.
I have a pretty good idea how the disk enclosure assigns slot locations so I
should be OK.
One last thing - I see that Supermicro has just released a newer version of the
card I mentioned in the first post that supports SATA 6Gbps. From what I can
see it uses the Marvell 9480 chipset.
On Jun 17, 2011, at 12:55 AM, Lanky Doodle wrote:
> Thanks Richard.
>
> How does ZFS enumerate the disks? In terms of listing them, does it do them
> logically, i.e.:
>
> controller #1 (motherboard)
>    |
>    |--- disk1
>    |--- disk2
> controller #3
>    |--- disk3
>    |--- disk4
>    |--- d
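Not an answer from the thread, but as general Solaris background that may cover this: ZFS simply lists whatever device names it was given, and the cXtYdZ names already encode the path - c is the controller driver instance, t the target, d the disk/LUN. Purely as a hypothetical illustration:

  c0t0d0, c0t1d0        disks on the motherboard controller (instance c0)
  c4t0d0 ... c4t7d0     disks on the first LSI HBA (instance c4)

The controller numbers come from the order the drivers attach, not from physical slot positions, so the listing is stable but not necessarily 'logical' in the slot-by-slot sense asked about.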