2012-10-14 17:51, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
zpool create datapool \
mirror c0t0d0p2 c0t1d0p2 \
mirror c0t2d0p1 c0t3d0p1 \
mirror c0t2d0p2 c0t3d0p2 \
mirror c0t4d0p1 c0t5d0p1 \
mirror c0t4d0p2 c0t5d0p2
Add a spare? A seventh disk, c0t6d0.
Partition it.
Add spares.
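A sketch of how that would finish, assuming c0t6d0 is partitioned the same
way as the data disks (the p1/p2 names just follow the layout above):
zpool add datapool spare c0t6d0p1 c0t6d0p2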
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> A solid point. I don't.
>
> This doesn't mean you can't - it just means I don't.
This response was kind of long-winded. So here's a simpler version:
Suppose 6 disks i
> From: Ian Collins [mailto:i...@ianshome.com]
>
> On 10/13/12 02:12, Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris) wrote:
> > There are at least a couple of solid reasons *in favor* of partitioning.
> >
> > #1 It seems common, at least to me, that I'll build a server with let's
> > say, 12 disk slots, and we'll be using 2T disks or something like that.
2012-10-14 1:56, Ian Collins wrote:
On 10/13/12 22:13, Jim Klimov wrote:
2012-10-13 0:41, Ian Collins wrote:
On 10/13/12 02:12, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
#1 It seems common, at least to me, that I'll build a server with
let's say, 12 disk slots, and we'll be using 2T disks or something like that.
On 10/13/12 22:13, Jim Klimov wrote:
2012-10-13 0:41, Ian Collins wrote:
On 10/13/12 02:12, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
There are at least a couple of solid reasons *in favor* of partitioning.
#1 It seems common, at least to me, that I'll build a server with let's
say, 12 disk slots, and we'll be using 2T disks or something like that.
Ah, okay, that makes sense. I wasn't offended, just confused. :)
Thanks for the clarification.
On Oct 13, 2012 2:01 AM, "Jim Klimov" wrote:
> 2012-10-12 19:34, Freddie Cash wrote:
>
>> On Fri, Oct 12, 2012 at 3:28 AM, Jim Klimov wrote:
>>
>>> In fact, you can (although not recommended due to balancing reasons)
2012-10-13 0:41, Ian Collins wrote:
On 10/13/12 02:12, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
There are at least a couple of solid reasons *in favor* of partitioning.
#1 It seems common, at least to me, that I'll build a server with
let's say, 12 disk slots, and we'll be using 2T disks or something like that.
2012-10-12 19:34, Freddie Cash wrote:
On Fri, Oct 12, 2012 at 3:28 AM, Jim Klimov wrote:
In fact, you can (although not recommended due to balancing reasons)
have tlvdevs of mixed size (like in Freddie's example) and even of
different structure (i.e. mixing raidz and mirrors or even single
LUNs) by forcing the disk attachment.
On 10/13/12 02:12, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
There are at least a couple of solid reasons *in favor* of partitioning.
#1 It seems common, at least to me, that I'll build a server with let's say,
12 disk slots, and we'll be using 2T disks or something like that.
On Fri, Oct 12, 2012 at 3:28 AM, Jim Klimov wrote:
> In fact, you can (although not recommended due to balancing reasons)
> have tlvdevs of mixed size (like in Freddie's example) and even of
> different structure (i.e. mixing raidz and mirrors or even single
> LUNs) by forcing the disk attachment.
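As a hedged illustration of that forcing (disk name hypothetical): zpool add
refuses to mix replication levels in one pool unless given -f, so something like
zpool add -f datapool c0t7d0
would attach the single disk as a new top-level vdev alongside the existing
mirrors or raidz sets.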
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of andy thomas
>
> According to a Sun document called something like 'ZFS best practice' I
> read some time ago, best practice was to use the entire disk for ZFS and
> not to partition or slice it in any way. Does this advice hold good for
> FreeBSD as well?
On 2012-Oct-12 08:11:13 +0100, andy thomas wrote:
>This is apparently what had been done in this case:
>
> gpart add -b 34 -s 600 -t freebsd-swap da0
> gpart add -b 634 -s 1947525101 -t freebsd-zfs da0
> gpart show
Assuming that you can be sure that you'll keep 512B sectors
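If 4K-sector drives are in the picture, a minimal alignment-aware sketch
(disk name and sizes are placeholders, not taken from the message above):
gpart create -s gpt da2
gpart add -a 4k -s 4G -t freebsd-swap da2
gpart add -a 4k -t freebsd-zfs da2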
2012-10-12 11:11, andy thomas wrote:
Great, thanks for the explanation! I didn't realise you could have a
sort of 'stacked pyramid' vdev/pool structure.
Well, you can - the layers are "pool" - "top-level VDEVs" - "leaf
VDEVs", though on trivial pools like single-disk ones, the layers
kinda merge.
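A quick sketch of those layers (disk names hypothetical):
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
Here 'tank' is the pool, each mirror group is a top-level VDEV, and the four
disks are the leaf VDEVs.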
On Thu, 11 Oct 2012, Richard Elling wrote:
On Oct 11, 2012, at 2:58 PM, Phillip Wagstrom
wrote:
On Oct 11, 2012, at 4:47 PM, andy thomas wrote:
According to a Sun document called something like 'ZFS best practice' I read
some time ago, best practice was to use the entire disk for ZFS and not to
partition or slice it in any way.
On Thu, 11 Oct 2012, Freddie Cash wrote:
On Thu, Oct 11, 2012 at 2:47 PM, andy thomas wrote:
According to a Sun document called something like 'ZFS best practice' I read
some time ago, best practice was to use the entire disk for ZFS and not to
partition or slice it in any way. Does this advice hold good for FreeBSD
as well?
On 11/10/12 5:47 PM, andy thomas wrote:
...
This doesn't sound like a very good idea to me as surely disk seeks for
swap and for ZFS file I/O are bound to clash, aren't they?
As Phil implied, if your system is swapping, you already have bigger
problems.
--Toby
Andy
On Oct 11, 2012, at 2:58 PM, Phillip Wagstrom
wrote:
>
> On Oct 11, 2012, at 4:47 PM, andy thomas wrote:
>
>> According to a Sun document called something like 'ZFS best practice' I read
>> some time ago, best practice was to use the entire disk for ZFS and not to
>> partition or slice it in any way. Does this advice hold good for
>> FreeBSD as well?
On Thu, Oct 11, 2012 at 2:47 PM, andy thomas wrote:
> According to a Sun document called something like 'ZFS best practice' I read
> some time ago, best practice was to use the entire disk for ZFS and not to
> partition or slice it in any way. Does this advice hold good for FreeBSD as
> well?
Sol
On Oct 11, 2012, at 4:47 PM, andy thomas wrote:
> According to a Sun document called something like 'ZFS best practice' I read
> some time ago, best practice was to use the entire disk for ZFS and not to
> partition or slice it in any way. Does this advice hold good for FreeBSD as
> well?
According to a Sun document called something like 'ZFS best practice' I
read some time ago, best practice was to use the entire disk for ZFS and
not to partition or slice it in any way. Does this advice hold good for
FreeBSD as well?
I looked at a server earlier this week that was running FreeBSD