On Wed, 17 Apr 2013, Jay Heyl wrote:
On Tue, Apr 16, 2013 at 7:53 PM, Carl Brewer wrote:
2 x 2TB HDDs for rpool (ZFS mirror)
4 x 2TB HDDs to get at least a 4TB mirror (or is RAID-Z a better option?)
Would I be better off with some 500GB HDDs for the rpool? And while I
fiddle with this thin
On Tue, Apr 16, 2013 at 7:53 PM, Carl Brewer wrote:
>
> 2 x 2TB HDDs for rpool (ZFS mirror)
> 4 x 2TB HDDs to get at least a 4TB mirror (or is RAID-Z a better option?)
>
> Would I be better off with some 500GB HDDs for the rpool? And while I
> fiddle with this thing, is there any way to get th
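For readers weighing the layouts Carl is asking about, a sketch of the pool shapes in question. Device names are hypothetical (substitute your own from `format`), and note the installer normally creates rpool for you:

```shell
# Root pool: two-way mirror (boot pools of this era must be a single
# disk or a mirror; you cannot boot from raidz).
zpool create rpool mirror c1t0d0s0 c1t1d0s0

# Data pool, option A: two striped two-way mirrors -> ~4TB usable,
# best random-read IOPS, survives one failure per pair.
zpool create tank mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0

# Data pool, option B: single 4-disk raidz -> ~6TB usable,
# survives any one failure, slower for small random reads.
zpool create tank raidz c1t2d0 c1t3d0 c1t4d0 c1t5d0
```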
On 17/04/2013 12:53 PM, Carl Brewer wrote:
Further to my original post, I have a new (desktop, I know ... but I am
on a tight budget) Intel MB with an i5-3750 CPU and 32 GB of desktop
RAM. Booting the 151a7 live DVD shows that it thinks it's a 32-bit
system (huh?). It recognises almost all the d
Check your nfsmapid/domain value, the one suffixed to all the NFS ID values. OI
tries to default to the local domain name; check sharectl. Linux variants seem to
default to 'localdomain', which won't match. Don't recall where CentOS puts
theirs. I have to set it explicitly for all my boxes in a mixed e
On 2013-04-17 21:25, Jay Heyl wrote:
It (finally) occurs to me that not all mirrors are created equal. I've been
assuming, and probably ignoring hints to the contrary, that what was being
compared here was a raid-z2 configuration with a 2-way mirror composed of
two 8-disk vdevs. I now realize you'
On Wed, Apr 17, 2013 at 11:13:37AM -0700, Peter Wood wrote:
> I'm using OI 151.a.7 to export a dataset via NFS and mount it on CentOS5.9
> clients using NFSv4.
>
> On the clients I have apache running as user daemon and it needs access to
> the exported directory.
>
> The problem is that user da
On Wed, Apr 17, 2013 at 5:38 AM, Edward Ned Harvey (openindiana) <
openindi...@nedharvey.com> wrote:
> > From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
> >
> > Raid-Z indeed does stripe data across all
> > leaf vdevs (minus parity) and does so by splitting the logical block up
> > into equall
On Wed, Apr 17, 2013 at 12:57 PM, Timothy Coalson wrote:
> On Wed, Apr 17, 2013 at 7:38 AM, Edward Ned Harvey (openindiana) <
> openindi...@nedharvey.com> wrote:
>
>> You also said the raidz2 will offer more protection against failure,
>> because you can survive any two disk failures (but no more.
On Wed, Apr 17, 2013 at 11:21 AM, Jim Klimov wrote:
> On 2013-04-17 20:09, Jay Heyl wrote:
>
>> reply. Unless the first device to answer returns garbage (something
>>> that doesn't match the expected checksum), other copies are not read
>>> as part of this request.
>>>
>>>
>> Ah, that makes much
On Wed, Apr 17, 2013 at 7:38 AM, Edward Ned Harvey (openindiana) <
openindi...@nedharvey.com> wrote:
> You also said the raidz2 will offer more protection against failure,
> because you can survive any two disk failures (but no more.) I would argue
> this is incorrect (I've done the probability a
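Harvey's probability point can be made concrete with a small enumeration. The sketch below (8-disk pool, layout and numbers mine) counts which random failure sets each layout survives: raidz2 tolerates every 2-disk failure but no 3-disk one, while 4 striped 2-way mirrors tolerate most 2- and 3-disk failure sets but not all 2-disk ones:

```python
from itertools import combinations

DISKS = range(8)
MIRROR_PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]   # 4 striped 2-way mirrors

def survives_raidz2(failed):
    # raidz2: any two failures are fine, a third is fatal
    return len(failed) <= 2

def survives_mirrors(failed):
    # striped mirrors: fatal only when both sides of some pair fail
    return all(not (a in failed and b in failed) for a, b in MIRROR_PAIRS)

def survival_fraction(survives, k):
    # fraction of all k-disk failure combinations the layout survives
    sets = list(combinations(DISKS, k))
    return sum(survives(set(s)) for s in sets) / len(sets)

for k in (2, 3):
    print(k, survival_fraction(survives_raidz2, k),
          survival_fraction(survives_mirrors, k))
```

So neither layout strictly dominates: raidz2 wins on 2-disk failures, the mirrors win once a third disk goes.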
On 2013-04-17 20:09, Jay Heyl wrote:
reply. Unless the first device to answer returns garbage (something
that doesn't match the expected checksum), other copies are not read
as part of this request.
Ah, that makes much more sense. Thanks for the clarification. Now that you
put it that way I ha
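The read path Jim describes — consult one copy, touch the other only on a checksum mismatch — can be sketched like this (crc32 stands in for ZFS's real block-pointer checksum; an illustration, not the ZFS code path):

```python
import zlib

def read_block(copies, expected_cksum):
    """Try one mirror side; read the other copy only if the first
    returns data that fails the checksum."""
    for i, data in enumerate(copies):
        if zlib.crc32(data) == expected_cksum:
            return data, i            # i == 0: no extra read was needed
    raise IOError("all copies failed checksum")

good = b"payload"
cksum = zlib.crc32(good)

data, tried = read_block([good, good], cksum)          # healthy mirror
data2, tried2 = read_block([b"garbage", good], cksum)  # first side corrupt
```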
I'm using OI 151.a.7 to export a dataset via NFS and mount it on CentOS5.9
clients using NFSv4.
On the clients I have apache running as user daemon and it needs access to
the exported directory.
The problem is that user daemon on CentOS5 has UID=2 and on OI has UID=1.
On the clients when I writ
On Tue, Apr 16, 2013 at 5:49 PM, Jim Klimov wrote:
> On 2013-04-17 02:10, Jay Heyl wrote:
>
>> Not to get into bickering about semantics, but I asked, "Or am I wrong
>> about reads being issued in parallel to all the mirrors in the array?", to
>> which you replied, "Yes, in normal case... this as
Hi,
I had the same trouble with Firefox 20. I have updated to 20.0.1 and the bug
seems to be gone (otherwise I would be unable to write this mail...)
Yours
Olaf
2013/4/16 Bob Friesenhahn
> On Tue, 16 Apr 2013, Apostolos Syropoulos wrote:
>
>
>> I tried as well with the tar.bz2 package, and it is
> From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
>
> Raid-Z indeed does stripe data across all
> leaf vdevs (minus parity) and does so by splitting the logical block up
> into equally sized portions.
Jay, there you have it. You asked why use mirrors, and you said you would use
raidz2 or r
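Sašo's description of the split can be illustrated with a toy calculation (sector-aligned, equal-as-possible portions across the data disks; real raid-z allocation adds padding and roundups this ignores):

```python
def raidz_split(block_size, ndisks, nparity, sector=512):
    """Split one logical block into sector-aligned, equal-as-possible
    portions, one per data disk of the raid-z vdev."""
    ndata = ndisks - nparity
    sectors = -(-block_size // sector)                 # ceil division
    per_disk = [sectors // ndata + (1 if i < sectors % ndata else 0)
                for i in range(ndata)]
    return [n * sector for n in per_disk]

# a 128 KiB block on an 8-disk raidz2 (6 data disks)
parts = raidz_split(128 * 1024, ndisks=8, nparity=2)
```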
On 17 April 2013 03:53, Carl Brewer wrote:
> Further to my original post, I have a new (desktop, I know ... but I am on a
> tight budget) Intel MB with an i5-3750 CPU and 32 GB of desktop RAM.
> Booting the 151a7 live DVD shows that it thinks it's a 32 bit system (huh?).
> It regognises almost all
On 17.04.2013 11:16, "Edward Ned Harvey (openindiana)" wrote:
It's a fact that NAND has a finite number of write cycles, and it gets slower
to write, the more times it's been re-written.
AFAIC, these are two facts, and the latter is much more relevant in
production. Someone mentioned it ear
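For scale, the finite-write-cycle point reduces to simple arithmetic; the numbers below are hypothetical illustrations, not any particular drive's rating:

```python
def drive_endurance_tbw(capacity_gb, pe_cycles, write_amplification):
    """Back-of-envelope endurance: terabytes of host writes before the
    rated program/erase cycles are exhausted."""
    return capacity_gb * pe_cycles / write_amplification / 1000.0

# Hypothetical 256 GB consumer drive, 3000 P/E cycles, write amplification ~2
tbw = drive_endurance_tbw(256, 3000, 2)
```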
On 04/17/2013 02:08 AM, Edward Ned Harvey (openindiana) wrote:
>> From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
>>
>> If you are IOPS constrained, then yes, raid-zn will be slower, simply
>> because any read needs to hit all data drives in the stripe.
>
> Saso, I would expect you to know th
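The IOPS claim being debated can be stated as a toy model. It encodes only the assumption in Sašo's sentence (a raid-z read touches every data disk, a mirror read touches one side), which Harvey's reply goes on to dispute; the figures are illustrative:

```python
def pool_read_iops(per_disk_iops, ndisks, layout):
    """Toy ceiling for small random reads: under the stated assumption a
    raid-z vdev delivers about one disk's worth of random-read IOPS,
    while in a pool of 2-way mirrors every spindle can serve reads."""
    if layout == "mirror":
        return per_disk_iops * ndisks
    if layout == "raidz":
        return per_disk_iops
    raise ValueError(layout)

mirror_iops = pool_read_iops(100, 8, "mirror")
raidz_iops = pool_read_iops(100, 8, "raidz")
```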