On Sep 26, 2010, at 1:16 PM, Roy Sigurd Karlsbakk wrote:
>> Upgrading is definitely an option. What is the current snv favorite
>> for ZFS stability? I apologize, with all the Oracle/Sun changes I
>> haven't been paying as close attention to bug reports on zfs-discuss
>> as I used to.
>
> OpenIndiana b147 is the latest binary release, but it also in...
On 9/26/2010 8:06 AM, devsk wrote:
On 9/23/2010 at 12:38 PM Erik Trimble wrote:
| [snip]
| If you don't really care about ultra-low-power, then there's absolutely
| no excuse not to buy a USED server-class machine which is 1- or 2-
| generations back. They're dirt cheap, readily available,
| [snip]
On Sun, 26 Sep 2010, Edward Ned Harvey wrote:
> 27G on a 6-disk raidz2 means approx 6.75G per disk. Ideally, the
> disk could write 7G = 56 Gbit in a couple minutes if it were all
> sequential and no other activity in the system. So you're right to
> suspect something is suboptimal, but the root cause...
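For reference, the back-of-the-envelope figure above can be checked against what the pool is actually doing while it resilvers. A minimal sketch; the pool name "tank" and the ~60 MB/s sustained per-disk write rate are assumptions for illustration, not figures from the original post:

  # Expected best case, assuming ~60 MB/s sequential write per disk:
  #   27 GB spread over the 4 data disks of a 6-disk raidz2  ~= 6.75 GB per disk
  #   6.75 GB / 60 MB/s                                      ~= 112 s, roughly 2 minutes
  # What the resilver is actually doing:
  zpool status -v tank   # resilver progress and estimated time to completion
  iostat -xn 5           # per-disk %b and service times; high %b with low
                         # throughput usually means the disks are seek-bound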
I just witnessed a resilver that took 4h for 27gb of data. Setup is 3x raid-z2
stripes with 6 disks per raid-z2. Disks are 500gb in size. No checksum errors.
It seems like an exorbitantly long time. The other 5 disks in the stripe with
the replaced disk were at 90% busy and ~150io/s each during...
Richard L. Hamilton wrote:
> Typically on most filesystems, the inode number of the root
> directory of the filesystem is 2, 0 being unused and 1 historically
> once invisible and used for bad blocks (no longer done, but kept
> reserved so as not to invalidate assumptions implicit in ufsdump tapes).
>
> However, my observation seems to...
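A quick way to check this on a live system is to ask ls for the inode number of a directory itself; the mount points below are only examples. On ufs the root directory of a filesystem reports inode 2, while a ZFS dataset root typically reports a different number:

  # -d: list the directory itself rather than its contents, -i: show inode numbers
  ls -di / /tmp /export/home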
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> It is relatively easy to find the latest, common snapshot on two file systems.
> Once you know the latest, common snapshot, you can send the incrementals
> up to the latest.

I've always relied on the snapshot names matching. Is the...
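A minimal sketch of that approach, with hypothetical dataset names (tank/data locally, backup/data on a host called backuphost) and relying, as noted above, on the snapshot names matching on both sides:

  SRC=tank/data
  DST=backup/data

  # Newest snapshot on the sending side (sorted by creation, last line is newest).
  newest=$(zfs list -H -t snapshot -o name -s creation -r $SRC \
             | grep "^$SRC@" | tail -1 | cut -d@ -f2)

  # Newest snapshot the receiving side already has; with matching names this is
  # also the latest common snapshot.
  common=$(ssh backuphost zfs list -H -t snapshot -o name -s creation -r $DST \
             | grep "^$DST@" | tail -1 | cut -d@ -f2)

  # Send every intermediate snapshot between the two in one stream.
  zfs send -I @$common $SRC@$newest | ssh backuphost zfs receive -F $DST

If the two names come out identical there is nothing new to send; receive -F rolls the target back to its newest snapshot before applying the stream.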
On 25 Sep 2010, at 19:56, Giovanni Tirloni wrote:
> We have correctable memory errors on ECC systems on a monthly basis. It's not
> if they'll happen but how often.
"DRAM Errors in the wild: a large-scale field study" is worth a read if you
have time.
http://www.cs.toronto.edu/~bianca/papers
>hi all
>
>I'm using a custom snapshot scheme which snapshots every hour, day,
>week and month, rotating 24h, 7d, 4w and so on. What would be the best
>way to zfs send/receive these things? I'm a little confused about how
>this works for delta updates...
>
>Vennlige hilsener / Best regards
T
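The rotation itself does not change the mechanics much: each run sends the range between the newest snapshot the receiver already has and the newest one on the sender, and -I carries all of the hourly/daily snapshots in between. A sketch with made-up snapshot names:

  # hourly-2010092612 is the last snapshot already on the receiver,
  # hourly-2010092613 is the newest one locally; both names are hypothetical.
  zfs send -I tank/data@hourly-2010092612 tank/data@hourly-2010092613 \
      | ssh backuphost zfs receive -du backup

  # The one thing the rotation must respect: never destroy a snapshot locally
  # while it is still the newest one the receiving side has, or the next
  # incremental send will have no common starting point.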