I have one that looks like this:
pool: preplica-1
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
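As a hedged aside for anyone seeing this status: `zpool status -v` will list the individual damaged files the report refers to, so you know exactly what to restore (the pool name below is taken from the output above):

```shell
# List the specific files affected by the reported corruption.
zpool status -v preplica-1
# After restoring (or deleting) the damaged files, clear the
# error counters so the pool stops flagging the old errors.
zpool clear preplica-1
```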
> On Sun, 18 Feb 2007, Calvin Liu wrote:
>
>> I want to run command "rm Dis*" in a folder but mis-typed a space in it
>> so it became "rm Dis *". Unfortunately I had pressed the return button
>> before I noticed the mistake. So you all know what happened... :( :( :(
>
> Ouch!
>
>> How can I get them back?
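For anyone hitting the same mistake: on ZFS the deleted files are only recoverable if a snapshot taken before the `rm` exists. A hedged sketch, where the dataset and snapshot names are hypothetical:

```shell
# See which snapshots exist for the dataset.
zfs list -t snapshot
# Snapshots are exposed read-only under .zfs/snapshot at the
# dataset mountpoint, so the lost files can simply be copied back:
cp /tank/data/.zfs/snapshot/before/Dis* /tank/data/
# Or roll the whole dataset back (this discards all changes made
# since the snapshot):
zfs rollback tank/data@before
```

Without a pre-existing snapshot, ZFS offers no built-in undelete.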
I think so. After all, there are features shipped which are not fully
baked/guaranteed, like send/receive. Isn't shipping the header files better
than letting developers guess their structure and possibly make mistakes? Of
course the developer can compile against OpenSolaris source but far easi
Robert wrote:
> Before jumping to any conclusions - first try to
> eliminate nfs and do readdirs locally - I guess that would be quite fast.
> Then check on a client (dtrace) the time distribution of nfs requests
> and sends us results.
We used this test program that is doing readdirs and can be
On Wed, Feb 14, 2007 at 01:56:33PM -0700, Matthew Ahrens wrote:
> These files are not shipped with Solaris 10. You can find them in
> opensolaris: usr/src/uts/common/fs/zfs/sys/
>
> The interfaces in these files are not supported, and may change without
> notice at any time.
Even if they're no
Richard Elling wrote:
JS wrote:
I'm using ZFS on both EMC and Pillar arrays with PowerPath and MPxIO, respectively.
Both work fine - the only caveat is to drop your sd_queue to around 20 or so,
otherwise you can run into an ugly display of bus resets.
This is sd_max_throttle or ssd_max_throttle. The problem is tha
Have you tried PowerPath/EMC and MPxIO/Pillar on the same host?
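For reference, the throttle tuning mentioned above is done in /etc/system; a hedged sketch, with the value 20 taken from the thread and the driver prefix (sd vs ssd) depending on which driver binds the array's LUNs:

```
* /etc/system fragment (Solaris): cap outstanding commands per LUN.
* Use the sd or ssd line depending on the driver in use; a reboot
* is required for the change to take effect.
set sd:sd_max_throttle = 20
set ssd:ssd_max_throttle = 20
```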
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 18/2/07 4:56, "Akhilesh Mritunjai" <[EMAIL PROTECTED]> wrote:
> Hi Folks
>
> I believe that the word would have gone around already, Google engineers have
> published a paper on disk reliability. It might supplement the ZFS FMA
> integration and well - all the numerous debates on spares etc et