On Sat, 12 Sep 2009, Paul B. Henson wrote:
> In any case, I agree with you that the firmware is buggy; however I
> disagree with you as to the outcome of that bug. The drive is not
> returning random garbage, it has *one* byte wrong. Other than that all of
> the data seems ok, at least to my inexp
On Sat, 12 Sep 2009, Eric Schrock wrote:
> Also, were you ever able to get this disk behind a SAS transport (X4540,
> J4400, J4500, etc)? It would be interesting to see how hardware SATL
> deals with this invalid data. Output from 'smartctl -d sat' and
> 'smartctl -d scsi' on such a system would show both the ATA data and
> the translated
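A small sketch of the comparison being asked for, wrapped in a helper function; the device path in the usage comment is a placeholder, not a path from this thread. `-d sat` requests the raw ATA data via SAT passthrough, while `-d scsi` shows the SATL-translated SCSI view.

```shell
# Sketch: gather both views of a disk sitting behind a SAS transport.
smart_both_views() {
  smartctl -d sat  -a "$1"   # ATA data via SAT passthrough
  smartctl -d scsi -a "$1"   # SATL-translated SCSI view
}
# Usage (placeholder device path):
# smart_both_views /dev/rdsk/c5t9d0s0
```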
On Fri, Sep 11, 2009 at 8:33 PM, Owen Davies wrote:
> I tried editing the /etc/group file to swap the GIDs but this didn't seem to
> have the effect I wanted. Now, when I view the ACLs with an ls -V from the
> OSOL side I see that the Parents group has full permissions but from the
> Windows s
On Sat, 12 Sep 2009, Eric Schrock wrote:
> Your statement that it is "just fine" is false:
I didn't say it worked "perfectly", I said it worked "fine". Yes, it gave a
*warning* that the "SMART Selective Self-Test Log Data Structure Revision
Number" was 0 instead of 1, **however** other than that
Carson Gaspar wrote:
Except you replied to me, not to the person who has SSDs. I have
dead-standard hard disks, and the mpt driver is just not happy. After
applying 141737-04 to my Sol 10 system, things improved greatly, and
the constant bus resets went away. After upgrading to OpenSolaris 6/
James C. McPherson wrote:
On Thu, 10 Sep 2009 12:31:11 -0700
Carson Gaspar wrote:
Alex Li wrote:
We finally resolved this issue by changing the LSI driver. For details, please
refer to here:
http://enginesmith.wordpress.com/2009/08/28/ssd-faults-finally-resolved/
Anyone from Sun have any knowledge of when the
> How are the parent and kids defined in the /etc/passwd file?
These two are the parents (names changed):
Dad:x:101:10:Dad:/export/home/Dad:/bin/bash
Mom:x:102:1::/home/Mom:/bin/sh
and these are the kids:
Kid_a:x:103:1::/home/Kid_a:/bin/sh
Kid_b:x:104:1::/home/Kid_b:/bin/sh
Kid_c:x:105:1::/home/Ki
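One likely reason the /etc/group edit alone didn't help: files and ACL entries store numeric IDs, not names, so swapping entries in /etc/group changes which name maps to which number, while the files on disk keep their old numeric GIDs and therefore appear to belong to the other group. A minimal sketch of such a swap on a scratch copy (group names and GIDs here are stand-ins for the renamed ones in the thread; never edit /etc/group in place without a backup):

```shell
# Build a scratch copy with two demo entries.
cat > /tmp/group.demo <<'EOF'
parents::10:Dad,Mom
kids::1:Kid_a,Kid_b,Kid_c
EOF

# Swap GID 10 and GID 1 in the third field of each entry.
awk -F: 'BEGIN { OFS = ":" }
         { if ($3 == 10) $3 = 1; else if ($3 == 1) $3 = 10; print }' \
    /tmp/group.demo > /tmp/group.new
cat /tmp/group.new
```

After the swap, any existing file with GID 10 would now resolve to the other group's name, which matches the confusing ACL view described above.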
On 9/12/2009 10:33 PM, Mark J. Musante wrote:
That could be a bug with the status output. Could you try "zdb -l" on
one of the good drives and see if the label for c5t9d0 has "/old"
Oops, I just realized I took this thread off-list. I hope you don't mind me
putting it back on -- mea culpa.
On 9/12/2009 9:41 PM, Mark J Musante wrote:
The device is listed with s0; did you try using c5t9d0s0 as the name?
I didn't -- I never used s0 in the config when setting up the zpool -- it
changed to s0 after reboot. But in either case, it's a good thought:
# zpool replace nfspool c5t9d0s0 c5t9d
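A sketch of the label check suggested above, as a small helper; the device and pool names are the ones quoted in this thread, and the commands should of course only be run on the affected system.

```shell
# Sketch: print the "path" fields recorded in a device's vdev label,
# to see whether the old device path still lingers before retrying
# the replace.
check_label_path() {
  zdb -l "$1" | grep "path"
}
# Usage on the affected system (names from the thread):
# check_label_path /dev/rdsk/c5t9d0s0
# zpool replace nfspool c5t9d0s0 c5t9d0
```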
Owen Davies wrote:
I had an OpenSolaris server running basically as a fileserver for all my Windows
machines. The CIFS server was running in WORKGROUP mode. I had several users
defined on the server to match my Windows users. I had these users in a few
groups (the most important being Parent
[sorry for the cross-post to solarisx86]
One of the disks I had in a raidz configuration on a Sun V40z with
Solaris 10u5 died. I took the bad disk out, replaced the disk, and issued
'zpool replace pool c5t9d0'. The resilver process started, and before it
was done I rebooted the system.
On Sat, Sep 12, 2009 at 10:17 AM, Damjan Perenic <
damjan.pere...@guest.arnes.si> wrote:
> On Sat, Sep 12, 2009 at 7:25 AM, Tim Cook wrote:
> >
> >
> > On Fri, Sep 11, 2009 at 4:46 PM, Chris Du wrote:
> >>
> >> You can optimize for better IOPS or for transfer speed. NS2 SATA and SAS
> >> share m
Oh, okay! But I still don't understand why my zpool is acting like this. What
kind of error could this be, then?
I can read from and write to the pool, but it's going extremely slowly. All my
disks are fine! I'm sure about that!
When I upgraded to 122 I didn't notice this problem until I rebooted after 5
days
tranceash wrote:
The news said ZFS would have deduplication in summer 2009, but there seems to
be no word on when it will actually get this feature.
http://www.codestrom.com/wandering/2009/09/faq-zfs-deduplication.html
You shouldn't hit the Raid-Z issue because it only happens with an odd number
of disks.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Sep 12, 2009, at 12:00 AM, Paul B. Henson wrote:
Well, I won't claim the drive firmware is completely innocent, but as
evidenced in
http://mail.opensolaris.org/pipermail/fm-discuss/2009-June/000436.html
smartctl on a Linux box seems to work just fine. The exact same model drive
also
On Sat, 12 Sep 2009, Thomas Burgess wrote:
This is because with ZFS the directories aren't REALLY there.
You need to either use NFSv4 or you need to export each ZFS filesystem
independently
It should be sufficient to use an appropriate automount rule on the
client so that the "subordinate" fi
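A hypothetical sketch of such an automount rule on the client; the mount point, map name, and server path are placeholders, not the poster's actual layout. The wildcard entry maps each child filesystem to its own NFS export, so the client crosses into the "subordinate" filesystems on demand.

```
# /etc/auto_master on the client (placeholder mount point):
/foo    auto_foo

# /etc/auto_foo -- the wildcard maps each key to its own export:
*       server:/foo/&
```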
On Sat, Sep 12, 2009 at 7:25 AM, Tim Cook wrote:
>
>
> On Fri, Sep 11, 2009 at 4:46 PM, Chris Du wrote:
>>
>> You can optimize for better IOPS or for transfer speed. NS2 SATA and SAS
>> share most of the design, but they are still different, cache, interface,
>> firmware are all different.
>
> An
Do you think that this is a bug? If it is a bug, it's okay for me; I can wait
for future releases. But if this is happening only to me, then I really need
help to solve this problem.
On Sat, 12 Sep 2009 07:38:43 PDT
Hamed wrote:
> Please help me. I really need help. I did a stupid thing i know.
AFAIK no help exists in this case other than doing a full backup and restore.
There is no way to return to a former ZFS version.
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
This is because with ZFS the directories aren't REALLY there.
You need to either use NFSv4 or you need to export each ZFS filesystem
independently
On Fri, Sep 11, 2009 at 4:54 PM, Thomas Uebermeier wrote:
> Hello,
>
> I have a ZFS filesystem structure, which is basically like this:
>
> /foo
>
Hi everyone!
I made a huge mistake by upgrading my zpool from build 118 to 122. I didn't know
about the checksum error. The strange thing here is that I don't get any errors
at all. My zpool is working very slowly; everything works fine besides the speed.
It goes between 900 KB/s and 2 MB/s, and acce
The news said ZFS would have deduplication in summer 2009, but there seems to
be no word on when it will actually get this feature.
I'm playing around with a home raidz2 install and I can see this pulsing as well.
The only difference is I have 6 external USB drives with activity lights on them,
so I can see what's actually being written to the disk and when :)
What I see is about 8-second pauses while data is being sent over the net
On Sep 11, 2009, at 13:40, Maurice Volaski wrote:
At 8:25 PM +0300 9/11/09, Markus Kovero wrote:
I believe failover is best done manually, just to be sure the
active node is really dead before importing the pool on another node;
otherwise there could be serious issues, I think.
I believe there a
Probably a dumb (but basic) question about incremental ZFS backups.
After reading the docs I'm still not sure, so I ask here.
# zfs snapshot -r rpool/ROOT/b...@0901
# zfs send rpool/ROOT/b...@0901 | zfs recv -Fdu tank
# zfs snapshot -r rpool/ROOT/b...@0902
# zfs send -i rpool/ROOT/b...@0901 rpool/ROO
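A minimal sketch of the cycle those commands describe, wrapped in a function; the dataset and pool names in the usage comment are placeholders, not the poster's (elided) names.

```shell
# Sketch: one round of incremental replication.
#   $1 = source dataset, $2 = new snapshot name,
#   $3 = previous snapshot name, $4 = destination pool.
zfs_backup_increment() {
  zfs snapshot -r "$1@$2"
  # Send only the delta since the previous snapshot; receive it with
  # the full dataset path preserved (-d), unmounted (-u), forcing a
  # rollback to the most recent snapshot on the receiving side (-F).
  zfs send -i "$1@$3" "$1@$2" | zfs recv -Fdu "$4"
}
# Usage (placeholder names):
# zfs_backup_increment rpool/ROOT/be 0902 0901 tank
```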
Hi,
yesterday, my backup zpool on two USB drives failed with USB errors (I don't know
if connecting my iPhone played a role) while scrubbing the pool. This led to all
I/O on the zpool hanging, including df, zpool, and zfs commands.
init 6 would also hang due to bootadm hanging:
process id 1632
On Sep 11, 2009, at 10:41 PM, Frank Middleton wrote:
On 09/11/09 03:20 PM, Brandon Mercer wrote:
They are so well known that simply by asking if you were using them
suggests that they suck. :) There are actually some pretty hit-or-miss
issues with all 1.5TB drives, but that particular manufacture
On Fri, 11 Sep 2009, Eric Schrock wrote:
> It's clearly bad firmware - there's no bug in the sata driver. That
> drive basically returns random data, and if you're unlucky that
> randomness will look like a valid failure response. In the process I
> found one or two things that could be tightene