I did the same test again (this time without the hub), and here is the result:

1)

zpool create myPool mirror c6t0d0p0 c7t0d0p0
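(An aside, not something tested in this thread: the ZFS docs suggest handing zpool whole disks rather than the p0 fdisk-partition nodes, so ZFS can write its own EFI label. Under that assumption, the whole-disk form of the same command would be:)

```shell
# Untested alternative sketch: give ZFS the whole disks (no p0 suffix)
# so it labels them itself; device names are taken from the test above.
zpool create myPool mirror c6t0d0 c7t0d0
```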

2)

-bash-3.2# zfs create myPool/myfs
-bash-3.2# zpool status
  pool: myPool
 state: ONLINE
 scrub: none requested
config:

    NAME          STATE     READ WRITE CKSUM
    myPool        ONLINE       0     0     0
      mirror      ONLINE       0     0     0
        c6t0d0p0  ONLINE       0     0     0
        c7t0d0p0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      c5t0d0s0  ONLINE       0     0     0

errors: No known data errors

3) Copy a file to /myPool/myfs:

ls -ltrh
total 369687
-rwxr-xr-x   1 root     root        184M Jun  3 22:38 test.bin
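In the earlier run (quoted below) the copy was verified with an MD5 digest; a minimal sketch of that check, assuming GNU md5sum in place of Solaris digest(1) and a small stand-in file in place of the 184 MB test.bin:

```shell
# Create a small random file, copy it, and compare digests.
# (md5sum and the 1 MB size are stand-ins for "digest -a md5" and the
# real 184 MB test file from the run above.)
dd if=/dev/urandom of=test.bin bs=1024k count=1 2>/dev/null
cp test.bin test2.bin
[ "$(md5sum < test.bin)" = "$(md5sum < test2.bin)" ] && echo "copy verified"
# prints: copy verified
```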

4) Copy a second file:

cp test.bin test2.bin &

Then shut down while the copy was running, and start up again.

5)

-bash-3.2# zpool status
  pool: myPool
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
    replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

    NAME          STATE     READ WRITE CKSUM
    myPool        UNAVAIL      0     0     0  insufficient replicas
      mirror      UNAVAIL      0     0     0  insufficient replicas
        c6t0d0p0  UNAVAIL      0     0     0  cannot open
        c7t0d0p0  UNAVAIL      0     0     0  cannot open

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      c5t0d0s0  ONLINE       0     0     0

errors: No known data errors

6) Removed and reattached the USB sticks:

zpool status
  pool: myPool
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
    or invalid.  There are insufficient replicas for the pool to continue
    functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-5E
 scrub: none requested
config:

    NAME          STATE     READ WRITE CKSUM
    myPool        UNAVAIL      0     0     0  insufficient replicas
      mirror      UNAVAIL      0     0     0  insufficient replicas
        c6t0d0p0  FAULTED      0     0     0  corrupted data
        c7t0d0p0  FAULTED      0     0     0  corrupted data

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      c5t0d0s0  ONLINE       0     0     0

errors: No known data errors

-----------

So it's not a hub problem; it seems to be a problem between ZFS and USB
storage. I just hope ZFS works fine on hard disks, because it's not working
on USB sticks. It would be nice if somebody from Sun could fix this
problem...
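One diagnostic that wasn't run in this thread (my suggestion, not something verified here): since the last status complains about a missing or invalid label, zdb -l can dump the vdev labels straight off each device to see whether they survived the reboot:

```shell
# Dump the ZFS vdev labels from each side of the mirror. An intact
# device should show four labels with matching pool GUIDs; "failed to
# unpack label" output would mean the on-disk label really is gone.
zdb -l /dev/rdsk/c6t0d0p0
zdb -l /dev/rdsk/c7t0d0p0
```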



Thanks & Regards

Paulo


On Tue, Jun 3, 2008 at 8:19 PM, Paulo Soeiro <[EMAIL PROTECTED]> wrote:

> I'll try the same without the hub.
>
> Thanks & Regards
> Paulo
>
>
>
>
> On 6/2/08, Thommy M. <[EMAIL PROTECTED]> wrote:
>>
>> Paulo Soeiro wrote:
>> > Greetings,
>> >
>> > I was experimenting with zfs, and i made the following test, i shutdown
>> > the computer during a write operation
>> > in a mirrored usb storage filesystem.
>> >
>> > Here is my configuration
>> >
>> > NGS USB 2.0 Minihub 4
>> > 3 USB Silicom Power Storage Pens 1 GB each
>> >
>> > These are the ports:
>> >
>> > hub devices
>> > /-------------------------------------------\
>> > | port 2         | port  1              |
>> > | c10t0d0p0  | c9t0d0p0          |
>> > ---------------------------------------------
>> > | port 4         | port  4              |
>> > | c12t0d0p0  | c11t0d0p0        |
>> > \________________________/
>> >
>> > Here is the problem:
>> >
>> > 1)First i create a mirror with port2 and port1 devices
>> >
>> > zpool create myPool mirror c10t0d0p0 c9t0d0p0
>> > -bash-3.2# zpool status
>> >   pool: myPool
>> >  state: ONLINE
>> >  scrub: none requested
>> > config:
>> >
>> >     NAME           STATE     READ WRITE CKSUM
>> >     myPool         ONLINE       0     0     0
>> >       mirror       ONLINE       0     0     0
>> >         c10t0d0p0  ONLINE       0     0     0
>> >         c9t0d0p0   ONLINE       0     0     0
>> >
>> > errors: No known data errors
>> >
>> >   pool: rpool
>> >  state: ONLINE
>> >  scrub: none requested
>> > config:
>> >
>> >     NAME        STATE     READ WRITE CKSUM
>> >     rpool       ONLINE       0     0     0
>> >       c5t0d0s0  ONLINE       0     0     0
>> >
>> > errors: No known data errors
>> >
>> > 2)zfs create myPool/myfs
>> >
>> > 3)created a random file (file.txt - more or less 100MB size)
>> >
>> > digest -a md5 file.txt
>> > 3f9d17531d6103ec75ba9762cb250b4c
>> >
>> > 4)While making a second copy of the file:
>> >
>> > cp file.txt test &
>> >
>> > I've shutdown the computer while the file was being copied. And
>> > restarted the computer again. And here is the result:
>> >
>> >
>> > -bash-3.2# zpool status
>> >   pool: myPool
>> >  state: UNAVAIL
>> > status: One or more devices could not be used because the label is
>> missing
>> >     or invalid.  There are insufficient replicas for the pool to
>> continue
>> >     functioning.
>> > action: Destroy and re-create the pool from a backup source.
>> >    see: http://www.sun.com/msg/ZFS-8000-5E
>> >  scrub: none requested
>> > config:
>> >
>> >     NAME           STATE     READ WRITE CKSUM
>> >     myPool         UNAVAIL      0     0     0  insufficient replicas
>> >       mirror       UNAVAIL      0     0     0  insufficient replicas
>> >         c12t0d0p0  OFFLINE      0     0     0
>> >         c9t0d0p0   FAULTED      0     0     0  corrupted data
>> >
>> >   pool: rpool
>> >  state: ONLINE
>> >  scrub: none requested
>> > config:
>> >
>> >     NAME        STATE     READ WRITE CKSUM
>> >     rpool       ONLINE       0     0     0
>> >       c5t0d0s0  ONLINE       0     0     0
>> >
>> > errors: No known data errors
>> >
>> > -------------------------------------------------------------------
>> >
>> > I was expecting that only one of the files was corrupted, not the all
>> > the filesystem.
>>
>> This looks exactly like the problem I had (thread "USB stick unavailable
>> after restart") and the answer I got was that you can't relay on the HUB
>> ...
>>
>> I haven't tried another HUB yet but will eventually test the Adaptec
>> XHub 4 (AUH-4000) which is on the HCL list...
>>
>>
>>
>>
>> _______________________________________________
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>
>
