Thanks a lot Sanjeev!
If you look at my first message, you will see the discrepancy in zdb...
Leal.
[http://www.eall.com.br/blog]
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Marcelo,
On Wed, Dec 31, 2008 at 02:17:37AM -0800, Marcelo Leal wrote:
> Thanks a lot Sanjeev!
> If you look at my first message, you will see the discrepancy in zdb...
Apologies. Now, in hindsight, I understand why you gave the zdb details :-(
I should have read the mail carefully.
Thanks and regards,
Hello all,
# zpool status
  pool: mypool
 state: ONLINE
 scrub: scrub completed after 0h2m with 0 errors on Fri Dec 19 09:32:42 2008
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          mirror    ONLINE       0     0     0
Marcelo,
Thanks for the details! This rules out a bug I was suspecting:
http://bugs.opensolaris.org/view_bug.do?bug_id=6664765
This needs more analysis.
What does the rm command fail with?
We could probably run truss on the rm command, like:
truss -o /tmp/rm.truss rm filename
You can then share /tmp/rm.truss.
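A minimal sketch of the tracing step described above; the file path is a placeholder, not taken from the thread (truss is Solaris-specific; strace is the rough Linux equivalent):

```shell
# Trace the failing rm and write every system call it makes to a file.
# /mypool/path/to/stuck-file is a placeholder for the undeletable file.
truss -o /tmp/rm.truss rm /mypool/path/to/stuck-file

# The failing call usually shows up near the end of the trace;
# truss marks failed calls with "Err#" followed by the errno name.
tail -20 /tmp/rm.truss
grep -n 'Err#' /tmp/rm.truss
```

The `Err#` line (e.g. `unlink(...) Err#...`) is usually what pinpoints why rm is failing.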
execve(/usr/bin/rm, 0x08047DBC, 0x08047DC8) argc = 2
mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEFF
resolvepath(/usr/lib/ld.so.1, /lib/ld.so.1, 1023) = 12
resolvepath(/usr/bin/rm, /usr/bin/rm, 1023) = 11
sysconfig(_CONFIG_PAGESIZE)
Marcelo,
Thanks for the details.
Comments inline...
Marcelo Leal wrote:
> execve(/usr/bin/rm, 0x08047DBC, 0x08047DC8) argc = 2
> mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEFF
> resolvepath(/usr/lib/ld.so.1, /lib/ld.so.1, 1023) = 12
> execve(/usr/bin/ls, 0x08047DA8, 0x08047DB4) argc = 2
> mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEFF
> resolvepath(/usr/lib/ld.so.1, /lib/ld.so.1, 1023) = 12
> resolvepath(/usr/bin/ls, /usr/bin/ls, 1023) = 11
> xstat(2, /usr/bin/ls, 0x08047A58)
Marcelo,
Comments inline...
On Tue, Dec 30, 2008 at 10:35:37AM -0800, Marcelo Leal wrote:
> pathconf(., 20) = 2
> acl(., ACE_GETACLCNT, 0, 0x) = 6
> stat64(., 0x08046890) = 0
> acl(., ACE_GETACL, 6, 0x08071C48) = 6
Hello all...
Can that be caused by some cache on the LSI controller?
Some flush that the controller or disk did not honour?
Marcelo,
Marcelo Leal wrote:
> Hello all...
> Can that be caused by some cache on the LSI controller?
> Some flush that the controller or disk did not honour?
More details on the problem would help. Can you please give the following details:
- zpool status
- zfs list -r
- The details of the
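A hedged sketch of how the first two requested diagnostics are typically gathered; the pool name, dataset, and object number below are placeholders, not values from this thread:

```shell
# Pool health, last scrub result, and per-vdev error counters
zpool status -v mypool

# All datasets in the pool, recursively, with space usage
zfs list -r mypool

# On-disk object details for the problem file, as in the earlier
# zdb discrepancy; "mypool/dataset" and "12345" are placeholders
# for the dataset and the object number of the stuck file.
zdb -dddd mypool/dataset 12345
```

Comparing the zdb object dump against what the filesystem reports is what surfaced the discrepancy mentioned at the start of the thread.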