>
> Good. Run 'zpool scrub' to make sure there are no
> other errors.
>
> regards
> victor
>
Yes, scrubbed successfully with no errors. Thanks again for all of your
generous assistance.
/AJ
--
This message posted from opensolaris.org
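The scrub-and-verify step Victor describes can be scripted; a minimal sketch (the pool name `tank` is taken from this thread, and the script is guarded so it degrades to a message on hosts without ZFS):

```shell
#!/bin/sh
# Scrub the pool, wait for completion, then print a one-line health
# summary. Guarded: falls back to a note where zpool is absent or the
# pool does not exist on this host.
POOL=${POOL:-tank}

if command -v zpool >/dev/null 2>&1 && zpool list "$POOL" >/dev/null 2>&1; then
  zpool scrub "$POOL"
  # Poll until the scrub completes before checking the result.
  while zpool status "$POOL" | grep -q 'scrub in progress'; do
    sleep 60
  done
  RESULT=$(zpool status -x "$POOL")   # '...is healthy' when error-free
else
  RESULT="zpool or pool '$POOL' not available on this host; skipping"
fi
echo "$RESULT"
```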
On Jul 4, 2010, at 4:58 AM, Andrew Jones wrote:
> Victor,
>
> The zpool import succeeded on the next attempt following the crash that I
> reported to you by private e-mail!
From the threadlist it looked like the system was pretty low on memory, with stacks
of userland stuff swapped out, hence s
>
Victor,
The zpool import succeeded on the next attempt following the crash that I
reported to you by private e-mail!
For completeness, this is the final status of the pool:
  pool: tank
 state: ONLINE
  scan: resilvered 1.50K in 165h28m with 0 errors on Sat Jul 3 08:02:30 2010
config:
> Andrew,
>
> Looks like the zpool is telling you the devices are
> still doing work of
> some kind, or that there are locks still held.
>
Agreed; it appears the CSV1 volume is in a fundamentally inconsistent state
following the aborted zfs destroy attempt. See later in this thread where
Vict
On Jul 1, 2010, at 10:28 AM, Andrew Jones wrote:
> Victor,
>
> I've reproduced the crash and have vmdump.0 and dump device files. How do I
> query the stack on crash for your analysis? What other analysis should I
> provide?
Output of 'echo "::threadlist -v" | mdb 0' can be a good start in th
Victor,
A little more info on the crash, from the messages file, is attached here. I
have also decompressed the dump with savecore to generate unix.0, vmcore.0, and
vmdump.0.
Jun 30 19:39:10 HL-SAN unix: [ID 836849 kern.notice]
Jun 30 19:39:10 HL-SAN panic[cpu3]/thread=ff0017909c60:
Jun
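The decompress-and-inspect workflow above can be sketched end to end. This is a hedged sketch: savecore and mdb exist only on Solaris-family systems, so each step is skipped with a note where the tool is absent, and the dump directory default is an assumption.

```shell
#!/bin/sh
# Sketch: expand a compressed crash dump and pull the kernel thread
# list, as suggested in the thread. Each tool is Solaris-specific, so
# steps are skipped (with a note) where it is not installed.
DUMPDIR=${DUMPDIR:-/var/crash}   # assumed default dump location

run() {
  echo "+ $*"
  if command -v "$1" >/dev/null 2>&1; then
    "$@"
  else
    echo "  (skipped: '$1' not available on this system)"
  fi
}

# 1. Decompress vmdump.0 into unix.0 + vmcore.0 for mdb to read.
run savecore -f "$DUMPDIR/vmdump.0" "$DUMPDIR"

# 2. Thread list with stacks -- the suggested first look at a hang.
echo '::threadlist -v' | run mdb "$DUMPDIR/unix.0" "$DUMPDIR/vmcore.0"
```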
Victor,
I've reproduced the crash and have vmdump.0 and dump device files. How do I
query the stack on crash for your analysis? What other analysis should I
provide?
Thanks
Victor,
The 'zpool import -f -F tank' failed at some point last night. The box was
completely hung this morning; no core dump, no ability to SSH into the box to
diagnose the problem. I had no choice but to reset, as I had no diagnostic
ability. I don't know if there would be anything in the log
Andrew,
Looks like the zpool is telling you the devices are still doing work of
some kind, or that there are locks still held.
From the Intro(2) man page, where the error numbers are listed, number 16
looks to be EBUSY:
    16 EBUSY    Device busy
An
Thanks Victor. I will give it another 24 hrs or so and will let you know how it
goes...
You are right, a large 2TB volume (CSV1) was in the process of being
deleted, as described above. It is showing error 16 on 'zdb -e'
On Jun 28, 2010, at 9:32 PM, Andrew Jones wrote:
> Update: have given up on the zdb write mode repair effort, at least for now.
> Hoping for any guidance / direction anyone's willing to offer...
>
> Re-running 'zpool import -F -f tank' with some stack trace debug, as
> suggested in similar thr
Just re-ran 'zdb -e tank' to confirm the CSV1 volume is still exhibiting error
16:
Could not open tank/CSV1, error 16
Considering that my attempt to delete the CSV1 volume led to the failure in the
first place, I have to think that if I can either 1) complete the deletion of
this volume or 2) ro
- Original Message -
> Dedup had been turned on in the past for some of the volumes, but I
> had turned it off altogether before entering production due to
> performance issues. GZIP compression was turned on for the volume I
> was trying to delete.
Was there a lot of deduped data still on
Malachi,
Thanks for the reply. There were no snapshots for the CSV1 volume that I
recall... very few snapshots on any volume in the tank.
Dedup had been turned on in the past for some of the volumes, but I had turned
it off altogether before entering production due to performance issues. GZIP
compression was turned on for the volume I was trying to delete.
I had a similar issue on boot after upgrade in the past and it was due to
the large number of snapshots I had... don't know if that could be related
or not...
Malachi de Ælfweald
http://www.google.com/profiles/malachid
On Mon, Jun 28, 2010 at 8:59 AM, Andrew Jones wrote:
> Now at 36 hours sin
Update: have given up on the zdb write mode repair effort, at least for now.
Hoping for any guidance / direction anyone's willing to offer...
Re-running 'zpool import -F -f tank' with some stack trace debug, as suggested
in similar threads elsewhere. Note that this appears hung at near idle.
f
Now at 36 hours since zdb process start and:
 PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 827 root     4936M 4931M sleep   59    0   0:50:47 0.2% zdb/209
Idling at 0.2% processor for nearly the past 24 hours... feels very stuck.
Thoughts on how to determine where and
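One way to answer the "is it stuck?" question above is to sample the process's accrued CPU time twice: if it does not advance between samples, the process is sleeping rather than working. A small sketch using POSIX `ps` (the PID and interval are parameters; 827 was the zdb PID in the output above):

```shell
#!/bin/sh
# stuck_check PID [SECONDS]: sample a process's accrued CPU time twice;
# if it does not advance between the samples, the process is likely
# wedged or idle rather than making progress.
stuck_check() {
  pid=$1
  interval=${2:-5}
  t1=$(ps -o time= -p "$pid" | tr -d ' ')
  sleep "$interval"
  t2=$(ps -o time= -p "$pid" | tr -d ' ')
  if [ "$t1" = "$t2" ]; then
    echo "PID $pid: CPU time flat at $t1 -- likely stuck or sleeping"
  else
    echo "PID $pid: CPU time advanced ($t1 -> $t2) -- still working"
  fi
}

# Example: check the current shell itself.
stuck_check $$ 1
```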