After multiple power outages caused by storms coming through, I can no
longer access /dev/zvol/dsk/poolname, which holds the l2arc and slog devices
for another pool. I don't think this is related, since the pools are offline
pending access to the volumes.
I tried running find /dev/zvol/dsk/poolname
On Mon, Jun 28, 2010 at 11:26 AM, Tristram Scott
wrote:
> For quite some time I have been using zfs send -R fsn...@snapname | dd
> of=/dev/rmt/1ln to make a tape backup of my zfs file system. A few weeks
> back the size of the file system grew to larger than would fit on a single
> DAT72 tape,
Thanks Cindy. I'm running 111b at the moment. I ran a scrub last
night, and it still reports the same status.
r...@weyl:~# uname -a
SunOS weyl 5.11 snv_111b i86pc i386 i86pc Solaris
r...@weyl:~# zpool status -x
pool: tank
state: DEGRADED
status: One or more devices could not be opened. Suffici
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tristram Scott
>
> If you would like to try it out, download the package from:
> http://www.quantmodels.co.uk/zfsdump/
I haven't tried this yet, but thank you very much!
Other people have poi
Andrew,
Looks like the pool is telling you the devices are still doing work of
some kind, or that there are locks still held.
The error numbers are listed in the intro(2) man page. Number 16
looks to be EBUSY:
16 EBUSY       Device busy
An
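For what it's worth, a quick way to double-check the errno value and to see
whether anything still has the device open (the zvol path below is only an
example, adjust it to your pool/volume):

# grep EBUSY /usr/include/sys/errno.h
# fuser -u /dev/zvol/rdsk/tank/CSV1

fuser should list any processes still holding the device, which would explain
the EBUSY.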
Oh well, thanks for this answer.
It makes me feel much better!
What are the potential risks?
Gabriele Bulfon - Sonicle S.r.l.
Tel +39 028246016 Int. 30 - Fax +39 028243880
Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com
--
I haven't tried it yet, but supposedly this will back up/restore the
COMSTAR config:
$ svccfg export -a stmf > comstar.bak.${DATE}
If you ever need to restore the configuration, you can attach the
storage and run an import:
$ svccfg import comstar.bak.${DATE}
- Mike
On 6/28/10, bso...@ep
Thanks Victor. I will give it another 24 hrs or so and will let you know how it
goes...
You are right, a large 2TB volume (CSV1) was not in the process of being
deleted, as described above. It is showing error 16 on 'zdb -e'
Hi all,
With osol b134 exporting a couple of iSCSI targets to some hosts, how can the
COMSTAR configuration be migrated to another host?
I can use ZFS send/receive to replicate the LUNs, but how can I "replicate"
the targets and views from serverA to serverB?
Are there any best-practice procedures to foll
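In case it helps, here is the rough sequence I would try. The names, paths and
GUID below are placeholders and I haven't verified this end to end, so treat
it as a sketch and check the man pages: replicate the zvol with send/receive,
then recreate the LU, target and view on serverB.

serverA# zfs snapshot tank/lun0@migrate
serverA# zfs send tank/lun0@migrate | ssh serverB zfs receive tank/lun0

serverB# sbdadm create-lu /dev/zvol/rdsk/tank/lun0
serverB# itadm create-target
serverB# stmfadm add-view <lu-guid-printed-by-create-lu>

If you need the new LU to keep the same GUID (or the target the same IQN) so
that initiators don't notice the move, check the sbdadm/stmfadm/itadm man
pages for the relevant options; I haven't tried that part myself.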
I've attached the output of those commands. The machine is a v20z if that makes
any difference.
Thanks,
George
mdb: logging to "debug.txt"
> ::status
debugging crash dump vmcore.0 (64-bit) from crypt
operating system: 5.11 snv_111b (i86pc)
panic messag
On Jun 28, 2010, at 11:27 PM, George wrote:
> I've tried removing the spare and putting back the faulty drive to give:
>
> pool: storage2
> state: FAULTED
> status: An intent log record could not be read.
> Waiting for administrator intervention to fix the faulted pool.
> action: Either r
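For what it's worth, when a pool is faulted because intent log records cannot
be read, the usual choices are to bring the original device back online or to
tell ZFS to discard the outstanding log records. The device name below is a
placeholder and this is only a sketch, so check the zpool man page first:

# zpool online storage2 <original-device>   # if the disk can be brought back
# zpool clear storage2                      # ignore the unreadable log records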
Hi Donald,
I think this is just a reporting error in the zpool status output,
depending on which Solaris release you are running.
Thanks,
Cindy
On 06/27/10 15:13, Donald Murray, P.Eng. wrote:
Hi,
I awoke this morning to a panic'd opensolaris zfs box. I rebooted it
and confirmed it would panic each time it
On Jun 28, 2010, at 9:32 PM, Andrew Jones wrote:
> Update: have given up on the zdb write mode repair effort, at least for now.
> Hoping for any guidance / direction anyone's willing to offer...
>
> Re-running 'zpool import -F -f tank' with some stack trace debug, as
> suggested in similar thr
On 6/28/2010 12:53 PM, Roy Sigurd Karlsbakk wrote:
2. Are the RAM requirements for ZFS with dedup based on the total
available zpool size (I'm not using thin provisioning), or just on how
much data is in the filesystem being deduped? That is, if I have 500
GB of deduped data but 6 TB of possible
On 6/28/2010 12:33 PM, valrh...@gmail.com wrote:
I'm putting together a new server, based on a Dell PowerEdge T410.
I have a simple SAS controller, with six 2TB Hitachi DeskStar 7200 RPM SATA
drives. The processor is a quad-core 2 GHz Core i7-based Xeon.
I will run the drives as one set of thre
> 2. Are the RAM requirements for ZFS with dedup based on the total
> available zpool size (I'm not using thin provisioning), or just on how
> much data is in the filesystem being deduped? That is, if I have 500
> GB of deduped data but 6 TB of possible storage, which number is
> relevant for calcu
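As a rough way to answer this for your own pool: the dedup table scales with
the data actually written to datasets that have dedup=on, not with the raw
pool capacity. You can look at the table directly with zdb and do the
arithmetic yourself (pool name is an example, and the per-entry figure is only
a ballpark I've seen quoted on this list, so don't treat it as exact):

# zdb -DD tank

That prints a DDT histogram plus a summary of how many unique entries exist
and their approximate size. As a worked example, 500 GB of deduped data at an
average 64 KB block size is about 8 million unique blocks; at very roughly 300
bytes per in-core DDT entry that is on the order of 2-3 GB of RAM, regardless
of the 6 TB of raw capacity.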
Just re-ran 'zdb -e tank' to confirm the CSV1 volume is still exhibiting error
16:
Could not open tank/CSV1, error 16
Considering my attempt to delete the CSV1 volume led to the failure in the
first place, I have to think that if I can either 1) complete the deletion of
this volume or 2) ro
- Original Message -
> Dedup had been turned on in the past for some of the volumes, but I
> had turned it off altogether before entering production due to
> performance issues. GZIP compression was turned on for the volume I
> was trying to delete.
Was there a lot of deduped data still on
I'm putting together a new server, based on a Dell PowerEdge T410.
I have a simple SAS controller, with six 2TB Hitachi DeskStar 7200 RPM SATA
drives. The processor is a quad-core 2 GHz Core i7-based Xeon.
I will run the drives as one set of three mirror pairs striped together, for 6
TB of homo
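For reference, the layout described above would be created with something
like the following (device names are placeholders for whatever format
reports on the box):

# zpool create tank mirror c0t0d0 c0t1d0 \
               mirror c0t2d0 c0t3d0 \
               mirror c0t4d0 c0t5d0

That gives three mirrored pairs striped together, i.e. roughly 6 TB usable
from the six 2 TB drives.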
Hi,
I have a machine running 2009.06 with 8 SATA drives in a SCSI-connected enclosure.
I had a drive fail and accidentally replaced the wrong one, which
unsurprisingly caused the rebuild to fail. The status of the zpool then ended
up as:
pool: storage2
state: FAULTED
status: An intent log reco
Malachi,
Thanks for the reply. There were no snapshots for the CSV1 volume that I
recall... very few snapshots on any volume in the tank.
Dedup had been turned on in the past for some of the volumes, but I had turned
it off altogether before entering production due to performance issues. GZIP
compression was turned on for the volume I was trying to delete.
I had a similar issue on boot after upgrade in the past and it was due to
the large number of snapshots I had... don't know if that could be related
or not...
Malachi de Ælfweald
http://www.google.com/profiles/malachid
On Mon, Jun 28, 2010 at 8:59 AM, Andrew Jones wrote:
> Now at 36 hours sin
- Original Message -
> Now at 36 hours since zdb process start and:
>
>
> PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
> 827 root 4936M 4931M sleep 59 0 0:50:47 0.2% zdb/209
>
> Idling at 0.2% processor for nearly the past 24 hours... feels very
> stuck. Thoughts on how to
Update: have given up on the zdb write mode repair effort, at least for now.
Hoping for any guidance / direction anyone's willing to offer...
Re-running 'zpool import -F -f tank' with some stack trace debug, as suggested
in similar threads elsewhere. Note that this appears hung at near idle.
f
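Presumably the "stack trace debug" referred to here is the usual /etc/system
tweak from those threads, shown below for reference only; use at your own risk
and remove the settings once the pool is recovered:

set zfs:zfs_recover=1
set aok=1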
On Jun 28, 2010, at 12:26 PM, Tristram Scott wrote:
>> I use Bacula which works very well (much better than
>> Amanda did).
>> You may be able to customize it to do direct zfs
>> send/receive, however I find that although they are
>> great for copying file systems to other machines,
>> they are i
> I use Bacula which works very well (much better than
> Amanda did).
> You may be able to customize it to do direct zfs
> send/receive, however I find that although they are
> great for copying file systems to other machines,
> they are inadequate for backups unless you always
> intend to restore
Now at 36 hours since zdb process start and:
  PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  827 root     4936M 4931M sleep   59    0   0:50:47 0.2% zdb/209
Idling at 0.2% processor for nearly the past 24 hours... feels very stuck.
Thoughts on how to determine where and
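A few things that should show where it is sitting (using the PID from the
prstat output above; the mdb pipeline is written from memory, so double-check
the syntax before relying on it):

# pstack 827
# truss -p 827
# echo "0t827::pid2proc | ::walk thread | ::findstack -v" | mdb -k

pstack/truss show what the user-level process is doing, and the mdb -k line
dumps the kernel stacks of its threads, which is usually more informative when
a zdb run or pool import is stuck in the kernel.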
I use Bacula which works very well (much better than Amanda did).
You may be able to customize it to do direct zfs send/receive, however I find
that although they are great for copying file systems to other machines, they
are inadequate for backups unless you always intend to restore the whole f
For quite some time I have been using zfs send -R fsn...@snapname | dd
of=/dev/rmt/1ln to make a tape backup of my zfs file system. A few weeks back
the size of the file system grew to larger than would fit on a single DAT72
tape, and I once again searched for a simple solution to allow dumping
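One crude approach, for example, is to stage the stream to disk, split it into
tape-sized chunks, and write the chunks to tape one at a time (the chunk size
and paths below are only illustrative, and it obviously needs scratch space
equal to the size of the dump):

# zfs send -R fsname@snapname | split -b 2000m - /var/tmp/zfsdump.
# for f in /var/tmp/zfsdump.*; do
>     echo "load next tape and press return"; read junk
>     dd if=$f of=/dev/rmt/1ln bs=1024k
> done

Restoring means reading each tape back to a file with dd and then cat'ing the
chunks, in order, into zfs receive.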
On Mon, 2010-06-28 at 05:16 -0700, Gabriele Bulfon wrote:
> Yes...they're still running...but being aware that a power failure causing an
> unexpected poweroff may make the pool unreadable is a pain
>
> Yes. Patches should be available.
> Or adoption may be lowering a lot...
I don't have ac
On 6/26/10 9:47 AM -0400 David Magda wrote:
Crickey. Who's the genius who thinks of these URLs?
SEOs
On 28.06.10 16:16, Gabriele Bulfon wrote:
Yes...they're still running...but being aware that a power failure causing an
unexpected poweroff may make the pool unreadable is a pain
Pool integrity is not affected by this issue.
Yes...they're still running...but being aware that a power failure causing an
unexpected poweroff may make the pool unreadable is a pain
Yes. Patches should be available.
Or adoption may be lowering a lot...
On 28-6-2010 12:13, Gabriele Bulfon wrote:
*sweat*
These systems have all been running for years now, and I considered them safe...
Have I been at risk all this time?!
They're still running, are they not? So, stop sweating.
But you're right about the changed patching service from Oracle.
It sucks
Mmm... I double checked some of the running systems.
Most of them have the first patch (sparc-122640-05 and x86-122641-06), but not
the second one (sparc-142900-09 and x86-142901-09)...
...I feel I'm right in the middle of the problem...
How much am I risking?! These systems are all mirrored via
I think zfs on ubuntu currently is a rather bad idea. See test below with
ubuntu Lucid 10.04 (amd64)
r...@bigone:~# cat /proc/partitions
major minor  #blocks  name

   8     0  312571224 sda
   8     1     979933 sda1
   8     2    3911827 sda2
   8     3   48829567 sda3
   8
Yes, I did read it.
And what worries me is patch availability...
I ran 'zpool scrub' and will report what happens once it's finished. (It will
take pretty long.)
The scrub finished successfully (with no errors) and 'zpool status -v' doesn't
crash the kernel any more.
Andrej
All true, I just saw too many "need ubuntu and zfs" posts and thought to state the
obvious, in case the patch set for nexenta happens to differ enough to provide a
working set. I've had nexenta succeed where opensolaris quarterly releases failed
and vice versa.
On Jun 27, 2010, at 9:54 PM, Erik Trimble w
On 06/28/10 08:15 PM, Gabriele Bulfon wrote:
I found this today:
http://blog.lastinfirstout.net/2010/06/sunoracle-finally-announces-zfs-data.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+LastInFirstOut+%28Last+In%2C+First+Out%29&utm_content=FriendFeed+Bot
How can I be sure my
Thanks, I don't know how I missed it.
I found this today:
http://blog.lastinfirstout.net/2010/06/sunoracle-finally-announces-zfs-data.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+LastInFirstOut+%28Last+In%2C+First+Out%29&utm_content=FriendFeed+Bot
How can I be sure my Solaris 10 systems are fine?
Is latest OpenSola