Hmm... got it working after a reboot. Odd that it had problems before that. I
was able to rename the pools and the system seems to be running well now.
Irritatingly, the settings for sharenfs, sharesmb, quota, etc. didn't get
copied over with the zfs send/recv. I didn't have that many filesystems
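For the archives, a minimal sketch of the two usual workarounds (the pool and
filesystem names below are made up, and zfs send -R only exists on more recent
releases, so check yours first):

  zfs set sharenfs=on tank/home    (re-apply each property by hand on the target)
  zfs set quota=10G tank/home

  zfs snapshot -r tank@move        (or carry the properties in a replication stream)
  zfs send -R tank@move | zfs recv -d newpool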
I have a Solaris 10 update 6 system with a snapshot I can't remove.
zfs destroy -f reports the device as being busy. fuser doesn't
show any process using the filesystem and it isn't shared.
I can unmount the filesystem OK.
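For reference, a sketch of the obvious checks (pool/snapshot names here are
hypothetical, and zfs holds only exists on newer releases, so it may not apply
to update 6):

  zfs list -t snapshot -r tank     (does the snapshot really still exist?)
  zfs get -r origin tank           (a clone based on the snapshot blocks destroy)
  zfs holds tank/fs@snap           (user holds, where supported)
  fuser -c /tank/fs                (anything with files open on the mountpoint)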
Any clues or suggestions of bigger sticks to hit it with?
--
Ian.
r...@nas:~# zpool export -f raid
cannot export 'raid': pool is busy
I've disabled all the services I could think of. I don't see anything accessing
it. I also don't see any of the filesystems mounted with mount or "zfs mount".
What's the deal? This is not the rpool, so I'm not booted off it or
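A sketch of the usual suspects that keep a pool busy even with nothing mounted
(pool name 'raid' as above, the rest is generic Solaris):

  zfs list -H -o name,mounted -r raid   (anything in the pool still mounted?)
  zfs list -t volume -r raid            (any zvols in the pool?)
  swap -l                               (a zvol used as swap keeps the pool busy)
  dumpadm                               (same for a zvol dump device)
  share                                 (lingering NFS shares on the pool's filesystems)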
On 16-Jan-10, at 6:51 PM, Mike Gerdts wrote:
On Sat, Jan 16, 2010 at 5:31 PM, Toby Thain wrote:
On 16-Jan-10, at 7:30 AM, Edward Ned Harvey wrote:
I am considering building a modest sized storage system with zfs. Some
of the data on this is quite valuable, some small subset to be backed
On Sat, Jan 16, 2010 at 5:31 PM, Toby Thain wrote:
> On 16-Jan-10, at 7:30 AM, Edward Ned Harvey wrote:
>
>>> I am considering building a modest sized storage system with zfs. Some
>>> of the data on this is quite valuable, some small subset to be backed
>>> up "forever", and I am evaluating back-
Thanks for the tip both of you. The zdb approach seems viable.
On 16-Jan-10, at 7:30 AM, Edward Ned Harvey wrote:
I am considering building a modest sized storage system with zfs. Some
of the data on this is quite valuable, some small subset to be backed
up "forever", and I am evaluating back-up options with that in mind.
You don't need to store the "zfs send" data stream on your backup media.
Which drive model/revision number are you using?
I presume you are using the 4-platter version: WD15EADS-00R6B0, but perhaps I
am wrong.
Also did you run WDTLER.EXE on the drives first, to hasten error reporting
times?
We're in the process of upgrading our storage servers from Seagate RE.2 500 GB
and WD 500 GB "black" drives to WD 1.5 TB "green" drives (ones with 512B
sectors). So far, no problems to report.
We've replaced 6 out of 8 drives in one raidz2 vdev so far (1 drive each
weekend). Resilver times h
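For anyone following along, each swap is roughly the following sketch (the
pool and device names are hypothetical):

  zpool replace tank c2t3d0    (new 1.5 TB drive in the old drive's slot)
  zpool status -v tank         (wait for the resilver to finish before the next one)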
Which consumer-priced 1.5TB drives do people currently recommend?
I had zero read/write/checksum errors so far in 2 years with my trusty old
Western Digital WD7500AAKS drives, but now I want to upgrade to a new set of
drives that are big, reliable and cheap.
As of Jan 2010 it seems the price sw
>
>
>
> NO, zfs send is not a backup.
>
> From a backup, you could restore individual files.
>
> Jörg
>
>
I disagree.
It is a backup. It's just not "an enterprise backup solution"
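E.g., once the stream has been received into a pool rather than archived as a
flat file, single-file restores are trivial (the names below are made up):

  zfs send tank/home@monday | zfs recv backup/home
  cp /backup/home/lost.file /tank/home/lost.file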
Edward Ned Harvey wrote:
> > I am considering building a modest sized storage system with zfs. Some
> > of the data on this is quite valuable, some small subset to be backed
> > up "forever", and I am evaluating back-up options with that in mind.
>
> You don't need to store the "zfs send" data stream on your backup media.
On Sat, 2010-01-16 at 07:24 -0500, Edward Ned Harvey wrote:
> Personally, I use "zfs send | zfs receive" to an external disk. Initially a
> full image, and later incrementals.
Do these incrementals go into the same filesystem that received the
original zfs stream?
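For concreteness, the pattern being asked about looks roughly like this (names
are hypothetical; the incremental is received into the same target filesystem,
which must not be modified between receives):

  zfs snapshot tank/data@1
  zfs send tank/data@1 | zfs recv backup/data        (initial full stream)
  zfs snapshot tank/data@2
  zfs send -i @1 tank/data@2 | zfs recv backup/data  (later incrementals)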
OK, the third question (localhost transmission failure) should have been posted
to storage-discuss.
I'll subscribe to this list and ask there.
Regarding the first question, after having removed the LUN from the target,
devfsadm -C removes the device and then the pool shows as unavailable. I gu
In my previous post I was referring more to mdbox (multi-dbox) rather than dbox;
however, I believe the metadata is stored with the mail message in version 1.x, while
in 2.x the metadata is not updated within the message, which would be better for ZFS.
What I am saying is that one message per file, which is not updated in place, is better for
According to the Dovecot wiki, dbox files are re-written by a secondary process, i.e.
deletes do not happen immediately; they happen later as a background process
and the whole message file is re-written. You can set a size limit on message
files.
Some time ago I emailed Tim on a few ideas to make it m
Thx all, I understand now.
BR, Jeffry
>
> if an application requests a synchronous write then it is committed to
> the ZIL immediately; once that is done the IO is acknowledged to the application.
> But data written to the ZIL is still in memory as part of a currently open
> txg and will be committed to the pool
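One rough way to see this in practice, assuming a hypothetical pool 'tank'
with a separate log device: synchronous writes hit the log vdev right away,
while the bulk of the data only goes to the main vdevs a few seconds later
when the open txg commits.

  zpool iostat -v tank 1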
> I am considering building a modest sized storage system with zfs. Some
> of the data on this is quite valuable, some small subset to be backed
> up "forever", and I am evaluating back-up options with that in mind.
You don't need to store the "zfs send" data stream on your backup media.
This woul
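Presumably the distinction is between archiving the raw stream and receiving
it; only the latter gives you a filesystem you can browse and restore from
selectively (the names below are made up):

  zfs send tank/data@1 > /backup/tank-data-1.zfs    (opaque stream file, all-or-nothing restore)
  zfs send tank/data@1 | zfs recv backup/data       (live copy, individual files accessible)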
> What is the best way to back up a zfs pool for recovery? Recover
> entire pool or files from a pool... Would you use snapshots and
> clones?
>
> I would like to move the "backup" to a different disk and not use
> tapes.
Personally, I use "zfs send | zfs receive" to an external disk. Initially a
full image, and later incrementals.