On Sun, 23 Aug 2009 22:05:15 -0400
Edward Ned Harvey sola...@nedharvey.com wrote:
zfs send -Rv rp...@0908 /net/remote/rpool/snaps/rpool.0908
The recommended thing is to zfs send | zfs receive [...]
[cut the rest of the reply]
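The recommendation quoted above can be sketched as a direct pipe rather than a redirect to a file. Host and dataset names here are hypothetical, and this assumes a live ZFS system on both ends:

```shell
# Send a snapshot straight into a receiving pool over ssh, instead of
# redirecting the stream to a file under an NFS path:
zfs send -Rv rpool@0908 | ssh remotehost 'zfs receive -Fdu backup/rpool'
```

A received dataset can be browsed and rolled back like any other filesystem, whereas a stream stored as a flat file can only be checked by replaying it.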
I want to thank everyone for the insights shared on this
Harry Putnam wrote:
I would like some input about the use of zfs snapshot.
Auto-snapshot is nice on rpool, but on some of the other ZFS filesystems
I've created, that kind of frequency doesn't seem necessary.
The frequency is configurable on a per ZFS dataset basis:
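On OpenSolaris the auto-snapshot (time-slider) service reads per-dataset user properties, so the schedule can be thinned out where it isn't needed. A sketch, assuming the stock com.sun:auto-snapshot property names and hypothetical dataset names:

```shell
# Keep daily snapshots but drop the every-15-minutes ones on one dataset:
zfs set com.sun:auto-snapshot:frequent=false tank/media
zfs set com.sun:auto-snapshot:daily=true tank/media
# Opt a scratch dataset out of automatic snapshots entirely:
zfs set com.sun:auto-snapshot=false tank/scratch
```

Child datasets inherit user properties, so setting these near the top of the hierarchy covers everything below it.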
James Lever wrote:
Is there a mechanism by which you can perform a zfs send | zfs receive
and not have the data uncompressed and recompressed at the other end?
I have a gzip-9 compressed filesystem that I want to backup to a remote
system and would prefer not to have to recompress everything
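As far as I know there is no way to send the blocks in their on-disk compressed form; the stream is uncompressed and the receiving side recompresses according to its own compression property. What can be avoided is shipping the uncompressed bytes over the wire. A sketch with hypothetical names:

```shell
# Compress the stream in transit only; ZFS still recompresses on receive:
zfs send tank/gz9@snap | gzip -1 | ssh remote 'gunzip | zfs receive backup/gz9'

# ssh's built-in compression achieves much the same with less typing:
zfs send tank/gz9@snap | ssh -C remote 'zfs receive backup/gz9'
```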
Folks,
Need help with ZFS recovery following zfs create ...
We recently received new laptops (hardware refresh) and I simply transferred the
multiboot hdd (using OpenSolaris 2008.11 as the primary production OS) from the
old laptop to the new one (used the live DVD to do the zpool import,
Hi.
I recently transferred a zfs drive containing one pool from a FreeNAS 0.7RC1
box to a FreeNAS 0.7RC2 box.
I had earlier reinstalled the first box several times, and managed to get the
pool back just using the FreeNAS gui. However, after moving the drive to the
new box, and doing the same
It's a strange question anyway - you want a single file to have permissions
(suppose 755) in one directory, and some different permissions (suppose 700)
in some other directory? Then some users could access the file if they use
path A, but would be denied access to the same file if they
My point exactly. I'm being bold or brazen or ignorant by saying: there is
no point in doing a chmod that doesn't follow symlinks. Chmod should always
follow symlinks. That's why it's the default behavior, and that's why it's
rare, strange, or impossible to override that behavior.
As long as you're
I believe I had just redirected zfs send to a file, then ftp'ed the file. I
tried that after my script had been producing the error. It had been trying to
do zfs send | socat | zfs receive essentially.
--
This message posted from opensolaris.org
___
On Sun, 23 Aug 2009 22:05:15 -0400
Edward Ned Harvey sola...@nedharvey.com wrote:
zfs send -Rv rp...@0908 /net/remote/rpool/snaps/rpool.0908
The recommended thing is to zfs send | zfs receive
I have a zpool named backup for this purpose (mirrored).
Do I create a separate FS (backup/FS)
dick hoogendijk d...@nagual.nl wrote:
On Sun, 23 Aug 2009 22:05:15 -0400
Edward Ned Harvey sola...@nedharvey.com wrote:
zfs send -Rv rp...@0908 /net/remote/rpool/snaps/rpool.0908
The recommended thing is to zfs send | zfs receive
I have a zpool named backup for this purpose
You can zpool replace a bad slog device now.
From which kernel release is this implemented/working?
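Whatever the exact build it landed in, the command shape is the ordinary replace, naming the failed log device. Pool and device names below are hypothetical:

```shell
# Swap a dead slog for a new device; resilver for a log device is quick:
zpool replace tank c3t0d0 c4t0d0
zpool status tank   # confirm the log vdev shows ONLINE again
```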
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Joerg Schilling wrote:
dick hoogendijk d...@nagual.nl wrote:
On Sun, 23 Aug 2009 22:05:15 -0400
Edward Ned Harvey sola...@nedharvey.com wrote:
zfs send -Rv rp...@0908 /net/remote/rpool/snaps/rpool.0908
The recommended thing is to zfs send | zfs receive
I have a zpool named backup for this
On Mon, Aug 24, 2009 at 8:08 AM, Edward Ned Harveysola...@nedharvey.com wrote:
It's a strange question anyway - You want a single file to have permissions
(suppose 755) in one directory, and some different permissions (suppose 700)
in some other directory? Then some users could access the
Hi Thomas,
Yes, that is exactly what's happening to us. I've tried to share the zfs
inside the other zfs. Like so, but I'm still seeing an empty directory.
export1/dfazi sharenfs rw local
So in our particular setup, we have the following:
export1/dfazi
you could mount both, but you should talk to the nfs/zfs experts about the
proper way.
There's more than one method of doing this. What I did was something like
this:
I had a ZFS share called /store and another at /store/Video.
I wanted to share them via NFS, so I mount /store on /mnt/store on my
Gregory Skelton wrote:
Yes, that is exactly what's happening to us. I've tried to share the
zfs inside the other zfs. Like so, but I'm still seeing an empty directory.
What are you using for a client? What version of NFS?
NFSv4 in Solaris Nevada build 77 and later, or any OpenSolaris
On Aug 24, 2009, at 8:32 AM, Mike Gerdts wrote:
On Mon, Aug 24, 2009 at 8:08 AM, Edward Ned Harveysola...@nedharvey.com
wrote:
It's a strange question anyway - You want a single file to have permissions
(suppose 755) in one directory, and some different permissions (suppose 700)
in some
On Aug 23, 2009, at 8:12 PM, Daniel Carosone wrote:
On Sun, 23 Aug 2009, Daniel Carosone wrote:
Userland tools to read and verify a stream, without having to play
it into a pool (seek and io overhead) could really help here.
This assumes that the problem is data corruption of the stream,
On Mon, Aug 24, 2009 at 12:55, Richard Ellingrichard.ell...@gmail.com wrote:
Alice$ cd ~/proj1; ln -s /etc .,
Alice$ echo Hi helpdesk, Bob is on vacation and he has a bunch of
files in my home directory for a project that we are working on
together. Unfortunately, his umask was messed up and
On Aug 23, 2009, at 11:17 AM, dick hoogendijk wrote:
On Sun, 23 Aug 2009 09:54:07 PDT
Ross myxi...@googlemail.com wrote:
If you really want to store a backup, create another ZFS filesystem
somewhere and do a send/receive into it. Please don't try to dump
zfs send to a file and store the
Hi,
What are you using for a client? What version of NFS?
We're using Red Hat Enterprise Linux (CentOS) 5.3 for the clients, with
NFSv3.
NFSv4 in Solaris Nevada build 77 and later, or any OpenSolaris
versions, will do this mirror mounts stuff automatically.
Other than that, it's manual
On Aug 24, 2009, at 10:22 AM, Will Murnane wrote:
On Mon, Aug 24, 2009 at 12:55, Richard
Ellingrichard.ell...@gmail.com wrote:
Alice$ cd ~/proj1; ln -s /etc .,
Alice$ echo Hi helpdesk, Bob is on vacation and he has a bunch of
files in my home directory for a project that we are working on
Gregory Skelton wrote:
What are you using for a client? What version of NFS?
We're using Red Hat Enterprise Linux (CentOS) 5.3 for the clients, with
NFSv3.
You should try NFSv4 - Linux NFSv4 support came in with this
mirror mount support.
If it's possible, I'd still like to mount the base
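The client side of the mirror-mount behaviour is just an NFSv4 mount of the parent share. A sketch assuming a Linux client and the export1 share mentioned earlier in the thread:

```shell
# Mount the parent dataset with NFSv4; child ZFS datasets that are
# shared on the server then appear automatically beneath it:
mount -t nfs4 server:/export1 /mnt/export1
ls /mnt/export1/dfazi   # populated, not the empty stub NFSv3 shows
```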
I'm putting together a 48 bay NAS for my company [24 drives to start]. My
manager has already ordered 24 2TB [b]WD Caviar Green[/b] consumer drives -
should we send these back and order the 2TB [b]WD RE4-GP[/b] enterprise drives
instead?
I'm tempted to try these out. First off, they're about
Thanks Robert,
The older version of NFS was exactly the problem. NFSv3 made the
directories look like they were empty, while NFSv4 displays the contents of
each directory.
Many Thanks!
Gregory
On Mon, 24 Aug 2009, Robert Thurlow wrote:
Gregory Skelton wrote:
What are you using for
On Aug 24, 2009, at 11:10 AM, Ron Mexico wrote:
I'm putting together a 48 bay NAS for my company [24 drives to
start]. My manager has already ordered 24 2TB [b]WD Caviar Green[/b]
consumer drives - should we send these back and order the 2TB [b]WD
RE4-GP[/b] enterprise drives instead?
I
Suffice to say, 2 top-level raidz2 vdevs of similar size with copies=2
should offer very nearly the same protection as raidz2+1.
-- richard
This looks like the way to go. Thanks for your input. It's much appreciated!
On Mon, Aug 24, 2009 at 5:55 PM, Richard Ellingrichard.ell...@gmail.com wrote:
...
No it shouldn't.
Alice$ cd ~/proj1; ln -s /etc .,
Alice$ echo Hi helpdesk, Bob is on vacation and he has a bunch of
files in my home directory for a project that we are working on
together. Unfortunately,
On Mon, 24 Aug 2009, Albert Chin wrote:
Seems some of the new drives are having problems, resulting in CKSUM
errors. I don't understand why I have so many data errors though. Why
does the third raidz2 vdev report 34.0K CKSUM errors?
Is it possible that this third raidz2 is inflicted with a
Is there a formula to determine the optimal size of dedicated cache space for
zraid systems to improve speed?
On Mon, 24 Aug 2009 16:36:13 +0100
Darren J Moffat darr...@opensolaris.org wrote:
Joerg Schilling wrote:
dick hoogendijk d...@nagual.nl wrote:
On Sun, 23 Aug 2009 22:05:15 -0400
Edward Ned Harvey sola...@nedharvey.com wrote:
zfs send -Rv rp...@0908
Added a third raidz2 vdev to my pool:
pool: tww
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
Will Murnane will.murn...@gmail.com wrote:
Helpdesk$ pfexec chmod -fR a+rw /home/alice/proj1
Alice$ rm /etc/shadow
Alice$ cp myshadow /etc
Alice$ su -
root#
One could achieve the same result with a request to chmod a+rw
/etc/shadow, but this would be more noticeable.
One of my
Richard Elling richard.ell...@gmail.com wrote:
Helpdesk$ pfexec chmod -fR a+rw /home/alice/proj1
Alice$ rm /etc/shadow
Alice$ cp myshadow /etc
Alice$ su -
root#
One could achieve the same result with a request to chmod a+rw
/etc/shadow, but this would be more noticeable.
One
On Mon, Aug 24, 2009 at 02:01:39PM -0500, Bob Friesenhahn wrote:
On Mon, 24 Aug 2009, Albert Chin wrote:
Seems some of the new drives are having problems, resulting in CKSUM
errors. I don't understand why I have so many data errors though. Why
does the third raidz2 vdev report 34.0K CKSUM
On Sun, 23 Aug 2009 22:05:15 -0400
Edward Ned Harvey sola...@nedharvey.com wrote:
## Create Full snapshot and send it
zfs send sourc...@uniquesnapname | ssh somehost 'zfs receive -F
targe...@uniquesnapname'
this is what I want to do. However I want a recursive backup from the
root pool. From
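The recursive variant of the same recipe snapshots the whole pool tree and sends a replication stream. Snapshot and host names below are hypothetical:

```shell
# -r snapshots every dataset in the pool; -R sends them all, with properties:
zfs snapshot -r rpool@uniquesnapname
zfs send -R rpool@uniquesnapname | ssh somehost 'zfs receive -Fdu backup/rpool'
```

On the receive side, -d preserves the source dataset layout under backup/rpool and -u keeps the received filesystems from being mounted on the backup host.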
You can validate a stream stored as a file at any
time using the zfs receive -n option.
Interesting. Maybe it's just a documentation issue, but the man page doesn't
make it clear that this command verifies much more than the names in the
stream, and suggests that the rest of the data could
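A dry run over a stored stream looks like this (file name hypothetical); the point being made in the thread is that -n reads through the entire stream, not just its headers:

```shell
# -n: process the stream without writing anything to the pool;
# -v: report what would have been received. A corrupt stream errors out.
zfs receive -vn backup/verify < /backup/rpool.0908
```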
On Aug 24, 2009, at 5:22 PM, Daniel Carosone wrote:
You can validate a stream stored as a file at any
time using the zfs receive -n option.
Interesting. Maybe it's just a documentation issue, but the man
page doesn't make it clear that this command verifies much more than
the names in
On Mon, 24 Aug 2009, Daniel Carosone wrote:
I got burnt (thankfully only in testing) by a previous attempt to
use mirrors and resilvering with such files. They're ~useless once
detached. The downside is the need to completely re-write the
How about if you don't 'detach' them? Just unplug
How about if you don't 'detach' them? Just unplug
the backup device in the pair, plug in the
temporary replacement, and tell zfs to
replace the device.
Hm. I had tried a variant: a three-way mirror, with one device missing most of
the time. The annoyance of that was that the pool
sed -e s/real work name/$WORK/
;)
Thank You,
Sean Collins
Hi, I have just set up an iSCSI volume on ZFS to use with OS X as a backup disk
but performance is extremely bad. I am using the GlobalSAN iSCSI initiator.
Without getting into too much detail does anyone know whether there is a reason
why an iSCSI device seems to perform so badly ? I have
Hi Duncan,
I also do the same with my Mac for timemachine and get the same WOEFUL
performance to my x4500 filer.
I have mounted iSCSI zvols on a Linux machine and it performs as expected
(50 mbytes a second) as opposed to my Mac that goes @ 1 mbyte a second. I do
believe the client for Mac is
On Aug 24, 2009, at 10:02 PM, LEES, Cooper c...@ansto.gov.au wrote:
Hi Duncan,
I also do the same with my Mac for timemachine and get the same WOEFUL
performance to my x4500 filer.
I have mounted ISCSI zvols on a linux machine and it performs as
expected
(50 mbytes a second) as opposed to
Ross,
Do you have any links to doco / blog posts for time machine over NFS? I
would love to do this ... I have mounted NFS before with Connect to Server
/ Automoutner but don't know how to get Mac OS X to see it as a valid 'disk'
to backup to.
Ta,
Cooper
On 25/08/09 2:10 PM, Ross Walker