> From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
>
> Which man page are you referring to?
>
> I see the zfs receive -o syntax in the S11 man page.
Oh ... It's the latest OpenIndiana, so I suppose it must be a new feature
post-rev-28 in the non-open branch...
But it's no big deal
Hi Ned,
Which man page are you referring to?
I see the zfs receive -o syntax in the S11 man page.
The bottom line is that not all properties can be set on the
receiving side and the syntax is one property setting per -o
option.
See below for several examples.
Thanks,
Cindy
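A minimal illustration of the one-property-per-option form, reusing the dataset
names from Ned's command (illustrative only; whether a given property can be
overridden at receive time depends on the ZFS release):

  zfs send foo/bar@42 | zfs receive -o compression=on -o sync=disabled biz/baz

Note that the comma-separated form in the quoted command is not accepted; each
property needs its own -o flag.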
I don't think ver
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> zfs send foo/bar@42 | zfs receive -o compression=on,sync=disabled biz/baz
>
> I have not yet tried this syntax. Because you mentioned it, I looked for it
> in
> the man
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of bob netherton
>
> You can, with recv, override any property in the sending stream that can
> be
> set from the command line (ie, a writable).
>
> # zfs send repo/support@cpu-0412 | zfs recv
Hi All,
Just a follow-up - it seems that whatever it was doing eventually got
done and the speed picked back up again. The send/recv finally
finished -- I guess I could do with a little patience :)
Lachlan
On Mon, Dec 5, 2011 at 10:47 AM, Lachlan Mulcahy wrote:
> Hi All,
>
> We are cur
Hi Bob,
On Mon, Dec 5, 2011 at 12:31 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
>
>>
>> Anything else you suggest I'd check for faults? (Though I'm sort of
>> doubting it is an issue, I'm happy to be
>> thorough)
>>
>
> Try running
>
>
On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
Anything else you suggest I'd check for faults? (Though I'm sort of doubting it
is an issue, I'm happy to be
thorough)
Try running
fmdump -ef
and see if new low-level fault events are coming in during the zfs
receive.
Bob
--
Bob Friesenhahn
b
Hi Bob,
On Mon, Dec 5, 2011 at 11:19 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
>
>> genunix`list_next 5822 3.7%
>> unix`mach_cpu_idle 150261 96.1%
>>
>
>
On 12/05/11 10:47, Lachlan Mulcahy wrote:
> zfs`lzjb_decompress 10 0.0%
> unix`page_nextn 31 0.0%
> genunix`fsflush_do_pages 37 0.0%
> zfs`dbuf_free_range
On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
genunix`list_next 5822 3.7%
unix`mach_cpu_idle 150261 96.1%
Rather idle.
Top shows:
PID USERNAME NLWP PRI NICE SIZE RES STATE TIME CPU COMMAND
22945 root
Hi All,
We are currently doing a zfs send/recv with mbuffer to send incremental
changes across and it seems to be running quite slowly, with zfs receive
the apparent bottleneck.
The process itself seems to be using almost 100% of a single CPU in "sys"
time.
Wondering if anyone has any ideas if
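For reference, a typical mbuffer-buffered pipeline looks roughly like the sketch
below; the host name, dataset names, and buffer sizes are placeholders, not the
poster's actual setup:

  zfs send -i tank/fs@snap1 tank/fs@snap2 | mbuffer -s 128k -m 1G | \
    ssh desthost 'mbuffer -s 128k -m 1G | zfs receive -F tank/fs'

mbuffer's -s sets the block size and -m the buffer memory; buffering on both
ends smooths out bursts so that neither side stalls the other.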
On Jun 11, 2011, at 5:46 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>
>> See FEC suggestion from another poster ;)
>
> Well, of course, all storage media have built-in hardware FEC. At lea
On Jun 11, 2011, at 10:37, Edward Ned Harvey wrote:
>> From: David Magda [mailto:dma...@ee.ryerson.ca]
>> Sent: Saturday, June 11, 2011 9:38 AM
>>
>> These parity files use a forward error correction-style system that can be
>> used to perform data verification, and allow recovery when data is lo
> From: David Magda [mailto:dma...@ee.ryerson.ca]
> Sent: Saturday, June 11, 2011 9:38 AM
>
> These parity files use a forward error correction-style system that can be
> used to perform data verification, and allow recovery when data is lost or
> corrupted.
>
> http://en.wikipedia.org/wiki/Parch
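For context, a rough sketch of the parity-file idea using the par2 (Parchive)
tool; the file names and 10% redundancy ratio are illustrative, not from the
original post:

  zfs send -R pool/fs@snap > /backup/fs.zsend
  par2 create -r10 /backup/fs.zsend.par2 /backup/fs.zsend
  par2 verify /backup/fs.zsend.par2    # later: detect damage
  par2 repair /backup/fs.zsend.par2    # reconstruct from parity if possible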
2011-06-11 17:20, Edward Ned Harvey wrote:
From: David Magda [mailto:dma...@ee.ryerson.ca]
Sent: Saturday, June 11, 2011 9:04 AM
If one is saving streams to a disk, it may be worth creating parity files
for them
(especially if the destination file system is not ZFS):
Parity is just a really s
On Jun 11, 2011, at 09:20, Edward Ned Harvey wrote:
> Parity is just a really simple form of error detection. It's not very
> useful for error correction. If you look into error correction codes,
> you'll see there are many other codes which would be more useful for the
> purposes of zfs send da
> From: David Magda [mailto:dma...@ee.ryerson.ca]
> Sent: Saturday, June 11, 2011 9:04 AM
>
> If one is saving streams to a disk, it may be worth creating parity files
for them
> (especially if the destination file system is not ZFS):
Parity is just a really simple form of error detection. It's
On Jun 11, 2011, at 08:46, Edward Ned Harvey wrote:
> If you simply want to layer on some more FEC, there must be some standard
> generic FEC utilities out there, right?
> zfs send | fec > /dev/...
> Of course this will inflate the size of the data stream somewhat, but
> improves the relia
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> See FEC suggestion from another poster ;)
> Well, of course, all storage media have built-in hardware FEC. At least disk
& tape for sure. But naturally you can't always trust
On Jun 10, 2011, at 8:59 AM, David Magda wrote:
> On Fri, June 10, 2011 07:47, Edward Ned Harvey wrote:
>
>> #1 A single bit error causes checksum mismatch and then the whole data
>> stream is not receivable.
>
> I wonder if it would be worth adding a (toggleable?) forward error
> correction (F
> I stored a snapshot stream to a file
The tragic irony here is that the file was stored on a non-ZFS filesystem. You
had undetected bitrot that silently corrupted the stream. Other files
might have been silently corrupted as well.
You may have just made one of the strongest case
On Fri, Jun 10, 2011 at 8:59 AM, Jim Klimov wrote:
> Is such "tape" storage only intended for reliable media such as
> another ZFS or triple-redundancy tape archive with fancy robotics?
> How would it cope with BER in transfers to/from such media?
Large and small businesses have been using T
2011-06-10 20:58, Marty Scholes wrote:
If it is true that unlike ZFS itself, the replication
stream format has
no redundancy (even of ECC/CRC sort), how can it be
used for
long-term retention "on tape"?
It can't. I don't think it has been documented anywhere, but I believe that it
has been wel
> If it is true that unlike ZFS itself, the replication
> stream format has
> no redundancy (even of ECC/CRC sort), how can it be
> used for
> long-term retention "on tape"?
It can't. I don't think it has been documented anywhere, but I believe that it
has been well understood that if you don't
On Fri, June 10, 2011 07:47, Edward Ned Harvey wrote:
> #1 A single bit error causes checksum mismatch and then the whole data
> stream is not receivable.
I wonder if it would be worth adding a (toggleable?) forward error
correction (FEC) [1] scheme to the 'zfs send' stream.
Even if we're talki
2011-06-10 15:58, Darren J Moffat wrote:
As I pointed out last time this came up the NDMP service on Solaris 11
Express and on the Oracle ZFS Storage Appliance uses the 'zfs send'
stream as what is to be stored on the "tape".
This discussion is getting interesting ;)
Just curious: how do these
On 06/10/11 12:47, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jonathan Walker
New to ZFS, I made a critical error when migrating data and
configuring zpools according to needs - I stored a snapshot stream to
a fil
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jonathan Walker
>
> New to ZFS, I made a critical error when migrating data and
> configuring zpools according to needs - I stored a snapshot stream to
> a file using "zfs send -R [filesystem]@
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> Besides, the format
> is not public and subject to change, I think. So future compatibility
> is not guaranteed.
That is not correct.
Years ago, there was a comment in the ma
On 09/06/11 1:33 PM, Paul Kraus wrote:
> On Thu, Jun 9, 2011 at 1:17 PM, Jim Klimov wrote:
> >> 2011-06-09 18:52, Paul Kraus wrote:
>>>
>>> On Thu, Jun 9, 2011 at 8:59 AM, Jonathan Walker wrote:
>>>
New to ZFS, I made a critical error when migrating data and
configuring zpools according t
>> New to ZFS, I made a critical error when migrating data and
>> configuring zpools according to needs - I stored a snapshot stream to
>> a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]".
>
>Why is this a critical error? I thought you were supposed to be
>able to save the outp
2011-06-09 21:33, Paul Kraus wrote:
If some bits in the saved file flipped,
Then you have a bigger problem, namely that the file was corrupted.
That is not a limitation of the zfs send format. If the stream gets
corrupted via network transmission you have the same problem.
No, it is not quite a
On Thu, Jun 9, 2011 at 1:17 PM, Jim Klimov wrote:
> 2011-06-09 18:52, Paul Kraus wrote:
>>
>> On Thu, Jun 9, 2011 at 8:59 AM, Jonathan Walker wrote:
>>
>>> New to ZFS, I made a critical error when migrating data and
>>> configuring zpools according to needs - I stored a snapshot stream to
>>> a f
2011-06-09 18:52, Paul Kraus wrote:
On Thu, Jun 9, 2011 at 8:59 AM, Jonathan Walker wrote:
New to ZFS, I made a critical error when migrating data and
configuring zpools according to needs - I stored a snapshot stream to
a file using "zfs send -R [filesystem]@[snapshot]>[stream_file]".
W
On Thu, Jun 9, 2011 at 8:59 AM, Jonathan Walker wrote:
> New to ZFS, I made a critical error when migrating data and
> configuring zpools according to needs - I stored a snapshot stream to
> a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]".
Why is this a critical error? I th
Hey all,
New to ZFS, I made a critical error when migrating data and
configuring zpools according to needs - I stored a snapshot stream to
a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]".
When I attempted to receive the stream onto to the newly configured
pool, I ended up with a
OK FORGET IT... I MUST BE VERY TIRED AND CONFUSED ;-(
I have an additional problem, which worries me.
I tried different ways of sending/receiving my data pool.
I took some snapshots, sent them, then destroyed them, using destroy -r.
AFAIK this should not have affected the filesystem's _current_ state, or am I
misled?
Now I succeeded to send a snapsho
Actually I succeeded using:
# zfs create ezdata/data
# zfs send -RD d...@prededup | zfs recv -duF ezdata/data
I still have to check the result, though
Sorry if my question was confusing.
Yes, I'm wondering about the catch-22 resulting from the two errors: it means we
are not able to send/receive a pool's root filesystem without using -F.
The zpool list was just meant to show that it was a whole pool...
Bruno
> amber ~ # zpool list data
> NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
> data   930G   295G  635G  31%  1.00x  ONLINE  -
>
> amber ~ # zfs send -RD d...@prededup |zfs recv -d ezdata
> cannot receive new filesystem stream: destination 'ezdata' exists
> must specify -F to overwrit
amber ~ # zpool list data
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
data   930G   295G  635G  31%  1.00x  ONLINE  -
amber ~ # zfs send -RD d...@prededup |zfs recv -d ezdata
cannot receive new filesystem stream: destination 'ezdata' exists
must specify -F to overwrite it
amber ~
On Sat, Dec 19, 2009 at 3:56 AM, Steven Sim wrote:
> r...@sunlight:/root# zfs list -r myplace/Docs
> NAME USED AVAIL REFER MOUNTPOINT
> myplace/Docs 3.37G 1.05T 3.33G
> /export/home/admin/Docs/e/Docs <--- *** Here is the extra "e/Docs"..
I saw a sim
Hi;
After some very hairy testing, I came up with the following procedure
for sending a zfs send datastream to a gzip staging file and later
"receiving" it back to the same filesystem in the same pool.
The above was to enable the filesystem data to be deduplicated.
However, after the final ZFS r
Hi;
After some very hairy testing, I came up with the following procedure
for sending a zfs send datastream to a gzip staging file and later
"receiving" it back to the same filesystem in the same pool.
The above was to enable the filesystem data to be deduplicated.
Here is the procedure and under c
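A minimal sketch of that kind of staging procedure, with placeholder dataset and
file names (not necessarily the poster's exact sequence):

  zfs snapshot -r tank/data@stage
  zfs send -R tank/data@stage | gzip > /staging/data.zfs.gz
  zfs destroy -r tank/data
  gunzip -c /staging/data.zfs.gz | zfs receive -d -F tank

With dedup enabled on the pool before the receive, the rewritten blocks are
deduplicated as they are written back.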
On Mon, Sep 28, 2009 at 03:16:17PM -0700, Igor Velkov wrote:
> Not as good as I hoped.
> zfs send -R xxx/x...@daily_2009-09-26_23:51:00 |ssh -c blowfish r...@xxx.xx
> zfs recv -vuFd xxx/xxx
>
> invalid option 'u'
> usage:
> receive [-vnF]
> receive [-vnF] -d
>
> For the property
On 09/28/09 16:16, Igor Velkov wrote:
Not as good as I hoped.
zfs send -R xxx/x...@daily_2009-09-26_23:51:00 |ssh -c blowfish r...@xxx.xx zfs
recv -vuFd xxx/xxx
invalid option 'u'
usage:
receive [-vnF]
receive [-vnF] -d
For the property list, run: zfs set|get
For the delegate
Not as good as I hoped.
zfs send -R xxx/x...@daily_2009-09-26_23:51:00 |ssh -c blowfish r...@xxx.xx zfs
recv -vuFd xxx/xxx
invalid option 'u'
usage:
receive [-vnF]
receive [-vnF] -d
For the property list, run: zfs set|get
For the delegated permission list, run: zfs allow|unallo
Wah!
Thank you, lalt!
On 09/28/09 15:54, Igor Velkov wrote:
zfs receive should allow an option to disable immediate mounting of a received filesystem.
If the original filesystem's mountpoints have changed, it's hard to make a clone fs with send-receive, because the received filesystem immediately tries to mount at the old mountpoint
zfs receive should allow an option to disable immediate mounting of a received
filesystem.
If the original filesystem's mountpoints have changed, it's hard to make a
clone fs with send-receive, because the received filesystem immediately tries
to mount at the old mountpoint, which is locked by the source fs.
In case
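For what it's worth, on builds where zfs receive supports a -u option, the
received filesystem is left unmounted so its mountpoint can be adjusted before
anything tries to mount it; a sketch with placeholder names:

  zfs send -R tank/fs@snap | zfs receive -u -d -F backup
  zfs set mountpoint=/backup/fs backup/fs
  zfs mount backup/fs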
Hi,
One thing I miss in zfs is the ability to override an attribute value
in zfs receive - something like the -o option in zfs create. This
option would be particularly useful with zfs send -R to make a backup
and be sure that the destination won't be mounted
zfs send -R f...@snap | ss
David Dyer-Bennet wrote:
Solaris 2008.11
r...@fsfs:/export/home/localddb/src/bup2# zfs send -R -I
bup-20090223-033745UTC z...@bup-20090225-184857utc > foobar
r...@fsfs:/export/home/localddb/src/bup2# ls -l --si foobar
-rw-r--r-- 1 root root 2.4G 2009-02-27 21:24 foobar
r...@fsfs:/export/home/l
Solaris 2008.11
r...@fsfs:/export/home/localddb/src/bup2# zfs send -R -I
bup-20090223-033745UTC z...@bup-20090225-184857utc > foobar
r...@fsfs:/export/home/localddb/src/bup2# ls -l --si foobar
-rw-r--r-- 1 root root 2.4G 2009-02-27 21:24 foobar
r...@fsfs:/export/home/localddb/src/bup2# zfs recei
Thanks, Matt. Are you interested in feedback on various questions regarding
how to display results? On list or off? Thanks.
Robert Lawhead wrote:
> Apologies up front for failing to find related posts...
> Am I overlooking a way to get 'zfs send -i [EMAIL PROTECTED] [EMAIL
> PROTECTED] | zfs receive -n -v ...' to show the contents of the stream? I'm
> looking for the equivalent of ufsdump 1f - fs ... | ufsrestore tv
Apologies up front for failing to find related posts...
Am I overlooking a way to get 'zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED]
| zfs receive -n -v ...' to show the contents of the stream? I'm looking for
the equivalent of ufsdump 1f - fs ... | ufsrestore tv - . I'm hoping that this
mig
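A minimal sketch of the dry-run form being asked about, with placeholder dataset
names; note that -n -v reports the snapshots and destinations in the stream, not
a per-file listing like ufsrestore tv:

  zfs send -i tank/fs@snap1 tank/fs@snap2 | zfs receive -n -v backup/fs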
Thanks, Will, but for the solution I'm building, I can't predict the
hardware the VM will run on, and I don't want to restrict it to the
limited list in the Processor Check document. So unfortunately I think
I'm stuck running 32-bit Solaris for this one.
So it would be nice to have zfs not hang (o
On 7/5/07, David Goldsmith <[EMAIL PROTECTED]> wrote:
> 2. I'm running S10U3 as 32-bit. I don't know if I can run 64-bit Solaris
> 10 with 32-bit Linux as the host OS. Does anyone know if that will work?
> If so, I'll give it a shot.
ISTR that if you have hardware virtualization (Intel VT, on Core
Hi, all,
Environment: S10U3 running as VMWare Workstation 6 guest; Fedora 7 is
the VMWare host, 1 GB RAM
I'm creating a solution in which I need to be able to save off state on
one host, then restore it on another. I'm using ZFS snapshots with ZFS
receive and it's all working fine, except for som
Hi, Matt,
1. My VMWare host has 4 GB. The VMWare guest (Solaris 10) has 1 GB. I
think that at one point I reset the guest to have 2 GB and ran into the
same problem, but I'm not 100% sure. If you think it's worth trying, I
will.
2. I'm running S10U3 as 32-bit. I don't know if I can run 64-bit Sol
David Goldsmith wrote:
> - Restore the state of the second host to the initial state (using zfs
> rollback -r snapshotname)
> - Run zfs receive for a second time on the second host
>
> Now the second host appears to lock up. I wait half an hour and the zfs
> receive command has not completed. I tr
Russell Aspinwall wrote:
Hi,
As part of a disk subsystem upgrade I am thinking of using ZFS but there are two issues at present
1) The current filesystems are mounted as /hostname/mountpoint
except for one directory where the mount point is /. Is it possible to mount a ZFS
filesystem as /h
Hi,
As part of a disk subsystem upgrade I am thinking of using ZFS but there are
two issues at present
1) The current filesystems are mounted as /hostname/mountpoint except for one
directory where the mount point is /.
Is it possible to mount a ZFS filesystem as /hostname// so that /ho
On Mon, 23 Apr 2007, Eric Schrock wrote:
> On Mon, Apr 23, 2007 at 11:48:53AM -0700, Lyle Merdan wrote:
> > So If I send a snapshot of a filesystem to a receive command like this:
> > zfs send tank/[EMAIL PROTECTED] | zfs receive backup/jump
> >
> > In order to get compression turned on, am I corr
On Mon, 23 Apr 2007, Lyle Merdan wrote:
> So If I send a snapshot of a filesystem to a receive command like this:
> zfs send tank/[EMAIL PROTECTED] | zfs receive backup/jump
>
> In order to get compression turned on, am I correct in my thought that I
> need to start the send/receive and then in a
On Mon, Apr 23, 2007 at 11:48:53AM -0700, Lyle Merdan wrote:
> So If I send a snapshot of a filesystem to a receive command like this:
> zfs send tank/[EMAIL PROTECTED] | zfs receive backup/jump
>
> In order to get compression turned on, am I correct in my thought that
> I need to start the send/r
So If I send a snapshot of a filesystem to a receive command like this:
zfs send tank/[EMAIL PROTECTED] | zfs receive backup/jump
In order to get compression turned on, am I correct in my thought that I need
to start the send/receive and then in a separate window set the compression
property?
O
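One possible approach, offered only as a sketch with placeholder dataset names
(not necessarily what the thread settled on): since a plain zfs send does not
carry properties, setting compression on the destination's parent beforehand
lets the received filesystem inherit it from the first block written:

  zfs set compression=on backup
  zfs send tank/jump@snap | zfs receive backup/jump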
Jeff Victor wrote:
If I add a ZFS dataset to a zone, and then want to "zfs send" from
another computer into a file system that the zone has created in that
data set, can I "zfs send" to the zone, or can I send to that zone's
global zone, or will either of those work?
I believe that the 'zfs s
If I add a ZFS dataset to a zone, and then want to "zfs send" from another
computer into a file system that the zone has created in that data set, can I "zfs
send" to the zone, or can I send to that zone's global zone, or will either of
those work?
Hi,
I'm running some experiments with zfs send and receive on Solaris 10u2
between two different machines. On server 1 I have the following
data/zones/app1                838M  26.5G   836M  /zones/app1
data/zones/[EMAIL PROTECTED]  2.35M      -   832M  -
I have a script that creates a new snapshot and