Re: [zfs-discuss] zfs receive options (was S11 vs illumos zfs compatibility)

2012-12-22 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
> 
> Which man page are you referring to?
> 
> I see the zfs receive -o syntax in the S11 man page.

Oh ...  It's the latest OpenIndiana.  So I suppose it must be a new feature 
post-rev-28 in the non-open branch...

But it's no big deal.  I found that if I "zfs create" and then "zfs set" a few 
times, and then "zfs receive" I get the desired behavior.
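
In case it helps anyone else stuck on a pre-"receive -o" build, the workaround
amounts to something like this (pool and property names here are only an
illustration, not exactly what I ran):

# zfs create tank/backups
# zfs set compression=on tank/backups
# zfs set sync=disabled tank/backups
# zfs send foo/bar@42 | zfs receive tank/backups/bar

The received child dataset picks those settings up by inheritance, as long as
the stream itself doesn't carry conflicting properties (i.e. it wasn't sent
with -p or -R).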



Re: [zfs-discuss] zfs receive options (was S11 vs illumos zfs compatibility)

2012-12-21 Thread Cindy Swearingen

Hi Ned,

Which man page are you referring to?

I see the zfs receive -o syntax in the S11 man page.

The bottom line is that not all properties can be set on the
receiving side and the syntax is one property setting per -o
option.

See below for several examples.

Thanks,

Cindy

I don't think version is a property that can be set on the
receiving side. The version must be specified when the file
system is created:

# zfs create -o version=5 tank/joe

You can't change the blocksize on the receiving side either, because
it is applied in the I/O path. You can use shadow migration to
migrate a file system to a new blocksize.
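
Roughly, shadow migration is driven by the shadow property at creation time
of the new file system; a sketch from memory (paths and names illustrative,
so double-check the S11 man page before relying on it):

# zfs create -o recordsize=16k -o shadow=file:///export/old_fs pond/new_fs

The new file system is then populated from the old one in the background,
rewriting the data with its own recordsize.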

This syntax fails because the supported form is one "-o
property=value" per option, not a comma-separated list:

# zfs send tank/home/cindy@now | zfs receive -o compression=on,sync=disabled pond/cindy.backup
cannot receive new filesystem stream: 'compression' must be one of 'on | off | lzjb | gzip | gzip-[1-9] | zle'

Set multiple properties like this:

# zfs send tank/home/cindy@now | zfs receive -o compression=on -o sync=disabled pond/cindy.backup2

Enabling compression on the receiving side works, but verifying
the compression can't be done with ls.

The data is compressed on the receiving side:

# zfs list -r pond | grep data
pond/cdata   168K  63.5G   168K  /pond/cdata
pond/nocdata 289K  63.5G   289K  /pond/nocdata

# zfs send -p pond/nocdata@snap1 | zfs recv -Fo compression=on rpool/cdata

# zfs get compression pond/nocdata
NAME  PROPERTY VALUE  SOURCE
pond/nocdata  compression  offdefault
# zfs get compression rpool/cdata
NAME PROPERTY VALUE  SOURCE
rpool/cdata  compression  on local

You can't see the compressed size with the ls command:

# ls -lh /pond/nocdata/file.1
-r--r--r--   1 root root  202K Dec 21 13:52 /pond/nocdata/file.1
# ls -lh /rpool/cdata/file.1
-r--r--r--   1 root root  202K Dec 21 13:52 /rpool/cdata/file.1

You can see the size difference with zfs list:

# zfs list -r pond rpool | grep data
pond/cdata  168K  63.5G   168K  /pond/cdata
pond/nocdata289K  63.5G   289K  /pond/nocdata
rpool/cdata 168K  47.6G   168K  /rpool/cdata

You can also see the size differences with du -h:

# du -h pond/nocdata/file.1
 258K   pond/nocdata/file.1
# du -h rpool/cdata/file.1
 137K   rpool/cdata/file.1


On 12/21/12 11:41, Edward Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey

zfs send foo/bar@42 | zfs receive -o compression=on,sync=disabled biz/baz

I have not yet tried this syntax.  Because you mentioned it, I looked for it in
the man page, and because it's not there, I hesitate before using it.


Also, readonly=on



Re: [zfs-discuss] zfs receive options (was S11 vs illumos zfs compatibility)

2012-12-21 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
> 
> zfs send foo/bar@42 | zfs receive -o compression=on,sync=disabled biz/baz
> 
> I have not yet tried this syntax.  Because you mentioned it, I looked for it 
> in
> the man page, and because it's not there, I hesitate before using it.

Also, readonly=on
...
and ...
Bummer.  When I try zfs receive with -o, I get the message:
invalid option 'o'



Re: [zfs-discuss] zfs receive options (was S11 vs illumos zfs compatibility)

2012-12-21 Thread Edward Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
> 
> zfs send foo/bar@42 | zfs receive -o compression=on,sync=disabled biz/baz
> 
> I have not yet tried this syntax.  Because you mentioned it, I looked for it 
> in
> the man page, and because it's not there, I hesitate before using it.

Also, readonly=on



[zfs-discuss] zfs receive options (was S11 vs illumos zfs compatibility)

2012-12-21 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of bob netherton
> 
> You can, with recv, override any property in the sending stream that can
> be set from the command line (i.e., a writable property).
> 
> # zfs send repo/support@cpu-0412 | zfs recv -o version=4 repo/test
> cannot receive: cannot override received version

Are you sure you can do this with other properties?  It's not in the man page.  
I would like to set the compression & sync on the receiving end:

zfs send foo/bar@42 | zfs receive -o compression=on,sync=disabled biz/baz

I have not yet tried this syntax.  Because you mentioned it, I looked for it in 
the man page, and because it's not there, I hesitate before using it.



Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Lachlan Mulcahy
Hi All,

Just a follow up - it seems like whatever it was doing it eventually got
done with and the speed picked back up again. The send/recv finally
finished -- I guess I could do with a little patience :)

Lachlan

On Mon, Dec 5, 2011 at 10:47 AM, Lachlan Mulcahy  wrote:

> Hi All,
>
> We are currently doing a zfs send/recv with mbuffer to send incremental
> changes across and it seems to be running quite slowly, with zfs receive
> the apparent bottleneck.
>
> The process itself seems to be using almost 100% of a single CPU in "sys"
> time.
>
> Wondering if anyone has any ideas if this is normal or if this is just
> going to run forever and never finish...
>
>
> details - two machines connected via Gigabit Ethernet on the same LAN.
>
> Sending server:
>
> zfs send -i 20111201_1 data@20111205_1 | mbuffer -s 128k -m 1G -O
> tdp03r-int:9090
>
> Receiving server:
>
> mbuffer -s 128k -m 1G -I 9090 | zfs receive -vF tank/db/data
>
> mbuffer showing:
>
> in @  256 KiB/s, out @  256 KiB/s,  306 GiB total, buffer 100% full
>
>
>
> My debug:
>
> DTraceToolkit hotkernel reports:
>
> zfs`lzjb_decompress            10   0.0%
> unix`page_nextn                31   0.0%
> genunix`fsflush_do_pages       37   0.0%
> zfs`dbuf_free_range           183   0.1%
> genunix`list_next            5822   3.7%
> unix`mach_cpu_idle         150261  96.1%
>
>
> Top shows:
>
>    PID USERNAME NLWP PRI NICE  SIZE   RES STATE    TIME     CPU COMMAND
>  22945 root        1  60    0   13M 3004K cpu/6  144:21   3.79% zfs
>    550 root       28  59    0   39M   22M sleep   10:19   0.06% fmd
>
> I'd say the 3.7% or so here looks low because it is aggregate CPU usage
> rather than per-CPU. mpstat seems to show the real story.
>
> mpstat 1 shows output much like this each second:
>
> CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
>   00   00   329  108   830   1730 00   0   0 100
>   10   00   1001   940   2310 00   0   0 100
>   20   00320   280510 00   0   0 100
>   30   00180   110000 00   0   0 100
>   40   00166   100200 00   0   0 100
>   50   00 6020000 00   0   0 100
>   60   00 2000000 00   0   0 100
>   70   00 9040000160   0   0 100
>   80   00 6030000 00   3   0  97
>   90   00 3100000 00   0   0 100
>  100   00222   350110 00  89   0  11
>  110   00 2000000 00   0   0 100
>  120   00 3020100 20   0   0 100
>  130   00 2000000 00   0   0 100
>  140   0024   1760020610   0   0 100
>  150   00140   240010 20   0   0 100
>  160   00 2000000 00   0   0 100
>  170   0010280050780   1   0  99
>  180   00 2000000 00   0   0 100
>  190   00 5120000100   0   0 100
>  200   00 2000000 00   0   0 100
>  210   00 9240000 40   0   0 100
>  220   00 4000000 00   0   0 100
>  230   00 2000000 00   0   0 100
>
>
> So I'm led to believe that zfs receive is spending almost 100% of a
> single CPU's time doing a lot of genunix`list_next ...
>
> Any ideas what is going on here?
>
> Best Regards,
> --
> Lachlan Mulcahy
> Senior DBA,
> Marin Software Inc.
> San Francisco, USA
>
> AU Mobile: +61 458 448 721
> US Mobile: +1 (415) 867 2839
> Office : +1 (415) 671 6080
>
>


-- 
Lachlan Mulcahy
Senior DBA,
Marin Software Inc.
San Francisco, USA

AU Mobile: +61 458 448 721
US Mobile: +1 (415) 867 2839
Office : +1 (415) 671 6080


Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Lachlan Mulcahy
Hi Bob,


On Mon, Dec 5, 2011 at 12:31 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
>
>>
>> Anything else you suggest I'd check for faults? (Though I'm sort of
>> doubting it is an issue, I'm happy to be
>> thorough)
>>
>
> Try running
>
>  fmdump -ef
>
> and see if new low-level fault events are coming in during the zfs
> receive.
>
>
Just a bunch of what I would guess is unrelated USB errors?

Dec 05 20:58:09.0246 ereport.io.usb.epse
Dec 05 20:58:50.9207 ereport.io.usb.epse
Dec 05 20:59:36.0242 ereport.io.usb.epse
Dec 05 20:59:39.0230 ereport.io.usb.epse
Dec 05 21:00:21.0223 ereport.io.usb.epse
Dec 05 21:01:06.0215 ereport.io.usb.epse
Dec 05 21:01:09.0314 ereport.io.usb.epse
Dec 05 21:01:50.9213 ereport.io.usb.epse
Dec 05 21:02:36.0299 ereport.io.usb.epse
Dec 05 21:02:39.0298 ereport.io.usb.epse

Regards,
-- 
Lachlan Mulcahy
Senior DBA,
Marin Software Inc.
San Francisco, USA

AU Mobile: +61 458 448 721
US Mobile: +1 (415) 867 2839
Office : +1 (415) 671 6080


Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Bob Friesenhahn

On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:


Anything else you suggest I'd check for faults? (Though I'm sort of doubting it 
is an issue, I'm happy to be
thorough)


Try running

  fmdump -ef

and see if new low-level fault events are coming in during the zfs 
receive.
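
If anything does show up there, "fmdump -eV" prints the full detail for each
error event, which usually names the driver and device involved:

   fmdump -eV | less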


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Lachlan Mulcahy
Hi Bob,

On Mon, Dec 5, 2011 at 11:19 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
>
>> genunix`list_next            5822   3.7%
>> unix`mach_cpu_idle         150261  96.1%
>>
>
> Rather idle.


Actually this is a bit misleading since it is averaged over all the cores
in the system.

This system has 24 cores, so roughly 90% of a single core spent in system
time works out to about 3.75% of the aggregate:

90 / 24 = 3.75

Similarly 23/24 cores are close to 100% idle.

 Top shows:
>
>    PID USERNAME NLWP PRI NICE  SIZE   RES STATE    TIME     CPU COMMAND
>  22945 root        1  60    0   13M 3004K cpu/6  144:21   3.79% zfs
>    550 root       28  59    0   39M   22M sleep   10:19   0.06% fmd
>

> Having 'fmd' (fault monitor daemon) show up in top at all is rather ominous
> since it implies that faults are actively being reported.  You might want
> to look into what is making it active.


Nothing in /var/adm/messages as this is the last two days entries:

Dec  4 18:46:20 mslvstdp03r sshd[20926]: [ID 800047 auth.crit] fatal: Read
from socket failed: Connection reset by peer
Dec  5 01:40:07 mslvstdp03r sshd[21808]: [ID 800047 auth.crit] fatal: Read
from socket failed: Connection reset by peer


Anything else you suggest I'd check for faults? (Though I'm sort of
doubting it is an issue, I'm happy to be thorough)



> So I'm led to believe that zfs receive is spending almost 100% of a
> single CPU's time doing a lot of genunix`list_next ...
>

> Or maybe it is only doing 3.7% of this.  There seems to be a whole lot of
> nothing going on.


See the math above -- also mpstat shows a single CPU at around 90% system
time, and since top reports zfs as the only active process, the
circumstantial evidence is fairly indicative.


> Any ideas what is going on here?
>

> Definitely check if low-level faults are being reported to fmd.

Are there any logs other than /var/adm/messages I should check? (Apologies, I'm
quite new to (Open)Solaris.)

Regards,
-- 
Lachlan Mulcahy
Senior DBA,
Marin Software Inc.
San Francisco, USA

AU Mobile: +61 458 448 721
US Mobile: +1 (415) 867 2839
Office : +1 (415) 671 6080


Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Bill Sommerfeld
On 12/05/11 10:47, Lachlan Mulcahy wrote:
> zfs`lzjb_decompress            10   0.0%
> unix`page_nextn                31   0.0%
> genunix`fsflush_do_pages       37   0.0%
> zfs`dbuf_free_range           183   0.1%
> genunix`list_next            5822   3.7%
> unix`mach_cpu_idle         150261  96.1%

your best bet in a situation like this -- where there's a lot of cpu time
spent in a generic routine -- is to use an alternate profiling method that
shows complete stack traces rather than just the top function on the stack.

often the names of functions two or three or four deep in the stack will point
at what's really responsible.

something as simple as:

dtrace -n 'profile-1001 { @[stack()] = count(); }'

(let it run for a bit then interrupt it).

should show who's calling list_next() so much.
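
if the rest of the box really is as idle as mpstat suggests, a variant of the
same idea keeps the output manageable (untested here; 30-second sample, top 20
stacks, all numbers arbitrary):

dtrace -n 'profile-1001 /arg0/ { @[stack()] = count(); } tick-30s { trunc(@, 20); exit(0); }'

the /arg0/ predicate keeps only samples taken in the kernel, and the
aggregation is printed automatically on exit.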

- Bill


Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Bob Friesenhahn

On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:

genunix`list_next            5822   3.7%
unix`mach_cpu_idle         150261  96.1%


Rather idle.


Top shows:

   PID USERNAME NLWP PRI NICE  SIZE   RES STATE    TIME     CPU COMMAND
 22945 root        1  60    0   13M 3004K cpu/6  144:21   3.79% zfs
   550 root       28  59    0   39M   22M sleep   10:19   0.06% fmd


Having 'fmd' (fault monitor daemon) show up in top at all is rather 
ominous since it implies that faults are actively being reported.  You 
might want to look into what is making it active.



So I'm led to believe that zfs receive is spending almost 100% of a single
CPU's time doing a lot of genunix`list_next ...


Or maybe it is only doing 3.7% of this.  There seems to be a whole lot 
of nothing going on.



Any ideas what is going on here?


Definitely check if low-level faults are being reported to fmd.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


[zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Lachlan Mulcahy
Hi All,

We are currently doing a zfs send/recv with mbuffer to send incremental
changes across and it seems to be running quite slowly, with zfs receive
the apparent bottleneck.

The process itself seems to be using almost 100% of a single CPU in "sys"
time.

Wondering if anyone has any ideas if this is normal or if this is just
going to run forever and never finish...


details - two machines connected via Gigabit Ethernet on the same LAN.

Sending server:

zfs send -i 20111201_1 data@20111205_1 | mbuffer -s 128k -m 1G -O
tdp03r-int:9090

Receiving server:

mbuffer -s 128k -m 1G -I 9090 | zfs receive -vF tank/db/data

mbuffer showing:

in @  256 KiB/s, out @  256 KiB/s,  306 GiB total, buffer 100% full



My debug:

DTraceToolkit hotkernel reports:

zfs`lzjb_decompress            10   0.0%
unix`page_nextn                31   0.0%
genunix`fsflush_do_pages       37   0.0%
zfs`dbuf_free_range           183   0.1%
genunix`list_next            5822   3.7%
unix`mach_cpu_idle         150261  96.1%


Top shows:

   PID USERNAME NLWP PRI NICE  SIZE   RES STATE    TIME     CPU COMMAND
 22945 root        1  60    0   13M 3004K cpu/6  144:21   3.79% zfs
   550 root       28  59    0   39M   22M sleep   10:19   0.06% fmd

I'd say the 3.7% or so here looks low because it is aggregate CPU usage rather
than per-CPU. mpstat seems to show the real story.

mpstat 1 shows output much like this each second:

CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  00   00   329  108   830   1730 00   0   0 100
  10   00   1001   940   2310 00   0   0 100
  20   00320   280510 00   0   0 100
  30   00180   110000 00   0   0 100
  40   00166   100200 00   0   0 100
  50   00 6020000 00   0   0 100
  60   00 2000000 00   0   0 100
  70   00 9040000160   0   0 100
  80   00 6030000 00   3   0  97
  90   00 3100000 00   0   0 100
 100   00222   350110 00  89   0  11
 110   00 2000000 00   0   0 100
 120   00 3020100 20   0   0 100
 130   00 2000000 00   0   0 100
 140   0024   1760020610   0   0 100
 150   00140   240010 20   0   0 100
 160   00 2000000 00   0   0 100
 170   0010280050780   1   0  99
 180   00 2000000 00   0   0 100
 190   00 5120000100   0   0 100
 200   00 2000000 00   0   0 100
 210   00 9240000 40   0   0 100
 220   00 4000000 00   0   0 100
 230   00 2000000 00   0   0 100


So I'm led to believe that zfs receive is spending almost 100% of a single
CPU's time doing a lot of genunix`list_next ...

Any ideas what is going on here?

Best Regards,
-- 
Lachlan Mulcahy
Senior DBA,
Marin Software Inc.
San Francisco, USA

AU Mobile: +61 458 448 721
US Mobile: +1 (415) 867 2839
Office : +1 (415) 671 6080


Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-12 Thread Richard Elling
On Jun 11, 2011, at 5:46 AM, Edward Ned Harvey wrote:

>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>> 
>> See FEC suggestion from another poster ;)
> 
> Well, of course, all storage media have built-in hardware FEC.  At least 
> disk & tape for sure.  But naturally you can't always trust it blindly...
> 
> If you simply want to layer on some more FEC, there must be some standard 
> generic FEC utilities out there, right?
>   zfs send | fec > /dev/...
> Of course this will inflate the size of the data stream somewhat, but 
> improves the reliability...

The problem is that many FEC algorithms are good at correcting a few bits. For 
example, disk 
drives tend to correct somewhere on the order of 8 bytes per block. Tapes can 
correct more bytes
per block. I've collected a large number of error reports showing the bitwise 
analysis of data
corruption we've seen in ZFS and there is only one case where a stuck bit was 
detected. Most of
the corruptions I see are multiple bytes and many are zero-filled.

In other words, if you are expecting to use FEC and FEC only corrects a few 
bits, you might be
disappointed.
 -- richard



Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread David Magda
On Jun 11, 2011, at 10:37, Edward Ned Harvey wrote:

>> From: David Magda [mailto:dma...@ee.ryerson.ca]
>> Sent: Saturday, June 11, 2011 9:38 AM
>> 
>> These parity files use a forward error correction-style system that can be
>> used to perform data verification, and allow recovery when data is lost or
>> corrupted.
>> 
>> http://en.wikipedia.org/wiki/Parchive
> 
> Well spotted.  But par2 seems to be intended exclusively for use on files,
> not data streams.  From a file (or files) create some par2 files...
> 
> Anyone know of a utility that allows you to layer fec code into a data
> stream, suitable for piping?

Yes; I was thinking more of the stream-on-disk use case.

A FEC pipe might be a nice undergrad software project for someone. Perhaps by 
default multiplex the data and the FEC, and then on the other end do one of two 
things: de-multiplex into the next part of the pipe, or split the stream, 
writing the FEC data to one file and the original data to another.


Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread Edward Ned Harvey
> From: David Magda [mailto:dma...@ee.ryerson.ca]
> Sent: Saturday, June 11, 2011 9:38 AM
> 
> These parity files use a forward error correction-style system that can be
> used to perform data verification, and allow recovery when data is lost or
> corrupted.
> 
> http://en.wikipedia.org/wiki/Parchive

Well spotted.  But par2 seems to be intended exclusively for use on files,
not data streams.  From a file (or files) create some par2 files...

Anyone know of a utility that allows you to layer fec code into a data
stream, suitable for piping?



Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread Jim Klimov

2011-06-11 17:20, Edward Ned Harvey wrote:

From: David Magda [mailto:dma...@ee.ryerson.ca]
Sent: Saturday, June 11, 2011 9:04 AM

If one is saving streams to a disk, it may be worth creating parity files
for them (especially if the destination file system is not ZFS):

Parity is just a really simple form of error detection.  It's not very
useful for error correction.  If you look into error correction codes,
you'll see there are many other codes which would be more useful for the
purposes of zfs send datastream integrity on long-term storage.


Well, parity lets you reconstruct the original data, if you can
decide which pieces to trust, no? Either there are many fitting
pieces and few misfitting pieces (raidzN), or you have
checksums so you know which copy is correct, if any.

Or like some RAID implementations, you just trust the copy
on a device which has not shown any errors (yet) ;)

But, wait... if you have checksums to trust one of the two
copies - wouldn't it be easier to add an option to embed mirroring
into the "zfs send" stream?

For example, interlace chunks of the same data with
checksums, sized, say, 64 MB by default (not too heavy
on cache, but big enough that it is unlikely to be broken
by the same external problem, like a surface scratch;
further configurable by the sending user). Sample layout:
AaA'a'BbB'b'...

Where "A" and "A'" are copies of the same data, and
"a" and "a'" are their checksums, "B"'s are the next
set of chunks, etc.

PS: Do I understand correctly that inside a ZFS send
stream there are no longer original variably sized blocks
from the sending system, so the receiver can reconstruct
blocks on its disk according to dedup, compression,
and maybe larger coalesced block sizes for files originally
written in small portions, etc?

If coalescing-on-write is indeed done, how does it play
well with snapshots? I.e. if the original file was represented
in snapshot#1 by a few small blocks, but received in snapshot#1'
as one big block, and later part of the file was changed:
the source snapshot#2 includes only the changed small
blocks, so what would the receiving snapshot#2' do?
Or is there no coalescing, and this is why? ;)

Thanks,
//Jim Klimov




Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread David Magda
On Jun 11, 2011, at 09:20, Edward Ned Harvey wrote:

> Parity is just a really simple form of error detection.  It's not very
> useful for error correction.  If you look into error correction codes,
> you'll see there are many other codes which would be more useful for the
> purposes of zfs send datastream integrity on long-term storage.


> These parity files use a forward error correction-style system that can be 
> used to perform data verification, and allow recovery when data is lost or 
> corrupted.

http://en.wikipedia.org/wiki/Parchive

> Because this new approach doesn't benefit from like sized files, it 
> drastically extends the potential applications of PAR. Files such as video, 
> music, and other data can remain in a usable format and still have recovery 
> data associated with them.
> 
> The technology is based on a 'Reed-Solomon Code' implementation that allows 
> for recovery of any 'X' real data-blocks for 'X' parity data-blocks present.

http://parchive.sourceforge.net/



Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread Edward Ned Harvey
> From: David Magda [mailto:dma...@ee.ryerson.ca]
> Sent: Saturday, June 11, 2011 9:04 AM
> 
> If one is saving streams to a disk, it may be worth creating parity files
for them
> (especially if the destination file system is not ZFS):

Parity is just a really simple form of error detection.  It's not very
useful for error correction.  If you look into error correction codes,
you'll see there are many other codes which would be more useful for the
purposes of zfs send datastream integrity on long-term storage.



Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread David Magda
On Jun 11, 2011, at 08:46, Edward Ned Harvey wrote:

> If you simply want to layer on some more FEC, there must be some standard 
> generic FEC utilities out there, right?
>   zfs send | fec > /dev/...
> Of course this will inflate the size of the data stream somewhat, but 
> improves the reliability...

If one is saving streams to a disk, it may be worth creating parity files for 
them (especially if the destination file system is not ZFS):

http://en.wikipedia.org/wiki/Parity_file
http://en.wikipedia.org/wiki/Parchive
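
For a stream that is already sitting in a file, the par2 command-line tool can
bolt recovery data on after the fact; roughly (file names invented, ~10%
redundancy, assuming par2cmdline is installed):

# zfs send -R tank/fs@snap > /backup/fs.zfs
# par2 create -r10 /backup/fs.zfs.par2 /backup/fs.zfs

and later, if corruption is suspected:

# par2 verify /backup/fs.zfs.par2
# par2 repair /backup/fs.zfs.par2

The -r option sets the percentage of recovery data; repair only succeeds if
the damage fits within that redundancy.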




Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
> 
> See FEC suggestion from another poster ;)

Well, of course, all storage media have built-in hardware FEC.  At least disk 
& tape for sure.  But naturally you can't always trust it blindly...

If you simply want to layer on some more FEC, there must be some standard 
generic FEC utilities out there, right?
zfs send | fec > /dev/...
Of course this will inflate the size of the data stream somewhat, but improves 
the reliability...

But finally - If you think of a disk as one large sequential storage device, 
and a zfs send stream is just another large sequential data stream...  And we 
take it for granted that a single bit error inside a ZFS filesystem doesn't 
corrupt the whole filesystem but just a localized file or object...  It's all 
because the filesystem is broken down into a bunch of smaller blocks and each 
individual block has its own checksum.  Shouldn't it be relatively trivial for 
the zfs send datastream to do its checksums on smaller chunks of data instead 
of just a single checksum for the whole set?
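
Lacking that, you can approximate per-chunk checksums today with standard
tools, at the cost of detection only (no correction) -- something along these
lines, with the chunk size and names pulled out of thin air:

# zfs send tank/fs@snap | split -b 1024m - /backup/fs.zfs.part.
# digest -a sha256 /backup/fs.zfs.part.* > /backup/fs.zfs.sha256

On restore, re-run the digest, compare against the saved list, and then:

# cat /backup/fs.zfs.part.* | zfs receive tank/fs-restored

At least a mismatch is then narrowed down to one chunk instead of condemning
the entire stream.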



Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-10 Thread Richard Elling
On Jun 10, 2011, at 8:59 AM, David Magda wrote:

> On Fri, June 10, 2011 07:47, Edward Ned Harvey wrote:
> 
>> #1  A single bit error causes checksum mismatch and then the whole data
>> stream is not receivable.
> 
> I wonder if it would be worth adding a (toggleable?) forward error
> correction (FEC) [1] scheme to the 'zfs send' stream.

pipes are your friend!
 -- richard



Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-10 Thread Marty Scholes
> I stored a snapshot stream to a file

The tragic irony here is that the file was stored on a non-ZFS filesystem.  You 
had undetected bit rot which silently corrupted the stream.  Other files 
might have been silently corrupted as well.

You may have just made one of the strongest cases yet for ZFS and its 
assurances.


Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-10 Thread Paul Kraus
On Fri, Jun 10, 2011 at 8:59 AM, Jim Klimov  wrote:

> Is such "tape" storage only intended for reliable media such as
> another ZFS or triple-redundancy tape archive with fancy robotics?
> How would it cope with BER in transfers to/from such media?

Large and small businesses have been using TAPE as a BACKUP medium
for decades. One of the cardinal rules is that you MUST have at least
TWO FULL copies if you expect to be able to use them. An Incremental
backup is marginally better than an incremental zfs send in that you
_can_ recover the files contained in the backup image.

I understand why a zfs send is what it is (and you can't pull
individual files out of it), and that it must be bit for bit correct,
and that IF it is large, then the chances of a bit error are higher.
But given all that, I still have not heard a good reason NOT to keep
zfs send stream images around as insurance. Yes, they must not be
corrupt (that is true for ANY backup storage), and if they do get
corrupted you cannot (without tweaks that may jeopardize the data
integrity) "restore" that stream image. But this really is not a
higher bar than for any other "backup" system. This is why I wondered
at the original poster's comment that he had made a critical mistake
(unless the mistake was using storage for the image that had a high
chance of corruption and did not have a second copy of the image).

Sorry if this has been discussed here before; how much of this
list I get to read depends on how busy I am. Right now I am very busy
moving 20 TB of data from one configuration of 14 zpools to a
configuration of one zpool (and only one dataset, no zfs send / recv
for me), so I have lots of time to wait, and I spend some of that time
reading this list :-)

P.S. This data is "backed up", both the old and new configuration via
regular zfs snapshots (for day to day needs) and zfs send / recv
replication to a remote site (for DR needs). The initial zfs full send
occurred when the new zpool was new and empty, so I only have to push
the incrementals through the WAN link.
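
For anyone curious, each replication cycle is nothing more exotic than this
(names and snapshot labels invented here):

# zfs snapshot -r tank/data@2011-06-10
# zfs send -R -i tank/data@2011-06-09 tank/data@2011-06-10 | \
      ssh dr-host zfs receive -duF drpool/data

with the previous snapshot kept on both sides so that the next increment has
a common base.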

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players


Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-10 Thread Jim Klimov

2011-06-10 20:58, Marty Scholes wrote:

If it is true that unlike ZFS itself, the replication stream format has
no redundancy (even of ECC/CRC sort), how can it be used for
long-term retention "on tape"?

It can't.  I don't think it has been documented anywhere, but I believe that it 
has been well understood that if you don't trust your storage (tape, disk, 
floppies, punched cards, whatever), then you shouldn't trust your incremental 
streams on that storage.


Well, the whole point of this redundancy in ZFS is about not trusting
any storage (maybe including RAM at some time - but so far it is
requested to be ECC RAM) ;)

Hell, we don't ultimately trust any storage...
Oops, I forgot what I wanted to say next ;)


It's as if the ZFS design assumed that all incremental streams would be either 
perfect or retryable.


Yup. Seems like another ivory-tower assumption ;)


This is a huge problem for tape retention, not so much for disk retention.


Because why? You can make mirrors or raidz of disks?


On a personal level I have handled this with a separate pool of fewer, larger 
and slower drives which serves solely as backup, taking incremental streams 
from the main pool every 20 minutes or so.

Unfortunately that approach breaks the legacy backup strategy of pretty much 
every company.


I'm afraid it also breaks backups of petabyte-sized arrays where
it is impractical to double or triple the number of racks with spinning
drives, but is practical to have a closet full of tapes for the automated
robot to feed ;)



I think the message is that unless you can ensure the integrity of the stream, 
either backups should go to another pool or zfs send/receive should not be a 
critical part of the backup strategy.


Or that zfs streams can be improved to VALIDLY become part
of such a strategy.

Regarding the checksums in ZFS, as of now I guess we
can send the ZFS streams to a file, compress this file
with ZIP, RAR or some other format with CRC and some
added "recoverability" (i.e. WinRAR claims to be able
to repair about 1% of erroneous file data with standard
settings) and send these ZIP/RAR archives to the tape.

Obviously, a standard integrated solution within ZFS
would be better and more portable.

See FEC suggestion from another poster ;)


//Jim




Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-10 Thread Marty Scholes
> If it is true that unlike ZFS itself, the replication stream format has
> no redundancy (even of ECC/CRC sort), how can it be used for
> long-term retention "on tape"?

It can't.  I don't think it has been documented anywhere, but I believe that it 
has been well understood that if you don't trust your storage (tape, disk, 
floppies, punched cards, whatever), then you shouldn't trust your incremental 
streams on that storage.

It's as if the ZFS design assumed that all incremental streams would be either 
perfect or retryable.

This is a huge problem for tape retention, not so much for disk retention.

On a personal level I have handled this with a separate pool of fewer, larger 
and slower drives which serves solely as backup, taking incremental streams 
from the main pool every 20 minutes or so.

Unfortunately that approach breaks the legacy backup strategy of pretty much 
every company.

I think the message is that unless you can ensure the integrity of the stream, 
either backups should go to another pool or zfs send/receive should not be a 
critical part of the backup strategy.


Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-10 Thread David Magda
On Fri, June 10, 2011 07:47, Edward Ned Harvey wrote:

> #1  A single bit error causes checksum mismatch and then the whole data
> stream is not receivable.

I wonder if it would be worth adding a (toggleable?) forward error
correction (FEC) [1] scheme to the 'zfs send' stream.

Even if we're talking about a straight zfs send/recv pipe, and not saving
to a file, it'd be handy as you wouldn't have to restart a large transfer for
a single bit error (especially for those long initial syncs of remote
'mirrors').

[1] http://en.wikipedia.org/wiki/Forward_error_correction




Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-10 Thread Jim Klimov

2011-06-10 15:58, Darren J Moffat wrote:


As I pointed out last time this came up the NDMP service on Solaris 11 
Express and on the Oracle ZFS Storage Appliance uses the 'zfs send' 
stream as what is to be stored on the "tape".




This discussion turns interesting ;)

Just curious: how do these products work around the stream fragility
which we are discussing here - that a single-bit error can/will/should
make the whole zfs send stream invalid, even though it is probably
an error localized in a single block. This block is ultimately related
to a file (or a few files in case of dedup or snapshots/clones) whose
name "zfs recv" could report for an admin to take action such as rsync.

If it is true that unlike ZFS itself, the replication stream format has
no redundancy (even of ECC/CRC sort), how can it be used for
long-term retention "on tape"?

I understand about online transfers, somewhat. If the transfer failed,
you still have the original to retry. But backups are often needed when
the original is no longer alive, and that's why they are needed ;)

And by Murphy's law that's when this single bit strikes ;)

Is such "tape" storage only intended for reliable media such as
another ZFS or triple-redundancy tape archive with fancy robotics?
How would it cope with BER in transfers to/from such media?

Also, an argument was recently posed (when I wrote of saving
zfs send streams into files and transferring them by rsync over
slow, bad links) that for most online transfers I would be better off
using zfs send of incremental snapshots. While I agree with this in
the sense that an incremental transfer is presumably smaller and has less
chance of corruption (network failure) during transfer than a huge
initial stream, this chance of corruption is still non-zero. It is just
that, in the case of online transfers, I can detect the error and retry at
low cost (or big cost - bandwidth is not free in many parts of the world).

Going back to storing many streams (initial + increments) on tape -
if an intermediate incremental stream has a single-bit error, then
its snapshot and any that follow it cannot be received into ZFS.
Even if the "broken" block is later freed and discarded (equivalent
to overwriting with a newer version of a file from a newer increment
in classic backup systems with a file being the unit of backup).

And since the total size of initial+incremental backups is likely
larger than of a single full dump, the chance of a single corruption
making your (latest) backup useless would be also higher, right?

Thanks for clarifications,
//Jim Klimov



Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-10 Thread Darren J Moffat

On 06/10/11 12:47, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jonathan Walker

New to ZFS, I made a critical error when migrating data and
configuring zpools according to needs - I stored a snapshot stream to
a file using "zfs send -R [filesystem]@[snapshot]>[stream_file]".


There are precisely two reasons why it's not recommended to store a zfs send
datastream for later use.  As long as you can acknowledge and accept these
limitations, then sure, go right ahead and store it.  ;-)  A lot of people
do, and it's good.


Not recommended by whom?  Which documentation says this?

As I pointed out last time this came up the NDMP service on Solaris 11 
Express and on the Oracle ZFS Storage Appliance uses the 'zfs send' 
stream as what is to be stored on the "tape".


--
Darren J Moffat


Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-10 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jonathan Walker
> 
> New to ZFS, I made a critical error when migrating data and
> configuring zpools according to needs - I stored a snapshot stream to
> a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]".

There are precisely two reasons why it's not recommended to store a zfs send
datastream for later use.  As long as you can acknowledge and accept these
limitations, then sure, go right ahead and store it.  ;-)  A lot of people
do, and it's good.

#1  A single bit error causes checksum mismatch and then the whole data
stream is not receivable.  Obviously you encountered this problem already,
and you were able to work around.  If I were you, however, I would be
skeptical about data integrity on your system.  You said you scrubbed and
corrected a couple of errors, but that's not actually possible.  The
filesystem integrity checksums are for detection, not correction, of
corruption.  The only way corruption gets corrected is when there's a
redundant copy of the data...  Then ZFS can discard the corrupt copy,
overwrite with a good copy, and all the checksums suddenly match.  Of course
there is no such thing in the zfs send data stream - no redundant copy in
the data stream.  So yes, you have corruption.  The best you can possibly do
is to identify where it is, and then remove the affected files.

#2  You cannot do a partial receive, nor generate a catalog of the files
within the datastream.  You can restore the whole filesystem or nothing.



Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-10 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
> 
> Besides, the format
> is not public and subject to change, I think. So future compatibility
> is not guaranteed.

That is not correct.  

Years ago, there was a comment in the man page that said this:  "The format
of the stream is evolving. No backwards  compatibility is guaranteed. You
may not be able to receive your streams on future versions of ZFS."

But in the last several years, backward/forward compatibility has always
been preserved, so despite the warning, it was never a problem.

In more recent versions, the man page says:  "The format of the stream is
committed. You will be able to receive your streams on future versions of
ZFS."



Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-09 Thread Toby Thain
On 09/06/11 1:33 PM, Paul Kraus wrote:
> On Thu, Jun 9, 2011 at 1:17 PM, Jim Klimov  wrote:
>> 2011-06-09 18:52, Paul Kraus wrote:
>>>
>>> On Thu, Jun 9, 2011 at 8:59 AM, Jonathan Walker  wrote:
>>>
 New to ZFS, I made a critical error when migrating data and
 configuring zpools according to needs - I stored a snapshot stream to
 a file using "zfs send -R [filesystem]@[snapshot]>[stream_file]".
>>>
>>> Why is this a critical error, I thought you were supposed to be
>>> able to save the output from zfs send to a file (just as with tar or
>>> ufsdump you can save the output to a file or a stream) ?
>>> Was the cause of the checksum mismatch just that the stream data
>>> was stored as a file ? That does not seem right to me.
>>>
>> As recently mentioned on the list (regarding tape backups, I believe)
>> the zfs send stream format was not intended for long-term storage.
> 
> Only due to possible changes in the format.
> 
>> If some bits in the saved file flipped,
> 
> Then you have a bigger problem, namely that the file was corrupted.

This fragility is one of the main reasons it has always been discouraged
(& regularly on this list) as an archive.

--Toby

> ...


[zfs-discuss] ZFS receive checksum mismatch

2011-06-09 Thread Jonathan Walker
>> New to ZFS, I made a critical error when migrating data and
>> configuring zpools according to needs - I stored a snapshot stream to
>> a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]".
>
>Why is this a critical error, I thought you were supposed to be
>able to save the output from zfs send to a file (just as with tar or
>ufsdump you can save the output to a file or a stream) ?

Well yes, you can save the stream to a file, but it is intended for
immediate use with "zfs receive". Since the stream is not an image but
instead a serialization of objects, normal data recovery methods do not
apply in the event of corruption.

>> When I attempted to receive the stream onto to the newly configured
>> pool, I ended up with a checksum mismatch and thought I had lost my
>> data.
>
>Was the cause of the checksum mismatch just that the stream data
>was stored as a file ? That does not seem right to me.

I really can't say for sure what caused the corruption, but I think it
may have been related to a dying power supply. For more information,
check out:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storing_ZFS_Snapshot_Streams_.28zfs_send.2Freceive.29


Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-09 Thread Jim Klimov

2011-06-09 21:33, Paul Kraus wrote:

If some bits in the saved file flipped,

Then you have a bigger problem, namely that the file was corrupted.
That is not a limitation of the zfs send format. If the stream gets
corrupted via network transmission you have the same problem.


No, it is not quite a limitation; however, the longer you store
a file and the larger it is, the greater the probability of
a single bit going wrong over time (e.g. on an old tape stored
in a closet).

And ZFS is very picky when it detects an integrity problem.
Where other filesystems would feed you a broken file, and perhaps
some other layer of integrity would be there to fix it, or you'd
choose to ignore it, ZFS will refuse to process known-bad data.

As the original poster has shown, even within ZFS this problem
can be worked around... if ZFS would ask the admin what to do.
Kudos to him for that! ;)

And because of a small chunk you may lose everything ;)

I've had that happen under a customer's VMware ESX 3.0, which
did not honour cache flushes, so ZFS broke down upon hardware
resets (i.e. thermal failure) and panicked the kernel upon boot
attempts. Reverting that virtual Solaris server to use UFS was
sad - but it has worked for years since then, even through such
mishaps as hardware thermal resets.

I've tested that VM's image recently with the OI_151 dev LiveCD -
even it panics on that pool. It took aok=1 and zfs_recover=1
and "zpool import -o ro -f -F pool" to roll back those last bad
transactions.

BTW, "-F -n" was not honoured - the pool was imported and
the transactions were rolled back despite the message along
the lines of "Would be able to recover to timestamp XXX"...



Having said that, I have used dumping "zfs send" to files, rsyncing
them over a slow connection, and zfs recv'ing them on a another
machine - so this is known to work.

I suppose to move data or for an initial copy that makes sense, but
for long term replication why not just use incremental zfs sends ?


This was an initial copy (backing up a number of server setups
from a customer) with tens of GBs to send over a flaky 1 Mbit
link. It took many retries, and zfs send is not strong at retrying ;)

--


+------------------------------------------------------------+
|                                                            |
| Evgeny Klimov, Jim Klimov                                  |
| Technical Director / CTO                                   |
| JSC "COS&HT"                                               |
|                                                            |
| +7-903-7705859 (cellular)  mailto:jimkli...@cos.ru         |
|  CC: ad...@cos.ru, jimkli...@mail.ru                       |
+------------------------------------------------------------+
| ()  ascii ribbon campaign - against html mail              |
| /\  - against microsoft attachments                        |
+------------------------------------------------------------+





Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-09 Thread Paul Kraus
On Thu, Jun 9, 2011 at 1:17 PM, Jim Klimov  wrote:
> 2011-06-09 18:52, Paul Kraus пишет:
>>
>> On Thu, Jun 9, 2011 at 8:59 AM, Jonathan Walker  wrote:
>>
>>> New to ZFS, I made a critical error when migrating data and
>>> configuring zpools according to needs - I stored a snapshot stream to
>>> a file using "zfs send -R [filesystem]@[snapshot]>[stream_file]".
>>
>>     Why is this a critical error, I thought you were supposed to be
>> able to save the output from zfs send to a file (just as with tar or
>> ufsdump you can save the output to a file or a stream) ?
>>     Was the cause of the checksum mismatch just that the stream data
>> was stored as a file ? That does not seem right to me.
>>
> As recently mentioned on the list (regarding tape backups, I believe)
> the zfs send stream format was not intended for long-term storage.

Only due to possible changes in the format.

> If some bits in the saved file flipped,

Then you have a bigger problem, namely that the file was corrupted.
That is not a limitation of the zfs send format. If the stream gets
corrupted via network transmission you have the same problem.

> the stream becomes invalid
> regarding checksums and has to be resent. Besides, the format
> is not public and subject to change, I think. So future compatibility
> is not guaranteed.

Recent documentation (the zfs man page) indicates that as of zpool/zfs
version 15/4, I think, the stream format was committed, and receiving a
stream from a given zfs dataset is supported on _newer_ zfs
versions.

> Having said that, I have used dumping "zfs send" to files, rsyncing
> them over a slow connection, and zfs recv'ing them on a another
> machine - so this is known to work.

I suppose to move data or for an initial copy that makes sense, but
for long term replication why not just use incremental zfs sends ?

> However if it were to fail,
> I could retry (and/or use rsync to correct some misreceived
> blocks if network was faulty).

At some level we need to trust that the zfs send stream is intact.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players


Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-09 Thread Jim Klimov

2011-06-09 18:52, Paul Kraus wrote:

On Thu, Jun 9, 2011 at 8:59 AM, Jonathan Walker  wrote:


New to ZFS, I made a critical error when migrating data and
configuring zpools according to needs - I stored a snapshot stream to
a file using "zfs send -R [filesystem]@[snapshot]>[stream_file]".

 Why is this a critical error, I thought you were supposed to be
able to save the output from zfs send to a file (just as with tar or
ufsdump you can save the output to a file or a stream) ?
 Was the cause of the checksum mismatch just that the stream data
was stored as a file ? That does not seem right to me.


As recently mentioned on the list (regarding tape backups, I believe)
the zfs send stream format was not intended for long-term storage.
If some bits in the saved file flipped, the stream becomes invalid
regarding checksums and has to be resent. Besides, the format
is not public and subject to change, I think. So future compatibility
is not guaranteed.

Having said that, I have used dumping "zfs send" to files, rsyncing
them over a slow connection, and zfs recv'ing them on another
machine - so this is known to work. However, if it were to fail,
I could retry (and/or use rsync to correct some misreceived
blocks if the network was faulty).
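
In rough outline that workflow was (names invented; rsync's --partial lets an
interrupted transfer resume, and -c re-verifies by checksum):

# zfs send -R tank/fs@snap > /var/tmp/fs.zfs
# rsync --partial --inplace -c /var/tmp/fs.zfs backuphost:/var/tmp/

and then on the receiving machine:

# zfs receive -d backup < /var/tmp/fs.zfs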




--


+------------------------------------------------------------+
|                                                            |
| Evgeny Klimov, Jim Klimov                                  |
| Technical Director / CTO                                   |
| JSC "COS&HT"                                               |
|                                                            |
| +7-903-7705859 (cellular)  mailto:jimkli...@cos.ru         |
|  CC: ad...@cos.ru, jimkli...@mail.ru                       |
+------------------------------------------------------------+
| ()  ascii ribbon campaign - against html mail              |
| /\  - against microsoft attachments                        |
+------------------------------------------------------------+





Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-09 Thread Paul Kraus
On Thu, Jun 9, 2011 at 8:59 AM, Jonathan Walker  wrote:

> New to ZFS, I made a critical error when migrating data and
> configuring zpools according to needs - I stored a snapshot stream to
> a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]".

Why is this a critical error, I thought you were supposed to be
able to save the output from zfs send to a file (just as with tar or
ufsdump you can save the output to a file or a stream) ?

> When I attempted to receive the stream onto to the newly configured
> pool, I ended up with a checksum mismatch and thought I had lost my
> data.

Was the cause of the checksum mismatch just that the stream data
was stored as a file ? That does not seem right to me.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players


[zfs-discuss] ZFS receive checksum mismatch

2011-06-09 Thread Jonathan Walker
Hey all,

New to ZFS, I made a critical error when migrating data and
configuring zpools according to needs - I stored a snapshot stream to
a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]".
When I attempted to receive the stream onto to the newly configured
pool, I ended up with a checksum mismatch and thought I had lost my
data.

After googling the issue and finding nil, I downloaded FreeBSD
9-CURRENT (development), installed, and recompiled the kernel making
one modification to
"/usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_send.c":

Comment out the following lines (1439 - 1440 at the time of writing):

    if (!ZIO_CHECKSUM_EQUAL(drre.drr_checksum, pcksum))
            ra.err = ECKSUM;

Once recompiled and booted up on the new kernel, I executed "zfs
receive -v [filesystem] <[stream_file]". Once received, I scrubbed the
zpool, which corrected a couple of checksum errors, and proceeded to
finish setting up my NAS. Hopefully, this might help someone else if
they're stupid enough to make the same mistake I did...

Note: changing this section of the ZFS kernel code should not be used
for anything other than special cases when you need to bypass the data
integrity checks for recovery purposes.

-Johnny Walker


Re: [zfs-discuss] zfs receive : is this expected ?

2010-02-10 Thread Bruno Damour
OK FORGET IT... I MUST BE VERY TIRED AND CONFUSED ;-(


Re: [zfs-discuss] zfs receive : is this expected ?

2010-02-10 Thread Bruno Damour
I have an additional problem, which worries me.
I tried different ways of sending/receiving my data pool.
I took some snapshots, sent them, then destroyed them using destroy -r.

AFAIK this should not have affected the filesystem's _current_ state, or am I 
mistaken?

Now I succeeded in sending a snapshot and receiving it like this:
# zfs create ezdata/data
# zfs send -RD d...@prededup | zfs recv -duF ezdata/data

I'm seeing some older versions on the source dataset, and newer versions on the 
snapshot and the copied filesystems.
Any idea how this can have happened?

amber ~ # ll -d /data/.zfs/snapshot/prededup/postgres84_64
drwxr-xr-x   2 root root   2 Nov 29 18:20 
/data/.zfs/snapshot/prededup/postgres84_64
amber ~ # ll -d /data/postgres84_64
drwx--  12 postgres postgres  21 Feb  6 22:03 /data/postgres84_64
amber ~ # ll -d /ezdata/data/postgres84_64
drwxr-xr-x   2 root root   2 Nov 29 18:20 /ezdata/data/postgres84_64


Re: [zfs-discuss] zfs receive : is this expected ?

2010-02-10 Thread Bruno Damour
Actually, I succeeded using:
# zfs create ezdata/data
# zfs send -RD d...@prededup | zfs recv -duF ezdata/data

I still have to check the result, though
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive : is this expected ?

2010-02-10 Thread Bruno Damour
Sorry if my question was confusing.
Yes, I'm wondering about the catch-22 resulting from the two errors: it means we
are not able to send/receive a pool's root filesystem without using -F.
The zpool list was just meant to show that it was a whole pool...
Bruno
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive : is this expected ?

2010-02-10 Thread Edward Ned Harvey
> amber ~ # zpool list data
> NAME   SIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
> data   930G   295G   635G31%  1.00x  ONLINE  -
> 
> amber ~ # zfs send -RD d...@prededup |zfs recv -d ezdata
> cannot receive new filesystem stream: destination 'ezdata' exists
> must specify -F to overwrite it
> 
> amber ~ # zfs send -RD d...@prededup |zfs recv -d ezdata/data
> cannot receive: specified fs (ezdata/data) does not exist

You're confused because one situation says "cannot receive ... exists" and
the other situation says "cannot receive ... does not exist"   Right?

Why do you show us the zpool list?  Because of the zpool list, I am not sure
I understand what you're asking.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs receive : is this expected ?

2010-02-09 Thread Bruno Damour
amber ~ # zpool list data
NAME   SIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
data   930G   295G   635G31%  1.00x  ONLINE  -

amber ~ # zfs send -RD d...@prededup |zfs recv -d ezdata
cannot receive new filesystem stream: destination 'ezdata' exists
must specify -F to overwrite it

amber ~ # zfs send -RD d...@prededup |zfs recv -d ezdata/data
cannot receive: specified fs (ezdata/data) does not exist
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS receive -dFv creates an extra "e" subdirectory..

2009-12-20 Thread Brandon High
On Sat, Dec 19, 2009 at 3:56 AM, Steven Sim  wrote:
> r...@sunlight:/root# zfs list -r myplace/Docs
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> myplace/Docs  3.37G  1.05T  3.33G
> /export/home/admin/Docs/e/Docs <--- *** Here is the extra "e/Docs"..

I saw similar behavior when doing a receive on b129. I don't
remember if the mountpoint was set locally in the dataset or
inherited, but re-inheriting it fixed the mountpoint.
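
For the archives, the re-inherit itself is just one command (using the dataset
from your example):

# zfs inherit mountpoint myplace/Docs
# zfs get mountpoint myplace/Docs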

-B

-- 
Brandon High : bh...@freaks.com
For sale: One moral compass, never used.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS receive -dFv creates an extra "e" subdirectory..

2009-12-19 Thread Steven Sim

Hi;


After some very hairy testing, I came up with the following procedure
for sending a zfs send datastream to a gzip staging file and later
"receiving" it back to the same filesystem in the same pool.


The above was to enable the filesystem data to be deduplicated.


However, after the final zfs receive, I noticed that the ZFS
filesystem's mountpoint had changed by itself, gaining an extra "e"
subdirectory.

Here is the procedure

Firstly, the zfs file system in question has the following children..


o...@sunlight:/root# zfs list -t all -r myplace/Docs

NAME   USED  AVAIL  REFER  MOUNTPOINT

myplace/Docs  3.37G  1.05T  3.33G  /export/home/admin/Docs
<-- NOTE ORIGINAL MOUNTPOINT (see later bug below)

myplace/d...@scriptsnap2  43.0M  -  3.33G  -

myplace/d...@scriptsnap3  0  -  3.33G  - <-- latest snapshot

myplace/d...@scriptsnap1  0  -  3.33G  -


As root, i did


r...@sunlight:/root# zfs send -R myplace/d...@scriptsnap3 | gzip -9c
> /var/tmp/myplace-Docs.snapshot.gz


Then I attempted to test a zfs receive by using the "-n" option...


ad...@sunlight:/var/tmp$ gzip -cd /var/tmp/myplace-Docs.snapshot.gz |
zfs receive -dnv myplace

cannot receive new filesystem stream: destination 'myplace/Docs' exists

must specify -F to overwrite it


Ok...let's specify -F...


ad...@sunlight:/var/tmp$ gzip -cd /var/tmp/myplace-Docs.snapshot.gz |
zfs receive -dFnv myplace

cannot receive new filesystem stream: destination has snapshots (eg.
myplace/d...@scriptsnap1)

must destroy them to overwrite it


Ok fine...let's destroy the existing snapshots for myplace/Docs...


ad...@sunlight:/var/tmp$ zfs list -t snapshot -r myplace/Docs

NAME   USED  AVAIL  REFER  MOUNTPOINT

myplace/d...@scriptsnap2  43.0M  -  3.33G  -

myplace/d...@scriptsnap3  0  -  3.33G  -

myplace/d...@scriptsnap1  0  -  3.33G  -


r...@sunlight:/root# zfs destroy myplace/d...@scriptsnap2

r...@sunlight:/root# zfs destroy myplace/d...@scriptsnap1

r...@sunlight:/root# zfs destroy myplace/d...@scriptsnap3


Checking...


r...@sunlight:/root# zfs list -t all -r myplace/Docs

NAME   USED  AVAIL  REFER  MOUNTPOINT

myplace/Docs  3.33G  1.05T  3.33G  /export/home/admin/Docs


Ok...no more snapshots, just the parent myplace/Docs and no children...


Let's try the zfs receive command yet again with a "-n"


r...@sunlight:/root# gzip -cd /var/tmp/myplace-Docs.snapshot.gz | zfs
receive -dFnv myplace

would receive full stream of myplace/d...@scriptsnap2 into
myplace/d...@scriptsnap2

would receive incremental stream of myplace/d...@scriptsnap3 into
myplace/d...@scriptsnap3


Looks great! OK...let's go for the real thing...


r...@sunlight:/root# gzip -cd /var/tmp/myplace-Docs.snapshot.gz | zfs
receive -dFv myplace

receiving full stream of myplace/d...@scriptsnap2 into
myplace/d...@scriptsnap2

received 3.35GB stream in 207 seconds (16.6MB/sec)

receiving incremental stream of myplace/d...@scriptsnap3 into
myplace/d...@scriptsnap3

received 47.6MB stream in 6 seconds (7.93MB/sec)


Yah...looks good!


BUT...


A zfs list of myplace/Docs I get..


r...@sunlight:/root# zfs list -r myplace/Docs

NAME   USED  AVAIL  REFER  MOUNTPOINT

myplace/Docs  3.37G  1.05T  3.33G 
/export/home/admin/Docs/e/Docs <--- *** Here is the extra "e/Docs"..

r...@sunlight:/root# zfs set mountpoint=/export/home/admin/Docs
myplace/Docs

cannot mount '/export/home/admin/Docs': directory is not empty

property may be set but unable to remount filesystem


Ok...


I then removed the e/Docs directory under /export/home/admin/Docs,
leaving just /export/home/admin/Docs...


Then..


r...@sunlight:/root# zfs set mountpoint=/export/home/admin/Docs
myplace/Docs


And all is well again..


Where did the "e/Docs" come from?


Did I do something wrong?


Warmest Regards

Steven Sim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs receive bug? Extra "e" directory?

2009-12-16 Thread Steven Sim

Hi;

After some very hairy testing, I came up with the following procedure 
for sending a zfs send datastream to a gzip staging file and later 
"receiving" it back to the same filesystem in the same pool.


The above was to enable the filesystem data to be deduplicated.

Here is the procedure and under careful examination there seems to be a 
bug (my mistake?) at the end


Firstly, the zfs file system in question has the following children..

o...@sunlight:/root# zfs list -t all -r myplace/Docs
NAME   USED  AVAIL  REFER  MOUNTPOINT
myplace/Docs  3.37G  1.05T  3.33G  /export/home/admin/Docs 
<-- NOTE ORIGINAL MOUNTPOINT (see later bug below)

myplace/d...@scriptsnap2  43.0M  -  3.33G  -
myplace/d...@scriptsnap3  0  -  3.33G  - <-- latest snapshot
myplace/d...@scriptsnap1  0  -  3.33G  -

As root, i did

r...@sunlight:/root# zfs send -R myplace/d...@scriptsnap3 | gzip -9c > 
/var/tmp/myplace-Docs.snapshot.gz


Then I attempted to test a zfs receive by using the "-n" option...

ad...@sunlight:/var/tmp$ gzip -cd /var/tmp/myplace-Docs.snapshot.gz | 
zfs receive -dnv myplace

cannot receive new filesystem stream: destination 'myplace/Docs' exists
must specify -F to overwrite it

Ok...let's specify -F...

ad...@sunlight:/var/tmp$ gzip -cd /var/tmp/myplace-Docs.snapshot.gz | 
zfs receive -dFnv myplace
cannot receive new filesystem stream: destination has snapshots (eg. 
myplace/d...@scriptsnap1)

must destroy them to overwrite it

Ok fine...let's destroy the existing snapshots for myplace/Docs...

ad...@sunlight:/var/tmp$ zfs list -t snapshot -r myplace/Docs
NAME   USED  AVAIL  REFER  MOUNTPOINT
myplace/d...@scriptsnap2  43.0M  -  3.33G  -
myplace/d...@scriptsnap3  0  -  3.33G  -
myplace/d...@scriptsnap1  0  -  3.33G  -

r...@sunlight:/root# zfs destroy myplace/d...@scriptsnap2
r...@sunlight:/root# zfs destroy myplace/d...@scriptsnap1
r...@sunlight:/root# zfs destroy myplace/d...@scriptsnap3

Checking...

r...@sunlight:/root# zfs list -t all -r myplace/Docs
NAME   USED  AVAIL  REFER  MOUNTPOINT
myplace/Docs  3.33G  1.05T  3.33G  /export/home/admin/Docs

Ok...no more snapshots, just the parent myplace/Docs and no children...

Let's try the zfs receive command yet again with a "-n"

r...@sunlight:/root# gzip -cd /var/tmp/myplace-Docs.snapshot.gz | zfs 
receive -dFnv myplace
would receive full stream of myplace/d...@scriptsnap2 into 
myplace/d...@scriptsnap2
would receive incremental stream of myplace/d...@scriptsnap3 into 
myplace/d...@scriptsnap3


Looks great! OK...let's go for the real thing...

r...@sunlight:/root# gzip -cd /var/tmp/myplace-Docs.snapshot.gz | zfs 
receive -dFv myplace
receiving full stream of myplace/d...@scriptsnap2 into 
myplace/d...@scriptsnap2

received 3.35GB stream in 207 seconds (16.6MB/sec)
receiving incremental stream of myplace/d...@scriptsnap3 into 
myplace/d...@scriptsnap3

received 47.6MB stream in 6 seconds (7.93MB/sec)

Yah...looks good!

BUT...

A zfs list of myplace/Docs I get..

r...@sunlight:/root# zfs list -r myplace/Docs
NAME   USED  AVAIL  REFER  MOUNTPOINT
myplace/Docs  3.37G  1.05T  3.33G  
/export/home/admin/Docs/e/Docs <--- WHAT? Where in the world did the 
extra "e/Docs" come from?


UH?

r...@sunlight:/root# zfs set mountpoint=/export/home/admin/Docs myplace/Docs
cannot mount '/export/home/admin/Docs': directory is not empty
property may be set but unable to remount filesystem

Ok...

I then removed the e/Docs directory under /export/home/admin/Docs,
leaving just /export/home/admin/Docs...


Then..

r...@sunlight:/root# zfs set mountpoint=/export/home/admin/Docs myplace/Docs

And all is well again..

Where did the "e" come from?

Did I do something wrong?

Warmest Regards
Steven Sim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive should allow to keep received system

2009-09-28 Thread Albert Chin
On Mon, Sep 28, 2009 at 03:16:17PM -0700, Igor Velkov wrote:
> Not as good as I hoped.
> zfs send -R xxx/x...@daily_2009-09-26_23:51:00 |ssh -c blowfish r...@xxx.xx 
> zfs recv -vuFd xxx/xxx
> 
> invalid option 'u'
> usage:
> receive [-vnF] 
> receive [-vnF] -d 
> 
> For the property list, run: zfs set|get
> 
> For the delegated permission list, run: zfs allow|unallow
> r...@xxx:~# uname -a
> SunOS xxx 5.10 Generic_13-03 sun4u sparc SUNW,Sun-Fire-V890
> 
> What's wrong?

Looks like -u was a recent addition.

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive should allow to keep received system

2009-09-28 Thread Lori Alt

On 09/28/09 16:16, Igor Velkov wrote:

Not as good as I hoped.
zfs send -R xxx/x...@daily_2009-09-26_23:51:00 |ssh -c blowfish r...@xxx.xx zfs 
recv -vuFd xxx/xxx

invalid option 'u'
usage:
receive [-vnF] 
receive [-vnF] -d 

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow
r...@xxx:~# uname -a
SunOS xxx 5.10 Generic_13-03 sun4u sparc SUNW,Sun-Fire-V890

What's wrong?
  
The option was added in S10 Update 7.  I'm not sure whether the
patch level shown above includes the U7 changes or not.


Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive should allow to keep received system

2009-09-28 Thread Igor Velkov
Not as good as I hoped.
zfs send -R xxx/x...@daily_2009-09-26_23:51:00 |ssh -c blowfish r...@xxx.xx zfs 
recv -vuFd xxx/xxx

invalid option 'u'
usage:
receive [-vnF] 
receive [-vnF] -d 

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow
r...@xxx:~# uname -a
SunOS xxx 5.10 Generic_13-03 sun4u sparc SUNW,Sun-Fire-V890

What's wrong?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive should allow to keep received system

2009-09-28 Thread Igor Velkov
Wah!

Thank you, lalt!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive should allow to keep received system unmounted

2009-09-28 Thread Lori Alt

On 09/28/09 15:54, Igor Velkov wrote:
zfs receive should allow an option to disable the immediate mount of a received filesystem.

If the original filesystem's mountpoint has been changed, it's hard to make a clone fs with send/receive, because the received filesystem immediately tries to mount at the old mountpoint, which is locked by the source fs.
On a different host, the mountpoint can be locked by an unrelated filesystem.

Can anybody recommend a way to avoid mountpoint conflicts in these cases?

The -u option to zfs receive suppresses all mounts.
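
A minimal sketch of that workflow (names invented): receive without mounting,
repoint the filesystem, then mount it explicitly:

# zfs send tank/fs@snap | ssh desthost zfs receive -u pool/clone
# zfs set mountpoint=/somewhere/unused pool/clone
# zfs mount pool/clone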

lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs receive should allow to keep received system unmounted

2009-09-28 Thread Igor Velkov
zfs receive should allow an option to disable the immediate mount of a received
filesystem.

If the original filesystem's mountpoint has been changed, it's hard to make a
clone fs with send/receive, because the received filesystem immediately tries to
mount at the old mountpoint, which is locked by the source fs.
On a different host, the mountpoint can be locked by an unrelated filesystem.

Can anybody recommend a way to avoid mountpoint conflicts in these cases?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs receive -o

2009-08-13 Thread Gaëtan Lehmann


Hi,

One thing I miss in zfs is the ability to override a property value
in zfs receive - something like the -o option in zfs create. This
option would be particularly useful with zfs send -R to make a backup
and be sure that the destination won't be mounted:


  zfs send -R f...@snap | ssh myhost zfs receive -d -o canmount=off  
backup/host


or to have compressed backups, because the compression property may
be sent with the file system and thus is not necessarily inherited:


  zfs send -R f...@snap | ssh myhost zfs receive -d -o canmount=off -o  
compression=gzip backup/host


It is possible to use -u for the mount problem, but it is not  
absolutely safe until the property is actually changed. An unexpected  
reboot can put the backup server in maintenance mode because the file  
system may have the same mountpoint as another one already there.
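
Today the best I can do is something like the following, and hope nothing
reboots in between; canmount is not inherited, so it has to be turned off
on each received dataset (names as in the examples above):

  zfs send -R f...@snap | ssh myhost zfs receive -d -u backup/host
  ssh myhost 'zfs list -H -o name -t filesystem -r backup/host | \
    xargs -n1 zfs set canmount=off'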


Am I missing something? Would it be possible to have such an option?

Regards,

Gaëtan


--
Gaëtan Lehmann
Biologie du Développement et de la Reproduction
INRA de Jouy-en-Josas (France)
tel: +33 1 34 65 29 66fax: 01 34 65 29 09
http://voxel.jouy.inra.fr  http://www.itk.org
http://www.mandriva.org  http://www.bepo.fr



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive coredump

2009-02-27 Thread Richard Elling

David Dyer-Bennet wrote:

Solaris 2008.11

r...@fsfs:/export/home/localddb/src/bup2# zfs send -R -I
bup-20090223-033745UTC z...@bup-20090225-184857utc > foobar

r...@fsfs:/export/home/localddb/src/bup2# ls -l --si foobar
-rw-r--r-- 1 root root 2.4G 2009-02-27 21:24 foobar

r...@fsfs:/export/home/localddb/src/bup2# zfs receive -dvF
bup-ruin/fsfs/zp1 < foobar
Segmentation Fault (core dumped)
  


bug.  Please file one and include the core.
http://bugs.opensolaris.org
-- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs receive coredump

2009-02-27 Thread David Dyer-Bennet
Solaris 2008.11

r...@fsfs:/export/home/localddb/src/bup2# zfs send -R -I
bup-20090223-033745UTC z...@bup-20090225-184857utc > foobar

r...@fsfs:/export/home/localddb/src/bup2# ls -l --si foobar
-rw-r--r-- 1 root root 2.4G 2009-02-27 21:24 foobar

r...@fsfs:/export/home/localddb/src/bup2# zfs receive -dvF
bup-ruin/fsfs/zp1 < foobar
Segmentation Fault (core dumped)

r...@fsfs:/export/home/localddb/src/bup2# zpool status bup-ruin
  pool: bup-ruin
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
bup-ruinONLINE   0 0 0
  c7t0d0ONLINE   0 0 0

errors: No known data errors

r...@fsfs:/export/home/localddb/src/bup2# zfs list -r bup-ruin
NAME                                                   USED  AVAIL  REFER  MOUNTPOINT
bup-ruin                                               451G   463G    24K  /backups/bup-ruin
bup-ruin/fsfs                                          451G   463G    19K  /backups/bup-ruin/fsfs
bup-ruin/fsfs/rpool                                   35.2M   463G    19K  /backups/bup-ruin/fsfs/rpool
bup-ruin/fsfs/rpool/export                            35.2M   463G    19K  /backups/bup-ruin/fsfs/rpool/export
bup-ruin/fsfs/rpool/export/home                       35.2M   463G    19K  /backups/bup-ruin/fsfs/rpool/export/home
bup-ruin/fsfs/rpool/export/home/export                35.1M   463G    18K  /backups/bup-ruin/fsfs/rpool/export/home/export
bup-ruin/fsfs/rpool/export/home/export/home           35.1M   463G    19K  /backups/bup-ruin/export/home
bup-ruin/fsfs/rpool/export/home/export/home/localddb  35.1M   463G  27.8M  /backups/bup-ruin/export/home/localddb
bup-ruin/fsfs/zp1                                      451G   463G  33.8M  /backups/bup-ruin/home
bup-ruin/fsfs/zp1/ddb                                  326G   463G   326G  /backups/bup-ruin/home/ddb
bup-ruin/fsfs/zp1/jmf                                 33.2G   463G  33.2G  /backups/bup-ruin/home/jmf
bup-ruin/fsfs/zp1/lydy                                31.1G   463G  31.1G  /backups/bup-ruin/home/lydy
bup-ruin/fsfs/zp1/music                               24.3G   463G  24.3G  /backups/bup-ruin/home/music
bup-ruin/fsfs/zp1/pddb                                2.05G   463G  2.05G  /backups/bup-ruin/home/pddb
bup-ruin/fsfs/zp1/public                              33.8G   463G  33.8G  /backups/bup-ruin/home/public
bup-ruin/fsfs/zp1/raphael                               18K   463G    18K  /backups/bup-ruin/home/raphael
r...@fsfs:/export/home/localddb/src/bup2#

This appears to repeat (trying to receive from that file).

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive - list contents of incremental stream?

2008-06-11 Thread Robert Lawhead
Thanks, Matt.  Are you interested in feedback on various questions regarding 
how to display results?  On list or off?  Thanks.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive - list contents of incremental stream?

2008-06-08 Thread Matthew Ahrens
Robert Lawhead wrote:
> Apologies up front for failing to find related posts...
> Am I overlooking a way to get 'zfs send -i [EMAIL PROTECTED] [EMAIL 
> PROTECTED] | zfs receive -n -v ...' to show the contents of the stream?  I'm 
> looking for the equivalent of ufsdump 1f - fs ... |  ufsrestore tv - . I'm 
> hoping that this might be a faster way than using 'find fs -newer ...' to 
> learn what's changed between [EMAIL PROTECTED] and [EMAIL PROTECTED]
> 
> I'd probably use this functionality to produce a list of files to back up with 
> cpio.  I don't want to make archival backups using the stream produced by 
> 'zfs send' because of caveats regarding the likelihood of change in future 
> releases.  If not 'zfs receive', is there another utility that would do this 
> job?  Thanks.

You really want 6425091 "zfs diff".
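
Once that integrates, the usage should be along the lines of (dataset names
invented):

# zfs diff tank/fs@monday tank/fs@tuesday

listing the files created, removed, renamed, or modified between the two
snapshots.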

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs receive - list contents of incremental stream?

2007-10-25 Thread Robert Lawhead
Apologies up front for failing to find related posts...
Am I overlooking a way to get 'zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] 
| zfs receive -n -v ...' to show the contents of the stream?  I'm looking for 
the equivalent of ufsdump 1f - fs ... |  ufsrestore tv - . I'm hoping that this 
might be a faster way than using 'find fs -newer ...' to learn what's changed 
between [EMAIL PROTECTED] and [EMAIL PROTECTED]

I'd probably use this functionality to produce a list of files to back up with
cpio.  I don't want to make archival backups using the stream produced by 'zfs
send' because of caveats regarding the likelihood of change in future releases.
If not 'zfs receive', is there another utility that would do this job?  Thanks.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS receive issue running multiple receives and rollbacks

2007-07-05 Thread David Goldsmith
Thanks, Will, but for the solution I'm building, I can't predict the
hardware the VM will run on, and I don't want to restrict it to the
limited list in the Processor Check document. So unfortunately I think
I'm stuck running 32-bit Solaris for this one.

So it would be nice to have zfs not hang (or give the appearance of
hanging) in the middle of a receive. (Seems like a bug to me that it
does that but I still have much to learn about ZFS.)  Even a message
that said "You need to reboot your system before you can do this" would
be preferable. Heck, it would at least make the Windows user running
this solution feel right at home. :-)

Thanks,

David

Will Murnane wrote:
> On 7/5/07, David Goldsmith <[EMAIL PROTECTED]> wrote:
>> 2. I'm running S10U3 as 32-bit. I don't know if I can run 64-bit Solaris
>> 10 with 32-bit Linux as the host OS. Does anyone know if that will work?
>> If so, I'll give it a shot.
> ISTR that if you have hardware virtualization (Intel VT, on Core 2 Duo
> 6*** chips, or AMD's equivalent technology) you can indeed do that.
> See http://www.vmware.com/pdf/processor_check.pdf .
>
> Will


-- 
David Goldsmith
Course Developer
Sun Identity Management Suite
Sun Learning Services
Voice: (415) 375-8236 (inside Sun: x81217)
E-mail: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/openroad

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS receive issue running multiple receives and rollbacks

2007-07-05 Thread Will Murnane
On 7/5/07, David Goldsmith <[EMAIL PROTECTED]> wrote:
> 2. I'm running S10U3 as 32-bit. I don't know if I can run 64-bit Solaris
> 10 with 32-bit Linux as the host OS. Does anyone know if that will work?
> If so, I'll give it a shot.
ISTR that if you have hardware virtualization (Intel VT, on Core 2 Duo
6*** chips, or AMD's equivalent technology) you can indeed do that.
See http://www.vmware.com/pdf/processor_check.pdf .

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS receive issue running multiple receives and rollbacks

2007-07-05 Thread David Goldsmith
Hi, all,

Environment: S10U3 running as VMWare Workstation 6 guest; Fedora 7 is
the VMWare host, 1 GB RAM

I'm creating a solution in which I need to be able to save off state on
one host, then restore it on another. I'm using ZFS snapshots with ZFS
receive and it's all working fine, except for some strange behavior when
I perform multiple rollbacks and receives.

Here's what I'm seeing:

- On the first host (a VMWare virtual machine, actually), I create an
initial snapshot
- I modify the state of the ZFS file system
- I create a second snapshot
- I perform a zfs send with the -i argument between the two snapshots.
Size of incremental diffs file is around 1.3 GB.
- On the second host (also a VMWare virtual machine, which is a copy of
the first host), I perform zfs receive
- The receive performs correctly in around 3 to 3 1/2 minutes, and at
the end of it, the state of the second host is identical to the state of
the first host

Now, when I perform the following steps:

- Restore the state of the second host to the initial state (using zfs
rollback -r snapshotname)
- Run zfs receive for a second time on the second host

Now the second host appears to lock up. I wait half an hour and the zfs
receive command has not completed. I try to terminate the command with
cntl-c and I get no response.

But if I take the following action:

- Open a second terminal window on the second host
- Power off the second host (reboot might be enough here but since it's
VMWare, poweroff is easy enough)
- Restart the second host
- Restore the state of the second host to the initial state (using zfs
rollback -r snapshotname)
- Run zfs receive on the second host

Now the state of the second host is restored correctly. The zfs receive
takes around 3 to 3 1/2 minutes.

So, is there something I need to do/run on S10 that will let me run zfs
receive for the second time without having to restart the OS?

Thanks,

David

-- 
David Goldsmith
Course Developer
Sun Identity Management Suite
Sun Learning Services
Voice: (415) 375-8236 (inside Sun: x81217)
E-mail: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/openroad

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS receive issue running multiple receives and rollbacks

2007-07-05 Thread David Goldsmith
Hi, Matt,

1. My VMWare host has 4 GB. The VMWare guest (Solaris 10) has 1 GB. I
think that at one point I reset the guest to have 2 GB and ran into the
same problem, but I'm not 100% sure. If you think it's worth trying, I
will.

2. I'm running S10U3 as 32-bit. I don't know if I can run 64-bit Solaris
10 with 32-bit Linux as the host OS. Does anyone know if that will work?
If so, I'll give it a shot.

Still, for the machine to just stall like that strikes me as
problematic. Is there anywhere I could be looking for diagnostics, or
any zfs receive command options I could use to get an idea of what's
going on?

Thanks,

David

Matthew Ahrens wrote:
> David Goldsmith wrote:
>> - Restore the state of the second host to the initial state (using zfs
>> rollback -r snapshotname)
>> - Run zfs receive for a second time on the second host
>>
>> Now the second host appears to lock up. I wait half an hour and the zfs
>> receive command has not completed. I try to terminate the command with
>> cntl-c and I get no response.
>
> Just a guess here, but perhaps you're running 32-bit and running out
> of address space.  Or 1GB of memory is not enough.  Is that 1GB for
> each solaris virtual machine, or 1GB total for linux + 2x solaris?
>
> --matt


-- 
David Goldsmith
Course Developer
Sun Identity Management Suite
Sun Learning Services
Voice: (415) 375-8236 (inside Sun: x81217)
E-mail: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/openroad

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS receive issue running multiple receives and rollbacks

2007-07-05 Thread Matthew Ahrens
David Goldsmith wrote:
> - Restore the state of the second host to the initial state (using zfs
> rollback -r snapshotname)
> - Run zfs receive for a second time on the second host
> 
> Now the second host appears to lock up. I wait half an hour and the zfs
> receive command has not completed. I try to terminate the command with
> cntl-c and I get no response.

Just a guess here, but perhaps you're running 32-bit and running out of 
address space.  Or 1GB of memory is not enough.  Is that 1GB for each solaris 
virtual machine, or 1GB total for linux + 2x solaris?
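
A quick way to check what the guest is actually running (output here is just
illustrative):

# isainfo -kv
32-bit i386 kernel modules
# prtconf | grep Memory
Memory size: 1024 Megabytes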

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive

2007-06-23 Thread michael schuster

Russell Aspinwall wrote:
Hi, 
 
As part of a disk subsystem upgrade I am thinking of using ZFS but there are two issues at present 
 
1) The current filesystems are mounted as /hostname/mountpoint
except for one directory where the mount point is /. Is it possible to mount a ZFS
filesystem as /hostname// so that /hostname/ contains only directory . Storage dir
is empty apart from the directory which contains all the files?


I hope I understand you correctly - if so, I see no reason why this 
shouldn't work:


# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
bigpool   4.11G  92.4G21K  /extra
bigpool/home   819M  92.4G   819M  /export/home
bigpool/store 1.94G  92.4G  1.94G  /extra/store
# ls -als /extra/
total 13
   3 drwxr-xr-x   4 root sys4 May 15 21:44 .
   4 drwxr-xr-x  55 root root1536 Jun 22 03:31 ..
   3 drwxr-xr-x   2 root root   2 May 15 21:44 home
   3 drwxr-xr-x   5 root sys8 May 15 21:54 store
# zfs set mountpoint=/extra/some/more/dirs/store bigpool/store
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
bigpool   4.11G  92.4G25K  /extra
bigpool/home   819M  92.4G   819M  /export/home
bigpool/store 1.94G  92.4G  1.94G  /extra/some/more/dirs/store
bigpool/zones 1.37G  92.4G20K  /zones
bigpool/zones/lx  1.37G  92.4G  1.37G  /zones/lx
# ls -als /extra/some/
total 9
   3 drwxr-xr-x   3 root root   3 Jun 23 10:19 .
   3 drwxr-xr-x   5 root sys5 Jun 23 10:19 ..
   3 drwxr-xr-x   3 root root   3 Jun 23 10:19 more
# ls -als /extra/some/more/
total 9
   3 drwxr-xr-x   3 root root   3 Jun 23 10:19 .
   3 drwxr-xr-x   3 root root   3 Jun 23 10:19 ..
   3 drwxr-xr-x   3 root root   3 Jun 23 10:19 dirs
# ls -als /extra/some/more/dirs/
total 9
   3 drwxr-xr-x   3 root root   3 Jun 23 10:19 .
   3 drwxr-xr-x   3 root root   3 Jun 23 10:19 ..
   3 drwxr-xr-x   5 root sys8 May 15 21:54 store
#

HTH
michael
--
Michael SchusterSun Microsystems, Inc.
recursion, n: see 'recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs receive

2007-06-23 Thread Russell Aspinwall
Hi, 
 
As part of a disk subsystem upgrade I am thinking of using ZFS but there are 
two issues at present 
 
1) The current filesystems are mounted as /hostname/mountpoint except for one
directory where the mount point is /.
   Is it possible to mount a ZFS filesystem as /hostname// so that /hostname/
contains only directory . Storage dir is empty apart from the directory which
contains all the files?
 
2) Is there any possibility of having a "zfs ireceive" for an interactive
receive, similar to the ufsrestore -i command? After twenty-one years of working
with Sun kit, my experience is that I either have to restore a complete
filesystem (three disks failing in a RAID5 set) or I have to restore an
individual file or directory.
   I have been told that "zfs receive" is very quick at restoring a filesystem;
unfortunately it does not permit an interactive restore of selected files and
directories. That is why I would like to see a "zfs ireceive", if possible, which
works on a zfs send created data stream but allows interactive or specified
files or directories to be restored. It does not matter if it is 10x slower
than restoring a complete filesystem; what matters is the ability to selectively
restore directories and files.
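
The obvious workaround, I suppose, is to receive the whole stream into a scratch
filesystem and copy out just what I need, something along these lines (names
made up):

# zfs receive pool/restore < /backup/fs.zsend
# cp -p /pool/restore/path/to/file /hostname/mountpoint/path/to/file
# zfs destroy -r pool/restore

but that still restores the complete filesystem first, which is exactly what I
am trying to avoid.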
 
TIA 
 
Russell 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive and setting properties like compression

2007-04-23 Thread Mark J Musante
On Mon, 23 Apr 2007, Eric Schrock wrote:

> On Mon, Apr 23, 2007 at 11:48:53AM -0700, Lyle Merdan wrote:
> > So If I send a snapshot of a filesystem to a receive command like this:
> > zfs send tank/[EMAIL PROTECTED] | zfs receive backup/jump
> >
> > In order to get compression turned on, am I correct in my thought that
> > I need to start the send/receive and then in a separate window set the
> > compression property?
>
> Yes.  For doing this automatically, you want:

OK, I guess this is a 'depends on what you want' kind of question.  If you
want the receiving filesystem to automatically inherit the properties of
the sent snapshot, then, as Eric said, that's being worked on.  But if
you're just interested in compression (or any other inheritable property),
then you can set it on the filesystem above the one you're receiving into,
and it will be set when the received filesystem is created.

If you don't want to set it on the whole 'backup' pool, you could create
an intermediate filesystem.

# zfs get compression backup
NAMEPROPERTY VALUE SOURCE
backup  compression  off   default
# zfs create -o compression=on backup/compressed
# zfs get compression backup/compressed
NAME   PROPERTY VALUE  SOURCE
backup/compressed  compression  on local
# zfs send tank/[EMAIL PROTECTED] | zfs receive backup/compressed/jump
# zfs get -r compression backup
NAME  PROPERTY VALUE SOURCE
backupcompression  off   default
backup/compressed compression  onlocal
backup/compressed/jumpcompression  oninherited from 
backup/compressed
backup/compressed/[EMAIL PROTECTED]  compression  - -
#

That way you could store both compressed and uncompressed datasets in the
same pool.


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive and setting properties like compression

2007-04-23 Thread Mark J Musante
On Mon, 23 Apr 2007, Lyle Merdan wrote:

> So If I send a snapshot of a filesystem to a receive command like this:
> zfs send tank/[EMAIL PROTECTED] | zfs receive backup/jump
>
> In order to get compression turned on, am I correct in my thought that I
> need to start the send/receive and then in a separate window set the
> compression property?

No: if you set compression=on for the 'backup' pool, the backup/jump
filesystem will inherit that automatically when zfs receive creates it.
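
For example (snapshot and filesystem names invented):

# zfs set compression=on backup
# zfs send tank/fs@snap | zfs receive backup/jump
# zfs get compression backup/jump

The last command should report the value 'on' with SOURCE 'inherited from
backup'.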


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive and setting properties like compression

2007-04-23 Thread Eric Schrock
On Mon, Apr 23, 2007 at 11:48:53AM -0700, Lyle Merdan wrote:
> So If I send a snapshot of a filesystem to a receive command like this:
> zfs send tank/[EMAIL PROTECTED] | zfs receive backup/jump
> 
> In order to get compression turned on, am I correct in my thought that
> I need to start the send/receive and then in a separate window set the
> compression property?

Yes.  For doing this automatically, you want:

6421959 want zfs send to preserve properties ('zfs send -p')

Which is being worked on.

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs receive and setting properties like compression

2007-04-23 Thread Lyle Merdan
So If I send a snapshot of a filesystem to a receive command like this:
zfs send tank/[EMAIL PROTECTED] | zfs receive backup/jump

In order to get compression turned on, am I correct in my thought that I need 
to start the send/receive and then in a separate window set the compression 
property?

Or am I missing something? I understand the destination filesystem cannot exist 
during a receive.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive into zone?

2006-11-03 Thread Matthew Ahrens

Jeff Victor wrote:
If I add a ZFS dataset to a zone, and then want to "zfs send" from 
another computer into a file system that the zone has created in that 
data set, can I "zfs send" to the zone, or can I send to that zone's 
global zone, or will either of those work?


I believe that the 'zfs send' can be done from either the global or 
local zone just fine.  You can certainly do it from the local zone.


FYI, if you are doing a 'zfs recv' into a filesystem that's been 
designated to a zone, you should do the 'zfs recv' inside the zone.


(I think it's possible to do the 'zfs recv' in the global zone, but I 
think you'll have to first make sure that it isn't mounted in the local 
zone.  This is because the global zone doesn't know how to go into the 
local zone and unmount it.)
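
A rough sketch of the delegated-dataset path (zone and dataset names invented;
the delegation takes effect when the zone boots):

# zonecfg -z myzone "add dataset; set name=tank/zonedata; end"
# zoneadm -z myzone reboot

and then, from the sending machine, assuming the zone itself runs sshd:

# zfs send pool/fs@today | ssh root@myzone zfs receive -d tank/zonedata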


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs receive into zone?

2006-11-03 Thread Jeff Victor
If I add a ZFS dataset to a zone, and then want to "zfs send" from another 
computer into a file system that the zone has created in that data set, can I "zfs 
send" to the zone, or can I send to that zone's global zone, or will either of 
those work?



--
Jeff VICTOR  Sun Microsystemsjeff.victor @ sun.com
OS AmbassadorSr. Technical Specialist
Solaris 10 Zones FAQ:http://www.opensolaris.org/os/community/zones/faq
--
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs receive kernel panics the machine

2006-09-13 Thread Niclas Sodergard

Hi,

I'm running some experiments with zfs send and receive on Solaris 10u2
between two different machines. On server 1 I have the following

data/zones/app1838M  26.5G   836M  /zones/app1
data/zones/[EMAIL PROTECTED]  2.35M  -   832M  -

I have a script that creates a new snapshot and sends the diff to the
other machine. When I do a zfs receive on the other side the machine
kernel panics (see below for the panic).

I've done a zpool scrub to make sure the pool is ok (no errors found)
and I now wonder what steps I can take to stop this from happening.

cheers,
Nickus

panic[cpu0]/thread=30002033020: BAD TRAP: type=31 rp=2a101067030
addr=0 mmu_fsr=0 occurred in module "SUNW,UltraSPARC-IIe" due to a
NULL pointer dereference

zfs: trap type = 0x31
pid=615, pc=0x11efa24, sp=0x2a1010668d1, tstate=0x4480001602, context=0x4cd
g1-g7: 7ba9a3a4, 0, 1864400, 0, , 10, 30002033020

02a101066d50 unix:die+78 (31, 2a101067030, 0, 0, 2a101066e10, 1075000)
 %l0-3: c080 0031 0100 2000
 %l4-7: 0181a010 0181a000  004480001602
02a101066e30 unix:trap+8fc (2a101067030, 5, 1fff, 1c00, 0, 1)
 %l0-3:  030004664780 0031 
 %l4-7: e000 0200 0001 0005
02a101066f80 unix:ktl0+48 (7, 0, 18a4800, 30007998a00,
30007998a00, 180c000)  %l0-3: 0003 1400
004480001602 01019840
 %l4-7: 0300020f4200 0003  02a101067030
02a1010670d0 SUNW,UltraSPARC-IIe:bcopy+1554 (fcfff8667600,
30007998a00, 0, 140, 1, 72bb1)
 %l0-3: 0001 03000799c648 0008 0300020faab0
 %l4-7:   0002 01f8
02a1010672d0 zfs:zfsctl_ops_root+b75c8d0 (30007996f40,
30003e82860, , 3000799c5d8, 3000799c590, 2)
 %l0-3: 03000799c538  434b 030001a25500
 %l4-7: 0001 0020 0002 030007996ff0
02a101067380 zfs:dnode_reallocate+150 (10e, 13, 3000799c538, 10e,
0, 30003e82860)
 %l0-3: 7bada800 0011 03000799c590 0200
 %l4-7: 0020 030007996f40 030007996f40 0013
02a101067430 zfs:dmu_object_reclaim+80 (0, 0, 13, 200, 11, 7bada400)
 %l0-3: 0008 0007 0001 1af0
 %l4-7: 03072b00  1aef 030003e82860
02a1010674f0 zfs:restore_object+1b8 (2a101067710, 300038da6c8,
2a1010676c8, 11, 30003e82860, 200)
 %l0-3:  0002 010e 0010
 %l4-7:  4a004000 0004 010e
02a1010675b0 zfs:dmu_recvbackup+608 (300036b7a00, 300036b7cd8,
300036b7b30, 300075159c0, 1, 0)
 %l0-3: 0040 02a101067710 0138 030004664780
 %l4-7: 0002f5bacbac  0200 0001
02a101067770 zfs:zfs_ioc_recvbackup+38 (300036b7000, 0, 0, 0, 9, 0)
 %l0-3: 0004  0064 
 %l4-7:  0300036b700f  0031
02a101067820 zfs:zfsdev_ioctl+160 (70336c00, 5d, ffbfee40, 1f, 7c, e68)
 %l0-3: 0300036b7000   007c
 %l4-7: 7bacd668 703371e0 02e8 70336ef8
02a1010678d0 genunix:___const_seg_90212+1c60c (30006705600,
5a1f, ffbfee40, 13, 300046d9148, 11f86c8)
 %l0-3: 030004be2200 030004be2200 0004 030004664780
 %l4-7: 0003 0001  018a5c00
02a101067990 genunix:ioctl+184 (4, 3000438c9a0, ffbfee40,
ff38db68, 40350, 5a1f)
 %l0-3:   0004 14da
 %l4-7: 0001   

syncing file systems... 2 1 done
skipping system dump - no dump device configured
rebooting...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss