Re: [zfs-discuss] zfs compression - btrfs compression

2008-11-03 Thread przemolicc
On Mon, Nov 03, 2008 at 12:33:52PM -0600, Bob Friesenhahn wrote:
> On Mon, 3 Nov 2008, Robert Milkowski wrote:
> > Now, the good filter could be to use MAGIC numbers within files or
> > approach btrfs come up with, or maybe even both combined.
> 
> You are suggesting that ZFS should detect a GIF or JPEG image stored 
> in a database BLOB.  That is pretty fancy functionality. ;-)

Maybe some general approach (not strictly GIF- or JPEG-oriented)
could be useful.
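For example (output below is only an illustration), the first few bytes of a
file usually tell the whole story - the same magic numbers that file(1) looks at:

$ file photo.dat backup.dat
photo.dat:  JPEG image data
backup.dat: gzip compressed data

Something similarly cheap could decide whether compression is worth
attempting at all.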

Give people a choice and they will love ZFS even more.

Regards
Przemyslaw Bak (przemol)
--
http://przemol.blogspot.com/


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Booting 0811 from USB Stick

2008-11-03 Thread W. Wayne Liauh
> I was able to install os0805 into a USB stick and
> boot from it.  It works really great.
> 
> However, after image-updating to build 95, I am only
> seeing the GRUB prompt.
> 
> I have also installed the 0811_95 LiveDVD into a USB
> stick, but the machine just keeps rebooting itself.

Has anyone had any success?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Bizarre S10U5 / zfs / iscsi / thumper / Oracle RAC problem

2008-11-03 Thread George William Herbert

I'm looking for any pointers or advice on what might have happened
to cause the following problem...

Setup:
Two X4500 / Sol 10 U5 iSCSI servers, four T1000 S10 U4 -> U5 Oracle RAC
DB heads iSCSI clients.  

iSCSI is set up using zfs volumes with shareiscsi=on.  The (slightly
weird) part: we partitioned the disks to get the maximum number of
spindles available for "pseudo-RAID 10" performance zpools (500 GB
disks, 465 usable, partitioned as 115 GB for the "fast" db, 345 GB for
the "archive" db, and 5 GB for a "utility" slice used for the OCR and
VOTE partitions in RAC).  Disks on each server are set up the same way,
with the active zpool disks in 7 "fast" pools (the "fast" partition on
target 1 of each SATA controller together in one pool, target 2 on each
in a second pool, etc.), 7 "archive" pools and 7 "utility" pools.
"fast" and "utility" are zpool pseudo-RAID 10; "archive" is raid-Z.
Fixed-size zfs volumes are built to the full capacity of each pool.
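
For reference, each volume was created along these lines - illustrative
device and pool names, not our exact commands:

# zpool create fast1 mirror c0t1d0s3 c1t1d0s3 mirror c4t1d0s3 c5t1d0s3
# zfs create -V 400g fast1/oravol1
# zfs set shareiscsi=on fast1/oravol1

Plain zvols exported with shareiscsi=on; nothing exotic on top.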

The clients were S10U4 when we first spotted this; we upgraded them
all to S10U5 as soon as we noticed, but the problem happened
again last week.  The X4500s have been S10U5 since they were installed.


Problem:
Both servers have experienced a failure mode which initially
manifested as an Oracle RAC crash and proved via testing to be
an ignored iSCSI write to "fast" partitions.

Test case: 
(/tmp/zero is a 1-k file full of zero)
# dd if=/dev/rdsk/c2t42d0s6 bs=1k count=1
nÉçORCLDISK
FDATA_0008FDATAFDATA_0008ö*Én¨ö*íSô¼>Ú
ö*5|1+0 records in
1+0 records out
# dd of=/dev/rdsk/c2t42d0s6 if=/tmp/zero bs=1k count=1
1+0 records in
1+0 records out
# dd if=/dev/rdsk/c2t42d0s6 bs=1k count=1
nÉçORCLDISK
FDATA_0008FDATAFDATA_0008ö*Én¨ö*íSô¼>Ú
ö*5|1+0 records in
1+0 records out
#


Once this started happening, the same write behavior appeared immediately
on all clients, including new ones which had not previously been
connected to the iSCSI server.

We can write a block of all 0's, or A's, out to any of the iSCSI
devices other than the problem one, and read it back fine.  But the
misbehaving one consistently refuses to actually commit writes,
though it takes the write and returns.  All reads get the old data.

zpool status, zfs list, /var/adm/messages, everything else we look
at on the servers say they're all happy and fine.  But obviously
there's something very wrong with the particular volume / pool
which is giving us problems.

A coworker fixed it the first time by running a manual resilver;
once that was underway, writes did the right thing again.  But that
was just a random shot in the dark - we saw no errors or clear
reason to resilver.

We saw it again, and it blew up the just-about-to-go-live database,
and we had to cut over to SAN storage to hit the deploy window.

It has happened on both of the X4500s we were using for iSCSI, so it's
not a single-point hardware issue.

I have preserved the second failed system in error mode in case
someone has ideas for more diagnostics.

I have an open support ticket, but so far no hint at a solution.

Anyone on list have ideas?


Thanks

-george william herbert
[EMAIL PROTECTED]

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is putting zone paths on zfs supported in Solaris 10 u5/u6 ?? Documentation on that? - 66124245

2008-11-03 Thread Ian Collins
 On Tue 04/11/08 12:29 , Brian Henchey [EMAIL PROTECTED] sent:
> ZFS experts!
> cu is looking for _documentation_ about putting zone paths on ZFS.
> cu is running 5.10 KP 137111-06.  He needs to make sure the u5 kernel
> patch is OK with zone roots.  He also has filesystems mounted via
> legacy mountpoints.
> So... does anybody have documentation on which update of Solaris 10 (NOT
> OpenSolaris), u5 or u6, supports ZFS for zone paths, if any update does?

It certainly is on update 6.

See the migration documentation at

http://docs.sun.com/app/docs/doc/819-5461/ggpdm?a=view

-- 
Ian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FYI - proposing storage pm project

2008-11-03 Thread Jens Elkner
On Mon, Nov 03, 2008 at 02:54:10PM -0800, Yuan Chu wrote:
Hi,
  
>   a disk may take seconds or
>   even tens of seconds to come on line if it needs to be powered up
>   and spin up.

Yes - I really hate this on my U40 and have tried to disable PM for the
HDDs completely. However, I haven't found a way to do this (I thought
/etc/power.conf was the right place, but either it doesn't work as
explained or it is not the right place).

The HDDs are HITACHI HDS7225S, Revision A9CA.

Any hints on how to switch off PM for these HDDs?
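
What I tried, roughly (the device path below is only a placeholder; power.conf
wants the physical path, which 'ls -l /dev/dsk/c1t0d0s0' reveals):

  /etc/power.conf:
    device-thresholds   /pci@0,0/pci-ide@6/ide@0/cmdk@0,0   always-on

  # pmconfig     (re-read the config)

...but the disks still seem to spin down.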

Regards,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] is putting zone paths on zfs supported in Solaris 10 u5/u6 ?? Documentation on that? - 66124245

2008-11-03 Thread Brian Henchey

ZFS experts!

cu is looking for _documentation_ about putting zone paths on ZFS.  cu
is running 5.10 KP 137111-06.  He needs to make sure the u5 kernel patch
is OK with zone roots.  He also has filesystems mounted via legacy
mountpoints.


So... does anybody have documentation on which update of Solaris 10 (NOT
OpenSolaris), u5 or u6, supports ZFS for zone paths, if any update does?


-Brian


 Original Message 
Subject:RE: Sun# 66124245 - Solaris 10 u5/u6 and zone path on zfs
Date:   Mon, 03 Nov 2008 11:18:41 -0700
From:   Wertz, Richard <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
References: <[EMAIL PROTECTED]>



Brian,
Are there any known problems with a whole-root zone that has separate
/opt and /var filesystems from the zone root? (See the zone configuration
below.) Or do we just need to ensure that the ZFS file systems are mounted?


Thank you,
Rich

[NEW : [EMAIL PROTECTED] : /]
# zonecfg -z dxbrok-c4 info
zonename: dxbrok-c4
zonepath: /zones/dxbrok-c4/dxbrok-c4-root
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
[cpu-shares: 2]
fs:
   dir: /opt
   special: /zones/dxbrok-c4/dxbrok-c4-opt
   raw not specified
   type: lofs
   options: []
fs:
   dir: /var
   special: /zones/dxbrok-c4/dxbrok-c4-var
   raw not specified
   type: lofs
   options: []
fs:
   dir: /apps
   special: /zones/dxbrok-c4/dxbrok-c4-apps
   raw not specified
   type: lofs
   options: []
fs:
   dir: /logs
   special: /zones/dxbrok-c4/dxbrok-c4-logs
   raw not specified
   type: lofs
   options: []
net:
   address: 10.9.201.199
   physical: e1000g1
capped-memory:
   physical: 4G
   [swap: 4G]
   [locked: 3G]
rctl:
   name: zone.cpu-shares
   value: (priv=privileged,limit=2,action=none)
rctl:
   name: zone.max-swap
   value: (priv=privileged,limit=4294967296,action=deny)
rctl:
   name: zone.max-locked-memory
   value: (priv=privileged,limit=3221225472,action=deny)

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 03, 2008 9:39 AM

To: Wertz, Richard
Subject: Sun# 66124245 - Solaris 10 u5/u6 and zone path on zfs

Richard,

I'm trying to find documentation that I can actually send you.

I must amend what I said earlier: my colleague told me that a zone path on
ZFS at S10 u5 _is_ supported, but you can't upgrade the OS, so it's "not
recommended".  Where's the documentation on that?  I don't know; I'm
trying to find documentation on it from the experts.


As for S10 u6, you can see under "What's New":

   http://www.sun.com/software/solaris/whats_new.jsp

...under "Data Management" it says "ZFS as a root file system".  So S10 
u6 definitely has that for a feature.


I'm emailing the experts right now.  When I can find some documentation 
to point you to, I'll let you know.


If you need assistance in the meantime, please *call* me or the next 
available OS engineer.


Brian P. Henchey
Solaris Operating System (OS) Team
Sun Microsystems, Burlington, MA USA
Regular Working Hours:
8 AM - 5 PM Mon-Fri Eastern Time
To reach the next available Operating System engineer, please call:
1-800-USA-4-SUN


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] FYI - proposing storage pm project

2008-11-03 Thread Yuan Chu
Hi,

The attached project has been proposed to the OpenSolaris PM community.

thanks,
-jane
-- 
This message posted from opensolaris.org

Storage PM Project
==================

Currently, the main challenges in power managing disks on server
platforms are the issues of latency and reliability:

* Latency, as defined by time to first data access, is incurred when
  powering up a disk to put it in service.  While latency for a disk
  in operation is measured in milliseconds, a disk may take seconds or
  even tens of seconds to come on line if it needs to be powered up
  and spin up.

* Reliability is an issue in multiple contexts.  With RAID
  configurations, which guard against random media errors, all disks
  in the RAID group must be online or offline -- it is not possible to
  achieve power savings by powering down some of the drives.

  In addition, excessive head load/unload operations (to save a
  limited amount of power) can cause disk failure over time.  Any disk
  power management must take this into account.

With the invention of ZFS, the latency issue and part of the
reliability issue can now be addressed.  Specifically, ZFS's built-in
volume management and its software RAID-Z feature are among the key
features responsible for a possible breakthrough in the area of power
managing server disks.

This project is the first step to enabling power savings from more
intelligent management of storage.  It offers potentially substantial
power savings with minimal impact to storage I/O performance for
server or storage platforms utilizing ZFS.  The project positions ZFS
as a Resource Manager that interacts with Solaris's Common Power
Management software to provide a shrink-to-fit Elastic policy on
non-virtualized platforms.

Future projects will provide the above functionality in virtualized
environments and, moving further on, explore opportunities to
provide a similar feature set on non-ZFS filesystems.

Modern SAS and SATA disks provide a variety of power states enabling a
reduction in power consumption.  In some cases, these reduced power
consumption states allow data to remain on line for access at lower
throughput and/or higher latency (by slowing down head seeks, for
example).  In most cases, however, these states result in the data
stored on the disk being rendered inaccessible until the host takes
action to return the disk to normal operation.

In order to allow the disk Resource Manager (i.e. ZFS in the current
project phase) to regulate the disk power consumption, it must be
possible for the software to:

* Identify the storage devices it is using (this information is
  already available by other means).

* Identify the set of power saving states and their characteristics
  (e.g. power requirement at each different state, time to bring
  online).  Note that different storage devices in the same system may
  offer different power states, so it must be possible to discover the
  power states available for each storage device in use.

* Identify the state that a storage device is currently in.

* Request that the device place itself in a specified low power state.

* Request that the device recover from the reduced power state into
  the normal functional state.

The specific components that this project will deliver are as follows:

* Provide infrastructure that allows the disk driver and SATA
  framework to retrieve industry-standard SAS or SATA power state
  information from disks, set and change disk power states, and report
  information on available power states to higher levels of software.
  In this phase of the project, this interface will only function in a
  non-virtualized environment; operation in virtualized environments
  such as xVM or LDoms will be deferred to a future project.

* Provide a Resource Power Manager (RPM) software layer that interacts
  with the Resource Manager and existing PM framework to provide the
  Resource Manager ability to adjust the power states of disks to
  achieve power savings.

* Enhance ZFS to provide Elastic mode power savings by setting disks
  not currently in use to a lower power state and by optionally
  configuring its available storage to minimize the number of drives
  in use.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] FIXED: Re: ZFS Pool can't be imported on fresh install with one broken vdev

2008-11-03 Thread Christian Walther
Okay, I found out what the problem was:

As I expected in my last post, ZFS didn't like the idea of having another disk 
containing a running zpool in a location that was previously occupied by a disk 
that died. Last weekend I created a few snapshots to be moved to another disk, 
so today I was able to remove this disk. A normal
# zpool import datapool
afterwards did the trick.

For the record: My configuration is based on 4 PATA Disks and 1 SATA drive. The 
SATA drive is supposed to be a boot disk (I'm about to get another one to setup 
a proper mirror).
Now, one of my PATA disks died after I managed to ruin the boot archives, so I 
had to reinstall. Since I wanted to keep the configuration of my first install 
intact I had to use another disk -- and there was a spare 20GB PATA drive lying 
around. I attached it to the port the broken disk was attached to before, 
because it was the only free PATA port.
During the reinstall the 20GB PATA drive attached to the port of the previously 
failed disk became the new rootpool. Something ZFS doesn't seem to like.

Conclusion: Never attach a disk that is not supposed to be a replacement for a 
faulted drive to a port that is used in a zpool configuration.

The question remains whether or not this is supposed to be standard behaviour, 
or a bug. It might be a philosophical issue to be discussed, but at least I expect 
ZFS to be more precise in this regard. A message like "Error: Pool can't be 
imported because at least one device has been exchanged with a device belonging 
to a different pool that is already imported on this system" would be fine.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Neal Pollack

On 11/03/08 13:18, Philip Brown wrote:
>> Ok, I think I understand.  You're going to be told
>> that ZFS send isn't a backup (and for these purposes
>> I definitely agree),  ...
>
> Hmph. Well, even for 'replication'-type purposes, what I'm talking about is
> quite useful.
> Picture two remote systems, which happen to have "mostly identical" data.
> Perhaps they were manually synced at one time with tar, or something.
> Now the company wants to bring them both into full sync... but first analyze
> the small differences that may be present.

um, /usr/bin/rsync ?
but agreed, not for huge amounts of data...

> In that scenario, it would then be very useful to be able to do the following:
>
> hostA# zfs snapshot /zfs/[EMAIL PROTECTED]
> hostA# zfs send /zfs/[EMAIL PROTECTED] | ssh hostB zfs receive /zfs/[EMAIL PROTECTED]
>
> hostB# diff -r /zfs/prod /zfs/prod/.zfs/snapshots/A >/tmp/prod.diffs
>
> One could otherwise find "files that are different" with rsync -avn.  But doing it
> with zfs in this way "adds value", by allowing you to locally compare old and new
> files on the same machine, without having to do some ghastly manual copy of each
> different file to a new place, and doing the compare there.
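
To be concrete about the rsync route (paths are illustrative), a dry run
lists the differing files without copying anything:

hostB# rsync -avn hostA:/zfs/prod/ /zfs/prod/ > /tmp/prod.rsync-diffs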
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Philip Brown
> Ok, I think I understand.  You're going to be told
> that ZFS send isn't a backup (and for these purposes
> I definitely agree),  ...

Hmph. Well, even for 'replication'-type purposes, what I'm talking about is 
quite useful.
Picture two remote systems, which happen to have "mostly identical" data. 
Perhaps they were manually synced at one time with tar, or something.
Now the company wants to bring them both into full sync... but first analyze 
the small differences that may be present.

In that scenario, it would then be very useful to be able to do the following:

hostA# zfs snapshot /zfs/[EMAIL PROTECTED]
hostA# zfs send /zfs/[EMAIL PROTECTED] | ssh hostB zfs receive /zfs/[EMAIL 
PROTECTED]

hostB# diff -r /zfs/prod /zfs/prod/.zfs/snapshots/A >/tmp/prod.diffs


One could otherwise find "files that are different" with rsync -avn. But doing 
it with zfs in this way "adds value", by allowing you to locally compare old 
and new files on the same machine, without having to do some ghastly manual 
copy of each different file to a new place, and doing the compare there.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Ross
Ok, I think I understand.  You're going to be told that ZFS send isn't a backup 
(and for these purposes I definitely agree), but if we ignore that, this sounds 
like you're talking about restoring a snapshot from external media, and then 
running a clone off that.

Clones are already supported, but restoring a deleted snapshot isn't.  Can 
anybody comment on whether that would even be possible?  It's an intriguing 
idea if so.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Files from the future are not accessible on ZFS

2008-11-03 Thread Laurent Blume
I see, thanks.
And as Jörg said, I only need a 64-bit binary. I didn't know that, but there is 
one for ls, and it does work as expected:

$ /usr/bin/amd64/ls -l  .gtk-bookmarks
-rw-r--r--   1 user opc0 oct. 16  2057 .gtk-bookmarks

This is a bit absurd. I thought Solaris was fully 64-bit. I hope those tools 
will be integrated soon.

Thanks for the pointers!

Laurent
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs compression - btrfs compression

2008-11-03 Thread Darren J Moffat
Bob Friesenhahn wrote:
> On Mon, 3 Nov 2008, Robert Milkowski wrote:
>> Maybe that's a good one - so if couple of blocks do not compress then
>> flag it in file metadata and do not try to compress any blocks within
>> the file anymore. Of course for some files it will be suboptimal so
>> maybe a dataset option?
> 
> This is interesting but probably a bad idea.  There are many files 
> which contain a mix of compressible and incompressible blocks.  It is 
> quite easy to create these.  One easy way to create such files is via 
> the 'tar' command.
> 
> If compression is too slow, then another approach is to monitor the 
> backlog and skip compressing blocks if the backlog is too high.  

We kind of do that already, in that we stop compressing if we aren't 
"converging to sync" quickly enough, because compressing requires new 
allocations as the block size is smaller.

 >  Then
> use a background scan which compresses blocks when the system is idle. 

There is already a plan for this type of functionality.

> This background scan can have the positive effect that an uncompressed 
> filesystem can be fully converted to a compressed filesystem even if 
> compression is enabled after most files are already written.  

Or if it wasn't initially created with compression=on or if it was but 
later the value of compression= was changed.

 >  There
> would need to be a flag which indicates if the block has already been 
> evaluated for compression or if it was originally uncompressed, or 
> skipped due to load.

The blkptr_t (on disk) will have ZIO_COMPRESS_OFF if the block wasn't 
compressed for any reason.  That can easily be compared with the 
property for the dataset.  The only part that is missing is a reason code.

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs compression - btrfs compression

2008-11-03 Thread Bob Friesenhahn
On Mon, 3 Nov 2008, Robert Milkowski wrote:
> Now, the good filter could be to use MAGIC numbers within files or
> approach btrfs come up with, or maybe even both combined.

You are suggesting that ZFS should detect a GIF or JPEG image stored 
in a database BLOB.  That is pretty fancy functionality. ;-)

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs compression - btrfs compression

2008-11-03 Thread Bob Friesenhahn
On Mon, 3 Nov 2008, Robert Milkowski wrote:
>
> Maybe that's a good one - so if couple of blocks do not compress then
> flag it in file metadata and do not try to compress any blocks within
> the file anymore. Of course for some files it will be suboptimal so
> maybe a dataset option?

This is interesting but probably a bad idea.  There are many files 
which contain a mix of compressible and incompressible blocks.  It is 
quite easy to create these.  One easy way to create such files is via 
the 'tar' command.

If compression is too slow, then another approach is to monitor the 
backlog and skip compressing blocks if the backlog is too high.  Then 
use a background scan which compresses blocks when the system is idle. 
This background scan can have the positive effect that an uncompressed 
filesystem can be fully converted to a compressed filesystem even if 
compression is enabled after most files are already written.  There 
would need to be a flag which indicates if the block has already been 
evaluated for compression or if it was originally uncompressed, or 
skipped due to load.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Philip Brown
>   If
> I'm interpreting correctly, you're talking about a
> couple of features, neither of which is in ZFS yet,
...
> 1.  The ability to restore individual files from a
> snapshot, in the same way an entire snapshot is
> restored - simply using the blocks that are already
> stored.
> 
> 2.  The ability to store (and restore from) snapshots
> on external media.

Those sound useful, particularly the ability to restore a single file, even if 
it was only from a "full" send instead of a snapshot.  But I don't think that's 
what I'm asking for :-)



Lemme try again.

Let's say that you have a mega-source tree in one huge zfs filesystem
(let's say the entire ON distribution or something :-)
Let's say that you had a full zfs send done on Nov 1st.
Then, between then and today, there were "assorted things done" to the source 
tree. Major things. 
Things that people suddenly realized were "bad". But they weren't sure exactly 
how or why. They just knew things worked on Nov 1st but are broken now. Pretend 
there's no such thing as tags, etc.
So: they want to get things up and running, maybe even only in read-only mode, 
from the Nov 1st full send. 
But they also want to take a look at the changes.  And they want to do it in a 
very space-efficient manner.

It would be REALLY REALLY NICE to be able to take a full send of /zfs/srctree 
and restore it to /zfs/[EMAIL PROTECTED], or something like that.
Given that [making up numbers] out of 1 million src files only 1000 have 
changed, it would be "really nice" to have those 999,000 files that have NOT 
changed not be doubly allocated in both /zfs/srctree and /zfs/[EMAIL PROTECTED].
They would actually be hardlinked/snapshot-duped/whatever the terminology is.

I guess you might refer to what I'm talking about as taking a synthetic 
snapshot. Kinda like how veritas backup, etc. can "synthesize" full dumps from a 
sequence of full + incrementals, and then write out a "real" full dump onto a 
single tape, as if a "full dump" happened on the date of a particular 
incremental.

Except that in what I'm talking about for zfs, it would be synthesizing a zfs 
snapshot of the filesystem that was made for the full zfs send (even though the 
original "snapshot" has since been deleted).
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Files from the future are not accessible on ZFS

2008-11-03 Thread Mark Shellenbaum
Laurent Blume wrote:
> Hi all,
> 
> It seems a user managed to create files dated Oct 16, 2057, from a Linux 
> distro that mounted by NFS the volumes on an x2100 server running S10U5, with 
> ZFS volumes.
> 
> The problem is, those files are completely unreachable on the S10 server:
> 
> # ls -l .gtk-bookmarks
> .gtk-bookmarks: Value too large for defined data type
> 
> # more .gtk-bookmarks
> .gtk-bookmarks: Value too large for defined data type
> 
> # cp .gtk-bookmarks /tmp
> cp: cannot access .gtk-bookmarks
> 
> # touch .gtk-bookmarks
> touch: .gtk-bookmarks cannot stat
> 

The touch utility was modified a few months ago to deal with out of 
range timestamps.

PSARC 2008/508 allow touch/settime to fix out of range timestamps
6709455 settime should be able to manipulate files with wrong timestamp
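
With that fix in place, something along these lines (the date is just an
example) should let you pull the file's timestamp back into range:

  # touch -t 200811031200 .gtk-bookmarks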


-Mark
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cleaning user properties

2008-11-03 Thread Eric Schrock
This doesn't make much sense.  All user properties are inheritable, so
you control them just like you do any other property.  For example,
there is no way to "unset" sharenfs on a child filesystem once it's
inherited - you need to explicitly set it to some value other than its
parent.  For user properties, you can just set it to the empty string.

What you're really asking for is non-inherited user properties, but it's
quite easy to treat user properties that way by writing the higher level
software to only pay attention to datasets where the property is set
locally ('zfs get -s local ...').
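
For example, with the property from this thread:

  # zfs get -s local -r net.morettoni:test rpool/export/home

only lists the datasets where the property was set explicitly, not the ones
that merely inherit it.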

- Eric

On Mon, Nov 03, 2008 at 08:35:22AM -0500, Mark J Musante wrote:
> On Mon, 3 Nov 2008, Luca Morettoni wrote:
> 
> > now I need to *clear* (remove) the property from 
> > rpool/export/home/luca/src filesystem, but if I use the "inherit" 
> > command I'll get the parent property, any hint to delete it?
> 
> There currently is no way to do it.  I looked for an existing CR and 
> couldn't find one, so I submitted "6766756 want 'zfs unset'".
> 
> 
> Regards,
> markm
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, Fishworks            http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Files from the future are not accessible on ZFS

2008-11-03 Thread Joerg Schilling
Laurent Blume <[EMAIL PROTECTED]> wrote:

> Hi all,
>
> It seems a user managed to create files dated Oct 16, 2057, from a Linux 
> distro that mounted by NFS the volumes on an x2100 server running S10U5, with 
> ZFS volumes.
>
> The problem is, those files are completely unreachable on the S10 server:

You need 64 bit programs to access these files.


Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Ross Smith
>> If the file still existed, would this be a case of redirecting the
>> file's top level block (dnode?) to the one from the snapshot?  If the
>> file had been deleted, could you just copy that one block?
>>
>> Is it that simple, or is there a level of interaction between files
>> and snapshots that I've missed (I've glanced through the tech specs,
>> but I'm a long way from fully understanding them).
>>
>
> It is as simple as a cp, or drag-n-drop in Nautilus.  The snapshot is
> read-only, so
> there is no need to cp, as long as you don't want to modify it or destroy
> the snapshot.
> -- richard

But that's missing the point here, which was that we want to restore
this file without having to copy the entire thing back.

Doing a cp or a drag-n-drop creates a new copy of the file, taking
time to restore, and allocating extra blocks.  Not a problem for small
files, but not ideal if you're say using ZFS to store virtual
machines, and want to roll back a single 20GB file from a 400GB
filesystem.

My question was whether it's technically feasible to roll back a
single file using the approach used for restoring snapshots, making it
an almost instantaneous operation?

ie:  If a snapshot exists that contains the file you want, you know
that all the relevant blocks are already on disk.  You don't want to
copy all of the blocks, but since ZFS follows a tree structure,
couldn't you restore the file by just restoring the one master block
for that file?

I'm just thinking that if it's technically feasible, I might raise an
RFE for this.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Richard Elling
Ross Smith wrote:
>> Snapshots are not replacements for traditional backup/restore features.
>> If you need the latter, use what is currently available on the market.
>> -- richard
>> 
>
> I'd actually say snapshots do a better job in some circumstances.
> Certainly they're being used that way by the desktop team:
> http://blogs.sun.com/erwann/entry/zfs_on_the_desktop_zfs
>   

Yes, this is one of the intended uses of snapshots.  But snapshots do
not replace backup/restore systems.

> None of this is stuff I'm after personally btw.  This was just my
> attempt to interpret the request of the OP.
>
> Although having said that, the ability to restore single files as fast
> as you can restore a whole snapshot would be a nice feature.  Is that
> something that would be possible?
>   
> Say you had a ZFS filesystem containing a 20GB file, with a recent
> snapshot.  Is it technically feasible to restore that file by itself
> in the same way a whole filesystem is rolled back with "zfs restore"?
>   

cp

> If the file still existed, would this be a case of redirecting the
> file's top level block (dnode?) to the one from the snapshot?  If the
> file had been deleted, could you just copy that one block?
>
> Is it that simple, or is there a level of interaction between files
> and snapshots that I've missed (I've glanced through the tech specs,
> but I'm a long way from fully understanding them).
>   

It is as simple as a cp, or drag-n-drop in Nautilus.  The snapshot is 
read-only, so
there is no need to cp, as long as you don't want to modify it or 
destroy the snapshot.
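
For example (dataset and file names made up), the old copy is sitting right
there under the snapshot directory:

  # ls /tank/vm/.zfs/snapshot/yesterday/disk0.img
  # cp /tank/vm/.zfs/snapshot/yesterday/disk0.img /tank/vm/disk0.img

The cp is only needed if you want to overwrite the live copy.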
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cleaning user properties

2008-11-03 Thread Luca Morettoni
On 11/03/08 14:35, Mark J Musante wrote:
> There currently is no way to do it.  I looked for an existing CR and 
> couldn't find one, so I submitted "6766756 want 'zfs unset'".

I found a little workaround for that:

zfs inherit net.morettoni:test rpool/export/home/luca
zfs inherit net.morettoni:test rpool/export/home/luca/src

and after

zfs set net.morettoni:test= rpool/export/home/luca

Three commands instead of one, but it works :/

-- 
Luca Morettoni  - http://morettoni.net
BLOG @ http://morettoni.blogspot.com/ | GPG key 0xD69411BB
jugUmbria founder - https://jugUmbria.dev.java.net/ | Thawte notary
ITL-OSUG leader - http://www.opensolaris.org/os/project/itl-osug/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Ross Smith
> Snapshots are not replacements for traditional backup/restore features.
> If you need the latter, use what is currently available on the market.
> -- richard

I'd actually say snapshots do a better job in some circumstances.
Certainly they're being used that way by the desktop team:
http://blogs.sun.com/erwann/entry/zfs_on_the_desktop_zfs

None of this is stuff I'm after personally btw.  This was just my
attempt to interpret the request of the OP.

Although having said that, the ability to restore single files as fast
as you can restore a whole snapshot would be a nice feature.  Is that
something that would be possible?

Say you had a ZFS filesystem containing a 20GB file, with a recent
snapshot.  Is it technically feasible to restore that file by itself
in the same way a whole filesystem is rolled back with "zfs restore"?
If the file still existed, would this be a case of redirecting the
file's top level block (dnode?) to the one from the snapshot?  If the
file had been deleted, could you just copy that one block?

Is it that simple, or is there a level of interaction between files
and snapshots that I've missed (I've glanced through the tech specs,
but I'm a long way from fully understanding them).

Ross
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Richard Elling
Ross Smith wrote:
> Hi Darren,
>
> That's storing a dump of a snapshot on external media, but files
> within it are not directly accessible.  The work Tim et all are doing
> is actually putting a live ZFS filesystem on external media and
> sending snapshots to it.
>   

Cognitive disconnect, again.  Snapshots do not contain files, they contain
changed blocks.

> A live ZFS filesystem is far more useful (and reliable) than a dump,
> and having the ability to restore individual files from that would be
> even better.
>
> It still doesn't help the OP, but I think that's what he was after.
>   

Snapshots are not replacements for traditional backup/restore features.
If you need the latter, use what is currently available on the market.
 -- richard

> Ross
>
>
>
> On Mon, Nov 3, 2008 at 9:55 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>   
>> Ross wrote:
>> 
>>> Ok, I see where you're coming from now, but what you're talking about
>>> isn't zfs send / receive.  If I'm interpreting correctly, you're talking
>>> about a couple of features, neither of which is in ZFS yet, and I'd need the
>>> input of more technical people to know if they are possible.
>>>
>>> 1.  The ability to restore individual files from a snapshot, in the same
>>> way an entire snapshot is restored - simply using the blocks that are
>>> already stored.
>>>
>>> 2.  The ability to store (and restore from) snapshots on external media.
>>>   
>> What makes you say this doesn't work ?  Exactly what do you mean here
>> because this will work:
>>
>>$ zfs send [EMAIL PROTECTED] | dd of=/dev/tape
>>
>> Sure it might not be useful and I don't think that is what you mean here  so
>> can you expand on "store snapshots on external media"?
>>
>> --
>> Darren J Moffat
>>
>> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Files from the future are not accessible on ZFS

2008-11-03 Thread Laurent Blume
Hi all,

It seems a user managed to create files dated Oct 16, 2057, from a Linux distro 
that NFS-mounted the volumes of an x2100 server running S10U5 with ZFS 
volumes.

The problem is, those files are completely unreachable on the S10 server:

# ls -l .gtk-bookmarks
.gtk-bookmarks: Value too large for defined data type

# more .gtk-bookmarks
.gtk-bookmarks: Value too large for defined data type

# cp .gtk-bookmarks /tmp
cp: cannot access .gtk-bookmarks

# touch .gtk-bookmarks
touch: .gtk-bookmarks cannot stat

# rm .gtk-bookmarks
.gtk-bookmarks: Value too large for defined data type

A truss shows this:
lstat64(".gtk-bookmarks", 0x08046A60)   Err#79 EOVERFLOW

From a RHEL 4 NFS mount, it shows:

$ ls -l .gtk-bookmarks
-rw-r--r--+ 1 user opc 0 oct 16  2057 .gtk-bookmarks

Is that a ZFS bug or an lstat() one?

TIA,

Laurent
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cleaning user properties

2008-11-03 Thread Mark J Musante
On Mon, 3 Nov 2008, Luca Morettoni wrote:

> now I need to *clear* (remove) the property from 
> rpool/export/home/luca/src filesystem, but if I use the "inherit" 
> command I'll get the parent property, any hint to delete it?

There currently is no way to do it.  I looked for an existing CR and 
couldn't find one, so I submitted "6766756 want 'zfs unset'".


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs compression - btrfs compression

2008-11-03 Thread Robert Milkowski
Hello Darren,

Monday, November 3, 2008, 12:44:29 PM, you wrote:

DJM> Robert Milkowski wrote:
>> Hello zfs-discuss,
>> 
>> http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-unstable-standalone.git;a=commit;h=eecfe5255c533fefd38072a04e4afb56c40d9719
>> "If compression for a given set of pages fails to make them smaller, the
>> file is flagged to avoid future compression attempts later."
>> 
>> Maybe that's a good one - so if couple of blocks do not compress then
>> flag it in file metadata and do not try to compress any blocks within
>> the file anymore. Of course for some files it will be suboptimal so
>> maybe a dataset option?

DJM> I don't understand why having a couple of blocks in a file not 
DJM> compressible should cause the whole file not to be - that to me seems 
DJM> like a bad idea.

DJM> What if for example the file is a disk image and the first couple of 
DJM> blocks aren't compressible but huge chunks of it are ?

DJM> ZFS does compression at the block level and attempts it on every write.
DJM>   If a given block doesn't compress sufficiently well (hardcoded 12.5%)
DJM> or at all then the block is tagged as ZIO_COMPRESS_OFF in the blkptr. 
DJM> That doesn't impact any other blocks though.

DJM> So what would the dataset option you mention actually do ?

DJM> What problem do you think needs solved here ?

Well, let's say you have a file server with lots of different
documents, pictures, etc. Some of these files are jpegs, gifs, zip
files, etc. - they won't compress at all. Currently ZFS will try to
compress each block of these files anyway, each time realizing
that it's below 12.5% - it will be burning CPU cycles for no real
advantage. With gzip compression, skipping them could save quite a lot
of CPU cycles. I know that some files can compress very badly at the
beginning and very well later on - that's why I believe the behavior
should be tunable.


Now, a good filter could be to use MAGIC numbers within files, or the
approach btrfs came up with, or maybe even both combined.
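
The closest thing today is doing it per dataset by hand - for example
(hypothetical layout) keeping already-compressed data where compression is
disabled:

  # zfs create -o compression=off tank/media
  # zfs set compression=gzip tank/docs

...but that relies on users putting files in the right place, which is
exactly what a magic-number or btrfs-style filter would avoid.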



-- 
Best regards,
 Robert Milkowski          mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs compression - btrfs compression

2008-11-03 Thread Darren J Moffat
Robert Milkowski wrote:
> Hello zfs-discuss,
> 
> http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-unstable-standalone.git;a=commit;h=eecfe5255c533fefd38072a04e4afb56c40d9719
> "If compression for a given set of pages fails to make them smaller, the
> file is flagged to avoid future compression attempts later."
> 
> Maybe that's a good one - so if couple of blocks do not compress then
> flag it in file metadata and do not try to compress any blocks within
> the file anymore. Of course for some files it will be suboptimal so
> maybe a dataset option?

I don't understand why having a couple of blocks in a file not 
compressible should cause the whole file not to be - that to me seems 
like a bad idea.

What if for example the file is a disk image and the first couple of 
blocks aren't compressible but huge chunks of it are ?

ZFS does compression at the block level and attempts it on every write. 
  If a given block doesn't compress sufficiently well (hardcoded 12.5%) 
or at all then the block is tagged as ZIO_COMPRESS_OFF in the blkptr. 
That doesn't impact any other blocks though.
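
To put numbers on that 12.5%: for a 128K record the compressed result has to
come in at 112K or less (a saving of at least 16K) before the block is stored
compressed; otherwise it is written uncompressed and tagged ZIO_COMPRESS_OFF.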

So what would the dataset option you mention actually do ?

What problem do you think needs solved here ?

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cleaning user properties

2008-11-03 Thread Pawel Jakub Dawidek
On Mon, Nov 03, 2008 at 11:47:19AM +0100, Luca Morettoni wrote:
> I have a little question about user properties, I have two filesystems:
> 
> rpool/export/home/luca
> and
> rpool/export/home/luca/src
> 
> in this two I have one user property, setted with:
> 
> zfs set net.morettoni:test=xyz rpool/export/home/luca
> zfs set net.morettoni:test=123 rpool/export/home/luca/src
> 
> now I need to *clear* (remove) the property from 
> rpool/export/home/luca/src filesystem, but if I use the "inherit" 
> command I'll get the parent property, any hint to delete it?

You can't delete it; that's just how things work. I work around it by
treating an empty property and the lack of a property the same way.

-- 
Pawel Jakub Dawidek   http://www.wheel.pl
[EMAIL PROTECTED]   http://www.FreeBSD.org
FreeBSD committer Am I Evil? Yes, I Am!


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs compression - btrfs compression

2008-11-03 Thread Robert Milkowski
Hello zfs-discuss,

http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-unstable-standalone.git;a=commit;h=eecfe5255c533fefd38072a04e4afb56c40d9719
"If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later."

Maybe that's a good one - so if a couple of blocks do not compress, then
flag it in the file metadata and do not try to compress any blocks within
the file anymore. Of course for some files it will be suboptimal, so
maybe a dataset option?



-- 
Best regards,
 Robert Milkowski  mailto:[EMAIL PROTECTED]
 http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zones/zonerootA,B,C

2008-11-03 Thread dick hoogendijk

Ian Collins wrote:
>
> On Mon 03/11/08 08:11 , dick hoogendijk [EMAIL PROTECTED] sent:
>> Live Upgrade does -NOT- do this on my system.

>Did you follow the instructions at
> http://docs.sun.com/app/docs/doc/819-5461/ggpdm?a=view

I read the instructions again, but to no avail. Never mind though; the
system runs on ZFS, including all zones.

However, I have -one- question. Sun says to create the zone FS like this:
rpool/ROOT/s10BE/zones (the mountpoint becomes /zones).
Is this absolutely necessary? If you do a
# zfs create rpool/zones
and then set the mountpoint to /zones, the situation works the same.

Or will I get LU problems if the zones are -not- in the BE
(rpool/ROOT/BE/zones/blah)?
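
To make the two layouts concrete (just a sketch, assuming the BE dataset is
mounted at /):

  # zfs create rpool/ROOT/s10BE/zones      <- inherits its mountpoint, ends up at /zones

versus

  # zfs create rpool/zones
  # zfs set mountpoint=/zones rpool/zones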

-- 
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
++ http://nagual.nl/ | SunOS 10u6 10/08 ++

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Ross Smith
Hi Darren,

That's storing a dump of a snapshot on external media, but files
within it are not directly accessible.  The work Tim et al. are doing
is actually putting a live ZFS filesystem on external media and
sending snapshots to it.

A live ZFS filesystem is far more useful (and reliable) than a dump,
and having the ability to restore individual files from that would be
even better.

It still doesn't help the OP, but I think that's what he was after.

Ross



On Mon, Nov 3, 2008 at 9:55 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Ross wrote:
>>
>> Ok, I see where you're coming from now, but what you're talking about
>> isn't zfs send / receive.  If I'm interpreting correctly, you're talking
>> about a couple of features, neither of which is in ZFS yet, and I'd need the
>> input of more technical people to know if they are possible.
>>
>> 1.  The ability to restore individual files from a snapshot, in the same
>> way an entire snapshot is restored - simply using the blocks that are
>> already stored.
>>
>> 2.  The ability to store (and restore from) snapshots on external media.
>
> What makes you say this doesn't work ?  Exactly what do you mean here
> because this will work:
>
>$ zfs send [EMAIL PROTECTED] | dd of=/dev/tape
>
> Sure it might not be useful and I don't think that is what you mean here, so
> can you expand on "store snapshots on external media"?
>
> --
> Darren J Moffat
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] cleaning user properties

2008-11-03 Thread Luca Morettoni
I have a little question about user properties. I have two filesystems:

rpool/export/home/luca
and
rpool/export/home/luca/src

In these two I have one user property, set with:

zfs set net.morettoni:test=xyz rpool/export/home/luca
zfs set net.morettoni:test=123 rpool/export/home/luca/src

now I need to *clear* (remove) the property from 
rpool/export/home/luca/src filesystem, but if I use the "inherit" 
command I'll get the parent property, any hint to delete it?

-- 
Luca Morettoni  - http://morettoni.net
BLOG @ http://morettoni.blogspot.com/ | GPG key 0xD69411BB
jugUmbria founder - https://jugUmbria.dev.java.net/ | Thawte notary
ITL-OSUG leader - http://www.opensolaris.org/os/project/itl-osug/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Darren J Moffat
Ross wrote:
> Ok, I see where you're coming from now, but what you're talking about isn't 
> zfs send / receive.  If I'm interpreting correctly, you're talking about a 
> couple of features, neither of which is in ZFS yet, and I'd need the input of 
> more technical people to know if they are possible.
> 
> 1.  The ability to restore individual files from a snapshot, in the same way 
> an entire snapshot is restored - simply using the blocks that are already 
> stored.
> 
> 2.  The ability to store (and restore from) snapshots on external media.

What makes you say this doesn't work?  Exactly what do you mean here, 
because this will work:

$ zfs send [EMAIL PROTECTED] | dd of=/dev/tape

Sure it might not be useful, and I don't think that is what you mean here, 
so can you expand on "store snapshots on external media"?

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Zpool with raidz+mirror = wrong size displayed?

2008-11-03 Thread George
Hi,

I installed a zpool consisting of:

zpool
  mirror
    disk1  500GB
    disk2  500GB
  raidz
    disk3  1TB
    disk4  1TB
    disk5  1TB

It works fine, but it displays the wrong size (zpool list in a terminal). It 
should be 500GB (mirrored) + 2TB (from the 3TB raidz) = 2.5TB, right? But it 
displays 3.17TB of disk space available.

I first created the mirror and then added the raidz to it (zpool add -f 
poolname raidz disk3 disk4 disk5). There was a warning because they don't have 
the same redundancy level, but that's OK for me. Or is it a problem for 
OpenSolaris/ZFS?
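
My own guess so far (please correct me): zpool list seems to report raw
capacity, so the raidz vdev counts all 3 x 1TB including parity (about
2.7TiB) and the mirror counts one side (about 0.45TiB), which adds up to
roughly the 3.17TB shown, while zfs list should show the ~2.3TiB that is
actually usable.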

Thx
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backup/Restore

2008-11-03 Thread Cesare
On Mon, Nov 3, 2008 at 6:17 AM, Richard Elling <[EMAIL PROTECTED]> wrote:

> Cesare wrote:
>
>> Hi all,
>>
>> I've recently started down to put on production use for zfs and I'm
>> looking to how doing a backup of filesystem. I've more than one server to
>> migrate to ZFS and not so more server where there is a tape backup. So I've
>> put a L280 tape drive on one server and use it from remote connection.
>>
>> The ZFS configuration and command to make a backup is the following:
>>
>> --- Client A
>>
>> Client-A# zpool list
>> NAME   SIZE    USED   AVAIL   CAP  HEALTH  ALTROOT
>> tank   119G   50.0G   69.0G   42%  ONLINE  -
>> Client-A# zfs list
>> NAME        USED  AVAIL  REFER  MOUNTPOINT
>> tank   37.4G  50.2G  36.7K  /tank
>> tank/zones 37.2G  50.2G  25.6G  /opt/zones
>> tank/[EMAIL PROTECTED]  11.6G  -  25.0G  -
>> Client-A# zfs send tank/[EMAIL PROTECTED] | /opt/3pp/openssh/bin/ssh -c
>> blowfish -l root server-backup \(dd ibs=258048 of=/dev/rmt/1 obs=2064384 \)
>>
>> When I want to restore what I've dumped, I do the following:
>>
>> Client-B# zfs list
>> NAME  USED  AVAIL  REFER  MOUNTPOINT
>> tank  144K  66.9G  27.5K  /tank
>> tank/backup  24.5K  66.9G  24.5K  /tank/backup
>> tank/zones   24.5K  66.9G  24.5K  /tank/zones
>> Client-B# zpool list
>> NAME   SIZE   USED   AVAIL   CAP  HEALTH  ALTROOT
>> tank   68G    148K   68.0G    0%  ONLINE  -
>> Client-B# /opt/3pp/openssh/bin/ssh -c blowfish -l root server-backup \(dd
>> if=/dev/rmt/1 bs=2064384 \) | zfs receive tank/zones
>> stty: : Invalid argument
>>
>
> ^
> You must fix your shell environment first.  Try something
> simple, like "ssh ... server-backup ls" and see what is returned.


I tried to fix it by adding the "-qt" flags to the ssh command. The "stty"
error disappeared, but at first the result did not change. I then noticed that
the environment on the server-backup host sources a configuration file that
prints some information to stdout. Those characters were invalidating the
input stream for zfs receive.

Here is the output now:

--
Client-B# /opt/3pp/openssh/bin/ssh -qt  -c blowfish -l root server-backup
\(dd bs=2064384  if=/dev/rmt/1 \) | zfs receive -v tank/zones
receiving full stream of tank/[EMAIL PROTECTED] into tank/[EMAIL PROTECTED]
--

Thanks a lot for pointing me in the right direction.
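
For the archives: the cleaner fix on the server-backup side (a sketch,
assuming the noise comes from its shell startup file) is to only print for
interactive logins, e.g. in its .profile:

  if [ -t 0 ]; then
      echo "Welcome to server-backup"    # interactive-only chatter
  fi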

Cesare



> -- richard
>
>  cannot receive: invalid stream (bad magic number)
>> select: Bad file number
>> Client-B#
>>
>> What the trick?
>>
>> Thanks
>>  Cesare
>> --
>>
>> Groucho Marx  - "All people are born alike - except Republicans and
>> Democrats."
>> 
>>
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>>
>
>


-- 

Jack Benny  - "Give me golf clubs, fresh air and a beautiful partner, and
you can keep the clubs and the fr...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss