On 19/03/2010 16:11, joerg.schill...@fokus.fraunhofer.de wrote:
Darren J Moffat<darr...@opensolaris.org>  wrote:

I'm curious: why isn't a 'zfs send' stream that is stored on a tape considered a backup, yet the implication is that a tar archive stored on a tape is?

You cannot get a single file out of the zfs send datastream.

I don't see that as part of the definition of a backup - you obviously do - so we will just have to disagree on that.

        ZFS system attributes (as used by the CIFS server and locally) ?

star does support such things for Linux and FreeBSD; the problem on Solaris is
that the documentation of the interfaces for this Solaris-local feature is poor.
The way Sun tar archives the attributes is non-portable.

Could you point to documentation?

getattrat(3C) / setattrat(3C)

Even has example code in it.

This is what ls(1) uses.

It should be straightforward to add portable support, integrated into the
framework that already supports FreeBSD and Linux attributes.

Great, do you have a time frame for when you will have this added to star then ?

        ZFS dataset properties (compression, checksum etc) ?

Where is the documentation of the interfaces?

There isn't any for those because the libzfs interfaces are currently
still private.   The best you can currently do is to parse the output of
'zfs list', e.g.:
        zfs list -H -o compression rpool/export/home

Not ideal but it is the only publicly documented interface for now.
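Since the CLI output is the only stable interface for now, a backup tool can shell out to zfs(1M) and parse the tab-separated -H output. A minimal sketch of that approach (the helper names and the dataset name are mine, purely illustrative, not part of any ZFS API):

```python
import subprocess

def parse_props(output):
    """Parse tab-separated lines from `zfs list -H -o name,compression,checksum`."""
    props = {}
    for line in output.splitlines():
        name, compression, checksum = line.split("\t")
        props[name] = {"compression": compression, "checksum": checksum}
    return props

def get_dataset_props(dataset):
    """Ask the zfs(1M) CLI for one dataset's properties (only works on a ZFS system)."""
    out = subprocess.run(
        ["zfs", "list", "-H", "-o", "name,compression,checksum", dataset],
        check=True, capture_output=True, text=True,
    ).stdout
    return parse_props(out)

# The -H output format this expects, one dataset per line:
sample = "rpool/export/home\ton\tfletcher4"
print(parse_props(sample))
```

Fragile, of course: it records only what the property is set to now, not how existing blocks were actually written.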

As long as there is no interface that supports what I discussed with
Jeff Bonwick in September 2004:

-       A public interface to get the property state

That would come from libzfs. There are private interfaces just now, zfs_prop_get()/zfs_prop_set(), that are very likely what you need. They aren't documented or public, though, and are subject to change at any time.

-       A public interface to read the file raw in compressed form

I think you are missing something about how ZFS works here. Files aren't stored in a compressed form; some blocks of a file may be compressed if compression is enabled on the dataset. Note that the compression and checksum properties only indicate which algorithm will be used to compress (or checksum) blocks for new writes; they say nothing about which algorithm the existing blocks of a given file were compressed with. In fact, for any given file, some blocks may be compressed and some not. The reasons for a block not being compressed include:

1) it didn't compress,
2) it was written when compression=off,
3) it didn't compress enough to be worth storing compressed.

It is even possible, if the user changed the value of compression, that blocks within one file are compressed with different algorithms.

So you won't ever get this because ZFS just doesn't work like that.

In fact, 'zfs send' doesn't store compressed data either. The 'zfs send' stream carries the blocks in the form they exist in the in-memory ARC, i.e. uncompressed.

In-kernel it is possible to ask for a block in its raw (i.e. compressed) form, but that is only available to consumers of arc_read() and zio_read(), far below the ZPL layer and applications like star.
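A toy model of the point above (pure illustration using zlib, not ZFS code): the compression setting is only consulted at write time, per block, so one file can end up with a mix of stored forms.

```python
import os
import zlib

def write_block(data, compression):
    """Toy model: decide at write time, per block, how it is stored.
    Like ZFS, fall back to storing the block uncompressed when
    compression doesn't actually save space."""
    if compression == "off":
        return ("uncompressed", data)
    compressed = zlib.compress(data)
    if len(compressed) >= len(data):  # didn't compress (enough)
        return ("uncompressed", data)
    return ("zlib", compressed)

# One "file", written while the dataset property changes between writes:
blocks = [
    write_block(b"A" * 4096, compression="on"),   # compressible -> stored compressed
    write_block(b"B" * 4096, compression="off"),  # property off -> stored raw
    write_block(os.urandom(4096), compression="on"),  # incompressible -> stored raw
]
print([form for form, _ in blocks])  # mixed forms within a single file
```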

-       A public interface to write the file raw in compressed form

Not even a private API exists for this; there is no capability to hand a raw (i.e. compressed) block to arc_write() or zio_write().

I am not sure whether this is relevant for a backup. If there is a need
to change these states on a per-directory basis, there needs to be an
easy-to-use public interface.

I don't understand what you mean by that; can you give me an example?

--
Darren J Moffat
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
