On Tue, Jan 12, 2010 at 3:16 AM, sanjay nadkarni (Laptop)
<Sanjay.Nadkarni at sun.com> wrote:
> Caimanics,
>   Please review the requirements documents for replication in OpenSolaris.
> This provides the Flash type functionality but includes support for
> zones.
>
> Thanks
>
> -Sanjay

> Archive: A compressed zfs send stream of the active
> boot environment is defined as an archive.

s/A compressed/An optionally compressed/

> Master Image: The archive and the metadata for a specific
> system is the master image. This image can be used
> to clone other systems.

s/clone other systems/create clone systems/

I like the idea of being able to more fully specify system
configuration (e.g. network, partitioning) as part of the archive.
However, it should be possible to override those.  For instance, in a
disaster recovery situation the hardware and/or target network may be
different than the master.

> 4. Master images must support global and non-[global ]zones.

Does this mean that a master image must support all of the following
combinations:

- Global zone only
- Global zone and one non-global zone
- Global zone and more than one non-global zone
- Non-global zone only

I would find it useful to be able to deploy a flash-ish global zone in
one phase, followed by deploying various non-global zones over time.
Much like the architecture requirements, presumably these may need to
do an "image update" during attach.
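
The phased deployment could lean on the zone tooling that exists today;
a sketch (the zone name is illustrative):

    # Phase 2: deploy a non-global zone later, letting attach update
    # its packages to match the already-deployed global zone:
    zoneadm -z webzone attach -u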

> 5. Master images created must be portable, i.e. the images need
> to be in a compressed format.  Multiple compression options
> must be provided.

Not sure how compression relates to being portable.

Compression is good, but it should be optional and the algorithm
should be selectable based on user needs.  In an environment with
gigabit networks and Niagara CPUs, I am much more likely to select
lzjb over bzip2 as the compression algorithm.  In a global WAN with Xeon
CPUs, bzip2 is probably the right compression algorithm.
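
A sketch of what selectable compression could look like; the snapshot
name and the compressor list are assumptions, and since lzjb is an
in-pool algorithm the stream-level choice would be among external
compressors.  The zfs lines are commented out and a stand-in stream
shows the selection pattern:

```shell
#!/bin/sh
# Archive creation with a user-selectable compressor (hypothetical).
COMPRESS="${COMPRESS:-gzip}"   # fast default; bzip2 for WAN, pbzip2 for MT
SNAP=rpool/ROOT/opensolaris@archive   # illustrative BE snapshot name

# On a real system:
#   zfs snapshot -r "$SNAP"
#   zfs send -R "$SNAP" | "$COMPRESS" > /var/tmp/master.zfs."$COMPRESS"
# Same pipeline with a stand-in stream, to show the selection pattern:
printf 'send stream bytes\n' | "$COMPRESS" > /tmp/master.zfs."$COMPRESS"
```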

zfs send data streams can be deduplicated.  In the case of a send
stream having a global zone and one or more non-global zones (and some
other scenarios) this may have more effect than even an aggressive
compression algorithm.
http://arc.opensolaris.org/caselog/PSARC/2009/557/
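
If that case lands as I expect, requesting a deduplicated stream would
look something like (snapshot name illustrative):

    zfs send -D rpool/ROOT/opensolaris@archive > master.zfs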

Disks are likely to get faster (as SSD comes down in price) while each
CPU pipeline is not really getting much faster.  Consideration should
be given to file formats and algorithms that are MT-friendly.  For
example, http://compression.ca/pbzip2/.  I highly suggest doing all of
your work on a Niagara system to help inspire thoughts along these lines.

> 7. Must provide the ability to add additional architecture
> specific pkgs. This requirement is especially applicable to
> SPARC. If a master image is created on specific SPARC
> platform, and the image is applied to another SPARC
> platform, then the tool must detect the platform
> differences and have a method to accept platform specific
> pkgs.

Doesn't this apply to x86 as well if the master lacked drivers that
are needed on the clone system?

Just to be clear, this gets us out of the current problem with
requiring separate flash archives for sun4u and sun4v systems, right?

> Zone information: Number and types of zones. (?)
> Per_zone_info (need to zero out guid info)
> Name Service: Type of name service used ( NIS, DNS, LDAP)

Why not just leave the config files inside the zone alone rather than
trying to capture them in some other format?  I worry that otherwise
the current unrealistic assumption that a system uses only one name
service will be carried forward.  That is, currently sysidcfg has no
way to say to use DNS for hosts, LDAP for passwd and group, and files
for the rest.
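
That is, an nsswitch.conf like the following (illustrative fragment) is
perfectly legal today but cannot be expressed via sysidcfg:

    hosts:   dns files
    passwd:  ldap files
    group:   ldap files
    netmasks: files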


General:

The global zone and non-global zones may be in different pools.  So
long as the boot pool is limited to single disks or mirrors, it is
quite likely that systems intended to host a lot of zones will have
the zones in a pool other than the boot pool.

If network metadata is contained in the archive, there needs to be a
way to deal with different NICs in different machines.  For example,
one would think that a T1000 and T2000 are relatively the same.
Unfortunately one has bge interfaces and the other has e1000g
interfaces.
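
One way out would be a post-clone fixup step that remaps per-interface
config files; a minimal sketch, where ROOT, the interface names, and
the detection step are all assumptions (a scratch directory stands in
for the mounted clone root so the sketch is safe to run):

```shell
#!/bin/sh
# Hypothetical post-clone NIC remap: the master had bge (T2000), the
# clone has e1000g (T1000).
ROOT="${ROOT:-$(mktemp -d)}"               # would be the mounted clone root
mkdir -p "$ROOT/etc"
echo myhost > "$ROOT/etc/hostname.bge0"    # as captured from the master

OLD=bge0                                   # interface in the archive
NEW=e1000g0                                # interface detected on the clone
if [ -f "$ROOT/etc/hostname.$OLD" ]; then
    mv "$ROOT/etc/hostname.$OLD" "$ROOT/etc/hostname.$NEW"
fi
```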

Any support for incremental archives?  Presumably this would leverage
zfs send -I.  It would be helpful for those cases where there is a
very common base configuration with several variants built from that
base.
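
A sketch of how that might look with the existing commands; the
dataset and snapshot names are illustrative:

    # Full archive of the common base:
    zfs send rpool/ROOT/base@v1 | gzip > base-v1.zfs.gz
    # Incremental archive carrying all snapshots between @v1 and @v2:
    zfs send -I rpool/ROOT/base@v1 rpool/ROOT/base@v2 | gzip > base-v1-v2.zfs.gz
    # A clone restores the full stream first, then the increment on top.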

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
