Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-05 Thread Dick Davies
Is 'zpool attach' enough for a root pool?
I mean, does it install GRUB bootblocks on the disk?
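If it doesn't, I'm guessing the manual recipe on x86 would look something
like this (untested sketch; pool and device names made up):

  zpool attach rpool c1t0d0s0 c1t1d0s0
  # put GRUB bootblocks on the new half of the mirror
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0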

On Wed, Jul 2, 2008 at 1:10 PM, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> Hello Tommaso,
>
> Wednesday, July 2, 2008, 1:04:06 PM, you wrote:

>  the root filesystem of my thumper is a ZFS with a single disk:
>

>
> is it possible to add a mirror to it? I seem to be able only to add a new
> PAIR of disks in mirror, but not to add a mirror to the existing disk ...

> zpool attach

-- 
Rasputnik :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Microsoft WinFS for ZFS?

2008-03-17 Thread Dick Davies
Have you ever used a Mac? HFS has had these features for years.

On Mon, Mar 17, 2008 at 6:33 PM, Bryan Wagoner <[EMAIL PROTECTED]> wrote:
> Actually, having a database on top of an FS is really useful.  It's a Content 
> Addressable Storage system. One of the problems home users have is that 
> they are putting more and more of their lives in digital format.  Users need 
> a way to organize and search all that info in some sort of meaningful way.  
> Imagine having thousands of photos spread all over your filesystems with 
> nothing but filenames associated with them. That's not too easily searchable 
> or organized.
>
>  Imagine all the objects stored on your filesystem have tags associated with 
> them or other metadata that is required at save time.  Then you can start 
> doing things like virtual folders.  Imagine a folder on your windows desktop 
> that says "Steely Dan" and when you click it, it runs a query that shows you all the 
> music files on your computer by Steely Dan and pretends to be an explorer 
> window. Or a virtual folder that says "Springbreak 2008 pics" and when you 
> click it, it goes through all your gagillion photos and creates an explorer 
> window of just the spring break pics.
>
>  Today, you'd have to tag the Metadata yourself as you put content on your 
> computer,  but Microsoft has other initiatives to do facial recognition in 
> photos and some other things to go along with the Content addressable storage 
> system.
>
>  There's a lot of uses for Content Addressable Storage systems including 
> revision control and some other things that home users can benefit from.  At 
> the Enterprise level, such a system would be something like the 
> 5800(Honeycomb) from Sun.
>
>
>
>
>  This message posted from opensolaris.org
>  ___
>  zfs-discuss mailing list
>  zfs-discuss@opensolaris.org
>  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
Rasputnik :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Yager on ZFS

2007-12-06 Thread Dick Davies
On Dec 6, 2007 1:13 AM, Bakul Shah <[EMAIL PROTECTED]> wrote:

> Note that I don't wish to argue for/against zfs/billtodd but
> the comment above about "no *real* opensource software
> alternative zfs automating checksumming and simple
> snapshotting" caught my eye.
>
> There is an open source alternative for archiving that works
> quite well.  venti has been available for a few years now.
> It runs on *BSD, linux, macOS & plan9 (its native os).  It
> uses strong crypto checksums, stored separately from the data
> (stored in the pointer blocks) so you get a similar guarantee
> against silent data corruption as ZFS.

Last time I looked into Venti, it used content hashing to
locate storage blocks. Which was really cool, because (as
you say) it magically consolidates blocks with the same checksum
together.

The 45 byte score is the checksum of the top of the tree, isn't that
right?

Good to hear it's still alive and has been revamped somewhat.

ZFS snapshots and clones save a lot of space, but the
'content-hash == address' trick means you could potentially save
much more.

Though I'm still not sure how well it scales up -
a bigger working set means you need longer (more expensive) hashes
to avoid a collision, and even then it's not guaranteed.

When I last looked they were still using SHA-160
and I ran away screaming at that point :)

> Google for "venti sean dorward".  If interested, go to
> http://swtch.com/plan9port/ and pick up plan9port (a
> collection of programs from plan9, not just venti).  See
> http://swtch.com/plan9port/man/man8/index.html for how to use
> venti.




-- 
Rasputnik :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs mirroring question

2007-12-05 Thread Dick Davies
On Dec 5, 2007 9:54 PM, Brian Lionberger <[EMAIL PROTECTED]> wrote:
> I create two zfs's on one pool of four disks with two mirrors, such as...
> /
> zpool create tank mirror disk1 disk2 mirror disk3 disk4
>
> zfs create tank/fs1
> zfs create tank/fs2/
>
> Are fs1 and fs2 striped across all four disks?

Yes - they're striped across both mirrors (and so across all four underlying disks).

> If two disks fail that represent a 2-way mirror, do I lose data?

Hell yes.


-- 
Rasputnik :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create ZFS pool ?

2007-11-17 Thread Dick Davies
Just a +1 - I use an fdisk partition for my zpool and it works fine
(the plan was to dual-boot with FreeBSD, and this makes the vdevs slightly
easier to address from both OSes).

zpool doesn't care what the partition ID is; just give it something like:

  zpool create gene c0d0pN
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] CIFS and user-visible snapshots

2007-11-07 Thread Dick Davies
Does anybody know if the upcoming CIFS integration in b77 will
provide a mechanism for users to see snapshots (like .zfs/snapshot/
does for NFS)?

-- 
Rasputnik :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SXDE vs Solaris 10u4 for a home file server

2007-11-04 Thread Dick Davies
On 04/11/2007, Ima <[EMAIL PROTECTED]> wrote:

> I'm setting up a home file server, which will mainly just consist of a ZFS 
> pool and access with SAMBA.  I'm not sure if I should use SXDE for this, or 
> Sol 10u4.  Does SXDE offer any ZFS improvements over 10u4 for this purpose?

I'd be inclined to go for SXCE rather than SXDE myself - mainly
because there are good things around the corner (CIFS integration
being the obvious one for a NAS) that you'll be able to try out sooner
that way.

> My hardware is supported under both platforms.  Additionally, with SXDE I 
> worry that I may spend more time maintaining the OS, and about the 
> availability of upgrades for it over the next 5-10 years, so I'm not really 
> sure which would be better in the long run.

For a home NAS, I wouldn't worry much about maintenance taking a lot of time.

It's up to you whether you need a bleeding-edge feature or not, but it's
nice to have the option. My router takes much less work than a
server would, but only because Linksys are slackers when it comes to
firmware updates; the kernel/firewall it's built with must be horribly
outdated by now.

-- 
Rasputnik :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-help] Squid Cache on a ZFS file system

2007-10-30 Thread Dick Davies
On 29/10/2007, Tek Bahadur Limbu <[EMAIL PROTECTED]> wrote:

> I created a ZFS file system like the following with /mypool/cache being
> the partition for the Squid cache:
>
> 18:51:27 [EMAIL PROTECTED]:~$ zfs list
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> mypool 478M  31.0G  10.0M  /mypool
> mypool/cache   230M  9.78G   230M  /mypool/cache
> mypool/home226M  31.0G   226M  /export/home
>
> Note: I only have a few days of experience on Solaris and I might have
> made some mistakes with the above ZFS partitions!

No, that looks ok. You can just 'zfs set quota=<size> mypool/cache'
to make it bigger in the future if need be.
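For example (size made up):

  zfs set quota=20G mypool/cache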

> Basically, I want to know if somebody here on this list is using a ZFS
> file system for a proxy cache and what will be it's performance? Will it
> improve and degrade Squid's performance? Or better still, is there any
> kind of benchmark tools for ZFS performance?

filebench sounds like it'd be useful for you. It's coming in the next Nevada
release, but since it looks like you're on Solaris 10, take a look at:

  http://blogs.sun.com/erickustarz/entry/filebench

Remember to 'zfs set atime=off mypool/cache' -
there's no need for atime updates on a squid cache.

-- 
Rasputnik :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs won't import a pool automatically at boot

2007-10-16 Thread Dick Davies
On 16/10/2007, Michael Goff <[EMAIL PROTECTED]> wrote:
> Hi,
>
> When jumpstarting s10x_u4_fcs onto a machine, I have a postinstall script 
> which does:
>
> zpool create tank c1d0s7 c2d0s7 c3d0s7 c4d0s7
> zfs create tank/data
> zfs set mountpoint=/data tank/data
> zpool export -f tank

Try without the '-f' ?


-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] case 37962758 - zfs can't destroy Sol10U4

2007-10-16 Thread Dick Davies
On 16/10/2007, Renato Ferreira de Castro - Sun Microsystems - Gland Switzerland wrote:
> What he tried to do:
> ---
> - re-mount and umount manually, then try to destroy.
> # mount -F zfs zpool_dokeos1/dokeos1/home /mnt
> # umount /mnt
> # zfs destroy dokeos1_pool/dokeos1/home
> cannot destroy 'dokeos1_pool/dokeos1/home': dataset is busy
>
> The file system is not mounted:

I had the same thing on s10u3. Try

zfs mount dokeos1_pool/dokeos1/home
zfs umount dokeos1_pool/dokeos1/home
zfs destroy dokeos1_pool/dokeos1/home

-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zone root on a ZFS filesystem and Cloning zones

2007-10-12 Thread Dick Davies
On 11/10/2007, Dick Davies <[EMAIL PROTECTED]> wrote:
> No, they aren't (i.e. zoneadm clone on S10u4 doesn't use zfs snapshots).
>
> I have a workaround I'm about to blog

Here it is - hopefully it'll be of some use:

  http://number9.hellooperator.net/articles/2007/10/11/fast-zone-cloning-on-solaris-10
-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zone root on a ZFS filesystem and Cloning zones

2007-10-11 Thread Dick Davies
No, they aren't (i.e. zoneadm clone on S10u4 doesn't use zfs snapshots).

I have a workaround I'm about to blog, the gist of which is:

make the 'template' zone on zfs
boot, configure, etc.
zonecfg -z template detach

zfs snapshot tank/zones/[EMAIL PROTECTED]
zfs clone tank/zones/[EMAIL PROTECTED] tank/zones/clone

zonecfg -z clone 'create -a /zones/clone'
zoneadm -z clone attach

Will post the URL once I pull my finger out.

On 11/10/2007, Tony Marshall <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Does anyone have an update on the support of having a zones root on a
> zfs filesystem with Solaris update 4?  The only information that I have
> seen so far is that it was planned for late 2007 or early 2008.
>
> Also I was hoping to use the snapshot and clone capabilities of zfs to
> clone zones as a faster deployment method for new zones, is this
> supported and if not when is it likely to be supported?

-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Dick Davies
Hi Thomas

the point I was making was that you'll see low performance figures
with 100 concurrent threads. If you set nthreads to something closer
to your expected load, you'll get a more accurate figure.

Also, there's a new filebench out now, see

 http://blogs.sun.com/erickustarz/entry/filebench

It will be integrated into Nevada in b76, according to Eric.

On 09/10/2007, Thomas Liesner <[EMAIL PROTECTED]> wrote:
> Hi again,
>
> i did not want to compare the filebench test with the single mkfile command.
> Still, I was hoping to see similar numbers in the filebench stats.
> Any hints what i could do to further improve the performance?
> Would a raid1 over two stripes be faster?
>
> TIA,
> Tom
>
>
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fileserver performance tests

2007-10-08 Thread Dick Davies
On 08/10/2007, Thomas Liesner <[EMAIL PROTECTED]> wrote:

> [EMAIL PROTECTED] # ./filebench
> filebench> load fileserver
> filebench> run 60

> IO Summary:   8088 ops 8017.4 ops/s, (997/982 r/w) 155.6mb/s,508us 
> cpu/op,   0.2ms
> 12746: 65.266: Shutting down processes
> filebench>
>
> I expected to see some higher numbers really...
> a simple "time mkfile 16g lala" gave me something like 280Mb/s.
>
> Would anyone comment on this?

If you

set $nthreads=1

(which is closer to a single mkfile command)
you'll probably find it's much faster.
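i.e. at the filebench prompt (sketch based on your transcript above):

  filebench> load fileserver
  filebench> set $nthreads=1
  filebench> run 60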

-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] safe zfs-level snapshots with a UFS-on-ZVOL filesystem?

2007-10-08 Thread Dick Davies
I had some trouble installing a zone on ZFS with S10u4
(a bug in the postgres packages) that went away when I used a
ZVOL-backed UFS filesystem for the zonepath.

I thought I'd push on with the experiment (in the hope Live Upgrade
would be able to upgrade such a zone).
It's a bit unwieldy, but everything worked reasonably well -
performance isn't much worse than straight ZFS (it gets much faster
with compression enabled, but that's another story).
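For context, the setup is roughly this (size and names made up):

  zfs create -V 8g tank/zvol/zone1                 # carve out a zvol
  newfs /dev/zvol/rdsk/tank/zvol/zone1             # put UFS on it
  mount -F ufs /dev/zvol/dsk/tank/zvol/zone1 /zones/zone1  # use it as the zonepath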

The only fly in the ointment is that ZVOL level snapshots don't
capture unsynced data up at the FS level. There's a workaround at:

  http://blogs.sun.com/pgdh/entry/taking_ufs_new_places_safely

but I wondered if there was anything else that could be done to avoid
having to take such measures?
I don't want to stop writes to get a snap, and I'd really like to avoid UFS
snapshots if at all possible.

I tried mounting forcedirectio in the (mistaken) belief that this
would bypass the UFS
buffer cache, but it didn't help.

-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS booting with Solaris (2007-08)

2007-10-06 Thread Dick Davies
On 30/09/2007, William Papolis <[EMAIL PROTECTED]> wrote:
> OK,
>
> I guess using this ...
>
>  set md:mirrored_root_flag=1
>
> for Solaris Volume Manager (SVM) is not supported and could cause problems.
>
> I guess it's back to my first idea ...
>
> With 2 disks, setup three SDR's (State Database Replicas)
>Drive 0 = 1 SDR -> If this drive fails auto-magically boot DRIVE 1
>Drive 1 = 2 SDR's   -> If this drive fails Sysadmin intervention required
>
> Well that's OK, at least 50% of the time the system won't KACK.

What you gain on the swings, you lose on the roundabouts.
But if you lose drive 1 while the system is running, it'll now panic
(whereas with 50% of quorum, it would continue to run).
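For anyone following along, that 1+2 replica layout is just (device names
made up):

  metadb -a -f c0t0d0s7       # one replica on drive 0
  metadb -a -c 2 c1t0d0s7     # two replicas on drive 1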

-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] O.T. "patches" for OpenSolaris

2007-10-04 Thread Dick Davies
On 30/09/2007, William Papolis <[EMAIL PROTECTED]> wrote:
> Henk,
>
> By upgrading do you mean, rebooting and installing Open Solaris from DVD or 
> Network?
>
> Like, no Patch Manager install some quick patches and updates and a quick 
> reboot, right?

You can live upgrade and then do a quick reboot:

  
http://number9.hellooperator.net/articles/2007/08/08/solaris-laptop-live-upgrade


-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] When I stab myself with this knife, it hurts... But - should it kill me?

2007-10-03 Thread Dick Davies
On 04/10/2007, Nathan Kroenert <[EMAIL PROTECTED]> wrote:

> Client A
>   - import pool make couple-o-changes
>
> Client B
>   - import pool -f  (heh)

> Oct  4 15:03:12 fozzie ^Mpanic[cpu0]/thread=ff0002b51c80:
> Oct  4 15:03:12 fozzie genunix: [ID 603766 kern.notice] assertion
> failed: dmu_read(os, smo->smo_object, offset, size, entry_map) == 0 (0x5
> == 0x0)
> , file: ../../common/fs/zfs/space_map.c, line: 339
> Oct  4 15:03:12 fozzie unix: [ID 10 kern.notice]
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51160
> genunix:assfail3+b9 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51200
> zfs:space_map_load+2ef ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51240
> zfs:metaslab_activate+66 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51300
> zfs:metaslab_group_alloc+24e ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b513d0
> zfs:metaslab_alloc_dva+192 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51470
> zfs:metaslab_alloc+82 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b514c0
> zfs:zio_dva_allocate+68 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b514e0
> zfs:zio_next_stage+b3 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51510
> zfs:zio_checksum_generate+6e ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51530
> zfs:zio_next_stage+b3 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b515a0
> zfs:zio_write_compress+239 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b515c0
> zfs:zio_next_stage+b3 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51610
> zfs:zio_wait_for_children+5d ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51630
> zfs:zio_wait_children_ready+20 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51650
> zfs:zio_next_stage_async+bb ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51670
> zfs:zio_nowait+11 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51960
> zfs:dbuf_sync_leaf+1ac ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b519a0
> zfs:dbuf_sync_list+51 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51a10
> zfs:dnode_sync+23b ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51a50
> zfs:dmu_objset_sync_dnodes+55 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51ad0
> zfs:dmu_objset_sync+13d ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51b40
> zfs:dsl_pool_sync+199 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51bd0
> zfs:spa_sync+1c5 ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51c60
> zfs:txg_sync_thread+19a ()
> Oct  4 15:03:12 fozzie genunix: [ID 655072 kern.notice] ff0002b51c70
> unix:thread_start+8 ()
> Oct  4 15:03:12 fozzie unix: [ID 10 kern.notice]

> Is this a known issue, already fixed in a later build, or should I bug it?

It shouldn't panic the machine, no. I'd raise a bug.

> After spending a little time playing with iscsi, I have to say it's
> almost inevitable that someone is going to do this by accident and panic
> a big box for what I see as no good reason. (though I'm happy to be
> educated... ;)

You use ACLs and TPGT groups to ensure 2 hosts can't simultaneously
access the same LUN by accident. You'd have the same problem with
Fibre Channel SANs.
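Roughly (IP and target names made up):

  iscsitadm create tpgt 1
  iscsitadm modify tpgt -i 192.168.1.10 1          # bind TPGT 1 to one interface
  iscsitadm modify target -p 1 tank/iscsi/lun0     # restrict a target to TPGT 1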
-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best option for my home file server?

2007-09-27 Thread Dick Davies
On 26/09/2007, Christopher <[EMAIL PROTECTED]> wrote:
> I'm about to build a fileserver and I think I'm gonna use OpenSolaris and ZFS.
>
> I've got a 40GB PATA disk which will be the OS disk,

It would be nice to remove that as a SPOF.

I know ZFS likes whole disks, but I wonder how much performance would suffer
if you SVMed up the first few GB of a ZFS mirror pair for your root fs?
I did it this week on Solaris 10 and it seemed to work pretty well

(
http://number9.hellooperator.net/articles/2007/09/27/solaris-10-on-mirrored-disks
)

Roll on ZFS root :)

-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Fwd: "zoneadm clone" doesn't support ZFS snapshots in

2007-09-22 Thread Dick Davies
Bah, wrong list.

A timeline for when this is likely to be sorted out would be really nice -
higher priority than ZFS root, IMO.

-- Forwarded message --
From: Dick Davies <[EMAIL PROTECTED]>
Date: 22 Sep 2007 23:21
Subject: Re: [zfs-discuss] "zoneadm clone" doesn't support ZFS snapshots in
To: [EMAIL PROTECTED]


On 21/09/2007, Mike Gerdts <[EMAIL PROTECTED]> wrote:

>
> I would really like to ask Sun for a roadmap as to when this is going
> to be supported.

The best way around this I can think of is to have a 'template' zone
for cloning on UFS that you use to build your other (ZFS-backed) zones.
Then delegate a dataset into each zone to hold the important stuff.

Come upgrade time, you drop all the 'child' zones, patch the template
and use it to re-provision the other zones. Then drop the dataset back in.

Of course, it'll take a while to clone the template since it's UFS-backed...



--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/


-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] compression=on and zpool attach

2007-09-11 Thread Dick Davies
On 11/09/2007, Mike DeMarco <[EMAIL PROTECTED]> wrote:
> > I've got 12Gb or so of db+web in a zone on a ZFS
> > filesystem on a mirrored zpool.
> > Noticed during some performance testing today that
> > its i/o bound but
> > using hardly
> > any CPU, so I thought turning on compression would be
> > a quick win.
>
> If it is io bound won't compression make it worse?

Well, the CPUs are sat twiddling their thumbs.
I thought reducing the amount of data going to disk might help I/O -
is that unlikely?

> > benefit of compression
> > on the blocks
> > that are copied by the mirror being resilvered?
>
> No! Since you are doing a block-for-block mirror of the data, this would not/could not 
> compress the data.

No problem, another job for rsync then :)


-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] compression=on and zpool attach

2007-09-11 Thread Dick Davies
I've got 12GB or so of db+web in a zone on a ZFS filesystem on a mirrored zpool.
I noticed during some performance testing today that it's I/O bound but
using hardly any CPU, so I thought turning on compression would be a quick win.

I know I'll have to copy files for existing data to be compressed, so I was
going to make a new filesystem, enable compression and rsync everything in,
then drop the old filesystem and mount the new one (with compressed blocks)
in its place.

But I'm going to be hooking in faster LUNs later this week. The plan was to
remove half of the mirror, attach a new disk, remove the last old disk and
attach the second half of the mirror (again on a faster disk).
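Concretely, something like (device names made up):

  zpool detach tank c1t0d0             # drop half of the old mirror
  zpool attach tank c1t1d0 c2t0d0      # add the first fast LUN, let it resilver
  zpool detach tank c1t1d0             # drop the last old disk
  zpool attach tank c2t0d0 c2t1d0      # add the second fast LUN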

Will this do the same job? I.e. will I see the benefit of compression
on the blocks that are copied as the mirror is resilvered?


-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] remove snapshots

2007-08-18 Thread Dick Davies
On 18/08/07, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
> Blake wrote:
> > Now I'm curious.
> >
> > I was recursively removing snapshots that had been generated recursively
> > with the '-r' option.  I'm running snv65 - is this a recent feature?
>
> No; it was integrated in snv_43, and is in s10u3.  See:
>
> PSARC 2006/388 snapshot -r
> 6373978 want to take lots of snapshots quickly ('zfs snapshot -r')

I think he was asking about recursive destroy, rather than create.
I know recursive rename went in at b63, because it saves me a lot of
work :)
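For reference, the three recursive verbs (dataset name made up):

  zfs snapshot -r tank@nightly                 # snv_43 / s10u3
  zfs rename -r tank@nightly tank@lastnight    # b63
  zfs destroy -r tank@lastnight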

-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot: another way

2007-07-03 Thread Dick Davies
I've found it's fairly easy to trim down a 'core' install: install to a
temporary UFS root, do the UFS -> ZFS thing, and then re-use the old UFS
slice as swap.

Obviously you need a separate /boot slice in this setup.

On 03/07/07, Douglas Atique <[EMAIL PROTECTED]> wrote:
> I'm afraid the Solaris installer won't let me stop the process just before it 
> starts copying files to the target filesystem. It would be very nice to get 
> away with the UFS slice altogether, but between filesystem creation and 
> initialisation (which seems mandatory) and copying there is no pause where I 
> could open a terminal and do the trick.

-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS usb keys

2007-06-27 Thread Dick Davies

Thanks to everyone for the sanity check - I think
it's a platform issue, but not an endian one.

The stick was originally DOS-formatted, and the zpool was built on the first
fdisk partition. So the SPARC boxes aren't seeing it, but the x86/x64 boxes are.


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS usb keys

2007-06-26 Thread Dick Davies

I used a zpool on a USB key today to get some core files off a non-networked
Thumper running S10U4 beta.

Plugging the stick into my SXCE b61 x86 machine worked fine; I just had to
'zpool import sticky' and it worked ok.

But when we attached the drive to a Blade 100 (running s10u3), it saw the
pool as corrupt. I thought I'd been too hasty pulling out the stick, but it
works ok back in the b61 desktop and the Thumper.

I'm trying to figure out if this is an endian thing (which I thought ZFS
was immune from) - or has the b61 machine upgraded the zpool format?



--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X "Leopard" to use ZFS

2007-06-08 Thread Dick Davies

On 08/06/07, BVK <[EMAIL PROTECTED]> wrote:

On 6/8/07, Toby Thain <[EMAIL PROTECTED]> wrote:
>
> When should we expect Solaris kernel under OS X? 10.6? 10.7? :-)
>

I think its quite possible. I believe, very soon they will ditch their
Mach based (?) BSD and switch to solaris.


I think that's extremely unlikely. Only the OS X userland is BSD-like,
and I'm not sure what replacing that would gain them. Why would they
want a Solaris kernel?


File based CDDL license seems like a right choice to a company like
Apple. My only worry is, Apple never works in open, so their
improvements may never get back into the community.


Apple have given plenty back to the BSD projects (although nothing required
them to).
--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.

2007-05-24 Thread Dick Davies

On 24/05/07, Brian Hechinger <[EMAIL PROTECTED]> wrote:


I don't know about FreeBSD PORTS, but NetBSD's ports system works very
well on solaris.  The only thing I didn't like about it is it considers
gcc a dependency to certain things, so even though I have Studio 11
installed, it would insist on installing gcc, which kinda irritated me. :)


pkgsrc is ok, but I found it got very messy very quickly once you have a
reasonably sized dependency tree. FreeBSD's portupgrade (a layer on top of
ports, but easy to install) handled that much better.

To be fair, I haven't used NetBSD in a couple of years - but pkgsrc was the
reason I left :)

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-22 Thread Dick Davies


Take off every ZIL!

 http://number9.hellooperator.net/articles/2007/02/12/zil-communication



On 22/05/07, Albert Chin
<[EMAIL PROTECTED]> wrote:

On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
> On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
> > But still, how is tar/SSH any more multi-threaded than tar/NFS?
>
> It's not that it is, but that NFS sync semantics and ZFS sync
> semantics conspire against single-threaded performance.

That's why we have "set zfs:zfs_nocacheflush = 1" in /etc/system. But
that only helps ZFS. Is there something similar for NFS?

--
albert chin ([EMAIL PROTECTED])
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Automatic rotating snapshots

2007-05-10 Thread Dick Davies

Hi Malachi

Tim's SMF bits work well (and they also support remote backups via send/recv).

I use something like the process laid out at the bottom of:

 http://blogs.sun.com/mmusante/entry/rolling_snapshots_made_easy

because it's dirt-simple and easily understandable.
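The gist (fs name made up; run daily from cron, keeping three days here - the
first couple of runs will just complain about missing snapshots):

  zfs destroy -r tank/home@day3
  zfs rename -r tank/home@day2 tank/home@day3
  zfs rename -r tank/home@day1 tank/home@day2
  zfs snapshot -r tank/home@day1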

On 10/05/07, Malachi de Ælfweald <[EMAIL PROTECTED]> wrote:

I was thinking of setting up rotating snapshots... probably do
pool/[EMAIL PROTECTED]

Is Tim's method (
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_8
) the current preferred plan?




--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on the desktop

2007-04-17 Thread Dick Davies

On 17/04/07, Rayson Ho <[EMAIL PROTECTED]> wrote:

On 4/17/07, Rich Teer <[EMAIL PROTECTED]> wrote:
> Same here.  I think anyone who dismisses ZFS as being inappropriate for
> desktop use ("who needs access to Petabytes of space in their desktop
> machine?!") doesn't get it.

Well, for many of those who find it hard to upgrade Windows, I guess
you will have a hard time teaching them how to use ZFS.


I doubt it - google around for some Time Machine mockups. Apple will sell
this easily.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS for Linux (NO LISCENCE talk, please)

2007-04-17 Thread Dick Davies

On 17/04/07, Erik Trimble <[EMAIL PROTECTED]> wrote:


And, frankly, I can think of several very good reasons why Sun would NOT
want to release a ZFS under the GPL


Not to mention the knock-on effects on those already using ZFS (Apple, BSD),
who would be adversely affected by a GPL license.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] status of zfs boot netinstall kit

2007-04-13 Thread Dick Davies

On 13/04/07, Lori Alt <[EMAIL PROTECTED]> wrote:


sparc support is in the works.  We're waiting on some other development
work going on right now in the area of sparc booting in general
(not specific to zfs booting, although the zfs boot loader
is part of that project).  I can't give you a date right now,
but zfs boot will definitely be supported on sparc as well as x86.


Excellent work, thanks Lori.

Am I right in thinking the SPARC delay is down to OpenBoot
(a licensing issue)?

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
[EMAIL PROTECTED]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Linux

2007-04-13 Thread Dick Davies

On 13/04/07, Toby Thain <[EMAIL PROTECTED]> wrote:


Those who promulgate the tag for whatever motive - often agencies of
Microsoft - have all foundered on the simple fact that the GPL
applies ONLY to MY code as licensor (*and modifications thereto*); it
has absolutely nothing to say about what you do with YOUR code.


Until my code comes into contact with yours - that's the 'viral' bit.
(Yes, I can avoid all contact with GPL code, just as I can stay away from
someone with the flu, but it doesn't mean they don't have the flu.)

And it's not only Microsoft who have a problem with it - it's anyone who
wants to keep their changes private for some reason.

I've read embedded Linux technical books that had to spend two chapters
explaining how to tiptoe around the GPL - life is too short for that sort
of rubbish.


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
[EMAIL PROTECTED]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recommended setup?

2007-03-16 Thread Dick Davies

Just saw a message on xen-discuss that HVM is in the next version (b60-ish).

On 15/03/07, Dick Davies <[EMAIL PROTECTED]> wrote:

I don't think Solaris dom0 does Pacifica (AMD-V) yet.
That would rule out windows for now.

You can run centOS zones on SXCR.

That just leaves freebsd (which hasn't got fantastic xen support either,
despite Kip Macys excellent work).

Unless you've got an app that needs that, zones sound like a much saner bet
to me.

On 13/03/07, Malachi de Ælfweald <[EMAIL PROTECTED]> wrote:
> I had thought about it, but from what I understand that limits the other VMs
> to Solaris. I have a few different administrators that are going to be
> running their own OSes (freebsd, linux, possibly windows), as well as some
> development ones (like jnode).  From what I was able to find, that means
> that I need to run Xen with the newer AMD-V featureset; thus the reason for
> the new board and cpus.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/




--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recommended setup?

2007-03-15 Thread Dick Davies

I don't think Solaris dom0 does Pacifica (AMD-V) yet.
That would rule out windows for now.

You can run centOS zones on SXCR.

That just leaves freebsd (which hasn't got fantastic xen support either,
despite Kip Macys excellent work).

Unless you've got an app that needs that, zones sound like a much saner bet
to me.

On 13/03/07, Malachi de Ælfweald <[EMAIL PROTECTED]> wrote:

I had thought about it, but from what I understand that limits the other VMs
to Solaris. I have a few different administrators that are going to be
running their own OSes (freebsd, linux, possibly windows), as well as some
development ones (like jnode).  From what I was able to find, that means
that I need to run Xen with the newer AMD-V featureset; thus the reason for
the new board and cpus.


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: update on zfs boot support

2007-03-12 Thread Dick Davies

On 12/03/07, Darren Dunham <[EMAIL PROTECTED]> wrote:

> On Sun, Mar 11, 2007 at 11:21:13AM -0700, Frank Cusack wrote:
> > On March 11, 2007 6:05:13 PM + Tim Foster <[EMAIL PROTECTED]> wrote:
> > >* ability to add disks to mirror the root filesystem at any time,
> > >   should they become available
> >
> > Can't this be done with UFS+SVM as well?  A reboot would be required
> > but you have to do regular reboots anyway just for patching.

*if* you already have the root filesystem under SVM in the first place,
then no reboot should be required to add a mirror.  And I assume that's
all we're talking about for the ZFS mirroring as well.


Is there any reason you'd have SVM on just the one partition? I can see why
you'd do that with ZFS (snapshots, compression, etc).

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS share problem with mac os x client

2007-02-08 Thread Dick Davies

OS X *loves* NFS - it's a lot faster than Samba - but it needs a bit of
extra work.

You need a user on the other end with the right uid and gid
(assuming you're using NFSv3 - you probably are).


Have a look at :
http://number9.hellooperator.net/articles/2007/01/12/zfs-for-linux-and-osx-and-windows-and-bsd

(especially the 'create a user' bit).
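The short version (username made up - the first OS X account is usually
uid 501, but check what 'id' says on the Mac):

  useradd -u 501 macuser
  zfs set sharenfs=rw tank1/nfsshare
  chown -R macuser /tank1/nfsshare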

On 07/02/07, Kevin Bortis <[EMAIL PROTECTED]> wrote:

Hello, I test right now the beauty of zfs. I have installed opensolaris on a 
spare server to test nfs exports. After creating tank1 with zpool and a 
subfilesystem with zfs tank1/nfsshare, I have set the option sharenfs=on to 
tank1/nfsshare.

With Mac OS X as client I can mount the filesystem in Finder.app with 
nfs://server/tank1/nfsshare, but if I copy a file an error occurs. Finder says "The 
operation cannot be completed because you do not have sufficient privileges for some of 
the items.".

Until now I have shared the filesystems always with samba so I have almost no 
experience with nfs. Any ideas?

Kevin


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Adding my own compression to zfs

2007-01-29 Thread Dick Davies

Have a look at:

 http://blogs.sun.com/ahl/entry/a_little_zfs_hack

On 27/01/07, roland <[EMAIL PROTECTED]> wrote:

is it planned to add some other compression algorithm to zfs ?

lzjb is quite good and performs especially well, but I'd like to have
better compression (bzip2?) - no matter how much performance drops with it.

regards
roland


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: How much do we really want zpool remove?

2007-01-26 Thread Dick Davies

On 25/01/07, Brian Hechinger <[EMAIL PROTECTED]> wrote:


The other point is, how many other volume management systems allow you to remove
disks?  I bet if the answer is not zero, it's not large.  ;)


Even Linux LVM can do this (with pvmove) - slow, but you can do it online.


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool split

2007-01-24 Thread Dick Davies

On 25/01/07, Adam Leventhal <[EMAIL PROTECTED]> wrote:

On Wed, Jan 24, 2007 at 08:52:47PM +, Dick Davies wrote:
> that's an excellent feature addition, look forward to it.
> Will it be accompanied by a 'zfs join'?

Out of curiosity, what will you (or anyone else) use this for? If the idea
is to copy datasets to a new pool, why not use zfs send/receive?


To clarify, I'm talking about 'zfs split' as in breaking /tank/export/home
into /tank/export/home/user1, /tank/export/home/user2, etc.

The 'zfs join' is just an undo to help me out when I've been overzealous:
every directory in my system is a filesystem, and I have more automated
snapshots than I can stand...

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool split

2007-01-24 Thread Dick Davies

On 23/01/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:


Can you pick another name for this please because that name has already
been suggested for zfs(1) where the argument is a directory in an
existing ZFS file system and the result is that the directory becomes a
new ZFS file system while retaining its contents.


Sorry to jump in on the thread, but -

that's an excellent feature addition, look forward to it.
Will it be accompanied by a 'zfs join'?

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] iSCSI on a single interface?

2007-01-19 Thread Dick Davies

> On 15/01/07, Rick McNeal <[EMAIL PROTECTED]> wrote:
>>
>> On Jan 15, 2007, at 8:34 AM, Dick Davies wrote:



> For the record, the reason I asked was we have an iscsi target host
> with
> 2 NICs and for some reason clients were attempting to connect to
> the targets
> on  the private interface instead of the one they were doing
> discovery on
> (which I thought was a bit odd).



This is due to a bug in the initiator. A prior change caused the
discovery list, as returned from the SendTargets request, to be
sorted in reverse order. The Solaris target goes out of its way to
return the address used to discover targets as the first address in
the list of available IP addresses for any given target. So, if you
had a public and private network and the discovery was done on the
public network, the public network IP address is first.

This is something which is being fixed now.


Great, thanks.


> I tried creating a TPGT with iscsitadm, which seemed to work:
>
> vera ~ # iscsitadm list tpgt -v
> TPGT: 1
>IP Address: 131.251.5.8
>
> but adding a ZFS iscsi target into it gives me:
>
>  vera ~ # iscsitadm modify target -p 1 tank/iscsi/second4gb
>  iscsitadm: Error Can't call daemon
>
> which is a pity (I'm assuming it can't find the targets to modify).



This was an oversight on my part and should work.


Actually, after running

iscsitadm create admin -d /somewhere

assigning both 'handmade' and 'shareiscsi=on' LUNs to a TPGT seems ok,
so presumably there just wasn't anywhere to record this information.

Thanks again for the update.


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] iSCSI on a single interface?

2007-01-18 Thread Dick Davies

On 15/01/07, Rick McNeal <[EMAIL PROTECTED]> wrote:


On Jan 15, 2007, at 8:34 AM, Dick Davies wrote:



> Hi, are there currently any plans to make an iSCSI target created by
> setting shareiscsi=on on a zvol
> bindable to a single interface (setting tpgt or acls)?



We're working on some more interface stuff for setting up various
properties like TPGT's and ACL for the ZVOLs which are shared through
ZFS.



Now that I've knocked off a couple of things that have been on my
plate I've got room to add some more. These definitely rank right up
towards the top.


Great news.

For the record, the reason I asked was that we have an iSCSI target host with
2 NICs, and for some reason clients were attempting to connect to the targets
on the private interface instead of the one they were doing discovery on
(which I thought was a bit odd).

I tried creating a TPGT with iscsitadm, which seemed to work:

vera ~ # iscsitadm list tpgt -v
TPGT: 1
   IP Address: 131.251.5.8

but adding a ZFS iscsi target into it gives me:

 vera ~ # iscsitadm modify target -p 1 tank/iscsi/second4gb
 iscsitadm: Error Can't call daemon


which is a pity (I'm assuming it can't find the targets to modify).
I've had to go back to just using iscsitadm due to time pressures, but
will be watching any progress closely.


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread Dick Davies

On 18/01/07, Jeremy Teo <[EMAIL PROTECTED]> wrote:

On the issue of the ability to remove a device from a zpool, how
useful/pressing is this feature? Or is this more along the line of
"nice to have"?


It's very useful if you accidentally create a concat rather than a mirror
of an existing zpool. Otherwise you have to buy another drive :)
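i.e. the difference between (device names made up):

  zpool add tank c1t1d0             # oops - a second top-level vdev (concat), can't be removed
  zpool attach tank c1t0d0 c1t1d0   # what you meant - turns the disk into a mirror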


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] iSCSI on a single interface?

2007-01-15 Thread Dick Davies

Hi, are there currently any plans to make an iSCSI target created by setting
shareiscsi=on on a zvol bindable to a single interface (setting a TPGT or
ACLs)?

I can cobble something together with ipfilter,
but that doesn't give me enough granularity to say something like:

'host a can see target 1, host c can see targets 2-9', etc.

Also, am I right in thinking that, without this, all targets are visible
on all interfaces?


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Question: ZFS + Block level SHA256 ~= almost free CAS Squishing?

2007-01-10 Thread Dick Davies

On 08/01/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


I think that in addition to lzjb compression, squishing blocks that contain
the same data would buy a lot of space for administrators working in many
common workflows.


This idea has occurred to me too - I think there are definite advantages
to 'block re-use'. When you start talking about multiple similar zones, I
suspect substantial space savings could be made - and if you can re-use that
saved storage to provide additional redundancy, everyone would be happy.


Assumptions:

SHA256 hash used (Fletcher2/4 have too many collisions,  SHA256 is 2^128 if
I remember correctly)
SHA256 hash is taken on the data portion of the block as it exists on disk.
the metadata structure is hashed separately.
In the current metadata structure, there is a reserved bit portion to be
used in the future.


Description of change:
Creates:
The filesystem goes through its normal process of writing a block, and
creating the checksum.
Before the step where the metadata tree is pushed, the checksum is checked
against a global checksum tree to see if there is any match.
If match exists, insert a metadata placeholder for the block, that
references the already existing block on disk, increment a number_of_links
pointer on the metadata blocks to keep track of the pointers pointing to
this block.
free up the new block that was written and check-summed to be used in the
future.
else if no match, update the checksum tree with the new checksum and
continue as normal.


Unless I'm reading this wrong, this sounds a lot like Plan 9's 'Venti'
architecture ( http://cm.bell-labs.com/sys/doc/venti.html ).

But using a hash as a 'label' seems the wrong approach.
ZFS is supposed to scale to terrifying levels, and the chance of a collision,
however small, works against that. I wouldn't want to trade reliability for
some extra space.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] creating zvols in a non-global zone (or 'Doctor, it hurts when I do this')

2006-12-21 Thread Dick Davies

On 06/09/06, Eric Schrock <[EMAIL PROTECTED]> wrote:

On Wed, Sep 06, 2006 at 04:23:32PM +0100, Dick Davies wrote:
>
> a) prevent attempts to create zvols in non-global zones
> b) somehow allow it (?) or
> c) Don't do That
>
> I vote for a) myself - should I raise an RFE?

Yes, that was _supposed_ to be the original behavior, and I thought we
had it working that way at one point.  Apparently I'm imagining things,
or it got broken somewhere along the way.  Please file a bug.


For the record, it's filed as:

http://bugs.opensolaris.org/view_bug.do?bug_id=6498038

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Production ZFS Server Death (06/06)

2006-12-02 Thread Dick Davies

On 02/12/06, Chad Leigh -- Shire.Net LLC <[EMAIL PROTECTED]> wrote:


On Dec 2, 2006, at 10:56 AM, Al Hopper wrote:

> On Sat, 2 Dec 2006, Chad Leigh -- Shire.Net LLC wrote:



>> On Dec 2, 2006, at 6:01 AM, [EMAIL PROTECTED] wrote:



>> When you have subtle corruption, some of the data and meta data is
>> bad but not all.  In that case you can recover (and verify the data
>> if you have the means to do so) t he parts that did not get
>> corrupted.  My ZFS experience so far is that it basically said the
>> whole 20GB pool was dead and I seriously doubt all 20GB was
>> corrupted.



> That was because you built a pool with no redundancy.  In the case
> where
> ZFS does not have a redundant config from which to try to
> reconstruct the
> data (today) it simply says: sorry charlie - you pool is corrupt.



Where a RAID system would still be salvageable.


RAID level what? How is anything salvageable if you lose your only copy?

ZFS does store multiple copies of metadata in a single vdev, so I
assume we're talking about data here.
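(Newer Nevada builds will even keep extra copies of data within a single
vdev if you ask - as I understand it:

  zfs set copies=2 tank/data

though that still doesn't save you if the whole device goes.)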

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How do I obtain zfs with spare implementation?

2006-11-30 Thread Dick Davies

On 30/11/06, Michael Barto <[EMAIL PROTECTED]> wrote:


 I would like to update some of our Solaris 10 OS systems to the new zfs file 
system that supports spares.  The Solaris 6/06 version does have zfs but does 
not have this feature. What is the best way to upgrade to this functionality?


Hot spares are in Update 3, which was due out in November - so I'd
expect it any day now.


This e-mail may contain LogiQwest proprietary information and should be treated 
as confidential.


Sigh.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'legacy' vs 'none'

2006-11-29 Thread Dick Davies

On 29/11/06, Dick Davies <[EMAIL PROTECTED]> wrote:

On 28/11/06, Terence Patrick Donoghue <[EMAIL PROTECTED]> wrote:
> Is there a difference - Yep,
>
> 'legacy' tells ZFS to refer to the /etc/vfstab file for FS mounts and
> options
> whereas
> 'none' tells ZFS not to mount the ZFS filesystem at all. Then you would
> need to manually mount the ZFS using 'zfs set mountpoint=/mountpoint
> poolname/fsname' to get it mounted.

Thanks Terence - now you've explained it, re-reading the manpage
makes more sense :)

This is plain wrong though:

"  Zones
 A ZFS file system can be added to a non-global zone by using
 zonecfg's  "add  fs"  subcommand.  A ZFS file system that is
 added to a non-global zone must have its mountpoint property
 set to legacy."

It has to be 'none' or it can't be delegated. Could someone change that?


I've had one last go at understanding what the hell is going on,
and what's *really* being complained about is the fact that the mountpoint
attribute is inherited (regardless of whether the value is 'none' or 'legacy').

Explicitly setting the mountpoint lets the zone boot.


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'legacy' vs 'none'

2006-11-29 Thread Dick Davies

On 28/11/06, Terence Patrick Donoghue <[EMAIL PROTECTED]> wrote:

Is there a difference - Yep,

'legacy' tells ZFS to refer to the /etc/vfstab file for FS mounts and
options
whereas
'none' tells ZFS not to mount the ZFS filesystem at all. Then you would
need to manually mount the ZFS using 'zfs set mountpoint=/mountpoint
poolname/fsname' to get it mounted.


Thanks Terence - now you've explained it, re-reading the manpage
makes more sense :)

This is plain wrong though:

"  Zones
A ZFS file system can be added to a non-global zone by using
zonecfg's  "add  fs"  subcommand.  A ZFS file system that is
added to a non-global zone must have its mountpoint property
set to legacy."

It has to be 'none' or it can't be delegated. Could someone change that?



--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: 'legacy' vs 'none'

2006-11-28 Thread Dick Davies

Just spotted one - is this intentional?

You can't delegate a dataset to a zone if mountpoint=legacy.
Changing it to 'none' works fine.


  vera / # zfs create tank/delegated
  vera / # zfs get mountpoint tank/delegated
  NAMEPROPERTYVALUE   SOURCE
  tank/delegated  mountpoint  legacy  inherited from tank
  vera / # zfs create tank/delegated/ganesh
  vera / # zfs get mountpoint tank/delegated/ganesh
  NAME   PROPERTYVALUE  SOURCE
  tank/delegated/ganesh  mountpoint  legacy inherited from tank
  vera / # zonecfg -z ganesh
  zonecfg:ganesh> add dataset
  zonecfg:ganesh:dataset> set name=tank/delegated/ganesh
  zonecfg:ganesh:dataset> end
  zonecfg:ganesh> commit
  zonecfg:ganesh> exit
  vera / # zoneadm -z ganesh boot
  could not verify zfs dataset tank/delegated/ganesh: mountpoint
cannot be inherited
  zoneadm: zone ganesh failed to verify
  vera / # zfs set mountpoint=none tank/delegated/ganesh
  vera / # zoneadm -z ganesh boot
  vera / #


On 28/11/06, Dick Davies <[EMAIL PROTECTED]> wrote:

Is there a difference between setting mountpoint=legacy and mountpoint=none?


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] 'legacy' vs 'none'

2006-11-28 Thread Dick Davies

Is there a difference between setting mountpoint=legacy and mountpoint=none?

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to backup/clone all filesystems *and* snapshots in a zpool?

2006-11-16 Thread Dick Davies

On 16/11/06, Peter Eriksson <[EMAIL PROTECTED]> wrote:


Is there some way to "dump" all information from a ZFS filesystem? I suppose I 
*could* backup the raw disk devices that is used by the zpool but that'll eat up a lot of 
tape space...


If you want to have another copy somewhere, use zfs send/recv.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thoughts on patching + zfs root

2006-11-15 Thread Dick Davies

On 15/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


>I suppose it depends how 'catastrophic' the failure is, but if it's
>very low level,
>booting another root probably won't help, and if it's too high level, how will
>you detect it (i.e. you've booted the kernel, but it is buggy).

If it panics (but not too early) or fails to come up properly?


Detecting 'come up properly' sounds hard
(as in 'Turing test hard') to me.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Fwd: [zfs-discuss] Thoughts on patching + zfs root

2006-11-15 Thread Dick Davies

On 14/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


>Actually, we have considered this.  On both SPARC and x86, there will be
>a way to specify the root file system (i.e., the bootable dataset) to be
>booted,
>at either the GRUB prompt (for x86) or the OBP prompt (for SPARC).
>If no root file system is specified, the current default 'bootfs' specified
>in the root pool's metadata will be booted.  But it will be possible to
>override the default, which will provide that "fallback" boot capability.


I was thinking of some automated mechanism such as:

- BIOS which, when reset during POST, will switch to safe
  defaults and enter setup
- Windows which, when reset during boot, will offer safe mode
  at the next boot.

I was thinking of something that on activation of a new boot environment
would automatically fallback on catastrophic failure.


Multiple grub entries would mitigate most risks (you can already define
multiple boot archives pointing at different zfs root filesystems, it's just
not automated).

I suppose it depends how 'catastrophic' the failure is, but if it's
very low level,
booting another root probably won't help, and if it's too high level, how will
you detect it (i.e. you've booted the kernel, but it is buggy).


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-01 Thread Dick Davies

On 01/11/06, Rick McNeal <[EMAIL PROTECTED]> wrote:


I too must be missing something. I can't imagine why it would take 5
minutes to online a target. A ZVOL should automatically be brought
online since now initialization is required.


s/now/no/ ?

Thanks for the explanation. The '5 minute online' issue I had was
with a file-based target (which happened to be on a ZFS filesystem).

From what you say, it should be a non-issue with a zvol-backed target.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-01 Thread Dick Davies

On 01/11/06, Cyril Plisko <[EMAIL PROTECTED]> wrote:

On 11/1/06, Dick Davies <[EMAIL PROTECTED]> wrote:
> On 01/11/06, Dick Davies <[EMAIL PROTECTED]> wrote:
> > And we'll be able to use sparse zvols
> > for this too (can't think why we couldn't, but it'd be dead handy)?
>
> Thinking about this, we won't be able to (without some changes) -
> I think a target is zero-filled before going online
> (educated guess: it takes 5 minutes to 'online' a target,
> and it consumes virtually no space in the parent zvol if compression is on),
> so a sparse zvol would exhaust zpool space.

Looking at the code it doesn't seem like the backing store being zeroed.
In case of regular file a single sector (512 byte) of uninitialized data from
stack (bad practice ?) is written to the very end of the file. And in case
of character device it isn't written at all. zvol should fall into char device
category. See mgmt_create.c::setup_disk_backing()

Or did I miss something ?


I'm not the one to ask :)
I'm just saying what I've seen - it was SXCR b49, and a ZFS
filesystem, not a zvol as I said (seems iscsi targets are file backed
by default). Still took a few minutes to online a new target, so it was doing
something, but I don't know what.

If it's a non-issue that'd be great,


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-01 Thread Dick Davies

On 01/11/06, Dick Davies <[EMAIL PROTECTED]> wrote:

And we'll be able to use sparse zvols
for this too (can't think why we couldn't, but it'd be dead handy)?


Thinking about this, we won't be able to (without some changes) -
I think a target is zero-filled before going online
(educated guess: it takes 5 minutes to 'online' a target,
and it consumes virtually no space in the parent zvol if compression is on),
so a sparse zvol would exhaust zpool space.
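
(By sparse zvol I mean one created without a reservation - assuming the -s
flag I have in mind, something like:

   # zfs create -s -V 10g tank/targets/lun0

so nothing stops the backing store growing past what the pool can actually hold.)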

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-01 Thread Dick Davies

On 01/11/06, Adam Leventhal <[EMAIL PROTECTED]> wrote:

Rick McNeal and I have been working on building support for sharing ZVOLs
as iSCSI targets directly into ZFS. Below is the proposal I'll be
submitting to PSARC. Comments and suggestions are welcome.

Adam


Am I right in thinking we're effectively able to snapshot/clone iscsi targets
now (by working on the underlying ZVOL)? And we'll be able to use sparse zvols
for this too (can't think why we couldn't, but it'd be dead handy)?

This will be extremely useful, thanks.
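
By 'working on the underlying ZVOL' I just mean the ordinary dataset
operations (names made up):

   # zfs snapshot tank/targetvol@gold
   # zfs clone tank/targetvol@gold tank/targetvol-test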

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Current status of a ZFS root

2006-10-28 Thread Dick Davies

On 28/10/06, Mike Gerdts <[EMAIL PROTECTED]> wrote:

On 10/28/06, Dick Davies <[EMAIL PROTECTED]> wrote:
> http://solaristhings.blogspot.com/2006/06/zfs-root-on-solaris-part-2.html



The original question was about using ZFS root on a T1000.  /grub
looks suspiciously incompatible with the T1000 because it isn't x86.
I've heard rumors of bringing grub to sparc, but...


Whoops, walked in halfway :) Tabriz reference is also
x86 specific, I believe. Thanks for the catch.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Current status of a ZFS root

2006-10-28 Thread Dick Davies

On 27/10/06, Christopher Scott <[EMAIL PROTECTED]> wrote:

You can manually set up a ZFS root environment but it requires a UFS
partition to boot off of.
See: http://blogs.sun.com/tabriz/entry/are_you_ready_to_rumble


There's a slightly improved procedure at


http://solaristhings.blogspot.com/2006/06/zfs-root-on-solaris-part-2.html

It uses a /grub partition rather than a full root FS to boot from - you
still need a UFS / for the initial install, but after the first boot into ZFS
you can reformat that and use it for swap or whatever.

Also a nice section on how to clone your root fs and boot off that
(which is great for testing new releases now we have 'zfs promote').

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs set sharenfs=on

2006-10-24 Thread Dick Davies

On 24/10/06, Eric Schrock <[EMAIL PROTECTED]> wrote:

On Tue, Oct 24, 2006 at 08:01:21PM +0100, Dick Davies wrote:



> Shouldn't a ZFS share be permanently enabling NFS?



# svcprop -p application/auto_enable nfs/server
true



This property indicates that regardless of the current state of
nfs/server, if you invoke share(1M) (either manually or through 'zfs
share -a'), then the server will be automatically started.


All three (nfs/status, nfs/nlockmgr and nfs/server) have auto_enable,
uh, enabled.


By default, the system should have been in this state, with nfs/server
enabled but temporarily disabled.  Did you explicity 'svcadm disable
nfs/server' beforehand?


That sounds like something I'd do to be honest, but in this case I wrote
down all the steps I've taken from the initial install, through the ZFS
root setup, etc. in a journal, so I don't think so.

I have been frobbing settings trying to get a Linux client to
understand what NFS4 is, however, so I may well have toggled this to
try to get the client to see the share.

So if someone has explicitly switched off NFS, ZFS won't turn it back on
(even if sharenfs=on for a share)?

After a reboot, a 'svcs -xv' showed nfs/server as not running because
its dependencies (nfs/status and nfs/lockmgr) weren't.
Enabling those two seemed to fix it. I wondered if maybe ZFS
didn't make sure they were running.
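
Something like this would have done it in one go, rather than enabling the
dependencies one by one (assuming -r behaves as documented and enables the
dependent services recursively):

   # svcadm enable -r nfs/server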

I'm happy to attribute this one to incompetence for now.


ZFS certainly could do the equivalent of a 'svcadm enable nfs/server',
but it shouldn't need to, nor is it clear that ZFS should do anything
different than if you had placed something into /etc/dfs/dfstab.


That's partly what I was asking - whether ZFS should dictate 'thou must
run this service for me' to the system as a whole or not.

Thanks for the explanation.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs set sharenfs=on

2006-10-24 Thread Dick Davies

I started sharing out zfs filesystems via NFS last week using
sharenfs=on. That seems to work fine until I reboot. Turned
out the NFS server wasn't enabled - I had to enable
nfs/server, nfs/lockmgr and nfs/status manually. This is a stock
SXCR b49 (ZFS root) install - don't think I'd changed anything much.
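
A quick way to see the same state after a reboot is something like this
(filesystem name made up):

   # zfs get sharenfs tank/export
   # svcs nfs/server nfs/status nfs/nlockmgr
   # share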

Shouldn't a ZFS share be permanently enabling NFS?



--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Where is the ZFS configuration data stored?

2006-10-15 Thread Dick Davies

On 14/10/06, Darren Dunham <[EMAIL PROTECTED]> wrote:



> So the warnings I've heard no longer apply?
> If so, that's great. Thanks for all replies.
Umm, which warnings?  The "don't import a pool on two hosts at once"
definitely still applies.


Sure :)

I meant the reason I'd heard
( at http://solaristhings.blogspot.com/2006/06/zfs-root-on-solaris-part-3.html )
for adding zpool.cache to your failsafe miniroot, since a 'zpool import -f' on
a 'root pool' meant the box wouldn't reboot cleanly.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Where is the ZFS configuration data stored?

2006-10-14 Thread Dick Davies

On 12/10/06, Michael Schuster <[EMAIL PROTECTED]> wrote:

Ceri Davies wrote:
> On Thu, Oct 12, 2006 at 02:06:15PM +0100, Dick Davies wrote:



>> I'd expect:
>>
>> zpool import -f
>>
>> (see the manpage)
>> to probe /dev/dsk/ and rebuild the zpool.cache file,
>> but my understanding is that this a) doesn't work yet or b) does
>> horrible things to your chances of surviving a reboot [0].



> So how do I import a pool created on a different host for the first
> time?



zpool import [ -f ]

(provided it's not in use *at the same time* by another host)


So the warnings I've heard no longer apply?
If so, that's great. Thanks for all replies.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Where is the ZFS configuration data stored?

2006-10-12 Thread Dick Davies

On 12/10/06, Ceri Davies <[EMAIL PROTECTED]> wrote:

On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:



> FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
> up.  Everything else (mountpoints, filesystems, etc) is stored in the
> pool itself.

What happens if the file does not exist?  Are the devices searched for
metadata?


My understanding (I'll be delighted if I'm wrong) is that you would be stuffed.

I'd expect:

zpool import -f

(see the manpage)
to probe /dev/dsk/ and rebuild the zpool.cache file,
but my understanding is that this a) doesn't work yet or b) does
horrible things to your chances of surviving a reboot [0].
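
For an ordinary data pool the sequence is just something like (pool name
assumed):

   # zpool import          (lists pools found by scanning /dev/dsk)
   # zpool import -f tank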

This means that for zfs root and failsafe boots, you need to have a
zpool.cache in your boot/miniroot archive (I probably have the terminology
wrong) otherwise the boot will fail.

I was asking if it was going to be replaced because it would really
simplify ZFS root.

Dick.

[0] going from:
 http://solaristhings.blogspot.com/2006/06/zfs-root-on-solaris-part-3.html

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Where is the ZFS configuration data stored?

2006-10-12 Thread Dick Davies

On 12/10/06, Michael Schuster <[EMAIL PROTECTED]> wrote:

James C. McPherson wrote:
> Dick Davies wrote:
>> On 12/10/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
>>
>>> FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
>>> up.  Everything else (mountpoints, filesystems, etc) is stored in the
>>> pool itself.
>>
>> Does anyone know of any plans or strategies to remove this dependancy?
>
> What do you suggest in its place?

and why? what's your objection to the current scheme?


Just the hassle of having to create a cache file in boot archives etc.


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Where is the ZFS configuration data stored?

2006-10-12 Thread Dick Davies

On 12/10/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:


FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
up.  Everything else (mountpoints, filesystems, etc) is stored in the
pool itself.


Does anyone know of any plans or strategies to remove this dependency?

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-12 Thread Dick Davies

On 11/10/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

Dick Davies wrote:

> On 11/10/06, Peter van Gemert <[EMAIL PROTECTED]> wrote:



>> You might want to check the HCL at http://www.sun.com/bigadmin/hcl to
>> find out which hardware is supported by Solaris 10.



> I tried that myself - there really isn't very much on there.
> I can't believe Solaris runs on so little hardware (well, I know most of
> my kit isn't on there), so I assume it isn't updated that much...



There are tools around that can tell you if hardware is supported by
Solaris.
One such tool can be found at:
http://www.sun.com/bigadmin/hcl/hcts/install_check.html


That doesn't help with buying hardware though -
I'm quite happy to buy hardware specifically for an OS
(like I've always done for my BSD boxes and Linux) but it's
annoying to be forced to do trial and error.


There is a process for submitting input back to Sun on driver testing


I thought so (had that experience trying to get a variant of iprb added
to device_aliases) and I can understand why, but an overly conservative
HCL just feeds the 'Solaris supports hardly any hardware' argument against
adoption.


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Dick Davies

On 11/10/06, Peter van Gemert <[EMAIL PROTECTED]> wrote:

Hi There,

You might want to check the HCL at http://www.sun.com/bigadmin/hcl to find out 
which hardware is supported by Solaris 10.

Greetings,
Peter


I tried that myself - there really isn't very much on there.
I can't believe Solaris runs on so little hardware (well, I know most of
my kit isn't on there), so I assume it isn't updated that much...

My dream machine at the minute is a nice quiet Athlon 64 X2 based
system (probably one of the energy-efficient Windsors, so you get low heat
and virtualization support). ZFS root mirror running iSCSI targets.

Have yet to find a good recommendation for an AM2 based SATAII motherboard
(although in dreamland, solaris has a solid Xen domain0 which takes advantage
of Pacifica/AMDV hardware, so I doubt I'll need to make this reality before next
Christmas :)

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs mirror resurrection

2006-10-06 Thread Dick Davies

On 05/10/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote:

Dick Davies wrote:



> I very foolishly decided to mirror /grub using SVM
> (so I could boot easily if a disk died). Shrank swap partitions
> to make somewhere to keep the SVM database (2 copies on each
> disk).

D'oh!
N.B. this isn't needed, per se, just make a copy of /grub and
the boot loader.


Lesson learned :) It's not like /grub changes much, and if it does
a simple rsync takes care of it.
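
Something along these lines after a grub change, with the second disk's
slice names obviously made up:

   # mount /dev/dsk/c0d1s0 /mnt            (the second disk's grub slice)
   # rsync -a /grub/ /mnt/
   # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d1s0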


> How do I get rid of SVM from a zfs root system?



The key change is in /etc/system where the rootfs is
specified as the metadevice rather than the real device.


Ah, thanks.


N.B. if you only have 2 disks, then the test you performed
will not work for SVM.


I gathered :) - I've flattened the box. This time I'll bother to create
the ZFS rescue bits in the howto I was working from.

Thanks.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs mirror resurrection

2006-10-03 Thread Dick Davies

Need a bit of help salvaging a perfectly working ZFS
mirror that I've managed to render unbootable.

I've had a ZFS root (x86, mirored zpool, SXCR b46 ) working fine for months.

I very foolishly decided to mirror /grub using SVM
(so I could boot easily if a disk died). Shrank swap partitions
to make somewhere to keep the SVM database (2 copies on each
disk).

Rebooted and everything seemed ok. I booted with the
second disk unplugged and SVM didn't seem to come up.
ZFS showed the pool as degraded, as expected.

Unplugged the first disk, tried another boot.
Got as far as detecting the disks, then hangs.

So the question -
How do I get rid of SVM from a zfs root system?

Will just clobbering the database partitions
help (sounds easiest as it doesn't need a rescue kernel
to be ZFS aware)?

Otherwise, I'll need to mount the root filesystems out of the zpool
to undo SVM - will  a belenix live cd be enough? ISTR I need a
zpool.cache before it'll see the pool at all.
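
For what it's worth, the orderly removal I have in mind - if I can get
booted at all - would be roughly this, with the metadevice and slice
names invented:

   # umount /grub
   # metaclear -r d10                     (the SVM mirror of /grub)
   # metadb -d -f c0d0s7 c1d0s7           (delete the state database replicas)

and then point /etc/vfstab back at the plain slice.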


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Snapshotting a pool ?

2006-09-29 Thread Dick Davies

Would 'zfs snapshot -r poolname' achieve what you want?

On 29/09/06, Patrick <[EMAIL PROTECTED]> wrote:

Hi,

Is it possible to create a snapshot, for ZFS send purposes, of an entire pool ?
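
i.e. something like this (pool/host names made up) - though as far as I
know each filesystem's snapshot still needs its own send:

   # zfs snapshot -r tank@backup
   # zfs send tank/home@backup | ssh otherbox zfs recv backup/home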


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Newbie in ZFS

2006-09-22 Thread Dick Davies

On 22/09/06, Alf <[EMAIL PROTECTED]> wrote:


1) It's not possible anymore within a pool to create a file system with a
specific size. If I have 2 file systems I can't decide to give for
example 10g to one and 20g to the other one unless I set a reservation
for them. Also I tried to manually create pools with slices and have for
each pool a FS with the size I wanted... Is that true?


zfs set quota=5G poolname/fsname

will give you a filesystem that shows up as 5GiB in 'df' - is that
what you want?
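
For your 10g/20g example that would be something like (pool name assumed):

   # zfs create tank/a ; zfs set quota=10g tank/a
   # zfs create tank/b ; zfs set quota=20g tank/b
   # df -h /tank/a /tank/b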



2) I mirrored 2 disks within the same D1000 and while I was putting a
big tar ball in the FS I tried to physically remove one mirror and


You mean pull it out? Does your hardware support hotswap?

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible file corruption on a ZFS mirror

2006-09-19 Thread Dick Davies

That looks a bit serious - did you say both disks are on
the same SATA controller?

On 19/09/06, Ian Collins <[EMAIL PROTECTED]> wrote:


# zpool status -v
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
tank        ONLINE       0     0     6
  mirror    ONLINE       0     0     6
    c2d0s7  ONLINE       0     0    12
    c3d0s7  ONLINE       0     0    12

errors: The following persistent errors have been detected:

  DATASET  OBJECT  RANGE
  13   13  lvl=0 blkid=15787
  13   19  lvl=0 blkid=3838

A format read scan didn't show up any errors.



--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] any update on zfs root/boot ?

2006-09-13 Thread Dick Davies

On 14/09/06, James C. McPherson <[EMAIL PROTECTED]> wrote:


Hi folks,
I'm in the annoying position of having to replace my rootdisk
(since it's a [EMAIL PROTECTED]@$! maxtor and dying). I'm currently running
with zfsroot after following Tabriz' and TimF's procedure to
enable that. However, I'd like to know whether there's a better
way to get zfs root/boot happening? The mini-ufs partition
kludge is getting a bit tired :)


I went for Doug Scott's:

http://solaristhings.blogspot.com/2006/06/zfs-root-on-solaris-part-2.html

which is only slightly different from Tabriz'. It boots from a /grub
partition, so doesn't need a UFS partition after the initial install
(I'm reusing mine as swap).

Whether you can get it working fresh off the CD is another matter :)



--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] 'zfs mirror as backup' status?

2006-09-13 Thread Dick Davies

Since we were just talking about resilience on laptops,
I wondered if there had been any progress in sorting
some of the glitches that were involved in:

http://www.opensolaris.org/jive/thread.jspa?messageID=25144&#25144

?
--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Re: Proposal: multiple copies of user data

2006-09-13 Thread Dick Davies

On 13/09/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:

Dick Davies wrote:



> But they raise a lot of administrative issues

Sure, especially if you choose to change the copies property on an
existing filesystem.  However, if you only set it at filesystem creation
time (which is the recommended way), then it's pretty easy to address
your issues:


You're right, that would prevent getting into some nasty messes (I see
this as closer to encryption than compression in that respect).

I still feel we'd be doing the same job in several places.
But I'm sure anyone who cares has a pretty good idea of my opinion,
so I'll shut up now :)

Thanks for taking the time to feedback on the feedback.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Re: Proposal: multiple copies of user data

2006-09-12 Thread Dick Davies

On 13/09/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:

Dick Davies wrote:
> For the sake of argument, let's assume:
>
> 1. disk is expensive
> 2. someone is keeping valuable files on a non-redundant zpool
> 3. they can't scrape enough vdevs to make a redundant zpool
>(remembering you can build vdevs out of *flat files*)

Given those assumptions, I think that the proposed feature is the
perfect solution.  Simply put those files in a filesystem that has copies>1.


I don't think we disagree that multiple copies in ZFS are a good idea,
I just think the zpool is the right place to do that.

To clarify, I was addressing Celsos laptop scenario here - especially the
idea that you can make a single disk redundant without any risks.

(for bigger systems I'd just mirror at the zpool and have done).


Also note that using files to back vdevs is not a recommended solution.


Understood. But neither is mirroring on a single disk (which is what is
effectively being suggested for laptop users using this solution).


> If the user wants to make sure the file is 'safer' than others, he
> can just make multiple copies. Either to a USB disk/flashdrive, cdrw,
> dvd, ftp server, whatever.

It seems to me that asking the user to solve this problem by manually
making copies of all his files puts all the burden on the
user/administrator and is a poor solution.


You'll be backing up your laptop anyway, won't you?


For one, they have to remember to do it pretty often.  For two, when
they do experience some data loss, they have to manually reconstruct the
files!  They could have one file which has part of it missing from copy
A and part of it missing from copy B.  I'd hate to have to reconstruct
that manually from two different files, but the proposed solution would
do this transparently.


Are you likely to lose parts of both file at the same time, though?
I'd say you're more likely to have one crap file and one good one.
And you know which file is crap due to checksumming already.


> I'm afraid I honestly think this greatly complicates the conceptual model
> (not to mention the technical implementation) of ZFS, and I haven't seen
> a convincing use case.

Just for the record, these changes are pretty trivial to implement; less
than 50 lines of code changed.


But they raise a lot of administrative issues (how many copies do I really
have? Where are they? Have they all been deleted? If I set this property,
how many copies do I have now? How much disk will I get back if I delete
fileX? How much disk do I bill zone admin foo for this month? How much disk
io are ops on this filesystem likely to cause? How do I dtrace this?)


I appreciate the effort and thought that's gone into it, not to mention
a request for feedback. If I've not made that clear, I apologize.
I'm just worried that it muddies the waters for everybody.

The users (me too!) want mirror-level reliability on their laptops.
I don't think this is the right way to get that feature, that's all.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Re: Proposal: multiple copies of user data

2006-09-12 Thread Dick Davies

On 12/09/06, Celso <[EMAIL PROTECTED]> wrote:


I think it has already been said that in many people's experience, when a disk 
fails, it completely fails. Especially on laptops. Of course ditto blocks 
wouldn't help you in this situation either!


Exactly.


I still think that silent data corruption is a valid concern, one that ditto
blocks would solve. Also, I am not thrilled about losing that much space for
duplication of unnecessary data (caused by partitioning a disk in two).


Well, you'd only be duplicating the data on the mirror. If you don't want to
mirror the base OS, no one's saying you have to.

For the sake of argument, let's assume:

1. disk is expensive
2. someone is keeping valuable files on a non-redundant zpool
3. they can't scrape enough vdevs to make a redundant zpool
   (remembering you can build vdevs out of *flat files*)

Even then, to my mind:

to the user, the *file* (screenplay, movie of a child's birth, civ3 saved
game, etc.)
is the logical entity to have a 'duplication level' attached to it,
and the only person who can score that is the author of the file.

This proposal says the filesystem creator/admin scores the filesystem.
Your argument against unnecessary data duplication applies to all 'non-special'
files in the 'special' filesystem. They're wasting space too.

If the user wants to make sure the file is 'safer' than others, he can
just make
multiple copies. Either to a USB disk/flashdrive, cdrw, dvd, ftp
server, whatever.

The redundancy you're talking about is what you'd get
from 'cp /foo/bar.jpg /foo/bar.jpg.ok', except it's hidden from the
user and causing
headaches for anyone trying to comprehend, port or extend the codebase in
the future.


I also echo Darren's comments on zfs performing better when it has the whole 
disk.


Me too, but a lot of laptop users dual-boot, which makes it a moot point.


Hopefully we can agree that you lose nothing by adding this feature,
even if you personally don't see a need for it.


Sorry, I don't think we're going to agree on this one :)

I've seen dozens of project proposals in the few months I've been lurking
around opensolaris. Most of them have been of no use to me, but
each to their own.

I'm afraid I honestly think this greatly complicates the conceptual model
(not to mention the technical implementation) of ZFS, and I haven't seen
a convincing use case.

All the best
Dick.

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Proposal: multiple copies of user data

2006-09-12 Thread Dick Davies

On 12/09/06, Celso <[EMAIL PROTECTED]> wrote:


...you split one disk in two. you then have effectively two partitions which 
you can then create a new mirrored zpool with. Then everything is mirrored. 
Correct?


Everything in the filesystems in the pool, yes.


With ditto blocks, you can selectively add copies (seeing as how filesystems are
so easy to create on zfs). If you are only concerned with copies of your
important documents and email, why should /usr/bin be mirrored?


So my machine will boot if a disk fails. Which happened the other day :)

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Proposal: multiple copies of user data

2006-09-12 Thread Dick Davies

On 12/09/06, Celso <[EMAIL PROTECTED]> wrote:


One of the great things about zfs, is that it protects not just against 
mechanical failure, but against silent data corruption. Having this available 
to laptop owners seems to me to be important to making zfs even more attractive.


I'm not arguing against that. I was just saying that *if* this was useful to you
(and you were happy with the dubious resilience/performance benefits) you can
already create mirrors/raidz on a single disk by using partitions as
building blocks.
There's no need to implement the proposal to gain that.
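
e.g. (slice names made up, and with all the usual single-spindle caveats):

   # zpool create docs mirror c0d0s5 c0d0s6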



Am I correct in assuming that having say 2 copies of your "documents" 
filesystem means should silent data corruption occur, your data can be reconstructed. So 
that you can leave your os and base applications with 1 copy, but your important data can 
be protected.


Yes.
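
Under the proposal that would presumably look something like this (exact
syntax assumed, dataset name made up):

   # zfs create tank/documents
   # zfs set copies=2 tank/documents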

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-12 Thread Dick Davies

On 12/09/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:

Dick Davies wrote:

> The only real use I'd see would be for redundant copies
> on a single disk, but then why wouldn't I just add a disk?

Some systems have physical space for only a single drive - think most
laptops!


True - I'm a laptop user myself. But as I said, I'd assume the whole disk
would fail (it does in my experience).

If your hardware craps differently to mine, you could do a similar thing
with partitions (or even files) as vdevs. Wouldn't be any less reliable.

I'm still not Feeling the Magic on this one :)

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-12 Thread Dick Davies

On 12/09/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:

Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.

Your comments are appreciated!


Flexibility is always nice, but this seems to greatly complicate things,
both technically and conceptually (sometimes, good design is about what
is left out :) ).

Seems to me this lets you say 'files in this directory are x times more
valuable than files elsewhere'. Others have covered some of my
concerns (guarantees, cleanup, etc.). In addition,

* if I move a file somewhere else, does it become less important?
* zpools let you do that already
 (admittedly with less granularity, but *much* *much* more simply -
 and disk is cheap in my world)
* I don't need to do that :)

The only real use I'd see would be for redundant copies
on a single disk, but then why wouldn't I just add a disk?

* disks are cheap, and creating a mirror from a single disk is very easy
 (and conceptually simple - see the sketch after this list)
* *removing* a disk from a mirror pair is simple too - I make mistakes
 sometimes
* in my experience, disks fail. When you get bad errors on part of a disk,
 the disk is about to die.
* you can already create a/several zpools using disk
 partitions as vdevs. That's not all that safe, and I don't see this being
 any safer.
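
The attach/detach sketch mentioned above, with device names made up:

   # zpool attach tank c0d0 c1d0      (single disk becomes a two-way mirror)
   # zpool detach tank c1d0           (and back again)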


Sorry to be negative, but to me ZFS' simplicity is one of its major features.
I think this provides a cool feature, but I question it's usefulness.

Quite possibly I just don't have the particular itch this is intended
to scratch - is this a much requested feature?


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zoned datasets in zfs list

2006-09-06 Thread Dick Davies

On 06/09/06, Eric Schrock <[EMAIL PROTECTED]> wrote:

On Wed, Sep 06, 2006 at 03:53:52PM +0100, Dick Davies wrote:
> That's a bit nicer, thanks.
> Still not that clear which zone they belong to though - would
> it be an idea to add a 'zone' property whose value is the zonename?

Yes, this is possible, but it's annoying because the actual owning zone
isn't stored with the dataset (nor should it be).   We'd have to grovel
around every zone's configuration file, which is certainly doable, just
annoying.


Oh God no. That's exactly what I wanted to avoid.
Why wouldn't you want it stored in the dataset, out of interest?


In addition, it's possible (though not recommended) to have a
single dataset in multiple zones.


Ah Ok, that explains why a single string wouldn't cut it
(although it sounds insane to me)!


The only real use case would be a
read-only, unmounted dataset whose snapshots could serve as a clone
source for other delegated datasets.


I'm reading that as 'the only real use case for 1 dataset in multiple zones'
(sorry if I'm misunderstanding you)?

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] creating zvols in a non-global zone (or 'Doctor, it hurts when I do this')

2006-09-06 Thread Dick Davies

A colleague just asked if zfs delegation worked with zvols too.
Thought I'd give it a go and got myself in a mess
(tank/linkfixer is the delegated dataset):

[EMAIL PROTECTED] / # zfs create -V 500M tank/linkfixer/foo
cannot create device links for 'tank/linkfixer/foo': permission denied
cannot create 'tank/linkfixer/foo': permission denied

Ok, so we'll try a normal filesystem:

[EMAIL PROTECTED] / # zfs create  tank/linkfixer/foo
cannot create 'tank/linkfixer/foo': dataset already exists
[EMAIL PROTECTED] / # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
tank                2.09G  33.8G  24.5K  legacy
tank/linkfixer      36.3M  9.96G  24.5K  legacy
tank/linkfixer/foo  22.5K  9.96G  22.5K  -
[EMAIL PROTECTED] / # zfs destroy -f  tank/linkfixer/foo
cannot remove device links for 'tank/linkfixer/foo': permission denied
[EMAIL PROTECTED] / # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
tank                2.09G  33.8G  24.5K  legacy
tank/linkfixer      36.3M  9.96G  24.5K  legacy
tank/linkfixer/foo  22.5K  9.96G  22.5K  -

I can destroy it ok from the global zone, and I know I could just
create a top-level zvol and grant the zone access.
Not sure if the 'fix' is :

a) prevent attempts to create zvols in non-global zones
b) somehow allow it (?) or
c) Don't do That

I vote for a) myself - should I raise an RFE?

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zoned datasets in zfs list

2006-09-06 Thread Dick Davies

That's a bit nicer, thanks.
Still not that clear which zone they belong to though - would
it be an idea to add a 'zone' property whose value is the zonename?

On 06/09/06, Kenneth Mikelinich <[EMAIL PROTECTED]> wrote:

zfs mount

should show where all your datasets are mounted.

I too was confused with the zfs list readout.



On Wed, 2006-09-06 at 07:37, Dick Davies wrote:
> Just did my first dataset delegation, so be gentle :)
>
> Was initially terrified to see that changes to the mountpoint in the non-global
> zone were visible in the global zone.
>
> Then I realised it wasn't actually mounted (except in the delegated zone).
> But I couldn't see any obvious indication that the dataset was delegated to
> another zone in zfs list.
> Eventually I found the 'zoned' property. Couple of thoughts:
>
> 1) would it be worth changing 'zfs list' to clarify where a dataset
>is actually mounted?
> 2) Is there any way to indicate _what_ zone a dataset is mounted in
>   (other than grepping the zones configuration)?
>
> --
> Rasputin :: Jack of All Trades - Master of Nuns
> http://number9.hellooperator.net/
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: datasets,zones and mounts

2006-09-06 Thread Dick Davies

On 06/09/06, Kenneth Mikelinich <[EMAIL PROTECTED]> wrote:


Are you suggesting that I not get too granular with datasets and use a
higher level one versus several?


I think what he's saying is you should only have to
delegate one dataset (telecom/oracle/production, for example),
and all the 'child' datasets can be created/administered/snapshotted etc.
in the non-global zone itself.
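
Roughly like this, assuming the dataset already exists (the zone name is
made up, the dataset name is from the example above):

   (global)# zonecfg -z orazone
   zonecfg:orazone> add dataset
   zonecfg:orazone:dataset> set name=telecom/oracle/production
   zonecfg:orazone:dataset> end
   zonecfg:orazone> commit

   ...and then, from inside the zone:

   (orazone)# zfs create telecom/oracle/production/data
   (orazone)# zfs snapshot telecom/oracle/production/data@pre-patch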
--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zoned datasets in zfs list

2006-09-06 Thread Dick Davies

Just did my first dataset delegation, so be gentle :)

Was initially terrified to see that changes to the mountpoint in the non-global
zone were visible in the global zone.

Then I realised it wasn't actually mounted (except in the delegated zone).
But I couldn't see any obvious indication that the dataset was delegated to
another zone in zfs list.
Eventually I found the 'zoned' property. Couple of thoughts:

1) would it be worth changing 'zfs list' to clarify where a dataset
  is actually mounted?
2) Is there any way to indicate _what_ zone a dataset is mounted in
 (other than grepping the zones configuration)?

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + rsync, backup on steroids.

2006-08-30 Thread Dick Davies

On 30/08/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:


Yes.  The architectural benefits of 'zfs send' over rsync only apply to
sending incremental changes.  When sending a full backup, both schemes
have to traverse all the metadata and send all the data, so the *should*
be about the same speed.


Cool! I'll retry it then.


However, as I mentioned, there's still some low-hanging performance
issues with 'zfs send', although I'm surprised that it was 5x slower
than rsync!  I'd like to look into that issue some more... What type of
files were you sending?  Eg. approximately what size files, how many
files, how many files/directory?


It was a copy of /usr/ports from FreeBSD, so around 500MB of small text files.
Bear in mind I'm talking from memory, and it was just a quick test.

I'll retry and let you know if I see a similar problem - if you don't
hear anything,
I couldn't replicate it.

Thanks!
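
For reference, the rerun will be along these lines (dataset names made up):

   # zfs snapshot tank/ports@t1
   # time zfs send tank/ports@t1 | zfs recv tank/ports-copy
   # time rsync -a /tank/ports/ /tank/ports-rsync/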


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + rsync, backup on steroids.

2006-08-30 Thread Dick Davies

On 30/08/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:


'zfs send' is *incredibly* faster than rsync.


That's interesting. We had considered it as a replacement for a
certain task (publishing a master docroot to multiple webservers)
but a quick test with ~500MB of data showed the zfs send/recv
to be about 5x slower than rsync for the initial copy.

You're saying subsequent copies (zfs send -i?) should be faster?
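
For our docroot case that would look something like this (host and dataset
names made up), assuming the receiving side hasn't changed since the
earlier snapshot:

   # zfs snapshot tank/docroot@mon
   # zfs send -i tank/docroot@sun tank/docroot@mon | ssh web1 zfs recv tank/docroot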

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Porting ZFS file system to FreeBSD.

2006-08-22 Thread Dick Davies

This is fantastic work!

How long have you been at it?
You seem a lot further on than the ZFS-Fuse project.

On 22/08/06, Pawel Jakub Dawidek <[EMAIL PROTECTED]> wrote:

Hi.

I started porting the ZFS file system to the FreeBSD operating system.

There is a lot to do, but I'm making good progress, I think.

I'm doing my work in those directories:

   contrib/opensolaris/ - userland files taken directly from
   OpenSolaris (libzfs, zpool, zfs and others)

   sys/contrib/opensolaris/ - kernel files taken directly from
   OpenSolaris (zfs, taskq, callb and others)

   compat/opensolaris/ - compatibility userland layer, so I can
   reduce diffs against vendor files

   sys/compat/opensolaris/ - compatibility kernel layer, so I can
   reduce diffs against vendor files (kmem based on
   malloc(9) and uma(9), mutexes based on our sx(9) locks,
   condvars based on sx(9) locks and more)

   cddl/ - FreeBSD specific makefiles for userland bits

   sys/modules/zfs/ - FreeBSD specific makefile for the kernel
   module

You can find all those on FreeBSD perforce server:

   
http://perforce.freebsd.org/depotTreeBrowser.cgi?FSPC=//depot/user/pjd/zfs&HIDEDEL=NO

Ok, so where am I?

I ported the userland bits (libzfs, zfs and zpool). I had ztest and
libzpool compiling and working as well, but I left them behind for now
to focus on kernel bits.

I'm building in all (except 2) files into zfs.ko (kernel module).

I created new VDEV - vdev_geom, which fits to FreeBSD's GEOM
infrastructure, so basically you can use any GEOM provider to build your
ZFS pool. VDEV_GEOM is implemented as consumers-only GEOM class.

I reimplemented ZVOL to also export storage as GEOM provider. This time
it is providers-only GEOM class.

This way one can create for example RAID-Z on top of GELI encrypted
disks or encrypt ZFS volume. The order is free.
Basically you can put UFS on ZFS volumes already and it behaves really
stable even under heavy load.

Currently I'm working on file system bits (ZPL), which is the most hard
part of the entire ZFS port, because it talks to one of the most complex
part of the FreeBSD kernel - VFS.

I can already mount ZFS-created file systems (with 'zfs create'
command), create files/directories, change permissions/owner/etc., list
directories content, and perform few other minor operation.

Some "screenshots":

   lcf:root:~# uname -a
   FreeBSD lcf 7.0-CURRENT FreeBSD 7.0-CURRENT #74: Tue Aug 22 03:04:01 UTC 
2006 [EMAIL PROTECTED]:/usr/obj/zoo/pjd/lcf/sys/LCF  i386

   lcf:root:~# zpool create tank raidz /dev/ad4a /dev/ad6a /dev/ad5a

   lcf:root:~# zpool list
    NAME     SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
    tank    35,8G   11,7M   35,7G     0%  ONLINE  -

   lcf:root:~# zpool status
 pool: tank
state: ONLINE
scrub: none requested
   config:

    NAME        STATE     READ WRITE CKSUM
    tank        ONLINE       0     0     0
      raidz1    ONLINE       0     0     0
        ad4a    ONLINE       0     0     0
        ad6a    ONLINE       0     0     0
        ad5a    ONLINE       0     0     0

   errors: No known data errors

   lcf:root:# zfs create -V 10g tank/vol
   lcf:root:# newfs /dev/zvol/tank/vol
   lcf:root:# mount /dev/zvol/tank/vol /mnt/test

   lcf:root:# zfs create tank/fs

   lcf:root:~# mount -t zfs,ufs
   tank on /tank (zfs, local)
   tank/fs on /tank/fs (zfs, local)
   /dev/zvol/tank/vol on /mnt/test (ufs, local)

   lcf:root:~# df -ht zfs,ufs
    Filesystem             Size    Used   Avail Capacity  Mounted on
    tank                    13G     34K     13G       0%  /tank
    tank/fs                 13G     33K     13G       0%  /tank/fs
    /dev/zvol/tank/vol     9.7G    4.0K    8.9G       0%  /mnt/test

   lcf:root:~# mkdir /tank/fs/foo
   lcf:root:~# touch /tank/fs/foo/bar
   lcf:root:~# chown root:operator /tank/fs/foo /tank/fs/foo/bar
   lcf:root:~# chmod 500 /tank/fs/foo
   lcf:root:~# ls -ld /tank/fs/foo /tank/fs/foo/bar
    dr-x------  2 root  operator  3 22 sie 05:41 /tank/fs/foo
   -rw-r--r--  1 root  operator  0 22 sie 05:42 /tank/fs/foo/bar

The most important missing pieces:
- Most of the ZPL layer.
- Autoconfiguration. I need implement vdev discovery based on GEOM's taste
 mechanism.
- .zfs/ control directory (entirely commented out for now).
And many more, but hey, this is after 10 days of work.

PS. Please contact me privately if your company would like to donate to the
   ZFS effort. Even without sponsorship the work will be finished, but
   your contributions will allow me to spend more time working on ZFS.

--
Pawel Jakub Dawidek   http://www.wheel.pl
[EMAIL PROTECTED]   http://w

Re: [zfs-discuss] Re: SCSI synchronize cache cmd

2006-08-22 Thread Dick Davies

On 22/08/06, Bill Moore <[EMAIL PROTECTED]> wrote:

On Mon, Aug 21, 2006 at 02:40:40PM -0700, Anton B. Rang wrote:
> Yes, ZFS uses this command very frequently. However, it only does this
> if the whole disk is under the control of ZFS, I believe; so a
> workaround could be to use slices rather than whole disks when
> creating a ZFS pool on a buggy device.

Actually, we issue the command no matter if we are using a whole disk or
just a slice.  Short of an mdb script, there is not a way to disable it.
We are trying to figure out ways to allow users to specify workarounds
for broken hardware without getting the ZFS code all messy as a result.


Has that behaviour changed then? I was definitely told (on list) that
write cache was only enabled for a 'full ZFS disk'. Am I wrong in
thinking this could be risky for UFS slices on the same disk
(or does UFS journalling mitigate that)?

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Disk

2006-08-18 Thread Dick Davies

On 18/08/06, Lori Alt <[EMAIL PROTECTED]> wrote:


No, zfs boot will be supported on both x86 and sparc.  Sparc's
OBP, and various x86 BIOS's both have restrictions on the devices
that can be accessed at boot time, so we need to limit the
devices in a root pool on both architectures.


Gotcha. I wasn't sure if you were proposing requiring a custom
BIOS on x86, but I take it (from your next point)
you're just chainloading a ZFS-aware grub.


> Or is x86 zfs root going to need a grub /boot partition on one
> of the disks?

On x86, each disk capable of booting the system (which means each
disk in a root pool) will have grub installed on it in a disk
slice which occupies the first few blocks of the disk.  It's not
the same as the old /boot partition, because all the slice
contains is grub.  It doesn't contain a file system.


I think that was really what I was getting at. So long as one
of the disks is still alive, and the BIOS can boot off it, then you'd
be alright? That sounds perfect - the implementation is really
not that important to me, so long as there's no single point of
failure.

Thanks for your time, and have a good weekend.


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Disk

2006-08-18 Thread Dick Davies

On 17/08/06, Lori Alt <[EMAIL PROTECTED]> wrote:


Dick Davies wrote:



> That's excellent news Lori, thanks to everyone who's working
> on this. Are you planning to use a single pool,
> or an 'os pool/application pool' split?



Thus I think of the most important split as the "os pool/data pool"
split.  Maybe that's what you meant.


That's it, yes :)
I should probably have said service rather than application.


.. limitations
in the boot PROMs cause us to place restrictions on the devices
you can place in a root pool.  (root mirroring WILL be supported,
however).


Does boot prom support mean this will be SPARC only? That's
interesting (last time I tried Tabriz' hack, it was x86 only).

Or is x86 zfs root going to need a grub /boot partition on one
of the disks?

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Disk

2006-08-16 Thread Dick Davies

On 16/08/06, Joerg Schilling <[EMAIL PROTECTED]> wrote:

"Dick Davies" <[EMAIL PROTECTED]> wrote:

> As an aside, is there a general method to generate bootable
> opensolaris DVDs? The only way I know of getting opensolaris on
> is installing sxcr and then BFUing on top.

A year ago, I did publish a toolkit to create bootable SchilliX CDs/DVDs.
Would this help?


Definitely - but I'm just curious to be honest. I just wanted
to burn the appropriate thing onto the 'i boot opensolaris' blank dvds I
got sent the other day :)

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Disk

2006-08-15 Thread Dick Davies

On 15/08/06, Lori Alt <[EMAIL PROTECTED]> wrote:

Brian Hechinger wrote:
> On Fri, Jul 28, 2006 at 02:26:24PM -0600, Lori Alt wrote:
>
>>>What about Express?
>>
>>Probably not any time soon.  If it makes U4,
>>I think that would make it available in Express late
>>this year.
>
>
> Is there a specific Nevada build you are going to target?  I'd love to
> start testing this as soon as possible.  I have both SPARC and x86 here
> to play with.

You need more than a Nevada build.  You also need the
installation code.  We're working on an OpenSolaris
community web page for zfs-boot.  On that web page
will be links to files that can be downloaded for
putting together a netinstall image or a DVD for
installing a system with a zfs root file system.
We hope to have that available in the next few weeks.


That's excellent news Lori, thanks to everyone who's working
on this. Are you planning to use a single pool,
or an 'os pool/application pool' split?

As an aside, is there a general method to generate bootable
opensolaris DVDs? The only way I know of getting opensolaris on
is installing sxcr and then BFUing on top.



--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to monitor ZFS ?

2006-07-16 Thread Dick Davies

On 15/07/06, Torrey McMahon <[EMAIL PROTECTED]> wrote:

eric kustarz wrote:
> martin wrote:



> To monitor activity, use 'zpool iostat 1' to monitor just zfs
> datasets, or iostat(1M) to include non-zfs devices.

Perhaps Martin was asking for something a little more robust. Something
like SNMP traps, alert messages out via email, etc.


Doesn't ZFS report via the usual SMF / fmd mechanisms?
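
At the very least you can poll by hand with something like:

   # zpool status -x      (one-line health summary)
   # fmdump -eV           (error events fmd has logged)
   # fmadm faulty         (anything actually diagnosed as faulty)

whether every ZFS event is wired into fmd yet is another question.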

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


  1   2   >