Re: [OpenIndiana-discuss] funky beginer questions

2013-09-11 Thread Udo Grabowski (IMK)

On 11/09/2013 01:44, Antony Brooke-Wood wrote:

My advice is to keep it simple - from what you describe, there isn't any
reason I can see to create more than 2 file systems.

One thing you might consider is setting up CrashPlan with your server.

It has a native Solaris installation, and for $5 a month you can have
all of your data replicated offsite.

Combine that with ZFS and snapshots, and you are looking pretty safe.
...


After seven years of experience with ZFS, I absolutely recommend
going the other way: create filesystems below /rmh/... for each
host. The simple reason is that you can easily snapshot each
host, transport it, or even promote it to be the root on a real
physical machine. And you can easily destroy a machine filesystem.
It's all much easier and faster than doing a tar, rm, rsync
or whatever. With ZFS there's no reason not to have filesystems
for everything. We usually have more than a hundred on a single
96 TB fileserver, with no problems but great savings in manageability.
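Udo's per-host layout can be sketched in a few commands. The pool and
dataset names follow Harry's example later in the thread; the snapshot
name is invented for illustration:

```shell
# One filesystem per backed-up host; -p creates missing parents.
zfs create -p p2/rmh/host1
zfs create -p p2/rmh/host2

# Snapshot a single host independently of its siblings...
zfs snapshot p2/rmh/host1@2013-09-11

# ...or throw a whole machine away in one step when it's retired,
# instead of a long rm -rf.
zfs destroy -r p2/rmh/host2
```

Each dataset can also carry its own quota, compression, and sharing
properties, which is part of the manageability win Udo describes.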



On 11 September 2013 09:34, Harry Putnam rea...@newsguy.com wrote:


Running 151a8

Now that I've got OI running and am becoming slightly familiar with the
zpool and zfs commands, I need some coaching on how to employ ZFS filesystems.

My general aim is to back up other computers, but I also want a
few ZFS filesystems that serve a Windows 7 box and hold lots of pictures
and other graphic files while work is done on them from the Windows box.

The setup I now have is all dispensable and is a practice run.  I've
got the SMB server working and shares are available to the Win7
machine.

What I need is advice about the actual construction of the file
systems.

Example: I have a filesystem p2/rmh mounted at /rmh ('rmh' stands for
remote hosts).

So I have p2/rmh/host1, p2/rmh/host2, etc.  So what is the best way
to go... should I have the .zfs directory at each level?  At /rmh,
at /rmh/host1, and at /rmh/host2, maybe even at p2/rmh/host2/someproj,
with its own .zfs?


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Moving /var and/or /usr to own zfs filesystem

2013-09-11 Thread Ben Taylor
On Tue, Sep 10, 2013 at 2:04 PM, James Carlson carls...@workingcode.com wrote:

 On 09/10/13 12:31, Ben Taylor wrote:
  I really can't see the wisdom of splitting out /usr from / on a ZFS file
   system.  I had an open bug with Sun in 2009 regarding the separate /var
   partition, and we went months arguing with support regarding whether or
   not that was a supported configuration.

 It'd be somewhat interesting to know the details on how it could be
 argued, because a separate /usr and /var are explicitly described in
 filesystem(5).  As with all things on Solaris, the official reference on
 what ought to work (and what is not documented to work) is the man page.


Well, I suppose with Solaris 11 (though I haven't actually booted it), the
man page might still say something about /usr, though it wouldn't have much
relevance.  In Solaris 10, UFS was still a viable file system there.



 However, a separate /usr makes no real sense to me in this day and age,
 given that the only substantial reason that support ever existed was for
 the extremely wacky clients with tiny root disks and NFS-mounted /usr
 configuration.  Nothing other than unusually good fortune could protect
 someone trying to do that in 2013.


In the early 90s, I did NFS-mounted /usr configs.  Later, the concept of
the netboot client was pretty cool, but I don't think many people used it,
and eventually the option went away.


 A separate /var makes sense to me, but you do need to be a bit careful
  with it, and I would not be shocked to find that there are things there
  that don't work terribly well.  In particular, I'd expect that you have
  to use legacy (/etc/vfstab) mounting in order to make it work.


For a separate /var on Solaris 10/ZFS, there's no vfstab entry required.
It's fully supported these days, after I put up a 6-month battle with
support in 2009.  When I got notification that it had been fixed, I was
almost incredulous at the minimal amount of work required: changes to one
or two of the initialization *shell* scripts.



 For what it's worth, I don't do that on my own systems.  I just create
 zfs mounts over the key (growable) mounts below /var ... particularly
 /var/cores (where I set coreadm), /var/crash, /var/mail, and /var/log.
 Plus, having separate sub-mounts gives me much finer-grained accounting
 and control.  That's sort of the whole point of ZFS.
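James's sub-mount scheme might look roughly like this on a ZFS-root
system. The pool name rpool and the quota values are illustrative
assumptions, not details from the original post:

```shell
# One dataset per growable tree under /var, each with its own
# quota and its own space accounting in 'zfs list'.
zfs create -o quota=8G -o mountpoint=/var/cores rpool/varcores
zfs create -o quota=4G -o mountpoint=/var/crash rpool/varcrash
zfs create -o quota=2G -o mountpoint=/var/log   rpool/varlog

# Point coreadm at the dedicated dataset (Solaris/illumos syntax):
# %f expands to the executable name, %p to the process id.
coreadm -g /var/cores/core.%f.%p -e global
```

With this layout a runaway core dump or log can fill only its own
dataset, not the whole root filesystem.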


We use /var/home on the remaining Solaris systems here, and that works
fairly well.  If we weren't moving away from Solaris, I might consider some
of those sub mounts off /var.


  My point is that a separate /usr ZFS file system has had no support
  or testing for this type of configuration.

 Testing is a separate and much more important point, I think: if you do
 things the way nobody else does them, intentionally or otherwise, then
 you're a test pilot.  Much luck, and make sure you've repacked your
 parachute recently.


Well, at the time, I used your argument that it's in the man page and
supported, and there was no announcement of removing /var from
filesystem(5) for ZFS, among other things. Stubbornness pays off
occasionally...


Re: [OpenIndiana-discuss] funky beginer questions

2013-09-11 Thread Harry Putnam
Lou Picciano loupicci...@comcast.net writes:
 [...] There are also some ZFS
 attributes geared toward Windows FSes; you'll want to check these
 out.

Thank you for the helpful input.  Although it does sound like your
operation and thinking are a bit above my pay grade ;).

Can you mention a little more on that part about 'attributes geared
toward windows FSes', a few clues for me to use in search strings?




Re: [OpenIndiana-discuss] funky beginer questions

2013-09-11 Thread Udo Grabowski (IMK)

On 11/09/2013 16:13, Harry Putnam wrote:

Udo Grabowski (IMK) udo.grabow...@kit.edu writes:


After seven years of experience with ZFS, I absolutely recommend
going the other way: create filesystems below /rmh/... for each
host. The simple reason is that you can easily snapshot each
host, transport it, or even promote it to be the root on a real
physical machine. And you can easily destroy a machine filesystem.
It's all much easier and faster than doing a tar, rm, rsync
or whatever. With ZFS there's no reason not to have filesystems
for everything. We usually have more than a hundred on a single
96 TB fileserver, with no problems but great savings in manageability.


Thanks for the input.

I wondered how it works with something like a four-level set of ZFS filesystems.

/rmh/host1/someproj/Acollection
/rmh/host2/someproj/Acollection

Seven filesystems in all assuming a .zfs at each level.

Now, if you wanted to send/receive the whole works, you would not be
able to send/receive a snapshot from /rmh/, right?

So you'd need to send/receive all seven then, eh?  Or is there
something like the -p option to 'zfs create -p [...]' that can be
invoked to allow you to send/receive the whole batch in one go?




You can always do a 'zfs snapshot -r ...' and a 'zfs send -r ...',
or even -R to send all snaps in one go.
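A minimal sketch of the recursive variants Udo mentions, using the
p2/rmh tree from this thread. The snapshot name, the target host, and
the receiving pool are assumptions for illustration:

```shell
# One atomic, recursive snapshot of every filesystem under p2/rmh.
zfs snapshot -r p2/rmh@backup1

# -R replicates the whole tree -- child filesystems, their
# properties, and their snapshots -- in a single stream.
# On the receiver, -d keeps the sender's dataset paths, -u avoids
# mounting, and -F forces a rollback to the most recent snapshot.
zfs send -R p2/rmh@backup1 | ssh backuphost zfs receive -Fdu tank
```

So a single send/receive from the top of the tree is enough; there is
no need to send the seven filesystems one by one.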
--
Dr. Udo Grabowski      Inst. f. Meteorology a. Climate Research IMK-ASF-SAT
www.imk-asf.kit.edu/english/sat.php
KIT - Karlsruhe Institute of Technology    http://www.kit.edu
Postfach 3640, 76021 Karlsruhe, Germany    T: (+49)721 608-26026  F: -926026



Re: [OpenIndiana-discuss] funky beginer questions

2013-09-11 Thread Harry Putnam
Udo Grabowski (IMK) udo.grabow...@kit.edu writes:

 After seven years of experience with ZFS, I absolutely recommend
 going the other way: create filesystems below /rmh/... for each
 host. The simple reason is that you can easily snapshot each
 host, transport it, or even promote it to be the root on a real
 physical machine. And you can easily destroy a machine filesystem.
 It's all much easier and faster than doing a tar, rm, rsync
 or whatever. With ZFS there's no reason not to have filesystems
 for everything. We usually have more than a hundred on a single
 96 TB fileserver, with no problems but great savings in manageability.

Thanks for the input.

I wondered how it works with something like a four-level set of ZFS filesystems.

/rmh/host1/someproj/Acollection
/rmh/host2/someproj/Acollection

Seven filesystems in all assuming a .zfs at each level.

Now, if you wanted to send/receive the whole works, you would not be
able to send/receive a snapshot from /rmh/, right?

So you'd need to send/receive all seven then, eh?  Or is there
something like the -p option to 'zfs create -p [...]' that can be
invoked to allow you to send/receive the whole batch in one go?




Re: [OpenIndiana-discuss] funky beginer questions

2013-09-11 Thread Harry Putnam
Antony Brooke-Wood abrookew...@gmail.com writes:

 My advice is to keep it simple - from what you describe, there isn't any
 reason I can see to create more than 2 file systems.

I kind of thought that might be best, but then I started noticing how
the designers of OpenIndiana have gone fairly deep into sets of
ZFS filesystems, so I began to wonder if it might be smart to copy that.

 One thing you might consider is setting up CrashPlan with your server.

 It has a native Solaris installation, and for $5 a month you can have
 all of your data replicated offsite.

 Combine that with ZFS and snapshots, and you are looking pretty safe.

Not sure I follow what you mean about 'native solaris installation'.
Is it an application you install on your server that talks to
something online?





Re: [OpenIndiana-discuss] funky beginer questions

2013-09-11 Thread Harry Putnam
Udo Grabowski (IMK) udo.grabow...@kit.edu writes:

[...]

Harry wrote:
 So you'd need to send/receive all seven then, eh?  Or is there
 something like the -p option to 'zfs create -p [...]' that can be
 invoked to allow you to send/receive the whole batch in one go?



Udo replied:
 You can always do a 'zfs snapshot -r ...' and a 'zfs send -r ...',
 or even -R to send all snaps in one go.

Nice.  I hadn't noticed that yet.  But even then, if I follow the use of
-R in relation to send, it appears the -R flag pulls in everything behind
the targeted snapshot.

So in the example where each level has a .zfs directory:
 /rmh/host1/someproj/Acollection
 /rmh/host2/someproj/Acollection

A snapshot send with the -R flag from:

   /rmh/host1/someproj/Acollection/.zfs HERE
and from:
   /rmh/host2/someproj/Acollection/.zfs HERE

Would be required to get the whole thing sent... Is that right?




Re: [OpenIndiana-discuss] funky beginer questions

2013-09-11 Thread Udo Grabowski (IMK)

On 11/09/2013 16:40, Harry Putnam wrote:

Udo Grabowski (IMK) udo.grabow...@kit.edu writes:

[...]

Harry wrote:

So you'd need to send/receive all seven then, eh?  Or is there
something like the -p option to 'zfs create -p [...]' that can be
invoked to allow you to send/receive the whole batch in one go?





Udo replied:

You can always do a 'zfs snapshot -r ...' and a 'zfs send -r ...',
or even -R to send all snaps in one go.


Nice.  I hadn't noticed that yet.  But even then, if I follow the use of
-R in relation to send, it appears the -R flag pulls in everything behind
the targeted snapshot.

So in the example where each level has a .zfs directory:
  /rmh/host1/someproj/Acollection
  /rmh/host2/someproj/Acollection

A snapshot send with the -R flag from:

/rmh/host1/someproj/Acollection/.zfs HERE
and from:
/rmh/host2/someproj/Acollection/.zfs HERE

Would be required to get the whole thing sent... Is that right?



??
'zfs snapshot -r rmh' (or whatever your pool is named) followed
by a 'zfs send -r rmh' would send the complete tree in one stream
(or additionally use -i to get incremental snapshots, but it's a
bit more involved to do that right if you really want to recover
from an incremental backup).
The .zfs directories are dummies where these snapshots appear; you
won't run tar on those directories (though of course you can use
professional software like IBM's Tivoli to back up from there).
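The incremental workflow hinted at above might be sketched like this.
The snapshot names and the receiving side are invented, and the
baseline snapshot must already exist on the receiver before the
incremental stream can apply:

```shell
# Baseline: full recursive replication of the tree.
zfs snapshot -r rmh@mon
zfs send -R rmh@mon | ssh backuphost zfs receive -Fdu tank/backup

# Later: send only the deltas.  -I transmits every intermediate
# snapshot between @mon and @fri; plain -i would send just the
# difference between the two endpoints.
zfs snapshot -r rmh@fri
zfs send -R -I @mon rmh@fri | ssh backuphost zfs receive -Fdu tank/backup
```

This is the "more involved" part Udo alludes to: the receiver must not
have diverged since @mon, which is what receive's -F (rollback) guards
against.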



Re: [OpenIndiana-discuss] funky beginer questions

2013-09-11 Thread Lou Picciano
Harry,

Try a bare '$ zfs set' - its usage output lists all the attributes you
can set on a given filesystem.

Some time ago, experimenting with kernel-based SMB: casesensitivity,
aclinherit, aclmode, nbmand, and sharesmb were all important, if to varying
degrees... The byte-range locking (nbmand) was a particular focus of our
experiments, for example. Or my memory is simply being spontaneously
reconstructed as a result of having missed lunch today - entirely possible!
'zoned' is also of some importance. Don't overlook the power of zones if
setting up an OpenIndiana server...
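A hedged sketch of the properties Lou lists, applied to a share for the
Windows box (the dataset p2/pictures and the share name are invented).
Note that casesensitivity can only be set when the filesystem is created:

```shell
# Creation-time, Windows-friendly properties: mixed case
# sensitivity and mandatory byte-range locking.
zfs create -o casesensitivity=mixed -o nbmand=on p2/pictures

# ACL behaviour and the SMB share itself can be changed later.
zfs set aclinherit=passthrough p2/pictures
zfs set aclmode=passthrough p2/pictures
zfs set sharesmb=name=pictures p2/pictures
```

passthrough keeps Windows-set ACLs intact instead of stripping them
back to the POSIX mode bits, which matters for files edited from the
Win7 side.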

This is on point: 
http://docs.oracle.com/cd/E23824_01/html/821-1449/managingsmbshares.html
 
other interesting reading: https://www.illumos.org/boards/3/topics/649

And there's always the downright scintillating: 
http://docs.oracle.com/cd/E23824_01/html/821-1448/zfsover-1.html#scrolltoc


- Original Message -
From: Harry Putnam
To: openindiana-discuss@openindiana.org
Sent: Wed, 11 Sep 2013 14:17:48 - (UTC)
Subject: Re: [OpenIndiana-discuss] funky beginer questions

Lou Picciano writes:
 [...] There are also some ZFS attributes geared toward Windows FSes;
 you'll want to check these out.

Thank you for the helpful input.  Although it does sound like your
operation and thinking are a bit above my pay grade ;).

Can you mention a little more on that part about 'attributes geared
toward windows FSes', a few clues for me to use in search strings?


[OpenIndiana-discuss] Some questions

2013-09-11 Thread Michelle Knight
Hi Folks,

I had to move away from OI a while back because of issues mounting CIFS
on Linux after Ubuntu went up to version 13.04.

I'm sitting here on the sidelines, looking to come back to OI, and was
wondering whether that issue has been resolved... indeed, I wasn't actually
sure where the problem was: with OI or with Ubuntu.

I'd be grateful for some guidance.

Also, as I'm not a system programmer, I wasn't sure how to compile a DLNA
server so that I could publish my small video collection to the home network;
is there a chance that someone has found a noddy guide to installing
DLNA on OI, please?

Many thanks for any advice,

Michelle.



Re: [OpenIndiana-discuss] Some questions

2013-09-11 Thread Gary Gendel

Michelle,

I would take a look at Serviio, http://www.serviio.org.  It's a Java app 
that does a nice job of providing DLNA.  It has a control program that 
integrates into the GNOME desktop.  I haven't used it for quite a while, but 
it was pretty simple to set up and use on OpenIndiana.


Gary

On 09/11/2013 03:58 PM, Michelle Knight wrote:

Hi Folks,

I had to move away from OI a while back because of issues mounting CIFS
on Linux after Ubuntu went up to version 13.04.

I'm sitting here on the sidelines, looking to come back to OI, and was
wondering whether that issue has been resolved... indeed, I wasn't actually
sure where the problem was: with OI or with Ubuntu.

I'd be grateful for some guidance.

Also, as I'm not a system programmer, I wasn't sure how to compile a DLNA
server so that I could publish my small video collection to the home network;
is there a chance that someone has found a noddy guide to installing
DLNA on OI, please?

Many thanks for any advice,

Michelle.



[OpenIndiana-discuss] File owner 'messagebus' ?

2013-09-11 Thread Harry Putnam
I've mounted, on a Debian Linux machine, a ZFS filesystem that resides
on an OpenIndiana (Solaris x86) machine.

I've mounted it with sshfs.

The Linux user has the same username (reader) as the Solaris user, and
both belong to the same group (same name and same numeric GID in the
case of the group).

When mounted on the Linux box by owner 'reader', the files' owner
appears as 'messagebus' instead of 'reader', but the group name stays
the same (nfsu), like this:

-rw-r--r-- 1 messagebus nfsu   69 Mar  9  2010 CMD
drwxr-xr-x 1 messagebus nfsu9 Mar  9  2010 doc
-rw-r--r-- 1 messagebus nfsu  390 Mar 14  2010 elog_flt

Anyone know what that means?




Re: [OpenIndiana-discuss] File owner 'messagebus' ?

2013-09-11 Thread Hugh McIntyre

On 9/11/13 8:29 PM, Harry Putnam wrote:

I've mounted, on a Debian Linux machine, a ZFS filesystem that resides
on an OpenIndiana (Solaris x86) machine.

I've mounted it with sshfs.

The Linux user has the same username (reader) as the Solaris user, and
both belong to the same group (same name and same numeric GID in the
case of the group).

When mounted on the Linux box by owner 'reader', the files' owner
appears as 'messagebus' instead of 'reader', but the group name stays
the same (nfsu), like this:

-rw-r--r-- 1 messagebus nfsu   69 Mar  9  2010 CMD
drwxr-xr-x 1 messagebus nfsu9 Mar  9  2010 doc
-rw-r--r-- 1 messagebus nfsu  390 Mar 14  2010 elog_flt

Anyone know what that means?


Probably that the numeric UID of user 'messagebus' on the Linux side is 
the same as that of 'reader' on the Solaris box.  I think if you use NFSv4 
the username may be reported the same, but not for other filesystems such 
as NFSv3 and, presumably, sshfs.


What is almost certainly happening is that ls(1) on Linux calls stat() 
on each file, which returns a numeric uid/gid, and then the Linux box 
translates this to 'messagebus' using its password file to map UID to
name.


Traditionally, for cases like this with NFS (and similar filesystems, but 
not SMB and generally not NFSv4), best practice has always been to use the 
same name/UID mappings on all systems.
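One way to check Hugh's explanation, and a possible workaround on the
sshfs side (hostnames and mount points here are hypothetical):

```shell
# Compare the numeric IDs on both ends; if they differ, ls will
# show different owner names for the same files.
id reader                   # on the Linux client
ssh oi-server id reader     # on the OpenIndiana server

# Ask sshfs to map the connecting user's remote uid/gid to the
# local user's...
sshfs -o idmap=user reader@oi-server:/rmh /mnt/rmh

# ...or force explicit ownership for everything in the mount.
sshfs -o uid=$(id -u),gid=$(id -g) reader@oi-server:/rmh /mnt/rmh
```

Either option only changes how ownership is presented on the client;
the cleaner long-term fix is Hugh's: identical name/UID mappings on
both machines.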


Hugh.

