Re: [zfs-discuss] Disk Concatenation

2008-09-23 Thread Aaron Blew
I actually ran into a situation where I needed to concatenate LUNs last
week.  In my case, the Sun 2540 storage arrays don't yet have the ability to
create LUNs over 2TB, so to use all the storage within the array on one host
efficiently, I created two LUNs per RAID group, for a total of 4 LUNs.  Then
we created two stripes (LUNs 0 and 2, 1 and 3) and concatenated them.  This
way the data is laid out contiguously on the RAID groups.
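
(For reference, one way to build a concatenation of two two-way stripes like
this is with SVM, handing the resulting metadevice to ZFS; this is only a
sketch, and the device names below are hypothetical:

  # stripe 1 = LUNs 0 and 2, stripe 2 = LUNs 1 and 3, concatenated into d100
  metainit d100 2 2 c2t0d0s0 c2t2d0s0 -i 128k 2 c2t1d0s0 c2t3d0s0 -i 128k
  zpool create tank /dev/md/dsk/d100

ZFS then sees d100 as a single device, so the concat ordering is preserved.)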

-Aaron

On Mon, Sep 22, 2008 at 11:56 PM, Nils Goroll <[EMAIL PROTECTED]> wrote:

> Hi Darren,
>
> >> http://www.opensolaris.org/jive/thread.jspa?messageID=271983
> >>
> >> The case mentioned there is one where concatenation in zdevs would be
> > useful.
> >
> > That case appears to be about trying to get a raidz sized properly
> > against disks of different sizes.  I don't see a similar issue for
> > someone preferring a concat over a stripe.
>
> I don't quite understand your comment.
>
> The question I was referring to was from someone who wanted a configuration
> which would optimally use the available physical disk space. The
> configuration which would yield maximum net capacity was to use concats, so
> IMHO this is a case where one might want a concat below a vdev.
>
> Were you asking for use cases of a concat at the pool layer?
>
> I think those exist when using RAID hardware where additional striping can
> lead to an increase of concurrent I/O on the same disks or I/Os being split
> up unnecessarily. All of this highly depends upon the configuration.
>
> Nils
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk Concatenation

2008-09-23 Thread Robert Milkowski




Hello Aaron,

Tuesday, September 23, 2008, 8:24:36 AM, you wrote:

> I actually ran into a situation where I needed to concatenate LUNs last
> week.  In my case, the Sun 2540 storage arrays don't yet have the ability
> to create LUNs over 2TB, so to use all the storage within the array on one
> host efficiently, I created two LUNs per RAID group, for a total of 4
> LUNs.  Then we created two stripes (LUNs 0 and 2, 1 and 3) and
> concatenated them.  This way the data is laid out contiguously on the RAID
> groups.

Depending on your usage pattern you could get sub-optimal performance. If you
create smaller RAID groups, match one LUN to a given RAID group, and then let
ZFS stripe across the LUNs, you will probably get better performance.
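
For example, with one LUN per (smaller) RAID group you would just give all of
the LUNs to the pool and let ZFS stripe dynamically across them (device names
here are hypothetical):

  zpool create tank c2t0d0 c2t1d0 c2t2d0 c2t3d0

Every top-level device added this way becomes part of the dynamic stripe.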


-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem: ZFS export drive

2008-09-23 Thread Marcelo Leal
What was the configuration of that pool? Was it a mirror, raidz, or just a
stripe? If it was just a stripe and you lose one disk, you have problems...

 Leal.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to share

2008-09-23 Thread Srinivas Chadalavada
Hi Michael,

    Sorry, here is the info. The main thing I noticed is that I am not able
to start the NFS server.

ech3-mes01.prod:schadala[561] ~ $ svcs -a |grep nfs
disabled   19:43:31 svc:/network/nfs/server:default
online 19:11:49 svc:/network/nfs/cbd:default
online 19:11:49 svc:/network/nfs/status:default
online 19:11:49 svc:/network/nfs/mapid:default
online 19:11:49 svc:/network/nfs/nlockmgr:default
online 19:11:49 svc:/network/nfs/client:default
online 19:11:49 svc:/network/nfs/rquota:default

ech3-mes01.prod:schadala[567] ~ $ svcs -x
svc:/network/rpc/smserver:default (removable media management)
 State: disabled since September 22, 2008  7:11:49 PM CDT
Reason: Disabled by an administrator.
   See: http://sun.com/msg/SMF-8000-05
   See: rpc.smserverd(1M)
Impact: 1 dependent service is not running.  (Use -v for list.)

svc:/application/management/seaport:default (net-snmp SNMP daemon)
 State: maintenance since September 22, 2008  7:11:50 PM CDT
Reason: Start method failed repeatedly, last exited with status 1.
   See: http://sun.com/msg/SMF-8000-KS
   See: snmpd(1M)
   See: /var/svc/log/application-management-seaport:default.log
Impact: This service is not running.

ech3-mes01.prod:schadala[568] ~ $ more /etc/release
Solaris 10 8/07 s10x_u4wos_12b X86
   Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
Assembled 16 August 2007

ech3-mes01.prod:schadala[569] ~ $ uname -a
SunOS ech3-mes01.prod 5.10 Generic_127128-11 i86pc i386 i86pc
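
(For reference, the commands Michael asks about below are, with the log path
taken from the svcs -x output:

  svcadm enable svc:/network/nfs/server:default
  svcs -x svc:/network/nfs/server:default
  tail /var/svc/log/network-nfs-server:default.log
)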

 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Monday, September 22, 2008 8:34 PM
To: Srinivas Chadalavada
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] unable to share

Srinivas Chadalavada wrote:
> Hi Mike,

That's not my name.

Also, please answer *all* my questions; you're only providing half the
information: we're still missing the OS & revision, as well as some
information about what's in the log files svcs -x tells us about.

Michael

> Here is the output.
> [ Sep 22 18:46:01 Executing start method ("/lib/svc/method/nfs-server
> start") ]
> cannot share 'export': /export: Unknown error
> cannot share 'export/home': /export/home: Unknown error
> [ Sep 22 18:46:01 Method "start" exited with status 0 ]
> [ Sep 22 18:46:01 Stopping because all processes in service exited. ]
> [ Sep 22 18:46:01 Executing stop method ("/lib/svc/method/nfs-server
> stop 472") ]
> [ Sep 22 18:46:01 Method "stop" exited with status 0 ]
> [ Sep 22 18:46:01 Disabled. ]
>
> ech3-mes01.prod:schadala[561] ~ $ svcs -x svc:/network/nfs/server:default
> svc:/network/nfs/server:default (NFS server)
>  State: disabled since September 22, 2008  6:04:58 PM CDT
> Reason: Disabled by an administrator.
>    See: http://sun.com/msg/SMF-8000-05
>    See: nfsd(1M)
>    See: /var/svc/log/network-nfs-server:default.log
> Impact: This service is not running.
>
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Monday, September 22, 2008 4:26 PM
> To: Srinivas Chadalavada
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] unable to share
>
> On 09/22/08 16:11, Srinivas Chadalavada wrote:
>> Hi All,
>>
>>    I am trying to share a ZFS file system; I enabled sharenfs using
>> this command.
>
> what OS/build are you using?
>
>> sudo zfs set sharenfs=on export/home
>>
>> When I do share -a I get this error:
>>
>> ech3-mes01.prod:schadala[511] ~ $ sudo zfs share -a
>> cannot share 'export': /export: Unknown error
>> cannot share 'export/home': /export/home: Unknown error
>>
>> I am not able to start the NFS server either.
>>
>> ech3-mes01.prod:schadala[512] ~ $ svcs -a |grep nfs
>> disabled   18:04:58  svc:/network/nfs/server:default
>
> so what happens when you do "svcadm enable
> svc:/network/nfs/server:default"?
>
> what's the output of "svcs -x svc:/network/nfs/server:default", and what
> do the log files you find there say?

-- 
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem: ZFS export drive

2008-09-23 Thread Steve
Leal,

   Yes, it was a stripe, so I have problems. There is really nothing I can do at
this point. Luckily I've backed up my important data elsewhere, but it'll
take a while to get some of my other non-critical information back. Oh well, you
win some, you lose some. It's all a learning experience.

  Thanks for the help anyway.

*Mods feel free to lock this thread.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk Concatenation

2008-09-23 Thread A Darren Dunham
On Tue, Sep 23, 2008 at 08:56:39AM +0200, Nils Goroll wrote:
>> That case appears to be about trying to get a raidz sized properly
>> against disks of different sizes.  I don't see a similar issue for
>> someone preferring a concat over a stripe.
>
> I don't quite understand your comment.
>
> The question I was referring to was from someone who wanted a 
> configuration which would optimally use the available physical disk 
> space. The configuration which would yield maximum net capacity was to 
> use concats, so IMHO this is a case where one might want a concat below a 
> vdev.

I assumed that was only necessary because they were trying to use a
raidz.  I don't see why a plain stripe would not yield the maximum
capacity.  

> Were you asking for use cases of a concat at the pool layer?
>
> I think those exist when using RAID hardware where additional striping 
> can lead to an increase of concurrent I/O on the same disks or I/Os being 
> split up unnecessarily. All of this highly depends upon the 
> configuration.

No, not necessarily.  I can think of a couple of things where it might
be useful.  I was asking for the OP's particular need.

-- 
Darren
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] resilver being killed by 'zpool status' when root

2008-09-23 Thread Blake Irvin
is there a bug for the behavior noted in the subject line of this post?

running 'zpool status' or 'zpool status -xv' during a resilver as a 
non-privileged user has no adverse effect, but if i do the same as root, the 
resilver restarts.

while i'm not running opensolaris here, i feel this is a good forum to post 
this question to.

(my system: SunOS filer1 5.10 Generic_137112-07 i86pc i386 i86pc)

thanks,
blake
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Hitachi SAN, pool recovery

2008-09-23 Thread Vincent Fox
Just make SURE the other host is actually truly DEAD!

If for some reason it's simply wedged, or you have lost console access but
hostA is still "live", then you can end up with two systems having access to
the same ZFS pool.

I have done this in testing, with two hosts accessing the same pool, and the
result is catastrophic pool corruption.

I use the simple method: if I think hostA is dead, I call the operators and
get them to pull the power cords out of it just to be certain.  Then I force
the import on hostB with certainty.
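
The forced import itself is just the normal import with -f (the pool name
here is hypothetical):

  hostB# zpool import -f tank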
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Greenbytes/Cypress

2008-09-23 Thread Jens Elkner
On Tue, Sep 23, 2008 at 01:04:34PM -0500, Bob Friesenhahn wrote:
> On Tue, 23 Sep 2008, Eric Schrock wrote:
> > http://www.opensolaris.org/jive/thread.jspa?threadID=73740&tstart=0
> 
> I must apologize for annoying everyone.  When Richard Elling posted the 
> GreenBytes link without saying what it was I completely ignored it. 
> I assumed that it would be Windows-centric content that I can not view 
> since of course I am a dedicated Solaris user.  I see that someone 
> else mentioned that the content does not work for Solaris users.  As a 
> result I ignored the entire discussion as being about some silly 
> animation of gumballs.

Don't apologize - it's not your fault!
BTW: I have exactly the same problem/assumption ...

Have fun,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Greenbytes/Cypress

2008-09-23 Thread Brent Jones
On Tue, Sep 23, 2008 at 10:25 AM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> Today while reading EE Times I read an article about a startup company
> named Greenbytes which will be offering a system called Cypress which
> supports deduplication and arrangement of data to minimize power
> consumption.  It seems that deduplication is at the file level.  The
> product is initially based on Sun hardware (Sunfire 4540) and uses
> OpenSolaris and a modified version of ZFS.
>
> I am surprised to first hear about this in EE Times rather than on
> this list.
>
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>

It's been brought up a couple of times in the past, but their information
is so vague that it doesn't give a whole lot to discuss  =/

-- 
Brent Jones
[EMAIL PROTECTED]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Greenbytes/Cypress

2008-09-23 Thread Eric Schrock
See:

http://www.opensolaris.org/jive/thread.jspa?threadID=73740&tstart=0

On Tue, Sep 23, 2008 at 12:25:59PM -0500, Bob Friesenhahn wrote:
> Today while reading EE Times I read an article about a startup company 
> named Greenbytes which will be offering a system called Cypress which 
> supports deduplication and arrangement of data to minimize power 
> consumption.  It seems that deduplication is at the file level.  The 
> product is initially based on Sun hardware (Sunfire 4540) and uses 
> OpenSolaris and a modified version of ZFS.
> 
> I am surprised to first hear about this in EE Times rather than on 
> this list.
> 
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, Fishworkshttp://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Greenbytes/Cypress

2008-09-23 Thread Bob Friesenhahn
Today while reading EE Times I read an article about a startup company 
named Greenbytes which will be offering a system called Cypress which 
supports deduplication and arrangement of data to minimize power 
consumption.  It seems that deduplication is at the file level.  The 
product is initially based on Sun hardware (Sunfire 4540) and uses 
OpenSolaris and a modified version of ZFS.

I am surprised to first hear about this in EE Times rather than on 
this list.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Greenbytes/Cypress

2008-09-23 Thread Bob Friesenhahn
On Tue, 23 Sep 2008, Eric Schrock wrote:

> See:
>
> http://www.opensolaris.org/jive/thread.jspa?threadID=73740&tstart=0

I must apologize for annoying everyone.  When Richard Elling posted the 
GreenBytes link without saying what it was I completely ignored it. 
I assumed that it would be Windows-centric content that I can not view 
since of course I am a dedicated Solaris user.  I see that someone 
else mentioned that the content does not work for Solaris users.  As a 
result I ignored the entire discussion as being about some silly 
animation of gumballs.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Greenbytes/Cypress

2008-09-23 Thread C. Bergström
Bob Friesenhahn wrote:
> Today while reading EE Times I read an article about a startup company 
> named Greenbytes which will be offering a system called Cypress which 
> supports deduplication and arrangement of data to minimize power 
> consumption.  It seems that deduplication is at the file level.  The 
> product is initially based on Sun hardware (Sunfire 4540) and uses 
> OpenSolaris and a modified version of ZFS.
>
> I am surprised to first hear about this in EE Times rather than on 
> this list.
>   
maybe you didn't grok it.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Greenbytes/Cypress

2008-09-23 Thread Victor Latushkin
On 23.09.08 21:25, Bob Friesenhahn wrote:
> Today while reading EE Times I read an article about a startup company 
> named Greenbytes which will be offering a system called Cypress which 
> supports deduplication and arrangement of data to minimize power 
> consumption.  It seems that deduplication is at the file level.  The 
> product is initially based on Sun hardware (Sunfire 4540) and uses 
> OpenSolaris and a modified version of ZFS.
> 
> I am surprised to first hear about this in EE Times rather than on 
> this list.

Actually it was discussed here earlier, in a topic titled "Do you grok it?"

victor

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Greenbytes/Cypress

2008-09-23 Thread Richard Elling
Bob Friesenhahn wrote:
> On Tue, 23 Sep 2008, Eric Schrock wrote:
>
>   
>> See:
>>
>> http://www.opensolaris.org/jive/thread.jspa?threadID=73740&tstart=0
>> 
>
> I must apologize for annoying everyone.  When Richard Elling posted the 
> GreenBytes link without saying what it was I completely ignored it. 
> I assumed that it would be Windows-centric content that I can not view 
> since of course I am a dedicated Solaris user.  I see that someone 
> else mentioned that the content does not work for Solaris users.  As a 
> result I ignored the entire discussion as being about some silly 
> animation of gumballs.
>   

So you admit that you didn't grok it? :-)
Dude poured in a big bag of gumballs, but they were de-duped,
so the gumball machine only had a few gumballs.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Hitachi SAN, pool recovery

2008-09-23 Thread Tim Haley
Vincent Fox wrote:
> Just make SURE the other host is actually truly DEAD!
> 
> If for some reason it's simply wedged, or you have lost console access but 
> the hostA is still "live", then you can end up with 2 systems having access 
> to same ZFS pool.
> 
> I have done this in test, 2 hosts accessing same pool, and the result is 
> catastrophic pool corruption.
> 
> I use the simple method if I think hostA is dead, I call the operators and 
> get them to pull the power cords out of it just to be certain.  Then I force 
> import on hostB with certainty.
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

This is a common cluster scenario: you need to make sure the other node is 
dead, so you force that result.  In Lustre setups they recommend a STONITH 
(Shoot the Other Node in the Head) approach.  They use a combination of a 
heartbeat setup like the one described here:

http://www.linux-ha.org/Heartbeat

and then something like the powerman framework to 'kill' the offline node.

Perhaps those things could be made to run on Solaris if they don't already.

-tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Hitachi SAN, pool recovery

2008-09-23 Thread Richard Elling
Tim Haley wrote:
> Vincent Fox wrote:
>   
>> Just make SURE the other host is actually truly DEAD!
>>
>> If for some reason it's simply wedged, or you have lost console access but 
>> the hostA is still "live", then you can end up with 2 systems having access 
>> to same ZFS pool.
>>
>> I have done this in test, 2 hosts accessing same pool, and the result is 
>> catastrophic pool corruption.
>>
>> I use the simple method if I think hostA is dead, I call the operators and 
>> get them to pull the power cords out of it just to be certain.  Then I force 
>> import on hostB with certainty.
>> --
>> This message posted from opensolaris.org
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>> 
>
> This is a common cluster scenario, you need to make sure the other node is 
> dead, so you force that result.  In lustre set-ups they recommend a STONITH 
> (Shoot the Other Node in the Head) approach.  They use a combo of a heartbeat 
> setup like described here:
>
> http://www.linux-ha.org/Heartbeat
>
> and then something like the powerman framework to 'kill' the offline node.
>
>   
> Perhaps those things could be made to run on Solaris if they don't already.
>   

Of course, Solaris Cluster (and the corresponding open source effort:
Open HA Cluster) manage cluster membership and data access.  We
also use SCSI reservations, so that a rogue node cannot even see the
data.  IMHO, if you do this without reservations, then you are dancing
with the devil in the details.
  -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Greenbytes/Cypress

2008-09-23 Thread Kyle McDonald
Richard Elling wrote:
> Bob Friesenhahn wrote:
>   
>> On Tue, 23 Sep 2008, Eric Schrock wrote:
>>
>>   
>> 
>>> See:
>>>
>>> http://www.opensolaris.org/jive/thread.jspa?threadID=73740&tstart=0
>>> 
>>>   
>> I must apologize for annoying everyone.  When Richard Elling posted the 
>> GreenBytes link without saying what it was I completely ignored it. 
>> I assumed that it would be Windows-centric content that I can not view 
>> since of course I am a dedicated Solaris user.  I see that someone 
>> else mentioned that the content does not work for Solaris users.  As a 
>> result I ignored the entire discussion as being about some silly 
>> animation of gumballs.
>>   
>> 
>
> So you admit that you didn't grok it? :-)
> Dude poured in a big bag of gumballs, but they were de-duped,
> so the gumball machine only had a few gumballs.
>   
I won't admit I didn't grok it. I will admit, however (and this may be 
worse), that even though I do have a Windows laptop with QuickTime 
installed, I couldn't get the damn thing to work in Firefox. So I 
couldn't see it.

 -Kyle

>  -- richard
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Greenbytes/Cypress

2008-09-23 Thread Keith Bierman

On Sep 23, 2008, at 12:48 PM, Richard Elling wrote:

>
> So you admit that you didn't grok it? :-)
> Dude poured in a big bag of gumballs, but they were de-duped,
> so the gumball machine only had a few gumballs.
>

When my data is deduped, that's a GoodThing (other than my unanswered  
query to them about how they handle zfs copies=2), but when my candy  
supply suddenly shrinks, I assume there are Rats or Mice about ...

I thought it was a lovely bit of art, but completely worthless as  
advertising (as it neither made sense nor made me want to learn more  
about them).

Fortunately they triggered a Google news alert, so I read their  
announcement when it came out, and that was helpful ;>


-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
 Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] resilver being killed by 'zpool status' when root

2008-09-23 Thread Miles Nordin
> "bi" == Blake Irvin <[EMAIL PROTECTED]> writes:

bi> running 'zpool status' or 'zpool status -xv'
bi> during a resilver as a non-privileged user has no adverse
bi> effect, but if i do the same as root, the resilver restarts.

I have this in my ZFS bug notes:

From: Thomas Bleek <[EMAIL PROTECTED]>
 "zpool status" is resetting the resilver process of a spare drive.
 Only the spare drive resilver is disturbed, an "normal" resilver not!
 Because I have a script running every 10 minutes to check some
 things (also zpool status) the resilver did never complete:-(

was yours a resilver onto a spare drive, or a manually-initiated
resilver?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS with Fusion-IO?

2008-09-23 Thread Jignesh K. Shah

http://www.fusionio.com/Products.aspx

Looks like a cool SSD to go with ZFS

Has anybody tried ZFS with Fusion-IO storage? For that matter, even with 
Solaris?


-Jignesh

-- 
Jignesh Shah   http://blogs.sun.com/jkshah  
Sun Microsystems,Inc   http://sun.com/postgresql

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs allow interaction with file system privileges

2008-09-23 Thread Paul B. Henson

So I've been playing with SXCE in anticipation of the release of S10U6
(which, last I heard, has been delayed until sometime in October :( ), seeing
how I might integrate our identity management system and ZFS provisioning
using a minimum-privilege service account.

I need to be able to create filesystems, rename them, delete them, and
change various attributes (quota and whatnot).
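
For example, the delegation side of that can be expressed roughly like this
("provisioner" is a placeholder for the service account, and "export/home"
for the parent filesystem):

  zfs allow provisioner create,destroy,mount,rename,quota export/home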

However, in addition to delegation using zfs allow, it seems permissions
must be granted in the underlying file systems as well. In order to mount a
new ZFS filesystem, an account needs permission to be able to create a
directory in the containing filesystem.
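
That directory-creation piece can be granted with an NFSv4 ACL on the parent
filesystem's mountpoint, e.g. (again with a placeholder account name):

  chmod A+user:provisioner:add_subdirectory:allow /export/home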

I suppose I can configure an ACL allowing such without any problem, but I
also need to be able to update the ownership of the new filesystem to the
appropriate account it is being created for. Another option would be to
leave the filesystem owned by the service account, and create an explicit
ACL for the user it was created for, but a fair number of UNIX applications
aren't really happy when a home directory is not owned by the user whose
home directory it is.

What would be the best way to allow the service account to chown the newly
created ZFS filesystem to the appropriate user? Right now I'm tentatively
thinking of making a small suid root binary only executable by the service
account which would take a username and chown appropriately.

Any other suggestions?


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with Fusion-IO?

2008-09-23 Thread Bryan Wagoner
Last time I played with one of those, the problem was that it didn't have any 
drivers for Solaris. It's a PCIe device, unlike something like the Gigabyte 
i-RAM or an Intel SSD.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs allow interaction with file system privileges

2008-09-23 Thread Darren J Moffat
Paul B. Henson wrote:
> What would be the best way to allow the service account to chown the newly
> created ZFS filesystem to the appropriate user? Right now I'm tentatively
> thinking of making a small suid root binary only executable by the service
> account which would take a username and chown appropriately.
> 
> Any other suggestions?

Run the "service" with the file_chown privilege.  See privileges(5), 
rbac(5) and if it runs as an SMF service smf_method(5).
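
A rough sketch of both options (the account name is a placeholder):

  # grant the account file_chown in addition to the basic privilege set
  usermod -K defaultpriv=basic,file_chown provisioner

  # or, for an SMF service, in the manifest's method_credential:
  #   <method_credential user='provisioner' privileges='basic,file_chown'/>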

--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss