Re: [zfs-discuss] how to replace failed vdev on non redundant pool?

2010-10-20 Thread Cassandra Pugh
Well, I was expecting/hoping that this sequence of commands would work:

zpool create testpool vdeva vdevb vdevc

zpool replace testpool vdevc vdevd

# zpool status reports the disk is resilvered.

On a (non-mirrored, non-RAID) test pool I just created, this command works.
However, when the disk failed, all I/O was suspended, and I could not
replace it as above, even when I forced the command with -f.

If this were a raidz pool, would the zpool replace command even work?
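
(For comparison, the sequence I have in mind for a redundant pool, with
purely illustrative device names, would be:

zpool create testpool raidz vdeva vdevb vdevc
zpool replace testpool vdevc vdevd
zpool status testpool

and the pool should stay online while vdevd resilvers.)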

-
Cassandra
(609) 243-2413
Unix Administrator


"From a little spark may burst a mighty flame."
-Dante Alighieri


On Fri, Oct 15, 2010 at 10:06 PM, Edward Ned Harvey wrote:

> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Cassandra Pugh
> >
> > I would like to know how to replace a failed vdev in a non-redundant
> > pool.
>
> Non redundant ... Failed ... What do you expect?  This seems like a really
> simple answer...  You can't.  Unless perhaps I've misunderstood the
> question, or the question wasn't asked right or something...
>
>


[zfs-discuss] how to replace failed vdev on non redundant pool?

2010-10-15 Thread Cassandra Pugh
Hello,

I would like to know how to replace a failed vdev in a non-redundant pool.

I am using fiber-attached disks, and cannot simply place the disk back into
the machine, since it is virtual.

I have the latest kernel from September 2010 that includes all of the new
ZFS upgrades.

Please, can you help me?
-
Cassandra
(609) 243-2413
Unix Administrator


"From a little spark may burst a mighty flame."
-Dante Alighieri


Re: [zfs-discuss] Remove non-redundant disk

2010-07-06 Thread Cassandra Pugh
I tried zpool replace; however, the new drive is slightly smaller, and even
with a -f, it refuses to replace the drive.

I guess I will have to export the pool and destroy this one to get my drives
back.

Still would like the ability to shrink a pool.
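
For anyone hitting the same wall: one way to compare raw device sizes before
attempting the replace (device names illustrative) is

# prtvtoc /dev/rdsk/c1t1d0s2 | grep sectors
# prtvtoc /dev/rdsk/c1t2d0s2 | grep sectors

since zpool replace needs the new device to be at least as large as the old
one.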
-
Cassandra
(609) 243-2413
Unix Administrator


"From a little spark may burst a mighty flame."
-Dante Alighieri


On Tue, Jul 6, 2010 at 1:02 PM, Roy Sigurd Karlsbakk wrote:

> - Original Message -
> > The pool is not redundant, so I would suppose, yes, it is RAID-0 on
> > the software level.
> >
> > I have a few drives, which are on a specific array, which I would like
> > to remove from this pool.
> >
> > I have discovered the "replace" command, and I am going to try to
> > replace, one for one, the drives I would like to remove.
> >
> > However, it would be nice if there were a way to simply "remove" the
> > disks, if space allowed.
>
> zpool attach the new drive, then zpool detach the old drive; that will
> replace the drive without much hassle
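>
> A minimal sketch, assuming c1t1d0 is the old drive and c1t2d0 the new one
> (names illustrative):
>
> # zpool attach testpool c1t1d0 c1t2d0
> # zpool status testpool      (wait for the resilver to finish)
> # zpool detach testpool c1t1d0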
>
> Vennlige hilsener / Best regards
>
> roy
> --
> Roy Sigurd Karlsbakk
> (+47) 97542685
> r...@karlsbakk.net
> http://blogg.karlsbakk.net/
> --
> In all pedagogy it is essential that the curriculum be presented
> intelligibly. It is an elementary imperative for all pedagogues to avoid
> excessive use of idioms of foreign origin. In most cases adequate and
> relevant synonyms exist in Norwegian.
>


Re: [zfs-discuss] Remove non-redundant disk

2010-07-06 Thread Cassandra Pugh
The pool is not redundant, so I would suppose, yes, it is RAID-0 on the
software level.

I have a few drives, which are on a specific array, which I would like to
remove from this pool.

I have discovered the "replace" command, and I am going to try to replace,
one for one, the drives I would like to remove.

However, it would be nice if there were a way to simply "remove" the disks,
if space allowed.


-
Cassandra
(609) 243-2413
Unix Administrator


"From a little spark may burst a mighty flame."
-Dante Alighieri


On Tue, Jul 6, 2010 at 11:55 AM, Roy Sigurd Karlsbakk wrote:

> - Original Message -
> > Hello list,
> >
> > This has probably been discussed, however I would like to bring it up
> > again, so that the powers that be, know someone else is looking for
> > this feature.
> >
> > I would like to be able to shrink a pool and remove a non-redundant
> > disk.
> >
> > Is this something that is in the works?
> >
> > It would be fantastic if I had this capability.
>
> You're a little unclear on what you want, but it seems to me you want to
> change a raidz2 to a raidz1 or something like that. That is, AFAIK, in the
> works, with the block rewrite functionality. As with most other parts of
> progress in OpenSolaris, nothing is clear about when or if this will get
> integrated into the system.
>
> For now, you can't change a raidz(n) VDEV and you can't detach a VDEV from
> a pool. The only way is to build a new pool and move the data with things
> like zfs send/receive. You can also remove a drive from a raidz(n),
> reducing its redundancy, but you can't change the value of n.
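>
> A rough sketch of that migration, assuming your release supports the -R
> replication flag (snapshot and pool names illustrative):
>
> # zfs snapshot -r oldpool@migrate
> # zfs send -R oldpool@migrate | zfs receive -F -d newpool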
>
> Vennlige hilsener / Best regards
>
> roy
> --
> Roy Sigurd Karlsbakk
> (+47) 97542685
> r...@karlsbakk.net
> http://blogg.karlsbakk.net/
> --
> In all pedagogy it is essential that the curriculum be presented
> intelligibly. It is an elementary imperative for all pedagogues to avoid
> excessive use of idioms of foreign origin. In most cases adequate and
> relevant synonyms exist in Norwegian.
>


[zfs-discuss] Remove non-redundant disk

2010-07-06 Thread Cassandra Pugh
Hello list,

This has probably been discussed, however I would like to bring it up again
so that the powers that be know someone else is looking for this feature.

I would like to be able to shrink a pool and remove a non-redundant disk.

Is this something that is in the works?

It would be fantastic if I had this capability.

Thanks!


-
Cassandra
Unix Administrator


"From a little spark may burst a mighty flame."
-Dante Alighieri


Re: [zfs-discuss] nfs share of nested zfs directories?

2010-06-04 Thread Cassandra Pugh
Well, yes, I understand that I need to research running the idmapd service,
but I also need to figure out how to use NFSv4 with automount.
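
In case it helps anyone searching later, a minimal autofs sketch for an
NFSv4 mount on a Linux client (server name, paths, and map file names are
illustrative):

/etc/auto.master:
/mnt/zfs   /etc/auto.zfs

/etc/auto.zfs:
mydir   -fstype=nfs4,hard,intr   nfs_server:/mypool/mydir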
-
Cassandra
(609) 243-2413
Unix Administrator


"From a little spark may burst a mighty flame."
-Dante Alighieri


On Fri, Jun 4, 2010 at 10:00 AM, Pasi Kärkkäinen  wrote:

> On Fri, Jun 04, 2010 at 08:43:32AM -0400, Cassandra Pugh wrote:
> >Thank you! When I manually mount using the "mount -t nfs4" option, I am
> >able to see the entire tree; however, the permissions are set as
> >nfsnobody:
> >"Warning: rpc.idmapd appears not to be running.
> > All uids will be mapped to the nobody uid."
> >
>
> Did you actually read the error message? :)
> Finding a solution shouldn't be too difficult after that...
>
> -- Pasi
>
> >-
> >Cassandra
> >(609) 243-2413
> >Unix Administrator
> >
> >"From a little spark may burst a mighty flame."
> >-Dante Alighieri
> >
> >On Thu, Jun 3, 2010 at 4:33 PM, Brandon High <bh...@freaks.com>
> wrote:
> >
> >  On Thu, Jun 3, 2010 at 12:50 PM, Cassandra Pugh <cp...@pppl.gov>
> >  wrote:
> >  > The special case here is that I am trying to traverse NESTED zfs
> >  > filesystems, for the purpose of having compressed and uncompressed
> >  > directories.
> >
> >  Make sure to use "mount -t nfs4" on your Linux client. The standard
> >  "nfs" type only supports NFS v2/v3.
> >
> >  -B
> >  --
> >  Brandon High : bh...@freaks.com
> >
>
>
>


Re: [zfs-discuss] nfs share of nested zfs directories?

2010-06-04 Thread Cassandra Pugh
Thank you! When I manually mount using the "mount -t nfs4" option, I am able
to see the entire tree; however, the permissions are set as nfsnobody:
"Warning: rpc.idmapd appears not to be running.
 All uids will be mapped to the nobody uid."
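
A sketch of what I believe the client-side fix looks like on a RHEL-style
Linux box: set the NFSv4 domain in the [General] section of /etc/idmapd.conf
(value illustrative) and start the daemon:

Domain = example.com
# service rpcidmapd start

then remount. The domain should match NFSMAPID_DOMAIN on the Solaris server
for the uids to map.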




-
Cassandra
(609) 243-2413
Unix Administrator


"From a little spark may burst a mighty flame."
-Dante Alighieri


On Thu, Jun 3, 2010 at 4:33 PM, Brandon High  wrote:

> On Thu, Jun 3, 2010 at 12:50 PM, Cassandra Pugh  wrote:
> > The special case here is that I am trying to traverse NESTED zfs filesystems,
> > for the purpose of having compressed and uncompressed directories.
>
> Make sure to use "mount -t nfs4" on your Linux client. The standard
> "nfs" type only supports NFS v2/v3.
>
> -B
>
> --
> Brandon High : bh...@freaks.com
>


Re: [zfs-discuss] nfs share of nested zfs directories?

2010-06-03 Thread Cassandra Pugh
I am trying to set this up as an automount.

Currently I am setting up a mount for each area, but I have a lot to
mount.

When I run showmount -e nfs_server I do see all of the shared directories.

-
Cassandra
(609) 243-2413
Unix Administrator


"From a little spark may burst a mighty flame."
-Dante Alighieri


On Thu, Jun 3, 2010 at 2:26 PM, Brandon High  wrote:

> showmount -e nfs_server
>


Re: [zfs-discuss] nfs share of nested zfs directories?

2010-06-03 Thread Cassandra Pugh
No, usernames are not an issue.  I have many shares that work, but they are
single zfs file systems.
The special case here is that I am trying to traverse NESTED zfs
filesystems, for the purpose of having compressed and uncompressed
directories.


-
Cassandra
(609) 243-2413
Unix Administrator


"From a little spark may burst a mighty flame."
-Dante Alighieri


On Thu, Jun 3, 2010 at 3:00 PM, Cindy Swearingen <
cindy.swearin...@oracle.com> wrote:

> Hi Cassandra,
>
> The mirror mount feature allows the client to access files and dirs that
> are newly created on the server, but this doesn't look like your problem
> described below.
>
> My guess is that you need to resolve the username/permission issues
> before this will work, but some versions of Linux don't support
> traversing nested mount points.
>
> I'm no NFS expert and many on this list are, but things to check are:
>
> - I'll assume that hostnames are resolving between systems since
> you can share/mount the resources.
>
> - If you are seeing "nobody" instead of user names, then you need to
> make sure the domain name is specified in NFSMAPID_DOMAIN. For example,
> add company.com to the /etc/default/nfs file and then restart this
> server:
> # svcs | grep mapid
> online May_27   svc:/network/nfs/mapid:default
> # svcadm restart svc:/network/nfs/mapid:default
>
> - Permissions won't resolve correctly until the above two issues are
> cleared.
>
> - You might be able to rule out the Linux client support of nested
> mount points by just sharing a simple test dataset, like this:
>
> # zfs create mypool/test
> # cp /usr/dict/words /mypool/test/file.1
> # zfs set sharenfs=on mypool/test
>
> and see if file.1 is visible on the Linux client.
>
> Thanks,
>
> Cindy
>
>
> On 06/03/10 11:53, Cassandra Pugh wrote:
>
>> Thanks for getting back to me!
>>
>> I am using Solaris 10 10/09 (update 8)
>>
>> I have created multiple nested zfs filesystems in order to compress some
>> but not all subdirectories in a directory.
>> I have ensured that they all have a sharenfs option, as I have done with
>> other shares.
>>
>> This is a special case to me, since instead of just
>> #zfs create mypool/mydir
>>
>> and then just using mkdir to make everything thereafter, I have done:
>> #zfs create mypool/mydir
>> #zfs create mypool/mydir/dir1
>> #zfs create mypool/mydir/dir1/compressedir1
>> #zfs create mypool/mydir/dir1/compressedir2
>> #zfs create mypool/mydir/dir1/uncompressedir
>>
>>
>> I had hoped that I would then export this, mount it on the client, and
>> see:
>> #ls /mnt/mydir/*
>>
>> dir1:
>> compressedir1 compressedir2 uncompressedir
>>
>> and the files thereafter.
>>
>> However, what I see is:
>>
>> #ls /mnt/mydir/*
>>
>> dir1:
>>
>> My client is Linux. I would assume we are using NFSv3. I also notice that
>> the permissions are not showing through correctly.
>> The mount options used are our "defaults"
>> (hard,rw,nosuid,nodev,intr,noacl)
>>
>>
>> I am not sure what this mirror mounting is; would that help me?
>> Is there something else I could be doing to approach this better?
>>
>> Thank you for your insight.
>>
>> -
>>
>> Cassandra
>> Unix Administrator
>>
>>
>> On Thu, May 27, 2010 at 5:25 PM, Cindy Swearingen <
>> cindy.swearin...@oracle.com> wrote:
>>
>>Cassandra,
>>
>>Which Solaris release is this?
>>
>>This is working for me between a Solaris 10 server and an
>>OpenSolaris client.
>>
>>Nested mount points can be tricky and I'm not sure if you are looking
>>for the mirror mount feature that is not available in the Solaris 10
>>release, where new directory contents are accessible on the client.
>>
>>See the examples below.
>>
>>
>>Thanks,
>>
>>Cindy
>>
>>On the server:
>>
>># zpool create pool c1t3d0
>># zfs create pool/myfs1
>># cp /usr/dict/words /pool/myfs1/file.1
>># zfs create -o mountpoint=/pool/myfs1/myfs2 pool/myfs2
>># ls /pool/myfs1
>>file.1  myfs2
>># cp /usr/dict/words /pool/myfs1/myfs2/file.2
>># ls /pool/myfs1/myfs2/
>>file.2
>># zfs set sharenfs=on pool/myfs1
>># zfs set sharenfs=on pool/myfs2
>># share
>>-   /pool/myfs1   rw   ""
>>-   /pool/myfs1/myfs2   rw   ""

Re: [zfs-discuss] nfs share of nested zfs directories?

2010-06-03 Thread Cassandra Pugh
Thanks for getting back to me!

I am using Solaris 10 10/09 (update 8)

I have created multiple nested zfs filesystems in order to compress some but
not all subdirectories in a directory.
I have ensured that they all have a sharenfs option, as I have done with
other shares.

This is a special case to me, since instead of just
#zfs create mypool/mydir

and then just using mkdir to make everything thereafter, I have done:
#zfs create mypool/mydir
#zfs create mypool/mydir/dir1
#zfs create mypool/mydir/dir1/compressedir1
#zfs create mypool/mydir/dir1/compressedir2
#zfs create mypool/mydir/dir1/uncompressedir


I had hoped that I would then export this, mount it on the client, and
see:
#ls /mnt/mydir/*

dir1:
compressedir1 compressedir2 uncompressedir

and the files thereafter.

However, what I see is:

#ls /mnt/mydir/*

dir1:

My client is Linux. I would assume we are using NFSv3.
I also notice that the permissions are not showing through correctly.
The mount options used are our "defaults" (hard,rw,nosuid,nodev,intr,noacl).
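
If it matters, the equivalent fstab line I would try for NFSv4 (server name
illustrative; I believe noacl does not apply to v4) is:

nfs_server:/mypool/mydir  /mnt/mydir  nfs4  hard,rw,nosuid,nodev,intr  0 0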


I am not sure what this mirror mounting is; would that help me?
Is there something else I could be doing to approach this better?

Thank you for your insight.

-

Cassandra
Unix Administrator


On Thu, May 27, 2010 at 5:25 PM, Cindy Swearingen <
cindy.swearin...@oracle.com> wrote:

> Cassandra,
>
> Which Solaris release is this?
>
> This is working for me between a Solaris 10 server and an OpenSolaris
> client.
>
> Nested mount points can be tricky and I'm not sure if you are looking
> for the mirror mount feature that is not available in the Solaris 10
> release, where new directory contents are accessible on the client.
>
> See the examples below.
>
>
> Thanks,
>
> Cindy
>
> On the server:
>
> # zpool create pool c1t3d0
> # zfs create pool/myfs1
> # cp /usr/dict/words /pool/myfs1/file.1
> # zfs create -o mountpoint=/pool/myfs1/myfs2 pool/myfs2
> # ls /pool/myfs1
> file.1  myfs2
> # cp /usr/dict/words /pool/myfs1/myfs2/file.2
> # ls /pool/myfs1/myfs2/
> file.2
> # zfs set sharenfs=on pool/myfs1
> # zfs set sharenfs=on pool/myfs2
> # share
> -   /pool/myfs1   rw   ""
> -   /pool/myfs1/myfs2   rw   ""
>
> On the client:
>
> # ls /net/t2k-brm-03/pool/myfs1
> file.1  myfs2
> # ls /net/t2k-brm-03/pool/myfs1/myfs2
> file.2
> # mount -F nfs t2k-brm-03:/pool/myfs1 /mnt
> # ls /mnt
> file.1  myfs2
> # ls /mnt/myfs2
> file.2
>
> On the server:
>
> # touch /pool/myfs1/myfs2/file.3
>
> On the client:
>
> # ls /mnt/myfs2
> file.2  file.3
>
>
> On 05/27/10 14:02, Cassandra Pugh wrote:
>
>> I was wondering if there is a special option to share out a set of
>> nested directories?  Currently if I share out a directory with
>> /pool/mydir1/mydir2 on a system, mydir1 shows up, and I can see mydir2,
>> but nothing in mydir2. mydir1 and mydir2 are each a zfs filesystem, each
>> shared with the proper sharenfs permissions.
>> Did I miss a browse or traverse option somewhere?
>> -
>> Cassandra
>> Unix Administrator
>> "From a little spark may burst a mighty flame."
>> -Dante Alighieri
>


[zfs-discuss] nfs share of nested zfs directories?

2010-05-27 Thread Cassandra Pugh
I was wondering if there is a special option to share out a set of nested
directories? Currently if I share out a directory with /pool/mydir1/mydir2
on a system, mydir1 shows up, and I can see mydir2, but nothing in mydir2.
mydir1 and mydir2 are each a zfs filesystem, each shared with the proper
sharenfs permissions.
Did I miss a browse or traverse option somewhere?
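
A quick way to confirm every nested filesystem is actually shared, with the
pool name illustrative:

# zfs get -r sharenfs mypool/mydir1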
-
Cassandra
Unix Administrator

"From a little spark may burst a mighty flame."
-Dante Alighieri