Re: [zfs-discuss] CR6894234 -- improved sgid directory compatibility with non-Solaris NFS clients

2009-10-29 Thread Paul B. Henson
On Thu, 29 Oct 2009 casper@sun.com wrote:

> Do you have the complete NFS trace output?  My reading of the source code
> says that the file will be created with the proper gid so I am actually
> believing that the client "over corrects" the attributes after creating
> the file/directory.

I dug around the OpenSolaris source code and believe I found where this
behavior is coming from.

In

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/nfs/nfs4_srv.c

on line 1643, there's a comment:

"Set default initial values for attributes when not specified in
createattrs."

And if the uid/gid is not explicitly specified in the NFS CREATE operation,
the code calls crgetuid and crgetgid to determine what uid/gid to use for
the mkdir operation. crgetgid is just "return (cr->cr_gid);", which would
result in the behavior we describe -- if there is no group owner explicitly
specified, new subdirectories are always created based on the primary group
of the user, disregarding the presence of any sgid bit on the parent
directory.

As far as the client:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/nfs/nfs4_vnops.c

On line 6790 the client code explicitly checks whether or not the new
directory is being created inside of a parent directory with the sgid bit
set, and if so explicitly includes the group owner in the request.

I'm guessing you are probably looking at the actual underlying filesystem
code? That probably does do the right thing if the gid is not specified.
But given that the NFS server code explicitly uses the primary gid when no
gid is specified by the client, by the time the request gets to the
underlying file system the gid is already specified and any
filesystem-level sgid handling is bypassed.
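
If my reading is right, the vattr handed to zfs_mkdir() should already
contain a gid even for the Linux client; an untested dtrace one-liner along
these lines (assuming the fbt args line up with the zfs_mkdir() signature)
ought to confirm that while a Linux client runs mkdir:

# dtrace -n 'fbt:zfs:zfs_mkdir:entry { printf("va_mask 0x%x va_gid %u", args[2]->va_mask, args[2]->va_gid); }'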

I doubt if the resolution to the problem is as simple as not having the NFS
server code explicitly specify a gid if none is given by the client,
allowing the underlying filesystem to "do the right thing", but who knows
:)... I still think that the preferred behavior would be to respect the
sgid bit semantics, and continue to hope I can convince the engineers in
charge of this decision to agree.

Thanks...


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FW: File level cloning

2009-10-29 Thread Robert Milkowski


Create a dedicated ZFS zvol or filesystem for each file representing
your virtual machine.
Then, if you need to clone a VM, you clone its zvol or filesystem.
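
For example, something along these lines (pool and dataset names are just
placeholders):

# zfs create -p tank/vms/vm01            # one filesystem per VM, holding its vmdk(s)
# zfs snapshot tank/vms/vm01@gold
# zfs clone tank/vms/vm01@gold tank/vms/vm02

The clone only consumes additional space as it diverges from the snapshot.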


Jeffry Molanus wrote:

I'm not doing anything yet; I just wondered if ZFS provides any methods to
do file level cloning instead of complete file systems. Basically I want a
zero-size-increase copy of a file. A while ago BTRFS developers added this
feature to the fs by doing a specialized ioctl call. Maybe this isn't
needed at all since vmware can clone but I have the gut feeling that doing
this at zfs level is more efficient. I might be wrong though.

Regards, Jeff

  

-Original Message-
From: Scott Meilicke [mailto:scott.meili...@craneaerospace.com]
Sent: Wednesday, October 28, 2009 9:33 PM
To: Jeffry Molanus
Subject: Re: [zfs-discuss] File level cloning

What are you doing with your vmdk file(s) from the clone?


On 10/28/09 9:36 AM, "Jeffry Molanus"  wrote:

  

Agreed, but with file level it is more granular than cloning a whole fs,
and I would not need to delete the cloned fs once I picked the vmdk I
wanted. ESX has a maximum on its datastore, otherwise this would not be
needed and I would be able to create a fs per vmdk.

Regards, Jeff

- Original Message -
From: Scott Meilicke
Sent: Wednesday, 28 October 2009 17:07
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] File level cloning

I don't think so. But, you can clone at the ZFS level, and then just use
the vmdk(s) that you need. As long as you don't muck about with the other
stuff in the clone, the space usage should be the same.

-Scott


--
Scott Meilicke | Enterprise Systems Administrator | Crane Aerospace &
Electronics | +1 425-743-8153 | M: +1 206-406-2670



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

  



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CR6894234 -- improved sgid directory compatibility with non-Solaris NFS clients

2009-10-29 Thread Paul B. Henson
On Thu, 29 Oct 2009 casper@sun.com wrote:

> Do you have the complete NFS trace output?  My reading of the source code
> says that the file will be created with the proper gid so I am actually
> believing that the client "over corrects" the attributes after creating
> the file/directory.

Yes, we submitted that to support. It's SR#71757154, although I don't know
if they've kept the ticket up-to-date. My understanding of the current
status is that they have verified the behavior we describe, and given the
ambiguity of the POSIX spec are not necessarily inclined to change it.

I've attached a small packet capture from creating a subdirectory on a
Solaris 10U8 NFS server from both a Linux client and a Solaris client.

For the linux client:

--
hen...@damien /mnt/sgid_test $ ls -ld .
drwx--s--x 2 henson iit 2 Oct 29 17:29 .

hen...@damien /mnt/sgid_test $ id
uid=1005(henson) gid=1012(csupomona)

hen...@damien /mnt/sgid_test $ mkdir linux

hen...@damien /mnt/sgid_test $ ls -l
total 2
drwx--s--x 2 henson csupomona 2 Oct 29 17:31 linux
--

The mkdir operation appears to consist of the compound call
"PUTFH;SAVEFH;CREATE;GETFH;GETATTR;RESTOREFH;GETATTR"; the CREATE call
specifies an attrmask of just FATTR4_MODE. The response to the GETATTR call
shows the FATTR4_OWNER_GROUP to be csupomona.

For the Solaris client:

--
hen...@s10 /mnt/sgid_test $ ls -ld .
drwx--s--x+  3 henson   iit3 Oct 29 17:31 .

hen...@s10 /mnt/sgid_test $ id
uid=1005(henson) gid=1012(csupomona)

hen...@s10 /mnt/sgid_test $ mkdir solaris

hen...@s10 /mnt/sgid_test $ ls -l
total 4
drwx--s--x+  2 henson   iit2 Oct 29 17:33 solaris
--

The mkdir in this case consists of the compound call
"PUTFH;CREATE;GETFH;GETATTR;SAVEFH;PUTFH;GETATTR;RESTOREFH;NVERIFY;SETATTR";
the CREATE call specifies an attrmask of both FATTR4_MODE *and*
FATTR4_OWNER_GROUP, with iit as the group.

In the reply to GETATTR, FATTR4_OWNER_GROUP is iit.
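
For anyone who wants to check the attached captures themselves, something
like the following should decode the CREATE arguments (assuming tshark is
available; newer versions take -Y instead of -R for the display filter):

tshark -r linux_mkdir.pcap -R nfs -V
tshark -r solaris_mkdir.pcap -R nfs -V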


We don't see any evidence that the Linux client explicitly changes the
group ownership after the directory is made. If I might inquire, which
source code are you looking at? Is it available through the OpenSolaris
online source browser? If so, could I trouble you for a link to it?

Thanks much for any help you might provide in clarifying this issue, and if
our understanding of the behavior turns out to be accurate, any help in
getting a change committed to better respect the sgid bit :)...


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768

linux_mkdir.pcap
Description: Binary data


solaris_mkdir.pcap
Description: Binary data
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dumb idea?

2009-10-29 Thread C. Bergström

Miles Nordin wrote:

"pt" == Peter Tribble  writes:



pt> Does it make sense to fold this sort of intelligence into the
pt> filesystem, or is it really an application-level task?

in general it seems all the time app writers want to access hundreds
of thousands of files by unique id rather than filename, and the POSIX
directory interface is not really up to the task.

Dear zfs'ers

It's possible to heavily influence the next POSIX/UNIX standard; if
you're interested in testing or giving feedback, ping me off list.  The Open
Group does take feedback before they implement the next version of the
standard, and now is a good time to participate in that.


Best,

./Christopher
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] internal scrub keeps restarting resilvering?

2009-10-29 Thread Jeremy Kitchen
After several days of trying to get a 1.5TB drive to resilver and it
continually restarting, I eliminated all of the snapshot-taking
facilities which were enabled, and the scrub/resilver still keeps
restarting:


2009-10-29.14:58:41 [internal pool scrub done txg:567780] complete=0
2009-10-29.14:58:41 [internal pool scrub txg:567780] func=1 mintxg=3  
maxtxg=567354

2009-10-29.16:52:53 [internal pool scrub done txg:567999] complete=0
2009-10-29.16:52:53 [internal pool scrub txg:567999] func=1 mintxg=3  
maxtxg=567354

2009-10-29.16:54:09 [internal pool scrub done txg:568003] complete=0
2009-10-29.16:54:09 [internal pool scrub txg:568003] func=1 mintxg=3  
maxtxg=567354

2009-10-29.16:55:24 [internal pool scrub done txg:568007] complete=0
2009-10-29.16:55:24 [internal pool scrub txg:568007] func=1 mintxg=3  
maxtxg=567354
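
Those entries come from the pool history; for reference, something like
this is what keeps turning them up, assuming 'zpool history -i' is
available on this build:

# zpool history -i raid3149 | tail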


It currently has one drive which is brand new and was just replaced,  
and quite likely another one on the way out:


r...@raid3136:~# zpool status raid3149
  pool: raid3149
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: resilver in progress for 0h10m, 0.05% done, 344h18m to go
config:

NAME STATE READ WRITE CKSUM
raid3149 ONLINE   0 0 0
  raidz2 ONLINE   0 0 0
c6t9d0   ONLINE   0 0 0
c6t10d0  ONLINE   0 0 0
c6t11d0  ONLINE   0 0 0
c6t12d0  ONLINE   0 0 0
c6t13d0  ONLINE   0 0 0
c6t14d0  ONLINE   0 0 0
c6t15d0  ONLINE   0 0 0
c6t16d0  ONLINE   0 0 0
c6t17d0  ONLINE   0 0 0
c6t18d0  ONLINE   0 0 0
c6t19d0  ONLINE   0 0 0
c6t20d0  ONLINE   0 0 0
c6t21d0  ONLINE   0 0 0
c6t22d0  ONLINE   0 0 0  592M resilvered
c6t23d0  ONLINE   0 0 0
c6t24d0  ONLINE   0 0 0
c6t46d0  ONLINE   0 0 0
c6t47d0  ONLINE   0 0 0
c6t25d0  ONLINE   0 0 0
c6t26d0  ONLINE   0 0 0
c6t48d0  ONLINE   0 0 0
c6t49d0  ONLINE   0 0 0
c6t50d0  ONLINE 131 0 0  1.22M resilvered
c6t51d0  ONLINE   0 0 0

errors: No known data errors


c6t22d0 is the brand new drive and c6t50d0 is the one I think is on  
its way out.


Is there any way to disable this internal scrub temporarily so it can
finish resilvering the pool and I can get that other drive replaced?   
I really don't want to have to replace both drives at the same time  
since that leaves me with no redundancy in the pool.


-Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Difficulty testing an SSD as a ZIL

2009-10-29 Thread Scott Meilicke
Hi all,

I received my SSD, and wanted to test it out using fake zpools with files as 
backing stores before attaching it to my production pool. However, when I 
exported the test pool and imported, I get an error. Here is what I did:

I created a file to use as a backing store for my new pool:
mkfile 1g /data01/test2/1gtest

Created a new pool:
zpool create ziltest2 /data01/test2/1gtest 

Added the SSD as a log:
zpool add -f ziltest2 log c7t1d0

(c7t1d0 is my SSD. I used the -f option since I had done this before with a 
pool called 'ziltest', same results)

A 'zpool status' returned no errors.

Exported:
zpool export ziltest2

Imported:
zpool import -d /data01/test2 ziltest2
cannot import 'ziltest2': one or more devices is currently unavailable

This happened twice with two different test pools using file-based backing 
stores.

I am nervous about adding the SSD to my production pool. Any ideas why I am 
getting the import error?
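
One thing I plan to try, on the guess that 'zpool import -d /data01/test2'
only searches that directory and so never finds the SSD log device under
/dev/dsk (the man page suggests -d can be given more than once):

zpool import -d /data01/test2 -d /dev/dsk ziltest2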

Thanks,
Scott
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dumb idea?

2009-10-29 Thread Miles Nordin
> "pt" == Peter Tribble  writes:

pt> Does it make sense to fold this sort of intelligence into the
pt> filesystem, or is it really an application-level task?

in general it seems all the time app writers want to access hundreds
of thousands of files by unique id rather than filename, and the POSIX
directory interface is not really up to the task.  In theory there are
supposed to be all these hashed directories and decent performance
with huge directories, but in practice posters keep mentioning stupid
gotchas like APIs that enforce full scans of the directory even if
the filesystem itself doesn't, or lack of garbage collection so that
churning queue directories end up swollen with dead space.  The usual
workaround for all gotchas of making two-levels-deep nested subdirs
just means a couple extra seeks per subdir level before you reach your
inode number, extra seeks just to accommodate the overcomplex
filesystem API, so sometimes app developers end up rolling their own
filesystem-in-a-file like Hadoop instead just to get away from the
presumption they want filenames and nested directories.  A uuid-subdir
interface, or directoryless uuid-filesystem option for 'zfs create',
might gather a lot of application consumers.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sub-optimal ZFS performance

2009-10-29 Thread David Magda

On Oct 29, 2009, at 15:08, Henrik Johansson wrote:


On Oct 29, 2009, at 5:23 PM, Bob Friesenhahn wrote:


On Thu, 29 Oct 2009, Orvar Korvar wrote:

So the solution is to never get more than 90% full disk space, för  
fan?


Right.  While UFS created artificial limits to keep the filesystem  
from getting so full that it became sluggish and "sick", ZFS does  
not seem to include those protections.  Don't ever run a ZFS pool  
for a long duration of time at very close to full since it will  
become excessively fragmented.


Setting quotas for all dataset could perhaps be of use for some of  
us. A "überquota" property for the whole pool would have been nice  
until a real solution is available.


Or create "lost+found" with 'zfs create' and give it a reservation.

The 'directory name' won't look too much out of place, and there'll be  
some space set aside.
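
Something like this, with the pool name, dataset name, and reservation
size as placeholders:

# zfs create -o reservation=10G tank/lost.found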


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CR6894234 -- improved sgid directory compatibility with non-Solaris NFS clients

2009-10-29 Thread Casper . Dik


>I posted a little while back about a problem we are having where when a
>new directory gets created over NFS on a Solaris NFS server from a Linux
>NFS client, the new directory group ownership is that of the primary group
>of the process, even if the parent directory has the sgid bit set and is
>owned by a different group.
>
>Basically, a Solaris client in such an instance explicitly requests that
>the new directory be owned by the group of the parent directory, and the
>server follows that request. A Linux NFS client, on the other hand, does
>not explicitly request any particular group ownership for the new
>directory, leaving the server to decide that on its own, which in the case
>of the Solaris server, is not the "right" group.

Do you have the complete NFS trace output?  My reading of the source code 
says that the file will be created with the proper gid so I am actually 
believing that the client "over corrects" the attributes after creating 
the file/directory.

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dumb idea?

2009-10-29 Thread Peter Tribble
On Sat, Oct 24, 2009 at 12:12 PM, Orvar Korvar
 wrote:
> Would this be possible to implement ontop ZFS? Maybe it is a dumb idea, I 
> dont know. What do you think, and how to improve this?
>
> Assume all files are put in the zpool, helter skelter. And then you can 
> create arbitrary different filters that shows you the files you want to see.

So why not pipe ls through grep?

> As of now, you have files in one directory structure. This makes the 
> organization of the files, hardcoded. You have /Movies/Action and that is it. 
> But if you had all movies in one large zpool, and if you could 
> programmatically define different structures that act as filters, you could 
> have different directory structures.
>
> Programmatically defined directory structure1, that acts on the zpool:
> /Movies/Action
>
> Programmatically defined directory structure2:
> /Movies/Actors/AlPacino
>
> etc.
>
> Maybe this is what MS WinFS was about? Maybe tag the files? Maybe a 
> relational database ontop ZFS? Maybe no directories at all? I dont know, just 
> brain storming. Is this is a dumb idea? Or old idea?

Old idea - I remember systems that did this or variants of it a
really long time ago. However, there you knew ahead of time
what applications you were going to run.

Does it make sense to fold this sort of intelligence into the
filesystem, or is it really an application-level task?

(And then remember that the filesystem can be accessed by an
almost infinite array of applications, not to mention remotely over
NFS or CIFS - how would they make sense of what you might build?)

Talking of NFS, you could imagine some sort of user-level nfs server
atop the filesystem that presents different views. Or other layered
filesystems (mvfs, for example) that present a modified view. That
seems a more fruitful approach than trying to build this into ZFS
itself.

The problems then become: what additional metadata do you
need? (You can hide the metadata in extended attributes if you
don't want it obviously visible.) And how do you keep the metadata
in sync with the real data in the face of modifications by applications
that aren't aware of your scheme?

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] CR6894234 -- improved sgid directory compatibility with non-Solaris NFS clients

2009-10-29 Thread Paul B. Henson

I posted a little while back about a problem we are having: when a new
directory gets created over NFS on a Solaris NFS server from a Linux NFS
client, the new directory's group ownership is that of the primary group
of the process, even if the parent directory has the sgid bit set and is
owned by a different group.

Basically, a Solaris client in such an instance explicitly requests that
the new directory be owned by the group of the parent directory, and the
server follows that request. A Linux NFS client, on the other hand, does
not explicitly request any particular group ownership for the new
directory, leaving the server to decide that on its own, which in the case
of the Solaris server, is not the "right" group.

The POSIX spec on this is somewhat ambiguous, so you can't really say the
Solaris implementation is "broken", but while perhaps following the letter
of the spec, I don't think it's following the spirit of the sgid bit on
directories.

I have a CR, #6894234, which is currently being reviewed through Sun
support. It seems their current inclination is to not change the behavior.

Again, while not technically broken, I would argue this behavior is
undesirable. The semantics of the sgid bit on directories are that new
subdirectories should be owned by the group of the parent directory. That's
what happens under Solaris for local file system access. That's what
happens under Solaris if a directory is made via NFS from a Solaris NFS
client. It's not what happens when a new directory is created via NFS from
a Linux NFS client, or any other NFS client that does not explicitly
request the group ownership when creating a directory. While POSIX does not
explicitly specify what a server should do when creating a new directory
and the client does not specify the group ownership, in the case where the
new directory resides in an existing directory with the sgid bit set,
following standard sgid bit directory group ownership semantics seems the
most appropriate thing to do.
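
To illustrate the semantics I mean, locally on the server (the path and
group name are just examples):

# mkdir /tank/proj
# chgrp webteam /tank/proj
# chmod g+s /tank/proj
# mkdir /tank/proj/newdir
# ls -ld /tank/proj/newdir

The new subdirectory comes back group-owned by webteam rather than by the
creating user's primary group.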

If any Sun engineers with an interest in improved interoperability and
keeping true to the spirit of the sgid bit could take a look at this CR and
weigh in on its final resolution, that would be greatly appreciated.


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sub-optimal ZFS performance

2009-10-29 Thread Rob Logan


> So the solution is to never get more than 90% full disk space

while that's true, it's not Henrik's main discovery. Henrik points
out that 1/4 of the arc is used for metadata, and sometimes
that's not enough...

if
echo "::arc" | mdb -k | egrep ^size
isn't reaching
echo "::arc" | mdb -k | egrep "^c "
and you are maxing out your metadata space, check:
echo "::arc" | mdb -k | grep meta_

one can set the metadata space (1G in this case) with:
echo "arc_meta_limit/Z 0x40000000" | mdb -kw

So while Henrik's FS had some fragmentation, 1/4 of c_max wasn't
enough metadata arc space for the number of files in /var/pkg/download

good find Henrik!

Rob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sub-optimal ZFS performance

2009-10-29 Thread Henrik Johansson


On Oct 29, 2009, at 5:23 PM, Bob Friesenhahn wrote:


On Thu, 29 Oct 2009, Orvar Korvar wrote:

So the solution is to never get more than 90% full disk space, för  
fan?


Right.  While UFS created artificial limits to keep the filesystem  
from getting so full that it became sluggish and "sick", ZFS does  
not seem to include those protections.  Don't ever run a ZFS pool  
for a long duration of time at very close to full since it will  
become excessively fragmented.


Setting quotas for all datasets could perhaps be of use for some of us.
An "überquota" property for the whole pool would have been nice until a
real solution is available.
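
For example (dataset names and sizes are only placeholders):

# zfs set quota=200G tank/export
# zfs set quota=50G tank/var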


Henrik
http://sparcv9.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Cindy Swearingen

Daniel,

What is the actual size of c1d1?

>I notice that the size of the first partition is wildly inaccurate.

If format doesn't understand the disk, then ZFS won't either.

Do you have some kind of intervening software like EMC powerpath
or are these disks under some virtualization control?

If so, I would try removing them from this control and retry the
add operation.

Cindy

On 10/29/09 12:07, Daniel wrote:

Yes I am trying to create a non-redundant pool of two disks.

The output of format -> partition for c0d0
Current partition table (original):
Total disk sectors available: 976743646 + 16384 (reserved sectors)

Part  Tag         Flag  First Sector  Size      Last Sector
  0   usr         wm    34            465.75GB  976743646
  1   unassigned  wm    0             0         0
  2   unassigned  wm    0             0         0
  3   unassigned  wm    0             0         0
  4   unassigned  wm    0             0         0
  5   unassigned  wm    0             0         0
  6   unassigned  wm    0             0         0
  8   reserved    wm    976743647     8.00MB    976760030


And for c1d1

Current partition table (original):
Total disk sectors available: 18446744072344831966 + 16384 (reserved sectors)

Part  Tag         Flag  First Sector          Size             Last Sector
  0   usr         wm    34                    8589934592.00TB  8446744072344831966
  1   unassigned  wm    0                     0                0
  2   unassigned  wm    0                     0                0
  3   unassigned  wm    0                     0                0
  4   unassigned  wm    0                     0                0
  5   unassigned  wm    0                     0                0
  6   unassigned  wm    0                     0                0
  8   reserved    wm    18446744072344831967  8.00MB           18446744072344848350


I notice that the size of the first partition is wildly inaccurate.

creating tank2 gives the same error.

# zpool create tank2 c1d1
cannot create 'tank2': invalid argument for this pool operation

Thanks for your help.

On Thu, Oct 29, 2009 at 1:54 PM, Cindy Swearingen
<cindy.swearin...@sun.com> wrote:


I might need to see the format-->partition output for both c0d0 and
c1d1.

But in the meantime, you could try this:

# zpool create tank2 c1d1
# zpool destroy tank2
# zpool add tank c1d1

Adding the c1d1 disk to the tank pool will create a non-redundant pool
of two disks. Is this what you had in mind?

Thanks,

Cindy


On 10/29/09 10:17, Daniel wrote:

Here is the output of zpool status and format.

# zpool status tank
 pool: tank
 state: ONLINE
 scrub: none requested
config:

   NAMESTATE READ WRITE CKSUM
   tankONLINE   0 0 0
 c0d0  ONLINE   0 0 0

errors: No known data errors

format> current
Current Disk = c1d1

/p...@0,0/pci-...@1f,2/i...@0/c...@1,0



On Thu, Oct 29, 2009 at 12:04 PM, Cindy Swearingen
<cindy.swearin...@sun.com> wrote:


   Hi Dan,

   Could you provide a bit more information, such as:

   1. zpool status output for tank

   2. the format entries for c0d0 and c1d1

   Thanks,

   Cindy
   - Original Message -
   From: Daniel <dan.lis...@gmail.com>
   Date: Thursday, October 29, 2009 9:59 am
   Subject: [zfs-discuss] adding new disk to pool
   To: zfs-discuss@opensolaris.org

> Hi,
>
> I just installed 2 new disks in my solaris box and would
like to add
> them to
> my zfs pool.
> After installing the disks I run
> # zpool add -n tank c1d1
>
> and I get:
>
> would update 'tank' to the following configuration:
> tank
>   c0d0
>   c1d1
>
> Which is what I want however when I omit the -n I get the
   following error
>
> # zpool add tank c1d1
> cannot add to 'tank': invalid 

Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Daniel
Yes I am trying to create a non-redundant pool of two disks.

The output of format -> partition for c0d0
Current partition table (original):
Total disk sectors available: 976743646 + 16384 (reserved sectors)

Part  Tag         Flag  First Sector  Size      Last Sector
  0   usr         wm    34            465.75GB  976743646
  1   unassigned  wm    0             0         0
  2   unassigned  wm    0             0         0
  3   unassigned  wm    0             0         0
  4   unassigned  wm    0             0         0
  5   unassigned  wm    0             0         0
  6   unassigned  wm    0             0         0
  8   reserved    wm    976743647     8.00MB    976760030

And for c1d1

Current partition table (original):
Total disk sectors available: 18446744072344831966 + 16384 (reserved sectors)

Part  Tag         Flag  First Sector          Size             Last Sector
  0   usr         wm    34                    8589934592.00TB  8446744072344831966
  1   unassigned  wm    0                     0                0
  2   unassigned  wm    0                     0                0
  3   unassigned  wm    0                     0                0
  4   unassigned  wm    0                     0                0
  5   unassigned  wm    0                     0                0
  6   unassigned  wm    0                     0                0
  8   reserved    wm    18446744072344831967  8.00MB           18446744072344848350

I notice that the size of the first partition is wildly inaccurate.

creating tank2 gives the same error.

# zpool create tank2 c1d1
cannot create 'tank2': invalid argument for this pool operation

Thanks for your help.

On Thu, Oct 29, 2009 at 1:54 PM, Cindy Swearingen
wrote:

> I might need to see the format-->partition output for both c0d0 and
> c1d1.
>
> But in the meantime, you could try this:
>
> # zpool create tank2 c1d1
> # zpool destroy tank2
> # zpool add tank c1d1
>
> Adding the c1d1 disk to the tank pool will create a non-redundant pool
> of two disks. Is this what you had in mind?
>
> Thanks,
>
> Cindy
>
>
> On 10/29/09 10:17, Daniel wrote:
>
>> Here is the output of zpool status and format.
>>
>> # zpool status tank
>>  pool: tank
>>  state: ONLINE
>>  scrub: none requested
>> config:
>>
>>NAMESTATE READ WRITE CKSUM
>>tankONLINE   0 0 0
>>  c0d0  ONLINE   0 0 0
>>
>> errors: No known data errors
>>
>> format> current
>> Current Disk = c1d1
>> 
>> /p...@0,0/pci-...@1f,2/i...@0/c...@1,0
>>
>>
>>
>> On Thu, Oct 29, 2009 at 12:04 PM, Cindy Swearingen
>> <cindy.swearin...@sun.com> wrote:
>>
>>
>>Hi Dan,
>>
>>Could you provide a bit more information, such as:
>>
>>1. zpool status output for tank
>>
>>2. the format entries for c0d0 and c1d1
>>
>>Thanks,
>>
>>Cindy
>>- Original Message -
>>From: Daniel <dan.lis...@gmail.com>
>>Date: Thursday, October 29, 2009 9:59 am
>>Subject: [zfs-discuss] adding new disk to pool
>>To: zfs-discuss@opensolaris.org 
>>
>>
>> > Hi,
>> >
>> > I just installed 2 new disks in my solaris box and would like to add
>> > them to
>> > my zfs pool.
>> > After installing the disks I run
>> > # zpool add -n tank c1d1
>> >
>> > and I get:
>> >
>> > would update 'tank' to the following configuration:
>> > tank
>> >   c0d0
>> >   c1d1
>> >
>> > Which is what I want however when I omit the -n I get the
>>following error
>> >
>> > # zpool add tank c1d1
>> > cannot add to 'tank': invalid argument for this pool operation
>> >
>> > I get the same message for both dirves with and without the -f
>>option.
>> > Any help is appreciated thanks.
>> >
>> > --
>> > -Daniel
>> > ___
>> > zfs-discuss mailing list
>> > zfs-discuss@opensolaris.org 
>>
>> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>>
>>
>>
>> --
>> -Daniel
>>
>


-- 
-Daniel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Cindy Swearingen

I might need to see the format-->partition output for both c0d0 and
c1d1.

But in the meantime, you could try this:

# zpool create tank2 c1d1
# zpool destroy tank2
# zpool add tank c1d1

Adding the c1d1 disk to the tank pool will create a non-redundant pool
of two disks. Is this what you had in mind?

Thanks,

Cindy

On 10/29/09 10:17, Daniel wrote:

Here is the output of zpool status and format.

# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  c0d0  ONLINE   0 0 0

errors: No known data errors

format> current
Current Disk = c1d1

/p...@0,0/pci-...@1f,2/i...@0/c...@1,0



On Thu, Oct 29, 2009 at 12:04 PM, Cindy Swearingen 
<cindy.swearin...@sun.com> wrote:



Hi Dan,

Could you provide a bit more information, such as:

1. zpool status output for tank

2. the format entries for c0d0 and c1d1

Thanks,

Cindy
- Original Message -
From: Daniel <dan.lis...@gmail.com>
Date: Thursday, October 29, 2009 9:59 am
Subject: [zfs-discuss] adding new disk to pool
To: zfs-discuss@opensolaris.org 


 > Hi,
 >
 > I just installed 2 new disks in my solaris box and would like to add
 > them to
 > my zfs pool.
 > After installing the disks I run
 > # zpool add -n tank c1d1
 >
 > and I get:
 >
 > would update 'tank' to the following configuration:
 > tank
 >   c0d0
 >   c1d1
 >
 > Which is what I want however when I omit the -n I get the
following error
 >
 > # zpool add tank c1d1
 > cannot add to 'tank': invalid argument for this pool operation
 >
 > I get the same message for both dirves with and without the -f
option.
 > Any help is appreciated thanks.
 >
 > --
 > -Daniel
 > ___
 > zfs-discuss mailing list
 > zfs-discuss@opensolaris.org 
 > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




--
-Daniel

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sub-optimal ZFS performance

2009-10-29 Thread Bob Friesenhahn

On Thu, 29 Oct 2009, Orvar Korvar wrote:


So the solution is to never get more than 90% full disk space, för fan?


Right.  While UFS created artificial limits to keep the filesystem 
from getting so full that it became sluggish and "sick", ZFS does not 
seem to include those protections.  Don't ever run a ZFS pool for a 
long duration of time at very close to full since it will become 
excessively fragmented.
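
The capacity column from 'zpool list' is an easy way to keep an eye on
this; keep CAP comfortably below that 90% mark:

# zpool list tank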


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Daniel
Here is the output of zpool status and format.

# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  c0d0  ONLINE   0 0 0

errors: No known data errors

format> current
Current Disk = c1d1

/p...@0,0/pci-...@1f,2/i...@0/c...@1,0



On Thu, Oct 29, 2009 at 12:04 PM, Cindy Swearingen  wrote:

>
> Hi Dan,
>
> Could you provide a bit more information, such as:
>
> 1. zpool status output for tank
>
> 2. the format entries for c0d0 and c1d1
>
> Thanks,
>
> Cindy
> - Original Message -
> From: Daniel 
> Date: Thursday, October 29, 2009 9:59 am
> Subject: [zfs-discuss] adding new disk to pool
> To: zfs-discuss@opensolaris.org
>
>
> > Hi,
> >
> > I just installed 2 new disks in my solaris box and would like to add
> > them to
> > my zfs pool.
> > After installing the disks I run
> > # zpool add -n tank c1d1
> >
> > and I get:
> >
> > would update 'tank' to the following configuration:
> > tank
> >   c0d0
> >   c1d1
> >
> > Which is what I want however when I omit the -n I get the following error
> >
> > # zpool add tank c1d1
> > cannot add to 'tank': invalid argument for this pool operation
> >
> > I get the same message for both dirves with and without the -f option.
> > Any help is appreciated thanks.
> >
> > --
> > -Daniel
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
-Daniel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Cindy Swearingen

Hi Dan,

Could you provide a bit more information, such as:

1. zpool status output for tank

2. the format entries for c0d0 and c1d1

Thanks,

Cindy
- Original Message -
From: Daniel 
Date: Thursday, October 29, 2009 9:59 am
Subject: [zfs-discuss] adding new disk to pool
To: zfs-discuss@opensolaris.org


> Hi,
> 
> I just installed 2 new disks in my solaris box and would like to add 
> them to
> my zfs pool.
> After installing the disks I run
> # zpool add -n tank c1d1
> 
> and I get:
> 
> would update 'tank' to the following configuration:
> tank
>   c0d0
>   c1d1
> 
> Which is what I want however when I omit the -n I get the following error
> 
> # zpool add tank c1d1
> cannot add to 'tank': invalid argument for this pool operation
> 
> I get the same message for both dirves with and without the -f option.
> Any help is appreciated thanks.
> 
> -- 
> -Daniel
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] adding new disk to pool

2009-10-29 Thread Daniel
Hi,

I just installed 2 new disks in my solaris box and would like to add them to
my zfs pool.
After installing the disks I run
# zpool add -n tank c1d1

and I get:

would update 'tank' to the following configuration:
tank
  c0d0
  c1d1

Which is what I want however when I omit the -n I get the following error

# zpool add tank c1d1
cannot add to 'tank': invalid argument for this pool operation

I get the same message for both drives with and without the -f option.
Any help is appreciated thanks.

-- 
-Daniel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sub-optimal ZFS performance

2009-10-29 Thread Orvar Korvar
So the solution is to never get more than 90% full disk space, för fan?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S10U8 msg/ZFS-8000-9P

2009-10-29 Thread Andrew Gabriel

Lasse Osterild wrote:

Hi,

Seems either Solaris or SunSolve is in need of an update.

pool: dataPool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P

When I go to the URL listed everything's fine; at the bottom there's
a URL linking to a SunSolve document.


"Details
The Message ID: ZFS-8000-9P indicates a device has exceeded the 
acceptable limit of errors allowed by the system. See document 203768 
for additional information."


And here's what SunSolve throws at me, not very helpful.

"You have selected content which is restricted to Sun Internal 
Employees Only:


Thank you for using SunSolve, the result you have selected is only 
available to Sun Internal Employees. It is available in the search 
engine to advise you that there is content on the topic you are 
searching. If you have having specific issues which look like they 
could be resolved by this content please log a Service Request. In the 
Service Request document the search result you found which could be 
part of the resolution." 


I just raised:

CR 6896270 http://www.sun.com/msg/ZFS-8000-9P refers to internal-only 
Sunsolve document

(The CR won't show up yet externally though).

The relevant part of the document just shows an example zpool replace 
command.
Another part of the document gives advice which I suspect is incorrect 
(and certainly generally unsuitable), and I'm guessing it's just 
someone's internal notes, and never checked for correctness for external 
exposure.


--
Cheers
Andrew
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] S10U8 msg/ZFS-8000-9P

2009-10-29 Thread Lasse Osterild

Hi,

Seems either Solaris or SunSolve is in need of an update.

  pool: dataPool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P

When I go to the URL listed everything's fine; at the bottom there's
a URL linking to a SunSolve document.


"Details
The Message ID: ZFS-8000-9P indicates a device has exceeded the  
acceptable limit of errors allowed by the system. See document 203768  
for additional information."


And here's what SunSolve throws at me, not very helpful.

"You have selected content which is restricted to Sun Internal  
Employees Only:


Thank you for using SunSolve, the result you have selected is only  
available to Sun Internal Employees. It is available in the search  
engine to advise you that there is content on the topic you are  
searching. If you have having specific issues which look like they  
could be resolved by this content please log a Service Request. In the  
Service Request document the search result you found which could be  
part of the resolution."


 - Lasse
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS create hanging on door call?

2009-10-29 Thread Miles Benson
Hi,

Did anyone ever get to the bottom of this?  After enabling smb, I'm now seeing 
this behaviour - zfs create just hangs.

Thanks
Miles
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss