[zfs-discuss] Is file cloning anywhere on ZFS roadmap

2010-04-20 Thread Schachar Levin
Hi,
We are currently using NetApp file clone option to clone multiple VMs on our FS.

The ZFS dedup feature is great storage-space-wise, but when we need to clone a lot
of VMs it just takes a lot of time.

Is there a way (or a planned way) to clone a file without going through the
process of actually copying the blocks, but just duplicating its metadata like
NetApp does?
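
(The closest thing ZFS offers today is cloning at the dataset level from a
snapshot; a rough sketch, assuming each VM image sits in its own dataset,
with hypothetical names:

# zfs snapshot tank/vm-gold@base
# zfs clone tank/vm-gold@base tank/vm-new01

That duplicates only metadata at creation time, but it works per dataset
rather than per file.)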

TIA,
-- Schachar
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-20 Thread Brandon High
On Fri, Apr 16, 2010 at 10:54 AM, Edward Ned Harvey
 wrote:
> there's a file or something you want to rollback, it's presently difficult
> to know how far back up the tree you need to go, to find the correct ".zfs"
> subdirectory, and then you need to figure out the name of the snapshots

There is one feature that OnTap has which I miss in zfs. Every
directory has a hidden .snapshot directory, so you never need to look
in the parents.

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Double slash in mountpoint

2010-04-20 Thread Ryan John
Thanks. That was it

-Original Message-
From: Brandon High [mailto:bh...@freaks.com] 
Sent: Wednesday, 21 April 2010 6:57 AM
To: Ryan John
Cc: zfs-discuss
Subject: Re: [zfs-discuss] Double slash in mountpoint

On Tue, Apr 20, 2010 at 7:38 PM, Ryan  John  wrote:
> Anyone know how to fix it?
> I can't even do a zfs destroy

zfs unmount -a -f

-B

-- 
Brandon High : bh...@freaks.com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Double slash in mountpoint

2010-04-20 Thread Brandon High
On Tue, Apr 20, 2010 at 7:38 PM, Ryan  John  wrote:
> Anyone know how to fix it?
> I can't even do a zfs destroy

zfs unmount -a -f
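
A sketch of the whole fix, assuming nothing still holds the mounts open:

zfs unmount -a -f
zfs set mountpoint=/sw-repo dataPool/SoftwareRepo
zfs mount -a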

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Double slash in mountpoint

2010-04-20 Thread Ryan John
Hi Timothy,
That didn't work either.

# zfs inherit mountpoint dataPool/SoftwareRepo
cannot unmount '/sw-repo1/dir2': Device busy

Regards
John

-Original Message-
From: Timothy Haley [mailto:tim.ha...@oracle.com] 
Sent: Wednesday, 21 April 2010 5:52 AM
To: Ryan John
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Double slash in mountpoint

Ryan John wrote:
> Hi,
>
> I've accidentally put a double slash in a mountpoint, and now can't change it.
>
> # zfs list
> ...
> dataPool/SoftwareRepo 529G  31.3T  73.1K  /sw-repo1/
> dataPool/SoftwareRepo/dir1   6.10G  31.3T  6.10G  /sw-repo1//dir1
> dataPool/SoftwareRepo/dir2   26.0G  31.3T  25.7G  /sw-repo1//dir2
> ...
> # zfs get mountpoint dataPool/usrLocalSoftwareRepo
> NAME   PROPERTYVALUE   SOURCE
> dataPool/SoftwareRepo  mountpoint  /sw-repo1/  local
> # zfs set mountpoint=/sw-repo/ dataPool/SoftwareRepo
> cannot unmount '/sw-repo1/dir2': Device busy
>
> Anyone know how to fix it?
> I can't even do a zfs destroy
>
> I'm running snv_133
>
>   
Try zfs inherit mountpoint dataPool/SoftwareRepo
to reset the mountpoint.

-tim

> Cheers
> John
>
>   
> 
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-20 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nicolas Williams
> 
> The .zfs/snapshot directory is most certainly available over NFS.

I'm not sure you've been following this thread.  Nobody said .zfs/snapshot
wasn't available over NFS.  It was said that all the snapshot subdirectories
".zfs/snapshot/frequent-blah" and ".zfs/snapshot/hourly-foo" and so on ... 

Over NFS there's no way to know the time those snapshots were taken.  There
is a convention of writing the time of the snapshot into the name of the
snapshot, but if you can't rely on that, then the NFS client doesn't know
the order of snapshots.
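
On the server side the creation times are available directly, e.g.
"zfs list -t snapshot -o name,creation -s creation", but that does nothing
for a plain NFS client.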

 
> And you can even create, rename and destroy snapshots by creating,
> renaming and removing directories in .zfs/snapshot:
> 
> % mkdir .zfs/snapshot/foo
> % mv .zfs/snapshot/foo .zfs/snapshot/bar
> % rmdir .zfs/snapshot/bar
> 
> (All this also works locally, not just over NFS.)

Holy crap, for real?  I'll have to try that.  ;-)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Double slash in mountpoint

2010-04-20 Thread Timothy Haley

Ryan John wrote:

Hi,

I've accidentally put a double slash in a mountpoint, and now can't change it.

# zfs list
...
dataPool/SoftwareRepo 529G  31.3T  73.1K  /sw-repo1/
dataPool/SoftwareRepo/dir1   6.10G  31.3T  6.10G  /sw-repo1//dir1
dataPool/SoftwareRepo/dir2   26.0G  31.3T  25.7G  /sw-repo1//dir2
...
# zfs get mountpoint dataPool/usrLocalSoftwareRepo
NAME   PROPERTYVALUE   SOURCE
dataPool/SoftwareRepo  mountpoint  /sw-repo1/  local
# zfs set mountpoint=/sw-repo/ dataPool/SoftwareRepo
cannot unmount '/sw-repo1/dir2': Device busy

Anyone know how to fix it?
I can't even do a zfs destroy

I'm running snv_133

  

Try zfs inherit mountpoint dataPool/SoftwareRepo
to reset the mountpoint.

-tim


Cheers
John

  



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD best practices

2010-04-20 Thread Edward Ned Harvey
> From: cas...@holland.sun.com [mailto:cas...@holland.sun.com] On Behalf
> Of casper@sun.com
> 
> >On Mon, 19 Apr 2010, Edward Ned Harvey wrote:
> >> Improbability assessment aside, suppose you use something like the
> DDRDrive
> >> X1 ... Which might be more like 4G instead of 32G ... Is it even
> physically
> >> possible to write 4G to any device in less than 10 seconds?
> Remember, to
> >> achieve worst case, highest demand on ZIL log device, these would
> all have
> >> to be <32kbyte writes (default configuration), because larger writes
> will go
> >> directly to primary storage, with only the intent landing on the
> ZIL.
> >
> >Note that ZFS always writes data in order so I believe that the
> >statement "larger writes will go directly to primary storage" really
> >should be "larger writes will go directly to the ZIL implemented in
> >primary storage (which always exists)".  Otherwise, ZFS would need to
> >write a new TXG whenever a new "large" block of data appeared (which
> >may be puny as far as the underlying store is concerned) in order to
> >assure proper ordering.  This would result in a very high TXG issue
> >rate.  Pool fragmentation would be increased.
> >
> >I am sure that someone will correct me if this is wrong.
> 
> There's a difference between "written" and "the data is referenced by
> the
> uberblock".  There is no need to start a new TXG when a large datablock
> is written.  (If the system resets, the data will be on disk but not
> referenced and is lost unless the TXG it belongs to is comitted)

*Also* it turns out, what I said was not strictly correct either.  I think
I'm too sleepy to get this correct right now, but ...

My (hopefully corrected) understanding is now:

By default, all sync writes go to the ZIL entirely, regardless of size.
That only changes if you set the logbias property to throughput.  Then,
if you have a large sync write, the bulk of the data is written to primary
storage, while just a tiny intent record is written to the SSD.

I think I misunderstood the default.  I previously thought throughput was
the default, not latency.
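
For reference, a sketch of flipping that per dataset (hypothetical dataset
name; the default value is latency):

# zfs set logbias=throughput tank/nfs
# zfs get logbias tank/nfs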

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] upgrade zfs stripe

2010-04-20 Thread Edward Ned Harvey
> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
> 
> On Mon, 19 Apr 2010, Edward Ned Harvey wrote:
> >
> > Just be aware that if *any* of your devices fail, all is lost.
> (Because
> > you've said it's configured as a nonredundant stripe.)
> 
> The good news is that it is easy to convert any single-disk vdev into
> a mirror vdev.  It is also easy to convert a mirror vdev into a
> single-disk vdev.  This means that you can upgrade your simple
> "stripe" into a stripe of mirrors.

A really good point.

Yes, you can take a 2-disk stripe volume, and make a 4-disk stripe of
mirrors volume of the same size.
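
With hypothetical device names, that conversion is one attach per existing
disk; each attach turns a single-disk vdev into a 2-way mirror:

# zpool attach tank c0t0d0 c0t2d0
# zpool attach tank c0t1d0 c0t3d0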

No, you cannot take a 2-disk volume, and make it a 3-disk raidz volume.
Unless you're willing to destroy and restore all your data.

Also, since the question was "can I expand my volume just by adding more
disks" this is worth mention too:  Whatever type of volume you have, be it a
stripe, a mirror, a stripe of mirrors, a raidz set, or whatever ... You can
always expand the volume by just adding disks to it.  But if you're adding
non-redundant disks, you're not fully redundant anymore.  If you add a
non-redundant disk to some pool that had redundancy, and your new disk dies,
then your pool is lost.

If you have a raidz volume, of "n" disks, you cannot simply add 1 disk, and
have a raidz volume of "n+1" disks.

I don't know if I'm just adding to confusion here.  Sorry if it's not more
clear.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Benchmarking Methodologies

2010-04-20 Thread Ben Rockwood
I'm doing a little research study on ZFS benchmarking and performance
profiling.  Like most, I've had my favorite methods, but I'm
re-evaluating my choices and trying to be a bit more scientific than I
have in the past.


To that end, I'm curious if folks wouldn't mind sharing their work on
the subject?  What tool(s) do you prefer in what situations?  Do you
have a standard method of running them (tool args; block sizes, thread
counts, ...) or procedures between runs (zpool import/export, new
dataset creation,...)?  etc.


Any feedback is appreciated.  I want to get a good sampling of opinions.

Thanks!



benr.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Double slash in mountpoint

2010-04-20 Thread Ryan John
Hi,

I've accidentally put a double slash in a mountpoint, and now can't change it.

# zfs list
...
dataPool/SoftwareRepo 529G  31.3T  73.1K  /sw-repo1/
dataPool/SoftwareRepo/dir1   6.10G  31.3T  6.10G  /sw-repo1//dir1
dataPool/SoftwareRepo/dir2   26.0G  31.3T  25.7G  /sw-repo1//dir2
...
# zfs get mountpoint dataPool/usrLocalSoftwareRepo
NAME   PROPERTYVALUE   SOURCE
dataPool/SoftwareRepo  mountpoint  /sw-repo1/  local
# zfs set mountpoint=/sw-repo/ dataPool/SoftwareRepo
cannot unmount '/sw-repo1/dir2': Device busy

Anyone know how to fix it?
I can't even do a zfs destroy

I'm running snv_133

Cheers
John



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] In iSCSI hell...

2010-04-20 Thread Ron Mexico
I have a storage server with snv_134 installed. This has four zfs file systems 
shared with iscsi that are mounted as zfs volumes on a Sun v480.

Everything has been working great for about a month, and all of a sudden the 
v480 has timeout errors when trying to connect to the iscsi volumes on the 
storage server.

The connection between the two is a gigabit crossover cable, so other network 
traffic isn't an issue. HBAs, NICs, and cables in the storage server have been 
troubleshot and are working normally. The only common element here is the Intel 
nic in the v480. It seems to work OK otherwise, but it's the only component in 
this equation that hasn't changed.

What's the consensus on NIC configurations for iscsi? Are errors like this 
common if the MTU is set at the default of 1500?
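
For what it's worth, the link MTU is easy to check and raise with dladm
(hypothetical interface name; both ends and any switch in between have to
agree on jumbo frames):

# dladm show-linkprop -p mtu e1000g0
# dladm set-linkprop -p mtu=9000 e1000g0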

Any and all comments/opinions are welcome.

Thanks.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making an rpool smaller?

2010-04-20 Thread Daniel Carosone
On Tue, Apr 20, 2010 at 12:55:10PM -0600, Cindy Swearingen wrote:
> You can use the OpenSolaris beadm command to migrate a ZFS BE over
> to another root pool, but you will also need to perform some manual
> migration steps, such as
> - copy over your other rpool datasets
> - recreate swap and dump devices
> - install bootblocks
> - update BIOS and GRUB entries to boot from new root pool

I've also found it handy to use different names for each rpool.
Sometimes it's handy to boot an image that's entirely on a removable
disk, for example, and move that between hosts. The last thing you
want is a name clash or confusion about which pool is which.

In addition to the "import name" of the pool, there's another name
that needs to be changed. This is the "boot name" of the pool; it's
the name grub looks for in the "findroot(pool_rpool,...)" line.
That name is found in the root fs of the pool, in ./etc/bootsign (so
typically mounted at /poolname/etc/bootsign).
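
For example, with a pool renamed to rpool2 (hypothetical), the matching
menu.lst entry would look roughly like:

title OpenSolaris
findroot (pool_rpool2,0,a)
bootfs rpool2/ROOT/opensolaris
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive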

--
Dan.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: rm files/directories from snapshots

2010-04-20 Thread Michael Bosch
I believe Ned's question and the answers given have more far-reaching
consequences than have been discussed so far.

When I read this thread I thought there was an easy solution to
deleting files from a snapshot by using clones instead.  Clones are a
writable copy so you should be able to delete files selectively.

However it turns out that this is not possible:

- Clones can only be made from snapshots

- Snapshots can only be deleted if there is only one dataset that
  directly derives from them.

The snapshot can not be destroyed as long as both the original file
system and the clone live. And the "big file" that Ned wants to delete
will still survive in the snapshot even if deleted from the clone.

Example:
zfs create t1
zfs snapshot t1@s1
zfs snapshot t1@s2
zfs snapshot t1@s3
zfs clone t1@s2 t2
zfs snapshot t2@s4

creates datasets with the following origin graph

owned by t1: s1 <- s2 <- s3 <- t1
owned by t2:  (s2) <- s4 <- t2

Any snapshot can be destroyed except s2.  And t2 can be destroyed but
not t1 since destroying a filesystem implicitly destroys all owned
snapshots. zfs promote can be used to swap the roles of t1 and t2.

Thus it seems that clones are not at all independent copies, because
all copies of a filesystem are always linked via an older snapshot.
Given the central role of read-only snapshots to the design, my fear
would be that sharing files across independent file systems is
impossible.

Then again there is pool wide deduplication which seems to have no
problem sharing blocks across independent filesystems.  Creating a new
filesystem and copying all files over should result in two filesystems
sharing the same blocks.  This very same result should also be
achievable without having to copy the data.  And with independent
clones, Ned's use case would be possible.
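
A rough sketch of that dedup-based copy (hypothetical names; dedup only
applies to data written after it is enabled):

# zfs set dedup=on tank
# zfs create tank/home2
# cp -pr /tank/home1/* /tank/home2/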

ZFS has created a world where setting up several filesystems even
within a single users home directory is encouraged and has been made
very easy.  I would find it highly desirable if files could be moved
across filesystem boundaries without the need to revert to a copy &
remove.  This is both due to the difference in performance as well as
due to the waste of space if the moved file is already part of a
snapshot of the old filesystem.

Can somebody with more knowledge about the ZFS internals say something
on the possibility of independent clones and sharing / moving between
filesystems?

Michael Bosch
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Bayard Bell

Ken,

The sharpest parts of my remarks weren't directed your way, and I  
regret if that wasn't as clear as I had thought. For clarification: I  
was referring to the thread as starting with what you forwarded by URL  
(which was sent from a gmail address to the freebsd list), and my  
objection was in the first instance to people who came back and  
questioned the reply because it came from a gmail address.


As fortuitous as it is that someone from Oracle stepped in to say that  
there are no plans to strip ZFS out of OpenSolaris, my fundamental  
objection nonetheless stands: the premise that it would be pulled had  
a basic lack of credibility in both its premise and its particulars. I  
appreciate that you mean to help by quashing these kind of stories,  
but do we really mean to re-task the OpenSolaris list with challenging  
every suggestive rumour that shows up on the Internet about the impact  
of the acquisition on product roadmaps? I mean, is asking to have it  
rejected by authoritative sources really a compelling reason to  
circulate it further in the first instance? That seems to involve some  
risks (e.g. perhaps you don't get a response, as there seem to be a  
number of people trying to get the hang of Oracle corporate  
communications policies, which seem deeply vexing to parts of the  
community that are concerned about the future of their sweat equity in  
OpenSolaris), where there are reasonable criteria for saying when this  
is unnecessary. Surely there has to be some threshold of plausibility  
before these things are passed on in a public forum, and, while I  
don't mean to imply that forwarding the post to this list is something  
singularly egregious, the two further posts quoted in my reply  
reinforced to me how low the bar was set and how much this  
participates in conspiracy theory (a rumour is posted to one list,  
forwarded to another, attracts a rebuttal, and rather than asking why  
they should credit the rumour in the first place, there remain a few  
people whose spirit of critical inquiry is singularly focused on the  
provenance of the rebuttal). Given that there are clearly some people  
for whom this wasn't nipped in the bud in the manner you suggest,  
where these people wouldn't necessarily have been aware of this were  
it not for your post, despite your best intentions, there remain signs  
of lingering negative effects that you've not addressed below. I don't  
mean to be vehement towards you in saying any of this, but I don't on  
the other hand mean to understate real, foreseeable, and negative  
consequences.


For such reasons, shouldn't the standard for forwarding with a request  
for clarification require that a rumour consist of what a reasonable  
person would believe based on clear attribution and credible sources?


Cheers,
Bayard

Am 20 Apr 2010 um 20:09 schrieb Ken Gunderson:



On Tue, 2010-04-20 at 18:51 +0100, Bayard Bell wrote:

This thread starts with someone who doesn't claim to have any
authoritative information or attempt to cite any sources using a  
gmail

account to post to a mailgroup. Now people turn around and say that


Whoa!  By way of clarification:

1) If I had authoritative information why would I bother posing the
question?

2) The source where I ran across the info that prompted my query was
cited in my initial post and present in the email sent out by the
Mailman listserver.  Noting evidence of confusion from some reading  
via

Jive forum interface I followed up with an explanation.

3) Rather than fan rumor and speculation by posting to
freebsd-questions, a list to which I am not subscribed, I addressed my
query to what I deemed the most appropriate source for an  
authoritative

answer.  Moreover, in so doing I explicitly qualified the post as
suspect.

4) I do not have, nor have ever had, a gmail address.  To the contrary
my email address is readily apparent in my signature.


they doubt the sourcing on this, but looking at the archives of this
list, there are a number of posts over the years from a Dominic Kay
using this gmail address but providing links to a Sun employee blog
(http://blogs.sun.com/dom/). If you Google "Dominic Kay Oracle", you


I didn't need to, as I already knew the name.  Hence I publicly
acknowledged his reply as more than satisfactory, expressed my
gratitude, and moved on.  I don't really see grounds for directing  
these
vehement comments my way.  The misinformation has now been  
identified as

such and nipped in the bud, wh/I would think would be a good thing.

Thank you and have a nice day.


--
Ken Gunderson 



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Snapshots and Data Loss

2010-04-20 Thread matthew patton
Geoff Nordli  wrote:

> With our particular use case we are going to do a "save
> state" on their
> virtual machines, which is going to write  100-400 MB
> per VM via CIFS or
> NFS, then we take a snapshot of the volume, which
> guarantees we get a
> consistent copy of their VM.

maybe you left out a detail or two but I can't see how your ZFS snapshot is 
going to be consistent UNLESS every VM on that ZFS volume is prevented from 
doing any and all I/O from the time it finishes "save state" and you take your 
ZFS snapshot.

If by "save state" you mean something akin to VMWare's disk snapshot, why would 
you even bother with a ZFS snapshot in addition?

> end we could have
> maybe 20-30 VMs getting saved at the same time, which could
> mean several GB
> of data would need to get written in a short time frame and
> would need to
> get committed to disk.  
> 
> So it seems the best case would be to get those "save
> state" writes as sync
> and get them into a ZIL.

That I/O pattern is vastly >32kb and so will hit the 'rust' ZIL (which ALWAYS 
exists) and if you were thinking an SSD would help you, I don't see any/much 
evidence it will buy you anything.


  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Ken Gunderson

On Tue, 2010-04-20 at 18:51 +0100, Bayard Bell wrote:
> This thread starts with someone who doesn't claim to have any
> authoritative information or attempt to cite any sources using a gmail
> account to post to a mailgroup. Now people turn around and say that

Whoa!  By way of clarification: 

1) If I had authoritative information why would I bother posing the
question?

2) The source where I ran across the info that prompted my query was
cited in my initial post and present in the email sent out by the
Mailman listserver.  Noting evidence of confusion from some reading via
the Jive forum interface, I followed up with an explanation.

3) Rather than fan rumor and speculation by posting to
freebsd-questions, a list to which I am not subscribed, I addressed my
query to what I deemed the most appropriate source for an authoritative
answer.  Moreover, in so doing I explicitly qualified the post as
suspect.

4) I do not have, nor have ever had, a gmail address.  To the contrary
my email address is readily apparent in my signature.

>  they doubt the sourcing on this, but looking at the archives of this
> list, there are a number of posts over the years from a Dominic Kay
> using this gmail address but providing links to a Sun employee blog
> (http://blogs.sun.com/dom/). If you Google "Dominic Kay Oracle", you

I didn't need to, as I already knew the name.  Hence I publicly
acknowledged his reply as more than satisfactory, expressed my
gratitude, and moved on.  I don't really see grounds for directing these
vehement comments my way.  The misinformation has now been identified as
such and nipped in the bud, wh/I would think would be a good thing.

Thank you and have a nice day.


-- 
Ken Gunderson 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making an rpool smaller?

2010-04-20 Thread Cindy Swearingen

Brandon,

You can use the OpenSolaris beadm command to migrate a ZFS BE over
to another root pool, but you will also need to perform some manual
migration steps, such as
- copy over your other rpool datasets
- recreate swap and dump devices
- install bootblocks
- update BIOS and GRUB entries to boot from new root pool

The BE recreation gets you part of the way, and it's fast, anyway.

Thanks,

Cindy

1. Create the second root pool.

# zpool create rpool2 c5t1d0s0

2. Create the new BE in the second root pool.

# beadm create -p rpool2 osol2BE

3. Activate the new BE.

# beadm activate osol2BE

4. Install the boot blocks.

5. Test that the system boots from the second root pool.

6. Update BIOS and GRUB to boot from new pool.
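
For step 4 on x86, a minimal sketch (the slice matches the pool created in
step 1):

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0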

On 04/20/10 08:36, Cindy Swearingen wrote:

Yes, I apologize. I didn't notice you were running the OpenSolaris
release. What I outlined below would work on a Solaris 10 system.

I wonder if beadm supports a similar migration. I will find out
and let you know.

Thanks,

Cindy

On 04/19/10 17:22, Brandon High wrote:

On Mon, Apr 19, 2010 at 7:42 AM, Cindy Swearingen
 wrote:

I don't think LU cares that the disks in the new pool are smaller,
obviously they need to be large enough to contain the BE.


It doesn't look like OpenSolaris includes LU, at least on x86-64.
Anyhow, wouldn't the method you mention fail because zfs would use the
wrong partition table for booting?

basestar:~$ lucreate
-bash: lucreate: command not found
bh...@basestar:~$ man lucreate
No manual entry for lucreate.
bh...@basestar:~$ pkgsearch lucreate
-bash: pkgsearch: command not found
bh...@basestar:~$ pkg search lucreate
bh...@basestar:~$ pkg search SUNWluu
bh...@basestar:~$

I think I remember someone posting a method to copy the boot drive's
layout with prtvtoc and fmthard, but I don't remember the exact
syntax.

-B


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-20 Thread Carson Gaspar

Nicolas Williams wrote:

On Tue, Apr 20, 2010 at 04:28:02PM +, A Darren Dunham wrote:

On Sat, Apr 17, 2010 at 09:03:33AM -0400, Edward Ned Harvey wrote:

"zfs list -t snapshot" lists in time order.

Good to know.  I'll keep that in mind for my "zfs send" scripts but it's not
relevant for the case at hand.  Because "zfs list" isn't available on the
NFS client, where the users are trying to do this sort of stuff.

I'll note for comparison that the NetApp snapshots do expose this in one
way.

The actual snapshot directory access time is set to the time of the
snapshot. That makes it visible over NFS.  Would be handy to do
something similar in ZFS.


The .zfs/snapshot directory is most certainly available over NFS.


[ irrelevant stuff removed ]

Yes, it is. With a useless timestamp. Please re-read the thread.

--
Carson

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Bayard Bell
This thread starts with someone who doesn't claim to have any  
authoritative information or attempt to cite any sources using a gmail  
account to post to a mailgroup. Now people turn around and say that  
they doubt the sourcing on this, but looking at the archives of this  
list, there are a number of posts over the years from a Dominic Kay  
using this gmail address but providing links to a Sun employee blog (http://blogs.sun.com/dom/ 
). If you Google "Dominic Kay Oracle", you can find this (http://66.102.9.132/search?q=cache:NuhbvEoafV4J:www.snwusa.com/ereg/popups/speakerdetails.php%3Feventid%3D8242%26speakerid%3D5987+dominic+kay+oracle&cd=3&hl=en&ct=clnk&gl=uk 
) cached hit as some corroboration of his role. If this is a false  
flag, it was planted three years ago and has somehow managed to pass  
without challenge in the interim. People are more than happy to say  
that they're sceptical, but there's a lot of data points that indicate  
that what is willing to self-identify as risking conspiracy theory  
isn't using basic research in the public domain to see what  
potentially dispositive information can be sourced there.


ZFS is increasingly integrated into the core of Solaris. Is the same  
company that's giving away btrfs going to require engineering a less  
powerful filesystem than zfs just so it can be a differentiator  
between OpenSolaris and Solaris? Are filesystems really on a plane of  
engineering where you want to develop them on an entirely separate  
track from OpenSolaris, where no statement contradicts the premise  
that it remains the development branch of Solaris? Even if you're sold  
on the premise that Oracle are trying to maximise revenue off of  
Solaris at the expense of products bundled with OpenSolaris, I just  
can't get to the point where a rumour like this one seems like a  
credible formulation of how they might go about that. If anything, the  
result here is that a core component of Solaris would suffer from the  
imposition of a more convoluted development model.


These folks running the relevant business lines have already said  
publicly to the OGB that Oracle's corporate management accepts the  
basic premise of OpenSolaris, so why pass the time waiting to learn  
how they're going to make good on this by concocting baroque  
conspiracy theories about how they're going to reverse themselves in  
some material fashion or passing along rumours to that effect?


Am 20 Apr 2010 um 17:51 schrieb Eric D. Mudama:


On Tue, Apr 20 at 11:41, Don Turnbull wrote:
Not to be a conspiracy nut but anyone anywhere could have  
registered that gmail account and supplied that answer.  It would  
be a lot more believable from Mr Kay's Oracle or Sun account.


+1

Glad I wasn't the only one who noticed.

--
Eric D. Mudama
edmud...@mail.bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Bob Friesenhahn

On Tue, 20 Apr 2010, Don Turnbull wrote:

Not to be a conspiracy nut but anyone anywhere could have registered that 
gmail account and supplied that answer.  It would be a lot more believable 
from Mr Kay's Oracle or Sun account.


It is true that gmail accounts are just as free and untrustworthy as 
your own temporary yahoo.com email account.


Next someone with a yahoo.com email account will be posting that Ford 
will no longer support round tires on their trucks.  Statements to 
the contrary will not be accepted unless they come from a @ford.com 
address.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Eric D. Mudama

On Tue, Apr 20 at 11:41, Don Turnbull wrote:
Not to be a conspiracy nut but anyone anywhere could have registered 
that gmail account and supplied that answer.  It would be a lot more 
believable from Mr Kay's Oracle or Sun account.


+1

Glad I wasn't the only one who noticed.

--
Eric D. Mudama
edmud...@mail.bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-20 Thread Nicolas Williams
On Tue, Apr 20, 2010 at 04:28:02PM +, A Darren Dunham wrote:
> On Sat, Apr 17, 2010 at 09:03:33AM -0400, Edward Ned Harvey wrote:
> > > "zfs list -t snapshot" lists in time order.
> > 
> > Good to know.  I'll keep that in mind for my "zfs send" scripts but it's not
> > relevant for the case at hand.  Because "zfs list" isn't available on the
> > NFS client, where the users are trying to do this sort of stuff.
> 
> I'll note for comparison that the NetApp snapshots do expose this in one
> way.
> 
> The actual snapshot directory access time is set to the time of the
> snapshot. That makes it visible over NFS.  Would be handy to do
> something similar in ZFS.

The .zfs/snapshot directory is most certainly available over NFS.

But note that .zfs does not appear in directory listings of dataset
roots -- you have to actually refer to it:

% ls -f|fgrep .zfs
% ls -f .zfs
.   ..   snapshot
% ls .zfs/snapshot

% nfsstat -m $PWD
/net/.../pool/nico from ...:/pool/nico
 Flags: 
vers=4,proto=tcp,sec=sys,hard,intr,link,symlink,acl,mirrormount,rsize=1048576,wsize=1048576,retrans=5,timeo=600
 Attr cache:acregmin=3,acregmax=60,acdirmin=30,acdirmax=60

%

And you can even create, rename and destroy snapshots by creating,
renaming and removing directories in .zfs/snapshot:

% mkdir .zfs/snapshot/foo
% mv .zfs/snapshot/foo .zfs/snapshot/bar
% rmdir .zfs/snapshot/bar

(All this also works locally, not just over NFS.)

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Don Turnbull
Not to be a conspiracy nut but anyone anywhere could have registered 
that gmail account and supplied that answer.  It would be a lot more 
believable from Mr Kay's Oracle or Sun account.


On 4/20/2010 9:40 AM, Ken Gunderson wrote:

On Tue, 2010-04-20 at 13:57 +0100, Dominic Kay wrote:
   

Oracle has no plan to move from ZFS as the principal storage platform
for Solaris 10 and OpenSolaris. It remains key to both data management
and to the OS infrastructure such as root/boot, install and upgrade.
Thanks

Dominic Kay
Product Manager, Filesystems
Oracle
 

I'll take that as a "definitive answer";)

Much appreciated. Thank you.

   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Snapshots and Data Loss

2010-04-20 Thread Geoff Nordli
>From: Richard Elling [mailto:richard.ell...@gmail.com]
>Sent: Monday, April 19, 2010 10:17 PM
>
>Hi Geoff,
>The Canucks have already won their last game of the season :-)
>more below...


Hi Richard, 
I didn't watch the game last night, but obviously Vancouver better pick up
their socks or they will be joining San Jose on the sidelines.  With Ottawa,
Montreal on the way out too, it could be a tough spring for Canadian hockey
fans.  

>
>On Apr 18, 2010, at 11:21 PM, Geoff Nordli wrote:
>
>> Hi Richard.
>>
>> Can you explain in a little bit more detail how this process works?
>>
>> Let's say you are writing from a remote virtual machine via an iscsi
>target
>> set for async writes and I take a snapshot of that volume.
>>
>> Are you saying any outstanding writes for that volume will need to be
>> written to disk before the snapshot happens?
>
>Yes.

That is interesting, so if your system is under write load and you are doing
snapshots it could lead to problems.  I was thinking writes wouldn't be an
issue because they would be lazily written. 

With our particular use case we are going to do a "save state" on their
virtual machines, which is going to write  100-400 MB per VM via CIFS or
NFS, then we take a snapshot of the volume, which guarantees we get a
consistent copy of their VM.  When a class came to and end we could have
maybe 20-30 VMs getting saved at the same time, which could mean several GB
of data would need to get written in a short time frame and would need to
get committed to disk.  

So it seems the best case would be to get those "save state" writes as sync
and get them into a ZIL.  Would you agree with that? 

>
>I'm glad you enjoyed it.  I'm looking forward to Vegas next week and
>there
>are some seats still open.
> -- richard

I would love to go to Vegas, but I need to work on getting our new product
out the door.

Enjoy yourself in Vegas next week!

Geoff  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Recovering data

2010-04-20 Thread eXeC001er
Hi All.

I have a pool (3 disks, raidz1). I recabled the disks and now some of the
disks in the pool are not available (cannot open). Going back to the old
cabling is not possible. Can I recover the data from this pool?
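
Since ZFS identifies devices by GUID rather than by path, a plain
export/import is usually worth a try, e.g. (placeholder pool name):

# zpool export tank
# zpool import -d /dev/dsk tank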

Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-20 Thread A Darren Dunham
On Sat, Apr 17, 2010 at 09:03:33AM -0400, Edward Ned Harvey wrote:
> > "zfs list -t snapshot" lists in time order.
> 
> Good to know.  I'll keep that in mind for my "zfs send" scripts but it's not
> relevant for the case at hand.  Because "zfs list" isn't available on the
> NFS client, where the users are trying to do this sort of stuff.

I'll note for comparison that the NetApp snapshots do expose this in one
way.

The actual snapshot directory access time is set to the time of the
snapshot. That makes it visible over NFS.  Would be handy to do
something similar in ZFS.

# ls -lut
total 72
drwxr-xr-x   8 root root4096 Apr 20 09:24 manual_snap
drwxr-xr-x   8 root root4096 Apr 20 08:00 hourly.0
drwxr-xr-x   8 root root4096 Apr 20 00:00 nightly.0
drwxr-xr-x   8 root root4096 Apr 19 20:00 hourly.1
[...]

-- 
Darren
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD sale on newegg

2010-04-20 Thread Andreas Grüninger
I did the same experiment in a VMware guest (SLES10 x64). The archive was
stored on the vdisk and untarring went to the same vdisk.
The storage backend is a Sun system with 64 GB RAM, 2 quad-core CPUs, 24 SAS
disks with 450 GB, 4 vdevs with 6 disks as RAIDZ2, and an Intel X25-E as log
device (c2t1d0).
A StorageTek SAS RAID host bus adapter with 256 MB RAM and BBU for the zpool
and a second HBA for the slog device.
c3 is for the zpool and c2 for slog (c2t1d0)/boot (c2t0d0) devices.
There are actually 140 VMs running and used over NFS from VSphere 4 with two 1 
Gb/s links.

zd-nms-s5:/build # iostat -indexC 5
before untarring
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
0.0  396.00.0 9428.3  0.0  0.10.00.2   0   5   0   0   0   0 c2
0.0   14.00.0   61.9  0.0  0.00.02.8   0   1   0   0   0   0 
c2t0d0
0.0  382.00.0 9366.4  0.0  0.00.00.1   0   3   0   0   0   0 
c2t1d0
  265.40.0 3631.20.0  0.0  1.20.04.3   0 105   0   0   0   0 c3
9.80.0  148.20.0  0.0  0.00.03.4   0   3   0   0   0   0 
c3t0d0
8.80.0  137.70.0  0.0  0.00.03.6   0   3   0   0   0   0 
c3t1d0


zd-nms-s5:/build # iostat -indexC 5
during untarring
                  extended device statistics                  ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
0.0 1128.30.0 31713.6  0.0  0.20.00.1   0  12   0   0   0   0 c2
0.00.00.00.0  0.0  0.00.00.0   0   0   0   0   0   0 
c2t0d0
0.0 1128.30.0 31713.6  0.0  0.20.00.1   1  12   0   0   0   0 
c2t1d0
 2005.7 5708.9 7423.7 42041.5  0.1 61.70.08.0   0 1119   0   0   0   0 
c3
   82.8  602.2  364.9 2408.4  0.0  4.40.06.4   1  68   0   0   0   0 
c3t0d0
   72.4  601.6  288.5 2452.7  0.0  4.20.06.2   1  61   0   0   0   0 
c3t1d0


zd-nms-s5:/build # time tar jxf /tmp/gcc-4.4.3.tar.bz2

real0m58.086s
user0m12.241s
sys 0m6.552s

Andreas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Ken Gunderson

On Tue, 2010-04-20 at 05:48 -0700, Tonmaus wrote:
> Don't copy the netiquette issue you are seeing, as I am talking about nothing 
> but an issue in a post on this forum. Why should I contact the OP off record 
> about this?
> There is no need to read intentions either. I just made clear once more what 
> is obvious from board metadata anyhow.
> Besides that, if we are having a dispute about netiquette, that highlights 
> the potential substance of the topic more than anything else.
> 
> Regards,
> 
> Tonmaus

fwiw - the post was to a mailing list handled by the excellent, and these
days de facto standard, open-source listserver Mailman. Apparently
backend processing then propagates to the Jive forums after stripping URLs.
Hence the missing link when viewed via the forums, opening the potential
for confusion for the unaware.  If you want to follow the link included
in my OP, simply access via Mailman archives, where it is indeed
reproduced intact.

Thank you and have a nice day.

P.S.: Speaking of netiquette, it is also quite nice that Mailman is
smart enough to wrap email body text at 72-80 chars.  I've never
admined a Jive forum and am unsure whether it's tunable, but it would be
nice if the powers that be could configure Jive to wrap emails at
correct line lengths.

-- 
Ken Gunderson 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] can't destroy snapshot (continue)

2010-04-20 Thread Frank Contrepois
esce   0
  1<- dmu_tx_assign   0
  1-> txg_list_add
  1<- txg_list_add0
  1-> dmu_tx_commit
  1  -> txg_rele_to_sync
  1-> cv_broadcast
  1<- cv_broadcast0
  1  <- txg_rele_to_sync  0
  1  -> kmem_free
  1-> kmem_cache_free
  1<- kmem_cache_free 0
  1  <- kmem_free 0
  1<- dmu_tx_commit   0
  1-> txg_wait_synced
  1  -> cv_broadcast
  1  <- cv_broadcast  0

Hope this help

Frank Contrepois
Coblan srl



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD best practices

2010-04-20 Thread Don
I looked through that distributor page already and none of the ones I visited
listed the IOPS SSDs; they all listed DRAM and other memory from STEC, but not
the SSDs.

I'm not looking to get the same number of IOPS out of 15k RPM drives. I'm 
looking for an appropriate number of IOPS for my environment- that is to say- 
twice what I'm currently getting. That would be 6k-10k IOPS. If I can do that 
with four Intel drives for 1/10th of what a pair of ZEUS SSD's are going to 
cost me- then that would seem to make a lot more sense. It would also be nice 
to be able to have a couple of spares on hand- just in case a mirror fails. 
That's a lot harder when the drives are as expensive as the ZEUS.

Who else, besides STEC, is making write optimized drives and what kind of IOP 
performance can be expected?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making an rpool smaller?

2010-04-20 Thread Cindy Swearingen

Yes, I apologize. I didn't notice you were running the OpenSolaris
release. What I outlined below would work on a Solaris 10 system.

I wonder if beadm supports a similar migration. I will find out
and let you know.

Thanks,

Cindy

On 04/19/10 17:22, Brandon High wrote:

On Mon, Apr 19, 2010 at 7:42 AM, Cindy Swearingen
 wrote:

I don't think LU cares that the disks in the new pool are smaller,
obviously they need to be large enough to contain the BE.


It doesn't look like OpenSolaris includes LU, at least on x86-64.
Anyhow, wouldn't the method you mention fail because zfs would use the
wrong partition table for booting?

basestar:~$ lucreate
-bash: lucreate: command not found
bh...@basestar:~$ man lucreate
No manual entry for lucreate.
bh...@basestar:~$ pkgsearch lucreate
-bash: pkgsearch: command not found
bh...@basestar:~$ pkg search lucreate
bh...@basestar:~$ pkg search SUNWluu
bh...@basestar:~$

I think I remember someone posting a method to copy the boot drive's
layout with prtvtoc and fmthard, but I don't remember the exact
syntax.

-B


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Co-creator of ZFS, Bill Moore joins Nexenta advisory board

2010-04-20 Thread Erast

Good news for Nexenta and the OpenSolaris community in general:

http://www.nexenta.com/corp/blog/2010/04/06/bill-moore-joins-nexenta-advisory-board/

Nexenta is looking for talent and hiring OpenSolaris kernel/API engineers. If
you are in the SF Bay Area and think you are qualified, send your resume
by following the instructions below:


http://www.nexenta.com/corp/nexenta-careers
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD best practices

2010-04-20 Thread David Magda
On Mon, April 19, 2010 23:05, Don wrote:
>> A STEC Zeus IOPS SSD (45K IOPS) will behave quite differently than an
>> Intel X-25E (~3.3K IOPS).
>
> Where can you even get the Zeus drives? I thought they were only in the
> OEM market and last time I checked they were ludicrously expensive. I'm
> looking for between 5k and 10k IOPS using up to 4 drive bays (so a 2 x 2
> striped mirror would be fine). Right now we peak at about 3k IOPS (though
> that's not to a ZFS system) but I would like to be able to be able to
> burst to double that. We do have a lot of small size burst writes hence
> our ZIL concerns.

They do have distributors:

http://www.stec-inc.com/support/global_contact.php

http://tinyurl.com/y2lrse2
http://www.stec-inc.com/support/oem_regional_sales_contacts.php?region=USA&subregion=New%20York

And though they do cost a pretty penny, getting the same number of IOps
out of a stack of 15 krpm disks would probably cost a lot more in
hardware, power, and cooling.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Ken Gunderson

On Tue, 2010-04-20 at 13:57 +0100, Dominic Kay wrote:
> Oracle has no plan to move from ZFS as the principal storage platform
> for Solaris 10 and OpenSolaris. It remains key to both data management
> and to the OS infrastructure such as root/boot, install and upgrade.  
> Thanks
> 
> Dominic Kay
> Product Manager, Filesystems
> Oracle

I'll take that as a "definitive answer";) 

Much appreciated. Thank you.

-- 
Ken Gunderson 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Sean Sprague

Khyron,

Finally, Michael S. made the best recommendation...talk to your sales 
rep if you're

a paying customer.


... but don't expect any commitments or generic answer from them at the 
moment.


I do however congratulate you on quoting Mr. Harman in your .sig ;-)

Regards... Sean.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Dominic Kay
Oracle has no plan to move from ZFS as the principal storage platform for
Solaris 10 and OpenSolaris. It remains key to both data management and to
the OS infrastructure such as root/boot, install and upgrade.
Thanks

Dominic Kay
Product Manager, Filesystems
Oracle

2010/4/20 Khyron 

> This is how rumors get started.
>
> From reading that thread, the OP didn't seem to know much of anything
> about...
> anything.  Even less so about Solaris and OpenSolaris.  I'd advise not to
> get your
> news from mailing lists, especially not mailing lists for people who don't
> use the
> product you're interested in.
>
> Nothing like this has been said anywhere by anyone that even resembles or
> approximates an Oracle representative.  So, yeah, ignore it, as the guy was
>
> just asking dumb questions in a very poor manner about things he has
> absolutely
> no knowledge of, and adding assumptions on top of that, in his best but not
> very
> good English.  At least, that's my impression and opinion.
>
> Finally, Michael S. made the best recommendation...talk to your sales rep
> if you're
> a paying customer.
>
> Cheers!
>
>
> On Tue, Apr 20, 2010 at 01:18, Ken Gunderson wrote:
>
>> Greetings All:
>>
>> Granted there has been much fear, uncertainty, and doubt following
>> Oracle's take over of Sun, but I ran across this on a FreeBSD mailing
>> list post dated 4/20/2010"
>>
>> "...Seems that Oracle won't offer support for ZFS on opensolaris"
>>
>> Link here to full post here:
>>
>> <
>> http://lists.freebsd.org/pipermail/freebsd-questions/2010-April/215269.html
>> >
>>
>> It seems like such would be pretty outrageous and the OP either confused
>> or spreading FUD, but then on the other hand there's lot of rumors
>> flying around about hidden agendas behind the 2010.03 delay, and Oracle
>> being Oracle such could be within the realm of possibilities.
>>
>> Given Oracle's information policies we're not likely to know if such is
>> indeed the case until it's a fait accompli but I nonetheless thought
>> this would be the best place to inquire (or perhaps Indiana list, as I
>> assume OP is referencing upcoming opensolaris.com release).
>>
>> Thank you and have a nice day.
>>
>> --
>> Ken Gunderson 
>>
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>
>
>
> --
> "You can choose your friends, you can choose the deals." - Equity Private
>
> "If Linux is faster, it's a Solaris bug." - Phil Harman
>
> Blog - http://whatderass.blogspot.com/
> Twitter - @khyron4eva
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>


-- 
Dominic Kay
+44 780 124 6099
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Tonmaus
Don't copy the netiquette issue you are seeing, as I am talking about nothing 
but an issue in a post on this forum. Why should I contact the OP off record 
about this?
There is no need to read intentions either. I just made clear once more what is 
obvious from board metadata anyhow.
Besides that, if we are having a dispute about netiquette, that highlights the 
potential substance of the topic more than anything else.

Regards,

Tonmaus
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Large size variations - what is canonical method

2010-04-20 Thread Harry Putnam
Cindy Swearingen  writes:

> Hi Harry,
>
> Both du and df are pre-ZFS commands and don't really understand ZFS
> space issues, which are described in the ZFS FAQ here:
>
> http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
>
> Why does du(1) report different file sizes for ZFS and UFS? Why doesn't
> the space consumption that is reported by the df command and the zfs
> list command match?
>
> Will's advice is good:
>
> Use zpool list and zfs list to determine how much space is available
> for your ZFS file systems and use du or ls -l to review file
> sizes. Don't
> use du or df to look at ZFS file systems sizes.
>
I think I'm beginning to see how this goes.  And at risk of sounding
like a total idiot I notice the information below:

  zfs list z2/rhosts/imgs/harvey

  NAMEUSED  AVAIL  REFER  MOUNTPOINT
  z2/rhosts/imgs/harvey   150G  90.7G  17.6G  /rhosts/imgs/harvey

It wasn't clear (to me) really what REFER means... according to
`man zfs':
 
[...]  
referenced

 The amount of data that is accessible by  this  dataset,
 which  may  or  may not be shared with other datasets in
 the pool. When a snapshot or clone is created,  it  ini-
 tially  references  the same amount of space as the file
 system or snapshot it was created from, since  its  con-
 tents are identical.

 This property can also be referred to by  its  shortened
 column name, refer.

So apparently it means that even though 150 GB are used, only 17.6G
can be accessed.

Now du -sh on the same data:

# /bin/du -sh /rhosts/imgs/harvey
  46G   /rhosts/imgs/harvey

That is 300+ % less.

But then if you examine the [...].zfs/snapshot directory... you find
the data... or I guess its really the possible data.

  pwd
  /rhosts/imgs/harvey/.zfs/snapshot

du -sh `ls`
  18G zfs-auto-snap:frequent-2010-04-18-03:15
  18G zfs-auto-snap:frequent-2010-04-18-16:30
  18G zfs-auto-snap:frequent-2010-04-18-18:45
  18G zfs-auto-snap:frequent-2010-04-20-07:15
  18G zfs-auto-snap:monthly-2010-04-01-00:00
  18G zfs-auto-snap:weekly-2010-03-22-00:00
  18G zfs-auto-snap:weekly-2010-03-29-00:00
  18G zfs-auto-snap:weekly-2010-04-08-00:00
  18G zfs-auto-snap:weekly-2010-04-15-00:00
 =
  162GB

----   ---=---   -   

So then it seems some careful attention must be paid to the snapshots,
especially when you've removed quite a lot of data from the zfs
filesystem above them.

If you really want the space back now you'll need to follow up by
removing the data from the snapshots too.  Instead of letting the
rollover of snapshots, eventually square with the data above them.

Or I guess if you feel lucky and don't think there will be a need for
that removed data you could remove all the snapshots and let the first
new one happen.
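
For what it's worth, zfs list can show that breakdown directly:

  zfs list -o space z2/rhosts/imgs/harvey

which splits USED into the space held by snapshots, by the dataset itself,
by refreservation, and by any children.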

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best way to expand a raidz pool

2010-04-20 Thread Ian Garbutt
> On Mon, Apr 19, 2010 at 1:42 AM, Ian Garbutt
> <ian.g.garb...@newcastle.gov.uk> wrote:
> > Having looked through the forum I gather that you cannot just add an
> > additional device to a raidz pool.  This being the case, what are the
> > alternatives that I could use to expand a raidz pool?
>
> You can't expand the number of disks in a raidz vdev (go from 4-disk
> raidz1 to a 5-disk raidz1, for example).
>
> However, you can replace each of the disks in a raidz vdev with larger
> disks, thus expanding the total amount of storage available.  We've done
> this on two of our storage servers, replacing 500 GB WD disks with
> 1.5 TB Seagate disks.
>
> You can also add another raidz vdev to the pool, thus increasing the
> total amount of storage available in the pool (zpool add poolname raidz
> disk1 disk2 disk3 disk4, for example).
>
> --
> Freddie Cash
> fjwc...@gmail.com
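
For the disk-replacement route described above, each swap is a zpool
replace (hypothetical device names; depending on build you may need
autoexpand=on or an export/import for the new space to appear once the
last disk has resilvered):

# zpool set autoexpand=on tank
# zpool replace tank c1t0d0 c2t0d0
(repeat the replace for each disk in the vdev, letting each resilver finish)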


It's a shame that it's not that easy; hopefully something for the future.  For my
purpose I am going to have to create a new zpool with some new disks in it and
clone a zone to increase its size.  I can't just add another raidz set of disks
as we are quite strict on how much storage I can allocate (it gets charged back).

Thanks for the replies.

Ian
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Sean Sprague

Tonmaus,


you talking about and to whom were you
responding?
My intention was a response to the OP, which I guess from what I am 
seeing in the jive forum, happened as well. Indeed, my concern was the 
broken link in the first post which would be simple to fix if 
intended. That not being the case increases the smell of FUD.


Sorry. We are not mind(or intention)-readers. If you have a specific 
comment which should be directed to a specific individual, then a forum 
is not the best place to air it. Having said that, "The <...> website 
appears to be down at the moment - anyone fixing it?" is more than 
acceptable. Netiquette rules. To quote you: "Why don't you just fix the 
apparently broken link to your source, then?" is _not_ forum/list material.


Thanks... Sean.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Tonmaus
> you talking about and to whom were you
> responding?
 My intention was a response to the OP, which I guess from what I am seeing in 
the jive forum, happened as well. Indeed, my concern was the broken link in the 
first post which would be simple to fix if intended. That not being the case 
increases the smell of FUD.

-Tonmaus
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Large size variations - what is canonical method

2010-04-20 Thread Harry Putnam
Harry Putnam  writes:

> I'm seeing a really big (too big to be excused lightly) difference with
> the 2 zfs native methods  zpool and rpool
 
  Typo alert: The above line should have read:
  the 2 zfs native methods   ZPOOL list and  ZFS list

> compared to 2 native unix methods, du and /bin/df

  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Khyron
I have no idea who you're talking to, but presumably you mean this link:

http://lists.freebsd.org/pipermail/freebsd-questions/2010-April/215269.html

Worked fine for me.  I didn't post it.  I'm not the OP on this thread or on
the FreeBSD thread.  So what "broken link" are you talking about and to whom

were you responding?

On Tue, Apr 20, 2010 at 06:58, Tonmaus  wrote:

> Why don't you just fix the apparently broken link to your source, then?
>
> Regards,
>
> Tonmaus
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
"You can choose your friends, you can choose the deals." - Equity Private

"If Linux is faster, it's a Solaris bug." - Phil Harman

Blog - http://whatderass.blogspot.com/
Twitter - @khyron4eva
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] upgrade zfs stripe

2010-04-20 Thread Albert Frenz
ok thanks for the fast info. that sounds really awesome. i am glad i tried out
zfs, so i no longer have to worry about these issues, and the fact that i can
go back and forth between stripe and mirror is amazing. money was short, so
only 2 disks had been put in, and since the data is not that valuable i was
aware of the non-redundancy. though after knowing about that "feature" i will
surely add a disk for that sometime soon. thanks again.

adrian
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Tonmaus
Why don't you just fix the apparently broken link to your source, then?

Regards,

Tonmaus
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs snapshots and rsync

2010-04-20 Thread G. Ander
Thank you very much for your help! I wasn't aware of those options.

...sending end is running rsync < 3.0 (Ubuntu 8.04 LTS), crossing my fingers, 
hoping it'll work.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making an rpool smaller?

2010-04-20 Thread Daniel Carosone
I have certainly moved a root pool from one disk to another, with the
same basic process, ie:  

 - fuss with fdisk and SMI labels (sigh)
 - zpool create
 - snapshot, send and recv
 - installgrub
 - swap disks
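
A sketch of the middle steps, with hypothetical pool and device names (the
bootfs property and menu.lst still need updating afterwards):

# zpool create rpool2 c1t1d0s0
# zfs snapshot -r rpool@move
# zfs send -R rpool@move | zfs recv -Fdu rpool2
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0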

I looked over the "root pool recovery" section in the Best Practices guide
at the time, it has details of all these steps.

In my case, it was to move to a larger disk (in my laptop) rather than
a smaller, but as long as it all fits it won't matter.  

(I did it this way, instead of by attach and detach of mirror, in
order to go through dedup and upgrade checksums, and also to get
comfort with the process for some time when I'm really doing a
recovery.) 

--
Dan.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD best practices

2010-04-20 Thread Casper . Dik

>On Mon, 19 Apr 2010, Edward Ned Harvey wrote:
>> Improbability assessment aside, suppose you use something like the DDRDrive
>> X1 ... Which might be more like 4G instead of 32G ... Is it even physically
>> possible to write 4G to any device in less than 10 seconds?  Remember, to
>> achieve worst case, highest demand on ZIL log device, these would all have
>> to be <32kbyte writes (default configuration), because larger writes will go
>> directly to primary storage, with only the intent landing on the ZIL.
>
>Note that ZFS always writes data in order so I believe that the 
>statement "larger writes will go directly to primary storage" really 
>should be "larger writes will go directly to the ZIL implemented in 
>primary storage (which always exists)".  Otherwise, ZFS would need to 
>write a new TXG whenever a new "large" block of data appeared (which 
>may be puny as far as the underlying store is concerned) in order to 
>assure proper ordering.  This would result in a very high TXG issue 
>rate.  Pool fragmentation would be increased.
>
>I am sure that someone will correct me if this is wrong.

There's a difference between "written" and "the data is referenced by the 
uberblock".  There is no need to start a new TXG when a large datablock
is written.  (If the system resets, the data will be on disk but not 
referenced and is lost unless the TXG it belongs to is comitted)

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss