[zfs-discuss] Asymmetric mirroring

2009-06-10 Thread Monish Shah

Hello everyone,

I'm wondering if the following makes sense:

To configure a system for high IOPS, I want to have a zpool of 15K RPM SAS 
drives.  For high IOPS, I believe it is best to let ZFS stripe them, instead 
of doing a raidz1 across them.  Therefore, I would like to mirror the drives 
for reliability.


Now, I'm wondering if I can get away with using a large capacity 7200 RPM 
SATA drive as mirror for multiple SAS drives.  For example, say I had 3 SAS 
drives of 150 GB each.  Could I take a 500 GB SATA drive, partition it into 
3 vdevs and use each one as a mirror for one SAS drive?  I believe this is 
possible.


The problem is in performance.  What I want is for all reads to go to the 
SAS drives so that the SATA drive will only see writes.  I'm hoping that due 
to the copy-on-write nature of ZFS, the writes will get bunched into 
sequential blocks, so write bandwidth will be good, even on a SATA drive. 
But, the reads must be kept off the SATA drive.  Is there any way I can get 
ZFS to do that?
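
For concreteness, the layout described above could be assembled roughly as follows. This is a sketch only: the device names are hypothetical, the SATA disk is assumed to have been pre-partitioned into three ~150 GB slices with format(1M), and (as the replies below note) ZFS offers no knob to steer all reads away from one side of a mirror.

```shell
# Hypothetical devices: c1t0d0..c1t2d0 are the 150 GB SAS disks,
# c2t0d0 is the 500 GB SATA disk, pre-sliced into s0/s1/s2.
zpool create fastpool \
    mirror c1t0d0 c2t0d0s0 \
    mirror c1t1d0 c2t0d0s1 \
    mirror c1t2d0 c2t0d0s2

# Verify the three mirrored top-level vdevs:
zpool status fastpool
```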


Thanks,

Monish 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Asymmetric mirroring

2009-06-10 Thread Daniel Carosone
Use the SAS drives as L2ARC for a pool on SATA disks.  If your L2ARC is the 
full size of your pool, you won't see reads from the pool (once the cache is 
primed).

If you're purchasing all the gear new, consider whether SSDs in this role 
would be better than 15K SAS.
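
Daniel's suggestion could be sketched like this (hypothetical device names; `cache` vdevs are the L2ARC):

```shell
# SATA disks form the pool; the fast SAS (or SSD) disks become cache devices.
zpool create tank mirror c2t0d0 c2t1d0
zpool add tank cache c1t0d0 c1t1d0

# Cache devices appear under a "cache" heading:
zpool iostat -v tank
```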
-- 
This message posted from opensolaris.org


[zfs-discuss] ata - sata question

2009-06-10 Thread dick hoogendijk
I boot my OpenSolaris 2009.06 system off ONE ata drive.
I want to change that to a mirrored boot from two SATA drives.

Is it possible to FIRST make a mirror of the existing ata drive
PLUS one new sata drive and after resilvering, remove the ata drive and
replace it with another (second) SATA one?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carroll)


Re: [zfs-discuss] ata - sata question

2009-06-10 Thread James C. McPherson
On Wed, 10 Jun 2009 14:24:31 +0200
dick hoogendijk d...@nagual.nl wrote:

 I boot my OpenSolaris 2009.06 system off ONE ata drive.
 I want to change that to a mirrored boot from two SATA drives.
 
 Is it possible to FIRST make a mirror of the existing ata drive
 PLUS one new sata drive and after resilvering, remove the ata drive and
 replace it with another (second) SATA one?

yes.


zpool attach rpool olddisk newdisk1

[twiddle thumbs]

zpool replace rpool olddisk newdisk2


Also, remember to installgrub on each of the new disks
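
The installgrub step James mentions would look roughly like this (slice names are hypothetical; run it once per mirror half so the system can boot from either disk):

```shell
# Install the GRUB boot blocks on each new disk's root slice.
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
```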



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel


[zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-10 Thread Rodrigo E . De León Plicet
http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS


Re: [zfs-discuss] ata - sata question

2009-06-10 Thread Casper . Dik

I boot my OpenSolaris 2009.06 system off ONE ata drive.
I want to change that to a mirrored boot from two SATA drives.

Is it possible to FIRST make a mirror of the existing ata drive
PLUS one new sata drive and after resilvering, remove the ata drive and
replace it with another (second) SATA one?

Yes, that's what I did.  Make sure that the sata drive is at least as big
as the ata drive; make sure you make the appropriate Solaris FDISK 
partition and don't use an EFI label (can't boot those).
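
Casper's labeling advice could be carried out roughly as follows (hypothetical device names; the point is to give the new disk a Solaris FDISK partition and an SMI/VTOC label, not EFI):

```shell
# Create a Solaris FDISK partition spanning the whole new SATA disk.
fdisk -B /dev/rdsk/c1t0d0p0

# Copy the old disk's slice table onto the new disk (SMI label).
prtvtoc /dev/rdsk/c0d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
```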

Casper



Re: [zfs-discuss] ata - sata question

2009-06-10 Thread dick hoogendijk
On Wed, 10 Jun 2009 14:52:37 +0200
casper@sun.com wrote:
 make sure you make the appropriate Solaris FDISK partition and
 don't use an EFI label (can't boot those).

Thank you Casper (and James too). This EFI label is a nice reminder.
Installing grub is second nature ;-)

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carroll)


Re: [zfs-discuss] Asymmetric mirroring

2009-06-10 Thread Richard Elling

Monish Shah wrote:

Hello everyone,

I'm wondering if the following makes sense:

To configure a system for high IOPS, I want to have a zpool of 15K RPM 
SAS drives.  For high IOPS, I believe it is best to let ZFS stripe 
them, instead of doing a raidz1 across them.  Therefore, I would like 
to mirror the drives for reliability.


ok, so far.



Now, I'm wondering if I can get away with using a large capacity 7200 
RPM SATA drive as mirror for multiple SAS drives.  For example, say I 
had 3 SAS drives of 150 GB each.  Could I take a 500 GB SATA drive, 
partition it into 3 vdevs and use each one as a mirror for one SAS 
drive?  I believe this is possible.


yes, it is.

The problem is in performance.  What I want is for all reads to go to 
the SAS drives so that the SATA drive will only see writes.  I'm 
hoping that due to the copy-on-write nature of ZFS, the writes will 
get bunched into sequential blocks, so write bandwidth will be good, 
even on a SATA drive. But, the reads must be kept off the SATA drive.  
Is there any way I can get ZFS to do that?


What sort of performance do you need?

Writes tend to be asynchronous (non-blocking) for many apps, unless you're
running a database or NFS server, where synchronous writes are common.
In the latter case, invest in an SSD for a separate log.

Reads tend to get cached in RAM at several places in the data path, so it
is much more difficult to predict.

IMHO, today, systems which only use HDDs will not be considered high
performance in any case.
-- richard



Re: [zfs-discuss] Asymmetric mirroring

2009-06-10 Thread Scott Meilicke
The SATA drive will be your bottleneck, and you will lose any speed advantages 
of the SAS drives, especially using 3 vdevs on a single SATA disk.

I am with Richard, figure out what performance you need, and build accordingly.


Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-10 Thread Bob Friesenhahn

On Wed, 10 Jun 2009, Rodrigo E. De León Plicet wrote:


http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS


Maybe Apple will drop the server version of OS X and will eliminate 
their only server hardware (Xserve), since all it manages to do is lose 
money for Apple and distract from releasing the next iPhone?


Only a lunatic would rely on Apple for a mission-critical server 
application.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] cannot mount '/tank/home': directory is not empty

2009-06-10 Thread Sriram Narayanan
On Wed, Jun 10, 2009 at 4:35 AM, Arthur Bundo no-re...@opensolaris.org wrote:
 I can't log in as root anymore with su; as user x I can't execute almost 
 anything as sys to do some maintenance, only in single mode at boot,

After you log on as the user x (let's call this user arthur), see if
you can run pfexec su - and whether you get a # prompt. If
you do, then you can still rescue your data.

Alternatively, boot from the Belenix live CD, run zpool import -f and
see if the pool gets imported.

 name of system is unknown now, just got tired of this thing now, i don't 
 want to learn solaris, dont have time for that , just wanted to use it since 
 the broadband connection and the java environment seemed much more responsive 
 than linux and gnome too behaves much better in OpenSolaris, on previous 
 releases i did have sound via OSS now not anymore etc etc and a lot of small 
 things i see changed from release 59 and dont have the time to go deeply. 
 this snapshot thing is a killer, but i immediately run into problems with it.


Snapshots are an awesome feature indeed.

In case you are interested, try booting from Belenix and see if it
detects your sound device. You can take this discussion to
belenix-discuss, since zfs-discuss is for the zfs filesystem.

 thank you guys for your time and advice since it was all for free.
 good luck with this monster, see you all in 6 months


I wouldn't be so quick to dismiss OpenSolaris as a monster, much less
ZFS :) There's sometimes a learning curve (no matter how close to zero
it may be).

All you need to do is calmly read the instructions given earlier in
this thread, and also try to use shorter sentences and full stops.

 Arthur
 --

-- Sriram


Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-10 Thread C. Bergström

Bob Friesenhahn wrote:


Only a lunatic would rely on Apple for a mission-critical server 
application.

/OT
It's funny, but I suspect you just called a large portion of the Mac 
userbase lunatics..  While my reasons may differ, I wouldn't disagree ;)


./C

/OT


Re: [zfs-discuss] ZFS snapshots ignore everything in /export/....

2009-06-10 Thread Cindy . Swearingen

Hi Krenz,

Can you provide your zfs list output and your snapshot syntax?

See the output below from my Solaris 10 5/09 system. Snapshot
syntax and behavior should be similar to the Solaris 10 10/08
release.

When you take a snapshot of the root pool you must use the
-r option to recursively snapshot descendent datasets.

Thanks,

Cindy

# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                  5.61G  61.3G    94K  /rpool
rpool/ROOT             4.61G  61.3G    18K  legacy
rpool/ROOT/zfsBE       4.61G  61.3G  4.61G  /
rpool/dump             1.00G  61.3G  1.00G  -
rpool/export             38K  61.3G    20K  /export
rpool/export/home        18K  61.3G    18K  /export/home
rpool/swap              406K  61.3G   406K  -
# zfs snapshot -r rpool@now
# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                  5.61G  61.3G    94K  /rpool
rpool@now                  0      -    94K  -
rpool/ROOT             4.61G  61.3G    18K  legacy
rpool/ROOT@now             0      -    18K  -
rpool/ROOT/zfsBE       4.61G  61.3G  4.61G  /
rpool/ROOT/zfsBE@now    190K      -  4.61G  -
rpool/dump             1.00G  61.3G  1.00G  -
rpool/dump@now             0      -  1.00G  -
rpool/export             38K  61.3G    20K  /export
rpool/export@now           0      -    20K  -
rpool/export/home        18K  61.3G    18K  /export/home
rpool/export/home@now      0      -    18K  -
rpool/swap              406K  61.3G   406K  -


Krenz von Leiberman wrote:

When I take a snapshot of my rpool, (of which /export/... is a part of), ZFS 
ignores all the data in it and doesn't take any snapshots...

How do I make it include /export in my snapshots?

BTW, I'm running on Solaris 10 Update 6 (Or whatever is the first update to 
allow for root pools...)

Thanks.



Re: [zfs-discuss] ZFS snapshots ignore everything in /export/....

2009-06-10 Thread Lori Alt

On 06/09/09 18:15, Krenz von Leiberman wrote:

When I take a snapshot of my rpool, (of which /export/... is a part of), ZFS 
ignores all the data in it and doesn't take any snapshots...

How do I make it include /export in my snapshots?

BTW, I'm running on Solaris 10 Update 6 (Or whatever is the first update to 
allow for root pools...)

Thanks.
  

So are you doing the following?

# zfs snapshot -r rpool@today

If so, what is the output then of `zfs list` ?

Lori


Re: [zfs-discuss] cannot mount '/tank/home': directory is not empty

2009-06-10 Thread Richard Elling

Something is bothering me about this thread.  It seems to me that
if the system provides an error message such as "cannot mount
'/tank/home': directory is not empty", then the first plan of action
should be to look and see what is there, no?


The issue of overlaying mounts has existed for about 30 years, and
invariably one discovers that the events which lead to different data in
overlapping directories are the result of some sort of procedural issue.

Perhaps once again, ZFS is a singing canary?
-- richard



Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-10 Thread Aaron Blew
That's quite a blanket statement.  MANY companies (including Oracle)
purchased Xserve RAID arrays for important applications because of their
price point and capabilities.  You easily could buy two Xserve RAIDs and
mirror them for what comparable arrays of the time cost.

-Aaron

On Wed, Jun 10, 2009 at 8:53 AM, Bob Friesenhahn 
bfrie...@simple.dallas.tx.us wrote:

 On Wed, 10 Jun 2009, Rodrigo E. De León Plicet wrote:


 http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS


 Maybe Apple will drop the server version of OS-X and will eliminate their
 only server hardware (Xserve) since all it manages to do is lose money for
 Apple and distracts from releasing the next iPhone?

 Only a lunatic would rely on Apple for a mission-critical server
 application.

 Bob
 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/




[zfs-discuss] ZFS mirrors with uncoordinated LUN snapshots (Amazon EBS)

2009-06-10 Thread Clayton Wheeler
I'm setting up OpenSolaris on Amazon EC2, and planning on using their Elastic 
Block Store volumes to store a persistent ZFS zpool. I'm inclined to make a 
mirror of two EBS volumes (essentially LUNs with snapshot features and an API 
for mapping/unmapping them), for better data protection. However, EC2 only lets 
you snapshot one volume at a time; there is no consistency group feature for 
taking simultaneous snapshots of the volumes comprising a zpool. Likewise, you 
can only map or unmap one volume at a time.

My question is this: how well can ZFS deal with the mirror devices getting out 
of sync? For instance, if one or both of my EBS volumes are lost and I have to 
restore from EBS snapshots, one volume will have a newer version of the data 
than the other. Will ZFS be able to recognize this and safely resilver from the 
newer device to the older?


Re: [zfs-discuss] ZFS mirrors with uncoordinated LUN snapshots (Amazon EBS)

2009-06-10 Thread Richard Elling

Clayton Wheeler wrote:

I'm setting up OpenSolaris on Amazon EC2, and planning on using their Elastic 
Block Store volumes to store a persistent ZFS zpool. I'm inclined to make a 
mirror of two EBS volumes (essentially LUNs with snapshot features and an API 
for mapping/unmapping them), for better data protection. However, EC2 only lets 
you snapshot one volume at a time; there is no consistency group feature for 
taking simultaneous snapshots of the volumes comprising a zpool. Likewise, you 
can only map or unmap one volume at a time.
  


Interesting.  Let us know how it works.


My question is this: how well can ZFS deal with the mirror devices getting out 
of sync? For instance, if one or both of my EBS volumes are lost and I have to 
restore from EBS snapshots, one volume will have a newer version of the data 
than the other. Will ZFS be able to recognize this and safely resilver from the 
newer device to the older?
  


Syncing is done to a transaction group. By default, txgs are sync'ed every
5 or 30 seconds.

It would be relatively easy to set up a script which would notify EBS
to snap immediately after a txg commit completes. If the workload contains
a lot of sync writes, special care would be needed to design the system to
properly deal with the ZIL.
-- richard



Re: [zfs-discuss] ZFS mirrors with uncoordinated LUN snapshots (Amazon EBS)

2009-06-10 Thread Will Murnane
On Wed, Jun 10, 2009 at 14:18, Clayton Wheeler no-re...@opensolaris.org wrote:
 I'm setting up OpenSolaris on Amazon EC2, and planning on using their Elastic 
 Block Store volumes to store a persistent ZFS zpool. I'm inclined to make a 
 mirror of two EBS volumes (essentially LUNs with snapshot features and an API 
 for mapping/unmapping them), for better data protection. However, EC2 only 
 lets you snapshot one volume at a time; there is no consistency group feature 
 for taking simultaneous snapshots of the volumes comprising a zpool. 
 Likewise, you can only map or unmap one volume at a time.
You could export the zpool (which will cause it to stop writing to
disk) and then take the snapshots.  This would prevent desynchronized
volumes entirely.
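
Will's export-then-snapshot approach could be scripted roughly like this. The pool name and volume IDs are hypothetical, and `ec2-create-snapshot` is the EC2 API command-line tool of that era; substitute whatever snapshot call your tooling provides:

```shell
# Export quiesces the pool; nothing is written to either volume afterward.
zpool export tank

# Snapshot each EBS volume while the pool is inactive.
ec2-create-snapshot vol-11111111
ec2-create-snapshot vol-22222222

# Bring the pool back online.
zpool import tank
```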

 My question is this: how well can ZFS deal with the mirror devices getting 
 out of sync? For instance, if one or both of my EBS volumes are lost and I 
 have to restore from EBS snapshots, one volume will have a newer version of 
 the data than the other. Will ZFS be able to recognize this and safely 
 resilver from the newer device to the older?
In my experience, ZFS doesn't have problems with this kind of scenario
as long as you don't touch the snapshots before feeding both of them
back into the system.  That is, if you snapshot volumes A and B at
different times, don't make changes to both (producing A' and B')
before putting them back in a zpool.  A and B will be able to form a
mirror, but A' and B' will not necessarily.

You should make sure that whatever application you are running forces
sync of its data to disk before taking the snapshots, but you probably
knew that already.

Will


Re: [zfs-discuss] cannot mount '/tank/home': directory is not empty

2009-06-10 Thread Arthur Bundo
I want to thank you for your quick response.

Regarding the learning curve, I really don't have enough time to go deep 
into things anymore; I just like the stability of the Solaris platform in 
general, and I used it at home years ago.

All I need now is an out-of-the-box installation OS.
I see a lot of things have changed in OpenSolaris. Regarding Belenix, I 
appreciate your efforts on that distro very much, but I don't know its 
internals. Simply put, it is about trust, and OpenSolaris is supported 
under the Sun name; that is enough for me.

I am not a developer or sysadmin or anything connected to OSes in general; I 
just want stability and backward compatibility.

I am an old Solaris lover (esp. on x86) and I am waiting for it to thrive.
Wish all the best to the community.
Arthur

PS the snapshot idea is really a killer, and monster is mentioned in a very 
appreciative sense.


Re: [zfs-discuss] ZFS: Re-Propragate inheritable ACL permissions

2009-06-10 Thread Cindy . Swearingen

Christo,

We don't have an easy way to re-propagate ACL entries on existing files
and directories.

You might try using a combination of find and chmod, similar to the
syntax below.

Which Solaris release is this? We might be able to provide better
hints if you can identify the release and the ACLs you are trying to 
propagate.


Cindy

For files:

$ find . -type f -exec chmod A=...:...file_inherit:allow {} \;

For directories:

$ find . -type d -exec chmod A=...:...dir_inherit:allow {} \;

If you create a snapshot and clone of the target dataset, you could
experiment with the correct syntax.

Christo Kutrovsky wrote:

Hello,

Any hints on how to re-propagate all ACL entries from a given parent directory 
down?

For example, you set your inheritable ACLs the way you want by running multiple:

chmod A+:dir_inherit/file_inherit PARENT_DIR

Then what command you would run to add these to all already created files 
*and* directories?



Re: [zfs-discuss] ZFS snapshot send/recv hangs X4540 servers

2009-06-10 Thread Tim Haley

Brent Jones wrote:

On Mon, Jun 8, 2009 at 9:38 PM, Richard Lowe richl...@richlowe.net wrote:

Brent Jones br...@servuhome.net writes:




I've had similar issues with similar traces.  I think you're waiting on
a transaction that's never going to come.

I thought at the time that I was hitting:
  CR 6367701 hang because tx_state_t is inconsistent

But given the rash of reports here, it seems perhaps this is something
different.

I, like you, hit it when sending snapshots, it seems (in my case) to be
specific to incremental streams, rather than full streams, I can send
seemingly any number of full streams, but incremental sends via send -i,
or send -R of datasets with multiple snapshots, will get into a state
like that above.

-- Rich



For now, back to snv_106 (the most stable build that I've seen, like it a lot)
I'll open a case in the morning, and see what they suggest.


After examining the dump we got from you (thanks again), we're relatively sure 
you are hitting


6826836 Deadlock possible in dmu_object_reclaim()

This was introduced in nv_111 and fixed in nv_113.

Sorry for the trouble.

-tim



Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-10 Thread Alex Lam S.L.
On Thu, Jun 11, 2009 at 2:08 AM, Aaron Blew aaronb...@gmail.com wrote:
 That's quite a blanket statement.  MANY companies (including Oracle)
 purchased Xserve RAID arrays for important applications because of their
 price point and capabilities.  You easily could buy two Xserve RAIDs and
 mirror them for what comparable arrays of the time cost.

 -Aaron

I'd very much doubt that, but I guess one can always push their time
budgets around ;-)

Alex.



 On Wed, Jun 10, 2009 at 8:53 AM, Bob Friesenhahn
 bfrie...@simple.dallas.tx.us wrote:

 On Wed, 10 Jun 2009, Rodrigo E. De León Plicet wrote:


 http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS

 Maybe Apple will drop the server version of OS-X and will eliminate their
 only server hardware (Xserve) since all it manages to do is lose money for
 Apple and distracts from releasing the next iPhone?

 Only a lunatic would rely on Apple for a mission-critical server
 application.

 Bob
 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/








-- 

Josh Billings  - Every man has his follies - and often they are the
most interesting thing he has got. -
http://www.brainyquote.com/quotes/authors/j/josh_billings.html


[zfs-discuss] multiple devs for rpool

2009-06-10 Thread Colleen
As I understand it, you cannot currently use multiple disks for a rpool (i.e., 
something similar to RAID 10). Are there plans to provide this functionality, 
and if so, does anyone know what the general timeframe is?

Thanks!


Re: [zfs-discuss] multiple devs for rpool

2009-06-10 Thread Lori Alt
A root pool is composed of one top-level vdev, which can be a mirror 
(i.e. 2 or more disks).  A raidz vdev is not supported for the root pool 
yet.  It might be supported in the future, but the timeframe is unknown 
at this time.
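
What *is* supported, per Lori's reply, is turning the single root disk into a mirror by attaching a second device to the one top-level vdev. A sketch with hypothetical slice names:

```shell
# Attach a second slice to the existing root device, forming a 2-way mirror.
zpool attach rpool c0t0d0s0 c0t1d0s0

# After resilvering, make the new half bootable as well.
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
```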


Lori

Colleen wrote:


As I understand it, you cannot currently use multiple disks for a rpool (IE: 
something similar to raid10). Are there plans to provide this functionality, 
and if so does anyone know what the general timeframe is?

Thanks!
 





Re: [zfs-discuss] zpool import hangs

2009-06-10 Thread Brad Reese
Hi Victor,

Sorry it took a while for me to reply, I was traveling and had limited network 
access.

'zdb -e -bcsv -t 2435913 tank' has been running for a few days with no 
output...want to try something else?

Here's the output of 'zdb -e -u -t 2435913 tank':

Uberblock

magic = 00bab10c
version = 4
txg = 2435911
guid_sum = 16655261404755214374
timestamp = 1240287900 UTC = Mon Apr 20 23:25:00 2009

Thanks,

Brad


Re: [zfs-discuss] multiple devs for rpool

2009-06-10 Thread Carson Gaspar

Lori Alt wrote:
A root pool is composed of one top-level vdev, which can be a mirror 
(i.e. 2 or more disks).  A raidz vdev is not supported for the root pool 
yet.  It might be supported in the future, but the timeframe is unknown 
at this time.


The original poster was asking about a zpool of more than 1 mirrored 
pair (4 disks making up 2 mirrored pairs, for example). I don't know if 
that changes the answer (doubtful), but raidz/raidz2 was not being 
discussed.


--
Carson


Re: [zfs-discuss] multiple devs for rpool

2009-06-10 Thread Lori Alt



Carson Gaspar wrote:


Lori Alt wrote:

A root pool is composed of one top-level vdev, which can be a mirror 
(i.e. 2 or more disks).  A raidz vdev is not supported for the root 
pool yet.  It might be supported in the future, but the timeframe is 
unknown at this time.



The original poster was asking about a zpool of more than 1 mirrored 
pair (4 disks making up 2 mirrored pairs, for example). I don't know 
if that changes the answer (doubtful), but raidz/raidz2 was not being 
discussed.



one top-level vdev means that a root pool can be composed of no more 
than one mirrored pair (or mirrored triple, or whatever).  That might 
change in the future, but there is no projected date for relaxing that 
constraint.


Lori



Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-10 Thread Toby Thain


On 10-Jun-09, at 7:25 PM, Alex Lam S.L. wrote:

On Thu, Jun 11, 2009 at 2:08 AM, Aaron Blew aaronb...@gmail.com wrote:

That's quite a blanket statement.  MANY companies (including Oracle)
purchased Xserve RAID arrays for important applications because of  
their
price point and capabilities.  You easily could buy two Xserve  
RAIDs and

mirror them for what comparable arrays of the time cost.

-Aaron


I'd very much doubt that, but I guess one can always push their time
budgets around ;-)


Hm, as someone who personally installed a 1st gen 1.1TB (half full)  
Xserve RAID + Xserve in a production environment, back when such a  
configuration cost AUD $40,000, I can tell you that it was child's  
play to set up, and ran flawlessly. The cost halved within a few  
months, iirc. :)


--Toby



Alex.




On Wed, Jun 10, 2009 at 8:53 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:


On Wed, 10 Jun 2009, Rodrigo E. De León Plicet wrote:



http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS


Maybe Apple will drop the server version of OS-X and will  
eliminate their
only server hardware (Xserve) since all it manages to do is lose  
money for

Apple and distracts from releasing the next iPhone?

Only a lunatic would rely on Apple for a mission-critical server
application.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/

GraphicsMagick Maintainer,http://www.GraphicsMagick.org/










--

Josh Billings  - Every man has his follies - and often they are the
most interesting thing he has got. -
http://www.brainyquote.com/quotes/authors/j/josh_billings.html




Re: [zfs-discuss] cannot mount '/tank/home': directory is not empty

2009-06-10 Thread Sriram Narayanan
On Thu, Jun 11, 2009 at 1:45 AM, Arthur Bundo no-re...@opensolaris.org wrote:
 I want to thank you for your quick response.

 Regarding the learning curve, i really don't have enough time to go in deep 
 of the things anymore, i just like the stability of the Solaris platform in 
 general, and i used it at home years from now.

 All i need now is some out of the box installation OS.
 I see a lot of things have changed in OSolaris, regarding Belenix, i 
 appreciate very much your efforts on that distro, but i don't know their 
 internals and simply put, it is about trust  and Opensolaris is supported 
 under the Sun name, and that is enough for me.


Heh.. most of the OpenSolaris 2008.n distro is based on work done on
Belenix. The same person who created Belenix worked closely with the
OpenSolaris distro team to create that distro. We continue to help out
as and when we get time out from work.

 I am not a developer or sys-admin or anything connected to OS in general, i 
 just want stability and backward compatibility.


I think Richard Elling pointed out something worth investigating -
what does the present /export/home folder contain ?

-- Sriram


Re: [zfs-discuss] cannot mount '/tank/home': directory is not empty

2009-06-10 Thread Jonathan Edwards
i've seen a problem where periodically a 'zfs mount -a' and sometimes  
a 'zpool import pool' can create what appears to be a race condition  
on nested mounts .. that is .. let's say that i have:


FS          mountpoint
pool        /export
pool/fs1    /export/home
pool/fs2    /export/home/bob
pool/fs3    /export/home/bob/stuff

if pool is imported (or a mount -a is done) and somehow pool/fs3  
mounts first, then it will create /export/home and /export/home/bob, and  
pool/fs1 and pool/fs2 will fail to mount.  This seems to be  
happening on more recent builds, but not predictably, so i'm still  
trying to track down what's going on
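
When the race does strike, one hedged way to recover by hand (using the hypothetical dataset names from the example above) is to clear the stray directories the early mount created and then remount parents before children:

```shell
# Unmount everything in the pool first.
zfs umount -a

# Remove the empty directories left behind by the out-of-order mount;
# rmdir refuses to remove non-empty directories, so real data is safe.
rmdir /export/home/bob /export/home

# Mount in hierarchy order: parents before children.
zfs mount pool
zfs mount pool/fs1
zfs mount pool/fs2
zfs mount pool/fs3
```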


On Jun 10, 2009, at 1:01 PM, Richard Elling wrote:


Something is bothering me about this thread.  It seems to me that
if the system provides an error message such as cannot mount
'/tank/home': directory is not empty then the first plan of action
should be to look and see what is there, no?
The issue of overlaying mounts has existed for about 30 years and
invariably one discovers that events which lead to different data in
overlapping directories is the result of some sort of procedural  
issue.


Perhaps once again, ZFS is a singing canary?
-- richard





Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-10 Thread Rich Teer
It's not pertinent to this sub-thread, but ZFS (albeit read-only)
is already in the currently shipping Mac OS X 10.5.  So presumably it'll
be in Mac OS X 10.6...

-- 
Rich Teer, SCSA, SCNA, SCSECA

URLs: http://www.rite-group.com/rich
  http://www.linkedin.com/in/richteer