Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-28 Thread Michelle Knight
I have to admit that fmthard does appear to be a bit of a sledgehammer in this 
case. I thought I was doing something wrong with that, but you've confirmed it now.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-28 Thread Michelle Knight
Many thanks, I'll try that tonight.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-28 Thread Daniel Carosone
On Thu, Jan 28, 2010 at 09:33:19PM -0800, Ed Fang wrote:
> We considered an SSD ZIL as well but from my understanding it won't
> help much on sequential bulk writes but really helps on random
> writes (to sequence going to disk better).  

A slog will only help if your write load involves lots of synchronous
writes: typically apps calling fsync() or using O_*SYNC, and writes
via NFS.  Random vs sequential isn't important (though sync random
writes can be worse for the combination).  Otherwise, it won't help.

zilstat.sh will help you figure out if it will.  If the workload
would be helped by a slog at all, raidz might be helped the most, since
it's the most limited for total IOPS (vs mirror). 
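For example, you can run it during a representative bulk load (a sketch;
assumes you've fetched zilstat.sh, can run DTrace as root, and uses its
iostat-like interval/count arguments):

# chmod +x zilstat.sh
# ./zilstat.sh 10 6    # 10-second intervals, 6 samples; sustained non-zero
                       # ops/s means a slog is likely to help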

> Also, doubt L2ARC/ARC will help that much for sequential either.

Maybe, maybe not.  It depends mostly on how often you re-stream the
same content, so the cache can be hit often enough to be worthwhile.
At the other end, with decent RAM and lots of repeated content, you
might not even see much benefit from l2arc if enough fits in l1arc :)

I didn't mention it when talking about performance, even if it might
reduce disk load with a good hit ratio, because l2arc (currently)
starts cold after each reboot.  If you need to stream N clients at rate
X, you probably need to do so from boot and can't wait for the cache
to warm up. 

Cache might help you keep doing so after a while, with less work, but
for a discussion of the underlying pool storage the base requirement
is the same. 

--
Dan.


pgpYG8yTKYOd5.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-28 Thread Ed Fang
Thanks for the responses, guys.  It looks like I'll probably use RaidZ2 with 8 
drives.  The write bandwidth requirement isn't that great, as it'll be a hundred 
gigs every couple of weeks in a bulk-load type of environment.  So, not a major 
issue.  Testing with 8 drives in a raidz2 easily saturated a GigE connection on 
both the client and the server side.  We'll probably link aggregate two GigE 
ports on the switch to boost the incoming bandwidth.

In response to some of the other questions - the drives are 7200 RPM SATA 
drives, all connected via a SAS expander backplane.  CPU cycles obviously 
aren't an issue on a Xeon machine with 24Gig of memory.  We considered an SSD 
ZIL as well but from my understanding it won't help much on sequential bulk 
writes but really helps on random writes (to sequence going to disk better).  
Also, I doubt L2ARC/ARC will help that much for sequential either.  I could be 
wrong on both counts here so please correct me if I'm wrong.  

Currently testing with an 8-disk RaidZ2 to see how that performs.  As it isn't 
speed critical, this will probably be the sweet spot between storage and 
reliability for us.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZPOOL somehow got same physical drive assigned twice

2010-01-28 Thread TheJay
Attached the zpool history.

Things to note: raidz2-0 was created on FreeBSD 8

2010-01-16.16:30:05 zpool create rzpool2 raidz2 da1 da2 da3 da4 da5 da6 da7 da8 
da9
2010-01-18.17:04:17 zpool export rzpool2
2010-01-18.21:00:35 zpool import rzpool2
2010-01-23.22:11:03 zpool export rzpool2
2010-01-24.01:28:21 zpool import rzpool2
2010-01-24.01:29:09 zpool upgrade rzpool2
2010-01-24.01:31:19 zpool scrub rzpool2
2010-01-24.17:41:45 zpool add rzpool2 raidz2 c6t0d0 c6t1d0 c6t10d0 c6t11d0 
c6t12d0 c6t13d0 c6t14d0 c6t15d0
2010-01-24.18:21:26 zfs create -o casesensitivity=mixed rzpool2/music
2010-01-24.18:30:27 zfs create -o casesensitivity=mixed rzpool2/photos
2010-01-24.18:30:45 zfs create -o casesensitivity=mixed rzpool2/movies
2010-01-24.19:08:23 zfs set sharesmb=on Movies rzpool2/movies
2010-01-24.19:09:09 zfs set sharesmb=on rzpool2/movies
2010-01-24.19:09:16 zfs set sharesmb=on rzpool2/music
2010-01-24.19:09:24 zfs set sharesmb=on rzpool2/photos
2010-01-24.20:32:02 zfs set sharenfs=on rzpool2/movies
2010-01-26.20:12:50 zpool scrub rzpool2
2010-01-26.20:15:55 zpool clear rzpool2 c6t1d0
2010-01-26.20:20:52 zpool clear rzpool2 c6t1d0
2010-01-26.21:42:58 zpool offline rzpool2 c6t1d0
2010-01-26.21:51:56 zpool scrub -s rzpool2
2010-01-26.21:55:17 zpool online rzpool2 c6t1d0
2010-01-27.19:59:01 zpool clear -F rzpool2
2010-01-27.20:05:03 zpool offline rzpool2 c6t1d0
2010-01-27.20:34:44 zpool clear -F rzpool2
2010-01-27.20:41:15 zpool replace rzpool2 c6t1d0 c6t16d0
2010-01-28.07:57:27 zpool scrub rzpool2
2010-01-28.20:39:42 zpool clear rzpool2 c6t1d0
2010-01-28.20:47:46 zpool replace rzpool2 c6t1d0 c6t17d0


On Jan 28, 2010, at 6:03 AM, Mark J Musante wrote:

> On Wed, 27 Jan 2010, TheJay wrote:
> 
>> Guys,
>> 
>> Need your help. My DEV131 OSOL build with my 21TB disk system somehow got 
>> really screwed:
>> 
>> This is what my zpool status looks like:
>> 
>>  NAME               STATE     READ WRITE CKSUM
>>  rzpool2            DEGRADED     0     0     0
>>    raidz2-0         DEGRADED     0     0     0
>>      replacing-0    DEGRADED     0     0     0
>>        c6t1d0       OFFLINE      0     0     0
>>        c6t16d0      ONLINE       0     0     0  256M resilvered
>>      c6t2d0s2       ONLINE       0     0     0
>>      c6t3d0p0       ONLINE       0     0     0
>>      c6t4d0p0       ONLINE       0     0     0
>>      c6t5d0p0       ONLINE       0     0     0
>>      c6t6d0p0       ONLINE       0     0     0
>>      c6t7d0p0       ONLINE       0     0     0
>>      c6t8d0p0       ONLINE       0     0     0
>>      c6t9d0         ONLINE       0     0     0
>>    raidz2-1         DEGRADED     0     0     0
>>      c6t0d0         ONLINE       0     0     0
>>      c6t1d0         UNAVAIL      0     0     0  cannot open
>>      c6t10d0        ONLINE       0     0     0
>>      c6t11d0        ONLINE       0     0     0
>>      c6t12d0        ONLINE       0     0     0
>>      c6t13d0        ONLINE       0     0     0
>>      c6t14d0        ONLINE       0     0     0
>>      c6t15d0        ONLINE       0     0     0
>> 
>> check drive c6t1d0 -> It appears in both raidz2-0 and raidz2-1 !!
>> 
>> How do I *remove* the drive from raidz2-1 (with edit/hexedit or anything 
>> else)? It is clearly a bug in ZFS that allowed me to assign the drive 
>> twice. Again: running DEV131 OSOL
> 
> Could you send us the zpool history output?  It'd be interesting to know how 
> this happened.  Anyway, the way to get out of this is to do a 'zpool detach' 
> on c6d1s0 after the resilvering finishes, and then do a 'zpool online' of 
> c6d1s0 to connect it back up to raidz2-1.
> 
> 
> Regards,
> markm
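
So, once the resilver onto c6t16d0 finishes, I take it the recovery is
something like the following (assuming the device Mark means is c6t1d0, as
shown in the status output above):

# zpool detach rzpool2 c6t1d0   # drop the stale entry from replacing-0
# zpool online rzpool2 c6t1d0   # reconnect the disk to its slot in raidz2-1
# zpool status rzpool2          # verify both raidz2 vdevs come back ONLINE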

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-28 Thread Mark Grant
> Also, I noticed you're using 'EARS' series drives.
> Again, I'm not sure if the WD10EARS drives suffer
> from a problem mentioned in these posts, but it might
> be worth looking into -- especially the last link:

Aren't the EARS drives the first ones using 4k sectors? Does OpenSolaris 
support that properly yet? From what I've read using the 512-byte emulation 
mode in the drives is not good for performance (lots of read/modify/write), 
though I don't know whether that could cause these kinds of problems.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Flash Jumpstart and mini-root version

2010-01-28 Thread Tony MacDoodle
Getting the following error when trying to do a ZFS Flash install via
jumpstart.

error: field 1 - keyword "pool"

Do I have to have Solaris 10 u8 installed as the mini-root, or will previous
versions of Solaris 10 work?

jumpstart profile below

install_type flash_install
archive_location nfs://192.168.1.230/export/install/media/sol10u8.flar
partitioning explicit
pool rpool auto 8g 8g yes
bootenv installbe bename c1t0d0s0
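
For reference, the documented form of the pool keyword is pool name, pool
size, swap size, dump size, then a vdev list, so I'd expect a valid u8
profile to look roughly like this (disk and BE names here are placeholders):

install_type flash_install
archive_location nfs://192.168.1.230/export/install/media/sol10u8.flar
partitioning explicit
pool rpool auto 8g 8g c1t0d0s0
bootenv installbe bename s10u8BE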


Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Media server build

2010-01-28 Thread Thomas Burgess
On Thu, Jan 28, 2010 at 7:58 PM, Tiernan OToole  wrote:

> Good morning. This is more than likely a stupid question on this alias,
> but I will ask anyway. I am building a media server in the house and
> am trying to figure out what OS to install. I know it must have ZFS
> support, but I can't figure out if I should use FreeNAS or OpenSolaris.
>
> FreeNAS has the advantage of out-of-the-box setup, but is there
> anything similar for OpenSolaris? Also, the ability to boot and install
> from a USB key would be handy.
>
> Thanks.
>
> --Tiernan
>
You should definitely go with OpenSolaris or something based on OpenSolaris
if possible, BUT there is one major caveat:

OpenSolaris has a much smaller HCL than FreeBSD.  Finding hardware for Osol
isn't hard if you design your system around it, but it CAN be an issue when
you are using "found" hardware or old stuff you just happen to have.  If you
are designing a system from scratch and don't mind doing the research, it's
a nice option.

The main reason I PERSONALLY say to go with Osol is that it has the newest
ZFS features, and I found CIFS performance to be great, not to mention easy
to set up. (You download 2 packages and then simply use zfs set sharesmb=on;
what could be easier?)
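
For the record, that setup is roughly the following (a sketch; the package
names are the OpenSolaris CIFS server packages as I recall them, and the
dataset name is a placeholder):

# pkg install SUNWsmbs SUNWsmbskr   # CIFS service + kernel module (reboot after)
# svcadm enable -r smb/server       # start the in-kernel SMB server
# zfs set sharesmb=on tank/media    # share the dataset over CIFS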

FreeBSD is great too (and FreeNAS is based on FreeBSD) but for PURE
fileserver/nas I think opensolaris is a better choice.

> --
> Tiernan O'Toole
> blog.lotas-smartman.net
> www.tiernanotoolephotography.com
> www.the-hairy-one.com
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Storage J4400 SATA Interposer Card

2010-01-28 Thread Stanley Chiu
Hmm, no... that's the item I linked to in my first post.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for setting ACL

2010-01-28 Thread Simon Breden
I don't have a lot of time to help here, but this post of mine might possibly 
help with ACLs:

http://breden.org.uk/2009/05/10/home-fileserver-zfs-file-systems/

Cheers,
Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Media server build

2010-01-28 Thread Richard Elling
On Jan 28, 2010, at 4:58 PM, Tiernan OToole wrote:

> Good morning. This is more than likely a stupid question on this alias,
> but I will ask anyway. I am building a media server in the house and
> am trying to figure out what OS to install. I know it must have ZFS
> support, but I can't figure out if I should use FreeNAS or OpenSolaris.
> 
> FreeNAS has the advantage of out-of-the-box setup, but is there
> anything similar for OpenSolaris? Also, the ability to boot and install
> from a USB key would be handy.

Check out NexentaStor.  www.nexenta.com
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Media server build

2010-01-28 Thread Tiernan OToole
Good morning. This is more than likely a stupid question on this alias,
but I will ask anyway. I am building a media server in the house and
am trying to figure out what OS to install. I know it must have ZFS
support, but I can't figure out if I should use FreeNAS or OpenSolaris.

FreeNAS has the advantage of out-of-the-box setup, but is there
anything similar for OpenSolaris? Also, the ability to boot and install
from a USB key would be handy.

Thanks.

--Tiernan

-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.tiernanotoolephotography.com
www.the-hairy-one.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-28 Thread Daniel Carosone
On Thu, Jan 28, 2010 at 07:26:42AM -0800, Ed Fang wrote:
> 4 x x6 vdevs in RaidZ1 configuration
> 3 x x8 vdevs in RaidZ2 configuration

Another choice might be
 2 x x12 vdevs in raidz2 configuration

This gets you the space of the first, with the recovery properties of
the second - at a cost in potential performance.  Your workload
(mostly streaming, not many parallel streams, large files) sounds like
it might be one that can tolerate this cost, but care will be needed.
Experiment and measure, if you can.

2 x x12 could also get you to raidz3, for extra safety, making the
same performance tradeoff against 3x8 with constant space.  I don't
think this is a choice you're likely to want, but worth mentioning.

> Obviously if a drive fails, it'll take a good several days to
> resilver.  The data is important but not critical.  

That's important information.

> Using raidz1
> allows you one drive failure, but my understanding is that if the
> zpool has four vdevs using raidz1, then any single vdev failure of
> more than one drive may fail the entire zpool 

Correct, as already discussed.

However, there are actually two questions here, and your
final decision depends on both:

 - how many vdevs of what type?
 - how many pools?

Do you need all the space available in one common pool, or can your 
application distribute space and load between multiple resource
containers?   You probably have more degrees of trade-off freedom,
even for the same choices of base vdevs.

If space is more important to you, and losing 1/4 of your non-critical
files on a second disk failure is a tolerable risk, you might consider
4 pools of 6-disk raidz1.  Likewise, 3 pools of 8-disk raidz2 reduces
the worst impact of a third disk failure to 1/3 of your data, and 2
pools of 12-disk vdevs to 1/2.

> If that is the
> case, then it sounds better to consider 3 x8 with raidz2.  

Others have recommended raidz2, and I agree with them, in general
principle. 

All that said, for large files that will fill large blocks, I'm wary
of raidz pools with an odd number of data disks, and prefer if
possible, a power-of-two number of data disks (plus whatever
redundancy level you choose).   Raid-z striping can leave holes, and
this seems like it may result in inefficiencies, either in space,
fragmentation or just extra work.  I have not measured this, and it
may be irrelevant or invisible, generally or in your workload.

So, I would recommend raidz2 vdevs, either 3x8 or 2x12. Test and
compare the performance under your workload and see if you can afford
the cost of the extra space the wide stripes offer.  Test the
performance while scrubs and resilvers are going on as well as real
workload. If 2x12 can carry this for you, go for it. Then choose
whether to combine the vdevs into a big pool, or keep them separate.
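
For reference, the two layouts would be created roughly like this (controller
and target numbers are placeholders):

# 3 x 8-disk raidz2 vdevs in one pool
zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    raidz2 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 \
    raidz2 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0

The 2 x 12 variant is the same command with two 12-disk raidz2 groups
instead of three 8-disk groups.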

--
Dan.


pgpTn58UmlK25.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-28 Thread Simon Breden
Are you using the latest IT mode firmware? (1.26.00 I think, as listed above; 
I haven't checked mine, an AOC-USAS-L8i, which uses the same controller.)

Also, I noticed you're using 'EARS' series drives.
Again, I'm not sure if the WD10EARS drives suffer from a problem mentioned in 
these posts, but it might be worth looking into -- especially the last link:

1. On synology site, seems like older 4-platter 1.5TB EADS OK 
(WD15EADS-00R6B0), but newer 3 platter EADS have problems (WD15EADS-00P8B0):
http://forum.synology.com/enu/viewtopic.php?f=151&t=19131&sid=c1c446863595a5addb8652a4af2d09ca
2. A mac user has problems with WD15EARS-00Z5B1:
http://community.wdc.com/t5/Desktop/WD-1-5TB-Green-drives-Useful-as-door-stops/td-p/1217/page/2
 (WD 1.5TB Green drives - Useful as door stops)
http://community.wdc.com/t5/Desktop/WDC-WD15EARS-00Z5B1-awful-performance/m-p/5242
 (WDC WD15EARS-00Z5B1 awful performance)

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-28 Thread Edward Ned Harvey
> Replacing my current media server with another larger capacity media
> server.   Also switching over to solaris/zfs.
> 
> Anyhow we have 24 drive capacity.  These are for large sequential
> access (large media files) used by no more than 3 or 5 users at a time.

What type of disks are you using, and how fast is your network?  Will it be
mostly read operations, or a lot of write operations too?  Do you care about
making sure the filer can keep up with the speed of the network?

Typical 7200rpm sata disks can sustain approx 500Mbps, and therefore a
2-disk mirror can sustainably max out a Gb Ethernet.  A bunch of 2-disk
mirrors striped together would definitely be able to keep up.

People often mistakenly think that raidz or raidz2 perform well, like a
bunch of disks working as a team.  In my tests, a raid5 configuration
usually performs slower than a single disk, especially for writes.  (Note: I
said raid5, not raidz.  I haven't tested zfs to see if raidz can outperform
raid5 on an enterprise LSI raid controller fully accelerated.)

If you want performance, go with a bunch of mirrors striped together.  If
you want to keep your GB/$ maximized, go for raidz.

In either configuration, it is highly advisable to keep all disks
identically sized, and have a hotspare.
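
For example, a striped-mirror pool with a hotspare would be built along
these lines (device names are placeholders):

zpool create tank \
    mirror c1t0d0 c1t1d0 \
    mirror c1t2d0 c1t3d0 \
    mirror c1t4d0 c1t5d0 \
    spare c1t6d0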

Also, if you get a single (doesn't need to be redundant) high performance
SSD (can be small ... 32G or whatnot) disk to use for the ZIL, you get a
performance boost that way too.  I emphasize high performance, because not
all cheap SSD's outperform real hard drives.  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small connect another disk

2010-01-28 Thread Cindy Swearingen

I think the SATA(2)-->SATA(1) connection will negotiate correctly,
but maybe some hardware expert will confirm.

cs


On 01/28/10 15:27, dick hoogendijk wrote:

On Thu, 2010-01-28 at 08:44 -0700, Cindy Swearingen wrote:
Or, if possible, connect another larger disk and attach it to the original root 
disk or even replace the smaller root pool disk with the larger disk.


I go for that one. But since it's a somewhat older system I only have
IDE and SATA(150) connections. IDE disks are rare these days.

Question: do SATA2 disks work on SATA(1) connections?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-28 Thread Mark Bennett
My experience was different again.
I have the same timeout issues with both the LSI and Supermicro cards in IT 
mode.
IR mode on the Supermicro card didn't solve the problem, but seems to have 
reduced it.
The server has 1 x 16-bay chassis and 1 x 24-bay chassis (both use expanders);
the test pool has 24 x WD10EARS in 6-disk vdev sets, 1 on the 16-bay and 2 on 
the 24-bay.

Mark
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-28 Thread Tonmaus
Hi James

> I do not think that you are reading the data
> correctly.
> 
> The issues that we have seen via this list and
> storage-discuss
> have implicated downrev firmware on cards, and the
> various different
> disk drives that people choose to attach to those
> cards.
Thanks for pointing that out. I have indeed noticed such reports, but I didn't 
see any specific plans or acks to address these issues in mpt. Thus the 
question is whether these reports justify the assumption that there is anything 
wrong with mpt in general.

> 
> The use of SAS expanders with mpt-based cards is
> *not* an issue.
> The use of MPxIO with mpt-based cards is *not* an
> issue.

I didn't want to make the point of denying some of mpt's core features. I just 
saw a couple of reports that involved SAS Expanders, specifically those based 
on LSI silicon, reported under the "mpt problem" umbrella, and I understood 
that these issues were quite obstinate.

> Personally, I'm quite happy with the LSISAS3081E that
> I have
> installed in my system, with the attached 320Gb
> consumer-grade
> SATA2 disks.
> 

Excellent. That's encouraging. I am planning a similar configuration, with WD 
RE3 1 TB disks though.

Regards,

Tonmaus
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-28 Thread Richard Elling

On Jan 28, 2010, at 2:23 PM, Michelle Knight wrote:

> Hi Folks,
> 
> As usual, trust me to come up with the unusual.  I'm planning ahead for 
> future expansion and running tests.
> 
> Unfortunately until 2010-2 comes out I'm stuck with 111b (no way to upgrade 
> to anything other than 130, which gives me problems)
> 
> Anyway, here is the situation.
> 
> Initial installation drive is a 40gig drive given over to Open Solaris.
> Second drive is an 80 gig drive.
> 
> The aim is to mirror the operating system in a way that I can remove the 
> 40gig drive from the system and have the 80 gig drive boot.
> 
> At this point, you're probably thinking that you've heard it all before.
> 
> I believe that the drive size difference is causing a problem.
> 
> I kill the EFI partition and set up a Solaris partition.  Yes, I even reboot 
> the box to ensure that the Solaris partition has stuck.
> 
> I run the usual ... prtvtoc /dev/rdsk/c4t0d0s2 | fmthard -s - 
> /dev/rdsk/c4t1d0s2 ... command and it is here that I think something is going 
> wrong ...

Don't do that. You are basically copying the label for a 40 GB drive onto an 
80 GB drive, which magically transforms the 80 GB drive into (presto 
change-o!) a 40 GB drive.  Use format(1m) and set up the SMI label and 
partitions as you need.

[I consider prtvtoc | fmthard to be a virus :-(]
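
In other words, relabel the 80 GB drive by hand with something like this
(a sketch, using the c4t1d0 name from your fmthard command):

# format -e c4t1d0
format> fdisk       # create a Solaris fdisk partition covering the disk
format> label       # when prompted, choose the SMI label, not EFI
format> partition   # size slice 0 by hand, then label again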

On Jan 28, 2010, at 2:29 PM, Michelle Knight wrote:
> A bit more information... this is what I've used the "all free hog" to
> generate:
> Part  Tag          Flag  Cylinders   Size      Blocks
>  0    unassigned   wm    3 - 9725    74.48GB   (9723/0/0)  15615
>  1    unassigned   wm    0           0         (0/0/0)     0
>  2    backup       wu    0 - 9725    74.50GB   (9726/0/0)  156248190
>  3    unassigned   wm    0           0         (0/0/0)     0
>  4    unassigned   wm    0           0         (0/0/0)     0
>  5    unassigned   wm    0           0         (0/0/0)     0
>  6    unassigned   wm    0           0         (0/0/0)     0
>  7    unassigned   wm    0           0         (0/0/0)     0
>  8    boot         wu    0 - 0       7.84MB    (1/0/0)     16065
>  9    alternates   wm    1 - 2       15.69MB   (2/0/0)     32130
> 
> ...and when I attempt to add c19d0s0 to the pool, I get...
> 
> m...@cougar:~# zpool attach rpool c7d0s0 c19d0s0
> invalid vdev specification  
> use '-f' to override the following errors:  
> /dev/dsk/c19d0s0 overlaps with /dev/dsk/c19d0s2 
> 
> Is it OK for me to use the -f or have I got something critically wrong here?

This is annoying protectionism. If the disk does not currently contain
data you care about, go ahead and use -f.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-28 Thread Cindy Swearingen

Hi Michelle,

Your previous mail about the disk label reverting to EFI makes me wonder
whether you used the format -e option to relabel the disk, but your disk
label below looks fine.

This also might be a known bug (6419310), whose workaround is to use the 
-f option to zpool attach.


An interim test would be to create a test pool with c19d0s0, like this:

# zpool create test c19d0s0

If that works, then destroy the test pool and try to attach it to the
root pool. If it complains again, then use the -f option.
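
That is, roughly (using the device names from your output):

# zpool create test c19d0s0
# zpool destroy test
# zpool attach rpool c7d0s0 c19d0s0   # add -f if the overlap warning persists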

Thanks,

Cindy

On 01/28/10 15:29, Michelle Knight wrote:

A bit more information... this is what I've used the "all free hog" to
generate:
Part  Tag          Flag  Cylinders   Size      Blocks
 0    unassigned   wm    3 - 9725    74.48GB   (9723/0/0)  15615
 1    unassigned   wm    0           0         (0/0/0)     0
 2    backup       wu    0 - 9725    74.50GB   (9726/0/0)  156248190
 3    unassigned   wm    0           0         (0/0/0)     0
 4    unassigned   wm    0           0         (0/0/0)     0
 5    unassigned   wm    0           0         (0/0/0)     0
 6    unassigned   wm    0           0         (0/0/0)     0
 7    unassigned   wm    0           0         (0/0/0)     0
 8    boot         wu    0 - 0       7.84MB    (1/0/0)     16065
 9    alternates   wm    1 - 2       15.69MB   (2/0/0)     32130

...and when I attempt to add c19d0s0 to the pool, I get...

m...@cougar:~# zpool attach rpool c7d0s0 c19d0s0
invalid vdev specification  
use '-f' to override the following errors:  
/dev/dsk/c19d0s0 overlaps with /dev/dsk/c19d0s2 


Is it OK for me to use the -f or have I got something critically wrong here?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-28 Thread Freddie Cash
Personally, I'd go with 4x raidz2 vdevs, each with 6 drives.  You may not get 
as much raw storage space, but you can lose up to 2 drives per vdev, and you'll 
get more IOPS than with a 3x vdev setup.

Our current 24-drive storage servers use the 3x raidz2 vdevs with 8 drives in 
each.  Performance is good, but not great (tops out at 300 MBps using SATA 
drives and controllers).  This is using 2 12-port RAID controllers, so one of 
the vdevs is split across the controllers.

If I could rebuild things from scratch, I'd go with 4x 8-port SATA controllers, 
and use 4x 6-drive raidz2, using a separate controller for each vdev.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for setting ACL

2010-01-28 Thread Cindy Swearingen

Hi--

I need to collect some more info:

1. What Solaris release is this?

2. Send me the output of this command on the file system below:

# zfs get aclmode,aclinherit pool/dataset

3. What copy command are you using to copy testfile?
In addition, are you using any options.

Thanks,

Cindy

On 01/28/10 14:37, CD wrote:

Hey, thanks for replying!

I've been accessing my server with samba, but now that I'm switching 
over to nfs, I can't seem to get the ACL right..


Basically, moving and overwriting files seems to work fine. But if I 
copy a file - either from an external source or internally on the server 
- the permissions get messed up. Either I lose permission to write to 
the file, or I lose all permissions..



My access hierarchy:
1. Master group with full access to all files on server
2. Master group with only read access to all files on server
3. Local group with rw access to only this filesystem
4. Local group with read access to only this filesystem
5. Deny everyone else


The template used on the filesystem:

group:su:full_set:f:allow,\
group:su:full_set:d:allow,\
group:su::f:deny,\
group:su::d:deny,\
group:vu:read_set:f:allow,\
group:vu:read_set:d:allow,\
group:vu:wxpdDAWCos:f:deny,\
group:vu:wxpdDAWCos:d:deny,\
group:isorw:full_set:f:allow,\
group:isorw:full_set:d:allow,\
group:isorw::f:deny,\
group:isorw::d:deny,\
group:isor:read_set:f:allow,\
group:isor:read_set:d:allow,\
group:isor:wxpdDAWCos:f:deny,\
group:isor:wxpdDAWCos:d:deny,\
everyone@::f:allow,\
everyone@::d:allow,\
everyone@:full_set:f:deny,\
everyone@:full_set:d:deny \



If I make a new file on the server, the permissions looks fine, and I 
get full access:

--+  1 1000 workers  0 Jan 28 20:35 testfile
 group:su:rwxpdDaARWcCos:--I:allow
 group:su:--:--I:deny
 group:vu:r-a-R-c---:--I:allow
 group:vu:-wxpdD-A-W-Cos:--I:deny
group:isorw:rwxpdDaARWcCos:--I:allow
group:isorw:--:--I:deny
group:isor:r-a-R-c---:--I:allow
group:isor:-wxpdD-A-W-Cos:--I:deny
  everyone@:--:--I:allow
  everyone@:rwxpdDaARWcCos:--I:deny


If I make a copy of the file, however, it gets messy:
--+  1 1000 workers  0 Aug 29  2022 testfile_copy
group:su:rwxp--:---:deny
 group:su:rwxpdDaARWcCos:--I:allow
 group:su:--:--I:deny
 group:vu:r-:---:deny
 group:vu:r-a-R-c---:--I:allow
 group:vu:-wxpdD-A-W-Cos:--I:deny
group:isorw:rwxp--:---:deny
group:isorw:rwxpdDaARWcCos:--I:allow
group:isorw:--:--I:deny
group:isor:r-:---:deny
group:isor:r-a-R-c---:--I:allow
group:isor:-wxpdD-A-W-Cos:--I:deny
  everyone@:--:--I:allow
  everyone@:dDaARWcCos:--I:deny
 owner@:rwxp--:---:deny
 owner@:---A-W-Co-:---:allow
 group@:rwxp--:---:deny
 group@:--:---:allow
  everyone@:rwxp---A-W-Co-:---:deny
  everyone@:--a-R-c--s:---:allow

Why do the extra entries get added? The extra entry at the top seems 
to block me from accessing the file.


On 01/25/2010 09:18 PM, Cindy Swearingen wrote:

Hi CD,

Practical in what kind of environment? What are your goals?

Do you want the ACL deny entries to be inherited?

Do you plan to use CIFS to access these files + ACLs from
systems running Windows?

Thanks,

Cindy


On 01/25/10 07:21, CD wrote:

Hello forum.

I'm in the process of re-organizing my server and ACL-settings.
I've seen so many different ways of doing ACL, which makes me wonder 
how I should do it myself.



This is obviously the easiest way, only describing the positive 
permissions:

/usr/bin/chmod -R A=\
group:sa:full_set:fd:allow,\
group:vk:read_set:fd:allow \


However, I've seen people split each line, so you get one for each 
inheritance setting:


group:sa:full_set:f:allow,\
group:sa:full_set:d:allow,\
group:vk:read_set:f:allow,\
group:vk:read_set:d:allow \


And some include all negative permissions, like this:

group:sa:full_set:f:allow,\
group:sa:full_set:d:allow,\
group:sa::f:deny,\
group:sa::d:deny,\
group:vk:read_set:f:allow,\
group:vk:read_set:d:allow,\
group:vk:wxpdDAWCos:f:deny,\
group:vk:wxpdDAWCos:d:deny,\
everyone@::f:allow,\
everyone@::d:allow,\
everyone@:full_set:f:deny,\
everyone@:full_set:d:deny \

- Which, I admit, looks more tidy and thoroughly done, but is it 
practical?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] raidz using partitions

2010-01-28 Thread Lutz Schumann
Also, write performance may drop because of the disabled write cache: 
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storage_Pools

Just a hint, have not tested this. 

Robert
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-28 Thread Michelle Knight
A bit more information... this is what I've used the "all free hog" to
generate:
Part  Tag          Flag  Cylinders   Size      Blocks
 0    unassigned   wm    3 - 9725    74.48GB   (9723/0/0)  15615
 1    unassigned   wm    0           0         (0/0/0)     0
 2    backup       wu    0 - 9725    74.50GB   (9726/0/0)  156248190
 3    unassigned   wm    0           0         (0/0/0)     0
 4    unassigned   wm    0           0         (0/0/0)     0
 5    unassigned   wm    0           0         (0/0/0)     0
 6    unassigned   wm    0           0         (0/0/0)     0
 7    unassigned   wm    0           0         (0/0/0)     0
 8    boot         wu    0 - 0       7.84MB    (1/0/0)     16065
 9    alternates   wm    1 - 2       15.69MB   (2/0/0)     32130

...and when I attempt to add c19d0s0 to the pool, I get...

m...@cougar:~# zpool attach rpool c7d0s0 c19d0s0
invalid vdev specification  
use '-f' to override the following errors:  
/dev/dsk/c19d0s0 overlaps with /dev/dsk/c19d0s2 

Is it OK for me to use the -f or have I got something critically wrong here?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small connect another disk

2010-01-28 Thread dick hoogendijk
On Thu, 2010-01-28 at 08:44 -0700, Cindy Swearingen wrote:
> Or, if possible, connect another larger disk and attach it to the original 
> root 
> disk or even replace the smaller root pool disk with the larger disk.

I go for that one. But since it's a somewhat older system I only have
IDE and SATA(150) connections. IDE disks are rare these days.

Question: do SATA2 disks work on SATA(1) connections?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-28 Thread Michelle Knight
Hi Folks,

As usual, trust me to come up with the unusual.  I'm planning ahead for future 
expansion and running tests.

Unfortunately until 2010-2 comes out I'm stuck with 111b (no way to upgrade to 
anything other than 130, which gives me problems)

Anyway, here is the situation.

Initial installation drive is a 40gig drive given over to Open Solaris.
Second drive is an 80 gig drive.

The aim is to mirror the operating system in a way that I can remove the 40gig 
drive from the system and have the 80 gig drive boot.

At this point, you're probably thinking that you've heard it all before.

I believe that the drive size difference is causing a problem.

I kill the EFI partition and set up a Solaris partition.  Yes, I even reboot 
the box to ensure that the Solaris partition has stuck.

I run the usual ... prtvtoc /dev/rdsk/c4t0d0s2 | fmthard -s - 
/dev/rdsk/c4t1d0s2 ... command and it is here that I think something is going 
wrong ...

... because when I add the drive to the zpool, it flips the partition back to 
EFI, which means my grub installation is useless.

Some extra information: I can't add c19d0s0 - I have to add c19d0. This could 
be the point where it is reverting to EFI.

If I try to add c19d0s0 I get, "cannot open '/dev/dsk/c19d0s0': No such device 
or address"

... which makes me think that fmthard is putting the wrong information onto 
the 80gig drive because the "slice" sizes will be different.

Have I reached the right conclusion? If so, how do I get around this? Do I have 
to somehow manually slice the drive up?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-01-28 Thread Lutz Schumann
While thinking about ZFS as the next-generation filesystem without limits, I am 
wondering if the real world is ready for this kind of incredible technology ... 

I'm actually speaking of hardware :)

ZFS can handle a lot of devices. Once the import bug 
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed, 
it should be able to handle a lot of disks. 

I want to ask the ZFS community and users what large-scale deployments are out 
there.  How many disks? How much capacity? Single pool or many pools on a 
server? How does resilver work in those environments? How do you back up? 
What is the experience so far? Major headaches? 

It would be great if large scale users would share their setups and experiences 
with ZFS. 

Will you ? :)
Thanks, 
Robert
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread Cindy Swearingen



On 01/28/10 14:19, Lori Alt wrote:

On 01/28/10 14:08, dick hoogendijk wrote:

On Thu, 2010-01-28 at 12:34 -0700, Lori Alt wrote:
  
But those could be copied by send/recv from the larger disk (current 
root pool) to the smaller disk (intended new root pool).  You won't be 
attaching anything until you can boot off the smaller disk and then it 
won't matter what's on the larger disk because attaching the larger disk 
to the root  mirror will destroy the contents of the larger disk anyway.



You are right of course.
Are these right values for amd64 swap/dump:
zfs create -V 2G rpool/dump
zfs create -V 2G -b 4k rpool/swap
Are these -b 4k values OK?

  
I suggest that you set the sizes of the dump and swap zvols to match the 
zvols created by install on your original boot disk (the dump zvol size 
is particularly difficult to get right because install calls the kernel to 
determine the optimal value).


The block size for rpool/dump should be 128k.  The block size for swap 
zvols should be 4k for x86 (or amd64) platforms and 8k for sparc platforms.


Lori


The 128k block size for rpool/dump should be set by default starting in 
build 102 due to integration of CR 6725698.


cs
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Storage J4400 SATA Interposer Card

2010-01-28 Thread Lutz Schumann
No picture, but something like this: 
http://www.provantage.com/supermicro-aoc-smp-lsiss9252~7SUP91MC.htm ?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Storage J4400 SATA Interposer Card

2010-01-28 Thread Stanley Chiu
I managed to get a picture of the interposer card: 
http://i46.tinypic.com/wspoxu.jpg

So from that, you can see that it uses LSI's LSISS1320 AAMUX, but specifically, 
it looks like they use a custom produced version of the LSISS9132, like this: 
http://www.lsi.com/DistributionSystem/AssetDocument/documentation/storage/standard_product_ics/sata_multiplexers/lsiss9132_pb_fin.pdf

Doesn't look like this is going to be something that can be sourced from a 
third party.

Anyone have any ideas?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for setting ACL

2010-01-28 Thread CD

Hey, thanks for replying!

I've been accessing my server with samba, but now that I'm switching 
over to nfs, I can't seem to get the ACL right..


Basically, moving and overwriting files seems to work fine. But if I 
copy a file - either from an external source or internally on the server 
- the permissions get messed up. Either I lose permission to write to 
the file, or I lose all permissions..



My access hierarchy:
1. Master group with full access to all files on server
2. Master group with only read access to all files on server
3. Local group with rw access to only this filesystem
4. Local group with read access to only this filesystem
5. Deny everyone else


The template used on the filesystem:

group:su:full_set:f:allow,\
group:su:full_set:d:allow,\
group:su::f:deny,\
group:su::d:deny,\
group:vu:read_set:f:allow,\
group:vu:read_set:d:allow,\
group:vu:wxpdDAWCos:f:deny,\
group:vu:wxpdDAWCos:d:deny,\
group:isorw:full_set:f:allow,\
group:isorw:full_set:d:allow,\
group:isorw::f:deny,\
group:isorw::d:deny,\
group:isor:read_set:f:allow,\
group:isor:read_set:d:allow,\
group:isor:wxpdDAWCos:f:deny,\
group:isor:wxpdDAWCos:d:deny,\
everyone@::f:allow,\
everyone@::d:allow,\
everyone@:full_set:f:deny,\
everyone@:full_set:d:deny \
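
The template above is applied with chmod's A= form; a minimal sketch with a
shortened ACE list and a hypothetical target path:

/usr/bin/chmod -R A=\
group:su:full_set:fd:allow,\
group:vu:read_set:fd:allow,\
everyone@::fd:allow \
/tank/data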



If I make a new file on the server, the permissions looks fine, and I 
get full access:

--+  1 1000 workers  0 Jan 28 20:35 testfile
 group:su:rwxpdDaARWcCos:--I:allow
 group:su:--:--I:deny
 group:vu:r-a-R-c---:--I:allow
 group:vu:-wxpdD-A-W-Cos:--I:deny
group:isorw:rwxpdDaARWcCos:--I:allow
group:isorw:--:--I:deny
group:isor:r-a-R-c---:--I:allow
group:isor:-wxpdD-A-W-Cos:--I:deny
  everyone@:--:--I:allow
  everyone@:rwxpdDaARWcCos:--I:deny


If I make a copy of the file, however, it gets messy:
--+  1 1000 workers  0 Aug 29  2022 testfile_copy
group:su:rwxp--:---:deny
 group:su:rwxpdDaARWcCos:--I:allow
 group:su:--:--I:deny
 group:vu:r-:---:deny
 group:vu:r-a-R-c---:--I:allow
 group:vu:-wxpdD-A-W-Cos:--I:deny
group:isorw:rwxp--:---:deny
group:isorw:rwxpdDaARWcCos:--I:allow
group:isorw:--:--I:deny
group:isor:r-:---:deny
group:isor:r-a-R-c---:--I:allow
group:isor:-wxpdD-A-W-Cos:--I:deny
  everyone@:--:--I:allow
  everyone@:dDaARWcCos:--I:deny
 owner@:rwxp--:---:deny
 owner@:---A-W-Co-:---:allow
 group@:rwxp--:---:deny
 group@:--:---:allow
  everyone@:rwxp---A-W-Co-:---:deny
  everyone@:--a-R-c--s:---:allow

Why do the extra entries get added? The extra entry at the top seems 
to block me from accessing the file.


On 01/25/2010 09:18 PM, Cindy Swearingen wrote:

Hi CD,

Practical in what kind of environment? What are your goals?

Do you want the ACL deny entries to be inherited?

Do you plan to use CIFS to access these files + ACLs from
systems running Windows?

Thanks,

Cindy


On 01/25/10 07:21, CD wrote:

Hello forum.

I'm in the process of re-organizing my server and ACL-settings.
I've seen so many different ways of doing ACL, which makes me wonder 
how I should do it myself.



This is obviously the easiest way, only describing the positive 
permissions:

/usr/bin/chmod -R A=\
group:sa:full_set:fd:allow,\
group:vk:read_set:fd:allow \


However, I've seen people split each line, so you get one for each 
inheritance setting:


group:sa:full_set:f:allow,\
group:sa:full_set:d:allow,\
group:vk:read_set:f:allow,\
group:vk:read_set:d:allow \


And some include all negative permissions, like this:

group:sa:full_set:f:allow,\
group:sa:full_set:d:allow,\
group:sa::f:deny,\
group:sa::d:deny,\
group:vk:read_set:f:allow,\
group:vk:read_set:d:allow,\
group:vk:wxpdDAWCos:f:deny,\
group:vk:wxpdDAWCos:d:deny,\
everyone@::f:allow,\
everyone@::d:allow,\
everyone@:full_set:f:deny,\
everyone@:full_set:d:deny \

- Which, I admit, looks more tidy and thoroughly done, but is it 
practical?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and EMC Replication Software

2010-01-28 Thread Mark Woelfel

Hello,

 Can anybody tell me if EMC's Replication software is supported using 
ZFS, and if so, is there any particular version of Solaris that it is 
supported with?

  thanks,

   -mark

--
 Mark Woelfel
Storage TSC Backline Volume Products 
Sun Microsystems 
Work: 781-442-1370 Hours: 8 A.M. - 5 P.M. EST

For immediate assistance dial 1-800-USA4-SUN, option 2.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread Lori Alt

On 01/28/10 14:08, dick hoogendijk wrote:

On Thu, 2010-01-28 at 12:34 -0700, Lori Alt wrote:
  
But those could be copied by send/recv from the larger disk (current 
root pool) to the smaller disk (intended new root pool).  You won't be 
attaching anything until you can boot off the smaller disk and then it 
won't matter what's on the larger disk because attaching the larger disk 
to the root  mirror will destroy the contents of the larger disk anyway.



You are right of course.
Are these right values for amd64 swap/dump:
zfs create -V 2G rpool/dump
zfs create -V 2G -b 4k rpool/swap
Are these -b 4k values OK?

  
I suggest that you set the sizes of the dump and swap zvols to match the 
zvols created by install on your original boot disk (the dump zvol size 
is particularly difficult to get right because install calls the kernel to 
determine the optimal value).


The block size for rpool/dump should be 128k.  The block size for swap 
zvols should be 4k for x86 (or amd64) platforms and 8k for sparc platforms.
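
Putting that together, the zvols would be recreated along these lines (a
sketch; the 2G sizes are just the figures from your mail and should really
match what install chose originally):

zfs create -V 2G -b 128k rpool/dump   # dump zvol, 128k volblocksize
zfs create -V 2G -b 4k rpool/swap     # swap zvol, 4k on x86/amd64 (8k on sparc)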


Lori


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread dick hoogendijk
On Thu, 2010-01-28 at 12:34 -0700, Lori Alt wrote:
> But those could be copied by send/recv from the larger disk (current 
> root pool) to the smaller disk (intended new root pool).  You won't be 
> attaching anything until you can boot off the smaller disk and then it 
> won't matter what's on the larger disk because attaching the larger disk 
> to the root  mirror will destroy the contents of the larger disk anyway.

You are right of course.
Are these right values for amd64 swap/dump:
zfs create -V 2G rpool/dump
zfs create -V 2G -b 4k rpool/swap
Are these -b 4k values OK?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-28 Thread James C. McPherson

On 28/01/10 09:36 PM, Tonmaus wrote:

Thanks for your answer.

I asked primarily because of the mpt timeout issues I
saw on the list.


Hi Arnaud,

I am looking into the LSI SAS 3081 as well. My current understanding
with mpt issues is that the "sticky" part of these problems is rather
related to multipath features, that is, using port multipliers or SAS
expanders. Avoiding these, one should be fine.

I am quite a newbie though. Just judging from what I read here.


I do not think that you are reading the data correctly.

The issues that we have seen via this list and storage-discuss
have implicated downrev firmware on cards, and the various different
disk drives that people choose to attach to those cards.

The use of SAS expanders with mpt-based cards is *not* an issue.
The use of MPxIO with mpt-based cards is *not* an issue.

Both MPxIO and SAS Expanders are an essential part of the whole
picture that is the SS7000 appliance series, as well as the
J4x00 series.

Personally, I'm quite happy with the LSISAS3081E that I have
installed in my system, with the attached 320Gb consumer-grade
SATA2 disks.



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Sun Storage J4400 SATA Interposer Card

2010-01-28 Thread Stanley Chiu
Going by the parts lists here: 
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/J4400/components&source=

All the SATA drive kits come with the hard drive, mounting bracket, and a SATA 
interposer card.

We want to use 2TB drives, but Sun wants $1645 (!) for a single kit. That's 
just criminal.

We may be able to source the mounting brackets, but I need to find out whether 
we can source the SATA interposer card.

Who here has worked with the J4200s or J4400s with SATA drives? Do you know if 
the SATA interposer card that's used is completely proprietary, or do they use 
an off-the-shelf part?

Could they possibly be using the LSI LSISS9252?

http://www.lsi.com/storage_home/products_home/standard_product_ics/sas_sata_protocol_bridge/lsiss9252/index.html

Does that look like what the J4200/J4400 uses?

Thanks!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread Lori Alt

On 01/28/10 12:05, Dick Hoogendijk wrote:

Op 28-1-2010 17:35, Cindy Swearingen schreef:

Thomas,

Excellent and much better suggestion... :-)

You can use beadm to specify another root pool by using the -p option.
The beadm operation will set the bootfs pool property and update the
GRUB entry.

Dick, you will need to update the BIOS to boot from the smaller disk.
It's not that great an idea after all. Creating a new ABE in the new 
root pool goes well, BUT all the other file systems on rpool 
(rpool/export, export/home, etc.) don't get transferred. So, attaching 
is not possible because '/export/home/me' is busy ;-)


But those could be copied by send/recv from the larger disk (current 
root pool) to the smaller disk (intended new root pool).  You won't be 
attaching anything until you can boot off the smaller disk and then it 
won't matter what's on the larger disk because attaching the larger disk 
to the root  mirror will destroy the contents of the larger disk anyway.


lori




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-28 Thread Lutz Schumann
Yes, here it is (performance is VMware on a laptop, so sorry for that).

How did I test?

1) My Disks: 

LUN ID       Device  Type   Size     Volume   Mounted  Remov  Attach
c0t0d0       sd4     cdrom  No Media          no       yes    ata
c1t0d0       sd0     disk   8GB      syspool  no       no     mpt
c1t1d0       sd1     disk   20GB     data     no       no     mpt
c1t2d0       sd2     disk   20GB     data     no       no     mpt
c1t3d0       sd3     disk   20GB     data     no       no     mpt
c1t4d0       sd8     disk   4GB               no       no     mpt
syspo~/swap          zvol   768.0MB  syspool  no       no

2) My Pools:
  
volume: data
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
data        ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    c1t1d0  ONLINE       0     0     0
    c1t2d0  ONLINE       0     0     0
    c1t3d0  ONLINE       0     0     0
errors: No known data errors

volume: syspool
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
syspool     ONLINE       0     0     0
  c1t0d0s0  ONLINE       0     0     0

errors: No known data errors

3) Add the cache device to syspool:
zpool add -f syspool cache c1t4d0s2


r...@nexenta:/volumes# zpool status
  pool: data
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
data        ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    c1t1d0  ONLINE       0     0     0
    c1t2d0  ONLINE       0     0     0
    c1t3d0  ONLINE       0     0     0

errors: No known data errors

  pool: syspool
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
syspool     ONLINE       0     0     0
  c1t0d0s0  ONLINE       0     0     0
cache
  c1t4d0s2  ONLINE       0     0     0

errors: No known data errors

4) Do I/O on the data volume and watch if the l2arc is filled with "zpool 
iostat": 

cmd: 
cd /volumes/data
iozone -s 1G -i 0 -i 1 (for I/O) 

Typically looks like this: 

              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        1.47G  58.0G      0    131      0  9.47M
  raidz1    1.47G  58.0G      0    131      0  9.47M
    c1t1d0      -      -      0    100      0  8.45M
    c1t2d0      -      -      0     77      0  4.74M
    c1t3d0      -      -      0     77      0  5.48M
----------  -----  -----  -----  -----  -----  -----
syspool     1.87G  6.06G      2      0  23.8K      0
  c1t0d0s0  1.87G  6.06G      2      0  23.7K      0
cache           -      -      -      -      -      -
  c1t4d0s2  95.9M  3.89G      0      0      0   127K
----------  -----  -----  -----  -----  -----  -----

5) Do the same I/O on the syspool: 

cd /volumes
iozone -s 1G -i 0 -i 1 (for I/O)

              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data         407K  59.5G      0      0      0      0
  raidz1     407K  59.5G      0      0      0      0
    c1t1d0      -      -      0      0      0      0
    c1t2d0      -      -      0      0      0      0
    c1t3d0      -      -      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
syspool     2.35G  5.59G      0    167  6.25K  14.2M
  c1t0d0s0  2.35G  5.59G      0    167  6.25K  14.2M
cache           -      -      -      -      -      -
  c1t4d0s2   406M  3.59G      0     80      0  9.59M
----------  -----  -----  -----  -----  -----  -----


6) You can see that the l2arc in syspool is used only when I/O goes to the syspool itself. 

Release is build 104 with some patches.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-28 Thread Scott Meilicke
It looks like there is not a free slot for a hot spare? If that is the case, 
then it is one more factor to push towards raidz2, as you will need time to 
remove the failed disk and insert a new one. During that time you don't want to 
be left unprotected.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread Dick Hoogendijk

Op 28-1-2010 17:35, Cindy Swearingen schreef:

Thomas,

Excellent and much better suggestion... :-)

You can use beadm to specify another root pool by using the -p option.
The beadm operation will set the bootfs pool property and update the
GRUB entry.

Dick, you will need to update the BIOS to boot from the smaller disk.
It's not that great an idea after all. Creating a new ABE in the new 
root pool goes well, BUT all the other file systems on rpool (rpool/export, 
export/home, etc.) don't get transferred. So, attaching is not possible 
because '/export/home/me' is busy ;-)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-28 Thread Richard Elling
On Jan 28, 2010, at 10:54 AM, Lutz Schumann wrote:

> Actually, I tested this. 
> 
> If I add a l2arc device to the syspool it is not used when issuing I/O to 
> the data pool (note: on a root pool it must not be a whole disk, but only a 
> slice of it; otherwise ZFS complains that root disks may not contain an EFI 
> label). 

In my tests it does work. Can you share your test plan?
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-28 Thread Lutz Schumann
Actually, I tested this. 

If I add a l2arc device to the syspool it is not used when issuing I/O to the 
data pool (note: on a root pool it must not be a whole disk, but only a slice of 
it; otherwise ZFS complains that root disks may not contain an EFI label). 

So this does not work - unfortunately :(

Just for Info. 
Robert
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Need help with repairing zpool :(

2010-01-28 Thread Lori Alt


First, you might want to send this out to caiman-disc...@opensolaris.org 
as well in order to find the experts in the OpenSolaris install 
process.  (I am familiar with zfs booting in general and the legacy 
installer, but not so much about the OpenSolaris installer).


Second, including the text of the commands you issued below and the 
messages received would give me context and help me figure out what's 
going on and how to help.


Third, booting the system from the OpenSolaris install medium (CD or 
whatever) might give you some more diagnostic flexibility while you're 
trying to figure this out.  It may be that once you're booted from the 
install CD, you can simply import the pool (like any other pool, nothing 
special about the root pool in this way) and retrieve the files you need 
from it without having to worry about booting the pool.  In that case, 
you'll want to use the -R <altroot> option when importing the pool and 
you will have to explicitly mount some of the datasets since some of the 
datasets in root pools have their "canmount" property set to "noauto".
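
For example, something along these lines (a sketch; the pool and dataset
names are the typical OpenSolaris defaults and may differ on your system):

  # booted from the install CD
  zpool import -R /a rpool
  zfs mount rpool/ROOT/opensolaris   # canmount=noauto, so mount explicitly
  # datasets such as rpool/export should mount automatically under /a
  # then copy the needed files out of /a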


Lori

On 01/28/10 08:53, Eugene Turkulevich wrote:

How this happened is not the topic of this message; there is a problem now and
I need to solve it, if that is possible.

I have one 80 GB HDD; the entire disk is used for rpool, with the system and
the home folders on it. Reinstalling the system is no problem, but I need to
save some files from the user directories and, of course, there is no backup.

So, the problem is that the zpool is broken :( When I try to boot the system
from this disk, I get a message that there is no bootable partition.
As far as I can see, the data on the disk is untouched, so the only broken
information is the partition table. format says the disk is unknown and offers
to enter cylinders/heads/etc. manually. This has been done (copying the values
from an identical HDD), but when I try to enter the "partition" menu of the
format utility, I get the message "This disk may be in use by an application
that has modified the fdisk table", so there is no way to see or modify the
slices (I need slice s0, because rpool is on it).
Any attempt to access the s0 partition (dd if=/dev/dsk/v14t0d0s0 ...) reports
an I/O error, but p0 is readable.
Also, the disk was originally formatted by the OpenSolaris installer to use
the whole disk.

Please help me: is it possible to access the pool on this disk?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-28 Thread Lutz Schumann
Some very interesting insights on the availability calculations: 
  http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl

For streaming also look at: 
   http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6732803

Regards, 
Robert
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Performance of partition based SWAP vs. ZFS zvol SWAP

2010-01-28 Thread Bob Friesenhahn

On Wed, 27 Jan 2010, RayLicon wrote:


If no one has any data on this issue then fine, but I didn't waste 
my time posting to this site to get responses that simply say -don't 
swap


Perhaps you can set up a test environment, measure this in a 
scientific way, and provide a formal summary for our edification?
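
For what it's worth, the zvol side of such a test is quick to set up
(a sketch; the size and names are placeholders):

  zfs create -V 4G rpool/swapvol
  swap -a /dev/zvol/dsk/rpool/swapvol
  swap -l    # confirm both swap devices are configured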


Thanks,

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Building big cheap storage system. What hardware to use?

2010-01-28 Thread Chris Du
That must be a combination of many things to make it happen.
i.e., expander revision, SAS HBA revision, firmware, disk model, firmware, etc.

I didn't see the problem on my system but I haven't used SATA disks with it so 
I can't say.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] primarycache=off, secondarycache=all

2010-01-28 Thread Christo Kutrovsky
Thanks for the info, Dan.


I will test it out, but it won't be anytime soon. Waiting for that SSD.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread Dick Hoogendijk

On 28-1-2010 17:35, Cindy Swearingen wrote:

Thomas,

Excellent and much better suggestion... :-)

You can use beadm to specify another root pool by using the -p option.
The beadm operation will set the bootfs pool property and update the
GRUB entry.

Dick, you will need to update the BIOS to boot from the smaller disk.


Yes yes yes. It's a great idea. So, I first create this new root pool on 
the smaller disk and then I use beadm?

I can't use the same name (rpool), I guess.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread Dick Hoogendijk

On 28-1-2010 16:52, Thomas Maier-Komor wrote:

have you considered creating an alternate boot environment on the
smaller disk, rebooting into this new boot environment, and then
attaching the larger disk after destroying the old boot environment?

beadm might do this job for you...
   


What a great idea. Are there any special preparations I have to do on 
the second, smaller disk before I can create this ABE? It sounds like the 
simplest option after installing new hardware. ;-) I guess it's enough 
if the disk has a Sun partition on it?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread Cindy Swearingen

Thomas,

Excellent and much better suggestion... :-)

You can use beadm to specify another root pool by using the -p option.
The beadm operation will set the bootfs pool property and update the
GRUB entry.

Dick, you will need to update the BIOS to boot from the smaller disk.
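
A sketch of the sequence (the pool, BE, and device names are assumptions, and
I am not sure whether installgrub is also needed on the new disk):

  zpool create rpool2 c5d0s0
  beadm create -p rpool2 osol-new
  beadm activate osol-new
  init 6    # then switch the BIOS to boot from the new disk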

Thanks,

Cindy



On 01/28/10 08:52, Thomas Maier-Komor wrote:

On 28.01.2010 15:55, dick hoogendijk wrote:

Cindy Swearingen wrote:


On some disks, the default partitioning is not optimal and you have to
modify it so that the bulk of the disk space is in slice 0.

Yes, I know, but in this case the second disk indeed is smaller ;-(
So I wonder, should I reinstall the whole thing on this smaller disk and
then attach the bigger one? That would mean opening up the case
and all that, because I don't have a DVD player built in.
So I thought I'd go the zfs send|recv way. What are your thoughts about this?


Another thought is that a recent improvement was that you can attach a
disk that is an equivalent size, but not exactly the same geometry.
Which OpenSolaris release is this?

b131
And this only works if the difference is really (REALLY) small. :)



have you considered creating an alternate boot environment on the
smaller disk, rebooting into this new boot environment, and then
attaching the larger disk after destroying the old boot environment?

beadm might do this job for you...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Need help with repairing zpool :(

2010-01-28 Thread Eugene Turkulevich
How this happened is not the topic of this message; there is a problem now and
I need to solve it, if that is possible.

I have one 80 GB HDD; the entire disk is used for rpool, with the system and
the home folders on it. Reinstalling the system is no problem, but I need to
save some files from the user directories and, of course, there is no backup.

So, the problem is that the zpool is broken :( When I try to boot the system
from this disk, I get a message that there is no bootable partition.
As far as I can see, the data on the disk is untouched, so the only broken
information is the partition table. format says the disk is unknown and offers
to enter cylinders/heads/etc. manually. This has been done (copying the values
from an identical HDD), but when I try to enter the "partition" menu of the
format utility, I get the message "This disk may be in use by an application
that has modified the fdisk table", so there is no way to see or modify the
slices (I need slice s0, because rpool is on it).
Any attempt to access the s0 partition (dd if=/dev/dsk/v14t0d0s0 ...) reports
an I/O error, but p0 is readable.
Also, the disk was originally formatted by the OpenSolaris installer to use
the whole disk.

Please help me: is it possible to access the pool on this disk?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread Thomas Maier-Komor
On 28.01.2010 15:55, dick hoogendijk wrote:
> 
> Cindy Swearingen wrote:
> 
>> On some disks, the default partitioning is not optimal and you have to
>> modify it so that the bulk of the disk space is in slice 0.
> 
> Yes, I know, but in this case the second disk indeed is smaller ;-(
> So I wonder, should I reinstall the whole thing on this smaller disk and
> then attach the bigger one? That would mean opening up the case
> and all that, because I don't have a DVD player built in.
> So I thought I'd go the zfs send|recv way. What are your thoughts about this?
> 
>> Another thought is that a recent improvement was that you can attach a
>> disk that is an equivalent size, but not exactly the same geometry.
>> Which OpenSolaris release is this?
> 
> b131
> And this only works if the difference is really (REALLY) small. :)
> 

have you considered creating an alternate boot environment on the
smaller disk, rebooting into this new boot environment, and then
attaching the larger disk after destroying the old boot environment?

beadm might do this job for you...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-28 Thread Thomas Burgess
If a vdev fails, you lose the pool.

If you go with raidz1 and 2 of the RIGHT drives fail (2 in the same vdev),
your pool is lost.

I was faced with a similar situation recently and decided that raidz2 was
the better option.

It comes down to resilver times: if you look at how long it will take to
replace a failed drive, and then at the likelihood of another drive failing
during that process, raidz1 is much less attractive.
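
For what it's worth, the 3 x 8 raidz2 layout would be created roughly like
this (a sketch; device names are placeholders):

  zpool create tank \
    raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 \
    raidz2 c3t8d0 c3t9d0 c3t10d0 c3t11d0 c3t12d0 c3t13d0 c3t14d0 c3t15d0 \
    raidz2 c3t16d0 c3t17d0 c3t18d0 c3t19d0 c3t20d0 c3t21d0 c3t22d0 c3t23d0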


On Thu, Jan 28, 2010 at 10:26 AM, Ed Fang  wrote:

> Replacing my current media server with another larger capacity media
> server.   Also switching over to solaris/zfs.
>
> Anyhow we have 24 drive capacity.  These are for large sequential access
> (large media files) used by no more than 3 or 5 users at a time.  I'm
> inquiring as to what the best configuration for this is for vdevs.  I'm
> considering the following configurations
>
> 4 x 6-drive vdevs in RaidZ1 configuration
> 3 x 8-drive vdevs in RaidZ2 configuration
>
> Obviously if a drive fails, it'll take a good several days to resilver.
>  The data is important but not critical.  Using raidz1 allows you one drive
> failure, but my understanding is that if the zpool has four vdevs using
> raidz1, then the failure of more than one drive in any single vdev may fail
> the entire zpool. If that is the case, then it sounds better to consider
> 3 x 8 with raidz2.
>
> Am I on the right track here?  Thanks
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread Cindy Swearingen

Hi Dick,

Yes, my assessment is that you can use zfs send|recv to recreate the root
pool snapshots on the other disk, in addition to the other steps that are
needed for full root pool recovery. See the link below, following the
steps for storing the root pool snapshots as snapshots rather than
files. I should attempt a similar migration to see how it goes since
I've only tested this recovery going from a local-->remote system and
back, but not with two potential root pools on the same system. (?)

Maybe someone else can advise better but to me your choices are to
recreate the root pool on the second disk or reinstall. Or, if
possible, connect another larger disk and attach it to the original root 
disk or even replace the smaller root pool disk with the larger disk.
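
If you go the send|recv route, the core of it is something like this (a
sketch only; the pool, BE, and device names are assumptions, and the guide
below covers the additional boot-related steps):

  zfs snapshot -r rpool@migrate
  zfs send -R rpool@migrate | zfs recv -Fdu rpool2
  zpool set bootfs=rpool2/ROOT/b131 rpool2
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5d0s0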


Thanks,

Cindy

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

ZFS Root Pool Recovery



On 01/28/10 07:55, dick hoogendijk wrote:

Cindy Swearingen wrote:


On some disks, the default partitioning is not optimal and you have to
modify it so that the bulk of the disk space is in slice 0.


Yes, I know, but in this case the second disk indeed is smaller ;-(
So I wonder, should I reinstall the whole thing on this smaller disk and
then attach the bigger one? That would mean opening up the case
and all that, because I don't have a DVD player built in.
So I thought I'd go the zfs send|recv way. What are your thoughts about this?


Another thought is that a recent improvement was that you can attach a
disk that is an equivalent size, but not exactly the same geometry.
Which OpenSolaris release is this?


b131
And this only works if the difference is really (REALLY) small. :)


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-28 Thread Ed Fang
Replacing my current media server with another larger capacity media server.   
Also switching over to solaris/zfs.

Anyhow we have 24 drive capacity.  These are for large sequential access (large 
media files) used by no more than 3 or 5 users at a time.  I'm inquiring as to 
what the best configuration for this is for vdevs.  I'm considering the 
following configurations

4 x 6-drive vdevs in RaidZ1 configuration
3 x 8-drive vdevs in RaidZ2 configuration

Obviously if a drive fails, it'll take a good several days to resilver.  The 
data is important but not critical.  Using raidz1 allows you one drive failure, 
but my understanding is that if the zpool has four vdevs using raidz1, then the 
failure of more than one drive in any single vdev may fail the entire zpool. If 
that is the case, then it sounds better to consider 3 x 8 with raidz2.  

Am I on the right track here?  Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-28 Thread dick hoogendijk

Cindy Swearingen wrote:

> On some disks, the default partitioning is not optimal and you have to
> modify it so that the bulk of the disk space is in slice 0.

Yes, I know, but in this case the second disk indeed is smaller ;-(
So I wonder, should I reinstall the whole thing on this smaller disk and
then attach the bigger one? That would mean opening up the case
and all that, because I don't have a DVD player built in.
So I thought I'd go the zfs send|recv way. What are your thoughts about this?

> Another thought is that a recent improvement was that you can attach a
> disk that is an equivalent size, but not exactly the same geometry.
> Which OpenSolaris release is this?

b131
And this only works if the difference is really (REALLY) small. :)

-- 
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
+http://nagual.nl/ | SunOS 10u7 05/09 ZFS+

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZPOOL somehow got same physical drive assigned twice

2010-01-28 Thread Mark J Musante

On Wed, 27 Jan 2010, TheJay wrote:


Guys,

Need your help. My DEV131 OSOL build with my 21TB disk system somehow got 
really screwed:

This is what my zpool status looks like:

NAME STATE READ WRITE CKSUM
rzpool2  DEGRADED 0 0 0
  raidz2-0   DEGRADED 0 0 0
replacing-0  DEGRADED 0 0 0
  c6t1d0 OFFLINE  0 0 0
  c6t16d0ONLINE   0 0 0  256M resilvered
c6t2d0s2 ONLINE   0 0 0
c6t3d0p0 ONLINE   0 0 0
c6t4d0p0 ONLINE   0 0 0
c6t5d0p0 ONLINE   0 0 0
c6t6d0p0 ONLINE   0 0 0
c6t7d0p0 ONLINE   0 0 0
c6t8d0p0 ONLINE   0 0 0
c6t9d0   ONLINE   0 0 0
  raidz2-1   DEGRADED 0 0 0
c6t0d0   ONLINE   0 0 0
c6t1d0   UNAVAIL  0 0 0  cannot open
c6t10d0  ONLINE   0 0 0
c6t11d0  ONLINE   0 0 0
c6t12d0  ONLINE   0 0 0
c6t13d0  ONLINE   0 0 0
c6t14d0  ONLINE   0 0 0
c6t15d0  ONLINE   0 0 0

check drive c6t1d0 -> It appears in both raidz2-0 and raidz2-1 !!

How do I *remove* the drive from raidz2-1 (with edit/hexedit or anything 
else)? It is clearly a bug in ZFS that allowed me to assign the drive 
twice. Again: running DEV131 OSOL


Could you send us the 'zpool history' output?  It'd be interesting to know 
how this happened.  Anyway, the way to get out of this is to do a 'zpool 
detach' of c6t1d0 after the resilvering finishes, and then do a 'zpool 
online' of c6t1d0 to connect it back up to raidz2-1.
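
That is, roughly (after the resilver completes; if the duplicate name
confuses zpool, the vdev GUID from 'zdb' can be used instead of the device
name):

  zpool detach rzpool2 c6t1d0
  zpool online rzpool2 c6t1d0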



Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Building big cheap storage system. What hardware to use?

2010-01-28 Thread borov

Hello.
Thanks for the config, but Chenbro is not widely available here in Russia.
As for 3Ware RAID cards, I think it's better to get "dumb" HBA cards and let 
ZFS do all the work.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Strange random errors getting automatically repaired

2010-01-28 Thread gtirloni
On Wed, Jan 27, 2010 at 10:11 PM, Mark Bennett
 wrote:
> Hi Giovanni,
>
> I have seen these while testing the mpt timeout issue, and on other systems 
> during resilvering of failed disks and while running a scrub.
>
> Once so far on this test scrub, and several on yesterday's.
>
> I checked the iostat errors, and they weren't that high on that device, 
> compared to other disks.
>
> c2t34d0  ONLINE       0     0     1  25.5K repaired

I'm not seeing any errors at all (and the servers are very loaded):

# iostat -eXn
          ---- errors ---
  s/w h/w trn tot device
  0   0   0   0 c3t0d0
  0   0   0   0 c3t1d0
  0   0   0   0 c3t2d0
  0   0   0   0 c3t3d0
  0   0   0   0 c3t4d0
  0   0   0   0 c3t5d0
  0   0   0   0 c3t6d0
  0   0   0   0 c3t7d0
  0   0   0   0 c3t8d0
  0   0   0   0 c3t9d0
  0   0   0   0 c3t10d0
  0   0   0   0 c3t11d0
  0   0   0   0 c3t12d0
  0   0   0   0 c3t13d0
  0   0   0   0 c3t14d0
  0   0   0   0 c3t15d0
  0   0   0   0 c3t16d0
  0   0   0   0 c3t17d0
  0   0   0   0 c3t18d0
  0   0   0   0 c3t19d0
  0   0   0   0 c3t20d0
  0   0   0   0 c3t21d0

Right now this is a mystery but I'm reading more about FMA and how it
could have decided something was wrong (since I can't find anything in
its error log).
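
For anyone following along, the FMA side can be inspected with the standard
tools:

  fmdump -eV | tail    # raw error telemetry
  fmadm faulty         # resources FMA has diagnosed as faulty
  fmstat               # per-module diagnosis statistics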

-- 
Giovanni
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-28 Thread Tonmaus
> Thanks for your answer.
> 
> I asked primarily because of the mpt timeout issues I
> saw on the list.

Hi Arnaud,

I am looking into the LSI SAS 3081 as well. My current understanding of the mpt 
issues is that the "sticky" part of these problems is related to multipath 
features, that is, the use of port multipliers or SAS expanders. Avoiding 
these, one should be fine. 
I am quite a newbie though. Just judging from what I read here.

Regards,

Tonmaus
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Building big cheap storage system. What hardware to use?

2010-01-28 Thread Kjetil Torgrim Homme
Freddie Cash  writes:

> We use the following for our storage servers:
> [...]
> 3Ware 9650SE PCIe RAID controller (12-port, muli-lane)
> [...]
> Fully supported by FreeBSD, so everything should work with
> OpenSolaris.

FWIW, I've used the 9650SE with 16 ports in OpenSolaris 2008.11 and
2009.06, and had problems with the driver just hanging after 4-5 days of
use.  iostat would report 100% busy on all drives connected to the card,
and even "uadmin 1 1" (low-level reboot command) was ineffective.  I had
to break into the debugger and do the reboot from there.  I was using
the newest driver from AMCC.

-- 
Kjetil T. Homme
Redpill Linpro AS - Changing the game

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raidz using partitions

2010-01-28 Thread Sanjeev
Albert,

On Wed, Jan 27, 2010 at 10:55:21AM -0800, Albert Frenz wrote:
> hi there,
> 
> maybe this is a stupid question, yet I haven't found an answer anywhere ;)
> Let's say I've got 3x 1.5TB HDDs; can I create equal partitions out of each
> and make a RAID5 out of them? Sure, the safety would drop, but that is not
> that important to me. With roughly 500GB partitions and the RAID5 formula of
> (n-1) x smallest drive, I should be able to get 4TB of storage instead of 3TB
> when using 3x 1.5TB in a normal RAID5. 
> 

Yes, it is possible to create partitions and configure them the way you
mentioned. However, along with the drop in safety, the performance will also
drop because of potential seek delays.
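
A sketch of what that layout would look like, giving 9 x 0.5TB raidz1 = 4TB
usable (device names are placeholders; note that a single dead disk removes
three members from the raidz at once, which is more than raidz1 can absorb,
so the pool only survives the loss of an individual slice, not a whole drive):

  zpool create tank raidz \
    c1t0d0s0 c1t1d0s0 c1t2d0s0 \
    c1t0d0s1 c1t1d0s1 c1t2d0s1 \
    c1t0d0s3 c1t1d0s3 c1t2d0s3   # s2 skipped: it conventionally maps the whole disk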

Thanks and regards,
Sanjeev
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Building big cheap storage system. What hardware to use?

2010-01-28 Thread borov



SAS disks are more expensive. Besides, there are no 2TB SAS 7200 RPM drives on 
the market yet.


Seagate released a 2 TB SAS drive last year.
http://www.seagate.com/ww/v/index.jsp?locale=en-US&vgnextoid=c7712f655373f110VgnVCM10f5ee0a0aRCRD


Yes, it was announced. But it is not available in Russia yet. I think it 
is not available anywhere.


Take some time to Google the model number. It is ST32000444SS for the 2TB version.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss