RE: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-05 Thread Chris Forgeron
Yup, but the second set (a stripe of 2 raidz1's) can achieve slightly better performance, particularly on a system under a lot of load. There are a number of blog articles that discuss this in more detail than I care to get into here. Of course, that's a bit of a moot point, as you're not going t
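For reference, a rough sketch of how the two layouts under discussion might be created, assuming the nine 1.5 TB disks appear as ada1 through ada9 and a pool named "tank" (both names are hypothetical, not taken from the thread):

A single raidz2 vdev (7 data disks + 2 parity):
# zpool create tank raidz2 ada1 ada2 ada3 ada4 ada5 ada6 ada7 ada8 ada9

A stripe of two raidz1 vdevs (3+1 and 4+1), giving the same usable capacity:
# zpool create tank raidz1 ada1 ada2 ada3 ada4 raidz1 ada5 ada6 ada7 ada8 ada9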

Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-05 Thread Artem Belevich
On Wed, Jan 5, 2011 at 1:55 PM, Damien Fleuriot wrote:
> Well actually...
>
> raidz2:
> - 7x 1.5 tb = 10.5tb
> - 2 parity drives
>
> raidz1:
> - 3x 1.5 tb = 4.5 tb
> - 4x 1.5 tb = 6 tb , total 10.5tb
> - 2 parity drives in split thus different raidz1 arrays
>
> So really, in both cases 2 different

Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-05 Thread Damien Fleuriot
Well actually...

raidz2:
- 7x 1.5 tb = 10.5tb
- 2 parity drives

raidz1:
- 3x 1.5 tb = 4.5 tb
- 4x 1.5 tb = 6 tb , total 10.5tb
- 2 parity drives in split thus different raidz1 arrays

So really, in both cases 2 different parity drives and same storage...

---
Fleuriot Damien

On 5 Jan 2011, at 16

[releng_8 tinderbox] failure on i386/pc98

2011-01-05 Thread FreeBSD Tinderbox
TB --- 2011-01-05 20:39:14 - tinderbox 2.6 running on freebsd-stable.sentex.ca
TB --- 2011-01-05 20:39:14 - starting RELENG_8 tinderbox run for i386/pc98
TB --- 2011-01-05 20:39:14 - cleaning the object tree
TB --- 2011-01-05 20:39:38 - cvsupping the source tree
TB --- 2011-01-05 20:39:38 - /usr/bi

[releng_8 tinderbox] failure on i386/i386

2011-01-05 Thread FreeBSD Tinderbox
TB --- 2011-01-05 20:34:44 - tinderbox 2.6 running on freebsd-stable.sentex.ca
TB --- 2011-01-05 20:34:44 - starting RELENG_8 tinderbox run for i386/i386
TB --- 2011-01-05 20:34:44 - cleaning the object tree
TB --- 2011-01-05 20:35:15 - cvsupping the source tree
TB --- 2011-01-05 20:35:15 - /usr/bi

Re: gstripe/gpart problems.

2011-01-05 Thread Clifton Royston
On Wed, Jan 05, 2011 at 11:36:59AM +0200, Daniel Braniss wrote:
> Hi Clifton,
> I was getting very frustrated yesterday, hence the cryptic message; your
> response requires some background :-)
> the box is a Sun Fire X2200, which has bays for 2 disks (we have several of
> these)
> before the lat

RE: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-05 Thread Chris Forgeron
First off, raidz2 and raidz1 with copies=2 are not the same thing. raidz2 will give you two copies of parity instead of just one. It also guarantees that this parity is on different drives. You can sustain 2 drive failures without data loss. raidz1 with copies=2 will give you two copies of al
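To make the distinction concrete, a minimal sketch (pool and disk names hypothetical, not from the thread): raidz2 puts the redundancy in the vdev itself, while copies=2 is just a per-dataset property that stores each block twice, with no guarantee the two copies land on different disks.

raidz2 - survives any two whole-disk failures:
# zpool create tank raidz2 ada1 ada2 ada3 ada4 ada5 ada6 ada7 ada8 ada9

raidz1 with copies=2 - still only survives one disk failure in the vdev:
# zpool create tank raidz1 ada1 ada2 ada3 ada4
# zfs set copies=2 tank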

Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-05 Thread Rick Macklem
> Yes, to access the file volumes via any version of NFS, they need to
> be exported. (I don't think it would make sense to allow access to all
> of the server's data without limitations for NFSv4?)
>
> What is different (and makes it confusing for folks familiar with
> NFSv2,3)
> is the fact that

Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-05 Thread Rick Macklem
> Hi
>
> On 5 January 2011 12:09, Rick Macklem wrote:
>
> > You can also do the following:
> > For /etc/exports
> > V4: /
> > /usr/home -maproot=root -network 192.168.183.0 -mask 255.255.255.0
> >
> > Then mount:
> > # mount_nfs -o nfsv4 192.168.183.131:/usr/home /marek_nfs4/
> > (But only if th
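Pulling the pieces quoted above together, the server and client sides look like this (addresses and paths as used in this thread; treat it as a sketch of the general shape, not a verbatim copy of anyone's configuration):

/etc/exports on the server:
V4: /
/usr/home -maproot=root -network 192.168.183.0 -mask 255.255.255.0

Mount on the client:
# mount_nfs -o nfsv4 192.168.183.131:/usr/home /marek_nfs4/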

Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-05 Thread Rick Macklem
> Rick Macklem wrote:
>
> > ... one of the fundamental principles for NFSv2, 3 was a stateless
> > server ...
>
> Only as long as UDP transport was used. Any NFS implementation that
> used TCP for transport had thereby abandoned the stateless server
> principle, since a TCP connection itself req

Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-05 Thread Rick Macklem
> On Wednesday, January 05, 2011 5:55:53 am per...@pluto.rain.com wrote:
> > Rick Macklem wrote:
> >
> > > ... one of the fundamental principles for NFSv2, 3 was a stateless
> > > server ...
> >
> > Only as long as UDP transport was used. Any NFS implementation that
> > used TCP for transport had

Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-05 Thread Rick Macklem
> > You can also do the following:
> > For /etc/exports
> > V4: /
> > /usr/home -maproot=root -network 192.168.183.0 -mask 255.255.255.0
>
> Not in my configuration - '/' and '/usr' are different partitions
> (both UFS)
>
Hmm. Since entire volumes are exported for NFSv4, I can't remember if expor
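One possible way around the split between '/' and '/usr' (only a sketch of my own, not something suggested in the thread) is to place the NFSv4 root inside the file system that actually holds the exported data; the V4: line only names the root of the NFSv4 tree, and client mount paths are then interpreted relative to it:

/etc/exports:
V4: /usr
/usr/home -maproot=root -network 192.168.183.0 -mask 255.255.255.0

Client mount (path relative to the V4: root):
# mount_nfs -o nfsv4 192.168.183.131:/home /marek_nfs4/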

Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-05 Thread John Baldwin
On Wednesday, January 05, 2011 5:55:53 am per...@pluto.rain.com wrote:
> Rick Macklem wrote:
>
> > ... one of the fundamental principles for NFSv2, 3 was a stateless
> > server ...
>
> Only as long as UDP transport was used. Any NFS implementation that
> used TCP for transport had thereby aband

Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-05 Thread Marek Salwerowicz
You can also do the following:
For /etc/exports
V4: /
/usr/home -maproot=root -network 192.168.183.0 -mask 255.255.255.0

Not in my configuration - '/' and '/usr' are different partitions
(both UFS)

--
Marek Salwerowicz
___
freebsd-stable@freebsd.org

Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-05 Thread perryh
Rick Macklem wrote:
> ... one of the fundamental principles for NFSv2, 3 was a stateless
> server ...

Only as long as UDP transport was used. Any NFS implementation that
used TCP for transport had thereby abandoned the stateless server
principle, since a TCP connection itself requires that stat

Re: gstripe/gpart problems.

2011-01-05 Thread Daniel Braniss
> On Tue, Jan 04, 2011 at 04:21:31PM +0200, Daniel Braniss wrote:
> > Hi,
> > I have 2 ada disks striped:
> >
> > # gstripe list
> > Geom name: s1
> > State: UP
> > Status: Total=2, Online=2
> > Type: AUTOMATIC
> > Stripesize: 65536
> > ID: 2442772675
> > Providers:
> > 1. Name: stripe/s1
> > M
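For context, a stripe like the one shown in that listing would typically have been created and partitioned along these lines (device names here are hypothetical; only the 64 KB stripe size is taken from the output quoted above):

# gstripe label -v -s 65536 s1 ada0 ada1
# gpart create -s GPT stripe/s1
# gpart add -t freebsd-ufs stripe/s1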

Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-05 Thread Damien Fleuriot
Hi again List, I'm not so sure about using raidz2 anymore; I'm concerned about the performance. Basically I have 9x 1.5T sata drives. raidz2 and 2x raidz1 will provide the same capacity. Are there any cons to using 2x raidz1 instead of 1x raidz2? I plan on using an SSD drive for the OS, 40-