[zfs-discuss] ZFS, ZIL, vq_max_pending and OSCON

2007-08-02 Thread Jay Edwards
The slides from my ZFS presentation at OSCON (as well as some additional
information) are available at http://www.meangrape.com/2007/08/oscon-zfs/


Jay Edwards
[EMAIL PROTECTED]
http://www.meangrape.com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ? ZFS dynamic striping over RAID-Z

2007-08-02 Thread Ralf Ramge
Tim Thomas wrote:
> If I create a storage pool with multiple RAID-Z stripes in it, does ZFS
> dynamically stripe data across all the RAID-Z stripes in the pool
> automagically?
>
> If I relate this back to my storage array experience, this would be
> "Plaiding", which is/was creating a RAID-0 logical volume across multiple
> hardware RAID-5 stripes.
>   
I did this a week ago while trying to get at least a bit of random read
performance out of an X4500. Normal RAIDZ(2) performance was between
0.8 and 5 MB/s, which was way too slow for our needs, so I used striped
RAIDZ to get a boost.

My test configuration:

c5t0/t4: system (mirrored)
c5t1/t5 - c5t3/t7: AVS bitmap volumes (mirrored)

This left 40 disks. I created 13 RAIDZ vdevs with 3 disks each, that is 39
disks in total, plus one hot spare. The script I used is appended below.

And yes, it results in striped RAIDZ arrays (I call it "RAIDZ0"), and my
data throughput was 13 times higher, as expected.

Hope this will help you a bit.

---
#!/bin/sh

/usr/sbin/zpool create -f big raidz c0t0d0s0 c1t0d0s0 c4t0d0s0 spare c7t7d0s0
/usr/sbin/zpool add -f big raidz c6t0d0s0 c7t0d0s0 c0t1d0s0
/usr/sbin/zpool add -f big raidz c1t1d0s0 c4t1d0s0 c6t1d0s0
/usr/sbin/zpool add -f big raidz c7t1d0s0 c0t2d0s0 c1t2d0s0
/usr/sbin/zpool add -f big raidz c4t2d0s0 c6t2d0s0 c7t2d0s0
/usr/sbin/zpool add -f big raidz c0t3d0s0 c1t3d0s0 c4t3d0s0
/usr/sbin/zpool add -f big raidz c6t3d0s0 c7t3d0s0 c0t4d0s0
/usr/sbin/zpool add -f big raidz c1t4d0s0 c4t4d0s0 c6t4d0s0
/usr/sbin/zpool add -f big raidz c7t4d0s0 c0t5d0s0 c1t5d0s0
/usr/sbin/zpool add -f big raidz c4t5d0s0 c6t5d0s0 c7t5d0s0
/usr/sbin/zpool add -f big raidz c0t6d0s0 c1t6d0s0 c4t6d0s0
/usr/sbin/zpool add -f big raidz c6t6d0s0 c7t6d0s0 c0t7d0s0
/usr/sbin/zpool add -f big raidz c1t7d0s0 c4t7d0s0 c6t7d0s0

/usr/sbin/zpool status
---
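
If you want to check that writes really are spread across all 13 vdevs, the
per-vdev view of zpool iostat is enough (a minimal sketch; it only assumes
the pool name "big" from the script above and a 5-second interval):

---
#!/bin/sh
# show per-vdev read/write bandwidth every 5 seconds while a test load runs
/usr/sbin/zpool iostat -v big 5
---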


-- 

Ralf Ramge
Senior Solaris Administrator, SCNA, SCSA

Tel. +49-721-91374-3963
[EMAIL PROTECTED] - http://web.de/

1&1 Internet AG
Brauerstraße 48
76135 Karlsruhe

Amtsgericht Montabaur HRB 6484

Vorstand: Henning Ahlert, Ralph Dommermuth, Matthias Ehrlich, Andreas 
Gauger, Matthias Greve, Robert Hoffmann, Norbert Lang, Achim Weiss
Aufsichtsratsvorsitzender: Michael Scheeren


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ? ZFS dynamic striping over RAID-Z

2007-08-02 Thread Tim Thomas
Hi

If I create a storage pool with multiple RAID-Z stripes in it, does ZFS
dynamically stripe data across all the RAID-Z stripes in the pool
automagically?

If I relate this back to my storage array experience, this would be
"Plaiding", which is/was creating a RAID-0 logical volume across multiple
hardware RAID-5 stripes.

Thanks

Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ? ZFS dynamic striping over RAID-Z

2007-08-02 Thread Alderman, Sean
ZFS will stripe across all top-level vdevs, regardless of the type of vdev
(mirror, raidz, single whole disk, disk slices, whatever).

e.g.

tank01
  mirror
   c1t0d0
   c1t1d0
  raidz
   c1t2d0
   c1t3d0
   c1t4d0
  mirror
   c0t0d0s7
   c0t1d0s7

That should give you 3 stripes, one per top-level vdev.
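
For what it's worth, a layout like that could be built roughly as follows (a
sketch only; the device names are the hypothetical ones from the tree above,
and -f is needed because the top-level vdevs have mismatched redundancy):

---
#!/bin/sh
# first top-level vdev: a two-way mirror
/usr/sbin/zpool create tank01 mirror c1t0d0 c1t1d0
# second top-level vdev: a 3-disk raidz (-f overrides the replication-level warning)
/usr/sbin/zpool add -f tank01 raidz c1t2d0 c1t3d0 c1t4d0
# third top-level vdev: a mirror of two slices
/usr/sbin/zpool add -f tank01 mirror c0t0d0s7 c0t1d0s7
# new writes are striped dynamically across all three top-level vdevs
/usr/sbin/zpool status tank01
---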


--
Sean

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tim Thomas
Sent: Thursday, August 02, 2007 7:00 AM
To: ZFS Discuss
Subject: [zfs-discuss] ? ZFS dynamic striping over RAID-Z

Hi

If I create a storage pool with multiple RAID-Z stripes in it, does ZFS
dynamically stripe data across all the RAID-Z stripes in the pool
automagically?

If I relate this back to my storage array experience, this would be
"Plaiding", which is/was creating a RAID-0 logical volume across multiple
hardware RAID-5 stripes.

Thanks

Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Firewire zpool transport rejected fatal error, 6560174

2007-08-02 Thread Jürgen Keil
> I think I have run into this bug, 6560174, with a firewire drive.

And 6560174 might be a duplicate of 6445725
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Disk replacement/upgrade

2007-08-02 Thread Matthew C Aycock
I am playing with ZFS on a jetStor 516F with 9 1TB E-SATA drives. This is our 
first real tests with ZFS and I working on how to replace our HA-NFS ufs file 
systems with ZFS counterparts. One of the things I am concerned with is how do 
I replace a disk array/vdev in a pool? It appears that is not possible at the 
moment.

For example, I want to replace the drives in this array with bigger ones. I
currently have 3 raidz vdevs and I am using about two thirds of the total
space. So, to keep ahead of the curve, I want to replace the 1TB drives
with 1.5TB drives.

Another example would be a pool with some older T3Bs and a newer
SE3511. I want to remove the T3Bs from the pool and replace them with an
expansion tray on the SE3511.

Any idea when I might be able to do this?

Matt
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Disk replacement/upgrade

2007-08-02 Thread Tomas Ögren
On 02 August, 2007 - Matthew C Aycock sent me these 1,0K bytes:

> I am playing with ZFS on a JetStor 516F with 9 1TB E-SATA drives. These
> are our first real tests with ZFS, and I am working out how to replace our
> HA-NFS UFS file systems with ZFS counterparts. One of the things I am
> concerned about is how to replace a disk array/vdev in a pool. It
> appears that is not possible at the moment.
> 
> For example, I want to replace the drives in this array
> with bigger ones. I currently have 3 raidz vdevs and I am using about
> two thirds of the total space. So, to keep ahead of the curve, I want
> to replace the 1TB drives with 1.5TB drives.

zpool replace mypool 1tbdevice 1.5tbdevice

When all devices in a raidz are grown, the raidz grows automatically
iirc (it does for mirrors at least).
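
So growing a whole raidz in place is just a series of single-disk replaces; a
rough sketch (hypothetical device names, and it assumes each new, larger disk
goes into the same slot as the disk it replaces):

---
#!/bin/sh
# replace one member at a time and let each resilver finish before the next
for disk in c1t2d0 c1t3d0 c1t4d0; do
    /usr/sbin/zpool replace mypool $disk
    # crude wait: "zpool status" reports "resilver in progress" while busy
    while /usr/sbin/zpool status mypool | grep -q "resilver in progress"; do
        sleep 60
    done
done
/usr/sbin/zpool list mypool
---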

> Another example would be a pool with some older T3Bs and a
> newer SE3511. I want to remove the T3Bs from the pool and replace them
> with an expansion tray on the SE3511.

zpool replace mypool t3bdevice se3511device

> Any idea when I might be able to do this?

You've been able to do that for a long time already.

You can replace a single device with another device. What you can't do
at the moment is, for example, replace a hwraid5 (single device) with a
raidz (multiple devices), or replace three T3Bs with a single SE3511. For
that, you need the evacuate/shrink feature, which I've heard has an ETA
around year's end.

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Firewire zpool transport rejected fatal error, 6560174

2007-08-02 Thread aric
> And 6560174 might be a duplicate of 6445725

I see what you mean. Unfortunately there does not appear to be a work-around.

It is beginning to sound like firewire drives are not a safe option for
backup. That is unfortunate when you have an Ultra 20 with only 2 disks.

Is there a way to destroy the pool on the device and start over?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [docs-discuss] Introduction to Operating Systems

2007-08-02 Thread Alan Coopersmith
Lisa Shepherd wrote:
> "Zettabyte File System" is the formal, expanded name of the file system and 
> "ZFS" is its abbreviation. In most Sun manuals, the name is expanded at first 
> use and the abbreviation used the rest of the time. Though I was surprised to 
> find that the Solaris ZFS System Administration Guide, which I would consider 
> the main source of ZFS information, doesn't seem to have "Zettabyte" anywhere 
> in it. Anyway, both names are official and correct, but since "Zettabyte" is 
> such a mouthful, "ZFS" is what gets used most of the time.

How current is that?   I thought that while "Zettabyte File System"
was the original name, use of it was dropped a couple years ago and
ZFS became the only name.   I don't see "Zettabyte" appearing anywhere
in the ZFS community pages.

-- 
-Alan Coopersmith-   [EMAIL PROTECTED]
 Sun Microsystems, Inc. - X Window System Engineering

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Firewire zpool transport rejected fatal error, 6560174

2007-08-02 Thread Jürgen Keil
> > And 6560174 might be a duplicate of 6445725
> 
> I see what you mean. Unfortunately there does not
> look to be a work-around. 

Nope, no work-around.  This is a scsa1394 bug; it
has some issues when it is used from interrupt context.

I have some source code diffs that are supposed to
fix the issue; see this thread:

http://www.opensolaris.org/jive/thread.jspa?messageID=46190
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [docs-discuss] Introduction to Operating Systems

2007-08-02 Thread Erblichs
http://www.sun.com/software/solaris/ds/zfs.jsp

Solaris ZFS—The Most Advanced File System on the Planet

Anyone who has ever lost important files, run out of space on a
partition, spent weekends adding new storage to servers, tried to grow
or shrink a file system, or experienced data corruption knows that there
is room for improvement in file systems and volume managers. The Solaris
Zettabyte File System (ZFS) is designed from the ground up to meet the
emerging needs of a general-purpose file system that spans the desktop
to the data center. 

Mitchell Erblich
Ex-Sun Eng
--

Alan Coopersmith wrote:
> 
> Lisa Shepherd wrote:
> > "Zettabyte File System" is the formal, expanded name of the file system and 
> > "ZFS" is its abbreviation. In most Sun manuals, the name is expanded at 
> > first use and the abbreviation used the rest of the time. Though I was 
> > surprised to find that the Solaris ZFS System Administration Guide, which I 
> > would consider the main source of ZFS information, doesn't seem to have 
> > "Zettabyte" anywhere in it. Anyway, both names are official and correct, 
> > but since "Zettabyte" is such a mouthful, "ZFS" is what gets used most of 
> > the time.
> 
> How current is that?   I thought that while "Zettabyte File System"
> was the original name, use of it was dropped a couple years ago and
> ZFS became the only name.   I don't see "Zettabyte" appearing anywhere
> in the ZFS community pages.
> 
> --
> -Alan Coopersmith-   [EMAIL PROTECTED]
>  Sun Microsystems, Inc. - X Window System Engineering
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Disk replacement/upgrade

2007-08-02 Thread Vincent Fox
As a novice, I understand that if you don't have any redundancy between vdevs
this is going to be a problem. Perhaps you can add mirroring to your existing
pool and make it work that way?

A pool made up of mirror pairs:

{cyrus4:137} zpool status
  pool: ms2
 state: ONLINE
 scrub: scrub completed with 0 errors on Sun Jul 22 00:47:51 2007
config:

        NAME                                 STATE     READ WRITE CKSUM
        ms2                                  ONLINE       0     0     0
          mirror                             ONLINE       0     0     0
            c4t600C0FF00A7E0A0E6F8A1000d0    ONLINE       0     0     0
            c4t600C0FF00A7E8D1EA7178800d0    ONLINE       0     0     0
          mirror                             ONLINE       0     0     0
            c4t600C0FF00A7E0A7219D78100d0    ONLINE       0     0     0
            c4t600C0FF00A7E8D7B3709D800d0    ONLINE       0     0     0

errors: No known data errors

So remove one half of the mirror and replace it with a larger disk. Wait for
everything to resilver, then replace the other half with a larger one as well.
The pool then expands.
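
With the pool above that would look something like this (a sketch only; the
c5t* targets are hypothetical stand-ins for the new, larger LUNs):

---
#!/bin/sh
# replace one side of the first mirror with a bigger LUN
/usr/sbin/zpool replace ms2 c4t600C0FF00A7E0A0E6F8A1000d0 c5t0d0
# wait until "zpool status ms2" shows the resilver has completed, then do
# the other side
/usr/sbin/zpool replace ms2 c4t600C0FF00A7E8D1EA7178800d0 c5t1d0
# once both halves of a mirror are larger, that vdev's extra capacity shows
# up (on some builds an export/import may be needed first)
/usr/sbin/zpool list ms2
---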

Alternatively, set up new arrays on a second server and use zfs send and
receive to duplicate the data.
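
A minimal send/receive sketch (the file system, snapshot, and host names are
hypothetical):

---
#!/bin/sh
# initial full copy to the new pool on the second server
/usr/sbin/zfs snapshot ms2/data@migrate1
/usr/sbin/zfs send ms2/data@migrate1 | ssh newhost /usr/sbin/zfs receive newpool/data
# later, catch up by sending only the changes since the first snapshot
/usr/sbin/zfs snapshot ms2/data@migrate2
/usr/sbin/zfs send -i migrate1 ms2/data@migrate2 | ssh newhost /usr/sbin/zfs receive newpool/data
---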
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Firewire zpool transport rejected fatal error, 6560174

2007-08-02 Thread aric
> Nope, no work-around.  

OK. Then I have 3 questions:

1) How do I destroy the pool that was on the firewire drive (so that ZFS stops
complaining about it)?

2) How can I reformat the firewire drive? Does this need to be done on a 
non-Solaris OS?

3) Can your code diffs be integrated into the OS on my end to use this drive, 
and if so, how?
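
For 1) and 2), I am guessing that something along these lines would work, if
the pool can still be imported at all (the pool name "fwpool" is hypothetical,
and importing it may itself trip the bug):

---
#!/bin/sh
# list pools visible on attached devices to confirm the name/id
/usr/sbin/zpool import
# import it (forcibly if it was last used elsewhere) and destroy it
/usr/sbin/zpool import -f fwpool
/usr/sbin/zpool destroy fwpool
# the disk can then be relabelled/repartitioned from Solaris with format(1M),
# so a non-Solaris OS should not be needed
---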

thanks
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss