Re: [zfs-discuss] ZFS slows down over a couple of days

2011-01-13 Thread Stephan Budach
Hi all, thanks a lot for your suggestions. I have checked all of them, and neither the network itself nor any other check indicated any problem. Alas, I think I know what is going on… ehh… my current zpool has two vdevs that are actually not evenly sized, as shown by zpool iostat -v: zpool
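
For anyone following along, the per-vdev imbalance shows up like this (a sketch; the pool name "tank" is hypothetical):

    # Compare the alloc/free columns of each top-level vdev; once one
    # vdev is nearly full, new writes pile onto the other and throughput
    # drops to what that single vdev can deliver.
    zpool iostat -v tank 5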

[zfs-discuss] zpool scalability and performance

2011-01-13 Thread Stephan Budach
Hi, the ZFS_Best_Practises_Guide states this: Keep vdevs belonging to one zpool of similar sizes; otherwise, as the pool fills up, new allocations will be forced to favor larger vdevs over smaller ones, and this will cause subsequent reads to come from a subset of underlying devices, leading
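
In command form, the mismatch the guide warns about looks roughly like this (a sketch with hypothetical device names):

    # First vdev: mirror of 2TB disks.
    zpool create tank mirror c1t0d0 c1t1d0
    # Second vdev: mirror of 500GB disks -- the size mismatch the guide
    # advises against, since new allocations will favor the 2TB vdev.
    zpool add tank mirror c2t0d0 c2t1d0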

Re: [zfs-discuss] Size of incremental stream

2011-01-13 Thread fred
Thanks for this explanation. So there is no real way to estimate the size of the increment? Anyway, for this particular filesystem, I'll stick with rsync, and yes, the difference was 50G! Thanks
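
For reference, the incremental send being sized here looks like this (dataset and host names hypothetical); as noted later in the thread, there is currently no dry-run option to pre-compute the stream size:

    # Send only the blocks that changed between the two snapshots.
    zfs snapshot tank/fs@today
    zfs send -i tank/fs@yesterday tank/fs@today | ssh backuphost zfs recv -F backup/fs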

Re: [zfs-discuss] zfs send tape autoloaders?

2011-01-13 Thread Edward Ned Harvey
From: Richard Elling [mailto:richard.ell...@gmail.com] This means the current probability of any sha256 collision in all of the data in the whole world, using a ridiculously small block size, assuming all ... it doesn't matter. Other posters have found collisions and a collision without
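
For context, the back-of-the-envelope birthday bound behind such probability claims (my own sketch, not from the thread):

    P(collision) ~= n^2 / 2^(b+1)   for n random blocks and a b-bit hash
    n = 2^64 blocks, b = 256:  P ~= 2^128 / 2^257 = 2^-129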

Re: [zfs-discuss] serious problem plz need your help ( I/O error)

2011-01-13 Thread Benji
Maybe this can be of help: (ZFS Administration Guide) http://docs.sun.com/app/docs/doc/819-5461/gavwg?a=view

Re: [zfs-discuss] zpool scalability and performance

2011-01-13 Thread Benji
The way I understand it is that you should add new mirrors (vdevs) of the same size as the vdevs already attached to the pool in question. That is, if your vdevs are mirrors of 2TB drives, don't add a new mirror of, say, 1TB drives. I might be wrong, but this is my understanding.
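
Spelled out as commands, that advice reads roughly like this (hypothetical devices):

    # Pool already built from mirrors of 2TB drives:
    #   zpool create tank mirror c1t0d0 c1t1d0
    # Grow it with another 2TB mirror, not a smaller one:
    zpool add tank mirror c2t0d0 c2t1d0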

Re: [zfs-discuss] zil and root on the same SSD disk

2011-01-13 Thread Jorgen Lundman
Whenever I do a root pool, i.e., configure a pool using the c?t?d?s0 notation, it will always complain about overlapping slices, since *s2 is the entire disk. This warning seems excessive, but -f will ignore it. As for the ZIL, the first time around I created a slice for it. This worked well; the second
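
A sketch of the layout being described (slice numbers hypothetical); s2 conventionally maps the whole disk, which is what triggers the overlap warning:

    # Root pool on slice 0; -f overrides the overlapping-slice complaint.
    zpool create -f rpool c0t0d0s0
    # Dedicated log device (ZIL) for a data pool, on another slice of
    # the same SSD.
    zpool add tank log c0t0d0s1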

[zfs-discuss] serious problem plz need your help ( I/O error)

2011-01-13 Thread Omar MEZRAG
Hi all, I ran into a serious problem when I upgraded my zpool!! (big mistake) I booted from opensolaris milax 05 to import my rpool and got some errors like -- zpool import -fR /mnt rpool milax zfs : WARNING can't open objset for rpool/zpnes/z-email/ROOT milax zfs :

[zfs-discuss] mixing drive sizes within a pool

2011-01-13 Thread Wim van den Berge
I have a pile of aging Dell MD-1000's lying around that have been replaced by new primary storage. I've been thinking of using them to create some archive/backup storage for my primary ZFS systems. Unfortunately they do not all contain identical drives. Some of the older MD-1000's have

Re: [zfs-discuss] zfs send tape autoloaders?

2011-01-13 Thread David Strom
Moving to a new SAN, both LUNs will not be accessible at the same time. Thanks for the several replies I've received, sounds like the dd to tape mechanism is broken for zfs send, unless someone knows otherwise or has some trick? I'm just going to try a tar to tape then (maybe using dd),
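
For the record, the two mechanisms being weighed (tape device and dataset names hypothetical); the usual caveat is that a single bad bit makes an entire zfs send stream unreceivable, while tar localizes damage to individual files:

    # zfs send to tape via dd -- the approach reported as broken here:
    zfs send tank/fs@snap | dd of=/dev/rmt/0n bs=1024k
    # tar to tape instead:
    tar cf /dev/rmt/0n /tank/fs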

[zfs-discuss] Migrating iSCSI volumes between pools

2011-01-13 Thread Brian
I have a situation coming up soon in which I will have to migrate some iSCSI backing stores set up with COMSTAR. Are there steps published anywhere on how to move these between pools? Does one still use send/receive, or do I somehow just move the backing store? I have moved filesystems before
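
One plausible approach, assuming the backing stores are zvols (all names hypothetical; not necessarily what the thread later recommends):

    # Replicate the zvol to the new pool...
    zfs snapshot oldpool/vol1@migrate
    zfs send oldpool/vol1@migrate | zfs recv newpool/vol1
    # ...then point COMSTAR at the copy, e.g. by re-importing the LU:
    sbdadm import-lu /dev/zvol/rdsk/newpool/vol1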

Re: [zfs-discuss] zfs send tape autoloaders?

2011-01-13 Thread Stephan Budach
On 13.01.11 15:00, David Strom wrote: Moving to a new SAN, both LUNs will not be accessible at the same time. Thanks for the several replies I've received, sounds like the dd to tape mechanism is broken for zfs send, unless someone knows otherwise or has some trick? I'm just going to try

Re: [zfs-discuss] zpool scalability and performance

2011-01-13 Thread a . smith
Basically I think yes, you need to add all the vdevs you require in the circumstances you describe. You just have to consider what ZFS is able to do with the disks that you give it. If you have 4x mirrors to start with, then all writes will be spread across all disks and you will get nice

Re: [zfs-discuss] zfs send tape autoloaders?

2011-01-13 Thread David Magda
On Thu, January 13, 2011 09:00, David Strom wrote: Moving to a new SAN, both LUNs will not be accessible at the same time. Thanks for the several replies I've received, sounds like the dd to tape mechanism is broken for zfs send, unless someone knows otherwise or has some trick? I'm just

Re: [zfs-discuss] Size of incremental stream

2011-01-13 Thread Matthew Ahrens
On Thu, Jan 13, 2011 at 4:36 AM, fred f...@mautadine.com wrote: Thanks for this explanation. So there is no real way to estimate the size of the increment? Unfortunately not for now. Anyway, for this particular filesystem, I'll stick with rsync, and yes, the difference was 50G! Why? I

Re: [zfs-discuss] mixing drive sizes within a pool

2011-01-13 Thread Richard Elling
On Jan 12, 2011, at 5:45 PM, Wim van den Berge wrote: I have a pile of aging Dell MD-1000's lying around that have been replaced by new primary storage. I've been thinking of using them to create some archive/backup storage for my primary ZFS systems. Unfortunately they do not all

Re: [zfs-discuss] mixing drive sizes within a pool

2011-01-13 Thread Freddie Cash
On Wed, Jan 12, 2011 at 5:45 PM, Wim van den Berge wvandenbe...@altep.com wrote: I have a pile of aging Dell MD-1000's lying around that have been replaced by new primary storage. I've been thinking of using them to create some archive/backup storage for my primary ZFS systems.
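
One layout commonly suggested for this situation (my sketch, hypothetical devices; not necessarily what the truncated replies recommend): build each vdev from identically sized disks, one vdev per shelf, and accept some size difference between the vdevs:

    zpool create backup raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0   # 750GB shelf
    zpool add    backup raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0   # 1TB shelf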

Re: [zfs-discuss] Hard Errors on HDDs

2011-01-13 Thread Richard Elling
Hard errors are a generic classification. fmdump -eV shows the sense/asc/ascq, which is generally more useful for diagnosis. More below... On Jan 1, 2011, at 7:50 AM, Benji wrote: Hi, I recently noticed that there are a lot of Hard Errors on multiple drives being reported by
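
The diagnosis commands mentioned, for reference:

    # Full fault-management error log, including SCSI sense/asc/ascq:
    fmdump -eV
    # Per-device soft/hard/transport error counters:
    iostat -En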

Re: [zfs-discuss] Migrating iSCSI volumes between pools

2011-01-13 Thread Richard Elling
On Jan 13, 2011, at 7:47 AM, Brian wrote: I have a situation coming up soon in which I will have to migrate some iSCSI backing stores set up with COMSTAR. Are there steps published anywhere on how to move these between pools? Does one still use send/receive or do I somehow just move the