Re: [zfs-discuss] ZFS slows down over a couple of days

2011-01-13 Thread Stephan Budach

Hi all,

thanks a lot for your suggestions. I have checked all of them and 
neither the network itself nor any other check indicated any problem.


Alas, I think I know what is going on… my current zpool has two 
vdevs that are not evenly sized, as shown by zpool iostat -v:


zpool iostat -v obelixData 5
                       capacity     operations    bandwidth
pool                 alloc   free   read  write   read  write
-------------------  -----  -----  -----  -----  -----  -----
obelixData           13,1T  5,84T     36    227   348K  21,5M
  c9t21D023038FA8d0  6,25T  59,3G     21     98   269K  9,25M
  c9t21D02305FF42d0  6,84T  5,78T     15    129  79,2K  12,3M
-------------------  -----  -----  -----  -----  -----  -----


So, the small vdev is actually more than 99% full, which is likely the 
root cause of this issue, especially since RAID setups tend to take 
tremendous performance hits once they exceed 90% space utilization.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool scalability and performance

2011-01-13 Thread Stephan Budach

Hi,

the ZFS Best Practices Guide states this:

Keep vdevs belonging to one zpool of similar sizes; Otherwise, as the 
pool fills up, new allocations will be forced to favor larger vdevs over 
smaller ones and this will cause subsequent reads to come from a subset 
of underlying devices leading to lower performance.


I am setting up a zpool comprised of mirrored LUNs, each one being 
exported as a JBOD from my FC RAIDs. Now, as the zpool will fill up, I 
intend to attach more mirrors to it and I am wondering, if I understood 
that correctly.


Let's assume I am creating the initial zpool like this: zpool create 
tank mirror disk1a disk1b mirror disk2a disk2b mirror disk3a disk3b 
mirror disk4a disk4b. After some time the zpool has filled up and I 
add another mirror to it: zpool add tank mirror disk5a disk5b. This 
would mean that all new data takes a performance hit, since it can 
only be stored on the new mirror vdev instead of being distributed 
across all vdevs, right?
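
(For reference, a minimal sketch of the two steps, using the placeholder disk 
names from the example; note that a new top-level mirror vdev is added with 
zpool add, whereas zpool attach only attaches a disk to an existing vdev:)

# initial pool: four mirrored vdevs
zpool create tank mirror disk1a disk1b mirror disk2a disk2b \
    mirror disk3a disk3b mirror disk4a disk4b

# later expansion: add a fifth top-level mirror vdev
zpool add tank mirror disk5a disk5b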


So, to circumvent this, it would be mandatory to add at least as many 
vdevs at once as are needed to satisfy the desired performance?


How do you guys handle this?

Cheers,
budy

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Size of incremental stream

2011-01-13 Thread fred
Thanks for this explanation

So there is no real way to estimate the size of the increment?

Anyway, for this particular filesystem, I'll stick with rsync and yes, the 
difference was 50G!
 
Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-13 Thread Edward Ned Harvey
 From: Richard Elling [mailto:richard.ell...@gmail.com]
 
  This means the current probability of any sha256 collision in all of the
  data in the whole world, using a ridiculously small block size, assuming
all
 
 ... it doesn't matter. Other posters have found collisions and a collision
 without
 verify means silent data corruption.  Do yourself a favor, enable verify.
  -- richard

Somebody has found sha256 collisions?  Perhaps it should be published to let
the world know.

http://en.wikipedia.org/wiki/SHA-2 says none have ever been found for either
sha-1(160) or sha-2(256 or 512).  
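
(For reference, if one wants to follow the "enable verify" advice, verification 
can be turned on together with dedup; a minimal sketch with a placeholder pool 
name:)

# verify blocks byte-for-byte on a checksum match before deduplicating
zfs set dedup=sha256,verify tank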

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] serious problem plz need your help ( I/O error)

2011-01-13 Thread Benji
Maybe this can be of help: (ZFS Administration Guide)

http://docs.sun.com/app/docs/doc/819-5461/gavwg?a=view
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool scalability and performance

2011-01-13 Thread Benji
The way I understand it is that you should add new mirrors (vdevs) of the same 
size as the vdevs already attached to the pool. That is, if your vdevs are 
mirrors of 2TB drives, don't add a new mirror of, say, 1TB drives. 

I might be wrong but this is my understanding.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zil and root on the same SSD disk

2011-01-13 Thread Jorgen Lundman
Whenever I do a root pool, ie, configure a pool using the c?t?d?s0 notation, it 
will always complain about overlapping slices, since *s2 is the entire disk. 
This warning seems excessive, but -f will ignore it.

As for the ZIL, the first time around I created a slice for it, which worked 
well. The second time I did:

# zfs create -V 2G rpool/slog
# zfs set refreservation=2G rpool/slog
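
(Presumably followed by something along the lines of the command below to 
attach the zvol as the separate log device; the pool name matches the status 
output that follows:)

zpool add zpool log /dev/zvol/dsk/rpool/slog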

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c9d0s0ONLINE   0 0 0

  pool: zpool
 state: ONLINE
config:

NAMESTATE READ WRITE CKSUM
zpool   ONLINE   0 0 0
  raidz1-0  ONLINE   0 0 0
c8t0d0  ONLINE   0 0 0
c8t1d0  ONLINE   0 0 0
c8t2d0  ONLINE   0 0 0
c8t3d0  ONLINE   0 0 0
c8t4d0  ONLINE   0 0 0
logs
  /dev/zvol/dsk/rpool/slog  ONLINE   0 0 0


Which I prefer now, as I can potentially change its size and reboot, compared to 
slices, which are much more static. I don't know how it compares performance-wise, 
but right now the NAS is fast enough (the NIC is the slowest part).
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] serious problem plz need your help ( I/O error)

2011-01-13 Thread Omar MEZRAG
Hi all,
I got a serious problem when I upgraded my zpool!! (big mistake)
I booted from OpenSolaris MilaX 05 to import my rpool 
and got some errors like: 
--
zpool import -fR /mnt rpool
milax zfs : WARNING can't open objset for rpool/zpnes/z-email/ROOT
milax zfs : WARNING can't open objset for rpool/zpnes/z-web/ROOT
cannot iterate filesystem I/O error
--
pool: rpool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested 
config:

NAMESTATE READ WRITE CKSUM
rpool ONLINE   0 0 7
  c4d0s0 ONLINE   0 0 28


errors: Permanent errors have been detected in the following files: 

rpool/data:0x1
rpool/backup:/databases/db.sql.gz
rpool/software:0x1
rpool/ROOT/opensolaris-1:/etc/inet/hotsts
[...]
-
zfs list 
cannot iterate filesystem I/O error
cannot iterate filesystem I/O error
NAME USED   AVAIL REFER  MOUNTPOINT
rpool 17.9G   667G   30K   /mnt/rpool
[...]

-
zfs mount rpool/ROOT/opensolaris-1   OK

cp /mnt/etc/hosts /root/hosts   OK

ls /root  nothing 

cat /mnt/etc/hosts 
cat: Input error on /mnt/etc/hosts: I/O error


So I can't restore my files. Please help me restore my database. 
thanks in advance
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] mixing drive sizes within a pool

2011-01-13 Thread Wim van den Berge
I have a pile of aging Dell MD-1000's laying around that have been replaced by 
new primary storage. I've been thinking of using them to create some 
archive/backup storage for my primary ZFS systems. 

Unfortunately they do not all contain identical drives. Some of the older 
MD-1000's have 15x500GB drives, some have all 750s, some all 1TBs. Since size 
and integrity matter here, not speed, I was thinking of creating one large 
pool containing multiple RAIDZ2s. Each RAIDZ2 would be one MD-1000 and would 
have 14 drives, reserving one drive per shelf as a spare.

The question is: The final pool would have spares of 500GB, 750GB and 1TB. Is 
ZFS smart enough to pick the right one if a drive fails? If not, is there a way 
to make this scenario work and still combine all available storage in a single 
pool?

Willem
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-13 Thread David Strom

Moving to a new SAN, both LUNs will not be accessible at the same time.

Thanks for the several replies I've received, sounds like the dd to tape 
mechanism is broken for zfs send, unless someone knows otherwise or has 
some trick?


I'm just going to try tar to tape (maybe via dd), then, as I don't 
have any extended attributes/ACLs.  I would appreciate any suggestions 
for block sizes for an LTO5 tape drive writing to LTO4 tapes 
(what I have).


Might send it across the (Gigabit Ethernet) network to a server that's 
already on the new SAN, but I was trying to avoid hogging down the 
network or the other server's NIC.


I've seen examples online for sending via the network; that involves 
piping zfs send over ssh to zfs receive, right?  Could I maybe use rsh, 
if I enable it temporarily between the two hosts?
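
(For reference, the usual ssh pattern is roughly the following; dataset, 
snapshot, and host names are placeholders:)

# snapshot the filesystem and stream it to the remote host over ssh
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | ssh newserver zfs receive -F newpool/data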


Thanks again, all.

--
David Strom

On 1/11/2011 11:43 PM, Ian Collins wrote:

On 01/12/11 04:15 AM, David Strom wrote:

I've used several tape autoloaders during my professional life. I
recall that we can use ufsdump or tar or dd with at least some
autoloaders where the autoloader can be set to automatically eject a
tape when it's full & load the next one. Has always worked OK whenever
I tried it.

I'm planning to try this with a new Quantum Superloader 3 with LTO5
tape drives and zfs send. I need to migrate a Solaris 10 host on a
V440 to a new SAN. There is a 10 TB zfs pool & filesystem that is
comprised of 3 LUNs of different sizes put in the zfs pool, and it's
almost full. Rather than copying the various sized Luns from the old
SAN storage unit to the new one & getting ZFS to recognize the pool, I
thought it would be cleaner to dump the zfs filesystem to the tape
autoloader & restore it to a 10TB LUN. The users can live without this
zfs filesystem for a few days.



Why can't you just send directly to the new LUN? Create a new pool, send
the data, export the old pool and rename.
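
(A rough sketch of that approach, with placeholder pool, device, and snapshot 
names:)

# create the new pool on the new LUN
zpool create newpool c5t0d0

# replicate all datasets, then retire the old pool
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F newpool
zpool export tank

# re-import the new pool under the old name
zpool export newpool
zpool import newpool tank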


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Migrating iSCSI volumes between pools

2011-01-13 Thread Brian
I have a situation coming up soon in which I will have to migrate some iSCSI 
backing stores set up with COMSTAR.  Are there steps published anywhere on how 
to move these between pools?  Does one still use send/receive, or do I somehow 
just move the backing store? I have moved filesystems before using send/receive, 
but I am not sure if iSCSI targets are treated the same.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-13 Thread Stephan Budach

On 13.01.11 15:00, David Strom wrote:

Moving to a new SAN, both LUNs will not be accessible at the same time.

Thanks for the several replies I've received, sounds like the dd to 
tape mechanism is broken for zfs send, unless someone knows otherwise 
or has some trick?


I'm just going to try a tar to tape then (maybe using dd), then, as I 
don't have any extended attributes/ACLs.  Would appreciate any 
suggestions for block sizes for LTO5 tape drive, writing to LTO4 tapes 
(what I have).


Might send it across the (Gigabit Ethernet) network to a server that's 
already on the new SAN, but I was trying to avoid hogging down the 
network or the other server's NIC.


I've seen examples online for sending via network, involves piping zfs 
send over ssh to zfs receive, right?  Could I maybe use rsh, if I 
enable it temporarily between the two hosts? 

Actually, mbuffer does a great job for that, too. Whenever I am using 
mbuffer I am achieving much higher throughput than with ssh.
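
(For reference, a typical mbuffer pipeline looks something like this; the 
buffer sizes, port, dataset, and host names are only placeholders:)

# on the receiving host: listen on a TCP port and feed zfs receive
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F newpool/data

# on the sending host: stream the snapshot into mbuffer
zfs send tank/data@migrate | mbuffer -s 128k -m 1G -O newserver:9090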


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool scalability and performance

2011-01-13 Thread a . smith
Basically, yes: I think that in the circumstances you describe you need to 
add all the vdevs you require at once.


You just have to consider what ZFS is able to do with the disks that 
you give it. If you have 4 mirrors to start with, then all writes will 
be spread across all disks and you will get nice performance using all 
8 spindles/disks. If you fill all of these up and then add one more 
mirror, it is only logical that new data will be written solely to the 
free space on the new mirror, and you will get the performance of 
writing to a single mirrored vdev.


To handle this, you would either have to add enough new vdevs to give 
you your required performance, or, if there is a fair amount of data 
turnaround on your pool (i.e. you are deleting old data, including from 
snapshots), you might get reasonable performance by adding a new mirror 
at some point before your existing pool is completely full. That is, 
new data will initially be written and spread across all disks, since 
there will be free space on all of them, and over time old data will be 
removed from the older vdevs. That way reads and writes would benefit 
from all vdevs most of the time, but it's not going to give you 
guarantees of that, I guess...


Anyway, that's what occurred to me on the subject! ;)

cheers Andy.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-13 Thread David Magda
On Thu, January 13, 2011 09:00, David Strom wrote:
 Moving to a new SAN, both LUNs will not be accessible at the same time.

 Thanks for the several replies I've received, sounds like the dd to tape
 mechanism is broken for zfs send, unless someone knows otherwise or has
 some trick?

 I'm just going to try a tar to tape then (maybe using dd), then, as I
 don't have any extended attributes/ACLs.  Would appreciate any
 suggestions for block sizes for LTO5 tape drive, writing to LTO4 tapes
 (what I have).

 Might send it across the (Gigabit Ethernet) network to a server that's
 already on the new SAN, but I was trying to avoid hogging down the
 network or the other server's NIC.

 I've seen examples online for sending via network, involves piping zfs
 send over ssh to zfs receive, right?  Could I maybe use rsh, if I enable
 it temporarily between the two hosts?

If you don't already have a backup infrastructure (remember: RAID !=
backup), this may be a good opportunity. Something like Amanda or Bacula
is gratis, and it could be useful for other circumstances.

If this is a one-off it may not be worth it, but having important data
without having (offline) backups is usually tempting fate.

If you're just going to go to tape, then suntar/gnutar/star can write
directly to it (or via rmt over the network), and there's no sense
necessarily going through dd; 'tar' is short for TApe aRchiver after all.
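
(For example, something along these lines writes straight to the non-rewinding 
tape device with a large blocking factor; the device path and blocking factor 
are placeholders to be tuned for the drive:)

# GNU tar with a blocking factor of 512 records (512 x 512 bytes = 256 KB per write)
gtar -c -b 512 -f /dev/rmt/0n /tank/data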

(However this is getting a bit OT for ZFS, and heading towards general
sysadmin related.)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Size of incremental stream

2011-01-13 Thread Matthew Ahrens
On Thu, Jan 13, 2011 at 4:36 AM, fred f...@mautadine.com wrote:
 Thanks for this explanation

 So there is no real way to estimate the size of the increment?

Unfortunately not for now.

 Anyway, for this particular filesystem, i'll stick with rsync and yes, the 
 difference was 50G!

Why?  I would expect rsync to be slower and send more data, and also
not be able to estimate how large the stream will be.

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mixing drive sizes within a pool

2011-01-13 Thread Richard Elling
On Jan 12, 2011, at 5:45 PM, Wim van den Berge wrote:

 I have a pile of aging Dell MD-1000's laying around that have been replaced 
 by new primary storage. I've been thinking of using them to create some 
 archive/backup storage for my primary ZFS systems. 
 
 Unfortunately they do not all contain identical drives. Some of the older 
 MD-1000's have 15x500GB drives, some have all 750's some all 1TB's. Since 
 size and integrity matters here, not speed. I was thinking of creating one 
 large pool containing multiple RAIDZ2's. Each RAIDZ2 would be one MD-1000 and 
 would have 14 drives, reserving one drive per shelf as a spare.

If you are going to put all drives in a shelf into a single vdev, then it will 
be better to use raidz3 than raidz2+spare.
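
(For reference, one raidz3 vdev per 15-drive shelf would be created roughly 
like this; device names are placeholders:)

zpool create archive raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 \
    c1t13d0 c1t14d0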

 
 The question is: The final pool would have spares of 500GB, 750GB and 1TB. Is 
 ZFS smart enough to pick the right one if a drive fails? If not, is there a 
 way to make this scenario work and still combine all available storage in a 
 single pool?

Use warm spares instead of hot spares, or raidz3.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mixing drive sizes within a pool

2011-01-13 Thread Freddie Cash
On Wed, Jan 12, 2011 at 5:45 PM, Wim van den Berge
wvandenbe...@altep.com wrote:
 I have a pile of aging Dell MD-1000's laying around that have been replaced 
 by new primary storage. I've been thinking of using them to create some 
 archive/backup storage for my primary ZFS systems.

 Unfortunately they do not all contain identical drives. Some of the older 
 MD-1000's have 15x500GB drives, some have all 750's some all 1TB's. Since 
 size and integrity matters here, not speed. I was thinking of creating one 
 large pool containing multiple RAIDZ2's. Each RAIDZ2 would be one MD-1000 and 
 would have 14 drives, reserving one drive per shelf as a spare.

 The question is: The final pool would have spares of 500GB, 750GB and 1TB. Is 
 ZFS smart enough to pick the right one if a drive fails? If not, is there a 
 way to make this scenario work and still combine all available storage in a 
 single pool?

While it may not be recommended as a best practice, there's nothing
wrong with using vdevs of different sizes.  You can even use vdevs
of different types (mirror + raidz1 + raidz2 + raidz3) in the same
pool, although you do have to force (-f) the add command.
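
(For reference, mixing vdev sizes or types only requires forcing the add, 
along these lines; pool and device names are placeholders:)

# add a raidz2 vdev of larger drives to a pool that already has a
# raidz2 vdev of smaller drives; -f overrides the mismatch warning
zpool add -f tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0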

My home ZFS box uses a 3-drive raidz1 vdev and a 2-drive mirror vdev
in the same pool, using 160 GB SATA and 120 GB IDE drives.

My work storage boxes use 8-drive raidz2 vdevs, mixed between 0.5 TB
SATA, 1.0 TB SATA, and 1.5 TB SATA.

Performance won't be as good as it could be due to the uneven
striping, especially when the smaller vdevs get to be full.  But it
works.

-- 
Freddie Cash
fjwc...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hard Errors on HDDs

2011-01-13 Thread Richard Elling
Hard errors are a generic classification.  fmdump -eV shows the 
sense/asc/ascq, which is generally more useful for diagnosis.  More below...
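
(For example, something like the following pulls the recent error reports; 
the time window is only an illustration:)

# verbose error reports from the last 24 hours, including SCSI sense data
fmdump -eV -t 24h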


On Jan 1, 2011, at 7:50 AM, Benji wrote:

 Hi,
 
 I recently noticed that there are a lot of hard errors being reported by 
 iostat on multiple drives. Also, dmesg reports various messages from 
 the mpt driver.
 
 My config is:
 MB: SUPERMICRO X8SIL-F
 HBA: AOC-USAS-L8i (LSI 1068)
 RAM: 4GB ECC
 SunOS SAN 5.11 snv_134 i86pc i386 i86pc Solaris
 
 My pool is a stripe of mirrored vdevs across 13 drives (one mirror had an 
 error on a drive, which I cleared, but just to be safe I added another drive 
 to that mirror):
 
 NAME STATE READ WRITE CKSUM
zpoolONLINE   0 0 0
  mirror-0   ONLINE   0 0 0
c4t13d0  ONLINE   0 0 0
c4t19d0  ONLINE   0 0 0
  mirror-1   ONLINE   0 0 0
c4t25d0  ONLINE   0 0 0
c4t31d0  ONLINE   0 0 0
  mirror-2   ONLINE   0 0 0
c4t12d0  ONLINE   0 0 0
c4t18d0  ONLINE   0 0 0
  mirror-3   ONLINE   0 0 0
c4t24d0  ONLINE   0 0 0
c4t30d0  ONLINE   0 0 0
  mirror-4   ONLINE   0 0 0
c4t11d0  ONLINE   0 0 0
c4t17d0  ONLINE   0 0 0
c4t10d0  ONLINE   0 0 0
  mirror-5   ONLINE   0 0 0
c4t23d0  ONLINE   0 0 0
c4t29d0  ONLINE   0 0 0
 
 
 Here's the output from iostat -En:
 
 c6d1 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
 Model: WDC WD3200BEKT- Revision:  Serial No:  WD-WXR1A30 Size: 320.07GB 
 320070352896 bytes
 Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
 Illegal Request: 0
 c7d1 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
 Model: WDC WD3200BEKT- Revision:  Serial No:  WD-WXR1A30 Size: 320.07GB 
 320070352896 bytes
 Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
 Illegal Request: 0
 c4t12d0  Soft Errors: 0 Hard Errors: 252 Transport Errors: 0
 Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0003 Serial No:
 Size: 2000.40GB 2000398934016 bytes
 Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
 Illegal Request: 0 Predictive Failure Analysis: 0
 c4t13d0  Soft Errors: 0 Hard Errors: 252 Transport Errors: 0
 Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0002 Serial No:
 Size: 2000.40GB 2000398934016 bytes
 Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
 Illegal Request: 0 Predictive Failure Analysis: 0
 c4t18d0  Soft Errors: 0 Hard Errors: 252 Transport Errors: 0
 Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0003 Serial No:
 Size: 2000.40GB 2000398934016 bytes
 Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
 Illegal Request: 0 Predictive Failure Analysis: 0
 c4t19d0  Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
 Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0002 Serial No:
 Size: 2000.40GB 2000398934016 bytes
 Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
 Illegal Request: 0 Predictive Failure Analysis: 0
 c4t24d0  Soft Errors: 0 Hard Errors: 252 Transport Errors: 0
 Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0003 Serial No:
 Size: 2000.40GB 2000398934016 bytes
 Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
 Illegal Request: 0 Predictive Failure Analysis: 0
 c4t25d0  Soft Errors: 0 Hard Errors: 252 Transport Errors: 0
 Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0002 Serial No:
 Size: 2000.40GB 2000398934016 bytes
 Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
 Illegal Request: 0 Predictive Failure Analysis: 0
 c4t30d0  Soft Errors: 0 Hard Errors: 252 Transport Errors: 0
 Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0003 Serial No:
 Size: 2000.40GB 2000398934016 bytes
 Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
 Illegal Request: 0 Predictive Failure Analysis: 0
 c4t31d0  Soft Errors: 0 Hard Errors: 252 Transport Errors: 0
 Vendor: ATA  Product: SAMSUNG HD203WI  Revision: 0002 Serial No:
 Size: 2000.40GB 2000398934016 bytes
 Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
 Illegal Request: 0 Predictive Failure Analysis: 0
 c4t17d0  Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
 Vendor: ATA  Product: WDC WD20EADS-32S Revision: 0A01 Serial No:
 Size: 2000.40GB 2000398934016 bytes
 Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
 Illegal Request: 0 Predictive Failure Analysis: 0
 c4t11d0  Soft Errors: 0 Hard Errors: 17 Transport Errors: 116
 Vendor: ATA  Product: WDC WD20EADS-32S Revision: 5G04 Serial No:
 Size: 

Re: [zfs-discuss] Migrating iSCSI volumes between pools

2011-01-13 Thread Richard Elling
On Jan 13, 2011, at 7:47 AM, Brian wrote:

 I have a situation coming up soon in which I will have to migrate some iSCSI 
 backing stores setup with comstar.  Are there steps published anywhere on how 
 to move these between pools?  Does one still use send/receive or do I somehow 
 just move the backing store? I have moved filesystems before using 
 send/receive but not sure if iSCSI targets are treated the same.

zfs send/receive works on datasets: filesystems or zvols.
For zvols being shared via iSCSI, there is at least one hidden parameter that
is used by COMSTAR. It is sent when you do a replication stream send,
using zfs send -R ...
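
(For reference, a minimal sketch of such a zvol migration; pool, zvol, and 
snapshot names are placeholders:)

# snapshot the backing zvol and replicate it along with its properties
zfs snapshot oldpool/iscsivol@migrate
zfs send -R oldpool/iscsivol@migrate | zfs receive -F newpool/iscsivol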
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss