IMHO
ZFS is smart, but not that smart when you deal with two different controllers.


Sent from my iPhone

On Mar 13, 2012, at 3:32 PM, P-O Yliniemi <p...@bsd-guide.net> wrote:

> Jim Klimov skrev 2012-03-13 15:24:
>> 2012-03-13 16:52, Hung-Sheng Tsao (LaoTsao) Ph.D wrote:
>>> hi
>>> are the disk/sas controller the same on both server?
>> 
>> Seemingly no. I don't see the output of "format" on Server2,
>> but for Server1 I see that the 3TB disks are used as IDE
>> devices (probably via motherboard SATA-IDE emulation?),
>> while on Server2 the addressing looks like SAS, with WWN names.
>> 
> Correct, the servers are all different.
> Server1 is an HP xw8400, and the disks are connected to the first four SATA 
> ports (the xw8400 has both SAS and SATA ports, of which I use the SAS ports 
> for the system disks).
> On Server2, the disk controller used for the data disks is an LSI SAS 9211-8i, 
> updated with the latest IT-mode firmware (also tested with the original 
> IR-mode firmware).
> 
> The output of the 'format' command on Server2 is:
> 
> AVAILABLE DISK SELECTIONS:
>       0. c2t0d0 <ATA-OCZ-VERTEX3-2.11-55.90GB>
>          /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@0,0
>       1. c2t1d0 <ATA-OCZ-VERTEX3-2.11-55.90GB>
>          /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@1,0
>       2. c3d1 <Unknown-Unknown-0001 cyl 38910 alt 2 hd 255 sec 63>
>          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
>       3. c4d0 <Unknown-Unknown-0001 cyl 38910 alt 2 hd 255 sec 63>
>          /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
>       4. c7t5000C5003F45CCF4d0 <ATA-ST3000DM001-9YN1-CC46-2.73TB>
>          /scsi_vhci/disk@g5000c5003f45ccf4
>       5. c7t5000C50044E0F0C6d0 <ATA-ST3000DM001-9YN1-CC46-2.73TB>
>          /scsi_vhci/disk@g5000c50044e0f0c6
>       6. c7t5000C50044E0F611d0 <ATA-ST3000DM001-9YN1-CC46-2.73TB>
>          /scsi_vhci/disk@g5000c50044e0f611
> 
> Note that this is what it looks like now, not at the time I sent the 
> question. The difference is that I have set up three other disks (items 4-6) 
> on the new server, and am currently transferring the contents from Server1 
> to this one using zfs send/receive.
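> 
> For reference, the transfer is just a recursive snapshot piped through ssh 
> from Server1, along these lines (the snapshot name and the receiving pool 
> name here are examples, not the exact ones I used):
> 
>   # zfs snapshot -r storage@migrate
>   # zfs send -R storage@migrate | ssh backup zfs receive -Fd storage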
> 
> I will probably be able to reconnect the original disks to Server2 tomorrow, 
> once the data has been transferred to the new disks (problem 'solved' at that 
> point), in case there is anything else I can do to try to solve it the 
> 'right' way.
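> 
> In the meantime, if it helps with the diagnosis, I can dump the ZFS vdev 
> labels that Server2 sees on the original disks, something like (the slice 
> may differ depending on how the disk is labeled):
> 
>   # zdb -l /dev/dsk/c7t5000C50044E0F316d0s0
> 
> and retry the import with an explicit device directory:
> 
>   # zpool import -d /dev/dsk storage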
> 
>> It may be possible that on one controller the disks are used
>> "natively" while on the other they are attached as a JBOD
>> or a set of RAID0 disks (so the controller's logic, or the
>> layout it expects, intervenes), as recently discussed on-list?
>> 
> On the HP, during a reboot, I was reminded that the 3TB disks are displayed 
> as 800GB-something by the BIOS (although they are correctly identified by 
> OpenIndiana and ZFS). This could be part of the problem with the ability to 
> export/import the pool.
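> 
> If the BIOS only handles 32-bit LBAs, that 800GB figure is roughly what you 
> would expect from the sector count wrapping around (my arithmetic, not 
> verified against this particular BIOS):
> 
>   5,860,533,168 sectors (3TB) - 4,294,967,296 (2^32)
>     = 1,565,565,872 sectors x 512 bytes ~= 801.6 GB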
> 
>>> On Mar 13, 2012, at 6:10, P-O Yliniemi <p...@bsd-guide.net> wrote:
>>> 
>>>> Hello,
>>>> 
>>>> I'm currently replacing a temporary storage server (server1) with the one 
>>>> that will be the final one (server2). To keep the data from the old 
>>>> server, I'm attempting to import its pool on the new one. Both servers are 
>>>> running OpenIndiana server build 151a.
>>>> 
>>>> Server 1 (old)
>>>> The zpool consists of three disks in a raidz1 configuration:
>>>> # zpool status
>>>>            c4d0    ONLINE       0     0     0
>>>>            c4d1    ONLINE       0     0     0
>>>>            c5d0    ONLINE       0     0     0
>>>> 
>>>> errors: No known data errors
>>>> 
>>>> Output of format command gives:
>>>> # format
>>>> AVAILABLE DISK SELECTIONS:
>>>>       0. c2t1d0 <LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126>
>>>>          /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
>>>>       1. c4d0 <ST3000DM-         W1F07HW-0001-2.73TB>
>>>>          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
>>>>       2. c4d1 <ST3000DM-         W1F05H2-0001-2.73TB>
>>>>          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
>>>>       3. c5d0 <ST3000DM-         W1F032R-0001-2.73TB>
>>>>          /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
>> 
>>>> Server 2 (new)
>>>> I have attached the disks to the new server in the same order (which 
>>>> shouldn't matter, as ZFS should locate the disks anyway).
>>>> zpool import gives:
>>>> 
>>>> root@backup:~# zpool import
>>>>   pool: storage
>>>>     id: 17210091810759984780
>>>>  state: UNAVAIL
>>>> action: The pool cannot be imported due to damaged devices or data.
>>>> config:
>>>> 
>>>>        storage                    UNAVAIL  insufficient replicas
>>>>          raidz1-0                 UNAVAIL  corrupted data
>>>>            c7t5000C50044E0F316d0  ONLINE
>>>>            c7t5000C50044A30193d0  ONLINE
>>>>            c7t5000C50044760F6Ed0  ONLINE
>>>> 
> 