I recently tried moving a pool created under oi151a7 to ZFS-on-Linux, and the ZoL machine could not even see the pool to import it. In the time I had available, I did enough testing to convince myself that the ZoL side was not recognizing the whole-disk EFI label. Even "zdb" found no pools present on the entire 40-drive JBOD.
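Something along these lines (with hypothetical Linux device names) is enough to tell whether the labels are visible at all on the ZoL side; an EFI-labelled whole-disk pool normally keeps its ZFS labels inside the first GPT partition, so both places are worth checking:

    # Dump any ZFS labels zdb can find on the whole device and on the
    # first GPT partition (/dev/sdb is only an example device name).
    zdb -l /dev/sdb
    zdb -l /dev/sdb1

    # Ask zpool to scan a specific directory for importable pools
    # instead of relying on the default device scan.
    zpool import -d /dev/disk/by-id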
In my case, when I created a new pool from the ZoL system, the ZoL system's gparted and other partition utilities were convinced that the partition label was corrupted. This was not the case for the partitions that originally came over from the oi151a7 system: gparted and the other utilities recognized those labels, which matched how the labels looked from the oi151a7 system at the time I exported the pool. Yet the ZoL utilities were unable to find the headers on the oi151a7-created pool. I wonder if this is a similar label-compatibility issue.

Regards,
Marion

Andrew Gabriel via [email protected] said:

> I think the disk has two different zpool labels on it, probably at different
> offsets. One is probably an old one, and then the disk was repartitioned
> using a different scheme which put the new label in a different place and
> has not trampled fully over the old label.
>
> The cmdk driver in the first example is the legacy IDE disk driver. The
> scsi_vhci driver in the second example is mpxio, which will have been
> layered over a SCSI or SAS driver.
>
> Are you sure the whole drive is being presented to the VM, and not just one
> partition? (Are you sure you copied the whole disk in the first place, and
> not just one partition?)
>
> Andrew Gabriel
>
> On 10/30/14 08:15 AM, Alexander Pyhalov via illumos-discuss wrote:
>> On 10/30/2014 10:56, Alexander Pyhalov via illumos-discuss wrote:
>>> Hello.
>>> I'm trying to move OI Hipster host from physical host to linux-hosted
>>> kvm.
>>
>> One more interesting thing. It seems that after boot the VM remembers
>> something about the old data pool. When I run
>>
>> zdb -e rpool | head -60
>>
>> I see
>>
>> Configuration for import:
>>         vdev_children: 1
>>         version: 5000
>>         pool_guid: 3662662750235703870
>>         name: 'rpool'
>>         state: 0
>>         hostid: 672767
>>         hostname: ''
>>         vdev_tree:
>>             type: 'root'
>>             id: 0
>>             guid: 3662662750235703870
>>             children[0]:
>>                 type: 'disk'
>>                 id: 0
>>                 guid: 1958306450195883095
>>                 phys_path: '/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a'
>>                 whole_disk: 0
>>                 metaslab_array: 33
>>                 metaslab_shift: 28
>>                 ashift: 9
>>                 asize: 34319368192
>>                 is_log: 0
>>                 create_txg: 4
>>                 path: '/dev/dsk/c10d0s0'
>>                 devid: 'id1,cmdk@AQEMU_HARDDISK=QM00001/a'
>>
>> I can see the /pci@0,0/pci-ide@... disk path.
>>
>> But when I run zdb -e data | head -60 I receive
>>
>> zdb: can't open 'data': I/O error
>>
>> Configuration for import:
>>         vdev_children: 1
>>         version: 5000
>>         pool_guid: 18026850962074001314
>>         name: 'data'
>>         state: 0
>>         hostid: 672767
>>         hostname: 'oi-build.mgmt.r61.net'
>>         vdev_tree:
>>             type: 'root'
>>             id: 0
>>             guid: 18026850962074001314
>>             children[0]:
>>                 type: 'disk'
>>                 id: 0
>>                 guid: 5489749995336862939
>>                 phys_path: '/scsi_vhci/disk@g6005076802808844b000000000000032:a'
>>                 whole_disk: 1
>>                 metaslab_array: 33
>>                 metaslab_shift: 32
>>                 ashift: 9
>>                 asize: 549742444544
>>                 is_log: 0
>>                 DTL: 9205
>>                 create_txg: 4
>>                 path: '/dev/dsk/c10d1s0'
>>                 devid: 'id1,cmdk@AQEMU_HARDDISK=QM00002/a'
>>
>> It seems to remember some old path. How can I fix it?
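P.S. For what it's worth, when a disk looks like it carries a stale or duplicate label, the commands below are roughly where I would start. The device names are only examples, and the labelclear subcommand is not present in every zpool build:

    # See which label(s) zdb actually finds on each candidate device
    # (c10d1p0 / c10d1s0 are example device names).
    zdb -l /dev/dsk/c10d1p0
    zdb -l /dev/dsk/c10d1s0

    # If the pool can be imported, the path/phys_path stored in its label
    # is rewritten from the current device tree on import:
    zpool import -d /dev/dsk data

    # If an old label is lingering from a previous partitioning scheme and
    # the zpool build has labelclear, it can be wiped -- destructive, so
    # only on a vdev that is definitely no longer part of any pool:
    zpool labelclear -f /dev/dsk/c10d1s0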
