Even after rebooting with the drive for the zpool physically removed from the system, I still had a zpool I couldn't destroy, export, or otherwise remove from the listing.
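My guess (unverified) is that the stale entry was coming from the ZFS cache file, /etc/zfs/zpool.cache, rather than from any pool that was actually imported, which would explain why it kept showing up with the drive gone. If anyone wants to check the same thing on their machine, zdb can dump what the cache file claims ("hub" is just my pool name):

[anovak@octagon ~]$ sudo zdb -C hub    # cached configuration for this one pool
[anovak@octagon ~]$ sudo zdb -C        # or the cached configuration of every pool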
Using "sudo zpool status -Pv" I worked out that my ZFS was actually expecting to find the data on partition 1 of the drive: errors: No known data errors pool: hub state: UNAVAIL status: One or more devices could not be used because the label is missing or invalid. There are insufficient replicas for the pool to continue functioning. action: Destroy and re-create the pool from a backup source. see: http://zfsonlinux.org/msg/ZFS-8000-5E scan: none requested config: NAME STATE READ WRITE CKSUM hub UNAVAIL 0 0 0 insufficient replicas /dev/disk/by-id/ata-ST6000DM003-2CY186_ZF200PC8-part1 UNAVAIL 0 0 0 I'd previously tried symlinking the old device name to the new one, but I was inspired to try it with just the partition: [anovak@octagon ~]$ cd /dev/disk/by-id/ [anovak@octagon by-id]$ sudo ln -s 'usb-Seagate_Backup+_Hub_BK_NA8TQC87-0:0-part1' ata-ST6000DM003-2CY186_ZF200PC8-part1 When I did that, the pool immediately came back online, and I was able to export it to make it go away. Then I managed to import it under a more stable name with "sudo zpool import -a -d /dev/disk/by-partuuid/". I still can't see the drive data in gsmartcontrol, and I still can't spin it down, but at least I can now use it. -- You received this bug notification because you are a member of Kernel Packages, which is subscribed to zfs-linux in Ubuntu. https://bugs.launchpad.net/bugs/1774569 Title: gsmartcontrol, hdparm, and ZFS all refuse to talk to an apparently working Seagate Backup+ Hub drive after upgrade to 18.04 Status in gsmartcontrol package in Ubuntu: New Status in hdparm package in Ubuntu: New Status in zfs-linux package in Ubuntu: New Bug description: I recently upgraded from 17.10 to 18.04. After the upgrade, I noticed that my Seagate Backup+ Hub external drive was displaying a series of puzzling symptoms: 1. gsmartcontrol can't get SMART data from the drive. I am pretty sure it used to report SMART data? Here's a log of it not working: <warn> [hz] Warning: exit: Command line did not parse. <warn> [app] execute_smartctl(): Smartctl binary did not execute cleanly. <warn> [app] StorageDevice::execute_device_smartctl(): Smartctl binary did not execute cleanly. <warn> [app] SmartctlParser::parse_section_info_property(): Unknown property "Physical block size" <warn> [app] SmartctlParser::parse_section_info_property(): Unknown property "Logical Unit id" <warn> [app] SmartctlParser::parse_section_info_property(): Unknown property "Temperature Warning" <warn> [app] SmartctlParser::parse_section_data(): Unknown Data subsection encountered. <warn> [hz] Warning: exit: Some SMART command to the disk failed, or there was a checksum error in a SMART data structure <warn> [app] SmartctlParser::parse_section_info_property(): Unknown property "Physical block size" <warn> [app] SmartctlParser::parse_section_info_property(): Unknown property "Logical Unit id" <warn> [app] SmartctlParser::parse_section_info_property(): Unknown property "Temperature Warning" <warn> [app] SmartctlParser::parse_section_data(): Unknown Data subsection encountered. 2. hdparm used to be able to spin down the drive. I had it configured to spin it down after a few minutes of inactivity, in the hdparm config file. 
Now that no longer happens, and hdparm can't seem to talk to the drive meaningfully at all:

[anovak@octagon ~]$ sudo hdparm -I /dev/sdb

/dev/sdb:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 00 00 00 00 24 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

ATA device, with non-removable media
Standards:
        Likely used: 1
Configuration:
        Logical         max     current
        cylinders       0       0
        heads           0       0
        sectors/track   0       0
        --
        Logical/Physical Sector size:           512 bytes
        device size with M = 1024*1024:           0 MBytes
        device size with M = 1000*1000:           0 MBytes
        cache/buffer size  = unknown
Capabilities:
        IORDY not likely
        Cannot perform double-word IO
        R/W multiple sector transfer: not supported
        DMA: not supported
        PIO: pio0

[anovak@octagon ~]$ sudo hdparm -y /dev/sdb

/dev/sdb:
 issuing standby command
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 00 00 00 00 24 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

I think this may be related to https://askubuntu.com/questions/1037997/upgraded-to-18-04-usb-harddrive-doesn-t-idle-anymore which is someone else having the same problem.

3. The ZFS tools think the drive is hosed:

[anovak@octagon ~]$ sudo zpool status hub
  pool: hub
 state: UNAVAIL
status: One or more devices could not be used because the label is missing or
        invalid. There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

        NAME                               STATE     READ WRITE CKSUM
        hub                                UNAVAIL      0     0     0  insufficient replicas
          ata-ST6000DM003-2CY186_ZF200PC8  UNAVAIL      0     0     0

This may be related to the drive having adopted a new /dev/disk/by-id name during the upgrade? I think it was "ata-ST6000DM003-2CY186_ZF200PC8" when I added it to my zpool by its symlink under /dev/disk/by-id, but now it is "usb-Seagate_Backup+_Hub_BK_NA8TQC87-0:0":

[anovak@octagon ~]$ ls -lah /dev/disk/by-id/usb-Seagate_Backup+_Hub_BK_NA8TQC87-0\:0
lrwxrwxrwx 1 root root 9 May 31 20:52 /dev/disk/by-id/usb-Seagate_Backup+_Hub_BK_NA8TQC87-0:0 -> ../../sdb

This *shouldn't* cause trouble; you should be able to export the zpool and re-import it under the new name. But zpool import shows nothing to import:

[anovak@octagon ~]$ sudo zpool import
no pools available to import

And I also can't export or even destroy the busted zpool, because zpool doesn't think it exists for exporting or destroying purposes:

[anovak@octagon ~]$ sudo zpool export hub
cannot export 'hub': no such pool or dataset
[anovak@octagon ~]$ sudo zpool destroy hub
cannot destroy 'hub': no such pool or dataset

4. The weirdest thing is that the drive itself seems to be working correctly. I see /dev/sdb1 and /dev/sdb9, as expected for a ZFS drive. I can `cat /dev/sdb1 | xxd | less` and see the data stored on the drive, including what I think is the ZFS label (at 0x4000, with a bunch of ZFS-y strings in it) that zpool is upset about not seeing. I see the partitions in `gparted` just fine, too; there's no indication that there's anything wrong with the partition table. Even the device's integrated USB hub seems to be working fine. This is definitely not a hard drive failure.

If I had to speculate, I would guess that the drive is now being treated as a generic USB mass storage device, when it used to be handled as a SATA device in a USB-to-SATA enclosure (which I think it is). That would explain the name change, and the difficulty that hdparm and gsmartcontrol have in talking to it.
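If that guess is right, it might still be possible to reach the drive's ATA layer by giving smartctl an explicit SAT (SCSI-to-ATA translation) device type. I haven't confirmed this particular bridge supports it, but it's the obvious thing to try:

[anovak@octagon ~]$ sudo smartctl -d sat -i /dev/sdb      # identify the drive through SAT pass-through
[anovak@octagon ~]$ sudo smartctl -d sat,12 -a /dev/sdb   # same, but with 12-byte SAT commands if the default fails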
The ZFS weirdness with not being able to export/destroy the pool has to be another issue; it happens even when the drive is disconnected from the system entirely.

ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: gsmartcontrol 1.1.3-1
ProcVersionSignature: Ubuntu 4.15.0-22.24-generic 4.15.17
Uname: Linux 4.15.0-22-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.9-0ubuntu7
Architecture: amd64
CurrentDesktop: ubuntu:GNOME
Date: Thu May 31 20:46:52 2018
InstallationDate: Installed on 2017-08-06 (298 days ago)
InstallationMedia: Ubuntu 17.04 "Zesty Zapus" - Release amd64 (20170412)
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=<set>
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: gsmartcontrol
UpgradeStatus: Upgraded to bionic on 2018-05-29 (3 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gsmartcontrol/+bug/1774569/+subscriptions