Damien Le Moal <damien.lem...@opensource.wdc.com> writes:

> On 9/1/22 23:57, Markus Armbruster wrote:
>> Sam Li <faithilike...@gmail.com> writes:
>>
>>> Markus Armbruster <arm...@redhat.com> wrote on Wed, 31 Aug 2022 at 16:35:
>>>>
>>>> Sam Li <faithilike...@gmail.com> writes:
>>>>
>>>>> Markus Armbruster <arm...@redhat.com> wrote on Tue, 30 Aug 2022 at 19:57:
[...]

>>>>> Zoned_host_device is basically host_device + zone operations.  It
>>>>> serves a simple purpose: if the host device is zoned, register the
>>>>> zoned_host_device driver; else, register host_device.
>>>>
>>>> Why would I ever want to use host_device instead of zoned_host_device?
>>>>
>>>> To answer this question, we need to understand how their behavior
>>>> differs.
>>>>
>>>> We can ignore the legacy protocol prefix / string filename part.
>>>>
>>>> All that's left seems to be "if the host device is zoned, then using
>>>> the zoned_host_device driver gets you the zoned features, whereas
>>>> using the host_device driver doesn't".  What am I missing?
>>>
>>> I think that's basically what users need to know about.
>>
>> Now answer my previous question, please: why would I ever want to use
>> host_device instead of zoned_host_device?
>>
>> Or in other words, why would I ever want to present a zoned host device
>> to a guest as a non-zoned device?
>>
>>>>>> Notably common is .bdrv_file_open = hdev_open.  What happens when
>>>>>> you try to create a zoned_host_device where the @filename argument
>>>>>> is not in fact a zoned device?
>>>>>
>>>>> If the device is a regular block device, QEMU will still open the
>>>>> device.  For instance, I use a loopback device to test zone_report
>>>>> in qemu-io.  It returns ENOTTY, which indicates "Inappropriate ioctl
>>>>> for device".  Meanwhile, if a regular block device is used when
>>>>> emulating a zoned device for a guest OS, the best case is that the
>>>>> guest can boot but has no emulated block device.  In some cases,
>>>>> QEMU just terminates because the block device has not met the
>>>>> alignment requirements.
>>>>
>>>> I'm not sure I understand all of this.  I'm also not sure I have to :)
>>>
>>> Maybe I didn't explain it very well.  Which part would you like to
>>> know more about?
>>
>> Let's try more specific questions.  Say I configure a zoned_host_device
>> backed by a host device that isn't zoned.
>>
>> 1. Is this configuration accepted?
>
> If we assume we have the special zoned_host_device driver, with the
> associated command line zoned_host_device option explicitly calling for
> it, then no, I do not think this should be allowed at all, and an error
> should be returned on startup.  That would be consistent with the fact
> that the options zoned_host_device and host_device are different to make
> sure we can check that the user knows what they are doing.
>
> If we have only host_device as a setup option and driver, the driver
> methods can be trivially adjusted to do the right thing based on the
> device type (i.e. zoned vs regular/not zoned).  However, that would
> prevent an interesting future extension of this work to implement full
> zone emulation on top of a regular (not zoned) host block device.
>
> With this in mind, we currently have the following:
>
> 1) host_device option -> accept only regular non-zoned host block devices
> 2) zoned_host_device option -> accept only zoned host block devices

2) matches my intuitive expectations for this driver name.

However, if host_device works with a zoned host device even before the
patch, presenting it to the guest as a non-zoned device, then it needs
to continue to do so.

> And in the future, we can have:
>
> 1) host_device option -> accept only regular non-zoned host block devices
> 2) zoned_host_device option -> accept any host block device type
>    a) Use the native kernel zone API for zoned host block devices
>    b) Use full zone emulation for regular host block devices

Understood.
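For concreteness, here is a minimal sketch of how an explicit
zoned_host_device open path could refuse a non-zoned host device at
startup, as suggested above.  This is not the actual patch: the helper
name, the error handling, and the choice of probing sysfs directly are
all my own illustration.  On Linux, /sys/block/<dev>/queue/zoned reads
"none" for regular devices and "host-managed" or "host-aware" for zoned
ones; partitions such as /dev/sda1 would need extra handling to find
the parent disk.

/* Hedged sketch, not actual QEMU code. */
#include <stdio.h>
#include <stdbool.h>
#include <string.h>
#include <libgen.h>
#include <limits.h>

static bool host_device_is_zoned(const char *filename)
{
    char devname[PATH_MAX], syspath[PATH_MAX], model[32] = "";
    FILE *f;

    /* basename() may modify its argument, so work on a copy */
    snprintf(devname, sizeof(devname), "%s", filename);
    snprintf(syspath, sizeof(syspath), "/sys/block/%s/queue/zoned",
             basename(devname));

    f = fopen(syspath, "r");
    if (!f) {
        return false;              /* no sysfs entry -> treat as not zoned */
    }
    if (fscanf(f, "%31s", model) != 1) {
        model[0] = '\0';
    }
    fclose(f);

    return strcmp(model, "host-managed") == 0 ||
           strcmp(model, "host-aware") == 0;
}

/* In a hypothetical zoned variant of hdev_open(), something like:
 *
 *     if (!host_device_is_zoned(filename)) {
 *         error_setg(errp, "'%s' is not a zoned block device", filename);
 *         return -ENOTSUP;
 *     }
 */

The same probe could equally be done with the BLKGETZONESZ ioctl, which
returns 0 for a device without zones; the point is only that the check
happens once, at open time, and turns into a startup error.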
> But sure, internally, we could have a single driver structure with
> methods adjusted to do the correct thing based on the device type and
> option specified.  Having a 1:1 mapping between the driver name and the
> driver structure does clarify things, I think (even though there are
> indeed a lot of methods that are identical).

I think this is basically a matter of user interface design.

Let's review what we have: host_device and host_cdrom.  I'm only
passingly familiar with them, so please correct my misunderstandings,
if any.

host_device and host_cdrom let you "pass through" a host device to a
guest.

host_cdrom presents a removable device to the guest.  It appears to
accept any host block device, even a non-removable one.  What happens
when you try to use a non-removable host device as a removable guest
device, I don't know.

host_device presents a non-removable device to the guest.  It accepts
any host block device, even a removable one (as long as it has a
medium).

host_device detects whether the host device is a SCSI generic device.
Guest devices scsi-hd and scsi-cd reject a SCSI generic host device.
Guest device scsi-block requires one (I think).

On the one hand, there is precedent for using different driver types
for different kinds of host devices: host_cdrom for removable ones,
host_device for non-removable ones.

On the other hand, there is precedent for using a single driver type
for different kinds of host devices, with dynamic detection:
host_device both for SCSI generic devices and for others.

On the third hand, the "different driver type" story is complicated by
the fact that we accept the "wrong" kind of host device at least in
some scenarios.

Kevin, do you have an opinion on how the user interface should look?
And do you have one on how it can look, given what we have?

[...]
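To make Damien's "single driver structure with methods adjusted to the
device type" alternative concrete, here is a hedged sketch of how that
could look: the open path probes the device once, and the zone methods
branch on the result.  The struct layout and function names are invented
for illustration; they are not QEMU's actual file-posix.c internals.

/* Hedged sketch, not actual QEMU code. */
#include <errno.h>
#include <stdbool.h>

bool host_device_is_zoned(const char *filename);  /* from the sketch above */

typedef struct HostDevState {
    int fd;
    bool is_zoned;            /* filled in once by the open-time probe */
} HostDevState;

/* Shared open path: probe the device instead of trusting the driver name. */
static int hdev_open_common(HostDevState *s, const char *filename)
{
    s->is_zoned = host_device_is_zoned(filename);
    /* ... open the file descriptor, do size/alignment checks, etc. ... */
    return 0;
}

/* A zone method that degrades gracefully on a regular device. */
static int hdev_zone_report(HostDevState *s /* , zone list output args */)
{
    if (!s->is_zoned) {
        return -ENOTSUP;      /* or, in a future extension, emulate zones */
    }
    /* ... issue BLKREPORTZONE against s->fd and fill in the results ... */
    return 0;
}

Whether the probe result should also be reflected back to the user, for
example by rejecting an explicit zoned_host_device request for a regular
device, is exactly the user-interface question raised above.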