Re: [RFC PATCH 9/9] hw/sd: Allow card size not power of 2 again

2021-06-24 Thread Tom Yan
Hi,

On Thu, 24 Jun 2021 at 02:01, Philippe Mathieu-Daudé  wrote:
>
> In commit a9bcedd15a5 ("hw/sd/sdcard: Do not allow invalid SD card
> sizes") we tried to protect us from CVE-2020-13253 by only allowing
> cards with power-of-2 sizes. However doing so we disrupted valid use
> cases. As a compromise, allow any card size, but warn that only
> power-of-2 sizes are supported, still suggesting how the user can
> grow the card with 'qemu-img resize'.
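
(For illustration, the resize suggested above for a hypothetical image
sd.img that is just under 64 GiB would be

    qemu-img resize sd.img 64G

Growing a raw image this way only appends zeroed space, so no data is
lost in that direction.)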
>
> Cc: Tom Yan 
> Cc: Warner Losh 
> Buglink: https://bugs.launchpad.net/qemu/+bug/1910586
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  hw/sd/sd.c | 25 +
>  1 file changed, 9 insertions(+), 16 deletions(-)
>
> diff --git a/hw/sd/sd.c b/hw/sd/sd.c
> index 9c8dd11bad1..cab4aab1475 100644
> --- a/hw/sd/sd.c
> +++ b/hw/sd/sd.c
> @@ -2131,23 +2131,16 @@ static void sd_realize(DeviceState *dev, Error **errp)
>  blk_size = blk_getlength(sd->blk);
>  if (blk_size > 0 && !is_power_of_2(blk_size)) {
>  int64_t blk_size_aligned = pow2ceil(blk_size);
> -char *blk_size_str;
> +g_autofree char *blk_size_s = size_to_str(blk_size);
> +g_autofree char *blk_size_aligned_s = size_to_str(blk_size_aligned);
>
> -blk_size_str = size_to_str(blk_size);
> -error_setg(errp, "Invalid SD card size: %s", blk_size_str);
> -g_free(blk_size_str);
> -
> -blk_size_str = size_to_str(blk_size_aligned);
> -error_append_hint(errp,
> -  "SD card size has to be a power of 2, e.g. %s.\n"
> -  "You can resize disk images with"
> -  " 'qemu-img resize  '\n"
> -  "(note that this will lose data if you make the"
> -  " image smaller than it currently is).\n",
> -  blk_size_str);
> -g_free(blk_size_str);
> -
> -return;
> +warn_report("SD card size is not a power of 2 (%s). It might 
> work"
> +" but is not supported by QEMU. If possible, resize"
> +" your disk image to %s with:",
> +blk_size_s, blk_size_aligned_s);
> +warn_report(" 'qemu-img resize  '");
> +warn_report("(note that this will lose data if you make the"
> +" image smaller than it currently is).");

Not trying to be picky, but I don't think this is much better. IMHO
it's quite irresponsible to give a warning like that, leaving users in
a state of "Should I use it or not then?", without giving a concrete
reference to what exactly might/would lead to the problem being warned
about.

I really think we should get (/ have gotten) things clear first. What
exactly is the bug we have been talking about here? I mean like, where
does it occur and what's the nature of it.

1. Is it specific to a certain type / model of backend / physical
storage device that qemu makes use of for the emulated storage? (I
presume not, since you mention images, unless you irrationally bound
the emulated storage type and the physical storage type together.)

2. Does it have anything to do with a certain flaw in qemu itself?
Like the code that does the read/write operations being flawed such
that it cannot be handled by a certain *proper* backend device?

3. Or is it actually a bug in a certain driver / firmware blob that
will be used by an *emulated* device in the guest? In that case, can
we emulate another model so that it won't be using the problematic
driver / firmware?

Also, could you provide any information / reference on what the bug
really is? Like a bug report (for the problem itself, not some vague
claim that qemu should work around the problem)?

>  }
>
>  ret = blk_set_perm(sd->blk, BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE,
> --
> 2.31.1
>



Regarding commit a9bcedd (SD card size has to be power of 2)

2021-06-07 Thread Tom Yan
Hi philmd (and others),

So I just noticed your commit requiring the size of an emulated SD
card to be a power of 2, when I was trying to emulate a card backed by
an actual one (well, it's a microSD, but still) and it errored out.

You claim that the kernel will consider it to be a firmware bug and
"correct" the capacity by rounding it up. Could you provide a concrete
reference to the code that does such a thing? I'm not ruling out that
some crazy code could have gone upstream because some reviewers might
not be doing their job right, but if that really happened, it's a
kernel bug/regression and qemu should not do an equally-crazy thing to
"fix" it.

No offense, but what you claimed really sounds absurd and ridiculous.
Although I don't have hundreds of SD cards at hand, I've owned quite a
few at least, like most people do, with capacities ranging from ~2G to
~128G, and I don't recall seeing a single one whose capacity is a
power of 2. (Just like vendors of HDDs and SSDs, they literally never
do that AFAICT, for whatever reason.)

Besides, even if there were a proper reason for the kernel to "fix"
the capacity, there would be no reason for it to round it up either,
because obviously there will never be actual storage for the "virtual
blocks". I've never seen such behavior with any of the "mmcblk" hosts
I've used so far either.
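
(To put a number on it: for a hypothetical 31 GB card, rounding up to
the next power of 2 gives pow2ceil(31,000,000,000) = 34,359,738,368
bytes (32 GiB), i.e. roughly 3.3 GB of reported blocks with no actual
storage behind them.)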

Regards,
Tom



[PATCH 2/3] pcie_chassis: add slot_max field

2021-03-31 Thread Tom Yan
Keep track of the largest allocated slot number when adding a slot.

Signed-off-by: Tom Yan 
---
 hw/pci/pcie_port.c | 15 +--
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/hw/pci/pcie_port.c b/hw/pci/pcie_port.c
index 40fd80c4da..1e2348250c 100644
--- a/hw/pci/pcie_port.c
+++ b/hw/pci/pcie_port.c
@@ -49,6 +49,7 @@ void pcie_port_init_reg(PCIDevice *d)
  */
 struct PCIEChassis {
 uint8_t number;
+uint16_t slot_max; // largest allocated slot number
 
 QLIST_HEAD(, PCIESlot) slots;
 QLIST_ENTRY(PCIEChassis) next;
@@ -94,19 +95,11 @@ static PCIESlot *pcie_chassis_find_slot_with_chassis(struct 
PCIEChassis *c,
 
 static void pcie_chassis_find_avail_slot_with_chassis(struct PCIEChassis *c, 
struct PCIESlot *slot)
 {
-PCIESlot *s;
-uint16_t slot_max = 0;
-QLIST_FOREACH(s, &c->slots, next) {
-if (s->slot > slot_max) {
-slot_max = s->slot;
-}
-}
-
 /*
   find an available number to use from slot->slot (inclusive) to
-  slot_max; if there is none, use slot_max+1
+  c->slot_max; if there is none, use c->slot_max+1
 */
-while (slot->slot <= slot_max) {
+while (slot->slot <= c->slot_max) {
 if (!pcie_chassis_find_slot_with_chassis(c, slot->slot)) {
 break;
 }
@@ -137,6 +130,8 @@ int pcie_chassis_add_slot(struct PCIESlot *slot)
 return -EBUSY;
 }
 QLIST_INSERT_HEAD(&c->slots, slot, next);
+if (slot->slot > c->slot_max)
+c->slot_max = slot->slot;
 return 0;
 }
 
-- 
2.31.1




[PATCH 1/3] pcie_chassis: use an available slot number if not explicitly set

2021-03-31 Thread Tom Yan
Currently pcie_chassis_add_slot() simply fails if the requested
slot number is already used.

Make it find and use an available slot number if the requested one
does not appear to be explicitly set (i.e. is 0). This allows slot
numbers to be enumerated automatically.

Maintain the current behaviour if the requested slot number appears
to be explicitly set (i.e. is larger than 0), in case some users do
not want the slot numbers in actual use to differ from what they
specified (they will need to avoid using 0 though).

Also fix an apparent mistake in pcie_chassis_find_slot_with_chassis().
The type of the slot field of PCIESlot is uint16_t, which is also what
all of its users pass to it.
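
(For illustration, assuming a q35 machine and hypothetical ids, a
configuration like

    -device pcie-root-port,id=rp1,chassis=1
    -device pcie-root-port,id=rp2,chassis=1

which previously failed with EBUSY because both root ports defaulted to
slot 0, now gets distinct slot numbers assigned automatically.)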

Signed-off-by: Tom Yan 
---
 hw/pci/pcie_port.c | 28 ++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/hw/pci/pcie_port.c b/hw/pci/pcie_port.c
index eb563ad435..40fd80c4da 100644
--- a/hw/pci/pcie_port.c
+++ b/hw/pci/pcie_port.c
@@ -81,7 +81,7 @@ void pcie_chassis_create(uint8_t chassis_number)
 }
 
 static PCIESlot *pcie_chassis_find_slot_with_chassis(struct PCIEChassis *c,
- uint8_t slot)
+ uint16_t slot)
 {
 PCIESlot *s;
 QLIST_FOREACH(s, &c->slots, next) {
@@ -92,6 +92,28 @@ static PCIESlot *pcie_chassis_find_slot_with_chassis(struct 
PCIEChassis *c,
 return s;
 }
 
+static void pcie_chassis_find_avail_slot_with_chassis(struct PCIEChassis *c, 
struct PCIESlot *slot)
+{
+PCIESlot *s;
+uint16_t slot_max = 0;
+QLIST_FOREACH(s, &c->slots, next) {
+if (s->slot > slot_max) {
+slot_max = s->slot;
+}
+}
+
+/*
+  find an available number to use from slot->slot (inclusive) to
+  slot_max; if there is none, use slot_max+1
+*/
+while (slot->slot <= slot_max) {
+if (!pcie_chassis_find_slot_with_chassis(c, slot->slot)) {
+break;
+}
+slot->slot++;
+}
+}
+
 PCIESlot *pcie_chassis_find_slot(uint8_t chassis_number, uint16_t slot)
 {
 struct PCIEChassis *c;
@@ -109,7 +131,9 @@ int pcie_chassis_add_slot(struct PCIESlot *slot)
 if (!c) {
 return -ENODEV;
 }
-if (pcie_chassis_find_slot_with_chassis(c, slot->slot)) {
+if (!slot->slot) { // slot number not explicitly set
+pcie_chassis_find_avail_slot_with_chassis(c, slot);
+} else if (pcie_chassis_find_slot_with_chassis(c, slot->slot)) {
 return -EBUSY;
 }
 QLIST_INSERT_HEAD(&c->slots, slot, next);
-- 
2.31.1




[PATCH 3/3] pcie_root_port: reorder procedures in realize()

2021-03-31 Thread Tom Yan
There are a bunch of pcie_cap_*_init() calls in the realize method of
the pcie root port devices that do not have a return value. They can
probably be called after the slot is successfully added to a created
chassis (i.e. the slot number to use is final); one of them makes use
of the slot number.

Signed-off-by: Tom Yan 
---
 hw/pci-bridge/pcie_root_port.c | 10 +-
 hw/pci-bridge/xio3130_downstream.c |  9 +
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/hw/pci-bridge/pcie_root_port.c b/hw/pci-bridge/pcie_root_port.c
index f1cfe9d14a..2bd2d02ccb 100644
--- a/hw/pci-bridge/pcie_root_port.c
+++ b/hw/pci-bridge/pcie_root_port.c
@@ -92,11 +92,6 @@ static void rp_realize(PCIDevice *d, Error **errp)
 goto err_int;
 }
 
-pcie_cap_arifwd_init(d);
-pcie_cap_deverr_init(d);
-pcie_cap_slot_init(d, s);
-pcie_cap_root_init(d);
-
 pcie_chassis_create(s->chassis);
 rc = pcie_chassis_add_slot(s);
 if (rc < 0) {
@@ -104,6 +99,11 @@ static void rp_realize(PCIDevice *d, Error **errp)
 goto err_pcie_cap;
 }
 
+pcie_cap_arifwd_init(d);
+pcie_cap_deverr_init(d);
+pcie_cap_slot_init(d, s);
+pcie_cap_root_init(d);
+
 rc = pcie_aer_init(d, PCI_ERR_VER, rpc->aer_offset,
PCI_ERR_SIZEOF, errp);
 if (rc < 0) {
diff --git a/hw/pci-bridge/xio3130_downstream.c b/hw/pci-bridge/xio3130_downstream.c
index 04aae72cd6..b7a92693ee 100644
--- a/hw/pci-bridge/xio3130_downstream.c
+++ b/hw/pci-bridge/xio3130_downstream.c
@@ -92,10 +92,6 @@ static void xio3130_downstream_realize(PCIDevice *d, Error 
**errp)
 if (rc < 0) {
 goto err_msi;
 }
-pcie_cap_flr_init(d);
-pcie_cap_deverr_init(d);
-pcie_cap_slot_init(d, s);
-pcie_cap_arifwd_init(d);
 
 pcie_chassis_create(s->chassis);
 rc = pcie_chassis_add_slot(s);
@@ -104,6 +100,11 @@ static void xio3130_downstream_realize(PCIDevice *d, Error 
**errp)
 goto err_pcie_cap;
 }
 
+pcie_cap_flr_init(d);
+pcie_cap_deverr_init(d);
+pcie_cap_slot_init(d, s);
+pcie_cap_arifwd_init(d);
+
 rc = pcie_aer_init(d, PCI_ERR_VER, XIO3130_AER_OFFSET,
PCI_ERR_SIZEOF, errp);
 if (rc < 0) {
-- 
2.31.1




[Bug 1908266] Re: spice unnecessary forces nographic

2021-01-05 Thread Tom Yan
Does the spice protocol / any spice client allow access to the
compatmonitor / serial / parallel consoles?

Spice can be (and often / mainly is) used for remote access like VNC,
but that does not necessarily mean users would want to run
"fully headless".

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1908266

Title:
  spice unnecessary forces nographic

Status in QEMU:
  Incomplete

Bug description:
  When spice is enabled, qemu does not give the graphical window. It
  should not imply -nographic but only -display none.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1908266/+subscriptions



[Bug 1908266] Re: spice unnecessary forces nographic

2020-12-29 Thread Tom Yan
The gtk window is not only for -display but also for the compatmonitor /
serial / parallel consoles, yet when -spice is used, the gtk window does
not show at all. You can force the window to show with -display gtk, but
the *side effect* is that the VGA will be wired/connected to the gtk
window (which seems to break things when gl and so on are enabled).
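
(For a concrete picture, with flags purely as an illustrative example:
with something like

    qemu-system-x86_64 -vga qxl -spice port=5900,disable-ticketing=on

no local window shows up at all, and adding -display gtk brings the
window back but also wires the VGA output to it.)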




[Bug 1908266] Re: spice unnecessary forces nographic

2020-12-15 Thread Tom Yan
More precisely, there should be a way to prevent -vga qxl from being
wired to the graphical window.




[Bug 1908266] [NEW] spice unnecessary forces nographic

2020-12-15 Thread Tom Yan
Public bug reported:

When spice is enabled, qemu does not give the graphical window. It
should not imply -nographic but only -display none.

** Affects: qemu
 Importance: Undecided
 Status: New




[PATCH] virtio-blk: change default config-wce to false

2020-12-14 Thread Tom Yan
This would allow `cache=` to be respected when `if=virtio` is used
(instead of `-device virtio-blk-pci`). Since `cache=writeback` is
the default, this does not cause a change in the default behavior.

Also, when `config-wce` is true, `cache.writeback` is still overridden.
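
(For illustration, with a hypothetical image file and id, the two ways
of attaching a virtio disk that this is about are

    -drive file=disk.img,if=virtio,cache=writethrough
    -drive file=disk.img,if=none,id=vd0,cache=writethrough \
      -device virtio-blk-pci,drive=vd0,config-wce=off

The `if=virtio` shorthand creates the device implicitly, so the property
default decides whether the guest-controlled write cache setting
overrides `cache=`.)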

Signed-off-by: Tom Yan 
---
 hw/block/virtio-blk.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index bac2d6fa2b..0c2ecd22bc 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1284,7 +1284,7 @@ static Property virtio_blk_properties[] = {
 DEFINE_BLOCK_CHS_PROPERTIES(VirtIOBlock, conf.conf),
 DEFINE_PROP_STRING("serial", VirtIOBlock, conf.serial),
 DEFINE_PROP_BIT64("config-wce", VirtIOBlock, host_features,
-  VIRTIO_BLK_F_CONFIG_WCE, true),
+  VIRTIO_BLK_F_CONFIG_WCE, false),
 #ifdef __linux__
 DEFINE_PROP_BIT64("scsi", VirtIOBlock, host_features,
   VIRTIO_BLK_F_SCSI, false),
-- 
2.29.2




Re: [PATCH v2 1/5] file-posix: split hdev_refresh_limits from raw_refresh_limits

2020-12-10 Thread Tom Yan
On Wed, 9 Dec 2020 at 21:54, Maxim Levitsky  wrote:
>
> From: Tom Yan 
>
> We can and should get max transfer length and max segments for all host
> devices / cdroms (on Linux).
>
> Also use MIN_NON_ZERO instead when we clamp max transfer length against
> max segments.
>
> Signed-off-by: Tom Yan 
> Signed-off-by: Maxim Levitsky 
> ---
>  block/file-posix.c | 59 +-
>  1 file changed, 43 insertions(+), 16 deletions(-)
>
> diff --git a/block/file-posix.c b/block/file-posix.c
> index d5fd1dbcd2..226ddbbdad 100644
> --- a/block/file-posix.c
> +++ b/block/file-posix.c
> @@ -1162,6 +1162,12 @@ static void raw_reopen_abort(BDRVReopenState *state)
>
>  static int sg_get_max_transfer_length(int fd)
>  {
> +/*
> + * BLKSECTGET for /dev/sg* character devices incorrectly returns
> + * the max transfer size in bytes (rather than in blocks).
> + * Also note that /dev/sg* doesn't support BLKSSZGET ioctl.
> + */
The second statement should be removed. Also maybe it's better to have
the first one right above the line `return max_bytes;`.
> +
>  #ifdef BLKSECTGET
>  int max_bytes = 0;
>
> @@ -1175,7 +1181,22 @@ static int sg_get_max_transfer_length(int fd)
>  #endif
>  }
>
> -static int sg_get_max_segments(int fd)
> +static int get_max_transfer_length(int fd)
> +{
> +#if defined(BLKSECTGET)
> +int sect = 0;
> +
> +if (ioctl(fd, BLKSECTGET, &sect) == 0) {
> +return sect << 9;
> +} else {
> +return -errno;
> +}
> +#else
> +return -ENOSYS;
> +#endif
> +}
> +
> +static int get_max_segments(int fd)
>  {
>  #ifdef CONFIG_LINUX
>  char buf[32];
> @@ -1230,23 +1251,29 @@ static void raw_refresh_limits(BlockDriverState *bs, 
> Error **errp)
>  {
>  BDRVRawState *s = bs->opaque;
>
> -if (bs->sg) {
> -int ret = sg_get_max_transfer_length(s->fd);
> +raw_probe_alignment(bs, s->fd, errp);
> +bs->bl.min_mem_alignment = s->buf_align;
> +bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
> +}
>
> -if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
> -bs->bl.max_transfer = pow2floor(ret);
> -}
> +static void hdev_refresh_limits(BlockDriverState *bs, Error **errp)
> +{
> +BDRVRawState *s = bs->opaque;
>
> -ret = sg_get_max_segments(s->fd);
> -if (ret > 0) {
> -bs->bl.max_transfer = MIN(bs->bl.max_transfer,
> -  ret * qemu_real_host_page_size);
> -}
> +int ret = bs->sg ? sg_get_max_transfer_length(s->fd) :
> +   get_max_transfer_length(s->fd);
> +
> +if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
> +bs->bl.max_transfer = pow2floor(ret);
>  }
>
> -raw_probe_alignment(bs, s->fd, errp);
> -bs->bl.min_mem_alignment = s->buf_align;
> -bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
> +ret = get_max_segments(s->fd);
> +if (ret > 0) {
> +bs->bl.max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
> +   ret * qemu_real_host_page_size);
> +}
> +
> +raw_refresh_limits(bs, errp);
>  }
>
>  static int check_for_dasd(int fd)
> @@ -3601,7 +3628,7 @@ static BlockDriver bdrv_host_device = {
>  .bdrv_co_pdiscard   = hdev_co_pdiscard,
>  .bdrv_co_copy_range_from = raw_co_copy_range_from,
>  .bdrv_co_copy_range_to  = raw_co_copy_range_to,
> -.bdrv_refresh_limits = raw_refresh_limits,
> +.bdrv_refresh_limits = hdev_refresh_limits,
>  .bdrv_io_plug = raw_aio_plug,
>  .bdrv_io_unplug = raw_aio_unplug,
>  .bdrv_attach_aio_context = raw_aio_attach_aio_context,
> @@ -3725,7 +3752,7 @@ static BlockDriver bdrv_host_cdrom = {
>  .bdrv_co_preadv = raw_co_preadv,
>  .bdrv_co_pwritev= raw_co_pwritev,
>  .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
> -.bdrv_refresh_limits = raw_refresh_limits,
> +.bdrv_refresh_limits = hdev_refresh_limits,
>  .bdrv_io_plug = raw_aio_plug,
>  .bdrv_io_unplug = raw_aio_unplug,
>  .bdrv_attach_aio_context = raw_aio_attach_aio_context,
> --
> 2.26.2
>



Re: [PATCH 1/5] file-posix: split hdev_refresh_limits from raw_refresh_limits

2020-11-04 Thread Tom Yan
Actually I made a mistake in this. BLKSECTGET (the one in the block
layer) returns the number of "sectors", which is "defined" as a 512-byte
block. So we shouldn't use BLKSSZGET here, but simply 512 (1 << 9).
See logical_to_sectors() in sd.h of the kernel.
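
(In other words, a sketch of what the corrected helper boils down to,
matching the later respin of the series:

    int sect = 0;
    if (ioctl(fd, BLKSECTGET, &sect) == 0) {
        return sect << 9;   /* BLKSECTGET counts 512-byte sectors */
    }
    return -errno;
)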

On Thu, 5 Nov 2020 at 01:32, Maxim Levitsky  wrote:
>
> From: Tom Yan 
>
> We can and should get max transfer length and max segments for all host
> devices / cdroms (on Linux).
>
> Also use MIN_NON_ZERO instead when we clamp max transfer length against
> max segments.
>
> Signed-off-by: Tom Yan 
> Signed-off-by: Maxim Levitsky 
> ---
>  block/file-posix.c | 61 ++
>  1 file changed, 45 insertions(+), 16 deletions(-)
>
> diff --git a/block/file-posix.c b/block/file-posix.c
> index c63926d592..6581f41b2b 100644
> --- a/block/file-posix.c
> +++ b/block/file-posix.c
> @@ -1162,6 +1162,12 @@ static void raw_reopen_abort(BDRVReopenState *state)
>
>  static int sg_get_max_transfer_length(int fd)
>  {
> +/*
> + * BLKSECTGET for /dev/sg* character devices incorrectly returns
> + * the max transfer size in bytes (rather than in blocks).
> + * Also note that /dev/sg* doesn't support BLKSSZGET ioctl.
> + */
> +
>  #ifdef BLKSECTGET
>  int max_bytes = 0;
>
> @@ -1175,7 +1181,24 @@ static int sg_get_max_transfer_length(int fd)
>  #endif
>  }
>
> -static int sg_get_max_segments(int fd)
> +static int get_max_transfer_length(int fd)
> +{
> +#if defined(BLKSECTGET) && defined(BLKSSZGET)
> +int sect = 0;
> +int ssz = 0;
> +
> +if (ioctl(fd, BLKSECTGET, &sect) == 0 &&
> +ioctl(fd, BLKSSZGET, &ssz) == 0) {
> +return sect * ssz;
> +} else {
> +return -errno;
> +}
> +#else
> +return -ENOSYS;
> +#endif
> +}
> +
> +static int get_max_segments(int fd)
>  {
>  #ifdef CONFIG_LINUX
>  char buf[32];
> @@ -1230,23 +1253,29 @@ static void raw_refresh_limits(BlockDriverState *bs, 
> Error **errp)
>  {
>  BDRVRawState *s = bs->opaque;
>
> -if (bs->sg) {
> -int ret = sg_get_max_transfer_length(s->fd);
> +raw_probe_alignment(bs, s->fd, errp);
> +bs->bl.min_mem_alignment = s->buf_align;
> +bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
> +}
>
> -if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
> -bs->bl.max_transfer = pow2floor(ret);
> -}
> +static void hdev_refresh_limits(BlockDriverState *bs, Error **errp)
> +{
> +BDRVRawState *s = bs->opaque;
>
> -ret = sg_get_max_segments(s->fd);
> -if (ret > 0) {
> -bs->bl.max_transfer = MIN(bs->bl.max_transfer,
> -  ret * qemu_real_host_page_size);
> -}
> +int ret = bs->sg ? sg_get_max_transfer_length(s->fd) :
> +   get_max_transfer_length(s->fd);
> +
> +if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
> +bs->bl.max_transfer = pow2floor(ret);
>  }
>
> -raw_probe_alignment(bs, s->fd, errp);
> -bs->bl.min_mem_alignment = s->buf_align;
> -bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
> +ret = get_max_segments(s->fd);
> +if (ret > 0) {
> +bs->bl.max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
> +   ret * qemu_real_host_page_size);
> +}
> +
> +raw_refresh_limits(bs, errp);
>  }
>
>  static int check_for_dasd(int fd)
> @@ -3600,7 +3629,7 @@ static BlockDriver bdrv_host_device = {
>  .bdrv_co_pdiscard   = hdev_co_pdiscard,
>  .bdrv_co_copy_range_from = raw_co_copy_range_from,
>  .bdrv_co_copy_range_to  = raw_co_copy_range_to,
> -.bdrv_refresh_limits = raw_refresh_limits,
> +.bdrv_refresh_limits = hdev_refresh_limits,
>  .bdrv_io_plug = raw_aio_plug,
>  .bdrv_io_unplug = raw_aio_unplug,
>  .bdrv_attach_aio_context = raw_aio_attach_aio_context,
> @@ -3724,7 +3753,7 @@ static BlockDriver bdrv_host_cdrom = {
>  .bdrv_co_preadv = raw_co_preadv,
>  .bdrv_co_pwritev= raw_co_pwritev,
>  .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
> -.bdrv_refresh_limits = raw_refresh_limits,
> +.bdrv_refresh_limits = hdev_refresh_limits,
>  .bdrv_io_plug = raw_aio_plug,
>  .bdrv_io_unplug = raw_aio_unplug,
>  .bdrv_attach_aio_context = raw_aio_attach_aio_context,
> --
> 2.26.2
>



Re: [PATCH v4 2/2] file-posix: add sg_get_max_segments that actually works with sg

2020-09-06 Thread Tom Yan
Your patch only fixed it for sg devices. S_ISBLK() will never be true
when sg_get_max_segments() is only called in raw_refresh_limits() when
bs->sg is true.

With that said, it's not like we cannot ditch the bs->sg check in
raw_refresh_limits() and switch to S_ISBLK()/S_ISCHR(). (Although not
strictly necessary, I would also do S_ISCHR() for
SG_GET_SG_TABLESIZE.)

I don't know if it's a better approach though. Seems repetitive to me
(and bs->sg is not only used/checked in
file-posix/raw_refresh_limits):
https://github.com/qemu/qemu/blob/v5.1.0/block/file-posix.c#L3390
https://github.com/qemu/qemu/blob/v5.1.0/block/file-posix.c#L3504

On Mon, 7 Sep 2020 at 03:50, Dmitry Fomichev  wrote:
>
>
>
> > -Original Message-
> > From: Qemu-block  > bounces+dmitry.fomichev=wdc@nongnu.org> On Behalf Of Maxim
> > Levitsky
> > Sent: Sunday, September 6, 2020 1:06 PM
> > To: Tom Yan ; ebl...@redhat.com;
> > pbonz...@redhat.com; f...@euphon.net; anie...@linux.vnet.ibm.com;
> > kw...@redhat.com; mre...@redhat.com
> > Cc: qemu-devel@nongnu.org; qemu-bl...@nongnu.org
> > Subject: Re: [PATCH v4 2/2] file-posix: add sg_get_max_segments that
> > actually works with sg
> >
> > On Sun, 2020-09-06 at 23:26 +0800, Tom Yan wrote:
> > > I don't disagree with your proposal, but the thing is, do we even need
> > > another field/limit for case 1? For example, do we *really* need to
> > > clamp sizes[2] (NBD_INFO_BLOCK_SIZE) in nbd/server.c and
> > > max_io_sectors (SCSIBlockLimits) in hw/scsi/scsi-disk.c to any kind
> > > of "dynamic" limit?
> >
> > It depends on whether we have other block drivers that have IO
> > segment/max transfer sizes that we need to enforce, other than the
> > SG_IO case.
> >
> > Knowing that the commit that added these limits to file-posix, which I
> > was fixing, is relatively new, I guess there are other cases for these
> > limits.
> >
> > I'll check this tomorrow.
> >
> > >
> > > Either way I can add another field (`max_pwrite`, maybe?) to
> > > BlockLimits, as an infrastructure/hint, but what should be switched to
> > > it, and what value should each block driver set it to, is probably
> > > beyond what I can take.
> >
> > Implementation wise, I think I can do this, but I'll wait few days for the
> > feedback on my proposal.
> >
> > Thanks for finding this bug!
>
> There was also 
> https://lore.kernel.org/qemu-devel/20200811225122.17342-2-dmitry.fomic...@wdc.com/
> posted three weeks ago, but, seemingly, it was ignored... sorry if this
> bug is already being fixed in a different way, but I think the fix presented
> in that patch is worth considering.
>
> Very best,
> Dmitry
>
> >
> > Best regards,
> >   Maxim Levitsky
> >
> > >
> > > IMHO my patches should be merged first (they are really just fixing a
> > > regression and some bugs). If anyone finds it necessary and is capable
> > > and willing to fix the insufficiency of BlockLimits, they can do it
> > > afterwards with another patch series as an improvement. I can add a
> > > patch to remove the clamping in nbd/server.c as a perhaps-temporary
> > > measure to address the original issue, if desired.
> > >
> > > On Sun, 6 Sep 2020 at 20:53, Maxim Levitsky  wrote:
> > > > On Sun, 2020-09-06 at 19:04 +0800, Tom Yan wrote:
> > > > > Maybe you want to add some condition for this:
> > > > > https://github.com/qemu/qemu/blob/v5.1.0/nbd/server.c#L659
> > > > > Or not clamp it at all.
> > > > >
> > > > > On Sun, 6 Sep 2020 at 18:58, Tom Yan  wrote:
> > > > > > In commit 867eccfed84f96b54f4a432c510a02c2ce03b430, Levitsky appears
> > > > > > to have assumed that the only "SCSI Passthrough" is `-device
> > > > > > scsi-generic`, while the fact is there's also `-device scsi-block`
> > > > > > (passthrough without the sg driver). Unlike `-device scsi-hd`, getting
> > > > > > max_sectors is necessary for it (more precisely, hw_max_sectors might be
> > > > > > what matters, but BLKSECTGET reports max_sectors, so).
> > > > > >
> > > > > > I'm unsure about how qemu-nbd works, but the commit clearly wasn't the
> > > > > > right approach to fix the original issue it addresses. (It should for
> > > > > > example ignore the max_transfer if it will never matter to it, 

Re: [PATCH v4 2/2] file-posix: add sg_get_max_segments that actually works with sg

2020-09-06 Thread Tom Yan
I don't disagree with your proposal, but the thing is, do we even need
another field/limit for case 1? For example, do we *really* need to
clamp sizes[2] (NBD_INFO_BLOCK_SIZE) in nbd/server.c and
max_io_sectors (SCSIBlockLimits) in hw/scsi/scsi-disk.c to to any kind
of "dynamic" limit?

Either way I can add another field (`max_pwrite`, maybe?) to
BlockLimits, as an infrastructure/hint, but what should be switched to
it, and what value should each block driver set it to, is probably
beyond what I can take.

IMHO my patches should be merged first (they are really just fixing a
regression and some bugs). If anyone finds it necessary and is capable
and willing to fix the insufficiency of BlockLimits, they can do it
afterwards with another patch series as an improvement. I can add a
patch to remove the clamping in nbd/server.c as a perhaps-temporary
measure to address the original issue, if desired.

On Sun, 6 Sep 2020 at 20:53, Maxim Levitsky  wrote:
>
> On Sun, 2020-09-06 at 19:04 +0800, Tom Yan wrote:
> > Maybe you want to add some condition for this:
> > https://github.com/qemu/qemu/blob/v5.1.0/nbd/server.c#L659
> > Or not clamp it at all.
> >
> > On Sun, 6 Sep 2020 at 18:58, Tom Yan  wrote:
> > > In commit 867eccfed84f96b54f4a432c510a02c2ce03b430, Levitsky appears
> > > to have assumed that the only "SCSI Passthrough" is `-device
> > > scsi-generic`, while the fact is there's also `-device scsi-block`
> > > (passthrough without the sg driver). Unlike `-device scsi-hd`, getting
> > > max_sectors is necessary for it (more precisely, hw_max_sectors might be
> > > what matters, but BLKSECTGET reports max_sectors, so).
> > >
> > > I'm unsure about how qemu-nbd works, but the commit clearly wasn't the
> > > right approach to fix the original issue it addresses. (It should for
> > > example ignore the max_transfer if it will never matter to it, or
> > > override it in certain cases; when I glimpsed over
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1647104, I don't see how
> > > it could be file-posix problem when it is reporting the right thing,
> > > regardless of whether "removing" the code helps.)
> > >
> > > I don't think we want to "mark" `-device scsi-block` as sg either. It
> > > will probably bring even more unexpected problems, because they are
> > > literally different sets of things behind the scene / in the kernel.
>
> Yes, I did overlook the fact that scsi-block is a kind of hybrid
> passthrough device, doing SG_IO for some things and regular IO for others.
>
> I don't think that my approach was the wrong way to fix the problem, but
> as you found out, abusing the 'bs->sg' hack (which I would be very happy
> to remove completely) wasn't a good idea.
> I actually was aware of scsi-block and that it does SG_IO but it was
> forgotten somewhere on the way.
>
> So in summary what the problem is and what I think is the right solution:
>
>
> Each qemu block driver exposes block limits and assumes that they are the same
> for two IO interfaces the block driver can expose:
>
> 1. Regular qemu blk_pread/pwrite alike functions
> 2. blk_ioctl (tiny wrapper against SG_IO), supported by posix-raw on
>    host block devices/sg char devices, and by iscsi
>
> The problem is that these two interfaces can have different block limits.
>
> I don't know about iscsi, but for files, doing regular IO is always
> unlimited, since it passes through the kernel block layer and is segmented
> there on demand, which is faster than doing it in userspace, while SG_IO is
> passed as is to the underlying SCSI device and lacks this segmentation.
>
> Regardless of how NBD uses these limits, I think that these limits should
> be correctly exposed by the block drivers, and therefore I propose that
> each qemu block driver would expose a pair of block limits:
> one for the regular block IO, and the other for SG_IO.
>
> Then block driver clients (like the scsi devices that you mention, nbd, etc)
> can choose which limit to use, depending on which IO api they use.
> scsi-generic and scsi-block can use the SG_IO limits, while the rest can
> use the normal (unlimited for file I/O) limits, including the NBD.
>
> Best regards,
> Maxim Levitsky
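
(A rough sketch of what such a split might look like; the extra field
name here is hypothetical, not from any posted patch:

    typedef struct BlockLimits {
        /* ... existing fields ... */
        uint32_t max_transfer;     /* limit for regular blk_pread/pwrite I/O */
        uint32_t max_sg_transfer;  /* separate limit for SG_IO passthrough */
    } BlockLimits;

Clients like scsi-generic/scsi-block would then consult the SG_IO limit,
while NBD and the emulated block devices would use the regular one.)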
>
> > >
> > > On Fri, 4 Sep 2020 at 10:09, Tom Yan  wrote:
> > > > sg devices have different major/minor than their corresponding
> > > > block devices. Using sysfs to get max segments never really worked
> > > > for them.
> > > >
> > > > Fortunately the sg driver provides an ioctl to get sg_tablesize,
> > > > which is appa

Re: [PATCH v4 2/2] file-posix: add sg_get_max_segments that actually works with sg

2020-09-06 Thread Tom Yan
Maybe you want to add some condition for this:
https://github.com/qemu/qemu/blob/v5.1.0/nbd/server.c#L659
Or not clamp it at all.

On Sun, 6 Sep 2020 at 18:58, Tom Yan  wrote:
>
> In commit 867eccfed84f96b54f4a432c510a02c2ce03b430, Levitsky appears
> to have assumed that the only "SCSI Passthrough" is `-device
> scsi-generic`, while the fact is there's also `-device scsi-block`
> (passthrough without the sg driver). Unlike `-device scsi-hd`, getting
> max_sectors is necessary to it (more precisely, hw_max_sectors might
> what matters, but BLKSECTGET reports max_sectors, so).
>
> I'm unsure about how qemu-nbd works, but the commit clearly wasn't the
> right approach to fix the original issue it addresses. (It should for
> example ignore the max_transfer if it will never matter in to it, or
> overrides it in certain cases; when I glimpsed over
> https://bugzilla.redhat.com/show_bug.cgi?id=1647104, I don't see how
> it could be file-posix problem when it is reporting the right thing,
> regardless of whether "removing" the code helps.)
>
> I don't think we want to "mark" `-device scsi-block` as sg either. It
> will probably bring even more unexpected problems, because they are
> literally different sets of things behind the scene / in the kernel.
>
> On Fri, 4 Sep 2020 at 10:09, Tom Yan  wrote:
> >
> > sg devices have different major/minor than their corresponding
> > block devices. Using sysfs to get max segments never really worked
> > for them.
> >
> > Fortunately the sg driver provides an ioctl to get sg_tablesize,
> > which is apparently equivalent to max segments.
> >
> > Signed-off-by: Tom Yan 
> > ---
> >  block/file-posix.c | 17 -
> >  1 file changed, 16 insertions(+), 1 deletion(-)
> >
> > diff --git a/block/file-posix.c b/block/file-posix.c
> > index 411a23cf99..9e37594145 100644
> > --- a/block/file-posix.c
> > +++ b/block/file-posix.c
> > @@ -1178,6 +1178,21 @@ static int sg_get_max_transfer_length(int fd)
> >  #endif
> >  }
> >
> > +static int sg_get_max_segments(int fd)
> > +{
> > +#ifdef SG_GET_SG_TABLESIZE
> > +long max_segments = 0;
> > +
> > +if (ioctl(fd, SG_GET_SG_TABLESIZE, &max_segments) == 0) {
> > +return max_segments;
> > +} else {
> > +return -errno;
> > +}
> > +#else
> > +return -ENOSYS;
> > +#endif
> > +}
> > +
> >  static int get_max_transfer_length(int fd)
> >  {
> >  #if defined(BLKSECTGET) && defined(BLKSSZGET)
> > @@ -1268,7 +1283,7 @@ static void hdev_refresh_limits(BlockDriverState *bs, 
> > Error **errp)
> >  bs->bl.max_transfer = pow2floor(ret);
> >  }
> >
> > -ret = get_max_segments(s->fd);
> > +ret = bs->sg ? sg_get_max_segments(s->fd) : get_max_segments(s->fd);
> >  if (ret > 0) {
> >  bs->bl.max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
> > ret * qemu_real_host_page_size);
> > --
> > 2.28.0
> >



Re: [PATCH v4 2/2] file-posix: add sg_get_max_segments that actually works with sg

2020-09-06 Thread Tom Yan
In commit 867eccfed84f96b54f4a432c510a02c2ce03b430, Levitsky appears
to have assumed that the only "SCSI Passthrough" is `-device
scsi-generic`, while the fact is there's also `-device scsi-block`
(passthrough without the sg driver). Unlike `-device scsi-hd`, getting
max_sectors is necessary to it (more precisely, hw_max_sectors might
what matters, but BLKSECTGET reports max_sectors, so).

I'm unsure about how qemu-nbd works, but the commit clearly wasn't the
right approach to fix the original issue it addresses. (It should for
example ignore the max_transfer if it will never matter in to it, or
overrides it in certain cases; when I glimpsed over
https://bugzilla.redhat.com/show_bug.cgi?id=1647104, I don't see how
it could be file-posix problem when it is reporting the right thing,
regardless of whether "removing" the code helps.)

I don't think we want to "mark" `-device scsi-block` as sg either. It
will probably bring even more unexpected problems, because they are
literally different sets of things behind the scene / in the kernel.

On Fri, 4 Sep 2020 at 10:09, Tom Yan  wrote:
>
> sg devices have different major/minor than their corresponding
> block devices. Using sysfs to get max segments never really worked
> for them.
>
> Fortunately the sg driver provides an ioctl to get sg_tablesize,
> which is apparently equivalent to max segments.
>
> Signed-off-by: Tom Yan 
> ---
>  block/file-posix.c | 17 -
>  1 file changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/block/file-posix.c b/block/file-posix.c
> index 411a23cf99..9e37594145 100644
> --- a/block/file-posix.c
> +++ b/block/file-posix.c
> @@ -1178,6 +1178,21 @@ static int sg_get_max_transfer_length(int fd)
>  #endif
>  }
>
> +static int sg_get_max_segments(int fd)
> +{
> +#ifdef SG_GET_SG_TABLESIZE
> +long max_segments = 0;
> +
> > +if (ioctl(fd, SG_GET_SG_TABLESIZE, &max_segments) == 0) {
> +return max_segments;
> +} else {
> +return -errno;
> +}
> +#else
> +return -ENOSYS;
> +#endif
> +}
> +
>  static int get_max_transfer_length(int fd)
>  {
>  #if defined(BLKSECTGET) && defined(BLKSSZGET)
> @@ -1268,7 +1283,7 @@ static void hdev_refresh_limits(BlockDriverState *bs, 
> Error **errp)
>  bs->bl.max_transfer = pow2floor(ret);
>  }
>
> -ret = get_max_segments(s->fd);
> +ret = bs->sg ? sg_get_max_segments(s->fd) : get_max_segments(s->fd);
>  if (ret > 0) {
>  bs->bl.max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
> ret * qemu_real_host_page_size);
> --
> 2.28.0
>



[PATCH v4 2/2] file-posix: add sg_get_max_segments that actually works with sg

2020-09-03 Thread Tom Yan
sg devices have different major/minor than their corresponding
block devices. Using sysfs to get max segments never really worked
for them.

Fortunately the sg driver provides an ioctl to get sg_tablesize,
which is apparently equivalent to max segments.

Signed-off-by: Tom Yan 
---
 block/file-posix.c | 17 -
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index 411a23cf99..9e37594145 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1178,6 +1178,21 @@ static int sg_get_max_transfer_length(int fd)
 #endif
 }
 
+static int sg_get_max_segments(int fd)
+{
+#ifdef SG_GET_SG_TABLESIZE
+long max_segments = 0;
+
+if (ioctl(fd, SG_GET_SG_TABLESIZE, &max_segments) == 0) {
+return max_segments;
+} else {
+return -errno;
+}
+#else
+return -ENOSYS;
+#endif
+}
+
 static int get_max_transfer_length(int fd)
 {
 #if defined(BLKSECTGET) && defined(BLKSSZGET)
@@ -1268,7 +1283,7 @@ static void hdev_refresh_limits(BlockDriverState *bs, 
Error **errp)
 bs->bl.max_transfer = pow2floor(ret);
 }
 
-ret = get_max_segments(s->fd);
+ret = bs->sg ? sg_get_max_segments(s->fd) : get_max_segments(s->fd);
 if (ret > 0) {
 bs->bl.max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
ret * qemu_real_host_page_size);
-- 
2.28.0




[PATCH v4 1/2] file-posix: split hdev_refresh_limits from raw_refresh_limits

2020-09-03 Thread Tom Yan
We can and should get max transfer length and max segments for all host
devices / cdroms (on Linux).

Also use MIN_NON_ZERO instead when we clamp max transfer length against
max segments.

Signed-off-by: Tom Yan 
---
v2: fix get_max_transfer_length for non-sg devices; also add/fix
sg_get_max_segments for sg devices
v3: fix typo in get_max_transfer_length
v4: fix accidental .orig inclusion
 block/file-posix.c | 57 +-
 1 file changed, 41 insertions(+), 16 deletions(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index 9a00d4190a..411a23cf99 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1178,7 +1178,24 @@ static int sg_get_max_transfer_length(int fd)
 #endif
 }
 
-static int sg_get_max_segments(int fd)
+static int get_max_transfer_length(int fd)
+{
+#if defined(BLKSECTGET) && defined(BLKSSZGET)
+int sect = 0;
+int ssz = 0;
+
+if (ioctl(fd, BLKSECTGET, &sect) == 0 &&
+ioctl(fd, BLKSSZGET, &ssz) == 0) {
+return sect * ssz;
+} else {
+return -errno;
+}
+#else
+return -ENOSYS;
+#endif
+}
+
+static int get_max_segments(int fd)
 {
 #ifdef CONFIG_LINUX
 char buf[32];
@@ -1233,23 +1250,31 @@ static void raw_refresh_limits(BlockDriverState *bs, 
Error **errp)
 {
 BDRVRawState *s = bs->opaque;
 
-if (bs->sg) {
-int ret = sg_get_max_transfer_length(s->fd);
+raw_probe_alignment(bs, s->fd, errp);
+bs->bl.min_mem_alignment = s->buf_align;
+bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
+}
+
+static void hdev_refresh_limits(BlockDriverState *bs, Error **errp)
+{
+BDRVRawState *s = bs->opaque;
+
+/* BLKSECTGET has always been implemented wrongly in the sg driver
+   and BLKSSZGET has never been supported by it*/
+int ret = bs->sg ? sg_get_max_transfer_length(s->fd) :
+   get_max_transfer_length(s->fd);
 
-if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
-bs->bl.max_transfer = pow2floor(ret);
-}
+if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
+bs->bl.max_transfer = pow2floor(ret);
+}
 
-ret = sg_get_max_segments(s->fd);
-if (ret > 0) {
-bs->bl.max_transfer = MIN(bs->bl.max_transfer,
-  ret * qemu_real_host_page_size);
-}
+ret = get_max_segments(s->fd);
+if (ret > 0) {
+bs->bl.max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
+   ret * qemu_real_host_page_size);
 }
 
-raw_probe_alignment(bs, s->fd, errp);
-bs->bl.min_mem_alignment = s->buf_align;
-bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
+raw_refresh_limits(bs, errp);
 }
 
 static int check_for_dasd(int fd)
@@ -3604,7 +3629,7 @@ static BlockDriver bdrv_host_device = {
 .bdrv_co_pdiscard   = hdev_co_pdiscard,
 .bdrv_co_copy_range_from = raw_co_copy_range_from,
 .bdrv_co_copy_range_to  = raw_co_copy_range_to,
-.bdrv_refresh_limits = raw_refresh_limits,
+.bdrv_refresh_limits = hdev_refresh_limits,
 .bdrv_io_plug = raw_aio_plug,
 .bdrv_io_unplug = raw_aio_unplug,
 .bdrv_attach_aio_context = raw_aio_attach_aio_context,
@@ -3728,7 +3753,7 @@ static BlockDriver bdrv_host_cdrom = {
 .bdrv_co_preadv = raw_co_preadv,
 .bdrv_co_pwritev= raw_co_pwritev,
 .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
-.bdrv_refresh_limits = raw_refresh_limits,
+.bdrv_refresh_limits = hdev_refresh_limits,
 .bdrv_io_plug = raw_aio_plug,
 .bdrv_io_unplug = raw_aio_unplug,
 .bdrv_attach_aio_context = raw_aio_attach_aio_context,
-- 
2.28.0




[PATCH v3 2/2] file-posix: add sg_get_max_segments that actually works with sg

2020-09-03 Thread Tom Yan
sg devices have different major/minor than their corresponding
block devices. Using sysfs to get max segments never really worked
for them.

Fortunately the sg driver provides an ioctl to get sg_tablesize,
which is apparently equivalent to max segments.

Signed-off-by: Tom Yan 
---
 block/file-posix.c  |   17 +-
 block/file-posix.c.orig | 3920 +++
 2 files changed, 3936 insertions(+), 1 deletion(-)
 create mode 100644 block/file-posix.c.orig

diff --git a/block/file-posix.c b/block/file-posix.c
index 411a23cf99..9e37594145 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1178,6 +1178,21 @@ static int sg_get_max_transfer_length(int fd)
 #endif
 }
 
+static int sg_get_max_segments(int fd)
+{
+#ifdef SG_GET_SG_TABLESIZE
+long max_segments = 0;
+
+if (ioctl(fd, SG_GET_SG_TABLESIZE, &max_segments) == 0) {
+return max_segments;
+} else {
+return -errno;
+}
+#else
+return -ENOSYS;
+#endif
+}
+
 static int get_max_transfer_length(int fd)
 {
 #if defined(BLKSECTGET) && defined(BLKSSZGET)
@@ -1268,7 +1283,7 @@ static void hdev_refresh_limits(BlockDriverState *bs, 
Error **errp)
 bs->bl.max_transfer = pow2floor(ret);
 }
 
-ret = get_max_segments(s->fd);
+ret = bs->sg ? sg_get_max_segments(s->fd) : get_max_segments(s->fd);
 if (ret > 0) {
 bs->bl.max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
ret * qemu_real_host_page_size);
diff --git a/block/file-posix.c.orig b/block/file-posix.c.orig
new file mode 100644
index 00..411a23cf99
--- /dev/null
+++ b/block/file-posix.c.orig
@@ -0,0 +1,3920 @@
+/*
+ * Block driver for RAW files (posix)
+ *
+ * Copyright (c) 2006 Fabrice Bellard
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to 
deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 
FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+#include "qapi/error.h"
+#include "qemu/cutils.h"
+#include "qemu/error-report.h"
+#include "block/block_int.h"
+#include "qemu/module.h"
+#include "qemu/option.h"
+#include "qemu/units.h"
+#include "trace.h"
+#include "block/thread-pool.h"
+#include "qemu/iov.h"
+#include "block/raw-aio.h"
+#include "qapi/qmp/qdict.h"
+#include "qapi/qmp/qstring.h"
+
+#include "scsi/pr-manager.h"
+#include "scsi/constants.h"
+
+#if defined(__APPLE__) && (__MACH__)
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+//#include 
+#include 
+#include 
+#endif
+
+#ifdef __sun__
+#define _POSIX_PTHREAD_SEMANTICS 1
+#include 
+#endif
+#ifdef __linux__
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#ifdef __s390__
+#include 
+#endif
+#ifndef FS_NOCOW_FL
+#define FS_NOCOW_FL 0x0080 /* Do not cow file */
+#endif
+#endif
+#if defined(CONFIG_FALLOCATE_PUNCH_HOLE) || 
defined(CONFIG_FALLOCATE_ZERO_RANGE)
+#include 
+#endif
+#if defined (__FreeBSD__) || defined(__FreeBSD_kernel__)
+#include 
+#include 
+#endif
+
+#ifdef __OpenBSD__
+#include 
+#include 
+#include 
+#endif
+
+#ifdef __NetBSD__
+#include 
+#include 
+#include 
+#include 
+#endif
+
+#ifdef __DragonFly__
+#include 
+#include 
+#endif
+
+#ifdef CONFIG_XFS
+#include 
+#endif
+
+#include "trace.h"
+
+/* OS X does not have O_DSYNC */
+#ifndef O_DSYNC
+#ifdef O_SYNC
+#define O_DSYNC O_SYNC
+#elif defined(O_FSYNC)
+#define O_DSYNC O_FSYNC
+#endif
+#endif
+
+/* Approximate O_DIRECT with O_DSYNC if O_DIRECT isn't available */
+#ifndef O_DIRECT
+#define O_DIRECT O_DSYNC
+#endif
+
+#define FTYPE_FILE   0
+#define FTYPE_CD 1
+
+#define MAX_BLOCKSIZE  4096
+
+/* Posix file locking bytes. Libvirt takes byte 0, we start from higher bytes,
+ * leaving a few more 

[PATCH v3 1/2] file-posix: split hdev_refresh_limits from raw_refresh_limits

2020-09-03 Thread Tom Yan
We can and should get max transfer length and max segments for all host
devices / cdroms (on Linux).

Also use MIN_NON_ZERO instead when we clamp max transfer length against
max segments.

Signed-off-by: Tom Yan 
---
v2: fix get_max_transfer_length for non-sg devices; also add/fix
sg_get_max_segments for sg devices
v3: fix typo in get_max_transfer_length
 block/file-posix.c | 57 +-
 1 file changed, 41 insertions(+), 16 deletions(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index 9a00d4190a..411a23cf99 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1178,7 +1178,24 @@ static int sg_get_max_transfer_length(int fd)
 #endif
 }
 
-static int sg_get_max_segments(int fd)
+static int get_max_transfer_length(int fd)
+{
+#if defined(BLKSECTGET) && defined(BLKSSZGET)
+int sect = 0;
+int ssz = 0;
+
+if (ioctl(fd, BLKSECTGET, &sect) == 0 &&
+ioctl(fd, BLKSSZGET, &ssz) == 0) {
+return sect * ssz;
+} else {
+return -errno;
+}
+#else
+return -ENOSYS;
+#endif
+}
+
+static int get_max_segments(int fd)
 {
 #ifdef CONFIG_LINUX
 char buf[32];
@@ -1233,23 +1250,31 @@ static void raw_refresh_limits(BlockDriverState *bs, 
Error **errp)
 {
 BDRVRawState *s = bs->opaque;
 
-if (bs->sg) {
-int ret = sg_get_max_transfer_length(s->fd);
+raw_probe_alignment(bs, s->fd, errp);
+bs->bl.min_mem_alignment = s->buf_align;
+bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
+}
+
+static void hdev_refresh_limits(BlockDriverState *bs, Error **errp)
+{
+BDRVRawState *s = bs->opaque;
+
+/* BLKSECTGET has always been implemented wrongly in the sg driver
+   and BLKSSZGET has never been supported by it*/
+int ret = bs->sg ? sg_get_max_transfer_length(s->fd) :
+   get_max_transfer_length(s->fd);
 
-if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
-bs->bl.max_transfer = pow2floor(ret);
-}
+if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
+bs->bl.max_transfer = pow2floor(ret);
+}
 
-ret = sg_get_max_segments(s->fd);
-if (ret > 0) {
-bs->bl.max_transfer = MIN(bs->bl.max_transfer,
-  ret * qemu_real_host_page_size);
-}
+ret = get_max_segments(s->fd);
+if (ret > 0) {
+bs->bl.max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
+   ret * qemu_real_host_page_size);
 }
 
-raw_probe_alignment(bs, s->fd, errp);
-bs->bl.min_mem_alignment = s->buf_align;
-bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
+raw_refresh_limits(bs, errp);
 }
 
 static int check_for_dasd(int fd)
@@ -3604,7 +3629,7 @@ static BlockDriver bdrv_host_device = {
 .bdrv_co_pdiscard   = hdev_co_pdiscard,
 .bdrv_co_copy_range_from = raw_co_copy_range_from,
 .bdrv_co_copy_range_to  = raw_co_copy_range_to,
-.bdrv_refresh_limits = raw_refresh_limits,
+.bdrv_refresh_limits = hdev_refresh_limits,
 .bdrv_io_plug = raw_aio_plug,
 .bdrv_io_unplug = raw_aio_unplug,
 .bdrv_attach_aio_context = raw_aio_attach_aio_context,
@@ -3728,7 +3753,7 @@ static BlockDriver bdrv_host_cdrom = {
 .bdrv_co_preadv = raw_co_preadv,
 .bdrv_co_pwritev= raw_co_pwritev,
 .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
-.bdrv_refresh_limits = raw_refresh_limits,
+.bdrv_refresh_limits = hdev_refresh_limits,
 .bdrv_io_plug = raw_aio_plug,
 .bdrv_io_unplug = raw_aio_unplug,
 .bdrv_attach_aio_context = raw_aio_attach_aio_context,
-- 
2.28.0




[PATCH RESEND v2 2/2] file-posix: add sg_get_max_segments that actually works with sg

2020-09-03 Thread Tom Yan
sg devices have different major/minor than their corresponding
block devices. Using sysfs to get max segments never really worked
for them.

Fortunately the sg driver provides an ioctl to get sg_tablesize,
which is apparently equivalent to max segments.

Signed-off-by: Tom Yan 
---
 block/file-posix.c | 17 -
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index 0c9124c8aa..d2cd9a3362 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1178,6 +1178,21 @@ static int sg_get_max_transfer_length(int fd)
 #endif
 }
 
+static int sg_get_max_segments(int fd)
+{
+#ifdef SG_GET_SG_TABLESIZE
+long max_segments = 0;
+
+if (ioctl(fd, SG_GET_SG_TABLESIZE, &max_segments) == 0) {
+return max_segments;
+} else {
+return -errno;
+}
+#else
+return -ENOSYS;
+#endif
+}
+
 static int get_max_transfer_length(int fd)
 {
 #if defined(BLKSECTGET) && defined(BLKSSZGET)
@@ -1267,7 +1282,7 @@ static void hdev_refresh_limits(BlockDriverState *bs, 
Error **errp)
 bs->bl.max_transfer = pow2floor(ret);
 }
 
-ret = get_max_segments(s->fd);
+ret = bs->sg ? sg_get_max_segments(s->fd) : get_max_segments(s->fd);
 if (ret > 0) {
 bs->bl.max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
ret * qemu_real_host_page_size);
-- 
2.28.0




[PATCH RESEND v2 1/2] file-posix: split hdev_refresh_limits from raw_refresh_limits

2020-09-03 Thread Tom Yan
We can and should get max transfer length and max segments for all host
devices / cdroms (on Linux).

Also use MIN_NON_ZERO instead when we clamp max transfer length against
max segments.

Signed-off-by: Tom Yan 
---
v2: fix get_max_transfer_length for non-sg devices; also add/fix
sg_get_max_segments for sg devices
 block/file-posix.c | 56 +-
 1 file changed, 40 insertions(+), 16 deletions(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index 9a00d4190a..0c9124c8aa 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1178,7 +1178,23 @@ static int sg_get_max_transfer_length(int fd)
 #endif
 }
 
-static int sg_get_max_segments(int fd)
+static int get_max_transfer_length(int fd)
+{
+#if defined(BLKSECTGET) && defined(BLKSSZGET)
+int sect = 0;
+int ssz = 0;
+
+if (ioctl(fd, BLKSECTGET, &sect) == 0 && ioctl(fd, BLKSSZGET, &ssz)) {
+return sect * ssz;
+} else {
+return -errno;
+}
+#else
+return -ENOSYS;
+#endif
+}
+
+static int get_max_segments(int fd)
 {
 #ifdef CONFIG_LINUX
 char buf[32];
@@ -1233,23 +1249,31 @@ static void raw_refresh_limits(BlockDriverState *bs, 
Error **errp)
 {
 BDRVRawState *s = bs->opaque;
 
-if (bs->sg) {
-int ret = sg_get_max_transfer_length(s->fd);
+raw_probe_alignment(bs, s->fd, errp);
+bs->bl.min_mem_alignment = s->buf_align;
+bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
+}
+
+static void hdev_refresh_limits(BlockDriverState *bs, Error **errp)
+{
+BDRVRawState *s = bs->opaque;
+
+/* BLKSECTGET has always been implemented wrongly in the sg driver
+   and BLKSSZGET has never been supported by it*/
+int ret = bs->sg ? sg_get_max_transfer_length(s->fd) :
+   get_max_transfer_length(s->fd);
 
-if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
-bs->bl.max_transfer = pow2floor(ret);
-}
+if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
+bs->bl.max_transfer = pow2floor(ret);
+}
 
-ret = sg_get_max_segments(s->fd);
-if (ret > 0) {
-bs->bl.max_transfer = MIN(bs->bl.max_transfer,
-  ret * qemu_real_host_page_size);
-}
+ret = get_max_segments(s->fd);
+if (ret > 0) {
+bs->bl.max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
+   ret * qemu_real_host_page_size);
 }
 
-raw_probe_alignment(bs, s->fd, errp);
-bs->bl.min_mem_alignment = s->buf_align;
-bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
+raw_refresh_limits(bs, errp);
 }
 
 static int check_for_dasd(int fd)
@@ -3604,7 +3628,7 @@ static BlockDriver bdrv_host_device = {
 .bdrv_co_pdiscard   = hdev_co_pdiscard,
 .bdrv_co_copy_range_from = raw_co_copy_range_from,
 .bdrv_co_copy_range_to  = raw_co_copy_range_to,
-.bdrv_refresh_limits = raw_refresh_limits,
+.bdrv_refresh_limits = hdev_refresh_limits,
 .bdrv_io_plug = raw_aio_plug,
 .bdrv_io_unplug = raw_aio_unplug,
 .bdrv_attach_aio_context = raw_aio_attach_aio_context,
@@ -3728,7 +3752,7 @@ static BlockDriver bdrv_host_cdrom = {
 .bdrv_co_preadv = raw_co_preadv,
 .bdrv_co_pwritev= raw_co_pwritev,
 .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
-.bdrv_refresh_limits = raw_refresh_limits,
+.bdrv_refresh_limits = hdev_refresh_limits,
 .bdrv_io_plug = raw_aio_plug,
 .bdrv_io_unplug = raw_aio_unplug,
 .bdrv_attach_aio_context = raw_aio_attach_aio_context,
-- 
2.28.0




[PATCH v2 2/2] file-posix: add sg_get_max_segments that actually works with sg

2020-09-03 Thread Tom Yan
sg devices have different major/minor than their corresponding
block devices. Using sysfs to get max segments never really worked
for them.

Fortunately the sg driver provides an ioctl to get sg_tablesize,
which is apparently equivalent to max segments.

Signed-off-by: Tom Yan 
---
 block/file-posix.c | 17 -
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index 0c9124c8aa..d2cd9a3362 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1178,6 +1178,21 @@ static int sg_get_max_transfer_length(int fd)
 #endif
 }
 
+static int sg_get_max_segments(int fd)
+{
+#ifdef SG_GET_SG_TABLESIZE
+long max_segments = 0;
+
+if (ioctl(fd, SG_GET_SG_TABLESIZE, &max_segments) == 0) {
+return max_segments;
+} else {
+return -errno;
+}
+#else
+return -ENOSYS;
+#endif
+}
+
 static int get_max_transfer_length(int fd)
 {
 #if defined(BLKSECTGET) && defined(BLKSSZGET)
@@ -1267,7 +1282,7 @@ static void hdev_refresh_limits(BlockDriverState *bs, 
Error **errp)
 bs->bl.max_transfer = pow2floor(ret);
 }
 
-ret = get_max_segments(s->fd);
+ret = bs->sg ? sg_get_max_segments(s->fd) : get_max_segments(s->fd);
 if (ret > 0) {
 bs->bl.max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
ret * qemu_real_host_page_size);
-- 
2.28.0




[PATCH v2 1/2] file-posix: split hdev_refresh_limits from raw_refresh_limits

2020-09-03 Thread Tom Yan
We can and should get max transfer length and max segments for all host
devices / cdroms (on Linux).

Also use MIN_NON_ZERO instead when we clamp max transfer length against
max segments.

Signed-off-by: Tom Yan 
---
v2: fix get_max_transfer_length for non-sg devices; also add/fix
sg_get_max_segments for sg devices
 block/file-posix.c | 56 +-
 1 file changed, 40 insertions(+), 16 deletions(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index 9a00d4190a..0c9124c8aa 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1178,7 +1178,23 @@ static int sg_get_max_transfer_length(int fd)
 #endif
 }
 
-static int sg_get_max_segments(int fd)
+static int get_max_transfer_length(int fd)
+{
+#if defined(BLKSECTGET) && defined(BLKSSZGET)
+int sect = 0;
+int ssz = 0;
+
+if (ioctl(fd, BLKSECTGET, &sect) == 0 && ioctl(fd, BLKSSZGET, &ssz) == 0) {
+return sect * ssz;
+} else {
+return -errno;
+}
+#else
+return -ENOSYS;
+#endif
+}
+
+static int get_max_segments(int fd)
 {
 #ifdef CONFIG_LINUX
 char buf[32];
@@ -1233,23 +1249,31 @@ static void raw_refresh_limits(BlockDriverState *bs, 
Error **errp)
 {
 BDRVRawState *s = bs->opaque;
 
-if (bs->sg) {
-int ret = sg_get_max_transfer_length(s->fd);
+raw_probe_alignment(bs, s->fd, errp);
+bs->bl.min_mem_alignment = s->buf_align;
+bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
+}
+
+static void hdev_refresh_limits(BlockDriverState *bs, Error **errp)
+{
+BDRVRawState *s = bs->opaque;
+
+/* BLKSECTGET has always been implemented wrongly in the sg driver
+   and BLKSSZGET has never been supported by it */
+int ret = bs->sg ? sg_get_max_transfer_length(s->fd) :
+   get_max_transfer_length(s->fd);
 
-if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
-bs->bl.max_transfer = pow2floor(ret);
-}
+if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
+bs->bl.max_transfer = pow2floor(ret);
+}
 
-ret = sg_get_max_segments(s->fd);
-if (ret > 0) {
-bs->bl.max_transfer = MIN(bs->bl.max_transfer,
-  ret * qemu_real_host_page_size);
-}
+ret = get_max_segments(s->fd);
+if (ret > 0) {
+bs->bl.max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
+   ret * qemu_real_host_page_size);
 }
 
-raw_probe_alignment(bs, s->fd, errp);
-bs->bl.min_mem_alignment = s->buf_align;
-bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
+raw_refresh_limits(bs, errp);
 }
 
 static int check_for_dasd(int fd)
@@ -3604,7 +3628,7 @@ static BlockDriver bdrv_host_device = {
 .bdrv_co_pdiscard   = hdev_co_pdiscard,
 .bdrv_co_copy_range_from = raw_co_copy_range_from,
 .bdrv_co_copy_range_to  = raw_co_copy_range_to,
-.bdrv_refresh_limits = raw_refresh_limits,
+.bdrv_refresh_limits = hdev_refresh_limits,
 .bdrv_io_plug = raw_aio_plug,
 .bdrv_io_unplug = raw_aio_unplug,
 .bdrv_attach_aio_context = raw_aio_attach_aio_context,
@@ -3728,7 +3752,7 @@ static BlockDriver bdrv_host_cdrom = {
 .bdrv_co_preadv = raw_co_preadv,
 .bdrv_co_pwritev= raw_co_pwritev,
 .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
-.bdrv_refresh_limits = raw_refresh_limits,
+.bdrv_refresh_limits = hdev_refresh_limits,
 .bdrv_io_plug = raw_aio_plug,
 .bdrv_io_unplug = raw_aio_unplug,
 .bdrv_attach_aio_context = raw_aio_attach_aio_context,
-- 
2.28.0




Re: [PATCH] file-posix: split hdev_refresh_limits from raw_refresh_limits

2020-09-03 Thread Tom Yan
Oops I forgot max_transfer needs to be in bytes (and BLKSECTGET in sg
has always been implemented wrongly). Will send a new version.

On Fri, 4 Sep 2020 at 06:37, Tom Yan  wrote:
>
> sg_get_max_transfer_length() and sg_get_max_segments() have nothing
> to do with the sg driver or SG_IO at all. They can be and should be
> used on all host devices / cdroms (on Linux).
>
> Also use MIN_NON_ZERO instead when we clamp max_sectors against
> max_segments.
>
> Signed-off-by: Tom Yan 
> ---
>  block/file-posix.c | 39 ++-
>  1 file changed, 22 insertions(+), 17 deletions(-)
>
> diff --git a/block/file-posix.c b/block/file-posix.c
> index 9a00d4190a..a38f43af4f 100644
> --- a/block/file-posix.c
> +++ b/block/file-posix.c
> @@ -1163,7 +1163,7 @@ static void raw_reopen_abort(BDRVReopenState *state)
>  s->reopen_state = NULL;
>  }
>
> -static int sg_get_max_transfer_length(int fd)
> +static int get_max_transfer_length(int fd)
>  {
>  #ifdef BLKSECTGET
>  int max_bytes = 0;
> @@ -1178,7 +1178,7 @@ static int sg_get_max_transfer_length(int fd)
>  #endif
>  }
>
> -static int sg_get_max_segments(int fd)
> +static int get_max_segments(int fd)
>  {
>  #ifdef CONFIG_LINUX
>  char buf[32];
> @@ -1233,23 +1233,28 @@ static void raw_refresh_limits(BlockDriverState *bs, 
> Error **errp)
>  {
>  BDRVRawState *s = bs->opaque;
>
> -if (bs->sg) {
> -int ret = sg_get_max_transfer_length(s->fd);
> +raw_probe_alignment(bs, s->fd, errp);
> +bs->bl.min_mem_alignment = s->buf_align;
> +bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
> +}
>
> -if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
> -bs->bl.max_transfer = pow2floor(ret);
> -}
> +static void hdev_refresh_limits(BlockDriverState *bs, Error **errp)
> +{
> +BDRVRawState *s = bs->opaque;
>
> -ret = sg_get_max_segments(s->fd);
> -if (ret > 0) {
> -bs->bl.max_transfer = MIN(bs->bl.max_transfer,
> -  ret * qemu_real_host_page_size);
> -}
> +int ret = get_max_transfer_length(s->fd);
> +
> +if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
> +bs->bl.max_transfer = pow2floor(ret);
>  }
>
> -raw_probe_alignment(bs, s->fd, errp);
> -bs->bl.min_mem_alignment = s->buf_align;
> -bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
> +ret = get_max_segments(s->fd);
> +if (ret > 0) {
> +bs->bl.max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
> +   ret * qemu_real_host_page_size);
> +}
> +
> +raw_refresh_limits(bs, errp);
>  }
>
>  static int check_for_dasd(int fd)
> @@ -3604,7 +3609,7 @@ static BlockDriver bdrv_host_device = {
>  .bdrv_co_pdiscard   = hdev_co_pdiscard,
>  .bdrv_co_copy_range_from = raw_co_copy_range_from,
>  .bdrv_co_copy_range_to  = raw_co_copy_range_to,
> -.bdrv_refresh_limits = raw_refresh_limits,
> +.bdrv_refresh_limits = hdev_refresh_limits,
>  .bdrv_io_plug = raw_aio_plug,
>  .bdrv_io_unplug = raw_aio_unplug,
>  .bdrv_attach_aio_context = raw_aio_attach_aio_context,
> @@ -3728,7 +3733,7 @@ static BlockDriver bdrv_host_cdrom = {
>  .bdrv_co_preadv = raw_co_preadv,
>  .bdrv_co_pwritev= raw_co_pwritev,
>  .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
> -.bdrv_refresh_limits = raw_refresh_limits,
> +.bdrv_refresh_limits = hdev_refresh_limits,
>  .bdrv_io_plug = raw_aio_plug,
>  .bdrv_io_unplug = raw_aio_unplug,
>  .bdrv_attach_aio_context = raw_aio_attach_aio_context,
> --
> 2.28.0
>



[PATCH] file-posix: split hdev_refresh_limits from raw_refresh_limits

2020-09-03 Thread Tom Yan
sg_get_max_transfer_length() and sg_get_max_segments() have nothing
to do with the sg driver or SG_IO at all. They can be and should be
used on all host devices / cdroms (on Linux).

Also use MIN_NON_ZERO instead when we clamp max_sectors against
max_segments.

Signed-off-by: Tom Yan 
---
 block/file-posix.c | 39 ++-
 1 file changed, 22 insertions(+), 17 deletions(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index 9a00d4190a..a38f43af4f 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1163,7 +1163,7 @@ static void raw_reopen_abort(BDRVReopenState *state)
 s->reopen_state = NULL;
 }
 
-static int sg_get_max_transfer_length(int fd)
+static int get_max_transfer_length(int fd)
 {
 #ifdef BLKSECTGET
 int max_bytes = 0;
@@ -1178,7 +1178,7 @@ static int sg_get_max_transfer_length(int fd)
 #endif
 }
 
-static int sg_get_max_segments(int fd)
+static int get_max_segments(int fd)
 {
 #ifdef CONFIG_LINUX
 char buf[32];
@@ -1233,23 +1233,28 @@ static void raw_refresh_limits(BlockDriverState *bs, 
Error **errp)
 {
 BDRVRawState *s = bs->opaque;
 
-if (bs->sg) {
-int ret = sg_get_max_transfer_length(s->fd);
+raw_probe_alignment(bs, s->fd, errp);
+bs->bl.min_mem_alignment = s->buf_align;
+bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
+}
 
-if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
-bs->bl.max_transfer = pow2floor(ret);
-}
+static void hdev_refresh_limits(BlockDriverState *bs, Error **errp)
+{
+BDRVRawState *s = bs->opaque;
 
-ret = sg_get_max_segments(s->fd);
-if (ret > 0) {
-bs->bl.max_transfer = MIN(bs->bl.max_transfer,
-  ret * qemu_real_host_page_size);
-}
+int ret = get_max_transfer_length(s->fd);
+
+if (ret > 0 && ret <= BDRV_REQUEST_MAX_BYTES) {
+bs->bl.max_transfer = pow2floor(ret);
 }
 
-raw_probe_alignment(bs, s->fd, errp);
-bs->bl.min_mem_alignment = s->buf_align;
-bs->bl.opt_mem_alignment = MAX(s->buf_align, qemu_real_host_page_size);
+ret = get_max_segments(s->fd);
+if (ret > 0) {
+bs->bl.max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
+   ret * qemu_real_host_page_size);
+}
+
+raw_refresh_limits(bs, errp);
 }
 
 static int check_for_dasd(int fd)
@@ -3604,7 +3609,7 @@ static BlockDriver bdrv_host_device = {
 .bdrv_co_pdiscard   = hdev_co_pdiscard,
 .bdrv_co_copy_range_from = raw_co_copy_range_from,
 .bdrv_co_copy_range_to  = raw_co_copy_range_to,
-.bdrv_refresh_limits = raw_refresh_limits,
+.bdrv_refresh_limits = hdev_refresh_limits,
 .bdrv_io_plug = raw_aio_plug,
 .bdrv_io_unplug = raw_aio_unplug,
 .bdrv_attach_aio_context = raw_aio_attach_aio_context,
@@ -3728,7 +3733,7 @@ static BlockDriver bdrv_host_cdrom = {
 .bdrv_co_preadv = raw_co_preadv,
 .bdrv_co_pwritev= raw_co_pwritev,
 .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
-.bdrv_refresh_limits = raw_refresh_limits,
+.bdrv_refresh_limits = hdev_refresh_limits,
 .bdrv_io_plug = raw_aio_plug,
 .bdrv_io_unplug = raw_aio_unplug,
 .bdrv_attach_aio_context = raw_aio_attach_aio_context,
-- 
2.28.0




[Bug 1893634] Re: blk_get_max_transfer() works only with sg

2020-08-31 Thread Tom Yan
** Description changed:

  blk_get_max_transfer() is supposed to be able to get the max_sectors
  queue limit of the scsi device on the host side and is used in both
  scsi-generic.c (for scsi-generic and scsi-block) and scsi-disk.c (for
  scsi-hd) to set/change the max_xfer_len (and opt_xfer_len in the case of
- scsi-generic).
+ scsi-generic.c).
  
  However, it only works with the sg driver in doing so. It cannot get the
  queue limit with the sd driver and simply returns MAX_INT.
  
  qemu version 5.1.0
  kernel version 5.8.5
  
  Btw, is there a particular reason that it doesn't MIN_NON_ZERO against
  the original max_xfer_len:
  https://github.com/qemu/qemu/blob/v5.1.0/hw/scsi/scsi-generic.c#L172?

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1893634

Title:
  blk_get_max_transfer() works only with sg

Status in QEMU:
  New

Bug description:
  blk_get_max_transfer() is supposed to be able to get the max_sectors
  queue limit of the scsi device on the host side and is used in both
  scsi-generic.c (for scsi-generic and scsi-block) and scsi-disk.c (for
  scsi-hd) to set/change the max_xfer_len (and opt_xfer_len in the case
  of scsi-generic.c).

  However, it only works with the sg driver in doing so. It cannot get
  the queue limit with the sd driver and simply returns MAX_INT.

  qemu version 5.1.0
  kernel version 5.8.5

  Btw, is there a particular reason that it doesn't MIN_NON_ZERO against
  the original max_xfer_len:
  https://github.com/qemu/qemu/blob/v5.1.0/hw/scsi/scsi-generic.c#L172?

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1893634/+subscriptions



[Bug 1893634] [NEW] blk_get_max_transfer() works only with sg

2020-08-31 Thread Tom Yan
Public bug reported:

blk_get_max_transfer() is supposed to be able to get the max_sectors
queue limit of the scsi device on the host side and is used in both
scsi-generic.c (for scsi-generic and scsi-block) and scsi-disk.c (for
scsi-hd) to set/change the max_xfer_len (and opt_xfer_len in the case of
scsi-generic).

However, it only works with the sg driver in doing so. It cannot get the
queue limit with the sd driver and simply returns MAX_INT.
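
  To make the MAX_INT remark concrete, here is a rough paraphrase of the
  described behaviour (not the actual block-backend.c code, and
  example_get_max_transfer is a made-up name): when the sd path never fills
  in the max_transfer limit, the helper has nothing to clamp against and the
  caller just sees INT_MAX.

    #include <stdio.h>
    #include <stdint.h>
    #include <limits.h>

    /* Rough paraphrase only, not the actual block-backend.c code. */
    static uint64_t example_get_max_transfer(uint32_t bl_max_transfer)
    {
        uint64_t max = INT_MAX;

        if (bl_max_transfer && bl_max_transfer < max) {
            max = bl_max_transfer;   /* only clamp when the limit was probed */
        }
        return max;
    }

    int main(void)
    {
        /* sg: the ioctl path filled the limit in; sd: it stayed 0. */
        printf("sg: %llu\n",
               (unsigned long long)example_get_max_transfer(1024 * 1024));
        printf("sd: %llu\n",
               (unsigned long long)example_get_max_transfer(0));
        return 0;
    }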

qemu version 5.1.0
kernel version 5.8.5

Btw, is there a particular reason that it doesn't MIN_NON_ZERO against
the original max_xfer_len:
https://github.com/qemu/qemu/blob/v5.1.0/hw/scsi/scsi-generic.c#L172?

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1893634

Title:
  blk_get_max_transfer() works only with sg

Status in QEMU:
  New

Bug description:
  blk_get_max_transfer() is supposed to be able to get the max_sectors
  queue limit of the scsi device on the host side and is used in both
  scsi-generic.c (for scsi-generic and scsi-block) and scsi-disk.c (for
  scsi-hd) to set/change the max_xfer_len (and opt_xfer_len in the case
  of scsi-generic).

  However, it only works with the sg driver in doing so. It cannot get
  the queue limit with the sd driver and simply returns MAX_INT.

  qemu version 5.1.0
  kernel version 5.8.5

  Btw, is there a particular reason that it doesn't MIN_NON_ZERO against
  the original max_xfer_len:
  https://github.com/qemu/qemu/blob/v5.1.0/hw/scsi/scsi-generic.c#L172?

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1893634/+subscriptions



[Qemu-devel] [Bug 1649233] Re: scrolling does not work once mouse is grabbed

2016-12-12 Thread Tom Yan
Not sure if it is relevant, but there's a HID button device that cannot
be started in Windows 10 RS1.

** Attachment added: "Capture.PNG"
   
https://bugs.launchpad.net/qemu/+bug/1649233/+attachment/4790624/+files/Capture.PNG

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1649233

Title:
  scrolling does not work once mouse is grabbed

Status in QEMU:
  New

Bug description:
  The title pretty much told it all. It occurs in Windows 10 RS1 on qemu
  2.7.0. Interestingly, I can scroll in the guest if the mouse is not
  grabbed. So using usb-tablet sort of works around it, but if I
  explicitly grab the mouse with Ctrl+Alt+G, scrolling will also stop
  working.

  The host is Arch Linux so the qemu build uses gtk(3) for GUI by
  default. I wanted to test with sdl but it seems sdl support in qemu is
  sort of broken that I can't even start the virtual machine properly
  with that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1649233/+subscriptions



[Qemu-devel] [Bug 1649233] [NEW] scrolling does not work once mouse is grabbed

2016-12-12 Thread Tom Yan
Public bug reported:

The title pretty much tells it all. It occurs in Windows 10 RS1 on qemu
2.7.0. Interestingly, I can scroll in the guest if the mouse is not
grabbed. So using usb-tablet sort of works around it, but if I
explicitly grab the mouse with Ctrl+Alt+G, scrolling will also stop
working.

The host is Arch Linux so the qemu build uses gtk(3) for GUI by default.
I wanted to test with sdl, but sdl support in qemu seems to be broken to
the point that I can't even start the virtual machine properly with it.

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1649233

Title:
  scrolling does not work once mouse is grabbed

Status in QEMU:
  New

Bug description:
  The title pretty much told it all. It occurs in Windows 10 RS1 on qemu
  2.7.0. Interestingly, I can scroll in the guest if the mouse is not
  grabbed. So using usb-tablet sort of works around it, but if I
  explicitly grab the mouse with Ctrl+Alt+G, scrolling will also stop
  working.

  The host is Arch Linux so the qemu build uses gtk(3) for GUI by
  default. I wanted to test with sdl but it seems sdl support in qemu is
  sort of broken that I can't even start the virtual machine properly
  with that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1649233/+subscriptions



[Qemu-devel] [Bug 1649233] Re: scrolling does not work once mouse is grabbed

2016-12-12 Thread Tom Yan
Full commands I used:

qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m 4G -drive
file=test.img,format=raw

qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m 4G -drive
file=test.img,format=raw -device nec-usb-xhci -device usb-kbd -device
usb-mouse

qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m 4G -drive
file=test.img,format=raw -device nec-usb-xhci -device usb-kbd -device
usb-tablet

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1649233

Title:
  scrolling does not work once mouse is grabbed

Status in QEMU:
  New

Bug description:
  The title pretty much told it all. It occurs in Windows 10 RS1 on qemu
  2.7.0. Interestingly, I can scroll in the guest if the mouse is not
  grabbed. So using usb-tablet sort of works around it, but if I
  explicitly grab the mouse with Ctrl+Alt+G, scrolling will also stop
  working.

  The host is Arch Linux so the qemu build uses gtk(3) for GUI by
  default. I wanted to test with sdl but it seems sdl support in qemu is
  sort of broken that I can't even start the virtual machine properly
  with that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1649233/+subscriptions



[Qemu-devel] [Bug 1576347] Re: Only one NVMe device is usable in Windows (10) guest

2016-12-12 Thread Tom Yan
The problem still exists with qemu 2.7.0 and Windows 10 RS1. Btw I just
did one more test: I disabled the working nvme controller in Device
Manager to see if it makes the non-working one come alive, but it does
not, even if I reboot the virtual machine.

For the record, the serial number translation is not fixed in RS1
either.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1576347

Title:
  Only one NVMe device is usable in Windows (10) guest

Status in QEMU:
  New

Bug description:
  Full command: qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m
  4G -net bridge -net nic -full-screen -drive
  file=ovmf_x64.bin,format=raw,if=pflash -drive
  file=disks/win16_ide.img,format=raw,cache=none,aio=native -drive
  file=disks/one.img,if=none,format=qcow2,id=one -drive
  file=disks/two.img,if=none,format=qcow2,id=two -device
  nvme,drive=one,serial=E86C3CFC43518D6F -device
  nvme,drive=two,serial=2BDAC262CF831698

  QEMU version: 2.5.0

  Kernel: 4.5.1 (Arch Linux)

  When there are two NVMe devices specified, only the second one will be
  usable in Windows. The following error is shown under "Device status"
  of the failed NVMe controller in Device Manager:

  "This device cannot start. (Code 10)

  The I/O device is configured incorrectly or the configuration
  parameters to the driver are incorrect."

  The only thing seems suspicious to me is that the nvme emulation in
  qemu does not have WWN/EUI-64 set for the devices, though I have no
  idea at all whether that is mandatory:

  "C:\Windows\system32>sg_vpd -i PD1
  Device Identification VPD page:
Addressed logical unit:
  designator type: SCSI name string,  code set: UTF-8
SCSI name string:
8086QEMU NVMe Ctrl  00012BDAC262CF831698

  C:\Windows\system32>sg_vpd -p sn PD1
  Unit serial number VPD page:
Unit serial number: ___."

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1576347/+subscriptions



[Qemu-devel] [Bug 1579306] Re: usb-uas does not work in Windows (10) guest

2016-12-08 Thread Tom Yan
I think they are two separate issues in usb-uas and usb-host
respectively. I probably should not have brought in the usb-host case
here but created another report for it instead.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1579306

Title:
  usb-uas does not work in Windows (10) guest

Status in QEMU:
  New

Bug description:
  When I attach a "-device usb-uas" to a VM with Windows 10 10586, the
  device fail to start with the following error in the guest:

  "The device cannot start. (Code 10)

  {Operation Failed}
  The requested operation was unsuccessful"

  If the host controller is nec-usb-xhci, there will be two of the
  following error on the terminal of the host as well:

  "qemu-system-x86_64: usb_uas_handle_control: unhandled control
  request"

  If it's usb-ehci, ich9-usb-ehci1 or ich9-usb-echi2, this will not show
  up on the host side, but the device stil fails with the same error on
  the guest side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1579306/+subscriptions



Re: [Qemu-devel] seabios does not support booting from usb-uas with nec-usb-xhci

2016-05-09 Thread Tom Yan
That's what I guessed too. Is there any plan to add it or is it out of
consideration?

On Monday, 9 May 2016, Gerd Hoffmann <kra...@redhat.com> wrote:

> On Fr, 2016-05-06 at 17:00 +0800, Tom Yan wrote:
> > Strangely, seabios supports booting from usb-uas with usb-ehci:
> >
> > https://drive.google.com/open?id=0B50PCxfm5KU1dUFTUG9HU0pEcXc
> >
> > and usb-bot with nec-usb-xhci:
> >
> > https://drive.google.com/open?id=0B50PCxfm5KU1QnVzdTBIdEFzZW8
> >
> > but not usb-uas with nec-usb-xhci:
> >
> > https://drive.google.com/open?id=0B50PCxfm5KU1Z0t1akNXaVZZLTQ
> >
> > This makes me wonder if it is just a missing feature of a bug.
>
> missing support for usb3 streams in seabios.
>
> cheers,
>   Gerd
>
>


[Qemu-devel] [Bug 1579306] Re: usb-uas does not work in Windows (10) guest

2016-05-07 Thread Tom Yan
Not sure if it's relevant, but when I try to completely pass through a
UASP device to the VM through usb-host, only the BOT/MSC part is exposed.
This is not the case when I do the same thing with a Linux guest (btw
usb-uas apparently works fine in a Linux guest as well).

** Attachment added: "xhci-host.png"
   
https://bugs.launchpad.net/qemu/+bug/1579306/+attachment/4657753/+files/xhci-host.png

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1579306

Title:
  usb-uas does not work in Windows (10) guest

Status in QEMU:
  New

Bug description:
  When I attach a "-device usb-uas" to a VM with Windows 10 10586, the
  device fail to start with the following error in the guest:

  "The device cannot start. (Code 10)

  {Operation Failed}
  The requested operation was unsuccessful"

  If the host controller is nec-usb-xhci, there will be two of the
  following error on the terminal of the host as well:

  "qemu-system-x86_64: usb_uas_handle_control: unhandled control
  request"

  If it's usb-ehci, ich9-usb-ehci1 or ich9-usb-echi2, this will not show
  up on the host side, but the device stil fails with the same error on
  the guest side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1579306/+subscriptions



[Qemu-devel] [Bug 1579306] Re: usb-uas does not work in Windows (10) guest

2016-05-07 Thread Tom Yan
Not sure if it's relevant, but when I try to completely pass through a
UASP device to the VM through usb-host, only the BOT/MSC part is exposed.
This is not the case when I do the same thing with a Linux guest (btw
usb-uas apparently works fine in a Linux guest as well).

** Attachment added: "xhci-block.png"
   
https://bugs.launchpad.net/qemu/+bug/1579306/+attachment/4657745/+files/xhci-block.png

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1579306

Title:
  usb-uas does not work in Windows (10) guest

Status in QEMU:
  New

Bug description:
  When I attach a "-device usb-uas" to a VM with Windows 10 10586, the
  device fail to start with the following error in the guest:

  "The device cannot start. (Code 10)

  {Operation Failed}
  The requested operation was unsuccessful"

  If the host controller is nec-usb-xhci, there will be two of the
  following error on the terminal of the host as well:

  "qemu-system-x86_64: usb_uas_handle_control: unhandled control
  request"

  If it's usb-ehci, ich9-usb-ehci1 or ich9-usb-echi2, this will not show
  up on the host side, but the device stil fails with the same error on
  the guest side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1579306/+subscriptions



[Qemu-devel] [Bug 1579306] Re: usb-uas does not work in Windows (10) guest

2016-05-07 Thread Tom Yan
Also tried to "passthrough" the SCSI layer of an actual UASP drive with
scsi-block. No luck either.

** Attachment added: "xhci-block.png"
   
https://bugs.launchpad.net/qemu/+bug/1579306/+attachment/4657752/+files/xhci-block.png

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1579306

Title:
  usb-uas does not work in Windows (10) guest

Status in QEMU:
  New

Bug description:
  When I attach a "-device usb-uas" to a VM with Windows 10 10586, the
  device fail to start with the following error in the guest:

  "The device cannot start. (Code 10)

  {Operation Failed}
  The requested operation was unsuccessful"

  If the host controller is nec-usb-xhci, there will be two of the
  following error on the terminal of the host as well:

  "qemu-system-x86_64: usb_uas_handle_control: unhandled control
  request"

  If it's usb-ehci, ich9-usb-ehci1 or ich9-usb-echi2, this will not show
  up on the host side, but the device stil fails with the same error on
  the guest side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1579306/+subscriptions



[Qemu-devel] [Bug 1579306] Re: usb-uas does not work in Windows (10) guest

2016-05-07 Thread Tom Yan
I am using qemu-2.5.1. Here are the commands I used in the test cases:

qemu-system-x86_64 -enable-kvm -cpu host -m 2G -net none -full-screen
-drive file=disks/uas.qcow2,format=qcow2,cache=none,aio=native -drive
file=test.img,format=raw,if=none,id=test -device nec-usb-xhci -device
usb-uas -device scsi-hd,drive=test

qemu-system-x86_64 -enable-kvm -cpu host -m 2G -net none -full-screen
-drive file=disks/uas.qcow2,format=qcow2,cache=none,aio=native -drive
file=test.img,format=raw,if=none,id=test -device nec-usb-xhci -device
usb-bot -device scsi-hd,drive=test

qemu-system-x86_64 -enable-kvm -cpu host -m 2G -net none -full-screen
-drive file=disks/uas.qcow2,format=qcow2,cache=none,aio=native -drive
file=/dev/sdc,format=raw,if=none,id=test -device nec-usb-xhci -device
usb-uas -device scsi-block,drive=test

qemu-system-x86_64 -enable-kvm -cpu host -m 2G -net none -full-screen
-drive file=disks/uas.qcow2,format=qcow2,cache=none,aio=native -device
nec-usb-xhci -device usb-host,hostbus=2,hostport=6

** Attachment removed: "xhci-block.png"
   
https://bugs.launchpad.net/qemu/+bug/1579306/+attachment/4657734/+files/xhci-block.png

** Attachment removed: "xhci-block.png"
   
https://bugs.launchpad.net/qemu/+bug/1579306/+attachment/4657745/+files/xhci-block.png

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1579306

Title:
  usb-uas does not work in Windows (10) guest

Status in QEMU:
  New

Bug description:
  When I attach a "-device usb-uas" to a VM with Windows 10 10586, the
  device fail to start with the following error in the guest:

  "The device cannot start. (Code 10)

  {Operation Failed}
  The requested operation was unsuccessful"

  If the host controller is nec-usb-xhci, there will be two of the
  following error on the terminal of the host as well:

  "qemu-system-x86_64: usb_uas_handle_control: unhandled control
  request"

  If it's usb-ehci, ich9-usb-ehci1 or ich9-usb-echi2, this will not show
  up on the host side, but the device stil fails with the same error on
  the guest side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1579306/+subscriptions



[Qemu-devel] [Bug 1579306] Re: usb-uas does not work in Windows (10) guest

2016-05-06 Thread Tom Yan
Also tried to "passthrough" the SCSI layer of an actual UASP drive with
scsi-block. No luck either.

** Attachment added: "xhci-block.png"
   
https://bugs.launchpad.net/qemu/+bug/1579306/+attachment/4657734/+files/xhci-block.png

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1579306

Title:
  usb-uas does not work in Windows (10) guest

Status in QEMU:
  New

Bug description:
  When I attach a "-device usb-uas" to a VM with Windows 10 10586, the
  device fail to start with the following error in the guest:

  "The device cannot start. (Code 10)

  {Operation Failed}
  The requested operation was unsuccessful"

  If the host controller is nec-usb-xhci, there will be two of the
  following error on the terminal of the host as well:

  "qemu-system-x86_64: usb_uas_handle_control: unhandled control
  request"

  If it's usb-ehci, ich9-usb-ehci1 or ich9-usb-echi2, this will not show
  up on the host side, but the device stil fails with the same error on
  the guest side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1579306/+subscriptions



[Qemu-devel] [Bug 1579306] Re: usb-uas does not work in Windows (10) guest

2016-05-06 Thread Tom Yan
usb-bot works fine

** Attachment added: "xhci-bot.png"
   
https://bugs.launchpad.net/qemu/+bug/1579306/+attachment/4657732/+files/xhci-bot.png

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1579306

Title:
  usb-uas does not work in Windows (10) guest

Status in QEMU:
  New

Bug description:
  When I attach a "-device usb-uas" to a VM with Windows 10 10586, the
  device fail to start with the following error in the guest:

  "The device cannot start. (Code 10)

  {Operation Failed}
  The requested operation was unsuccessful"

  If the host controller is nec-usb-xhci, there will be two of the
  following error on the terminal of the host as well:

  "qemu-system-x86_64: usb_uas_handle_control: unhandled control
  request"

  If it's usb-ehci, ich9-usb-ehci1 or ich9-usb-echi2, this will not show
  up on the host side, but the device stil fails with the same error on
  the guest side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1579306/+subscriptions



[Qemu-devel] [Bug 1579306] [NEW] usb-uas does not work in Windows (10) guest

2016-05-06 Thread Tom Yan
Public bug reported:

When I attach a "-device usb-uas" to a VM with Windows 10 10586, the
device fails to start with the following error in the guest:

"The device cannot start. (Code 10)

{Operation Failed}
The requested operation was unsuccessful"

If the host controller is nec-usb-xhci, the following error will also
show up twice on the terminal of the host:

"qemu-system-x86_64: usb_uas_handle_control: unhandled control request"

If it's usb-ehci, ich9-usb-ehci1 or ich9-usb-ehci2, this will not show
up on the host side, but the device still fails with the same error on
the guest side.

** Affects: qemu
 Importance: Undecided
 Status: New

** Attachment added: "xhci-hd.png"
   
https://bugs.launchpad.net/bugs/1579306/+attachment/4657731/+files/xhci-hd.png

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1579306

Title:
  usb-uas does not work in Windows (10) guest

Status in QEMU:
  New

Bug description:
  When I attach a "-device usb-uas" to a VM with Windows 10 10586, the
  device fail to start with the following error in the guest:

  "The device cannot start. (Code 10)

  {Operation Failed}
  The requested operation was unsuccessful"

  If the host controller is nec-usb-xhci, there will be two of the
  following error on the terminal of the host as well:

  "qemu-system-x86_64: usb_uas_handle_control: unhandled control
  request"

  If it's usb-ehci, ich9-usb-ehci1 or ich9-usb-echi2, this will not show
  up on the host side, but the device stil fails with the same error on
  the guest side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1579306/+subscriptions



Re: [Qemu-devel] seabios does not support booting from usb-uas with nec-usb-xhci

2016-05-06 Thread Tom Yan
Sorry, I forgot to mention that I am using qemu 2.5.1 and seabios 1.9.2.

On 6 May 2016 at 17:04, Tom Yan <tom.t...@gmail.com> wrote:
> On 6 May 2016 at 17:00, Tom Yan <tom.t...@gmail.com> wrote:
>>
>> This makes me wonder if it is just a missing feature of a bug.
>>
>
> I meant "or" instead of "of".



Re: [Qemu-devel] seabios does not support booting from usb-uas with nec-usb-xhci

2016-05-06 Thread Tom Yan
On 6 May 2016 at 17:00, Tom Yan <tom.t...@gmail.com> wrote:
>
> This makes me wonder if it is just a missing feature of a bug.
>

I meant "or" instead of "of".



[Qemu-devel] seabios does not support booting from usb-uas with nec-usb-xhci

2016-05-06 Thread Tom Yan
Strangely, seabios supports booting from usb-uas with usb-ehci:

https://drive.google.com/open?id=0B50PCxfm5KU1dUFTUG9HU0pEcXc

and usb-bot with nec-usb-xhci:

https://drive.google.com/open?id=0B50PCxfm5KU1QnVzdTBIdEFzZW8

but not usb-uas with nec-usb-xhci:

https://drive.google.com/open?id=0B50PCxfm5KU1Z0t1akNXaVZZLTQ

This makes me wonder if it is just a missing feature of a bug.

Btw, not sure if it's relevant, but booting from usb-uas is not supported
at all in ovmf: https://github.com/tianocore/edk2/issues/85


[Qemu-devel] [Bug 1529449] Re: serial is required for -device nvme

2016-05-05 Thread Tom Yan
** Attachment added: "0.png"
   https://bugs.launchpad.net/qemu/+bug/1529449/+attachment/4656268/+files/0.png

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1529449

Title:
  serial is required for -device nvme

Status in QEMU:
  New

Bug description:
  I am not exactly sure if this is a bug, but I don't see why the option
  "serial" should be required for -device nvme like the option "drive".
  Truth is it seem to accept random string as its value anyway, if
  that's the case, couldn't qemu just generate one for it when it's not
  specified?

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1529449/+subscriptions



[Qemu-devel] [Bug 1529449] Re: serial is required for -device nvme

2016-05-05 Thread Tom Yan
Since both "drive=" and "serial=" expects an arbitrary string (while the
value for "drive=" must be unique since it's the "id=" of a "-drive"),
why not use the same string from "drive=" as the value of "serial=" when
it's not specified explicitly?

Apparently "-device scsi-hd" has already been doing that (although it
does not create the "sn" VPD when a serial is not explicitly specified).
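
As a purely hypothetical sketch of that suggestion (FakeNvmeCtrl,
fake_nvme_realize and the drive_id field are made-up stand-ins, not QEMU's
nvme.c structures), the realize path could simply fall back to the drive id
when no serial was given:

    #include <glib.h>

    /* Hypothetical stand-ins, not QEMU's nvme.c structures. */
    typedef struct FakeNvmeCtrl {
        char *serial;         /* from -device nvme,serial=..., may be NULL */
        const char *drive_id; /* the id= of the backing -drive, always unique */
    } FakeNvmeCtrl;

    static void fake_nvme_realize(FakeNvmeCtrl *n)
    {
        if (!n->serial || !n->serial[0]) {
            /* Reuse the (already unique) drive id instead of erroring out. */
            n->serial = g_strdup(n->drive_id);
        }
    }

    int main(void)
    {
        FakeNvmeCtrl n = { .serial = NULL, .drive_id = "one" };

        fake_nvme_realize(&n);
        g_print("serial defaults to: %s\n", n.serial);
        g_free(n.serial);
        return 0;
    }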

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1529449

Title:
  serial is required for -device nvme

Status in QEMU:
  New

Bug description:
  I am not exactly sure if this is a bug, but I don't see why the option
  "serial" should be required for -device nvme like the option "drive".
  Truth is it seem to accept random string as its value anyway, if
  that's the case, couldn't qemu just generate one for it when it's not
  specified?

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1529449/+subscriptions



[Qemu-devel] [Bug 1529449] Re: serial is required for -device nvme

2016-05-05 Thread Tom Yan
** Attachment added: "1.png"
   https://bugs.launchpad.net/qemu/+bug/1529449/+attachment/4656269/+files/1.png

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1529449

Title:
  serial is required for -device nvme

Status in QEMU:
  New

Bug description:
  I am not exactly sure if this is a bug, but I don't see why the option
  "serial" should be required for -device nvme like the option "drive".
  Truth is it seem to accept random string as its value anyway, if
  that's the case, couldn't qemu just generate one for it when it's not
  specified?

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1529449/+subscriptions



[Qemu-devel] [Bug 1576347] Re: Only one NVMe device is usable in Windows (10) guest

2016-04-28 Thread Tom Yan
I also tested WIndows installation ISOs instead as well.

Windows 10 Enterprise 10586:

qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m 4G -net bridge
-net nic -full-screen -drive file=ovmf_x64.bin,format=raw,if=pflash
-drive file=Downloads/10586.0.151029-1700
.TH2_RELEASE_CLIENTENTERPRISEEVAL_OEMRET_X64FRE_EN-US.ISO,media=cdrom
-drive file=disks/one.img,if=none,format=qcow2,id=one -drive
file=disks/two.img,if=none,format=qcow2,id=two -device
nvme,drive=one,serial=E86C3CFC43518D6F -device
nvme,drive=two,serial=2BDAC262CF831698

Windows Server 2016 Technical Preview 5:

qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m 4G -net bridge
-net nic -full-screen -drive file=ovmf_x64.bin,format=raw,if=pflash
-drive file=Downloads/14300.1000.160324-1723
.RS1_RELEASE_SVC_SERVER_OEMRET_X64FRE_EN-US.ISO,media=cdrom -drive
file=disks/one.img,if=none,format=qcow2,id=one -drive
file=disks/two.img,if=none,format=qcow2,id=two -device
nvme,drive=one,serial=E86C3CFC43518D6F -device
nvme,drive=two,serial=2BDAC262CF831698

Both of them show only one of the NVMe drives.

I also tested with two versions of OVMF, which are "18419"
(https://github.com/tianocore/edk2/commit/ddd097e33f6e6829dc0413820e9971f3bf025f87)
and "18455"
(https://github.com/tianocore/edk2/commit/59844e126614fc8275aab083fafa5818cde0901c).

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1576347

Title:
  Only one NVMe device is usable in Windows (10) guest

Status in QEMU:
  New

Bug description:
  Full command: qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m
  4G -net bridge -net nic -full-screen -drive
  file=ovmf_x64.bin,format=raw,if=pflash -drive
  file=disks/win16_ide.img,format=raw,cache=none,aio=native -drive
  file=disks/one.img,if=none,format=qcow2,id=one -drive
  file=disks/two.img,if=none,format=qcow2,id=two -device
  nvme,drive=one,serial=E86C3CFC43518D6F -device
  nvme,drive=two,serial=2BDAC262CF831698

  QEMU version: 2.5.0

  Kernel: 4.5.1 (Arch Linux)

  When there are two NVMe devices specified, only the second one will be
  usable in Windows. The following error is shown under "Device status"
  of the failed NVMe controller in Device Manager:

  "This device cannot start. (Code 10)

  The I/O device is configured incorrectly or the configuration
  parameters to the driver are incorrect."

  The only thing seems suspicious to me is that the nvme emulation in
  qemu does not have WWN/EUI-64 set for the devices, though I have no
  idea at all whether that is mandatory:

  "C:\Windows\system32>sg_vpd -i PD1
  Device Identification VPD page:
Addressed logical unit:
  designator type: SCSI name string,  code set: UTF-8
SCSI name string:
8086QEMU NVMe Ctrl  00012BDAC262CF831698

  C:\Windows\system32>sg_vpd -p sn PD1
  Unit serial number VPD page:
Unit serial number: ___."

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1576347/+subscriptions



[Qemu-devel] [Bug 1576347] Re: Only one NVMe device is usable in Windows (10) guest

2016-04-28 Thread Tom Yan
Apparently it works fine in Linux though:

qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m 4G -net bridge
-net nic -full-screen -drive file=ovmf_x64.bin,format=raw,if=pflash
-drive file=Downloads/archlinux-2016.04.01-dual.iso,media=cdrom -drive
file=disks/one.img,if=none,format=qcow2,id=one -drive
file=disks/two.img,if=none,format=qcow2,id=two -device
nvme,drive=one,serial=E86C3CFC43518D6F -device
nvme,drive=two,serial=2BDAC262CF831698

** Attachment added: "Untitled.png"
   
https://bugs.launchpad.net/qemu/+bug/1576347/+attachment/4650918/+files/Untitled.png

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1576347

Title:
  Only one NVMe device is usable in Windows (10) guest

Status in QEMU:
  New

Bug description:
  Full command: qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m
  4G -net bridge -net nic -full-screen -drive
  file=ovmf_x64.bin,format=raw,if=pflash -drive
  file=disks/win16_ide.img,format=raw,cache=none,aio=native -drive
  file=disks/one.img,if=none,format=qcow2,id=one -drive
  file=disks/two.img,if=none,format=qcow2,id=two -device
  nvme,drive=one,serial=E86C3CFC43518D6F -device
  nvme,drive=two,serial=2BDAC262CF831698

  QEMU version: 2.5.0

  Kernel: 4.5.1 (Arch Linux)

  When there are two NVMe devices specified, only the second one will be
  usable in Windows. The following error is shown under "Device status"
  of the failed NVMe controller in Device Manager:

  "This device cannot start. (Code 10)

  The I/O device is configured incorrectly or the configuration
  parameters to the driver are incorrect."

  The only thing seems suspicious to me is that the nvme emulation in
  qemu does not have WWN/EUI-64 set for the devices, though I have no
  idea at all whether that is mandatory:

  "C:\Windows\system32>sg_vpd -i PD1
  Device Identification VPD page:
Addressed logical unit:
  designator type: SCSI name string,  code set: UTF-8
SCSI name string:
8086QEMU NVMe Ctrl  00012BDAC262CF831698

  C:\Windows\system32>sg_vpd -p sn PD1
  Unit serial number VPD page:
Unit serial number: ___."

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1576347/+subscriptions



[Qemu-devel] [Bug 1576347] Re: Only one NVMe device is usable in Windows (10) guest

2016-04-28 Thread Tom Yan
"if=virtio" (which similarly has one controller per drive and has each
controller occupies a pci slot as the nvme emulation) works fine:

qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m 4G -net bridge
-net nic -full-screen -drive file=ovmf_x64.bin,format=raw,if=pflash
-drive file=disks/win16_ide.img,format=raw,cache=none,aio=native -drive
file=disks/one.img,if=virtio,format=qcow2,id=one -drive
file=disks/two.img,if=virtio,format=qcow2,id=two

** Attachment added: "_9GL.PNG"
   
https://bugs.launchpad.net/qemu/+bug/1576347/+attachment/4650884/+files/_9GL.PNG

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1576347

Title:
  Only one NVMe device is usable in Windows (10) guest

Status in QEMU:
  New

Bug description:
  Full command: qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m
  4G -net bridge -net nic -full-screen -drive
  file=ovmf_x64.bin,format=raw,if=pflash -drive
  file=disks/win16_ide.img,format=raw,cache=none,aio=native -drive
  file=disks/one.img,if=none,format=qcow2,id=one -drive
  file=disks/two.img,if=none,format=qcow2,id=two -device
  nvme,drive=one,serial=E86C3CFC43518D6F -device
  nvme,drive=two,serial=2BDAC262CF831698

  QEMU version: 2.5.0

  Kernel: 4.5.1 (Arch Linux)

  When there are two NVMe devices specified, only the second one will be
  usable in Windows. The following error is shown under "Device status"
  of the failed NVMe controller in Device Manager:

  "This device cannot start. (Code 10)

  The I/O device is configured incorrectly or the configuration
  parameters to the driver are incorrect."

  The only thing seems suspicious to me is that the nvme emulation in
  qemu does not have WWN/EUI-64 set for the devices, though I have no
  idea at all whether that is mandatory:

  "C:\Windows\system32>sg_vpd -i PD1
  Device Identification VPD page:
Addressed logical unit:
  designator type: SCSI name string,  code set: UTF-8
SCSI name string:
8086QEMU NVMe Ctrl  00012BDAC262CF831698

  C:\Windows\system32>sg_vpd -p sn PD1
  Unit serial number VPD page:
Unit serial number: ___."

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1576347/+subscriptions



[Qemu-devel] [Bug 1529449] Re: serial is required for -device nvme

2016-04-28 Thread Tom Yan
Instead of requiring a serial of arbitrary length/format, I think a
WWN/EUI-64 is more useful/important, not to mention that the WWN/EUI-64
can pretty much always be used as the serial at the same time.

Unlike Linux, Windows considers the WWN/EUI-64 as the "serial":

"C:\Windows\system32>sg_vpd -i PD1
Device Identification VPD page:
  Addressed logical unit:
designator type: SCSI name string,  code set: UTF-8
  SCSI name string:
  8086QEMU NVMe Ctrl  00012BDAC262CF831698

C:\Windows\system32>sg_vpd -p sn PD1
Unit serial number VPD page:
  Unit serial number: ___."

(https://bugs.launchpad.net/qemu/+bug/1576347/+attachment/4650553/+files/02.PNG)

UEFI also makes use of the WWN/EUI-64 to generate boot entries for NVMe devices:
https://bugs.launchpad.net/qemu/+bug/1576347/+attachment/4650554/+files/03.png
https://bugs.launchpad.net/qemu/+bug/1576347/+attachment/4650555/+files/04.png
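
For reference, an EUI-64 is just a 24-bit OUI followed by a 40-bit extension,
so generating one is cheap. The sketch below is hypothetical (make_eui64 is a
made-up helper, the OUI is a placeholder and the 40-bit value is arbitrary),
not something qemu currently does:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper: 24-bit OUI in the top bits, 40-bit extension below. */
    static uint64_t make_eui64(uint32_t oui, uint64_t unique40)
    {
        return ((uint64_t)(oui & 0xffffff) << 40) | (unique40 & 0xffffffffffULL);
    }

    int main(void)
    {
        /* Placeholder OUI and an arbitrary 40-bit value for illustration. */
        uint64_t eui64 = make_eui64(0x525400, 0x2BDAC262CFULL);

        printf("eui64 = 0x%016llx\n", (unsigned long long)eui64);
        return 0;
    }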

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1529449

Title:
  serial is required for -device nvme

Status in QEMU:
  New

Bug description:
  I am not exactly sure if this is a bug, but I don't see why the option
  "serial" should be required for -device nvme like the option "drive".
  Truth is it seem to accept random string as its value anyway, if
  that's the case, couldn't qemu just generate one for it when it's not
  specified?

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1529449/+subscriptions



[Qemu-devel] [Bug 1576347] Re: Only one NVMe device is usable in Windows (10) guest

2016-04-28 Thread Tom Yan
** Attachment added: "UEFI boot entry of the second NVMe device"
   
https://bugs.launchpad.net/qemu/+bug/1576347/+attachment/4650555/+files/04.png

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1576347

Title:
  Only one NVMe device is usable in Windows (10) guest

Status in QEMU:
  New

Bug description:
  Full command: qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m
  4G -net bridge -net nic -full-screen -drive
  file=ovmf_x64.bin,format=raw,if=pflash -drive
  file=disks/win16_ide.img,format=raw,cache=none,aio=native -drive
  file=disks/one.img,if=none,format=qcow2,id=one -drive
  file=disks/two.img,if=none,format=qcow2,id=two -device
  nvme,drive=one,serial=E86C3CFC43518D6F -device
  nvme,drive=two,serial=2BDAC262CF831698

  QEMU version: 2.5.0

  Kernel: 4.5.1 (Arch Linux)

  When there are two NVMe devices specified, only the second one will be
  usable in Windows. The following error is shown under "Device status"
  of the failed NVMe controller in Device Manager:

  "This device cannot start. (Code 10)

  The I/O device is configured incorrectly or the configuration
  parameters to the driver are incorrect."

  The only thing seems suspicious to me is that the nvme emulation in
  qemu does not have WWN/EUI-64 set for the devices, though I have no
  idea at all whether that is mandatory:

  "C:\Windows\system32>sg_vpd -i PD1
  Device Identification VPD page:
Addressed logical unit:
  designator type: SCSI name string,  code set: UTF-8
SCSI name string:
8086QEMU NVMe Ctrl  00012BDAC262CF831698

  C:\Windows\system32>sg_vpd -p sn PD1
  Unit serial number VPD page:
Unit serial number: ___."

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1576347/+subscriptions



[Qemu-devel] [Bug 1576347] Re: Only one NVMe device is usable in Windows (10) guest

2016-04-28 Thread Tom Yan
You can see from the "SCSI name string" that the working NVMe device is
the second one specified in the command.

** Attachment added: "sg_vpd outputs"
   
https://bugs.launchpad.net/qemu/+bug/1576347/+attachment/4650553/+files/02.PNG

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1576347

Title:
  Only one NVMe device is usable in Windows (10) guest

Status in QEMU:
  New

Bug description:
  Full command: qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m
  4G -net bridge -net nic -full-screen -drive
  file=ovmf_x64.bin,format=raw,if=pflash -drive
  file=disks/win16_ide.img,format=raw,cache=none,aio=native -drive
  file=disks/one.img,if=none,format=qcow2,id=one -drive
  file=disks/two.img,if=none,format=qcow2,id=two -device
  nvme,drive=one,serial=E86C3CFC43518D6F -device
  nvme,drive=two,serial=2BDAC262CF831698

  QEMU version: 2.5.0

  Kernel: 4.5.1 (Arch Linux)

  When there are two NVMe devices specified, only the second one will be
  usable in Windows. The following error is shown under "Device status"
  of the failed NVMe controller in Device Manager:

  "This device cannot start. (Code 10)

  The I/O device is configured incorrectly or the configuration
  parameters to the driver are incorrect."

  The only thing seems suspicious to me is that the nvme emulation in
  qemu does not have WWN/EUI-64 set for the devices, though I have no
  idea at all whether that is mandatory:

  "C:\Windows\system32>sg_vpd -i PD1
  Device Identification VPD page:
Addressed logical unit:
  designator type: SCSI name string,  code set: UTF-8
SCSI name string:
8086QEMU NVMe Ctrl  00012BDAC262CF831698

  C:\Windows\system32>sg_vpd -p sn PD1
  Unit serial number VPD page:
Unit serial number: ___."

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1576347/+subscriptions



[Qemu-devel] [Bug 1576347] [NEW] Only one NVMe device is usable in Windows (10) guest

2016-04-28 Thread Tom Yan
Public bug reported:

Full command: qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m
4G -net bridge -net nic -full-screen -drive
file=ovmf_x64.bin,format=raw,if=pflash -drive
file=disks/win16_ide.img,format=raw,cache=none,aio=native -drive
file=disks/one.img,if=none,format=qcow2,id=one -drive
file=disks/two.img,if=none,format=qcow2,id=two -device
nvme,drive=one,serial=E86C3CFC43518D6F -device
nvme,drive=two,serial=2BDAC262CF831698

QEMU version: 2.5.0

Kernel: 4.5.1 (Arch Linux)

When there are two NVMe devices specified, only the second one will be
usable in Windows. The following error is shown under "Device status" of
the failed NVMe controller in Device Manager:

"This device cannot start. (Code 10)

The I/O device is configured incorrectly or the configuration parameters
to the driver are incorrect."

The only thing that seems suspicious to me is that the nvme emulation in
qemu does not have WWN/EUI-64 set for the devices, though I have no idea
at all whether that is mandatory:

"C:\Windows\system32>sg_vpd -i PD1
Device Identification VPD page:
  Addressed logical unit:
designator type: SCSI name string,  code set: UTF-8
  SCSI name string:
  8086QEMU NVMe Ctrl  00012BDAC262CF831698

C:\Windows\system32>sg_vpd -p sn PD1
Unit serial number VPD page:
  Unit serial number: ___."

** Affects: qemu
 Importance: Undecided
 Status: New

** Attachment added: "Screenshot of Device Manager and the error."
   https://bugs.launchpad.net/bugs/1576347/+attachment/4650548/+files/01.PNG

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1576347

Title:
  Only one NVMe device is usable in Windows (10) guest

Status in QEMU:
  New

Bug description:
  Full command: qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m
  4G -net bridge -net nic -full-screen -drive
  file=ovmf_x64.bin,format=raw,if=pflash -drive
  file=disks/win16_ide.img,format=raw,cache=none,aio=native -drive
  file=disks/one.img,if=none,format=qcow2,id=one -drive
  file=disks/two.img,if=none,format=qcow2,id=two -device
  nvme,drive=one,serial=E86C3CFC43518D6F -device
  nvme,drive=two,serial=2BDAC262CF831698

  QEMU version: 2.5.0

  Kernel: 4.5.1 (Arch Linux)

  When there are two NVMe devices specified, only the second one will be
  usable in Windows. The following error is shown under "Device status"
  of the failed NVMe controller in Device Manager:

  "This device cannot start. (Code 10)

  The I/O device is configured incorrectly or the configuration
  parameters to the driver are incorrect."

  The only thing seems suspicious to me is that the nvme emulation in
  qemu does not have WWN/EUI-64 set for the devices, though I have no
  idea at all whether that is mandatory:

  "C:\Windows\system32>sg_vpd -i PD1
  Device Identification VPD page:
Addressed logical unit:
  designator type: SCSI name string,  code set: UTF-8
SCSI name string:
8086QEMU NVMe Ctrl  00012BDAC262CF831698

  C:\Windows\system32>sg_vpd -p sn PD1
  Unit serial number VPD page:
Unit serial number: ___."

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1576347/+subscriptions



[Qemu-devel] [Bug 1576347] Re: Only one NVMe device is usable in Windows (10) guest

2016-04-28 Thread Tom Yan
** Attachment added: "UEFI boot entry of the first NVMe device"
   
https://bugs.launchpad.net/qemu/+bug/1576347/+attachment/4650554/+files/03.png

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1576347

Title:
  Only one NVMe device is usable in Windows (10) guest

Status in QEMU:
  New

Bug description:
  Full command: qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4 -m
  4G -net bridge -net nic -full-screen -drive
  file=ovmf_x64.bin,format=raw,if=pflash -drive
  file=disks/win16_ide.img,format=raw,cache=none,aio=native -drive
  file=disks/one.img,if=none,format=qcow2,id=one -drive
  file=disks/two.img,if=none,format=qcow2,id=two -device
  nvme,drive=one,serial=E86C3CFC43518D6F -device
  nvme,drive=two,serial=2BDAC262CF831698

  QEMU version: 2.5.0

  Kernel: 4.5.1 (Arch Linux)

  When there are two NVMe devices specified, only the second one will be
  usable in Windows. The following error is shown under "Device status"
  of the failed NVMe controller in Device Manager:

  "This device cannot start. (Code 10)

  The I/O device is configured incorrectly or the configuration
  parameters to the driver are incorrect."

  The only thing seems suspicious to me is that the nvme emulation in
  qemu does not have WWN/EUI-64 set for the devices, though I have no
  idea at all whether that is mandatory:

  "C:\Windows\system32>sg_vpd -i PD1
  Device Identification VPD page:
Addressed logical unit:
  designator type: SCSI name string,  code set: UTF-8
SCSI name string:
8086QEMU NVMe Ctrl  00012BDAC262CF831698

  C:\Windows\system32>sg_vpd -p sn PD1
  Unit serial number VPD page:
Unit serial number: ___."

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1576347/+subscriptions



[Qemu-devel] [Bug 1529449] [NEW] serial is required for -device nvme

2015-12-26 Thread Tom Yan
Public bug reported:

I am not exactly sure if this is a bug, but I don't see why the option
"serial" should be required for -device nvme like the option "drive".
Truth is it seem to accept random string as its value anyway, if that's
the case, couldn't qemu just generate one for it when it's not
specified?

** Affects: qemu
 Importance: Undecided
 Status: New


** Tags: nvme

** Description changed:

  I am not exactly sure if this is a bug, but I don't see why the option
- "serial" is required for -device nvme like drive. Truth is it seem to
- accept random string as its value anyway, if that's the case, couldn't
- qemu just generate one for it when it's not specified?
+ "serial" should be required for -device nvme like the option "drive".
+ Truth is it seem to accept random string as its value anyway, if that's
+ the case, couldn't qemu just generate one for it when it's not
+ specified?

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1529449

Title:
  serial is required for -device nvme

Status in QEMU:
  New

Bug description:
  I am not exactly sure if this is a bug, but I don't see why the option
  "serial" should be required for -device nvme like the option "drive".
  Truth is it seem to accept random string as its value anyway, if
  that's the case, couldn't qemu just generate one for it when it's not
  specified?

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1529449/+subscriptions



[Qemu-devel] [Bug 1504528] [NEW] qemu shows glib-warning with the new glib2 2.46

2015-10-09 Thread Tom Yan
Public bug reported:

qemu shows the following warning with glib2 2.46.0:

[tom@localhost ~]$ qemu-system-x86_64

(process:4222): GLib-WARNING **: gmem.c:482: custom memory allocation
vtable not supported

fwiw process 4222 is qemu-system-x86_64
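
For context, GLib 2.46 stopped supporting custom allocator vtables, so any
program that still calls g_mem_set_vtable() gets exactly this warning. Below
is a hedged sketch of how such a call is typically compiled out on newer GLib
(my_malloc/my_realloc/my_free and init_mem_tracing are made-up names, and
whether qemu's eventual fix looks like this is not claimed here):

    #include <glib.h>
    #include <stdlib.h>

    #if !GLIB_CHECK_VERSION(2, 46, 0)
    /* Only meaningful on older GLib; 2.46+ ignores custom vtables and warns. */
    static gpointer my_malloc(gsize n) { return malloc(n); }
    static gpointer my_realloc(gpointer p, gsize n) { return realloc(p, n); }
    static void my_free(gpointer p) { free(p); }

    static GMemVTable mem_vtable = {
        .malloc = my_malloc,
        .realloc = my_realloc,
        .free = my_free,
    };
    #endif

    static void init_mem_tracing(void)
    {
    #if !GLIB_CHECK_VERSION(2, 46, 0)
        g_mem_set_vtable(&mem_vtable);
    #endif
    }

    int main(void)
    {
        init_mem_tracing();
        g_print("running against GLib %u.%u.%u\n",
                glib_major_version, glib_minor_version, glib_micro_version);
        return 0;
    }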

** Affects: qemu
 Importance: Undecided
 Status: New




[Qemu-devel] [Bug 1459622] [NEW] firefox hang with virtfs

2015-05-28 Thread Tom Yan
Public bug reported:

Firefox hangs once it starts to load pages. I tried deleting
.cache/mozilla/ and .mozilla/, but that doesn't help. However, if I mount
a tmpfs onto .mozilla/ (not necessary for .cache/mozilla/), pages load
fine.

I started the vm as root (sudo) with the following command: qemu-system-
x86_64 -enable-kvm -m 4G -virtfs
local,mount_tag=qemu,security_model=passthrough,path=/mnt/qemu/ -kernel
/mnt/qemu/boot/vmlinuz-linux -initrd /mnt/qemu/boot/initramfs-linux-
fallback.img -append 'rw root=qemu fstype=9p' -usbdevice tablet -vga qxl
-spice port=12345,disable-ticketing

/mnt/qemu is a btrfs snapshot of the subvolume used as the host root

Arch Linux, qemu 2.3.0, firefox 38.0.1
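
My guess (only a guess) is that Firefox trips over file locking on the 9p
mount, since the profile contains sqlite databases. A quick check from
inside the guest could be something like (the path is just an example):

  touch ~/.mozilla/flock-test && flock ~/.mozilla/flock-test -c 'echo flock ok'

If that hangs or fails on the 9p root but works on the tmpfs mount, it
would point at locking rather than at Firefox itself.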

** Affects: qemu
 Importance: Undecided
 Status: New


** Tags: 9p firefox qemu virtfs virtio

** Attachment added: the way that firefox hang
   
https://bugs.launchpad.net/bugs/1459622/+attachment/4406107/+files/firefox.png




[Qemu-devel] [Bug 1459626] [NEW] emacs (gtk3) core dumped with -vga qxl

2015-05-28 Thread Tom Yan
Public bug reported:

Emacs (gtk3) exited with a bus error and dumped core with -vga qxl. If I
use the built-in modesetting xorg driver, emacs can sometimes survive for
a short while. If I use xf86-video-qxl (git r587.8babd05), it dies right
away with the same error. It also seems to corrupt xorg at some point,
because sometimes I cannot exit i3 properly and gnome-terminal can go
crazy afterwards (all letters become empty rectangles).

It doesn't seem to happen with any other -vga driver.

The error message is attached. I can also provide the core dump, but it
is 47M.

I started the vm as root (sudo) with the following command: qemu-system-
x86_64 -enable-kvm -m 4G -virtfs
local,mount_tag=qemu,security_model=passthrough,path=/mnt/qemu/ -kernel
/mnt/qemu/boot/vmlinuz-linux -initrd /mnt/qemu/boot/initramfs-linux-
fallback.img -append 'rw root=qemu fstype=9p' -usbdevice tablet -vga qxl
-spice port=12345,disable-ticketing

/mnt/qemu is a btrfs snapshot of the subvolume used as the host root

Arch Linux, qemu 2.3.0, emacs 24.5, xorg-server 1.17.1, linux 4.0.4, gtk
3.16.3, glib 2.44.1
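
If a backtrace would be more useful than the raw core, I can generate one
with something along these lines (a sketch; the emacs path and core file
name obviously depend on the setup):

  gdb -batch -ex 'bt full' /usr/bin/emacs ./core

and attach just that instead.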

** Affects: qemu
 Importance: Undecided
 Status: New


** Tags: bus core dump emacs error gtk qemu qxl spice

** Attachment added: Error message
   
https://bugs.launchpad.net/bugs/1459626/+attachment/4406109/+files/emacs_error

** Tags added: bus core dump emacs error gtk qemu qxl spice
