Re: [Qemu-devel] [RFC PATCH] implement vmware pvscsi device

2011-04-18 Thread Paolo Bonzini

On 04/18/2011 04:05 PM, Hannes Reinecke wrote:

My proposal would be to implement a full virtio-scsi _host_, and extend
the proposal to be able to handle the transport layer too.


Yes, I added this independently between Friday and today, which is why
I haven't sent the proposal yet.



At the least we would need to include a LUN address before the CDB,
and define TMF command values for proper error recovery.


I haven't yet worked out TMF, but I did add a LUN.


That way we could handle hotplug / -unplug via a simple host rescan


It's a bit more complicated because you also want guest-initiated 
unplug, and SAM transport reset events include more than hotplug/unplug.



I couldn't find that in either SPC or SAM indeed. It seems like a
pretty widespread assumption though. Perhaps Nicholas or Hannes know
where it comes from.


96 bytes is a carry-over from parallel SCSI. We shouldn't rely
on a fixed length here but rather use an additional pointer/iovec and
a length field.

Check the SG_IO header to see how it's done.


Will do.

Paolo



Re: [Qemu-devel] [RFC PATCH] implement vmware pvscsi device

2011-04-18 Thread Stefan Hajnoczi
On Mon, Apr 18, 2011 at 3:05 PM, Hannes Reinecke  wrote:
> On 04/15/2011 10:56 PM, Paolo Bonzini wrote:
>>
>> On 04/15/2011 05:04 PM, Stefan Hajnoczi wrote:
>>>
>>> The way I approached virtio-scsi was to look at the SCSI Architecture
>>> Model document and some of the Linux SCSI code. I'm not sure if
>>> letting virtio-blk SCSI pass-through or scsi-generic guide us is a
>>> good approach.
>>>
>>> How do your ioprio and barrier relate to SCSI?
>>
>> Both are part of the transport protocol, which can provide
>> additional features with respect to SAM. For example SCSI doesn't
>> provide the full details of hotplug/hotunplug, or doesn't have a way
>> for the guest to trigger a drive unplug on the host, but these are
>> all desirable features for virtio-scsi (and they are supported by
>> vmw_pvscsi by the way).
>>
> And this is something I really miss in the current proposals, namely
> a working transport layer.
>
> The SCSI spec (SPC etc) itself just handles command delivery between
> initiator and target. Anything else (like hotplug, error recovery, target
> addressing etc) is out of the scope of the spec and needs to be implemented
> on another layer (that's the ominous
> transport layer).
>
> Hence any protocol implemented to the above spec would be missing those
> parts, and they would need to be implemented additionally.
> Which also explains why these features are missing when just using SCSI CDBs
> as the main command container.
>
> My proposal would be to implement a full virtio-scsi _host_, and extend the
> proposal to be able to handle the transport layer too.
> At the least we would need to include a LUN address before the CDB, and
> define TMF command values for proper error recovery.
>
> That way we could handle hotplug / -unplug via a simple host rescan, and
> would even be able to pass-in NPIV hosts.

In my prototype there is a header and a footer for the request and
response, respectively:
http://git.kernel.org/?p=linux/kernel/git/nab/lio-core-2.6.git;a=blob;f=include/linux/virtio_scsi.h;hb=refs/heads/tcm_vhost

We definitely need more than plain CDB pass-through.
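
For illustration, here is a minimal, hypothetical sketch of a request header
along those lines: a LUN carried before the CDB plus a type field that
distinguishes ordinary commands from task management functions.  All names
and values are assumptions for discussion, not taken from the virtio_scsi.h
linked above:

    #include <stdint.h>

    enum vscsi_req_type {
        VSCSI_REQ_CMD = 0,            /* ordinary SCSI command */
        VSCSI_REQ_TMF = 1,            /* task management function */
    };

    enum vscsi_tmf {
        VSCSI_TMF_ABORT_TASK   = 0,
        VSCSI_TMF_LUN_RESET    = 1,
        VSCSI_TMF_TARGET_RESET = 2,
    };

    struct vscsi_req_header {
        uint32_t type;                /* enum vscsi_req_type */
        uint8_t  lun[8];              /* LUN addressed before the CDB */
        uint32_t tmf;                 /* valid when type == VSCSI_REQ_TMF */
        uint32_t cdb_len;             /* length of the CDB that follows */
    };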

Stefan



Re: [Qemu-devel] [RFC PATCH] implement vmware pvscsi device

2011-04-18 Thread Hannes Reinecke

On 04/15/2011 10:56 PM, Paolo Bonzini wrote:

On 04/15/2011 05:04 PM, Stefan Hajnoczi wrote:

The way I approached virtio-scsi was to look at the SCSI Architecture
Model document and some of the Linux SCSI code. I'm not sure if
letting virtio-blk SCSI pass-through or scsi-generic guide us is a
good approach.

How do your ioprio and barrier relate to SCSI?


Both are part of the transport protocol, which can provide
additional features with respect to SAM. For example SCSI doesn't
provide the full details of hotplug/hotunplug, or doesn't have a way
for the guest to trigger a drive unplug on the host, but these are
all desirable features for virtio-scsi (and they are supported by
vmw_pvscsi by the way).


And this is something I really miss in the current proposals, namely
a working transport layer.

The SCSI spec (SPC etc.) itself just handles command delivery between 
initiator and target. Anything else (like hotplug, error recovery, 
target addressing etc.) is out of the scope of the spec and needs to 
be implemented on another layer (that's the ominous transport layer).

Hence any protocol implemented to the above spec would be missing 
those parts, and they would need to be implemented additionally.
Which also explains why these features are missing when just using 
SCSI CDBs as the main command container.


My proposal would be to implement a full virtio-scsi _host_, and 
extend the proposal to be able to handle the transport layer too.
At the least we would need to include a LUN address before the 
CDB, and define TMF command values for proper error recovery.


That way we could handle hotplug / -unplug via a simple host rescan, 
and would even be able to pass-in NPIV hosts.



There seem to be recent/exotic commands that can have both data-in
and data-out buffers.



These are bi-directional commands which are required for OSD.


That can fit by adding more stuff at the end of the buffer. It can
be in the first version, or it can be an extra feature for later.
Since QEMU currently cannot handle it, probably it would need
negotiation even if it were in the first version.


The sense buffer length is also not necessarily 96
bytes max, I believe.


I couldn't find that in either SPC or SAM indeed. It seems like a
pretty widespread assumption though. Perhaps Nicholas or Hannes know
where it comes from.


96 bytes is a carry-over from parallel SCSI. We shouldn't rely
on a fixed length here but rather use an additional pointer/iovec 
and a length field.


Check the SG_IO header to see how it's done.
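
For illustration, a minimal sketch of how the sg_io_hdr structure conveys a
variable-length sense buffer as a pointer plus a maximum length, with the
kernel reporting the number of bytes actually written; the device node and
CDB below are placeholder assumptions:

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <scsi/sg.h>

    int main(void)
    {
        unsigned char cdb[6] = { 0x00, 0, 0, 0, 0, 0 };  /* TEST UNIT READY */
        unsigned char sense[252];                        /* caller-chosen size */
        struct sg_io_hdr hdr;
        int fd = open("/dev/sg0", O_RDWR);               /* placeholder device */

        if (fd < 0)
            return 1;
        memset(&hdr, 0, sizeof(hdr));
        hdr.interface_id    = 'S';
        hdr.dxfer_direction = SG_DXFER_NONE;
        hdr.cmdp            = cdb;
        hdr.cmd_len         = sizeof(cdb);
        hdr.sbp             = sense;                     /* sense buffer pointer */
        hdr.mx_sb_len       = sizeof(sense);             /* its length, not a fixed 96 */
        hdr.timeout         = 5000;                      /* milliseconds */
        if (ioctl(fd, SG_IO, &hdr) == 0 && hdr.sb_len_wr > 0) {
            /* hdr.sb_len_wr bytes of sense data are now valid in sense[] */
        }
        return 0;
    }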

Cheers,

Hannes
--
Dr. Hannes Reinecke   zSeries & Storage
h...@suse.de  +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Markus Rex, HRB 16746 (AG Nürnberg)



Re: [Qemu-devel] [RFC PATCH] implement vmware pvscsi device

2011-04-15 Thread Paolo Bonzini

On 04/15/2011 05:04 PM, Stefan Hajnoczi wrote:

The way I approached virtio-scsi was to look at the SCSI Architecture
Model document and some of the Linux SCSI code.  I'm not sure if
letting virtio-blk SCSI pass-through or scsi-generic guide us is a
good approach.

How do your ioprio and barrier relate to SCSI?


Both are part of the transport protocol, which can provide additional 
features with respect to SAM.  For example SCSI doesn't provide the full 
details of hotplug/hotunplug, or doesn't have a way for the guest to 
trigger a drive unplug on the host, but these are all desirable features 
for virtio-scsi (and they are supported by vmw_pvscsi by the way).



There seem to be recent/exotic commands that can have both data-in and
data-out buffers.


That can fit by adding more stuff at the end of the buffer.  It can be 
in the first version, or it can be an extra feature for later.  Since 
QEMU currently cannot handle it, probably it would need negotiation even 
if it were in the first version.



The sense buffer length is also not necessarily 96
bytes max, I believe.


I couldn't find that in either SPC or SAM indeed.  It seems like a 
pretty widespread assumption though.  Perhaps Nicholas or Hannes know 
where it comes from.


Paolo



Re: [Qemu-devel] [RFC PATCH] implement vmware pvscsi device

2011-04-15 Thread Stefan Hajnoczi
On Fri, Apr 15, 2011 at 3:37 PM, Paolo Bonzini  wrote:
> On 04/15/2011 04:28 PM, Stefan Hajnoczi wrote:
>> Nothing formal.  I'm trying to learn SCSI as I go along:
>>
>> http://git.kernel.org/?p=linux/kernel/git/nab/lio-core-2.6.git;a=blob;f=include/linux/virtio_scsi.h;hb=refs/heads/tcm_vhost
>>
>> That's the interface I'm using.  Requests are:
>>
>> [Header][CDB][Data-out buffers*][Data-in buffers*][Footer]
>>
>> The footer gets filled in with the response.
>
> My interface is exactly the same as virtio-blk's SCSI passthrough requests:
>
> -- 8<-- 
>
> Device operation: request queue
> -------------------------------
>
> The driver queues requests to the virtqueue, and they are consumed by the
> device (not necessarily in order).  Requests have the following format:
>
>    struct virtio_scsi_req {
>        u32 type;
>        u32 ioprio;
>        char cmd[];
>        char data[][512];
>        u8 sense[SCSI_SENSE_BUFFERSIZE];
>        u32 sense_len;
>        u32 residual;
>        u8 status;
>        u8 response;
>    };

The way I approached virtio-scsi was to look at the SCSI Architecture
Model document and some of the Linux SCSI code.  I'm not sure if
letting virtio-blk SCSI pass-through or scsi-generic guide us is a
good approach.

How do your ioprio and barrier relate to SCSI?

There seem to be recent/exotic commands that can have both data-in and
data-out buffers.  The sense buffer length is also not necessarily 96
bytes max, I believe.  I haven't looked into these two issues, but a
proper virtio-scsi design should be future-proof and accommodate them,
given the fancy commands being added to SCSI.

Stefan



Re: [Qemu-devel] [RFC PATCH] implement vmware pvscsi device

2011-04-15 Thread Paolo Bonzini
> Why vmw_pvscsi?

Because all I wanted to do was to learn QEMU's SCSI subsystem, and
vmw_pvscsi is pretty much the simplest device I could pick...  It's just
an exercise, but since it works I thought I'd post it.

> Good luck. Paul Brook absolutely insists on having them, but they kill
> performance for any sane backend. The two are basically impossible to
> reconcile; I tried it once but got pushed back.
> 
> And after about the third attempt I gave up. Let me know if you have
> more luck here.

Thanks. :)

> But keep me in the loop for the virtio-scsi spec. I do have some ideas
> what needs to get in there.  As I think hch has.

I surely will, thanks.

Paolo



Re: [Qemu-devel] [RFC PATCH] implement vmware pvscsi device

2011-04-15 Thread Hannes Reinecke
On 04/15/2011 04:17 PM, Paolo Bonzini wrote:
> On 04/15/2011 04:01 PM, Stefan Hajnoczi wrote:
>> I think SCSI brings many benefits.  Guests can deal with it better
>> than these alien vdX virtio-blk devices, which makes migration easier.
>> It becomes possible to attach many disks without burning through free
>> PCI slots.  We don't need to update guests to add cache control,
>> discard, and other commands because they are part of SCSI.  We can
>> pass through more exotic devices.  The list goes on...
> 
> And we also have to reimplement all of MMC. :)
> 
> A few questions:
> 
> 1) Do you have anything posted for the virtio-scsi spec?  I had started
> working on one, but I haven't yet made it final.  It included also
> hotplug/unplug.  I can send it out on Monday.
> 
> 2) Have you thought about making scsi-disk and scsi-generic provide a
> logical unit rather than a target?  Otherwise passthrough of a whole
> host or target becomes hard or messy.
> 
> 3) Since I noticed Hannes is CCed, my next step for vmw_pvscsi would be
> to dust off his patches to remove the bounce buffers, and see how they
> apply to vmw_pvscsi.  But I'd like to avoid duplicated work if possible.
> 

Argl.

Why vmw_pvscsi? A paravirtualized driver doesn't improve the situation
here; we still wouldn't have a driver for unmodified guests.
So either emulate an existing controller (like megasas :-) or go the full
route and do a proper virtio-scsi.

As for the bounce buffers thing:
Good luck. Paul Brook absolutely insists on having them, but they kill
performance for any sane backend. The two are basically impossible to
reconcile; I tried it once but got pushed back.

And after about the third attempt I gave up. Let me know if you have
more luck here.

But keep me in the loop for the virtio-scsi spec. I do have some ideas
what needs to get in there.
As I think hch has.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke  zSeries & Storage
h...@suse.de  +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Markus Rex, HRB 16746 (AG Nürnberg)



Re: [Qemu-devel] [RFC PATCH] implement vmware pvscsi device

2011-04-15 Thread Paolo Bonzini
On 04/15/2011 04:28 PM, Stefan Hajnoczi wrote:
> Nothing formal.  I'm trying to learn SCSI as I go along:
> 
> http://git.kernel.org/?p=linux/kernel/git/nab/lio-core-2.6.git;a=blob;f=include/linux/virtio_scsi.h;hb=refs/heads/tcm_vhost
> 
> That's the interface I'm using.  Requests are:
> 
> [Header][CDB][Data-out buffers*][Data-in buffers*][Footer]
> 
> The footer gets filled in with the response.

My interface is exactly the same as virtio-blk's SCSI passthrough requests:

-- 8<-- 

Device operation: request queue
-------------------------------

The driver queues requests to the virtqueue, and they are consumed by the
device (not necessarily in order).  Requests have the following format:

    struct virtio_scsi_req {
        u32 type;
        u32 ioprio;
        char cmd[];
        char data[][512];
        u8 sense[SCSI_SENSE_BUFFERSIZE];
        u32 sense_len;
        u32 residual;
        u8 status;
        u8 response;
    };

#define VIRTIO_SCSI_T_CMD 2
#define VIRTIO_SCSI_T_BARRIER 0x8000

/* status values */
#define VIRTIO_SCSI_S_OK  0
#define VIRTIO_SCSI_S_FAILURE 1
#define VIRTIO_SCSI_S_CLOSED  128

The type of the request must currently be VIRTIO_SCSI_T_CMD.
The VIRTIO_SCSI_T_BARRIER flag indicates that this request acts
as a barrier: all preceding requests must be complete before it
starts, and all following requests must not be started until it
completes.  Note that a barrier does not flush caches in the
underlying backend device on the host, and thus does not serve
as a data consistency guarantee.  The driver must send a SYNCHRONIZE
CACHE command to flush the host cache.

The ioprio field will indicate the priority of this request, with
higher values corresponding to higher priorities.

The cmd and data fields must reside in separate buffers.  The cmd field
indicates the command to perform and is always read-only.  The data field
may be either read-only or write-only, depending on the request.

Remaining fields are filled in by the device.  The sense_len field
indicates the number of bytes actually written to the sense buffer,
while the residual field indicates the residual size, calculated as
data_length - number_of_transferred_bytes.

The status byte is written by the device and holds the SCSI status code.

The response byte is written by the device and is one of the following:

- VIRTIO_SCSI_S_OK when the request was completed and the status byte
  is filled with a SCSI status code (not necessarily "GOOD").

- VIRTIO_SCSI_S_FAILURE for a host or guest error.

- VIRTIO_SCSI_S_CLOSED if the virtqueue is not currently associated
  with a LU.



There is more meat to handle hotplug/hot-unplug and to choose which
LUNs map to which virtqueues, but you can wait a few days for the
details.
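
As a rough sketch of the driver side under the draft above (the fixed CDB and
sense sizes here are assumptions for illustration; in practice cmd, data and
the device-written fields live in separate virtqueue buffers rather than one
flat struct):

    #include <stdint.h>
    #include <string.h>

    #define VIRTIO_SCSI_T_CMD     2
    #define SCSI_SENSE_BUFFERSIZE 96      /* assumed value for this sketch */

    struct vscsi_req_sketch {
        /* written by the driver, read-only to the device */
        uint32_t type;
        uint32_t ioprio;
        uint8_t  cdb[16];
        /* written by the device */
        uint8_t  sense[SCSI_SENSE_BUFFERSIZE];
        uint32_t sense_len;
        uint32_t residual;
        uint8_t  status;
        uint8_t  response;
    };

    /* Fill in the driver-owned part for a READ CAPACITY(10) command. */
    static void build_read_capacity(struct vscsi_req_sketch *req)
    {
        memset(req, 0, sizeof(*req));
        req->type   = VIRTIO_SCSI_T_CMD;
        req->ioprio = 0;                  /* default priority */
        req->cdb[0] = 0x25;               /* READ CAPACITY(10) opcode */
    }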

Paolo



Re: [Qemu-devel] [RFC PATCH] implement vmware pvscsi device

2011-04-15 Thread Stefan Hajnoczi
On Fri, Apr 15, 2011 at 3:17 PM, Paolo Bonzini  wrote:
> On 04/15/2011 04:01 PM, Stefan Hajnoczi wrote:
>>
>> I think SCSI brings many benefits.  Guests can deal with it better
>> than these alien vdX virtio-blk devices, which makes migration easier.
>> It becomes possible to attach many disks without burning through free
>> PCI slots.  We don't need to update guests to add cache control,
>> discard, and other commands because they are part of SCSI.  We can
>> pass through more exotic devices.  The list goes on...
>
> And we also have to reimplement all of MMC. :)
>
> A few questions:
>
> 1) Do you have anything posted for the virtio-scsi spec?  I had started
> working on one, but I haven't yet made it final.  It included also
> hotplug/unplug.  I can send it out on Monday.

Nothing formal.  I'm trying to learn SCSI as I go along:

http://git.kernel.org/?p=linux/kernel/git/nab/lio-core-2.6.git;a=blob;f=include/linux/virtio_scsi.h;hb=refs/heads/tcm_vhost

That's the interface I'm using.  Requests are:

[Header][CDB][Data-out buffers*][Data-in buffers*][Footer]

The footer gets filled in with the response.

> 2) Have you thought about making scsi-disk and scsi-generic provide a
> logical unit rather than a target?  Otherwise passthrough of a whole host or
> target becomes hard or messy.

I haven't been working at the QEMU SCSI bus level.  I want to wire up
the Linux-iSCSI.org target stack straight to the guest.  This bypasses
the QEMU SCSI and block layers completely.

I agree that the BlockDriverState in QEMU is more of a LUN than a
target and passing through multiple block devices as LUNs should be
possible.  So we probably need to restructure as you suggested and/or
provide an indirection for LUN mapping.

Stefan



Re: [Qemu-devel] [RFC PATCH] implement vmware pvscsi device

2011-04-15 Thread Paolo Bonzini

On 04/15/2011 04:01 PM, Stefan Hajnoczi wrote:

I think SCSI brings many benefits.  Guests can deal with it better
than these alien vdX virtio-blk devices, which makes migration easier.
It becomes possible to attach many disks without burning through free
PCI slots.  We don't need to update guests to add cache control,
discard, and other commands because they are part of SCSI.  We can
pass through more exotic devices.  The list goes on...


And we also have to reimplement all of MMC. :)

A few questions:

1) Do you have anything posted for the virtio-scsi spec?  I had started 
working on one, but I haven't yet made it final.  It included also 
hotplug/unplug.  I can send it out on Monday.


2) Have you thought about making scsi-disk and scsi-generic provide a 
logical unit rather than a target?  Otherwise passthrough of a whole 
host or target becomes hard or messy.


3) Since I noticed Hannes is CCed, my next step for vmw_pvscsi would be 
to dust off his patches to remove the bounce buffers, and see how they 
apply to vmw_pvscsi.  But I'd like to avoid duplicated work if possible.


Paolo



Re: [Qemu-devel] [RFC PATCH] implement vmware pvscsi device

2011-04-15 Thread Stefan Hajnoczi
On Fri, Apr 15, 2011 at 2:42 PM, Paolo Bonzini  wrote:
> Lightly tested with Linux guests; at least it can successfully partition
> and format a disk.  scsi-generic also lightly tested.
>
> Doesn't do migration, doesn't do hotplug (the device would support that,
> but it is not 100% documented and the Linux driver in particular cannot
> initiate hot-unplug).  I did it as a quick one-day hack to study the SCSI
> subsystem and it is my first real foray into device model land, please
> be gentle. :)
>
> vmw_pvscsi.h is taken from Linux, so it doesn't fully respect coding
> standards.  I think that's fair.
>
> Size is curiously close to the recently added sPAPR adapter:
>
>  911  2354 25553 hw/vmw_pvscsi.c
>  988  3177 29628 hw/spapr_vscsi.c
>
> Sounds like that's just the amount of code it takes to implement a SCSI
> HBA in QEMU. :)

Interesting, thanks for posting this.  I've been playing with virtio
SCSI and it is still in the early stages.  Nicholas A. Bellinger and I
have been wiring the in-kernel SCSI target up to KVM using vhost.
Feel free to take a peek at the work-in-progress:

http://repo.or.cz/w/qemu/stefanha.git/shortlog/refs/heads/virtio-scsi
http://git.kernel.org/?p=linux/kernel/git/nab/lio-core-2.6.git;a=shortlog;h=refs/heads/tcm_vhost

I think SCSI brings many benefits.  Guests can deal with it better
than these alien vdX virtio-blk devices, which makes migration easier.
 It becomes possible to attach many disks without burning through free
PCI slots.  We don't need to update guests to add cache control,
discard, and other commands because they are part of SCSI.  We can
pass through more exotic devices.  The list goes on...

Stefan



[Qemu-devel] [RFC PATCH] implement vmware pvscsi device

2011-04-15 Thread Paolo Bonzini
Lightly tested with Linux guests; at least it can successfully partition
and format a disk.  scsi-generic also lightly tested.

Doesn't do migration, doesn't do hotplug (the device would support that,
but it is not 100% documented and the Linux driver in particular cannot
initiate hot-unplug).  I did it as a quick one-day hack to study the SCSI
subsystem and it is my first real foray into device model land, please
be gentle. :)

vmw_pvscsi.h is taken from Linux, so it doesn't fully respect coding
standards.  I think that's fair.

Size is curiously close to the recently added sPAPR adapter:

  911  2354 25553 hw/vmw_pvscsi.c
  988  3177 29628 hw/spapr_vscsi.c

Sounds like that's just the amount of code it takes to implement a SCSI
HBA in QEMU. :)

Signed-off-by: Paolo Bonzini 
Cc: Zachary Amsden 
---
 Makefile.objs   |1 +
 default-configs/pci.mak |1 +
 hw/pci.h|1 +
 hw/vmw_pvscsi.c |  911 +++
 hw/vmw_pvscsi.h |  389 
 trace-events|   15 +
 6 files changed, 1318 insertions(+), 0 deletions(-)
 create mode 100644 hw/vmw_pvscsi.c
 create mode 100644 hw/vmw_pvscsi.h

diff --git a/Makefile.objs b/Makefile.objs
index 44ce368..f056502 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -255,6 +255,7 @@ hw-obj-$(CONFIG_AHCI) += ide/ich.o
 
 # SCSI layer
 hw-obj-$(CONFIG_LSI_SCSI_PCI) += lsi53c895a.o
+hw-obj-$(CONFIG_VMWARE_PVSCSI_PCI) += vmw_pvscsi.o
 hw-obj-$(CONFIG_ESP) += esp.o
 
 hw-obj-y += dma-helpers.o sysbus.o isa-bus.o
diff --git a/default-configs/pci.mak b/default-configs/pci.mak
index 0471efb..b1817f5 100644
--- a/default-configs/pci.mak
+++ b/default-configs/pci.mak
@@ -8,6 +8,7 @@ CONFIG_EEPRO100_PCI=y
 CONFIG_PCNET_PCI=y
 CONFIG_PCNET_COMMON=y
 CONFIG_LSI_SCSI_PCI=y
+CONFIG_VMWARE_PVSCSI_PCI=y
 CONFIG_RTL8139_PCI=y
 CONFIG_E1000_PCI=y
 CONFIG_IDE_CORE=y
diff --git a/hw/pci.h b/hw/pci.h
index 52ee8c9..26ce6d7 100644
--- a/hw/pci.h
+++ b/hw/pci.h
@@ -59,6 +59,7 @@
 #define PCI_DEVICE_ID_VMWARE_NET 0x0720
#define PCI_DEVICE_ID_VMWARE_SCSI    0x0730
 #define PCI_DEVICE_ID_VMWARE_IDE 0x1729
+#define PCI_DEVICE_ID_VMWARE_PVSCSI  0x07c0
 
 /* Intel (0x8086) */
 #define PCI_DEVICE_ID_INTEL_82551IT  0x1209
diff --git a/hw/vmw_pvscsi.c b/hw/vmw_pvscsi.c
new file mode 100644
index 000..fdda652
--- /dev/null
+++ b/hw/vmw_pvscsi.c
@@ -0,0 +1,911 @@
+/*
+ * VMware Paravirtualized SCSI Host Bus Adapter emulation
+ *
+ * Copyright (c) 2011 Red Hat, Inc.
+ * Written by Paolo Bonzini
+ *
+ * This code is licensed under GPLv2 or later.
+ */
+
+#include 
+
+#include "hw.h"
+#include "pci.h"
+#include "scsi.h"
+#include "scsi-defs.h"
+#include "vmw_pvscsi.h"
+#include "block_int.h"
+#include "host-utils.h"
+#include "trace.h"
+
+#define PVSCSI_MAX_DEVS 127
+#define PAGE_SIZE   4096
+#define PAGE_SHIFT  12
+
+typedef struct PVSCSIRequest {
+    SCSIDevice *sdev;
+    uint8_t sensing;
+    uint8_t sense_key;
+    uint8_t completed;
+    int lun;
+    target_phys_addr_t sg_current_addr;
+    target_phys_addr_t sg_current_dataAddr;
+    uint32_t sg_current_resid;
+    uint64_t resid;
+    struct PVSCSIRingReqDesc req;
+    struct PVSCSIRingCmpDesc cmp;
+    QTAILQ_ENTRY(PVSCSIRequest) next;
+} PVSCSIRequest;
+
+typedef QTAILQ_HEAD(, PVSCSIRequest) PVSCSIRequestList;
+
+typedef struct {
+    PCIDevice dev;
+    SCSIBus bus;
+    QEMUBH *complete_reqs_bh;
+
+    int mmio_io_addr;
+
+    /* zeroed on reset */
+    uint32_t cmd_latch;
+    uint32_t cmd_buffer[sizeof(struct PVSCSICmdDescSetupRings)
+                        / sizeof(uint32_t)];
+    uint32_t cmd_ptr;
+    uint32_t cmd_status;
+    uint32_t intr_status;
+    uint32_t intr_mask;
+    uint32_t intr_cmpl;
+    uint32_t intr_msg;
+    struct PVSCSICmdDescSetupRings rings;
+    struct PVSCSICmdDescSetupMsgRing msgRing;
+    uint32_t reqNumEntriesLog2;
+    uint32_t cmpNumEntriesLog2;
+    uint32_t msgNumEntriesLog2;
+
+    PVSCSIRequestList pending_queue;
+    PVSCSIRequestList complete_queue;
+} PVSCSIState;
+
+
+static inline int pvscsi_get_lun(uint8_t *lun)
+{
+    uint64_t lunval;
+    /* Assemble the 8-byte LUN field (big-endian byte order). */
+    lunval = ((uint64_t)lun[0] << 56) | ((uint64_t)lun[1] << 48) |
+             ((uint64_t)lun[2] << 40) | ((uint64_t)lun[3] << 32) |
+             ((uint64_t)lun[4] << 24) | ((uint64_t)lun[5] << 16) |
+             ((uint64_t)lun[6] <<  8) |  (uint64_t)lun[7];
+    /* Only LUN values 0-255 are supported. */
+    if ((lunval & ~(uint64_t) 255) != 0) {
+        return -1;
+    }
+    return lunval & 255;
+}
+
+static inline int pvscsi_get_dev_lun(PVSCSIState *s,
+                                     uint8_t *lun, uint32_t target,
+                                     SCSIDevice **sdev)
+{
+    SCSIBus *bus = &s->bus;
+    int lunval;
+    *sdev = NULL;
+    if (target > PVSCSI_MAX_DEVS) {
+        return -1;
+    }
+    lunval = pvscsi_get_lun(lun);
+    if (lunval < 0) {
+        return -1;
+    }
+    *sdev = bus->devs[target];
+    if (!*sdev) {
+        return -1;