Re: [PATCH v2 0/6] Device tree support for Hyper-V VMBus driver

2023-02-07 Thread Rob Herring
On Wed, Feb 1, 2023 at 10:34 AM Saurabh Singh Sengar
 wrote:
>
> On Wed, Feb 01, 2023 at 08:51:46AM -0600, Rob Herring wrote:
> > On Tue, Jan 31, 2023 at 06:04:49PM -0800, Saurabh Singh Sengar wrote:
> > > On Tue, Jan 31, 2023 at 02:27:51PM -0600, Rob Herring wrote:
> > > > On Tue, Jan 31, 2023 at 12:10 PM Saurabh Sengar
> > > >  wrote:
> > > > >
> > > > > This set of patches expands the VMBus driver to include device tree
> > > > > support.
> > > > >
> > > > > The first two patches enable compilation of Hyper-V APIs in a non-ACPI
> > > > > build.
> > > > >
> > > > > The third patch converts the VMBus driver from an ACPI driver to a
> > > > > more generic platform driver.
> > > > >
> > > > > Further, to add device tree documentation for VMBus, it needs to be
> > > > > grouped with the other virtualization drivers' documentation. For
> > > > > this, rename the virtio folder to the more generic "hypervisor", so
> > > > > that all hypervisor-based devices can co-exist in a single place in
> > > > > the device tree documentation. The fourth patch does this renaming.
> > > > >
> > > > > The fifth patch introduces the device tree documentation for VMBus.
> > > > >
> > > > > The sixth patch adds device tree support to the VMBus driver.
> > > > > Currently this is tested only for x86 and it may not work for
> > > > > other archs.
> > > >
> > > > I can read all the patches and see *what* they do. You don't really
> > > > need to list that here. I'm still wondering *why*. That is what the
> > > > cover letter and commit messages should answer. Why do you need DT
> > > > support? How does this even work on x86? FDT is only enabled for
> > > > CE4100 platform.
> > >
> > > Hi Rob,
> > >
> > > Thanks for your comments.
> > > We are working on a solution where the kernel is booted without ACPI
> > > tables to keep the overall system's memory footprint slim and possibly
> > > achieve faster boot times.
> > > We have tested this by enabling CONFIG_OF for x86.
> >
> > It's CONFIG_OF_EARLY_FLATTREE which you would need and that's not user
> > selectable. At a minimum, you need some kconfig changes. Where are
> > those?
>
> You are right; we have defined a new config flag in Kconfig and selected
> CONFIG_OF and CONFIG_OF_EARLY_FLATTREE. We are working on upstreaming that
> patch as well, however that will be a separate patch series.

Fair enough, but that should come first IMO. Really I just want to see
a complete picture. That can be a reference to a git branch(es) or
other patch series. But again, what I want to see in particular is the
actual DT and validation run on it.
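Since CONFIG_OF_EARLY_FLATTREE is not user-selectable, the separate series
would need a Kconfig symbol that selects it. A minimal sketch of such an
entry, with a hypothetical symbol name (the actual patch may differ):

config HYPERV_VMBUS_DT
	bool "Device tree support for the Hyper-V VMBus driver"
	depends on HYPERV && X86
	select OF
	select OF_EARLY_FLATTREE
	help
	  Let the VMBus driver probe from a flattened device tree instead
	  of ACPI, for guests booted without ACPI tables.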

> > Also see my comment on v1 about running DT validation on your dtb. I'm
> > sure running it would point out other issues, such as the root-level
> > compatible string(s) needing to be documented. You need cpu nodes, an
> > interrupt controller, timers, etc. Those all have to be documented.
>
> I will be changing the parent node to a soc node, as suggested by Krzysztof
> in another thread.

Another issue, yes, but orthogonal to my comments.

>
> soc {
>     #address-cells = <2>;
>     #size-cells = <2>;

You are missing 'ranges' here. Without it, addresses aren't translatable.

You are also missing 'compatible = "simple-bus";'. This happens to
work on x86 because of legacy reasons, but we don't want new cases
added.

>
>     vmbus@ff000 {
>         #address-cells = <2>;
>         #size-cells = <1>;
>         compatible = "Microsoft,vmbus";

'Microsoft' is not a vendor prefix.

>         ranges = <0x00 0x00 0x0f 0xf000 0x1000>;
>     };
> };
>
> This will be sufficient.

All these comments are unnecessary because the tools will now check
these things and we shouldn't have to.

Rob
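Applying the review comments above to the quoted example gives roughly the
following sketch (addresses carried over unchanged from the posting; the
lower-case "microsoft" prefix follows the vendor-prefix convention):

soc {
	#address-cells = <2>;
	#size-cells = <2>;
	compatible = "simple-bus";
	ranges;		/* 1:1 translation so child addresses are translatable */

	vmbus@ff000 {
		#address-cells = <2>;
		#size-cells = <1>;
		compatible = "microsoft,vmbus";
		ranges = <0x00 0x00 0x0f 0xf000 0x1000>;
	};
};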


Re: [PATCH v2 0/6] Device tree support for Hyper-V VMBus driver

2023-02-01 Thread Rob Herring
On Tue, Jan 31, 2023 at 06:04:49PM -0800, Saurabh Singh Sengar wrote:
> On Tue, Jan 31, 2023 at 02:27:51PM -0600, Rob Herring wrote:
> > On Tue, Jan 31, 2023 at 12:10 PM Saurabh Sengar
> >  wrote:
> > >
> > > This set of patches expands the VMBus driver to include device tree
> > > support.
> > >
> > > The first two patches enable compilation of Hyper-V APIs in a non-ACPI
> > > build.
> > >
> > > The third patch converts the VMBus driver from an ACPI driver to a more
> > > generic platform driver.
> > >
> > > Further, to add device tree documentation for VMBus, it needs to be
> > > grouped with the other virtualization drivers' documentation. For this,
> > > rename the virtio folder to the more generic "hypervisor", so that all
> > > hypervisor-based devices can co-exist in a single place in the device
> > > tree documentation. The fourth patch does this renaming.
> > >
> > > The fifth patch introduces the device tree documentation for VMBus.
> > >
> > > The sixth patch adds device tree support to the VMBus driver. Currently
> > > this is tested only for x86 and it may not work for other archs.
> > 
> > I can read all the patches and see *what* they do. You don't really
> > need to list that here. I'm still wondering *why*. That is what the
> > cover letter and commit messages should answer. Why do you need DT
> > support? How does this even work on x86? FDT is only enabled for
> > CE4100 platform.
> 
> Hi Rob,
> 
> Thanks for your comments.
> We are working on a solution where the kernel is booted without ACPI tables
> to keep the overall system's memory footprint slim and possibly achieve
> faster boot times.
> We have tested this by enabling CONFIG_OF for x86.

It's CONFIG_OF_EARLY_FLATTREE which you would need and that's not user 
selectable. At a minimum, you need some kconfig changes. Where are 
those?

Also see my comment on v1 about running DT validation on your dtb. I'm
sure running it would point out other issues, such as the root-level
compatible string(s) needing to be documented. You need cpu nodes, an
interrupt controller, timers, etc. Those all have to be documented.

Rob


[PATCH 0/8] Staging: hv: vmbus: Driver cleanup

2011-08-15 Thread K. Y. Srinivasan
Further cleanup of the vmbus driver:

1) Cleanup the interrupt handler by inlining some code. 

2) Ensure message handling is performed on the same CPU that
   takes the vmbus interrupt. 

3) Check for events before messages (from the host).

4) Disable auto eoi for the vmbus interrupt since Linux will eoi the
   interrupt anyway. 

5) Some general cleanup.
  

Regards,

K. Y 




[PATCH 01/77] Staging: hv: vmbus: Increase the timeout value in the vmbus driver

2011-06-16 Thread K. Y. Srinivasan
On some loaded Windows hosts, we have discovered that the host may not
respond to guest requests within the specified time (one second),
as evidenced by the guest timing out. Fix this problem by increasing
the timeout to 5 seconds.

It may be useful to apply this patch to the 3.0 kernel as well.

Signed-off-by: K. Y. Srinivasan k...@microsoft.com
Signed-off-by: Haiyang Zhang haiya...@microsoft.com
Signed-off-by: Hank Janssen hjans...@microsoft.com
Cc: stable sta...@kernel.org
---
 drivers/staging/hv/channel.c  |2 +-
 drivers/staging/hv/channel_mgmt.c |2 +-
 drivers/staging/hv/connection.c   |2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/staging/hv/channel.c b/drivers/staging/hv/channel.c
index cffca7c..455f47a 100644
--- a/drivers/staging/hv/channel.c
+++ b/drivers/staging/hv/channel.c
@@ -211,7 +211,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
 	if (ret != 0)
 		goto cleanup;
 
-	t = wait_for_completion_timeout(&openInfo->waitevent, HZ);
+	t = wait_for_completion_timeout(&openInfo->waitevent, 5*HZ);
 	if (t == 0) {
 		err = -ETIMEDOUT;
 		goto errorout;
diff --git a/drivers/staging/hv/channel_mgmt.c b/drivers/staging/hv/channel_mgmt.c
index 2d270ce..bf011f3 100644
--- a/drivers/staging/hv/channel_mgmt.c
+++ b/drivers/staging/hv/channel_mgmt.c
@@ -767,7 +767,7 @@ int vmbus_request_offers(void)
 		goto cleanup;
 	}
 
-	t = wait_for_completion_timeout(&msginfo->waitevent, HZ);
+	t = wait_for_completion_timeout(&msginfo->waitevent, 5*HZ);
 	if (t == 0) {
 		ret = -ETIMEDOUT;
 		goto cleanup;
diff --git a/drivers/staging/hv/connection.c b/drivers/staging/hv/connection.c
index 7e15392..e6b4039 100644
--- a/drivers/staging/hv/connection.c
+++ b/drivers/staging/hv/connection.c
@@ -135,7 +135,7 @@ int vmbus_connect(void)
 	}
 
 	/* Wait for the connection response */
-	t = wait_for_completion_timeout(&msginfo->waitevent, HZ);
+	t = wait_for_completion_timeout(&msginfo->waitevent, 5*HZ);
 	if (t == 0) {
 		spin_lock_irqsave(&vmbus_connection.channelmsg_lock,
 				  flags);
-- 
1.7.4.1



RE: vmbus driver

2011-05-23 Thread KY Srinivasan


 -Original Message-
 From: Christoph Hellwig [mailto:h...@infradead.org]
 Sent: Sunday, May 22, 2011 7:00 AM
 To: KY Srinivasan
 Cc: Christoph Hellwig; gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: vmbus driver
 
  I see maintainers for each of the clocksource drivers and I see John Stultz
  and Thomas Gleixner listed as the maintainers for Timekeeping. Who should
  sign off on the Hyper-V clocksource?
 
 just send it to both of them with linux-kernel in Cc, and either of them
 will probably put it in.
 

John, Thomas,

I am working on getting Hyper-V drivers (drivers/staging/hv/*) out of staging.
I would like to request you to look at the Hyper-V timesource driver:
drivers/staging/hv/hv_timesource.c. The supporting code for this driver
is already part of the base kernel. Let me know if this driver is ready to exit 
staging.

Regards,

K. Y



RE: vmbus driver

2011-05-23 Thread Thomas Gleixner
On Mon, 23 May 2011, KY Srinivasan wrote:
 I am working on getting Hyper-V drivers (drivers/staging/hv/*) out of staging.
 I would like to request you to look at the Hyper-V timesource driver:
 drivers/staging/hv/hv_timesource.c. The supporting code for this driver
 is already part of the base kernel. Let me know if this driver is ready to 
 exit staging.

Can you please send a patch against drivers/clocksource (the staging
part is uninteresting for review).

Thanks,

tglx


RE: vmbus driver

2011-05-23 Thread KY Srinivasan


 -Original Message-
 From: Thomas Gleixner [mailto:t...@linutronix.de]
 Sent: Monday, May 23, 2011 9:52 AM
 To: KY Srinivasan
 Cc: Christoph Hellwig; johns...@us.ibm.com; gre...@suse.de; linux-
 ker...@vger.kernel.org; de...@linuxdriverproject.org;
 virtualizat...@lists.osdl.org
 Subject: RE: vmbus driver
 
 On Mon, 23 May 2011, KY Srinivasan wrote:
  I am working on getting Hyper-V drivers (drivers/staging/hv/*) out of 
  staging.
  I would like to request you to look at the Hyper-V timesource driver:
  drivers/staging/hv/hv_timesource.c. The supporting code for this driver
  is already part of the base kernel. Let me know if this driver is ready to 
  exit
 staging.
 
 Can you please send a patch against drivers/clocksource (the staging
 part is uninteresting for review).

Will do.

Regards,

K. Y




Re: vmbus driver

2011-05-22 Thread Christoph Hellwig
 I see maintainers for each of the clocksource drivers and I see John Stultz
 and Thomas Gleixner listed as the maintainers for Timekeeping. Who should
 sign off on the Hyper-V clocksource?

just send it to both of them with linux-kernel in Cc, and either of them
will probably put it in.



RE: vmbus driver

2011-05-22 Thread KY Srinivasan


 -Original Message-
 From: Christoph Hellwig [mailto:h...@infradead.org]
 Sent: Sunday, May 22, 2011 7:00 AM
 To: KY Srinivasan
 Cc: Christoph Hellwig; gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: vmbus driver
 
   I see maintainers for each of the clocksource drivers and I see John Stultz
   and Thomas Gleixner listed as the maintainers for Timekeeping. Who should
   sign off on the Hyper-V clocksource?
  
  just send it to both of them with linux-kernel in Cc, and either of them
  will probably put it in.
 
Will do.

Thanks,

K. Y


Re: vmbus driver

2011-05-20 Thread Christoph Hellwig
On Thu, May 19, 2011 at 03:06:25PM -0700, K. Y. Srinivasan wrote:
 A few days ago you applied all the outstanding patches for the Hyper-V
 drivers. With these patches, I have addressed all of the known review 
 comments for the  vmbus driver (and a lot of comments/issues in other
 drivers as well). I am still hoping I can address 
 whatever other issues/comments there might be with the intention to 
 get the vmbus driver out of staging in the current window. What is your 
 sense in terms of how feasible this is. From my side, I can assure you 
 that I will address all legitimate issues in a very timely manner and this
 will not be dependent upon the location of the drivers (staging or 
 outside staging). Looking forward to hearing from you.

There's no point in merging it without a user.  Make sure either
the network or storage driver is in a good enough shape to move with it,
to make sure the APIs it exports are actually sanely usable.

On the other hand the HV clocksource looks mostly mergeable and doesn't
depend on vmbus.  Send a patch to add it to drivers/clocksource to the
maintainer and it should be mergeable with very little remaining
cleanup.
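To make that concrete, a drivers/clocksource version of such a driver is
small. A rough sketch of the shape it takes (the MSR is the Hyper-V time
reference counter from the hypervisor spec; hv_timesource.c itself may
differ in the details):

#include <linux/clocksource.h>
#include <linux/module.h>
#include <asm/msr.h>

#define HV_X64_MSR_TIME_REF_COUNT 0x40000020	/* 100ns ticks since boot */

static cycle_t read_hv_clock(struct clocksource *arg)
{
	u64 current_tick;

	rdmsrl(HV_X64_MSR_TIME_REF_COUNT, current_tick);
	return current_tick;
}

static struct clocksource hyperv_cs = {
	.name	= "hyperv_clocksource",
	.rating	= 400,	/* preferred over the TSC inside a guest */
	.read	= read_hv_clock,
	.mask	= CLOCKSOURCE_MASK(64),
	.flags	= CLOCK_SOURCE_IS_CONTINUOUS,
};

static int __init hv_cs_init(void)
{
	/* A real driver must first verify it is running on Hyper-V. */
	return clocksource_register_hz(&hyperv_cs, 10000000); /* 10 MHz */
}
module_init(hv_cs_init);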



Re: vmbus driver

2011-05-20 Thread Greg KH
On Thu, May 19, 2011 at 03:06:25PM -0700, K. Y. Srinivasan wrote:
  
 Greg,
 
 A few days ago you applied all the outstanding patches for the Hyper-V
 drivers. With these patches, I have addressed all of the known review 
 comments for the  vmbus driver (and a lot of comments/issues in other
 drivers as well). I am still hoping I can address 
 whatever other issues/comments there might be with the intention to 
 get the vmbus driver out of staging in the current window. What is your 
 sense in terms of how feasible this is. From my side, I can assure you 
 that I will address all legitimate issues in a very timely manner and this
 will not be dependent upon the location of the drivers (staging or 
 outside staging). Looking forward to hearing from you.

The merge window is closed now, and I'm on the road in Asia for about 3
weeks, so doing this, at this point in the development cycle, is going
to be hard.

I'll go review the bus code again after the code is all merged with
Linus, which should take a week or so depending on my schedule, and let
you know what's left to do (I think there still is something weird with
the way the hv_driver is structured, but I could be wrong.)

In the mean time, I'm sure the block and network driver still need a lot
of work, and merging the bus code doesn't make much sense without them
as a user as that is what people really want to use, so you can continue
to work on them.

thanks,

greg k-h


RE: vmbus driver

2011-05-20 Thread KY Srinivasan


 -Original Message-
 From: Christoph Hellwig [mailto:h...@infradead.org]
 Sent: Friday, May 20, 2011 8:27 AM
 To: KY Srinivasan
 Cc: gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: vmbus driver
 
 On Thu, May 19, 2011 at 03:06:25PM -0700, K. Y. Srinivasan wrote:
  A few days ago you applied all the outstanding patches for the Hyper-V
  drivers. With these patches, I have addressed all of the known review
  comments for the  vmbus driver (and a lot of comments/issues in other
  drivers as well). I am still hoping I can address
  whatever other issues/comments there might be with the intention to
  get the vmbus driver out of staging in the current window. What is your
  sense in terms of how feasible this is. From my side, I can assure you
  that I will address all legitimate issues in a very timely manner and this
  will not be dependent upon the location of the drivers (staging or
  outside staging). Looking forward to hearing from you.
 
 There's no point in merging it without a user.  Make sure either
 the network or storage driver is in a good enough shape to move with it,
 to make sure the APIs it exports are actually sanely usable.

Well, the util driver that implements a range of other services such as KVP,
time sync, heartbeat etc. is also a client of the vmbus driver (perhaps not
in the same way as the storage and network drivers). I was hoping to
move the util driver out of staging along with the vmbus driver.

On a different note, thanks to the feedback I got from you, Greg and others,
both storage and network drivers are in much better shape than they ever were.
I will continue to clean up the storage drivers and I would greatly
appreciate your feedback and review.

 
 On the other hand the HV clocksource looks mostly mergeable and doesn't
 depend on vmbus.  Send a patch to add it to drivers/clocksource to the
 maintainer and it should be mergeable with very little remaining
 cleanup.

Agreed, now that the merge window is closed, I will have to wait for a few 
weeks.

Regards,

K. Y
 


RE: vmbus driver

2011-05-20 Thread KY Srinivasan


 -Original Message-
 From: Greg KH [mailto:g...@kroah.com]
 Sent: Friday, May 20, 2011 9:05 AM
 To: KY Srinivasan
 Cc: gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: vmbus driver
 
 On Thu, May 19, 2011 at 03:06:25PM -0700, K. Y. Srinivasan wrote:
 
  Greg,
 
  A few days ago you applied all the outstanding patches for the Hyper-V
  drivers. With these patches, I have addressed all of the known review
  comments for the  vmbus driver (and a lot of comments/issues in other
  drivers as well). I am still hoping I can address
  whatever other issues/comments there might be with the intention to
  get the vmbus driver out of staging in the current window. What is your
  sense in terms of how feasible this is. From my side, I can assure you
  that I will address all legitimate issues in a very timely manner and this
  will not be dependent upon the location of the drivers (staging or
  outside staging). Looking forward to hearing from you.
 
 The merge window is closed now, and I'm on the road in Asia for about 3
 weeks, so doing this, at this point in the development cycle, is going
 to be hard.
 
 I'll go review the bus code again after the code is all merged with
 Linus, which should take a week or so depending on my schedule, and let
 you know what's left to do (I think there still is something weird with
 the way the hv_driver is structured, but I could be wrong.)

Thanks Greg. I look forward to your feedback.

 
 In the mean time, I'm sure the block and network driver still need a lot
 of work, and merging the bus code doesn't make much sense without them
 as a user as that is what people really want to use, so you can continue
 to work on them.

I will continue to clean up the block and network driver code. As you know,
the util driver is also a client of the vmbus driver (as far as the
communication with the host goes). So, it may still make sense to plan for
getting the vmbus driver out of staging along with the util and the
timesource drivers.

Regards,

K. Y



Re: vmbus driver

2011-05-20 Thread Christoph Hellwig
On Fri, May 20, 2011 at 01:12:32PM +, KY Srinivasan wrote:
 Well, the util driver that implements a range of other services such as KVP, 
 time synch, heartbeat etc. is also a client of the vmbus driver (perhaps not 
 in the 

The KVP driver is a different module as far as I can see.  But it really
needs a lot of work, as no one should use the ugly connector interface
for new code.  The closest equivalent is genetlink, but I'd like to
understand what it's actually supposed to do in practice.
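For reference, a generic netlink family registration of the kind being
suggested would look roughly like this; a sketch against a current kernel's
genetlink API, with a hypothetical family name and command number:

#include <linux/module.h>
#include <net/genetlink.h>

#define HV_KVP_CMD_EXCHANGE 1	/* hypothetical command */

static int kvp_doit(struct sk_buff *skb, struct genl_info *info)
{
	/* A key/value exchange request from the daemon would be parsed
	 * from info->attrs and answered here. */
	return 0;
}

static const struct genl_ops kvp_ops[] = {
	{ .cmd = HV_KVP_CMD_EXCHANGE, .doit = kvp_doit },
};

static struct genl_family kvp_family = {
	.name	 = "hv_kvp",		/* hypothetical family name */
	.version = 1,
	.maxattr = 1,
	.ops	 = kvp_ops,
	.n_ops	 = ARRAY_SIZE(kvp_ops),
	.module	 = THIS_MODULE,
};

static int __init kvp_genl_init(void)
{
	return genl_register_family(&kvp_family);
}
module_init(kvp_genl_init);

static void __exit kvp_genl_exit(void)
{
	genl_unregister_family(&kvp_family);
}
module_exit(kvp_genl_exit);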



RE: vmbus driver

2011-05-20 Thread KY Srinivasan


 -Original Message-
 From: Christoph Hellwig [mailto:h...@infradead.org]
 Sent: Friday, May 20, 2011 8:27 AM
 To: KY Srinivasan
 Cc: gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: vmbus driver
 
 On Thu, May 19, 2011 at 03:06:25PM -0700, K. Y. Srinivasan wrote:
  A few days ago you applied all the outstanding patches for the Hyper-V
  drivers. With these patches, I have addressed all of the known review
  comments for the  vmbus driver (and a lot of comments/issues in other
  drivers as well). I am still hoping I can address
  whatever other issues/comments there might be with the intention to
  get the vmbus driver out of staging in the current window. What is your
  sense in terms of how feasible this is. From my side, I can assure you
  that I will address all legitimate issues in a very timely manner and this
  will not be dependent upon the location of the drivers (staging or
  outside staging). Looking forward to hearing from you.
 
 There's no point in merging it without a user.  Make sure either
 the network or storage driver is in a good enough shape to move with it,
 to make sure the APIs it exports are actually sanely usable.
 
 On the other hand the HV clocksource looks mostly mergeable and doesn't
 depend on vmbus.  Send a patch to add it to drivers/clocksource to the
 maintainer and it should be mergeable with very little remaining
 cleanup.

I see maintainers for each of the clocksource drivers and I see John Stultz and
Thomas Gleixner listed as the maintainers for Timekeeping. Who should sign off
on the Hyper-V clocksource?

Regards,

K. Y



RE: vmbus driver

2011-05-20 Thread KY Srinivasan


 -Original Message-
 From: Christoph Hellwig [mailto:h...@infradead.org]
 Sent: Friday, May 20, 2011 9:22 AM
 To: KY Srinivasan
 Cc: Christoph Hellwig; gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: vmbus driver
 
 On Fri, May 20, 2011 at 01:12:32PM +, KY Srinivasan wrote:
  Well, the util driver that implements a range of other services such as KVP,
  time synch, heartbeat etc. is also a client of the vmbus driver (perhaps 
  not in
 the
 
 The KVP driver is a different module as far as I can see.  But it really
 needs a lot of work, as no one should use the ugly connector interface
 for new code.  The closest equivalent is genetlink, but I'd like to
 understand what it's actually supposed to do in practice.

Chris,

I wrote the KVP component of the util driver less than a year ago and
this code was reviewed on this list before it was accepted. The KVP (Key Value
Pair) functionality supports host-based queries on the guest. The data
gathering in the guest is done in user mode and the kernel component of KVP
is used to communicate with the host. I am using the connector interface to
support communication between the kernel component and the user-mode daemon.
The KVP functionality is needed to integrate with the Microsoft management
stack on the host.

Regards,

K. Y 
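To illustrate the split just described: the daemon side of such a connector
channel amounts to listening on a NETLINK_CONNECTOR socket. A hedged sketch
(CN_KVP_IDX/CN_KVP_VAL are assumed to be the connector IDs the kernel
component registers; error handling is trimmed):

#include <stdio.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/connector.h>

#ifndef CN_KVP_IDX		/* assumed connector index/value for KVP */
#define CN_KVP_IDX 0x9
#define CN_KVP_VAL 0x1
#endif

int main(void)
{
	struct sockaddr_nl addr = {
		.nl_family = AF_NETLINK,
		.nl_groups = CN_KVP_IDX,	/* multicast group to join */
	};
	char buf[1024];
	int fd = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);

	bind(fd, (struct sockaddr *)&addr, sizeof(addr));
	for (;;) {
		ssize_t len = recv(fd, buf, sizeof(buf), 0);
		struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
		struct cn_msg *msg = NLMSG_DATA(nlh);

		if (len <= 0)
			break;
		/* msg->data carries the host's key/value query; the daemon
		 * gathers the answer in user mode and sends a cn_msg back
		 * over the same socket. */
		printf("got %u payload bytes from the kernel\n",
		       (unsigned)msg->len);
	}
	return 0;
}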
 



vmbus driver

2011-05-19 Thread K. Y. Srinivasan
 
Greg,

A few days ago you applied all the outstanding patches for the Hyper-V
drivers. With these patches, I have addressed all of the known review 
comments for the  vmbus driver (and a lot of comments/issues in other
drivers as well). I am still hoping I can address 
whatever other issues/comments there might be with the intention to 
get the vmbus driver out of staging in the current window. What is your
sense in terms of how feasible this is? From my side, I can assure you
that I will address all legitimate issues in a very timely manner and this
will not be dependent upon the location of the drivers (staging or 
outside staging). Looking forward to hearing from you.

Regards,

K. Y



[PATCH 184/206] Staging: hv: Include the new header files in vmbus driver

2011-05-10 Thread K. Y. Srinivasan
Include the new header files in vmbus driver.

Signed-off-by: K. Y. Srinivasan k...@microsoft.com
Signed-off-by: Haiyang Zhang haiya...@microsoft.com
Signed-off-by: Abhishek Kane v-abk...@microsoft.com
Signed-off-by: Hank Janssen hjans...@microsoft.com
---
 drivers/staging/hv/channel.c  |6 +++---
 drivers/staging/hv/channel_mgmt.c |7 +++
 drivers/staging/hv/connection.c   |5 ++---
 drivers/staging/hv/hv.c   |6 +++---
 drivers/staging/hv/ring_buffer.c  |4 ++--
 drivers/staging/hv/vmbus_drv.c|8 ++--
 6 files changed, 15 insertions(+), 21 deletions(-)

diff --git a/drivers/staging/hv/channel.c b/drivers/staging/hv/channel.c
index a2a190e..b9b082c 100644
--- a/drivers/staging/hv/channel.c
+++ b/drivers/staging/hv/channel.c
@@ -26,9 +26,9 @@
 #include <linux/mm.h>
 #include <linux/slab.h>
 #include <linux/module.h>
-#include "hv_api.h"
-#include "logging.h"
-#include "vmbus_private.h"
+
+#include <linux/hyperv.h>
+#include "hyperv_vmbus.h"
 
 #define NUM_PAGES_SPANNED(addr, len) \
 ((PAGE_ALIGN(addr + len) >> PAGE_SHIFT) - (addr >> PAGE_SHIFT))
diff --git a/drivers/staging/hv/channel_mgmt.c b/drivers/staging/hv/channel_mgmt.c
index 33cb5d5..0e4e05a 100644
--- a/drivers/staging/hv/channel_mgmt.c
+++ b/drivers/staging/hv/channel_mgmt.c
@@ -28,10 +28,9 @@
 #include <linux/list.h>
 #include <linux/module.h>
 #include <linux/completion.h>
-#include "hv_api.h"
-#include "logging.h"
-#include "vmbus_private.h"
-#include "utils.h"
+
+#include <linux/hyperv.h>
+#include "hyperv_vmbus.h"
 
 struct vmbus_channel_message_table_entry {
 	enum vmbus_channel_message_type message_type;
diff --git a/drivers/staging/hv/connection.c b/drivers/staging/hv/connection.c
index dd62585..445db48 100644
--- a/drivers/staging/hv/connection.c
+++ b/drivers/staging/hv/connection.c
@@ -28,10 +28,9 @@
 #include <linux/mm.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
-#include "hv_api.h"
-#include "logging.h"
-#include "vmbus_private.h"
 
+#include <linux/hyperv.h>
+#include "hyperv_vmbus.h"
 
 struct vmbus_connection vmbus_connection = {
 	.conn_state = DISCONNECTED,
diff --git a/drivers/staging/hv/hv.c b/drivers/staging/hv/hv.c
index 2efac38..037424b6 100644
--- a/drivers/staging/hv/hv.c
+++ b/drivers/staging/hv/hv.c
@@ -25,9 +25,9 @@
 #include <linux/mm.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
-#include "hv_api.h"
-#include "logging.h"
-#include "vmbus_private.h"
+
+#include <linux/hyperv.h>
+#include "hyperv_vmbus.h"
 
 /* The one and only */
 struct hv_context hv_context = {
diff --git a/drivers/staging/hv/ring_buffer.c b/drivers/staging/hv/ring_buffer.c
index badf52a..ec262c1 100644
--- a/drivers/staging/hv/ring_buffer.c
+++ b/drivers/staging/hv/ring_buffer.c
@@ -25,9 +25,9 @@
 
 #include <linux/kernel.h>
 #include <linux/mm.h>
-#include "logging.h"
-#include "ring_buffer.h"
 
+#include <linux/hyperv.h>
+#include "hyperv_vmbus.h"
 
 /* #defines */
 
diff --git a/drivers/staging/hv/vmbus_drv.c b/drivers/staging/hv/vmbus_drv.c
index 5dcd87a..5a049ab 100644
--- a/drivers/staging/hv/vmbus_drv.c
+++ b/drivers/staging/hv/vmbus_drv.c
@@ -34,13 +34,9 @@
 #include <linux/acpi.h>
 #include <acpi/acpi_bus.h>
 #include <linux/completion.h>
-#include "version_info.h"
-#include "hv_api.h"
-#include "logging.h"
-#include "vmbus.h"
-#include "channel.h"
-#include "vmbus_private.h"
 
+#include <linux/hyperv.h>
+#include "hyperv_vmbus.h"
 
 static struct pci_dev *hv_pci_dev;
 
-- 
1.7.4.1



RE: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-05-02 Thread KY Srinivasan


 -Original Message-
 From: Christoph Hellwig [mailto:h...@infradead.org]
 Sent: Sunday, May 01, 2011 4:53 PM
 To: KY Srinivasan
 Cc: Greg KH; gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
 
 On Sun, May 01, 2011 at 06:08:37PM +, KY Srinivasan wrote:
  Could you elaborate on the problems/issues when the block driver registers
  for the IDE majors? On the Qemu side, we have a mechanism to disable the
  emulation when PV drivers load. I don't think there is an equivalent
  mechanism on the Windows side. So, as far as I know, registering for the
  IDE majors is the only way to also prevent native drivers in Linux from
  taking control of the emulated device.
 
 What qemu are you talking about for the qemu side?  Upstream qemu
 doesn't have any way to provide the same image as multiple devices,
 nevermind dynamically unplugging bits in that case.  Nor does it support
 the hyperv devices.

I am talking about the qemu that was (is) shipping with Xen. In Hyper-V,
the block devices configured as IDE devices for the guest will be taken over
by the native drivers if the PV drivers don't load first and take over the
IDE majors. If you want the root device to be managed by the PV drivers, this
appears to be the only way to ensure that native IDE drivers don't take over
the root device. Granted, this depends on ensuring the PV drivers load first,
but I don't know if there is another way to achieve this.

 
 When you steal majors you rely on:
 
  a) loading earlier than the driver you steal them from
  b) the driver not simply using other numbers
  c) if it doesn't, preventing it from working at all, also for
 devices you don't replace with your PV devices.

These are exactly the issues that had to be solved to have the PV 
drivers manage the root device.

  d) that the guest actually uses the majors you claim, e.g. any
 current linux distribution uses libata anyway, so your old IDE
 major claim wouldn't do anything.  Nor would claiming sd majors,
 as the low-level libata driver would still drive the hardware
 even if sd doesn't bind to it.

By setting up appropriate modprobe rules, this can be addressed.

 
 You really must never present the same device as two emulated devices
 instead of doing such hacks.

Agreed; I am not sure what the right solution for Hyper-V is other than 
(a) preventing the native IDE drivers from loading and (b) having
the right modprobe rules to ensure libata would not present these
same devices to the guest as scsi devices.

Regards,

K. Y
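The modprobe rules alluded to in this thread would look something like the
following sketch (hypothetical file name; it assumes ata_piix and the legacy
IDE drivers are modular, which is exactly the assumption challenged in the
next message):

# /etc/modprobe.d/hyperv-pv.conf (hypothetical)
# Keep the native drivers for the emulated IDE controller from binding,
# so the PV block driver can claim the boot disk.
blacklist ata_piix
blacklist piix
install ata_piix /bin/true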


Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-05-02 Thread Christoph Hellwig
On Mon, May 02, 2011 at 07:48:38PM +, KY Srinivasan wrote:
 By setting up appropriate modprobe rules, this can be addressed.

That assumes libata is a module, which it is not for many popular
distributions.



RE: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-05-02 Thread KY Srinivasan


 -Original Message-
 From: Christoph Hellwig [mailto:h...@infradead.org]
 Sent: Monday, May 02, 2011 4:00 PM
 To: KY Srinivasan
 Cc: Christoph Hellwig; Greg KH; gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
 
 On Mon, May 02, 2011 at 07:48:38PM +, KY Srinivasan wrote:
  By setting up appropriate modprobe rules, this can be addressed.
 
 That assumes libata is a module, which it is not for many popular
 distributions.
 
As long as you can prevent ata_piix from loading, it should be fine.

Regards,

K. Y


Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-05-02 Thread Christoph Hellwig
On Mon, May 02, 2011 at 09:16:36PM +, KY Srinivasan wrote:
  That assumes libata is a module, which it is not for many popular
  distributions.
  
 As long as you can prevent ata_piix from loading, it should be fine.

Again, this might very well be built in, e.g. take a look at:

http://pkgs.fedoraproject.org/gitweb/?p=kernel.git;a=blob;f=config-generic;h=779415bcc036b922ba92de9c4b15b9da64e9707c;hb=HEAD

http://gitorious.org/opensuse/kernel-source/blobs/master/config/x86_64/default


RE: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-05-02 Thread KY Srinivasan


 -Original Message-
 From: Christoph Hellwig [mailto:h...@infradead.org]
 Sent: Monday, May 02, 2011 5:35 PM
 To: KY Srinivasan
 Cc: Greg KH; gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
 
 On Mon, May 02, 2011 at 09:16:36PM +, KY Srinivasan wrote:
   That assumes libata is a module, which it is not for many popular
   distributions.
  
  As long as you can prevent ata_piix from loading, it should be fine.
 
 Again, this might very well be built in, e.g. take a look at:
 
 http://pkgs.fedoraproject.org/gitweb/?p=kernel.git;a=blob;f=config-
 generic;h=779415bcc036b922ba92de9c4b15b9da64e9707c;hb=HEAD
 
 http://gitorious.org/opensuse/kernel-
 source/blobs/master/config/x86_64/default

Good point! For what it is worth, last night I hacked up code
to present the block devices currently managed by the blkvsc
driver as SCSI devices. I have still retained the blkvsc driver to
handshake with the host and set up the channel etc. Rather than
presenting this device as an IDE device to the guest, as you had
suggested, I am adding this device as a SCSI device under the HBA
implemented by the storvsc driver. I have assigned a special channel
number to distinguish these IDE disks, so that on the I/O paths we can
communicate over the appropriate channels. Given that the host is
completely oblivious to this arrangement on the guest, I suspect
we don't need to worry about future versions of Windows breaking this.
From very minimal testing I have done, things appear to work well.
However, the motherboard emulation in Hyper-V requires the boot
device to be an IDE device, and other than taking over the IDE majors I
don't know of a way to prevent the native drivers from taking over the boot
device. On SLES, I had implemented modprobe rules to deal with the issue you
had mentioned; it is not clear what the general solution might be for this
problem, if any, other than changes to the host.

Regards,

K. Y



Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-05-01 Thread Christoph Hellwig
On Fri, Apr 29, 2011 at 04:32:35PM +, KY Srinivasan wrote:
 On the host-side, as part of configuring a guest  you can specify block 
 devices
 as being under an IDE controller or under a
 SCSI controller. Those are the only options you have. Devices configured under
 the IDE controller cannot be seen in the guest under the emulated SCSI 
 front-end which is
 the scsi driver (storvsc_drv). So, when you do a bus scan in the emulated 
 scsi front-end,
 the devices enumerated will not include block devices configured under the 
 IDE 
 controller. So, it is not clear to me how I can do what you are proposing 
 given the 
 restrictions imposed by the host.

Just because a device is not reported by REPORT_LUNS doesn't mean you
can't talk to it using a SCSI LLDD.  We have SCSI transports with all
kinds of strange ways to discover devices.  Using scsi_add_device you
can add LUNs found by your own discovery methods, and use all the
existing scsi command handling.
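As a sketch of that approach (the enumeration helper and loop bounds are
hypothetical; scsi_add_device() is the midlayer entry point being referred
to):

#include <linux/err.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

/* Register privately discovered disks with the SCSI midlayer, assuming the
 * driver's own bus walk yields one target id per IDE-configured disk. */
static int storvsc_scan_private_disks(struct Scsi_Host *host,
				      unsigned int ndisks)
{
	unsigned int target;

	for (target = 0; target < ndisks; target++) {
		struct scsi_device *sdev;

		sdev = scsi_add_device(host, 0 /* channel */, target, 0 /* lun */);
		if (IS_ERR(sdev))
			return PTR_ERR(sdev);
	}
	return 0;
}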



Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-05-01 Thread Christoph Hellwig
On Fri, Apr 29, 2011 at 09:40:25AM -0700, Greg KH wrote:
 Are you sure the libata core can't see this ide controller and connect
 to it?  That way you would use the scsi system if you do that and you
 would need a much smaller ide driver, perhaps being able to merge it
 with your scsi driver.
 
 We really don't want to write new IDE drivers anymore that don't use
 libata.

The blkvsc driver isn't an IDE driver, although it currently claims
the old IDE drivers' major numbers, which is a no-no and can't work
in most usual setups.  I'm pretty sure I already complained about
this in a previous review round.



RE: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-05-01 Thread KY Srinivasan


 -Original Message-
 From: Christoph Hellwig [mailto:h...@infradead.org]
 Sent: Sunday, May 01, 2011 11:41 AM
 To: Greg KH
 Cc: KY Srinivasan; Christoph Hellwig; gre...@suse.de; linux-
 ker...@vger.kernel.org; de...@linuxdriverproject.org;
 virtualizat...@lists.osdl.org
 Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
 
 On Fri, Apr 29, 2011 at 09:40:25AM -0700, Greg KH wrote:
  Are you sure the libata core can't see this ide controller and connect
  to it?  That way you would use the scsi system if you do that and you
  would need a much smaller ide driver, perhaps being able to merge it
  with your scsi driver.
 
  We really don't want to write new IDE drivers anymore that don't use
  libata.
 
 The blkvsc driver isn't an IDE driver, although it currently claims
 the old IDE drivers major numbers, which is a no-no and can't work
 in most usual setups.

What is the issue here? This is no different than what is done on other
virtualization platforms. For instance, the Xen blkfront driver is no
different - if you specify the block device to be presented to the guest
as an IDE device, it will register for the appropriate IDE major number.

Regards,

K. Y 

 I'm pretty sure I already complained about
 this in a previous review round.




Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-05-01 Thread Greg KH
On Sun, May 01, 2011 at 11:39:21AM -0400, Christoph Hellwig wrote:
 On Fri, Apr 29, 2011 at 04:32:35PM +, KY Srinivasan wrote:
  On the host-side, as part of configuring a guest  you can specify block 
  devices
  as being under an IDE controller or under a
  SCSI controller. Those are the only options you have. Devices configured 
  under
  the IDE controller cannot be seen in the guest under the emulated SCSI 
  front-end which is
  the scsi driver (storvsc_drv). So, when you do a bus scan in the emulated 
  scsi front-end,
  the devices enumerated will not include block devices configured under the 
  IDE 
  controller. So, it is not clear to me how I can do what you are proposing 
  given the 
  restrictions imposed by the host.
 
 Just because a device is not reported by REPORT_LUNS doesn't mean you
 can't talk to it using a SCSI LLDD.  We have SCSI transports with all
 kinds of strange ways to discover devices.  Using scsi_add_device you
 can add LUNs found by your own discovery methods, and use all the
 existing scsi command handling.

Yeah, it seems to me that no matter how the user specifies the disk
type for the guest configuration, we should use the same Linux driver,
with the same naming scheme for both ways.

As Christoph points out, it's just a matter of hooking the device up to
the scsi subsystem.  We do that today for ide, usb, scsi, and loads of
other types of devices all with the common goal of making it easier for
userspace to handle the devices in a standard manner.

thanks,

greg k-h


Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-05-01 Thread Christoph Hellwig
On Sun, May 01, 2011 at 03:46:23PM +, KY Srinivasan wrote:
 What is the issue here? This is no different than what is done in other
 Virtualization platforms. For instance, the Xen blkfront driver is no
 different - if you specify the block device to be presented to the guest
 as an ide device, it will register for the appropriate ide major number.

No, it won't - at least not in mainline, just because it's so buggy.
If distros keep that crap around I can only recommend that you not use
them.



RE: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-05-01 Thread KY Srinivasan


 -Original Message-
 From: Christoph Hellwig [mailto:h...@infradead.org]
 Sent: Sunday, May 01, 2011 12:07 PM
 To: KY Srinivasan
 Cc: Christoph Hellwig; Greg KH; gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
 
 On Sun, May 01, 2011 at 03:46:23PM +, KY Srinivasan wrote:
  What is the issue here? This is no different than what is done in other
  Virtualization platforms. For instance, the Xen blkfront driver is no
  different - if you specify the block device to be presented to the guest
  as an ide device, it will register for the appropriate ide major number.
 
 No, it won't - at least not in mainline, just because it's so buggy.
 If distros keep that crap around I can only recommend that you not use
 them.

Christoph,

Could you elaborate on the problems/issues when the block driver registers
for the IDE majors? On the Qemu side, we have a mechanism to disable the
emulation when PV drivers load. I don't think there is an equivalent
mechanism on the Windows side. So, as far as I know, registering for the
IDE majors is the only way to also prevent native drivers in Linux from
taking control of the emulated device.

Regards,

K. Y 



RE: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-05-01 Thread KY Srinivasan


 -Original Message-
 From: Greg KH [mailto:g...@kroah.com]
 Sent: Sunday, May 01, 2011 11:48 AM
 To: KY Srinivasan
 Cc: Christoph Hellwig; gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
 
 On Sun, May 01, 2011 at 11:39:21AM -0400, Christoph Hellwig wrote:
  On Fri, Apr 29, 2011 at 04:32:35PM +, KY Srinivasan wrote:
   On the host-side, as part of configuring a guest  you can specify block 
   devices
   as being under an IDE controller or under a
   SCSI controller. Those are the only options you have. Devices configured
 under
   the IDE controller cannot be seen in the guest under the emulated SCSI 
   front-
 end which is
   the scsi driver (storvsc_drv). So, when you do a bus scan in the emulated 
   scsi
 front-end,
   the devices enumerated will not include block devices configured under the
 IDE
   controller. So, it is not clear to me how I can do what you are proposing 
   given
 the
   restrictions imposed by the host.
 
  Just because a device is not reported by REPORT_LUNS doesn't mean you
  can't talk to it using a SCSI LLDD.  We have SCSI transports with all
  kinds of strange ways to discover devices.  Using scsi_add_device you
  can add LUNs found by your own discovery methods, and use all the
  existing scsi command handling.
 
 Yeah, it seems to me that no matter how the user specifies the disk
 type for the guest configuration, we should use the same Linux driver,
 with the same naming scheme for both ways.
 
 As Christoph points out, it's just a matter of hooking the device up to
 the scsi subsystem.  We do that today for ide, usb, scsi, and loads of
 other types of devices all with the common goal of making it easier for
 userspace to handle the devices in a standard manner.

This is not what is being done in Xen and KVM - they both have a PV front-end
block driver that is not managed by the SCSI stack. The Hyper-V block driver
is equivalent to what we have in Xen and KVM in this respect.

Regards,

K. Y 



Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-05-01 Thread Christoph Hellwig
On Sun, May 01, 2011 at 06:56:58PM +, KY Srinivasan wrote:
  Yeah, it seems to me that no matter how the user specifies the disk
  type for the guest configuration, we should use the same Linux driver,
  with the same naming scheme for both ways.
  
  As Christoph points out, it's just a matter of hooking the device up to
  the scsi subsystem.  We do that today for ide, usb, scsi, and loads of
  other types of devices all with the common goal of making it easier for
  userspace to handle the devices in a standard manner.
 
 This is not what is being done in Xen and KVM - they both have a PV front-end
 block driver that is not managed by the SCSI stack. The Hyper-V block driver
 is equivalent to what we have in Xen and KVM in this respect.

Xen also has a PV SCSI driver, although that isn't used very much.
For virtio we think it was a mistake to not speak SCSI these days,
and ponder introducing a virtio-scsi to replace virtio-blk.

But that's not the point here at all.  The point is that blkvsc
speaks a SCSI protocol over the wire, so it should be implemented
as a SCSI LLDD unless you have a good reason not to.  This
is especially important to get advanced features like block-level
cache flush and FUA support, device topology, and discard support for
free.  Cache flush and FUA are good examples of something that blkvsc
currently gets wrong, btw.
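For readers unfamiliar with the term, an LLDD plugs into the SCSI midlayer
through a scsi_host_template. A heavily trimmed sketch of the shape (the
hv_vsc_* names are placeholders, the callback convention is the circa-3.0
midlayer interface, and the real storvsc template has many more fields):

#include <linux/module.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

static int hv_vsc_queuecommand(struct Scsi_Host *shost,
			       struct scsi_cmnd *scmnd)
{
	/* Translate scmnd into a ring-buffer packet here; by living under
	 * the midlayer, flush/FUA, topology and discard plumbing come for
	 * free. This sketch just completes the command successfully. */
	scmnd->result = DID_OK << 16;
	scmnd->scsi_done(scmnd);
	return 0;
}

static struct scsi_host_template hv_vsc_template = {
	.module		= THIS_MODULE,
	.name		= "hv_vsc_sketch",
	.queuecommand	= hv_vsc_queuecommand,
	.this_id	= -1,
};

/* Typical bring-up from the bus probe path:
 *
 *	host = scsi_host_alloc(&hv_vsc_template, sizeof(private_data));
 *	scsi_add_host(host, parent_dev);
 *	scsi_scan_host(host);
 */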



Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-05-01 Thread Christoph Hellwig
On Sun, May 01, 2011 at 06:08:37PM +, KY Srinivasan wrote:
 Could you elaborate on the problems/issues when the block driver registers 
 for the 
 IDE majors. On the Qemu side, we have a mechanism to disable the emulation 
 when 
 PV drivers load. I don't think there is an equivalent mechanism on the 
 Windows side.
 So, as far as I know, registering for the IDE majors is the only way to also 
 prevent native
 drivers in Linux from taking control of the emulated device. 

What qemu are you talking about for the qemu side?  Upstream qemu
doesn't have any way to provide the same image as multiple devices,
nevermind dynamically unplugging bits in that case.  Nor does it support
the hyperv devices.

When you steal majors you rely on:

 a) loading earlier than the driver you steal them from
 b) the driver not simply using other numbers
 c) if it doesn't, preventing it from working at all, also for
devices you don't replace with your PV devices.
 d) that the guest actually uses the majors you claim, e.g. any
current linux distribution uses libata anyway, so your old IDE
major claim wouldn't do anything.  Nor would claiming sd majors,
as the low-level libata driver would still drive the hardware
even if sd doesn't bind to it.

You really must never present the same device as two emulated devices
instead of doing such hacks.


Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-04-29 Thread Greg KH
On Fri, Apr 29, 2011 at 02:26:13PM +, KY Srinivasan wrote:
 Perhaps I did not properly formulate my question here. The review
 process itself may be open-ended, and that is fine - we will fix all
 legitimate issues/concerns in our drivers whether they are in the staging
 area or not. My question was specifically with regard to the review process
 that may gate exiting staging. I am hoping to re-spin the remaining patches
 of the last patch-set and send them to you by early next week and ask for a
 review. I fully intend to address whatever review comments I may get in a
 very timely manner. Assuming at some point in time after I ask for this
 review there are no outstanding issues, would that be sufficient to exit
 staging?

If it looks acceptable to me, and there are no other objections from
other developers, then yes, that would be sufficient to move it out of
staging.

thanks,

greg k-h


RE: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-04-29 Thread KY Srinivasan


 -Original Message-
 From: Christoph Hellwig [mailto:h...@infradead.org]
 Sent: Wednesday, April 27, 2011 8:19 AM
 To: KY Srinivasan
 Cc: Christoph Hellwig; Greg KH; gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
 
 On Wed, Apr 27, 2011 at 11:47:03AM +, KY Srinivasan wrote:
  On the host side, Windows emulates the standard PC hardware
  to permit hosting of fully virtualized operating systems.
  To enhance disk I/O performance, we support a virtual block driver.
  This block driver currently handles disks that have been set up as IDE
  disks for the guest - as specified in the guest configuration.
 
  On the SCSI side, we emulate a SCSI HBA. Devices configured
  under the SCSI controller for the guest are handled via this
  emulated HBA (SCSI front-end). So, SCSI disks configured for
  the guest are handled through native SCSI upper-level drivers.
  If this SCSI front-end driver is not loaded, currently, the guest
  cannot see devices that have been configured as SCSI devices.
  So, while the virtual block driver described earlier could potentially
  handle all block devices, the implementation choices made on the host
  will not permit it. Also, the only SCSI device that can be currently 
  configured
  for the guest is a disk device.
 
  Both the block device driver (hv_blkvsc) and the SCSI front-end
  driver (hv_storvsc) communicate with the host via unique channels
  that are implemented as bi-directional ring buffers. Each (storage)
  channel carries with it enough state to uniquely identify the device on
  the host side. Microsoft has chosen to use SCSI verbs for this storage 
  channel
  communication.
 
 This doesn't really explain much at all.  The only important piece
 of information I can read from this statement is that both blkvsc
 and storvsc only support disks, but not any other kind of device,
 and that choosing either one is an arbitrary selection when setting up
 a VM configuration.
 
 But this still isn't an excuse to implement a block layer driver for
 a SCSI protocol, and it does not explain in what way the two
 protocols actually differ.  You really should implement blkvsc as a SCSI
 LLDD, too - and from the looks of it it doesn't even have to be a
 separate one, but just adding the ids to storvsc would do the work.

On the host side, as part of configuring a guest you can specify block devices
as being under an IDE controller or under a SCSI controller. Those are the
only options you have. Devices configured under the IDE controller cannot be
seen in the guest under the emulated SCSI front-end, which is the scsi driver
(storvsc_drv). So, when you do a bus scan in the emulated SCSI front-end,
the devices enumerated will not include block devices configured under the
IDE controller. So, it is not clear to me how I can do what you are proposing
given the restrictions imposed by the host.

Regards,

K. Y
 



Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-04-29 Thread Greg KH
On Fri, Apr 29, 2011 at 04:32:35PM +, KY Srinivasan wrote:
 
 
  -Original Message-
  From: Christoph Hellwig [mailto:h...@infradead.org]
  Sent: Wednesday, April 27, 2011 8:19 AM
  To: KY Srinivasan
  Cc: Christoph Hellwig; Greg KH; gre...@suse.de; 
  linux-ker...@vger.kernel.org;
  de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
  Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
  
  On Wed, Apr 27, 2011 at 11:47:03AM +, KY Srinivasan wrote:
   On the host side, Windows emulates the  standard PC hardware
   to permit hosting of fully virtualized operating systems.
   To enhance disk I/O performance, we support a virtual block driver.
   This block driver currently handles disks that have been setup as IDE
   disks for the guest - as specified in the guest configuration.
  
   On the SCSI side, we emulate a SCSI HBA. Devices configured
   under the SCSI controller for the guest are handled via this
   emulated HBA (SCSI front-end). So, SCSI disks configured for
   the guest are handled through native SCSI upper-level drivers.
   If this SCSI front-end driver is not loaded, currently, the guest
   cannot see devices that have been configured as SCSI devices.
   So, while the virtual block driver described earlier could potentially
   handle all block devices, the implementation choices made on the host
   will not permit it. Also, the only SCSI device that can be currently 
   configured
   for the guest is a disk device.
  
   Both the block device driver (hv_blkvsc) and the SCSI front-end
   driver (hv_storvsc) communicate with the host via unique channels
   that are implemented as bi-directional ring buffers. Each (storage)
   channel carries with it enough state to uniquely identify the device on
   the host side. Microsoft has chosen to use SCSI verbs for this storage 
   channel
   communication.
  
  This doesn't really explain much at all.  The only important piece
  of information I can read from this statement is that both blkvsc
  and storvsc only support disks, but not any other kind of device,
  and that choosing either one is an arbitrary selection when setting up
  a VM configuration.
  
  But this still isn't an excuse to implement a block layer driver for
  a SCSI protocol, and it does not explain in what way the two
  protocols actually differ.  You really should implement blkvsc as a SCSI
  LLDD, too - and from the looks of it it doesn't even have to be a
  separate one, but just adding the ids to storvsc would do the work.
 
  On the host-side, as part of configuring a guest you can specify block
  devices as being under an IDE controller or under a SCSI controller. Those
  are the only options you have. Devices configured under the IDE controller
  cannot be seen in the guest under the emulated SCSI front-end, which is
  the scsi driver (storvsc_drv).

Are you sure the libata core can't see this ide controller and connect
to it?  That way you would use the scsi system if you do that and you
would need a much smaller ide driver, perhaps being able to merge it
with your scsi driver.

We really don't want to write new IDE drivers anymore that don't use
libata.

thanks,

greg k-h


RE: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-04-29 Thread KY Srinivasan


 -Original Message-
 From: Greg KH [mailto:g...@kroah.com]
 Sent: Friday, April 29, 2011 12:40 PM
 To: KY Srinivasan
 Cc: Christoph Hellwig; gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
 
 On Fri, Apr 29, 2011 at 04:32:35PM +, KY Srinivasan wrote:
 
 
   -Original Message-
   From: Christoph Hellwig [mailto:h...@infradead.org]
   Sent: Wednesday, April 27, 2011 8:19 AM
   To: KY Srinivasan
   Cc: Christoph Hellwig; Greg KH; gre...@suse.de; linux-
 ker...@vger.kernel.org;
   de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
   Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
  
   On Wed, Apr 27, 2011 at 11:47:03AM +, KY Srinivasan wrote:
On the host side, Windows emulates the standard PC hardware
to permit hosting of fully virtualized operating systems.
To enhance disk I/O performance, we support a virtual block driver.
This block driver currently handles disks that have been set up as IDE
disks for the guest - as specified in the guest configuration.

On the SCSI side, we emulate a SCSI HBA. Devices configured
under the SCSI controller for the guest are handled via this
emulated HBA (SCSI front-end). So, SCSI disks configured for
the guest are handled through native SCSI upper-level drivers.
If this SCSI front-end driver is not loaded, currently, the guest
cannot see devices that have been configured as SCSI devices.
So, while the virtual block driver described earlier could potentially
handle all block devices, the implementation choices made on the host
will not permit it. Also, the only SCSI device that can currently be
configured for the guest is a disk device.

Both the block device driver (hv_blkvsc) and the SCSI front-end
driver (hv_storvsc) communicate with the host via unique channels
that are implemented as bi-directional ring buffers. Each (storage)
channel carries with it enough state to uniquely identify the device on
the host side. Microsoft has chosen to use SCSI verbs for this storage
channel communication.
  
   This doesn't really explain much at all.  The only important piece
   of information I can read from this statement is that both blkvsc
   and storvsc only support disks, but not any other kind of device,
   and that choosing either one is an arbitrary selection when setting up
   a VM configuration.
   
   But this still isn't an excuse to implement a block layer driver for
   a SCSI protocol, and it doesn't explain in what way the two
   protocols actually differ.  You really should implement blkvsc as a SCSI
   LLDD, too - and from the looks of it it doesn't even have to be a
   separate one, but just adding the ids to storvsc would do the work.
 
   On the host-side, as part of configuring a guest you can specify block
   devices as being under an IDE controller or under a SCSI controller. Those
   are the only options you have. Devices configured under the IDE controller
   cannot be seen in the guest under the emulated SCSI front-end, which is
   the scsi driver (storvsc_drv).
 
 Are you sure the libata core can't see this ide controller and connect
 to it?  That way you would use the scsi system if you do that and you
 would need a much smaller ide driver, perhaps being able to merge it
 with your scsi driver.

If we don't load the blkvsc driver, the emulated IDE controller exposed to
the guest can and will be seen by the libata core. In this case though, your
disk I/O will be taking the emulated path with the usual performance hit.

When you load the blkvsc driver, the device access does not go through the
emulated IDE controller. Blkvsc is truly a generic block driver that registers
as a block driver in the guest and talks to an appropriate device driver on the
host, communicating over the vmbus. In this respect, it is identical to block
drivers we have for guests in other virtualization platforms (Xen etc.). The
only difference is that on the host side, the only way you can assign a scsi
disk to the guest is to configure this scsi disk under the scsi controller. So,
while blkvsc is a generic block driver, because of the restrictions on the host
side, it only ends up managing block devices that have IDE majors.
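
(To make the shape of that concrete: below is a rough, hypothetical sketch of
how a 2.6-era block front-end registers a disk with the guest's block layer.
None of the names, the capacity, or the request handling are from the actual
blkvsc source; requests are simply failed where the real driver would forward
them over the vmbus channel as SCSI verbs.)

#include <linux/blkdev.h>
#include <linux/genhd.h>
#include <linux/module.h>
#include <linux/spinlock.h>

#define SKETCH_SECTORS	(1024 * 1024)	/* invented capacity, in 512-byte sectors */

static DEFINE_SPINLOCK(sketch_lock);
static struct gendisk *sketch_disk;
static int sketch_major;

/* In the real driver, each request would be translated into SCSI verbs and
 * sent to the host over the vmbus channel; here we just fail them. */
static void sketch_request_fn(struct request_queue *q)
{
	struct request *req;

	while ((req = blk_fetch_request(q)) != NULL)
		__blk_end_request_all(req, -EIO);
}

static const struct block_device_operations sketch_fops = {
	.owner = THIS_MODULE,
};

static int __init sketch_init(void)
{
	struct request_queue *q;

	sketch_major = register_blkdev(0, "blkvsc-sketch");
	if (sketch_major < 0)
		return sketch_major;

	q = blk_init_queue(sketch_request_fn, &sketch_lock);
	sketch_disk = alloc_disk(16);	/* room for 15 partitions */
	if (!q || !sketch_disk) {
		if (sketch_disk)
			put_disk(sketch_disk);
		if (q)
			blk_cleanup_queue(q);
		unregister_blkdev(sketch_major, "blkvsc-sketch");
		return -ENOMEM;
	}

	sketch_disk->major = sketch_major;
	sketch_disk->first_minor = 0;
	sketch_disk->fops = &sketch_fops;
	sketch_disk->queue = q;
	sprintf(sketch_disk->disk_name, "hvsketch");
	set_capacity(sketch_disk, SKETCH_SECTORS);
	add_disk(sketch_disk);	/* the disk appears to the guest like any other */
	return 0;
}
module_init(sketch_init);
MODULE_LICENSE("GPL");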

 
 We really don't want to write new IDE drivers anymore that don't use
 libata.

As I noted earlier, it is incorrect to view the Hyper-V blkvsc driver as an IDE
driver. There is nothing IDE specific about it. It is very much like other
block front-end drivers (like in Xen) that get their device information from
the host and register the block device accordingly with the guest. It just
happens that in the current version of the Windows host, only devices that are
configured as IDE devices in the host end up being managed by this driver. To
make this clear, in my recent

[RESEND] [PATCH 00/18] Staging: hv: Cleanup vmbus driver code

2011-04-29 Thread K. Y. Srinivasan
This is a resend of the patches yet to be applied.
This patch-set addresses some of the bus/driver model cleanup that
Greg suggested over the last couple of days.  In this patch-set we
deal with the following issues:


1) Cleanup error handling in the vmbus_probe() and 
   vmbus_child_device_register() functions. Fixed a 
   bug in the probe failure path as part of this cleanup.

2) The Windows host cannot handle the vmbus_driver being 
   unloaded and subsequently loaded. Cleanup the driver with
   this in mind.

3) Get rid of struct hv_bus that embedded struct bus_type to 
   conform with the LDM.

4) Add probe/remove/shutdown functions to struct hv_driver to
   conform to LDM.

5) On some older Hyper-V hosts, the Linux PCI sub-system is not able
   to allocate irq resources to the vmbus driver. I recently learnt
   that the vmbus driver is an acpi enumerated device on the Hyper-V
   platform. Added code to retrieve irq information from the DSDT.
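
(As a rough illustration of item 5: pulling an irq out of the DSDT for an
ACPI-enumerated device might look like the sketch below. The callback name
and the way the result is stored are assumptions, not the actual vmbus code.)

#include <linux/acpi.h>

static int vmbus_irq_sketch = -1;	/* invented storage for the result */

static acpi_status sketch_walk_resources(struct acpi_resource *res, void *ctx)
{
	/* Pick the first legacy IRQ descriptor out of the _CRS buffer. */
	if (res->type == ACPI_RESOURCE_TYPE_IRQ)
		vmbus_irq_sketch = res->data.irq.interrupts[0];
	return AE_OK;
}

static int sketch_acpi_add(struct acpi_device *device)
{
	acpi_status status;

	/* Walk the current resource settings (_CRS) of the ACPI-enumerated
	 * vmbus device; the host's DSDT describes the irq it assigned. */
	status = acpi_walk_resources(device->handle, METHOD_NAME__CRS,
				     sketch_walk_resources, NULL);

	if (ACPI_FAILURE(status) || vmbus_irq_sketch < 0)
		return -ENODEV;
	return 0;
}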



Regards,

K. Y


Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-04-27 Thread Greg KH
On Tue, Apr 26, 2011 at 09:19:45AM -0700, K. Y. Srinivasan wrote:
 This patch-set addresses some of the bus/driver model cleanup that
 Greg suggested over the last couple of days.  In this patch-set we
 deal with the following issues:
 
   1) Cleanup unnecessary state in struct hv_device and 
  struct hv_driver to be compliant with the Linux
  Driver model.
 
   2) Cleanup the vmbus_match() function to conform with the 
  Linux Driver model.
 
   3) Cleanup error handling in the vmbus_probe() and 
  vmbus_child_device_register() functions. Fixed a 
  bug in the probe failure path as part of this cleanup.
 
   4) The Windows host cannot handle the vmbus_driver being 
  unloaded and subsequently loaded. Cleanup the driver with
  this in mind.

I've stopped at this patch (well, I applied one more, but you can see
that.)

I'd like to get some confirmation that this is really what you all want
to do here before applying it.  If it is, care to resend them with a bit
more information about this issue and why you all are making it?

Anyway, other than this one, the series looks good.  But you should
follow-up with some driver structure changes like what Christoph said to
do.  After that, do you want another round of review of the code, or do
you have more things you want to send in (like the name[64] removal?)

thanks,

greg k-h


RE: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-04-27 Thread KY Srinivasan


 -Original Message-
 From: Greg KH [mailto:g...@kroah.com]
 Sent: Tuesday, April 26, 2011 7:29 PM
 To: KY Srinivasan
 Cc: gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
 
 On Tue, Apr 26, 2011 at 09:19:45AM -0700, K. Y. Srinivasan wrote:
  This patch-set addresses some of the bus/driver model cleanup that
  Greg suggested over the last couple of days.  In this patch-set we
  deal with the following issues:
 
  1) Cleanup unnecessary state in struct hv_device and
 struct hv_driver to be compliant with the Linux
 Driver model.
 
  2) Cleanup the vmbus_match() function to conform with the
 Linux Driver model.
 
  3) Cleanup error handling in the vmbus_probe() and
 vmbus_child_device_register() functions. Fixed a
 bug in the probe failure path as part of this cleanup.
 
  4) The Windows host cannot handle the vmbus_driver being
 unloaded and subsequently loaded. Cleanup the driver with
 this in mind.
 
 I've stopped at this patch (well, I applied one more, but you can see
 that.)
 
 I'd like to get some confirmation that this is really what you all want
 to do here before applying it.  If it is, care to resend them with a bit
 more information about this issue and why you all are making it?

Greg, this is a restriction imposed by the Windows host: you cannot reload the
vmbus driver without rebooting the guest. If you cannot re-load, what good is
it to be able to unload? Distros that integrate these drivers will load these
drivers automatically on boot, and there is not much point in being able to
unload them since most likely the root device will be handled by these drivers.
For systems that don't integrate these drivers, I don't see much point in
allowing the driver to be unloaded if you cannot reload it without rebooting
the guest. If and when the Windows host supports reloading the vmbus driver,
we can very easily add this functionality. The situation currently is at best
very misleading - you think you can unload the vmbus driver, only to discover
that you have to reboot the guest!

 
 Anyway, other than this one, the series looks good.  But you should
 follow-up with some driver structure changes like what Christoph said to
 do. 

I will send you a patch for this.

 After that, do you want another round of review of the code, or do
 you have more things you want to send in (like the name[64] removal?)

I would prefer that we go through the review process. What is the process for
this review? Is there a time window for people to respond? I am hoping I will
be able to address all the review comments well in advance of the next closing
of the tree, with the hope of taking the vmbus driver out of staging this go
around (hope springs eternal in the human breast ...)!

Regards,

K. Y
 
 thanks,
 
 greg k-h



Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-04-27 Thread Christoph Hellwig
On Wed, Apr 27, 2011 at 01:54:02AM +, KY Srinivasan wrote:
 I would prefer that we go through the review process. What is the process for
 this review? Is there a time window for people to respond? I am hoping I will
 be able to address all the review comments well in advance of the next closing
 of the tree, with the hope of taking the vmbus driver out of staging this go
 around (hope springs eternal in the human breast ...)!

It would be useful if you'd send one driver at a time to the list as the
full source to review.

Did we make any progress on the naming discussion?  In my opinion hv is
a far too generic name for your drivers.  Why not call it mshv for the
driver directory and prefixes?

As far as the core code is concerned, can you explain the use of the
dev_add, dev_rm and cleanup methods and how they relate to the
normal probe/remove/shutdown methods?

As far as the storage drivers are concerned I still have issues with the
architecture.  I haven't seen any good explanation why you want to have
the blkvsc and storvsc drivers different from each other.  They both
speak the same vmbus-level protocol and tunnel scsi commands over it.
Why would you sometimes expose this SCSI protocol as a SCSI LLDD and
sometimes as a block driver?  What decides whether a device is exported
in a way that blkvsc is bound to it vs storvsc?  What do they look
like on the Windows side?  From my understanding of the Windows driver
models, both the recent storport model and the older scsiport model are
more or less talking scsi to the driver anyway, so what is the
difference between the two for a Windows guest?

Also please get rid of struct storvsc_driver_object; it's just a very
strange way to store file-scope variables, and useless indirection
for the I/O submission handler.



RE: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-04-27 Thread KY Srinivasan


 -Original Message-
 From: Christoph Hellwig [mailto:h...@infradead.org]
 Sent: Wednesday, April 27, 2011 2:46 AM
 To: KY Srinivasan
 Cc: Greg KH; gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
 
 On Wed, Apr 27, 2011 at 01:54:02AM +, KY Srinivasan wrote:
  I would prefer that we go through the review process. What is the process for
  this review? Is there a time window for people to respond? I am hoping I will
  be able to address all the review comments well in advance of the next
  closing of the tree, with the hope of taking the vmbus driver out of staging
  this go around (hope springs eternal in the human breast ...)!
 
 It would be useful if you'd send one driver at a time to the list as the
 full source to review.

 
  Did we make any progress on the naming discussion?  In my opinion hv is
  a far too generic name for your drivers.  Why not call it mshv for the
  driver directory and prefixes?

This topic was discussed at some great length back in Feb/March when I
did a bunch of cleanup with regard to how the driver and device data
structures were layered. At that point, the consensus was to keep the hv
prefix.

 
 As far as the core code is concerned, can you explain the use of the
 dev_add, dev_rm and cleanup methods and how they relate to the
 normal probe/remove/shutdown methods?

While I am currently cleaning up our block drivers, my goal this go around is
to work on getting the vmbus driver out of staging. I am hoping that when I am
ready for having you guys review the storage drivers, I will have dealt with
the issues you raise here.

 
  As far as the storage drivers are concerned I still have issues with the
  architecture.  I haven't seen any good explanation why you want to have
  the blkvsc and storvsc drivers different from each other.  They both
  speak the same vmbus-level protocol and tunnel scsi commands over it.
  Why would you sometimes expose this SCSI protocol as a SCSI LLDD and
  sometimes as a block driver?  What decides whether a device is exported
  in a way that blkvsc is bound to it vs storvsc?  What do they look
  like on the Windows side?  From my understanding of the Windows driver
  models, both the recent storport model and the older scsiport model are
  more or less talking scsi to the driver anyway, so what is the
  difference between the two for a Windows guest?

I had written up a brief note that I had sent out setting the stage for the
first patch-set for cleaning up the block drivers. I am copying it here for
your convenience:

From: K. Y. Srinivasan k...@microsoft.com
Date: Tue, 22 Mar 2011 11:54:46 -0700
Subject: [PATCH 00/16] Staging: hv: Cleanup storage drivers - Phase I

This is the first in a series of patch-sets aimed at cleaning up the storage
drivers for Hyper-V. Before I get into the details of this patch-set, I think
it is useful to give a brief overview of the storage related front-end
drivers currently in the tree for Linux on Hyper-V:

On the host side, Windows emulates the standard PC hardware
to permit hosting of fully virtualized operating systems.
To enhance disk I/O performance, we support a virtual block driver.
This block driver currently handles disks that have been set up as IDE
disks for the guest - as specified in the guest configuration.

On the SCSI side, we emulate a SCSI HBA. Devices configured
under the SCSI controller for the guest are handled via this
emulated HBA (SCSI front-end). So, SCSI disks configured for
the guest are handled through native SCSI upper-level drivers.
If this SCSI front-end driver is not loaded, currently, the guest
cannot see devices that have been configured as SCSI devices.
So, while the virtual block driver described earlier could potentially
handle all block devices, the implementation choices made on the host
will not permit it. Also, the only SCSI device that can currently be
configured for the guest is a disk device.

Both the block device driver (hv_blkvsc) and the SCSI front-end
driver (hv_storvsc) communicate with the host via unique channels
that are implemented as bi-directional ring buffers. Each (storage)
channel carries with it enough state to uniquely identify the device on
the host side. Microsoft has chosen to use SCSI verbs for this storage
channel communication.
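
(As a rough illustration of the ring-buffer idea just described: a minimal
single-producer/single-consumer ring with free-running indices might look like
the sketch below. The structure layout and names are invented for illustration,
not taken from the vmbus sources.)

#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/types.h>

struct ring_sketch {
	u32 write_index;	/* free-running; advanced only by the producer */
	u32 read_index;		/* free-running; advanced only by the consumer */
	u32 size;		/* power-of-two size of the data area */
	u8 data[];		/* shared data area, visible to both sides */
};

static u32 ring_sketch_free(const struct ring_sketch *rb)
{
	return rb->size - (rb->write_index - rb->read_index);
}

static int ring_sketch_write(struct ring_sketch *rb, const void *buf, u32 len)
{
	u32 off, first;

	if (ring_sketch_free(rb) < len)
		return -EAGAIN;		/* the other side has not drained enough */

	off = rb->write_index & (rb->size - 1);
	first = min(len, rb->size - off);
	memcpy(rb->data + off, buf, first);			/* up to the wrap */
	memcpy(rb->data, (const u8 *)buf + first, len - first);	/* wrapped tail */

	/* A real implementation needs a memory barrier here, so the other
	 * side never observes the new index before the data is visible. */
	rb->write_index += len;
	return 0;
}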
 
 
  Also please get rid of struct storvsc_driver_object; it's just a very
  strange way to store file-scope variables, and useless indirection
  for the I/O submission handler.
 

I will do this as part of storage cleanup I am currently doing. Thank you
for taking the time to review the code.

Regards,

K. Y


Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-04-27 Thread Christoph Hellwig
On Wed, Apr 27, 2011 at 11:47:03AM +, KY Srinivasan wrote:
 On the host side, Windows emulates the standard PC hardware
 to permit hosting of fully virtualized operating systems.
 To enhance disk I/O performance, we support a virtual block driver.
 This block driver currently handles disks that have been set up as IDE
 disks for the guest - as specified in the guest configuration.
 
 On the SCSI side, we emulate a SCSI HBA. Devices configured
 under the SCSI controller for the guest are handled via this
 emulated HBA (SCSI front-end). So, SCSI disks configured for
 the guest are handled through native SCSI upper-level drivers.
 If this SCSI front-end driver is not loaded, currently, the guest
 cannot see devices that have been configured as SCSI devices.
 So, while the virtual block driver described earlier could potentially
 handle all block devices, the implementation choices made on the host
 will not permit it. Also, the only SCSI device that can currently be
 configured for the guest is a disk device.
 
 Both the block device driver (hv_blkvsc) and the SCSI front-end
 driver (hv_storvsc) communicate with the host via unique channels
 that are implemented as bi-directional ring buffers. Each (storage)
 channel carries with it enough state to uniquely identify the device on
 the host side. Microsoft has chosen to use SCSI verbs for this storage
 channel communication.

This doesn't really explain much at all.  The only important piece
of information I can read from this statement is that both blkvsc
and storvsc only support disks, but not any other kind of device,
and that choosing either one is an arbitrary selection when setting up
a VM configuration.

But this still isn't an excuse to implement a block layer driver for
a SCSI protocol, and it doesn't explain in what way the two
protocols actually differ.  You really should implement blkvsc as a SCSI
LLDD, too - and from the looks of it it doesn't even have to be a
separate one, but just adding the ids to storvsc would do the work.


Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-04-27 Thread Greg KH
On Wed, Apr 27, 2011 at 01:54:02AM +, KY Srinivasan wrote:
  After that, do you want another round of review of the code, or do
  you have more things you want to send in (like the name[64] removal?)
 
 I would prefer that we go through the  review process. What is the process for
 this review?

The same as always, just ask.

 Is there a time window for people to respond?

No.  We don't have time limits here, this is a community, we don't have
deadlines, you know that.

 I am hoping I will be able to address all the review comments well in
 advance of the  next closing of the tree, with the hope of taking the
 vmbus driver out of staging this go around (hope springs eternal in
 the human breast ...)! 

Yes, it would be nice, and I understand the corporate pressures you
are under to get this done, and I am doing my best to fit the patch
review and apply cycle into my very-very-limited-at-the-moment spare
time.

As always, if you miss this kernel release, there's always another one 3
months away, so it's no big deal in the long-run.

thanks,

greg k-h


[PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-04-26 Thread K. Y. Srinivasan
This patch-set addresses some of the bus/driver model cleanup that
Greg suggested over the last couple of days.  In this patch-set we
deal with the following issues:

1) Cleanup unnecessary state in struct hv_device and 
   struct hv_driver to be compliant with the Linux
   Driver model.

2) Cleanup the vmbus_match() function to conform with the 
   Linux Driver model.

3) Cleanup error handling in the vmbus_probe() and 
   vmbus_child_device_register() functions. Fixed a 
   bug in the probe failure path as part of this cleanup.

4) The Windows host cannot handle the vmbus_driver being 
   unloaded and subsequently loaded. Cleanup the driver with
   this in mind.

5) Get rid of struct hv_bus that embedded struct bus_type to 
   conform with the LDM.

6) Add probe/remove/shutdown functions to struct hv_driver to
   conform to LDM.

7) On some older Hyper-V hosts, the Linux PCI sub-system is not able
   to allocate irq resources to the vmbus driver. I recently learnt
   that the vmbus driver is an acpi enumerated device on the Hyper-V
   platform. Added code to retrieve irq information from the DSDT.



Regards,

K. Y


Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-04-26 Thread Christoph Hellwig
Do you have a repository containing the current state of your patches
somewhere?  There's been so much cleanup that it's hard to review these
patches against the current mainline codebase.


RE: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-04-26 Thread KY Srinivasan


 -Original Message-
 From: Christoph Hellwig [mailto:h...@infradead.org]
 Sent: Tuesday, April 26, 2011 12:57 PM
 To: KY Srinivasan
 Cc: gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
 
  Do you have a repository containing the current state of your patches
  somewhere?  There's been so much cleanup that it's hard to review these
  patches against the current mainline codebase.

Christoph,

Yesterday (April 25, 2011), Greg checked in all of the outstanding hv patches.
So, if you check out Greg's tree today, you will get the most recent hv
codebase. This current patch-set is against Greg's current tree.

Regards,

K. Y


Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

2011-04-26 Thread Greg KH
On Tue, Apr 26, 2011 at 05:04:36PM +, KY Srinivasan wrote:
 
 
  -Original Message-
  From: Christoph Hellwig [mailto:h...@infradead.org]
  Sent: Tuesday, April 26, 2011 12:57 PM
  To: KY Srinivasan
  Cc: gre...@suse.de; linux-ker...@vger.kernel.org;
  de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
  Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
  
   Do you have a repository containing the current state of your patches
   somewhere?  There's been so much cleanup that it's hard to review these
   patches against the current mainline codebase.
 
 Christoph,
 
  Yesterday (April 25, 2011), Greg checked in all of the outstanding hv
  patches. So, if you check out Greg's tree today, you will get the most
  recent hv codebase. This current patch-set is against Greg's current tree.

It's also always in the linux-next tree, which is easier for most people
to work off of.

thanks,

greg k-h


RE: Hyper-V vmbus driver

2011-04-24 Thread KY Srinivasan


 -Original Message-
 From: Greg KH [mailto:g...@kroah.com]
 Sent: Saturday, April 23, 2011 11:21 AM
 To: Greg KH
 Cc: KY Srinivasan; de...@linuxdriverproject.org; linux-ker...@vger.kernel.org;
 virtualizat...@lists.osdl.org
 Subject: Re: Hyper-V vmbus driver
 
 On Mon, Apr 11, 2011 at 12:07:08PM -0700, Greg KH wrote:
 
 Due to other external issues, I still have not gotten through my patch
 backlog yet, sorry.  Sometimes real life intrudes on the best of
 plans.
 
 I'll get to this when I get through the rest of your hv patches, and the
 other patches pending that I have in my queues.

Thanks Greg. The latest re-send of my hv patches are against the tree:
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging-2.6.git
that I picked up on April 22, 2011.  I hope there won't be any issues
this time around.

 
 But, I would recommend you going through and looking at the code and
 verifying that you feel the bus code is ready.  At a very quick
 glance, you should not have individual drivers have to set their 'struct
 device' pointers directly, that is something that the bus does, not the
 driver.  The driver core will call your bus and your bus will then do
 the matching and call the probe function of the driver if needed.

Are you referring to the fact that in the vmbus_match function,
the current code binds the device specific driver to the
corresponding hv_device structure?
 
 
 See the PCI driver structure for an example of this if you are curious.
 It should also allow you to get rid of that unneeded *priv pointer in
 the struct hv_driver.

I am pretty sure I can get rid of this. The way this code was originally
structured, in the vmbus_match() function, you needed to get at the
device specific driver pointer so that we could do the binding between
the hv_device and the corresponding device specific driver. The earlier code
depended on the structure layout to map a pointer to the hv_driver to
the corresponding device specific driver (net, block etc.). To get rid of
this layout dependency, I introduced an additional field (priv) in the
hv_driver.

There is, I suspect, sufficient state available to:

(a) Not require the vmbus_match() function to do the binding.
(b) And to get at the device specific driver structure from the generic
   driver structure without having to have an explicit mapping
   maintained in the hv_driver structure.

Before I go ahead and make these changes, Greg, can you confirm
that I have captured your concerns correctly?

  You should be able to set that structure
 constant, like all other busses.  Right now you can not which shows a
 design issue.

I am a little confused here. While I agree with you that perhaps we could
get rid of the priv element in the hv_driver structure, what else would you
want done here?

 
 So, take a look at that and let me know what you think.

Once I hear from you, I will work on getting rid of the
priv pointer from the hv_driver structure as well as the code that
currently does the binding in vmbus_match.

Regards,

K. Y



Re: Hyper-V vmbus driver

2011-04-24 Thread Greg KH
On Sun, Apr 24, 2011 at 04:18:24PM +, KY Srinivasan wrote:
  On Mon, Apr 11, 2011 at 12:07:08PM -0700, Greg KH wrote:
  
   Due to other external issues, I still have not gotten through my patch
   backlog yet, sorry.  Sometimes real life intrudes on the best of
   plans.
  
  I'll get to this when I get through the rest of your hv patches, and the
  other patches pending that I have in my queues.
 
 Thanks Greg. The latest re-send of my hv patches are against the tree:
 git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging-2.6.git
 that I picked up on April 22, 2011.  I hope there won't be any issues
 this time around.

Me too :)

  But, I would recommend you going through and looking at the code and
  verifying that you feel the bus code is ready.  At a very quick
  glance, you should not have individual drivers have to set their 'struct
  device' pointers directly, that is something that the bus does, not the
  driver.  The driver core will call your bus and your bus will then do
  the matching and call the probe function of the driver if needed.
 
 Are you referring to the fact that in the vmbus_match function,
 the current code binds the device specific driver to the
 corresponding hv_device structure?

Yes, that's the problem (well, kind of the problem.)

You seem to be doing things a bit oddly and that's due to the old way
the code was written.

First off, don't embed a struct bus_type in another structure, that's
not needed at all.  Why is that done?  Anyway...

In your vmbus_match function, you should be matching to see if your
device matches the driver that is passed to you.  You do this by looking
at some type of id.  For the vmbus you should do this by looking at
the GUID, right?  And it looks like you do do this, so that's fine.

And then your vmbus_probe() function calls the driver probe function,
with the device it is to bind to.  BUT, you need to have your probe
function pass in the correct device type (i.e. struct hv_device, NOT
struct device.)

That way, your hv_driver will have a type all its own, with probe
functions that look nothing like the probe functions that 'struct
driver' has in it.  Look at 'struct pci_driver' for an example of this.
Don't try to overload the probe/remove/suspend/etc functions of your
hv_driver by using the base 'struct device_driver' callbacks, that's
putting knowledge of the driver core into the individual hv drivers,
where it's not needed at all.

And, by doing that, you should be able to drop your private pointer in
the hv_driver function completely, right?  That shouldn't be needed at
all.
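
(A minimal sketch of the pattern described above, modeled on pci_driver: the
bus matches on a GUID, and a bus-level probe shim hands the driver its own
device type. The GUID field, helper macros, and structure layouts below are
assumptions for illustration; only the hv_device/hv_driver names and the
match-on-GUID, typed-probe idea come from the thread itself.)

#include <linux/device.h>
#include <linux/string.h>

struct hv_device {
	unsigned char type_guid[16];	/* identifies the class of the channel */
	struct device device;
};

struct hv_driver {
	const unsigned char *id_guid;		/* GUID this driver binds to */
	int (*probe)(struct hv_device *dev);	/* typed probe, as with pci_driver */
	struct device_driver driver;
};

#define to_hv_device(d)	container_of(d, struct hv_device, device)
#define to_hv_driver(d)	container_of(d, struct hv_driver, driver)

/* Match purely on the GUID; no binding state is stored at match time. */
static int vmbus_match_sketch(struct device *dev, struct device_driver *drv)
{
	return !memcmp(to_hv_device(dev)->type_guid,
		       to_hv_driver(drv)->id_guid, 16);
}

/* Bus-level shim: the driver core hands us a struct device, and we hand
 * the driver its own device type. */
static int vmbus_probe_sketch(struct device *dev)
{
	return to_hv_driver(dev->driver)->probe(to_hv_device(dev));
}

static struct bus_type hv_bus_sketch = {
	.name  = "vmbus",
	.match = vmbus_match_sketch,
	.probe = vmbus_probe_sketch,
};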

  See the PCI driver structure for an example of this if you are curious.
  It should also allow you to get rid of that unneeded *priv pointer in
  the struct hv_driver.
 
 I am pretty sure I can get rid of this. The way this code was originally
 structured, in the vmbus_match() function, you needed to get at the
 device specific driver pointer so that we could do the binding between
 the hv_device and the corresponding device specific driver. The earlier code
 depended on the structure layout to map a pointer to the hv_driver to
 the corresponding device specific driver (net, block etc.). To get rid of
 this layout dependency, I introduced an additional field (priv) in the
 hv_driver.
 
 There is, I suspect, sufficient state available to:
 
 (a) Not require the vmbus_match() function to do the binding.

No, you still want that, see above.

 (b) And to get at the device specific driver structure from the generic
driver structure without having to have an explicit mapping
maintained in the hv_driver structure.

Kind of, see above for more details.

If you want a good example, again, look at the PCI core code, it's
pretty simple in this area (hint, don't look at the USB code, it does
much more complex things than you want, due to things that the USB bus
imposes on devices, that's never a good example to look at.)

Hope this helps.  Please let me know if it doesn't :)

thanks,

greg k-h


RE: Hyper-V vmbus driver

2011-04-24 Thread KY Srinivasan


 -Original Message-
 From: Greg KH [mailto:gre...@suse.de]
 Sent: Sunday, April 24, 2011 8:14 PM
 To: KY Srinivasan
 Cc: Greg KH; de...@linuxdriverproject.org; linux-ker...@vger.kernel.org;
 virtualizat...@lists.osdl.org
 Subject: Re: Hyper-V vmbus driver
 
 On Sun, Apr 24, 2011 at 04:18:24PM +, KY Srinivasan wrote:
   On Mon, Apr 11, 2011 at 12:07:08PM -0700, Greg KH wrote:
  
    Due to other external issues, I still have not gotten through my patch
    backlog yet, sorry.  Sometimes real life intrudes on the best of
    plans.
  
   I'll get to this when I get through the rest of your hv patches, and the
   other patches pending that I have in my queues.
 
  Thanks Greg. The latest re-send of my hv patches are against the tree:
  git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging-2.6.git
  that I picked up on April 22, 2011.  I hope there won't be any issues
  this time around.
 
 Me too :)

Just curious; when are you planning to drain the hv patch queue next?

 
   But, I would recommend you going through and looking at the code and
   verifying that you feel the bus code is ready.  At a very quick
   glance, you should not have individual drivers have to set their 'struct
   device' pointers directly, that is something that the bus does, not the
   driver.  The driver core will call your bus and your bus will then do
   the matching and call the probe function of the driver if needed.
 
  Are you referring to the fact that in the vmbus_match function,
  the current code binds the device specific driver to the
  corresponding hv_device structure?
 
 Yes, that's the problem (well, kind of the problem.)
 
  You seem to be doing things a bit oddly and that's due to the old way
  the code was written.
 
 First off, don't embed a struct bus_type in another structure, that's
 not needed at all.  Why is that done?  Anyway...

Currently, struct bus_type is embedded in struct hv_bus that has very minimal
additional state. I will clean this up.

 
 In your vmbus_match function, you should be matching to see if your
 device matches the driver that is passed to you.  You do this by looking
 at some type of id.  For the vmbus you should do this by looking at
 the GUID, right?  And it looks like you do do this, so that's fine.
 
 And then your vmbus_probe() function calls the driver probe function,
 with the device it is to bind to.  BUT, you need to have your probe
 function pass in the correct device type (i.e. struct hv_device, NOT
 struct device.)

I will clean this up.

 
 That way, your hv_driver will have a type all its own, with probe
 functions that look nothing like the probe functions that 'struct
 driver' has in it.  Look at 'struct pci_driver' for an example of this.
 Don't try to overload the probe/remove/suspend/etc functions of your
 hv_driver by using the base 'struct device_driver' callbacks, that's
 putting knowledge of the driver core into the individual hv drivers,
 where it's not needed at all.
 
  And, by doing that, you should be able to drop your private pointer in
  the hv_driver function completely, right?  That shouldn't be needed at
  all.

After sending you the mail this afternoon, I worked on patches that do exactly
that. I did this with the current model where probe/remove etc. get a pointer
to struct device. Within a specific driver you can always map a struct device
pointer to the class specific device driver. I will keep that code; I will
however do what you are suggesting here and make probe/remove etc. take a
pointer to struct hv_device.
 
 
   See the PCI driver structure for an example of this if you are curious.
   It should also allow you to get rid of that unneeded *priv pointer in
   the struct hv_driver.
 
   I am pretty sure I can get rid of this. The way this code was originally
   structured, in the vmbus_match() function, you needed to get at the
   device specific driver pointer so that we could do the binding between
   the hv_device and the corresponding device specific driver. The earlier code
   depended on the structure layout to map a pointer to the hv_driver to
   the corresponding device specific driver (net, block etc.). To get rid of
   this layout dependency, I introduced an additional field (priv) in the
   hv_driver.
 
   There is, I suspect, sufficient state available to:
 
  (a) Not require the vmbus_match() function to do the binding.
 
 No, you still want that, see above.

The current code has the following
assignment after a match is found:

device_ctx->drv = drv->priv;

What I meant was that I would get rid of this assignment (binding)
since I can get that information quite easily in the class specific
(net, block, etc.) drivers where it is needed.
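
(For illustration: the class-specific driver can be recovered with the usual
container_of() idiom instead of a stored priv pointer. netvsc_driver and its
ring_size field below are hypothetical, and to_hv_driver is the helper from
the earlier bus sketch.)

struct netvsc_driver {
	int ring_size;			/* example of class-specific state */
	struct hv_driver base;		/* embedded generic hv driver */
};

static int netvsc_probe_sketch(struct hv_device *dev)
{
	/* Walk back out through the embedding structures instead of
	 * consulting a stored priv pointer. */
	struct hv_driver *hv_drv = to_hv_driver(dev->device.driver);
	struct netvsc_driver *net_drv =
		container_of(hv_drv, struct netvsc_driver, base);

	return net_drv->ring_size > 0 ? 0 : -EINVAL;
}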
 
 
  (b) And to get at the device specific driver structure from the generic
 driver structure without having to have an explicit mapping
 maintained in the   hv_driver structure.
 
 Kind of, see above for more details.
 
 If you want a good example, again, look at the PCI core code, it's

Re: Hyper-V vmbus driver

2011-04-24 Thread Greg KH
On Mon, Apr 25, 2011 at 02:15:47AM +, KY Srinivasan wrote:
  On Sun, Apr 24, 2011 at 04:18:24PM +, KY Srinivasan wrote:
On Mon, Apr 11, 2011 at 12:07:08PM -0700, Greg KH wrote:
   
 Due to other external issues, I still have not gotten through my patch
 backlog yet, sorry.  Sometimes real life intrudes on the best of
 plans.
   
I'll get to this when I get through the rest of your hv patches, and the
other patches pending that I have in my queues.
  
   Thanks Greg. The latest re-send of my hv patches are against the tree:
   git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging-2.6.git
   that I picked up on April 22, 2011.  I hope there won't be any issues
   this time around.
  
  Me too :)
 
  Just curious; when are you planning to drain the hv patch queue next?

When I get a chance to get to it :)

But, I would recommend you going through and looking at the code and
verifying that you feel the bus code is ready.  At a very quick
glance, you should not have individual drivers have to set their 'struct
device' pointers directly, that is something that the bus does, not the
driver.  The driver core will call your bus and your bus will then do
the matching and call the probe function of the driver if needed.
  
   Are you referring to the fact that in the vmbus_match function,
   the current code binds the device specific driver to the
   corresponding hv_device structure?
  
  Yes, that's the problem (well, kind of the problem.)
  
   You seem to be doing things a bit oddly and that's due to the old way
   the code was written.
  
  First off, don't embed a struct bus_type in another structure, that's
  not needed at all.  Why is that done?  Anyway...
 
 Currently, struct bus_type is embedded in struct hv_bus that has very minimal
 additional state. I will clean this up.

Thanks.

  In your vmbus_match function, you should be matching to see if your
  device matches the driver that is passed to you.  You do this by looking
  at some type of id.  For the vmbus you should do this by looking at
  the GUID, right?  And it looks like you do do this, so that's fine.
  
  And then your vmbus_probe() function calls the driver probe function,
  with the device it is to bind to.  BUT, you need to have your probe
  function pass in the correct device type (i.e. struct hv_device, NOT
  struct device.)
 
 I will clean this up.

Thanks.

  That way, your hv_driver will have a type all its own, with probe
  functions that look nothing like the probe functions that 'struct
  driver' has in it.  Look at 'struct pci_driver' for an example of this.
  Don't try to overload the probe/remove/suspend/etc functions of your
  hv_driver by using the base 'struct device_driver' callbacks, that's
  putting knowledge of the driver core into the individual hv drivers,
  where it's not needed at all.
  
   And, by doing that, you should be able to drop your private pointer in
   the hv_driver function completely, right?  That shouldn't be needed at
   all.
 
  After sending you the mail this afternoon, I worked on patches that do
  exactly that. I did this with the current model where probe/remove etc. get
  a pointer to struct device. Within a specific driver you can always map a
  struct device pointer to the class specific device driver. I will keep that
  code; I will however do what you are suggesting here and make probe/remove
  etc. take a pointer to struct hv_device.

Great.

See the PCI driver structure for an example of this if you are curious.
It should also allow you to get rid of that unneeded *priv pointer in
the struct hv_driver.
  
    I am pretty sure I can get rid of this. The way this code was originally
    structured, in the vmbus_match() function, you needed to get at the
    device specific driver pointer so that we could do the binding between
    the hv_device and the corresponding device specific driver. The earlier
    code depended on the structure layout to map a pointer to the hv_driver to
    the corresponding device specific driver (net, block etc.). To get rid of
    this layout dependency, I introduced an additional field (priv) in the
    hv_driver.
  
    There is, I suspect, sufficient state available to:
  
   (a) Not require the vmbus_match() function to do the binding.
  
  No, you still want that, see above.
 
  The current code has the following
  assignment after a match is found:
  
    device_ctx->drv = drv->priv;
  
  What I meant was that I would get rid of this assignment (binding)
  since I can get that information quite easily in the class specific
  (net, block, etc.) drivers where it is needed.

Yes, that is good as it is not needed.

It's also a flaw in that you would not allow multiple devices attached
to the same driver, but as you can't run this bus that way, it was never
noticed.

   (b) And to get at the device specific driver structure from the generic
  driver structure without having to have an explicit mapping
  

Re: Hyper-V vmbus driver

2011-04-23 Thread Greg KH
On Mon, Apr 11, 2011 at 12:07:08PM -0700, Greg KH wrote:
  With that patch-set, I think I have addressed all architectural issues that
  I am aware of.
  
  I was wondering if you would have the time to let me know what else would
  have to be addressed in the vmbus driver, before it could be considered
  ready for exiting staging. As always your help is greatly appreciated.
 
 Anyway, yes, I discussed this with Hank last week at the LF Collab
 summit.  I'll look at the vmbus code later this week when I catch up on
 all of my other work (stable, usb, tty, staging, etc.) that has piled up
 during my 2 week absence, and get back to you with what I feel is still
 needed to be done, if anything.

Due to other external issues, I still have not gotten through my patch
backlog yet, sorry.  Sometimes real life intrudes on the best of
plans.

I'll get to this when I get through the rest of your hv patches, and the
other patches pending that I have in my queues.

But, I would recommend you going through and looking at the code and
verifying that you feel the bus code is ready.  At a very quick
glance, you should not have individual drivers have to set their 'struct
device' pointers directly, that is something that the bus does, not the
driver.  The driver core will call your bus and your bus will then do
the matching and call the probe function of the driver if needed.

See the PCI driver structure for an example of this if you are curious.
It should also allow you to get rid of that unneeded *priv pointer in
the struct hv_driver.  You should be able to set that structure
constant, like all other busses.  Right now you can not which shows a
design issue.

So, take a look at that and let me know what you think.

thanks,

greg k-h


Re: [PATCH 00/12] Staging: hv: Cleanup vmbus driver - Phase II

2011-04-17 Thread Greg KH
On Thu, Mar 17, 2011 at 05:39:27PM -0400, valdis.kletni...@vt.edu wrote:
 On Tue, 15 Mar 2011 15:04:54 PDT, Greg KH said:
 
  Thanks for the patches, but as the .39 merge window is closed, I'll be
  holding on to these until after .39-rc1 is out before I can do anything
  with them.
 
 Is that Linus's merge window, or your window to freeze a for-linus tree?

You are replying to a month-old email, I can't recall at the moment
which I was referring to, sorry.

greg k-h


Hyper-V vmbus driver

2011-04-11 Thread KY Srinivasan
Greg,

Recently, you applied a patch-set from me that cleaned a bunch of architectural 
issues  in the vmbus driver.
With that patch-set, I think I have addressed all architectural issues that I 
am aware of.
I was wondering if you would have the time to let me know what else would have 
to be addressed
in the vmbus driver, before it could be considered ready for exiting staging. 
As always your help is
greatly appreciated.

Regards,

K. Y



Re: Hyper-V vmbus driver

2011-04-11 Thread Greg KH
On Mon, Apr 11, 2011 at 06:46:24PM +, KY Srinivasan wrote:
 Greg,
 
  
 
 Recently, you applied a patch-set from me that cleaned a bunch of 
 architectural
 issues  in the vmbus driver.
 
 With that patch-set, I think I have addressed all architectural issues that I
 am aware of.
 
 I was wondering if you would have the time to let me know what else would have
 to be addressed
 
 in the vmbus driver, before it could be considered ready for exiting staging.
 As always your help is
 
 greatly appreciated.

Hm, interesting word wrapping there, might I suggest a real email
client one of these days?  :)

Anyway, yes, I discussed this with Hank last week at the LF Collab
summit.  I'll look at the vmbus code later this week when I catch up on
all of my other work (stable, usb, tty, staging, etc.) that has piled up
during my 2 week absence, and get back to you with what I feel is still
needed to be done, if anything.

thanks,

greg k-h


Re: [PATCH 01/12] Staging: hv: Make vmbus driver a pci driver

2011-04-05 Thread Greg KH
On Tue, Mar 15, 2011 at 03:03:32PM -0700, K. Y. Srinivasan wrote:
 Make vmbus driver a pci driver. This is
 in preparation for cleaning up the root device
 management as well as the irq allocation for this
 driver.
 
 Signed-off-by: K. Y. Srinivasan k...@microsoft.com
 Signed-off-by: Haiyang Zhang haiya...@microsoft.com
 Signed-off-by: Mike Sterling mike.sterl...@microsoft.com
 Signed-off-by: Abhishek Kane v-abk...@microsoft.com
 Signed-off-by: Hank Janssen hjans...@microsoft.com
 ---
  drivers/staging/hv/vmbus_drv.c |   63 +++-
  1 files changed, 36 insertions(+), 27 deletions(-)
 
 diff --git a/drivers/staging/hv/vmbus_drv.c b/drivers/staging/hv/vmbus_drv.c
 index b473f46..1ef2f0f 100644
 --- a/drivers/staging/hv/vmbus_drv.c
 +++ b/drivers/staging/hv/vmbus_drv.c
 @@ -40,6 +40,8 @@
   #define VMBUS_IRQ 0x5
  #define VMBUS_IRQ_VECTOR IRQ5_VECTOR
  
 +struct pci_dev *hv_pci_dev;

Why is this global?

Have you forgotten to run 'sparse' on your changes?

thanks,

greg k-h


Re: [PATCH 00/12] Staging: hv: Cleanup vmbus driver - Phase II

2011-03-18 Thread Valdis . Kletnieks
On Tue, 15 Mar 2011 15:04:54 PDT, Greg KH said:

 Thanks for the patches, but as the .39 merge window is closed, I'll be
 holding on to these until after .39-rc1 is out before I can do anything
 with them.

Is that Linus's merge window, or your window to freeze a for-linus tree?



Re: [PATCH 00/12] Staging: hv: Cleanup vmbus driver - Phase II

2011-03-18 Thread Valdis . Kletnieks
On Thu, 17 Mar 2011 14:45:37 PDT, Greg KH said:
 On Thu, Mar 17, 2011 at 05:39:27PM -0400, valdis.kletni...@vt.edu wrote:
  Is that Linus's merge window, or your window to freeze a for-linus tree?
 
 My window.
 
 Linus's merge window is for the subsystem maintainers.  Everything that
 is to be sent during Linus's merge window had to be in the linux-next
 tree for a bit of time before the merge window opens.

OK, that's what I suspected, but wasn't 100% sure, because..

  I also posted the "my tree is now closed for .39 stuff" message to the
  devel mailing list, so there shouldn't have been any questions about
  this.

some of us are only on lkml, where usually it's Linus who gets to say
"The 2.6.N merge window is closed."



Re: [PATCH 00/12] Staging: hv: Cleanup vmbus driver - Phase II

2011-03-17 Thread Greg KH
On Thu, Mar 17, 2011 at 05:39:27PM -0400, valdis.kletni...@vt.edu wrote:
 On Tue, 15 Mar 2011 15:04:54 PDT, Greg KH said:
 
  Thanks for the patches, but as the .39 merge window is closed, I'll be
  holding on to these until after .39-rc1 is out before I can do anything
  with them.
 
 Is that Linus's merge window, or your window to freeze a for-linus tree?

My window.

Linus's merge window is for the subsystem maintainers.  Everything that
is to be sent during Linus's merge window had to be in the linux-next
tree for a bit of time before the merge window opens.

It's been this way for a number of years now, nothing new happening
here...

I also posted the "my tree is now closed for .39 stuff" message to the
devel mailing list, so there shouldn't have been any questions about
this.

thanks,

greg k-h


[PATCH 00/12] Staging: hv: Cleanup vmbus driver - Phase II

2011-03-15 Thread K. Y. Srinivasan
This patch-set fixes the following issues in the vmbus driver (vmbus_drv.c):

Make vmbus driver a platform pci device and clean up
root device management and irq allocation
(patches 1/12 through 3/12):
1) Make vmbus driver a platform pci driver.
2) Cleanup root device management.
3) Leverage the pci model for allocating irq.

General cleanup of vmbus driver (patches 4/12 through 12/12):
1) Rename vmbus_driver_context structure and do
   related cleanup.
2) Get rid of forward declarations by moving code.

Regards,

K. Y
 


[PATCH 01/12] Staging: hv: Make vmbus driver a pci driver

2011-03-15 Thread K. Y. Srinivasan
Make vmbus driver a pci driver. This is
in preparation for cleaning up the root device
management as well as the irq allocation for this
driver.

Signed-off-by: K. Y. Srinivasan k...@microsoft.com
Signed-off-by: Haiyang Zhang haiya...@microsoft.com
Signed-off-by: Mike Sterling mike.sterl...@microsoft.com
Signed-off-by: Abhishek Kane v-abk...@microsoft.com
Signed-off-by: Hank Janssen hjans...@microsoft.com
---
 drivers/staging/hv/vmbus_drv.c |   63 +++-
 1 files changed, 36 insertions(+), 27 deletions(-)

diff --git a/drivers/staging/hv/vmbus_drv.c b/drivers/staging/hv/vmbus_drv.c
index b473f46..1ef2f0f 100644
--- a/drivers/staging/hv/vmbus_drv.c
+++ b/drivers/staging/hv/vmbus_drv.c
@@ -40,6 +40,8 @@
 #define VMBUS_IRQ  0x5
 #define VMBUS_IRQ_VECTOR   IRQ5_VECTOR
 
+struct pci_dev *hv_pci_dev;
+
 /* Main vmbus driver data structure */
 struct vmbus_driver_context {
 
@@ -977,36 +979,24 @@ static irqreturn_t vmbus_isr(int irq, void *dev_id)
}
 }
 
-static struct dmi_system_id __initdata microsoft_hv_dmi_table[] = {
-   {
-   .ident = "Hyper-V",
-   .matches = {
-   DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
-   DMI_MATCH(DMI_PRODUCT_NAME, "Virtual Machine"),
-   DMI_MATCH(DMI_BOARD_NAME, "Virtual Machine"),
-   },
-   },
-   { },
-};
-MODULE_DEVICE_TABLE(dmi, microsoft_hv_dmi_table);
 
-static int __init vmbus_init(void)
+
+static int __devinit hv_pci_probe(struct pci_dev *pdev,
+   const struct pci_device_id *ent)
 {
-   DPRINT_INFO(VMBUS_DRV,
-   "Vmbus initializing current log level 0x%x (%x,%x)",
-   vmbus_loglevel, HIWORD(vmbus_loglevel), LOWORD(vmbus_loglevel));
-   /* Todo: it is used for loglevel, to be ported to new kernel. */
+   int err;
 
-   if (!dmi_check_system(microsoft_hv_dmi_table))
-   return -ENODEV;
+   hv_pci_dev = pdev;
 
-   return vmbus_bus_init();
-}
+   err = pci_enable_device(pdev);
+   if (err)
+   return err;
 
-static void __exit vmbus_exit(void)
-{
-   vmbus_bus_exit();
-   /* Todo: it is used for loglevel, to be ported to new kernel. */
+   err = vmbus_bus_init();
+   if (err)
+   pci_disable_device(pdev);
+
+   return err;
 }
 
 /*
@@ -1021,10 +1011,29 @@ static const struct pci_device_id microsoft_hv_pci_table[] = {
 };
 MODULE_DEVICE_TABLE(pci, microsoft_hv_pci_table);
 
+static struct pci_driver hv_bus_driver = {
+   .name =   "hv_bus",
+   .probe =  hv_pci_probe,
+   .id_table =   microsoft_hv_pci_table,
+};
+
+static int __init hv_pci_init(void)
+{
+   return pci_register_driver(hv_bus_driver);
+}
+
+static void __exit hv_pci_exit(void)
+{
+   vmbus_bus_exit();
+   pci_unregister_driver(hv_bus_driver);
+}
+
+
+
 MODULE_LICENSE("GPL");
 MODULE_VERSION(HV_DRV_VERSION);
 module_param(vmbus_irq, int, S_IRUGO);
 module_param(vmbus_loglevel, int, S_IRUGO);
 
-module_init(vmbus_init);
-module_exit(vmbus_exit);
+module_init(hv_pci_init);
+module_exit(hv_pci_exit);
-- 
1.5.5.6



Re: [PATCH 00/12] Staging: hv: Cleanup vmbus driver - Phase II

2011-03-15 Thread Greg KH
On Tue, Mar 15, 2011 at 03:02:07PM -0700, K. Y. Srinivasan wrote:
 This patch-set fixes the following issues in the vmbus driver (vmbus_drv.c):

snip

Thanks for the patches, but as the .39 merge window is closed, I'll be
holding on to these until after .39-rc1 is out before I can do anything
with them.

So don't be surprised if I don't respond to them for a few weeks.  Don't
worry, they aren't lost. :)

thanks,

greg k-h


RE: [PATCH 00/12] Staging: hv: Cleanup vmbus driver - Phase II

2011-03-15 Thread KY Srinivasan


 -Original Message-
 From: Greg KH [mailto:gre...@suse.de]
 Sent: Tuesday, March 15, 2011 6:05 PM
 To: KY Srinivasan
 Cc: linux-ker...@vger.kernel.org; de...@linuxdriverproject.org;
 virtualizat...@lists.osdl.org
 Subject: Re: [PATCH 00/12] Staging: hv: Cleanup vmbus driver - Phase II
 
 On Tue, Mar 15, 2011 at 03:02:07PM -0700, K. Y. Srinivasan wrote:
  This patch-set fixes the following issues in the vmbus driver (vmbus_drv.c):
 
 snip
 
 Thanks for the patches, but as the .39 merge window is closed, I'll be
 holding on to these until after .39-rc1 is out before I can do anything
 with them.
 
 So don't be surprised if I don't respond to them for a few weeks.  Don't
 worry, they aren't lost. :)

If possible, I would love to get your feedback even if you cannot check
in these patches. Also, if you can give me feedback as to what else
would need to be fixed to exit staging as far as the vmbus driver is
concerned, I can work on those over the next couple of weeks
until the tree opens up. If it is ok with you, we will send the
DPRINT cleanup patches in the next couple of days.
 
Regards,

K. Y



Re: [PATCH 00/12] Staging: hv: Cleanup vmbus driver - Phase II

2011-03-15 Thread Greg KH
On Tue, Mar 15, 2011 at 10:24:41PM +, KY Srinivasan wrote:
 
 
  -Original Message-
  From: Greg KH [mailto:gre...@suse.de]
  Sent: Tuesday, March 15, 2011 6:05 PM
  To: KY Srinivasan
  Cc: linux-ker...@vger.kernel.org; de...@linuxdriverproject.org;
  virtualizat...@lists.osdl.org
  Subject: Re: [PATCH 00/12] Staging: hv: Cleanup vmbus driver - Phase II
  
  On Tue, Mar 15, 2011 at 03:02:07PM -0700, K. Y. Srinivasan wrote:
   This patch-set fixes the following issues in the vmbus driver 
   (vmbus_drv.c):
  
  snip
  
  Thanks for the patches, but as the .39 merge window is closed, I'll be
  holding on to these until after .39-rc1 is out before I can do anything
  with them.
  
  So don't be surprised if I don't respond to them for a few weeks.  Don't
  worry, they aren't lost. :)
 
 If possible, I would love to get your feedback even if you cannot check 
 in these patches. Also, if you can give me feedback as to what else
 would need to be fixed to exit staging as far as the vmbus driver is 
 concerned, I can work on those over the next couple of weeks
 until the tree opens up.

As I'm going to be very busy with the merge window issues, coupled with
spring break and the LF Collab summit, I doubt I'll be able to do this,
sorry.  Give me a few weeks please.

 If it is ok with you, we will send the DPRINT cleanup patches in the
 next couple of days.

That's fine, they can sit in the same to-apply mbox next to these :)

greg k-h


RE: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci driver

2011-03-14 Thread KY Srinivasan


 -Original Message-
 From: Greg KH [mailto:gre...@suse.de]
 Sent: Sunday, March 13, 2011 11:25 PM
 To: KY Srinivasan
 Cc: linux-ker...@vger.kernel.org; de...@linuxdriverproject.org;
 virtualizat...@lists.osdl.org; Haiyang Zhang; Mike Sterling; Abhishek Kane
 (Mindtree Consulting PVT LTD); Hank Janssen
 Subject: Re: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci 
 driver
 
 On Sat, Mar 12, 2011 at 11:23:05PM +, KY Srinivasan wrote:
  Greg, I have redone this patch as well as [PATCH 12/21]. Do you want
  me to send you just these two patches or the entire series including
  these two?
 
 Just resend those two patches if that is easier.

Will do.
 
  Also, does this patch-set address all of the architectural issues you had
  noted earlier in the vmbus core? Please let us know what else needs to
  be done to exit staging as far as the vmbus driver is concerned. I
  want to get a head start before the new week begins! Also, we have
  patches ready for all DPRINT cleanup. Hank is holding them off until
  we finish addressing the architectural issues first.
 
 I do not know if this addresses everything, sorry, I have not had the
 time to review all of them yet.  Give me a few days at the least to go
 over them and apply them before I will be able to tell you this.

Thanks for taking the time to look at this.

 
 Also note that there shouldn't be anything holding back the DPRINT
 stuff, why wait?  If they apply on top of yours that should be fine,
 right?

You are right. We will submit the DPRINT patches soon.

Regards,

K. Y 





Re: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci driver

2011-03-14 Thread Greg KH
On Thu, Mar 10, 2011 at 02:08:32PM -0800, K. Y. Srinivasan wrote:
 Make vmbus driver a platform pci driver. This is
 in preparation for cleaning up irq allocation for this
 driver.

Now wouldn't this be the root device that everything else hangs off
of? 

 Signed-off-by: K. Y. Srinivasan k...@microsoft.com
 Signed-off-by: Haiyang Zhang haiya...@microsoft.com
 Signed-off-by: Mike Sterling mike.sterl...@microsoft.com
 Signed-off-by: Abhishek Kane v-abk...@microsoft.com
 Signed-off-by: Hank Janssen hjans...@microsoft.com
 ---
  drivers/staging/hv/vmbus_drv.c |   63 
 +++-
  1 files changed, 36 insertions(+), 27 deletions(-)
 
 diff --git a/drivers/staging/hv/vmbus_drv.c b/drivers/staging/hv/vmbus_drv.c
 index 8b9394a..e4855ac 100644
 --- a/drivers/staging/hv/vmbus_drv.c
 +++ b/drivers/staging/hv/vmbus_drv.c
 @@ -43,6 +43,8 @@
  
  static struct device *root_dev; /* Root device */
  
 +struct pci_dev *hv_pci_dev;

Why do you have 2 different devices here?  Is the root_dev still needed
now?

Still confused,

greg k-h


RE: [PATCH 00/21] Staging: hv: Cleanup vmbus driver

2011-03-14 Thread KY Srinivasan


 -Original Message-
 From: Greg KH [mailto:g...@kroah.com]
 Sent: Monday, March 14, 2011 3:37 PM
 To: KY Srinivasan
 Cc: gre...@suse.de; linux-ker...@vger.kernel.org;
 de...@linuxdriverproject.org; virtualizat...@lists.osdl.org
 Subject: Re: [PATCH 00/21] Staging: hv: Cleanup vmbus driver
 
 On Thu, Mar 10, 2011 at 01:59:42PM -0800, K. Y. Srinivasan wrote:
  This patch-set fixes the following issues in the vmbus driver (vmbus_drv.c):
 
  Cleanup root device management: (patches 1/21 through 10/21)
  1) Get rid of the hv_driver code from the vmbus abstraction
  2) Get rid of unnecessary call sequences and functions
 3) Clean up the management of the root device by using the
 standard mechanism for grouping devices under /sys/devices
 
 I've applied the first 9 patches, as I had questions on 10 and 11, and
 the others after that don't apply without those two applied.

Let me know what the questions are on 10 and 11. When I first sent you the 
patch-set, you had suggested some name changes in 11. Thomas had a comment
on 12.  This morning I sent
the corrected version of 11 as well as 12.
 
 So, care to resend the rest of the series when we've figured out the
 root device stuff?

Sure. Let me know what the issues/concerns are with regards to root
device handling.

Regards,

K. Y




Re: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci driver

2011-03-13 Thread Greg KH
On Sat, Mar 12, 2011 at 11:23:05PM +, KY Srinivasan wrote:
 Greg, I have redone this patch as well as [PATCH 12/21]. Do you want
 me to send you just these two patches or the entire series including
 these two?

Just resend those two patches if that is easier.

 Also, does this patch-set address all of the architectural issues you had
 noted earlier in the vmbus core? Please let us know what else needs to
 be done to exit staging as far as the vmbus driver is concerned. I
 want to get a head start before the new week begins! Also, we have
 patches ready for all DPRINT cleanup. Hank is holding them off until
 we finish addressing the architectural issues first.

I do not know if this addresses everything, sorry, I have not had the
time to review all of them yet.  Give me a few days at the least to go
over them and apply them before I will be able to tell you this.

Also note that there shouldn't be anything holding back the DPRINT
stuff, why wait?  If they apply on top of yours that should be fine,
right?

thanks,

greg k-h


RE: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci driver

2011-03-12 Thread KY Srinivasan


 -Original Message-
 From: Greg KH [mailto:gre...@suse.de]
 Sent: Thursday, March 10, 2011 5:33 PM
 To: KY Srinivasan
 Cc: linux-ker...@vger.kernel.org; de...@linuxdriverproject.org;
 virtualizat...@lists.osdl.org; Haiyang Zhang; Mike Sterling; Abhishek Kane
 (Mindtree Consulting PVT LTD); Hank Janssen
 Subject: Re: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci 
 driver
 
 On Thu, Mar 10, 2011 at 10:28:27PM +, KY Srinivasan wrote:
 
 
   -Original Message-
   From: Greg KH [mailto:gre...@suse.de]
   Sent: Thursday, March 10, 2011 5:21 PM
   To: KY Srinivasan
   Cc: linux-ker...@vger.kernel.org; de...@linuxdriverproject.org;
   virtualizat...@lists.osdl.org; Haiyang Zhang; Mike Sterling; Abhishek Kane
   (Mindtree Consulting PVT LTD); Hank Janssen
   Subject: Re: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci
 driver
  
   On Thu, Mar 10, 2011 at 02:08:32PM -0800, K. Y. Srinivasan wrote:
Make vmbus driver a platform pci driver. This is
 in preparation for cleaning up irq allocation for this
driver.
  
   The idea is nice, but the naming is a bit confusing.
  
   We have platform drivers which are much different from what you are
   doing here, you are just creating a normal pci driver.
  
   Very minor comments below.
  
   
Signed-off-by: K. Y. Srinivasan k...@microsoft.com
Signed-off-by: Haiyang Zhang haiya...@microsoft.com
Signed-off-by: Mike Sterling mike.sterl...@microsoft.com
Signed-off-by: Abhishek Kane v-abk...@microsoft.com
Signed-off-by: Hank Janssen hjans...@microsoft.com
---
 drivers/staging/hv/vmbus_drv.c |   63 +++
 -
   
 1 files changed, 36 insertions(+), 27 deletions(-)
   
diff --git a/drivers/staging/hv/vmbus_drv.c
 b/drivers/staging/hv/vmbus_drv.c
index 8b9394a..e4855ac 100644
--- a/drivers/staging/hv/vmbus_drv.c
+++ b/drivers/staging/hv/vmbus_drv.c
@@ -43,6 +43,8 @@
   
 static struct device *root_dev; /* Root device */
   
+struct pci_dev *hv_pci_dev;
+
 /* Main vmbus driver data structure */
 struct vmbus_driver_context {
   
@@ -887,36 +889,24 @@ static irqreturn_t vmbus_isr(int irq, void 
*dev_id)
}
 }
   
-static struct dmi_system_id __initdata microsoft_hv_dmi_table[] = {
-   {
-   .ident = "Hyper-V",
-   .matches = {
-   DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
-   DMI_MATCH(DMI_PRODUCT_NAME, "Virtual Machine"),
-   DMI_MATCH(DMI_BOARD_NAME, "Virtual Machine"),
-   },
-   },
-   { },
-};
-MODULE_DEVICE_TABLE(dmi, microsoft_hv_dmi_table);
  
   You're sure it's safe to delete this now and just rely on the PCI ids,
    right?  For some weird reason I thought we needed both to catch all
   types of systems, but I can't remember why.
  I have tested this; I don't think we need the dmi table.
 
 Ok, if you are sure, that's fine with me.
 
    How about "hv_bus" as a name, as that's what this really is.  It's a
   bus adapter, like USB, Firewire, and all sorts of other bus
   controllers.
 
  Sure; I will make these changes. Would you mind if I submit these name
 changes as a separate patch?
 
 How about just redo this patch?  I haven't reviewed the others yet, so
 you might want to wait a day to see if I don't like any of them either
 :)

Greg, I have redone this patch as well as [PATCH 12/21]. Do you want me to send
you just these two patches or the entire series including these two? Also, does
this patch-set address all of the architectural issues you had noted earlier in
the vmbus core? Please let us know what else needs to be done to exit staging
as far as the vmbus driver is concerned. I want to get a head start before the
new week begins! Also, we have patches ready for all the DPRINT cleanup. Hank
is holding them off until we finish addressing the architectural issues first.

Regards,

K. Y



[PATCH 00/21] Staging: hv: Cleanup vmbus driver

2011-03-10 Thread K. Y. Srinivasan
This patch-set fixes the following issues in the vmbus driver (vmbus_drv.c):

Cleanup root device management: (patches 1/21 through 10/21)
1) Get rid of the hv_driver code from the vmbus abstraction
2) Get rid of unnecessary call sequences and functions
3) Clean up the management of the root device by using the 
   standard mechanism for grouping devices under /sys/devices

Make the vmbus driver a platform pci driver and clean up irq allocation
(patches 11/21 through 12/21):
1) Make vmbus driver a platform pci driver.
2) Leverage the pci model for allocating irq (see the sketch after this message).

General cleanup of the vmbus driver (patches 13/21 through 21/21):
1) Rename vmbus_driver_context structure and do
   related cleanup.
2) Get rid of forward declarations by moving code.

Regards,

K. Y
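
Point 2 above amounts to this: once the driver binds through hv_pci_probe(),
the IRQ can come from the enumerated pci_dev rather than the vmbus_irq module
parameter. A minimal sketch under that assumption (hv_pci_dev saved at probe
time, as in patch 11/21; the helper name and the trimmed error handling are
illustrative only):

#include <linux/interrupt.h>
#include <linux/pci.h>

extern struct pci_dev *hv_pci_dev;	/* saved in hv_pci_probe() */

/* Sketch: request the IRQ the PCI core assigned during enumeration
 * instead of trusting the vmbus_irq module parameter. */
static int hv_request_vmbus_irq(irq_handler_t handler)
{
	return request_irq(hv_pci_dev->irq, handler, IRQF_SHARED,
			   "hv_bus", hv_pci_dev);
}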
 


[PATCH 11/21] Staging: hv: Make vmbus driver a platform pci driver

2011-03-10 Thread K. Y. Srinivasan
Make vmbus driver a platform pci driver. This is
in preparation for cleaning up irq allocation for this
driver.

Signed-off-by: K. Y. Srinivasan k...@microsoft.com
Signed-off-by: Haiyang Zhang haiya...@microsoft.com
Signed-off-by: Mike Sterling mike.sterl...@microsoft.com
Signed-off-by: Abhishek Kane v-abk...@microsoft.com
Signed-off-by: Hank Janssen hjans...@microsoft.com
---
 drivers/staging/hv/vmbus_drv.c |   63 +++-
 1 files changed, 36 insertions(+), 27 deletions(-)

diff --git a/drivers/staging/hv/vmbus_drv.c b/drivers/staging/hv/vmbus_drv.c
index 8b9394a..e4855ac 100644
--- a/drivers/staging/hv/vmbus_drv.c
+++ b/drivers/staging/hv/vmbus_drv.c
@@ -43,6 +43,8 @@
 
 static struct device *root_dev; /* Root device */
 
+struct pci_dev *hv_pci_dev;
+
 /* Main vmbus driver data structure */
 struct vmbus_driver_context {
 
@@ -887,36 +889,24 @@ static irqreturn_t vmbus_isr(int irq, void *dev_id)
}
 }
 
-static struct dmi_system_id __initdata microsoft_hv_dmi_table[] = {
-   {
-   .ident = "Hyper-V",
-   .matches = {
-   DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
-   DMI_MATCH(DMI_PRODUCT_NAME, "Virtual Machine"),
-   DMI_MATCH(DMI_BOARD_NAME, "Virtual Machine"),
-   },
-   },
-   { },
-};
-MODULE_DEVICE_TABLE(dmi, microsoft_hv_dmi_table);
 
-static int __init vmbus_init(void)
+
+static int __devinit hv_pci_probe(struct pci_dev *pdev,
+   const struct pci_device_id *ent)
 {
-   DPRINT_INFO(VMBUS_DRV,
-   "Vmbus initializing current log level 0x%x (%x,%x)",
-   vmbus_loglevel, HIWORD(vmbus_loglevel), LOWORD(vmbus_loglevel));
-   /* Todo: it is used for loglevel, to be ported to new kernel. */
+   int err;
 
-   if (!dmi_check_system(microsoft_hv_dmi_table))
-   return -ENODEV;
+   hv_pci_dev = pdev;
 
-   return vmbus_bus_init();
-}
+   err = pci_enable_device(pdev);
+   if (err)
+   return err;
 
-static void __exit vmbus_exit(void)
-{
-   vmbus_bus_exit();
-   /* Todo: it is used for loglevel, to be ported to new kernel. */
+   err = vmbus_bus_init();
+   if (err)
+   pci_disable_device(pdev);
+
+   return err;
 }
 
 /*
@@ -931,10 +921,29 @@ static const struct pci_device_id microsoft_hv_pci_table[] = {
 };
 MODULE_DEVICE_TABLE(pci, microsoft_hv_pci_table);
 
+static struct pci_driver platform_driver = {
+   .name =   "hv-platform-pci",
+   .probe =  hv_pci_probe,
+   .id_table =   microsoft_hv_pci_table,
+};
+
+static int __init hv_pci_init(void)
+{
+   return pci_register_driver(&platform_driver);
+}
+
+static void __exit hv_pci_exit(void)
+{
+   vmbus_bus_exit();
+   pci_unregister_driver(&platform_driver);
+}
+
+
+
 MODULE_LICENSE("GPL");
 MODULE_VERSION(HV_DRV_VERSION);
 module_param(vmbus_irq, int, S_IRUGO);
 module_param(vmbus_loglevel, int, S_IRUGO);
 
-module_init(vmbus_init);
-module_exit(vmbus_exit);
+module_init(hv_pci_init);
+module_exit(hv_pci_exit);
-- 
1.5.5.6



Re: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci driver

2011-03-10 Thread Greg KH
On Thu, Mar 10, 2011 at 02:08:32PM -0800, K. Y. Srinivasan wrote:
 Make vmbus driver a platform pci driver. This is
 in preparation for cleaning up irq allocation for this
 driver.

The idea is nice, but the naming is a bit confusing.

We have platform drivers which are much different from what you are
doing here, you are just creating a normal pci driver.

Very minor comments below.

 
 Signed-off-by: K. Y. Srinivasan k...@microsoft.com
 Signed-off-by: Haiyang Zhang haiya...@microsoft.com
 Signed-off-by: Mike Sterling mike.sterl...@microsoft.com
 Signed-off-by: Abhishek Kane v-abk...@microsoft.com
 Signed-off-by: Hank Janssen hjans...@microsoft.com
 ---
  drivers/staging/hv/vmbus_drv.c |   63 
 +++-
  1 files changed, 36 insertions(+), 27 deletions(-)
 
 diff --git a/drivers/staging/hv/vmbus_drv.c b/drivers/staging/hv/vmbus_drv.c
 index 8b9394a..e4855ac 100644
 --- a/drivers/staging/hv/vmbus_drv.c
 +++ b/drivers/staging/hv/vmbus_drv.c
 @@ -43,6 +43,8 @@
  
  static struct device *root_dev; /* Root device */
  
 +struct pci_dev *hv_pci_dev;
 +
  /* Main vmbus driver data structure */
  struct vmbus_driver_context {
  
 @@ -887,36 +889,24 @@ static irqreturn_t vmbus_isr(int irq, void *dev_id)
   }
  }
  
 -static struct dmi_system_id __initdata microsoft_hv_dmi_table[] = {
 - {
 - .ident = "Hyper-V",
 - .matches = {
 - DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
 - DMI_MATCH(DMI_PRODUCT_NAME, "Virtual Machine"),
 - DMI_MATCH(DMI_BOARD_NAME, "Virtual Machine"),
 - },
 - },
 - { },
 -};
 -MODULE_DEVICE_TABLE(dmi, microsoft_hv_dmi_table);

You're sure it's safe to delete this now and just rely on the PCI ids,
right?  For some weird reason I thought we needed both to catch all
types of systems, but I can't remember why.

  
 -static int __init vmbus_init(void)
 +
 +static int __devinit hv_pci_probe(struct pci_dev *pdev,
 + const struct pci_device_id *ent)
  {
 - DPRINT_INFO(VMBUS_DRV,
 - "Vmbus initializing current log level 0x%x (%x,%x)",
 - vmbus_loglevel, HIWORD(vmbus_loglevel), LOWORD(vmbus_loglevel));
 - /* Todo: it is used for loglevel, to be ported to new kernel. */
 + int err;
  
 - if (!dmi_check_system(microsoft_hv_dmi_table))
 - return -ENODEV;
 + hv_pci_dev = pdev;
  
 - return vmbus_bus_init();
 -}
 + err = pci_enable_device(pdev);
 + if (err)
 + return err;
  
 -static void __exit vmbus_exit(void)
 -{
 - vmbus_bus_exit();
 - /* Todo: it is used for loglevel, to be ported to new kernel. */
 + err = vmbus_bus_init();
 + if (err)
 + pci_disable_device(pdev);
 +
 + return err;
  }
  
  /*
 @@ -931,10 +921,29 @@ static const struct pci_device_id 
 microsoft_hv_pci_table[] = {
  };
  MODULE_DEVICE_TABLE(pci, microsoft_hv_pci_table);
  
 +static struct pci_driver platform_driver = {

hv_bus_driver?

 + .name =   "hv-platform-pci",

How about "hv_bus" as a name, as that's what this really is.  It's a
bus adapter, like USB, Firewire, and all sorts of other bus
controllers.

 + .probe =  hv_pci_probe,
 + .id_table =   microsoft_hv_pci_table,
 +};
 +
 +static int __init hv_pci_init(void)
 +{
 + return pci_register_driver(&platform_driver);
 +}
 +
 +static void __exit hv_pci_exit(void)
 +{
 + vmbus_bus_exit();
 + pci_unregister_driver(&platform_driver);
 +}
 +
 +
 +
  MODULE_LICENSE("GPL");
  MODULE_VERSION(HV_DRV_VERSION);
  module_param(vmbus_irq, int, S_IRUGO);
  module_param(vmbus_loglevel, int, S_IRUGO);
  
 -module_init(vmbus_init);
 -module_exit(vmbus_exit);
 +module_init(hv_pci_init);
 +module_exit(hv_pci_exit);
 -- 
 1.5.5.6
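
On the DMI-versus-PCI-ID question above: matching on PCI IDs means the PCI
core, not a DMI scan, decides when hv_pci_probe() runs, so the probe fires
only on machines that actually expose the Hyper-V virtual device. A minimal
sketch of such a table; the 0x1414 (Microsoft) vendor ID and the 0x5353
device ID are assumptions about the emulated device, not values taken from
this thread:

#include <linux/module.h>
#include <linux/pci.h>

/* Sketch: the PCI core walks this table at enumeration time and calls
 * the driver's probe only on a vendor/device match, replacing the
 * dmi_check_system() gate that vmbus_init() used. */
static const struct pci_device_id hv_ids[] = {
	{ PCI_DEVICE(0x1414, 0x5353) },	/* assumed Hyper-V virtual device */
	{ }				/* terminating entry */
};
MODULE_DEVICE_TABLE(pci, hv_ids);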


RE: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci driver

2011-03-10 Thread KY Srinivasan


 -Original Message-
 From: Greg KH [mailto:gre...@suse.de]
 Sent: Thursday, March 10, 2011 5:21 PM
 To: KY Srinivasan
 Cc: linux-ker...@vger.kernel.org; de...@linuxdriverproject.org;
 virtualizat...@lists.osdl.org; Haiyang Zhang; Mike Sterling; Abhishek Kane
 (Mindtree Consulting PVT LTD); Hank Janssen
 Subject: Re: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci 
 driver
 
 On Thu, Mar 10, 2011 at 02:08:32PM -0800, K. Y. Srinivasan wrote:
  Make vmbus driver a platform pci driver. This is
  in preparation for cleaning up irq allocation for this
  driver.
 
 The idea is nice, but the naming is a bit confusing.
 
 We have platform drivers which are much different from what you are
 doing here, you are just creating a normal pci driver.
 
 Very minor comments below.
 
 
  Signed-off-by: K. Y. Srinivasan k...@microsoft.com
  Signed-off-by: Haiyang Zhang haiya...@microsoft.com
  Signed-off-by: Mike Sterling mike.sterl...@microsoft.com
  Signed-off-by: Abhishek Kane v-abk...@microsoft.com
  Signed-off-by: Hank Janssen hjans...@microsoft.com
  ---
   drivers/staging/hv/vmbus_drv.c |   63 +++-
 
   1 files changed, 36 insertions(+), 27 deletions(-)
 
  diff --git a/drivers/staging/hv/vmbus_drv.c b/drivers/staging/hv/vmbus_drv.c
  index 8b9394a..e4855ac 100644
  --- a/drivers/staging/hv/vmbus_drv.c
  +++ b/drivers/staging/hv/vmbus_drv.c
  @@ -43,6 +43,8 @@
 
   static struct device *root_dev; /* Root device */
 
  +struct pci_dev *hv_pci_dev;
  +
   /* Main vmbus driver data structure */
   struct vmbus_driver_context {
 
  @@ -887,36 +889,24 @@ static irqreturn_t vmbus_isr(int irq, void *dev_id)
  }
   }
 
  -static struct dmi_system_id __initdata microsoft_hv_dmi_table[] = {
  -   {
 -   .ident = "Hyper-V",
 -   .matches = {
 -   DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
 -   DMI_MATCH(DMI_PRODUCT_NAME, "Virtual Machine"),
 -   DMI_MATCH(DMI_BOARD_NAME, "Virtual Machine"),
  -   },
  -   },
  -   { },
  -};
  -MODULE_DEVICE_TABLE(dmi, microsoft_hv_dmi_table);
 
 You're sure it's safe to delete this now and just rely on the PCI ids,
 right?  For some wierd reason I thought we needed both to catch all
 types of systems, but I can't remember why.
I have tested this; I don't think we need the dmi table.

 
 
  -static int __init vmbus_init(void)
  +
  +static int __devinit hv_pci_probe(struct pci_dev *pdev,
  +   const struct pci_device_id *ent)
   {
  -   DPRINT_INFO(VMBUS_DRV,
 -   "Vmbus initializing current log level 0x%x (%x,%x)",
 -   vmbus_loglevel, HIWORD(vmbus_loglevel), LOWORD(vmbus_loglevel));
  -   /* Todo: it is used for loglevel, to be ported to new kernel. */
  +   int err;
 
  -   if (!dmi_check_system(microsoft_hv_dmi_table))
  -   return -ENODEV;
  +   hv_pci_dev = pdev;
 
  -   return vmbus_bus_init();
  -}
  +   err = pci_enable_device(pdev);
  +   if (err)
  +   return err;
 
  -static void __exit vmbus_exit(void)
  -{
  -   vmbus_bus_exit();
  -   /* Todo: it is used for loglevel, to be ported to new kernel. */
  +   err = vmbus_bus_init();
  +   if (err)
  +   pci_disable_device(pdev);
  +
  +   return err;
   }
 
   /*
  @@ -931,10 +921,29 @@ static const struct pci_device_id
 microsoft_hv_pci_table[] = {
   };
   MODULE_DEVICE_TABLE(pci, microsoft_hv_pci_table);
 
  +static struct pci_driver platform_driver = {
 
 hv_bus_driver?
 
  +   .name =   "hv-platform-pci",
 
  How about "hv_bus" as a name, as that's what this really is.  It's a
 bus adapter, like USB, Firewire, and all sorts of other bus
 controllers.

Sure; I will make these changes. Would you mind if I submit these name changes 
as a separate patch?

Regards,

K. Y


Re: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci driver

2011-03-10 Thread Greg KH
On Thu, Mar 10, 2011 at 10:28:27PM +, KY Srinivasan wrote:
 
 
  -Original Message-
  From: Greg KH [mailto:gre...@suse.de]
  Sent: Thursday, March 10, 2011 5:21 PM
  To: KY Srinivasan
  Cc: linux-ker...@vger.kernel.org; de...@linuxdriverproject.org;
  virtualizat...@lists.osdl.org; Haiyang Zhang; Mike Sterling; Abhishek Kane
  (Mindtree Consulting PVT LTD); Hank Janssen
  Subject: Re: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci 
  driver
  
  On Thu, Mar 10, 2011 at 02:08:32PM -0800, K. Y. Srinivasan wrote:
   Make vmbus driver a platform pci driver. This is
    in preparation for cleaning up irq allocation for this
   driver.
  
  The idea is nice, but the naming is a bit confusing.
  
  We have platform drivers which are much different from what you are
  doing here, you are just creating a normal pci driver.
  
  Very minor comments below.
  
  
   Signed-off-by: K. Y. Srinivasan k...@microsoft.com
   Signed-off-by: Haiyang Zhang haiya...@microsoft.com
   Signed-off-by: Mike Sterling mike.sterl...@microsoft.com
   Signed-off-by: Abhishek Kane v-abk...@microsoft.com
   Signed-off-by: Hank Janssen hjans...@microsoft.com
   ---
drivers/staging/hv/vmbus_drv.c |   63 
   +++-
  
1 files changed, 36 insertions(+), 27 deletions(-)
  
   diff --git a/drivers/staging/hv/vmbus_drv.c 
   b/drivers/staging/hv/vmbus_drv.c
   index 8b9394a..e4855ac 100644
   --- a/drivers/staging/hv/vmbus_drv.c
   +++ b/drivers/staging/hv/vmbus_drv.c
   @@ -43,6 +43,8 @@
  
static struct device *root_dev; /* Root device */
  
   +struct pci_dev *hv_pci_dev;
   +
/* Main vmbus driver data structure */
struct vmbus_driver_context {
  
   @@ -887,36 +889,24 @@ static irqreturn_t vmbus_isr(int irq, void *dev_id)
 }
}
  
   -static struct dmi_system_id __initdata microsoft_hv_dmi_table[] = {
   - {
   - .ident = "Hyper-V",
   - .matches = {
   - DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
   - DMI_MATCH(DMI_PRODUCT_NAME, "Virtual Machine"),
   - DMI_MATCH(DMI_BOARD_NAME, "Virtual Machine"),
   - },
   - },
   - { },
   -};
   -MODULE_DEVICE_TABLE(dmi, microsoft_hv_dmi_table);
  
  You're sure it's safe to delete this now and just rely on the PCI ids,
   right?  For some weird reason I thought we needed both to catch all
  types of systems, but I can't remember why.
 I have tested this; I don't think we need the dmi table.

Ok, if you are sure, that's fine with me.

   How about "hv_bus" as a name, as that's what this really is.  It's a
  bus adapter, like USB, Firewire, and all sorts of other bus
  controllers.
 
 Sure; I will make these changes. Would you mind if I submit these name 
  changes as a separate patch?

How about just redo this patch?  I haven't reviewed the others yet, so
you might want to wait a day to see if I don't like any of them either
:)

 
 Regards,
 
 K. Y


RE: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci driver

2011-03-10 Thread KY Srinivasan


 -Original Message-
 From: Greg KH [mailto:gre...@suse.de]
 Sent: Thursday, March 10, 2011 5:33 PM
 To: KY Srinivasan
 Cc: linux-ker...@vger.kernel.org; de...@linuxdriverproject.org;
 virtualizat...@lists.osdl.org; Haiyang Zhang; Mike Sterling; Abhishek Kane
 (Mindtree Consulting PVT LTD); Hank Janssen
 Subject: Re: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci 
 driver
 
 On Thu, Mar 10, 2011 at 10:28:27PM +, KY Srinivasan wrote:
 
 
   -Original Message-
   From: Greg KH [mailto:gre...@suse.de]
   Sent: Thursday, March 10, 2011 5:21 PM
   To: KY Srinivasan
   Cc: linux-ker...@vger.kernel.org; de...@linuxdriverproject.org;
   virtualizat...@lists.osdl.org; Haiyang Zhang; Mike Sterling; Abhishek Kane
   (Mindtree Consulting PVT LTD); Hank Janssen
   Subject: Re: [PATCH 11/21] Staging: hv: Make vmbus driver a platform pci
 driver
  
   On Thu, Mar 10, 2011 at 02:08:32PM -0800, K. Y. Srinivasan wrote:
Make vmbus driver a platform pci driver. This is
 in preparation for cleaning up irq allocation for this
driver.
  
    The idea is nice, but the naming is a bit confusing.
  
   We have platform drivers which are much different from what you are
   doing here, you are just creating a normal pci driver.
  
   Very minor comments below.
  
   
Signed-off-by: K. Y. Srinivasan k...@microsoft.com
Signed-off-by: Haiyang Zhang haiya...@microsoft.com
Signed-off-by: Mike Sterling mike.sterl...@microsoft.com
Signed-off-by: Abhishek Kane v-abk...@microsoft.com
Signed-off-by: Hank Janssen hjans...@microsoft.com
---
 drivers/staging/hv/vmbus_drv.c |   63 +++
 -
   
 1 files changed, 36 insertions(+), 27 deletions(-)
   
diff --git a/drivers/staging/hv/vmbus_drv.c
 b/drivers/staging/hv/vmbus_drv.c
index 8b9394a..e4855ac 100644
--- a/drivers/staging/hv/vmbus_drv.c
+++ b/drivers/staging/hv/vmbus_drv.c
@@ -43,6 +43,8 @@
   
 static struct device *root_dev; /* Root device */
   
+struct pci_dev *hv_pci_dev;
+
 /* Main vmbus driver data structure */
 struct vmbus_driver_context {
   
@@ -887,36 +889,24 @@ static irqreturn_t vmbus_isr(int irq, void 
*dev_id)
}
 }
   
-static struct dmi_system_id __initdata microsoft_hv_dmi_table[] = {
-   {
-   .ident = "Hyper-V",
-   .matches = {
-   DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
-   DMI_MATCH(DMI_PRODUCT_NAME, "Virtual Machine"),
-   DMI_MATCH(DMI_BOARD_NAME, "Virtual Machine"),
-   },
-   },
-   { },
-};
-MODULE_DEVICE_TABLE(dmi, microsoft_hv_dmi_table);
  
   You're sure it's safe to delete this now and just rely on the PCI ids,
    right?  For some weird reason I thought we needed both to catch all
   types of systems, but I can't remember why.
  I have tested this; I don't think we need the dmi table.
 
 Ok, if you are sure, that's fine with me.
 
    How about "hv_bus" as a name, as that's what this really is.  It's a
   bus adapter, like USB, Firewire, and all sorts of other bus
   controllers.
 
  Sure; I will make these changes. Would you mind if I submit these name
  changes as a separate patch?
 
 How about just redo this patch?  I haven't reviewed the others yet, so
 you might want to wait a day to see if I don't like any of them either
 :)

Ok; I will wait for the reviews.

Regards,

K. Y



Re: [PATCH 1/1] staging: hv: Convert vmbus driver interface function pointer table to constant

2010-09-14 Thread Greg KH
On Thu, Sep 09, 2010 at 02:53:03PM +, Haiyang Zhang wrote:
  From: Greg KH [mailto:gre...@suse.de]
  Sent: Wednesday, September 08, 2010 6:44 PM
   Convert vmbus driver interface function pointer table to constant
   The vmbus interface functions are assigned to a constant - vmbus_ops.
  
  You also remove a function pointer in this patch, why?  Please break up
  the patch into logical parts, one patch, one thing.
  
  This looks like it should be 2 patches, right?
 
 Because the vmbus interface function pointer table is converted to a
 constant variable -- vmbus_ops, the function GetChannelInterface(),
 VmbusGetChannelInterface() and pointer GetChannelInterface are no longer
 in use. The deprecated function's work is done by the initialization of
 the newly added constant variable vmbus_ops.
 
 I created the new constant variable vmbus_ops and removed the deprecated
 function pointer GetChannelInterface in one patch.

Great, next time say that in the patch please :)

I'll go edit the wording and apply this...

thanks,

greg k-h


RE: [PATCH 1/1] staging: hv: Convert vmbus driver interface function pointer table to constant

2010-09-09 Thread Haiyang Zhang
 From: Greg KH [mailto:gre...@suse.de]
 Sent: Wednesday, September 08, 2010 6:44 PM
  Convert vmbus driver interface function pointer table to constant
  The vmbus interface functions are assigned to a constant - vmbus_ops.
 
 You also remove a function pointer in this patch, why?  Please break up
 the patch into logical parts, one patch, one thing.
 
 This looks like it should be 2 patches, right?

Because the vmbus interface function pointer table is converted to a
constant variable -- vmbus_ops, the function GetChannelInterface(),
VmbusGetChannelInterface() and pointer GetChannelInterface are no longer
in use. The deprecated function's work is done by the initialization of
the newly added constant variable vmbus_ops.

I created the new constant variable vmbus_ops and removed the deprecated
function pointer GetChannelInterface in one patch.

Thanks,

- Haiyang
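
The conversion described above is the usual ops-table-as-constant pattern. A
rough sketch of its shape, with invented names (the real staging-tree types
and entry points differ):

struct vmbus_channel;	/* opaque here; defined elsewhere in the driver */

/* Invented stand-ins for the real channel entry points. */
int vmbus_open(struct vmbus_channel *chan);
void vmbus_close(struct vmbus_channel *chan);

struct vmbus_channel_ops {
	int (*open)(struct vmbus_channel *chan);
	void (*close)(struct vmbus_channel *chan);
};

/*
 * Before: a writable table that GetChannelInterface() populated at
 * runtime. After: a compile-time constant, which is what lets the
 * filler function and its pointer be deleted in the same patch.
 */
static const struct vmbus_channel_ops vmbus_ops = {
	.open = vmbus_open,
	.close = vmbus_close,
};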



Re: [PATCH 1/1] staging: hv: Convert vmbus driver interface function pointer table to constant

2010-09-08 Thread Greg KH
On Wed, Sep 08, 2010 at 08:29:45PM +, Haiyang Zhang wrote:
 From: Haiyang Zhang haiya...@microsoft.com
 
 Convert vmbus driver interface function pointer table to constant
 The vmbus interface functions are assigned to a constant - vmbus_ops.

You also remove a function pointer in this patch, why?  Please break up
the patch into logical parts, one patch, one thing.

This looks like it should be 2 patches, right?

thanks,

greg k-h