Re: Adding VirtIO SCSI to KVM hypervisors

2017-01-21 Thread Laszlo Hornyak
Hi Wido,

If I understand correctly from the documentation and your examples, virtio
provides a virtio interface to the guest while virtio-scsi provides a SCSI
interface, so an IaaS service should not replace one with the other without
user request / approval. It would probably be better to let the user set what
kind of I/O interface the VM needs.

Best regards,
Laszlo

On Fri, Jan 20, 2017 at 10:21 PM, Wido den Hollander  wrote:

> Hi,
>
> VirtIO SCSI [0] has been supported by Linux and all mainstream kernels
> for a while now, but inside CloudStack we are not using it. There is an
> issue for this [1].
>
> It would bring more (theoretical) performance to VMs, but one of the
> motivators (for me) is that we can support TRIM/DISCARD [2].
>
> This would allow RBD images on Ceph to shrink, but it can also give
> back free space on QCOW2 images if guests run fstrim, something all modern
> distributions do weekly via cron.
>
> Now, it is simple to swap VirtIO for VirtIO SCSI. This would however mean
> that disks inside VMs are then called /dev/sdX instead of /dev/vdX.
>
> For GRUB and such this is no problem, since it usually works on UUIDs and/or
> labels, but static mounts on /dev/vdb1, for example, break.
>
> We currently don't have any way to configure how we want to present
> a disk to a guest, so when attaching a volume we can't say that we want to
> use a different driver. If we think that an Operating System supports VirtIO,
> we use that driver in KVM.
>
> Any suggestion on how to add VirtIO SCSI support?
>
> Wido
>
>
> [0]: http://wiki.qemu.org/Features/VirtioSCSI
> [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-8239
> [2]: https://issues.apache.org/jira/browse/CLOUDSTACK-8104
>
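In libvirt terms, the swap described above is replacing a virtio(-blk) <disk> with a virtio-scsi <controller> plus a bus='scsi' <disk>, with the driver attribute discard='unmap' enabling TRIM/DISCARD pass-through. A minimal sketch in Python, for illustration only (CloudStack's KVM agent actually builds this XML in Java, and the exact attributes depend on the libvirt/QEMU versions in use):

```python
# Sketch: build libvirt <disk> XML fragments for virtio-blk vs virtio-scsi.
# Illustrative only -- not CloudStack's actual XML-generation code.

def disk_xml(source_file: str, bus: str, discard: bool = False) -> str:
    """Return a libvirt <disk> fragment for the given bus ('virtio' or 'scsi')."""
    # virtio-blk disks show up in the guest as /dev/vdX, virtio-scsi as /dev/sdX
    dev = "vda" if bus == "virtio" else "sda"
    drv_discard = " discard='unmap'" if discard else ""
    return (
        "<disk type='file' device='disk'>\n"
        f"  <driver name='qemu' type='qcow2'{drv_discard}/>\n"
        f"  <source file='{source_file}'/>\n"
        f"  <target dev='{dev}' bus='{bus}'/>\n"
        "</disk>"
    )

# virtio-scsi additionally needs a SCSI controller in the domain definition:
VIRTIO_SCSI_CONTROLLER = "<controller type='scsi' model='virtio-scsi'/>"

if __name__ == "__main__":
    print(disk_xml("/var/lib/libvirt/images/vm.qcow2", "virtio"))
    print(VIRTIO_SCSI_CONTROLLER)
    print(disk_xml("/var/lib/libvirt/images/vm.qcow2", "scsi", discard=True))
```

With a definition like the second one, fstrim inside the guest can release space back to QCOW2 or RBD backing storage.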



-- 

EOF


[GitHub] cloudstack pull request #1914: CLOUDSTACK-9753 - Update L10N resource files ...

2017-01-21 Thread milamberspace
GitHub user milamberspace opened a pull request:

https://github.com/apache/cloudstack/pull/1914

CLOUDSTACK-9753 - Update L10N resource files with 4.10 strings from Transifex (20170121)


cc @rhtyd 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/milamberspace/cloudstack L10N-update-Master-20170121

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1914.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1914


commit 2df53d659d0e68e095d29fec357beaf3366509cb
Author: Milamber 
Date:   2017-01-21T12:25:58Z

CLOUDSTACK-9753 - Update L10N resource files with 4.10 strings from 
Transifex (20170121)




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1741: Updated StrongSwan VPN Implementation

2017-01-21 Thread remibergsma
Github user remibergsma commented on the issue:

https://github.com/apache/cloudstack/pull/1741
  
@PaulAngus Sounds like an old bug? This PR was supposed to fix it and is 
merged: https://github.com/apache/cloudstack/pull/1483




Re: Adding VirtIO SCSI to KVM hypervisors

2017-01-21 Thread Syed Ahmed
Wido,

Were you thinking of adding this as a global setting? I can see why it will
be useful. I'm happy to review any ideas you might have around this.

Thanks,
-Syed


Re: [GitHub] cloudstack issue #1741: Updated StrongSwan VPN Implementation

2017-01-21 Thread Will Stevens
It is likely my environment. I had some connectivity issues in these envs
when I was using them before. I have a pretty recent master in, but I can
retest tonight to be sure. Can we kick off your CI to see what yours says?

On Jan 21, 2017 2:24 AM, "PaulAngus"  wrote:

> Github user PaulAngus commented on the issue:
>
> https://github.com/apache/cloudstack/pull/1741
>
> hi @remibergsma , the design puts the same MAC on the two VPC routers.
> XenServer doesn't seem to like this. (ESXi hosts give a specific warning).
> @swill have you pulled in the updated marvin smoke tests? We had all
> green on KVM tests with the updated component test suite
>


[GitHub] cloudstack issue #1711: XenServer 7 Support

2017-01-21 Thread ciroiriarte
Github user ciroiriarte commented on the issue:

https://github.com/apache/cloudstack/pull/1711
  
Hi! Is this already included in 4.9.2.0? I'm installing a new XS7 lab and 
would like to integrate it with CloudStack.




Re: Adding VirtIO SCSI to KVM hypervisors

2017-01-21 Thread Wido den Hollander

> On 21 January 2017 at 16:15, Syed Ahmed wrote:
> 
> 
> Wido,
> 
> Were you thinking of adding this as a global setting? I can see why it will
> be useful. I'm happy to review any ideas you might have around this.
> 

Well, not really. We don't have any structure in place right now to 
define what type of driver/disk we present to a guest.

See my answer below.

> Thanks,
> -Syed
> On Sat, Jan 21, 2017 at 04:46 Laszlo Hornyak 
> wrote:
> 
> > Hi Wido,
> >
> > If I understand correctly from the documentation and your examples, virtio
> > provides a virtio interface to the guest while virtio-scsi provides a SCSI
> > interface, so an IaaS service should not replace one with the other without
> > user request / approval. It would probably be better to let the user set
> > what kind of I/O interface the VM needs.
> >

You'd say so, but we already do this. Some Operating Systems get an IDE disk, 
others a SCSI disk, and when a Linux guest supports it according to our database 
we use VirtIO.

CloudStack has no way of specifying how to present a volume to a guest. I think 
it would be a bit too much to just make that configurable. That would mean extra 
database entries and API calls; a bit overkill, IMHO, in this case.

VirtIO SCSI has been supported by all Linux distributions for a very long time.
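The current selection logic is roughly "look up the guest OS, pick a bus". A hypothetical sketch in Python, for brevity (the real logic lives in CloudStack's KVM agent in Java; the capability sets and function names here are invented for illustration):

```python
# Hypothetical sketch of per-OS disk-bus selection; names and OS lists are
# invented for illustration, not CloudStack's actual classes or database.

VIRTIO_SCSI_CAPABLE = {"Ubuntu 16.04", "CentOS 7", "Debian 8"}   # illustrative
VIRTIO_CAPABLE = VIRTIO_SCSI_CAPABLE | {"CentOS 5"}              # illustrative

def pick_disk_bus(guest_os: str, prefer_virtio_scsi: bool = False) -> str:
    """Choose a disk bus for a guest, mirroring the 'OS database' approach."""
    if prefer_virtio_scsi and guest_os in VIRTIO_SCSI_CAPABLE:
        return "scsi"      # virtio-scsi: guest sees /dev/sdX, TRIM/DISCARD possible
    if guest_os in VIRTIO_CAPABLE:
        return "virtio"    # virtio-blk: guest sees /dev/vdX
    return "ide"           # fallback for OSes without virtio drivers

print(pick_disk_bus("CentOS 7", prefer_virtio_scsi=True))  # scsi
print(pick_disk_bus("Windows 2003"))                       # ide
```

The open question in this thread is where the prefer_virtio_scsi knob would live: per volume, per cluster, or globally.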

Wido



RE: Adding VirtIO SCSI to KVM hypervisors

2017-01-21 Thread Simon Weller
For the record, we've been looking into this as well.
Has anyone tried it with Windows VMs before? The standard virtio driver doesn't 
support spanned disks and that's something we'd really like to enable for our 
customers.



Simon Weller/615-312-6068

-Original Message-
From: Wido den Hollander [w...@widodh.nl]
Received: Saturday, 21 Jan 2017, 2:56PM
To: Syed Ahmed [sah...@cloudops.com]; dev@cloudstack.apache.org 
[dev@cloudstack.apache.org]
Subject: Re: Adding VirtIO SCSI to KVM hypervisors




Re: Adding VirtIO SCSI to KVM hypervisors

2017-01-21 Thread Syed Ahmed
Exposing this via an API would be tricky, but it can definitely be added as
a cluster-wide or a global setting in my opinion. By enabling that, all the
instances would use VirtIO SCSI. Is there a reason you'd want some
instances to use VirtIO and others to use VirtIO SCSI?



On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller  wrote:

> For the record, we've been looking into this as well.
> Has anyone tried it with Windows VMs before? The standard virtio driver
> doesn't support spanned disks and that's something we'd really like to
> enable for our customers.


RE: Adding VirtIO SCSI to KVM hypervisors

2017-01-21 Thread Simon Weller
Personally, I'd doubt it. virtio-scsi is the replacement for the virtio-blk driver.

Simon Weller/615-312-6068

-Original Message-
From: Syed Ahmed [sah...@cloudops.com]
Received: Saturday, 21 Jan 2017, 3:59PM
To: Simon Weller [swel...@ena.com]
CC: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: Re: Adding VirtIO SCSI to KVM hypervisors



Re: Adding VirtIO SCSI to KVM hypervisors

2017-01-21 Thread Wido den Hollander


> On 21 Jan 2017 at 22:59, Syed Ahmed wrote:
> 
> Exposing this via an API would be tricky but it can definitely be added as
> a cluster-wide or a global setting in my opinion. By enabling that, all the
> instances would be using VirtIO SCSI. Is there a reason you'd want some
> instances to use VirtIO and others to use VirtIO SCSI?
> 

Even a global setting would be a bit of work, and hacky as well.

I do not see any reason to keep VirtIO; it is just that devices will be named 
sdX instead of vdX in the guest.

That might break existing Instances that mount by device name instead of by 
label or UUID.
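To make the vdX -> sdX rename concrete: a static /etc/fstab entry like /dev/vdb1 stops resolving after the switch, while a UUID-based entry keeps working. A sketch of the rewrite an operator could do beforehand (Python for illustration; the device-to-UUID mapping is a stand-in for what blkid would report in the guest):

```python
# Sketch: rewrite device-name fstab entries to UUID= entries before switching
# an instance from virtio-blk (/dev/vdX) to virtio-scsi (/dev/sdX).
# The uuid_by_device mapping is hypothetical; in practice it comes from blkid.

def rewrite_fstab(fstab: str, uuid_by_device: dict[str, str]) -> str:
    out = []
    for line in fstab.splitlines():
        fields = line.split()
        # Only rewrite data lines whose first field is a known device path;
        # comments and unknown devices pass through untouched.
        if fields and fields[0] in uuid_by_device:
            fields[0] = f"UUID={uuid_by_device[fields[0]]}"
            out.append(" ".join(fields))
        else:
            out.append(line)
    return "\n".join(out)

if __name__ == "__main__":
    fstab = "/dev/vdb1 /data ext4 defaults 0 2"
    print(rewrite_fstab(fstab, {"/dev/vdb1": "3fa85f64-5717-4562-b3fc-2c963f66afa6"}))
```

Guests that already mount by UUID or LABEL (as modern distributions do by default) are unaffected by the bus change.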

Wido



[GitHub] cloudstack issue #1711: XenServer 7 Support

2017-01-21 Thread serg38
Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1711
  
Yes it is in 4.9.2.

