Friðvin,
Thanks for the suggestion. I’ll go with the schema update.
- Suresh
On 21/02/17, 7:02 PM, "Friðvin Logi Oddbjörnsson" wrote:
On 18 February 2017 at 20:51:42, Suresh Anaparti (suresh.anapa...@accelerite.com) wrote:
I checked the limits set for the VMware hypervisor and observed some
discrepancies. These can either be updated from the
updateHypervisorCapabilities API (max_data_volumes_limit,
max_hosts_per_cluster after impro[...]
Thanks for bringing this up. The max data volumes limit of a VM should be based
on the hypervisor capabilities instead of the hardcoded value. I created
PR #1953 (https://github.com/apache/cloudstack/pull/1953). Please check.
Even though the underlying hypervisor supports a higher limit, it is [...]
The hardcoded value of 15 needs to be fixed; it can be replaced with
getMaxDataVolumesSupported(), which is just above it. Please file a bug and, if
possible, raise a PR as well.
-Koushik
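
As a rough, self-contained sketch of the direction described in the two messages
above, the upper bound on device ids is derived from a hypervisor-capability
lookup instead of the literal 15. The class name, the capability map and the
fallback value of 6 are assumptions made for illustration; this is not the code
from PR #1953 or from VolumeApiServiceImpl.

import java.util.Map;

// Sketch only: the capability map below stands in for whatever the real
// getMaxDataVolumesSupported() consults (e.g. the hypervisor_capabilities
// table); it is not the actual CloudStack implementation.
public class DeviceIdLimitSketch {

    // Hypothetical capability source keyed by hypervisor type.
    private final Map<String, Integer> maxDataVolumesByHypervisor;
    private static final int ASSUMED_DEFAULT_MAX_DATA_VOLUMES = 6;

    public DeviceIdLimitSketch(Map<String, Integer> maxDataVolumesByHypervisor) {
        this.maxDataVolumesByHypervisor = maxDataVolumesByHypervisor;
    }

    int getMaxDataVolumesSupported(String hypervisorType) {
        return maxDataVolumesByHypervisor.getOrDefault(hypervisorType,
                ASSUMED_DEFAULT_MAX_DATA_VOLUMES);
    }

    // Device ids 0 (ROOT) and 3 (CD-ROM) stay reserved, so with N supported
    // data volumes the highest usable device id is N + 1 instead of a
    // hardcoded 15.
    void validateDeviceId(long deviceId, String hypervisorType) {
        int maxDeviceId = getMaxDataVolumesSupported(hypervisorType) + 1;
        if (deviceId < 1 || deviceId == 3 || deviceId > maxDeviceId) {
            throw new IllegalArgumentException("deviceId should be 1, 2 or 4-"
                    + maxDeviceId + " for " + hypervisorType);
        }
    }

    public static void main(String[] args) {
        DeviceIdLimitSketch sketch =
                new DeviceIdLimitSketch(Map.of("VMware", 14, "KVM", 14));
        sketch.validateDeviceId(15, "VMware");   // within the capability-derived bound
        try {
            sketch.validateDeviceId(15, "Xen");  // falls back to the assumed default of 6
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}

With 14 data volumes and ids 0 and 3 reserved, the highest usable device id
works out to 15, which matches the hardcoded value being replaced.
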
On 15/02/17, 11:21 PM, "Voloshanenko Igor" wrote:
On a VM we try to emulate real hardware )))
So any device honors its specification, in this case PCI :)
To be honest, we can increase the limits by adding multifunction devices or
migrating to virtio-iscsi-blk.
But as for me, 14 disks is more than enough for now.
About 3 for the CD-ROM: I will check. I think the CDROM emul[...]
I thought that on a VM we would not be bound by PCI limitations.
Interesting explanations, thanks.
Know that 0 is reserved for the ROOT disk and 3 is for the CD-ROM for
attaching ISOs.
On Wed, Feb 15, 2017 at 12:20, Voloshanenko Igor wrote:
I think the explanation is very easy.
PCI itself can handle up to 32 devices.
If you run lspci inside an empty (freshly created) VM, you will see that 8
slots are already occupied:
[root@test-virtio-blk ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Int[...]
I hate to say this, but probably no one knows why.
I looked at the history, and this method has always been like this.
The device ID 3 seems to be something reserved, probably for Xen tools (a big
guess here)?
Also, regarding the limit: I could speculate on two explanations for it. A
developer [...]
CloudStack currently limits the number of data volumes that can be attached
to an instance to 14.
More specifically, this limitation relates to the device ids that CloudStack
considers valid for data volumes.
In the method VolumeApiServiceImpl.getDeviceId(long, Long), only device ids 1,
2, and 4 through 15 are considered valid for data volumes.
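
As a minimal, self-contained sketch of that rule (an illustration of the
behaviour described above, not the actual body of getDeviceId()): with 0
reserved for the ROOT disk and 3 for the CD-ROM, the valid ids 1, 2 and 4
through 15 yield at most 14 data volumes.

// Illustrative sketch only, not the actual VolumeApiServiceImpl code.
public class DeviceIdRule {

    static final int ROOT_DEVICE_ID = 0;   // reserved for the ROOT disk
    static final int CDROM_DEVICE_ID = 3;  // reserved for the CD-ROM (ISO attach)
    static final int MAX_DEVICE_ID = 15;   // the hardcoded upper bound under discussion

    static boolean isValidDataVolumeDeviceId(long deviceId) {
        return deviceId >= 1
                && deviceId <= MAX_DEVICE_ID
                && deviceId != ROOT_DEVICE_ID
                && deviceId != CDROM_DEVICE_ID;
    }

    public static void main(String[] args) {
        long valid = 0;
        for (long id = 0; id <= MAX_DEVICE_ID; id++) {
            if (isValidDataVolumeDeviceId(id)) {
                valid++;
            }
        }
        // Prints 14: ids 1, 2 and 4-15.
        System.out.println("Valid data volume device ids: " + valid);
    }
}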