Thanks for bringing this up. The maximum number of data volumes per VM should
be based on the hypervisor's capabilities instead of a hardcoded value. I have
created PR #1953 (https://github.com/apache/cloudstack/pull/1953). Please check.
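
For reference, the value that should drive this check is already stored per
hypervisor type and version in the capabilities table; something along these
lines (the 'KVM'/'default' keys below are only an illustration):

SELECT max_data_volumes_limit
FROM cloud.hypervisor_capabilities
WHERE hypervisor_type = 'KVM' AND hypervisor_version = 'default';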

Even though the underlying hypervisor may support higher limits, CloudStack
restricts them using the values set in the cloud.hypervisor_capabilities table.
Only the maximum number of guest VMs per host (max_guests_limit, which was
introduced initially) can be updated through the updateHypervisorCapabilities
API. The other limits (max_data_volumes_limit, max_hosts_per_cluster), which
were introduced later, are not part of the updateHypervisorCapabilities API.
That API needs to be extended to cover these limits as well. I am working on
this and will raise a PR.
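
Until that API improvement is in, as far as I can see these two columns can
only be changed directly in the database, roughly along these lines (the row
and values below are purely illustrative, not a recommendation):

UPDATE cloud.hypervisor_capabilities
SET max_data_volumes_limit = 32,  -- example value only
    max_hosts_per_cluster  = 64   -- example value only
WHERE hypervisor_type = 'VMware' AND hypervisor_version = 'default';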

I checked the limits set for the VMware hypervisor and observed some
discrepancies. These can be corrected either through the
updateHypervisorCapabilities API (for max_data_volumes_limit and
max_hosts_per_cluster, once the improvements above are in) or through a schema
update during upgrade. Which one would be better? For a schema update, I would
have to raise a PR.

mysql> SELECT hypervisor_version, max_guests_limit, max_data_volumes_limit, max_hosts_per_cluster
    -> FROM cloud.hypervisor_capabilities WHERE hypervisor_type = 'VMware';
+--------------------+------------------+------------------------+-----------------------+
| hypervisor_version | max_guests_limit | max_data_volumes_limit | max_hosts_per_cluster |
+--------------------+------------------+------------------------+-----------------------+
| 4.0                |              128 |                     13 |                    32 |
| 4.1                |              128 |                     13 |                    32 |
| 5.0                |              128 |                     13 |                    32 |
| 5.1                |              128 |                     13 |                    32 |
| 5.5                |              128 |                     13 |                    32 |
| 6.0                |              128 |                     13 |                    32 |
| default            |              128 |                     13 |                    32 |
+--------------------+------------------+------------------------+-----------------------+
7 rows in set (0.00 sec)

Actual VMware maximum limits:
max_guests_limit:        128 (3.5), 320 (4.0, 4.1), 512 (5.0, 5.1, 5.5), 1024 (6.0)
max_data_volumes_limit:  60 SCSI + 4 IDE (3.5 through 6.0)
max_hosts_per_cluster:   32 (3.5 through 5.5), 64 (6.0)
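
If the schema-update route is preferred, the upgrade script would contain
statements roughly like the sketch below. I have left out
max_data_volumes_limit here because we would first need to decide whether to
store the SCSI-only value (60) or SCSI + IDE (64).

-- Sketch only: align the VMware rows with the documented maximums above.
UPDATE cloud.hypervisor_capabilities
SET max_guests_limit = 320
WHERE hypervisor_type = 'VMware' AND hypervisor_version IN ('4.0', '4.1');

UPDATE cloud.hypervisor_capabilities
SET max_guests_limit = 512
WHERE hypervisor_type = 'VMware' AND hypervisor_version IN ('5.0', '5.1', '5.5');

UPDATE cloud.hypervisor_capabilities
SET max_guests_limit = 1024, max_hosts_per_cluster = 64
WHERE hypervisor_type = 'VMware' AND hypervisor_version = '6.0';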

-Suresh

On 16/02/17, 11:06 AM, "Koushik Das" <koushik....@accelerite.com> wrote:

    The hardcoded value of 15 needs to be fixed; it can be replaced with
    getMaxDataVolumesSupported(), which is called just above it. Please file a bug
    and, if possible, raise a PR as well.
    
    -Koushik
    
    On 15/02/17, 11:21 PM, "Voloshanenko Igor" <igor.voloshane...@gmail.com> wrote:
    
        On a VM we try to emulate real hardware )))
        So every device honors its specification.
        In this case, PCI :)
        
        To be honest, we can increase the limits by adding multifunction devices or
        migrating to virtio-iscsi-blk.
        
        But as for me, 14 disks is more than enough for now.
        
        About 3 for the CD-ROM: I will check. I think the CD-ROM is emulated as an
        IDE device, not via virtio-blk.
        
        For 0, the root volume: interesting. In that case we can easily add 1 more
        DATA disk :)
        
        Wed, 15 Feb 2017 at 19:24, Rafael Weingärtner <rafaelweingart...@gmail.com>:
        
        > I thought that on a VM we would not be bound by PCI limitations.
        > Interesting explanations, thanks.
        >
        >
        > On Wed, Feb 15, 2017 at 12:19 PM, Voloshanenko Igor <
        > igor.voloshane...@gmail.com> wrote:
        >
        > > I think the explanation is very easy.
        > > PCI itself can handle up to 32 devices.
        > >
        > > If you run lspci inside an empty (freshly created) VM, you will see that 8
        > > slots are already occupied:
        > > [root@test-virtio-blk ~]# lspci
        > > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
        > > 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
        > > 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
        > > 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
        > > 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
        > > 00:02.0 VGA compatible controller: Cirrus Logic GD 5446
        > > 00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
        > > 00:04.0 SCSI storage controller: Red Hat, Inc Virtio block device
        > >
        > > [root@test-virtio-blk ~]# lspci | wc -l
        > > 8
        > >
        > > So, 7 system devices + 1 ROOT disk.
        > >
        > > In the current implementation we use virtio-blk, which can handle only 1
        > > device per instance.
        > >
        > > So we have 32 - 8 == 24 free slots...
        > >
        > > As CloudStack supports more than 1 Ethernet card, 8 of the slots are
        > > reserved for network cards and 16 are available for virtio-blk.
        > >
        > > So the practical limit equals 16 devices (for DATA disks).
        > >
        > > Why 2 of those devices (0 + 3) are excluded is an interesting question...
        > > I will try to research and post an explanation.
        > >
        > > 2017-02-15 18:27 GMT+02:00 Rafael Weingärtner <
        > rafaelweingart...@gmail.com
        > > >:
        > >
        > > > I hate to say this, but probably no one knows why.
        > > > I looked at the history and this method has always been like this.
        > > >
        > > > The device ID 3 seems to be something reserved, probably for Xen tools
        > > > (big guess here)?
        > > >
        > > > Also, regarding the limit, I could speculate two explanations. A developer
        > > > did not get the full specs and decided to do whatever he/she wanted. Or,
        > > > maybe, at the time of coding (long, long ago) there was a hypervisor that
        > > > limited (maybe still limits) the number of devices that could be plugged
        > > > into a VM, and the first developers decided to level everything by that
        > > > spec.
        > > >
        > > > It may be worth checking with KVM, XenServer, Hyper-V, and VMware if they
        > > > have such a limitation on the disks that can be attached to a VM. If they
        > > > do not, we could remove the limit, or at least externalize it in a
        > > > parameter.
        > > >
        > > > On Wed, Feb 15, 2017 at 5:54 AM, Friðvin Logi Oddbjörnsson <
        > > > frid...@greenqloud.com> wrote:
        > > >
        > > > > CloudStack is currently limiting the number of data volumes that can be
        > > > > attached to an instance to 14.
        > > > > More specifically, this limitation relates to the device ids that
        > > > > CloudStack considers valid for data volumes.
        > > > > In the method VolumeApiServiceImpl.getDeviceId(long, Long), only device
        > > > > ids 1, 2, and 4-15 are considered valid.
        > > > > What I would like to know is: is there a reason for this limitation
        > > > > (of not going higher than device id 15)?
        > > > >
        > > > > Note that the current number of attached data volumes is already being
        > > > > checked against the maximum number of data volumes per instance, as
        > > > > specified by the relevant hypervisor’s capabilities.
        > > > > E.g. if the relevant hypervisor’s capabilities specify that it only
        > > > > supports 6 data volumes per instance, CloudStack rejects attaching a
        > > > > seventh data volume.
        > > > >
        > > > >
        > > > > Friðvin Logi Oddbjörnsson
        > > > >
        > > > > Senior Developer
        > > > >
        > > > > Tel: (+354) 415 0200 | frid...@greenqloud.com <
        > jaros...@greenqloud.com
        > > >
        > > > >
        > > > > Mobile: (+354) 696 6528 | PGP Key: 57CA1B00
        > > > > <https://sks-keyservers.net/pks/lookup?op=vindex&search=
        > > > > frid...@greenqloud.com>
        > > > >
        > > > > Twitter: @greenqloud <https://twitter.com/greenqloud> | 
@qstackcloud
        > > > > <https://twitter.com/qstackcloud>
        > > > >
        > > > > www.greenqloud.com | www.qstack.com
        > > > >
        > > > >
        > > >
        > > >
        > > >
        > > > --
        > > > Rafael Weingärtner
        > > >
        > >
        >
        >
        >
        > --
        > Rafael Weingärtner
        >
        
    
    
    
    




