[GitHub] cloudstack issue #1854: 4.9 multiplex testing

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1854
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1854: 4.9 multiplex testing

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1854
  
@blueorangutan package




[GitHub] cloudstack pull request #1854: 4.9 multiplex testing

2016-12-21 Thread rhtyd
GitHub user rhtyd opened a pull request:

https://github.com/apache/cloudstack/pull/1854

4.9 multiplex testing

This merges a group of PRs into a single PR to speed up testing.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shapeblue/cloudstack 4.9-multiplex-testing

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1854.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1854


commit fed58eebdd04cf656ada7037d5c43214c1f23b61
Author: Jayapalu 
Date:   2016-12-08T10:57:16Z

CLOUDSTACK-9615: Fixed applying ingress rules without ports

commit d2ca30a1330bdd5931b8a059a8db7a4cf1327d80
Author: Jayapalu 
Date:   2016-12-12T06:27:12Z

CLOUDSTACK-9617: Fixed enabling remote access after PF or LB  configured on 
vpn tcp ports

commit 2088f0ad73ee30919d7b2153e74a295ff0a9911a
Author: Rohit Yadav 
Date:   2016-12-22T07:07:13Z

Merge pull request #1783 from jayapalu/CLOUDSTACK-9615

CLOUDSTACK-9615: Fixed applying ingress rules without ports. When an ingress rule 
is applied without ports (the port start and port end params are not passed), the 
API/UI shows the rule as applied, but in the VR the iptables rule is not actually applied.

Fixed this issue in the VR script.

* pr/1783:
  CLOUDSTACK-9615: Fixed applying ingress rules without ports

Signed-off-by: Rohit Yadav 

commit 60c4fce69092f5afca9e88089fca80a694282d26
Author: Rohit Yadav 
Date:   2016-12-22T07:39:19Z

Merge pull request #1782 from jayapalu/CLOUDSTACK-9617

CLOUDSTACK-9617: Fixed enabling remote access after PF configured on VPN TCP ports. 
Enabling remote access VPN fails when there is a port forwarding rule on one of the 
reserved ports (1701, 500, 4500) under the TCP protocol on the source NAT IP.

* pr/1782:
  CLOUDSTACK-9617: Fixed enabling remote access after PF or LB  configured 
on vpn tcp ports

Signed-off-by: Rohit Yadav 

commit 7d678dfcaeaae7ea813132fc9a85572321032c3d
Author: Jayapalu 
Date:   2016-11-24T10:47:09Z

CLOUDSTACK-9612: Fixed issue in restarting redundant network with cleanup
Restarting an RVR network with cleanup, after it was upgraded from an isolated 
network, failed.
Corrected the column name string issue.

This closes #1781

(cherry picked from commit 0f742e17237fc84d5e86dae9a67c7ef6a0b6c80c)
Signed-off-by: Rohit Yadav 






[GitHub] cloudstack issue #1638: CLOUDSTACK-9456: Migrate master to Spring 4.x

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1638
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-411




Re: Error on Master

2016-12-21 Thread Rohit Yadav
Will,


I tried to run some upgrade tests and could not reproduce your issue.

If this happens again, let me know.

Regards.


From: Rohit Yadav 
Sent: 22 December 2016 12:03:41
To: Will Stevens
Cc: dev@cloudstack.apache.org
Subject: Re: Error on Master

Hi Will,

I tried a fresh installation of latest master and I was able to deploy
database. I'll run upgrade tests soon to verify if there are any issues.

Regards.

On Thu, Dec 22, 2016 at 12:41 AM, Will Stevens 
wrote:

> Has anyone seen this?
>
> I upgraded a system which was running master to a later version of master
> and I get the following when I start the service after upgrading.
>
> 2016-12-21 18:55:13,388 WARN  [o.a.c.s.m.c.ResourceApplicationContext]
> (main:null) (logid:) Exception encountered during context initialization -
> cancelling refresh attempt: 
> org.springframework.context.ApplicationContextException:
> Failed to start bean 'cloudStackLifeCycle'; nested exception is
> com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
> com.mysql.jdbc.JDBC4PreparedStatement@69cd2214: SELECT
> network_offerings.id, network_offerings.name,
> network_offerings.unique_name, network_offerings.display_text,
> network_offerings.nw_rate, network_offerings.mc_rate,
> network_offerings.traffic_type, network_offerings.specify_vlan,
> network_offerings.system_only, network_offerings.service_offering_id,
> network_offerings.tags, network_offerings.default, 
> network_offerings.availability,
> network_offerings.state, network_offerings.removed,
> network_offerings.created, network_offerings.guest_type,
> network_offerings.dedicated_lb_service, 
> network_offerings.shared_source_nat_service,
> network_offerings.specify_ip_ranges, network_offerings.sort_key,
> network_offerings.uuid, network_offerings.redundant_router_service,
> network_offerings.conserve_mode, network_offerings.elastic_ip_service,
> network_offerings.eip_associate_public_ip, 
> network_offerings.elastic_lb_service,
> network_offerings.inline, network_offerings.is_persistent,
> network_offerings.egress_default_policy, 
> network_offerings.concurrent_connections,
> network_offerings.keep_alive_enabled, network_offerings.supports_streched_l2,
> network_offerings.supports_public_access, network_offerings.internal_lb,
> network_offerings.public_lb FROM network_offerings WHERE
> network_offerings.unique_name = _binary'System-Public-Network'  AND
> network_offerings.removed IS NULL  ORDER BY RAND() LIMIT 1
>
>
> I tried running the SQL directly to see what I get and I get the following:
>
> mysql> SELECT network_offerings.id, network_offerings.name,
> network_offerings.unique_name, network_offerings.display_text,
> network_offerings.nw_rate, network_offerings.mc_rate,
> network_offerings.traffic_type, network_offerings.specify_vlan,
> network_offerings.system_only, network_offerings.service_offering_id,
> network_offerings.tags, network_offerings.default, 
> network_offerings.availability,
> network_offerings.state, network_offerings.removed,
> network_offerings.created, network_offerings.guest_type,
> network_offerings.dedicated_lb_service, 
> network_offerings.shared_source_nat_service,
> network_offerings.specify_ip_ranges, network_offerings.sort_key,
> network_offerings.uuid, network_offerings.redundant_router_service,
> network_offerings.conserve_mode, network_offerings.elastic_ip_service,
> network_offerings.eip_associate_public_ip, 
> network_offerings.elastic_lb_service,
> network_offerings.inline, network_offerings.is_persistent,
> network_offerings.egress_default_policy, 
> network_offerings.concurrent_connections,
> network_offerings.keep_alive_enabled, network_offerings.supports_streched_l2,
> network_offerings.supports_public_access, network_offerings.internal_lb,
> network_offerings.public_lb FROM network_offerings WHERE
> network_offerings.unique_name = _binary'System-Public-Network'  AND
> network_offerings.removed IS NULL  ORDER BY RAND() LIMIT 1;
>
> ERROR 1054 (42S22): Unknown column 'network_offerings.supports_public_access'
> in 'field list'
>
> Did I miss a step or something in my upgrade from master to the latest
> master?  I have done this upgrade a few times in the past without issues,
> but maybe something has changed?
>
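The "Unknown column" error above typically means the Java entity expects a column that the database schema upgrade never added. A minimal sketch of that mismatch (the `supports_public_access` column name is taken from the error above; the surrounding column sets are hypothetical illustrations, not the real schema):

```python
# Columns the DAO's generated SELECT expects (illustrative subset only)
expected_columns = {"id", "name", "unique_name", "supports_public_access"}

# Columns actually present in a database whose upgrade path was not run
actual_columns = {"id", "name", "unique_name"}

# Any column in the entity but not in the schema makes MySQL raise
# ERROR 1054 (42S22): Unknown column ... in 'field list'
missing = expected_columns - actual_columns
print(sorted(missing))  # ['supports_public_access']
```

This is why "upgrading master to a later master" can fail: a new column added on master only takes effect if a matching schema upgrade path runs against the existing database.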

rohit.ya...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue



[GitHub] cloudstack issue #1638: CLOUDSTACK-9456: Migrate master to Spring 4.x

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1638
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.




[GitHub] cloudstack issue #1638: CLOUDSTACK-9456: Migrate master to Spring 4.x

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1638
  
@blueorangutan package




Re: [DISCUSS][KVM][BUG] Detaching of volume fails on KVM

2016-12-21 Thread Rohit Yadav
Hi Wei,


Thanks for sharing.


I discovered the issue due to failing test_volumes tests, which were previously 
passing on Trillian's CentOS7-based KVM environment:

https://github.com/apache/cloudstack/pull/1837


Trillian-deployed environments are fully automated, and unless libvirt/qemu packages 
were changed/updated from upstream repos, no change was introduced on the 
Trillian/CentOS7 side. I'll have a look at the actual test as well.


One thing we could do: after detaching the disk, CloudStack could check the 
domain's XML to see whether the disk was actually detached, and notify the 
admin/user whether the operation really succeeded.
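One way to sketch that post-detach check (a hypothetical illustration only; the XML and function names are invented, not CloudStack's actual agent code): parse the domain XML that `virsh dumpxml` returns and verify the target device is gone.

```python
import xml.etree.ElementTree as ET

def disk_targets(domain_xml: str) -> set:
    """Collect disk target device names (vda, vdb, ...) from libvirt domain XML."""
    root = ET.fromstring(domain_xml)
    return {t.get("dev") for t in root.findall("./devices/disk/target")}

# Hypothetical XML as returned by `virsh dumpxml` after a detach request;
# in the buggy case described here, 'vdb' would still be present.
xml_after_detach = """
<domain type='kvm'>
  <devices>
    <disk device='disk' type='file'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

# The detach really took effect only if the target no longer appears.
detached_ok = "vdb" not in disk_targets(xml_after_detach)
print(detached_ok)  # True
```

If the target is still present after a successful-looking `detach-disk`, the agent could report the operation as failed instead of silently succeeding.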


Thanks Wido, I think it could be related to libvirt/OS as well.


Regards.


From: Wei ZHOU 
Sent: 21 December 2016 17:26:19
To: dev@cloudstack.apache.org
Subject: Re: [DISCUSS][KVM][BUG] Detaching of volume fails on KVM

Hi Rohit,

I do not think it is an issue in CloudStack.

We have had this issue for a long time, and it still exists now.

I just ran a test using virsh commands directly, not CloudStack.

=== this is the working one ===

root@KVM015:~# virsh domblklist 39
Target     Source
------------------------------------------------
vda        /mnt/1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e/75d35578-ed6d-4019-8239-c2d3ff87af25
hdc        -

root@KVM015:~# virsh attach-disk 39 /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841 vdb
Disk attached successfully

root@KVM015:~# virsh domblklist 39
Target     Source
------------------------------------------------
vda        /mnt/1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e/75d35578-ed6d-4019-8239-c2d3ff87af25
vdb        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841
hdc        -

root@KVM015:~# virsh detach-disk 39 vdb
Disk detached successfully

root@KVM015:~# virsh domblklist 39
Target     Source
------------------------------------------------
vda        /mnt/1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e/75d35578-ed6d-4019-8239-c2d3ff87af25
hdc        -

=== this is not working ===

root@KVM015:~# virsh domblklist 26
Target     Source
------------------------------------------------
vda        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/2311416f-b778-4490-8365-cfbad2214842
vdb        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841
hdc        -

root@KVM015:~# virsh detach-disk i-2-7585-VM vdb
Disk detached successfully

root@KVM015:~# virsh domblklist 26
Target     Source
------------------------------------------------
vda        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/2311416f-b778-4490-8365-cfbad2214842
vdb        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841
hdc        -

root@KVM015:~# virsh detach-disk i-2-7585-VM vdb
Disk detached successfully

root@KVM015:~# virsh domblklist 26
Target     Source
------------------------------------------------
vda        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/2311416f-b778-4490-8365-cfbad2214842
vdb        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841
hdc        -

root@KVM015:~# virsh attach-disk 26 /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841 vdb
error: Failed to attach disk
error: operation failed: target vdb already exists

=== end ===

I believe this is highly related to the OS and configuration inside the VM, not
to the hypervisor or CloudStack.
In my testing I used Ubuntu 12.04 as the hypervisor; detach works if the VM OS is
CentOS7/CentOS6/Ubuntu 16.04, but not if it is Ubuntu 12.04.

-Wei


2016-12-21 12:00 GMT+01:00 Rohit Yadav :

> All,
>
>
> Based on results from recent Trillian test runs [1], I've discovered that
> on KVM (CentOS7) based detaching a volume fails to update the virt/domain
> xml and fails to remove the xml. So, while the agent and cloudstack-mgmt
> server succeeds, the entry in the xml is not removed. When the volume is
> attached again, we can an error like:
>
>
> Failed to attach volume xxx to VM VM-; org.libvirt.LibvirtException:
> XML error: target 'vdb' duplicated for disk sources
> '/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'
> and 
> '/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'This
> is seen in agent logs:
>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: DEBUG [kvm.storage.KVMStorageProcessor]
> (agentRequest-Handler-2:) (logid:0648ae70) Detaching device: <disk device='disk' type='file'>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: <driver name='qemu' type='qcow2' cache='none'/>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: <source file='/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'/>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: <target bus='virtio'/>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2

Re: [DISCUSS] Optional new SystemVM Template upgrade for 4.9 LTS release

2016-12-21 Thread Rohit Yadav
Thanks Bobby for sharing the test results.


I've uploaded the new systemvmtemplates, which can optionally be consumed from:

http://packages.shapeblue.com/systemvmtemplate/4.6/new/


In the upcoming 4.9 release notes, we'll include information on how to upgrade, etc.


Regards.


From: Boris Stoyanov 
Sent: 21 December 2016 16:11:47
To: us...@cloudstack.apache.org
Subject: Re: [DISCUSS] Optional new SystemVM Template upgrade for 4.9 LTS 
release

Hi Franz,

Thanks for bringing this up,

I just had a talk with one of our engineers and he doesn't think that is being 
fixed. As I mentioned, we're planning to execute more extensive testing of the 
VR and networking by the end of the week, so we'll probably confirm that 
issue then.

In the meantime, could you please raise defects about those issues in the 
community JIRA, so that there's visibility and people can try to reproduce them 
as well.

Thanks,
Boris Stoyanov


boris.stoya...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue





On Dec 21, 2016, at 11:56 AM, Skale Franz wrote:

Hi,
that's very interesting.
Perhaps you didn't read my post from Monday regarding the MTU problem on the SSVM and 
wrong routing on both the SSVM and Console Proxy.
Link:
http://mail-archives.apache.org/mod_mbox/cloudstack-users/201612.mbox/ajax/%3C1482145223160.30359%40citycom-austria.com%3E

Have these bugs been resolved?

Looking forward to hearing from you.

Kind regards

Franz


From: Boris Stoyanov
Sent: Wednesday, 21 December 2016 10:51
To: us...@cloudstack.apache.org
Cc: Rohit Yadav; dev@cloudstack.apache.org
Subject: Re: [DISCUSS] Optional new SystemVM Template upgrade for 4.9 LTS 
release

Hi all,

I’ve just completed upgrade tests to 4.9.1 including the new system-vm 
template. I’ve covered the following paths:

4.3 - 4.9.1
-XenServer 6.2 Advanced and Basic zone setup

4.5.2.2 - 4.9.1
-KVM CentOS 6.8 Advanced and Basic zone setup
-KVM CentOS 7.2 Advanced and Basic zone setup
-XenServer 6.5SP1 Advanced and Basic zone setup
-VMWare 5.5u3 Advanced and Basic zone setup

4.6.2 - 4.9.1
-VMWare 5.5u3 Advanced and Basic zone setup

Using the new system vm template I was able to do basic VR lifecycle operations 
like create/destroy network and vm.

We’re planning to do an additional round of tests on the new VM, with deeper 
networking-related tests, by the end of the week, but for now it looks good on 
all the hypervisors.


Thanks,
Boris Stoyanov





On Dec 9, 2016, at 11:02 AM, Wido den Hollander wrote:


On 9 December 2016 at 9:31, Rohit Yadav wrote:


All,


We've been using the same systemvm template since the 4.6.x releases, so it is 
more than a year old now. Over the last year, there have been several package 
updates, especially security updates, published for Debian Wheezy 7 (which is 
the base for our systemvmtemplate).


In our efforts to release a high quality LTS (4.9.1.0) release, we've had 
discussions on security@ about developing and publishing a new systemvm 
template which should be compatible with ACS 4.6+ releases, while being optional 
for users. In our release notes, we will mention this, document the 
installation/upgrade steps, and note that the new template is recommended for 
users but not mandatory.


Thoughts, questions?


Seems like a good thing to do. An updated SSVM template is really needed due to 
the various updates.


For the template to work on master (4.10+), the systemvmtemplate has one 
additional package (compared to the 4.6-based systemvmtemplate), `qemu-guest-agent` 
from CLOUDSTACK-8715, and my local tests show that this does not break anything.


Great! Qemu Guest Agent is nice :)


I've built and published the template at the following location, which should be 
available till the end of the year. Please help test these updated templates:


http://188.166.197.146/49lts


Regards.




[GitHub] cloudstack issue #1848: CLOUDSTACK-9693 Cluster View - Status symbol does no...

2016-12-21 Thread rashmidixit
Github user rashmidixit commented on the issue:

https://github.com/apache/cloudstack/pull/1848
  
This is done. Thanks @rhtyd.





[GitHub] cloudstack pull request #1711: XenServer 7 Support

2016-12-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1711




[GitHub] cloudstack pull request #1851: schema: Upgrade path from 4.9.1.0 to 4.9.2.0

2016-12-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1851




[GitHub] cloudstack pull request #1849: CLOUDSTACK-9690: Scale CentOS7 VM fails with ...

2016-12-21 Thread rhtyd
Github user rhtyd commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1849#discussion_r93571470
  
--- Diff: 
api/src/org/apache/cloudstack/api/command/admin/guest/UpdateGuestOsCmd.java ---
@@ -61,6 +70,22 @@ public String getOsDisplayName() {
 return osDisplayName;
 }
 
+public Map getDetails() {
+Map detailsMap = null;
--- End diff --

don't make getDetails return null, instead return an empty hashmap.
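The point of the review comment, sketched in Python for brevity (hypothetical names; the actual code under review is Java): returning an empty mapping instead of null/None lets callers iterate the details without a null check.

```python
def get_details(params=None):
    """Flatten API 'details' map-parameters; never returns None."""
    details = {}
    # Each map-parameter entry is itself a key/value mapping to merge in.
    for entry in (params or {}).values():
        details.update(entry)
    return details

# Callers can always iterate safely, even when no details were passed:
assert get_details() == {}
assert get_details({"details[0]": {"rootDiskController": "scsi"}}) == {
    "rootDiskController": "scsi"
}
```

The same contract in Java means returning an empty `HashMap` on the no-parameters path rather than leaving the local map as `null`.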





[GitHub] cloudstack pull request #1849: CLOUDSTACK-9690: Scale CentOS7 VM fails with ...

2016-12-21 Thread rhtyd
Github user rhtyd commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1849#discussion_r93571450
  
--- Diff: 
api/src/org/apache/cloudstack/api/command/admin/guest/AddGuestOsCmd.java ---
@@ -70,6 +78,23 @@ public String getOsName() {
 return osName;
 }
 
+public Map getDetails() {
+Map detailsMap = null;
--- End diff --

don't make getDetails return null, instead return an empty hashmap.




[GitHub] cloudstack pull request #1849: CLOUDSTACK-9690: Scale CentOS7 VM fails with ...

2016-12-21 Thread rhtyd
Github user rhtyd commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1849#discussion_r93571410
  
--- Diff: api/src/com/cloud/agent/api/to/VirtualMachineTO.java ---
@@ -69,6 +69,7 @@
 String configDriveIsoRootFolder = null;
 String configDriveIsoFile = null;
 
+Map guestOsDetails = null;
--- End diff --

To avoid potential NPEs, construct a map with no items, such as
`Map guestOsDetails = new HashMap<>();`




[GitHub] cloudstack issue #1852: VM snapshot is disabled if the VM Instance is off

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1852
  
@rashmidixit change the PR's base branch to 4.9 please.




[GitHub] cloudstack issue #1851: schema: Upgrade path from 4.9.1.0 to 4.9.2.0

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1851
  
Tested this manually. Merging this at my discretion.




[GitHub] cloudstack pull request #1853: CLOUDSTACK-9696: Fixed Virtual Router deploym...

2016-12-21 Thread anshul1886
GitHub user anshul1886 opened a pull request:

https://github.com/apache/cloudstack/pull/1853

CLOUDSTACK-9696: Fixed Virtual Router deployment failing if there are…

… no shared resources available

and it has access to resources through the parent domain hierarchy

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/anshul1886/cloudstack-1 CLOUDSTACK-9696

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1853.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1853


commit b866776708ac664a8b71380b0f62009d0a88a3b3
Author: Anshul Gangwar 
Date:   2016-12-22T06:17:03Z

CLOUDSTACK-9696: Fixed Virtual Router deployment failing if there are no 
shared resources available
and it has access to resources through the parent domain hierarchy






[GitHub] cloudstack issue #1852: VM snapshot is disabled if the VM Instance is off

2016-12-21 Thread rashmidixit
Github user rashmidixit commented on the issue:

https://github.com/apache/cloudstack/pull/1852
  
As part of this fix, I added a new property called "isDisabled" for a field 
in a dialog. If this function returns true, the field will be disabled.





[GitHub] cloudstack pull request #1852: VM snapshot is disabled if the VM Instance is...

2016-12-21 Thread rashmidixit
GitHub user rashmidixit opened a pull request:

https://github.com/apache/cloudstack/pull/1852

VM snapshot is disabled if the VM Instance is off

Refer to 
[CLOUDSTACK-9695](https://issues.apache.org/jira/browse/CLOUDSTACK-9695) for 
more details.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Accelerite/cloudstack CLOUDSTACK-9695

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1852.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1852


commit c5bd65589f3be915d9fe42c71447a7a9bdf1c2d9
Author: Sanket Thite 
Date:   2016-07-12T09:50:47Z

VM snapshot is disabled if the VM Instance is off






[GitHub] cloudstack issue #1711: XenServer 7 Support

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1711
  
Checked the tests; the failure was due to a missing license header in the SQL 
file. I've fixed that, plus a conflict, as the upgrade paths were missing; 
simply adding a SQL file does not add it to the upgrade path. I'll proceed with 
merging this now, based on manual tests and test results from previous runs.




Re: patchviasocket seems to be broken with qemu 2.3(+?)

2016-12-21 Thread Syahrul Sazli Shaharir

On 2016-12-21 23:26, Linas Žilinskas wrote:

At this point I'm not sure what the issue for you could be. Did you
try recreating the failing vrouter?


Yes, multiple times by destroying it and/or restarting the network - 
failed every time.



Also, just in case, check if there's free disk space on it. We had
some vrouters stuck due to this, and I saw another thread here
discussing it.


Plenty of space in the stuck VM:-

root@r-691-VM:~# df -h
Filesystem                                              Size  Used Avail Use% Mounted on
rootfs                                                  461M  157M  281M  36% /
udev                                                     10M     0   10M   0% /dev
tmpfs                                                    50M  236K   50M   1% /run
/dev/disk/by-uuid/6a0427bc-6052-48de-a4b8-c82d8217ed1d  461M  157M  281M  36% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   207M     0  207M   0% /run/shm
/dev/vda1                                                73M   23M   47M  33% /boot
/dev/vda6                                                92M  5.6M   81M   7% /home
/dev/vda8                                               184M  6.2M  169M   4% /opt
/dev/vda11                                               92M  5.6M   81M   7% /tmp
/dev/vda7                                               751M  493M  219M  70% /usr
/dev/vda9                                               563M  157M  377M  30% /var
/dev/vda10                                              184M  7.2M  168M   5% /var/log


Thanks.



Basically the /var/log/ partition fills up, since it's relatively
small. And if you had issues for a period of time with that specific
router and restarted it multiple times, the log partition might be
full.
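The free-space check described above can be scripted, e.g. with Python's shutil; a minimal sketch (the path and the 90% threshold are illustrative, not CloudStack defaults):

```python
import shutil

def partition_usage_percent(path):
    """Used-space percentage for the filesystem containing `path`."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

# Check the root filesystem; on a vrouter you would point this at
# /var/log instead. The 90% threshold is only an example value.
pct = partition_usage_percent("/")
print(f"/ is {pct:.0f}% full")
if pct > 90:
    print("warning: partition nearly full")
```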

On 21/12/16 06:35, Syahrul Sazli Shaharir wrote:


On 2016-12-20 17:53, Wei ZHOU wrote:


Hi Syahrul,

Could you upload the /var/log/cloud.log ?


Sure:-

Working router VM: http://pastebin.com/hwwk86ve

Non-working router VM: http://pastebin.com/G4nv09ab

Thanks.

-Wei

2016-12-20 3:18 GMT+01:00 Syahrul Sazli Shaharir :


On 2016-12-19 18:10, Syahrul Sazli Shaharir wrote:

On 2016-12-19 17:03, Linas Žilinskas wrote:

From the logs it doesn't seem that the script times out. "Execution is
successful", so it manages to pass the data over the socket.

I guess the systemvm just doesn't configure itself for some reason.

You are right, I was able to enter the router VM console at some point
during the timeout loops, and was able to capture syslog output during the
loop:-

http://pastebin.com/n37aHeSa


I restarted another network, and that network's router VM was able to be
recreated, even on the same host as the failed network (both networks
have exactly the same configuration; only VLAN & subnet are different).
Comparing the two syslog outputs during boot shows that the problematic
network's router VM self-configuration got stuck in vm_dhcp_entry.json.


1. Working network router VM : http://pastebin.com/Y6zpDa6M
2. Non-working network router VM : http://pastebin.com/jzfGMGQB

Thanks.


Also, in my personal tests, I noticed some different behaviour with
different kernels. Don't remember the specifics right now, but on some
combinations (qemu / kernel) the socket acted differently. For example
the data was sent over the socket, but wasn't visible inside the VM.
Other times the socket would be stuck from the host side.

So I would suggest testing different kernels (3.x, 4.4.x, 4.8.x) or
trying to log in to the system vm and see what's happening from inside.


Will do this next and feedback the results here.

Thanks for your help! :)

On 12/16/16 03:46, Syahrul Sazli Shaharir wrote:

On 2016-12-16 11:27, Syahrul Sazli Shaharir wrote:
On Wed, 26 Oct 2016, Linas Žilinskas wrote:

So after some investigation I've found out that qemu 2.3.0 is indeed
broken, at least the way CS uses the qemu chardev/socket.

Not sure in which specific version it happened, but it was fixed in
2.4.0-rc3, specifically noting that CloudStack 4.2 was not working.

qemu git commit: 4bf1cb03fbc43b0055af60d4ff093d6894aa4338

Also attaching the patch from that commit.

For our own purposes i've included the patch to the qemu-kvm-ev
package (2.3.0) and all is well.

Hi,

I am facing the exact same issue on latest Cloudstack 4.9.0.1, on
latest CentOS 7.3.1611, with latest qemu-kvm-ev-2.6.0-27.1.el7
package.

The issue initially surfaced following a heartbeat-induced reset of
all hosts, when it was on CS 4.8 @ CentOS 7.0 and stock
qemu-kvm-1.5.3. Since then, the patchviasocket.pl/py timeouts
persisted for 1 out of 4 router VM/networks, even after upgrading to
latest code. (I have checked the qemu-kvm-ev-2.6.0-27.1.el7 source,
and the patched code are pretty much still intact, as per the
2.4.0-rc3 commit).

Any help would be greatly appreciated.

Thanks.

(Attached are some debug logs from the host's 
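For reference, the patching mechanism under discussion boils down to writing a payload over a unix-domain socket exposed as a QEMU chardev. A self-contained sketch of that pattern (the socket path and payload below are illustrative, not CloudStack's actual wire format):

```python
import os
import socket
import tempfile

def send_over_unix_socket(path, payload):
    """Connect to a unix-domain socket and write `payload`, the same
    pattern patchviasocket uses with a VM's QEMU chardev socket."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
        client.connect(path)
        client.sendall(payload)

# Stand-in for the QEMU chardev: a local listening unix socket.
sock_path = os.path.join(tempfile.mkdtemp(), "vm.agent")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)

# Illustrative payload only; the real script sends the router's boot args.
send_over_unix_socket(sock_path, b"cmdline:template=domP type=router")

conn, _ = server.accept()
received = conn.recv(4096)
conn.close()
server.close()
print(received.decode())
```

If the qemu bug described above is present, the host-side write can appear to succeed while the guest never sees the data, which matches the "Execution is successful" log followed by a stuck systemvm.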

[GitHub] cloudstack issue #1711: XenServer 7 Support

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1711
  
@syed yes I see them now. I'll first merge #1851 to get the db upgrade 
paths and then apply your PR and include your sql changes.




[GitHub] cloudstack issue #1711: XenServer 7 Support

2016-12-21 Thread syed
Github user syed commented on the issue:

https://github.com/apache/cloudstack/pull/1711
  
@rhtyd the file is there ... somehow it doesn't show in the diff; you have to 
load it manually.




[GitHub] cloudstack issue #1837: [4.9] Smoketest Health

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1837
  
Trillian test result (tid-704)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 6
Total time taken: 33589 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1837-t704-xenserver-65sp1.zip
Test completed. 43 look ok, 5 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_redundant_vpc_site2site_vpn | `Failure` | 378.58 | test_vpc_vpn.py
test_05_rvpc_multi_tiers | `Failure` | 546.35 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1387.11 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 539.13 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 744.64 | test_privategw_acl.py
test_04_extract_template | `Error` | 5.14 | test_templates.py
test_03_delete_template | `Error` | 5.11 | test_templates.py
test_01_create_template | `Error` | 70.72 | test_templates.py
ContextSuite context=TestSnapshotRootDisk>:teardown | `Error` | 67.16 | test_snapshots.py
test_01_vpc_site2site_vpn | Success | 317.31 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 142.04 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 305.23 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 704.52 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 891.14 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 1074.02 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 20.80 | test_volumes.py
test_08_resize_volume | Success | 111.12 | test_volumes.py
test_07_resize_fail | Success | 116.22 | test_volumes.py
test_06_download_detached_volume | Success | 20.41 | test_volumes.py
test_05_detach_volume | Success | 100.30 | test_volumes.py
test_04_delete_attached_volume | Success | 15.26 | test_volumes.py
test_03_download_attached_volume | Success | 15.40 | test_volumes.py
test_02_attach_volume | Success | 15.81 | test_volumes.py
test_01_create_volume | Success | 397.69 | test_volumes.py
test_03_delete_vm_snapshots | Success | 280.34 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 224.73 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 100.73 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 172.56 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 192.76 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.48 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 66.28 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.15 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.18 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 20.48 | test_vm_life_cycle.py
test_02_start_vm | Success | 25.30 | test_vm_life_cycle.py
test_01_stop_vm | Success | 30.30 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 146.28 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.07 | test_templates.py
test_02_edit_template | Success | 90.19 | test_templates.py
test_10_destroy_cpvm | Success | 231.95 | test_ssvm.py
test_09_destroy_ssvm | Success | 230.10 | test_ssvm.py
test_08_reboot_cpvm | Success | 146.70 | test_ssvm.py
test_07_reboot_ssvm | Success | 149.25 | test_ssvm.py
test_06_stop_cpvm | Success | 166.77 | test_ssvm.py
test_05_stop_ssvm | Success | 138.93 | test_ssvm.py
test_04_cpvm_internals | Success | 1.14 | test_ssvm.py
test_03_ssvm_internals | Success | 3.43 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.15 | test_ssvm.py
test_01_snapshot_root_disk | Success | 26.62 | test_snapshots.py
test_04_change_offering_small | Success | 126.18 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.05 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.11 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.14 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.22 | test_secondary_storage.py
test_01_scale_vm | Success | 5.27 | test_scale_vm.py
test_09_reboot_router | Success | 55.53 | test_routers.py
test_08_start_router | Success | 45.48 | test_routers.py
test_07_stop_router | Success | 15.23 | test_routers.py

[GitHub] cloudstack issue #1828: CLOUDSTACK-9676 Start instance fails after reverting...

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1828
  
@sateesh-chodapuneedi lgtm, can you rebase this against 4.9 branch and 
change PR's base branch to 4.9?




[GitHub] cloudstack pull request #1851: schema: Upgrade path from 4.9.1.0 to 4.9.2.0

2016-12-21 Thread rhtyd
GitHub user rhtyd opened a pull request:

https://github.com/apache/cloudstack/pull/1851

schema: Upgrade path from 4.9.1.0 to 4.9.2.0

Upgrade paths added so PRs such as #1711 can use them.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shapeblue/cloudstack 4910to4920upgradepath

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1851.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1851


commit a98d603e0fabe203e4588466d63c85c8a59def6e
Author: Rohit Yadav 
Date:   2016-12-22T04:40:32Z

schema: Upgrade path from 4.9.1.0 to 4.9.2.0

Signed-off-by: Rohit Yadav 






[GitHub] cloudstack issue #1836: [4.10/master] Smoketest Health

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1836
  
Trillian test result (tid-707)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 31664 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1836-t707-kvm-centos7.zip
Test completed. 45 look ok, 4 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_redundant_VPC_default_routes | `Failure` | 847.87 | test_vpc_redundant.py
test_10_attachAndDetach_iso | `Failure` | 683.85 | test_vm_life_cycle.py
test_04_rvpc_privategw_static_routes | `Failure` | 359.76 | test_privategw_acl.py
test_09_delete_detached_volume | `Error` | 10.15 | test_volumes.py
test_08_resize_volume | `Error` | 5.06 | test_volumes.py
test_07_resize_fail | `Error` | 10.19 | test_volumes.py
test_06_download_detached_volume | `Error` | 5.07 | test_volumes.py
test_05_detach_volume | `Error` | 5.07 | test_volumes.py
test_04_delete_attached_volume | `Error` | 5.07 | test_volumes.py
test_03_download_attached_volume | `Error` | 5.06 | test_volumes.py
test_01_create_volume | `Error` | 249.45 | test_volumes.py
test_01_vpc_site2site_vpn | Success | 149.26 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.03 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 249.76 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 464.36 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 817.64 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 512.60 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1396.17 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 547.10 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1281.34 | test_vpc_redundant.py
test_02_attach_volume | Success | 48.66 | test_volumes.py
test_deploy_vm_multiple | Success | 357.37 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.18 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 76.11 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.09 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.73 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.63 | test_vm_life_cycle.py
test_02_start_vm | Success | 5.10 | test_vm_life_cycle.py
test_01_stop_vm | Success | 125.65 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 75.54 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.05 | test_templates.py
test_04_extract_template | Success | 5.16 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.08 | test_templates.py
test_01_create_template | Success | 50.40 | test_templates.py
test_10_destroy_cpvm | Success | 166.55 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.07 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.24 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.41 | test_ssvm.py
test_06_stop_cpvm | Success | 136.36 | test_ssvm.py
test_05_stop_ssvm | Success | 133.46 | test_ssvm.py
test_04_cpvm_internals | Success | 0.99 | test_ssvm.py
test_03_ssvm_internals | Success | 3.27 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.09 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.09 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.03 | test_snapshots.py
test_04_change_offering_small | Success | 239.35 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.03 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.04 | test_service_offerings.py
test_01_create_service_offering | Success | 0.08 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.09 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.12 | test_secondary_storage.py
test_09_reboot_router | Success | 35.26 | test_routers.py
test_08_start_router | Success | 30.22 | test_routers.py
test_07_stop_router | Success | 10.12 | test_routers.py
test_06_router_advanced | Success | 0.04 | test_routers.py
test_05_router_basic | Success | 0.03 | test_routers.py
test_04_restart_network_wo_cleanup | Success | 5.53 | test_routers.py
test_03_restart_network_cleanup | Success | 55.44 | test_routers.py
test_02_router_internal_adv | Success | 1.03 | test_routers.py
test_01_router_internal_basic | Success | 0.55 | 

[GitHub] cloudstack pull request #1839: CLOUDSTACK-9683: system.vm.default.hypervisor...

2016-12-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1839




[GitHub] cloudstack issue #1839: CLOUDSTACK-9683: system.vm.default.hypervisor will p...

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1839
  
LGTM. Merging this now.




[GitHub] cloudstack issue #977: [4.9] CLOUDSTACK-8746: VM Snapshotting implementation...

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/977
  
@kiwiflyer sorry, the testing operation is restricted to a few people to 
avoid abuse. Currently the infra is busy with 4.9.x related PRs and private 
usage.




[GitHub] cloudstack issue #1837: [4.9] Smoketest Health

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1837
  
Trillian test result (tid-705)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 30368 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1837-t705-kvm-centos7.zip
Test completed. 45 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 358.38 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 396.40 | test_privategw_acl.py
test_09_delete_detached_volume | `Error` | 10.20 | test_volumes.py
test_08_resize_volume | `Error` | 5.08 | test_volumes.py
test_07_resize_fail | `Error` | 10.28 | test_volumes.py
test_06_download_detached_volume | `Error` | 5.09 | test_volumes.py
test_05_detach_volume | `Error` | 5.08 | test_volumes.py
test_04_delete_attached_volume | `Error` | 5.09 | test_volumes.py
test_03_download_attached_volume | `Error` | 5.09 | test_volumes.py
test_01_create_volume | `Error` | 247.73 | test_volumes.py
test_01_vpc_site2site_vpn | Success | 164.85 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.43 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 251.24 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 470.10 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 776.60 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 500.31 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1392.38 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 549.37 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 764.70 | test_vpc_redundant.py
test_02_attach_volume | Success | 49.32 | test_volumes.py
test_deploy_vm_multiple | Success | 373.47 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.53 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.19 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 35.92 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.83 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.89 | test_vm_life_cycle.py
test_02_start_vm | Success | 5.15 | test_vm_life_cycle.py
test_01_stop_vm | Success | 125.94 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 95.92 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.09 | test_templates.py
test_04_extract_template | Success | 5.40 | test_templates.py
test_03_delete_template | Success | 5.12 | test_templates.py
test_02_edit_template | Success | 90.11 | test_templates.py
test_01_create_template | Success | 35.41 | test_templates.py
test_10_destroy_cpvm | Success | 161.68 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.67 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.40 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.29 | test_ssvm.py
test_06_stop_cpvm | Success | 131.54 | test_ssvm.py
test_05_stop_ssvm | Success | 163.34 | test_ssvm.py
test_04_cpvm_internals | Success | 0.96 | test_ssvm.py
test_03_ssvm_internals | Success | 2.88 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 16.25 | test_snapshots.py
test_04_change_offering_small | Success | 240.64 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.05 | test_service_offerings.py
test_01_create_service_offering | Success | 0.12 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.12 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.18 | test_secondary_storage.py
test_09_reboot_router | Success | 40.34 | test_routers.py
test_08_start_router | Success | 30.29 | test_routers.py
test_07_stop_router | Success | 10.16 | test_routers.py
test_06_router_advanced | Success | 0.06 | test_routers.py
test_05_router_basic | Success | 0.04 | test_routers.py
test_04_restart_network_wo_cleanup | Success | 5.70 | test_routers.py
test_03_restart_network_cleanup | Success | 75.64 | test_routers.py
test_02_router_internal_adv | Success | 0.91 | test_routers.py
test_01_router_internal_basic | Success | 0.46 | test_routers.py
  

[GitHub] cloudstack issue #1839: CLOUDSTACK-9683: system.vm.default.hypervisor will p...

2016-12-21 Thread abhinandanprateek
Github user abhinandanprateek commented on the issue:

https://github.com/apache/cloudstack/pull/1839
  
@rhtyd most of the automation environments are single-hypervisor ones, so 
we would end up creating a mixed-hypervisor zone for this test only. If that can be 
done I can put one together, but it's probably not worth the effort. @borisstoyanov 




[GitHub] cloudstack issue #1711: XenServer 7 Support

2016-12-21 Thread syed
Github user syed commented on the issue:

https://github.com/apache/cloudstack/pull/1711
  
@rhtyd moved the DB changes to `schema-4910to4920.sql` and rebased. 




Re: Error on Master

2016-12-21 Thread Will Stevens
I was able to apply `setup/db/db/schema-4910to41000.sql` by doing the
following (I am assuming that is how I should have done it).

# mysql -u root -p cloud < schema-4910to41000.sql

On the first attempt, I had the following error:

ERROR 1060 (42S21) at line 22: Duplicate column name 'update_state'

I removed the following line and re-applied the SQL, and it seems that
everything is working now.

ALTER TABLE `cloud`.`domain_router` ADD COLUMN  update_state varchar(64)
DEFAULT NULL;
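That "Duplicate column name" failure on re-runs can be avoided by making the column addition conditional on the current schema. A sketch of the pattern, using sqlite3 only so the example is self-contained (against MySQL you would query INFORMATION_SCHEMA.COLUMNS instead; the table and column names mirror the statement above):

```python
import sqlite3

def add_column_if_missing(conn, table, column, ddl_type):
    """Apply ALTER TABLE ... ADD COLUMN only when the column is absent,
    so re-running an upgrade script is harmless."""
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    if column not in cols:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl_type}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE domain_router (id INTEGER PRIMARY KEY)")

# Running the "upgrade" twice no longer raises a duplicate-column error.
add_column_if_missing(conn, "domain_router", "update_state", "VARCHAR(64)")
add_column_if_missing(conn, "domain_router", "update_state", "VARCHAR(64)")

cols = [row[1] for row in conn.execute("PRAGMA table_info(domain_router)")]
print(cols)
```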

Cheers,

Will

On Wed, Dec 21, 2016 at 2:16 PM, Will Stevens 
wrote:

> It looks like this file `setup/db/db/schema-4910to41000.sql` has modified
> this.
>
> ALTER TABLE `cloud`.`network_offerings` ADD COLUMN supports_public_access
> boolean default false;
>
> So I guess I need to apply this SQL update somehow.  Since I am going from
> Master to Master, how should I do this?  Any tips or hints for me?
>
> On Wed, Dec 21, 2016 at 2:11 PM, Will Stevens 
> wrote:
>
>> Has anyone seen this.
>>
>> I upgraded a system which was running master to a later version of master
>> and I get the following when I start the service after upgrading.
>>
>> 2016-12-21 18:55:13,388 WARN  [o.a.c.s.m.c.ResourceApplicationContext]
>> (main:null) (logid:) Exception encountered during context initialization -
>> cancelling refresh attempt: 
>> org.springframework.context.ApplicationContextException:
>> Failed to start bean 'cloudStackLifeCycle'; nested exception is
>> com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
>> com.mysql.jdbc.JDBC4PreparedStatement@69cd2214: SELECT
>> network_offerings.id, network_offerings.name,
>> network_offerings.unique_name, network_offerings.display_text,
>> network_offerings.nw_rate, network_offerings.mc_rate,
>> network_offerings.traffic_type, network_offerings.specify_vlan,
>> network_offerings.system_only, network_offerings.service_offering_id,
>> network_offerings.tags, network_offerings.default,
>> network_offerings.availability, network_offerings.state,
>> network_offerings.removed, network_offerings.created,
>> network_offerings.guest_type, network_offerings.dedicated_lb_service,
>> network_offerings.shared_source_nat_service,
>> network_offerings.specify_ip_ranges, network_offerings.sort_key,
>> network_offerings.uuid, network_offerings.redundant_router_service,
>> network_offerings.conserve_mode, network_offerings.elastic_ip_service,
>> network_offerings.eip_associate_public_ip, 
>> network_offerings.elastic_lb_service,
>> network_offerings.inline, network_offerings.is_persistent,
>> network_offerings.egress_default_policy, 
>> network_offerings.concurrent_connections,
>> network_offerings.keep_alive_enabled, network_offerings.supports_streched_l2,
>> network_offerings.supports_public_access, network_offerings.internal_lb,
>> network_offerings.public_lb FROM network_offerings WHERE
>> network_offerings.unique_name = _binary'System-Public-Network'  AND
>> network_offerings.removed IS NULL  ORDER BY RAND() LIMIT 1
>>
>>
>> I tried running the SQL directly to see what I get and I get the
>> following:
>>
>> mysql> SELECT network_offerings.id, network_offerings.name,
>> network_offerings.unique_name, network_offerings.display_text,
>> network_offerings.nw_rate, network_offerings.mc_rate,
>> network_offerings.traffic_type, network_offerings.specify_vlan,
>> network_offerings.system_only, network_offerings.service_offering_id,
>> network_offerings.tags, network_offerings.default,
>> network_offerings.availability, network_offerings.state,
>> network_offerings.removed, network_offerings.created,
>> network_offerings.guest_type, network_offerings.dedicated_lb_service,
>> network_offerings.shared_source_nat_service,
>> network_offerings.specify_ip_ranges, network_offerings.sort_key,
>> network_offerings.uuid, network_offerings.redundant_router_service,
>> network_offerings.conserve_mode, network_offerings.elastic_ip_service,
>> network_offerings.eip_associate_public_ip, 
>> network_offerings.elastic_lb_service,
>> network_offerings.inline, network_offerings.is_persistent,
>> network_offerings.egress_default_policy, 
>> network_offerings.concurrent_connections,
>> network_offerings.keep_alive_enabled, network_offerings.supports_streched_l2,
>> network_offerings.supports_public_access, network_offerings.internal_lb,
>> network_offerings.public_lb FROM network_offerings WHERE
>> network_offerings.unique_name = _binary'System-Public-Network'  AND
>> network_offerings.removed IS NULL  ORDER BY RAND() LIMIT 1;
>>
>> ERROR 1054 (42S22): Unknown column 'network_offerings.supports_public_access'
>> in 'field list'
>>
>> Did I miss a step or something in my upgrade from master to the latest
>> master?  I have done this upgrade a few times in the past without issues,
>> but maybe something has changed?
>>
>
>


[GitHub] cloudstack issue #1837: [4.9] Smoketest Health

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1837
  
Trillian test result (tid-696)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 40114 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1837-t696-vmware-55u3.zip
Test completed. 44 look ok, 4 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 957.80 | test_privategw_acl.py
ContextSuite context=TestVpcSite2SiteVpn>:setup | `Error` | 0.00 | test_vpc_vpn.py
ContextSuite context=TestVpcRemoteAccessVpn>:setup | `Error` | 0.00 | test_vpc_vpn.py
ContextSuite context=TestRVPCSite2SiteVpn>:setup | `Error` | 0.00 | test_vpc_vpn.py
ContextSuite context=TestVPCRedundancy>:teardown | `Error` | 705.54 | test_vpc_redundant.py
ContextSuite context=TestListIdsParams>:setup | `Error` | 0.00 | test_list_ids_parameter.py
test_02_VPC_default_routes | Success | 339.10 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 775.00 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 700.45 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1607.67 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 708.09 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 697.93 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1392.10 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 31.10 | test_volumes.py
test_06_download_detached_volume | Success | 75.50 | test_volumes.py
test_05_detach_volume | Success | 110.29 | test_volumes.py
test_04_delete_attached_volume | Success | 20.23 | test_volumes.py
test_03_download_attached_volume | Success | 20.25 | test_volumes.py
test_02_attach_volume | Success | 69.90 | test_volumes.py
test_01_create_volume | Success | 555.35 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.12 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 237.06 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 186.15 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 161.57 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 307.15 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.70 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 185.26 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 105.99 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.07 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.11 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.10 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.12 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 291.51 | test_templates.py
test_08_list_system_templates | Success | 0.02 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.04 | test_templates.py
test_04_extract_template | Success | 15.18 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.07 | test_templates.py
test_01_create_template | Success | 130.74 | test_templates.py
test_10_destroy_cpvm | Success | 236.59 | test_ssvm.py
test_09_destroy_ssvm | Success | 268.48 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.66 | test_ssvm.py
test_07_reboot_ssvm | Success | 158.44 | test_ssvm.py
test_06_stop_cpvm | Success | 176.77 | test_ssvm.py
test_05_stop_ssvm | Success | 208.86 | test_ssvm.py
test_04_cpvm_internals | Success | 1.03 | test_ssvm.py
test_03_ssvm_internals | Success | 3.17 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.09 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.09 | test_ssvm.py
test_01_snapshot_root_disk | Success | 71.32 | test_snapshots.py
test_04_change_offering_small | Success | 96.79 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.03 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.08 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.10 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.12 | test_secondary_storage.py
test_09_reboot_router | Success | 136.05 | test_routers.py
test_08_start_router | Success | 120.86 | test_routers.py
test_07_stop_router | Success 

[GitHub] cloudstack issue #1837: [4.9] Smoketest Health

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1837
  
@blueorangutan test matrix


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1837: [4.9] Smoketest Health

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1837
  
Trillian test result (tid-697)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 7
Total time taken: 35551 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1837-t697-xenserver-65sp1.zip
Test completed. 43 look ok, 5 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_05_rvpc_multi_tiers | `Failure` | 567.60 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1355.15 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 534.36 
| test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 760.96 | 
test_privategw_acl.py
ContextSuite context=TestVpcSite2SiteVpn>:setup | `Error` | 0.00 | 
test_vpc_vpn.py
ContextSuite context=TestVpcRemoteAccessVpn>:setup | `Error` | 0.00 | 
test_vpc_vpn.py
ContextSuite context=TestRVPCSite2SiteVpn>:setup | `Error` | 0.00 | 
test_vpc_vpn.py
ContextSuite context=TestVPCRedundancy>:teardown | `Error` | 889.94 | 
test_vpc_redundant.py
ContextSuite context=TestTemplates>:setup | `Error` | 393.38 | 
test_templates.py
ContextSuite context=TestListIdsParams>:setup | `Error` | 0.00 | 
test_list_ids_parameter.py
test_02_VPC_default_routes | Success | 333.45 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 646.94 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 917.09 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 1054.16 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.53 | test_volumes.py
test_08_resize_volume | Success | 85.56 | test_volumes.py
test_07_resize_fail | Success | 100.64 | test_volumes.py
test_06_download_detached_volume | Success | 25.26 | test_volumes.py
test_05_detach_volume | Success | 100.23 | test_volumes.py
test_04_delete_attached_volume | Success | 10.13 | test_volumes.py
test_03_download_attached_volume | Success | 15.19 | test_volumes.py
test_02_attach_volume | Success | 10.63 | test_volumes.py
test_01_create_volume | Success | 393.09 | test_volumes.py
test_03_delete_vm_snapshots | Success | 280.17 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 191.08 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 100.74 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 236.75 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.55 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.18 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 65.73 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.06 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.12 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 10.11 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.16 | test_vm_life_cycle.py
test_01_stop_vm | Success | 25.15 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 120.68 | test_templates.py
test_01_create_template | Success | 61.06 | test_templates.py
test_10_destroy_cpvm | Success | 196.50 | test_ssvm.py
test_09_destroy_ssvm | Success | 228.58 | test_ssvm.py
test_08_reboot_cpvm | Success | 141.39 | test_ssvm.py
test_07_reboot_ssvm | Success | 153.76 | test_ssvm.py
test_06_stop_cpvm | Success | 161.35 | test_ssvm.py
test_05_stop_ssvm | Success | 168.75 | test_ssvm.py
test_04_cpvm_internals | Success | 0.96 | test_ssvm.py
test_03_ssvm_internals | Success | 3.31 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.08 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.08 | test_ssvm.py
test_01_snapshot_root_disk | Success | 21.05 | test_snapshots.py
test_04_change_offering_small | Success | 95.87 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.02 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.04 | test_service_offerings.py
test_01_create_service_offering | Success | 0.06 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.08 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.11 | test_secondary_storage.py
test_01_scale_vm | Success | 5.12 | test_scale_vm.py
test_09_reboot_router | Success | 65.35 | test_routers.py
test_08_start_router | Success | 50.28 | test_routers.py
test_07_stop_router | Success | 15.12 | test_routers.py
test_06_router_advanced | Success | 0.04 | test_routers.py
test_05_router_basic | Success | 0.03 | test_routers.py
test_04_restart_network_wo_cleanup | Success | 5.46 | 

[GitHub] cloudstack issue #1836: [4.10/master] Smoketest Health

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1836
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been 
kicked to run smoke tests




[GitHub] cloudstack issue #1836: [4.10/master] Smoketest Health

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1836
  
@blueorangutan test




[GitHub] cloudstack issue #1837: [4.9] Smoketest Health

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1837
  
@rhtyd a Trillian-Jenkins matrix job (centos6 mgmt + xs65sp1, centos7 mgmt 
+ vmware55u3, centos7 mgmt + kvmcentos7) has been kicked to run smoke tests




[GitHub] cloudstack issue #1837: [4.9] Smoketest Health

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1837
  
Packaging result: ✔centos6 ✔centos7 ✖debian. JID-409




[GitHub] cloudstack issue #1836: [4.10/master] Smoketest Health

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1836
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-408




[GitHub] cloudstack issue #1846: CLOUDSTACK-9688: Fix failing smoke tests

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1846
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-410




Re: Error on Master

2016-12-21 Thread Will Stevens
It looks like the file `setup/db/db/schema-4910to41000.sql` introduced this change:

ALTER TABLE `cloud`.`network_offerings` ADD COLUMN supports_public_access
boolean default false;

So I guess I need to apply this SQL update somehow.  Since I am going from
Master to Master, how should I do this?  Any tips or hints for me?
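
Not an official upgrade procedure, just a minimal sketch of what is going on, using Python's sqlite3 as a stand-in for MySQL (table and column names mirror the error above; everything else is illustrative): selecting the new column fails against the old schema, and applying the quoted ALTER TABLE statement fixes it.

```python
import sqlite3

# Stand-in for the `network_offerings` table on the OLD schema,
# i.e. before the upgrade SQL added `supports_public_access`.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE network_offerings (id INTEGER, unique_name TEXT)")
con.execute("INSERT INTO network_offerings VALUES (1, 'System-Public-Network')")

# Selecting the missing column fails -- the sqlite analogue of
# MySQL's "ERROR 1054: Unknown column ... in 'field list'".
try:
    con.execute("SELECT supports_public_access FROM network_offerings")
    column_missing = False
except sqlite3.OperationalError:
    column_missing = True

# Applying the upgrade statement adds the column; existing rows
# pick up the declared default.
con.execute(
    "ALTER TABLE network_offerings "
    "ADD COLUMN supports_public_access BOOLEAN DEFAULT 0"
)
rows = con.execute(
    "SELECT id, supports_public_access FROM network_offerings "
    "WHERE unique_name = 'System-Public-Network'"
).fetchall()
print(column_missing, rows)  # True [(1, 0)]
```

On a real management-server database the equivalent would be running the missing statements from the schema upgrade file against the cloud database; whether that is safe depends on which statements from that file have already been applied.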

On Wed, Dec 21, 2016 at 2:11 PM, Will Stevens 
wrote:

> Has anyone seen this?
>
> I upgraded a system which was running master to a later version of master
> and I get the following when I start the service after upgrading.
>
> 2016-12-21 18:55:13,388 WARN  [o.a.c.s.m.c.ResourceApplicationContext]
> (main:null) (logid:) Exception encountered during context initialization -
> cancelling refresh attempt: 
> org.springframework.context.ApplicationContextException:
> Failed to start bean 'cloudStackLifeCycle'; nested exception is
> com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
> com.mysql.jdbc.JDBC4PreparedStatement@69cd2214: SELECT
> network_offerings.id, network_offerings.name,
> network_offerings.unique_name, network_offerings.display_text,
> network_offerings.nw_rate, network_offerings.mc_rate,
> network_offerings.traffic_type, network_offerings.specify_vlan,
> network_offerings.system_only, network_offerings.service_offering_id,
> network_offerings.tags, network_offerings.default, 
> network_offerings.availability,
> network_offerings.state, network_offerings.removed,
> network_offerings.created, network_offerings.guest_type,
> network_offerings.dedicated_lb_service, 
> network_offerings.shared_source_nat_service,
> network_offerings.specify_ip_ranges, network_offerings.sort_key,
> network_offerings.uuid, network_offerings.redundant_router_service,
> network_offerings.conserve_mode, network_offerings.elastic_ip_service,
> network_offerings.eip_associate_public_ip, 
> network_offerings.elastic_lb_service,
> network_offerings.inline, network_offerings.is_persistent,
> network_offerings.egress_default_policy, 
> network_offerings.concurrent_connections,
> network_offerings.keep_alive_enabled, network_offerings.supports_streched_l2,
> network_offerings.supports_public_access, network_offerings.internal_lb,
> network_offerings.public_lb FROM network_offerings WHERE
> network_offerings.unique_name = _binary'System-Public-Network'  AND
> network_offerings.removed IS NULL  ORDER BY RAND() LIMIT 1
>
>
> I tried running the SQL directly to see what I get and I get the following:
>
> mysql> SELECT network_offerings.id, network_offerings.name,
> network_offerings.unique_name, network_offerings.display_text,
> network_offerings.nw_rate, network_offerings.mc_rate,
> network_offerings.traffic_type, network_offerings.specify_vlan,
> network_offerings.system_only, network_offerings.service_offering_id,
> network_offerings.tags, network_offerings.default, 
> network_offerings.availability,
> network_offerings.state, network_offerings.removed,
> network_offerings.created, network_offerings.guest_type,
> network_offerings.dedicated_lb_service, 
> network_offerings.shared_source_nat_service,
> network_offerings.specify_ip_ranges, network_offerings.sort_key,
> network_offerings.uuid, network_offerings.redundant_router_service,
> network_offerings.conserve_mode, network_offerings.elastic_ip_service,
> network_offerings.eip_associate_public_ip, 
> network_offerings.elastic_lb_service,
> network_offerings.inline, network_offerings.is_persistent,
> network_offerings.egress_default_policy, 
> network_offerings.concurrent_connections,
> network_offerings.keep_alive_enabled, network_offerings.supports_streched_l2,
> network_offerings.supports_public_access, network_offerings.internal_lb,
> network_offerings.public_lb FROM network_offerings WHERE
> network_offerings.unique_name = _binary'System-Public-Network'  AND
> network_offerings.removed IS NULL  ORDER BY RAND() LIMIT 1;
>
> ERROR 1054 (42S22): Unknown column 'network_offerings.supports_public_access'
> in 'field list'
>
> Did I miss a step or something in my upgrade from master to the latest
> master?  I have done this upgrade a few times in the past without issues,
> but maybe something has changed?
>


Error on Master

2016-12-21 Thread Will Stevens
Has anyone seen this?

I upgraded a system which was running master to a later version of master
and I get the following when I start the service after upgrading.

2016-12-21 18:55:13,388 WARN  [o.a.c.s.m.c.ResourceApplicationContext]
(main:null) (logid:) Exception encountered during context initialization -
cancelling refresh attempt:
org.springframework.context.ApplicationContextException: Failed to start
bean 'cloudStackLifeCycle'; nested exception is
com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
com.mysql.jdbc.JDBC4PreparedStatement@69cd2214: SELECT network_offerings.id,
network_offerings.name, network_offerings.unique_name,
network_offerings.display_text, network_offerings.nw_rate,
network_offerings.mc_rate, network_offerings.traffic_type,
network_offerings.specify_vlan, network_offerings.system_only,
network_offerings.service_offering_id, network_offerings.tags,
network_offerings.default, network_offerings.availability,
network_offerings.state, network_offerings.removed,
network_offerings.created, network_offerings.guest_type,
network_offerings.dedicated_lb_service,
network_offerings.shared_source_nat_service,
network_offerings.specify_ip_ranges, network_offerings.sort_key,
network_offerings.uuid, network_offerings.redundant_router_service,
network_offerings.conserve_mode, network_offerings.elastic_ip_service,
network_offerings.eip_associate_public_ip,
network_offerings.elastic_lb_service, network_offerings.inline,
network_offerings.is_persistent, network_offerings.egress_default_policy,
network_offerings.concurrent_connections,
network_offerings.keep_alive_enabled,
network_offerings.supports_streched_l2,
network_offerings.supports_public_access, network_offerings.internal_lb,
network_offerings.public_lb FROM network_offerings WHERE
network_offerings.unique_name = _binary'System-Public-Network'  AND
network_offerings.removed IS NULL  ORDER BY RAND() LIMIT 1


I tried running the SQL directly to see what I get and I get the following:

mysql> SELECT network_offerings.id, network_offerings.name,
network_offerings.unique_name, network_offerings.display_text,
network_offerings.nw_rate, network_offerings.mc_rate,
network_offerings.traffic_type, network_offerings.specify_vlan,
network_offerings.system_only, network_offerings.service_offering_id,
network_offerings.tags, network_offerings.default,
network_offerings.availability, network_offerings.state,
network_offerings.removed, network_offerings.created,
network_offerings.guest_type, network_offerings.dedicated_lb_service,
network_offerings.shared_source_nat_service,
network_offerings.specify_ip_ranges, network_offerings.sort_key,
network_offerings.uuid, network_offerings.redundant_router_service,
network_offerings.conserve_mode, network_offerings.elastic_ip_service,
network_offerings.eip_associate_public_ip,
network_offerings.elastic_lb_service, network_offerings.inline,
network_offerings.is_persistent, network_offerings.egress_default_policy,
network_offerings.concurrent_connections,
network_offerings.keep_alive_enabled,
network_offerings.supports_streched_l2,
network_offerings.supports_public_access, network_offerings.internal_lb,
network_offerings.public_lb FROM network_offerings WHERE
network_offerings.unique_name = _binary'System-Public-Network'  AND
network_offerings.removed IS NULL  ORDER BY RAND() LIMIT 1;

ERROR 1054 (42S22): Unknown column
'network_offerings.supports_public_access' in 'field list'

Did I miss a step or something in my upgrade from master to the latest
master?  I have done this upgrade a few times in the past without issues,
but maybe something has changed?


[GitHub] cloudstack issue #1839: CLOUDSTACK-9683: system.vm.default.hypervisor will p...

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1839
  
@abhinandanprateek do we have a unit test or integration/smoke test to 
validate the global setting, or can you write one?




[GitHub] cloudstack issue #1846: CLOUDSTACK-9688: Fix failing smoke tests

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1846
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.




[GitHub] cloudstack issue #1837: [4.9] Smoketest Health

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1837
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.




[GitHub] cloudstack issue #1836: [4.10/master] Smoketest Health

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1836
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.




[GitHub] cloudstack issue #1836: [4.10/master] Smoketest Health

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1836
  
@blueorangutan package




[GitHub] cloudstack issue #1837: [4.9] Smoketest Health

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1837
  
@blueorangutan package




[GitHub] cloudstack pull request #1846: CLOUDSTACK-9688: Fix failing smoke tests

2016-12-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1846




[GitHub] cloudstack issue #1846: CLOUDSTACK-9688: Fix failing smoke tests

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1846
  
Thanks @serg38 
Merging this at my discretion, based on manual test results. The previously failing 
tests in test_templates and test_list_ids pass now.




[GitHub] cloudstack pull request #1850: CLOUDSTACK-9694: Unable to limit the Public I...

2016-12-21 Thread sudhansu7
GitHub user sudhansu7 opened a pull request:

https://github.com/apache/cloudstack/pull/1850

CLOUDSTACK-9694: Unable to limit the Public IPs in VPC



Unable to limit the public IPs in a VPC.
In a VPC network, acquiring IP addresses increases the domain's count in the 
resource_count table. However, when the resource count is recalculated at the 
domain level, it is reverted to the non-VPC IP count only.

Steps to Reproduce:

1. Create a VPC
2. Create a VPC tier.
3. Check the resource_count table and note the IP address count (say, 1).
4. Keep acquiring IP addresses (say, 4 more). The IP address count in the 
resource_count table is now 5.
5. Update the resource count at the domain level.
6. The resource_count is reverted back to 1.

Root Cause: The update resource count command recalculates the resource count. 
While computing the public IP count, the IPs allocated to a VPC are not considered.

ResourceLimitManagerImpl.java -> calculatePublicIpForAccount() -> 
IPAddressDaoImpl.countAllocatedIPsForAccount()

Currently we have the query builder below, which does not consider the vpc_id 
column.
```
AllocatedIpCountForAccount = createSearchBuilder(Long.class);
AllocatedIpCountForAccount.select(null, Func.COUNT, 
AllocatedIpCountForAccount.entity().getAddress());
AllocatedIpCountForAccount.and("account", 
AllocatedIpCountForAccount.entity().getAllocatedToAccountId(), Op.EQ);
AllocatedIpCountForAccount.and("allocated", 
AllocatedIpCountForAccount.entity().getAllocatedTime(), Op.NNULL);
AllocatedIpCountForAccount.and("network", 
AllocatedIpCountForAccount.entity().getAssociatedWithNetworkId(), Op.NNULL);
AllocatedIpCountForAccount.done();
```
It generates the SQL query below:
```
SELECT COUNT(user_ip_address.public_ip_address) FROM user_ip_address WHERE 
user_ip_address.account_id = 6  AND user_ip_address.allocated IS NOT NULL  AND 
user_ip_address.network_id IS NOT NULL  AND user_ip_address.removed IS NULL
```
Fix:
Add a vpc_id check to the query.
```
AllocatedIpCountForAccount = createSearchBuilder(Long.class);
AllocatedIpCountForAccount.select(null, Func.COUNT, 
AllocatedIpCountForAccount.entity().getAddress());
AllocatedIpCountForAccount.and("account", 
AllocatedIpCountForAccount.entity().getAllocatedToAccountId(), Op.EQ);
AllocatedIpCountForAccount.and("allocated", 
AllocatedIpCountForAccount.entity().getAllocatedTime(), Op.NNULL);
AllocatedIpCountForAccount.and().op("network", 
AllocatedIpCountForAccount.entity().getAssociatedWithNetworkId(), Op.NNULL);
AllocatedIpCountForAccount.or("vpc", 
AllocatedIpCountForAccount.entity().getVpcId(), Op.NNULL);
AllocatedIpCountForAccount.cp();
AllocatedIpCountForAccount.done();
```
SQL:
```
SELECT COUNT(user_ip_address.public_ip_address) FROM user_ip_address WHERE 
user_ip_address.account_id = 6  AND user_ip_address.allocated IS NOT NULL  AND 
( user_ip_address.network_id IS NOT NULL or user_ip_address.vpc_id IS NOT NULL) 
AND user_ip_address.removed IS NULL
```
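
To make the effect of the changed predicate concrete, here is a small, hypothetical Python sketch (the rows and field names only mimic `user_ip_address`; this is not CloudStack code): the old predicate requires `network_id IS NOT NULL` and therefore skips VPC IPs, while the fixed predicate accepts `network_id` OR `vpc_id`.

```python
# Hypothetical rows mirroring user_ip_address columns used by the query.
ips = [
    {"account_id": 6, "network_id": 10, "vpc_id": None, "removed": None},    # non-VPC IP
    {"account_id": 6, "network_id": None, "vpc_id": 3, "removed": None},     # VPC IP
    {"account_id": 6, "network_id": None, "vpc_id": 3, "removed": "2016"},   # released
    {"account_id": 7, "network_id": 11, "vpc_id": None, "removed": None},    # other account
]

def count_allocated_ips(rows, account_id, include_vpc):
    """Sketch of countAllocatedIPsForAccount: the old predicate requires
    network_id IS NOT NULL; the fixed one accepts network_id OR vpc_id."""
    def in_scope(r):
        if include_vpc:
            return r["network_id"] is not None or r["vpc_id"] is not None
        return r["network_id"] is not None
    return sum(
        1 for r in rows
        if r["account_id"] == account_id and r["removed"] is None and in_scope(r)
    )

old_count = count_allocated_ips(ips, 6, include_vpc=False)  # misses the VPC IP
new_count = count_allocated_ips(ips, 6, include_vpc=True)   # counts it too
print(old_count, new_count)  # 1 2
```

This is why updateResourceCount reverted the count to the non-VPC IPs only: the recalculation under-counted exactly the VPC-allocated addresses.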


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sudhansu7/cloudstack CLOUDSTACK-9694

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1850.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1850


commit 24837f655033583388bb608f63039f8e341c16d3
Author: Sudhansu 
Date:   2016-12-21T18:24:01Z

CLOUDSTACK-9694: Unable to limit the Public IPs in VPC

Added missing clause to check for vpc_id






[GitHub] cloudstack pull request #1849: CLOUDSTACK-9690: Scale CentOS7 VM fails with ...

2016-12-21 Thread sudhansu7
GitHub user sudhansu7 opened a pull request:

https://github.com/apache/cloudstack/pull/1849

CLOUDSTACK-9690: Scale CentOS7 VM fails with error

Scale CentOS7 VM fails with error "Cannot scale up the vm because of memory 
constraint violation"

When creating a VM from a CentOS 7 template on XenServer with dynamic scaling 
enabled, the instance starts with the base specified memory instead of memory * 
4 as the static limit.

As a result, an attempt to scale the VM throws an error in the MS log:
```
java.lang.RuntimeException: Job failed due to exception Unable to scale vm 
due to Catch exception com.cloud.utils.exception.CloudRuntimeException when 
scaling VM:i-24-3976-VM due to com.cloud.utils.exception.CloudRuntimeException: 
Cannot scale up the vm because of memory constraint violation: 0 <= 
memory-static-min(2147483648) <= memory-dynamic-min(8589934592) <= 
memory-dynamic-max(8589934592) <= memory-static-max(2147483648)
```
REPRO STEPS
===========

1. Enable dynamic scaling in Global settings
2. Register a CentOS 7 template (with tools) and tick dynamic scaling
3. Deploy VM with this template
4. Start the VM and try to change the service offering

EXPECTED RESULT: The VM should start with a static limit of 4x and 
scale up when the offering is changed.
ACTUAL RESULT: The VM starts with a maximum static limit of  and 
doesn't scale up, with an error in the MS log:
Cannot scale up the vm because of memory constraint violation:


Root Cause: XenServer guest OS memory values are missing for 'CentOS 7'.

Solution: Add XenServer guest OS memory values for 'CentOS 7'. But this 
needs patching and a restart of the management server. In this fix the hardcoded 
values are moved from Java files to the database.

1. Removed XenServerGuestOsMemoryMap from CitrixHelper.java.
This Java file held a static in-memory map named XenServerGuestOsMemoryMap, 
which was the source for XenServer dynamic memory values (max and min). These 
values were moved to the guest_os_details table.

2. The DAO layer was modified to access these values.
3. The VirtualMachineTO object was modified to populate the dynamic memory 
values.
4. The addGuestOs and UpdateGuestOS APIs have been modified to update memory 
values.
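
For illustration only (not CloudStack or XenServer code), the constraint in the error message can be sketched as a simple predicate; the numbers below are the 2 GiB / 8 GiB values from the log:

```python
def can_scale_up(static_min, dynamic_min, dynamic_max, static_max):
    """XenServer accepts new memory values only when
    0 <= static-min <= dynamic-min <= dynamic-max <= static-max."""
    return 0 <= static_min <= dynamic_min <= dynamic_max <= static_max

GiB = 1 << 30

# Values from the error above: the static limits stayed at the base 2 GiB,
# so the 8 GiB dynamic target violates the constraint.
broken = can_scale_up(2 * GiB, 8 * GiB, 8 * GiB, 2 * GiB)

# With 4x headroom on static-max (8 GiB), the same scale-up is accepted.
fixed = can_scale_up(2 * GiB, 8 * GiB, 8 * GiB, 8 * GiB)

print(broken, fixed)  # False True
```

This is why missing per-OS memory values matter: without them the VM is started with static-max equal to the base memory, and any later dynamic target above it is rejected.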

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sudhansu7/cloudstack CLOUDSTACK-9690

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1849.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1849


commit 60021defea46b8ef8a87660070b4ad6c1c73af79
Author: Sudhansu 
Date:   2016-12-21T17:41:12Z

CLOUDSTACK-9690: Scale CentOS7 VM fails with error

1. Removed XenServerGuestOsMemoryMap from CitrixHelper.java
This java file was holding a static in memory map named 
XenServerGuestOsMemoryMap. This was the source for xenserver dynamic memory 
values(max and min). These values were moved to guest_os_details table.

2. DAO layer was modified to access these values.
3. VirtualMachineTo object was modified to populate the dynamic memory 
values.
4. addGuestOs and UpdateGuestOS api has been modified to update memory 
values.






[GitHub] cloudstack issue #1711: XenServer 7 Support

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1711
  
@syed can you move the SQL changes to a new path, 
setup/db/db/schema-4910to4920.sql, and make the appropriate DB changes? Thanks. 
After this I'm willing to merge this PR.




[GitHub] cloudstack issue #1711: XenServer 7 Support

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1711
  
LGTM on tests. Some of the failing tests are known intermittent issues or are 
fixed in a separate PR. Merging this now.




[GitHub] cloudstack issue #1837: [4.9] Smoketest Health

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1837
  
Trillian test result (tid-695)
Environment: kvm-centos6 (x2), Advanced Networking with Mgmt server 7
Total time taken: 29131 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1837-t695-kvm-centos6.zip
Test completed. 44 look ok, 4 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 340.84 | test_privategw_acl.py
ContextSuite context=TestVpcRemoteAccessVpn>:setup | `Error` | 1870.65 | test_vpc_vpn.py
ContextSuite context=TestTemplates>:setup | `Error` | 369.78 | test_templates.py
ContextSuite context=TestListIdsParams>:setup | `Error` | 0.00 | test_list_ids_parameter.py
test_01_vpc_site2site_vpn | Success | 170.24 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 275.69 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 337.58 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 703.11 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 547.84 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1341.46 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 575.95 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 760.11 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1302.32 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.42 | test_volumes.py
test_08_resize_volume | Success | 15.36 | test_volumes.py
test_07_resize_fail | Success | 20.53 | test_volumes.py
test_06_download_detached_volume | Success | 15.37 | test_volumes.py
test_05_detach_volume | Success | 100.30 | test_volumes.py
test_04_delete_attached_volume | Success | 10.21 | test_volumes.py
test_03_download_attached_volume | Success | 15.44 | test_volumes.py
test_02_attach_volume | Success | 76.72 | test_volumes.py
test_01_create_volume | Success | 861.49 | test_volumes.py
test_deploy_vm_multiple | Success | 302.90 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.56 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.19 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.88 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.12 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 130.80 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.80 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.23 | test_vm_life_cycle.py
test_01_stop_vm | Success | 125.84 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 95.89 | test_templates.py
test_01_create_template | Success | 55.57 | test_templates.py
test_10_destroy_cpvm | Success | 161.92 | test_ssvm.py
test_09_destroy_ssvm | Success | 225.09 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.87 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.85 | test_ssvm.py
test_06_stop_cpvm | Success | 132.19 | test_ssvm.py
test_05_stop_ssvm | Success | 164.25 | test_ssvm.py
test_04_cpvm_internals | Success | 1.66 | test_ssvm.py
test_03_ssvm_internals | Success | 3.94 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.12 | test_ssvm.py
test_04_change_offering_small | Success | 242.55 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.08 | test_service_offerings.py
test_01_create_service_offering | Success | 0.07 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.12 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.19 | test_secondary_storage.py
test_09_reboot_router | Success | 45.36 | test_routers.py
test_08_start_router | Success | 35.30 | test_routers.py
test_07_stop_router | Success | 10.15 | test_routers.py
test_06_router_advanced | Success | 0.05 | test_routers.py
test_05_router_basic | Success | 0.04 | test_routers.py
test_04_restart_network_wo_cleanup | Success | 5.74 | test_routers.py
test_03_restart_network_cleanup | Success | 60.51 | test_routers.py
test_02_router_internal_adv | Success | 1.09 | test_routers.py
test_01_router_internal_basic | Success | 0.66 | test_routers.py
test_router_dns_guestipquery | Success | 106.80 | test_router_dns.py
test_router_dns_externalipquery | Success | 0.06 | test_router_dns.py
test_router_dhcphosts | Success | 328.33 | test_router_dhcphosts.py
test_router_dhcp_opts | Success | 22.17 | 

Re: patchviasocket seems to be broken with qemu 2.3(+?)

2016-12-21 Thread Linas Žilinskas
At this point I'm not sure what the issue for you could be. Did you try 
recreating the failing vrouter?


Also, just in case, check if there's free disk space on it. We had some 
vrouters stuck due to this, and I saw another thread here discussing it.


Basically the /var/log/ partition fills up, since it's relatively small. 
And if you had issues for a period of time with that specific router and 
restarted it multiple times, the log partition might be full.
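The "full /var/log partition" check described above is easy to script. A minimal sketch, assuming it is run on the vrouter itself (the threshold and the use of `/` here are arbitrary choices for illustration, equivalent to eyeballing `df -h`):

```python
import shutil

def log_partition_usage(path="/var/log"):
    """Return (percent_used, free_bytes) for the filesystem holding path."""
    usage = shutil.disk_usage(path)
    percent_used = 100.0 * usage.used / usage.total
    return percent_used, usage.free

# Use "/" so the sketch runs anywhere; on a vrouter you would pass "/var/log".
pct, free = log_partition_usage("/")
print(f"{pct:.1f}% used, {free} bytes free")
if pct > 90.0:  # arbitrary warning threshold
    print("log partition nearly full -- rotate or truncate logs")
```
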



On 21/12/16 06:35, Syahrul Sazli Shaharir wrote:

On 2016-12-20 17:53, Wei ZHOU wrote:

Hi Syahrul,

Could you upload the /var/log/cloud.log ?


Sure:-

Working router VM: http://pastebin.com/hwwk86ve

Non-working router VM: http://pastebin.com/G4nv09ab

Thanks.



-Wei

2016-12-20 3:18 GMT+01:00 Syahrul Sazli Shaharir :


On 2016-12-19 18:10, Syahrul Sazli Shaharir wrote:


On 2016-12-19 17:03, Linas Žilinskas wrote:


From the logs it doesn't seem that the script times out. "Execution is
successful", so it manages to pass the data over the socket.

I guess the systemvm just doesn't configure itself for some reason.



You are right, I was able to enter the router VM console at some point
during the timeout loops, and able to capture syslog output during the
loop:-

http://pastebin.com/n37aHeSa



I restarted another network, and that network's router VM was able to be
recreated, even on the same host as the failed network (and both networks
are exactly the same configuration; only the VLAN & subnet are different).
Comparing the two syslog outputs during boot shows that the problematic
network's router VM self-configuration got stuck in vm_dhcp_entry.json.

1. Working network router VM : http://pastebin.com/Y6zpDa6M
2. Non-working network router VM : http://pastebin.com/jzfGMGQB

Thanks.




Also, in my personal tests, I noticed some different behaviour with
different kernels. Don't remember the specifics right now, but on 
some
combinations (qemu / kernel) the socket acted differently. For 
example

the data was sent over the socket, but wasn't visible inside the VM.
Other times the socket would be stuck from the host side.

So I would suggest testing different kernels (3.x, 4.4.x, 4.8.x) or
trying to log in to the system VM and see what's happening from inside.



Will do this next and feedback the results here.

Thanks for your help! :)


On 12/16/16 03:46, Syahrul Sazli Shaharir wrote:


On 2016-12-16 11:27, Syahrul Sazli Shaharir wrote:

On Wed, 26 Oct 2016, Linas Žilinskas wrote:

So after some investigation I've found out that qemu 2.3.0 is indeed
broken, at least the way CS uses the qemu chardev/socket.

Not sure in which specific version it happened, but it was fixed in
2.4.0-rc3, specifically noting that CloudStack 4.2 was not working.

qemu git commit: 4bf1cb03fbc43b0055af60d4ff093d6894aa4338

Also attaching the patch from that commit.

For our own purposes I've included the patch in the qemu-kvm-ev
package (2.3.0) and all is well.

Hi,

I am facing the exact same issue on latest Cloudstack 4.9.0.1, on
latest CentOS 7.3.1611, with latest qemu-kvm-ev-2.6.0-27.1.el7
package.

The issue initially surfaced following a heartbeat-induced reset of
all hosts, when it was on CS 4.8 @ CentOS 7.0 and stock
qemu-kvm-1.5.3. Since then, the patchviasocket.pl/py timeouts have
persisted for 1 out of 4 router VMs/networks, even after upgrading to
the latest code. (I have checked the qemu-kvm-ev-2.6.0-27.1.el7 source,
and the patched code is pretty much still intact, as per the
2.4.0-rc3 commit.)

Any help would be greatly appreciated.

Thanks.

(Attached are some debug logs from the host's agent.log)



Here are the debug logs as mentioned: http://pastebin.com/yHdsMNzZ

Thanks.

--sazli


On 2016-10-20 09:59, Linas Žilinskas wrote:

Hi.

We have made an upgrade to 4.9.

Custom-built packages with our own patches, which in my mind (I'm the only
one patching those) should not affect the issue I'll describe.

I'm not sure whether we didn't notice it before, or whether it's actually
related to something in 4.9.

Basically our system VMs were unable to be patched via the qemu socket.
The script simply errored out with a timeout while trying to push the
data to the socket.

Executing it manually (with the cmd line from the logs) gave the same
result. I even tried the old Perl variant, which also had the same result.

So finally we found out that this issue happens only on our HVs which run
qemu 2.3.0, from the CentOS 7 special interest virtualization repo. Other
ones that run qemu 1.5, from the official repos, can patch the system VMs
fine.
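The patching mechanism under discussion amounts to writing the patch payload to the VM's QEMU character-device socket; when the chardev misbehaves, that write blocks and the script times out. A minimal, self-contained sketch of the pattern, with a plain Unix socket standing in for the QEMU chardev (the socket path and payload text are made up for illustration, not CloudStack's real values):

```python
import os
import socket
import tempfile
import threading
import time

# Stand-in for the qemu chardev socket path (hypothetical, not a real VM path).
SOCK_PATH = os.path.join(tempfile.mkdtemp(), "vm.agent")

def fake_chardev(received):
    """Toy 'qemu chardev': accept one connection and read the payload."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    srv.listen(1)
    conn, _ = srv.accept()
    received.append(conn.recv(4096))
    conn.close()
    srv.close()

def patch_via_socket(path, payload, timeout=2.0):
    """Mimic patchviasocket: push the payload, failing fast instead of hanging."""
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.settimeout(timeout)      # a broken chardev surfaces as socket.timeout
    for _ in range(50):             # wait briefly for the listener to appear
        try:
            client.connect(path)
            break
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(0.05)
    client.sendall(payload)
    client.close()

received = []
listener = threading.Thread(target=fake_chardev, args=(received,))
listener.start()
patch_via_socket(SOCK_PATH, b"cmdline:template=domP type=secstorage ...")
listener.join()
print(received[0][:8])  # b'cmdline:'
```

With a qemu affected by the chardev bug fixed in 2.4.0-rc3, the `sendall` side blocks until the timeout fires, which is exactly the symptom reported here.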

So I'm wondering if anyone has tested 4.9 with KVM and qemu >= 2.x? Maybe
it's something else special in our setup, e.g. we're running the HVs from
a preconfigured netboot image (PXE), but that applies to all of them,
including those with qemu 1.5, so I have no idea.

Linas Žilinskas
Head of Development

[GitHub] cloudstack issue #1711: XenServer 7 Support

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1711
  
Trillian test result (tid-694)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 7
Total time taken: 35068 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1711-t694-xenserver-65sp1.zip
Test completed. 41 look ok, 7 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_05_rvpc_multi_tiers | `Failure` | 565.22 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1372.16 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 572.47 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 717.84 | test_privategw_acl.py
ContextSuite context=TestRVPCSite2SiteVpn>:setup | `Error` | 0.00 | test_vpc_vpn.py
ContextSuite context=TestVPCRedundancy>:teardown | `Error` | 877.95 | test_vpc_redundant.py
test_06_download_detached_volume | `Error` | 20.41 | test_volumes.py
ContextSuite context=TestTemplates>:setup | `Error` | 364.60 | test_templates.py
test_01_primary_storage_iscsi | `Error` | 38.48 | test_primary_storage.py
ContextSuite context=TestListIdsParams>:setup | `Error` | 0.00 | test_list_ids_parameter.py
test_01_vpc_site2site_vpn | Success | 366.86 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 156.64 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 370.50 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 686.64 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 924.77 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 1072.26 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.65 | test_volumes.py
test_08_resize_volume | Success | 116.33 | test_volumes.py
test_07_resize_fail | Success | 116.10 | test_volumes.py
test_05_detach_volume | Success | 100.28 | test_volumes.py
test_04_delete_attached_volume | Success | 15.23 | test_volumes.py
test_03_download_attached_volume | Success | 20.31 | test_volumes.py
test_02_attach_volume | Success | 15.81 | test_volumes.py
test_01_create_volume | Success | 393.00 | test_volumes.py
test_03_delete_vm_snapshots | Success | 280.21 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 216.46 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 130.78 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 257.75 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.69 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.24 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 66.05 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.10 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.15 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 10.16 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.22 | test_vm_life_cycle.py
test_01_stop_vm | Success | 25.26 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 151.05 | test_templates.py
test_01_create_template | Success | 65.59 | test_templates.py
test_10_destroy_cpvm | Success | 226.70 | test_ssvm.py
test_09_destroy_ssvm | Success | 199.05 | test_ssvm.py
test_08_reboot_cpvm | Success | 151.62 | test_ssvm.py
test_07_reboot_ssvm | Success | 144.01 | test_ssvm.py
test_06_stop_cpvm | Success | 167.09 | test_ssvm.py
test_05_stop_ssvm | Success | 138.92 | test_ssvm.py
test_04_cpvm_internals | Success | 1.11 | test_ssvm.py
test_03_ssvm_internals | Success | 3.58 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 26.53 | test_snapshots.py
test_04_change_offering_small | Success | 116.02 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.08 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.13 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.18 | test_secondary_storage.py
test_01_scale_vm | Success | 5.18 | test_scale_vm.py
test_09_reboot_router | Success | 60.44 | test_routers.py
test_08_start_router | Success | 55.42 | test_routers.py
test_07_stop_router | Success | 15.26 | test_routers.py
test_06_router_advanced | Success | 0.06 | test_routers.py
test_05_router_basic | Success | 0.04 | test_routers.py

[GitHub] cloudstack issue #977: [4.9] CLOUDSTACK-8746: VM Snapshotting implementation...

2016-12-21 Thread kiwiflyer
Github user kiwiflyer commented on the issue:

https://github.com/apache/cloudstack/pull/977
  
@blueorangutan test




[GitHub] cloudstack issue #977: [4.9] CLOUDSTACK-8746: VM Snapshotting implementation...

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/977
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-407




[GitHub] cloudstack issue #1846: CLOUDSTACK-9688: Fix failing smoke tests

2016-12-21 Thread serg38
Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1846
  
LGTM




[GitHub] cloudstack issue #1844: CLOUDSTACK-9668 : disksizeallocated of PrimaryStorag...

2016-12-21 Thread serg38
Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1844
  
LGTM




[GitHub] cloudstack issue #977: [4.9] CLOUDSTACK-8746: VM Snapshotting implementation...

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/977
  
@kiwiflyer a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.




[GitHub] cloudstack issue #977: [4.9] CLOUDSTACK-8746: VM Snapshotting implementation...

2016-12-21 Thread kiwiflyer
Github user kiwiflyer commented on the issue:

https://github.com/apache/cloudstack/pull/977
  
@blueorangutan package




[GitHub] cloudstack issue #1839: CLOUDSTACK-9683: system.vm.default.hypervisor will p...

2016-12-21 Thread borisstoyanov
Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1839
  
LGTM based on code review and test results; there seem to be no new test 
failures present.




[GitHub] cloudstack pull request #1848: CLOUDSTACK-9693 Cluster View - Status symbol ...

2016-12-21 Thread rashmidixit
GitHub user rashmidixit opened a pull request:

https://github.com/apache/cloudstack/pull/1848

CLOUDSTACK-9693 Cluster View - Status symbol does not change based on 
Cluster state

Refer to 
[CLOUDSTACK-9693](https://issues.apache.org/jira/browse/CLOUDSTACK-9693) for 
more details

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Accelerite/cloudstack CLOUDSTACK-9693

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1848.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1848


commit 3970a3769f3da6eeb6bf4aabafd41dae09f9f698
Author: Sanket Thite 
Date:   2016-07-11T10:23:02Z

CLOUDSTACK-9693 Cluster View - Status symbol does not change to 
Unmanaged/Disabled based on cluster status






Re: [DISCUSS][KVM][BUG] Detaching of volume fails on KVM

2016-12-21 Thread Wei ZHOU
Hi Rohit,

I do not think it is an issue in CloudStack.

We have had this issue for a long time, and it still exists now.

I ran a test just now, using virsh commands, not CloudStack.

this is the working one===

root@KVM015:~# virsh domblklist 39
Target     Source
------------------------------------------------
vda        /mnt/1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e/75d35578-ed6d-4019-8239-c2d3ff87af25
hdc        -

root@KVM015:~# virsh attach-disk 39 /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841 vdb
Disk attached successfully

root@KVM015:~#
root@KVM015:~# virsh domblklist 39
Target     Source
------------------------------------------------
vda        /mnt/1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e/75d35578-ed6d-4019-8239-c2d3ff87af25
vdb        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841
hdc        -

root@KVM015:~# virsh detach-disk 39 vdb
Disk detached successfully

root@KVM015:~# virsh domblklist 39
Target     Source
------------------------------------------------
vda        /mnt/1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e/75d35578-ed6d-4019-8239-c2d3ff87af25
hdc        -

this is not working


root@KVM015:~# virsh domblklist 26
Target     Source
------------------------------------------------
vda        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/2311416f-b778-4490-8365-cfbad2214842
vdb        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841
hdc        -

root@KVM015:~# virsh detach-disk i-2-7585-VM vdb
Disk detached successfully

root@KVM015:~# virsh domblklist 26
Target     Source
------------------------------------------------
vda        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/2311416f-b778-4490-8365-cfbad2214842
vdb        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841
hdc        -

root@KVM015:~# virsh detach-disk i-2-7585-VM vdb
Disk detached successfully

root@KVM015:~# virsh domblklist 26
Target     Source
------------------------------------------------
vda        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/2311416f-b778-4490-8365-cfbad2214842
vdb        /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841
hdc        -

root@KVM015:~# virsh attach-disk 26 /mnt/f773b66d-fd8c-3576-aa37-e3f0e685b183/ba11047c-ce69-4982-892a-38b343b4f841 vdb
error: Failed to attach disk
error: operation failed: target vdb already exists

end==

I believe this is highly related to the OS and configuration in the VM, not
the hypervisor or CloudStack. In my testing I used Ubuntu 12.04 as the
hypervisor; it works if the guest OS is CentOS 7/CentOS 6/Ubuntu 16.04, but
does not work if it is Ubuntu 12.04.
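The symptom above (a detach that reports success but leaves 'vdb' in the domain) can be checked directly from a `virsh dumpxml` dump. A small stand-alone sketch using only the standard library; the domain XML here is a trimmed illustrative example, not a dump from a real VM:

```python
import xml.etree.ElementTree as ET
from collections import Counter

def disk_targets(domain_xml):
    """List the <target dev=...> values of all disks in a domain XML dump."""
    root = ET.fromstring(domain_xml)
    return [d.get("dev") for d in root.findall("./devices/disk/target")]

def duplicated_targets(domain_xml):
    """Target devs that appear more than once; a re-attach of these fails."""
    return [dev for dev, n in Counter(disk_targets(domain_xml)).items() if n > 1]

# Trimmed, hypothetical dump showing a leftover 'vdb' entry after a detach.
DOMAIN_XML = """
<domain type='kvm'>
  <devices>
    <disk type='file' device='disk'><target dev='vda' bus='virtio'/></disk>
    <disk type='file' device='disk'><target dev='vdb' bus='virtio'/></disk>
    <disk type='file' device='disk'><target dev='vdb' bus='virtio'/></disk>
  </devices>
</domain>
"""

print(duplicated_targets(DOMAIN_XML))  # ['vdb']
```

In practice one would feed it the output of `virsh dumpxml <domain>` after the detach to confirm whether the disk entry really went away.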

-Wei


2016-12-21 12:00 GMT+01:00 Rohit Yadav :

> All,
>
>
> Based on results from recent Trillian test runs [1], I've discovered that
> on KVM (CentOS7), detaching a volume fails to update the virt/domain
> XML and fails to remove the disk entry from it. So, while the agent and
> cloudstack-mgmt server report success, the entry in the XML is not
> removed. When the volume is attached again, we see an error like:
>
>
> Failed to attach volume xxx to VM VM-; org.libvirt.LibvirtException:
> XML error: target 'vdb' duplicated for disk sources
> '/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'
> and
> '/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'
>
> This is seen in agent logs:
>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: DEBUG
> [kvm.storage.KVMStorageProcessor] (agentRequest-Handler-2:) (logid:0648ae70)
> Detaching device: <disk device='disk' type='file'>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: <driver name='qemu' type='qcow2' cache='none' />
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: <source file='/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'/>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: <target dev='vdb' bus='virtio'/>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: </disk>
>
> While, after the above completes, this is still seen in the VM's dumped XML:
>
> <disk type='file' device='disk'>
>   <driver name='qemu' type='qcow2' cache='none'/>
>   <source file='/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'/>
>   <target dev='vdb' bus='virtio'/>
>   <serial>af85ff7ea45243de8c6b</serial>
>   <address type='pci' ... function='0x0'/>
> </disk>
> Steps to reproduce:
> 1. Deploy a VM, create a data volume disk and attach to the VM.
> 2. Detach the volume.
> 3. Attach the volume to the same VM again; the exception is caught.
>
> Thoughts, comments?
>
> [1] https://github.com/apache/cloudstack/pull/1837
> Regards.
>
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


[GitHub] cloudstack issue #1846: CLOUDSTACK-9688: Fix failing smoke tests

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1846
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been 
kicked to run smoke tests




[GitHub] cloudstack issue #1846: CLOUDSTACK-9688: Fix failing smoke tests

2016-12-21 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1846
  
@blueorangutan test




[GitHub] cloudstack issue #1846: CLOUDSTACK-9688: Fix failing smoke tests

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1846
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-406




Re: [DISCUSS][KVM][BUG] Detaching of volume fails on KVM

2016-12-21 Thread Wido den Hollander

> On 21 December 2016 at 12:00, Rohit Yadav wrote:
> 
> 
> All,
> 
> 
> Based on results from recent Trillian test runs [1], I've discovered that on 
> KVM (CentOS7), detaching a volume fails to update the virt/domain XML 
> and fails to remove the disk entry from it. So, while the agent and 
> cloudstack-mgmt server report success, the entry in the XML is not removed. 
> When the volume is attached again, we see an error like:
> 
> 
> Failed to attach volume xxx to VM VM-; org.libvirt.LibvirtException: XML 
> error: target 'vdb' duplicated for disk sources 
> '/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'
> and 
> '/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'
>
> This is seen in agent logs:
> 
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: DEBUG 
> [kvm.storage.KVMStorageProcessor] (agentRequest-Handler-2:) (logid:0648ae70) 
> Detaching device: <disk device='disk' type='file'>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: <driver name='qemu' type='qcow2' cache='none' />
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: <source file='/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'/>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: <target dev='vdb' bus='virtio'/>
> Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: </disk>
> 
> While, after the above completes, this is still seen in the VM's dumped XML:
>
> <disk type='file' device='disk'>
>   <driver name='qemu' type='qcow2' cache='none'/>
>   <source file='/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'/>
>   <target dev='vdb' bus='virtio'/>
>   <serial>af85ff7ea45243de8c6b</serial>
>   <address type='pci' ... function='0x0'/>
> </disk>
> Steps to reproduce:
> 1. Deploy a VM, create a data volume disk and attach to the VM.
> 2. Detach the volume.
> 3. Attach the volume to the same VM again; the exception is caught.
>
> Thoughts, comments?
>
> [1] https://github.com/apache/cloudstack/pull/1837
> Regards.
> 

Isn't this a Qemu bug? That it thinks it detached it, but it doesn't. Or a 
libvirt thing?

> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>   
>  
>


[GitHub] cloudstack pull request #1847: CLOUDSTACK-9691: Fixed unhandled exception in...

2016-12-21 Thread anshul1886
GitHub user anshul1886 opened a pull request:

https://github.com/apache/cloudstack/pull/1847

CLOUDSTACK-9691: Fixed unhandled exception in the list snapshot command when 
a primary store related to it is deleted

@mike-tutkowski After adding support for snapshots on SolidFire, there are many 
places that are prone to these NullPointerExceptions, resulting in various 
issues. The root cause of these issues is that we get the primary storage 
associated with a snapshot and then figure out how to handle it, but if that 
store has been deleted this results in a NullPointerException.

Should we handle these as issues are found, or could there be another way to 
fix all of them?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/anshul1886/cloudstack-1 CLOUDSTACK-9691

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1847.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1847


commit 8821a7d7c5f085350c5a0d4942f88683de25b4cf
Author: Anshul Gangwar 
Date:   2016-08-30T06:31:20Z

CLOUDSTACK-9691: Fixed unhandled exception in the list snapshot command when
a primary store related to it is deleted






Re: [DISCUSS][KVM][BUG] Detaching of volume fails on KVM

2016-12-21 Thread Rohit Yadav
All,


I've tested this against CentOS6 and was able to reproduce this issue as well.


Regards.


From: Rohit Yadav 
Sent: 21 December 2016 16:30:09
To: dev@cloudstack.apache.org
Subject: [DISCUSS][KVM][BUG] Detaching of volume fails on KVM

All,


Based on results from recent Trillian test runs [1], I've discovered that on 
KVM (CentOS7), detaching a volume fails to update the virt/domain XML and 
fails to remove the disk entry from it. So, while the agent and cloudstack-mgmt 
server report success, the entry in the XML is not removed. When the volume is 
attached again, we see an error like:


Failed to attach volume xxx to VM VM-; org.libvirt.LibvirtException: XML 
error: target 'vdb' duplicated for disk sources 
'/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'
and 
'/mnt/8a70be4e-4c3c-38e5-aea2-4b38fef83fd5/af85ff7e-a452-43de-8c6b-948dc44aae21'

This is seen in agent logs:

Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: DEBUG 
[kvm.storage.KVMStorageProcessor] (agentRequest-Handler-2:) (logid:0648ae70) 
Detaching device: 
Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: 
Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: 
Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: 
Dec 21 10:46:35 pr1837-t692-kvm-centos7-kvm2 sh[27400]: 

However, after the above completes, this is still seen in the VM's dumped XML:

  
  
  
  
  af85ff7ea45243de8c6b
  
  

Steps to reproduce:
1. Deploy a VM, create a data volume disk and attach it to the VM.
2. Detach the volume.
3. Attach the volume to the same VM again; the exception is caught.

Thoughts, comments?

[1] https://github.com/apache/cloudstack/pull/1837
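For illustration, here is a minimal standalone check (not CloudStack code; the class and method names are hypothetical) that parses a libvirt domain XML dump and reports disk target devices appearing more than once, as with 'vdb' above:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateDiskTargetCheck {
    // Returns target dev names (e.g. "vdb") that occur more than once
    // in a libvirt domain XML dump, i.e. the duplicated-disk condition.
    static List<String> duplicateTargets(String domainXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(domainXml.getBytes(StandardCharsets.UTF_8)));
        NodeList targets = doc.getElementsByTagName("target");
        Set<String> seen = new HashSet<>();
        List<String> dups = new ArrayList<>();
        for (int i = 0; i < targets.getLength(); i++) {
            String dev = ((Element) targets.item(i)).getAttribute("dev");
            // seen.add() returns false when dev was already present
            if (!dev.isEmpty() && !seen.add(dev) && !dups.contains(dev)) {
                dups.add(dev);
            }
        }
        return dups;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<domain><devices>"
                + "<disk><target dev='vda' bus='virtio'/></disk>"
                + "<disk><target dev='vdb' bus='virtio'/></disk>"
                + "<disk><target dev='vdb' bus='virtio'/></disk>"
                + "</devices></domain>";
        System.out.println(duplicateTargets(xml)); // prints: [vdb]
    }
}
```

Running such a check against `virsh dumpxml <vm>` output before re-attaching would confirm whether the stale entry is still present.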
Regards.


rohit.ya...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue







[GitHub] cloudstack issue #1846: CLOUDSTACK-9688: Fix failing smoke tests

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1846
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.




[GitHub] cloudstack pull request #1846: CLOUDSTACK-9688: Fix failing smoke tests

2016-12-21 Thread rhtyd
GitHub user rhtyd opened a pull request:

https://github.com/apache/cloudstack/pull/1846

CLOUDSTACK-9688: Fix failing smoke tests

Fixes failing smoke tests due to environment issues or corner cases:
- Fixes NPE in Template Manager

@blueorangutan package

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shapeblue/cloudstack 49smoketest-fixes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1846.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1846


commit 6f98fcd3a5b1b498a9c1f53e85522282b1cc2b28
Author: Rohit Yadav 
Date:   2016-12-21T06:15:20Z

CLOUDSTACK-9688: Fix failing smoke tests

Fixes failing smoke tests due to environment issues or corner cases:
- Fixes NPE in Template Manager

Signed-off-by: Rohit Yadav 








Request for Jira access - ticket assignment permissions

2016-12-21 Thread Adwait Patankar
Hi,



I'm trying to assign a bug in Jira to myself. However, it doesn't look like I 
have the appropriate permissions.

Can someone please grant the assignment permissions to me? My username is 
"adwaitpatankar".



Regards,

Adwait



Adwait Patankar

Principal Product Engineer | CloudPlatform | 
www.accelerite.com




DISCLAIMER
==
This e-mail may contain privileged and confidential information which is the 
property of Accelerite, a Persistent Systems business. It is intended only for 
the use of the individual or entity to which it is addressed. If you are not 
the intended recipient, you are not authorized to read, retain, copy, print, 
distribute or use this message. If you have received this communication in 
error, please notify the sender and delete all copies of this message. 
Accelerite, a Persistent Systems business does not accept any liability for 
virus infected mails.


[GitHub] cloudstack issue #1804: CLOUDSTACK-9639: Unable to create shared network wit...

2016-12-21 Thread murali-reddy
Github user murali-reddy commented on the issue:

https://github.com/apache/cloudstack/pull/1804
  
LGTM.

A small side note: a user can create a guest VM with NICs in two networks 
with the same CIDR but different VLAN isolation, and mess up routing.




[GitHub] cloudstack issue #1837: [4.9] Smoketest Health

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1837
  
@murali-reddy a Trillian-Jenkins test job (centos7 mgmt + xenserver-65sp1) 
has been kicked to run smoke tests




Re: [DISCUSS] Optional new SystemVM Template upgrade for 4.9 LTS release

2016-12-21 Thread Boris Stoyanov
Hi all,

I’ve just completed upgrade tests to 4.9.1 including the new system-vm 
template. I’ve covered the following paths:

4.3 - 4.9.1
-XenServer 6.2 Advanced and Basic zone setup

4.5.2.2 - 4.9.1
-KVM CentOS 6.8 Advanced and Basic zone setup
-KVM CentOS 7.2 Advanced and Basic zone setup
-XenServer 6.5SP1 Advanced and Basic zone setup
-VMWare 5.5u3 Advanced and Basic zone setup

4.6.2 - 4.9.1
-VMWare 5.5u3 Advanced and Basic zone setup

Using the new systemvm template, I was able to do basic VR lifecycle operations 
like creating/destroying networks and VMs.

We're planning an additional round of tests on the new template, including deeper 
networking-related tests, by the end of the week; for now it looks good on all 
the hypervisors.


Thanks,
Boris Stoyanov


boris.stoya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 

On Dec 9, 2016, at 11:02 AM, Wido den Hollander 
> wrote:


On 9 December 2016 at 9:31, Rohit Yadav wrote:


All,


We've been using the same systemvm template since the 4.6.x releases, which is 
more than a year old now. Over the last year, there have been several package 
updates, especially security updates, published for Debian Wheezy 7 (which is 
the base of our systemvmtemplate).


In our efforts to release a high-quality LTS (4.9.1.0) release, we've had 
discussions on security@ about developing and publishing a new systemvm 
template which should be compatible with ACS 4.6+ releases, while remaining 
optional for users. In our release notes, we would mention this, the 
installation/upgrade steps, and note that the new template is recommended for 
users but not mandatory.


Thoughts, questions?


Seems like a good thing to do. An updated SSVM template is really needed due to 
the various updates.


For the template to work on master (4.10+), the systemvmtemplate has an 
additional package (compared to the 4.6-based systemvmtemplate), `qemu-guest-agent` 
from CLOUDSTACK-8715, and my local tests show that this does not break anything.


Great! Qemu Guest Agent is nice :)


I've built and published the template at the following location, which should be 
available until the end of the year. Please help test these updated templates:


http://188.166.197.146/49lts


Regards.




[GitHub] cloudstack issue #1837: [4.9] Smoketest Health

2016-12-21 Thread murali-reddy
Github user murali-reddy commented on the issue:

https://github.com/apache/cloudstack/pull/1837
  
 @blueorangutan test centos7 xenserver-65sp1




[GitHub] cloudstack issue #1837: [4.9] Smoketest Health

2016-12-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1837
  
@murali-reddy a Trillian-Jenkins test job (centos7 mgmt + vmware-55u3) has 
been kicked to run smoke tests




[GitHub] cloudstack issue #1837: [4.9] Smoketest Health

2016-12-21 Thread murali-reddy
Github user murali-reddy commented on the issue:

https://github.com/apache/cloudstack/pull/1837
  
@blueorangutan test centos7 vmware-55u3




[GitHub] cloudstack pull request #1845: CLOUDSTACK-9689: [Hyper-V] Fixed VM console i...

2016-12-21 Thread anshul1886
GitHub user anshul1886 opened a pull request:

https://github.com/apache/cloudstack/pull/1845

CLOUDSTACK-9689: [Hyper-V] Fixed VM console is freezing sometimes and becomes unresponsive

Changed handling of socket timeout

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/anshul1886/cloudstack-1 CLOUDSTACK-9689

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1845.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1845


commit ee92bee3482633c18fa15c62348995d05bdbea73
Author: Anshul Gangwar 
Date:   2016-06-07T07:30:06Z

CLOUDSTACK-9689: [Hyper-V] Fixed VM console is freezing sometimes and 
becomes unresponsive
Changed handling of socket timeout






[GitHub] cloudstack pull request #1844: CLOUDSTACK-9668 : disksizeallocated of Primar...

2016-12-21 Thread sudhansu7
GitHub user sudhansu7 opened a pull request:

https://github.com/apache/cloudstack/pull/1844

CLOUDSTACK-9668: disksizeallocated of PrimaryStorage is different from the total size of a volume

Update capacity if the current allocated value differs from the used bytes in the DB.



Disksizeallocated of PrimaryStorage is different from the total size of a 
volume.

Steps to reproduce:
1. Create another primary storage (apart from the default) with a storage tag 
(say, tag1).
2. Create a disk offering with the storage tag from step 1.
3. Create a data disk with the above disk offering.
4. Attach the disk to a VM.
5. When the capacity checker thread runs, it updates used_capacity. Note 
down the op_host_capacity details for the pool created in step 1.
6. Detach the disk and destroy the volume.
7. When the capacity checker thread runs again, it does not update used_capacity 
to 0.

Root cause: if all volumes have been removed from the storage pool, the 
capacity checker does not update op_host_capacity.
Resolution: update the capacity if the current allocated value differs from the 
used bytes in the DB.

```java
public void createCapacityEntry(StoragePoolVO storagePool, short capacityType, long allocated) {
    // ...
    } else {
        CapacityVO capacity = capacities.get(0);
        if (capacity.getTotalCapacity() != totalOverProvCapacity
                || allocated != 0L  // <-- the check in question
                || capacity.getCapacityState() != capacityState) {
            // ...
        }
    }
    // ...
}
```
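As a sketch of the proposed change (hypothetical class and method names; not the actual CloudStack CapacityManager code), the difference between the old and fixed conditions can be isolated like this:

```java
public class CapacityEntrySketch {
    // Minimal stand-in for the stored capacity row (hypothetical, not CapacityVO).
    static class CapacityRow {
        long totalCapacity;
        long usedCapacity;
        CapacityRow(long total, long used) { totalCapacity = total; usedCapacity = used; }
    }

    // Old check: skipped the update when the new allocation was 0,
    // leaving a stale used_capacity after the last volume was removed.
    static boolean needsUpdateOld(CapacityRow row, long totalOverProv, long allocated) {
        return row.totalCapacity != totalOverProv || allocated != 0L;
    }

    // Fixed check (per the PR description): update whenever the newly
    // computed allocation differs from what is stored in the DB.
    static boolean needsUpdateFixed(CapacityRow row, long totalOverProv, long allocated) {
        return row.totalCapacity != totalOverProv || allocated != row.usedCapacity;
    }

    public static void main(String[] args) {
        CapacityRow row = new CapacityRow(100L, 20L); // 20 still recorded as used
        long allocated = 0L; // all volumes detached and destroyed
        System.out.println(needsUpdateOld(row, 100L, allocated));   // prints: false (stale row kept)
        System.out.println(needsUpdateFixed(row, 100L, allocated)); // prints: true (row refreshed to 0)
    }
}
```

With the fixed condition, a drop of the allocation to zero is no longer treated as "nothing to do", which is exactly the case described in steps 6-7 above.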



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sudhansu7/cloudstack CLOUDSTACK-9668

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1844.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1844


commit a8ba7c47787afa660327edca5604b1a82ab70aa8
Author: Sudhansu 
Date:   2016-12-21T08:37:47Z

CLOUDSTACK-9668 : disksizeallocated of PrimaryStorage is different from the 
total size of a volume

update capacity if current allocated is different from used bytes in DB.



