[jira] [Commented] (CLOUDSTACK-9720) [VMware] template_spool_ref table is not getting updated with correct template physical size in template_size column.

2017-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958414#comment-15958414
 ] 

ASF subversion and git services commented on CLOUDSTACK-9720:
-------------------------------------------------------------

Commit 4db186ef6ff31574486d2aee4908be9ee022c22a in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=4db186e ]

Merge pull request #1880 from Accelerite/CLOUDSTACK-9720

CLOUDSTACK-9720: [VMware] template_spool_ref table is not getting updated with 
correct template physical size in template_size column. Updated the 
template_spool_ref table with the correct template (VMware - OVA file) size.

* pr/1880:
  CLOUDSTACK-9720: [VMware] template_spool_ref table is not getting updated 
with correct template physical size in template_size column.

Signed-off-by: Rajani Karuturi 


> [VMware] template_spool_ref table is not getting updated with correct 
> template physical size in template_size column.
> -
>
> Key: CLOUDSTACK-9720
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9720
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Template, VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> CloudStack is not updating template_spool_ref table with correct template 
> physical_size in template_size column which leads to incorrect calculation of 
> allocated primary storage.
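For context on what "physical size" means here (an editorial sketch, not the actual CloudStack patch): a VMware template is an OVA file, which is a tar archive, and its physical size is the on-disk size of the packed files. Assuming a local `.ova` file, that size could be computed like this:

```python
import tarfile

def ova_physical_size(ova_path: str) -> int:
    """Sum the sizes of the files packed in an OVA (a tar archive).

    This total is the template's physical size on disk, which can be far
    smaller than the virtual disk capacity. Recording the wrong value in
    template_spool_ref.template_size skews allocated-storage accounting.
    """
    with tarfile.open(ova_path) as tar:
        return sum(member.size for member in tar.getmembers() if member.isfile())
```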



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9720) [VMware] template_spool_ref table is not getting updated with correct template physical size in template_size column.

2017-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958407#comment-15958407
 ] 

ASF subversion and git services commented on CLOUDSTACK-9720:
-------------------------------------------------------------

Commit 8676b202767d8e8d94e6891a23e0261b07afd2af in cloudstack's branch 
refs/heads/master from [~sureshkumar.anaparti]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=8676b20 ]

CLOUDSTACK-9720: [VMware] template_spool_ref table is not getting updated with 
correct template physical size in template_size column.


> [VMware] template_spool_ref table is not getting updated with correct 
> template physical size in template_size column.
> -
>
> Key: CLOUDSTACK-9720
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9720
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Template, VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> CloudStack is not updating template_spool_ref table with correct template 
> physical_size in template_size column which leads to incorrect calculation of 
> allocated primary storage.





[jira] [Commented] (CLOUDSTACK-9783) Improve metrics view performance

2017-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958404#comment-15958404
 ] 

ASF subversion and git services commented on CLOUDSTACK-9783:
-------------------------------------------------------------

Commit 6548839417013f58d9ed05a6550c74a057039134 in cloudstack's branch 
refs/heads/4.9 from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=6548839 ]

Merge pull request #1944 from shapeblue/4.9-metrics-enhancement

CLOUDSTACK-9783: Improve metrics view performance

This improves the metrics view feature by improving the rendering performance 
of metrics view tables, re-implementing the logic at the backend with data 
served via APIs. In large environments, the older implementation would 
make several API calls that increase both network and database load.

List of APIs introduced to improve performance by re-implementing the 
frontend logic at the backend:

listClustersMetrics
listHostsMetrics
listInfrastructure
listStoragePoolsMetrics
listVMsMetrics
listVolumesMetrics
listZonesMetrics

Pinging for review - @abhinandanprateek @DaanHoogland @borisstoyanov @karuturi 
@rashmidixit

Marvin test results:

=== TestName: test_list_clusters_metrics | Status : SUCCESS ===

=== TestName: test_list_hosts_metrics | Status : SUCCESS ===

=== TestName: test_list_infrastructure_metrics | Status : SUCCESS ===

=== TestName: test_list_pstorage_metrics | Status : SUCCESS ===

=== TestName: test_list_vms_metrics | Status : SUCCESS ===

=== TestName: test_list_volumes_metrics | Status : SUCCESS ===

=== TestName: test_list_zones_metrics | Status : SUCCESS ===

* pr/1944:
  CLOUDSTACK-9783: Improve metrics view performance

Signed-off-by: Rajani Karuturi 


> Improve metrics view performance
> 
>
> Key: CLOUDSTACK-9783
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9783
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0, 4.9.3.0
>
>
> Metrics view is a pure frontend feature, where several API calls are made to 
> generate the metrics view tabular data. In very large environments, rendering 
> of these tables can take a lot of time, especially when there is high 
> latency. The improvement task is to reimplement this feature by moving the 
> logic to backend so metrics calculations happen at the backend and final 
> result can be served by a single API request.
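The backend-aggregation idea can be sketched as follows (hypothetical Python with invented field names; the real request and response fields are defined by the CloudStack API): the server computes each metrics row once and returns the whole table in a single response, instead of the browser deriving it from many raw API calls.

```python
def list_hosts_metrics(hosts):
    """Compute per-host metric rows server-side so the client needs one call.

    `hosts` is a list of dicts of raw counters; the field names here are
    illustrative only, not the actual CloudStack API response fields.
    """
    return [
        {
            "name": host["name"],
            "cpu_used_pct": round(100.0 * host["cpu_used"] / host["cpu_total"], 1),
            "mem_used_pct": round(100.0 * host["mem_used"] / host["mem_total"], 1),
        }
        for host in hosts
    ]
```

With N hosts, the old client-side approach cost on the order of N follow-up requests; this shape serves the same table in one round trip.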



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9783) Improve metrics view performance

2017-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958397#comment-15958397
 ] 

ASF subversion and git services commented on CLOUDSTACK-9783:
-------------------------------------------------------------

Commit 402253504e9520104caf9fbc1317042f2fd89474 in cloudstack's branch 
refs/heads/4.9 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=4022535 ]

CLOUDSTACK-9783: Improve metrics view performance

This improves the metrics view feature by improving the rendering performance
of metrics view tables, reimplementing the logic at the backend with data
served via APIs. In large environments, the older implementation would
make several API calls that increase both network and database load.

List of APIs introduced for improving the performance:

listClustersMetrics
listHostsMetrics
listInfrastructure
listStoragePoolsMetrics
listVMsMetrics
listVolumesMetrics
listZonesMetrics

Signed-off-by: Rohit Yadav 


> Improve metrics view performance
> 
>
> Key: CLOUDSTACK-9783
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9783
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0, 4.9.3.0
>
>
> Metrics view is a pure frontend feature, where several API calls are made to 
> generate the metrics view tabular data. In very large environments, rendering 
> of these tables can take a lot of time, especially when there is high 
> latency. The improvement task is to reimplement this feature by moving the 
> logic to backend so metrics calculations happen at the backend and final 
> result can be served by a single API request.





[jira] [Commented] (CLOUDSTACK-9720) [VMware] template_spool_ref table is not getting updated with correct template physical size in template_size column.

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958393#comment-15958393
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9720:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1880


> [VMware] template_spool_ref table is not getting updated with correct 
> template physical size in template_size column.
> -
>
> Key: CLOUDSTACK-9720
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9720
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Template, VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> CloudStack is not updating template_spool_ref table with correct template 
> physical_size in template_size column which leads to incorrect calculation of 
> allocated primary storage.





[jira] [Commented] (CLOUDSTACK-9720) [VMware] template_spool_ref table is not getting updated with correct template physical size in template_size column.

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958390#comment-15958390
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9720:


Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1880
  
The above test failures are not related to this PR. They are also failing 
for most of the other PRs.


> [VMware] template_spool_ref table is not getting updated with correct 
> template physical size in template_size column.
> -
>
> Key: CLOUDSTACK-9720
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9720
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Template, VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> CloudStack is not updating template_spool_ref table with correct template 
> physical_size in template_size column which leads to incorrect calculation of 
> allocated primary storage.





[jira] [Commented] (CLOUDSTACK-9783) Improve metrics view performance

2017-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958389#comment-15958389
 ] 

ASF subversion and git services commented on CLOUDSTACK-9783:
-------------------------------------------------------------

Commit 5c0979fff5fba6cca07998ef76e01897d1218747 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=5c0979f ]

Merge release branch 4.9 to master

* 4.9:
  CLOUDSTACK-9783: Improve metrics view performance


> Improve metrics view performance
> 
>
> Key: CLOUDSTACK-9783
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9783
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0, 4.9.3.0
>
>
> Metrics view is a pure frontend feature, where several API calls are made to 
> generate the metrics view tabular data. In very large environments, rendering 
> of these tables can take a lot of time, especially when there is high 
> latency. The improvement task is to reimplement this feature by moving the 
> logic to backend so metrics calculations happen at the backend and final 
> result can be served by a single API request.





[jira] [Commented] (CLOUDSTACK-9783) Improve metrics view performance

2017-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958385#comment-15958385
 ] 

ASF subversion and git services commented on CLOUDSTACK-9783:
-------------------------------------------------------------

Commit 6548839417013f58d9ed05a6550c74a057039134 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=6548839 ]

Merge pull request #1944 from shapeblue/4.9-metrics-enhancement

CLOUDSTACK-9783: Improve metrics view performance

This improves the metrics view feature by improving the rendering performance 
of metrics view tables, re-implementing the logic at the backend with data 
served via APIs. In large environments, the older implementation would 
make several API calls that increase both network and database load.

List of APIs introduced to improve performance by re-implementing the 
frontend logic at the backend:

listClustersMetrics
listHostsMetrics
listInfrastructure
listStoragePoolsMetrics
listVMsMetrics
listVolumesMetrics
listZonesMetrics

Pinging for review - @abhinandanprateek @DaanHoogland @borisstoyanov @karuturi 
@rashmidixit

Marvin test results:

=== TestName: test_list_clusters_metrics | Status : SUCCESS ===

=== TestName: test_list_hosts_metrics | Status : SUCCESS ===

=== TestName: test_list_infrastructure_metrics | Status : SUCCESS ===

=== TestName: test_list_pstorage_metrics | Status : SUCCESS ===

=== TestName: test_list_vms_metrics | Status : SUCCESS ===

=== TestName: test_list_volumes_metrics | Status : SUCCESS ===

=== TestName: test_list_zones_metrics | Status : SUCCESS ===

* pr/1944:
  CLOUDSTACK-9783: Improve metrics view performance

Signed-off-by: Rajani Karuturi 


> Improve metrics view performance
> 
>
> Key: CLOUDSTACK-9783
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9783
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0, 4.9.3.0
>
>
> Metrics view is a pure frontend feature, where several API calls are made to 
> generate the metrics view tabular data. In very large environments, rendering 
> of these tables can take a lot of time, especially when there is high 
> latency. The improvement task is to reimplement this feature by moving the 
> logic to backend so metrics calculations happen at the backend and final 
> result can be served by a single API request.





[jira] [Commented] (CLOUDSTACK-9200) Account Resources fail to get cleaned up if a snapshot is in Allocated State

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958381#comment-15958381
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9200:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1282
  
@swill @rhtyd can you review?


> Account Resources fail to get cleaned up if a snapshot is in Allocated State
> 
>
> Key: CLOUDSTACK-9200
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9200
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
>
> If a snapshot (volume) is in the Allocated state (snapshots table), and we 
> delete the account associated with that snapshot, it is removed from the UI but 
> the account resources (snapshots and virtual machines) are not cleaned up 
> because of the failure to delete the snapshot, which is in the Allocated state.
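A hedged sketch of the cleanup behavior this implies (illustrative Python; the function and field names are invented, only the Allocated state comes from the report): a snapshot still in Allocated state has no backing copy on storage, so cleanup can mark it removed instead of failing the whole account cleanup.

```python
def cleanup_snapshots(snapshots, destroy_on_storage):
    """Destroy each snapshot, tolerating ones still in 'Allocated' state.

    An Allocated snapshot was never backed up to storage, so there is
    nothing to destroy there; it can simply be marked removed instead of
    aborting the whole account cleanup. Returns the snapshots that failed.
    """
    failed = []
    for snap in snapshots:
        if snap["state"] == "Allocated":
            snap["removed"] = True  # no storage-side copy to delete
            continue
        try:
            destroy_on_storage(snap)
            snap["removed"] = True
        except Exception:
            failed.append(snap)
    return failed
```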





[jira] [Commented] (CLOUDSTACK-9783) Improve metrics view performance

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958376#comment-15958376
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9783:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1944


> Improve metrics view performance
> 
>
> Key: CLOUDSTACK-9783
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9783
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0, 4.9.3.0
>
>
> Metrics view is a pure frontend feature, where several API calls are made to 
> generate the metrics view tabular data. In very large environments, rendering 
> of these tables can take a lot of time, especially when there is high 
> latency. The improvement task is to reimplement this feature by moving the 
> logic to backend so metrics calculations happen at the backend and final 
> result can be served by a single API request.





[jira] [Commented] (CLOUDSTACK-9783) Improve metrics view performance

2017-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958375#comment-15958375
 ] 

ASF subversion and git services commented on CLOUDSTACK-9783:
-------------------------------------------------------------

Commit 402253504e9520104caf9fbc1317042f2fd89474 in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=4022535 ]

CLOUDSTACK-9783: Improve metrics view performance

This improves the metrics view feature by improving the rendering performance
of metrics view tables, reimplementing the logic at the backend with data
served via APIs. In large environments, the older implementation would
make several API calls that increase both network and database load.

List of APIs introduced for improving the performance:

listClustersMetrics
listHostsMetrics
listInfrastructure
listStoragePoolsMetrics
listVMsMetrics
listVolumesMetrics
listZonesMetrics

Signed-off-by: Rohit Yadav 


> Improve metrics view performance
> 
>
> Key: CLOUDSTACK-9783
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9783
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0, 4.9.3.0
>
>
> Metrics view is a pure frontend feature, where several API calls are made to 
> generate the metrics view tabular data. In very large environments, rendering 
> of these tables can take a lot of time, especially when there is high 
> latency. The improvement task is to reimplement this feature by moving the 
> logic to backend so metrics calculations happen at the backend and final 
> result can be served by a single API request.





[jira] [Commented] (CLOUDSTACK-9783) Improve metrics view performance

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958374#comment-15958374
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9783:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1944
  
merging now


> Improve metrics view performance
> 
>
> Key: CLOUDSTACK-9783
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9783
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0, 4.9.3.0
>
>
> Metrics view is a pure frontend feature, where several API calls are made to 
> generate the metrics view tabular data. In very large environments, rendering 
> of these tables can take a lot of time, especially when there is high 
> latency. The improvement task is to reimplement this feature by moving the 
> logic to backend so metrics calculations happen at the backend and final 
> result can be served by a single API request.





[jira] [Commented] (CLOUDSTACK-9591) VMware dvSwitch Requires a Dummy, Standard vSwitch

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958364#comment-15958364
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9591:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/2022
  
@rhtyd can you check on the failures if they are related?


> VMware dvSwitch Requires a Dummy, Standard vSwitch
> --
>
> Key: CLOUDSTACK-9591
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9591
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.6.2, 4.7.0, 4.7.1, 4.8.0, 4.9.0
>Reporter: John Burwell
>Priority: Minor
>
> When using the VMware dvSwitch, templates fail to register and VMs fail to 
> deploy, with the following secondary storage error:
> createImportSpec error: Host did not have any virtual network defined.
> Defining a dummy, standard vSwitch on the same network works around this issue.





[jira] [Commented] (CLOUDSTACK-9718) Revamp the dropdown showing lists of hosts available for migration in a Zone

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958350#comment-15958350
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9718:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1889
  
@rashmidixit  travis is failing. Can you rebase with master and force push?


> Revamp the dropdown showing lists of hosts available for migration in a Zone
> 
>
> Key: CLOUDSTACK-9718
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9718
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.7.0, 4.8.0, 4.9.0
>Reporter: Rashmi Dixit
>Assignee: Rashmi Dixit
> Fix For: 4.10.0.0
>
> Attachments: MigrateInstance-SeeHosts.PNG, 
> MigrateInstance-SeeHosts-Search.PNG
>
>
> There are a couple of issues:
> 1. When looking for the possible hosts for migration, not all are displayed.
> 2. If there is a large number of hosts, the drop-down is not easy to use.
> To fix this, we propose changing the view to a list view showing the hosts 
> with radio buttons. Additionally, add a search option so that the hostname 
> can be searched in this list, making it more usable.
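The proposed search box amounts to a case-insensitive substring filter over the host list (a minimal sketch of the behaviour; the real widget would be JavaScript in the CloudStack UI, and these hostnames are made up):

```python
def search_hosts(hostnames, query):
    """Case-insensitive substring match on hostname, as the search box would do."""
    q = query.lower()
    return [h for h in hostnames if q in h.lower()]

hosts = ["kvm-host-01", "kvm-host-02", "xen-host-01"]
assert search_hosts(hosts, "KVM") == ["kvm-host-01", "kvm-host-02"]
```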



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9317) Disabling static NAT on many IPs can leave wrong IPs on the router

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958330#comment-15958330
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9317:


Github user jayapalu commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1908#discussion_r110082664
  
--- Diff: 
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
 ---
@@ -1764,9 +1765,12 @@ protected ExecutionResult 
cleanupNetworkElementCommand(final IpAssocCommand cmd)
 }
 nicNum = 
broadcastUriAllocatedToVM.get(ip.getBroadcastUri());
 
-if (numOfIps == 1 && !ip.isAdd()) {
-vifHotUnPlug(conn, routerName, ip.getVifMacAddress());
-networkUsage(routerIp, "deleteVif", "eth" + nicNum);
+if (lastIp != null && lastIp.equalsIgnoreCase("true") && 
!ip.isAdd()) {
--- End diff --

In CitrixResourceBase, StringUtils is used from com.cloud.utils.StringUtils, 
so using StringUtils from java.lang would be ambiguous.


> Disabling static NAT on many IPs can leave wrong IPs on the router
> --
>
> Key: CLOUDSTACK-9317
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9317
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Virtual Router
>Affects Versions: 4.7.0, 4.7.1, 4.7.2
>Reporter: Jeff Hair
>
> The current behavior of enabling or disabling static NAT is to call the apply 
> IP associations method in the management server. The method is not 
> thread-safe. If it is called from multiple threads, each thread will load the 
> list of public IPs in different states (add or revoke) -- correct for that 
> thread, but not correct overall. Depending on execution order on the virtual 
> router, the router can end up with public IPs assigned to it that are not 
> supposed to be on it anymore. When another account acquires the same IP, this 
> of course leads to network problems.
> The problem has been in CloudStack since at least 4.2, and likely affects all 
> recently released versions. The affected version is set to 4.7.x because that 
> is what we verified against.
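The race can be illustrated deterministically in a few lines (a Python sketch of the pattern, not CloudStack's actual Java code; names like `Router` and `apply_snapshot` are made up for illustration): each caller snapshots the full public-IP list and pushes the whole snapshot down, so a stale snapshot applied last can resurrect an IP that was just revoked, while serializing load-and-apply removes the race.

```python
import threading

class Router:
    """Toy model of a VR: the set of public IPs currently applied to it."""
    def __init__(self):
        self.applied = set()

    def apply_snapshot(self, snapshot):
        # The whole IP list is pushed down at once, clobbering earlier applies.
        self.applied = set(snapshot)

def unsafe_interleaving(router):
    # Thread A revokes 10.0.0.1; thread B adds 10.0.0.2 but loaded its
    # snapshot before A's revoke, so B's stale snapshot still has 10.0.0.1.
    snapshot_a = {"10.0.0.2"}               # A's view after the revoke
    snapshot_b = {"10.0.0.1", "10.0.0.2"}   # B's stale view
    router.apply_snapshot(snapshot_a)
    router.apply_snapshot(snapshot_b)       # stale snapshot applied last wins
    return router.applied

_apply_lock = threading.Lock()

def safe_apply(router, load_ips):
    # Serializing load + apply means every snapshot reflects the latest state.
    with _apply_lock:
        router.apply_snapshot(load_ips())
```

After the unsafe interleaving, the revoked 10.0.0.1 is back on the router; another account acquiring that IP then collides with it.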



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9317) Disabling static NAT on many IPs can leave wrong IPs on the router

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958329#comment-15958329
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9317:


Github user jayapalu commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1908#discussion_r110082657
  
--- Diff: 
plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/resource/CitrixResourceBase.java
 ---
@@ -625,15 +627,20 @@ protected ExecutionResult 
cleanupNetworkElementCommand(final IpAssocCommand cmd)
 
 // there is only one ip in this public vlan and removing 
it, so
 // remove the nic
-if (ipsCount == 1 && !ip.isAdd()) {
-removeVif = true;
+if (lastIp != null && lastIp.equalsIgnoreCase("true") && 
!ip.isAdd()) {
--- End diff --

In CitrixResourceBase, StringUtils is used from com.cloud.utils.StringUtils, 
so using StringUtils from java.lang would be ambiguous.



> Disabling static NAT on many IPs can leave wrong IPs on the router
> --
>
> Key: CLOUDSTACK-9317
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9317
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Virtual Router
>Affects Versions: 4.7.0, 4.7.1, 4.7.2
>Reporter: Jeff Hair
>
> The current behavior of enabling or disabling static NAT is to call the apply 
> IP associations method in the management server. The method is not 
> thread-safe. If it is called from multiple threads, each thread will load the 
> list of public IPs in different states (add or revoke) -- correct for that 
> thread, but not correct overall. Depending on execution order on the virtual 
> router, the router can end up with public IPs assigned to it that are not 
> supposed to be on it anymore. When another account acquires the same IP, this 
> of course leads to network problems.
> The problem has been in CloudStack since at least 4.2, and likely affects all 
> recently released versions. The affected version is set to 4.7.x because that 
> is what we verified against.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958321#comment-15958321
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + vmware-55u3) has 
been kicked to run smoke tests


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>
> Currently the root size of an instance is locked to that of the template. 
> This creates unnecessary template duplicates, prevents the creation of a 
> market place, wastes time and disk space and generally makes work more 
> complicated.
> Real life example - a small VPS provider might want to offer the following 
> sizes (in GB):
> 10,20,40,80,160,240,320,480,620
> That's 9 offerings.
> The template selection could look like this, including real disk space used:
> Windows 2008 ~10GB
> Windows 2008+Plesk ~15GB
> Windows 2008+MSSQL ~15GB
> Windows 2012 ~10GB
> Windows 2012+Plesk ~15GB
> Windows 2012+MSSQL ~15GB
> CentOS ~1GB
> CentOS+CPanel ~3GB
> CentOS+Virtualmin ~3GB
> CentOS+Zimbra ~3GB
> CentOS+Docker ~2GB
> Debian ~1GB
> Ubuntu LTS ~1GB
> In this case the total disk space used by templates will be 828 GB, that's 
> almost 1 TB. If your storage is expensive and limited SSD this can get 
> painful!
> If the root resize feature is enabled we can reduce this to under 100 GB.
> Specifications and Description 
> Administrators don't want to deploy duplicate OS templates of differing 
> sizes just to support different storage packages. Instead, the VM deployment 
> can accept a size for the root disk and adjust the template clone 
> accordingly. In addition, CloudStack already supports data disk resizing for 
> existing volumes, we can extend that functionality to resize existing root 
> disks. 
>   As mentioned, we can leverage the existing design for resizing an existing 
> volume. The difference with root volumes is that we can't resize via disk 
> offering, therefore we need to verify that no disk offering was passed, just 
> a size. The existing enforcements of new size > existing size will still 
> serve their purpose.
>For deployment-based resize (ROOT volume size different from template 
> size), we pass the rootdisksize parameter when the existing code allocates 
> the root volume. In the process, we validate that the root disk size is > 
> existing template size, and non-zero. This will persist the root volume as 
> the desired size regardless of whether or not the VM is started on deploy. 
> Then hypervisor specific code needs to be made to pay attention to the 
> VolumeObjectTO's size attribute and use that when doing the work of cloning 
> from template, rather than inheriting the template's size. This can be 
> implemented one hypervisor at a time, and as such there needs to be a check 
> in UserVmManagerImpl to fail unsupported hypervisors with 
> InvalidParameterValueException when the rootdisksize is passed.
>
> Hypervisor specific changes
> XenServer
> Resize ROOT volume is only supported for stopped VMs
> Newly created ROOT volume will be resized after clone from template
> VMware  
> Resize ROOT volume is only supported for stopped VMs.
> The new size should be larger than the previous size.
> A newly created ROOT volume will be resized after clone from template iff
> there is no root disk chaining (i.e., a full clone is used)
> and the Root Disk controller setting is not IDE.
> A previously created ROOT volume can be resized iff
> there is no root disk chaining
> and the Root Disk controller setting is not IDE.
> Web Services APIs
> resizeVolume API call will not change, but it will accept volume UUIDs of 
> root volumes in id parameter for resizing.
> deployVirtualMachine API call will allow new rootdisksize parameter to be 
> passed. This parameter will be used as the disk size (in GB) when cloning 
> from template.
> UI
> 1) (refer attached image 1) shows UI that resize volume option is added for 
> ROOT disks.
> 2) (refer attached image 2) when user calls the resize volume on ROOT volume. 
> Here only size option is shown. For DATADISK disk offerings are shown.
> 3) (refer attached image 3) when user deploys VM. New option for Root disk 
> size is added.
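The size validation described above can be sketched as follows (illustrative Python; in CloudStack the check lives in Java in UserVmManagerImpl, and the function name here is hypothetical):

```python
def validate_root_disk_size(root_disk_size_gb, template_size_gb):
    """Enforce the rules above: rootdisksize must be non-zero and
    strictly larger than the template size."""
    if root_disk_size_gb <= 0:
        raise ValueError("rootdisksize must be non-zero")
    if root_disk_size_gb <= template_size_gb:
        raise ValueError(
            "rootdisksize (%d GB) must be larger than the template size (%d GB)"
            % (root_disk_size_gb, template_size_gb))
    return root_disk_size_gb

# A 20 GB root disk over a 10 GB template passes; shrinking is rejected.
validate_root_disk_size(20, 10)
```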



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958319#comment-15958319
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@blueorangutan test centos7 vmware-55u3


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>
> Currently the root size of an instance is locked to that of the template. 
> This creates unnecessary template duplicates, prevents the creation of a 
> market place, wastes time and disk space and generally makes work more 
> complicated.
> Real life example - a small VPS provider might want to offer the following 
> sizes (in GB):
> 10,20,40,80,160,240,320,480,620
> That's 9 offerings.
> The template selection could look like this, including real disk space used:
> Windows 2008 ~10GB
> Windows 2008+Plesk ~15GB
> Windows 2008+MSSQL ~15GB
> Windows 2012 ~10GB
> Windows 2012+Plesk ~15GB
> Windows 2012+MSSQL ~15GB
> CentOS ~1GB
> CentOS+CPanel ~3GB
> CentOS+Virtualmin ~3GB
> CentOS+Zimbra ~3GB
> CentOS+Docker ~2GB
> Debian ~1GB
> Ubuntu LTS ~1GB
> In this case the total disk space used by templates will be 828 GB, that's 
> almost 1 TB. If your storage is expensive and limited SSD this can get 
> painful!
> If the root resize feature is enabled we can reduce this to under 100 GB.
> Specifications and Description 
> Administrators don't want to deploy duplicate OS templates of differing 
> sizes just to support different storage packages. Instead, the VM deployment 
> can accept a size for the root disk and adjust the template clone 
> accordingly. In addition, CloudStack already supports data disk resizing for 
> existing volumes, we can extend that functionality to resize existing root 
> disks. 
>   As mentioned, we can leverage the existing design for resizing an existing 
> volume. The difference with root volumes is that we can't resize via disk 
> offering, therefore we need to verify that no disk offering was passed, just 
> a size. The existing enforcements of new size > existing size will still 
> serve their purpose.
>For deployment-based resize (ROOT volume size different from template 
> size), we pass the rootdisksize parameter when the existing code allocates 
> the root volume. In the process, we validate that the root disk size is > 
> existing template size, and non-zero. This will persist the root volume as 
> the desired size regardless of whether or not the VM is started on deploy. 
> Then hypervisor specific code needs to be made to pay attention to the 
> VolumeObjectTO's size attribute and use that when doing the work of cloning 
> from template, rather than inheriting the template's size. This can be 
> implemented one hypervisor at a time, and as such there needs to be a check 
> in UserVmManagerImpl to fail unsupported hypervisors with 
> InvalidParameterValueException when the rootdisksize is passed.
>
> Hypervisor specific changes
> XenServer
> Resize ROOT volume is only supported for stopped VMs
> Newly created ROOT volume will be resized after clone from template
> VMware  
> Resize ROOT volume is only supported for stopped VMs.
> The new size should be larger than the previous size.
> A newly created ROOT volume will be resized after clone from template iff
> there is no root disk chaining (i.e., a full clone is used)
> and the Root Disk controller setting is not IDE.
> A previously created ROOT volume can be resized iff
> there is no root disk chaining
> and the Root Disk controller setting is not IDE.
> Web Services APIs
> resizeVolume API call will not change, but it will accept volume UUIDs of 
> root volumes in id parameter for resizing.
> deployVirtualMachine API call will allow new rootdisksize parameter to be 
> passed. This parameter will be used as the disk size (in GB) when cloning 
> from template.
> UI
> 1) (refer attached image 1) shows UI that resize volume option is added for 
> ROOT disks.
> 2) (refer attached image 2) when user calls the resize volume on ROOT volume. 
> Here only size option is shown. For DATADISK disk offerings are shown.
> 3) (refer attached image 3) when user deploys VM. New option for Root disk 
> size is added.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (CLOUDSTACK-9512) listTemplates ids returns all templates instead of the requested ones

2017-04-05 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi resolved CLOUDSTACK-9512.
-
   Resolution: Fixed
Fix Version/s: (was: 4.8.2.0)
   (was: 4.9.1.0)

Works on latest. Please check the previous comment from [~pdion].

> listTemplates ids returns all templates instead of the requested ones
> -
>
> Key: CLOUDSTACK-9512
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9512
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API, Template
>Affects Versions: 4.8.1
> Environment: CentOS 7.2 + VMWare 5.5u3 + NFS primary/secondary storage
>Reporter: Boris Stoyanov
>Priority: Critical
>  Labels: 4.8.2.0-smoke-test-failure
> Fix For: 4.10.0.0
>
>
> Actual call from the logs:
> {code}{'account': u'test-a-TestListIdsParams-KYBF19', 
> 'domainid': u'41c6fda1-84cf-11e6-bbd2-066638010710', 
> 'ids': 
> u'a4bcd20f-0c4c-4999-bb5e-02aab8f763a1,10a429da-829b-4492-9674-26b1c172462e,d3a9a86d-17a1-4199-8116-ceefc6ef31d5',
>  
> 'apiKey': 
> u'LIN6rqXuaJwMPfGYFh13qDwYz5VNNz1J2J6qIOWcd3oLQOq0WtD4CwRundBL6rzXToa3lQOC_vKjI3nkHtiD8Q',
>  
> 'command': 'listTemplates', 
> 'listall': True, 
> 'signature': 'Yo7dRnPdSch+mEzcF8TTo1xhxpo=', 
> 'templatefilter': 'all', 
> 'response': 'json', 
> 'listAll': True}{code}
> When asking to list 3 or more 
> {code}(local) SBCM5> list templates templatefilter=all 
> ids=a4bcd20f-0c4c-4999-bb5e-02aab8f763a1,10a429da-829b-4492-9674-26b1c172462e,d3a9a86d-17a1-4199-8116-ceefc6ef31d5{code}
> You receive all templates (count: 14)
> Response
> {code}
> {
>   "count": 14,
>   "template": [
> {
>   "account": "system",
>   "checksum": "4b415224fe00b258f66cad9fce9f73fc",
>   "created": "2016-09-27T17:38:31+0100",
>   "crossZones": true,
>   "displaytext": "SystemVM Template (vSphere)",
>   "domain": "ROOT",
>   "domainid": "41c6fda1-84cf-11e6-bbd2-066638010710",
>   "format": "OVA",
>   "hypervisor": "VMware",
>   "id": "6114746a-aefa-4be7-8234-f0d76ff175d0",
>   "isdynamicallyscalable": true,
>   "isextractable": false,
>   "isfeatured": false,
>   "ispublic": false,
>   "isready": true,
>   "name": "SystemVM Template (vSphere)",
>   "ostypeid": "41db0847-84cf-11e6-bbd2-066638010710",
>   "ostypename": "Debian GNU/Linux 5.0 (64-bit)",
>   "passwordenabled": false,
>   "size": 3145728000,
>   "sshkeyenabled": false,
>   "status": "Download Complete",
>   "tags": [],
>   "templatetype": "SYSTEM",
>   "zoneid": "b8d4cea4-6b4b-4cfb-9f17-0a6b31fec09f",
> .
> .
> .
> .
> .
> 
> }{code}
> Marvin failure:
> {code}2016-09-29 11:43:39,819 - CRITICAL - FAILED: test_02_list_templates: 
> ['Traceback (most recent call last):\n', '  File 
> "/usr/lib64/python2.7/unittest/case.py", line 369, in run\n
> testMethod()\n', '  File "/marvin/tests/smoke/test_list_ids_parameter.py", 
> line 253, in test_02_list_templates\n"ListTemplates response expected 3 
> Templates, received %s" % len(list_template_response)\n', '  File 
> "/usr/lib64/python2.7/unittest/case.py", line 553, in assertEqual\n
> assertion_func(first, second, msg=msg)\n', '  File 
> "/usr/lib64/python2.7/unittest/case.py", line 546, in _baseAssertEqual\n
> raise self.failureException(msg)\n', 'AssertionError: ListTemplates response 
> expected 3 Templates, received 14\n']{code}
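The expected server-side behaviour boils down to honouring the `ids` filter (a minimal Python sketch of the contract, not CloudStack's actual query layer; the template ids are made up):

```python
def list_templates(templates, ids=None):
    """Return only the templates whose id is in the comma-separated `ids`
    parameter; with no `ids`, return everything (the listall behaviour)."""
    if not ids:
        return templates
    wanted = set(ids.split(","))
    return [t for t in templates if t["id"] in wanted]

all_templates = [{"id": x} for x in ("t1", "t2", "t3", "t14")]
# Asking for three ids must return exactly those three, not all templates.
assert len(list_templates(all_templates, "t1,t2,t3")) == 3
```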



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9848) VR commands exit status is not checked in python config files

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958300#comment-15958300
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9848:


Github user jayapalu commented on the issue:

https://github.com/apache/cloudstack/pull/2018
  
@wido 
In the patch it can be observed that the rule ("-A FIREWALL_%s DROP" % ) is 
run without '-j' and the failure is not caught. Because there is a duplicate 
rule, the impact is not seen.

So I have added a check of the iptables add command's exit status, with which 
iptables add command failures are caught. If there is an error or exception, 
the error will be returned to the management server.
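The fix described here (checking each iptables command's exit status and propagating failures) can be sketched like this (an illustrative Python helper, not the actual VR configuration script; the "FIREWALL_eth2" rule in the comment is a hypothetical example):

```python
import subprocess

def execute_checked(cmd):
    """Run a shell command and return (exit_code, stderr) so the caller
    can detect failures instead of silently ignoring them."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.returncode, result.stderr.strip()

def add_iptables_rule(rule):
    # A malformed rule (e.g. "-A FIREWALL_eth2 DROP" without "-j") makes
    # iptables exit non-zero; with the check, the error is reported back
    # to the management server instead of being dropped.
    code, err = execute_checked("iptables " + rule)
    if code != 0:
        return {"success": False, "reason": err or "exit status %d" % code}
    return {"success": True}
```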


> VR commands exit status is not checked in python config files
> --
>
> Key: CLOUDSTACK-9848
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9848
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
>
> When iptables rules are configured on the VR, failures or exceptions are not 
> detected in the VR because the iptables commands' exit/return status is not 
> checked. Also, in the exception catch block, the failure is not returned.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9848) VR commands exit status is not checked in python config files

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15957504#comment-15957504
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9848:


Github user wido commented on the issue:

https://github.com/apache/cloudstack/pull/2018
  
Can you explain a bit more why you are changing the iptables rules? They can 
break a lot of things by accident.


> VR commands exit status is not checked in python config files
> --
>
> Key: CLOUDSTACK-9848
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9848
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
>
> When iptables rules are configured on the VR, failures or exceptions are not 
> detected in the VR because the iptables commands' exit/return status is not 
> checked. Also, in the exception catch block, the failure is not returned.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9857) CloudStack KVM Agent Self Fencing - improper systemd config

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15957501#comment-15957501
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9857:


Github user wido commented on the issue:

https://github.com/apache/cloudstack/pull/2024
  
LGTM


> CloudStack KVM Agent Self Fencing  - improper systemd config
> 
>
> Key: CLOUDSTACK-9857
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9857
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.2
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
>Priority: Critical
> Fix For: 4.10.0.0
>
>
> We had a database outage a few days ago and noticed that most of the 
> CloudStack KVM agents committed suicide and never retried to connect. 
> Moreover, we had Puppet, which was supposed to restart the cloudstack-agent 
> daemon when it goes into the failed state, but apparently it never does go to 
> the "failed" state.
> 2017-03-30 04:07:50,720 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Request:Seq -1--1:  { Cmd , MgmtId: -1, via: 
> -1, Ver: v1, Flags: 111, 
> [{"com.cloud.agent.api.ReadyCommand":{"_details":"com.cloud.utils.exception.CloudRuntimeException:
>  DB Exception on: null","wait":0}}] }
> 2017-03-30 04:07:50,721 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Processing command: 
> com.cloud.agent.api.ReadyCommand
> 2017-03-30 04:07:50,721 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Not ready to connect to mgt server: 
> com.cloud.utils.exception.CloudRuntimeException: DB Exception on: null
> 2017-03-30 04:07:50,722 INFO  [cloud.agent.Agent] (AgentShutdownThread:null) 
> Stopping the agent: Reason = sig.kill
> 2017-03-30 04:07:50,723 DEBUG [cloud.agent.Agent] (AgentShutdownThread:null) 
> Sending shutdown to management server
> While agent fenced itself for whatever logic reason it had - the systemd 
> agent did not exit properly.
> Here what the status of the cloudstack-agent looks like
> [root@mqa6-kvm02 ~]# service cloudstack-agent status
> ● cloudstack-agent.service - SYSV: Cloud Agent
>Loaded: loaded (/etc/rc.d/init.d/cloudstack-agent)
>Active: active (exited) since Fri 2017-03-31 23:50:47 GMT; 12s ago
>  Docs: man:systemd-sysv-generator(8)
>   Process: 632 ExecStop=/etc/rc.d/init.d/cloudstack-agent stop (code=exited, 
> status=0/SUCCESS)
>   Process: 654 ExecStart=/etc/rc.d/init.d/cloudstack-agent start 
> (code=exited, status=0/SUCCESS)
>  Main PID: 441
> Mar 31 23:50:47 mqa6-kvm02 systemd[1]: Starting SYSV: Cloud Agent...
> Mar 31 23:50:47 mqa6-kvm02 cloudstack-agent[654]: Starting Cloud Agent:
> Mar 31 23:50:47 mqa6-kvm02 systemd[1]: Started SYSV: Cloud Agent.
> Mar 31 23:50:49 mqa6-kvm02 sudo[806]: root : TTY=unknown ; PWD=/ ; 
> USER=root ; COMMAND=/bin/grep InitiatorName= /etc/iscsi/initiatorname.iscsi
> The "Active: active (exited)" should be "Active: failed (Result: exit-code)”
> Solution:
> The fix is to add pidfile into /etc/init.d/cloudstack-agent 
> Like so:
> # chkconfig: 35 99 10
> # description: Cloud Agent
> + # pidfile: /var/run/cloudstack-agent.pid
> Post that - if agent dies - the systemd will catch it properly and it will 
> look as expected
> [root@mqa6-kvm02 ~]# service cloudstack-agent status
> ● cloudstack-agent.service - SYSV: Cloud Agent
>Loaded: loaded (/etc/rc.d/init.d/cloudstack-agent)
>Active: failed (Result: exit-code) since Fri 2017-03-31 23:51:40 GMT; 7s 
> ago
>  Docs: man:systemd-sysv-generator(8)
>   Process: 1124 ExecStop=/etc/rc.d/init.d/cloudstack-agent stop (code=exited, 
> status=255)
>   Process: 949 ExecStart=/etc/rc.d/init.d/cloudstack-agent start 
> (code=exited, status=0/SUCCESS)
>  Main PID: 975
> With this change, some other tool can properly inspect the state of the 
> daemon and take action when it fails, instead of it being left in the active 
> (exited) state.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9853) IPv6 Prefix Delegation support in Basic Networking

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15957483#comment-15957483
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9853:


GitHub user wido opened a pull request:

https://github.com/apache/cloudstack/pull/2028

CLOUDSTACK-9853: Add support for Secondary IPv6 Addresses and Subnets

This commit adds support for passing IPv6 Addresses and/or Subnets as
Secondary IPs.

This is groundwork for CLOUDSTACK-9853 where IPv6 Subnets have to be
allowed in the Security Groups of Instances so we can add DHCPv6
Prefix Delegation.

Use ; instead of : for separating addresses, otherwise it would cause
problems with IPv6 Addresses.

Signed-off-by: Wido den Hollander 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wido/cloudstack ipv6-secips

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/2028.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2028


commit 05f8e5c9fd9086f1ad9fc643fa25b108acfc3f48
Author: Wido den Hollander 
Date:   2017-01-31T15:59:28Z

CLOUDSTACK-9853: Add support for Secondary IPv6 Addresses and Subnets

This commit adds support for passing IPv6 Addresses and/or Subnets as
Secondary IPs.

This is groundwork for CLOUDSTACK-9853 where IPv6 Subnets have to be
allowed in the Security Groups of Instances so we can add DHCPv6
Prefix Delegation.

Use ; instead of : for separating addresses, otherwise it would cause
problems with IPv6 Addresses.

Signed-off-by: Wido den Hollander 
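The separator choice called out in the commit matters because IPv6 addresses contain ':' themselves; splitting on ';' keeps each entry intact (an illustrative parse with made-up example addresses, not the actual CloudStack code):

```python
def parse_secondary_ips(value):
    """Split a secondary-IP list on ';', which cannot occur inside an
    IPv6 address, unlike ':'."""
    return [entry for entry in value.split(";") if entry]

parsed = parse_secondary_ips("10.0.0.5;2001:db8::1;2001:db8:0:1::/64")
# Splitting the same string on ':' would shred the IPv6 entries.
assert parsed == ["10.0.0.5", "2001:db8::1", "2001:db8:0:1::/64"]
```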




> IPv6 Prefix Delegation support in Basic Networking
> --
>
> Key: CLOUDSTACK-9853
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9853
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
>  Labels: basic-networking, dhcp, dhcpv6, ipv6, virtual-router
>
> In addition to having a single IPv6 address (/128), Instances in Basic 
> Networking should be able to have an IPv6 subnet, for example a /60, 
> routed to them.
> The mechanism for this is DHCPv6 Prefix Delegation. A DHCPv6 server can tell 
> the Instance which subnet is routed to it.
> On the physical router a (static) route needs to be configured to do this. So 
> in Basic Networking it will be up to the network admin to make sure the 
> routes are present.
> The Management Server will pick a subnet for an Instance when needed and 
> configure the VR with the proper DHCPv6 arguments so that the right answer is 
> provided to the Instance.
> For example when running containers it is very nice to have a subnet routed 
> to your Instance so you can give each container a unique IPv6 address.
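The allocation step the Management Server would perform (picking a per-Instance subnet out of a larger routed prefix) can be sketched with the stdlib ipaddress module; the /56 parent prefix and /60 delegation size are example values, not anything CloudStack mandates:

```python
import ipaddress

def delegated_prefixes(parent, new_prefix=60):
    """Yield successive /60 subnets of the parent prefix, one per Instance."""
    return ipaddress.ip_network(parent).subnets(new_prefix=new_prefix)

pool = delegated_prefixes("2001:db8::/56")
first = next(pool)    # 2001:db8::/60
second = next(pool)   # 2001:db8:0:10::/60
# Each Instance can then hand out /64s from its own /60 to containers.
```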



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9719) [VMware] VR loses DHCP settings and VMs cannot obtain IP after HA recovery

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15957306#comment-15957306
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9719:


Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1879
  
@rhtyd Please kick off VMware tests on this PR.


> [VMware] VR loses DHCP settings and VMs cannot obtain IP after HA recovery
> --
>
> Key: CLOUDSTACK-9719
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9719
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> After HA is triggered on VMware, some VMs fail to acquire a DHCP address 
> from the VR. These VMs are live migrated as part of vCenter HA to another 
> available host before the VR is, and cannot acquire a DHCP address because 
> the VR is not migrated yet, so these VMs' DHCP requests fail to reach the VR.
> Resolving this requires manual intervention by the CloudStack administrator; 
> the router must be rebooted or the network restarted. This behavior is not 
> ideal and will prolong downtime caused by an HA event and there is no point 
> for the non-functional virtual router to even be running. CloudStack should 
> handle this situation by setting VR restart priority to high in the vCenter 
> when HA is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9719) [VMware] VR loses DHCP settings and VMs cannot obtain IP after HA recovery

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15957303#comment-15957303
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9719:


Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1879
  
@nvazquez Updated the code for the issue reported. Can you please re-test?


> [VMware] VR loses DHCP settings and VMs cannot obtain IP after HA recovery
> --
>
> Key: CLOUDSTACK-9719
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9719
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> After HA is triggered on VMware, some VMs fail to acquire a DHCP address 
> from the VR. These VMs are live migrated as part of vCenter HA to another 
> available host before the VR is, and cannot acquire a DHCP address because 
> the VR is not migrated yet, so these VMs' DHCP requests fail to reach the VR.
> Resolving this requires manual intervention by the CloudStack administrator; 
> the router must be rebooted or the network restarted. This behavior is not 
> ideal and will prolong downtime caused by an HA event and there is no point 
> for the non-functional virtual router to even be running. CloudStack should 
> handle this situation by setting VR restart priority to high in the vCenter 
> when HA is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CLOUDSTACK-9863) problems installing cloudstack 4.9.2 over Ubuntu 16

2017-04-05 Thread Andres Felipe Osorio henker (JIRA)
Andres Felipe Osorio henker created CLOUDSTACK-9863:
---

 Summary: problems installing cloudstack 4.9.2 over Ubuntu 16
 Key: CLOUDSTACK-9863
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9863
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Install and Setup
Affects Versions: 4.9.2.0
 Environment: Ubuntu Linux 16.04.2 LTS  x86_64
Cloudstack 4.9.2
java-7-openjdk-amd64
Tomcat 7
Mysql 5.5.50
Reporter: Andres Felipe Osorio henker
Priority: Minor
 Attachments: instaling_cloudstack4.9.2_over_Ubuntu_16.pdf

Hi, I am trying to install CloudStack 4.9.2 on Ubuntu 16.04.2 LTS. 
I know that it is currently not supported, but I saw packages for the xenial 
version in the Ubuntu repository: http://cloudstack.apt-get.eu/ubuntu/dists/

The installation process runs into problems because the Tomcat version for 
Ubuntu xenial is tomcat7, but the installer still looks for tomcat6. 

I attach a document with the complete installation process.






[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15957127#comment-15957127
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-620


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>
> Currently the root size of an instance is locked to that of the template. 
> This creates unnecessary template duplicates, prevents the creation of a 
> marketplace, wastes time and disk space, and generally makes work more 
> complicated.
> Real-life example: a small VPS provider might want to offer the following 
> sizes (in GB):
> 10,20,40,80,160,240,320,480,620
> That's 9 offerings.
> The template selection could look like this, including real disk space used:
> Windows 2008 ~10GB
> Windows 2008+Plesk ~15GB
> Windows 2008+MSSQL ~15GB
> Windows 2012 ~10GB
> Windows 2012+Plesk ~15GB
> Windows 2012+MSSQL ~15GB
> CentOS ~1GB
> CentOS+CPanel ~3GB
> CentOS+Virtualmin ~3GB
> CentOS+Zimbra ~3GB
> CentOS+Docker ~2GB
> Debian ~1GB
> Ubuntu LTS ~1GB
> In this case the total disk space used by templates will be 828 GB; that's 
> almost 1 TB. If your storage is expensive and limited SSD, this can get 
> painful!
> If the root resize feature is enabled we can reduce this to under 100 GB.
> Specifications and Description 
> Administrators don't want to deploy duplicate OS templates of differing 
> sizes just to support different storage packages. Instead, the VM deployment 
> can accept a size for the root disk and adjust the template clone 
> accordingly. In addition, CloudStack already supports data disk resizing for 
> existing volumes, we can extend that functionality to resize existing root 
> disks. 
>   As mentioned, we can leverage the existing design for resizing an existing 
> volume. The difference with root volumes is that we can't resize via disk 
> offering, therefore we need to verify that no disk offering was passed, just 
> a size. The existing enforcement that the new size must be greater than the 
> existing size will still serve its purpose.
>For deployment-based resize (ROOT volume size different from template 
> size), we pass the rootdisksize parameter when the existing code allocates 
> the root volume. In the process, we validate that the root disk size is > 
> existing template size, and non-zero. This will persist the root volume as 
> the desired size regardless of whether or not the VM is started on deploy. 
> Then hypervisor-specific code needs to pay attention to the 
> VolumeObjectTO's size attribute and use that when doing the work of cloning 
> from template, rather than inheriting the template's size. This can be 
> implemented one hypervisor at a time, and as such there needs to be a check 
> in UserVmManagerImpl to fail unsupported hypervisors with 
> InvalidParameterValueException when the rootdisksize is passed.
>
> Hypervisor-specific changes
> XenServer
> Resize ROOT volume is only supported for stopped VMs
> Newly created ROOT volume will be resized after clone from template
> VMware  
> Resize ROOT volume is only supported for stopped VMs.
> The new size must be larger than the previous size.
> A newly created ROOT volume will be resized after clone from template only if
> there is no root disk chaining (i.e., full clone is used),
> and the root disk controller setting is not IDE.
> A previously created ROOT volume can be resized only if
> there is no root disk chaining,
> and the root disk controller setting is not IDE.
> Web Services APIs
> resizeVolume API call will not change, but it will accept volume UUIDs of 
> root volumes in id parameter for resizing.
> deployVirtualMachine API call will allow new rootdisksize parameter to be 
> passed. This parameter will be used as the disk size (in GB) when cloning 
> from template.
> UI
> 1) (refer attached image 1) shows that the resize volume option is added for 
> ROOT disks.
> 2) (refer attached image 2) when the user calls resize volume on a ROOT volume, 
> only the size option is shown. For DATADISK, disk offerings are shown.
> 3) (refer attached image 3) when the user deploys a VM, a new option for root 
> disk size is added.
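The two API paths described above can be sketched with hypothetical CloudMonkey-style commands; every UUID below is a placeholder, and only the parameter names (rootdisksize, id, size) come from the spec in this issue:

```shell
# Hedged sketch of the two API changes; all IDs are made up.
TEMPLATE_ID="tmpl-0000"; OFFERING_ID="svc-0000"; ZONE_ID="zone-0000"

# deployVirtualMachine gains a rootdisksize parameter (size in GB):
deploy="deploy virtualmachine templateid=${TEMPLATE_ID} serviceofferingid=${OFFERING_ID} zoneid=${ZONE_ID} rootdisksize=40"

# resizeVolume keeps its signature but now accepts ROOT volume UUIDs in id=:
resize="resize volume id=root-vol-0000 size=80"

printf '%s\n%s\n' "$deploy" "$resize"
```

Note the enforcement described above still applies: the resize would only be accepted when the new size exceeds the current one.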





[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15957093#comment-15957093
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>





[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15957091#comment-15957091
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@serg38 Got it. Any idea when we'll get UI for this feature? I'd love to 
see it. 
@blueorangutan package


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>





[jira] [Commented] (CLOUDSTACK-9462) Systemd packaging for Ubuntu 16.04

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15957054#comment-15957054
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9462:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1950
  
@rhtyd a Trillian-Jenkins test job (ubuntu mgmt + kvm-ubuntu) has been 
kicked to run smoke tests


> Systemd packaging for Ubuntu 16.04
> --
>
> Key: CLOUDSTACK-9462
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9462
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> Support for building deb packages that will work on Ubuntu 16.04





[jira] [Commented] (CLOUDSTACK-9462) Systemd packaging for Ubuntu 16.04

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15957050#comment-15957050
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9462:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1950
  
@blueorangutan test ubuntu kvm-ubuntu


> Systemd packaging for Ubuntu 16.04
> --
>
> Key: CLOUDSTACK-9462
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9462
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> Support for building deb packages that will work on Ubuntu 16.04





[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15957031#comment-15957031
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@borisstoyanov Let's re-run vmware B.O test for this PR now


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>





[jira] [Commented] (CLOUDSTACK-9462) Systemd packaging for Ubuntu 16.04

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956965#comment-15956965
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9462:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1950
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-618


> Systemd packaging for Ubuntu 16.04
> --
>
> Key: CLOUDSTACK-9462
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9462
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> Support for building deb packages that will work on Ubuntu 16.04





[jira] [Commented] (CLOUDSTACK-9462) Systemd packaging for Ubuntu 16.04

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956925#comment-15956925
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9462:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1950
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.


> Systemd packaging for Ubuntu 16.04
> --
>
> Key: CLOUDSTACK-9462
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9462
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> Support for building deb packages that will work on Ubuntu 16.04





[jira] [Commented] (CLOUDSTACK-9462) Systemd packaging for Ubuntu 16.04

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956923#comment-15956923
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9462:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1950
  
Thanks @ustcweizhou 
@blueorangutan package


> Systemd packaging for Ubuntu 16.04
> --
>
> Key: CLOUDSTACK-9462
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9462
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> Support for building deb packages that will work on Ubuntu 16.04





[jira] [Commented] (CLOUDSTACK-9857) CloudStack KVM Agent Self Fencing - improper systemd config

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956916#comment-15956916
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9857:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/2024
  
LGTM. @abhinandanprateek should we retarget this PR to 4.9? Thanks. 
@karuturi let's merge this before next 4.10 RC


> CloudStack KVM Agent Self Fencing  - improper systemd config
> 
>
> Key: CLOUDSTACK-9857
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9857
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.2
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
>Priority: Critical
> Fix For: 4.10.0.0
>
>
> We had a database outage a few days ago, and we noticed that most of the 
> CloudStack KVM agents committed suicide and never retried to connect. 
> Moreover, we had Puppet, which was supposed to restart the cloudstack-agent 
> daemon when it goes into the failed state, but apparently it never does go 
> into the "failed" state.
> 2017-03-30 04:07:50,720 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Request:Seq -1--1:  { Cmd , MgmtId: -1, via: 
> -1, Ver: v1, Flags: 111, 
> [{"com.cloud.agent.api.ReadyCommand":{"_details":"com.cloud.utils.exception.CloudRuntimeException:
>  DB Exception on: null","wait":0}}] }
> 2017-03-30 04:07:50,721 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Processing command: 
> com.cloud.agent.api.ReadyCommand
> 2017-03-30 04:07:50,721 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Not ready to connect to mgt server: 
> com.cloud.utils.exception.CloudRuntimeException: DB Exception on: null
> 2017-03-30 04:07:50,722 INFO  [cloud.agent.Agent] (AgentShutdownThread:null) 
> Stopping the agent: Reason = sig.kill
> 2017-03-30 04:07:50,723 DEBUG [cloud.agent.Agent] (AgentShutdownThread:null) 
> Sending shutdown to management server
> While the agent fenced itself for whatever reason its logic dictated, the 
> systemd service did not exit properly.
> Here is what the status of the cloudstack-agent service looks like:
> [root@mqa6-kvm02 ~]# service cloudstack-agent status
> ● cloudstack-agent.service - SYSV: Cloud Agent
>Loaded: loaded (/etc/rc.d/init.d/cloudstack-agent)
>Active: active (exited) since Fri 2017-03-31 23:50:47 GMT; 12s ago
>  Docs: man:systemd-sysv-generator(8)
>   Process: 632 ExecStop=/etc/rc.d/init.d/cloudstack-agent stop (code=exited, 
> status=0/SUCCESS)
>   Process: 654 ExecStart=/etc/rc.d/init.d/cloudstack-agent start 
> (code=exited, status=0/SUCCESS)
>  Main PID: 441
> Mar 31 23:50:47 mqa6-kvm02 systemd[1]: Starting SYSV: Cloud Agent...
> Mar 31 23:50:47 mqa6-kvm02 cloudstack-agent[654]: Starting Cloud Agent:
> Mar 31 23:50:47 mqa6-kvm02 systemd[1]: Started SYSV: Cloud Agent.
> Mar 31 23:50:49 mqa6-kvm02 sudo[806]: root : TTY=unknown ; PWD=/ ; 
> USER=root ; COMMAND=/bin/grep InitiatorName= /etc/iscsi/initiatorname.iscsi
> The "Active: active (exited)" should be "Active: failed (Result: exit-code)”
> Solution:
> The fix is to add a pidfile declaration to /etc/init.d/cloudstack-agent, 
> like so:
> # chkconfig: 35 99 10
> # description: Cloud Agent
> + # pidfile: /var/run/cloudstack-agent.pid
> After that, if the agent dies, systemd will catch it properly and the status 
> will look as expected:
> [root@mqa6-kvm02 ~]# service cloudstack-agent status
> ● cloudstack-agent.service - SYSV: Cloud Agent
>Loaded: loaded (/etc/rc.d/init.d/cloudstack-agent)
>Active: failed (Result: exit-code) since Fri 2017-03-31 23:51:40 GMT; 7s 
> ago
>  Docs: man:systemd-sysv-generator(8)
>   Process: 1124 ExecStop=/etc/rc.d/init.d/cloudstack-agent stop (code=exited, 
> status=255)
>   Process: 949 ExecStart=/etc/rc.d/init.d/cloudstack-agent start 
> (code=exited, status=0/SUCCESS)
>  Main PID: 975
> With this change, other tooling can properly inspect the daemon's state and 
> take action when it has failed, instead of it sitting in the active (exited) 
> state.
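The header change described above can be sketched as follows; the heredoc simply prints the top of the init script with the added pidfile comment (path taken from the report), which systemd-sysv-generator is expected to map to PIDFile= in the generated unit:

```shell
# Sketch of the fixed LSB/chkconfig header for /etc/init.d/cloudstack-agent;
# the "# pidfile:" comment is the line the report proposes adding.
cat <<'EOF'
#!/bin/sh
# chkconfig: 35 99 10
# description: Cloud Agent
# pidfile: /var/run/cloudstack-agent.pid
EOF
```

With the pidfile declared, systemd tracks the daemon's real PID and transitions the unit to "failed" when the process dies unexpectedly.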





[jira] [Updated] (CLOUDSTACK-9862) list template with id= no longer work as domain admin

2017-04-05 Thread Pierre-Luc Dion (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre-Luc Dion updated CLOUDSTACK-9862:

Description: 
As domain admin, listTemplates with templatefilter=featured no longer works if 
id= is specified.

Using cloudmonkey with domain-admin credential:

{code}
(beta2r1-ninja) > list templates templatefilter=featured filter=name,id
count = 9
template:
+--+-+
|  id  |   name  |
+--+-+
| 513b3a6d-c011-46f0-a4a3-2a954cadb673 |  CoreOS Alpha 1367.5.0  |
| 0c04d876-1f85-45a7-b6f4-504de435bf12 |Debian 8.5 PV base (64bit)   |
| 285f2203-449a-428f-997a-1ffbebbf1382 |   CoreOS Alpha  |
| 332b6ca8-b3d6-42c7-83e5-60fe87be6576 |  CoreOS Stable  |
| 3b705008-c186-464d-ad59-312d902420af |   Windows Server 2016 std SPLA  |
| 4256aebe-a1c1-4b49-9993-de2bc712d521 |   Ubuntu 16.04.01 HVM   |
| 59e6b00a-b88e-4539-aa3c-75c9c7e9fa6c | Ubuntu 14.04.5 HVM base (64bit) |
| 3ab936eb-d8c2-44d8-a64b-17ad5adf8a51 |  CentOS 6.8 PV  |
| 7de5d423-c91e-49cc-86e8-9d6ed6abd997 |  CentOS 7.2 HVM |
+--+-+
(beta2r1-ninja) > list templates templatefilter=featured 
id=7de5d423-c91e-49cc-86e8-9d6ed6abd997 filter=name,id
Error 531: Acct[b285d62e-0ec2-4a7c-b773-961595ec6356-Ninja-5664] does not have 
permission to operate within domain id=c9b4f83d-16eb-11e7-a8b9-367e6fe958a9
cserrorcode = 4365
errorcode = 531
errortext = Acct[b285d62e-0ec2-4a7c-b773-961595ec6356-Ninja-5664] does not have 
permission to operate within domain id=c9b4f83d-16eb-11e7-a8b9-367e6fe958a9
uuidList:
(beta2r1-ninja) > list templates templatefilter=featured 
ids=7de5d423-c91e-49cc-86e8-9d6ed6abd997 filter=name,id
count = 1
template:
+--++
|  id  |  name  |
+--++
| 7de5d423-c91e-49cc-86e8-9d6ed6abd997 | CentOS 7.2 HVM |
+--++
{code}

  was:As domain admin, listTemplates templatefilter=featured no longer work if 
id= is specified.


> list template with id= no longer work as domain admin
> -
>
> Key: CLOUDSTACK-9862
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9862
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Template
>Affects Versions: 4.10.0.0
>Reporter: Pierre-Luc Dion
>Priority: Critical
>
> As domain admin, listTemplates with templatefilter=featured no longer works 
> if id= is specified.
> Using cloudmonkey with domain-admin credential:
> {code}
> (beta2r1-ninja) > list templates templatefilter=featured filter=name,id
> count = 9
> template:
> +--+-+
> |  id  |   name  |
> +--+-+
> | 513b3a6d-c011-46f0-a4a3-2a954cadb673 |  CoreOS Alpha 1367.5.0  |
> | 0c04d876-1f85-45a7-b6f4-504de435bf12 |Debian 8.5 PV base (64bit)   |
> | 285f2203-449a-428f-997a-1ffbebbf1382 |   CoreOS Alpha  |
> | 332b6ca8-b3d6-42c7-83e5-60fe87be6576 |  CoreOS Stable  |
> | 3b705008-c186-464d-ad59-312d902420af |   Windows Server 2016 std SPLA  |
> | 4256aebe-a1c1-4b49-9993-de2bc712d521 |   Ubuntu 16.04.01 HVM   |
> | 59e6b00a-b88e-4539-aa3c-75c9c7e9fa6c | Ubuntu 14.04.5 HVM base (64bit) |
> | 3ab936eb-d8c2-44d8-a64b-17ad5adf8a51 |  CentOS 6.8 PV  |
> | 7de5d423-c91e-49cc-86e8-9d6ed6abd997 |  CentOS 7.2 HVM |
> +--+-+
> (beta2r1-ninja) > list templates templatefilter=featured 
> id=7de5d423-c91e-49cc-86e8-9d6ed6abd997 filter=name,id
> Error 531: Acct[b285d62e-0ec2-4a7c-b773-961595ec6356-Ninja-5664] does not 
> have permission to operate within domain 
> id=c9b4f83d-16eb-11e7-a8b9-367e6fe958a9
> cserrorcode = 4365
> errorcode = 531
> errortext = Acct[b285d62e-0ec2-4a7c-b773-961595ec6356-Ninja-5664] does not 
> have permission to operate within domain 
> id=c9b4f83d-16eb-11e7-a8b9-367e6fe958a9
> uuidList:
> (beta2r1-ninja) > list templates templatefilter=featured 
> ids=7de5d423-c91e-49cc-86e8-9d6ed6abd997 filter=name,id
> count = 1
> template:
> +--++
> |  id  |  name  |
> +

[jira] [Created] (CLOUDSTACK-9862) list template with id= no longer work as domain admin

2017-04-05 Thread Pierre-Luc Dion (JIRA)
Pierre-Luc Dion created CLOUDSTACK-9862:
---

 Summary: list template with id= no longer work as domain admin
 Key: CLOUDSTACK-9862
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9862
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Template
Affects Versions: 4.10.0.0
Reporter: Pierre-Luc Dion
Priority: Critical


As domain admin, listTemplates templatefilter=featured no longer works if id= is 
specified.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9840) Datetime format of snapshot events is inconsistent with other events

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956784#comment-15956784
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9840:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/2008
  
@olivierlemasle I think this PR needs at least 2 LGTMs and, as agreed, Rajani 
is taking care of merging PRs. 


> Datetime format of snapshot events is inconsistent with other events
> 
>
> Key: CLOUDSTACK-9840
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9840
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: eventbus
>Affects Versions: 4.3.0, 4.4.0, 4.5.0, 4.3.1, 4.4.1, 4.4.2, 4.4.3, 4.3.2, 
> 4.5.1, 4.4.4, 4.5.2, 4.6.0, 4.6.1, 4.6.2, 4.7.0, 4.7.1, 4.8.0, 4.9.0, 
> 4.9.2.0, 4.9.1.0, 4.8.1.1, 4.9.0.1, 4.5.2.2
>Reporter: Olivier Lemasle
>Assignee: Olivier Lemasle
>
> The timezone is not included in datetime format of snapshot events, whereas 
> it is included for other events.
> "eventDateTime" was added by [~chipchilders] in commit 14ee684 and was 
> updated the same day to add the timezone (commit bf967eb) except for 
> Snapshots.
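To illustrate the inconsistency described above, the sketch below formats the same fixed instant without and with a timezone designator. The pattern strings are examples, not necessarily the exact format strings CloudStack uses:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class EventDateFormat {
    // Format a date with the given pattern in GMT, so the example is reproducible.
    static String format(Date d, String pattern) {
        SimpleDateFormat fmt = new SimpleDateFormat(pattern);
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        return fmt.format(d);
    }

    public static void main(String[] args) {
        Date epoch = new Date(0L); // fixed instant for a reproducible example
        // Without an offset the instant is ambiguous to a consumer in another zone...
        System.out.println(format(epoch, "yyyy-MM-dd HH:mm:ss"));   // 1970-01-01 00:00:00
        // ...with "Z" the offset is explicit, as for the other event types.
        System.out.println(format(epoch, "yyyy-MM-dd HH:mm:ss Z")); // 1970-01-01 00:00:00 +0000
    }
}
```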





[jira] [Commented] (CLOUDSTACK-9462) Systemd packaging for Ubuntu 16.04

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956763#comment-15956763
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9462:


Github user wido commented on the issue:

https://github.com/apache/cloudstack/pull/1950
  
LGTM for me


> Systemd packaging for Ubuntu 16.04
> --
>
> Key: CLOUDSTACK-9462
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9462
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> Support for building deb packages that will work on Ubuntu 16.04





[jira] [Commented] (CLOUDSTACK-9840) Datetime format of snapshot events is inconsistent with other events

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956731#comment-15956731
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9840:


Github user olivierlemasle commented on the issue:

https://github.com/apache/cloudstack/pull/2008
  
The Travis tests have finally succeeded.
@borisstoyanov Any chance merging this pull request?


> Datetime format of snapshot events is inconsistent with other events
> 
>
> Key: CLOUDSTACK-9840
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9840
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: eventbus
>Affects Versions: 4.3.0, 4.4.0, 4.5.0, 4.3.1, 4.4.1, 4.4.2, 4.4.3, 4.3.2, 
> 4.5.1, 4.4.4, 4.5.2, 4.6.0, 4.6.1, 4.6.2, 4.7.0, 4.7.1, 4.8.0, 4.9.0, 
> 4.9.2.0, 4.9.1.0, 4.8.1.1, 4.9.0.1, 4.5.2.2
>Reporter: Olivier Lemasle
>Assignee: Olivier Lemasle
>
> The timezone is not included in datetime format of snapshot events, whereas 
> it is included for other events.
> "eventDateTime" was added by [~chipchilders] in commit 14ee684 and was 
> updated the same day to add the timezone (commit bf967eb) except for 
> Snapshots.





[jira] [Commented] (CLOUDSTACK-9348) CloudStack Server degrades when a lot of connections on port 8250

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956692#comment-15956692
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9348:


Github user marcaurele commented on the issue:

https://github.com/apache/cloudstack/pull/2027
  
@rhtyd the `NioTest` result is not consistent on my laptop and fails from 
time to time.


> CloudStack Server degrades when a lot of connections on port 8250
> -
>
> Key: CLOUDSTACK-9348
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9348
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.9.0
>
>
> An intermittent issue was found with a large CloudStack deployment, where 
> servers could not keep agents connected on port 8250.
> All connections are handled by accept() in NioConnection:
> https://github.com/apache/cloudstack/blob/master/utils/src/main/java/com/cloud/utils/nio/NioConnection.java#L125
> A new connection is handled by accept() which does blocking SSL handshake. A 
> good fix would be to make this non-blocking and handle expensive tasks in 
> separate threads/pool. This way the main IO loop won't be blocked and can 
> continue to serve other agents/clients.
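The proposed fix can be sketched as follows. This is a simplified, hypothetical model of offloading the handshake to a worker pool; the class and method names are illustrative and do not reproduce CloudStack's actual NioConnection API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class NonBlockingAccept {
    // Worker pool for expensive per-connection setup, so the selector
    // thread that calls onAccept() is never blocked.
    private final ExecutorService handshakePool = Executors.newFixedThreadPool(4);

    // Called from the IO loop on accept(); returns immediately.
    Future<Boolean> onAccept(String channelId) {
        return handshakePool.submit(() -> doSslHandshake(channelId));
    }

    // Placeholder for the blocking SSLEngine handshake work.
    boolean doSslHandshake(String channelId) {
        return true;
    }

    public static void main(String[] args) throws Exception {
        NonBlockingAccept nio = new NonBlockingAccept();
        Future<Boolean> result = nio.onAccept("agent-1");
        System.out.println("handshake finished: " + result.get());
        nio.handshakePool.shutdown();
    }
}
```

The IO loop stays free to serve other agents while handshakes complete on the pool.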





[jira] [Commented] (CLOUDSTACK-8897) baremetal:addHost:make host tag info mandatory in baremetal addhost Api call

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956665#comment-15956665
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8897:


Github user kishankavala commented on the issue:

https://github.com/apache/cloudstack/pull/874
  
LGTM


> baremetal:addHost:make host tag info mandatory in baremetal addhost Api call
> ---
>
> Key: CLOUDSTACK-8897
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8897
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Baremetal, Management Server
>Reporter: Harikrishna Patnala
>Assignee: Harikrishna Patnala
> Fix For: 4.9.1.0
>
>
> Right now in baremetal, the addHost API succeeds without providing the host 
> tag info, and we recommend that the host tag be mandatory for bare-metal.
> In the current implementation the host tag check happens at VM deployment 
> stage, but it would be good to make the host tag a mandatory field when 
> adding the host itself.





[jira] [Commented] (CLOUDSTACK-9348) CloudStack Server degrades when a lot of connections on port 8250

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956633#comment-15956633
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9348:


GitHub user marcaurele opened a pull request:

https://github.com/apache/cloudstack/pull/2027

Activate NioTest following changes in CLOUDSTACK-9348 PR #1549

The first PR #1493 re-enabled the NioTest but not the new PR #1549.

@rhtyd the test fails locally on my laptop. Are there any special 
configuration requirements?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/exoscale/cloudstack niotest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/2027.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2027


commit 226e79c8ce0686ba3d5690ed90134934e26b635d
Author: Marc-Aurèle Brothier 
Date:   2017-04-05T10:25:17Z

Activate NioTest following changes in CLOUDSTACK-9348 PR #1549




> CloudStack Server degrades when a lot of connections on port 8250
> -
>
> Key: CLOUDSTACK-9348
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9348
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.9.0
>
>
> An intermittent issue was found with a large CloudStack deployment, where 
> servers could not keep agents connected on port 8250.
> All connections are handled by accept() in NioConnection:
> https://github.com/apache/cloudstack/blob/master/utils/src/main/java/com/cloud/utils/nio/NioConnection.java#L125
> A new connection is handled by accept() which does blocking SSL handshake. A 
> good fix would be to make this non-blocking and handle expensive tasks in 
> separate threads/pool. This way the main IO loop won't be blocked and can 
> continue to serve other agents/clients.





[jira] [Commented] (CLOUDSTACK-9208) Assertion Error in VM_POWER_STATE handler.

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956578#comment-15956578
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9208:


Github user DaanHoogland commented on the issue:

https://github.com/apache/cloudstack/pull/1997
  
so those are two situations:
1. it is known to be powered off (no sendStop() should be performed at all)
2. it is unknown because there is no report. Wouldn't one first check the 
host to send the stop command to before sending it?

That said, I am +0 on this code; the check does no harm in any way.


> Assertion Error in VM_POWER_STATE handler.
> --
>
> Key: CLOUDSTACK-9208
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9208
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>Assignee: Kshitij Kansal
>Priority: Minor
>
> 1. Enable the assertions.
> LOG
> 2015-12-31 04:09:06,687 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (RouterStatusMonitor-1:ctx-981a85d4) (logid:863754b8) Found 0 networks to 
> update RvR status.
> 2015-12-31 04:09:07,394 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Ping from 5(10.147.40.18)
> 2015-12-31 04:09:07,394 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Process host VM state 
> report from ping process. host: 5
> 2015-12-31 04:09:07,416 INFO [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Unable to find matched 
> VM in CloudStack DB. name: New Virtual Machine
> 2015-12-31 04:09:07,420 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Process VM state report. 
> host: 5, number of records in report: 5
> 2015-12-31 04:09:07,420 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM state report. host: 
> 5, vm id: 69, power state: PowerOff
> 2015-12-31 04:09:07,530 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM state report is 
> updated. host: 5, vm id: 69, power state: PowerOff
> 2015-12-31 04:09:07,540 INFO [c.c.v.VirtualMachineManagerImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM r-69-VM is at Stopped 
> and we received a power-off report while there is no pending jobs on it
> 2015-12-31 04:09:07,541 ERROR [o.a.c.f.m.MessageDispatcher] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Unexpected exception 
> when calling 
> com.cloud.vm.ClusteredVirtualMachineManagerImpl.HandlePowerStateReport
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.cloudstack.framework.messagebus.MessageDispatcher.dispatch(MessageDispatcher.java:75)
> at 
> org.apache.cloudstack.framework.messagebus.MessageDispatcher.onPublishMessage(MessageDispatcher.java:45)
> at 
> org.apache.cloudstack.framework.messagebus.MessageBusBase$SubscriptionNode.notifySubscribers(MessageBusBase.java:441)
> at 
> org.apache.cloudstack.framework.messagebus.MessageBusBase.publish(MessageBusBase.java:178)
> at 
> com.cloud.vm.VirtualMachinePowerStateSyncImpl.processReport(VirtualMachinePowerStateSyncImpl.java:87)
> at 
> com.cloud.vm.VirtualMachinePowerStateSyncImpl.processHostVmStatePingReport(VirtualMachinePowerStateSyncImpl.java:70)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.processCommands(VirtualMachineManagerImpl.java:2879)
> at 
> com.cloud.agent.manager.AgentManagerImpl.handleCommands(AgentManagerImpl.java:309)
> at 
> com.cloud.agent.manager.DirectAgentAttache$PingTask.runInContext(DirectAgentAttache.java:192)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> at 
> java.util.concurrent.Sche

[jira] [Commented] (CLOUDSTACK-9857) CloudStack KVM Agent Self Fencing - improper systemd config

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956566#comment-15956566
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9857:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/2024
  
@karuturi we have 2 LGTMs on this trivial PR, I guess we can proceed and 
merge it. 


> CloudStack KVM Agent Self Fencing  - improper systemd config
> 
>
> Key: CLOUDSTACK-9857
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9857
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.2
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
>Priority: Critical
> Fix For: 4.10.0.0
>
>
> We had a database outage a few days ago, and we noticed that most of the 
> CloudStack KVM agents committed suicide and never retried to connect. Moreover, 
> we had puppet, which was supposed to restart the cloudstack-agent daemon when 
> it goes into failed, but apparently it never does go to the “failed” state.
> 2017-03-30 04:07:50,720 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Request:Seq -1--1:  { Cmd , MgmtId: -1, via: 
> -1, Ver: v1, Flags: 111, 
> [{"com.cloud.agent.api.ReadyCommand":{"_details":"com.cloud.utils.exception.CloudRuntimeException:
>  DB Exception on: null","wait":0}}] }
> 2017-03-30 04:07:50,721 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Processing command: 
> com.cloud.agent.api.ReadyCommand
> 2017-03-30 04:07:50,721 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Not ready to connect to mgt server: 
> com.cloud.utils.exception.CloudRuntimeException: DB Exception on: null
> 2017-03-30 04:07:50,722 INFO  [cloud.agent.Agent] (AgentShutdownThread:null) 
> Stopping the agent: Reason = sig.kill
> 2017-03-30 04:07:50,723 DEBUG [cloud.agent.Agent] (AgentShutdownThread:null) 
> Sending shutdown to management server
> While the agent fenced itself for whatever logical reason it had, it did not 
> exit properly from systemd's point of view.
> Here is what the status of cloudstack-agent looks like:
> [root@mqa6-kvm02 ~]# service cloudstack-agent status
> ● cloudstack-agent.service - SYSV: Cloud Agent
>Loaded: loaded (/etc/rc.d/init.d/cloudstack-agent)
>Active: active (exited) since Fri 2017-03-31 23:50:47 GMT; 12s ago
>  Docs: man:systemd-sysv-generator(8)
>   Process: 632 ExecStop=/etc/rc.d/init.d/cloudstack-agent stop (code=exited, 
> status=0/SUCCESS)
>   Process: 654 ExecStart=/etc/rc.d/init.d/cloudstack-agent start 
> (code=exited, status=0/SUCCESS)
>  Main PID: 441
> Mar 31 23:50:47 mqa6-kvm02 systemd[1]: Starting SYSV: Cloud Agent...
> Mar 31 23:50:47 mqa6-kvm02 cloudstack-agent[654]: Starting Cloud Agent:
> Mar 31 23:50:47 mqa6-kvm02 systemd[1]: Started SYSV: Cloud Agent.
> Mar 31 23:50:49 mqa6-kvm02 sudo[806]: root : TTY=unknown ; PWD=/ ; 
> USER=root ; COMMAND=/bin/grep InitiatorName= /etc/iscsi/initiatorname.iscsi
> The "Active: active (exited)" should be "Active: failed (Result: exit-code)"
> Solution:
> The fix is to add pidfile into /etc/init.d/cloudstack-agent 
> Like so:
> # chkconfig: 35 99 10
> # description: Cloud Agent
> + # pidfile: /var/run/cloudstack-agent.pid
> Post that - if agent dies - the systemd will catch it properly and it will 
> look as expected
> [root@mqa6-kvm02 ~]# service cloudstack-agent status
> ● cloudstack-agent.service - SYSV: Cloud Agent
>Loaded: loaded (/etc/rc.d/init.d/cloudstack-agent)
>Active: failed (Result: exit-code) since Fri 2017-03-31 23:51:40 GMT; 7s 
> ago
>  Docs: man:systemd-sysv-generator(8)
>   Process: 1124 ExecStop=/etc/rc.d/init.d/cloudstack-agent stop (code=exited, 
> status=255)
>   Process: 949 ExecStart=/etc/rc.d/init.d/cloudstack-agent start 
> (code=exited, status=0/SUCCESS)
>  Main PID: 975
> With this change - some other tool can properly inspect the state of daemon 
> and take actions when it failed instead of it being in active (exited) state.





[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956564#comment-15956564
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user cloudsadhu commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
Added logic to handle the global-level full clone setting.
The full clone parameter is handled differently (the code refers to it in 
3 places); because of that, even if we set the full clone parameter at the 
storage level, deployment still fails, since during VM deployment the 
global-level config value is consulted even though the storage-level value 
is set to true.

Added logic to update the global-level setting, restart the management 
server, and reset back to the original values during cleanup.

SadhuMAC:test_deploy_vm_root_resize_F94M72 sadhuccp$ cat results.txt 
Test deploy virtual machine with root resize ... === TestName: 
test_00_deploy_vm_root_resize | Status : SUCCESS ===
ok
Test proper failure to deploy virtual machine with rootdisksize of 0 ... 
=== TestName: test_01_deploy_vm_root_resize | Status : SUCCESS ===
ok
Test proper failure to deploy virtual machine with rootdisksize less than 
template size ... === TestName: test_02_deploy_vm_root_resize | Status : 
SUCCESS ===
ok

--
Ran 3 tests in 763.269s









> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>
> Currently the root size of an instance is locked to that of the template. 
> This creates unnecessary template duplicates, prevents the creation of a 
> market place, wastes time and disk space and generally makes work more 
> complicated.
> Real life example - a small VPS provider might want to offer the following 
> sizes (in GB):
> 10,20,40,80,160,240,320,480,620
> That's 9 offerings.
> The template selection could look like this, including real disk space used:
> Windows 2008 ~10GB
> Windows 2008+Plesk ~15GB
> Windows 2008+MSSQL ~15GB
> Windows 2012 ~10GB
> Windows 2012+Plesk ~15GB
> Windows 2012+MSSQL ~15GB
> CentOS ~1GB
> CentOS+CPanel ~3GB
> CentOS+Virtualmin ~3GB
> CentOS+Zimbra ~3GB
> CentOS+Docker ~2GB
> Debian ~1GB
> Ubuntu LTS ~1GB
> In this case the total disk space used by templates will be 828 GB, that's 
> almost 1 TB. If your storage is expensive and limited SSD this can get 
> painful!
> If the root resize feature is enabled we can reduce this to under 100 GB.
> Specifications and Description 
> Administrators don't want to deploy duplicate OS templates of differing 
> sizes just to support different storage packages. Instead, the VM deployment 
> can accept a size for the root disk and adjust the template clone 
> accordingly. In addition, CloudStack already supports data disk resizing for 
> existing volumes, we can extend that functionality to resize existing root 
> disks. 
>   As mentioned, we can leverage the existing design for resizing an existing 
> volume. The difference with root volumes is that we can't resize via disk 
> offering, therefore we need to verify that no disk offering was passed, just 
> a size. The existing enforcement of new size > existing size will still 
> serve its purpose.
>For deployment-based resize (ROOT volume size different from template 
> size), we pass the rootdisksize parameter when the existing code allocates 
> the root volume. In the process, we validate that the root disk size is > 
> existing template size, and non-zero. This will persist the root volume as 
> the desired size regardless of whether or not the VM is started on deploy. 
> Then hypervisor specific code needs to be made to pay attention to the 
> VolumeObjectTO's size attribute and use that when doing the work of cloning 
> from template, rather than inheriting the template's size. This can be 
> implemented one hypervisor at a time, and as such there needs to be a check 
> in UserVmManagerImpl to fail unsupported hypervisors with 
> InvalidParameterValueException when the rootdisksize is passed.
>
> Hypervisor specific changes
> XenServer
> Resize ROOT volume is only supported for stopped VMs
> Newly created ROOT volume will be resized after clone from template
> VMware  
> Resize ROOT volume is only supported for stopped VMs.
> New size should be larger than the previous size.
> Newly created ROOT volume will be resized after clone from template
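The deployment-time validation described above can be sketched as follows. This is a hypothetical, simplified check; the method and parameter names are illustrative, not UserVmManagerImpl's actual API:

```java
public class RootDiskResizeCheck {
    // Validation described above: the requested root disk size must be
    // non-zero and must not be smaller than the template size.
    static long validateRootDiskSizeGb(long requestedGb, long templateGb) {
        if (requestedGb <= 0) {
            throw new IllegalArgumentException("rootdisksize must be non-zero");
        }
        if (requestedGb < templateGb) {
            throw new IllegalArgumentException("rootdisksize " + requestedGb
                    + " GB is smaller than the template size " + templateGb + " GB");
        }
        return requestedGb;
    }

    public static void main(String[] args) {
        System.out.println(validateRootDiskSizeGb(20, 10)); // valid resize: prints 20
    }
}
```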

[jira] [Commented] (CLOUDSTACK-9208) Assertion Error in VM_POWER_STATE handler.

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956562#comment-15956562
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9208:


Github user jayapalu commented on the issue:

https://github.com/apache/cloudstack/pull/1997
  
@DaanHoogland  In the case of the VM_POWER_STATE handler, if a PowerOff or 
PowerReportMissing state is encountered, 
handlePowerOffReportWithNoPendingJobsOnVM() is called. If the VM is already in 
the stopped state, the host ID is set to NULL in the DB, but in the above 
function sendStop() is still called with the empty host ID.
So I added a condition in sendStop() itself to check for the host ID.
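The guard described in that comment can be modeled in a simplified, hypothetical form; the real CloudStack types (VMInstanceVO, StopCommand) and the second VM name are not reproduced here:

```java
public class SendStopGuard {
    // A stopped VM has no host id in the DB, so there is nowhere
    // to send a StopCommand; skip instead of dispatching.
    static boolean sendStop(Long hostId, String vmName) {
        if (hostId == null) {
            System.out.println("Skipping StopCommand for " + vmName + ": no host id");
            return true; // nothing to do; treat as success
        }
        System.out.println("Sending StopCommand for " + vmName + " to host " + hostId);
        return true;
    }

    public static void main(String[] args) {
        sendStop(null, "r-69-VM"); // the stopped router from the log above
        sendStop(5L, "i-2-10-VM"); // hypothetical running VM on host 5
    }
}
```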


> Assertion Error in VM_POWER_STATE handler.
> --
>
> Key: CLOUDSTACK-9208
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9208
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>Assignee: Kshitij Kansal
>Priority: Minor
>
> 1. Enable the assertions.
> LOG
> 2015-12-31 04:09:06,687 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (RouterStatusMonitor-1:ctx-981a85d4) (logid:863754b8) Found 0 networks to 
> update RvR status.
> 2015-12-31 04:09:07,394 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Ping from 5(10.147.40.18)
> 2015-12-31 04:09:07,394 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Process host VM state 
> report from ping process. host: 5
> 2015-12-31 04:09:07,416 INFO [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Unable to find matched 
> VM in CloudStack DB. name: New Virtual Machine
> 2015-12-31 04:09:07,420 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Process VM state report. 
> host: 5, number of records in report: 5
> 2015-12-31 04:09:07,420 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM state report. host: 
> 5, vm id: 69, power state: PowerOff
> 2015-12-31 04:09:07,530 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM state report is 
> updated. host: 5, vm id: 69, power state: PowerOff
> 2015-12-31 04:09:07,540 INFO [c.c.v.VirtualMachineManagerImpl] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) VM r-69-VM is at Stopped 
> and we received a power-off report while there is no pending jobs on it
> 2015-12-31 04:09:07,541 ERROR [o.a.c.f.m.MessageDispatcher] 
> (DirectAgentCronJob-3:ctx-3ba82e46) (logid:02dcbd48) Unexpected exception 
> when calling 
> com.cloud.vm.ClusteredVirtualMachineManagerImpl.HandlePowerStateReport
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.cloudstack.framework.messagebus.MessageDispatcher.dispatch(MessageDispatcher.java:75)
> at 
> org.apache.cloudstack.framework.messagebus.MessageDispatcher.onPublishMessage(MessageDispatcher.java:45)
> at 
> org.apache.cloudstack.framework.messagebus.MessageBusBase$SubscriptionNode.notifySubscribers(MessageBusBase.java:441)
> at 
> org.apache.cloudstack.framework.messagebus.MessageBusBase.publish(MessageBusBase.java:178)
> at 
> com.cloud.vm.VirtualMachinePowerStateSyncImpl.processReport(VirtualMachinePowerStateSyncImpl.java:87)
> at 
> com.cloud.vm.VirtualMachinePowerStateSyncImpl.processHostVmStatePingReport(VirtualMachinePowerStateSyncImpl.java:70)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.processCommands(VirtualMachineManagerImpl.java:2879)
> at 
> com.cloud.agent.manager.AgentManagerImpl.handleCommands(AgentManagerImpl.java:309)
> at 
> com.cloud.agent.manager.DirectAgentAttache$PingTask.runInContext(DirectAgentAttache.java:192)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.

[jira] [Commented] (CLOUDSTACK-9857) CloudStack KVM Agent Self Fencing - improper systemd config

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956560#comment-15956560
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9857:


Github user DaanHoogland commented on the issue:

https://github.com/apache/cloudstack/pull/2024
  
trivial enough: LGTM


> CloudStack KVM Agent Self Fencing  - improper systemd config
> 
>
> Key: CLOUDSTACK-9857
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9857
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.2
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
>Priority: Critical
> Fix For: 4.10.0.0
>
>
> We had a database outage a few days ago, and we noticed that most of the 
> CloudStack KVM agents committed suicide and never retried to connect. Moreover, 
> we had puppet, which was supposed to restart the cloudstack-agent daemon when 
> it goes into failed, but apparently it never does go to the “failed” state.
> 2017-03-30 04:07:50,720 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Request:Seq -1--1:  { Cmd , MgmtId: -1, via: 
> -1, Ver: v1, Flags: 111, 
> [{"com.cloud.agent.api.ReadyCommand":{"_details":"com.cloud.utils.exception.CloudRuntimeException:
>  DB Exception on: null","wait":0}}] }
> 2017-03-30 04:07:50,721 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Processing command: 
> com.cloud.agent.api.ReadyCommand
> 2017-03-30 04:07:50,721 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Not ready to connect to mgt server: 
> com.cloud.utils.exception.CloudRuntimeException: DB Exception on: null
> 2017-03-30 04:07:50,722 INFO  [cloud.agent.Agent] (AgentShutdownThread:null) 
> Stopping the agent: Reason = sig.kill
> 2017-03-30 04:07:50,723 DEBUG [cloud.agent.Agent] (AgentShutdownThread:null) 
> Sending shutdown to management server
> While the agent fenced itself for whatever logical reason it had, it did not 
> exit properly from systemd's point of view.
> Here is what the status of cloudstack-agent looks like:
> [root@mqa6-kvm02 ~]# service cloudstack-agent status
> ● cloudstack-agent.service - SYSV: Cloud Agent
>Loaded: loaded (/etc/rc.d/init.d/cloudstack-agent)
>Active: active (exited) since Fri 2017-03-31 23:50:47 GMT; 12s ago
>  Docs: man:systemd-sysv-generator(8)
>   Process: 632 ExecStop=/etc/rc.d/init.d/cloudstack-agent stop (code=exited, 
> status=0/SUCCESS)
>   Process: 654 ExecStart=/etc/rc.d/init.d/cloudstack-agent start 
> (code=exited, status=0/SUCCESS)
>  Main PID: 441
> Mar 31 23:50:47 mqa6-kvm02 systemd[1]: Starting SYSV: Cloud Agent...
> Mar 31 23:50:47 mqa6-kvm02 cloudstack-agent[654]: Starting Cloud Agent:
> Mar 31 23:50:47 mqa6-kvm02 systemd[1]: Started SYSV: Cloud Agent.
> Mar 31 23:50:49 mqa6-kvm02 sudo[806]: root : TTY=unknown ; PWD=/ ; 
> USER=root ; COMMAND=/bin/grep InitiatorName= /etc/iscsi/initiatorname.iscsi
> The "Active: active (exited)" should be "Active: failed (Result: exit-code)”
> Solution:
> The fix is to add pidfile into /etc/init.d/cloudstack-agent 
> Like so:
> # chkconfig: 35 99 10
> # description: Cloud Agent
> + # pidfile: /var/run/cloudstack-agent.pid
> Post that - if agent dies - the systemd will catch it properly and it will 
> look as expected
> [root@mqa6-kvm02 ~]# service cloudstack-agent status
> ● cloudstack-agent.service - SYSV: Cloud Agent
>Loaded: loaded (/etc/rc.d/init.d/cloudstack-agent)
>Active: failed (Result: exit-code) since Fri 2017-03-31 23:51:40 GMT; 7s 
> ago
>  Docs: man:systemd-sysv-generator(8)
>   Process: 1124 ExecStop=/etc/rc.d/init.d/cloudstack-agent stop (code=exited, 
> status=255)
>   Process: 949 ExecStart=/etc/rc.d/init.d/cloudstack-agent start 
> (code=exited, status=0/SUCCESS)
>  Main PID: 975
> With this change, some other tool can properly inspect the state of the 
> daemon and take action when it fails, instead of the unit being stuck in the 
> active (exited) state.
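The `# pidfile:` hint above is one of the Red Hat-style header comments that systemd-sysv-generator reads when it wraps a SysV init script in a generated unit, translating it into a `PIDFile=` directive so systemd can track the real daemon process. A minimal sketch of that parsing step (hypothetical helper, not CloudStack or systemd code):

```python
# Minimal sketch of reading the "# pidfile:" hint from a SysV init script
# header -- the same comment systemd-sysv-generator turns into PIDFile=.
# The header below mirrors the fix described for /etc/init.d/cloudstack-agent.
import re

INIT_SCRIPT_HEADER = """\
#!/bin/sh
# chkconfig: 35 99 10
# description: Cloud Agent
# pidfile: /var/run/cloudstack-agent.pid
"""

def find_pidfile(script_text):
    """Return the path from a '# pidfile:' comment, or None if absent."""
    match = re.search(r"^#\s*pidfile:\s*(\S+)", script_text, re.MULTILINE)
    return match.group(1) if match else None

print(find_pidfile(INIT_SCRIPT_HEADER))  # -> /var/run/cloudstack-agent.pid
```

With the pid file known, systemd watches that process rather than only the short-lived start script, which is why the unit then transitions to "failed" when the agent dies.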



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8865) Adding SR doesn't create Storage_pool_host_ref entry for disabled host

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956531#comment-15956531
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8865:


Github user SudharmaJain commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/876#discussion_r109859606
  
--- Diff: 
plugins/storage/volume/cloudbyte/src/org/apache/cloudstack/storage/datastore/lifecycle/ElastistorPrimaryDataStoreLifeCycle.java
 ---
@@ -359,7 +359,7 @@ public boolean attachCluster(DataStore store, 
ClusterScope scope) {
 
 PrimaryDataStoreInfo primarystore = (PrimaryDataStoreInfo) store;
 // Check if there is host up in this cluster
-List allHosts = 
_resourceMgr.listAllUpAndEnabledHosts(Host.Type.Routing, 
primarystore.getClusterId(), primarystore.getPodId(), 
primarystore.getDataCenterId());
+List allHosts = 
_resourceMgr.listAllUpHosts(Host.Type.Routing, primarystore.getClusterId(), 
primarystore.getPodId(), primarystore.getDataCenterId());
--- End diff --

@syed We cannot send commands to hosts in maintenance mode, so it is not 
possible to add an SR to those hosts. 
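For illustration only, a toy Python model of the distinction the diff turns on (hypothetical field names and a simplified state model, not the CloudStack resource manager): `listAllUpHosts` also returns disabled hosts, so they get a storage_pool_host_ref entry, while `listAllUpAndEnabledHosts` skips them.

```python
# Toy model (hypothetical fields) of the host-listing change in the diff:
# listAllUpAndEnabledHosts skips disabled hosts, so they never get a
# storage_pool_host_ref entry; listAllUpHosts includes them.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    status: str          # connection state: "Up" or "Down"
    resource_state: str  # admin state: "Enabled", "Disabled", "Maintenance"

def list_all_up_and_enabled_hosts(hosts):
    return [h for h in hosts if h.status == "Up" and h.resource_state == "Enabled"]

def list_all_up_hosts(hosts):
    # Broader: disabled hosts are included so the SR mapping can be created
    # for them too; down hosts are still excluded.
    return [h for h in hosts if h.status == "Up"]

hosts = [
    Host("h1", "Up", "Enabled"),
    Host("h2", "Up", "Disabled"),
    Host("h3", "Down", "Enabled"),
]
print([h.name for h in list_all_up_and_enabled_hosts(hosts)])  # -> ['h1']
print([h.name for h in list_all_up_hosts(hosts)])              # -> ['h1', 'h2']
```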


> Adding SR doesn't create Storage_pool_host_ref entry for disabled host
> --
>
> Key: CLOUDSTACK-8865
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8865
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.0
>Reporter: sudharma jain
>
> When we add Primary Storage to a XenServer cluster that has a host in the 
> disabled state, the mapping info between each host and each storage pool in 
> storage_pool_host_ref is not created for the disabled host. However, on the 
> XenServer side the SR is added at the pool level, so the SR can be seen from 
> all hosts. James wants the mapping info populated in the db as well.





[jira] [Commented] (CLOUDSTACK-9857) CloudStack KVM Agent Self Fencing - improper systemd config

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956530#comment-15956530
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9857:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/2024
  
ping @DaanHoogland @PaulAngus @rhtyd for review


> CloudStack KVM Agent Self Fencing  - improper systemd config
> 
>
> Key: CLOUDSTACK-9857
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9857
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.2
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
>Priority: Critical
> Fix For: 4.10.0.0
>
>
> We had a database outage a few days ago, and we noticed that most of the 
> CloudStack KVM agents killed themselves and never retried to connect. 
> Moreover, we had puppet, which was supposed to restart the cloudstack-agent 
> daemon when it goes into the failed state, but apparently the unit never 
> does go into the "failed" state.
> 2017-03-30 04:07:50,720 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Request:Seq -1--1:  { Cmd , MgmtId: -1, via: 
> -1, Ver: v1, Flags: 111, 
> [{"com.cloud.agent.api.ReadyCommand":{"_details":"com.cloud.utils.exception.CloudRuntimeException:
>  DB Exception on: null","wait":0}}] }
> 2017-03-30 04:07:50,721 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Processing command: 
> com.cloud.agent.api.ReadyCommand
> 2017-03-30 04:07:50,721 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Not ready to connect to mgt server: 
> com.cloud.utils.exception.CloudRuntimeException: DB Exception on: null
> 2017-03-30 04:07:50,722 INFO  [cloud.agent.Agent] (AgentShutdownThread:null) 
> Stopping the agent: Reason = sig.kill
> 2017-03-30 04:07:50,723 DEBUG [cloud.agent.Agent] (AgentShutdownThread:null) 
> Sending shutdown to management server
> While the agent fenced itself for whatever logical reason it had, the exit 
> was not reflected properly in systemd.
> Here is what the status of cloudstack-agent looks like:
> [root@mqa6-kvm02 ~]# service cloudstack-agent status
> ● cloudstack-agent.service - SYSV: Cloud Agent
>Loaded: loaded (/etc/rc.d/init.d/cloudstack-agent)
>Active: active (exited) since Fri 2017-03-31 23:50:47 GMT; 12s ago
>  Docs: man:systemd-sysv-generator(8)
>   Process: 632 ExecStop=/etc/rc.d/init.d/cloudstack-agent stop (code=exited, 
> status=0/SUCCESS)
>   Process: 654 ExecStart=/etc/rc.d/init.d/cloudstack-agent start 
> (code=exited, status=0/SUCCESS)
>  Main PID: 441
> Mar 31 23:50:47 mqa6-kvm02 systemd[1]: Starting SYSV: Cloud Agent...
> Mar 31 23:50:47 mqa6-kvm02 cloudstack-agent[654]: Starting Cloud Agent:
> Mar 31 23:50:47 mqa6-kvm02 systemd[1]: Started SYSV: Cloud Agent.
> Mar 31 23:50:49 mqa6-kvm02 sudo[806]: root : TTY=unknown ; PWD=/ ; 
> USER=root ; COMMAND=/bin/grep InitiatorName= /etc/iscsi/initiatorname.iscsi
> The "Active: active (exited)" should be "Active: failed (Result: exit-code)”
> Solution:
> The fix is to add pidfile into /etc/init.d/cloudstack-agent 
> Like so:
> # chkconfig: 35 99 10
> # description: Cloud Agent
> + # pidfile: /var/run/cloudstack-agent.pid
> Post that - if agent dies - the systemd will catch it properly and it will 
> look as expected
> [root@mqa6-kvm02 ~]# service cloudstack-agent status
> ● cloudstack-agent.service - SYSV: Cloud Agent
>Loaded: loaded (/etc/rc.d/init.d/cloudstack-agent)
>Active: failed (Result: exit-code) since Fri 2017-03-31 23:51:40 GMT; 7s 
> ago
>  Docs: man:systemd-sysv-generator(8)
>   Process: 1124 ExecStop=/etc/rc.d/init.d/cloudstack-agent stop (code=exited, 
> status=255)
>   Process: 949 ExecStart=/etc/rc.d/init.d/cloudstack-agent start 
> (code=exited, status=0/SUCCESS)
>  Main PID: 975
> With this change, some other tool can properly inspect the state of the 
> daemon and take action when it fails, instead of the unit being stuck in the 
> active (exited) state.





[jira] [Commented] (CLOUDSTACK-9756) IP address must not be allocated to other VR if releasing ip address is failed

2017-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956522#comment-15956522
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9756:


Github user jayapalu commented on the issue:

https://github.com/apache/cloudstack/pull/1917
  
There are two LGTMs and no test failures in the test results, so 
marking tag:mergeready


>  IP address must not be allocated to other VR if releasing ip address is 
> failed
> ---
>
> Key: CLOUDSTACK-9756
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9756
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
> Fix For: 4.10.0.0
>
>
> Apply rule (delete) succeeds even when the ip assoc fails on the back end; 
> CloudStack ignores the ip assoc failure.
> Due to this the ip gets freed and assigned to another network/account, which 
> causes the ip to be present in more than one router.
> Fix: fail the apply rule (delete) on ipassoc failure.
> Repro steps:
> 1. Configure PF/static nat/Firewall rules
> 2. Delete the rule configured.
> On deleting the rule, fail the ip assoc on the router.
> 3. Delete rule fails because ip assoc got failed.
> For RVR:
> 1. acquire several public ips,
> 2. add some rules on those public ips, so ips should show up in RVR,
> 3. change ipassoc.sh in the RVR so that it always returns an error on 
> disassociate ip.
> 4. disassociate the ip from the UI; the ip is freed even though the 
> disassociation fails inside the VR.
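The fix described above can be sketched as follows (hypothetical names; the real change lives in CloudStack's network code): the delete path must propagate an ipassoc failure instead of freeing the IP regardless.

```python
# Sketch (hypothetical names) of the fix: if disassociating the IP on the
# router fails, the rule deletion must fail too, so the IP stays allocated
# and cannot be handed to another network/account while still on the VR.
class IpAssocError(Exception):
    pass

def apply_delete_rule(ip, disassociate):
    """disassociate(ip) returns True on success, False on failure."""
    if not disassociate(ip):
        # Before the fix this failure was ignored and the IP was freed anyway,
        # which is how the same IP ended up present in more than one router.
        raise IpAssocError("ipassoc failed for %s; keeping IP allocated" % ip)
    return "freed"
```

With a failing `disassociate`, `apply_delete_rule` raises instead of returning, so the caller never releases the IP record.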


