[jira] [Commented] (CLOUDSTACK-9008) VM Snapshots no longer work with managed storage

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983814#comment-14983814
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9008:


GitHub user mike-tutkowski opened a pull request:

https://github.com/apache/cloudstack/pull/1016

CLOUDSTACK-9008 - Pass hypervisor snapshot reserve field in when creating
compute and disk offerings

https://issues.apache.org/jira/browse/CLOUDSTACK-9008

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mike-tutkowski/cloudstack hsr_marvin

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1016.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1016


commit c2d4d2972dc7d87e0adc94cb809e27d5e51f95c1
Author: Mike Tutkowski 
Date:   2015-10-31T04:13:56Z

CLOUDSTACK-9008 - Pass hypervisor snapshot reserve field in when creating 
compute and disk offerings




> VM Snapshots no longer work with managed storage
> 
>
> Key: CLOUDSTACK-9008
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9008
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
> Environment: XenServer 6.5
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.6.0
>
>
> When using managed storage for the root disk of a VM, you cannot revert a VM 
> to a VM snapshot without encountering a RuntimeException that destroys the 
> state of your disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9000) Logrotate cloudstack-agent error and out files

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983811#comment-14983811
 ] 

ASF subversion and git services commented on CLOUDSTACK-9000:
-------------------------------------------------------------

Commit ba9a600410c8d7818bb19e24b4389722ef6507e8 in cloudstack's branch 
refs/heads/sf-plugins-a from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ba9a600 ]

CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs

Adds logrotate rules for cloudstack-agent.{err,out} log files

Signed-off-by: Rohit Yadav 


> Logrotate cloudstack-agent error and out files
> --
>
> Key: CLOUDSTACK-9000
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9000
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.5.3, 4.6.0
>
>
> As defined in cloud-agent.rc ( -errfile $LOGDIR/cloudstack-agent.err -outfile 
> $LOGDIR/cloudstack-agent.out $CLASS), jsvc can fill up the disk very quickly 
> in case of errors. The fix is to logrotate the out and err files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9000) Logrotate cloudstack-agent error and out files

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983813#comment-14983813
 ] 

ASF subversion and git services commented on CLOUDSTACK-9000:
-------------------------------------------------------------

Commit ef90fec5eaba0b7a9f0707ee3bd5eed9aea9eedb in cloudstack's branch 
refs/heads/sf-plugins-a from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ef90fec ]

Merge pull request #993 from shapeblue/4.5-logrotate-kvm-agent-erroutlogs

[4.5] CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs

Adds logrotate rules for cloudstack-agent.{err,out} log files

cc @remibergsma @wido @wilderrodrigues and others

* pr/993:
  CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs

Signed-off-by: Rohit Yadav 


> Logrotate cloudstack-agent error and out files
> --
>
> Key: CLOUDSTACK-9000
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9000
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.5.3, 4.6.0
>
>
> As defined in cloud-agent.rc ( -errfile $LOGDIR/cloudstack-agent.err -outfile 
> $LOGDIR/cloudstack-agent.out $CLASS), jsvc can fill up the disk very quickly 
> in case of errors. The fix is to logrotate the out and err files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9000) Logrotate cloudstack-agent error and out files

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983812#comment-14983812
 ] 

ASF subversion and git services commented on CLOUDSTACK-9000:
-------------------------------------------------------------

Commit ef90fec5eaba0b7a9f0707ee3bd5eed9aea9eedb in cloudstack's branch 
refs/heads/sf-plugins-a from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ef90fec ]

Merge pull request #993 from shapeblue/4.5-logrotate-kvm-agent-erroutlogs

[4.5] CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs

Adds logrotate rules for cloudstack-agent.{err,out} log files

cc @remibergsma @wido @wilderrodrigues and others

* pr/993:
  CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs

Signed-off-by: Rohit Yadav 


> Logrotate cloudstack-agent error and out files
> --
>
> Key: CLOUDSTACK-9000
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9000
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.5.3, 4.6.0
>
>
> As defined in cloud-agent.rc ( -errfile $LOGDIR/cloudstack-agent.err -outfile 
> $LOGDIR/cloudstack-agent.out $CLASS), jsvc can fill up the disk very quickly 
> in case of errors. The fix is to logrotate the out and err files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9000) Logrotate cloudstack-agent error and out files

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983810#comment-14983810
 ] 

ASF subversion and git services commented on CLOUDSTACK-9000:
-------------------------------------------------------------

Commit bacf971220ea97e80ac2fcf28de3ed9a51749522 in cloudstack's branch 
refs/heads/sf-plugins-a from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=bacf971 ]

CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs

Adds logrotate rules for cloudstack-agent.{err,out} log files

Signed-off-by: Rohit Yadav 


> Logrotate cloudstack-agent error and out files
> --
>
> Key: CLOUDSTACK-9000
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9000
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.5.3, 4.6.0
>
>
> As defined in cloud-agent.rc ( -errfile $LOGDIR/cloudstack-agent.err -outfile 
> $LOGDIR/cloudstack-agent.out $CLASS), jsvc can fill up the disk very quickly 
> in case of errors. The fix is to logrotate the out and err files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-9008) VM Snapshots no longer work with managed storage

2015-10-30 Thread Mike Tutkowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Tutkowski updated CLOUDSTACK-9008:
---
Priority: Major  (was: Blocker)

> VM Snapshots no longer work with managed storage
> 
>
> Key: CLOUDSTACK-9008
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9008
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
> Environment: XenServer 6.5
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.6.0
>
>
> When using managed storage for the root disk of a VM, you cannot revert a VM 
> to a VM snapshot without encountering a RuntimeException that destroys the 
> state of your disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9004) Add functionality to LibvirtVMDef.HyperVEnlightenmentFeatureDef

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983619#comment-14983619
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9004:


Github user jharshman commented on the pull request:

https://github.com/apache/cloudstack/pull/1013#issuecomment-152675128
  
My apologies Daan,
I am happy to clarify.

The PR for this Jira ticket (9004) was created as a subtask of 
CLOUDSTACK-8978. Its intent is to add the ability to set certain hv 
parameters, with the end goal in 8978 of implementing Hyper-V enlightenment 
features for Windows Server 2008 guests.

The current code for HyperVEnlightenmentFeatureDef only allowed setting the 
hv_relaxed bit. This bit disables a Windows sanity check that commonly 
results in a BSOD when the VM is running on a host under heavy load.

The change here adds the ability to set the hv_vapic and hv_spinlocks bits, 
as well as the spinlock retry value. The hv_vapic bit tries to reduce 
interrupt overhead in guests. The hv_spinlocks bit is used by the guest to 
notify the hypervisor that the calling virtual processor is attempting to 
access a resource that may be held by another virtual processor. For the 
host, the retry value for hv_spinlocks indicates the number of times the 
virtual processor should attempt access before the spinlock is considered 
excessive.

Basically, these changes set the stage for the changes I am going to 
submit in CLOUDSTACK-8974.
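For context, the three bits described above correspond to the hyperv features element in a libvirt domain XML, roughly as sketched below (the retry count is an arbitrary example value, not one taken from this PR):

```xml
<features>
  <hyperv>
    <!-- disables a Windows sanity check that can BSOD under host load -->
    <relaxed state='on'/>
    <!-- paravirtualized APIC to reduce guest interrupt overhead -->
    <vapic state='on'/>
    <!-- spinlock notification; 'retries' is the attempt threshold -->
    <spinlocks state='on' retries='4096'/>
  </hyperv>
</features>
```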


> Add functionality to LibvirtVMDef.HyperVEnlightenmentFeatureDef
> ---
>
> Key: CLOUDSTACK-9004
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9004
> Project: CloudStack
>  Issue Type: Sub-task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.2
>Reporter: Josh Harshman
>Priority: Minor
>  Labels: easyfix, patch, perfomance, windows
>
> LibvirtVMDef.HyperVEnlightenmentFeatureDef only supports the setting of the 
> relaxed mode feature.  This change will expand the subclass to be able to set 
> vapic and spinlock boolean values, as well as spinlock retry value.
> These values will then be written out to the XML appropriately. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8964) Can't create template or volume from snapshot - "Are you sure you got the right type of server?"

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983539#comment-14983539
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8964:


Github user DaanHoogland commented on the pull request:

https://github.com/apache/cloudstack/pull/1015#issuecomment-152668125
  
ping @miguelaferreira @wilderrodrigues @karuturi @remibergsma 
@therestoftheworld: as @snuf asked, I applied what I suggested myself. It is 
too easy an improvement not to make, in my opinion.


> Can't create template or volume from snapshot - "Are you sure you got the 
> right type of server?"
> 
>
> Key: CLOUDSTACK-8964
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8964
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage
>Affects Versions: 4.6.0
> Environment: CentOS 6 HVs & mgmt
>Reporter: Nux
>Assignee: Wei Zhou
>Priority: Blocker
> Fix For: 4.6.0
>
>
> I have a couple of snapshots left over from by-now-deleted instances. Trying 
> to turn them into volumes fails with (UI/cloudmonkey shows this):
> "Failed to create templateUnsupported command issued: 
> org.apache.cloudstack.storage.command.CopyCommand. Are you sure you got the 
> right type of server?"
> mgmt server logs for when trying to create template:
> "2015-10-18 09:15:58,437 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-5:ctx-84b2a9be) ===START===  192.168.192.198 -- GET  
> command=createTemplate&response=json&snapshotid=da79387b-ecae-4d5c-b414-3942d29ad821&name=testsnap1&displayText=testsnap1&osTypeId=ba03db1c-7359-11e5-b4d0-f2a3ece198a5&isPublic=false&passwordEnabled=false&isdynamicallyscalable=false&_=1445156157698
> 2015-10-18 09:15:58,459 DEBUG [c.c.t.TemplateManagerImpl] 
> (catalina-exec-5:ctx-84b2a9be ctx-921b9b20) This template is getting created 
> from other template, setting source template Id to: 201
> 2015-10-18 09:15:58,500 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
> (API-Job-Executor-33:ctx-f566f6af job-135) Add job-135 into job monitoring
> 2015-10-18 09:15:58,506 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (catalina-exec-5:ctx-84b2a9be ctx-921b9b20) submit async job-135, details: 
> AsyncJobVO {id:135, userId: 2, accountId: 2, instanceType: Template, 
> instanceId: 207, cmd: 
> org.apache.cloudstack.api.command.admin.template.CreateTemplateCmdByAdmin, 
> cmdInfo: 
> {"cmdEventType":"TEMPLATE.CREATE","ctxUserId":"2","httpmethod":"GET","osTypeId":"ba03db1c-7359-11e5-b4d0-f2a3ece198a5","isPublic":"false","isdynamicallyscalable":"false","response":"json","id":"207","ctxDetails":"{\"interface
>  
> com.cloud.template.VirtualMachineTemplate\":\"9c045e56-2463-47f8-a257-840656e1c0bd\",\"interface
>  
> com.cloud.storage.Snapshot\":\"da79387b-ecae-4d5c-b414-3942d29ad821\",\"interface
>  
> com.cloud.storage.GuestOS\":\"ba03db1c-7359-11e5-b4d0-f2a3ece198a5\"}","displayText":"testsnap1","snapshotid":"da79387b-ecae-4d5c-b414-3942d29ad821","passwordEnabled":"false","name":"testsnap1","_":"1445156157698","uuid":"9c045e56-2463-47f8-a257-840656e1c0bd","ctxAccountId":"2","ctxStartEventId":"253"},
>  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
> null, initMsid: 266785867798693, completeMsid: null, lastUpdated: null, 
> lastPolled: null, created: null}
> 2015-10-18 09:15:58,506 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-5:ctx-84b2a9be ctx-921b9b20) ===END===  192.168.192.198 -- GET 
>  
> command=createTemplate&response=json&snapshotid=da79387b-ecae-4d5c-b414-3942d29ad821&name=testsnap1&displayText=testsnap1&osTypeId=ba03db1c-7359-11e5-b4d0-f2a3ece198a5&isPublic=false&passwordEnabled=false&isdynamicallyscalable=false&_=1445156157698
> 2015-10-18 09:15:58,507 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (API-Job-Executor-33:ctx-f566f6af job-135) Executing AsyncJobVO {id:135, 
> userId: 2, accountId: 2, instanceType: Template, instanceId: 207, cmd: 
> org.apache.cloudstack.api.command.admin.template.CreateTemplateCmdByAdmin, 
> cmdInfo: 
> {"cmdEventType":"TEMPLATE.CREATE","ctxUserId":"2","httpmethod":"GET","osTypeId":"ba03db1c-7359-11e5-b4d0-f2a3ece198a5","isPublic":"false","isdynamicallyscalable":"false","response":"json","id":"207","ctxDetails":"{\"interface
>  
> com.cloud.template.VirtualMachineTemplate\":\"9c045e56-2463-47f8-a257-840656e1c0bd\",\"interface
>  
> com.cloud.storage.Snapshot\":\"da79387b-ecae-4d5c-b414-3942d29ad821\",\"interface
>  
> com.cloud.storage.GuestOS\":\"ba03db1c-7359-11e5-b4d0-f2a3ece198a5\"}","displayText":"testsnap1","snapshotid":"da79387b-ecae-4d5c-b414-3942d29ad821","passwordEnabled":"false","name":"testsnap1","_":"1445156157698","uuid":"9c045e56-2463-47f8-a257

[jira] [Commented] (CLOUDSTACK-9004) Add functionality to LibvirtVMDef.HyperVEnlightenmentFeatureDef

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983508#comment-14983508
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9004:


Github user DaanHoogland commented on the pull request:

https://github.com/apache/cloudstack/pull/1013#issuecomment-152667236
  
I can see that your code does what you say in the description @jharshman, but 
I totally lack the background in Windows to judge whether this makes sense. 
Can you expand on the reason behind this change, here or in the Jira ticket? 
The code looks good; the change itself I cannot judge.

Also, I can see you created this as a subtask, but not of what. Can you give 
that context?

If you are already discussing this with someone else in the community, 
please ping them here so they can comment and compensate for my ignorance.

Thanks for working on Apache CloudStack!


> Add functionality to LibvirtVMDef.HyperVEnlightenmentFeatureDef
> ---
>
> Key: CLOUDSTACK-9004
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9004
> Project: CloudStack
>  Issue Type: Sub-task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.2
>Reporter: Josh Harshman
>Priority: Minor
>  Labels: easyfix, patch, perfomance, windows
>
> LibvirtVMDef.HyperVEnlightenmentFeatureDef only supports the setting of the 
> relaxed mode feature.  This change will expand the subclass to be able to set 
> vapic and spinlock boolean values, as well as spinlock retry value.
> These values will then be written out to the XML appropriately. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9006) ListTemplates API returns result in inconsistent order when called concurrently

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983467#comment-14983467
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9006:


Github user DaanHoogland commented on the pull request:

https://github.com/apache/cloudstack/pull/1009#issuecomment-152662658
  
code lgtm, but I'm wondering about testing this (in an integration sense). I 
think a unit test would be nice, as @bhaisaab suggested. In addition, if we 
want to verify the fix from a user perspective, what do we do (without having 
to add 1000+ templates) @rags22489664? Can you give a short description?


> ListTemplates API returns result in inconsistent order when called 
> concurrently
> ---
>
> Key: CLOUDSTACK-9006
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9006
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Ramamurti Subramanian
>Assignee: Ramamurti Subramanian
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8964) Can't create template or volume from snapshot - "Are you sure you got the right type of server?"

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983432#comment-14983432
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8964:


Github user DaanHoogland commented on the pull request:

https://github.com/apache/cloudstack/pull/975#issuecomment-152660873
  
#1015 made to apply my remarks


> Can't create template or volume from snapshot - "Are you sure you got the 
> right type of server?"
> 
>
> Key: CLOUDSTACK-8964
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8964
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage
>Affects Versions: 4.6.0
> Environment: CentOS 6 HVs & mgmt
>Reporter: Nux
>Assignee: Wei Zhou
>Priority: Blocker
> Fix For: 4.6.0
>
>
> I have a couple of snapshots left over from by-now-deleted instances. Trying 
> to turn them into volumes fails with (UI/cloudmonkey shows this):
> "Failed to create templateUnsupported command issued: 
> org.apache.cloudstack.storage.command.CopyCommand. Are you sure you got the 
> right type of server?"
> mgmt server logs for when trying to create template:
> "2015-10-18 09:15:58,437 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-5:ctx-84b2a9be) ===START===  192.168.192.198 -- GET  
> command=createTemplate&response=json&snapshotid=da79387b-ecae-4d5c-b414-3942d29ad821&name=testsnap1&displayText=testsnap1&osTypeId=ba03db1c-7359-11e5-b4d0-f2a3ece198a5&isPublic=false&passwordEnabled=false&isdynamicallyscalable=false&_=1445156157698
> 2015-10-18 09:15:58,459 DEBUG [c.c.t.TemplateManagerImpl] 
> (catalina-exec-5:ctx-84b2a9be ctx-921b9b20) This template is getting created 
> from other template, setting source template Id to: 201
> 2015-10-18 09:15:58,500 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
> (API-Job-Executor-33:ctx-f566f6af job-135) Add job-135 into job monitoring
> 2015-10-18 09:15:58,506 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (catalina-exec-5:ctx-84b2a9be ctx-921b9b20) submit async job-135, details: 
> AsyncJobVO {id:135, userId: 2, accountId: 2, instanceType: Template, 
> instanceId: 207, cmd: 
> org.apache.cloudstack.api.command.admin.template.CreateTemplateCmdByAdmin, 
> cmdInfo: 
> {"cmdEventType":"TEMPLATE.CREATE","ctxUserId":"2","httpmethod":"GET","osTypeId":"ba03db1c-7359-11e5-b4d0-f2a3ece198a5","isPublic":"false","isdynamicallyscalable":"false","response":"json","id":"207","ctxDetails":"{\"interface
>  
> com.cloud.template.VirtualMachineTemplate\":\"9c045e56-2463-47f8-a257-840656e1c0bd\",\"interface
>  
> com.cloud.storage.Snapshot\":\"da79387b-ecae-4d5c-b414-3942d29ad821\",\"interface
>  
> com.cloud.storage.GuestOS\":\"ba03db1c-7359-11e5-b4d0-f2a3ece198a5\"}","displayText":"testsnap1","snapshotid":"da79387b-ecae-4d5c-b414-3942d29ad821","passwordEnabled":"false","name":"testsnap1","_":"1445156157698","uuid":"9c045e56-2463-47f8-a257-840656e1c0bd","ctxAccountId":"2","ctxStartEventId":"253"},
>  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
> null, initMsid: 266785867798693, completeMsid: null, lastUpdated: null, 
> lastPolled: null, created: null}
> 2015-10-18 09:15:58,506 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-5:ctx-84b2a9be ctx-921b9b20) ===END===  192.168.192.198 -- GET 
>  
> command=createTemplate&response=json&snapshotid=da79387b-ecae-4d5c-b414-3942d29ad821&name=testsnap1&displayText=testsnap1&osTypeId=ba03db1c-7359-11e5-b4d0-f2a3ece198a5&isPublic=false&passwordEnabled=false&isdynamicallyscalable=false&_=1445156157698
> 2015-10-18 09:15:58,507 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (API-Job-Executor-33:ctx-f566f6af job-135) Executing AsyncJobVO {id:135, 
> userId: 2, accountId: 2, instanceType: Template, instanceId: 207, cmd: 
> org.apache.cloudstack.api.command.admin.template.CreateTemplateCmdByAdmin, 
> cmdInfo: 
> {"cmdEventType":"TEMPLATE.CREATE","ctxUserId":"2","httpmethod":"GET","osTypeId":"ba03db1c-7359-11e5-b4d0-f2a3ece198a5","isPublic":"false","isdynamicallyscalable":"false","response":"json","id":"207","ctxDetails":"{\"interface
>  
> com.cloud.template.VirtualMachineTemplate\":\"9c045e56-2463-47f8-a257-840656e1c0bd\",\"interface
>  
> com.cloud.storage.Snapshot\":\"da79387b-ecae-4d5c-b414-3942d29ad821\",\"interface
>  
> com.cloud.storage.GuestOS\":\"ba03db1c-7359-11e5-b4d0-f2a3ece198a5\"}","displayText":"testsnap1","snapshotid":"da79387b-ecae-4d5c-b414-3942d29ad821","passwordEnabled":"false","name":"testsnap1","_":"1445156157698","uuid":"9c045e56-2463-47f8-a257-840656e1c0bd","ctxAccountId":"2","ctxStartEventId":"253"},
>  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
> null, initMsid: 2

[jira] [Commented] (CLOUDSTACK-9008) VM Snapshots no longer work with managed storage

2015-10-30 Thread Mike Tutkowski (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983424#comment-14983424
 ] 

Mike Tutkowski commented on CLOUDSTACK-9008:


Here is the as-yet-unresolved ticket about vmopsSnapshot not returning an 
error when it encounters this space issue:

https://issues.apache.org/jira/browse/CLOUDSTACK-5583

> VM Snapshots no longer work with managed storage
> 
>
> Key: CLOUDSTACK-9008
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9008
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
> Environment: XenServer 6.5
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
>Priority: Blocker
> Fix For: 4.6.0
>
>
> When using managed storage for the root disk of a VM, you cannot revert a VM 
> to a VM snapshot without encountering a RuntimeException that destroys the 
> state of your disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9008) VM Snapshots no longer work with managed storage

2015-10-30 Thread Mike Tutkowski (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983395#comment-14983395
 ] 

Mike Tutkowski commented on CLOUDSTACK-9008:


OK, I have confirmed my fix to Marvin is sufficient to close this ticket.

I'll go ahead and open a PR.

> VM Snapshots no longer work with managed storage
> 
>
> Key: CLOUDSTACK-9008
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9008
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
> Environment: XenServer 6.5
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
>Priority: Blocker
> Fix For: 4.6.0
>
>
> When using managed storage for the root disk of a VM, you cannot revert a VM 
> to a VM snapshot without encountering a RuntimeException that destroys the 
> state of your disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9008) VM Snapshots no longer work with managed storage

2015-10-30 Thread Mike Tutkowski (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983289#comment-14983289
 ] 

Mike Tutkowski commented on CLOUDSTACK-9008:


Assuming my fix to Marvin works as I expect it will, I can open a PR for it.

That does not solve the issue with the XenServer plug-in (in vmopsSnapshot) not 
returning an exception, but that is for another ticket to resolve.

For the time being, I have a procedural workaround in place for that issue: 
customers are to make sure the backend volume is large enough for hypervisor 
snapshots (and reverts) if they would like to use hypervisor snapshots.

> VM Snapshots no longer work with managed storage
> 
>
> Key: CLOUDSTACK-9008
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9008
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
> Environment: XenServer 6.5
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
>Priority: Blocker
> Fix For: 4.6.0
>
>
> When using managed storage for the root disk of a VM, you cannot revert a VM 
> to a VM snapshot without encountering a RuntimeException that destroys the 
> state of your disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9008) VM Snapshots no longer work with managed storage

2015-10-30 Thread Mike Tutkowski (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983282#comment-14983282
 ] 

Mike Tutkowski commented on CLOUDSTACK-9008:


I believe I see the problem. It is twofold:

1) Marvin was silently ignoring a parameter related to managed storage.

2) Without this parameter, the SAN volume to support the SR was not large 
enough to accommodate the space needs of the revert hypervisor snapshot command.

I believe we already have a ticket logged (probably from more than a year ago) 
that says the vmopsSnapshot command should be throwing an exception in this 
situation and not corrupting data.

I plan to fix Marvin.
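The bug pattern described in point 1 can be sketched as follows. This is illustrative Python only, not Marvin's actual code: the helper name and parameter plumbing are assumptions, though hypervisorsnapshotreserve is the real API field named in the eventual PR title.

```python
# Sketch of the silently-dropped-parameter bug: when building the
# createDiskOffering API parameters from a test-config dict, an optional
# managed-storage key must be forwarded explicitly, or the SAN volume
# backing the SR gets sized with no headroom for hypervisor snapshots.

def build_disk_offering_params(services):
    """Build API parameters from a test-configuration dict."""
    params = {
        "name": services["name"],
        "displaytext": services.get("displaytext", services["name"]),
    }
    # The fix: forward the optional field instead of ignoring it.
    if "hypervisorsnapshotreserve" in services:
        params["hypervisorsnapshotreserve"] = services["hypervisorsnapshotreserve"]
    return params

cfg = {"name": "SF-managed", "hypervisorsnapshotreserve": 200}
params = build_disk_offering_params(cfg)
```

Before the fix, the equivalent of the `if` branch was missing, so the key in `cfg` never reached the API call and no error was raised.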

> VM Snapshots no longer work with managed storage
> 
>
> Key: CLOUDSTACK-9008
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9008
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
> Environment: XenServer 6.5
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
>Priority: Blocker
> Fix For: 4.6.0
>
>
> When using managed storage for the root disk of a VM, you cannot revert a VM 
> to a VM snapshot without encountering a RuntimeException that destroys the 
> state of your disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9008) VM Snapshots no longer work with managed storage

2015-10-30 Thread Mike Tutkowski (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983216#comment-14983216
 ] 

Mike Tutkowski commented on CLOUDSTACK-9008:


I get a NullPointerException from CitrixRevertToVMSnapshotCommandWrapper when 
calling this method:

citrixResourceBase.revertToSnapshot(conn, vmSnapshot, vmName, 
vm.getUuid(conn), snapshotMemory, citrixResourceBase.getHost().getUuid());

For some reason, I'm having a hard time stepping into the revertToSnapshot 
method, though. I have a breakpoint set, but it doesn't get hit. The method, 
however, must be getting invoked because it is what calls the vmopsSnapshot 
revert_memory_snapshot functionality in XenServer.

> VM Snapshots no longer work with managed storage
> 
>
> Key: CLOUDSTACK-9008
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9008
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
> Environment: XenServer 6.5
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
>Priority: Blocker
> Fix For: 4.6.0
>
>
> When using managed storage for the root disk of a VM, you cannot revert a VM 
> to a VM snapshot without encountering a RuntimeException that destroys the 
> state of your disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8977) cloudstack UI creates a session for users not yet logged in

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983153#comment-14983153
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8977:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/961#issuecomment-152631348
  
@K0zka FYI: it also didn't work in tomcat 7


> cloudstack UI creates a session for users not yet logged in
> ---
>
> Key: CLOUDSTACK-8977
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8977
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.5.2
>Reporter: Laszlo Hornyak
>Assignee: Laszlo Hornyak
> Fix For: Future
>
>   Original Estimate: 0.1h
>  Remaining Estimate: 0.1h
>
> The cloudstack UI always creates a session. By executing a command like 'ab 
> -n 20 -c 32' the server can be killed really quickly.





[jira] [Commented] (CLOUDSTACK-9013) Virtual router failed to start on KVM

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983127#comment-14983127
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9013:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/1014#issuecomment-152624797
  
Ping @wilderrodrigues to have a look


> Virtual router failed to start on KVM
> -
>
> Key: CLOUDSTACK-9013
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9013
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Priority: Blocker
>
> log:
> 2015-10-30 13:48:55,331 DEBUG [kvm.resource.LibvirtComputingResource] 
> (agentRequest-Handler-3:null) Executing: 
> /usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh vr_cfg.sh 
> 169.254.2.176 -c /var/cache/cloud/VR-2edaa939-e9b9-4f06-b646-8c1643db4e69.cfg
> 2015-10-30 13:48:55,769 DEBUG [kvm.resource.LibvirtComputingResource] 
> (agentRequest-Handler-3:null) Exit value is 1
> 2015-10-30 13:48:55,770 DEBUG [kvm.resource.LibvirtComputingResource] 
> (agentRequest-Handler-3:null) VR config: execution failed: 
> "/opt/cloud/bin/update_config.py ip_associations.json", check 
> /var/log/cloud.log in VR for details
> StartAnswer:
> 2015-10-30 13:48:55,788 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-3:null) Seq 20-465278136502714391:  { Ans: , MgmtId: 
> 345051313197, via: 20, Ver: v1, Flags: 10, 
> [{"com.cloud.agent.api.StartAnswer":{"vm":{"id":7514,"name":"r-7514-VM","type":"DomainRouter","cpus":1,"minSpeed":500,"maxSpeed":500,"minRam":134217728,"maxRam":134217728,"arch":"x86_64","os":"Debian
>  GNU/Linux 7(64-bit)","platformEmulator":"Debian GNU/Linux 
> 7(64-bit)","bootArgs":" template=domP name=r-7514-VM eth2ip=10.11.115.143 
> eth2mask=255.255.255.0 gateway=10.11.115.254 eth0ip=10.1.43.1 
> eth0mask=255.255.255.0 domain=devcloud.lan cidrsize=24 dhcprange=10.1.43.1 
> eth1ip=169.254.2.176 eth1mask=255.255.0.0 type=router disable_rp_filter=true 
> dns1=8.8.8.8 
> dns2=8.8.4.4","enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"38TQRIKGcN2FhQKBqvWh6A","vncAddr":"172.16.15.15","params":{},"uuid":"91352ace-cf1e-454b-8542-3bd3c9c27fff","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"204b7e56-f089-4861-9a42-00703c098fd5","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e","id":2,"poolType":"NetworkFilesystem","host":"172.16.15.254","path":"/storage/cs-115-pri","port":2049,"url":"NetworkFilesystem://172.16.15.254/storage/cs-115-pri/?ROLE=Primary&STOREUUID=1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e"}},"name":"ROOT-7514","size":3145728000,"path":"204b7e56-f089-4861-9a42-00703c098fd5","volumeId":7516,"vmName":"r-7514-VM","accountId":2,"format":"QCOW2","provisioningType":"THIN","id":7516,"deviceId":0,"cacheMode":"NONE","hypervisorType":"KVM"}},"diskSeq":0,"path":"204b7e56-f089-4861-9a42-00703c098fd5","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"172.16.15.254","volumeSize":"3145728000"}}],"nics":[{"deviceId":2,"networkRateMbps":200,"defaultNic":true,"pxeDisable":true,"nicUuid":"09f90817-d35c-4a56-961e-2b1560144d68","uuid":"765b43a0-43f7-4b23-abcc-86ccc6197a0e","ip":"10.11.115.143","netmask":"255.255.255.0","gateway":"10.11.115.254","mac":"06:cc:fe:00:00:36","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Public","broadcastUri":"vlan://115","isolationUri":"vlan://115","isSecurityGroupEnabled":false,"name":"cloudbr0"},{"deviceId":0,"networkRateMbps":200,"defaultNic":false,"pxeDisable":true,"nicUuid":"9ec805c8-df0b-40b6-9505-adefb9e436f0","uuid":"a15faf7f-959f-4d63-a478-79c794c7e312","ip":"10.1.43.1","netmask":"255.255.255.0","mac":"02:00:68:a4:00:1f","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://854","isolationUri":"vlan://854","isSecurityGroupEnabled":false,"name":"cloudbr0"},{"deviceId":1,"networkRa
teMbps":-1,"defaultNic":false,"pxeDisable":true,"nicUuid":"acbb3a99-9853-49ad-b54b-606017fbe069","uuid":"dc8a1a58-e581-49a2-8377-dc4fba1dfa57","ip":"169.254.2.176","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:02:b0","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false}]},"result":true,"wait":0}},{"com.cloud.agent.api.check.CheckSshAnswer":{"result":true,"wait":0}},{"com.cloud.agent.api.GetDomRVersionAnswer":{"templateVersion":"Cloudstack
>  Release 4.6.0 Thu Aug 6 23:23:49 UTC 
> 2015","scriptsVersion":"8e577757f8423c7479bc4ca71de97792\n","result":true,"details":"Cloudstack
>  Release 4.6.0 Thu Aug 6 23:23:49 UTC 
> 2015&8e577757f8423c7479bc4ca71de97792\n","wait":0}},{"com.c

[jira] [Commented] (CLOUDSTACK-9013) Virtual router failed to start on KVM

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983097#comment-14983097
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9013:


GitHub user ustcweizhou opened a pull request:

https://github.com/apache/cloudstack/pull/1014

CLOUDSTACK-9013: Virtual router failed to start on KVM

This fixes a typo from commit 4a177031b055f3649e3b4a00c80eddb5cafa1dd7

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ustcweizhou/cloudstack CLOUDSTACK-9013

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1014.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1014


commit 9f7f42330aeb33ad819075586529e88db4d5c90a
Author: Wei Zhou 
Date:   2015-10-30T16:04:14Z

CLOUDSTACK-9013: Virtual router failed to start on KVM

This fixes a typo from commit 4a177031b055f3649e3b4a00c80eddb5cafa1dd7




> Virtual router failed to start on KVM
> -
>
> Key: CLOUDSTACK-9013
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9013
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Priority: Blocker
>
> log:
> 2015-10-30 13:48:55,331 DEBUG [kvm.resource.LibvirtComputingResource] 
> (agentRequest-Handler-3:null) Executing: 
> /usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh vr_cfg.sh 
> 169.254.2.176 -c /var/cache/cloud/VR-2edaa939-e9b9-4f06-b646-8c1643db4e69.cfg
> 2015-10-30 13:48:55,769 DEBUG [kvm.resource.LibvirtComputingResource] 
> (agentRequest-Handler-3:null) Exit value is 1
> 2015-10-30 13:48:55,770 DEBUG [kvm.resource.LibvirtComputingResource] 
> (agentRequest-Handler-3:null) VR config: execution failed: 
> "/opt/cloud/bin/update_config.py ip_associations.json", check 
> /var/log/cloud.log in VR for details
> StartAnswer:
> 2015-10-30 13:48:55,788 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-3:null) Seq 20-465278136502714391:  { Ans: , MgmtId: 
> 345051313197, via: 20, Ver: v1, Flags: 10, 
> [{"com.cloud.agent.api.StartAnswer":{"vm":{"id":7514,"name":"r-7514-VM","type":"DomainRouter","cpus":1,"minSpeed":500,"maxSpeed":500,"minRam":134217728,"maxRam":134217728,"arch":"x86_64","os":"Debian
>  GNU/Linux 7(64-bit)","platformEmulator":"Debian GNU/Linux 
> 7(64-bit)","bootArgs":" template=domP name=r-7514-VM eth2ip=10.11.115.143 
> eth2mask=255.255.255.0 gateway=10.11.115.254 eth0ip=10.1.43.1 
> eth0mask=255.255.255.0 domain=devcloud.lan cidrsize=24 dhcprange=10.1.43.1 
> eth1ip=169.254.2.176 eth1mask=255.255.0.0 type=router disable_rp_filter=true 
> dns1=8.8.8.8 
> dns2=8.8.4.4","enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"38TQRIKGcN2FhQKBqvWh6A","vncAddr":"172.16.15.15","params":{},"uuid":"91352ace-cf1e-454b-8542-3bd3c9c27fff","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"204b7e56-f089-4861-9a42-00703c098fd5","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e","id":2,"poolType":"NetworkFilesystem","host":"172.16.15.254","path":"/storage/cs-115-pri","port":2049,"url":"NetworkFilesystem://172.16.15.254/storage/cs-115-pri/?ROLE=Primary&STOREUUID=1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e"}},"name":"ROOT-7514","size":3145728000,"path":"204b7e56-f089-4861-9a42-00703c098fd5","volumeId":7516,"vmName":"r-7514-VM","accountId":2,"format":"QCOW2","provisioningType":"THIN","id":7516,"deviceId":0,"cacheMode":"NONE","hypervisorType":"KVM"}},"diskSeq":0,"path":"204b7e56-f089-4861-9a42-00703c098fd5","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"172.16.15.254","volumeSize":"3145728000"}}],"nics":[{"deviceId":2,"networkRateMbps":200,"defaultNic":true,"pxeDisable":true,"nicUuid":"09f90817-d35c-4a56-961e-2b1560144d68","uuid":"765b43a0-43f7-4b23-abcc-86ccc6197a0e","ip":"10.11.115.143","netmask":"255.255.255.0","gateway":"10.11.115.254","mac":"06:cc:fe:00:00:36","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Public","broadcastUri":"vlan://115","isolationUri":"vlan://115","isSecurityGroupEnabled":false,"name":"cloudbr0"},{"deviceId":0,"networkRateMbps":200,"defaultNic":false,"pxeDisable":true,"nicUuid":"9ec805c8-df0b-40b6-9505-adefb9e436f0","uuid":"a15faf7f-959f-4d63-a478-79c794c7e312","ip":"10.1.43.1","netmask":"255.255.255.0","mac":"02:00:68:a4:00:1f","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://854","isolationUri":"vlan://854","isSecurityGroupEnabled":false,"name":"cloudbr0"},{"deviceId":1,"networkRa
teMbps"

[jira] [Commented] (CLOUDSTACK-9008) VM Snapshots no longer work with managed storage

2015-10-30 Thread Mike Tutkowski (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983072#comment-14983072
 ] 

Mike Tutkowski commented on CLOUDSTACK-9008:


More details with comments inline:

The VM's root disk is on SR 424db333-31b4-9390-84e9-59c9e2fd2a28. It is the 
only VDI on the SR, as can be seen below.

[root@XenServer-6 ~]# xe vdi-list sr-uuid=424db333-31b4-9390-84e9-59c9e2fd2a28
uuid ( RO)                : 40dee945-2c4d-4227-b508-1adb5e7709c2
          name-label ( RW): ROOT-50
    name-description ( RW): 
             sr-uuid ( RO): 424db333-31b4-9390-84e9-59c9e2fd2a28
        virtual-size ( RO): 21474836480
            sharable ( RO): false
           read-only ( RO): false

I take a VM snapshot. As can be seen below, there are now three VDIs that make 
up the overall root disk (as expected).

[root@XenServer-6 ~]# xe vdi-list sr-uuid=424db333-31b4-9390-84e9-59c9e2fd2a28
uuid ( RO)                : b9a80c64-1dab-4d5f-8d8e-9fb53f8f08ac
          name-label ( RW): base copy
    name-description ( RW): 
             sr-uuid ( RO): 424db333-31b4-9390-84e9-59c9e2fd2a28
        virtual-size ( RO): 21474836480
            sharable ( RO): false
           read-only ( RO): true


uuid ( RO)                : 40dee945-2c4d-4227-b508-1adb5e7709c2
          name-label ( RW): ROOT-50
    name-description ( RW): 
             sr-uuid ( RO): 424db333-31b4-9390-84e9-59c9e2fd2a28
        virtual-size ( RO): 21474836480
            sharable ( RO): false
           read-only ( RO): false


uuid ( RO)                : ae7b04b7-7f23-40d3-80df-95cab4bda602
          name-label ( RW): ROOT-50
    name-description ( RW): 
             sr-uuid ( RO): 424db333-31b4-9390-84e9-59c9e2fd2a28
        virtual-size ( RO): 21474836480
            sharable ( RO): false
           read-only ( RO): false

I revert the VM to its one and only VM snapshot. Data corruption occurs: only 
the snapshot VDI remains, whereas we should have a base VDI, a new active VDI, 
and the snapshot VDI (three VDIs in total).

[root@XenServer-6 ~]# xe vdi-list sr-uuid=424db333-31b4-9390-84e9-59c9e2fd2a28
uuid ( RO)                : ae7b04b7-7f23-40d3-80df-95cab4bda602
          name-label ( RW): ROOT-50
    name-description ( RW): 
             sr-uuid ( RO): 424db333-31b4-9390-84e9-59c9e2fd2a28
        virtual-size ( RO): 21474836480
            sharable ( RO): false
           read-only ( RO): false

If you look at this in XenCenter (which ignores base VDIs), you only see the 
snapshot VDI; you should see both the "regular" VDI (the one accepting the new 
writes) and the snapshot VDI.
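
The before/after states above can be checked mechanically. As a rough 
illustration (this helper is not part of CloudStack; it simply parses the 
`xe vdi-list` output shown above, and the function name is made up), one can 
count the VDI records on the SR:

```python
def count_vdis(xe_vdi_list_output: str) -> int:
    """Count VDI records in `xe vdi-list` output.

    Each record begins with a 'uuid ( RO)' line, so counting those
    lines gives the number of VDIs on the SR.
    """
    return sum(1 for line in xe_vdi_list_output.splitlines()
               if line.strip().startswith("uuid ( RO)"))


# After taking a VM snapshot we expect three VDIs (base copy, active
# VDI, snapshot VDI); after a healthy revert we should again see three,
# not the single VDI observed in this bug.
sample = """\
uuid ( RO): b9a80c64-1dab-4d5f-8d8e-9fb53f8f08ac
  name-label ( RW): base copy
uuid ( RO): 40dee945-2c4d-4227-b508-1adb5e7709c2
  name-label ( RW): ROOT-50
uuid ( RO): ae7b04b7-7f23-40d3-80df-95cab4bda602
  name-label ( RW): ROOT-50
"""
print(count_vdis(sample))  # 3
```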

> VM Snapshots no longer work with managed storage
> 
>
> Key: CLOUDSTACK-9008
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9008
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
> Environment: XenServer 6.5
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
>Priority: Blocker
> Fix For: 4.6.0
>
>
> When using managed storage for the root disk of a VM, you cannot revert a VM 
> to a VM snapshot without encountering a RuntimeException that destroys the 
> state of your disk.





[jira] [Commented] (CLOUDSTACK-9004) Add functionality to LibvirtVMDef.HyperVEnlightenmentFeatureDef

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983048#comment-14983048
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9004:


GitHub user jharshman opened a pull request:

https://github.com/apache/cloudstack/pull/1013

CLOUDSTACK-9004: Add features to HyperVEnlightenmentFeatureDef

Add function to set vapic, spinlock and retries
Add function to get retry value
Modify toString to output appropriate XML for spinlock value if set

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jharshman/cloudstack CLOUDSTACK-9004

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1013.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1013


commit a34f269237a2506800854a1e76a37c04caac9071
Author: Josh Harshman 
Date:   2015-10-30T18:14:52Z

CLOUDSTACK-9004: Add features to HyperVEnlightenmentFeatureDef

Add function to set vapic, spinlock and retries
Add function to get retry value
Modify toString to output appropriate XML for spinlock value if set




> Add functionality to LibvirtVMDef.HyperVEnlightenmentFeatureDef
> ---
>
> Key: CLOUDSTACK-9004
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9004
> Project: CloudStack
>  Issue Type: Sub-task
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.2
>Reporter: Josh Harshman
>Priority: Minor
>  Labels: easyfix, patch, perfomance, windows
>
> LibvirtVMDef.HyperVEnlightenmentFeatureDef only supports the setting of the 
> relaxed mode feature.  This change will expand the subclass to be able to set 
> vapic and spinlock boolean values, as well as spinlock retry value.
> These values will then be written out to the XML appropriately. 
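
For reference, the libvirt `<hyperv>` feature element this maps onto carries 
`relaxed`, `vapic`, and `spinlocks` (the last with a `retries` attribute). 
Below is a minimal sketch of that XML generation; the function and parameter 
names are illustrative assumptions, not the actual LibvirtVMDef code:

```python
def hyperv_features_xml(relaxed=True, vapic=True, spinlock_retries=4096):
    """Build the libvirt <hyperv> feature fragment (illustrative only)."""
    parts = ["<hyperv>"]
    parts.append("<relaxed state='%s'/>" % ("on" if relaxed else "off"))
    parts.append("<vapic state='%s'/>" % ("on" if vapic else "off"))
    if spinlock_retries:
        # spinlocks is only emitted when a retry value is set
        parts.append("<spinlocks state='on' retries='%d'/>" % spinlock_retries)
    parts.append("</hyperv>")
    return "".join(parts)


print(hyperv_features_xml())
```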





[jira] [Commented] (CLOUDSTACK-9008) VM Snapshots no longer work with managed storage

2015-10-30 Thread Mike Tutkowski (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982979#comment-14982979
 ] 

Mike Tutkowski commented on CLOUDSTACK-9008:


Hi Wei,

Sorry...I should have provided more details. I was tired when my test 
encountered this issue and just wanted to record the fact that we have a bug 
that destroys data.

What I know right now is that when I attempt to restore the VM to a prior 
state, one of the VDIs on the SR gets deleted.

For example, you start with one VDI for your root disk. A VM snapshot is taken, 
which creates a snapshot VDI. When I restore to the state of the snapshot, one 
of the VDIs that make up the root disk is deleted and the only VDI that remains 
in the SR is the snapshot VDI.

I tried this with non-managed storage (i.e. "normal" storage) and it does not 
happen (no data corruption there).

I have time now to start a deeper investigation and will do so.

Thanks!
Mike

> VM Snapshots no longer work with managed storage
> 
>
> Key: CLOUDSTACK-9008
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9008
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
> Environment: XenServer 6.5
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
>Priority: Blocker
> Fix For: 4.6.0
>
>
> When using managed storage for the root disk of a VM, you cannot revert a VM 
> to a VM snapshot without encountering a RuntimeException that destroys the 
> state of your disk.





[jira] [Commented] (CLOUDSTACK-8977) cloudstack UI creates a session for users not yet logged in

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982951#comment-14982951
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8977:


Github user K0zka commented on the pull request:

https://github.com/apache/cloudstack/pull/961#issuecomment-152597661
  
I do not know if a new web UI is needed, but tomcat 6 is walking dead.

@miguelaferreira yes, correct, the session cannot be created once the 
headers are out on the pipe. I have only tested with jetty; that's the 
problem...

I'll try to find some time to experiment with tomcat 6.


> cloudstack UI creates a session for users not yet logged in
> ---
>
> Key: CLOUDSTACK-8977
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8977
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.5.2
>Reporter: Laszlo Hornyak
>Assignee: Laszlo Hornyak
> Fix For: Future
>
>   Original Estimate: 0.1h
>  Remaining Estimate: 0.1h
>
> The cloudstack UI always creates a session. By executing a command like 'ab 
> -n 20 -c 32' the server can be killed really quickly.





[jira] [Commented] (CLOUDSTACK-9006) ListTemplates API returns result in inconsistent order when called concurrently

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982837#comment-14982837
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9006:


Github user bhaisaab commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1009#discussion_r43523867
  
--- Diff: framework/db/src/com/cloud/utils/db/Filter.java ---
@@ -89,7 +89,7 @@ public void addOrderBy(Class clazz, String field, 
boolean ascending) {
 if (_orderBy == null) {
 _orderBy = order.insert(0, " ORDER BY ").toString();
 } else {
-_orderBy = order.insert(0, _orderBy).toString();
+_orderBy = order.insert(0, _orderBy + ", ").toString();
--- End diff --

@rags22489664 also recommend if this should also go into 4.5 branch?


> ListTemplates API returns result in inconsistent order when called 
> concurrently
> ---
>
> Key: CLOUDSTACK-9006
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9006
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Ramamurti Subramanian
>Assignee: Ramamurti Subramanian
>






[jira] [Commented] (CLOUDSTACK-8940) Wrong value is inserted into nics table netmask field when creating a VM

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982838#comment-14982838
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8940:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/916#issuecomment-152581514
  
@wilderrodrigues any update on your review?


> Wrong value is inserted into nics table netmask field when creating a VM
> 
>
> Key: CLOUDSTACK-8940
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8940
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>Priority: Critical
>






[jira] [Commented] (CLOUDSTACK-9006) ListTemplates API returns result in inconsistent order when called concurrently

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982836#comment-14982836
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9006:


Github user bhaisaab commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1009#discussion_r43523817
  
--- Diff: framework/db/src/com/cloud/utils/db/Filter.java ---
@@ -89,7 +89,7 @@ public void addOrderBy(Class clazz, String field, 
boolean ascending) {
 if (_orderBy == null) {
 _orderBy = order.insert(0, " ORDER BY ").toString();
 } else {
-_orderBy = order.insert(0, _orderBy).toString();
+_orderBy = order.insert(0, _orderBy + ", ").toString();
--- End diff --

LGTM, but since this is a core change can you write a small unit test for 
this method? For the list template api, the temp_zone_pair should give a unique 
id (template_id + "_" + zone_id) so sorting on this should give us 
deterministic results.
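
A unit test along the requested lines is easy to sketch. The following is an 
illustrative Python model of the string-building in `Filter.addOrderBy` (the 
real class is Java, so the names here are assumptions), showing why the `", "` 
separator in the diff matters:

```python
def add_order_by(order_by, field, ascending):
    """Mimic the fixed Filter.addOrderBy clause accumulation."""
    clause = field + (" ASC " if ascending else " DESC ")
    if order_by is None:
        return " ORDER BY " + clause
    # The fix: join successive columns with ", " so the SQL stays valid.
    # Before the fix the columns were simply concatenated, producing
    # invalid SQL such as " ORDER BY a ASC b DESC ".
    return order_by + ", " + clause


clause = add_order_by(None, "temp_zone_pair", True)
clause = add_order_by(clause, "created", False)
print(clause)  # " ORDER BY temp_zone_pair ASC , created DESC "
```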


> ListTemplates API returns result in inconsistent order when called 
> concurrently
> ---
>
> Key: CLOUDSTACK-9006
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9006
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Ramamurti Subramanian
>Assignee: Ramamurti Subramanian
>






[jira] [Commented] (CLOUDSTACK-9010) Fix packaging for CentOS 7

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982813#comment-14982813
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9010:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/1008#issuecomment-152578560
  
@davidamorimfaria Final request, please squash your commits. We need them 
to be atomic. Please ping me when done, thanks!


> Fix packaging for CentOS 7
> --
>
> Key: CLOUDSTACK-9010
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9010
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: David Amorim Faria
>Assignee: David Amorim Faria
>Priority: Blocker
> Fix For: 4.6.0
>
>
> The current packaging for CentOS 7 does not work in a newly 
> installed/upgraded CentOS 7 system.





[jira] [Commented] (CLOUDSTACK-9010) Fix packaging for CentOS 7

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982788#comment-14982788
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9010:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/1008#issuecomment-152573997
  
This PR fixes CentOS 7 packaging and obsoletes #888 and CLOUDSTACK-8812. I 
will merge this and close the other. 


> Fix packaging for CentOS 7
> --
>
> Key: CLOUDSTACK-9010
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9010
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: David Amorim Faria
>Assignee: David Amorim Faria
>Priority: Blocker
> Fix For: 4.6.0
>
>
> The current packaging for CentOS 7 does not work in a newly 
> installed/upgraded CentOS 7 system.





[jira] [Commented] (CLOUDSTACK-9006) ListTemplates API returns result in inconsistent order when called concurrently

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982780#comment-14982780
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9006:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/1009#issuecomment-152570782
  
LGTM, based on a set of tests that I run on this branch (which I rebased 
myself first):

```
nosetests --with-marvin --marvin-config=${marvinCfg} -s -a 
tags=advanced,required_hardware=true \
component/test_vpc_redundant.py \
component/test_routers_iptables_default_policy.py \
component/test_routers_network_ops.py \
component/test_vpc_router_nics.py \
smoke/test_loadbalance.py \
smoke/test_internal_lb.py \
smoke/test_ssvm.py \
smoke/test_network.py

```

Result:

```
Create a redundant VPC with two networks with two VMs in each network ... 
=== TestName: test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Status : 
SUCCESS ===
ok
Create a redundant VPC with two networks with two VMs in each network and 
check default routes ... === TestName: test_02_redundant_VPC_default_routes | 
Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policy on RouterVM ... === TestName: 
test_02_routervm_iptables_policies | Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policies on VPC router ... === 
TestName: test_01_single_VPC_iptables_policies | Status : SUCCESS ===
ok
Stop existing router, add a PF rule and check we can access the VM ... === 
TestName: test_isolate_network_FW_PF_default_routes | Status : SUCCESS ===
ok
Test redundant router internals ... === TestName: 
test_RVR_Network_FW_PF_SSH_default_routes | Status : SUCCESS ===
ok
Create a VPC with two networks with one VM in each network and test nics 
after destroy ... === TestName: test_01_VPC_nics_after_destroy | Status : 
SUCCESS ===
ok
Create a VPC with two networks with one VM in each network and test default 
routes ... === TestName: test_02_VPC_default_routes | Status : SUCCESS ===
ok
Check the password file in the Router VM ... === TestName: 
test_isolate_network_password_server | Status : SUCCESS ===
ok
Check that the /etc/dhcphosts.txt doesn't contain duplicate IPs ... === 
TestName: test_router_dhcphosts | Status : SUCCESS ===
ok
Test to create Load balancing rule with source NAT ... === TestName: 
test_01_create_lb_rule_src_nat | Status : SUCCESS ===
ok
Test to create Load balancing rule with non source NAT ... === TestName: 
test_02_create_lb_rule_non_nat | Status : SUCCESS ===
ok
Test for assign & removing load balancing rule ... === TestName: 
test_assign_and_removal_lb | Status : SUCCESS ===
ok
Test to verify access to loadbalancer haproxy admin stats page ... === 
TestName: test02_internallb_haproxy_stats_on_all_interfaces | Status : SUCCESS 
===
ok
Test create, assign, remove of an Internal LB with roundrobin http traffic 
to 3 vm's ... === TestName: test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 
| Status : SUCCESS ===
ok
Test SSVM Internals ... === TestName: test_03_ssvm_internals | Status : 
SUCCESS ===
ok
Test CPVM Internals ... === TestName: test_04_cpvm_internals | Status : 
SUCCESS ===
ok
Test stop SSVM ... === TestName: test_05_stop_ssvm | Status : SUCCESS ===
ok
Test stop CPVM ... === TestName: test_06_stop_cpvm | Status : SUCCESS ===
ok
Test reboot SSVM ... === TestName: test_07_reboot_ssvm | Status : SUCCESS 
===
ok
Test reboot CPVM ... === TestName: test_08_reboot_cpvm | Status : SUCCESS 
===
ok
Test destroy SSVM ... === TestName: test_09_destroy_ssvm | Status : SUCCESS 
===
ok
Test destroy CPVM ... === TestName: test_10_destroy_cpvm | Status : SUCCESS 
===
ok
Test for port forwarding on source NAT ... === TestName: 
test_01_port_fwd_on_src_nat | Status : SUCCESS ===
ok
Test for port forwarding on non source NAT ... === TestName: 
test_02_port_fwd_on_non_src_nat | Status : SUCCESS ===
ok
Test for reboot router ... === TestName: test_reboot_router | Status : 
SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_1_static_nat_rule | Status : 
SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_2_nat_rule | Status : SUCCESS 
===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_3_Load_Balancer_Rule | Status : 
SUCCESS ===
ok

--
Ran 29 tests in 12581.963s

OK

```


And:

 

[jira] [Commented] (CLOUDSTACK-9010) Fix packaging for CentOS 7

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982773#comment-14982773
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9010:


Github user DaanHoogland commented on the pull request:

https://github.com/apache/cloudstack/pull/1008#issuecomment-152568600
  
Based on the review of the changes and looking at the deployment of the 
management servers, I think this LGTM.


> Fix packaging for CentOS 7
> --
>
> Key: CLOUDSTACK-9010
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9010
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: David Amorim Faria
>Assignee: David Amorim Faria
>Priority: Blocker
> Fix For: 4.6.0
>
>
> The current packaging for CentOS 7 does not work in a newly 
> installed/upgraded CentOS 7 system.





[jira] [Commented] (CLOUDSTACK-8746) VM Snapshotting implementation for KVM

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982776#comment-14982776
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8746:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/977#issuecomment-152569385
  
FYI: Results of tests that I run on this branch:

```
nosetests --with-marvin --marvin-config=${marvinCfg} -s -a 
tags=advanced,required_hardware=true \
component/test_vpc_redundant.py \
component/test_routers_iptables_default_policy.py \
component/test_routers_network_ops.py \
component/test_vpc_router_nics.py \
smoke/test_loadbalance.py \
smoke/test_internal_lb.py \
smoke/test_ssvm.py \
smoke/test_network.py

```

Result:

```
Create a redundant VPC with two networks with two VMs in each network ... 
=== TestName: test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Status : 
SUCCESS ===
ok
Create a redundant VPC with two networks with two VMs in each network and 
check default routes ... === TestName: test_02_redundant_VPC_default_routes | 
Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policy on RouterVM ... === TestName: 
test_02_routervm_iptables_policies | Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policies on VPC router ... === 
TestName: test_01_single_VPC_iptables_policies | Status : SUCCESS ===
ok
Stop existing router, add a PF rule and check we can access the VM ... === 
TestName: test_isolate_network_FW_PF_default_routes | Status : SUCCESS ===
ok
Test redundant router internals ... === TestName: 
test_RVR_Network_FW_PF_SSH_default_routes | Status : SUCCESS ===
ok
Create a VPC with two networks with one VM in each network and test nics 
after destroy ... === TestName: test_01_VPC_nics_after_destroy | Status : 
SUCCESS ===
ok
Create a VPC with two networks with one VM in each network and test default 
routes ... === TestName: test_02_VPC_default_routes | Status : SUCCESS ===
ok
Check the password file in the Router VM ... === TestName: 
test_isolate_network_password_server | Status : SUCCESS ===
ok
Check that the /etc/dhcphosts.txt doesn't contain duplicate IPs ... === 
TestName: test_router_dhcphosts | Status : FAILED ===
FAIL
Test to create Load balancing rule with source NAT ... === TestName: 
test_01_create_lb_rule_src_nat | Status : SUCCESS ===
ok
Test to create Load balancing rule with non source NAT ... === TestName: 
test_02_create_lb_rule_non_nat | Status : SUCCESS ===
ok
Test for assign & removing load balancing rule ... === TestName: 
test_assign_and_removal_lb | Status : SUCCESS ===
ok
Test to verify access to loadbalancer haproxy admin stats page ... === 
TestName: test02_internallb_haproxy_stats_on_all_interfaces | Status : SUCCESS 
===
ok
Test create, assign, remove of an Internal LB with roundrobin http traffic 
to 3 vm's ... === TestName: test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 
| Status : SUCCESS ===
ok
Test SSVM Internals ... === TestName: test_03_ssvm_internals | Status : 
SUCCESS ===
ok
Test CPVM Internals ... === TestName: test_04_cpvm_internals | Status : 
SUCCESS ===
ok
Test stop SSVM ... === TestName: test_05_stop_ssvm | Status : SUCCESS ===
ok
Test stop CPVM ... === TestName: test_06_stop_cpvm | Status : SUCCESS ===
ok
Test reboot SSVM ... === TestName: test_07_reboot_ssvm | Status : SUCCESS 
===
ok
Test reboot CPVM ... === TestName: test_08_reboot_cpvm | Status : SUCCESS 
===
ok
Test destroy SSVM ... === TestName: test_09_destroy_ssvm | Status : SUCCESS 
===
ok
Test destroy CPVM ... === TestName: test_10_destroy_cpvm | Status : SUCCESS 
===
ok
Test for port forwarding on source NAT ... === TestName: 
test_01_port_fwd_on_src_nat | Status : SUCCESS ===
ok
Test for port forwarding on non source NAT ... === TestName: 
test_02_port_fwd_on_non_src_nat | Status : SUCCESS ===
ok
Test for reboot router ... === TestName: test_reboot_router | Status : 
SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_1_static_nat_rule | Status : 
SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_2_nat_rule | Status : SUCCESS 
===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_3_Load_Balancer_Rule | Status : 
SUCCESS ===
ok
--
Ran 29 tests in 12630.025s

FAILED (failures=1)

```

The test that failed is `test_router_dhcphosts`.

[jira] [Commented] (CLOUDSTACK-8964) Can't create template or volume from snapshot - "Are you sure you got the right type of server?"

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982763#comment-14982763
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8964:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/975#issuecomment-152566972
  
@dahn are you OK to merge this as-is?


> Can't create template or volume from snapshot - "Are you sure you got the 
> right type of server?"
> 
>
> Key: CLOUDSTACK-8964
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8964
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Secondary Storage
>Affects Versions: 4.6.0
> Environment: CentOS 6 HVs & mgmt
>Reporter: Nux
>Assignee: Wei Zhou
>Priority: Blocker
> Fix For: 4.6.0
>
>
> I have a couple of snapshots left over from now-deleted instances. Trying 
> to turn them into volumes fails with (UI/cloudmonkey shows this):
> "Failed to create templateUnsupported command issued: 
> org.apache.cloudstack.storage.command.CopyCommand. Are you sure you got the 
> right type of server?"
> mgmt server logs for when trying to create template:
> "2015-10-18 09:15:58,437 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-5:ctx-84b2a9be) ===START===  192.168.192.198 -- GET  
> command=createTemplate&response=json&snapshotid=da79387b-ecae-4d5c-b414-3942d29ad821&name=testsnap1&displayText=testsnap1&osTypeId=ba03db1c-7359-11e5-b4d0-f2a3ece198a5&isPublic=false&passwordEnabled=false&isdynamicallyscalable=false&_=1445156157698
> 2015-10-18 09:15:58,459 DEBUG [c.c.t.TemplateManagerImpl] 
> (catalina-exec-5:ctx-84b2a9be ctx-921b9b20) This template is getting created 
> from other template, setting source template Id to: 201
> 2015-10-18 09:15:58,500 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
> (API-Job-Executor-33:ctx-f566f6af job-135) Add job-135 into job monitoring
> 2015-10-18 09:15:58,506 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (catalina-exec-5:ctx-84b2a9be ctx-921b9b20) submit async job-135, details: 
> AsyncJobVO {id:135, userId: 2, accountId: 2, instanceType: Template, 
> instanceId: 207, cmd: 
> org.apache.cloudstack.api.command.admin.template.CreateTemplateCmdByAdmin, 
> cmdInfo: 
> {"cmdEventType":"TEMPLATE.CREATE","ctxUserId":"2","httpmethod":"GET","osTypeId":"ba03db1c-7359-11e5-b4d0-f2a3ece198a5","isPublic":"false","isdynamicallyscalable":"false","response":"json","id":"207","ctxDetails":"{\"interface
>  
> com.cloud.template.VirtualMachineTemplate\":\"9c045e56-2463-47f8-a257-840656e1c0bd\",\"interface
>  
> com.cloud.storage.Snapshot\":\"da79387b-ecae-4d5c-b414-3942d29ad821\",\"interface
>  
> com.cloud.storage.GuestOS\":\"ba03db1c-7359-11e5-b4d0-f2a3ece198a5\"}","displayText":"testsnap1","snapshotid":"da79387b-ecae-4d5c-b414-3942d29ad821","passwordEnabled":"false","name":"testsnap1","_":"1445156157698","uuid":"9c045e56-2463-47f8-a257-840656e1c0bd","ctxAccountId":"2","ctxStartEventId":"253"},
>  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
> null, initMsid: 266785867798693, completeMsid: null, lastUpdated: null, 
> lastPolled: null, created: null}
> 2015-10-18 09:15:58,506 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-5:ctx-84b2a9be ctx-921b9b20) ===END===  192.168.192.198 -- GET 
>  
> command=createTemplate&response=json&snapshotid=da79387b-ecae-4d5c-b414-3942d29ad821&name=testsnap1&displayText=testsnap1&osTypeId=ba03db1c-7359-11e5-b4d0-f2a3ece198a5&isPublic=false&passwordEnabled=false&isdynamicallyscalable=false&_=1445156157698
> 2015-10-18 09:15:58,507 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (API-Job-Executor-33:ctx-f566f6af job-135) Executing AsyncJobVO {id:135, 
> userId: 2, accountId: 2, instanceType: Template, instanceId: 207, cmd: 
> org.apache.cloudstack.api.command.admin.template.CreateTemplateCmdByAdmin, 
> cmdInfo: 
> {"cmdEventType":"TEMPLATE.CREATE","ctxUserId":"2","httpmethod":"GET","osTypeId":"ba03db1c-7359-11e5-b4d0-f2a3ece198a5","isPublic":"false","isdynamicallyscalable":"false","response":"json","id":"207","ctxDetails":"{\"interface
>  
> com.cloud.template.VirtualMachineTemplate\":\"9c045e56-2463-47f8-a257-840656e1c0bd\",\"interface
>  
> com.cloud.storage.Snapshot\":\"da79387b-ecae-4d5c-b414-3942d29ad821\",\"interface
>  
> com.cloud.storage.GuestOS\":\"ba03db1c-7359-11e5-b4d0-f2a3ece198a5\"}","displayText":"testsnap1","snapshotid":"da79387b-ecae-4d5c-b414-3942d29ad821","passwordEnabled":"false","name":"testsnap1","_":"1445156157698","uuid":"9c045e56-2463-47f8-a257-840656e1c0bd","ctxAccountId":"2","ctxStartEventId":"253"},
>  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
> null, initMsid: 266785867798693, completeMsid: null, lastUpdated: null, 
> lastPolled: null, created: null}

[jira] [Commented] (CLOUDSTACK-8844) Network Update from RVR offering to Standalone offering fails

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982762#comment-14982762
 ] 

ASF subversion and git services commented on CLOUDSTACK-8844:
-

Commit 901d47c07edb06beef388c4e6b78e26ce87e2f6b in cloudstack's branch 
refs/heads/master from [~remibergsma]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=901d47c ]

Merge pull request #818 from kansal/CLOUDSTACK-8844

Fixed: Network Update from RVR offering to Standalone offering fails.
Problem: Moving an RVR network offering to standalone leaves the VRs' state as 
UNKNOWN and Redundant Router marked YES.
Fix: The network's isRedundant flag was not being updated.

* pr/818:
  CLOUDSTACK-8844: Network Update from RVR offering to Standalone offering 
fails - Fixed

Signed-off-by: Remi Bergsma 
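The fix described in the commit message amounts to refreshing the network's redundancy flag from the new offering when the offering changes. A minimal Python sketch of that idea (hypothetical field names; the real fix lives in CloudStack's Java management server, not in this code):

```python
# Hypothetical sketch of the fix above -- NOT the actual CloudStack Java code.
# When a network is moved to a new offering, the redundancy flag stored on
# the network record must be refreshed from the new offering; the bug was
# that the offering id changed while the old RVR flag stayed behind.

def update_network_offering(network, new_offering):
    network["network_offering_id"] = new_offering["id"]
    # The missing step: without this line the network keeps the stale
    # redundant-router flag from its previous (RVR) offering.
    network["is_redundant"] = new_offering["redundant_router"]
    return network
```

Applied to the reported scenario: moving from an RVR offering (`redundant_router=True`) to a standalone one flips `is_redundant` back to `False`, so only a single router is provisioned.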


> Network Update from RVR offering to Standalone offering fails
> -
>
> Key: CLOUDSTACK-8844
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8844
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>Assignee: Kshitij Kansal
>
>  Create two network offerings: one standalone and one RVR-enabled.
> Navigate to the network and ensure that it currently uses the RVR offering.
> Update the network and select the standalone offering.
> Expected result:
> The update should succeed and only one router should be created for the 
> standalone offering.
> Observed result:
> Two VRs are created, both with Redundant Router marked YES and state UNKNOWN.





[jira] [Commented] (CLOUDSTACK-8844) Network Update from RVR offering to Standalone offering fails

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982761#comment-14982761
 ] 

ASF subversion and git services commented on CLOUDSTACK-8844:
-

Commit 901d47c07edb06beef388c4e6b78e26ce87e2f6b in cloudstack's branch 
refs/heads/master from [~remibergsma]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=901d47c ]

Merge pull request #818 from kansal/CLOUDSTACK-8844

Fixed: Network Update from RVR offering to Standalone offering fails.
Problem: Moving an RVR network offering to standalone leaves the VRs' state as 
UNKNOWN and Redundant Router marked YES.
Fix: The network's isRedundant flag was not being updated.

* pr/818:
  CLOUDSTACK-8844: Network Update from RVR offering to Standalone offering 
fails - Fixed

Signed-off-by: Remi Bergsma 


> Network Update from RVR offering to Standalone offering fails
> -
>
> Key: CLOUDSTACK-8844
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8844
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>Assignee: Kshitij Kansal
>
>  Create two network offerings: one standalone and one RVR-enabled.
> Navigate to the network and ensure that it currently uses the RVR offering.
> Update the network and select the standalone offering.
> Expected result:
> The update should succeed and only one router should be created for the 
> standalone offering.
> Observed result:
> Two VRs are created, both with Redundant Router marked YES and state UNKNOWN.





[jira] [Commented] (CLOUDSTACK-8844) Network Update from RVR offering to Standalone offering fails

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982760#comment-14982760
 ] 

ASF subversion and git services commented on CLOUDSTACK-8844:
-

Commit e24ecccdea06f11fd8a6bf35c5955fbd661d0982 in cloudstack's branch 
refs/heads/master from [~kansal]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=e24eccc ]

CLOUDSTACK-8844: Network Update from RVR offering to Standalone offering fails 
- Fixed


> Network Update from RVR offering to Standalone offering fails
> -
>
> Key: CLOUDSTACK-8844
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8844
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>Assignee: Kshitij Kansal
>
>  Create two network offerings: one standalone and one RVR-enabled.
> Navigate to the network and ensure that it currently uses the RVR offering.
> Update the network and select the standalone offering.
> Expected result:
> The update should succeed and only one router should be created for the 
> standalone offering.
> Observed result:
> Two VRs are created, both with Redundant Router marked YES and state UNKNOWN.





[jira] [Commented] (CLOUDSTACK-8677) Call-home functionality for CloudStack

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982710#comment-14982710
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8677:


Github user borisroman commented on the pull request:

https://github.com/apache/cloudstack/pull/987#issuecomment-152559399
  
@widi already building


> Call-home functionality for CloudStack
> --
>
> Key: CLOUDSTACK-8677
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8677
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: Future, 4.6.0
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
> Fix For: 4.6.0
>
>
> A call-home feature for the CloudStack management server would send 
> anonymized reports back to the CloudStack project.
> These statistics will contain numbers and details on how CloudStack is 
> used. It will NOT contain:
> * Hostnames
> * IP-Addresses
> * Instance names
> It will report back:
> * Hosts (Number, version, type, hypervisor)
> * Clusters (Hypervisor and management type)
> * Primary storage (Type and provider)
> * Zones (Network type and providers)
> * Instances (Number and types)
> This gives the CloudStack project better insight into how CloudStack is being 
> used and allows us to develop accordingly.
> It will be OPT-OUT: using the "usage.report.interval" setting, users can 
> disable usage reporting. By default it will run every 7 days and send a JSON 
> document to https://call-home.cloudstack.org/
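As a rough illustration of the kind of anonymized JSON document such a feature could send (the field names below are invented for this sketch; the actual report schema is whatever the implementation defines):

```python
import json

def build_usage_report(hosts, clusters, zones, instances):
    """Build an anonymized usage report: aggregate counts and types only.

    Deliberately excludes hostnames, IP addresses and instance names,
    matching the privacy constraints described above. All field names
    here are illustrative, not CloudStack's actual schema."""
    report = {
        "hosts": [{"version": h["version"], "hypervisor": h["hypervisor"]}
                  for h in hosts],
        "clusters": [{"hypervisorType": c["hypervisor"]} for c in clusters],
        "zones": [{"networkType": z["network_type"]} for z in zones],
        "instanceCount": len(instances),
    }
    return json.dumps(report)
```

The key design point is that identifying fields never enter the report: even if the input host records carry hostnames, only version/hypervisor pairs and counts are serialized and POSTed.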



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-9014) Rename xapi plugins for s3 and swift to make them work after renaming the calls

2015-10-30 Thread Remi Bergsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remi Bergsma resolved CLOUDSTACK-9014.
--
Resolution: Fixed

> Rename xapi plugins for s3 and swift to make them work after renaming the 
> calls
> ---
>
> Key: CLOUDSTACK-9014
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9014
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: XenServer
>Affects Versions: 4.6.0
> Environment: XenServer using S3 or Swift as secondary storage
>Reporter: Remi Bergsma
>Assignee: Remi Bergsma
>Priority: Critical
> Fix For: 4.6.0
>
>
> It's called s3xen, not s3xenserver. While investigating, I found the same 
> issue for swiftxen.
> Regression from a8212d9, where things were massively renamed without proper 
> verification.
> Error seen:
> 2015-10-22 21:42:30,372 WARN  [c.c.h.x.r.CitrixResourceBase] 
> (DirectAgent-261:ctx-862ebceb) callHostPlugin failed for cmd: s3 with args 
> maxErrorRetry: 10, secretKey: +XGy4yPPbAH9AijYxFTr1yVCCiVQuSfXWWj1Invs, 
> connectionTtl: null, iSCSIFlag: false, maxSingleUploadSizeInBytes: 
> 5368709120, bucket: mccx-nl2, endPoint: s3.storage.acc.schubergphilis.com, 
> filename: /var/run/sr-mount/9414f970-0afd-42db-972f-aa4743293430/2
> cf0c24b-a596-4039-b50a-7ce87da4f273.vhd, accessKey: 16efbc4e870f24338141, 
> socketTimeout: null, https: false, connectionTimeout: 30, operation: put, 
> key: snapshots/2/10/2cf0c24b-a596-4039-b50a-7ce87da4f273
> .vhd, useTCPKeepAlive: null,  due to Task failed! Task record:
>  uuid: 4be8a515-1e2a-59de-6301-029fb0326651
>nameLabel: Async.host.call_plugin
>  nameDescription: 
>allowedOperations: []
>currentOperations: {}
>  created: Thu Oct 22 21:42:44 CEST 2015
> finished: Thu Oct 22 21:42:44 CEST 2015
>   status: failure
>   residentOn: com.xensource.xenapi.Host@9c7aad90
> progress: 1.0
> type: 
>   result: 
>errorInfo: [XENAPI_MISSING_PLUGIN, s3xenserver]
>  otherConfig: {}
>subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
> Task failed! Task record: uuid: 
> 4be8a515-1e2a-59de-6301-029fb0326651
>nameLabel: Async.host.call_plugin
>  nameDescription: 
>allowedOperations: []
>currentOperations: {}
>  created: Thu Oct 22 21:42:44 CEST 2015
> finished: Thu Oct 22 21:42:44 CEST 2015
>   status: failure
>   residentOn: com.xensource.xenapi.Host@9c7aad90
> progress: 1.0
> type: 
>   result: 
>errorInfo: [XENAPI_MISSING_PLUGIN, s3xenserver]
>  otherConfig: {}
>subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
> Here we see the correct name:
> scripts/vm/hypervisor/xenserver/s3xen:lib.setup_logging("/var/log/cloud/s3xen.log")
> scripts/vm/hypervisor/xenserver/xenserver56/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver56fp1/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver60/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver62/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver65/patch:s3xen=..,0755,/etc/xapi.d/plugins
> And:
> scripts/vm/hypervisor/xenserver/xenserver56/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver56fp1/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver60/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver62/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver65/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> These plugins are pushed to the hypervisor.
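The `patch` manifest lines quoted above map each plugin file onto the hypervisor; the name to the left of the `=` is the file installed under `/etc/xapi.d/plugins`, and it is exactly the plugin name the management server must pass to XenAPI's `host.call_plugin` — a mismatch (e.g. calling `s3xenserver` when the file is `s3xen`) is what produces `XENAPI_MISSING_PLUGIN`. A small illustrative parser for such manifest lines (a hypothetical helper, not part of CloudStack):

```python
def parse_patch_line(line):
    """Parse one XenServer 'patch' manifest entry, e.g.
    's3xen=..,0755,/etc/xapi.d/plugins'.

    Returns (plugin_name, source_dir, mode, install_dir); plugin_name is
    the identifier the callHostPlugin side must use verbatim."""
    name, rest = line.split("=", 1)
    source_dir, mode, install_dir = rest.split(",")
    return name, source_dir, mode, install_dir
```

A sanity check built on this could compare the plugin names a management server invokes against the names declared in the manifests, which would have caught this rename regression.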





[jira] [Closed] (CLOUDSTACK-9014) Rename xapi plugins for s3 and swift to make them work after renaming the calls

2015-10-30 Thread Remi Bergsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remi Bergsma closed CLOUDSTACK-9014.


> Rename xapi plugins for s3 and swift to make them work after renaming the 
> calls
> ---
>
> Key: CLOUDSTACK-9014
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9014
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: XenServer
>Affects Versions: 4.6.0
> Environment: XenServer using S3 or Swift as secondary storage
>Reporter: Remi Bergsma
>Assignee: Remi Bergsma
>Priority: Critical
> Fix For: 4.6.0
>
>
> It's called s3xen, not s3xenserver. While investigating, I found the same 
> issue for swiftxen.
> Regression from a8212d9, where things were massively renamed without proper 
> verification.
> Error seen:
> 2015-10-22 21:42:30,372 WARN  [c.c.h.x.r.CitrixResourceBase] 
> (DirectAgent-261:ctx-862ebceb) callHostPlugin failed for cmd: s3 with args 
> maxErrorRetry: 10, secretKey: +XGy4yPPbAH9AijYxFTr1yVCCiVQuSfXWWj1Invs, 
> connectionTtl: null, iSCSIFlag: false, maxSingleUploadSizeInBytes: 
> 5368709120, bucket: mccx-nl2, endPoint: s3.storage.acc.schubergphilis.com, 
> filename: /var/run/sr-mount/9414f970-0afd-42db-972f-aa4743293430/2
> cf0c24b-a596-4039-b50a-7ce87da4f273.vhd, accessKey: 16efbc4e870f24338141, 
> socketTimeout: null, https: false, connectionTimeout: 30, operation: put, 
> key: snapshots/2/10/2cf0c24b-a596-4039-b50a-7ce87da4f273
> .vhd, useTCPKeepAlive: null,  due to Task failed! Task record:
>  uuid: 4be8a515-1e2a-59de-6301-029fb0326651
>nameLabel: Async.host.call_plugin
>  nameDescription: 
>allowedOperations: []
>currentOperations: {}
>  created: Thu Oct 22 21:42:44 CEST 2015
> finished: Thu Oct 22 21:42:44 CEST 2015
>   status: failure
>   residentOn: com.xensource.xenapi.Host@9c7aad90
> progress: 1.0
> type: 
>   result: 
>errorInfo: [XENAPI_MISSING_PLUGIN, s3xenserver]
>  otherConfig: {}
>subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
> Task failed! Task record: uuid: 
> 4be8a515-1e2a-59de-6301-029fb0326651
>nameLabel: Async.host.call_plugin
>  nameDescription: 
>allowedOperations: []
>currentOperations: {}
>  created: Thu Oct 22 21:42:44 CEST 2015
> finished: Thu Oct 22 21:42:44 CEST 2015
>   status: failure
>   residentOn: com.xensource.xenapi.Host@9c7aad90
> progress: 1.0
> type: 
>   result: 
>errorInfo: [XENAPI_MISSING_PLUGIN, s3xenserver]
>  otherConfig: {}
>subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
> Here we see the correct name:
> scripts/vm/hypervisor/xenserver/s3xen:lib.setup_logging("/var/log/cloud/s3xen.log")
> scripts/vm/hypervisor/xenserver/xenserver56/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver56fp1/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver60/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver62/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver65/patch:s3xen=..,0755,/etc/xapi.d/plugins
> And:
> scripts/vm/hypervisor/xenserver/xenserver56/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver56fp1/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver60/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver62/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver65/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> These plugins are pushed to the hypervisor.





[jira] [Commented] (CLOUDSTACK-9014) Rename xapi plugins for s3 and swift to make them work after renaming the calls

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982689#comment-14982689
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9014:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/982


> Rename xapi plugins for s3 and swift to make them work after renaming the 
> calls
> ---
>
> Key: CLOUDSTACK-9014
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9014
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: XenServer
>Affects Versions: 4.6.0
> Environment: XenServer using S3 or Swift as secondary storage
>Reporter: Remi Bergsma
>Assignee: Remi Bergsma
>Priority: Critical
> Fix For: 4.6.0
>
>
> It's called s3xen, not s3xenserver. While investigating, I found the same 
> issue for swiftxen.
> Regression from a8212d9, where things were massively renamed without proper 
> verification.
> Error seen:
> 2015-10-22 21:42:30,372 WARN  [c.c.h.x.r.CitrixResourceBase] 
> (DirectAgent-261:ctx-862ebceb) callHostPlugin failed for cmd: s3 with args 
> maxErrorRetry: 10, secretKey: +XGy4yPPbAH9AijYxFTr1yVCCiVQuSfXWWj1Invs, 
> connectionTtl: null, iSCSIFlag: false, maxSingleUploadSizeInBytes: 
> 5368709120, bucket: mccx-nl2, endPoint: s3.storage.acc.schubergphilis.com, 
> filename: /var/run/sr-mount/9414f970-0afd-42db-972f-aa4743293430/2
> cf0c24b-a596-4039-b50a-7ce87da4f273.vhd, accessKey: 16efbc4e870f24338141, 
> socketTimeout: null, https: false, connectionTimeout: 30, operation: put, 
> key: snapshots/2/10/2cf0c24b-a596-4039-b50a-7ce87da4f273
> .vhd, useTCPKeepAlive: null,  due to Task failed! Task record:
>  uuid: 4be8a515-1e2a-59de-6301-029fb0326651
>nameLabel: Async.host.call_plugin
>  nameDescription: 
>allowedOperations: []
>currentOperations: {}
>  created: Thu Oct 22 21:42:44 CEST 2015
> finished: Thu Oct 22 21:42:44 CEST 2015
>   status: failure
>   residentOn: com.xensource.xenapi.Host@9c7aad90
> progress: 1.0
> type: 
>   result: 
>errorInfo: [XENAPI_MISSING_PLUGIN, s3xenserver]
>  otherConfig: {}
>subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
> Task failed! Task record: uuid: 
> 4be8a515-1e2a-59de-6301-029fb0326651
>nameLabel: Async.host.call_plugin
>  nameDescription: 
>allowedOperations: []
>currentOperations: {}
>  created: Thu Oct 22 21:42:44 CEST 2015
> finished: Thu Oct 22 21:42:44 CEST 2015
>   status: failure
>   residentOn: com.xensource.xenapi.Host@9c7aad90
> progress: 1.0
> type: 
>   result: 
>errorInfo: [XENAPI_MISSING_PLUGIN, s3xenserver]
>  otherConfig: {}
>subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
> Here we see the correct name:
> scripts/vm/hypervisor/xenserver/s3xen:lib.setup_logging("/var/log/cloud/s3xen.log")
> scripts/vm/hypervisor/xenserver/xenserver56/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver56fp1/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver60/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver62/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver65/patch:s3xen=..,0755,/etc/xapi.d/plugins
> And:
> scripts/vm/hypervisor/xenserver/xenserver56/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver56fp1/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver60/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver62/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver65/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> These plugins are pushed to the hypervisor.





[jira] [Commented] (CLOUDSTACK-9014) Rename xapi plugins for s3 and swift to make them work after renaming the calls

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982688#comment-14982688
 ] 

ASF subversion and git services commented on CLOUDSTACK-9014:
-

Commit ab749ed97d478d08bfd71354897c4be6f82bc7a5 in cloudstack's branch 
refs/heads/master from [~remibergsma]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ab749ed ]

Merge pull request #982 from remibergsma/fix-s3-swift

CLOUDSTACK-9014 Rename xapi plugins for s3 and swift to make them work after 
renaming the callsMake renaming introduced in 
a8212d9ef458dd7ac64b021e6fa33fcf64b3cce0 work for S3 and Swift xapi plugins.

This PR is to address comments in PR #970

* pr/982:
  Rename xapi plugins for s3 and swift to make them work after renaming the 
calls

Signed-off-by: Remi Bergsma 


> Rename xapi plugins for s3 and swift to make them work after renaming the 
> calls
> ---
>
> Key: CLOUDSTACK-9014
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9014
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: XenServer
>Affects Versions: 4.6.0
> Environment: XenServer using S3 or Swift as secondary storage
>Reporter: Remi Bergsma
>Assignee: Remi Bergsma
>Priority: Critical
> Fix For: 4.6.0
>
>
> It's called s3xen, not s3xenserver. While investigating, I found the same 
> issue for swiftxen.
> Regression from a8212d9, where things were massively renamed without proper 
> verification.
> Error seen:
> 2015-10-22 21:42:30,372 WARN  [c.c.h.x.r.CitrixResourceBase] 
> (DirectAgent-261:ctx-862ebceb) callHostPlugin failed for cmd: s3 with args 
> maxErrorRetry: 10, secretKey: +XGy4yPPbAH9AijYxFTr1yVCCiVQuSfXWWj1Invs, 
> connectionTtl: null, iSCSIFlag: false, maxSingleUploadSizeInBytes: 
> 5368709120, bucket: mccx-nl2, endPoint: s3.storage.acc.schubergphilis.com, 
> filename: /var/run/sr-mount/9414f970-0afd-42db-972f-aa4743293430/2
> cf0c24b-a596-4039-b50a-7ce87da4f273.vhd, accessKey: 16efbc4e870f24338141, 
> socketTimeout: null, https: false, connectionTimeout: 30, operation: put, 
> key: snapshots/2/10/2cf0c24b-a596-4039-b50a-7ce87da4f273
> .vhd, useTCPKeepAlive: null,  due to Task failed! Task record:
>  uuid: 4be8a515-1e2a-59de-6301-029fb0326651
>nameLabel: Async.host.call_plugin
>  nameDescription: 
>allowedOperations: []
>currentOperations: {}
>  created: Thu Oct 22 21:42:44 CEST 2015
> finished: Thu Oct 22 21:42:44 CEST 2015
>   status: failure
>   residentOn: com.xensource.xenapi.Host@9c7aad90
> progress: 1.0
> type: 
>   result: 
>errorInfo: [XENAPI_MISSING_PLUGIN, s3xenserver]
>  otherConfig: {}
>subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
> Task failed! Task record: uuid: 
> 4be8a515-1e2a-59de-6301-029fb0326651
>nameLabel: Async.host.call_plugin
>  nameDescription: 
>allowedOperations: []
>currentOperations: {}
>  created: Thu Oct 22 21:42:44 CEST 2015
> finished: Thu Oct 22 21:42:44 CEST 2015
>   status: failure
>   residentOn: com.xensource.xenapi.Host@9c7aad90
> progress: 1.0
> type: 
>   result: 
>errorInfo: [XENAPI_MISSING_PLUGIN, s3xenserver]
>  otherConfig: {}
>subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
> Here we see the correct name:
> scripts/vm/hypervisor/xenserver/s3xen:lib.setup_logging("/var/log/cloud/s3xen.log")
> scripts/vm/hypervisor/xenserver/xenserver56/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver56fp1/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver60/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver62/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver65/patch:s3xen=..,0755,/etc/xapi.d/plugins
> And:
> scripts/vm/hypervisor/xenserver/xenserver56/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver56fp1/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver60/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver62/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver65/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> These plugins are pushed to the hypervisor.





[jira] [Commented] (CLOUDSTACK-9014) Rename xapi plugins for s3 and swift to make them work after renaming the calls

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982684#comment-14982684
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9014:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/982#issuecomment-152553309
  
@karuturi Added Jira issue CLOUDSTACK-9014


> Rename xapi plugins for s3 and swift to make them work after renaming the 
> calls
> ---
>
> Key: CLOUDSTACK-9014
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9014
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: XenServer
>Affects Versions: 4.6.0
> Environment: XenServer using S3 or Swift as secondary storage
>Reporter: Remi Bergsma
>Assignee: Remi Bergsma
>Priority: Critical
> Fix For: 4.6.0
>
>
> It's called s3xen, not s3xenserver. While investigating, I found the same 
> issue for swiftxen.
> Regression from a8212d9, where things were massively renamed without proper 
> verification.
> Error seen:
> 2015-10-22 21:42:30,372 WARN  [c.c.h.x.r.CitrixResourceBase] 
> (DirectAgent-261:ctx-862ebceb) callHostPlugin failed for cmd: s3 with args 
> maxErrorRetry: 10, secretKey: +XGy4yPPbAH9AijYxFTr1yVCCiVQuSfXWWj1Invs, 
> connectionTtl: null, iSCSIFlag: false, maxSingleUploadSizeInBytes: 
> 5368709120, bucket: mccx-nl2, endPoint: s3.storage.acc.schubergphilis.com, 
> filename: /var/run/sr-mount/9414f970-0afd-42db-972f-aa4743293430/2
> cf0c24b-a596-4039-b50a-7ce87da4f273.vhd, accessKey: 16efbc4e870f24338141, 
> socketTimeout: null, https: false, connectionTimeout: 30, operation: put, 
> key: snapshots/2/10/2cf0c24b-a596-4039-b50a-7ce87da4f273
> .vhd, useTCPKeepAlive: null,  due to Task failed! Task record:
>  uuid: 4be8a515-1e2a-59de-6301-029fb0326651
>nameLabel: Async.host.call_plugin
>  nameDescription: 
>allowedOperations: []
>currentOperations: {}
>  created: Thu Oct 22 21:42:44 CEST 2015
> finished: Thu Oct 22 21:42:44 CEST 2015
>   status: failure
>   residentOn: com.xensource.xenapi.Host@9c7aad90
> progress: 1.0
> type: 
>   result: 
>errorInfo: [XENAPI_MISSING_PLUGIN, s3xenserver]
>  otherConfig: {}
>subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
> Task failed! Task record: uuid: 
> 4be8a515-1e2a-59de-6301-029fb0326651
>nameLabel: Async.host.call_plugin
>  nameDescription: 
>allowedOperations: []
>currentOperations: {}
>  created: Thu Oct 22 21:42:44 CEST 2015
> finished: Thu Oct 22 21:42:44 CEST 2015
>   status: failure
>   residentOn: com.xensource.xenapi.Host@9c7aad90
> progress: 1.0
> type: 
>   result: 
>errorInfo: [XENAPI_MISSING_PLUGIN, s3xenserver]
>  otherConfig: {}
>subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> subtasks: []
> Here we see the correct name:
> scripts/vm/hypervisor/xenserver/s3xen:lib.setup_logging("/var/log/cloud/s3xen.log")
> scripts/vm/hypervisor/xenserver/xenserver56/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver56fp1/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver60/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver62/patch:s3xen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver65/patch:s3xen=..,0755,/etc/xapi.d/plugins
> And:
> scripts/vm/hypervisor/xenserver/xenserver56/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver56fp1/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver60/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver62/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> scripts/vm/hypervisor/xenserver/xenserver65/patch:swiftxen=..,0755,/etc/xapi.d/plugins
> These plugins are pushed to the hypervisor.





[jira] [Created] (CLOUDSTACK-9014) Rename xapi plugins for s3 and swift to make them work after renaming the calls

2015-10-30 Thread Remi Bergsma (JIRA)
Remi Bergsma created CLOUDSTACK-9014:


 Summary: Rename xapi plugins for s3 and swift to make them work 
after renaming the calls
 Key: CLOUDSTACK-9014
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9014
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: XenServer
Affects Versions: 4.6.0
 Environment: XenServer using S3 or Swift as secondary storage
Reporter: Remi Bergsma
Assignee: Remi Bergsma
Priority: Critical
 Fix For: 4.6.0


It's called s3xen, not s3xenserver. While investigating, I found the same issue 
for swiftxen.

Regression from a8212d9, where things were massively renamed without proper 
verification.

Error seen:

2015-10-22 21:42:30,372 WARN  [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-261:ctx-862ebceb) callHostPlugin failed for cmd: s3 with args 
maxErrorRetry: 10, secretKey: +XGy4yPPbAH9AijYxFTr1yVCCiVQuSfXWWj1Invs, 
connectionTtl: null, iSCSIFlag: false, maxSingleUploadSizeInBytes: 5368709120, 
bucket: mccx-nl2, endPoint: s3.storage.acc.schubergphilis.com, filename: 
/var/run/sr-mount/9414f970-0afd-42db-972f-aa4743293430/2
cf0c24b-a596-4039-b50a-7ce87da4f273.vhd, accessKey: 16efbc4e870f24338141, 
socketTimeout: null, https: false, connectionTimeout: 30, operation: put, 
key: snapshots/2/10/2cf0c24b-a596-4039-b50a-7ce87da4f273
.vhd, useTCPKeepAlive: null,  due to Task failed! Task record: 
uuid: 4be8a515-1e2a-59de-6301-029fb0326651
   nameLabel: Async.host.call_plugin
 nameDescription: 
   allowedOperations: []
   currentOperations: {}
 created: Thu Oct 22 21:42:44 CEST 2015
finished: Thu Oct 22 21:42:44 CEST 2015
  status: failure
  residentOn: com.xensource.xenapi.Host@9c7aad90
progress: 1.0
type: 
  result: 
   errorInfo: [XENAPI_MISSING_PLUGIN, s3xenserver]
 otherConfig: {}
   subtaskOf: com.xensource.xenapi.Task@aaf13f6f
subtasks: []
Task failed! Task record: uuid: 
4be8a515-1e2a-59de-6301-029fb0326651
   nameLabel: Async.host.call_plugin
 nameDescription: 
   allowedOperations: []
   currentOperations: {}
 created: Thu Oct 22 21:42:44 CEST 2015
finished: Thu Oct 22 21:42:44 CEST 2015
  status: failure
  residentOn: com.xensource.xenapi.Host@9c7aad90
progress: 1.0
type: 
  result: 
   errorInfo: [XENAPI_MISSING_PLUGIN, s3xenserver]
 otherConfig: {}
   subtaskOf: com.xensource.xenapi.Task@aaf13f6f
subtasks: []
Here we see the correct name:

scripts/vm/hypervisor/xenserver/s3xen:lib.setup_logging("/var/log/cloud/s3xen.log")
scripts/vm/hypervisor/xenserver/xenserver56/patch:s3xen=..,0755,/etc/xapi.d/plugins
scripts/vm/hypervisor/xenserver/xenserver56fp1/patch:s3xen=..,0755,/etc/xapi.d/plugins
scripts/vm/hypervisor/xenserver/xenserver60/patch:s3xen=..,0755,/etc/xapi.d/plugins
scripts/vm/hypervisor/xenserver/xenserver62/patch:s3xen=..,0755,/etc/xapi.d/plugins
scripts/vm/hypervisor/xenserver/xenserver65/patch:s3xen=..,0755,/etc/xapi.d/plugins
And:

scripts/vm/hypervisor/xenserver/xenserver56/patch:swiftxen=..,0755,/etc/xapi.d/plugins
scripts/vm/hypervisor/xenserver/xenserver56fp1/patch:swiftxen=..,0755,/etc/xapi.d/plugins
scripts/vm/hypervisor/xenserver/xenserver60/patch:swiftxen=..,0755,/etc/xapi.d/plugins
scripts/vm/hypervisor/xenserver/xenserver62/patch:swiftxen=..,0755,/etc/xapi.d/plugins
scripts/vm/hypervisor/xenserver/xenserver65/patch:swiftxen=..,0755,/etc/xapi.d/plugins
These plugins are pushed to the hypervisor.
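The mismatch can be reproduced in miniature. Below is a toy sketch (a hypothetical helper, not the actual XAPI client): the patch files install plugins under the names to the left of the `=`, while the renamed code asks for `s3xenserver`, so the call fails with `XENAPI_MISSING_PLUGIN`.

```python
# A toy reproduction (hypothetical helper, not the actual XAPI client) of the
# failure above: the patch files install plugins named "s3xen"/"swiftxen",
# but the renamed code asks XAPI for "s3xenserver".
patch_entries = [
    "s3xen=..,0755,/etc/xapi.d/plugins",
    "swiftxen=..,0755,/etc/xapi.d/plugins",
]
# Plugin names are the part left of the first "=" in each patch entry.
installed = {line.split("=", 1)[0] for line in patch_entries}

def call_host_plugin(plugin: str) -> str:
    """Mimic XAPI: fail if the named plugin is not installed on the host."""
    if plugin not in installed:
        raise RuntimeError(f"XENAPI_MISSING_PLUGIN, {plugin}")
    return "ok"

print(call_host_plugin("s3xen"))     # the correct, installed name
try:
    call_host_plugin("s3xenserver")  # the broken name from the rename
except RuntimeError as e:
    print(e)                         # XENAPI_MISSING_PLUGIN, s3xenserver
```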





[jira] [Commented] (CLOUDSTACK-9010) Fix packaging for CentOS 7

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982603#comment-14982603
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9010:


Github user davidamorimfaria commented on the pull request:

https://github.com/apache/cloudstack/pull/1008#issuecomment-152536728
  
I haven't installed a cloud with the packages made with this change, but so 
far the management servers start and talk to each other:

Oct 30 13:54:04 mgmt01 server: WARN  [c.c.c.ClusterManagerImpl] 
(Cluster-Notification-1:ctx-cb4bc2b4) Notifying management server join event 
took 0 ms

 These were the steps I took, using MariaDB 10 as the DB backend:
- yum localinstall cloudstack-common-4.6.0-SNAPSHOT.el7.centos.x86_64.rpm 
cloudstack-management-4.6.0-SNAPSHOT.el7.centos.x86_64.rpm 
cloudstack-usage-4.6.0-SNAPSHOT.el7.centos.x86_64.rpm 
cloudstack-cli-4.6.0-SNAPSHOT.el7.centos.x86_64.rpm
- [in mgmt01] cloudstack-setup-databases cloud:@localhost
- [in mgmt02] cloudstack-setup-databases 
cloud:@
- cloudstack-setup-management
- systemctl enable cloudstack-management
- systemctl start cloudstack-management



> Fix packaging for CentOS 7
> --
>
> Key: CLOUDSTACK-9010
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9010
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: David Amorim Faria
>Assignee: David Amorim Faria
>Priority: Blocker
> Fix For: 4.6.0
>
>
> The current packaging for CentOS 7 does not work in a newly 
> installed/upgraded CentOS 7 system.





[jira] [Commented] (CLOUDSTACK-9003) Make VM naming services injectable and in their own module

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982522#comment-14982522
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9003:


Github user DaanHoogland commented on the pull request:

https://github.com/apache/cloudstack/pull/988#issuecomment-152516868
  
@ProjectMoon: sounds good, let me know of any more PoC code or further 
design.


> Make VM naming services injectable and in their own module
> --
>
> Key: CLOUDSTACK-9003
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9003
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jeffrey Hair
>Priority: Minor
>
> Proposal: Make the various classes/code that give VMs and other resources 
> hostnames, hypervisor guest names, and UUIDs into their classes as injectable 
> dependencies in their own module under the core or backend module.
> This proposal originally only concerned the VirtualMachineName class and can 
> be broken down into several parts:
> * Make the VirtualMachineName class an injectable dependency instead of being 
> full of static methods.
> * Refactor the generateHostName method in UserVmManagerImpl to be backed by 
> an injectable service which generates host names.
> * Move the UUIDManagerImpl from the core module to a new module (grouped with 
> the other 2 ideally).
> Rationale:
> * VirtualMachineName is one of the few remaining classes that has static 
> methods tangled like spaghetti throughout the code. This change will put it 
> in line with the rest of the management server codebase and opens the door to 
> extensibility. Which brings us to...
> * Extensibility: The ultimate goal of this feature is to provide 3rd party 
> developers the option of changing default instance/resource naming policies. 
> Currently this is possible in a very limited fashion with the instance.name 
> global setting, but this proposal makes it much more extensible.
> By moving the naming-related services (VirtualMachineName, UUIDManager, and 
> more as added/discovered) to their own module, the module can be excluded by 
> modules.properties and different ones substituted in. Alternatively, it could 
> use the adapter model that other classes use, and the user can configure 
> which adapters are active and also provide custom ones.
> A good use case for this functionality is using a different style naming to 
> emulate other cloud providers such as AWS (i-abc123) or GCE. 





[jira] [Created] (CLOUDSTACK-9013) Virtual router failed to start on KVM

2015-10-30 Thread Wei Zhou (JIRA)
Wei Zhou created CLOUDSTACK-9013:


 Summary: Virtual router failed to start on KVM
 Key: CLOUDSTACK-9013
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9013
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Wei Zhou
Priority: Blocker


log:

2015-10-30 13:48:55,331 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-3:null) Executing: 
/usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh vr_cfg.sh 
169.254.2.176 -c /var/cache/cloud/VR-2edaa939-e9b9-4f06-b646-8c1643db4e69.cfg
2015-10-30 13:48:55,769 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-3:null) Exit value is 1
2015-10-30 13:48:55,770 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-3:null) VR config: execution failed: 
"/opt/cloud/bin/update_config.py ip_associations.json", check 
/var/log/cloud.log in VR for details


StartAnswer:

2015-10-30 13:48:55,788 DEBUG [cloud.agent.Agent] (agentRequest-Handler-3:null) 
Seq 20-465278136502714391:  { Ans: , MgmtId: 345051313197, via: 20, Ver: v1, 
Flags: 10, 
[{"com.cloud.agent.api.StartAnswer":{"vm":{"id":7514,"name":"r-7514-VM","type":"DomainRouter","cpus":1,"minSpeed":500,"maxSpeed":500,"minRam":134217728,"maxRam":134217728,"arch":"x86_64","os":"Debian
 GNU/Linux 7(64-bit)","platformEmulator":"Debian GNU/Linux 
7(64-bit)","bootArgs":" template=domP name=r-7514-VM eth2ip=10.11.115.143 
eth2mask=255.255.255.0 gateway=10.11.115.254 eth0ip=10.1.43.1 
eth0mask=255.255.255.0 domain=devcloud.lan cidrsize=24 dhcprange=10.1.43.1 
eth1ip=169.254.2.176 eth1mask=255.255.0.0 type=router disable_rp_filter=true 
dns1=8.8.8.8 
dns2=8.8.4.4","enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"38TQRIKGcN2FhQKBqvWh6A","vncAddr":"172.16.15.15","params":{},"uuid":"91352ace-cf1e-454b-8542-3bd3c9c27fff","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"204b7e56-f089-4861-9a42-00703c098fd5","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e","id":2,"poolType":"NetworkFilesystem","host":"172.16.15.254","path":"/storage/cs-115-pri","port":2049,"url":"NetworkFilesystem://172.16.15.254/storage/cs-115-pri/?ROLE=Primary&STOREUUID=1dcbc42c-99bc-3276-9d86-4ad81ef1ad8e"}},"name":"ROOT-7514","size":3145728000,"path":"204b7e56-f089-4861-9a42-00703c098fd5","volumeId":7516,"vmName":"r-7514-VM","accountId":2,"format":"QCOW2","provisioningType":"THIN","id":7516,"deviceId":0,"cacheMode":"NONE","hypervisorType":"KVM"}},"diskSeq":0,"path":"204b7e56-f089-4861-9a42-00703c098fd5","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"172.16.15.254","volumeSize":"3145728000"}}],"nics":[{"deviceId":2,"networkRateMbps":200,"defaultNic":true,"pxeDisable":true,"nicUuid":"09f90817-d35c-4a56-961e-2b1560144d68","uuid":"765b43a0-43f7-4b23-abcc-86ccc6197a0e","ip":"10.11.115.143","netmask":"255.255.255.0","gateway":"10.11.115.254","mac":"06:cc:fe:00:00:36","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Public","broadcastUri":"vlan://115","isolationUri":"vlan://115","isSecurityGroupEnabled":false,"name":"cloudbr0"},{"deviceId":0,"networkRateMbps":200,"defaultNic":false,"pxeDisable":true,"nicUuid":"9ec805c8-df0b-40b6-9505-adefb9e436f0","uuid":"a15faf7f-959f-4d63-a478-79c794c7e312","ip":"10.1.43.1","netmask":"255.255.255.0","mac":"02:00:68:a4:00:1f","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://854","isolationUri":"vlan://854","isSecurityGroupEnabled":false,"name":"cloudbr0"},{"deviceId":1,"networkRate
Mbps":-1,"defaultNic":false,"pxeDisable":true,"nicUuid":"acbb3a99-9853-49ad-b54b-606017fbe069","uuid":"dc8a1a58-e581-49a2-8377-dc4fba1dfa57","ip":"169.254.2.176","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:02:b0","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false}]},"result":true,"wait":0}},{"com.cloud.agent.api.check.CheckSshAnswer":{"result":true,"wait":0}},{"com.cloud.agent.api.GetDomRVersionAnswer":{"templateVersion":"Cloudstack
 Release 4.6.0 Thu Aug 6 23:23:49 UTC 
2015","scriptsVersion":"8e577757f8423c7479bc4ca71de97792\n","result":true,"details":"Cloudstack
 Release 4.6.0 Thu Aug 6 23:23:49 UTC 
2015&8e577757f8423c7479bc4ca71de97792\n","wait":0}},{"com.cloud.agent.api.NetworkUsageAnswer":{"routerName":"r-7514-VM","bytesSent":0,"bytesReceived":0,"result":true,"wait":0}},{"com.cloud.agent.api.Answer":{"result":true,"details":"Command
 aggregation 
started","wait":0}},{"com.cloud.agent.api.Answer":{"result":true,"wait":0}},{"com.cloud.agent.api.Answer":{"result":true,"wait":0}},{"com.cloud.agent.api.Answer":{"result":true,"wait":0}},{"com.cloud.agent.api.Answer":{"result":true,"wait":0}},{"com.cloud.agent

[jira] [Commented] (CLOUDSTACK-9003) Make VM naming services injectable and in their own module

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982504#comment-14982504
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9003:


Github user ProjectMoon commented on the pull request:

https://github.com/apache/cloudstack/pull/988#issuecomment-152513166
  
While I haven't pushed new work to this PR yet, I do have a prototype of 
what I intend to do, and am looking for thoughts on it here.

My plan:
* Move MachineNameService and UUID manager into one interface. Remove the 
methods from VirtualMachineNameService which are not used. Call this interface 
ResourceNamingPolicy.
* Create a ResourceNamingPolicy extension registry, and a new module for 
the default naming policy (which is the UUIDs ACS generates at the moment).
* When using methods from ResourceNamingPolicy, iterate through all 
registered policies until one returns a name that is not null. This is in line 
with other things like API authenticators, host planners, etc.
* Expand the use of the ResourceNamingPolicy beyond virtual machines and 
volumes (which is what VirtualMachineName/UUIDManager touch currently). The 
plan is to make use of this for these entities to begin with: VMs, volumes, 
templates, security groups.
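The iteration step in the plan above is a chain-of-responsibility pattern. A minimal sketch (hypothetical class and method names, not the real CloudStack interfaces) of consulting registered policies until one answers:

```python
# A sketch (hypothetical class and method names, not the real CloudStack
# interfaces) of the policy-iteration plan above: walk the registered
# ResourceNamingPolicy implementations until one returns a non-None name.
class DefaultUuidPolicy:
    """Stand-in for the default naming ACS applies today."""
    def generate_name(self, resource_type, resource_id):
        return f"{resource_type}-{resource_id}-VM"

class AwsStylePolicy:
    """Example 3rd-party policy: AWS-style names, but only for VMs."""
    def generate_name(self, resource_type, resource_id):
        if resource_type != "vm":
            return None  # defer to the next registered policy
        return f"i-{resource_id:08x}"

def name_resource(policies, resource_type, resource_id):
    # First policy to answer wins, like API authenticators and host planners.
    for policy in policies:
        name = policy.generate_name(resource_type, resource_id)
        if name is not None:
            return name
    raise LookupError("no naming policy produced a name")

policies = [AwsStylePolicy(), DefaultUuidPolicy()]
print(name_resource(policies, "vm", 7514))      # i-00001d5a
print(name_resource(policies, "volume", 7516))  # volume-7516-VM
```

Excluding or reordering the registered policies would then be a matter of module configuration, as the proposal suggests.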


> Make VM naming services injectable and in their own module
> --
>
> Key: CLOUDSTACK-9003
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9003
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jeffrey Hair
>Priority: Minor
>
> Proposal: Make the various classes/code that give VMs and other resources 
> hostnames, hypervisor guest names, and UUIDs into their classes as injectable 
> dependencies in their own module under the core or backend module.
> This proposal originally only concerned the VirtualMachineName class and can 
> be broken down into several parts:
> * Make the VirtualMachineName class an injectable dependency instead of being 
> full of static methods.
> * Refactor the generateHostName method in UserVmManagerImpl to be backed by 
> an injectable service which generates host names.
> * Move the UUIDManagerImpl from the core module to a new module (grouped with 
> the other 2 ideally).
> Rationale:
> * VirtualMachineName is one of the few remaining classes that has static 
> methods tangled like spaghetti throughout the code. This change will put it 
> in line with the rest of the management server codebase and opens the door to 
> extensibility. Which brings us to...
> * Extensibility: The ultimate goal of this feature is to provide 3rd party 
> developers the option of changing default instance/resource naming policies. 
> Currently this is possible in a very limited fashion with the instance.name 
> global setting, but this proposal makes it much more extensible.
> By moving the naming-related services (VirtualMachineName, UUIDManager, and 
> more as added/discovered) to their own module, the module can be excluded by 
> modules.properties and different ones substituted in. Alternatively, it could 
> use the adapter model that other classes use, and the user can configure 
> which adapters are active and also provide custom ones.
> A good use case for this functionality is using a different style naming to 
> emulate other cloud providers such as AWS (i-abc123) or GCE. 





[jira] [Commented] (CLOUDSTACK-9012) coreos test case automation

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982477#comment-14982477
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9012:


Github user shwetaag commented on the pull request:

https://github.com/apache/cloudstack/pull/1011#issuecomment-152510674
  
pasting result.txt

test1_coreos_VM_creation 
(integration.component.test_coreos.TestDeployVmWithCoreosTemplate) ... === 
TestName: test1_coreos_VM_creation | Status : SUCCESS ===
ok

--
Ran 1 test in 838.303s

OK



> coreos test case automation
> ---
>
> Key: CLOUDSTACK-9012
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9012
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation
>Reporter: shweta agarwal
>
> Automated a full scenario of coreos guest OS support:
> it includes registering coreos templates present at 
> http://dl.openvm.eu/cloudstack/coreos/x86_64/  
> 1. based on hypervisor types of zone
> 2.  creating ssh key pair 
> 3. creating a sample user data 
> 4. creating a coreos virtual machine using this ssh keypair and userdata
> 5. verifying ssh access to the coreos machine using the keypair and the core username
> 6. verifying userdata is applied on virtual machine and the service asked in 
> sample data is actually running
> 7. Verifying userdata in router vm as well





[jira] [Commented] (CLOUDSTACK-9012) coreos test case automation

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982474#comment-14982474
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9012:


GitHub user shwetaag opened a pull request:

https://github.com/apache/cloudstack/pull/1011

CLOUDSTACK-9012: automation of coreos feature test path

https://issues.apache.org/jira/browse/CLOUDSTACK-9012
Automated a full scenario of coreos guest OS support:
it includes registering coreos templates present at 
http://dl.openvm.eu/cloudstack/coreos/x86_64/
1. based on hypervisor types of zone
2. creating ssh key pair
3. creating a sample user data
4. creating a coreos virtual machine using this ssh keypair and userdata
5. verifying ssh access to the coreos machine using the keypair and the core username
6. verifying userdata is applied on virtual machine and the service asked 
in sample data is actually running
7. Verifying userdata in router vm as well

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shwetaag/cloudstack coreos

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1011.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1011


commit 95355960746fbe1bdc649a483bff250b115ee7e7
Author: shweta agarwal 
Date:   2015-10-30T11:24:16Z

automation of cores feature test path

corrected some entires in test data




> coreos test case automation
> ---
>
> Key: CLOUDSTACK-9012
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9012
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation
>Reporter: shweta agarwal
>
> Automated a full scenario of coreos guest OS support:
> it includes registering coreos templates present at 
> http://dl.openvm.eu/cloudstack/coreos/x86_64/  
> 1. based on hypervisor types of zone
> 2.  creating ssh key pair 
> 3. creating a sample user data 
> 4. creating a coreos virtual machine using this ssh keypair and userdata
> 5. verifying ssh access to the coreos machine using the keypair and the core username
> 6. verifying userdata is applied on virtual machine and the service asked in 
> sample data is actually running
> 7. Verifying userdata in router vm as well





[jira] [Created] (CLOUDSTACK-9012) coreos test case automation

2015-10-30 Thread shweta agarwal (JIRA)
shweta agarwal created CLOUDSTACK-9012:
--

 Summary: coreos test case automation
 Key: CLOUDSTACK-9012
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9012
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Automation
Reporter: shweta agarwal


Automated a full scenario of coreos guest OS support:
it includes registering coreos templates present at 
http://dl.openvm.eu/cloudstack/coreos/x86_64/  
1. based on hypervisor types of zone
2.  creating ssh key pair 
3. creating a sample user data 
4. creating a coreos virtual machine using this ssh keypair and userdata
5. verifying ssh access to the coreos machine using the keypair and the core username
6. verifying userdata is applied on virtual machine and the service asked in 
sample data is actually running
7. Verifying userdata in router vm as well






[jira] [Updated] (CLOUDSTACK-9011) user_vm_view does not check for account_id of keypairs

2015-10-30 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi updated CLOUDSTACK-9011:

Assignee: Nera Nesic

> user_vm_view does not check for account_id of keypairs 
> ---
>
> Key: CLOUDSTACK-9011
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9011
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nera Nesic
>Assignee: Nera Nesic
>Priority: Minor
>
> If two accounts use the same public key but register it under different key 
> names, and a VM is created using one of the keys, the view will list both 
> keypairs as belonging to the VM, which can in turn confuse users who see a 
> keypair name they did not create.





[jira] [Resolved] (CLOUDSTACK-9011) user_vm_view does not check for account_id of keypairs

2015-10-30 Thread Rajani Karuturi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajani Karuturi resolved CLOUDSTACK-9011.
-
Resolution: Fixed

> user_vm_view does not check for account_id of keypairs 
> ---
>
> Key: CLOUDSTACK-9011
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9011
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nera Nesic
>Priority: Minor
>
> If two accounts use the same public key but register it under different key 
> names, and a VM is created using one of the keys, the view will list both 
> keypairs as belonging to the VM, which can in turn confuse users who see a 
> keypair name they did not create.





[jira] [Commented] (CLOUDSTACK-8677) Call-home functionality for CloudStack

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982450#comment-14982450
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8677:


Github user wido commented on the pull request:

https://github.com/apache/cloudstack/pull/987#issuecomment-152505938
  
@borisroman Can we run the tests on this one?


> Call-home functionality for CloudStack
> --
>
> Key: CLOUDSTACK-8677
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8677
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: Future, 4.6.0
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
> Fix For: 4.6.0
>
>
> A call-home feature for the CloudStack management server would send 
> anonymized reports back to the CloudStack project.
> These statistics will contain numbers and details on how CloudStack will be 
> used. It will NOT contain:
> * Hostnames
> * IP-Addresses
> * Instance names
> It will report back:
> * Hosts (Number, version, type, hypervisor)
> * Clusters (Hypervisor and management type)
> * Primary storage (Type and provider)
> * Zones (Network type and providers)
> * Instances (Number and types)
> This gives the CloudStack project a better insight on how CloudStack is being 
> used and allows us to develop accordingly.
> It will be OPT-OUT: using the "usage.report.interval" setting, users can 
> disable usage reporting. By default it will run every 7 days and send a JSON 
> document to https://call-home.cloudstack.org/
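As a rough illustration of the report format (field names are hypothetical; the real schema is defined by the feature's implementation), the JSON document might look like this:

```python
# A rough sketch of an anonymized usage report (field names are hypothetical;
# the real schema is defined by the feature's implementation). Note it holds
# counts and types only: no hostnames, IP addresses, or instance names.
import json

report = {
    "version": "4.6.0",
    "hosts": [{"hypervisor": "KVM", "version": "4.6.0", "count": 12}],
    "clusters": [{"hypervisor": "KVM", "managementType": "ManagedCluster"}],
    "primaryStorage": [{"type": "NetworkFilesystem"}],
    "zones": [{"networkType": "Advanced"}],
    "instances": {"running": 240, "stopped": 17},
}

payload = json.dumps(report)
# The management server would POST this JSON document on each
# usage.report.interval tick (every 7 days by default) to
# https://call-home.cloudstack.org/ unless the operator opts out.
print(payload)
```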





[jira] [Commented] (CLOUDSTACK-9011) user_vm_view does not check for account_id of keypairs

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982433#comment-14982433
 ] 

ASF subversion and git services commented on CLOUDSTACK-9011:
-

Commit 9191da31121e851725c6702c0bb39b9319dec0bd in cloudstack's branch 
refs/heads/master from [~nnesic]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=9191da3 ]

CLOUDSTACK-9011 - Fixed user_vm_view to only display keypairs belonging to the 
account.


> user_vm_view does not check for account_id of keypairs 
> ---
>
> Key: CLOUDSTACK-9011
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9011
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nera Nesic
>Priority: Minor
>
> If two accounts use the same public key but register it under different key 
> names, and a VM is created using one of the keys, the view will list both 
> keypairs as belonging to the VM, which can in turn confuse users who see a 
> keypair name they did not create.





[jira] [Commented] (CLOUDSTACK-9011) user_vm_view does not check for account_id of keypairs

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982434#comment-14982434
 ] 

ASF subversion and git services commented on CLOUDSTACK-9011:
-

Commit bc5a5d662340030fe8f3182f4b3385682a890c47 in cloudstack's branch 
refs/heads/master from [~remibergsma]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=bc5a5d6 ]

Merge pull request #1006 from greenqloud/user_vm_keypairs_fix

Fixed user_vm_view to only display keypairs belonging to the account. The 
user_vm_view displays the keypair information by joining vm_details with 
ssh_keypairs on the key value exclusively.

We found a scenario in which this can cause information leakage. If two 
accounts use the same public key but register it under different key names, 
and a VM is created using one of the keys, the view will list both keypairs 
as belonging to the VM, which can in turn confuse users who see a keypair 
name they did not create.

The fix simply limits the view to displaying keypairs which belong to vm's 
account.

I added it to the latest schema migration only; should I also include it in the 
previous ones?

* pr/1006:
  CLOUDSTACK-9011 - Fixed user_vm_view to only display keypairs belonging to 
the account.

Signed-off-by: Remi Bergsma 
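The join fix described in the commit message can be shown with a self-contained sketch (a toy schema, not the real `user_vm_view` DDL): joining on the public-key value alone leaks the other account's keypair name, while adding the `account_id` predicate restricts the result to the VM owner's keypair.

```python
# A self-contained illustration (toy schema, not the real user_vm_view DDL)
# of the leak and the fix: joining on the public-key value alone pulls in
# another account's keypair name; adding account_id to the join fixes it.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE ssh_keypairs (account_id INT, keypair_name TEXT, public_key TEXT);
CREATE TABLE vm (id INT, account_id INT, public_key TEXT);
INSERT INTO ssh_keypairs VALUES (1, 'alice-key', 'ssh-rsa SHARED'),
                                (2, 'bob-key',   'ssh-rsa SHARED');
INSERT INTO vm VALUES (7514, 1, 'ssh-rsa SHARED');
""")

# Broken join: key value only, so both accounts' keypair names come back.
broken = db.execute("""
    SELECT k.keypair_name FROM vm v
    JOIN ssh_keypairs k ON k.public_key = v.public_key
    WHERE v.id = 7514""").fetchall()

# Fixed join: the keypair must also belong to the VM's account.
fixed = db.execute("""
    SELECT k.keypair_name FROM vm v
    JOIN ssh_keypairs k ON k.public_key = v.public_key
                       AND k.account_id = v.account_id
    WHERE v.id = 7514""").fetchall()

print(broken)  # both names leak
print(fixed)   # only the owner's name
```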


> user_vm_view does not check for account_id of keypairs 
> ---
>
> Key: CLOUDSTACK-9011
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9011
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nera NEsic
>Priority: Minor
>
> If there are two accounts using the same key, but create a different key name 
> for it, and then a vm is created using one of the keys, the view will list 
> both keypairs as belonging to the vm, which can in turn cause confusion to 
> the users who see a keypair name which they did not create.





[jira] [Commented] (CLOUDSTACK-8793) Project Site-2-Site VPN Connection Fails to Register Correctly

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982424#comment-14982424
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8793:


Github user pdion891 commented on the pull request:

https://github.com/apache/cloudstack/pull/879#issuecomment-152500179
  
Thanks @remibergsma. I haven't had time to retest that branch again; now 
that it's in master I will retry...




> Project Site-2-Site VPN Connection Fails to Register Correctly
> --
>
> Key: CLOUDSTACK-8793
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8793
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Projects
>Affects Versions: 4.5.2
> Environment: Clean install of ACS 4.5.2 on CentOS 6.6
>Reporter: Geoff Higgibottom
>Assignee: Patrick D.
>  Labels: project, vpc, vpn
>
> When trying to create a new Site-2-Site VPN Connection for a Project using 
> the UI the following error message is presented.
> "VPN connection can only be esitablished between same account's VPN gateway 
> and customer gateway!"
> Apart from the spelling mistake in the error message, the main issue is that 
> the VPN Connection fails to create as the VPN Customer Gateway is linked to 
> the Logged in user account, and not the Project.
> The VPN Gateway is correctly linked to the Project, as this was fixed in 
> CLOUDSTACK-5409.
> Manually updating the ‘domain_id’ and ‘account_id’ values in the 
> ‘s2s_vpn_connection’ table in the DB will result in the successful creation 
> of the VPN Connection, but this connection will not display in the UI or when 
> querying via the API.
> The same error exists when using only the API so it is not a UI issue.
> This prevents the use of Site-2-Site VPNs for VPCs belonging to Projects.





[jira] [Created] (CLOUDSTACK-9011) user_vm_view does not check for account_id of keypairs

2015-10-30 Thread Nera NEsic (JIRA)
Nera NEsic created CLOUDSTACK-9011:
--

 Summary: user_vm_view does not check for account_id of keypairs 
 Key: CLOUDSTACK-9011
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9011
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Nera NEsic
Priority: Minor


If there are two accounts using the same key, but create a different key name 
for it, and then a vm is created using one of the keys, the view will list both 
keypairs as belonging to the vm, which can in turn cause confusion to the users 
who see a keypair name which they did not create.





[jira] [Commented] (CLOUDSTACK-8715) Add support for qemu-guest-agent to libvirt provider

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982356#comment-14982356
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8715:


Github user wido commented on the pull request:

https://github.com/apache/cloudstack/pull/985#issuecomment-152494985
  
@ustcweizhou @remibergsma I just pushed a new version of the commit.

On Ubuntu AppArmor needs to be disabled since the default profile for 
libvirt doesn't allow writing into /var/lib/libvirt/qemu. This is however 
already the case with the SSVM.

This could be fixed by adding this to 
'/etc/apparmor.d/abstractions/libvirt-qemu':
/var/lib/libvirt/qemu/channel/target/* rw,


> Add support for qemu-guest-agent to libvirt provider
> 
>
> Key: CLOUDSTACK-8715
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8715
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Reporter: Sten Spans
>Assignee: Wido den Hollander
>  Labels: kvm, libvirt, qemu, systemvm
> Fix For: Future
>
>
> The qemu guest agent is a newer part of qemu/kvm/libvirt which exposes quite 
> a lot of useful functionality, which can only be provided by having an agent 
> on the VM. This includes things like freezing/thawing filesystems (for 
> backups), reading files on the guest, listing interfaces / ip addresses, etc.
> This feature has been requested by users, but is currently not implemented.
> http://users.cloudstack.apache.narkive.com/3TTmy3zj/enabling-qemu-guest-agent
> The first change needed is to add the following to the XML generated for KVM 
> virtual machines:
> 
>   
>   
> 
> This provides the communication channel between libvirt and the agent on the 
> host. All in all a pretty simple change to LibvirtComputingResource.java / 
> LibvirtVMDef.java
> Secondly the qemu-guest-agent package needs to be added to the systemvm 
> template.
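The XML fragment quoted above was stripped by the mail archiver (the angle-bracketed lines show as blanks). The conventional libvirt channel definition for the qemu guest agent, shown here as an assumption about what the original contained (the standard virtio channel with the default agent target name), looks like:

```
<channel type='unix'>
  <source mode='bind'/>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>
```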





[jira] [Commented] (CLOUDSTACK-8977) cloudstack UI creates a session for users not yet logged in

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982350#comment-14982350
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8977:


Github user terbolous commented on the pull request:

https://github.com/apache/cloudstack/pull/961#issuecomment-152491165
  
just proves that we really need a new web ui :-)


> cloudstack UI creates a session for users not yet logged in
> ---
>
> Key: CLOUDSTACK-8977
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8977
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.5.2
>Reporter: Laszlo Hornyak
>Assignee: Laszlo Hornyak
> Fix For: Future
>
>   Original Estimate: 0.1h
>  Remaining Estimate: 0.1h
>
> The cloudstack UI always creates a session. By executing a command like 'ab 
> -n 20 -c 32' the server can be killed really quickly.





[jira] [Commented] (CLOUDSTACK-8977) cloudstack UI creates a session for users not yet logged in

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982346#comment-14982346
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8977:


Github user miguelaferreira commented on the pull request:

https://github.com/apache/cloudstack/pull/961#issuecomment-152490396
  
After digging in a bit more, I've debugged the browser session when hitting 
the home page with `session="false"`. What I see is that the page is not even 
completely downloaded. My best guess is that the server stops responding when 
the exceptions I posted in the previous comment happen, and because the page 
is not completely downloaded it is also not well rendered. In addition, the 
script tags are at the end of the page, so none of the dynamic behavior will 
work either.

Researching for similar problems took me to a stackoverflow question

http://stackoverflow.com/questions/2055502/java-lang-illegalstateexception-pwc3999-cannot-create-a-session-after-the-resp?answertab=active#tab-top

In the accepted answer, the poster explains that the exception (the same one 
I see, "Cannot create a session after the response has been committed") has to 
do with trying to write to a response that has already been committed (i.e. 
already sent to the client). The poster also mentions that the response is 
automatically flushed (i.e. sent to the client) once it reaches a size of 2K 
(I assume 2K bytes). Looking at the size of the HTML generated for the ACS 
home page, one can easily see that it is far larger than 2K (actually about 
226K).

While I am not an expert on these matters, it seems to me that the culprit is 
the sheer size and complexity of this web page. How it actually works with the 
`session="false"` declaration is a mystery to me.


> cloudstack UI creates a session for users not yet logged in
> ---
>
> Key: CLOUDSTACK-8977
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8977
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.5.2
>Reporter: Laszlo Hornyak
>Assignee: Laszlo Hornyak
> Fix For: Future
>
>   Original Estimate: 0.1h
>  Remaining Estimate: 0.1h
>
> The cloudstack UI always creates a session. By executing a command like 'ab 
> -n 20 -c 32' the server can be killed really quickly.





[jira] [Commented] (CLOUDSTACK-9000) Logrotate cloudstack-agent error and out files

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982331#comment-14982331
 ] 

ASF subversion and git services commented on CLOUDSTACK-9000:
-

Commit ba9a600410c8d7818bb19e24b4389722ef6507e8 in cloudstack's branch 
refs/heads/4.5 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ba9a600 ]

CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs

Adds logrotate rules for cloudstack-agent.{err,out} log files

Signed-off-by: Rohit Yadav 


> Logrotate cloudstack-agent error and out files
> --
>
> Key: CLOUDSTACK-9000
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9000
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.5.3, 4.6.0
>
>
> As defined in cloud-agent.rc ( -errfile $LOGDIR/cloudstack-agent.err -outfile 
> $LOGDIR/cloudstack-agent.out $CLASS), jsvc can fill up the disk very quickly in 
> case of errors. The fix would be to logrotate the out and err files.
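As a sketch only (the paths and limits here are illustrative assumptions, not the exact rules shipped in the packages), a logrotate rule for these files would look like:

```
/var/log/cloudstack/agent/cloudstack-agent.err /var/log/cloudstack/agent/cloudstack-agent.out {
    copytruncate
    daily
    rotate 5
    compress
    size 10M
    missingok
}
```

copytruncate matters here because jsvc keeps the files open; rotating by rename alone would leave it writing to the rotated file.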





[jira] [Commented] (CLOUDSTACK-9000) Logrotate cloudstack-agent error and out files

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982334#comment-14982334
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9000:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/993


> Logrotate cloudstack-agent error and out files
> --
>
> Key: CLOUDSTACK-9000
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9000
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.5.3, 4.6.0
>
>
> As defined in cloud-agent.rc ( -errfile $LOGDIR/cloudstack-agent.err -outfile 
> $LOGDIR/cloudstack-agent.out $CLASS), jsvc can fill up the disk very quickly in 
> case of errors. The fix would be to logrotate the out and err files.





[jira] [Commented] (CLOUDSTACK-9000) Logrotate cloudstack-agent error and out files

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982332#comment-14982332
 ] 

ASF subversion and git services commented on CLOUDSTACK-9000:
-

Commit ef90fec5eaba0b7a9f0707ee3bd5eed9aea9eedb in cloudstack's branch 
refs/heads/4.5 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ef90fec ]

Merge pull request #993 from shapeblue/4.5-logrotate-kvm-agent-erroutlogs

[4.5] CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs. Adds 
logrotate rules for cloudstack-agent.{err,out} log files.

cc @remibergsma @wido @wilderrodrigues and others

* pr/993:
  CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs

Signed-off-by: Rohit Yadav 


> Logrotate cloudstack-agent error and out files
> --
>
> Key: CLOUDSTACK-9000
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9000
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.5.3, 4.6.0
>
>
> As defined in cloud-agent.rc ( -errfile $LOGDIR/cloudstack-agent.err -outfile 
> $LOGDIR/cloudstack-agent.out $CLASS), jsvc can fill up the disk very quickly in 
> case of errors. The fix would be to logrotate the out and err files.





[jira] [Commented] (CLOUDSTACK-9000) Logrotate cloudstack-agent error and out files

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982333#comment-14982333
 ] 

ASF subversion and git services commented on CLOUDSTACK-9000:
-

Commit ef90fec5eaba0b7a9f0707ee3bd5eed9aea9eedb in cloudstack's branch 
refs/heads/4.5 from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ef90fec ]

Merge pull request #993 from shapeblue/4.5-logrotate-kvm-agent-erroutlogs

[4.5] CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs. Adds 
logrotate rules for cloudstack-agent.{err,out} log files.

cc @remibergsma @wido @wilderrodrigues and others

* pr/993:
  CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs

Signed-off-by: Rohit Yadav 


> Logrotate cloudstack-agent error and out files
> --
>
> Key: CLOUDSTACK-9000
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9000
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.5.3, 4.6.0
>
>
> As defined in cloud-agent.rc ( -errfile $LOGDIR/cloudstack-agent.err -outfile 
> $LOGDIR/cloudstack-agent.out $CLASS), jsvc can fill up the disk very quickly in 
> case of errors. The fix would be to logrotate the out and err files.





[jira] [Commented] (CLOUDSTACK-9000) Logrotate cloudstack-agent error and out files

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982304#comment-14982304
 ] 

ASF subversion and git services commented on CLOUDSTACK-9000:
-

Commit af90caf63aa70314b2933cd974271e56ff33e60d in cloudstack's branch 
refs/heads/master from [~remibergsma]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=af90caf ]

Merge pull request #992 from shapeblue/master-logrotate-kvm-agent-erroutlogs

[master/4.6] CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs. Adds 
logrotate rules for cloudstack-agent.{err,out}; jsvc err/out log files may fill 
up the disk. This adds a logrotate config to the RPM packages.

cc @remibergsma @wido @wilderrodrigues and others

* pr/992:
  CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs

Signed-off-by: Remi Bergsma 


> Logrotate cloudstack-agent error and out files
> --
>
> Key: CLOUDSTACK-9000
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9000
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.5.3, 4.6.0
>
>
> As defined in cloud-agent.rc ( -errfile $LOGDIR/cloudstack-agent.err -outfile 
> $LOGDIR/cloudstack-agent.out $CLASS), jsvc can fill up the disk very quickly in 
> case of errors. The fix would be to logrotate the out and err files.





[jira] [Commented] (CLOUDSTACK-9000) Logrotate cloudstack-agent error and out files

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982306#comment-14982306
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9000:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/992#issuecomment-152486552
  
Double checked a compile:

```
[INFO] 

[INFO] BUILD SUCCESS
[INFO] 

[INFO] Total time: 6:57.299s
[INFO] Finished at: Fri Oct 30 10:21:07 GMT 2015
[INFO] Final Memory: 84M/417M
[INFO] 

+ exit 0
```

Will soon merge.


> Logrotate cloudstack-agent error and out files
> --
>
> Key: CLOUDSTACK-9000
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9000
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.5.3, 4.6.0
>
>
> As defined in cloud-agent.rc ( -errfile $LOGDIR/cloudstack-agent.err -outfile 
> $LOGDIR/cloudstack-agent.out $CLASS), jsvc can fill up the disk very quickly in 
> case of errors. The fix would be to logrotate the out and err files.





[jira] [Commented] (CLOUDSTACK-9000) Logrotate cloudstack-agent error and out files

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982303#comment-14982303
 ] 

ASF subversion and git services commented on CLOUDSTACK-9000:
-

Commit 909df859b329d0a63a5229cabd261ff0d233f696 in cloudstack's branch 
refs/heads/master from [~rohit.ya...@shapeblue.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=909df85 ]

CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs

Adds logrotate rules for cloudstack-agent.{err,out} log files

Signed-off-by: Rohit Yadav 


> Logrotate cloudstack-agent error and out files
> --
>
> Key: CLOUDSTACK-9000
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9000
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.5.3, 4.6.0
>
>
> As defined in cloud-agent.rc ( -errfile $LOGDIR/cloudstack-agent.err -outfile 
> $LOGDIR/cloudstack-agent.out $CLASS), jsvc can fill up the disk very quickly in 
> case of errors. The fix would be to logrotate the out and err files.





[jira] [Commented] (CLOUDSTACK-9000) Logrotate cloudstack-agent error and out files

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982305#comment-14982305
 ] 

ASF subversion and git services commented on CLOUDSTACK-9000:
-

Commit af90caf63aa70314b2933cd974271e56ff33e60d in cloudstack's branch 
refs/heads/master from [~remibergsma]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=af90caf ]

Merge pull request #992 from shapeblue/master-logrotate-kvm-agent-erroutlogs

[master/4.6] CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs. Adds 
logrotate rules for cloudstack-agent.{err,out}; jsvc err/out log files may fill 
up the disk. This adds a logrotate config to the RPM packages.

cc @remibergsma @wido @wilderrodrigues and others

* pr/992:
  CLOUDSTACK-9000: logrotate cloudstack-agent out and err logs

Signed-off-by: Remi Bergsma 


> Logrotate cloudstack-agent error and out files
> --
>
> Key: CLOUDSTACK-9000
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9000
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.5.3, 4.6.0
>
>
> As defined in cloud-agent.rc ( -errfile $LOGDIR/cloudstack-agent.err -outfile 
> $LOGDIR/cloudstack-agent.out $CLASS), jsvc can fill up the disk very quickly in 
> case of errors. The fix would be to logrotate the out and err files.





[jira] [Commented] (CLOUDSTACK-9000) Logrotate cloudstack-agent error and out files

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982309#comment-14982309
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9000:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/992


> Logrotate cloudstack-agent error and out files
> --
>
> Key: CLOUDSTACK-9000
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9000
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.5.3, 4.6.0
>
>
> As defined in cloud-agent.rc ( -errfile $LOGDIR/cloudstack-agent.err -outfile 
> $LOGDIR/cloudstack-agent.out $CLASS), jsvc can fill up the disk very quickly in 
> case of errors. The fix would be to logrotate the out and err files.





[jira] [Commented] (CLOUDSTACK-9010) Fix packaging for CentOS 7

2015-10-30 Thread David Amorim Faria (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982283#comment-14982283
 ] 

David Amorim Faria commented on CLOUDSTACK-9010:


Same issue for other versions.

> Fix packaging for CentOS 7
> --
>
> Key: CLOUDSTACK-9010
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9010
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: David Amorim Faria
>Assignee: David Amorim Faria
>Priority: Blocker
> Fix For: 4.6.0
>
>
> The current packaging for CentOS 7 does not work in a newly 
> installed/upgraded CentOS 7 system.





[jira] [Created] (CLOUDSTACK-9010) Fix packaging for CentOS 7

2015-10-30 Thread David Amorim Faria (JIRA)
David Amorim Faria created CLOUDSTACK-9010:
--

 Summary: Fix packaging for CentOS 7
 Key: CLOUDSTACK-9010
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9010
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.6.0
Reporter: David Amorim Faria
Assignee: David Amorim Faria
Priority: Blocker
 Fix For: 4.6.0


The current packaging for CentOS 7 does not work in a newly installed/upgraded 
CentOS 7 system.






[jira] [Commented] (CLOUDSTACK-8993) DHCP fails with "no address available" when an IP is reused

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982280#comment-14982280
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8993:


Github user serbaut commented on the pull request:

https://github.com/apache/cloudstack/pull/981#issuecomment-152481954
  
Afaict, the DHCP code is rewritten in 4.6 so this exact issue shouldn't 
affect 4.5.


> DHCP fails with "no address available" when an IP is reused
> ---
>
> Key: CLOUDSTACK-8993
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8993
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: SystemVM
>Affects Versions: 4.6.0
>Reporter: Joakim Sernbrant
>Priority: Critical
>
> CsDhcp.process() appends new entries to /etc/dhcphosts.txt causing duplicates 
> like:
> {code}
> 06:49:14:00:00:4d,10.7.32.107,node1,infinite
> 06:42:b0:00:00:3a,10.7.32.107,node2,infinite
> {code}
> This makes dnsmasq fail with "no address available".
> CsDhcp.process() should repopulate the file to remove old entries with the 
> same IP address.
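The repopulation the reporter suggests can be sketched as follows. This is a hypothetical illustration, not the actual CsDhcp code; it keeps only the most recent entry per IP:

```python
def repopulate(entries):
    """Keep only the most recent dhcphosts entry per IP address.

    entries: list of 'mac,ip,hostname,lease' strings in arrival order.
    """
    by_ip = {}
    for line in entries:
        ip = line.split(",")[1]
        by_ip[ip] = line  # a later entry for the same IP overwrites the old one
    return list(by_ip.values())


# The duplicate pair from the issue description: 10.7.32.107 was reused.
hosts = [
    "06:49:14:00:00:4d,10.7.32.107,node1,infinite",
    "06:42:b0:00:00:3a,10.7.32.107,node2,infinite",
]
print(repopulate(hosts))  # only node2's entry survives
```

With the stale node1 entry gone, dnsmasq no longer sees two hosts claiming the same address and can hand the IP to the new VM.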





[jira] [Updated] (CLOUDSTACK-8812) CentOS 7 - systemd-tmpfiles - Operation not permitted

2015-10-30 Thread David Amorim Faria (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Amorim Faria updated CLOUDSTACK-8812:
---
Affects Version/s: (was: 4.6.0)

> CentOS 7 - systemd-tmpfiles - Operation not permitted
> -
>
> Key: CLOUDSTACK-8812
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8812
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Install and Setup
>Affects Versions: 4.5.2
> Environment: KVM VM / CentOS Linux release 7.1.1503 (Core)
>Reporter: Sven Vogel
>Priority: Blocker
>
> Installation of the ShapeBlue upstream 4.5.2 repository. Setup of the 
> database works. When I start the service with systemctl start 
> cloudstack-management.service I get the following error.
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd[1]: Starting CloudStack 
> Management Server...
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/netreport) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/dev/net) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/dev/mapper) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/dev/vfio) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/dev/snd) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lock) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lock/subsys) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lock/lockdev) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/setrans) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lock/lvm) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lvm) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/console) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/faillock) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/sepermit) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/ppp) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/lock/ppp) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/user) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: Failed to 
> create file /var/log/wtmp: Permission denied
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: Failed to 
> create file /var/log/btmp: Permission denied
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/ask-password) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/seats) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/sessions) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/users) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/machines) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/shutdown) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/log/journal) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/log/journal/9b4671a20660436c8068d4f91eea0c1d) failed: Operation 
> not p
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/tmp) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/tmp) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: Failed to 
> create file /var/run/tomcat.pid: Permission denied
> Sep 04 17:14:34 cloudstack01.o

[jira] [Commented] (CLOUDSTACK-8812) CentOS 7 - systemd-tmpfiles - Operation not permitted

2015-10-30 Thread David Amorim Faria (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982277#comment-14982277
 ] 

David Amorim Faria commented on CLOUDSTACK-8812:


Hi all,

This PR [https://github.com/apache/cloudstack/pull/1008] solves the issue in 4.6.
I'm going to create a new ticket for 4.6 and remove it from the Affects 
Version/s.

Regards

> CentOS 7 - systemd-tmpfiles - Operation not permitted
> -
>
> Key: CLOUDSTACK-8812
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8812
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Install and Setup
>Affects Versions: 4.5.2, 4.6.0
> Environment: KVM VM / CentOS Linux release 7.1.1503 (Core)
>Reporter: Sven Vogel
>Priority: Blocker
>
> Installation of the ShapeBlue upstream 4.5.2 repository. Setup of the 
> database works. When I start the service with systemctl start 
> cloudstack-management.service I get the following error.
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd[1]: Starting CloudStack 
> Management Server...
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/netreport) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/dev/net) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/dev/mapper) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/dev/vfio) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/dev/snd) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lock) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lock/subsys) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lock/lockdev) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/setrans) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lock/lvm) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lvm) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/console) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/faillock) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/sepermit) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/ppp) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/lock/ppp) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/user) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: Failed to 
> create file /var/log/wtmp: Permission denied
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: Failed to 
> create file /var/log/btmp: Permission denied
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/ask-password) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/seats) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/sessions) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/users) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/machines) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/shutdown) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/log/journal) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/log/journal/9b4671a20660436c8068d4f91eea0c1d) failed: Operation 
> not p
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/tmp) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod

[jira] [Comment Edited] (CLOUDSTACK-8812) CentOS 7 - systemd-tmpfiles - Operation not permitted

2015-10-30 Thread David Amorim Faria (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982277#comment-14982277
 ] 

David Amorim Faria edited comment on CLOUDSTACK-8812 at 10/30/15 10:12 AM:
---

Hi all,

The PR [https://github.com/apache/cloudstack/pull/1008] solves the issue in 4.6.
I'm going to create a new ticket for 4.6 and remove it from the Affects 
Version/s.

Regards


was (Author: davidamorimfaria):
Hi all,

Thi PR [https://github.com/apache/cloudstack/pull/1008] solves the issue in 4.6.
I'm going to create a new ticket for 4.6 and remove it from the Affects 
Version/s.

Regards

> CentOS 7 - systemd-tmpfiles - Operation not permitted
> -
>
> Key: CLOUDSTACK-8812
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8812
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Install and Setup
>Affects Versions: 4.5.2, 4.6.0
> Environment: KVM VM / CentOS Linux release 7.1.1503 (Core)
>Reporter: Sven Vogel
>Priority: Blocker
>
> Installation of the ShapeBlue upstream 4.5.2 repository. Setup of the 
> database works. When I start the service with systemctl start 
> cloudstack-management.service I get the following error.
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd[1]: Starting CloudStack 
> Management Server...
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/netreport) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/dev/net) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/dev/mapper) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/dev/vfio) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/dev/snd) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lock) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lock/subsys) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lock/lockdev) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/setrans) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lock/lvm) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/lvm) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/console) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/faillock) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/sepermit) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/run/ppp) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/var/lock/ppp) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/user) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: Failed to 
> create file /var/log/wtmp: Permission denied
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: Failed to 
> create file /var/log/btmp: Permission denied
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/ask-password) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/seats) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/sessions) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/users) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/machines) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/systemd/shutdown) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/run/log/journal) failed: Operation not permitted
> Sep 04 17:14:34 cloudstack01.oscloud.local systemd-tmpfiles[5519]: 
> chmod(/

[jira] [Commented] (CLOUDSTACK-9000) Logrotate cloudstack-agent error and out files

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982257#comment-14982257
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9000:


Github user bhaisaab commented on the pull request:

https://github.com/apache/cloudstack/pull/992#issuecomment-152476140
  
@remibergsma should we merge this on master now? (Jenkins failed due to 
some JVM issue)


> Logrotate cloudstack-agent error and out files
> --
>
> Key: CLOUDSTACK-9000
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9000
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.5.3, 4.6.0
>
>
> As defined in cloud-agent.rc ( -errfile $LOGDIR/cloudstack-agent.err -outfile 
> $LOGDIR/cloudstack-agent.out $CLASS), jsvc can fill up the disk very quickly in 
> case of errors. The fix would be to logrotate the out and err files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8977) cloudstack UI creates a session for users not yet logged in

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982246#comment-14982246
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8977:


Github user miguelaferreira commented on the pull request:

https://github.com/apache/cloudstack/pull/961#issuecomment-152474694
  
In tomcat I see the following errors:
```
Oct 30, 2015 9:39:13 AM org.apache.catalina.core.ApplicationDispatcher 
invoke
SEVERE: Servlet.service() for servlet jsp threw exception
java.lang.IllegalStateException: Cannot create a session after the response 
has been committed
at 
org.apache.catalina.connector.Request.doGetSession(Request.java:2934)
at 
org.apache.catalina.connector.Request.getSession(Request.java:2310)
at 
org.apache.catalina.connector.RequestFacade.getSession(RequestFacade.java:897)
at 
javax.servlet.http.HttpServletRequestWrapper.getSession(HttpServletRequestWrapper.java:229)
at 
org.apache.catalina.core.ApplicationHttpRequest.getSession(ApplicationHttpRequest.java:569)
at 
org.apache.catalina.core.ApplicationHttpRequest.getSession(ApplicationHttpRequest.java:514)
at 
org.apache.jasper.runtime.PageContextImpl._initialize(PageContextImpl.java:147)
at 
org.apache.jasper.runtime.PageContextImpl.initialize(PageContextImpl.java:126)
at 
org.apache.jasper.runtime.JspFactoryImpl.internalGetPageContext(JspFactoryImpl.java:112)
at 
org.apache.jasper.runtime.JspFactoryImpl.getPageContext(JspFactoryImpl.java:65)
at org.apache.jsp.dictionary_jsp._jspService(dictionary_jsp.java:66)
at 
org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at 
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:432)
at 
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:390)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:334)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at 
org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:748)
at 
org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:604)
at 
org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:543)
at 
org.apache.jasper.runtime.JspRuntimeLibrary.include(JspRuntimeLibrary.java:954)
at org.apache.jsp.index_jsp._jspService(index_jsp.java:2735)
at 
org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at 
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:432)
at 
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:390)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:334)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at 
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at 
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at 
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
at 
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
at 

[jira] [Commented] (CLOUDSTACK-8947) Load Balancer not working with Isolated Networks

2015-10-30 Thread Wei Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982176#comment-14982176
 ] 

Wei Zhou commented on CLOUDSTACK-8947:
--

Sorry, I think this issue is not related to the load balancer,
because it still remains even if I remove the load balancers.
I will file another ticket for it.

> Load Balancer not working with Isolated Networks
> 
>
> Key: CLOUDSTACK-8947
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8947
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.6.0
>Reporter: Wilder Rodrigues
>Assignee: Wilder Rodrigues
>Priority: Blocker
> Fix For: 4.6.0
>
>
> 1. acquire IP in an isolated network
> 2. go to ipaddress -> configuration -> firewall 
> 3. add firewall exception for port 22
> 4. then add LB rule for port 22 to a user VM
> 5. try sshing to the new acquired ip(in step 1) --- ssh fails



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8715) Add support for qemu-guest-agent to libvirt provider

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982170#comment-14982170
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8715:


Github user ustcweizhou commented on the pull request:

https://github.com/apache/cloudstack/pull/985#issuecomment-152462216
  
@wido by the way, I just remembered I implemented some code for 
qemu-guest-agent support, based on CloudStack 4.2.0 maybe.
It is not fully tested. I will share it with you if you need it (maybe 
as a pull request to your GitHub branch)


> Add support for qemu-guest-agent to libvirt provider
> 
>
> Key: CLOUDSTACK-8715
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8715
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Reporter: Sten Spans
>Assignee: Wido den Hollander
>  Labels: kvm, libvirt, qemu, systemvm
> Fix For: Future
>
>
> The qemu guest agent is a newer part of qemu/kvm/libvirt which exposes quite 
> a lot of useful functionality, which can only be provided by having an agent 
> on the VM. This includes things like freezing/thawing filesystems (for 
> backups), reading files on the guest, listing interfaces / ip addresses, etc.
> This feature has been requested by users, but is currently not implemented.
> http://users.cloudstack.apache.narkive.com/3TTmy3zj/enabling-qemu-guest-agent
> The first change needed is to add the following to the XML generated for KVM 
> virtual machines:
> <channel type='unix'>
>   <source mode='bind'/>
>   <target type='virtio' name='org.qemu.guest_agent.0'/>
> </channel>
> This provides the communication channel between libvirt and the agent on the 
> host. All in all a pretty simple change to LibvirtComputingResource.java / 
> LibvirtVMDef.java
> Secondly the qemu-guest-agent package needs to be added to the systemvm 
> template.
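As a standalone sketch of building that channel definition programmatically (Python's ElementTree is used here purely for illustration; CloudStack itself would generate this XML in LibvirtVMDef.java, and the socket path shown in the second call is a made-up example):

```python
import xml.etree.ElementTree as ET

def guest_agent_channel(path=None):
    """Build the qemu-guest-agent virtio channel element.

    With libvirt >= 1.0.6 the socket path may be omitted and libvirt
    picks one automatically; older versions need an explicit path.
    """
    channel = ET.Element("channel", type="unix")
    source = ET.SubElement(channel, "source", mode="bind")
    if path is not None:
        source.set("path", path)
    ET.SubElement(channel, "target", type="virtio",
                  name="org.qemu.guest_agent.0")
    return ET.tostring(channel, encoding="unicode")

# Without a path (libvirt >= 1.0.6), then with an explicit socket path.
print(guest_agent_channel())
print(guest_agent_channel("/var/lib/libvirt/qemu/guest.agent"))
```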



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8715) Add support for qemu-guest-agent to libvirt provider

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982168#comment-14982168
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8715:


Github user ustcweizhou commented on the pull request:

https://github.com/apache/cloudstack/pull/985#issuecomment-152461778
  
@wido yes, you got it. The issue happened on a host running Ubuntu 
12.04 (QEMU 1.2.1 and libvirt 0.9.13).
There is no issue on Ubuntu 14.04 (QEMU 2.0.0 and libvirt 1.2.2).


> Add support for qemu-guest-agent to libvirt provider
> 
>
> Key: CLOUDSTACK-8715
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8715
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Reporter: Sten Spans
>Assignee: Wido den Hollander
>  Labels: kvm, libvirt, qemu, systemvm
> Fix For: Future
>
>
> The qemu guest agent is a newer part of qemu/kvm/libvirt which exposes quite 
> a lot of useful functionality, which can only be provided by having an agent 
> on the VM. This includes things like freezing/thawing filesystems (for 
> backups), reading files on the guest, listing interfaces / ip addresses, etc.
> This feature has been requested by users, but is currently not implemented.
> http://users.cloudstack.apache.narkive.com/3TTmy3zj/enabling-qemu-guest-agent
> The first change needed is to add the following to the XML generated for KVM 
> virtual machines:
> <channel type='unix'>
>   <source mode='bind'/>
>   <target type='virtio' name='org.qemu.guest_agent.0'/>
> </channel>
> This provides the communication channel between libvirt and the agent on the 
> host. All in all a pretty simple change to LibvirtComputingResource.java / 
> LibvirtVMDef.java
> Secondly the qemu-guest-agent package needs to be added to the systemvm 
> template.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8947) Load Balancer not working with Isolated Networks

2015-10-30 Thread Wei Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982165#comment-14982165
 ] 

Wei Zhou commented on CLOUDSTACK-8947:
--

The VR cannot start if a load balancer is configured.

{code}

2015-10-30 09:30:29,872 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-3:null) Transforming 
com.cloud.agent.api.routing.IpAssocCommand to ConfigItems
2015-10-30 09:30:29,895 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-3:null) Transforming 
com.cloud.agent.api.routing.SetFirewallRulesCommand to ConfigItems
2015-10-30 09:30:29,897 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-3:null) Transforming 
com.cloud.agent.api.routing.SetStaticNatRulesCommand to ConfigItems
2015-10-30 09:30:29,899 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-3:null) Transforming 
com.cloud.agent.api.routing.SetFirewallRulesCommand to ConfigItems
2015-10-30 09:30:29,902 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-3:null) Transforming 
com.cloud.agent.api.routing.SetPortForwardingRulesCommand to ConfigItems
2015-10-30 09:30:29,904 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-3:null) Transforming 
com.cloud.agent.api.routing.LoadBalancerConfigCommand to ConfigItems
2015-10-30 09:30:29,905 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) global section: global
2015-10-30 09:30:29,905 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) global section: log 127.0.0.1:3914   
local0 warning
2015-10-30 09:30:29,905 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) global section: maxconn 4096
2015-10-30 09:30:29,905 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) global section: maxpipes 1024
2015-10-30 09:30:29,905 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) global section: chroot /var/lib/haproxy
2015-10-30 09:30:29,905 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) global section: user haproxy
2015-10-30 09:30:29,905 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) global section: group haproxy
2015-10-30 09:30:29,905 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) global section: daemon
2015-10-30 09:30:29,905 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) default section: defaults
2015-10-30 09:30:29,905 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) default section:log global
2015-10-30 09:30:29,906 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) default section: mode tcp
2015-10-30 09:30:29,906 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) default section:option  dontlognull
2015-10-30 09:30:29,906 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) default section:retries 3
2015-10-30 09:30:29,906 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) default section:option redispatch
2015-10-30 09:30:29,906 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) default section:option forwardfor
2015-10-30 09:30:29,906 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) default section:option forceclose
2015-10-30 09:30:29,906 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) default section: timeout connect 5000
2015-10-30 09:30:29,906 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) default section:timeout client 5
2015-10-30 09:30:29,906 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) default section:timeout server 5
2015-10-30 09:30:29,906 INFO  [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) Haproxy mode http enabled
2015-10-30 09:30:29,906 DEBUG [cloud.network.HAProxyConfigurator] 
(agentRequest-Handler-3:null) Haproxy stats rule:
listen stats_on_public 10.11.115.143:8081
mode http
option httpclose
stats enable
stats uri /admin?stats
stats realm   Haproxy\ Statistics
stats auth admin1:AdMiN123

2015-10-30 09:30:29,908 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-3:null) Transforming 
com.cloud.agent.api.routing.SetMonitorServiceCommand to ConfigItems
2015-10-30 09:30:29,909 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-3:null) Transforming 
com.cloud.agent.api.routing.DhcpEntryCommand to ConfigItems
2015-10-30 09:30:29,910 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentReq

[jira] [Commented] (CLOUDSTACK-8715) Add support for qemu-guest-agent to libvirt provider

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982164#comment-14982164
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8715:


Github user wido commented on the pull request:

https://github.com/apache/cloudstack/pull/985#issuecomment-152460936
  
@ustcweizhou Which version of libvirt are you using?

If you use libvirt 1.0.6 or newer, you can omit the path='...' 
attribute of the source element, and libvirt will manage things automatically 
on your behalf.




> Add support for qemu-guest-agent to libvirt provider
> 
>
> Key: CLOUDSTACK-8715
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8715
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Reporter: Sten Spans
>Assignee: Wido den Hollander
>  Labels: kvm, libvirt, qemu, systemvm
> Fix For: Future
>
>
> The qemu guest agent is a newer part of qemu/kvm/libvirt which exposes quite 
> a lot of useful functionality, which can only be provided by having an agent 
> on the VM. This includes things like freezing/thawing filesystems (for 
> backups), reading files on the guest, listing interfaces / ip addresses, etc.
> This feature has been requested by users, but is currently not implemented.
> http://users.cloudstack.apache.narkive.com/3TTmy3zj/enabling-qemu-guest-agent
> The first change needed is to add the following to the XML generated for KVM 
> virtual machines:
> <channel type='unix'>
>   <source mode='bind'/>
>   <target type='virtio' name='org.qemu.guest_agent.0'/>
> </channel>
> This provides the communication channel between libvirt and the agent on the 
> host. All in all a pretty simple change to LibvirtComputingResource.java / 
> LibvirtVMDef.java
> Secondly the qemu-guest-agent package needs to be added to the systemvm 
> template.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-9009) VPC Remote Access VPN DHCP IP from defined tier

2015-10-30 Thread Florian Engelmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Florian Engelmann closed CLOUDSTACK-9009.
-
Resolution: Invalid

> VPC Remote Access VPN DHCP IP from defined tier
> ---
>
> Key: CLOUDSTACK-9009
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9009
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: Future
>Reporter: Florian Engelmann
>
> Currently there is no IP assigned for any VPN Client using a VPC Remote 
> Access VPN. This is very frustrating as you have to assign an IP manually. It 
> would be great to allow the VPC to assign an IP from the range of one of the 
> predefined tier networks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-9009) VPC Remote Access VPN DHCP IP from defined tier

2015-10-30 Thread Florian Engelmann (JIRA)
Florian Engelmann created CLOUDSTACK-9009:
-

 Summary: VPC Remote Access VPN DHCP IP from defined tier
 Key: CLOUDSTACK-9009
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9009
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: Future
Reporter: Florian Engelmann


Currently there is no IP assigned for any VPN Client using a VPC Remote Access 
VPN. This is very frustrating as you have to assign an IP manually. It would be 
great to allow the VPC to assign an IP from the range of one of the predefined 
tier networks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8928) While adding VMs to LB rule, default NIC IP is always displayed rather than the IP corresponding to the NIC where LB is being created

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982100#comment-14982100
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8928:


Github user nitin-maharana commented on the pull request:

https://github.com/apache/cloudstack/pull/903#issuecomment-152456961
  
Hi @runseb @remibergsma I don't understand how to write a test for this. If 
you have any idea of how to write one, please help me out. Thanks.


> While adding VMs to LB rule, default NIC IP is always displayed rather than 
> the IP corresponding to the NIC where LB is being created
> -
>
> Key: CLOUDSTACK-8928
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8928
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nitin Kumar Maharana
>
> Issue:
> -
> While creating an LB rule, if the VM has multiple NICs, only the default 
> NIC IP is displayed. This causes issues in cases where we want to create an 
> LB using a non-default NIC of the VM: it fails with an error message. This 
> IP is never displayed in the UI, and the only way to create such an LB rule 
> is to use the API directly.
> Steps
> =
> 1. Create a VM with multiple NICs (VM belongs to multiple networks)
> 2. Navigate to the non-default Network of the VM -> IP Address -> 
> Configuration -> Load Balancing -> Create an LB rule -> Add -> Choose the VM 
> created
> Observe that the IP listed does not belong to that Network. It is always the 
> IP of the default NIC. By choosing this IP, the LB creation will fail since 
> the IP and network ids would not match.
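The selection the UI should perform can be sketched as follows (the dictionary field names loosely mirror a CloudStack NIC listing response, and the sample network IDs and addresses are made up for illustration):

```python
def ip_for_lb_network(nics, lb_network_id):
    """Return the IP of the VM NIC on the LB rule's network, not the
    default NIC's IP. Returns None if the VM has no NIC on that network."""
    for nic in nics:
        if nic["networkid"] == lb_network_id:
            return nic["ipaddress"]
    return None

nics = [
    {"networkid": "net-default", "ipaddress": "10.1.1.5", "isdefault": True},
    {"networkid": "net-lb", "ipaddress": "10.2.2.7", "isdefault": False},
]

# Picks the non-default NIC's IP because it matches the LB network.
print(ip_for_lb_network(nics, "net-lb"))
```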



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8958) add dedicated ips to domain

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982086#comment-14982086
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8958:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/1007#issuecomment-152453373
  
LGTM, based on a set of tests that I ran on this branch (which I rebased 
myself first):

```
nosetests --with-marvin --marvin-config=${marvinCfg} -s -a 
tags=advanced,required_hardware=true \
component/test_vpc_redundant.py \
component/test_routers_iptables_default_policy.py \
component/test_routers_network_ops.py \
component/test_vpc_router_nics.py \
smoke/test_loadbalance.py \
smoke/test_internal_lb.py \
smoke/test_ssvm.py \
smoke/test_network.py

```

Result:

```
Create a redundant VPC with two networks with two VMs in each network ... 
=== TestName: test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Status : 
SUCCESS ===
ok
Create a redundant VPC with two networks with two VMs in each network and 
check default routes ... === TestName: test_02_redundant_VPC_default_routes | 
Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policy on RouterVM ... === TestName: 
test_02_routervm_iptables_policies | Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policies on VPC router ... === 
TestName: test_01_single_VPC_iptables_policies | Status : SUCCESS ===
ok
Stop existing router, add a PF rule and check we can access the VM ... === 
TestName: test_isolate_network_FW_PF_default_routes | Status : SUCCESS ===
ok
Test redundant router internals ... === TestName: 
test_RVR_Network_FW_PF_SSH_default_routes | Status : SUCCESS ===
ok
Create a VPC with two networks with one VM in each network and test nics 
after destroy ... === TestName: test_01_VPC_nics_after_destroy | Status : 
SUCCESS ===
ok
Create a VPC with two networks with one VM in each network and test default 
routes ... === TestName: test_02_VPC_default_routes | Status : SUCCESS ===
ok
Test to create Load balancing rule with source NAT ... === TestName: 
test_01_create_lb_rule_src_nat | Status : SUCCESS ===
ok
Test to create Load balancing rule with non source NAT ... === TestName: 
test_02_create_lb_rule_non_nat | Status : SUCCESS ===
ok
Test for assign & removing load balancing rule ... === TestName: 
test_assign_and_removal_lb | Status : SUCCESS ===
ok
Test to verify access to loadbalancer haproxy admin stats page ... === 
TestName: test02_internallb_haproxy_stats_on_all_interfaces | Status : SUCCESS 
===
ok
Test create, assign, remove of an Internal LB with roundrobin http traffic 
to 3 vm's ... === TestName: test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 
| Status : SUCCESS ===
ok
Test SSVM Internals ... === TestName: test_03_ssvm_internals | Status : 
SUCCESS ===
ok
Test CPVM Internals ... === TestName: test_04_cpvm_internals | Status : 
SUCCESS ===
ok
Test stop SSVM ... === TestName: test_05_stop_ssvm | Status : SUCCESS ===
ok
Test stop CPVM ... === TestName: test_06_stop_cpvm | Status : SUCCESS ===
ok
Test reboot SSVM ... === TestName: test_07_reboot_ssvm | Status : SUCCESS 
===
ok
Test reboot CPVM ... === TestName: test_08_reboot_cpvm | Status : SUCCESS 
===
ok
Test destroy SSVM ... === TestName: test_09_destroy_ssvm | Status : SUCCESS 
===
ok
Test destroy CPVM ... === TestName: test_10_destroy_cpvm | Status : SUCCESS 
===
ok
Test for port forwarding on source NAT ... === TestName: 
test_01_port_fwd_on_src_nat | Status : SUCCESS ===
ok
Test for port forwarding on non source NAT ... === TestName: 
test_02_port_fwd_on_non_src_nat | Status : SUCCESS ===
ok
Test for reboot router ... === TestName: test_reboot_router | Status : 
SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_1_static_nat_rule | Status : 
SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_2_nat_rule | Status : SUCCESS 
===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_3_Load_Balancer_Rule | Status : 
SUCCESS ===
ok

--
Ran 27 tests in 11840.093s

OK
```


And:

```
nosetests --with-marvin --marvin-config=${marvinCfg} -s -a 
tags=advanced,required_hardware=false \
smoke/test_routers.py \
smoke/test_network_acl.py \
smoke/test_privategw_acl.py \
smoke/test_reset_vm_on_reboot.py \
smoke/test_vm_life_cycle.py
```

[jira] [Commented] (CLOUDSTACK-8847) ListServiceOfferings is returning incompatible tagged offerings when called with VM id

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982085#comment-14982085
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8847:


Github user nitin-maharana commented on the pull request:

https://github.com/apache/cloudstack/pull/823#issuecomment-152453158
  
I tested this with the following scenario.

There are three service offerings.

{
"listserviceofferingsresponse": {
"count": 3,
"serviceoffering": [
{
"id": "7482d1bb-9acb-4688-b448-d7bd7f70916a",
"name": "CO-z",
"displaytext": "CO-z",
"cpunumber": 1,
"cpuspeed": 64,
"memory": 64,
"created": "2014-05-08T17:02:54+0530",
"storagetype": "shared",
"offerha": false,
"limitcpuuse": false,
"isvolatile": false,
"tags": "z",
"issystem": false,
"defaultuse": false,
"iscustomized": false
},
{
"id": "d9ae1945-ff42-43d3-bddd-3ec54ea49bbf",
"name": "To_Error",
"displaytext": "To_Error",
"cpunumber": 4,
"cpuspeed": 4,
"memory": 256,
"created": "2014-05-26T12:14:25+0530",
"storagetype": "shared",
"offerha": false,
"limitcpuuse": false,
"isvolatile": false,
"tags": "anusha",
"issystem": false,
"defaultuse": false,
"iscustomized": false
},
{
"id": "a0ffbdf4-861c-4de2-a353-852a870281e5",
"name": "MultiTagOffering",
"displaytext": "MultiTagOffering",
"cpunumber": 1,
"cpuspeed": 64,
"memory": 64,
"created": "2014-05-28T16:10:55+0530",
"storagetype": "shared",
"offerha": false,
"limitcpuuse": false,
"isvolatile": false,
"tags": "anusha,z",
"issystem": false,
"defaultuse": false,
"iscustomized": false
}
]
}
}

Now, the VM's current service offering is "To_Error", which has the tag 
"anusha".
When I upgrade the service offering to "CO-z", I get the error below 
(screenshot attached).


![ccp_incompatible_compute_offering](https://cloud.githubusercontent.com/assets/12583725/10840826/537c927e-7f09-11e5-9583-3f58546c42bd.png)

But we can upgrade from "To_Error" to "MultiTagOffering", as 
"MultiTagOffering" contains the tag "anusha" plus extra tags.
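The tag-compatibility rule described above — an upgrade is allowed only when the new offering carries every tag of the current one — boils down to a subset check on the comma-separated tag strings. A minimal sketch (the helper name `can_upgrade` is illustrative, not CloudStack's actual method):

```python
def can_upgrade(current_tags: str, new_tags: str) -> bool:
    """Allow an upgrade only if the new offering contains every tag
    of the current offering (empty current tags always pass)."""
    current = {t.strip() for t in (current_tags or "").split(",") if t.strip()}
    new = {t.strip() for t in (new_tags or "").split(",") if t.strip()}
    return current.issubset(new)

# "To_Error" (tags "anusha") -> "MultiTagOffering" (tags "anusha,z"): allowed
print(can_upgrade("anusha", "anusha,z"))  # True
# "To_Error" (tags "anusha") -> "CO-z" (tags "z"): rejected
print(can_upgrade("anusha", "z"))         # False
```

This mirrors the scenario above: "MultiTagOffering" is a superset of "To_Error"'s tags, while "CO-z" is not.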


> ListServiceOfferings is returning incompatible tagged offerings when called 
> with VM id
> --
>
> Key: CLOUDSTACK-8847
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8847
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nitin Kumar Maharana
>
> When calling listServiceOfferings with a VM id as parameter, it returns 
> incompatible tagged offerings. It should list only compatible tagged 
> offerings: a new service offering is compatible only if it contains all the 
> tags of the existing service offering, and only such offerings should appear 
> in the result.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8902) Restart Network fails in EIP/ELB zone

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982081#comment-14982081
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8902:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/898#issuecomment-152452596
  
LGTM, based on a set of tests that I ran on this branch (which I rebased 
myself first):

```
nosetests --with-marvin --marvin-config=${marvinCfg} -s -a 
tags=advanced,required_hardware=true \
component/test_vpc_redundant.py \
component/test_routers_iptables_default_policy.py \
component/test_routers_network_ops.py \
component/test_vpc_router_nics.py \
smoke/test_loadbalance.py \
smoke/test_internal_lb.py \
smoke/test_ssvm.py \
smoke/test_network.py

```

Result:

```
Create a redundant VPC with two networks with two VMs in each network ... 
=== TestName: test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Status : 
SUCCESS ===
ok
Create a redundant VPC with two networks with two VMs in each network and 
check default routes ... === TestName: test_02_redundant_VPC_default_routes | 
Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policy on RouterVM ... === TestName: 
test_02_routervm_iptables_policies | Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policies on VPC router ... === 
TestName: test_01_single_VPC_iptables_policies | Status : SUCCESS ===
ok
Stop existing router, add a PF rule and check we can access the VM ... === 
TestName: test_isolate_network_FW_PF_default_routes | Status : SUCCESS ===
ok
Test redundant router internals ... === TestName: 
test_RVR_Network_FW_PF_SSH_default_routes | Status : SUCCESS ===
ok
Create a VPC with two networks with one VM in each network and test nics 
after destroy ... === TestName: test_01_VPC_nics_after_destroy | Status : 
SUCCESS ===
ok
Create a VPC with two networks with one VM in each network and test default 
routes ... === TestName: test_02_VPC_default_routes | Status : SUCCESS ===
ok
Test to create Load balancing rule with source NAT ... === TestName: 
test_01_create_lb_rule_src_nat | Status : SUCCESS ===
ok
Test to create Load balancing rule with non source NAT ... === TestName: 
test_02_create_lb_rule_non_nat | Status : SUCCESS ===
ok
Test for assign & removing load balancing rule ... === TestName: 
test_assign_and_removal_lb | Status : SUCCESS ===
ok
Test to verify access to loadbalancer haproxy admin stats page ... === 
TestName: test02_internallb_haproxy_stats_on_all_interfaces | Status : SUCCESS 
===
ok
Test create, assign, remove of an Internal LB with roundrobin http traffic 
to 3 vm's ... === TestName: test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 
| Status : SUCCESS ===
ok
Test SSVM Internals ... === TestName: test_03_ssvm_internals | Status : 
SUCCESS ===
ok
Test CPVM Internals ... === TestName: test_04_cpvm_internals | Status : 
SUCCESS ===
ok
Test stop SSVM ... === TestName: test_05_stop_ssvm | Status : SUCCESS ===
ok
Test stop CPVM ... === TestName: test_06_stop_cpvm | Status : SUCCESS ===
ok
Test reboot SSVM ... === TestName: test_07_reboot_ssvm | Status : SUCCESS 
===
ok
Test reboot CPVM ... === TestName: test_08_reboot_cpvm | Status : SUCCESS 
===
ok
Test destroy SSVM ... === TestName: test_09_destroy_ssvm | Status : SUCCESS 
===
ok
Test destroy CPVM ... === TestName: test_10_destroy_cpvm | Status : SUCCESS 
===
ok
Test for port forwarding on source NAT ... === TestName: 
test_01_port_fwd_on_src_nat | Status : SUCCESS ===
ok
Test for port forwarding on non source NAT ... === TestName: 
test_02_port_fwd_on_non_src_nat | Status : SUCCESS ===
ok
Test for reboot router ... === TestName: test_reboot_router | Status : 
SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_1_static_nat_rule | Status : 
SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_2_nat_rule | Status : SUCCESS 
===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_3_Load_Balancer_Rule | Status : 
SUCCESS ===
ok

--
Ran 27 tests in 11871.298s

OK

```


And:

```
nosetests --with-marvin --marvin-config=${marvinCfg} -s -a 
tags=advanced,required_hardware=false \
smoke/test_routers.py \
smoke/test_network_acl.py \
smoke/test_privategw_acl.py \
smoke/test_reset_vm_on_reboot.py \
smoke/test_vm_life_cycl
```

[jira] [Commented] (CLOUDSTACK-8866) restart.retry.interval is being used instead of migrate.retry.interval during host maintenance

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982075#comment-14982075
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8866:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/834#issuecomment-152452258
  
LGTM, based on a set of tests that I ran on this branch (which I rebased 
myself first):

```
nosetests --with-marvin --marvin-config=${marvinCfg} -s -a 
tags=advanced,required_hardware=true \
component/test_vpc_redundant.py \
component/test_routers_iptables_default_policy.py \
component/test_routers_network_ops.py \
component/test_vpc_router_nics.py \
smoke/test_loadbalance.py \
smoke/test_internal_lb.py \
smoke/test_ssvm.py \
smoke/test_network.py

```

Result:

```
Create a redundant VPC with two networks with two VMs in each network ... 
=== TestName: test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Status : 
SUCCESS ===
ok
Create a redundant VPC with two networks with two VMs in each network and 
check default routes ... === TestName: test_02_redundant_VPC_default_routes | 
Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policy on RouterVM ... === TestName: 
test_02_routervm_iptables_policies | Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policies on VPC router ... === 
TestName: test_01_single_VPC_iptables_policies | Status : SUCCESS ===
ok
Stop existing router, add a PF rule and check we can access the VM ... === 
TestName: test_isolate_network_FW_PF_default_routes | Status : SUCCESS ===
ok
Test redundant router internals ... === TestName: 
test_RVR_Network_FW_PF_SSH_default_routes | Status : SUCCESS ===
ok
Create a VPC with two networks with one VM in each network and test nics 
after destroy ... === TestName: test_01_VPC_nics_after_destroy | Status : 
SUCCESS ===
ok
Create a VPC with two networks with one VM in each network and test default 
routes ... === TestName: test_02_VPC_default_routes | Status : SUCCESS ===
ok
Test to create Load balancing rule with source NAT ... === TestName: 
test_01_create_lb_rule_src_nat | Status : SUCCESS ===
ok
Test to create Load balancing rule with non source NAT ... === TestName: 
test_02_create_lb_rule_non_nat | Status : SUCCESS ===
ok
Test for assign & removing load balancing rule ... === TestName: 
test_assign_and_removal_lb | Status : SUCCESS ===
ok
Test to verify access to loadbalancer haproxy admin stats page ... === 
TestName: test02_internallb_haproxy_stats_on_all_interfaces | Status : SUCCESS 
===
ok
Test create, assign, remove of an Internal LB with roundrobin http traffic 
to 3 vm's ... === TestName: test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 
| Status : SUCCESS ===
ok
Test SSVM Internals ... === TestName: test_03_ssvm_internals | Status : 
SUCCESS ===
ok
Test CPVM Internals ... === TestName: test_04_cpvm_internals | Status : 
SUCCESS ===
ok
Test stop SSVM ... === TestName: test_05_stop_ssvm | Status : SUCCESS ===
ok
Test stop CPVM ... === TestName: test_06_stop_cpvm | Status : SUCCESS ===
ok
Test reboot SSVM ... === TestName: test_07_reboot_ssvm | Status : SUCCESS 
===
ok
Test reboot CPVM ... === TestName: test_08_reboot_cpvm | Status : SUCCESS 
===
ok
Test destroy SSVM ... === TestName: test_09_destroy_ssvm | Status : SUCCESS 
===
ok
Test destroy CPVM ... === TestName: test_10_destroy_cpvm | Status : SUCCESS 
===
ok
Test for port forwarding on source NAT ... === TestName: 
test_01_port_fwd_on_src_nat | Status : SUCCESS ===
ok
Test for port forwarding on non source NAT ... === TestName: 
test_02_port_fwd_on_non_src_nat | Status : SUCCESS ===
ok
Test for reboot router ... === TestName: test_reboot_router | Status : 
SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_1_static_nat_rule | Status : 
SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_2_nat_rule | Status : SUCCESS 
===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestN
```

[jira] [Commented] (CLOUDSTACK-8793) Project Site-2-Site VPN Connection Fails to Register Correctly

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982066#comment-14982066
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8793:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/879


> Project Site-2-Site VPN Connection Fails to Register Correctly
> --
>
> Key: CLOUDSTACK-8793
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8793
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Projects
>Affects Versions: 4.5.2
> Environment: Clean install of ACS 4.5.2 on CentOS 6.6
>Reporter: Geoff Higgibottom
>Assignee: Patrick D.
>  Labels: project, vpc, vpn
>
> When trying to create a new Site-2-Site VPN Connection for a Project using 
> the UI the following error message is presented.
> "VPN connection can only be esitablished between same account's VPN gateway 
> and customer gateway!"
> Apart from the spelling mistake in the error message, the main issue is that 
> the VPN Connection fails to create as the VPN Customer Gateway is linked to 
> the logged-in user account, and not the Project.
> The VPN Gateway is correctly linked to the Project, as this was fixed in 
> CLOUDSTACK-5409.
> Manually updating the ‘domain_id’ and ‘account_id’ values in the 
> ‘s2s_vpn_connection’ table in the DB will result in the successful creation 
> of the VPN Connection, but this connection will not display in the UI or when 
> querying via the API.
> The same error exists when using only the API so it is not a UI issue.
> This prevents the use of Site-2-Site VPNs for VPCs belonging to Projects.
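The root cause described above is an ownership mismatch: the VPN Customer Gateway is created under the logged-in account rather than the Project's account, so it can never match the project-owned VPN Gateway. The fix direction (per the merged PR, passing the project id through) amounts to resolving the owning account from the project when one is in scope. A minimal sketch of that resolution logic — the `Account` type and `resolve_owner` helper are illustrative, not CloudStack's actual API:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Account:
    account_id: int
    domain_id: int


def resolve_owner(caller: Account, project_account: Optional[Account]) -> Account:
    """Return the account that should own the VPN customer gateway.

    When a project is in scope, the gateway must be owned by the project's
    account so it matches the project-owned VPN gateway; otherwise the
    caller's own account is used.
    """
    return project_account if project_account is not None else caller


caller = Account(account_id=42, domain_id=1)
project = Account(account_id=7, domain_id=1)
print(resolve_owner(caller, project).account_id)  # 7 (project-owned)
print(resolve_owner(caller, None).account_id)     # 42 (caller-owned)
```

With both the gateway and the connection resolved through the project, the "same account" check quoted in the error message passes without manual edits to the `s2s_vpn_connection` table.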



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8793) Project Site-2-Site VPN Connection Fails to Register Correctly

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982062#comment-14982062
 ] 

ASF subversion and git services commented on CLOUDSTACK-8793:
-

Commit 930ef8dc7b22a28b74970826fbf53d16f1172c0a in cloudstack's branch 
refs/heads/master from [~remibergsma]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=930ef8d ]

Merge pull request #879 from pdube/CLOUDSTACK-8793

CLOUDSTACK-8793 Enable s2s VPN connection for projects

* pr/879:
  CLOUDSTACK-8793 Added project id to create vpn customer gateway, and to the 
impl of list vpn connections and list vpn customer gateways

Signed-off-by: Remi Bergsma 


> Project Site-2-Site VPN Connection Fails to Register Correctly
> --
>
> Key: CLOUDSTACK-8793
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8793
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Projects
>Affects Versions: 4.5.2
> Environment: Clean install of ACS 4.5.2 on CentOS 6.6
>Reporter: Geoff Higgibottom
>Assignee: Patrick D.
>  Labels: project, vpc, vpn
>
> When trying to create a new Site-2-Site VPN Connection for a Project using 
> the UI the following error message is presented.
> "VPN connection can only be esitablished between same account's VPN gateway 
> and customer gateway!"
> Apart from the spelling mistake in the error message, the main issue is that 
> the VPN Connection fails to create as the VPN Customer Gateway is linked to 
> the logged-in user account, and not the Project.
> The VPN Gateway is correctly linked to the Project, as this was fixed in 
> CLOUDSTACK-5409.
> Manually updating the ‘domain_id’ and ‘account_id’ values in the 
> ‘s2s_vpn_connection’ table in the DB will result in the successful creation 
> of the VPN Connection, but this connection will not display in the UI or when 
> querying via the API.
> The same error exists when using only the API so it is not a UI issue.
> This prevents the use of Site-2-Site VPNs for VPCs belonging to Projects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)






[jira] [Commented] (CLOUDSTACK-8793) Project Site-2-Site VPN Connection Fails to Register Correctly

2015-10-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982061#comment-14982061
 ] 

ASF subversion and git services commented on CLOUDSTACK-8793:
-

Commit 110f66ff13a81671dadd9b2c527d232ceb2d5411 in cloudstack's branch 
refs/heads/master from Patrick Dube
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=110f66f ]

CLOUDSTACK-8793 Added project id to create vpn customer gateway, and to the 
impl of list vpn connections and list vpn customer gateways


> Project Site-2-Site VPN Connection Fails to Register Correctly
> --
>
> Key: CLOUDSTACK-8793
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8793
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Projects
>Affects Versions: 4.5.2
> Environment: Clean install of ACS 4.5.2 on CentOS 6.6
>Reporter: Geoff Higgibottom
>Assignee: Patrick D.
>  Labels: project, vpc, vpn
>
> When trying to create a new Site-2-Site VPN Connection for a Project using 
> the UI the following error message is presented.
> "VPN connection can only be esitablished between same account's VPN gateway 
> and customer gateway!"
> Apart from the spelling mistake in the error message, the main issue is that 
> the VPN Connection fails to create as the VPN Customer Gateway is linked to 
> the logged-in user account, and not the Project.
> The VPN Gateway is correctly linked to the Project, as this was fixed in 
> CLOUDSTACK-5409.
> Manually updating the ‘domain_id’ and ‘account_id’ values in the 
> ‘s2s_vpn_connection’ table in the DB will result in the successful creation 
> of the VPN Connection, but this connection will not display in the UI or when 
> querying via the API.
> The same error exists when using only the API so it is not a UI issue.
> This prevents the use of Site-2-Site VPNs for VPCs belonging to Projects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8793) Project Site-2-Site VPN Connection Fails to Register Correctly

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982057#comment-14982057
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8793:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/879#issuecomment-152450103
  
@pdube I trust you, but the commit hash changed so I just ran them again.

LGTM, based on a set of tests that I ran on this branch (which I rebased 
myself first):

```
nosetests --with-marvin --marvin-config=${marvinCfg} -s -a 
tags=advanced,required_hardware=true \
component/test_vpc_redundant.py \
component/test_routers_iptables_default_policy.py \
component/test_routers_network_ops.py \
component/test_vpc_router_nics.py \
smoke/test_loadbalance.py \
smoke/test_internal_lb.py \
smoke/test_ssvm.py \
smoke/test_network.py

```

Result:

```
Create a redundant VPC with two networks with two VMs in each network ... 
=== TestName: test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Status : 
SUCCESS ===
ok
Create a redundant VPC with two networks with two VMs in each network and 
check default routes ... === TestName: test_02_redundant_VPC_default_routes | 
Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policy on RouterVM ... === TestName: 
test_02_routervm_iptables_policies | Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policies on VPC router ... === 
TestName: test_01_single_VPC_iptables_policies | Status : SUCCESS ===
ok
Stop existing router, add a PF rule and check we can access the VM ... === 
TestName: test_isolate_network_FW_PF_default_routes | Status : SUCCESS ===
ok
Test redundant router internals ... === TestName: 
test_RVR_Network_FW_PF_SSH_default_routes | Status : SUCCESS ===
ok
Create a VPC with two networks with one VM in each network and test nics 
after destroy ... === TestName: test_01_VPC_nics_after_destroy | Status : 
SUCCESS ===
ok
Create a VPC with two networks with one VM in each network and test default 
routes ... === TestName: test_02_VPC_default_routes | Status : SUCCESS ===
ok
Test to create Load balancing rule with source NAT ... === TestName: 
test_01_create_lb_rule_src_nat | Status : SUCCESS ===
ok
Test to create Load balancing rule with non source NAT ... === TestName: 
test_02_create_lb_rule_non_nat | Status : SUCCESS ===
ok
Test for assign & removing load balancing rule ... === TestName: 
test_assign_and_removal_lb | Status : SUCCESS ===
ok
Test to verify access to loadbalancer haproxy admin stats page ... === 
TestName: test02_internallb_haproxy_stats_on_all_interfaces | Status : SUCCESS 
===
ok
Test create, assign, remove of an Internal LB with roundrobin http traffic 
to 3 vm's ... === TestName: test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 
| Status : SUCCESS ===
ok
Test SSVM Internals ... === TestName: test_03_ssvm_internals | Status : 
SUCCESS ===
ok
Test CPVM Internals ... === TestName: test_04_cpvm_internals | Status : 
SUCCESS ===
ok
Test stop SSVM ... === TestName: test_05_stop_ssvm | Status : SUCCESS ===
ok
Test stop CPVM ... === TestName: test_06_stop_cpvm | Status : SUCCESS ===
ok
Test reboot SSVM ... === TestName: test_07_reboot_ssvm | Status : SUCCESS 
===
ok
Test reboot CPVM ... === TestName: test_08_reboot_cpvm | Status : SUCCESS 
===
ok
Test destroy SSVM ... === TestName: test_09_destroy_ssvm | Status : SUCCESS 
===
ok
Test destroy CPVM ... === TestName: test_10_destroy_cpvm | Status : SUCCESS 
===
ok
Test for port forwarding on source NAT ... === TestName: 
test_01_port_fwd_on_src_nat | Status : SUCCESS ===
ok
Test for port forwarding on non source NAT ... === TestName: 
test_02_port_fwd_on_non_src_nat | Status : SUCCESS ===
ok
Test for reboot router ... === TestName: test_reboot_router | Status : 
SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_1_static_nat_rule | Status : 
SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_2_nat_rule | Status : SUCCESS 
===
ok
Test for Router rules for network rules on acquired public IP ... === 
TestName: test_network_rules_acquired_public_ip_3_Load_Balancer_Rule | Status : 
SUCCESS ===
ok

--
Ran 27 tests in 12312.625s

OK
```


And:

```
nosetests --with-marvin --marvin-config=${marvinCfg} -s -a 
tags=advanced,required_hardware=false \
smoke/test_routers.py \
smoke/test_network_acl.py \
smoke/test_private
```

[jira] [Commented] (CLOUDSTACK-8940) Wrong value is inserted into nics table netmask field when creating a VM

2015-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982049#comment-14982049
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8940:


Github user remibergsma commented on the pull request:

https://github.com/apache/cloudstack/pull/916#issuecomment-152449289
  
LGTM, based on a set of tests that I ran on this branch (which I rebased 
myself first):

```
nosetests --with-marvin --marvin-config=${marvinCfg} -s -a 
tags=advanced,required_hardware=true \
component/test_vpc_redundant.py \
component/test_routers_iptables_default_policy.py \
component/test_routers_network_ops.py \
component/test_vpc_router_nics.py \
smoke/test_loadbalance.py \
smoke/test_internal_lb.py \
smoke/test_ssvm.py \
smoke/test_network.py

```

Result:

```
Create a redundant VPC with two networks with two VMs in each network ... === TestName: test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Status : SUCCESS ===
ok
Create a redundant VPC with two networks with two VMs in each network and check default routes ... === TestName: test_02_redundant_VPC_default_routes | Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policy on RouterVM ... === TestName: test_02_routervm_iptables_policies | Status : SUCCESS ===
ok
Test iptables default INPUT/FORWARD policies on VPC router ... === TestName: test_01_single_VPC_iptables_policies | Status : SUCCESS ===
ok
Stop existing router, add a PF rule and check we can access the VM ... === TestName: test_isolate_network_FW_PF_default_routes | Status : SUCCESS ===
ok
Test redundant router internals ... === TestName: test_RVR_Network_FW_PF_SSH_default_routes | Status : SUCCESS ===
ok
Create a VPC with two networks with one VM in each network and test nics after destroy ... === TestName: test_01_VPC_nics_after_destroy | Status : SUCCESS ===
ok
Create a VPC with two networks with one VM in each network and test default routes ... === TestName: test_02_VPC_default_routes | Status : SUCCESS ===
ok
Test to create Load balancing rule with source NAT ... === TestName: test_01_create_lb_rule_src_nat | Status : SUCCESS ===
ok
Test to create Load balancing rule with non source NAT ... === TestName: test_02_create_lb_rule_non_nat | Status : SUCCESS ===
ok
Test for assign & removing load balancing rule ... === TestName: test_assign_and_removal_lb | Status : SUCCESS ===
ok
Test to verify access to loadbalancer haproxy admin stats page ... === TestName: test02_internallb_haproxy_stats_on_all_interfaces | Status : SUCCESS ===
ok
Test create, assign, remove of an Internal LB with roundrobin http traffic to 3 vm's ... === TestName: test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 | Status : SUCCESS ===
ok
Test SSVM Internals ... === TestName: test_03_ssvm_internals | Status : SUCCESS ===
ok
Test CPVM Internals ... === TestName: test_04_cpvm_internals | Status : SUCCESS ===
ok
Test stop SSVM ... === TestName: test_05_stop_ssvm | Status : SUCCESS ===
ok
Test stop CPVM ... === TestName: test_06_stop_cpvm | Status : SUCCESS ===
ok
Test reboot SSVM ... === TestName: test_07_reboot_ssvm | Status : SUCCESS ===
ok
Test reboot CPVM ... === TestName: test_08_reboot_cpvm | Status : SUCCESS ===
ok
Test destroy SSVM ... === TestName: test_09_destroy_ssvm | Status : SUCCESS ===
ok
Test destroy CPVM ... === TestName: test_10_destroy_cpvm | Status : SUCCESS ===
ok
Test for port forwarding on source NAT ... === TestName: test_01_port_fwd_on_src_nat | Status : SUCCESS ===
ok
Test for port forwarding on non source NAT ... === TestName: test_02_port_fwd_on_non_src_nat | Status : SUCCESS ===
ok
Test for reboot router ... === TestName: test_reboot_router | Status : SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === TestName: test_network_rules_acquired_public_ip_1_static_nat_rule | Status : SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === TestName: test_network_rules_acquired_public_ip_2_nat_rule | Status : SUCCESS ===
ok
Test for Router rules for network rules on acquired public IP ... === TestName: test_network_rules_acquired_public_ip_3_Load_Balancer_Rule | Status : SUCCESS ===
ok

--
Ran 27 tests in 11938.702s

OK
```


And:

```
nosetests --with-marvin --marvin-config=${marvinCfg} -s -a \
tags=advanced,required_hardware=false \
smoke/test_routers.py \
smoke/test_network_acl.py \
smoke/test_privategw_acl.py \
smoke/test_reset_vm_on_reboot.py \
smoke/test_vm_life_cycl

[jira] [Commented] (CLOUDSTACK-9008) VM Snapshots no longer work with managed storage

2015-10-30 Thread Wei Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982029#comment-14982029
 ] 

Wei Zhou commented on CLOUDSTACK-9008:
--

Mike,
could you share more details? It might be related to the Guru change.


> VM Snapshots no longer work with managed storage
> 
>
> Key: CLOUDSTACK-9008
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9008
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
> Environment: XenServer 6.5
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
>Priority: Blocker
> Fix For: 4.6.0
>
>
> When using managed storage for the root disk of a VM, you cannot revert a VM 
> to a VM snapshot without encountering a RuntimeException that destroys the 
> state of your disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)