[jira] [Commented] (CLOUDSTACK-8234) SS VM agent fails to start due to Java error

2015-02-16 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14322616#comment-14322616
 ] 

Rohit Yadav commented on CLOUDSTACK-8234:
-

Hey [~nuxro], cool, keep me posted.

 SS VM agent fails to start due to Java error
 

 Key: CLOUDSTACK-8234
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8234
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: SystemVM
Affects Versions: 4.5.0
 Environment: CentOS 6, KVM
Reporter: Nux
Priority: Critical
  Labels: secondary_storage, ssvm

 After an upgrade from 4.4.2 everything appears to go smoothly except the SSVM 
 agent which fails to start:
 From the VM's /var/log/cloud.log:
 ERROR [cloud.agent.AgentShell] (main:null) Unable to start agent: Resource 
 class not found: com.cloud.storage.resource.PremiumSecondaryStorageResource 
 due to: java.lang.ClassNotFoundException: 
 com.cloud.storage.resource.PremiumSecondaryStorageResource
 More here http://fpaste.org/183058/34269811/raw/
 Java is installed:
 root@s-386-VM:~# dpkg -l| grep jre
 ii  openjdk-7-jre-headless:amd64   7u75-2.5.4-1~deb7u1  amd64 
OpenJDK Java runtime, using Hotspot JIT (headless)
 The HV and management servers are also on openjdk 7 (1.7.0) from CentOS 6 
 stock.
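
One way to confirm the missing-class theory on the SSVM itself is to resolve the class by name the way the agent ultimately does. The sketch below is only an illustration (it is not CloudStack's AgentShell code); run it with the agent jars from the systemvm on the classpath:

// Illustrative sketch only: resolve the configured resource class by name,
// as the agent effectively does, to confirm whether it is present on the
// classpath of the jars shipped in the systemvm template.
public class ResourceClassCheck {
    public static void main(String[] args) {
        String name = args.length > 0 ? args[0]
                : "com.cloud.storage.resource.PremiumSecondaryStorageResource";
        try {
            Class<?> clazz = Class.forName(name);
            System.out.println("Found " + clazz.getName() + " via "
                    + clazz.getProtectionDomain().getCodeSource());
        } catch (ClassNotFoundException e) {
            System.out.println(name + " is not on this classpath;"
                    + " the agent jar on the SSVM is likely stale.");
        }
    }
}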



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8234) SS VM agent fails to start due to Java error

2015-02-15 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14322398#comment-14322398
 ] 

Rohit Yadav commented on CLOUDSTACK-8234:
-

Is this still an issue?

 SS VM agent fails to start due to Java error
 

 Key: CLOUDSTACK-8234
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8234
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: SystemVM
Affects Versions: 4.5.0
 Environment: CentOS 6, KVM
Reporter: Nux
Priority: Critical
  Labels: secondary_storage, ssvm

 After an upgrade from 4.4.2 everything appears to go smoothly except the SSVM 
 agent which fails to start:
 From the VM's /var/log/cloud.log:
 ERROR [cloud.agent.AgentShell] (main:null) Unable to start agent: Resource 
 class not found: com.cloud.storage.resource.PremiumSecondaryStorageResource 
 due to: java.lang.ClassNotFoundException: 
 com.cloud.storage.resource.PremiumSecondaryStorageResource
 More here http://fpaste.org/183058/34269811/raw/
 Java is installed:
 root@s-386-VM:~# dpkg -l| grep jre
 ii  openjdk-7-jre-headless:amd64   7u75-2.5.4-1~deb7u1  amd64 
OpenJDK Java runtime, using Hotspot JIT (headless)
 The HV and management servers are also on openjdk 7 (1.7.0) from CentOS 6 
 stock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-8226) Upgrade to 4.5.0 from 4.3.2 fails - systemvms don't start on KVM

2015-02-06 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav closed CLOUDSTACK-8226.
---
Resolution: Fixed
  Assignee: Rohit Yadav

Thanks for your comment [~weizhou], I found an issue with the systemvm template 
I was using and fixed it.

I've uploaded new systemvms here for public consumption:
http://packages.shapeblue.com/systemvmtemplate/4.5/

 Upgrade to 4.5.0 from 4.3.2 fails - systemvms don't start on KVM
 

 Key: CLOUDSTACK-8226
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8226
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0


 Upgrading from 4.3.2 to 4.5.0 causes systemvms to not start. Related issues 
 from past:
 https://issues.apache.org/jira/browse/CLOUDSTACK-7179
 https://issues.apache.org/jira/browse/CLOUDSTACK-4826
 I followed similar steps to upgrade systemvm template but for 4.5: 
 http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.4.2/upgrade/upgrade-4.3.html#update-system-vm-templates
 cc [~kishan]
 Following seen in agent log:
 2015-02-06 21:24:09,877 WARN  [kvm.resource.LibvirtComputingResource] 
 (agentRequest-Handler-4:null) Timed out: 
 /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl -n 
 s-10-VM -p 
 %template=domP%type=secstorage%host=192.168.1.11%port=8250%name=s-10-VM%zone=1%pod=1%guid=s-10-VM%workers=5%resource=com.cloud.storage.resource.PremiumSecondaryStorageResource%instance=SecStorage%sslcopy=false%role=templateProcessor%mtu=1500%eth2ip=192.168.11.153%eth2mask=255.255.0.0%gateway=192.168.1.1%eth0ip=169.254.1.128%eth0mask=255.255.0.0%eth1ip=192.168.10.231%eth1mask=255.255.0.0%mgmtcidr=192.168.0.0/16%localgw=192.168.1.1%private.network.device=eth1%eth3ip=192.168.10.240%eth3mask=255.255.0.0%storageip=192.168.10.240%storagenetmask=255.255.0.0%storagegateway=192.168.1.1%internaldns1=192.168.1.1%dns1=8.8.8.8
  .  Output is: 
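
For context, the -p argument shown above is a single '%'-separated string of key=value pairs that patchviasocket.pl delivers to the system VM at boot. A minimal sketch of how such a payload decomposes (illustration only, not CloudStack code; the sample payload is shortened):

import java.util.LinkedHashMap;
import java.util.Map;

// Illustration only: split the '%'-separated key=value payload seen in the
// log above into individual settings.
public class PatchPayloadParser {
    static Map<String, String> parse(String payload) {
        Map<String, String> kv = new LinkedHashMap<>();
        for (String token : payload.split("%")) {
            if (token.isEmpty()) continue;            // payload starts with '%'
            int eq = token.indexOf('=');
            if (eq > 0) kv.put(token.substring(0, eq), token.substring(eq + 1));
        }
        return kv;
    }

    public static void main(String[] args) {
        String sample = "%template=domP%type=secstorage%host=192.168.1.11%port=8250%name=s-10-VM";
        parse(sample).forEach((k, v) -> System.out.println(k + " = " + v));
    }
}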



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-8220) Fix CitrixResourceBase to support XenServer 6.5

2015-02-06 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav closed CLOUDSTACK-8220.
---
Resolution: Fixed

 Fix CitrixResourceBase to support XenServer 6.5
 ---

 Key: CLOUDSTACK-8220
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8220
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.5.0, 4.6.0


 While XenServer 6.5 seems to work with the present code, add a separate class 
 and patching path, distinct from 6.2, to avoid confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8224) CloudStack 4.5 showing lock related exceptions (seems harmless)

2015-02-06 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8224:
---

 Summary: CloudStack 4.5 showing lock related exceptions (seems 
harmless)
 Key: CLOUDSTACK-8224
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8224
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Critical
 Fix For: 4.5.0, 4.6.0


INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-2:ctx-3151b87e 
job-2/job-11 ctx-7dcbb1dd) releasing lock for VMTemplateStoragePool 1
WARN  [c.c.u.d.Merovingian2] (Work-Job-Executor-2:ctx-3151b87e job-2/job-11 
ctx-7dcbb1dd) Was unable to find lock for the key template_spool_ref1 and 
thread id 1121063028
com.cloud.utils.exception.CloudRuntimeException: Was unable to find lock for 
the key template_spool_ref1 and thread id 1121063028
at com.cloud.utils.db.Merovingian2.release(Merovingian2.java:274)
at 
com.cloud.utils.db.TransactionLegacy.release(TransactionLegacy.java:397)
at 
com.cloud.utils.db.GenericDaoBase.releaseFromLockTable(GenericDaoBase.java:1045)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at 
com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at 
org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at com.sun.proxy.$Proxy75.releaseFromLockTable(Unknown Source)
at 
org.apache.cloudstack.storage.volume.VolumeServiceImpl.createBaseImageAsync(VolumeServiceImpl.java:513)
at 
org.apache.cloudstack.storage.volume.VolumeServiceImpl.createVolumeFromTemplateAsync(VolumeServiceImpl.java:747)
at 
org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.recreateVolume(VolumeOrchestrator.java:1252)
at 
org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.prepare(VolumeOrchestrator.java:1322)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:982)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4471)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at 
com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4627)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:536)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:493)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 

[jira] [Created] (CLOUDSTACK-8226) Upgrade to 4.5.0

2015-02-06 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8226:
---

 Summary: Upgrade to 4.5.0 
 Key: CLOUDSTACK-8226
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8226
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Rohit Yadav
Priority: Blocker


Upgrading from 4.3.2 to 4.5.0 causes systemvms to not start. Related issues 
from past:
https://issues.apache.org/jira/browse/CLOUDSTACK-7179
https://issues.apache.org/jira/browse/CLOUDSTACK-4826

I followed similar steps to upgrade systemvm template but for 4.5: 
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.4.2/upgrade/upgrade-4.3.html#update-system-vm-templates

cc [~kishan]

Following seen in agent log:

2015-02-06 21:24:09,877 WARN  [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-4:null) Timed out: 
/usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl -n 
s-10-VM -p 
%template=domP%type=secstorage%host=192.168.1.11%port=8250%name=s-10-VM%zone=1%pod=1%guid=s-10-VM%workers=5%resource=com.cloud.storage.resource.PremiumSecondaryStorageResource%instance=SecStorage%sslcopy=false%role=templateProcessor%mtu=1500%eth2ip=192.168.11.153%eth2mask=255.255.0.0%gateway=192.168.1.1%eth0ip=169.254.1.128%eth0mask=255.255.0.0%eth1ip=192.168.10.231%eth1mask=255.255.0.0%mgmtcidr=192.168.0.0/16%localgw=192.168.1.1%private.network.device=eth1%eth3ip=192.168.10.240%eth3mask=255.255.0.0%storageip=192.168.10.240%storagenetmask=255.255.0.0%storagegateway=192.168.1.1%internaldns1=192.168.1.1%dns1=8.8.8.8
 .  Output is: 




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8226) Upgrade to 4.5.0 from 4.3.2 fails - systemvms don't start on KVM

2015-02-06 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-8226:

Affects Version/s: 4.5.0
Fix Version/s: 4.6.0
   4.5.0
  Summary: Upgrade to 4.5.0 from 4.3.2 fails - systemvms don't 
start on KVM  (was: Upgrade to 4.5.0 )

 Upgrade to 4.5.0 from 4.3.2 fails - systemvms don't start on KVM
 

 Key: CLOUDSTACK-8226
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8226
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0


 Upgrading from 4.3.2 to 4.5.0 causes systemvms to not start. Related issues 
 from past:
 https://issues.apache.org/jira/browse/CLOUDSTACK-7179
 https://issues.apache.org/jira/browse/CLOUDSTACK-4826
 I followed similar steps to upgrade systemvm template but for 4.5: 
 http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.4.2/upgrade/upgrade-4.3.html#update-system-vm-templates
 cc [~kishan]
 Following seen in agent log:
 2015-02-06 21:24:09,877 WARN  [kvm.resource.LibvirtComputingResource] 
 (agentRequest-Handler-4:null) Timed out: 
 /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl -n 
 s-10-VM -p 
 %template=domP%type=secstorage%host=192.168.1.11%port=8250%name=s-10-VM%zone=1%pod=1%guid=s-10-VM%workers=5%resource=com.cloud.storage.resource.PremiumSecondaryStorageResource%instance=SecStorage%sslcopy=false%role=templateProcessor%mtu=1500%eth2ip=192.168.11.153%eth2mask=255.255.0.0%gateway=192.168.1.1%eth0ip=169.254.1.128%eth0mask=255.255.0.0%eth1ip=192.168.10.231%eth1mask=255.255.0.0%mgmtcidr=192.168.0.0/16%localgw=192.168.1.1%private.network.device=eth1%eth3ip=192.168.10.240%eth3mask=255.255.0.0%storageip=192.168.10.240%storagenetmask=255.255.0.0%storagegateway=192.168.1.1%internaldns1=192.168.1.1%dns1=8.8.8.8
  .  Output is: 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-8183) Exceptions from 4.3.2 to 4.5.0 upgrade, logs fill up disk very fast

2015-02-06 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav closed CLOUDSTACK-8183.
---
Resolution: Fixed

Not seen in the latest 4.5.

 Exceptions from 4.3.2 to 4.5.0 upgrade, logs fill up disk very fast
 ---

 Key: CLOUDSTACK-8183
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8183
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0


 Exceptions seen when 4.3.2 is upgraded to 4.5.0 (a lot of logs):
 2015-01-27 16:14:15,161 DEBUG [c.c.s.StatsCollector] 
 (StatsCollector-3:ctx-afb47a0b) StorageCollector is running...
 2015-01-27 16:14:15,165 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Forwarding null to 
 279278805450840
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: Routing from 
 279278805450939
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: Link is closed
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: MgmtId 
 279278805450939: Req: Resource [Host:1] is unreachable: Host 1: Link is closed
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-7:null) Seq 1--1: MgmtId 279278805450939: Req: Routing 
 to peer
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-8:null) Seq 1--1: MgmtId 279278805450939: Req: Cancel 
 request received
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (AgentManager-Handler-8:null) Seq 1-9188469139742654471: Cancelling.
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Waiting some more 
 time because this is the current command
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Waiting some more 
 time because this is the current command
 2015-01-27 16:14:15,167 INFO  [c.c.u.e.CSExceptionErrorCode] 
 (StatsCollector-3:ctx-afb47a0b) Could not find exception: 
 com.cloud.exception.OperationTimedoutException in error code list for 
 exceptions
 2015-01-27 16:14:15,167 WARN  [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Timed out on null
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Cancelling.
 2015-01-27 16:14:15,167 DEBUG [o.a.c.s.RemoteHostEndPoint] 
 (StatsCollector-3:ctx-afb47a0b) Failed to send command, due to Agent:1, 
 com.cloud.exception.OperationTimedoutException: Commands 9188469139742654471 
 to Host 1 timed out after 3600
 2015-01-27 16:14:15,167 ERROR [c.c.s.StatsCollector] 
 (StatsCollector-3:ctx-afb47a0b) Error trying to retrieve storage stats
 com.cloud.utils.exception.CloudRuntimeException: Failed to send command, due 
 to Agent:1, com.cloud.exception.OperationTimedoutException: Commands 
 9188469139742654471 to Host 1 timed out after 3600
 at 
 org.apache.cloudstack.storage.RemoteHostEndPoint.sendMessage(RemoteHostEndPoint.java:133)
 at 
 com.cloud.server.StatsCollector$StorageCollector.runInContext(StatsCollector.java:623)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
  at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 at 
 

[jira] [Closed] (CLOUDSTACK-8224) CloudStack 4.5 showing lock related exceptions (seems harmless)

2015-02-06 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav closed CLOUDSTACK-8224.
---
Resolution: Fixed

Found in master; this has been fixed there: CLOUDSTACK-7721

 CloudStack 4.5 showing lock related exceptions (seems harmless)
 ---

 Key: CLOUDSTACK-8224
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8224
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Critical
 Fix For: 4.5.0, 4.6.0


 INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-2:ctx-3151b87e 
 job-2/job-11 ctx-7dcbb1dd) releasing lock for VMTemplateStoragePool 1
 WARN  [c.c.u.d.Merovingian2] (Work-Job-Executor-2:ctx-3151b87e job-2/job-11 
 ctx-7dcbb1dd) Was unable to find lock for the key template_spool_ref1 and 
 thread id 1121063028
 com.cloud.utils.exception.CloudRuntimeException: Was unable to find lock for 
 the key template_spool_ref1 and thread id 1121063028
   at com.cloud.utils.db.Merovingian2.release(Merovingian2.java:274)
   at 
 com.cloud.utils.db.TransactionLegacy.release(TransactionLegacy.java:397)
   at 
 com.cloud.utils.db.GenericDaoBase.releaseFromLockTable(GenericDaoBase.java:1045)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
   at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
   at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
   at 
 com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
   at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
   at 
 org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
   at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
   at 
 org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
   at com.sun.proxy.$Proxy75.releaseFromLockTable(Unknown Source)
   at 
 org.apache.cloudstack.storage.volume.VolumeServiceImpl.createBaseImageAsync(VolumeServiceImpl.java:513)
   at 
 org.apache.cloudstack.storage.volume.VolumeServiceImpl.createVolumeFromTemplateAsync(VolumeServiceImpl.java:747)
   at 
 org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.recreateVolume(VolumeOrchestrator.java:1252)
   at 
 org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.prepare(VolumeOrchestrator.java:1322)
   at 
 com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:982)
   at 
 com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4471)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
   at 
 com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4627)
   at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
   at 
 org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:536)
   at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
   at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
   at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
   at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
   at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
   at 
 org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:493)
   at 
 

[jira] [Resolved] (CLOUDSTACK-8215) SAML2 authentication provider certificate is only valid for two days

2015-02-05 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav resolved CLOUDSTACK-8215.
-
Resolution: Fixed

Resolving since this is merged now on 4.5 and master.

 SAML2 authentication provider certificate is only valid for two days
 

 Key: CLOUDSTACK-8215
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8215
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.5.0, 4.6.0
 Environment: Fresh 4.5 running on CentOS 6. Built from latest sha 
 (5159cbec9f7 at time of writing)
Reporter: Erik Weber
  Labels: easyfix
   Original Estimate: 1h
  Remaining Estimate: 1h

 There's something wrong with the date calculation in 
 SAMLUtils.generateRandomX509Certificate().
 The result is that the certificate is only valid for ~2 days, while it 
 probably should be valid for much longer.
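
Without pointing at the exact line, one common way such a validity window collapses is doing the millisecond offset arithmetic in int, which silently overflows. The sketch below only illustrates that bug class against the long-based computation; it does not claim to be the actual code in SAMLUtils.generateRandomX509Certificate():

import java.util.Date;
import java.util.concurrent.TimeUnit;

// Illustration of the bug class only, not the actual SAMLUtils code:
// computing the validity offset in int silently wraps around and shrinks
// the notAfter date; doing the arithmetic in long keeps the full window.
public class CertValidityWindow {
    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        int validityDays = 3650;                      // e.g. ~10 years intended

        int overflowed = validityDays * 24 * 60 * 60 * 1000;   // int overflow
        Date buggyNotAfter = new Date(now + overflowed);

        Date correctNotAfter = new Date(now + TimeUnit.DAYS.toMillis(validityDays));

        System.out.println("buggy   notAfter: " + buggyNotAfter);
        System.out.println("correct notAfter: " + correctNotAfter);
    }
}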



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8220) Fix CitrixResourceBase to support XenServer 6.5

2015-02-05 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8220:
---

 Summary: Fix CitrixResourceBase to support XenServer 6.5
 Key: CLOUDSTACK-8220
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8220
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.5.0, 4.6.0


While XenServer 6.5 seems to work with the present code, add a separate class and 
patching path, distinct from 6.2, to avoid confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-7792) Usage Events to be captured based on Volume State Machine

2015-02-05 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav closed CLOUDSTACK-7792.
---
Resolution: Fixed

 Usage Events to be captured based on Volume State Machine
 -

 Key: CLOUDSTACK-7792
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7792
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server, Usage
Reporter: Damodar Reddy T
Assignee: Damodar Reddy T
 Fix For: 4.6.0


 Currently in CloudStack the Usage Events for Volume-related actions are 
 captured directly in various places.
 But the Volume actually has a State Machine, which can be used to capture Usage 
 Events for volumes, similar to the VM Usage Events.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-7648) There are new VM State Machine changes introduced which were missed to capture the usage events

2015-02-05 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav closed CLOUDSTACK-7648.
---
Resolution: Fixed

 There are new VM State Machine changes introduced which were missed to 
 capture the usage events
 ---

 Key: CLOUDSTACK-7648
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7648
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.5.0
Reporter: Damodar Reddy T
Assignee: Damodar Reddy T
 Fix For: 4.5.0


 New VM State Machine changes were introduced while adding the VM Sync changes, 
 and capturing the corresponding usage events was missed. 
 This causes wrong usage statistics for a VM whose state is changed 
 by VM sync.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8212) database upgrade failed for fresh install of 4.5.0-SNAPSHOT

2015-02-05 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307252#comment-14307252
 ] 

Rohit Yadav commented on CLOUDSTACK-8212:
-

Which MySQL/MariaDB version are you using?

 database upgrade failed for fresh install of 4.5.0-SNAPSHOT
 ---

 Key: CLOUDSTACK-8212
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8212
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.5.0
 Environment: RedHat 7, external MariaDB on custom port.
Reporter: Andreas Stenius
 Attachments: cloudstack-4.5.0-SNAPSHOT-logs.tar.gz


 During installation of a new system, when first starting the MS, it logs that 
 the db needs upgrading from 4.0.0, but fails to do so (see attached logs).
 The culprit would seem to be these error messages (my guess from 
 screening the logs...):
 ERROR [c.c.u.d.ScriptRunner] (localhost-startStop-1:null) Error executing: 
 UPDATE `cloud`.`configuration` SET value = CONCAT("*.",(SELECT 
 `temptable`.`value` FROM (SELECT * FROM `cloud`.`configuration` WHERE 
 `name`="consoleproxy.url.domain") AS `temptable` WHERE 
 `temptable`.`name`="consoleproxy.url.domain")) WHERE 
 `name`="consoleproxy.url.domain" 
 ERROR [c.c.u.d.ScriptRunner] (localhost-startStop-1:null) 
 java.sql.SQLException: You can't specify target table 'configuration' for 
 update in FROM clause
 ERROR [c.c.u.DatabaseUpgradeChecker] (localhost-startStop-1:null) Unable to 
 execute upgrade script: 
 /usr/share/cloudstack-management/setup/db/schema-421to430.sql
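
One way to narrow this down (and to answer the MySQL/MariaDB version question above) is to replay the failing statement on its own against the same server. The JDBC sketch below is a rough aid, not part of CloudStack: the connection URL and credentials are placeholders, and the quoting of the statement is reconstructed from the log excerpt, so treat it as an assumption. It needs the MySQL/MariaDB JDBC driver on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Rough sketch: replay the failing upgrade statement against the same server
// to see whether it rejects the self-referencing UPDATE. URL and credentials
// are placeholders; the quoting is reconstructed from the log excerpt above.
public class UpgradeStatementCheck {
    public static void main(String[] args) {
        String url = "jdbc:mysql://127.0.0.1:3306/cloud";   // adjust host/port
        String sql = "UPDATE `cloud`.`configuration` SET value = CONCAT(\"*.\","
                + " (SELECT `temptable`.`value` FROM (SELECT * FROM `cloud`.`configuration`"
                + " WHERE `name`=\"consoleproxy.url.domain\") AS `temptable`"
                + " WHERE `temptable`.`name`=\"consoleproxy.url.domain\"))"
                + " WHERE `name`=\"consoleproxy.url.domain\"";
        try (Connection c = DriverManager.getConnection(url, "cloud", "password");
             Statement st = c.createStatement()) {
            System.out.println("Rows updated: " + st.executeUpdate(sql));
        } catch (SQLException e) {
            // Error 1093 here means this server refuses to update a table that
            // is also read in the FROM clause of the subquery.
            System.out.println("Server rejected the statement: " + e.getMessage());
        }
    }
}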



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8212) database upgrade failed for fresh install of 4.5.0-SNAPSHOT

2015-02-05 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307251#comment-14307251
 ] 

Rohit Yadav commented on CLOUDSTACK-8212:
-

Cannot reproduce using the latest 4.5 with a fresh install; can you try again?

 database upgrade failed for fresh install of 4.5.0-SNAPSHOT
 ---

 Key: CLOUDSTACK-8212
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8212
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.5.0
 Environment: RedHat 7, external MariaDB on custom port.
Reporter: Andreas Stenius
 Attachments: cloudstack-4.5.0-SNAPSHOT-logs.tar.gz


 During installation of a new system, when first starting the MS, it logs that 
 the db needs upgrading from 4.0.0, but fails to do so (see attached logs).
 The culprit would seem to be these error messages (my guess from 
 screening the logs...):
 ERROR [c.c.u.d.ScriptRunner] (localhost-startStop-1:null) Error executing: 
 UPDATE `cloud`.`configuration` SET value = CONCAT("*.",(SELECT 
 `temptable`.`value` FROM (SELECT * FROM `cloud`.`configuration` WHERE 
 `name`="consoleproxy.url.domain") AS `temptable` WHERE 
 `temptable`.`name`="consoleproxy.url.domain")) WHERE 
 `name`="consoleproxy.url.domain" 
 ERROR [c.c.u.d.ScriptRunner] (localhost-startStop-1:null) 
 java.sql.SQLException: You can't specify target table 'configuration' for 
 update in FROM clause
 ERROR [c.c.u.DatabaseUpgradeChecker] (localhost-startStop-1:null) Unable to 
 execute upgrade script: 
 /usr/share/cloudstack-management/setup/db/schema-421to430.sql



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8196) Local storage - Live VM migration fails

2015-02-05 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307255#comment-14307255
 ] 

Rohit Yadav commented on CLOUDSTACK-8196:
-

Somehow this works in master (with some edge cases), so I've picked a few fixes 
from master into 4.5. Let's test again using the latest 4.5.

 Local storage - Live VM migration fails
 ---

 Key: CLOUDSTACK-8196
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8196
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Volumes
Affects Versions: 4.5.0
 Environment: Xenserver 6.5
Reporter: Abhinandan Prateek
Priority: Blocker
 Fix For: 4.5.0


 When you live migrate a VM with its root volume on local storage, it fails 
 with the following in the logs:
 2015-02-03 21:56:18,399 DEBUG [o.a.c.s.SecondaryStorageManagerImpl] 
 (secstorage-1:ctx-867b3e23) Zone 1 is ready to launch secondary storage VM
 2015-02-03 21:56:18,504 DEBUG [c.c.c.ConsoleProxyManagerImpl] 
 (consoleproxy-1:ctx-3c5b23c9) Zone 1 is ready to launch console proxy
 2015-02-03 21:56:19,080 DEBUG [c.c.a.ApiServlet] 
 (1765698327@qtp-1462420582-8:ctx-b5966006) ===START===  192.168.100.30 -- GET 
  
 command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963522072
 2015-02-03 21:56:19,107 DEBUG [c.c.a.ApiServlet] 
 (1765698327@qtp-1462420582-8:ctx-b5966006 ctx-7c783c38) ===END===  
 192.168.100.30 -- GET  
 command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963522072
 2015-02-03 21:56:22,082 DEBUG [c.c.a.ApiServlet] 
 (1765698327@qtp-1462420582-8:ctx-b08b7dae) ===START===  192.168.100.30 -- GET 
  
 command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963525073
 2015-02-03 21:56:22,097 DEBUG [c.c.a.ApiServlet] 
 (1765698327@qtp-1462420582-8:ctx-b08b7dae ctx-6e581587) ===END===  
 192.168.100.30 -- GET  
 command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963525073
 2015-02-03 21:56:22,587 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] 
 (AsyncJobMgr-Heartbeat-1:ctx-d6eb5d59) Begin cleanup expired async-jobs
 2015-02-03 21:56:22,591 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] 
 (AsyncJobMgr-Heartbeat-1:ctx-d6eb5d59) End cleanup expired async-jobs
 2015-02-03 21:56:24,660 DEBUG [c.c.a.m.AgentManagerImpl] 
 (AgentManager-Handler-11:null) SeqA 2-2881: Processing Seq 2-2881:  { Cmd , 
 MgmtId: -1, via: 2, Ver: v1, Flags: 11, 
 [{com.cloud.agent.api.ConsoleProxyLoadReportCommand:{_proxyVmId:2,_loadInfo:{\n
   \connections\: []\n},wait:0}}] }
 2015-02-03 21:56:24,663 DEBUG [c.c.a.m.AgentManagerImpl] 
 (AgentManager-Handler-11:null) SeqA 2-2881: Sending Seq 2-2881:  { Ans: , 
 MgmtId: 345043735628, via: 2, Ver: v1, Flags: 100010, 
 [{com.cloud.agent.api.AgentControlAnswer:{result:true,wait:0}}] }
 2015-02-03 21:56:25,081 DEBUG [c.c.a.ApiServlet] 
 (1765698327@qtp-1462420582-8:ctx-474ad7b4) ===START===  192.168.100.30 -- GET 
  
 command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963528073
 2015-02-03 21:56:25,093 DEBUG [c.c.a.ApiServlet] 
 (1765698327@qtp-1462420582-8:ctx-474ad7b4 ctx-9fbdc942) ===END===  
 192.168.100.30 -- GET  
 command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963528073
 2015-02-03 21:56:25,902 WARN  [c.c.h.x.r.CitrixResourceBase] 
 (DirectAgent-1:ctx-92670642) Task failed! Task record: uuid: 
 da8c120c-ce1f-35a2-2008-ec2071e3ada1
nameLabel: Async.VM.migrate_send
  nameDescription:
allowedOperations: []
currentOperations: {}
  created: Sat Jan 31 15:53:59 IST 2015
 finished: Sat Jan 31 15:54:07 IST 2015
   status: failure
   residentOn: com.xensource.xenapi.Host@50b4f213
 progress: 1.0
 type: none/
   result:
errorInfo: [SR_BACKEND_FAILURE_44, , There is insufficient space]
  otherConfig: {}
subtaskOf: com.xensource.xenapi.Task@aaf13f6f
 subtasks: []
 2015-02-03 21:56:25,909 WARN  [c.c.h.x.r.XenServer610Resource] 
 (DirectAgent-1:ctx-92670642) Catch Exception 
 com.xensource.xenapi.Types$BadAsyncResult. Storage motion failed due to Task 
 failed! Task record: uuid: 
 da8c120c-ce1f-35a2-2008-ec2071e3ada1
nameLabel: Async.VM.migrate_send
  nameDescription:
allowedOperations: []
currentOperations: {}

[jira] [Commented] (CLOUDSTACK-8207) Set of small CloudStack tools

2015-02-04 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14305622#comment-14305622
 ] 

Rohit Yadav commented on CLOUDSTACK-8207:
-

Sure, would depend on the scope of the db tool.

 Set of small CloudStack tools
 -

 Key: CLOUDSTACK-8207
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8207
 Project: CloudStack
  Issue Type: New Feature
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Rohit Yadav
  Labels: cloud, golang, gsoc2015, java, python
 Fix For: 4.6.0


 Develop tools for CloudStack:
 - CloudMonkey enhancements: Refactor CloudMonkey so anyone can write color, 
 output, network and other plugins for CloudMonkey. Add library support to 
 merge cloudmonkey and marvin, so cloudmonkey can also be used to write Python 
 scripts. Add Bash helper methods so it can be used to write bash based 
 automation scripts. Write Ansible module for CloudStack using CloudMonkey or 
 Marvin.
 - Write a new Database tool using DatabaseUpgraderClass in Python or other 
 language for CloudStack to do db dump, upgrades, backups etc
 - Write a new tool to do smoke testing (on Jenkins or TravisCI) or 
 integration testing
 - Write a tool to import/save a CloudStack deployment and export a CloudStack 
 cloud using a saved configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8210) KVM Unable to Cancel Maintenance mode after upgrade

2015-02-04 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14306760#comment-14306760
 ] 

Rohit Yadav commented on CLOUDSTACK-8210:
-

Thanks, applied the fix on 4.3/4.4/4.5.

 KVM Unable to Cancel Maintenance mode after upgrade
 ---

 Key: CLOUDSTACK-8210
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8210
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: KVM, Management Server, Upgrade
Affects Versions: 4.4.2
 Environment: Ubuntu 14.04 management and agents. Primary storage - 
 ceph rbd. Secondary storage -nfs. Advanced Networking
Reporter: Andrei Mikhailovsky
Priority: Blocker
  Labels: ceph, kvm, maintenance, management, rbd

 After performing an upgrade from 4.3.2 to 4.4.2 I am no longer able to Cancel 
 Maintenance mode. The GUI shows the following error a few seconds after 
 pressing the button:
 Command failed due to Internal Server Error
 The management server shows the following error:
 2015-02-03 23:42:15,621 DEBUG [c.c.a.ApiServlet] 
 (catalina-exec-23:ctx-04ea4b6d ctx-35701ff3) ===END===  192.168.169.91 -- GET 
  
 command=cancelHostMaintenance&id=c092cb59-c770-4747-8d95-75aa49de5d17&response=json&sessionkey=fI2oaYTbgijs1h6HTOTMnJ%2FkChA%3D&_=1423006935464
 2015-02-03 23:42:15,622 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
 (API-Job-Executor-1:ctx-1fda9d17 job-11711) Add job-11711 into job monitoring
 2015-02-03 23:42:15,623 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
 (API-Job-Executor-1:ctx-1fda9d17 job-11711) Executing AsyncJobVO {id:11711, 
 userId: 3, accountId: 2, instanceType: Host, instanceId: 1, cmd: 
 org.apache.cloudstack.api.command.admin.host.CancelMaintenanceCmd, cmdInfo: 
 {id:c092cb59-c770-4747-8d95-75aa49de5d17,response:json,sessionkey:fI2oaYTbgijs1h6HTOTMnJ/kChA\u003d,ctxDetails:{\com.cloud.host.Host\:\c092cb59-c770-4747-8d95-75aa49de5d17\},cmdEventType:MAINT.CANCEL,ctxUserId:3,httpmethod:GET,_:1423006935464,uuid:c092cb59-c770-4747-8d95-75aa49de5d17,ctxAccountId:2,ctxStartEventId:64857},
  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
 null, initMsid: 115129173025114, completeMsid: null, lastUpdated: null, 
 lastPolled: null, created: null}
 2015-02-03 23:42:15,646 ERROR [c.c.a.ApiAsyncJobDispatcher] 
 (API-Job-Executor-1:ctx-1fda9d17 job-11711) Unexpected exception while 
 executing org.apache.cloudstack.api.command.admin.host.CancelMaintenanceCmd
 java.lang.NullPointerException
 at 
 com.cloud.resource.ResourceManagerImpl.doCancelMaintenance(ResourceManagerImpl.java:2083)
 at 
 com.cloud.resource.ResourceManagerImpl.cancelMaintenance(ResourceManagerImpl.java:2140)
 at 
 com.cloud.resource.ResourceManagerImpl.cancelMaintenance(ResourceManagerImpl.java:1127)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
 at 
 org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
 at 
 org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
 at com.sun.proxy.$Proxy148.cancelMaintenance(Unknown Source)
 at 
 org.apache.cloudstack.api.command.admin.host.CancelMaintenanceCmd.execute(CancelMaintenanceCmd.java:102)
 at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:141)
 at 
 com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
 at 
 org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:503)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
 at 
 

[jira] [Created] (CLOUDSTACK-8209) VM migration fails across KVM hosts if hosts have same hostname even if different libvirt uuid or IPs

2015-02-04 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8209:
---

 Summary: VM migration fails across KVM hosts if hosts have same 
hostname even if different libvirt uuid or IPs
 Key: CLOUDSTACK-8209
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8209
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Priority: Minor
 Fix For: 4.5.0, 4.6.0


If KVM hosts have the same hostname but different libvirt host UUIDs or IPs, 
VM migration fails with:

2015-02-04 15:22:18,042 ERROR [c.c.v.VmWorkJobDispatcher] 
(Work-Job-Executor-13:ctx-ae4c1ba1 job-37/job-38) Unable to complete AsyncJobVO 
{id:38, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: 
com.cloud.vm.VmWorkMigrate, cmdInfo: 
rO0ABXNyABpjb20uY2xvdWQudm0uVm1Xb3JrTWlncmF0ZRdxQXtPtzYqAgAGSgAJc3JjSG9zdElkTAAJY2x1c3RlcklkdAAQTGphdmEvbGFuZy9Mb25nO0wABmhvc3RJZHEAfgABTAAFcG9kSWRxAH4AAUwAB3N0b3JhZ2V0AA9MamF2YS91dGlsL01hcDtMAAZ6b25lSWRxAH4AAXhyABNjb20uY2xvdWQudm0uVm1Xb3Jrn5m2VvAlZ2sCAARKAAlhY2NvdW50SWRKAAZ1c2VySWRKAAR2bUlkTAALaGFuZGxlck5hbWV0ABJMamF2YS9sYW5nL1N0cmluZzt4cAACAAIAA3QAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAXNyAA5qYXZhLmxhbmcuTG9uZzuL5JDMjyPfAgABSgAFdmFsdWV4cgAQamF2YS5sYW5nLk51bWJlcoaslR0LlOCLAgAAeHAAAXNxAH4ABwACcQB-AAlwcQB-AAk,
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 5071960016, completeMsid: null, lastUpdated: null, lastPolled: 
null, created: Wed Feb 04 15:22:15 IST 2015}, job origin:37
com.cloud.utils.exception.CloudRuntimeException: org.libvirt.LibvirtException: 
internal error: Attempt to migrate guest to the same host kvm-test  
   
---at 
com.cloud.vm.VirtualMachineManagerImpl.migrate(VirtualMachineManagerImpl.java:1956)
---at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrate(VirtualMachineManagerImpl.java:1854)
---at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrate(VirtualMachineManagerImpl.java:4501)
---at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
   
---at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
---at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
---at java.lang.reflect.Method.invoke(Method.java:606) 
   
---at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
---at 
com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4633)
---at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
  
---at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:536)
---at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
---at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
---at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
---at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
---at 
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
---at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:493)
---at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)  
   
---at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
   
---at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
---at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
---at java.lang.Thread.run(Thread.java:745)
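
The libvirt error above is essentially an identity check keyed on the hostname: two distinct hypervisors that share a hostname look like the same host to it. The tiny sketch below illustrates that pitfall; the field names are illustrative and are not CloudStack's actual Host model:

import java.util.Objects;

// Illustrative only (not CloudStack's Host model): two distinct hypervisors
// that share a hostname look identical to a check keyed on hostname, while a
// check keyed on the libvirt host UUID still tells them apart.
public class HostIdentity {
    final String hostname;
    final String libvirtUuid;

    HostIdentity(String hostname, String libvirtUuid) {
        this.hostname = hostname;
        this.libvirtUuid = libvirtUuid;
    }

    boolean sameByHostname(HostIdentity other) {
        return Objects.equals(hostname, other.hostname);
    }

    boolean sameByUuid(HostIdentity other) {
        return Objects.equals(libvirtUuid, other.libvirtUuid);
    }

    public static void main(String[] args) {
        HostIdentity src = new HostIdentity("kvm-test", "uuid-of-source-host");
        HostIdentity dst = new HostIdentity("kvm-test", "uuid-of-destination-host");
        System.out.println("same by hostname: " + src.sameByHostname(dst)); // true
        System.out.println("same by uuid:     " + src.sameByUuid(dst));     // false
    }
}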



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8208) Improve CloudStack Integration Testing and Write tool for automating it

2015-02-04 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14305031#comment-14305031
 ] 

Rohit Yadav commented on CLOUDSTACK-8208:
-

[~rcoedo] that's good, are you a student?

Let me know what guidance or further details you need. I would suggest you 
start by building and playing with CloudStack so you get a feel for what it is as 
a user first, and then try something as a developer. Join the CloudStack dev and 
user MLs and ask questions when you're stuck. If you need help with setting up a 
one-node CloudStack deployment with NFS+KVM, I can help you, or you may read 
this: http://bhaisaab.org/logs/cloudstack-kvm

I gave a talk at the last CCCEU which you may follow as an intro to 
CloudStack development as well: 
https://www.youtube.com/watch?v=g6vUHGoVtpI&list=PLbzoR-pLrL6ruJrhXZ-jSSYw0m3WfK4ea&index=28

 Improve CloudStack Integration Testing and Write tool for automating it
 ---

 Key: CLOUDSTACK-8208
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8208
 Project: CloudStack
  Issue Type: New Feature
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Rohit Yadav
  Labels: cloud, golang, gsoc2015, java, python
 Fix For: 4.6.0


 The integration tests that CloudStack has are hard to run on real hardware 
 due to strict/hardcoded configuration. The tasks of this project are:
 - Figure out the minimal system resources needed to run all integration tests 
 (RAM, CPU, no. of VMs).
 - Fix the integration tests so they can run on a developer's laptop or mini PCs 
 such as a NUC with real hypervisors (and not just the simulator) - Xen or KVM. All 
 hypervisors run in a nested virtualized environment - for example Xen on 
 VirtualBox, KVM on VMware Workstation or Fusion, or KVM/Xen on KVM etc.
 - Create a Jenkins job for the same
 - Write any tool necessary to automate this
 - Create an Ansible-based (ideal/template) CloudStack deployment based on Xen or 
 KVM (check out, as an example, github.com/bhaisaab/peppercorn)
 This will be the most important GSoC project and contribution to CloudStack if it 
 delivers the above, because right now, even though we have the integration tests, 
 they are hard for developers to run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8196) Local storage - Live VM migration fails

2015-02-04 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304981#comment-14304981
 ] 

Rohit Yadav commented on CLOUDSTACK-8196:
-

I'm unable to live migrate a VM on KVM as well, though I only get this: VM uses 
Local storage, cannot migrate. This works for shared storage (like NFS).
But if the VM is shut down, the root disk is migrated to the new host's local 
storage (to the default /var/lib/libvirt/images), though the old host still keeps 
the disk image.

I think [~kishan] can comment on whether this issue and the above behaviour are a 
bug or a limitation.

 Local storage - Live VM migration fails
 ---

 Key: CLOUDSTACK-8196
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8196
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Volumes
Affects Versions: 4.5.0
 Environment: Xenserver 6.5
Reporter: Abhinandan Prateek
Priority: Blocker
 Fix For: 4.5.0


 When you live migrate a VM with its root volume on local storage, it fails 
 with the following in the logs:
 2015-02-03 21:56:18,399 DEBUG [o.a.c.s.SecondaryStorageManagerImpl] 
 (secstorage-1:ctx-867b3e23) Zone 1 is ready to launch secondary storage VM
 2015-02-03 21:56:18,504 DEBUG [c.c.c.ConsoleProxyManagerImpl] 
 (consoleproxy-1:ctx-3c5b23c9) Zone 1 is ready to launch console proxy
 2015-02-03 21:56:19,080 DEBUG [c.c.a.ApiServlet] 
 (1765698327@qtp-1462420582-8:ctx-b5966006) ===START===  192.168.100.30 -- GET 
  
 command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963522072
 2015-02-03 21:56:19,107 DEBUG [c.c.a.ApiServlet] 
 (1765698327@qtp-1462420582-8:ctx-b5966006 ctx-7c783c38) ===END===  
 192.168.100.30 -- GET  
 command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963522072
 2015-02-03 21:56:22,082 DEBUG [c.c.a.ApiServlet] 
 (1765698327@qtp-1462420582-8:ctx-b08b7dae) ===START===  192.168.100.30 -- GET 
  
 command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963525073
 2015-02-03 21:56:22,097 DEBUG [c.c.a.ApiServlet] 
 (1765698327@qtp-1462420582-8:ctx-b08b7dae ctx-6e581587) ===END===  
 192.168.100.30 -- GET  
 command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963525073
 2015-02-03 21:56:22,587 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] 
 (AsyncJobMgr-Heartbeat-1:ctx-d6eb5d59) Begin cleanup expired async-jobs
 2015-02-03 21:56:22,591 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] 
 (AsyncJobMgr-Heartbeat-1:ctx-d6eb5d59) End cleanup expired async-jobs
 2015-02-03 21:56:24,660 DEBUG [c.c.a.m.AgentManagerImpl] 
 (AgentManager-Handler-11:null) SeqA 2-2881: Processing Seq 2-2881:  { Cmd , 
 MgmtId: -1, via: 2, Ver: v1, Flags: 11, 
 [{com.cloud.agent.api.ConsoleProxyLoadReportCommand:{_proxyVmId:2,_loadInfo:{\n
   \connections\: []\n},wait:0}}] }
 2015-02-03 21:56:24,663 DEBUG [c.c.a.m.AgentManagerImpl] 
 (AgentManager-Handler-11:null) SeqA 2-2881: Sending Seq 2-2881:  { Ans: , 
 MgmtId: 345043735628, via: 2, Ver: v1, Flags: 100010, 
 [{com.cloud.agent.api.AgentControlAnswer:{result:true,wait:0}}] }
 2015-02-03 21:56:25,081 DEBUG [c.c.a.ApiServlet] 
 (1765698327@qtp-1462420582-8:ctx-474ad7b4) ===START===  192.168.100.30 -- GET 
  
 command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963528073
 2015-02-03 21:56:25,093 DEBUG [c.c.a.ApiServlet] 
 (1765698327@qtp-1462420582-8:ctx-474ad7b4 ctx-9fbdc942) ===END===  
 192.168.100.30 -- GET  
 command=queryAsyncJobResult&jobId=1c233c05-2331-4130-a7c9-fdfa9cc7bc36&response=json&sessionkey=aMoUq2zeFihn%2FMD2vVoTFHf9Uys%3D&_=1422963528073
 2015-02-03 21:56:25,902 WARN  [c.c.h.x.r.CitrixResourceBase] 
 (DirectAgent-1:ctx-92670642) Task failed! Task record: uuid: 
 da8c120c-ce1f-35a2-2008-ec2071e3ada1
nameLabel: Async.VM.migrate_send
  nameDescription:
allowedOperations: []
currentOperations: {}
  created: Sat Jan 31 15:53:59 IST 2015
 finished: Sat Jan 31 15:54:07 IST 2015
   status: failure
   residentOn: com.xensource.xenapi.Host@50b4f213
 progress: 1.0
 type: none/
   result:
errorInfo: [SR_BACKEND_FAILURE_44, , There is insufficient space]
  otherConfig: {}
subtaskOf: com.xensource.xenapi.Task@aaf13f6f
 subtasks: []
 2015-02-03 21:56:25,909 WARN  [c.c.h.x.r.XenServer610Resource] 
 (DirectAgent-1:ctx-92670642) Catch Exception 

[jira] [Updated] (CLOUDSTACK-7083) Implement SAML 2.0 plugin

2015-02-04 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-7083:

Issue Type: New Feature  (was: Bug)

 Implement SAML 2.0 plugin
 -

 Key: CLOUDSTACK-7083
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7083
 Project: CloudStack
  Issue Type: New Feature
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: API, Cloudmonkey, IAM, UI
Reporter: Rohit Yadav
Assignee: Rohit Yadav
  Labels: features
 Fix For: 4.5.0


 FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/SAML+2.0+Plugin



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-8200) Secondary storage and systemvm template detection fails with KVM and LocalStorage

2015-02-04 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav closed CLOUDSTACK-8200.
---
Resolution: Fixed

Thanks [~kishan]. While I use CloudMonkey-based automation, I'm not sure if 
enabling the Zone failed or passed. Anyway, with the latest 4.5, I could not 
reproduce the issue.

The other issue I'm getting is that VM migration fails if the disk is on local 
storage. This used to work in 4.3.

 Secondary storage and systemvm template detection fails with KVM and 
 LocalStorage
 -

 Key: CLOUDSTACK-8200
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8200
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0

 Attachments: api.log.gz, vmops.log.gz


 With KVM, when a zone is deployed with local storage, it fails to detect the 
 systemvm template on the added secondary storage and does not do anything:
 120453 2015-02-03 22:33:57,691 DEBUG [c.c.s.StatsCollector] 
 (StatsCollector-3:ctx-206849e1) There is no secondary storage VM for 
 secondary storage host Seclkj



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8206) Support Bhyve as a hypervisor in CloudStack

2015-02-03 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8206:
---

 Summary: Support Bhyve as a hypervisor in CloudStack
 Key: CLOUDSTACK-8206
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8206
 Project: CloudStack
  Issue Type: New Feature
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Rohit Yadav
 Fix For: 4.6.0


Support Bhyve (from the FreeBSD community) as a hypervisor in CloudStack. This 
would require using libvirt, and seeing what is possible with respect to 
basic/advanced zones and isolated/shared networking.

Suggested Mentor: Rohit Yadav



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8206) Support Bhyve as a hypervisor in CloudStack

2015-02-03 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-8206:

Labels: cloud gsoc2015 java  (was: cloud gsoc2014 java)

 Support Bhyve as a hypervisor in CloudStack
 ---

 Key: CLOUDSTACK-8206
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8206
 Project: CloudStack
  Issue Type: New Feature
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Rohit Yadav
  Labels: cloud, gsoc2015, java
 Fix For: 4.6.0


 Support Bhyve (from the FreeBSD community) as a hypervisor in CloudStack. 
 This would require using libvirt and seeing what is possible with respect to 
 basic/advanced zones and isolated/shared networking.
 Suggested Mentor: Rohit Yadav



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8205) Support Docker as a hypervisor in CloudStack

2015-02-03 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8205:
---

 Summary: Support Docker as a hypervisor in CloudStack
 Key: CLOUDSTACK-8205
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8205
 Project: CloudStack
  Issue Type: New Feature
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Rohit Yadav
 Fix For: 4.6.0


Support Docker as a hypervisor in CloudStack. See what is possible with 
basic/advanced zones and shared/isolated networking.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8207) Set of small CloudStack tools

2015-02-03 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8207:
---

 Summary: Set of small CloudStack tools
 Key: CLOUDSTACK-8207
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8207
 Project: CloudStack
  Issue Type: New Feature
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Rohit Yadav
 Fix For: 4.6.0


Develop tools for CloudStack:

- CloudMonkey enhancements: refactor CloudMonkey so anyone can write color, 
output, network and other plugins for it. Add library support to merge 
cloudmonkey and marvin, so cloudmonkey can also be used to write Python 
scripts. Add Bash helper methods so it can be used to write bash-based 
automation scripts. Write an Ansible module for CloudStack using CloudMonkey or 
Marvin.
- Write a new database tool using DatabaseUpgraderClass, in Python or another 
language, for CloudStack to do db dumps, upgrades, backups, etc.
- Write a new tool to do smoke testing (on Jenkins or TravisCI) or integration 
testing.
- Write a tool to import/save a CloudStack deployment and export a CloudStack 
cloud using a saved configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8200) Secondary storage and systemvm template detection fails with KVM and LocalStorage

2015-02-03 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-8200:

Attachment: api.log.gz
vmops.log.gz

Management server log and API log.

 Secondary storage and systemvm template detection fails with KVM and 
 LocalStorage
 -

 Key: CLOUDSTACK-8200
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8200
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0

 Attachments: api.log.gz, vmops.log.gz


 With KVM, when a zone is deployed with localstorage - it fails to detect 
 systemvm template of the added secondary storage and does not do anything:
 120453 2015-02-03 22:33:57,691 DEBUG [c.c.s.StatsCollector] 
 (StatsCollector-3:ctx-206849e1) There is no secondary storage VM for 
 secondary storage host Seclkj



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8208) Improve CloudStack Integration Testing and Write tool for automating it

2015-02-03 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8208:
---

 Summary: Improve CloudStack Integration Testing and Write tool for 
automating it
 Key: CLOUDSTACK-8208
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8208
 Project: CloudStack
  Issue Type: New Feature
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Rohit Yadav
 Fix For: 4.6.0


CloudStack's integration tests are hard to run on real hardware due to 
strict/hardcoded configuration. The tasks of this project are:

- Figure out the minimal system resources needed to run all integration tests 
(RAM, CPU, number of VMs).
- Fix the integration tests so they can run on a developer's laptop or mini PCs 
such as a NUC with real hypervisors (not just the simulator): Xen or KVM. All 
hypervisors run in a nested virtualized environment, for example Xen on 
VirtualBox, KVM on VMware Workstation or Fusion, or KVM/Xen on KVM.
- Create a Jenkins job for the same.
- Write any tool necessary to automate this.
- Create an Ansible-based (ideal/template) CloudStack deployment based on Xen or 
KVM (see github.com/bhaisaab/peppercorn as an example).

If it delivers the above, this will be the most important GSoC project and 
contribution to CloudStack, because right now, even though we have the 
integration tests, they are hard for developers to run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8197) make minimal sysvm version configurable

2015-02-03 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303168#comment-14303168
 ] 

Rohit Yadav commented on CLOUDSTACK-8197:
-

Daan - how about we make it the same as the current CloudStack version (read 
the version info from the jar)?
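
For illustration only, a minimal Java sketch of reading a version string from a 
jar manifest; the class and method names below are hypothetical, not 
CloudStack's actual code, and they assume the jar's manifest sets 
Implementation-Version:

// Hedged sketch: read the Implementation-Version manifest attribute of the jar
// that a given class was loaded from, falling back to a default value.
public class JarVersionReader {
    public static String versionOf(Class<?> clazz, String fallback) {
        Package p = clazz.getPackage();
        // getImplementationVersion() returns the Implementation-Version manifest
        // attribute, or null when the class is not packaged in such a jar.
        String v = (p == null) ? null : p.getImplementationVersion();
        return (v == null || v.isEmpty()) ? fallback : v;
    }

    public static void main(String[] args) {
        // Prints the runtime's own version as a demonstration.
        System.out.println(versionOf(String.class, "unknown"));
    }
}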

 make minimal sysvm version configurable
 

 Key: CLOUDSTACK-8197
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8197
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server, SystemVM
Reporter: Daan Hoogland
Assignee: Daan Hoogland

 MinVRVersion is hard-coded in the VirtualNetworkApplianceService. To let 
 users make their own versions it must be a config key. As such it will be 
 best to move it to NetworkOrchestrationService, as its use is mostly from 
 that layer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8200) Secondary storage and systemvm template detection fails with KVM and LocalStorage

2015-02-03 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8200:
---

 Summary: Secondary storage and systemvm template detection fails 
with KVM and LocalStorage
 Key: CLOUDSTACK-8200
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8200
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.5.0, 4.6.0


With KVM, when a zone is deployed with localstorage - it fails to detect 
systemvm template of the added secondary storage and does not do anything:

120453 2015-02-03 22:33:57,691 DEBUG [c.c.s.StatsCollector] 
(StatsCollector-3:ctx-206849e1) There is no secondary storage VM for secondary 
storage host Seclkj




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8198) Localstorage on KVM breaks when multiple hosts are added

2015-02-03 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8198:
---

 Summary: Localstorage on KVM breaks when multiple hosts are added
 Key: CLOUDSTACK-8198
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8198
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Critical
 Fix For: 4.6.0


Adding more than one host with local storage causes the local storage UUID to 
be the same if it is not passed explicitly. This causes host addition and 
primary local storage registration to fail, as the code that creates the UUID 
is not random at all.
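
For context, a minimal Java sketch (not the CloudStack code itself) of why a 
name-based UUID derived from a fixed seed collides across hosts while a random 
UUID does not; the seed string is hypothetical:

import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class LocalStorageUuidExample {
    public static void main(String[] args) {
        String seed = "cloud.local.storage"; // hypothetical constant seed
        // Name-based UUIDs are deterministic: the same seed gives the same UUID
        // on every host, which is exactly the collision described above.
        UUID a = UUID.nameUUIDFromBytes(seed.getBytes(StandardCharsets.UTF_8));
        UUID b = UUID.nameUUIDFromBytes(seed.getBytes(StandardCharsets.UTF_8));
        System.out.println(a.equals(b)); // true

        // A random UUID is practically unique per call, avoiding the collision.
        System.out.println(UUID.randomUUID().equals(UUID.randomUUID())); // false
    }
}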





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-8198) Localstorage on KVM breaks when multiple hosts are added

2015-02-03 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav closed CLOUDSTACK-8198.
---
Resolution: Fixed

 Localstorage on KVM breaks when multiple hosts are added
 

 Key: CLOUDSTACK-8198
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8198
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Critical
 Fix For: 4.6.0


 Adding more than one host with local storage causes the local storage UUID 
 to be the same if it is not passed explicitly. This causes host addition and 
 primary local storage registration to fail, as the code that creates the 
 UUID is not random at all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8197) make minimal sysvm version configurable

2015-02-03 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303169#comment-14303169
 ] 

Rohit Yadav commented on CLOUDSTACK-8197:
-

In doing that we can drop the -SNAPSHOT suffix; so if the jar version is 
4.6.0-SNAPSHOT, we use 4.6.0.

 make minimal sysvm version configurable
 

 Key: CLOUDSTACK-8197
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8197
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server, SystemVM
Reporter: Daan Hoogland
Assignee: Daan Hoogland

 MinVRVersion is hard-coded in the VirtualNetworkApplianceService. To let 
 users make their own versions it must be a config key. As such it will be 
 best to move it to NetworkOrchestrationService, as its use is mostly from 
 that layer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CLOUDSTACK-8183) Exceptions from 4.3.2 to 4.5.0 upgrade, logs fill up disk very fast

2015-02-03 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav reassigned CLOUDSTACK-8183:
---

Assignee: Rohit Yadav

 Exceptions from 4.3.2 to 4.5.0 upgrade, logs fill up disk very fast
 ---

 Key: CLOUDSTACK-8183
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8183
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0


 Exceptions seen when 4.3.2 is upgraded to 4.5.0:
 A lot of logs
 2015-01-27 16:14:15,161 DEBUG [c.c.s.StatsCollector] 
 (StatsCollector-3:ctx-afb47a0b) StorageCollector is running...
 2015-01-27 16:14:15,165 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Forwarding null to 
 279278805450840
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: Routing from 
 279278805450939
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: Link is closed
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: MgmtId 
 279278805450939: Req: Resource [Host:1] is unreachable: Host 1: Link is closed
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-7:null) Seq 1--1: MgmtId 279278805450939: Req: Routing 
 to peer
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-8:null) Seq 1--1: MgmtId 279278805450939: Req: Cancel 
 request received
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (AgentManager-Handler-8:null) Seq 1-9188469139742654471: Cancelling.
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Waiting some more 
 time because this is the current command
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Waiting some more 
 time because this is the current command
 2015-01-27 16:14:15,167 INFO  [c.c.u.e.CSExceptionErrorCode] 
 (StatsCollector-3:ctx-afb47a0b) Could not find exception: 
 com.cloud.exception.OperationTimedoutException in error code list for 
 exceptions
 2015-01-27 16:14:15,167 WARN  [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Timed out on null
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Cancelling.
 2015-01-27 16:14:15,167 DEBUG [o.a.c.s.RemoteHostEndPoint] 
 (StatsCollector-3:ctx-afb47a0b) Failed to send command, due to Agent:1, 
 com.cloud.exception.OperationTimedoutException: Commands 9188469139742654471 
 to Host 1 timed out after 3600
 2015-01-27 16:14:15,167 ERROR [c.c.s.StatsCollector] 
 (StatsCollector-3:ctx-afb47a0b) Error trying to retrieve storage stats
 com.cloud.utils.exception.CloudRuntimeException: Failed to send command, due 
 to Agent:1, com.cloud.exception.OperationTimedoutException: Commands 
 9188469139742654471 to Host 1 timed out after 3600
 at 
 org.apache.cloudstack.storage.RemoteHostEndPoint.sendMessage(RemoteHostEndPoint.java:133)
 at 
 com.cloud.server.StatsCollector$StorageCollector.runInContext(StatsCollector.java:623)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
  at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 at 
 

[jira] [Commented] (CLOUDSTACK-8197) make minimal sysvm version configurable

2015-02-03 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303269#comment-14303269
 ] 

Rohit Yadav commented on CLOUDSTACK-8197:
-

[~dahn] - not really. What I'm suggesting is to keep the minimum version the 
same as the current jar's major version, and use that value to override the 
global config (a sort of automatic upgrade) if the configured value is less 
than that. For example, upgrading from 4.4.2 to 4.4.3 won't change the min 
version: by the logic I shared, the min version remains 4.4.0, which of course 
in the global config is 4.4.2, so it's fine. But when you upgrade to 4.5.0 or 
4.5.1, the min value becomes 4.5.0. The other way of doing this is to change 
the global config param/value in an upgrade path. The former solution fixes it 
automatically for us, so less maintenance; the latter is explicit, so it could 
also be useful.
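
A hedged sketch of the suggested logic in Java (the class and method names are 
hypothetical, not CloudStack's API): derive the minimum VR version from the 
code version by dropping the -SNAPSHOT suffix and the patch level, then raise 
the stored config value only if it is lower.

public class MinVrVersionExample {
    static String minVersionFromCodeVersion(String codeVersion) {
        String v = codeVersion.split("-")[0];        // "4.6.0-SNAPSHOT" -> "4.6.0"
        String[] parts = v.split("\\.");
        return parts[0] + "." + parts[1] + ".0";     // keep major.minor, zero the patch
    }

    static int compareDotted(String a, String b) {
        String[] pa = a.split("\\."), pb = b.split("\\.");
        for (int i = 0; i < Math.max(pa.length, pb.length); i++) {
            int x = i < pa.length ? Integer.parseInt(pa[i]) : 0;
            int y = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (x != y) return Integer.compare(x, y);
        }
        return 0;
    }

    public static void main(String[] args) {
        String configured = "4.4.2";                         // current global config value
        String derived = minVersionFromCodeVersion("4.5.1"); // -> "4.5.0"
        // Upgrading to 4.4.3 would derive 4.4.0 and leave 4.4.2 in place;
        // upgrading to 4.5.x derives 4.5.0 and bumps the config automatically.
        String effective = compareDotted(derived, configured) > 0 ? derived : configured;
        System.out.println(effective);                       // "4.5.0"
    }
}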

 make minimal sysvm version configurable
 

 Key: CLOUDSTACK-8197
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8197
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server, SystemVM
Reporter: Daan Hoogland
Assignee: Daan Hoogland

 MinVRVersion is hard-coded in the VirtualNetworkApplianceService. To let 
 users make their own versions it must be a config key. As such it will be 
 best to move it to NetworkOrchestrationService, as its use is mostly from 
 that layer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-8195) Make getSPMetadata return XML

2015-02-03 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav resolved CLOUDSTACK-8195.
-
Resolution: Fixed

 Make getSPMetadata return XML
 

 Key: CLOUDSTACK-8195
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8195
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.5.0, 4.6.0


 A user in the community notes that the getSPMetadata API does not return the 
 metadata XML directly, but nested inside the JSON or XML API output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8200) Secondary storage and systemvm template detection fails with KVM and LocalStorage

2015-02-03 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-8200:

Priority: Blocker  (was: Major)

 Secondary storage and systemvm template detection fails with KVM and 
 LocalStorage
 -

 Key: CLOUDSTACK-8200
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8200
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0


 With KVM, when a zone is deployed with localstorage - it fails to detect 
 systemvm template of the added secondary storage and does not do anything:
 120453 2015-02-03 22:33:57,691 DEBUG [c.c.s.StatsCollector] 
 (StatsCollector-3:ctx-206849e1) There is no secondary storage VM for 
 secondary storage host Seclkj



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8200) Secondary storage and systemvm template detection fails with KVM and LocalStorage

2015-02-03 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303630#comment-14303630
 ] 

Rohit Yadav commented on CLOUDSTACK-8200:
-

Moving to blocker; previously working functionality is now broken.

 Secondary storage and systemvm template detection fails with KVM and 
 LocalStorage
 -

 Key: CLOUDSTACK-8200
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8200
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0


 With KVM, when a zone is deployed with localstorage - it fails to detect 
 systemvm template of the added secondary storage and does not do anything:
 120453 2015-02-03 22:33:57,691 DEBUG [c.c.s.StatsCollector] 
 (StatsCollector-3:ctx-206849e1) There is no secondary storage VM for 
 secondary storage host Seclkj



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8195) Make getSPMetadata return XML

2015-02-03 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8195:
---

 Summary: Make getSPMetadata return XML
 Key: CLOUDSTACK-8195
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8195
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.5.0, 4.6.0


A user in the community notes that the getSPMetadata API does not return the 
metadata XML directly, but nested inside the JSON or XML API output.
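
A hedged sketch of the desired behaviour (not the actual plugin code, and it 
assumes the Java servlet API is available): write the SP metadata document as 
raw XML with the right content type instead of embedding it in an API response 
envelope.

import java.io.IOException;
import javax.servlet.http.HttpServletResponse;

public class SpMetadataResponder {
    public static void writeMetadata(HttpServletResponse resp, String metadataXml)
            throws IOException {
        resp.setContentType("text/xml");      // serve the document itself, not a wrapper
        resp.setCharacterEncoding("UTF-8");
        resp.getWriter().write(metadataXml);  // metadataXml is the generated SAML SP metadata
    }
}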



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-8190) XenServer traffic label has changed in 4.5, backward compatibility is lost

2015-02-02 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav resolved CLOUDSTACK-8190.
-
Resolution: Fixed

Fixed on 4.5/master, both the API layer and the UI.

 XenServer traffic label has changed in 4.5, backward compatibility is lost
 

 Key: CLOUDSTACK-8190
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8190
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Xen, XenServer
Affects Versions: 4.5.0, 4.6.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.5.0, 4.6.0


 Until 4.4, the XenServer traffic label was xennetworklabel, but in 4.5/master 
 it's xenservernetworklabel. To keep backward compatibility I'm reverting the 
 changes introduced in a8212d9ef458dd7ac64b021e6fa33fcf64b3cce0 (xenserver 
 plugin refactoring). When, in the future, we have a separate Xen project 
 plugin (based on libvirt or what have you) we should add a new label like 
 xenprojectnetworklabel, but let's keep the old one for backward 
 compatibility's sake.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8191) SAML users should be created in separate accounts

2015-02-02 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8191:
---

 Summary: SAML users should be created in separate accounts
 Key: CLOUDSTACK-8191
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8191
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Critical
 Fix For: 4.5.0, 4.6.0


A user from the community reported that SAML users are created under one 
account, which violates multi-tenancy. Since CloudStack uses accounts to 
separate tenants, each SAML-authenticated user should be in their own account.
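
A minimal Java sketch of the intent (the names and in-memory storage are 
hypothetical, not CloudStack's account service): on first SAML login, provision 
a dedicated account per user instead of attaching every federated user to one 
shared account.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SamlAccountProvisioner {
    private final Map<String, String> accountByUser = new ConcurrentHashMap<>();

    // Returns the account name for a SAML user, creating a per-user account if needed.
    public String accountFor(String samlUserId, String domain) {
        return accountByUser.computeIfAbsent(samlUserId, id -> createAccount(id, domain));
    }

    private String createAccount(String samlUserId, String domain) {
        // A real implementation would call the account service; here we only derive a name.
        return domain + "/" + samlUserId;
    }

    public static void main(String[] args) {
        SamlAccountProvisioner p = new SamlAccountProvisioner();
        System.out.println(p.accountFor("alice@idp.example", "ROOT")); // ROOT/alice@idp.example
        System.out.println(p.accountFor("bob@idp.example", "ROOT"));   // a separate account
    }
}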



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7792) Usage Events to be captured based on Volume State Machine

2015-02-02 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14301081#comment-14301081
 ] 

Rohit Yadav commented on CLOUDSTACK-7792:
-

Ping [~damoder.reddy]? I'm facing SSVM-related issues 
(https://issues.apache.org/jira/browse/CLOUDSTACK-8183); do you think your fix 
is relevant, and does it also apply to 4.5?

 Usage Events to be captured based on Volume State Machine
 -

 Key: CLOUDSTACK-7792
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7792
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server, Usage
Reporter: Damodar Reddy T
Assignee: Damodar Reddy T
 Fix For: 4.6.0


 Currently in CloudStack the Usage Events for Volume-related actions are 
 captured directly in various places.
 But Volume actually has a State Machine which can be used to capture Usage 
 Events for volumes, similar to VM Usage Events.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CLOUDSTACK-7792) Usage Events to be captured based on Volume State Machine

2015-02-02 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav reopened CLOUDSTACK-7792:
-

 Usage Events to be captured based on Volume State Machine
 -

 Key: CLOUDSTACK-7792
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7792
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server, Usage
Reporter: Damodar Reddy T
Assignee: Damodar Reddy T
 Fix For: 4.6.0


 Currently in CloudStack the Usage Events for Volume-related actions are 
 captured directly in various places.
 But Volume actually has a State Machine which can be used to capture Usage 
 Events for volumes, similar to VM Usage Events.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8190) XenServer traffic label has changed in 4.5, backward compatibility is lost

2015-02-02 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8190:
---

 Summary: XenServer traffic label has changed in 4.5, backward 
compatibility is lost
 Key: CLOUDSTACK-8190
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8190
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Xen, XenServer
Affects Versions: 4.5.0, 4.6.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.5.0, 4.6.0


Until 4.4, the XenServer traffic label was xennetworklabel, but in 4.5/master 
it's xenservernetworklabel. To keep backward compatibility I'm reverting the 
changes introduced in a8212d9ef458dd7ac64b021e6fa33fcf64b3cce0 (xenserver 
plugin refactoring). When, in the future, we have a separate Xen project plugin 
(based on libvirt or what have you) we should add a new label like 
xenprojectnetworklabel, but let's keep the old one for backward compatibility's 
sake.
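
One way to keep both labels working, shown as a hedged Java sketch (the lookup 
helper is hypothetical; only the two parameter names come from the description 
above):

import java.util.HashMap;
import java.util.Map;

public class TrafficLabelCompat {
    static String xenTrafficLabel(Map<String, String> params) {
        String label = params.get("xennetworklabel");     // legacy key, kept for compatibility
        if (label == null) {
            label = params.get("xenservernetworklabel");  // key introduced by the refactor
        }
        return label;
    }

    public static void main(String[] args) {
        Map<String, String> legacy = new HashMap<>();
        legacy.put("xennetworklabel", "cloud-public");
        System.out.println(xenTrafficLabel(legacy));      // "cloud-public"
    }
}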



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-8191) SAML users should be created in separate accounts

2015-02-02 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav resolved CLOUDSTACK-8191.
-
Resolution: Fixed

Thanks [~webern] - this is fixed now.

 SAML users should be created in separate accounts
 -

 Key: CLOUDSTACK-8191
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8191
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Critical
 Fix For: 4.5.0, 4.6.0


 A user from the community reported that SAML users are created under one 
 account, which violates multi-tenancy. Since CloudStack uses accounts to 
 separate tenants, each SAML-authenticated user should be in their own account.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8183) Exceptions from 4.3.2 to 4.5.0 upgrade, logs fill up disk very fast

2015-01-27 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293434#comment-14293434
 ] 

Rohit Yadav commented on CLOUDSTACK-8183:
-

Update: any secondary-storage-related operations fail, such as downloading a 
template, registering a template, etc.

 Exceptions from 4.3.2 to 4.5.0 upgrade, logs fill up disk very fast
 ---

 Key: CLOUDSTACK-8183
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8183
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0


 Exceptions seen when 4.3.2 is upgraded to 4.5.0:
 A lot of logs
 2015-01-27 16:14:15,161 DEBUG [c.c.s.StatsCollector] 
 (StatsCollector-3:ctx-afb47a0b) StorageCollector is running...
 2015-01-27 16:14:15,165 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Forwarding null to 
 279278805450840
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: Routing from 
 279278805450939
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: Link is closed
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: MgmtId 
 279278805450939: Req: Resource [Host:1] is unreachable: Host 1: Link is closed
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-7:null) Seq 1--1: MgmtId 279278805450939: Req: Routing 
 to peer
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-8:null) Seq 1--1: MgmtId 279278805450939: Req: Cancel 
 request received
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (AgentManager-Handler-8:null) Seq 1-9188469139742654471: Cancelling.
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Waiting some more 
 time because this is the current command
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Waiting some more 
 time because this is the current command
 2015-01-27 16:14:15,167 INFO  [c.c.u.e.CSExceptionErrorCode] 
 (StatsCollector-3:ctx-afb47a0b) Could not find exception: 
 com.cloud.exception.OperationTimedoutException in error code list for 
 exceptions
 2015-01-27 16:14:15,167 WARN  [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Timed out on null
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Cancelling.
 2015-01-27 16:14:15,167 DEBUG [o.a.c.s.RemoteHostEndPoint] 
 (StatsCollector-3:ctx-afb47a0b) Failed to send command, due to Agent:1, 
 com.cloud.exception.OperationTimedoutException: Commands 9188469139742654471 
 to Host 1 timed out after 3600
 2015-01-27 16:14:15,167 ERROR [c.c.s.StatsCollector] 
 (StatsCollector-3:ctx-afb47a0b) Error trying to retrieve storage stats
 com.cloud.utils.exception.CloudRuntimeException: Failed to send command, due 
 to Agent:1, com.cloud.exception.OperationTimedoutException: Commands 
 9188469139742654471 to Host 1 timed out after 3600
 at 
 org.apache.cloudstack.storage.RemoteHostEndPoint.sendMessage(RemoteHostEndPoint.java:133)
 at 
 com.cloud.server.StatsCollector$StorageCollector.runInContext(StatsCollector.java:623)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
  at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
 at 
 

[jira] [Resolved] (CLOUDSTACK-5946) SSL: Fail to find the generated keystore. Loading fail-safe one to continue.

2015-01-27 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav resolved CLOUDSTACK-5946.
-
Resolution: Fixed

Works for me.

 SSL: Fail to find the generated keystore. Loading fail-safe one to continue.
 

 Key: CLOUDSTACK-5946
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5946
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.2.1, 4.3.0
 Environment: CentOS 6.4 x64
Reporter: Vadim Kim.
Assignee: Kishan Kavala
Priority: Minor
 Fix For: 4.5.0


 Management logs are full of this warning.
 Deleting the file /etc/cloudstack/management/cloudmanagementserver.keystore
 seems to fix the problem. The system re-creates it later and does not issue 
 the warning anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8184) Usage server failed to start after upgrade to 4.5.0

2015-01-27 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8184:
---

 Summary: Usage server failed to start after upgrade to 4.5.0
 Key: CLOUDSTACK-8184
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8184
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0


The usage server failed to start after upgrading from 4.3.2 to 4.5.0 RC2:

Jan 27 17:27:29 bluebox jsvc.exec[6449]: Caused by: 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'usageNetworkOfferingDaoImpl' defined in URL 
[jar:file:/usr/share/cloudstack-usage/lib/cloud-engine-schema-4.5.0.jar!/com/cloud/usage/dao/UsageNetworkOfferingDaoImpl.class]:
 BeanPostProcessor before instantiation of bean failed; nested exception is 
net.sf.cglib.core.CodeGenerationException: 
java.lang.ExceptionInInitializerError--null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8183) Exceptions from 4.3.2 to 4.5.0 upgrade, logs fill up disk very fast

2015-01-27 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293418#comment-14293418
 ] 

Rohit Yadav commented on CLOUDSTACK-8183:
-

From the cloud.mshost table, it did not remove the previous CloudStack mgmt 
server entries, so I manually shut down the mgmt server, updated the entries' 
removed column to NOW(), and when I restart I still see the log being forwarded 
to the mgmt server (there is only one mgmt server in the mshost table now, in 
the Up state, that is not removed):

2015-01-27 17:45:51,909 DEBUG [c.c.a.m.ClusteredAgentAttache] 
(AgentManager-Handler-12:null) Seq 1-7493145355014373378: Forwarding Seq 
1-7493145355014373378:  { Cmd , MgmtId: 279278805451596, via: 1, Ver: v1, 
Flags: 100111, 
[{com.cloud.agent.api.storage.ListVolumeCommand:{store:{com.cloud.agent.api.to.NfsTO:{_url:nfs://192.168.1.11/export/secondary,_role:Image}},secUrl:nfs://192.168.1.11/export/secondary,wait:0}}]
 } to 279278805450840


 Exceptions from 4.3.2 to 4.5.0 upgrade, logs fill up disk very fast
 ---

 Key: CLOUDSTACK-8183
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8183
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0


 Exceptions seen when 4.3.2 is upgraded to 4.5.0:
 A lot of logs
 2015-01-27 16:14:15,161 DEBUG [c.c.s.StatsCollector] 
 (StatsCollector-3:ctx-afb47a0b) StorageCollector is running...
 2015-01-27 16:14:15,165 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Forwarding null to 
 279278805450840
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: Routing from 
 279278805450939
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: Link is closed
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: MgmtId 
 279278805450939: Req: Resource [Host:1] is unreachable: Host 1: Link is closed
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-7:null) Seq 1--1: MgmtId 279278805450939: Req: Routing 
 to peer
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-8:null) Seq 1--1: MgmtId 279278805450939: Req: Cancel 
 request received
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (AgentManager-Handler-8:null) Seq 1-9188469139742654471: Cancelling.
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Waiting some more 
 time because this is the current command
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Waiting some more 
 time because this is the current command
 2015-01-27 16:14:15,167 INFO  [c.c.u.e.CSExceptionErrorCode] 
 (StatsCollector-3:ctx-afb47a0b) Could not find exception: 
 com.cloud.exception.OperationTimedoutException in error code list for 
 exceptions
 2015-01-27 16:14:15,167 WARN  [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Timed out on null
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Cancelling.
 2015-01-27 16:14:15,167 DEBUG [o.a.c.s.RemoteHostEndPoint] 
 (StatsCollector-3:ctx-afb47a0b) Failed to send command, due to Agent:1, 
 com.cloud.exception.OperationTimedoutException: Commands 9188469139742654471 
 to Host 1 timed out after 3600
 2015-01-27 16:14:15,167 ERROR [c.c.s.StatsCollector] 
 (StatsCollector-3:ctx-afb47a0b) Error trying to retrieve storage stats
 com.cloud.utils.exception.CloudRuntimeException: Failed to send command, due 
 to Agent:1, com.cloud.exception.OperationTimedoutException: Commands 
 9188469139742654471 to Host 1 timed out after 3600
 at 
 org.apache.cloudstack.storage.RemoteHostEndPoint.sendMessage(RemoteHostEndPoint.java:133)
 at 
 com.cloud.server.StatsCollector$StorageCollector.runInContext(StatsCollector.java:623)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
  at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
 

[jira] [Commented] (CLOUDSTACK-5946) SSL: Fail to find the generated keystore. Loading fail-safe one to continue.

2015-01-27 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293401#comment-14293401
 ] 

Rohit Yadav commented on CLOUDSTACK-5946:
-

Funny, I've none of them. Will create one. Perhaps an env issue.

 SSL: Fail to find the generated keystore. Loading fail-safe one to continue.
 

 Key: CLOUDSTACK-5946
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5946
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.2.1, 4.3.0
 Environment: CentOS 6.4 x64
Reporter: Vadim Kim.
Assignee: Kishan Kavala
Priority: Minor
 Fix For: 4.5.0


 Management logs are full of this warning.
 Deleting the file /etc/cloudstack/management/cloudmanagementserver.keystore
 seems to fix the problem. The system re-creates it later and does not issue 
 the warning anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8184) Usage server failed to start after upgrade to 4.5.0

2015-01-27 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293405#comment-14293405
 ] 

Rohit Yadav commented on CLOUDSTACK-8184:
-

cc [~kishan] can you see what is causing it? If you can give me pointers, I 
can try to fix it. Thanks.

 Usage server failed to start after upgrade to 4.5.0
 ---

 Key: CLOUDSTACK-8184
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8184
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0


 The usage server failed to start after upgrading from 4.3.2 to 4.5.0 RC2:
 Jan 27 17:27:29 bluebox jsvc.exec[6449]: Caused by: 
 org.springframework.beans.factory.BeanCreationException: Error creating bean 
 with name 'usageNetworkOfferingDaoImpl' defined in URL 
 [jar:file:/usr/share/cloudstack-usage/lib/cloud-engine-schema-4.5.0.jar!/com/cloud/usage/dao/UsageNetworkOfferingDaoImpl.class]:
  BeanPostProcessor before instantiation of bean failed; nested exception is 
 net.sf.cglib.core.CodeGenerationException: 
 java.lang.ExceptionInInitializerError--null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CLOUDSTACK-8184) Usage server failed to start after upgrade to 4.5.0

2015-01-27 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav closed CLOUDSTACK-8184.
---
Resolution: Fixed
  Assignee: Rohit Yadav

Fixed by cherry-picking fix from master;
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=6fb9746

 Usage server failed to start after upgrade to 4.5.0
 ---

 Key: CLOUDSTACK-8184
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8184
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0


 The usage server failed to start after upgrading from 4.3.2 to 4.5.0 RC2:
 Jan 27 17:27:29 bluebox jsvc.exec[6449]: Caused by: 
 org.springframework.beans.factory.BeanCreationException: Error creating bean 
 with name 'usageNetworkOfferingDaoImpl' defined in URL 
 [jar:file:/usr/share/cloudstack-usage/lib/cloud-engine-schema-4.5.0.jar!/com/cloud/usage/dao/UsageNetworkOfferingDaoImpl.class]:
  BeanPostProcessor before instantiation of bean failed; nested exception is 
 net.sf.cglib.core.CodeGenerationException: 
 java.lang.ExceptionInInitializerError--null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-5946) SSL: Fail to find the generated keystore. Loading fail-safe one to continue.

2015-01-27 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293350#comment-14293350
 ] 

Rohit Yadav commented on CLOUDSTACK-5946:
-

Still seen in CloudStack, from 4.3.2 to 4.5.0 upgrade:
2015-01-27 16:14:23,686 WARN  [c.c.u.n.Link] (AgentManager-Selector:null) SSL: 
Fail to find the generated keystore. Loading fail-safe one to continue.

Please advise?

 SSL: Fail to find the generated keystore. Loading fail-safe one to continue.
 

 Key: CLOUDSTACK-5946
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5946
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.2.1, 4.3.0
 Environment: CentOS 6.4 x64
Reporter: Vadim Kim.
Assignee: Kishan Kavala
Priority: Minor
 Fix For: 4.5.0


 Management logs are full of this warning.
 Deleting the file /etc/cloudstack/management/cloudmanagementserver.keystore
 seems to fix the problem. The system re-creates it later and does not issue 
 the warning anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8183) Exceptions from 4.3.2 to 4.5.0 upgrade, logs fill up disk very fast

2015-01-27 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293356#comment-14293356
 ] 

Rohit Yadav commented on CLOUDSTACK-8183:
-

In 10 seconds the logs have more than a million lines containing:
2015-01-27 16:21:44,313 DEBUG [c.c.a.m.ClusteredAgentAttache] 
(AgentManager-Handler-10:null) Seq 1-2130202623746244610: Forwarding Seq 
1-2130202623746244610: { Cmd , MgmtId: 279278805451241, via: 1, Ver: v1, Flags: 
100111, 
[{com.cloud.agent.api.storage.ListVolumeCommand:{store:{com.cloud.agent.api.to.NfsTO:{_url:nfs://192.168.1.11/export/secondary,_role:Image}},secUrl:nfs://192.168.1.11/export/secondary,wait:0}}]
 } to 279278805450840

Any storage folks have any idea? cc [~nitinme] [~mike-tutkowski] [~edison] ?

 Exceptions from 4.3.2 to 4.5.0 upgrade, logs fill up disk very fast
 ---

 Key: CLOUDSTACK-8183
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8183
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0


 Exceptions seen when 4.3.2 is upgraded to 4.5.0:
 A lot of logs
 2015-01-27 16:14:15,161 DEBUG [c.c.s.StatsCollector] 
 (StatsCollector-3:ctx-afb47a0b) StorageCollector is running...
 2015-01-27 16:14:15,165 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Forwarding null to 
 279278805450840
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: Routing from 
 279278805450939
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentAttache] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: Link is closed
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-7:null) Seq 1-9188469139742654471: MgmtId 
 279278805450939: Req: Resource [Host:1] is unreachable: Host 1: Link is closed
 2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-7:null) Seq 1--1: MgmtId 279278805450939: Req: Routing 
 to peer
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
 (AgentManager-Handler-8:null) Seq 1--1: MgmtId 279278805450939: Req: Cancel 
 request received
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (AgentManager-Handler-8:null) Seq 1-9188469139742654471: Cancelling.
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Waiting some more 
 time because this is the current command
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Waiting some more 
 time because this is the current command
 2015-01-27 16:14:15,167 INFO  [c.c.u.e.CSExceptionErrorCode] 
 (StatsCollector-3:ctx-afb47a0b) Could not find exception: 
 com.cloud.exception.OperationTimedoutException in error code list for 
 exceptions
 2015-01-27 16:14:15,167 WARN  [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Timed out on null
 2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
 (StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Cancelling.
 2015-01-27 16:14:15,167 DEBUG [o.a.c.s.RemoteHostEndPoint] 
 (StatsCollector-3:ctx-afb47a0b) Failed to send command, due to Agent:1, 
 com.cloud.exception.OperationTimedoutException: Commands 9188469139742654471 
 to Host 1 timed out after 3600
 2015-01-27 16:14:15,167 ERROR [c.c.s.StatsCollector] 
 (StatsCollector-3:ctx-afb47a0b) Error trying to retrieve storage stats
 com.cloud.utils.exception.CloudRuntimeException: Failed to send command, due 
 to Agent:1, com.cloud.exception.OperationTimedoutException: Commands 
 9188469139742654471 to Host 1 timed out after 3600
 at 
 org.apache.cloudstack.storage.RemoteHostEndPoint.sendMessage(RemoteHostEndPoint.java:133)
 at 
 com.cloud.server.StatsCollector$StorageCollector.runInContext(StatsCollector.java:623)
 at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
  at 
 org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
 at 
 org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
 at 
 

[jira] [Commented] (CLOUDSTACK-8184) Usage server failed to start after upgrade to 4.5.0

2015-01-27 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293407#comment-14293407
 ] 

Rohit Yadav commented on CLOUDSTACK-8184:
-

Here's the full log, it could be related to my environment:

Jan 27 17:27:29 bluebox jsvc.exec[6449]: Caused by: 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'usageNetworkOfferingDaoImpl' defined in URL 
[jar:file:/usr/share/cloudstack-usage/lib/cloud-engine-schema-4.5.0.jar!/com/cloud/usage/dao/UsageNetworkOfferingDaoImpl.class]:
 BeanPostProcessor before instantiation of bean failed; nested exception is 
net.sf.cglib.core.CodeGenerationException: 
java.lang.ExceptionInInitializerError--null
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:454)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:295)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:223)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:292)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:628)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:479)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
org.springframework.context.support.ClassPathXmlApplicationContext.init(ClassPathXmlApplicationContext.java:139)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
org.springframework.context.support.ClassPathXmlApplicationContext.init(ClassPathXmlApplicationContext.java:83)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011... 5 more
Jan 27 17:27:29 bluebox jsvc.exec[6449]: Caused by: 
net.sf.cglib.core.CodeGenerationException: 
java.lang.ExceptionInInitializerError--null
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
net.sf.cglib.core.ReflectUtils.newInstance(ReflectUtils.java:235)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
net.sf.cglib.core.ReflectUtils.newInstance(ReflectUtils.java:216)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
net.sf.cglib.proxy.Enhancer.createUsingReflection(Enhancer.java:643)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:225)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
net.sf.cglib.proxy.Enhancer.createHelper(Enhancer.java:377)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
com.cloud.utils.component.ComponentInstantiationPostProcessor.postProcessBeforeInstantiation(ComponentInstantiationPostProcessor.java:92)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInstantiation(AbstractAutowireCapableBeanFactory.java:890)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:448)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011... 15 more
Jan 27 17:27:29 bluebox jsvc.exec[6449]: Caused by: 
java.lang.ExceptionInInitializerError
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
com.cloud.utils.db.TransactionContextBuilder.interceptStart(TransactionContextBuilder.java:49)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
com.cloud.usage.dao.UsageNetworkOfferingDaoImpl_EnhancerByCloudStack_38feacdf.createPartialSelectSql(generated)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
com.cloud.utils.db.GenericDaoBase.init(GenericDaoBase.java:230)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
com.cloud.usage.dao.UsageNetworkOfferingDaoImpl_EnhancerByCloudStack_38feacdf.init(generated)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
Jan 27 17:27:29 bluebox jsvc.exec[6449]: #011at 
net.sf.cglib.core.ReflectUtils.newInstance(ReflectUtils.java:228)
Jan 27 

[jira] [Created] (CLOUDSTACK-8183) Exceptions from 4.3.2 to 4.5.0 upgrade, logs fill up disk very fast

2015-01-27 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8183:
---

 Summary: Exceptions from 4.3.2 to 4.5.0 upgrade, logs fill up disk 
very fast
 Key: CLOUDSTACK-8183
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8183
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0


Exceptions seen when 4.3.2 is upgraded to 4.5.0:

A lot of logs

2015-01-27 16:14:15,161 DEBUG [c.c.s.StatsCollector] 
(StatsCollector-3:ctx-afb47a0b) StorageCollector is running...
2015-01-27 16:14:15,165 DEBUG [c.c.a.m.ClusteredAgentAttache] 
(StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Forwarding null to 
279278805450840
2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentAttache] 
(AgentManager-Handler-7:null) Seq 1-9188469139742654471: Routing from 
279278805450939
2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentAttache] 
(AgentManager-Handler-7:null) Seq 1-9188469139742654471: Link is closed
2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
(AgentManager-Handler-7:null) Seq 1-9188469139742654471: MgmtId 
279278805450939: Req: Resource [Host:1] is unreachable: Host 1: Link is closed
2015-01-27 16:14:15,166 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
(AgentManager-Handler-7:null) Seq 1--1: MgmtId 279278805450939: Req: Routing to 
peer
2015-01-27 16:14:15,167 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] 
(AgentManager-Handler-8:null) Seq 1--1: MgmtId 279278805450939: Req: Cancel 
request received
2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
(AgentManager-Handler-8:null) Seq 1-9188469139742654471: Cancelling.
2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
(StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Waiting some more 
time because this is the current command
2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
(StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Waiting some more 
time because this is the current command
2015-01-27 16:14:15,167 INFO  [c.c.u.e.CSExceptionErrorCode] 
(StatsCollector-3:ctx-afb47a0b) Could not find exception: 
com.cloud.exception.OperationTimedoutException in error code list for exceptions
2015-01-27 16:14:15,167 WARN  [c.c.a.m.AgentAttache] 
(StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Timed out on null
2015-01-27 16:14:15,167 DEBUG [c.c.a.m.AgentAttache] 
(StatsCollector-3:ctx-afb47a0b) Seq 1-9188469139742654471: Cancelling.
2015-01-27 16:14:15,167 DEBUG [o.a.c.s.RemoteHostEndPoint] 
(StatsCollector-3:ctx-afb47a0b) Failed to send command, due to Agent:1, 
com.cloud.exception.OperationTimedoutException: Commands 9188469139742654471 to 
Host 1 timed out after 3600
2015-01-27 16:14:15,167 ERROR [c.c.s.StatsCollector] 
(StatsCollector-3:ctx-afb47a0b) Error trying to retrieve storage stats
com.cloud.utils.exception.CloudRuntimeException: Failed to send command, due to 
Agent:1, com.cloud.exception.OperationTimedoutException: Commands 
9188469139742654471 to Host 1 timed out after 3600
at 
org.apache.cloudstack.storage.RemoteHostEndPoint.sendMessage(RemoteHostEndPoint.java:133)
at 
com.cloud.server.StatsCollector$StorageCollector.runInContext(StatsCollector.java:623)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
 at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-01-27 16:14:15,651 DEBUG [c.c.s.StatsCollector] 
(StatsCollector-1:ctx-83c6ce44) HostStatsCollector is running...
2015-01-27 16:14:15,874 DEBUG 

[jira] [Updated] (CLOUDSTACK-8151) An API to cleanup cloud_usage table

2015-01-23 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-8151:

Summary: An API to cleanup cloud_usage table  (was: Global config and 
periodic thread to cleanup cloud_usage table)

 An API to cleanup cloud_usage table
 ---

 Key: CLOUDSTACK-8151
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8151
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0, 4.6.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.6.0


 The cloud_usage table can be very big and once the raw data is processed we 
 can safely remove the rows to reduce database size. There is a KB on this: 
 http://support.citrix.com/article/CTX139043
 The aim is to create a global config (delete records older than specified no. 
 of days) and an API (if necessary) to trigger cleaning up of old entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8151) Global config and periodic thread to cleanup cloud_usage table

2015-01-23 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14289220#comment-14289220
 ] 

Rohit Yadav commented on CLOUDSTACK-8151:
-

I thought about it and instead of a thread, I'm planning to add an explicit API 
to allow removal of older entries. This gives the user/admin much better 
control.
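
Purely as an illustration of what such an API could do under the hood (not the actual implementation), here is a minimal JDBC sketch that removes processed rows older than N days. The cloud_usage table name comes from the issue; the end_date column and the MySQL DATE_SUB syntax are assumptions.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class UsageCleanupSketch {
        // Deletes cloud_usage rows older than the given number of days; returns how many were removed.
        public static int removeOldUsageRecords(String jdbcUrl, String user, String password,
                                                int olderThanDays) throws Exception {
            String sql = "DELETE FROM cloud_usage WHERE end_date < DATE_SUB(NOW(), INTERVAL ? DAY)";
            try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
                 PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setInt(1, olderThanDays);
                return stmt.executeUpdate();
            }
        }
    }

An API on top of this would mainly add access control and parameter validation around the same delete.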

 Global config and periodic thread to cleanup cloud_usage table
 --

 Key: CLOUDSTACK-8151
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8151
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0, 4.6.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.6.0


 The cloud_usage table can be very big and once the raw data is processed we 
 can safely remove the rows to reduce database size. There is a KB on this: 
 http://support.citrix.com/article/CTX139043
 The aim is to create a global config (delete records older than specified no. 
 of days) and an API (if necessary) to trigger cleaning up of old entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7365) Upgrading without proper systemvm template corrupt cloudstack management server

2015-01-23 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14289226#comment-14289226
 ] 

Rohit Yadav commented on CLOUDSTACK-7365:
-

Ping, let's fix this before next 4.5.0 RC is cut?

 Upgrading without proper systemvm template corrupt cloudstack management 
 server
 ---

 Key: CLOUDSTACK-7365
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7365
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.3.0, 4.4.0, 4.4.1
Reporter: Pierre-Luc Dion
  Labels: upgrade
 Attachments: 4.4.0to4.4.1_mgtlogissue.txt


 Since 4.3.0, also affecting 4.4.0 and 4.4.1. When upgrading the CloudStack 
 management server, the systemvm template must be registered with the proper name 
 prior to the upgrade; otherwise the management server fails to restart after 
 upgrading the database schema.
 The possible repair method is to revert the packages to the previously installed 
 CloudStack version and restore the database that was upgraded.
 This is not a viable upgrade path, since the management server packages could be 
 accidentally upgraded by a yum upgrade or apt-get update.
 Upgrading the CloudStack management server without previously uploading the systemvm 
 template should not leave the management server unable to start; today, if the 
 systemvm template is not in place, the management server cannot start.
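
For illustration only, a pre-flight check along these lines could verify the template before the schema upgrade runs. This is a hypothetical sketch, not CloudStack's upgrade code; the cloud.vm_template table name is taken from a query quoted elsewhere in this digest, and the removed column and expected name are assumptions.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class SystemVmTemplateCheckSketch {
        // Returns true if a template with the expected systemvm name is registered and not removed.
        static boolean systemVmTemplateRegistered(Connection conn, String expectedName) throws Exception {
            String sql = "SELECT COUNT(*) FROM cloud.vm_template WHERE name = ? AND removed IS NULL";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, expectedName);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() && rs.getInt(1) > 0;
                }
            }
        }
    }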



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-8151) An API to cleanup cloud_usage table

2015-01-23 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav resolved CLOUDSTACK-8151.
-
Resolution: Fixed

Please close after testing. Works for me.

 An API to cleanup cloud_usage table
 ---

 Key: CLOUDSTACK-8151
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8151
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0, 4.6.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.6.0


 The cloud_usage table can be very big and once the raw data is processed we 
 can safely remove the rows to reduce database size. There is a KB on this: 
 http://support.citrix.com/article/CTX139043
 The aim is to create a global config (delete records older than specified no. 
 of days) and an API (if necessary) to trigger cleaning up of old entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-6378) SSL: Fail to find the generated keystore.

2015-01-21 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14285369#comment-14285369
 ] 

Rohit Yadav commented on CLOUDSTACK-6378:
-

The issue still exists on latest 4.5; I'm not sure how to fix this.

 SSL: Fail to find the generated keystore.
 -

 Key: CLOUDSTACK-6378
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6378
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.2.1, 4.3.0, 4.4.0
 Environment: Confirmed this is an issue on centos 6.5 and ubuntu 
 12.04 on both versions of Cloudstack 4.2.1 and 4.3.0
Reporter: Michael Phillips
Priority: Minor

 The logs on versions 4.2.1 and 4.3.0 get filled with the following error:
 WARN [c.c.u.n.Link] (AgentManager-Selector:null) SSL: Fail to find the 
 generated keystore. Loading fail-safe one to continue.
 In 4.2.1, the fix was to rename cloudmanagementserver.keystore to 
 cloud.keystore and then restart CloudStack.
 In 4.3.0, the system no longer creates the file named 
 cloudmanagementserver.keystore by default. You now have to create the keystore file 
 yourself by running sudo keytool -genkey -keystore, and name the resulting file 
 cloud.keystore. This error is reproducible and has been confirmed by other users.
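
For context, the "fail-safe" behaviour in the warning boils down to a load-with-fallback pattern. A rough, hypothetical sketch follows; the file path and the fail-safe resource name are assumptions, and this is not CloudStack's actual code.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.security.KeyStore;

    public class KeystoreLoadSketch {
        // Loads the generated keystore if it exists, otherwise falls back to a bundled "fail-safe" one.
        static KeyStore loadKeystore(char[] password) throws Exception {
            KeyStore ks = KeyStore.getInstance("JKS");
            File generated = new File("/etc/cloudstack/management/cloud.keystore");
            try (InputStream in = generated.exists()
                    ? new FileInputStream(generated)
                    : KeystoreLoadSketch.class.getResourceAsStream("/failsafe.keystore")) {
                ks.load(in, password);
                return ks;
            }
        }
    }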



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CLOUDSTACK-5946) SSL: Fail to find the generated keystore. Loading fail-safe one to continue.

2015-01-21 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav reopened CLOUDSTACK-5946:
-

[~kishan] your fix was reverted in 47bb175bd4b443f992d8ecc55f0bc795693ba016; the 
issue is still seen on latest 4.5

 SSL: Fail to find the generated keystore. Loading fail-safe one to continue.
 

 Key: CLOUDSTACK-5946
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5946
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.2.1, 4.3.0
 Environment: CentOS 6.4 x64
Reporter: Vadim Kim.
Assignee: Kishan Kavala
Priority: Minor
 Fix For: 4.5.0


 Management logs are full of this warning.
 Deleting the file /etc/cloudstack/management/cloudmanagementserver.keystore
 seems to fix the problem. The system re-creates it later and does not issue the 
 warning anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-5946) SSL: Fail to find the generated keystore. Loading fail-safe one to continue.

2015-01-21 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14285577#comment-14285577
 ] 

Rohit Yadav commented on CLOUDSTACK-5946:
-

Please resolve/close after testing. cc [~htrippaers] - Hugo, can you please 
check this?

 SSL: Fail to find the generated keystore. Loading fail-safe one to continue.
 

 Key: CLOUDSTACK-5946
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5946
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.2.1, 4.3.0
 Environment: CentOS 6.4 x64
Reporter: Vadim Kim.
Assignee: Kishan Kavala
Priority: Minor
 Fix For: 4.5.0


 Management logs are full of this warning.
 Deleting the file /etc/cloudstack/management/cloudmanagementserver.keystore
 seems to fix the problem. The system re-creates it later and does not issue the 
 warning anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-8155) JSON response from Mgmt server has additional spaces, breaks a badly written client

2015-01-21 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav resolved CLOUDSTACK-8155.
-
Resolution: Fixed

 JSON response from Mgmt server has additional spaces, breaks a badly written 
 client
 ---

 Key: CLOUDSTACK-8155
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8155
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0, 4.6.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Critical
 Fix For: 4.5.0, 4.6.0


 If the JSON parser of an old HTTP client does not take extra 
 whitespace in the JSON response into account, the client may break. The fix is to 
 remove the unnecessary whitespace in ApiResponseSerializer.
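
As a small illustration of the idea (not the actual ApiResponseSerializer patch), serializing compactly avoids incidental whitespace between JSON tokens, for example with Gson:

    import com.google.gson.Gson;
    import com.google.gson.GsonBuilder;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class CompactJsonSketch {
        public static void main(String[] args) {
            // A compact Gson instance emits no spaces after ':' or ',', unlike a pretty-printing one.
            Gson compact = new GsonBuilder().disableHtmlEscaping().create();
            Map<String, Object> response = new LinkedHashMap<>();
            response.put("count", 1);
            response.put("state", "Running");
            System.out.println(compact.toJson(response)); // {"count":1,"state":"Running"}
        }
    }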



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CLOUDSTACK-7648) There are new VM State Machine changes introduced which were missed to capture the usage events

2015-01-20 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav reopened CLOUDSTACK-7648:
-

Not fixed on the 4.5 branch.

 There are new VM State Machine changes introduced which were missed to 
 capture the usage events
 ---

 Key: CLOUDSTACK-7648
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7648
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.5.0
Reporter: Damodar Reddy T
Assignee: Damodar Reddy T
 Fix For: 4.5.0


 New VM state machine transitions were introduced while adding the VM sync 
 changes, and the corresponding usage events were not captured. 
 This results in wrong usage statistics for a VM whose state is changed 
 by VM sync.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8172) Console proxy does not work in advance network with KVM and ACS 4.5

2015-01-20 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-8172:

Priority: Blocker  (was: Major)

 Console proxy does not work in advance network with KVM and ACS 4.5
 ---

 Key: CLOUDSTACK-8172
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8172
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Priority: Blocker
 Fix For: 4.5.0, 4.6.0


 This could be an environment-related issue. On a KVM host (Ubuntu 14.04 x64) 
 with the latest ACS 4.5 (SHA 45ebdf34aee51217bf32e58039da16870dd1e5b3) 
 I'm unable to get the console proxy to work, while the SSVM, VPCs, and 
 VLAN-isolation-based isolated SNAT networks seem to work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8172) Console proxy does not work in advance network with KVM and ACS 4.5

2015-01-20 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8172:
---

 Summary: Console proxy does not work in advance network with KVM 
and ACS 4.5
 Key: CLOUDSTACK-8172
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8172
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.0
Reporter: Rohit Yadav
 Fix For: 4.5.0, 4.6.0


This could be an environment-related issue. On a KVM host (Ubuntu 14.04 x64) 
with the latest ACS 4.5 (SHA 45ebdf34aee51217bf32e58039da16870dd1e5b3) 
I'm unable to get the console proxy to work, while the SSVM, VPCs, and 
VLAN-isolation-based isolated SNAT networks seem to work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8171) Lock related warnings seen in 4.5/master related to template_spool_ref2

2015-01-20 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8171:
---

 Summary: Lock related warnings seen in 4.5/master related to 
template_spool_ref2
 Key: CLOUDSTACK-8171
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8171
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.0, 4.6.0
Reporter: Rohit Yadav
 Fix For: 4.5.0, 4.6.0


Warnings seen on 4.5 mgmt server startup:

INFO  [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-18:ctx-7e377b10 
job-29/job-30 ctx-a3a27aa3) releasing lock for VMTemplateStoragePool 2
WARN  [c.c.u.d.Merovingian2] (Work-Job-Executor-18:ctx-7e377b10 job-29/job-30 
ctx-a3a27aa3) Was unable to find lock for the key template_spool_ref2 and 
thread id 1504476186
com.cloud.utils.exception.CloudRuntimeException: Was unable to find lock for 
the key template_spool_ref2 and thread id 1504476186
at com.cloud.utils.db.Merovingian2.release(Merovingian2.java:274)
at 
com.cloud.utils.db.TransactionLegacy.release(TransactionLegacy.java:397)
at 
com.cloud.utils.db.GenericDaoBase.releaseFromLockTable(GenericDaoBase.java:1045)
at sun.reflect.GeneratedMethodAccessor185.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at 
com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at 
org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at com.sun.proxy.$Proxy75.releaseFromLockTable(Unknown Source)
at 
org.apache.cloudstack.storage.volume.VolumeServiceImpl.createBaseImageAsync(VolumeServiceImpl.java:513)
at 
org.apache.cloudstack.storage.volume.VolumeServiceImpl.createVolumeFromTemplateAsync(VolumeServiceImpl.java:747)
at 
org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.recreateVolume(VolumeOrchestrator.java:1250)
at 
org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.prepare(VolumeOrchestrator.java:1320)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:981)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4440)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at 
com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4596)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:536)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:493)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)





--
This message was sent 

[jira] [Commented] (CLOUDSTACK-7920) NPE in Volume sync causing ssvm agent to not connect

2015-01-19 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283306#comment-14283306
 ] 

Rohit Yadav commented on CLOUDSTACK-7920:
-

[~nitinme] Thanks for commenting. For what it's worth, if it makes the code more 
robust, please see if you can cherry-pick this onto 4.5. I tried to 
cherry-pick it myself, but it failed with a conflict in one of the files; since I 
don't know that part of the codebase, I aborted the cherry-pick. I 
think it won't be a lot of effort on your end to cherry-pick and fix the 
conflict since you know the code.

 NPE in Volume sync causing ssvm agent to not connect 
 -

 Key: CLOUDSTACK-7920
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7920
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0, 4.6.0
Reporter: Nitin Mehta
Assignee: Nitin Mehta
Priority: Critical
 Fix For: 4.6.0


 NPE in Volume sync causing ssvm agent to not connect 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7792) Usage Events to be captured based on Volume State Machine

2015-01-19 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283470#comment-14283470
 ] 

Rohit Yadav commented on CLOUDSTACK-7792:
-

[~damoder.reddy] Damodar, do you think we should cherry-pick this fix onto 4.5? 
It looks like an interesting improvement. If so, please cherry-pick it and fix any 
conflicts on 4.5. Thanks.

 Usage Events to be captured based on Volume State Machine
 -

 Key: CLOUDSTACK-7792
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7792
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server, Usage
Reporter: Damodar Reddy T
Assignee: Damodar Reddy T
 Fix For: 4.6.0


 Currently in CloudStack the usage events for volume-related actions are 
 captured directly at various places.
 However, volumes have a state machine which can be used to capture volume usage 
 events in the same way as VM usage events.
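
As a generic sketch of that idea (not CloudStack's state machine or event bus API), routing every transition through a single hook gives one place to publish usage events; the enum values and callback shape below are invented for illustration.

    import java.util.function.BiConsumer;

    public class VolumeStateMachineSketch {
        enum VolumeState { ALLOCATED, CREATING, READY, DESTROYED }

        // Route every transition through one place and publish the usage event there,
        // instead of emitting events ad hoc at each call site.
        static void transition(long volumeId, VolumeState from, VolumeState to,
                               BiConsumer<Long, VolumeState> usageEventPublisher) {
            // ...validate and apply the actual state change here...
            usageEventPublisher.accept(volumeId, to);
        }

        public static void main(String[] args) {
            transition(42L, VolumeState.CREATING, VolumeState.READY,
                    (id, state) -> System.out.println("usage event: volume " + id + " -> " + state));
        }
    }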



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7648) There are new VM State Machine changes introduced which were missed to capture the usage events

2015-01-19 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283508#comment-14283508
 ] 

Rohit Yadav commented on CLOUDSTACK-7648:
-

[~damoder.reddy] [~kishan] Damodar/Kishan - the fix version is 4.5.0 but the fix 
does not exist in the 4.5 branch; can you please apply it to the 4.5 branch?

 There are new VM State Machine changes introduced which were missed to 
 capture the usage events
 ---

 Key: CLOUDSTACK-7648
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7648
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.5.0
Reporter: Damodar Reddy T
Assignee: Damodar Reddy T
 Fix For: 4.5.0


 New VM state machine transitions were introduced while adding the VM sync 
 changes, and the corresponding usage events were not captured. 
 This results in wrong usage statistics for a VM whose state is changed 
 by VM sync.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-8164) Removing snapshot for a VM whose host is disabled gives a null pointer exception

2015-01-18 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav resolved CLOUDSTACK-8164.
-
Resolution: Fixed

Merged on 4.3, 4.4, 4.5 and master.

 Removing snapshot for a VM whose host is disabled gives a null pointer 
 exception
 

 Key: CLOUDSTACK-8164
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8164
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.5.0
Reporter: Abhinandan Prateek
Assignee: Abhinandan Prateek
 Fix For: 4.5.0


 tcher] (Work-Job-Executor-45:ctx-bce004ad job-597/job-599) Run VM work job: 
 com.cloud.vm.snapshot.VmWorkDeleteAllVMSnapshots for VM 3, job origin: 597
 2015-01-17 00:09:55,762 DEBUG [c.c.v.VmWorkJobHandlerProxy] 
 (Work-Job-Executor-45:ctx-bce004ad job-597/job-599 ctx-f1671ba3) Execute VM 
 work job: 
 com.cloud.vm.snapshot.VmWorkDeleteAllVMSnapshots{userId:2,accountId:2,vmId:3,handlerName:VMSnapshotManagerImpl}
 2015-01-17 00:09:55,794 ERROR [c.c.v.VmWorkJobHandlerProxy] 
 (Work-Job-Executor-45:ctx-bce004ad job-597/job-599 ctx-f1671ba3) Invocation 
 exception, caused by: java.lang.NullPointerException
 2015-01-17 00:09:55,794 INFO  [c.c.v.VmWorkJobHandlerProxy] 
 (Work-Job-Executor-45:ctx-bce004ad job-597/job-599 ctx-f1671ba3) Rethrow 
 exception java.lang.NullPointerException
 2015-01-17 00:09:55,794 DEBUG [c.c.v.VmWorkJobDispatcher] 
 (Work-Job-Executor-45:ctx-bce004ad job-597/job-599) Done with run of VM work 
 job: com.cloud.vm.snapshot.VmWorkDeleteAllVMSnapshots for VM 3, job origin: 
 597
 2015-01-17 00:09:55,794 ERROR [c.c.v.VmWorkJobDispatcher] 
 (Work-Job-Executor-45:ctx-bce004ad job-597/job-599) Unable to complete 
 AsyncJobVO {id:599, userId: 2, accountId: 2, instanceType: null, instanceId: 
 null, cmd: com.cloud.vm.snapshot.VmWorkDeleteAllVMSnapshots, cmdInfo: 
 rO0ABXNyADBjb20uY2xvdWQudm0uc25hcHNob3QuVm1Xb3JrRGVsZXRlQWxsVk1TbmFwc2hvdHOsl-VRajf8cAIAAUwABHR5cGV0ACdMY29tL2Nsb3VkL3ZtL3NuYXBzaG90L1ZNU25hcHNob3QkVHlwZTt4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1ldAASTGphdmEvbGFuZy9TdHJpbmc7eHAAAgACAAN0ABVWTVNuYXBzaG90TWFuYWdlckltcGxw,
  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
 null, initMsid: 8796753417514, completeMsid: null, lastUpdated: null, 
 lastPolled: null, created: Sat Jan 17 00:09:53 IST 2015}, job origin:597
 java.lang.NullPointerException
 at 
 org.apache.cloudstack.storage.helper.VMSnapshotHelperImpl.pickRunningHost(VMSnapshotHelperImpl.java:86)
 at 
 org.apache.cloudstack.storage.vmsnapshot.DefaultVMSnapshotStrategy.deleteVMSnapshot(DefaultVMSnapshotStrategy.java:194)
 at 
 com.cloud.vm.snapshot.VMSnapshotManagerImpl.orchestrateDeleteAllVMSnapshots(VMSnapshotManagerImpl.java:773)
 at 
 com.cloud.vm.snapshot.VMSnapshotManagerImpl.orchestrateDeleteAllVMSnapshots(VMSnapshotManagerImpl.java:1007)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
 at 
 com.cloud.vm.snapshot.VMSnapshotManagerImpl.handleVmWorkJob(VMSnapshotManagerImpl.java:1014)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
 at 
 org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
 at 
 org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
 at com.sun.proxy.$Proxy197.handleVmWorkJob(Unknown Source)
 at 
 com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
 at 
 

[jira] [Assigned] (CLOUDSTACK-8164) Removing snapshot for a VM whose host is disabled gives a null pointer exception

2015-01-18 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav reassigned CLOUDSTACK-8164:
---

Assignee: Abhinandan Prateek

 Removing snapshot for a VM whose host is disabled gives a null pointer 
 exception
 

 Key: CLOUDSTACK-8164
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8164
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.5.0
Reporter: Abhinandan Prateek
Assignee: Abhinandan Prateek
 Fix For: 4.5.0


 tcher] (Work-Job-Executor-45:ctx-bce004ad job-597/job-599) Run VM work job: 
 com.cloud.vm.snapshot.VmWorkDeleteAllVMSnapshots for VM 3, job origin: 597
 2015-01-17 00:09:55,762 DEBUG [c.c.v.VmWorkJobHandlerProxy] 
 (Work-Job-Executor-45:ctx-bce004ad job-597/job-599 ctx-f1671ba3) Execute VM 
 work job: 
 com.cloud.vm.snapshot.VmWorkDeleteAllVMSnapshots{userId:2,accountId:2,vmId:3,handlerName:VMSnapshotManagerImpl}
 2015-01-17 00:09:55,794 ERROR [c.c.v.VmWorkJobHandlerProxy] 
 (Work-Job-Executor-45:ctx-bce004ad job-597/job-599 ctx-f1671ba3) Invocation 
 exception, caused by: java.lang.NullPointerException
 2015-01-17 00:09:55,794 INFO  [c.c.v.VmWorkJobHandlerProxy] 
 (Work-Job-Executor-45:ctx-bce004ad job-597/job-599 ctx-f1671ba3) Rethrow 
 exception java.lang.NullPointerException
 2015-01-17 00:09:55,794 DEBUG [c.c.v.VmWorkJobDispatcher] 
 (Work-Job-Executor-45:ctx-bce004ad job-597/job-599) Done with run of VM work 
 job: com.cloud.vm.snapshot.VmWorkDeleteAllVMSnapshots for VM 3, job origin: 
 597
 2015-01-17 00:09:55,794 ERROR [c.c.v.VmWorkJobDispatcher] 
 (Work-Job-Executor-45:ctx-bce004ad job-597/job-599) Unable to complete 
 AsyncJobVO {id:599, userId: 2, accountId: 2, instanceType: null, instanceId: 
 null, cmd: com.cloud.vm.snapshot.VmWorkDeleteAllVMSnapshots, cmdInfo: 
 rO0ABXNyADBjb20uY2xvdWQudm0uc25hcHNob3QuVm1Xb3JrRGVsZXRlQWxsVk1TbmFwc2hvdHOsl-VRajf8cAIAAUwABHR5cGV0ACdMY29tL2Nsb3VkL3ZtL3NuYXBzaG90L1ZNU25hcHNob3QkVHlwZTt4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1ldAASTGphdmEvbGFuZy9TdHJpbmc7eHAAAgACAAN0ABVWTVNuYXBzaG90TWFuYWdlckltcGxw,
  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
 null, initMsid: 8796753417514, completeMsid: null, lastUpdated: null, 
 lastPolled: null, created: Sat Jan 17 00:09:53 IST 2015}, job origin:597
 java.lang.NullPointerException
 at 
 org.apache.cloudstack.storage.helper.VMSnapshotHelperImpl.pickRunningHost(VMSnapshotHelperImpl.java:86)
 at 
 org.apache.cloudstack.storage.vmsnapshot.DefaultVMSnapshotStrategy.deleteVMSnapshot(DefaultVMSnapshotStrategy.java:194)
 at 
 com.cloud.vm.snapshot.VMSnapshotManagerImpl.orchestrateDeleteAllVMSnapshots(VMSnapshotManagerImpl.java:773)
 at 
 com.cloud.vm.snapshot.VMSnapshotManagerImpl.orchestrateDeleteAllVMSnapshots(VMSnapshotManagerImpl.java:1007)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
 at 
 com.cloud.vm.snapshot.VMSnapshotManagerImpl.handleVmWorkJob(VMSnapshotManagerImpl.java:1014)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
 at 
 org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
 at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
 at 
 org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
 at com.sun.proxy.$Proxy197.handleVmWorkJob(Unknown Source)
 at 
 com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
 at 
 

[jira] [Created] (CLOUDSTACK-8166) Usage data boundary condition and NPE

2015-01-18 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8166:
---

 Summary: Usage data boundary condition and NPE
 Key: CLOUDSTACK-8166
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8166
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.3.1, 4.5.0, 4.3.2
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.5.0


Usage parsers trim records to the start date but overshoot the end date.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-6920) Support listing of LBHealthcheck policy with LBHealthcheck policy ID

2015-01-18 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281764#comment-14281764
 ] 

Rohit Yadav commented on CLOUDSTACK-6920:
-

This is in 4.4 and master but not on 4.5; any reason for this?

 Support listing of LBHealthcheck policy with LBHealthcheck policy ID
 

 Key: CLOUDSTACK-6920
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6920
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Network Controller
Affects Versions: 4.4.0
Reporter: Rajesh Battala
Assignee: Rajesh Battala
 Fix For: 4.4.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8167) CreateSnapshot publishes volume Id instead of UUId

2015-01-18 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-8167:
---

 Summary: CreateSnapshot publishes volume Id instead of UUId
 Key: CLOUDSTACK-8167
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8167
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.0, 4.4.3, 4.3.2
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.5.0, 4.6.0


CreateSnapshot should publish uuid on event bus



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8107) Failed to create snapshot from volume when the task is performed repeatedly in zone wide primary Storage.

2015-01-18 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281777#comment-14281777
 ] 

Rohit Yadav commented on CLOUDSTACK-8107:
-

Any reason why this is in 4.4 and master branch but not in 4.5?

 Failed to create snapshot from volume when the task is performed repeatedly 
 in zone wide primary Storage.
 -

 Key: CLOUDSTACK-8107
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8107
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Likitha Shetty
Assignee: Likitha Shetty
Priority: Critical
 Fix For: 4.4.3


 +Steps to reproduce+
 1. Set up a CS zone with a VMware DC.
 2. Ensure the DC has 2 clusters with a host each.
 3. Add 2 cluster-wide primary storage pools, one to each of the clusters.
 4. Add a zone-wide primary storage pool.
 5. Deploy a VM with a data disk. Ensure both the ROOT and data disks of the VM 
 are in the zone-wide storage pool.
 6. Take snapshots of the data volume until the operation fails.
 In vCenter, the failure occurs while reconfiguring the (worker) VM and the error 
 is '.vmdk was not found'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8129) [VMware] Cold migration of VM across VMware DCs leaves the VM behind in the source host.

2015-01-18 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281789#comment-14281789
 ] 

Rohit Yadav commented on CLOUDSTACK-8129:
-

any reason this should not be fixed on 4.5?

 [VMware] Cold migration of VM across VMware DCs leaves the VM behind in the 
 source host.
 

 Key: CLOUDSTACK-8129
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8129
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Likitha Shetty
Assignee: Likitha Shetty
Priority: Critical
 Fix For: Future


 1. Have an upgraded VMware setup with a legacy zone (a legacy zone is a CS zone 
 whose clusters are spread across different VMware DCs). Ensure 
 there are two clusters, say C1 and C2, that belong to VMware DCs D1 and 
 D2 respectively.
 2. Deploy a VM. Say it is in C1.
 3. Stop the VM.
 4. Migrate the VM to a storage pool in C2.
 5. Start the VM.
 Observations - the VM will be successfully started on a host in C2, but a 
 stale entry of the VM will be left behind in C1. Whenever VM sync kicks in, it 
 stops and starts the VM repeatedly because there are two VMs with the same name 
 in vCenter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8165) The systemvm template guest os is set to 32 bit debian by default instead of 64 bit

2015-01-18 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281738#comment-14281738
 ] 

Rohit Yadav commented on CLOUDSTACK-8165:
-

Moving to blocker. In one setup, this failed with:
2015-01-17 18:24:10,211 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-4:ctx-5c32d20b) 1. The VM v-1-VM is in Starting state.
2015-01-17 18:24:10,214 WARN  [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-4:ctx-5c32d20b) Catch Exception: class 
com.cloud.utils.exception.CloudRuntimeException due to 
com.cloud.utils.exception.CloudRuntimeException: Cannot find template Debian 
Wheezy 7 (32-bit) on XenServer host
com.cloud.utils.exception.CloudRuntimeException: Cannot find template Debian 
Wheezy 7 (32-bit) on XenServer host
at 
com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.createVmFromTemplate(CitrixResourceBase.java:1314)
at 
com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.execute(CitrixResourceBase.java:1756)
at 
com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:490)
at 
com.cloud.hypervisor.xenserver.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:64)
at 
com.cloud.hypervisor.xenserver.resource.XenServer610Resource.executeRequest(XenServer610Resource.java:87)

 The systemvm template guest os is set to 32 bit debian by default instead of 
 64 bit
 ---

 Key: CLOUDSTACK-8165
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8165
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: SystemVM
Affects Versions: 4.5.0
Reporter: Abhinandan Prateek
Priority: Blocker
 Fix For: 4.5.0


 This query on a fresh db install 
 SELECT cloud.guest_os.display_name FROM cloud.vm_template, cloud.guest_os 
 where cloud.vm_template.guest_os_id=cloud.guest_os.id and 
 cloud.vm_template.name='SystemVM Template (XenServer)' 
 returns 'Debian GNU/Linux 7(32-bit)'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7920) NPE in Volume sync causing ssvm agent to not connect

2015-01-18 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281786#comment-14281786
 ] 

Rohit Yadav commented on CLOUDSTACK-7920:
-

[~nitinme] Nitin, can you comment if this should be on 4.5 as well?

 NPE in Volume sync causing ssvm agent to not connect 
 -

 Key: CLOUDSTACK-7920
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7920
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0, 4.6.0
Reporter: Nitin Mehta
Assignee: Nitin Mehta
Priority: Critical
 Fix For: 4.6.0


 NPE in Volume sync causing ssvm agent to not connect 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8165) The systemvm template guest os is set to 32 bit debian by default instead of 64 bit

2015-01-18 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-8165:

Priority: Blocker  (was: Major)

 The systemvm template guest os is set to 32 bit debian by default instead of 
 64 bit
 ---

 Key: CLOUDSTACK-8165
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8165
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: SystemVM
Affects Versions: 4.5.0
Reporter: Abhinandan Prateek
Priority: Blocker
 Fix For: 4.5.0


 This query on a fresh db install 
 SELECT cloud.guest_os.display_name FROM cloud.vm_template, cloud.guest_os 
 where cloud.vm_template.guest_os_id=cloud.guest_os.id and 
 cloud.vm_template.name='SystemVM Template (XenServer)' 
 returns 'Debian GNU/Linux 7(32-bit)'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-8166) Usage data boundary condition and NPE

2015-01-18 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav resolved CLOUDSTACK-8166.
-
Resolution: Fixed

Related to fix 84c25f7025ca0eb22c8229d9e9cb7e987cbe on master.

 Usage data boundary condition and NPE
 -

 Key: CLOUDSTACK-8166
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8166
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0, 4.3.1, 4.3.2
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.5.0


 Usage parsers trim records to the start date but overshoot the end date.
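
A tiny sketch of the intended boundary handling (illustrative only, not the committed fix): clamp each usage record's interval to both ends of the aggregation window instead of only to its start.

    import java.util.Date;

    public class UsageWindowSketch {
        // Milliseconds of [recordStart, recordEnd] that fall inside [windowStart, windowEnd]:
        // the interval is clamped to the window's start AND its end, so nothing overshoots.
        static long usageMillisInWindow(Date recordStart, Date recordEnd,
                                        Date windowStart, Date windowEnd) {
            long start = Math.max(recordStart.getTime(), windowStart.getTime());
            long end = Math.min(recordEnd.getTime(), windowEnd.getTime());
            return Math.max(0L, end - start);
        }
    }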



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8088) VM scale up is failing in vmware with Unable to execute ScaleVmCommand due to java.lang.NullPointerException

2015-01-18 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281762#comment-14281762
 ] 

Rohit Yadav commented on CLOUDSTACK-8088:
-

any reason this should not be in 4.5?

 VM scale up is failing in vmware with Unable to execute ScaleVmCommand due to 
 java.lang.NullPointerException
 

 Key: CLOUDSTACK-8088
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8088
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Saksham Srivastava
Assignee: Saksham Srivastava

 setup
 ---
 Vcenter 5.5 (esxi-5.5) setup 
 steps to reproduce
 
 1-enable zone level setting enable.dynamic.scale.vm
 2-deploy a vm which is dynamically scalable
 3- try to scale up vm to medium SO (#cpu=1,cpu=1000,mem=1024)
 expected
 ---
 scaling up should be successful
 actual
 ---
 scaling up fails with the error:
 com.cloud.utils.exception.CloudRuntimeException: Unable to scale vm due to 
 Unable to execute ScaleVmCommand due to java.lang.NullPointerException
 During the scale VM command, the following line in VmwareResource.java is 
 executed, but no value is sent in the 'details' map for the 
 configuration parameter 'vmware.reserve.mem'; because of this, an NPE occurs.
 int getReservedMemoryMb(VirtualMachineTO vmSpec) {
     if (vmSpec.getDetails().get(VMwareGuru.VmwareReserveMemory.key()).equalsIgnoreCase("true")) { ...
 In VirtualMachineManagerImpl.java the ScaleVmCommand is prepared and the 
 VirtualMachineTO transfer object is created as follows, without 
 setting the 'details' map.
 public ScaleVmCommand(String vmName, int cpus, Integer minSpeed, Integer 
 maxSpeed, long minRam, long maxRam, boolean limitCpuUse) {
 .
 this.vm = new VirtualMachineTO(1L, vmName, null, cpus, minSpeed, maxSpeed, 
 minRam, maxRam, null, null, false, limitCpuUse, null);
 .
 We need to pass these configuration parameters (vmware.reserve.cpu, 
 vmware.reserve.mem) to VMwareResource.java using VirtualMachineTO.
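
A hedged sketch of a defensive lookup that would avoid the NPE (a hypothetical helper, not the actual fix; the reporter's proposed fix is to pass the configuration values through VirtualMachineTO):

    import java.util.Map;

    public class ReserveMemSketch {
        // Reads a boolean VM detail without an NPE when the details map or the key is missing.
        static boolean isMemoryReservationEnabled(Map<String, String> details, String key) {
            if (details == null) {
                return false; // no details were sent with the command
            }
            return Boolean.parseBoolean(details.get(key)); // parseBoolean(null) is simply false
        }
    }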



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-8167) CreateSnapshot publishes volume Id instead of UUId

2015-01-18 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav resolved CLOUDSTACK-8167.
-
Resolution: Fixed

Fixed on all branches: 4.3, 4.4, 4.5 and master.

 CreateSnapshot publishes volume Id instead of UUId
 --

 Key: CLOUDSTACK-8167
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8167
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0, 4.4.3, 4.3.2
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.5.0, 4.6.0


 CreateSnapshot should publish uuid on event bus



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8097) Failed to create snapshot from volume after vm live migration across clusters

2015-01-18 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-8097:

Fix Version/s: 4.5.0

 Failed to create snapshot from volume after vm live migration across clusters
 -

 Key: CLOUDSTACK-8097
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8097
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Snapshot, Storage Controller
Affects Versions: 4.5.0
Reporter: Sanjay Tripathi
Assignee: Sanjay Tripathi
Priority: Critical
 Fix For: 4.5.0, 4.6.0


 Failed to create volume snapshot after vm live migration across clusters
 Steps to reproduce:
 
 1. Bring up CS in an advanced zone with two Xen clusters and at least one host in 
 each cluster.
 2. Deploy one guest VM using the default CentOS template.
 3. Take a snapshot of the root disk.
 4. Now live migrate the VM to another cluster so that it is migrated along 
 with its storage.
 5. Take another snapshot of the root volume.
 Expected Behavior:
 ===
 Snapshot creation shall succeed and the snapshot shall be backed up to secondary storage.
 Actual Behavior:
 ==
 The snapshot is created on primary storage but fails to copy to secondary, so 
 snapshot creation fails with exceptions. 
 Observations:
 
 After live migrating a VM with storage, we do not change the volume id and 
 uuid in the volumes table. During VM live migration the volume remains in the 
 Migrating state, and after completion it comes back to the Ready state.
 1. When we created the snapshot of the root disk, the VM was in cluster1, so the 
 snapshot was created on cluster1's primary storage.
 2. After that we live migrated the VM to another cluster and created 
 another snapshot of the root disk. This time the snapshot was created on 
 cluster2's primary storage.
 Maybe this vhd file on this primary storage did not find any parent vhd file, 
 so the vhd scan might have deleted it.
 I could find the following lines in the XenServer SMlog file:
 Nov 21 04:58:13 localhost SMGC: [21457] Got sm-config for 
 *837da6ea(20.000G/112.262M): {'vhd-parent': 
 '745072fa-345a-4a7b-99a3-0c3789831146', 'vhd-blocks': 
 'eJxrYAAB0QcMKECEEUKzMAxNIAClWdD4AwUotV9EQFCCgVGRQQSHUSI47MNlL7o8vcJHVFBQqlFQkJFO1pEM6BUO9E6PtM7HA52/yAcAa1ED4w=='}
 Nov 21 04:58:13 localhost SMGC: [21457] Found 1 VDIs for deletion:
 Nov 21 04:58:13 localhost SMGC: [21457]   *49341e2f(20.000G/34.110M)
 Nov 21 04:58:13 localhost SMGC: [21457] Deleting unlinked VDI 
 *49341e2f(20.000G/34.110M)
 Nov 21 04:58:13 localhost SMGC: [21457] Checking with slave: 
 ('OpaqueRef:17fa526d-4f74-6d9
 I guess when CS tried to copy the snapshot to secondary storage it could not 
 find the vhd, hence it failed with an NPE.
 Please look for async job-35 in the attached MS log file.
 Following is the log snippet from xenserver SMlog file:
 ==
 Nov 21 04:57:07 localhost SM: [20944] vdi_snapshot {'sr_uuid': 
 '1f273ffb-3583-4a91-0c30-faa4951e7df8', 'subtask_of': 
 'DummyRef:|6bb04f6c-1a9a-950e-77cb-067eb7a29e3f|VDI.snapshot', 'vdi_ref': 
 'OpaqueRef:9583c77d-e9c5-144a-823c-599db4ba59f4', 'vdi_on_boot': 'persist', 
 'args': [], 'o_direct': False, 'vdi_location': 
 '1cf9e9f7-862e-4426-9416-584987b9cfbd', 'host_ref': 
 'OpaqueRef:4366ea2d-224b-fa03-65d6-a53c18b0c77f', 'session_ref': 
 'OpaqueRef:45e50d1a-37cf-aa1d-30ec-e77927acfab5', 'device_config': 
 {'SRmaster': 'true', 'serverpath': '/vol/export/902534-aauDvm', 'server': 
 '10.220.160.33'}, 'command': 'vdi_snapshot', 'vdi_allow_caching': 'false', 
 'sr_ref': 'OpaqueRef:c32edd0d-b0d7-7e28-9385-a4a6100f116b', 'driver_params': 
 {}, 'vdi_uuid': '1cf9e9f7-862e-4426-9416-584987b9cfbd'}
 Nov 21 04:57:07 localhost SM: [20944] Pause request for 
 1cf9e9f7-862e-4426-9416-584987b9cfbd
 Nov 21 04:57:07 localhost SM: [20944] Calling tap-pause on host 
 OpaqueRef:4366ea2d-224b-fa03-65d6-a53c18b0c77f
 Nov 21 04:57:07 localhost SM: [20955] lock: opening lock file 
 /var/lock/sm/1cf9e9f7-862e-4426-9416-584987b9cfbd/vdi
 Nov 21 04:57:07 localhost SM: [20955] lock: acquired 
 /var/lock/sm/1cf9e9f7-862e-4426-9416-584987b9cfbd/vdi
 Nov 21 04:57:07 localhost SM: [20955] Pause for 
 1cf9e9f7-862e-4426-9416-584987b9cfbd
 Nov 21 04:57:07 localhost SM: [20955] Calling tap pause with minor 1
 Nov 21 04:57:07 localhost SM: [20955] ['/usr/sbin/tap-ctl', 'pause', '-p', 
 '18621', '-m', '1']
 Nov 21 04:57:07 localhost SM: [20955]  = 0
 Nov 21 04:57:07 localhost SM: [20955] lock: released 
 /var/lock/sm/1cf9e9f7-862e-4426-9416-584987b9cfbd/vdi
 Nov 21 04:57:07 localhost SM: [20955] lock: closed 
 /var/lock/sm/1cf9e9f7-862e-4426-9416-584987b9cfbd/vdi
 Nov 21 04:57:07 localhost SM: [20944] FileVDI._snapshot for 
 

[jira] [Commented] (CLOUDSTACK-8097) Failed to create snapshot from volume after vm live migration across clusters

2015-01-18 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281780#comment-14281780
 ] 

Rohit Yadav commented on CLOUDSTACK-8097:
-

Looks like a necessary bugfix, any reason why this should not be on 4.5?

 Failed to create snapshot from volume after vm live migration across clusters
 -

 Key: CLOUDSTACK-8097
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8097
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Snapshot, Storage Controller
Affects Versions: 4.5.0
Reporter: Sanjay Tripathi
Assignee: Sanjay Tripathi
Priority: Critical
 Fix For: 4.5.0, 4.6.0


 Failed to create volume snapshot after vm live migration across clusters
 Steps to reproduce:
 
 1. Bring up CS in an advanced zone with two Xen clusters and at least one host in 
 each cluster.
 2. Deploy one guest VM using the default CentOS template.
 3. Take a snapshot of the root disk.
 4. Now live migrate the VM to another cluster so that it is migrated along 
 with its storage.
 5. Take another snapshot of the root volume.
 Expected Behavior:
 ===
 Snapshot creation shall succeed and the snapshot shall be backed up to secondary storage.
 Actual Behavior:
 ==
 The snapshot is created on primary storage but fails to copy to secondary, so 
 snapshot creation fails with exceptions. 
 Observations:
 
 After live migrating a VM with storage, we do not change the volume id and 
 uuid in the volumes table. During VM live migration the volume remains in the 
 Migrating state, and after completion it comes back to the Ready state.
 1. When we created the snapshot of the root disk, the VM was in cluster1, so the 
 snapshot was created on cluster1's primary storage.
 2. After that we live migrated the VM to another cluster and created 
 another snapshot of the root disk. This time the snapshot was created on 
 cluster2's primary storage.
 Maybe this vhd file on this primary storage did not find any parent vhd file, 
 so the vhd scan might have deleted it.
 I could find the following lines in the XenServer SMlog file:
 Nov 21 04:58:13 localhost SMGC: [21457] Got sm-config for 
 *837da6ea(20.000G/112.262M): {'vhd-parent': 
 '745072fa-345a-4a7b-99a3-0c3789831146', 'vhd-blocks': 
 'eJxrYAAB0QcMKECEEUKzMAxNIAClWdD4AwUotV9EQFCCgVGRQQSHUSI47MNlL7o8vcJHVFBQqlFQkJFO1pEM6BUO9E6PtM7HA52/yAcAa1ED4w=='}
 Nov 21 04:58:13 localhost SMGC: [21457] Found 1 VDIs for deletion:
 Nov 21 04:58:13 localhost SMGC: [21457]   *49341e2f(20.000G/34.110M)
 Nov 21 04:58:13 localhost SMGC: [21457] Deleting unlinked VDI 
 *49341e2f(20.000G/34.110M)
 Nov 21 04:58:13 localhost SMGC: [21457] Checking with slave: 
 ('OpaqueRef:17fa526d-4f74-6d9
 I guess when CS tried to copy the snapshot to secondary storage it could not 
 find the vhd, hence it failed with an NPE.
 Please look for async job-35 in the attached MS log file.
 Following is the log snippet from xenserver SMlog file:
 ==
 Nov 21 04:57:07 localhost SM: [20944] vdi_snapshot {'sr_uuid': 
 '1f273ffb-3583-4a91-0c30-faa4951e7df8', 'subtask_of': 
 'DummyRef:|6bb04f6c-1a9a-950e-77cb-067eb7a29e3f|VDI.snapshot', 'vdi_ref': 
 'OpaqueRef:9583c77d-e9c5-144a-823c-599db4ba59f4', 'vdi_on_boot': 'persist', 
 'args': [], 'o_direct': False, 'vdi_location': 
 '1cf9e9f7-862e-4426-9416-584987b9cfbd', 'host_ref': 
 'OpaqueRef:4366ea2d-224b-fa03-65d6-a53c18b0c77f', 'session_ref': 
 'OpaqueRef:45e50d1a-37cf-aa1d-30ec-e77927acfab5', 'device_config': 
 {'SRmaster': 'true', 'serverpath': '/vol/export/902534-aauDvm', 'server': 
 '10.220.160.33'}, 'command': 'vdi_snapshot', 'vdi_allow_caching': 'false', 
 'sr_ref': 'OpaqueRef:c32edd0d-b0d7-7e28-9385-a4a6100f116b', 'driver_params': 
 {}, 'vdi_uuid': '1cf9e9f7-862e-4426-9416-584987b9cfbd'}
 Nov 21 04:57:07 localhost SM: [20944] Pause request for 
 1cf9e9f7-862e-4426-9416-584987b9cfbd
 Nov 21 04:57:07 localhost SM: [20944] Calling tap-pause on host 
 OpaqueRef:4366ea2d-224b-fa03-65d6-a53c18b0c77f
 Nov 21 04:57:07 localhost SM: [20955] lock: opening lock file 
 /var/lock/sm/1cf9e9f7-862e-4426-9416-584987b9cfbd/vdi
 Nov 21 04:57:07 localhost SM: [20955] lock: acquired 
 /var/lock/sm/1cf9e9f7-862e-4426-9416-584987b9cfbd/vdi
 Nov 21 04:57:07 localhost SM: [20955] Pause for 
 1cf9e9f7-862e-4426-9416-584987b9cfbd
 Nov 21 04:57:07 localhost SM: [20955] Calling tap pause with minor 1
 Nov 21 04:57:07 localhost SM: [20955] ['/usr/sbin/tap-ctl', 'pause', '-p', 
 '18621', '-m', '1']
 Nov 21 04:57:07 localhost SM: [20955]  = 0
 Nov 21 04:57:07 localhost SM: [20955] lock: released 
 /var/lock/sm/1cf9e9f7-862e-4426-9416-584987b9cfbd/vdi
 Nov 21 04:57:07 localhost SM: [20955] lock: closed 
 

[jira] [Commented] (CLOUDSTACK-8066) There is not way to know the size of the snapshot created

2015-01-18 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281759#comment-14281759
 ] 

Rohit Yadav commented on CLOUDSTACK-8066:
-

Just going through the list of issues that affect 4.5.0; any reason why this is 
not on 4.5?

 There is not way to know the size of the snapshot created
 -

 Key: CLOUDSTACK-8066
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8066
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
Affects Versions: 4.5.0
Reporter: Gaurav Aradhye
Assignee: Sanjay Tripathi
  Labels: automation
 Fix For: 4.6.0


 Neither the listSnapshots command nor the createSnapshot command returns the 
 size of the snapshot.
 There is no way to know the size of the snapshot and hence to count it against 
 the secondary storage limits.
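 One possible shape for the fix, shown only as a hedged sketch: SnapshotResponse 
 below is a stand-in class and physicalSize an assumed field name, not the actual 
 CloudStack API object.

    // Illustrative DTO showing how a size field could be surfaced by
    // listSnapshots/createSnapshot so callers can account the snapshot
    // against the secondary storage limits.
    public class SnapshotResponse {
        private String id;
        private String name;
        // Physical size of the snapshot on secondary storage, in bytes.
        private Long physicalSize;

        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public Long getPhysicalSize() { return physicalSize; }
        public void setPhysicalSize(Long physicalSize) { this.physicalSize = physicalSize; }
    }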



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-7920) NPE in Volume sync causing ssvm agent to not connect

2015-01-18 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-7920:

Affects Version/s: 4.5.0

 NPE in Volume sync causing ssvm agent to not connect 
 -

 Key: CLOUDSTACK-7920
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7920
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0, 4.6.0
Reporter: Nitin Mehta
Assignee: Nitin Mehta
Priority: Critical
 Fix For: 4.6.0


 NPE in Volume sync causing ssvm agent to not connect 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8129) [VMware] Cold migration of VM across VMware DCs leaves the VM behind in the source host.

2015-01-18 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281791#comment-14281791
 ] 

Rohit Yadav commented on CLOUDSTACK-8129:
-

[~likithas] since the fix version is 4.5, can you cherry-pick the fix onto 4.5? 
It's not applying cleanly, so I'm not doing it myself.

 [VMware] Cold migration of VM across VMware DCs leaves the VM behind in the 
 source host.
 

 Key: CLOUDSTACK-8129
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8129
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Likitha Shetty
Assignee: Likitha Shetty
Priority: Critical
 Fix For: Future


 1. Have an upgraded VMware setup with a legacy zone (Legacy zone is a zone 
 where a CS zone has clusters that are spread across different VMware DCs). So 
 ensure there are two clusters say C1 and C2 that belong to VMware DCs D1 and 
 D2 respectively.
 2. Deploy a VM. Say it is in C1.
 3. Stop the VM.
 4. Migrate the VM to a storage in C2.
 5. Start the VM.
 Observations - The VM will be successfully started on a host in C2, but a 
 stale entry of the VM will be left in C1. Whenever VM sync kicks in, it 
 stops and starts the VM repeatedly because there are two VMs with the same 
 name in vCenter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7219) Cannot display Cluster Settings after 4.4 Upgrade

2015-01-15 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279038#comment-14279038
 ] 

Rohit Yadav commented on CLOUDSTACK-7219:
-

I'm not sure what 6023 has to do with this issue, but if you have a good fix 
please proceed, thanks.

 Cannot display Cluster Settings after 4.4 Upgrade
 -

 Key: CLOUDSTACK-7219
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7219
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: KVM, Management Server
Affects Versions: 4.4.0
 Environment: CentOS 6, KVM Hypervisor
Reporter: Prieur Leary
Priority: Blocker
 Fix For: 4.4.1

 Attachments: cluster.png


 After upgrading the MS to 4.4, when you go to:
 Home - Infrastructure - Clusters - Cluster 1 - Settings
 it does not display the settings underneath the column headings, just ERROR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8151) Global config and periodic thread to cleanup cloud_usage table

2015-01-13 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275545#comment-14275545
 ] 

Rohit Yadav commented on CLOUDSTACK-8151:
-

Hi [~sadhu], I thought about this too and shared the idea of providing explicit 
cleanup triggered by an API call (that is, instead of a global config and a 
background cleanup thread, users call an API to trigger the cleanup). The API 
option defeats the purpose of cleaning things up automatically, but I think it 
could be saner than CloudStack removing entries from the tables on its own.

 Global config and periodic thread to cleanup cloud_usage table
 --

 Key: CLOUDSTACK-8151
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8151
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0, 4.6.0
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.6.0


 The cloud_usage table can be very big, and once the raw data is processed we 
 can safely remove the rows to reduce database size. There is a KB on this: 
 http://support.citrix.com/article/CTX139043
 The aim is to create a global config (delete records older than a specified 
 number of days) and an API (if necessary) to trigger cleanup of old entries.
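 A rough sketch of what such a periodic cleanup could look like; this is 
 illustrative only, not the actual implementation, and the JDBC URL, credentials, 
 retention value, table name and end_date column are all assumptions:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Timestamp;
    import java.time.Instant;
    import java.time.temporal.ChronoUnit;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class UsageCleanupTask {
        // Would come from the proposed global config, e.g. a "retention days" setting.
        private static final int RETENTION_DAYS = 365;

        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Run once a day; the interval would also be configurable.
            scheduler.scheduleAtFixedRate(UsageCleanupTask::purgeOldRecords, 0, 24, TimeUnit.HOURS);
        }

        static void purgeOldRecords() {
            // Delete processed usage rows that ended before the retention cutoff.
            String sql = "DELETE FROM cloud_usage.cloud_usage WHERE end_date < ?";
            Timestamp cutoff = Timestamp.from(Instant.now().minus(RETENTION_DAYS, ChronoUnit.DAYS));
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost/cloud_usage", "cloud", "secret");
                 PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setTimestamp(1, cutoff);
                int deleted = stmt.executeUpdate();
                System.out.println("Purged " + deleted + " usage rows older than "
                        + RETENTION_DAYS + " days");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }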



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CLOUDSTACK-8150) No MySQL-HA package in debian builds

2015-01-13 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav reassigned CLOUDSTACK-8150:
---

Assignee: (was: Rohit Yadav)

 No MySQL-HA package in debian builds
 

 Key: CLOUDSTACK-8150
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8150
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.5.0
Reporter: Rohit Yadav
Priority: Critical
 Fix For: 4.5.0, 4.6.0


 There is a mysql-ha package in the RPM builds but not in the Debian builds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8034) SAML Unique ID is restricted to 40 chars only

2015-01-12 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-8034:

Priority: Critical  (was: Major)

 SAML Unique ID is restricted to 40 chars only
 -

 Key: CLOUDSTACK-8034
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8034
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Critical
 Fix For: 4.5.0, 4.6.0


 Fix for cases where the SAML unique ID returned by the IDP is longer than 40 
 chars. What should the ideal fix look like?
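 One possible approach, sketched purely as an option rather than the agreed fix: 
 map over-long IDP identifiers to a fixed-length digest that fits the 40-char 
 column, e.g. a SHA-1 hex string (the class and method names below are illustrative):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class SamlUidMapper {
        // Collapse any IDP-supplied unique ID longer than 40 chars to a 40-char
        // SHA-1 hex digest so it fits the existing column; shorter IDs pass through.
        public static String toStoredUid(String idpUniqueId) throws Exception {
            if (idpUniqueId.length() <= 40) {
                return idpUniqueId;
            }
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            byte[] digest = sha1.digest(idpUniqueId.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(toStoredUid(
                "a-very-long-persistent-nameid-from-the-idp-that-exceeds-forty-characters"));
        }
    }

 The obvious alternative is simply widening the column; hashing keeps the schema 
 unchanged at the cost of losing the original value.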



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8035) SAML SP metadata changes with every CloudStack restart

2015-01-12 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-8035:

Priority: Critical  (was: Major)

 SAML SP metadata changes with every CloudStack restart
 --

 Key: CLOUDSTACK-8035
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8035
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Critical
 Fix For: 4.5.0, 4.6.0


 The getSPMetadata API uses the private key to generate public keys every time 
 CloudStack restarts. This is a non-issue, as SAML tokens checked by previously 
 generated public keys are still validated by the same private key, but we need 
 to store the keys in the DB and not re-create them every time the mgmt server 
 restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8035) SAML SP metadata changes with every CloudStack restart

2015-01-12 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273347#comment-14273347
 ] 

Rohit Yadav commented on CLOUDSTACK-8035:
-

SAML SP metadata is not stable across server restarts. This is a potential 
security issue, so I'm changing the status to critical. The fix would be to save 
the first generated public key to the database.
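A rough illustration of the intended behaviour (not the plugin's actual code; the 
SpKeyStore class and the properties file below merely stand in for persisting the 
generated key pair in the management server database):

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.security.KeyFactory;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.PrivateKey;
    import java.security.PublicKey;
    import java.security.spec.PKCS8EncodedKeySpec;
    import java.security.spec.X509EncodedKeySpec;
    import java.util.Base64;
    import java.util.Properties;

    public class SpKeyStore {
        // Load the SP key pair persisted on a previous start, or generate and
        // persist one, so the SP metadata stays identical across restarts.
        public static KeyPair loadOrCreate(Path store) throws Exception {
            Properties props = new Properties();
            if (Files.exists(store)) {
                try (InputStream in = Files.newInputStream(store)) {
                    props.load(in);
                }
                KeyFactory kf = KeyFactory.getInstance("RSA");
                PublicKey pub = kf.generatePublic(new X509EncodedKeySpec(
                        Base64.getDecoder().decode(props.getProperty("public"))));
                PrivateKey priv = kf.generatePrivate(new PKCS8EncodedKeySpec(
                        Base64.getDecoder().decode(props.getProperty("private"))));
                return new KeyPair(pub, priv);
            }
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair pair = gen.generateKeyPair();
            props.setProperty("public", Base64.getEncoder().encodeToString(pair.getPublic().getEncoded()));
            props.setProperty("private", Base64.getEncoder().encodeToString(pair.getPrivate().getEncoded()));
            try (OutputStream out = Files.newOutputStream(store)) {
                props.store(out, "SP key pair (stand-in for DB persistence)");
            }
            return pair;
        }

        public static void main(String[] args) throws Exception {
            KeyPair pair = loadOrCreate(Paths.get("sp-keystore.properties"));
            System.out.println("SP public key encoded length: " + pair.getPublic().getEncoded().length);
        }
    }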

 SAML SP metadata changes with every CloudStack restart
 --

 Key: CLOUDSTACK-8035
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8035
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Rohit Yadav
Assignee: Rohit Yadav
Priority: Critical
 Fix For: 4.5.0, 4.6.0


 The getSPMetadata API uses the private key to generate public keys every time 
 CloudStack restarts. This is a non-issue, as SAML tokens checked by previously 
 generated public keys are still validated by the same private key, but we need 
 to store the keys in the DB and not re-create them every time the mgmt server 
 restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8036) SAML plugin provides no way to save IDP metadata

2015-01-12 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-8036:

Issue Type: Improvement  (was: Bug)

 SAML plugin provides no way to save IDP metadata
 

 Key: CLOUDSTACK-8036
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8036
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.6.0


 The SAML plugin uses a global setting to dynamically fetch the IDP metadata from 
 a URL and provides no way to set/save the metadata as a file or in the DB.
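 A small sketch of the behaviour being asked for (illustrative only; the 
 IdpMetadataLoader class and its arguments are assumptions, not the plugin's real 
 loading code):

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class IdpMetadataLoader {
        // Prefer a locally saved copy of the IDP metadata (a file here, or a DB
        // row in the real fix) and only fall back to fetching the configured URL.
        public static String load(String savedFilePath, String metadataUrl) throws Exception {
            if (savedFilePath != null) {
                Path p = Paths.get(savedFilePath);
                if (Files.exists(p)) {
                    return new String(Files.readAllBytes(p), StandardCharsets.UTF_8);
                }
            }
            try (InputStream in = new URL(metadataUrl).openStream();
                 ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                return new String(out.toByteArray(), StandardCharsets.UTF_8);
            }
        }
    }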



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8036) SAML plugin provides no way to save IDP metadata

2015-01-12 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav updated CLOUDSTACK-8036:

Fix Version/s: (was: 4.5.0)

 SAML plugin provides no way to save IDP metadata
 

 Key: CLOUDSTACK-8036
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8036
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.6.0


 The SAML plugin uses a global setting to dynamically fetch the IDP metadata from 
 a URL and provides no way to set/save the metadata as a file or in the DB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8146) Resource count of primary storage does not consider the detached volumes

2015-01-12 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273500#comment-14273500
 ] 

Rohit Yadav commented on CLOUDSTACK-8146:
-

Should this be fixed on 4.5 branch as well?

 Resource count of primary storage does not consider the detached volumes
 

 Key: CLOUDSTACK-8146
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8146
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Wei Zhou
Assignee: Wei Zhou
 Fix For: 4.5.0, 4.6.0


 The resource count of primary storage only counts the volumes attached to 
 VMs, but does not consider the detached volumes.
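 The gist of the fix, as a hedged sketch only (the Volume class and fields below 
 are stand-ins, not the real CloudStack DAOs):

    import java.util.Arrays;
    import java.util.List;

    public class PrimaryStorageCount {
        static class Volume {
            final long sizeBytes;
            final Long attachedVmId;   // null means the volume is detached
            final boolean destroyed;
            Volume(long sizeBytes, Long attachedVmId, boolean destroyed) {
                this.sizeBytes = sizeBytes;
                this.attachedVmId = attachedVmId;
                this.destroyed = destroyed;
            }
        }

        // Count every non-destroyed volume on primary storage, whether or not it
        // is currently attached to a VM.
        static long primaryStorageUsed(List<Volume> volumes) {
            long total = 0;
            for (Volume v : volumes) {
                if (!v.destroyed) {
                    total += v.sizeBytes;
                }
            }
            return total;
        }

        public static void main(String[] args) {
            List<Volume> vols = Arrays.asList(
                    new Volume(20L << 30, 1L, false),     // attached root disk
                    new Volume(50L << 30, null, false),   // detached data disk, previously ignored
                    new Volume(10L << 30, null, true));   // destroyed volume, excluded
            System.out.println("Primary storage used (GiB): " + (primaryStorageUsed(vols) >> 30));
        }
    }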



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-6667) Can't provision a site-to-site VPN with multiple CIDRs

2015-01-12 Thread Rohit Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273667#comment-14273667
 ] 

Rohit Yadav commented on CLOUDSTACK-6667:
-

[~paulangus] ping

 Can't provision a site-to-site VPN with multiple CIDRs
 --

 Key: CLOUDSTACK-6667
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6667
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: Future, 4.2.1
 Environment: CloudStack 4.2.1
Reporter: Paul Angus
Assignee: Rohit Yadav
Priority: Critical

 When attempting to create a site-to-site VPN with multiple remote CIDRs, i.e.:
 172.1.0.0/16,172.11.0.0/16
 CloudStack reports that 172.1.0.0/16,172.11.0.0/16 is an invalid CIDR.
 The CloudStack code does not appear to be enumerating the string as two 
 comma-separated CIDRs.
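 A minimal sketch of the expected handling (illustrative only, with a deliberately 
 simplified CIDR pattern; this is not the management server's actual validation code):

    import java.util.regex.Pattern;

    public class CidrListValidator {
        // Loose pattern for illustration; it does not range-check the octets or prefix.
        private static final Pattern CIDR = Pattern.compile(
                "^(\\d{1,3})(\\.\\d{1,3}){3}/\\d{1,2}$");

        // Treat the peer CIDR field as a comma-separated list and validate each
        // entry on its own instead of rejecting the whole string.
        public static boolean isValidCidrList(String cidrList) {
            for (String cidr : cidrList.split(",")) {
                if (!CIDR.matcher(cidr.trim()).matches()) {
                    return false;
                }
            }
            return true;
        }

        public static void main(String[] args) {
            System.out.println(isValidCidrList("172.1.0.0/16,172.11.0.0/16"));  // true
            System.out.println(isValidCidrList("172.1.0.0/16,not-a-cidr"));     // false
        }
    }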



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

