[jira] [Commented] (CLOUDSTACK-4565) [doc] Review Comments on Networking Sections

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754409#comment-13754409
 ] 

ASF subversion and git services commented on CLOUDSTACK-4565:
-

Commit 682b57e724b362aa307b0dee7a92efc40bb3221f in branch refs/heads/4.2 from 
[~radhikap]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=682b57e ]

CLOUDSTACK-4565 review comments on network section has been fixed


 [doc] Review Comments on Networking Sections
 

 Key: CLOUDSTACK-4565
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4565
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Reporter: Radhika Nair
Assignee: Radhika Nair
 Fix For: 4.2.0


 15.1
 The Management Server automatically creates a virtual router for each 
 network. A virtual router is a special virtual machine that runs on the 
 hosts. Each virtual router has three network interfaces. Its
 eth0 interface serves as the gateway for the guest traffic and has the IP 
 address of 10.1.1.1. Its eth1 interface is used by the system to configure 
 the virtual router. Its eth2 interface is assigned a public IP address for 
 public traffic.
 Typically each virtual router running in the isolated network has three 
 network interfaces.
 (because if we have multiple public VLANs, we would have multiple public
 interfaces; and VPC is another story). A sketch of the interface roles follows
 this list.
 15.5.3.10
 Isolated VLAN ID: The unique ID of the Secondary Isolated VLAN (only
 applicable to a Private VLAN setup).
 15.6
 • A prompt is displayed asking whether you want to keep the existing CIDR. 
 This is to let you know that if you change the network offering, the CIDR 
 will be affected. Choose No to proceed with the change.
 This description is wrong.
 If the user upgrades between VR as the provider and external network devices
 as the provider, he/she must acknowledge the change of CIDR to continue, and
 thus choose Yes here.
 15.14.3
 Remove IPv6 CIDR part, it's not supported in PVLAN.
 15.25.
 Please differentiate Remote Access VPN and Site-to-Site VPN. Site-to-Site VPN 
 has nothing to do with L2TP-over-IPSec.
 --Sheng
 From Chandan:
 Default behavior of ACL is NOT as documented. Default behavior is that all the
 incoming and outgoing traffic to the tiers is blocked. Check the FS.
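 As a quick illustration of the 15.1 point above, a hedged sketch of how the
 three router NICs could be inspected from the virtual router console; the
 commands are plain iproute2, and the role comments only restate the text above
 (10.1.1.1 is the example address from the doc, not guaranteed output).
 # Sketch only: run on the virtual router console to see the three NICs.
 ip addr show eth0   # guest gateway, e.g. 10.1.1.1
 ip addr show eth1   # interface the system uses to configure the router
 ip addr show eth2   # public interface holding the assigned public IP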

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3411) [Portable IP] Update dashboard to display portable IP information

2013-08-30 Thread Animesh Chaturvedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Animesh Chaturvedi updated CLOUDSTACK-3411:
---

Issue Type: Improvement  (was: Bug)

 [Portable IP] Update dashboard to display portable IP information
 -

 Key: CLOUDSTACK-3411
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3411
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.2.0
 Environment: commit # 67cab313c969e5f488d6c0f92f9ec058288a96a0
Reporter: venkata swamybabu budumuru
 Fix For: 4.2.0

 Attachments: logs.tgz, Screen Shot 2013-07-09 at 12.47.59 PM.png


 Steps to reproduce:
 1. Have the latest CloudStack setup with at least 2 advanced zones.
 2. Go to Dashboard -> click on any resource -> Zone details.
 Observations:
 (i) This page doesn't display any capacity information for Portable IPs.
 Attaching the screenshots for the same.
 Attaching all the required logs along with the db dump.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3398) [UI] Enhance traffic label configuration in UI to specify switch name, vlan id and switch type with VMWARE hypervisor

2013-08-30 Thread Animesh Chaturvedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Animesh Chaturvedi updated CLOUDSTACK-3398:
---

Issue Type: Improvement  (was: Bug)

 [UI] Enhance traffic label configuration in UI to specify switch name, vlan 
 id and switch type with VMWARE hypervisor
 -

 Key: CLOUDSTACK-3398
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3398
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.2.0
Reporter: Sailaja Mada
 Fix For: 4.2.0


 Setup: VMWARE
 Observation:
 Use case: Two physical networks
 a) Management, Storage traffic with Standard Switch on Physical network 1
 b) Public, Guest traffic with DVSwitch on Physical network 2
 c) Currently it is a text field where the Admin has to manually provide the
 pattern as switch name, vlan id and switch type,
 e.g. dvs3,,vmwaredvs (if there is no VLAN Id); see the sketch below.
 This defect is to enhance the traffic label configuration in the UI to specify
 switch name, vlan id and switch type with the VMWARE hypervisor.
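 For reference, a hedged sketch of that comma-separated label pattern; the
 switch names and VLAN IDs below are invented, and the third field uses the
 VMware switch-type keywords as I understand them (vmwaresvs for a standard
 vSwitch, vmwaredvs for a distributed vSwitch).
 # Illustrative traffic labels only; adjust names, VLANs and types to your setup.
 PUBLIC_LABEL="dvs3,,vmwaredvs"      # distributed vSwitch, no VLAN id
 GUEST_LABEL="dvs3,200,vmwaredvs"    # distributed vSwitch, VLAN 200
 MGMT_LABEL="vSwitch0,,vmwaresvs"    # standard vSwitch, no VLAN id
 echo "$PUBLIC_LABEL $GUEST_LABEL $MGMT_LABEL"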

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4565) [doc] Review Comments on Networking Sections

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754417#comment-13754417
 ] 

ASF subversion and git services commented on CLOUDSTACK-4565:
-

Commit 2324be9410fe06b4599268051af9af08749ef681 in branch 
refs/heads/4.2-forward from [~radhikap]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=2324be9 ]

CLOUDSTACK-4565 review comments on network section has been fixed


 [doc] Review Comments on Networking Sections
 

 Key: CLOUDSTACK-4565
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4565
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Reporter: Radhika Nair
Assignee: Radhika Nair
 Fix For: 4.2.0


 15.1
 The Management Server automatically creates a virtual router for each 
 network. A virtual router is a special virtual machine that runs on the 
 hosts. Each virtual router has three network interfaces. Its
 eth0 interface serves as the gateway for the guest traffic and has the IP 
 address of 10.1.1.1. Its eth1 interface is used by the system to configure 
 the virtual router. Its eth2 interface is assigned a public IP address for 
 public traffic.
 Typically each virtual router running in the isolated network has three 
 network interfaces.
 (because if we have multiple public VLANs, we would have multiple public
 interfaces; and VPC is another story).
 15.5.3.10
 Isolated VLAN ID: The unique ID of the Secondary Isolated VLAN (only
 applicable to a Private VLAN setup).
 15.6
 • A prompt is displayed asking whether you want to keep the existing CIDR. 
 This is to let you know that if you change the network offering, the CIDR 
 will be affected. Choose No to proceed with the change.
 This description is wrong.
 If the user upgrades between VR as the provider and external network devices
 as the provider, he/she must acknowledge the change of CIDR to continue, and
 thus choose Yes here.
 15.14.3
 Remove IPv6 CIDR part, it's not supported in PVLAN.
 15.25.
 Please differentiate Remote Access VPN and Site-to-Site VPN. Site-to-Site VPN 
 has nothing to do with L2TP-over-IPSec.
 --Sheng
 From Chandan:
 Default behavior of ACL is NOT as documented. Default behavior is that all the
 incoming and outgoing traffic to the tiers is blocked. Check the FS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (CLOUDSTACK-3434) Parallel deployment - Xenserver - When deploying 30 Vms in parallel/starting 30 VMs in parallel, Vms failed because of failing in SavePasswordCommand/Dhcp Entry.

2013-08-30 Thread shweta agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shweta agarwal closed CLOUDSTACK-3434.
--


Verified. Passed.

 Parallel deployment - Xenserver - When deploying 30 Vms in parallel/starting 
 30 VMs in parallel, Vms failed because of failing in 
 SavePasswordCommand/Dhcp Entry. 
 --

 Key: CLOUDSTACK-3434
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3434
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.2.0
 Environment: Build from 4.2
Reporter: Sangeetha Hariharan
Assignee: Sheng Yang
Priority: Blocker
 Fix For: 4.2.0

 Attachments: parallel-run.rar, xenparallel.rar


 Parallel deployment - Xenserver - When deploying 30 Vms in parallel/starting 
 30 VMs in parallel, Vms failed because of failing in 
 SavePasswordCommand/Dhcp Entry.
 Steps to reproduce the problem:
 Advanced zone set up with a Xenserver host.
 Deploy 30 VMs in parallel.
 Out of 30 VMs, 1 VM failed to start successfully and is in Error state
 because of timing out on the SavePasswordCommand.
 Following is the snippet from management server logs:
 2013-07-09 16:30:28,895 DEBUG [cloud.network.NetworkModelImpl] 
 (Job-Executor-35:job-35) Service SecurityGroup is not supported in the 
 network id=204
 2013-07-09 16:30:28,898 DEBUG 
 [network.router.VirtualNetworkApplianceManagerImpl] (Job-Executor-35:job-35) 
 Applying userdata and password entry in network Ntwk[204|Guest|8]
 2013-07-09 16:30:28,910 DEBUG [agent.transport.Request] 
 (Job-Executor-35:job-35) Seq 1-729350387: Sending  { Cmd , MgmtId: 
 7200344900649, via: 1, Ver: v1, Flags: 100011, [{com.clou
 d.agent.api.routing.SavePasswordCommand:{password:fnirq_cnffjbeq,vmIpAddress:10.1.1.148,vmName:hello-14,executeInSequence:false,accessDetails:{router.guest.ip:10
 .1.1.1,zone.network.type:Advanced,router.ip:169.254.3.54,router.name:r-4-VM},wait:0}},{com.cloud.agent.api.routing.VmDataCommand:{vmIpAddress:10.1.1.148,vmName
 :hello-14,executeInSequence:false,accessDetails:{router.guest.ip:10.1.1.1,zone.network.type:Advanced,router.ip:169.254.3.54,router.name:r-4-VM},wait:0}}]
  }
 2013-07-09 16:30:28,911 DEBUG [agent.transport.Request] 
 (Job-Executor-35:job-35) Seq 1-729350387: Executing:  { Cmd , MgmtId: 
 7200344900649, via: 1, Ver: v1, Flags: 100011, [{com.c
 loud.agent.api.routing.SavePasswordCommand:{password:fnirq_cnffjbeq,vmIpAddress:10.1.1.148,vmName:hello-14,executeInSequence:false,accessDetails:{router.guest.ip:
 10.1.1.1,zone.network.type:Advanced,router.ip:169.254.3.54,router.name:r-4-VM},wait:0}},{com.cloud.agent.api.routing.VmDataCommand:{vmIpAddress:10.1.1.148,vmN
 ame:hello-14,executeInSequence:false,accessDetails:{router.guest.ip:10.1.1.1,zone.network.type:Advanced,router.ip:169.254.3.54,router.name:r-4-VM},wait:0}}]
  }
 ..
 2013-07-09 16:30:32,224 DEBUG [agent.manager.DirectAgentAttache] 
 (DirectAgent-105:null) Seq 1-729350387: Cancelling because one of the answers 
 is false and it is stop on error.
 2013-07-09 16:30:32,224 DEBUG [agent.manager.DirectAgentAttache] 
 (DirectAgent-105:null) Seq 1-729350387: Response Received:
 2013-07-09 16:30:32,225 DEBUG [agent.transport.Request] 
 (DirectAgent-105:null) Seq 1-729350387: Processing:  { Ans: , MgmtId: 
 7200344900649, via: 1, Ver: v1, Flags: 10, [{com.cloud
 .agent.api.Answer:{result:false,details:savePassword 
 failed,wait:0}}] }
 2013-07-09 16:30:32,225 DEBUG [agent.transport.Request] 
 (Job-Executor-35:job-35) Seq 1-729350387: Received:  { Ans: , MgmtId: 
 7200344900649, via: 1, Ver: v1, Flags: 10, { Answer } }
 2013-07-09 16:30:32,225 INFO  [cloud.vm.VirtualMachineManagerImpl] 
 (Job-Executor-35:job-35) Unable to contact resource.
 com.cloud.exception.ResourceUnavailableException: Resource [DataCenter:1] is 
 unreachable: Unable to apply userdata and password entry on router
 at 
 com.cloud.network.router.VirtualNetworkApplianceManagerImpl.applyRules(VirtualNetworkApplianceManagerImpl.java:3784)
 at 
 com.cloud.network.router.VirtualNetworkApplianceManagerImpl.applyUserData(VirtualNetworkApplianceManagerImpl.java:2977)
 at 
 com.cloud.network.element.VirtualRouterElement.addPasswordAndUserdata(VirtualRouterElement.java:944)
 at 
 com.cloud.network.NetworkManagerImpl.prepareElement(NetworkManagerImpl.java:2006)
 at 
 com.cloud.network.NetworkManagerImpl.prepareNic(NetworkManagerImpl.java:2112)
 at 
 com.cloud.network.NetworkManagerImpl.prepare(NetworkManagerImpl.java:2053)
 at 
 

[jira] [Updated] (CLOUDSTACK-4512) [VMWARE] Deployment of User VM Fails on the ESXi host due to CPU Resources unavailability

2013-08-30 Thread Animesh Chaturvedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Animesh Chaturvedi updated CLOUDSTACK-4512:
---

Issue Type: Improvement  (was: Bug)

 [VMWARE] Deployment of User VM Fails on the ESXi host due to CPU Resources 
 unavailability
 -

 Key: CLOUDSTACK-4512
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4512
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server, VMware
Affects Versions: 4.2.0
Reporter: Chandan Purushothama
 Fix For: 4.2.1

 Attachments: hostd.zip, management-server.zip


 
 Steps to Reproduce:
 
 1. Deploy a VM using the default CentOS Template.
 ===
 Observation:
 ===
 Observe that the error complains about CPU Resources on the ESXi host, while 
 the host has more than enough CPU Resources to service the VM.
 On the Management Server Log:
 2013-08-26 16:01:51,808 WARN  [vmware.resource.VmwareResource] 
 (DirectAgent-151:10.223.57.66) StartCommand failed due to Exception: 
 java.lang.RuntimeException
 Message: The available CPU resources in the parent resource pool are 
 insufficient for the operation.
 java.lang.RuntimeException: The available CPU resources in the parent 
 resource pool are insufficient for the operation.
 at 
 com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:378)
 at 
 com.cloud.hypervisor.vmware.mo.VirtualMachineMO.powerOn(VirtualMachineMO.java:188)
 at 
 com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareResource.java:3099)
 at 
 com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:514)
 at 
 com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.java:186)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 at java.util.concurrent.FutureTask.run(FutureTask.java:166)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:679)
 2013-08-26 16:01:51,812 DEBUG [agent.manager.DirectAgentAttache] 
 (DirectAgent-151:null) Seq 5-746390325: Response Received:
 2013-08-26 16:01:51,814 DEBUG [agent.transport.Request] 
 (DirectAgent-151:null) Seq 5-746390325: Processing:  { Ans: , MgmtId: 
 7471666038533, via: 5, Ver: v1, Flags: 10, 
 [{com.cloud.agent.api.StartAnswer:{vm:{id:27,name:i-9-27-VMWARERETEST,bootloader:HVM,type:User,cpus:1,minSpeed:500,maxSpeed:500,minRam:536870912,maxRam:536870912,hostName:Boron-VM-1,arch:x86_64,os:CentOS
  5.3 
 (64-bit),bootArgs:,rebootOnCrash:false,enableHA:false,limitCpuUse:false,enableDynamicallyScaleVm:false,vncPassword:f12649cca76b879d,params:{rootDiskController:ide,nicAdapter:E1000,nestedVirtualizationFlag:false},uuid:79201226-48cf-43d9-9d20-50a3a8d4c7aa,disks:[{data:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid:d0467aeb-0774-4aaa-99cd-80a4843d7ffd,volumeType:ROOT,dataStore:{org.apache.cloudstack.storage.to.PrimaryDataStoreTO:{uuid:96bc10ee-70b3-3d20-a60c-c068a024b3a7,id:201,poolType:NetworkFilesystem,host:10.223.110.232,path:/export/home/chandan/307PB-195-103/primary2,port:2049}},name:ROOT-27,size:2147483648,path:ROOT-27,volumeId:38,vmName:i-9-27-VMWARERETEST,accountId:9,format:OVA,id:38,hypervisorType:VMware}},diskSeq:0,type:ROOT},{data:{org.apache.cloudstack.storage.to.TemplateObjectTO:{id:0,format:ISO,accountId:0,hvm:false}},diskSeq:3,type:ISO}],nics:[{deviceId:0,networkRateMbps:200,defaultNic:true,uuid:2ce1f4ba-2420-4060-81bd-da30ef53fcd9,ip:10.1.1.157,netmask:255.255.255.0,gateway:10.1.1.1,mac:02:00:74:1d:00:01,dns1:8.8.8.8,dns2:8.8.4.4,broadcastType:Vlan,type:Guest,broadcastUri:vlan://2600,isolationUri:vlan://2600,isSecurityGroupEnabled:false}]},result:false,details:StartCommand
  failed due to Exception: java.lang.RuntimeException\nMessage: The available 
 CPU resources in the parent resource pool are insufficient for the 
 operation.\n,wait:0}}] }
 2013-08-26 16:01:51,814 DEBUG [agent.transport.Request] 
 (Job-Executor-25:job-125 = [ bb63450c-35de-4f1f-b81b-1ac3482f ]) Seq 
 5-746390325: Received:  { Ans: , MgmtId: 7471666038533, via: 5, Ver: v1, 
 Flags: 10, { StartAnswer } }
 

[jira] [Created] (CLOUDSTACK-4566) Incorrect values in resource_count table for resource limitation

2013-08-30 Thread Wei Zhou (JIRA)
Wei Zhou created CLOUDSTACK-4566:


 Summary: Incorrect values in resource_count table for resource 
limitation
 Key: CLOUDSTACK-4566
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4566
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Wei Zhou
Assignee: Wei Zhou


Until now, I have found three issues in the resource_count table (a spot-check
sketch follows the list):

(1) expunging a VM, the public_ip count decreases and becomes -1 in a basic zone.
(2) recovering a VM, the volume count increases.
(3) restoring a VM, the volume count decreases.
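A minimal sketch for watching those counters while reproducing the three cases;
it assumes the usual cloud database and the account_id/type/count columns of
resource_count, and account id 2 is just an example.

# Sketch: dump the tracked counters for one account before and after each step.
mysql -u cloud -p cloud -e \
  "SELECT account_id, type, count FROM resource_count WHERE account_id = 2;"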

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4566) Incorrect values in resource_count table for resource limitation

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754439#comment-13754439
 ] 

ASF subversion and git services commented on CLOUDSTACK-4566:
-

Commit 948014dee6af67d4bdd27301e23f4cdee695d9f1 in branch 
refs/heads/4.2-forward from [~weizhou]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=948014d ]

CLOUDSTACK-4566: fix incorrect values in resource_count table for resource 
limitation

There are three issues in resource_count table
(1) expunge a vm, the public_ip decreases and becomes -1 in basic zone.
(2) recover a vm, the volume increase.
(3) restore a vm, the volume decrease.


 Incorrect values in resource_count table for resource limitation
 

 Key: CLOUDSTACK-4566
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4566
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Wei Zhou
Assignee: Wei Zhou

 Until now, I have found three issues in the resource_count table:
 (1) expunging a VM, the public_ip count decreases and becomes -1 in a basic zone.
 (2) recovering a VM, the volume count increases.
 (3) restoring a VM, the volume count decreases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CLOUDSTACK-3143) revoke securitygroup ingress/egress does not return correct response

2013-08-30 Thread Animesh Chaturvedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Animesh Chaturvedi resolved CLOUDSTACK-3143.


Resolution: Fixed

Resolving based on commit

 revoke securitygroup ingress/egress does not return correct response
 

 Key: CLOUDSTACK-3143
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3143
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: API
Affects Versions: 4.1.0
 Environment: Seems that revokeSecurityGroupIngress and
 revokeSecurityGroupEgress don't return the correct response.
 Instead of revokesecuritygroupingressresponse it returns
 revokesecuritygroupingress. Same for revokeSecurityGroupEgress.
Reporter: sebastien goasguen
 Fix For: 4.2.0




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4557) ceph: Performance: first time operations taking more time

2013-08-30 Thread Wido den Hollander (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754471#comment-13754471
 ] 

Wido den Hollander commented on CLOUDSTACK-4557:


With deploy do you mean boot or before it actually starts booting?

When using NFS for both Primary and Secondary a simple QCOW2 file copy is
required, but with RBD you have to go from QCOW2 to RAW on the first copy.
Afterwards it will use the RBD cloning feature to deploy new Instances.
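A hedged illustration of the extra first-time step described above, using
qemu-img's RBD output support; the template path, pool name and image name here
are made up.

# Sketch only: one-off conversion of a QCOW2 template into a RAW RBD image.
qemu-img convert -f qcow2 -O raw \
  /mnt/secondary/template/centos-template.qcow2 \
  rbd:cloudstack-pool/centos-template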

 ceph: Performance: first time operations taking more time
 ---

 Key: CLOUDSTACK-4557
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4557
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.2.0
Reporter: sadhu suresh

 It's taking more time to deploy a first VM on RBD-based primary storage
 when compared to VM deployment on NFS-based storage. In our environment it's
 taking more than 7 mins. [Our observation is that first-time operations are
 taking more time], like first-time VM deployment/snapshot/attach operations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4371) [Performance Testing] Basic zone with 20K Hosts, management server restart leaves the hosts in disconnected state for very long time

2013-08-30 Thread Sowmya Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754490#comment-13754490
 ] 

Sowmya Krishnan commented on CLOUDSTACK-4371:
-

All storage pools are present on the simulator DB also.
mysql> select count(*) from mockstoragepool;
+----------+
| count(*) |
+----------+
|        2 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from storage_pool;
+----------+
| count(*) |
+----------+
|        2 |
+----------+
1 row in set (0.00 sec)


 [Performance Testing] Basic zone with 20K Hosts, management server restart 
 leaves the hosts in disconnected state for very long time
 

 Key: CLOUDSTACK-4371
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4371
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.2.0
 Environment: Basic zone, with over 20K simulator hosts
Reporter: Sowmya Krishnan
  Labels: performance
 Fix For: 4.2.0

 Attachments: ms1_restartfail.log.gz, ms2_restartfail.log.gz, 
 ms3_restartfail.log.gz


 Basic zone performance test bed:
 20K simulator hosts,
 3 Management servers
 1 host/cluster
 Local storage
 Java heap size: 12GB
 db.cloud.maxActive=2000
 direct.agent.load.size=1000
 agent.lb.enabled=true
 Deploy around 20K simulator hosts with 3 Management servers clustered
 (Not deployed any VMs yet)
 After all hosts are deployed, stop all 3 Management servers and then start 
 all 3 one after another
 Result
 =
 Hosts don't get to connected state at all even after 10 minutes; around
 2K of them go into Alert state while the rest are in Disconnected state.
 mysql> select count(*), status, resource_state, type, mgmt_server_id from
 host group by mgmt_server_id, status, type, resource_state;
 +----------+--------------+----------------+--------------------+----------------+
 | count(*) | status       | resource_state | type               | mgmt_server_id |
 +----------+--------------+----------------+--------------------+----------------+
 |     1946 | Alert        | Enabled        | Routing            |           NULL |
 |    18054 | Disconnected | Enabled        | Routing            |           NULL |
 |        1 | Disconnected | Enabled        | SecondaryStorageVM |           NULL |
 +----------+--------------+----------------+--------------------+----------------+
 3 rows in set (0.11 sec)
 MS logs show a lot of storage pool exceptions while hosts try to get connected:
 2013-08-16 05:49:25,592 DEBUG [agent.transport.Request] 
 (AgentTaskPool-12:null) Seq 13-32440322: Sending  { Cmd , MgmtId: 
 206915885094132, via: 13, Ver: v1, Flags: 100011, [{com.cloud.agen
 t.api.CleanupNetworkRulesCmd:{interval:2028,wait:0}}] }
 2013-08-16 05:49:25,592 DEBUG [agent.transport.Request] 
 (AgentTaskPool-12:null) Seq 13-32440322: Executing:  { Cmd , MgmtId: 
 206915885094132, via: 13, Ver: v1, Flags: 100011, [{com.cloud.a
 gent.api.CleanupNetworkRulesCmd:{interval:2028,wait:0}}] }
 2013-08-16 05:49:25,592 DEBUG [xen.discoverer.XcpServerDiscoverer] 
 (AgentTaskPool-14:null) Not XenServer so moving on.
 2013-08-16 05:49:25,592 DEBUG [agent.manager.AgentManagerImpl] 
 (AgentTaskPool-14:null) Sending Connect to listener: 
 DeploymentPlanningManagerImpl_EnhancerByCloudStack_76f3d8e4
 2013-08-16 05:49:25,591 DEBUG [cloud.resource.AgentResourceBase] 
 (ClusteredAgentManager Timer:null) Deserializing simulated agent on reconnect
 2013-08-16 05:49:25,594 INFO  [network.security.SecurityGroupListener] 
 (AgentTaskPool-12:null) Scheduled network rules cleanup, interval=2028
 2013-08-16 05:49:25,594 INFO  [network.security.SecurityGroupListener] 
 (AgentTaskPool-12:null) Received a host startup notification
 2013-08-16 05:49:25,595 DEBUG [agent.manager.AgentManagerImpl] 
 (AgentTaskPool-12:null) Sending Connect to listener: StoragePoolMonitor
 ...
 ...
 2013-08-16 05:49:25,761 DEBUG [agent.manager.AgentManagerImpl] 
 (AgentTaskPool-12:null) Sending Connect to listener: 
 ClusteredVirtualMachineManagerImpl_EnhancerByCloudStack_b5459b7b
 2013-08-16 05:49:25,764 DEBUG [cloud.vm.VirtualMachineManagerImpl] 
 (AgentTaskPool-12:null) Found 0 VMs for host 13
 2013-08-16 05:49:25,765 DEBUG [agent.manager.AgentManagerImpl] 
 (AgentTaskPool-12:null) Sending Connect to listener: LocalStoragePoolListener
 2013-08-16 05:49:25,768 DEBUG 
 [datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl] 
 (AgentTaskPool-12:null) createPool Params @ scheme - Filesystem storageHost - 
 172.1.3.131 hostPath - /mnt/2a2463b4-4fd2-4ac7-ad3f-040a3046e478 port - -1
 2013-08-16 05:49:25,771 DEBUG 
 

[jira] [Comment Edited] (CLOUDSTACK-4371) [Performance Testing] Basic zone with 20K Hosts, management server restart leaves the hosts in disconnected state for very long time

2013-08-30 Thread Sowmya Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754490#comment-13754490
 ] 

Sowmya Krishnan edited comment on CLOUDSTACK-4371 at 8/30/13 8:24 AM:
--

All storage pools are present on the simulator DB also.
mysql> select count( * ) from mockstoragepool;
+------------+
| count( * ) |
+------------+
|          2 |
+------------+
1 row in set (0.00 sec)

mysql> select count( * ) from storage_pool;
+------------+
| count( * ) |
+------------+
|          2 |
+------------+
1 row in set (0.00 sec)


  was (Author: sowmyak):
All storage pools are present on the simulator DB also.
mysql> select count(*) from mockstoragepool;
+----------+
| count(*) |
+----------+
|        2 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from storage_pool;
+----------+
| count(*) |
+----------+
|        2 |
+----------+
1 row in set (0.00 sec)

  
 [Performance Testing] Basic zone with 20K Hosts, management server restart 
 leaves the hosts in disconnected state for very long time
 

 Key: CLOUDSTACK-4371
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4371
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.2.0
 Environment: Basic zone, with over 20K simulator hosts
Reporter: Sowmya Krishnan
  Labels: performance
 Fix For: 4.2.0

 Attachments: ms1_restartfail.log.gz, ms2_restartfail.log.gz, 
 ms3_restartfail.log.gz


 Basic zone performance test bed:
 20K simulator hosts,
 3 Management servers
 1 host/cluster
 Local storage
 Java heap size: 12GB
 db.cloud.maxActive=2000
 direct.agent.load.size=1000
 agent.lb.enabled=true
 Deploy around 20K simulator hosts with 3 Management servers clustered
 (Not deployed any VMs yet)
 After all hosts are deployed, stop all 3 Management servers and then start 
 all 3 one after another
 Result
 =
 Hosts don't get to connected state at all even after 10 minutes; around
 2K of them go into Alert state while the rest are in Disconnected state.
 mysql> select count(*), status, resource_state, type, mgmt_server_id from
 host group by mgmt_server_id, status, type, resource_state;
 +----------+--------------+----------------+--------------------+----------------+
 | count(*) | status       | resource_state | type               | mgmt_server_id |
 +----------+--------------+----------------+--------------------+----------------+
 |     1946 | Alert        | Enabled        | Routing            |           NULL |
 |    18054 | Disconnected | Enabled        | Routing            |           NULL |
 |        1 | Disconnected | Enabled        | SecondaryStorageVM |           NULL |
 +----------+--------------+----------------+--------------------+----------------+
 3 rows in set (0.11 sec)
 MS logs show a lot of storage pool exceptions while hosts try to get connected:
 2013-08-16 05:49:25,592 DEBUG [agent.transport.Request] 
 (AgentTaskPool-12:null) Seq 13-32440322: Sending  { Cmd , MgmtId: 
 206915885094132, via: 13, Ver: v1, Flags: 100011, [{com.cloud.agen
 t.api.CleanupNetworkRulesCmd:{interval:2028,wait:0}}] }
 2013-08-16 05:49:25,592 DEBUG [agent.transport.Request] 
 (AgentTaskPool-12:null) Seq 13-32440322: Executing:  { Cmd , MgmtId: 
 206915885094132, via: 13, Ver: v1, Flags: 100011, [{com.cloud.a
 gent.api.CleanupNetworkRulesCmd:{interval:2028,wait:0}}] }
 2013-08-16 05:49:25,592 DEBUG [xen.discoverer.XcpServerDiscoverer] 
 (AgentTaskPool-14:null) Not XenServer so moving on.
 2013-08-16 05:49:25,592 DEBUG [agent.manager.AgentManagerImpl] 
 (AgentTaskPool-14:null) Sending Connect to listener: 
 DeploymentPlanningManagerImpl_EnhancerByCloudStack_76f3d8e4
 2013-08-16 05:49:25,591 DEBUG [cloud.resource.AgentResourceBase] 
 (ClusteredAgentManager Timer:null) Deserializing simulated agent on reconnect
 2013-08-16 05:49:25,594 INFO  [network.security.SecurityGroupListener] 
 (AgentTaskPool-12:null) Scheduled network rules cleanup, interval=2028
 2013-08-16 05:49:25,594 INFO  [network.security.SecurityGroupListener] 
 (AgentTaskPool-12:null) Received a host startup notification
 2013-08-16 05:49:25,595 DEBUG [agent.manager.AgentManagerImpl] 
 (AgentTaskPool-12:null) Sending Connect to listener: StoragePoolMonitor
 ...
 ...
 2013-08-16 05:49:25,761 DEBUG [agent.manager.AgentManagerImpl] 
 (AgentTaskPool-12:null) Sending Connect to listener: 
 ClusteredVirtualMachineManagerImpl_EnhancerByCloudStack_b5459b7b
 2013-08-16 05:49:25,764 DEBUG [cloud.vm.VirtualMachineManagerImpl] 
 (AgentTaskPool-12:null) Found 0 VMs for host 13
 

[jira] [Commented] (CLOUDSTACK-4371) [Performance Testing] Basic zone with 20K Hosts, management server restart leaves the hosts in disconnected state for very long time

2013-08-30 Thread Sowmya Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754501#comment-13754501
 ] 

Sowmya Krishnan commented on CLOUDSTACK-4371:
-

mysql> select count( * ) from mockstoragepool where hostguid is NULL;
+------------+
| count( * ) |
+------------+
|          0 |
+------------+
1 row in set (0.00 sec)


 [Performance Testing] Basic zone with 20K Hosts, management server restart 
 leaves the hosts in disconnected state for very long time
 

 Key: CLOUDSTACK-4371
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4371
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.2.0
 Environment: Basic zone, with over 20K simulator hosts
Reporter: Sowmya Krishnan
  Labels: performance
 Fix For: 4.2.0

 Attachments: ms1_restartfail.log.gz, ms2_restartfail.log.gz, 
 ms3_restartfail.log.gz


 Basic zone performance test bed:
 20K simulator hosts,
 3 Management servers
 1 host/cluster
 Local storage
 Java heap size: 12GB
 db.cloud.maxActive=2000
 direct.agent.load.size=1000
 agent.lb.enabled=true
 Deploy around 20K simulator hosts with 3 Management servers clustered
 (Not deployed any VMs yet)
 After all hosts are deployed, stop all 3 Management servers and then start 
 all 3 one after another
 Result
 =
 Hosts don't get to connected state at all even after 10 minutes; around
 2K of them go into Alert state while the rest are in Disconnected state.
 mysql> select count(*), status, resource_state, type, mgmt_server_id from
 host group by mgmt_server_id, status, type, resource_state;
 +----------+--------------+----------------+--------------------+----------------+
 | count(*) | status       | resource_state | type               | mgmt_server_id |
 +----------+--------------+----------------+--------------------+----------------+
 |     1946 | Alert        | Enabled        | Routing            |           NULL |
 |    18054 | Disconnected | Enabled        | Routing            |           NULL |
 |        1 | Disconnected | Enabled        | SecondaryStorageVM |           NULL |
 +----------+--------------+----------------+--------------------+----------------+
 3 rows in set (0.11 sec)
 MS logs show a lot of storage pool exceptions while hosts try to get connected:
 2013-08-16 05:49:25,592 DEBUG [agent.transport.Request] 
 (AgentTaskPool-12:null) Seq 13-32440322: Sending  { Cmd , MgmtId: 
 206915885094132, via: 13, Ver: v1, Flags: 100011, [{com.cloud.agen
 t.api.CleanupNetworkRulesCmd:{interval:2028,wait:0}}] }
 2013-08-16 05:49:25,592 DEBUG [agent.transport.Request] 
 (AgentTaskPool-12:null) Seq 13-32440322: Executing:  { Cmd , MgmtId: 
 206915885094132, via: 13, Ver: v1, Flags: 100011, [{com.cloud.a
 gent.api.CleanupNetworkRulesCmd:{interval:2028,wait:0}}] }
 2013-08-16 05:49:25,592 DEBUG [xen.discoverer.XcpServerDiscoverer] 
 (AgentTaskPool-14:null) Not XenServer so moving on.
 2013-08-16 05:49:25,592 DEBUG [agent.manager.AgentManagerImpl] 
 (AgentTaskPool-14:null) Sending Connect to listener: 
 DeploymentPlanningManagerImpl_EnhancerByCloudStack_76f3d8e4
 2013-08-16 05:49:25,591 DEBUG [cloud.resource.AgentResourceBase] 
 (ClusteredAgentManager Timer:null) Deserializing simulated agent on reconnect
 2013-08-16 05:49:25,594 INFO  [network.security.SecurityGroupListener] 
 (AgentTaskPool-12:null) Scheduled network rules cleanup, interval=2028
 2013-08-16 05:49:25,594 INFO  [network.security.SecurityGroupListener] 
 (AgentTaskPool-12:null) Received a host startup notification
 2013-08-16 05:49:25,595 DEBUG [agent.manager.AgentManagerImpl] 
 (AgentTaskPool-12:null) Sending Connect to listener: StoragePoolMonitor
 ...
 ...
 2013-08-16 05:49:25,761 DEBUG [agent.manager.AgentManagerImpl] 
 (AgentTaskPool-12:null) Sending Connect to listener: 
 ClusteredVirtualMachineManagerImpl_EnhancerByCloudStack_b5459b7b
 2013-08-16 05:49:25,764 DEBUG [cloud.vm.VirtualMachineManagerImpl] 
 (AgentTaskPool-12:null) Found 0 VMs for host 13
 2013-08-16 05:49:25,765 DEBUG [agent.manager.AgentManagerImpl] 
 (AgentTaskPool-12:null) Sending Connect to listener: LocalStoragePoolListener
 2013-08-16 05:49:25,768 DEBUG 
 [datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl] 
 (AgentTaskPool-12:null) createPool Params @ scheme - Filesystem storageHost - 
 172.1.3.131 hostPath - /mnt/2a2463b4-4fd2-4ac7-ad3f-040a3046e478 port - -1
 2013-08-16 05:49:25,771 DEBUG 
 [datastore.lifecycle.CloudStackPrimaryDataStoreLifeCycleImpl] 
 (AgentTaskPool-12:null) Another active pool with the same uuid already exists
 2013-08-16 

[jira] [Updated] (CLOUDSTACK-4534) [object_store_refactor] Deleting uploaded volume is not deleting the volume from backend

2013-08-30 Thread Sudha Ponnaganti (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudha Ponnaganti updated CLOUDSTACK-4534:
-

Priority: Major  (was: Critical)

 [object_store_refactor] Deleting uploaded volume is not deleting the volume 
 from backend
 

 Key: CLOUDSTACK-4534
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4534
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller, Volumes
Affects Versions: 4.2.1
 Environment: git rev-parse HEAD~5
 1f46bc3fb09aead2cf1744d358fea7adba7df6e1
 Cluster: VMWare
 Storage: NFS
Reporter: Sanjeev N
 Fix For: 4.2.1

 Attachments: cloud.dmp, cloud.dmp, management-server.rar, 
 management-server.rar


 Deleting an uploaded volume is not deleting the volume from the backend and is
 not marking the removed field in the volumes table.
 Steps to Reproduce:
 
 1.Bring up CS with vmware cluster using NFS for both primary and secondary 
 storage
 2.Upload one volume using uploadVolume API
 3.When the volume is in Uploaded state try to delete the volume
 Result:
 ==
 The volume entry got deleted from volume_store_ref, but the volume was not
 deleted from secondary storage and the removed field was not set in the
 volumes table (see the sketch below).
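 One hedged way to confirm that from the database side; the uuid is the one
 passed to deleteVolume in the log below, and the state/removed columns are
 assumed from the volumes table mentioned above.
 # Sketch: check whether the volume row ever gets its removed timestamp set.
 mysql -u cloud -p cloud -e \
   "SELECT id, state, removed FROM volumes WHERE uuid = 'e9ee6c0d-d149-4771-a494-6efda849b2ce';"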
 Observations:
 ===
 Log snippet from management server log file as follows:
 2013-08-28 03:18:08,269 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
 ===START===  10.146.0.131 -- GET  
 command=deleteVolume&id=e9ee6c0d-d149-4771-a494-6efda849b2ce&response=json&sessionkey=vNQ7kc2GdEuxzKje8MQ2xSAqbAQ%3D&_=1377674288184
 2013-08-28 03:18:08,414 DEBUG [cloud.user.AccountManagerImpl] 
 (catalina-exec-20:null) Access granted to Acct[2-admin] to Domain:1/ by 
 AffinityGroupAccessChecker_EnhancerByCloudStack_86df51a8
 2013-08-28 03:18:08,421 INFO  [cloud.resourcelimit.ResourceLimitManagerImpl] 
 (catalina-exec-20:null) Discrepency in the resource count (original 
 count=77179526656 correct count = 78867689472) for type secondary_storage for 
 account ID 2 is fixed during resource count recalculation.
 2013-08-28 03:18:08,446 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
 ===END===  10.146.0.131 -- GET  
 command=deleteVolume&id=e9ee6c0d-d149-4771-a494-6efda849b2ce&response=json&sessionkey=vNQ7kc2GdEuxzKje8MQ2xSAqbAQ%3D&_=1377674288184
 2013-08-28 03:18:32,766 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Storage pool garbage collector found 0 
 templates to clean up in storage pool: pri_esx_306
 2013-08-28 03:18:32,772 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Secondary storage garbage collector found 0 
 templates to cleanup on template_store_ref for store: 
 37f6be5b-0899-48b4-9fd8-1fe483f47c0e
 2013-08-28 03:18:32,774 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Secondary storage garbage collector found 0 
 snapshots to cleanup on snapshot_store_ref for store: 
 37f6be5b-0899-48b4-9fd8-1fe483f47c0e
 2013-08-28 03:18:32,776 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Secondary storage garbage collector found 1 
 volumes to cleanup on volume_store_ref for store: 
 37f6be5b-0899-48b4-9fd8-1fe483f47c0e
 2013-08-28 03:18:32,777 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Deleting volume store DB entry: 
 VolumeDataStore[2-20-2volumes/2/20/7e5778fd-c4bf-35b3-9e7a-9ab8500ab469.ova]
 Volume in the backend:
 [root@Rhel63-Sanjeev 20]# pwd
 /tmp/nfs/sec_306/volumes/2/20
 [root@Rhel63-Sanjeev 20]# ls -l
 total 898008
 -rwxrwxrwx+ 1 root root 459320832 Aug 27 13:57 
 7e5778fd-c4bf-35b3-9e7a-9ab8500ab469.ova
 -rwxrwxrwx+ 1 root root 459312128 Sep 17  2010 CentOS5.3-x86_64-disk1.vmdk
 -rwxrwxrwx+ 1 root root   147 Sep 17  2010 CentOS5.3-x86_64.mf
 -rwxrwxrwx+ 1 root root  5340 Sep 17  2010 CentOS5.3-x86_64.ovf
 -rwxrwxrwx+ 1 root root   340 Aug 27 13:58 volume.properties
 [root@Rhel63-Sanjeev 20]#
 Attaching management server log file and cloud db.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-3138) [Doc]Flaws in upgrade documentation from 3.0.2 - 4.1.0

2013-08-30 Thread Sudha Ponnaganti (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudha Ponnaganti updated CLOUDSTACK-3138:
-

Summary: [Doc]Flaws in upgrade documentation from 3.0.2 - 4.1.0  (was: 
Flaws in upgrade documentation from 3.0.2 - 4.1.0)

 [Doc]Flaws in upgrade documentation from 3.0.2 - 4.1.0
 ---

 Key: CLOUDSTACK-3138
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3138
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.1.0
Reporter: Joe Brockmeier
Priority: Critical
  Labels: documentation
 Fix For: 4.2.0


 Reported on the mailing list (http://markmail.org/message/ussthbb6sx6kjm2j):
 there are many errors in the release notes for the upgrade from CS 3.0.2 to 4.0.1.
 Here are just a few, off the top of my head. I suggest you correct them.
 1. Location of config files is not at /etc/cloud/ but rather at 
 /etc/cloudstack now.
 2. components.xml is nowhere to be found in /etc/cloudstack
 3. server.xml generation failed, because i had enabled ssl in it. It 
 required me to generate them from scratch.
 4. There were no instructions for enabling https, anywhere. I had to fix 
 server.xml and tomcat6.xml to use my certificate.
 5. cloud-sysvmadm is nonexistent. I think there is cloudstack-sys.. Also 
 switches are wrong.
 6. Python and bash scripts are now located at
 /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/xenserver60/
 instead of /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/xenserver60/
 as documentation would tell you.
 7. for pbd in `xe pbd-list currently-attached=false| grep ^uuid | awk
 '{print $NF}'`; do xe pbd-plug uuid=$pbd ; doesn't work (see the sketch after
 this list):
 [root@x1 ~]# for pbd in `xe pbd-list currently-attached=false| grep
 ^uuid | awk '{print $NF}'`; do xe pbd-plug uuid=$pbd
  
 ...
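 A hedged reading of item 7 above: the loop as pasted never closes, so the
 shell keeps waiting for input. Assuming the missing terminator is the only
 problem, the complete re-plug loop would look like this (same xe pbd-list /
 xe pbd-plug commands as in the quoted snippet):
 for pbd in `xe pbd-list currently-attached=false | grep ^uuid | awk '{print $NF}'`; do
     xe pbd-plug uuid=$pbd
 done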

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CLOUDSTACK-4567) [DOC] Correct incubator links in the getting-release and gpg verification sections

2013-08-30 Thread Prasanna Santhanam (JIRA)
Prasanna Santhanam created CLOUDSTACK-4567:
--

 Summary: [DOC] Correct incubator links in the getting-release and 
gpg verification sections
 Key: CLOUDSTACK-4567
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4567
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Prasanna Santhanam
Priority: Critical



Section 3.1
3.1. Getting the release
You can download the latest CloudStack release from the Apache CloudStack
project download page.

The link still points to the download page on the incubator.

The section on verifying the GPG keys has the wrong link to the KEYS file on
the incubator site.






--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4567) [DOC] Correct incubator links in the getting-release and gpg verification sections

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754512#comment-13754512
 ] 

ASF subversion and git services commented on CLOUDSTACK-4567:
-

Commit ec91ea459e0cc3de5cae4791aa258f4e1fd1991d in branch refs/heads/master 
from [~tsp]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ec91ea4 ]

CLOUDSTACK-4567: Correcting URLs that pointed to the incubator resources

The location on the downloads section points to the incubator pages and
the KEYS file used for GPG verify of the source points to the incubator
resource. Corrected both links

Signed-off-by: Prasanna Santhanam t...@apache.org
(cherry picked from commit 420b654eaf2fe23c28d0da8dbddd7c58d0ce868b)


 [DOC] Correct incubator links in the getting-release and gpg verification 
 sections
 --

 Key: CLOUDSTACK-4567
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4567
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Prasanna Santhanam
Priority: Critical

 Section 3.1
 3.1. Getting the release
 You can download the latest CloudStack release from the Apache CloudStack 
 project download page1.
 The link still points to the download page on the incubator.
 Section on verifying the GPG keys has wrong link to the KEYS file on incubator

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CLOUDSTACK-4567) [DOC] Correct incubator links in the getting-release and gpg verification sections

2013-08-30 Thread Prasanna Santhanam (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanna Santhanam resolved CLOUDSTACK-4567.


Resolution: Fixed

Needs cherry-picking to 4.2

 [DOC] Correct incubator links in the getting-release and gpg verification 
 sections
 --

 Key: CLOUDSTACK-4567
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4567
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc, Install and Setup
Affects Versions: 4.2.0
Reporter: Prasanna Santhanam
Assignee: Prasanna Santhanam
Priority: Critical
 Fix For: 4.2.0


 Section 3.1
 3.1. Getting the release
 You can download the latest CloudStack release from the Apache CloudStack 
 project download page1.
 The link still points to the download page on the incubator.
 Section on verifying the GPG keys has wrong link to the KEYS file on incubator

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4567) [DOC] Correct incubator links in the getting-release and gpg verification sections

2013-08-30 Thread Prasanna Santhanam (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanna Santhanam updated CLOUDSTACK-4567:
---

Fix Version/s: 4.2.0

 [DOC] Correct incubator links in the getting-release and gpg verification 
 sections
 --

 Key: CLOUDSTACK-4567
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4567
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc, Install and Setup
Affects Versions: 4.2.0
Reporter: Prasanna Santhanam
Assignee: Prasanna Santhanam
Priority: Critical
 Fix For: 4.2.0


 Section 3.1
 3.1. Getting the release
 You can download the latest CloudStack release from the Apache CloudStack 
 project download page1.
 The link still points to the download page on the incubator.
 Section on verifying the GPG keys has wrong link to the KEYS file on incubator

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CLOUDSTACK-4568) Need to add this to the release note of 4.2

2013-08-30 Thread Bharat Kumar (JIRA)
Bharat Kumar created CLOUDSTACK-4568:


 Summary: Need to add this to the release note of 4.2
 Key: CLOUDSTACK-4568
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4568
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Doc
Affects Versions: 4.2.0
Reporter: Bharat Kumar
 Fix For: 4.2.0


After the upgrade to 4.2, mem.overprovisioning.factor and
cpu.overprovisioning.factor will be set to one (the default value) and
are at the cluster level now.

If someone was using mem.overprovisioning.factor and
cpu.overprovisioning.factor prior to 4.2, after the upgrade these will be reset
to one and can be changed by editing the cluster settings (see the sketch below).

All the clusters created after the upgrade will be created with the overcommit
values specified in the global config by default.
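A hedged way to double-check the per-cluster values after the upgrade; this
assumes cluster-scoped settings are stored in the cluster_details table of the
cloud database, and since the exact key names are not guaranteed the sketch
simply filters the full listing.

# Sketch: list per-cluster settings and pick out the overcommit factors.
mysql -u cloud -p cloud -e "SELECT cluster_id, name, value FROM cluster_details;" \
  | grep -iE 'overcommit|overprovisioning'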

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4567) [DOC] Correct incubator links in the getting-release and gpg verification sections

2013-08-30 Thread Prasanna Santhanam (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanna Santhanam updated CLOUDSTACK-4567:
---

Affects Version/s: 4.2.0

 [DOC] Correct incubator links in the getting-release and gpg verification 
 sections
 --

 Key: CLOUDSTACK-4567
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4567
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc, Install and Setup
Affects Versions: 4.2.0
Reporter: Prasanna Santhanam
Assignee: Prasanna Santhanam
Priority: Critical

 Section 3.1
 3.1. Getting the release
 You can download the latest CloudStack release from the Apache CloudStack 
 project download page1.
 The link still points to the download page on the incubator.
 Section on verifying the GPG keys has wrong link to the KEYS file on incubator

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4567) [DOC] Correct incubator links in the getting-release and gpg verification sections

2013-08-30 Thread Prasanna Santhanam (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanna Santhanam updated CLOUDSTACK-4567:
---

Component/s: Install and Setup
 Doc

 [DOC] Correct incubator links in the getting-release and gpg verification 
 sections
 --

 Key: CLOUDSTACK-4567
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4567
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc, Install and Setup
Reporter: Prasanna Santhanam
Assignee: Prasanna Santhanam
Priority: Critical

 Section 3.1
 3.1. Getting the release
 You can download the latest CloudStack release from the Apache CloudStack 
 project download page1.
 The link still points to the download page on the incubator.
 Section on verifying the GPG keys has wrong link to the KEYS file on incubator

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4567) [DOC] Correct incubator links in the getting-release and gpg verification sections

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754513#comment-13754513
 ] 

ASF subversion and git services commented on CLOUDSTACK-4567:
-

Commit 420b654eaf2fe23c28d0da8dbddd7c58d0ce868b in branch 
refs/heads/4.2-forward from [~tsp]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=420b654 ]

CLOUDSTACK-4567: Correcting URLs that pointed to the incubator resources

The location on the downloads section points to the incubator pages and
the KEYS file used for GPG verify of the source points to the incubator
resource. Corrected both links

Signed-off-by: Prasanna Santhanam t...@apache.org


 [DOC] Correct incubator links in the getting-release and gpg verification 
 sections
 --

 Key: CLOUDSTACK-4567
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4567
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Prasanna Santhanam
Priority: Critical

 Section 3.1
 3.1. Getting the release
 You can download the latest CloudStack release from the Apache CloudStack 
 project download page1.
 The link still points to the download page on the incubator.
 Section on verifying the GPG keys has wrong link to the KEYS file on incubator

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CLOUDSTACK-4567) [DOC] Correct incubator links in the getting-release and gpg verification sections

2013-08-30 Thread Prasanna Santhanam (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanna Santhanam reassigned CLOUDSTACK-4567:
--

Assignee: Prasanna Santhanam

 [DOC] Correct incubator links in the getting-release and gpg verification 
 sections
 --

 Key: CLOUDSTACK-4567
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4567
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Reporter: Prasanna Santhanam
Assignee: Prasanna Santhanam
Priority: Critical

 Section 3.1
 3.1. Getting the release
 You can download the latest CloudStack release from the Apache CloudStack 
 project download page1.
 The link still points to the download page on the incubator.
 Section on verifying the GPG keys has wrong link to the KEYS file on incubator

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-2319) 2.2.14 to 4.1.0 upgrade: unable to add egress rules

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754521#comment-13754521
 ] 

ASF subversion and git services commented on CLOUDSTACK-2319:
-

Commit d9ba234d6c032aeb2ba04d4e6be0502de8a4efd9 in branch 
refs/heads/4.2-forward from [~weizhou]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d9ba234 ]

CLOUDSTACK-2319: fix incorrect account_id in event table for Revoke 
SecurityGroupRule commands


 2.2.14 to 4.1.0 upgrade: unable to add egress rules
 ---

 Key: CLOUDSTACK-2319
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2319
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: KVM
Affects Versions: 4.1.0
Reporter: Shashi Dahal
Assignee: Wei Zhou
Priority: Blocker
  Labels: egress, kvm, security-groups, upgrade
 Fix For: 4.1.0


 Hi, 
 VMs that were running in 2.2.14 and then upgraded to 4.1.0 are
 unable to connect outside.
 This is because egress rules are introduced with no rules by default.
 This causes the VM to stop connecting to the outside world, and the traffic
 is one way.
 I can SSH to the VM, and I can ping the VM, but I cannot ssh or ping from
 the VM.
 I am unable to add egress rules and there is not even a single line showing 
 the api calls or anything related to adding the egress rules in the 
 management log. 
 Notes: 
 Upgrade Environment:  CloudStack 2.2.14, Advance Networking with Security 
 Groups, CentOS 6.4
 There were no issues during the upgrade process. All steps completed 
 successfully. 
 The system VMs were upgraded successfully 
 --
 Stopping and starting 1 secondary storage vm(s)...
 Done stopping and starting secondary storage vm(s)
 Stopping and starting 1 console proxy vm(s)...
 Done stopping and starting console proxy vm(s).
 Stopping and starting 1 running routing vm(s)...
 Done restarting router(s).
 --

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4517) [upgrade][Vmware]Deployment of VM using cents 6.2 template registered before upgrade is failing.

2013-08-30 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-4517:
---

Summary: [upgrade][Vmware]Deployment of VM using cents 6.2 template 
registered before upgrade is failing.  (was: [upgrade][Vmware]Deployment of VM 
using template registered before upgrade is failing.)

 [upgrade][Vmware]Deployment of VM using cents 6.2 template registered before 
 upgrade is failing.
 

 Key: CLOUDSTACK-4517
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4517
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Upgrade, VMware
Affects Versions: 4.2.0
 Environment: upgraded from 3.07 to 4.2
Reporter: manasaveloori
Assignee: Nitin Mehta
 Fix For: 4.2.1

 Attachments: management-server.zip, mysqldumpAfterUp.dmp, 
 mysqldumpBeforeUp.dmp


 Steps:
 1.Have CS with 3.0.7 build.
 2.Register a template 
 id: 10
 store_id: 2
  template_id: 204
  created: 2013-08-27 12:13:34
 last_updated: 2013-08-27 16:05:45
   job_id: e7c32b26-3e06-4a9e-a9d0-cd6452595659
 download_pct: 100
 size: 107374182400
   store_role: Image
physical_size: 707778560
   download_state: DOWNLOADED
error_str: Install completed successfully at 8/27/13 6:54 AM
   local_path: 
 /mnt/SecStorage/0e8da06e-0788-3efb-86a6-b0705a2205d3/template/tmpl/2/204/dnld4711105428993449674tmp_
 install_path: 
 template/tmpl/2/204/dec94d02-8c40-34e8-9a1d-99906a95df2a.ova
  url: http://10.147.28.7/templates/vmware/Centos6_2.ova
 download_url: NULL
 download_url_created: NULL
state: Ready
destroyed: 0
  is_copy: 0
 update_count: 0
  ref_cnt: 0
  updated: NULL
 5 rows in set (0.00 sec)
 3.Upgrade the build to 4.2.
 4.Deploy a VM using the template registered before upgrade.
 Observation:
 Observed the following exception:
 2013-08-27 23:08:37,892 ERROR [storage.resource.VmwareStorageProcessor] 
 (DirectAgent-101:10.147.40.28) clone volume from base image failed due to 
 Exception: javax.xml.ws.WebServiceException
 Message: java.net.SocketTimeoutException: Read timed out
 javax.xml.ws.WebServiceException: java.net.SocketTimeoutException: Read timed 
 out
 at 
 com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.readResponseCodeAndMessage(HttpClientTransport.java:201)
 at 
 com.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.process(HttpTransportPipe.java:151)
 at 
 com.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.processRequest(HttpTransportPipe.java:83)
 at 
 com.sun.xml.internal.ws.transport.DeferredTransportPipe.processRequest(DeferredTransportPipe.java:78)
 at com.sun.xml.internal.ws.api.pipe.Fiber.__doRun(Fiber.java:587)
 at com.sun.xml.internal.ws.api.pipe.Fiber._doRun(Fiber.java:546)
 at com.sun.xml.internal.ws.api.pipe.Fiber.doRun(Fiber.java:531)
 at com.sun.xml.internal.ws.api.pipe.Fiber.runSync(Fiber.java:428)
 at com.sun.xml.internal.ws.client.Stub.process(Stub.java:211)
 at 
 com.sun.xml.internal.ws.client.sei.SEIStub.doProcess(SEIStub.java:124)
 at 
 com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:98)
 at 
 com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:78)
 at com.sun.xml.internal.ws.client.sei.SEIStub.invoke(SEIStub.java:107)
 at $Proxy90.waitForUpdates(Unknown Source)
 at 
 com.cloud.hypervisor.vmware.util.VmwareClient.waitForValues(VmwareClient.java:428)
 at 
 com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:371)
 at 
 com.cloud.hypervisor.vmware.mo.VirtualMachineMO.createFullClone(VirtualMachineMO.java:594)
 at 
 com.cloud.storage.resource.VmwareStorageProcessor.createVMFullClone(VmwareStorageProcessor.java:293)
 at 
 com.cloud.storage.resource.VmwareStorageProcessor.cloneVolumeFromBaseTemplate(VmwareStorageProcessor.java:384)
 at 
 com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:73)
 at 
 com.cloud.storage.resource.VmwareStorageSubsystemCommandHandler.execute(VmwareStorageSubsystemCommandHandler.java:147)
 at 
 com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:49)
 at 
 

[jira] [Updated] (CLOUDSTACK-4517) [upgrade][Vmware]Deployment of VM using centos 6.2 template registered before upgrade is failing.

2013-08-30 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-4517:
---

Summary: [upgrade][Vmware]Deployment of VM using centos 6.2 template 
registered before upgrade is failing.  (was: [upgrade][Vmware]Deployment of VM 
using cents 6.2 template registered before upgrade is failing.)

 [upgrade][Vmware]Deployment of VM using centos 6.2 template registered before 
 upgrade is failing.
 -

 Key: CLOUDSTACK-4517
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4517
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Upgrade, VMware
Affects Versions: 4.2.0
 Environment: upgraded from 3.07 to 4.2
Reporter: manasaveloori
Assignee: Nitin Mehta
 Fix For: 4.2.1

 Attachments: management-server.zip, mysqldumpAfterUp.dmp, 
 mysqldumpBeforeUp.dmp


 Steps:
 1.Have CS with 3.0.7 build.
 2.Register a template 
 id: 10
 store_id: 2
  template_id: 204
  created: 2013-08-27 12:13:34
 last_updated: 2013-08-27 16:05:45
   job_id: e7c32b26-3e06-4a9e-a9d0-cd6452595659
 download_pct: 100
 size: 107374182400
   store_role: Image
physical_size: 707778560
   download_state: DOWNLOADED
error_str: Install completed successfully at 8/27/13 6:54 AM
   local_path: 
 /mnt/SecStorage/0e8da06e-0788-3efb-86a6-b0705a2205d3/template/tmpl/2/204/dnld4711105428993449674tmp_
 install_path: 
 template/tmpl/2/204/dec94d02-8c40-34e8-9a1d-99906a95df2a.ova
  url: http://10.147.28.7/templates/vmware/Centos6_2.ova
 download_url: NULL
 download_url_created: NULL
state: Ready
destroyed: 0
  is_copy: 0
 update_count: 0
  ref_cnt: 0
  updated: NULL
 5 rows in set (0.00 sec)
 3.Upgrade the build to 4.2.
 4.Deploy a VM using the template registered before upgrade.
 Observation:
 Observed the following exception:
 2013-08-27 23:08:37,892 ERROR [storage.resource.VmwareStorageProcessor] 
 (DirectAgent-101:10.147.40.28) clone volume from base image failed due to 
 Exception: javax.xml.ws.WebServiceException
 Message: java.net.SocketTimeoutException: Read timed out
 javax.xml.ws.WebServiceException: java.net.SocketTimeoutException: Read timed 
 out
 at 
 com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.readResponseCodeAndMessage(HttpClientTransport.java:201)
 at 
 com.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.process(HttpTransportPipe.java:151)
 at 
 com.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.processRequest(HttpTransportPipe.java:83)
 at 
 com.sun.xml.internal.ws.transport.DeferredTransportPipe.processRequest(DeferredTransportPipe.java:78)
 at com.sun.xml.internal.ws.api.pipe.Fiber.__doRun(Fiber.java:587)
 at com.sun.xml.internal.ws.api.pipe.Fiber._doRun(Fiber.java:546)
 at com.sun.xml.internal.ws.api.pipe.Fiber.doRun(Fiber.java:531)
 at com.sun.xml.internal.ws.api.pipe.Fiber.runSync(Fiber.java:428)
 at com.sun.xml.internal.ws.client.Stub.process(Stub.java:211)
 at 
 com.sun.xml.internal.ws.client.sei.SEIStub.doProcess(SEIStub.java:124)
 at 
 com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:98)
 at 
 com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:78)
 at com.sun.xml.internal.ws.client.sei.SEIStub.invoke(SEIStub.java:107)
 at $Proxy90.waitForUpdates(Unknown Source)
 at 
 com.cloud.hypervisor.vmware.util.VmwareClient.waitForValues(VmwareClient.java:428)
 at 
 com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:371)
 at 
 com.cloud.hypervisor.vmware.mo.VirtualMachineMO.createFullClone(VirtualMachineMO.java:594)
 at 
 com.cloud.storage.resource.VmwareStorageProcessor.createVMFullClone(VmwareStorageProcessor.java:293)
 at 
 com.cloud.storage.resource.VmwareStorageProcessor.cloneVolumeFromBaseTemplate(VmwareStorageProcessor.java:384)
 at 
 com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:73)
 at 
 com.cloud.storage.resource.VmwareStorageSubsystemCommandHandler.execute(VmwareStorageSubsystemCommandHandler.java:147)
 at 
 com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:49)
 at 
 

[jira] [Updated] (CLOUDSTACK-4200) listSystemVMs API and listRouters API fail to return hypervisor property

2013-08-30 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-4200:
---

Fix Version/s: (was: 4.2.0)
   4.2.1

 listSystemVMs API and listRouters API fail to return hypervisor property 
 -

 Key: CLOUDSTACK-4200
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4200
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: API
Affects Versions: 4.2.0
Reporter: Nitin Mehta
Assignee: Nitin Mehta
 Fix For: 4.2.1


 The listSystemVMs and listRouters APIs don't return the hypervisor property, and 
 this is important for the scale VM operation, since it is not implemented for all 
 the hypervisors.
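
For context on why the missing field matters to callers, here is a minimal client-side sketch that fetches listSystemVms and checks whether a hypervisor field came back before gating a scale operation on it. The unauthenticated integration port (8096), the host name, and the naive string check are assumptions made for this example, not details from this report.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch only: call listSystemVms and look for a hypervisor field in the JSON
// response. With the bug described above the field is simply absent, so a
// caller cannot tell whether the scale operation applies to that VM.
public class HypervisorCheck {
    public static void main(String[] args) throws Exception {
        // Assumed management server host and integration API port; adjust as needed.
        URL url = new URL("http://localhost:8096/client/api?command=listSystemVms&response=json");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        boolean hasHypervisor = body.toString().contains("\"hypervisor\"");
        System.out.println(hasHypervisor
                ? "hypervisor property present - the scale operation can be gated on it"
                : "hypervisor property missing (this bug) - the scale operation cannot be gated");
    }
}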

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4200) listSystemVMs API and listRouters API fail to return hypervisor property

2013-08-30 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-4200:
---

Priority: Critical  (was: Major)

 listSystemVMs API and listRouters API fail to return hypervisor property 
 -

 Key: CLOUDSTACK-4200
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4200
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: API
Affects Versions: 4.2.0
Reporter: Nitin Mehta
Assignee: Nitin Mehta
Priority: Critical
 Fix For: 4.2.1


 The listSystemVMs and listRouters APIs don't return the hypervisor property, and 
 this is important for the scale VM operation, since it is not implemented for all 
 the hypervisors.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4200) listSystemVMs API and listRouters API fail to return hypervisor property

2013-08-30 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-4200:
---

Fix Version/s: (was: 4.2.1)
   Future

 listSystemVMs API and listRouters API fail to return hypervisor property 
 -

 Key: CLOUDSTACK-4200
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4200
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: API
Affects Versions: 4.2.0
Reporter: Nitin Mehta
Assignee: Nitin Mehta
Priority: Critical
 Fix For: Future


 The listSystemVMs and listRouters APIs don't return the hypervisor property, and 
 this is important for the scale VM operation, since it is not implemented for all 
 the hypervisors.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-840) [DOC] Document CLOUDSTACK-670 regarding configuration option for linked clones or lack thereof

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754529#comment-13754529
 ] 

ASF subversion and git services commented on CLOUDSTACK-840:


Commit 33c7c654af85a5493ef6fa053dc41542bfffe180 in branch refs/heads/master 
from [~jessica.tomec...@citrix.com]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=33c7c65 ]

CLOUDSTACK-840. DOC. New doc. Linked and full clones on VMware.


 [DOC] Document CLOUDSTACK-670 regarding configuration option for linked 
 clones or lack thereof
 --

 Key: CLOUDSTACK-840
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-840
 Project: CloudStack
  Issue Type: Sub-task
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Reporter: David Nalley
Assignee: Jessica Tomechak
 Fix For: 4.2.0


 Add documentation for the configuration option about use (or not) of linked 
 clones. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CLOUDSTACK-4569) [doc] Review comments on Egress Firewall

2013-08-30 Thread Radhika Nair (JIRA)
Radhika Nair created CLOUDSTACK-4569:


 Summary: [doc] Review comments on Egress Firewall
 Key: CLOUDSTACK-4569
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4569
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Doc
Affects Versions: 4.2.0
Reporter: Radhika Nair
Assignee: Radhika Nair
 Fix For: 4.2.0


16.20.1.3. Changing the Default Egress Policy
Change: 
Configuring Default Egress Policy 
2.
You can configure the default policy of egress firewall rules in Isolated 
Advanced networks.
Change: 
The default egress policy for the isolated guest network is configured using 
the network offering. Create a network offering with the egress policy Allow or Deny, 
then use this network offering to create the network (see the API sketch below).

3.
16.15.3. Assigning Additional IPs to a VM
1. You need to specify the secondary IP address on the guest VM.  
Change: You need to configure the IP on the guest VM NIC manually.
2.
In point #6: ensure that you assign IPs to the NIC each time the VM reboots. 
Change:
Ensure that the IP address configuration persists across VM reboots.
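
As a concrete illustration of the suggested egress wording above (create a network offering that carries the default egress policy, then create the network from it), here is a minimal sketch of the API query that would be involved. It only assembles the request string; the egressdefaultpolicy parameter name and its true-means-Allow semantics are assumptions to verify against the 4.2 API reference, and authentication/signing is left out on purpose.

// Sketch only: assemble a createNetworkOffering request whose default egress
// policy is Allow. The egressdefaultpolicy parameter and its semantics are
// assumptions for this example; check the API reference before relying on it.
public class EgressOfferingQuery {
    public static void main(String[] args) {
        String query = "command=createNetworkOffering"
                + "&name=IsolatedDefaultEgressAllow"
                + "&displaytext=Isolated+offering+with+default+egress+Allow"
                + "&guestiptype=Isolated"
                + "&traffictype=Guest"
                + "&egressdefaultpolicy=true"; // assumed: true = Allow, false = Deny
        // Print the unsigned request; in practice this would be signed or sent
        // over the integration port.
        System.out.println("http://localhost:8096/client/api?" + query + "&response=json");
    }
}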



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (CLOUDSTACK-4474) [UI] Zone wide Primary storages does not display the zone information

2013-08-30 Thread Sailaja Mada (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailaja Mada closed CLOUDSTACK-4474.



This is fixed with the latest build. Hence closing the bug. 

 [UI] Zone wide Primary storages does not display the zone information
 -

 Key: CLOUDSTACK-4474
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4474
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.2.1
Reporter: Sailaja Mada
Assignee: Jessica Wang
Priority: Critical
 Fix For: 4.2.1

 Attachments: dbvol.sql, zonewide1.png, zonewide2.png


 Steps:
 1. Configure an Adv Zone with a VMware cluster. 
 2. Add a zone-wide primary storage. 
 3. Access Infrastructure > Primary Storage. 
 4. Try to get the zone to which this storage is added.  
 Observation:
 There is no zone info provided with this storage :
 API gives the details of the zone :
 http://10.102.192.207:8080/client/api?command=listStoragePools&sessionkey=W20AJSGGax%2F%2B4ASviTMmxq26fx0%3D&page=1&pageSize=20&listAll=true&_=1377257345005
 <storagepool><id>f528f6ca-31e0-333a-88a1-93acce50e381</id><zoneid>659f6614-3363-46eb-ace3-6d1007957d30</zoneid><zonename>LegacyZone1</zonename><name>legacyzwps2</name><ipaddress>10.102.192.100</ipaddress><path>/cpg_vol/sailaja/legacyzwps2</path><created>2013-08-22T12:10:42+0530</created><type>NetworkFilesystem</type><disksizetotal>879609303040</disksizetotal><disksizeallocated>10737418240</disksizeallocated><disksizeused>491180085248</disksizeused><tags>zwps2</tags><state>Up</state><scope>ZONE</scope><hypervisor>VMware</hypervisor></storagepool>
   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CLOUDSTACK-4570) service cloud-management wrongly named

2013-08-30 Thread Pavan Kumar Bandarupally (JIRA)
Pavan Kumar Bandarupally created CLOUDSTACK-4570:


 Summary: service cloud-management wrongly named
 Key: CLOUDSTACK-4570
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4570
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Doc
Affects Versions: 4.2.0
Reporter: Pavan Kumar Bandarupally
 Fix For: 4.2.0


4.2.6 LDAP User Authentication: Limitation

"service cloud-management restart" should be changed to "service 
cloudstack-management restart".

Apart from that, there is a minor spelling mistake in section 3.7, About 
Secondary Storage.

In the last-but-one paragraph of that section, Swift is misspelled as Swoft.

The NFS storage in each zone acts as a staging area
through which all templates and other secondary storage data pass before being 
forwarded to Swoft
or S3



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4569) [doc] Review comments on Egress Firewall

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754554#comment-13754554
 ] 

ASF subversion and git services commented on CLOUDSTACK-4569:
-

Commit f37e0b0b6bf473884d4f7dfc21e3559f4ebeb1b3 in branch 
refs/heads/4.2-forward from [~radhikap]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=f37e0b0 ]

CLOUDSTACK-4569 review comments on egress firewall and multiple ip per nic


 [doc] Review comments on Egress Firewall
 

 Key: CLOUDSTACK-4569
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4569
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.2.0
Reporter: Radhika Nair
Assignee: Radhika Nair
 Fix For: 4.2.0


 16.20.1.3. Changing the Default Egress Policy
 Change: 
 Configuring Default Egress Policy 
 2.
 You can configure the default policy of egress firewall rules in Isolated 
 Advanced networks.
 Change: 
 The default egress policy for the isolated guest network is configured using 
 the network offering. Create network offering with egress policy Allow/Deny. 
 Now use this network offering to create network.
 3.
 16.15.3. Assigning Additional IPs to a VM
 1.1. You need to specify the secondary IP address on the guest VM.  
 Change: You need to configure the IP on guest vm NIC manually.
 2.
 In point# 6: ensure that you assign IPs to NIC each time the VM reboots. 
 change:
 Ensure the ip address configuration persist on vm reboot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4533) permission issue in usage server and it failed to start after upgrade from 3.0.4 to 4.2

2013-08-30 Thread Kishan Kavala (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754553#comment-13754553
 ] 

Kishan Kavala commented on CLOUDSTACK-4533:
---

As a result of commit cd65d26a931fb4599cc9831a33a52cd5a2759a42, some 
parameters are missing in db.properties for usage.

Workaround:
1. Copy db.properties from /etc/cloudstack/management to /etc/cloudstack/usage
2. Restart Usage server

 permission issue in usage server and it failed to start after upgrade from 
 3.0.4 to 4.2
 ---

 Key: CLOUDSTACK-4533
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4533
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Packaging, Upgrade, Usage
Affects Versions: 4.2.1
 Environment: 
Reporter: shweta agarwal
  Labels: ReleaseNote
 Fix For: 4.2.1

 Attachments: cloudstack-usage.err, cloudstack-usage.err, 
 cloudstack-usage.out, cloudstack-usage.out, usage.log


 did an upgrade from 3.0.4 to 4.2  and then start usage server. 
 Usage server failed to start
 giving following exception :
 log4j:ERROR setFile(null,true) call failed.
 java.io.FileNotFoundException: /var/log/cloudstack/usage/usage.log 
 (Permission denied)
 at java.io.FileOutputStream.openAppend(Native Method)
 at java.io.FileOutputStream.init(FileOutputStream.java:207)
 at java.io.FileOutputStream.init(FileOutputStream.java:131)
 at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
 at 
 org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
 at 
 org.apache.log4j.rolling.RollingFileAppender.activateOptions(RollingFileAppender.java:179)
 at 
 org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
 at 
 org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
 at 
 org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
 at 
 org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
 at 
 org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:523)
 at 
 org.apache.log4j.xml.DOMConfigurator.parseRoot(DOMConfigurator.java:492)
 at 
 org.apache.log4j.xml.DOMConfigurator.parse(DOMConfigurator.java:1001)
 at 
 org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:867)
 at 
 org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:773)
 at 
 org.apache.log4j.xml.DOMConfigurator.configure(DOMConfigurator.java:901)
 at 
 org.springframework.util.Log4jConfigurer.initLogging(Log4jConfigurer.java:69)
 at com.cloud.usage.UsageServer.initLog4j(UsageServer.java:89)
 at com.cloud.usage.UsageServer.init(UsageServer.java:52)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at 
 org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
 log4j:ERROR setFile(null,true) call failed.
 java.io.FileNotFoundException: /var/log/cloudstack/usage/usage.log 
 (Permission denied)
 at java.io.FileOutputStream.openAppend(Native Method)
 at java.io.FileOutputStream.init(FileOutputStream.java:207)
 at java.io.FileOutputStream.init(FileOutputStream.java:131)
 at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
 at 
 org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
 at 
 org.apache.log4j.rolling.RollingFileAppender.activateOptions(RollingFileAppender.java:179)
 at 
 org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
 at 
 org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
 at 
 org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
 at 
 org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
 at 
 org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:523)
 at 
 org.apache.log4j.xml.DOMConfigurator.parseRoot(DOMConfigurator.java:492)
 at 
 org.apache.log4j.xml.DOMConfigurator.parse(DOMConfigurator.java:1001)
 at 
 org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:867)
 at 
 org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:755)
 

[jira] [Updated] (CLOUDSTACK-4328) Make mode http & option httpclose in HAproxy.conf configurable on port 80

2013-08-30 Thread Daan Hoogland (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daan Hoogland updated CLOUDSTACK-4328:
--

Description: 
By default CS configures mode http & option httpclose when it detects a rule on 
public port 80.

In most situations this is perfectly OK, but we hit a specific situation where 
this has a negative impact on performance. In this case a Varnish implementation 
needs to run on port 80 and cannot run on an alternative port. We could 
imagine this could be an issue for other CS users too.
Besides this, the maxconnections value also needs to be raised, but this has already 
been addressed by CS-2997, which is also the inspiration for making the http mode 
configurable.

Next, several other related options need to be set:
 maxpipes 20480 # suggestion is to set it to 1/4 of max connections and 
do this automatically (if needed this can be altered later)
 nokqueue
 nopoll

and of the currently set options
option forwardfor
option forceclose
the latter should be changed into
no option forceclose


Details on the performance difference below.

So we would like to see this http mode on port 80 configurable, e.g. httpmode 
false-true on the API.

See also CS-2997 and the following commits:
dd33abffbe3b7c5b615e8f64b1824a720329dd0d [dd33abf]
954e1978130b3cfb0c73f2f1506d94440f478f01 [954e197]

Performance testing details:

with ‘mode http’ and ‘option httpclose’ on the load-balancing rule:

9:34 root@w6 /home/erwin/src/wrk  ./wrk  -c 1000 -t 20 -d 20 
http://x/imgbase0/imagebase/thumb/FC/2/7/0/8/1004004013338072.jpg

Running 20s test @ 
http://x/imgbase0/imagebase/thumb/FC/2/7/0/8/1004004013338072.jpg
  20 threads and 1000 connections
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency 6.21s 6.30s   15.87s82.30%
Req/Sec43.61 37.81   371.00 78.61%
  16452 requests in 20.01s, 146.60MB read
  Socket errors: connect 0, read 8, write 309, timeout 4753
Requests/sec:822.03
Transfer/sec:  7.33MB

-   without:

9:33 root@w6 /home/erwin/src/wrk  ./wrk  -c 1000 -t 20 -d 20 
http://x/imgbase0/imagebase/thumb/FC/2/7/0/8/1004004013338072.jpg

Running 20s test @ 
http://x/imgbase0/imagebase/thumb/FC/2/7/0/8/1004004013338072.jpg
  20 threads and 1000 connections
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency   240.62ms1.28s   12.19s97.80%
Req/Sec   545.98181.96 1.46k74.70%
  29 requests in 20.00s, 1.92GB read
  Socket errors: connect 0, read 53, write 43, timeout 791
Requests/sec:  11109.45
Transfer/sec: 98.09MB


  was:
By default CS configures mode http & option httpclose when it detects a rule on 
public port 80.

In most situations this is perfectly OK, but we hit a specific situation where 
this has negative impact on performance. In this case a Varnish implementation 
which needs to run on port 80 and cannot run on an alternative port. We could 
imagine this could be an issue for other CS users too.
Besides this also the maxconnections needs to be raised but this has already 
been address by CS-2997, which is also the inspiration to make the http mode 
configurable.

Details on the performance difference below.

So we would like to see this http mode on port 80 configurable, e.g. httpmode 
false-true on the API

See also CS-2997 and the following commits:
dd33abffbe3b7c5b615e8f64b1824a720329dd0d [dd33abf]
954e1978130b3cfb0c73f2f1506d94440f478f01 [954e197]

Performance testing details:

with ‘mode http’ and ‘option httpclose’ on the load-balancing rule:

9:34 root@w6 /home/erwin/src/wrk  ./wrk  -c 1000 -t 20 -d 20 
http://x/imgbase0/imagebase/thumb/FC/2/7/0/8/1004004013338072.jpg

Running 20s test @ 
http://x/imgbase0/imagebase/thumb/FC/2/7/0/8/1004004013338072.jpg
  20 threads and 1000 connections
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency 6.21s 6.30s   15.87s82.30%
Req/Sec43.61 37.81   371.00 78.61%
  16452 requests in 20.01s, 146.60MB read
  Socket errors: connect 0, read 8, write 309, timeout 4753
Requests/sec:822.03
Transfer/sec:  7.33MB

-   without:

9:33 root@w6 /home/erwin/src/wrk  ./wrk  -c 1000 -t 20 -d 20 
http://x/imgbase0/imagebase/thumb/FC/2/7/0/8/1004004013338072.jpg

Running 20s test @ 
http://x/imgbase0/imagebase/thumb/FC/2/7/0/8/1004004013338072.jpg
  20 threads and 1000 connections
  Thread Stats   Avg  Stdev Max   +/- Stdev
Latency   240.62ms1.28s   12.19s97.80%
Req/Sec   545.98181.96 1.46k74.70%
  29 requests in 20.00s, 1.92GB read
  Socket errors: connect 0, read 53, write 43, timeout 791
Requests/sec:  11109.45
Transfer/sec: 98.09MB



 Make mode http & option httpclose in HAproxy.conf configurable on port 80
 -

 Key: CLOUDSTACK-4328
 URL: 

[jira] [Commented] (CLOUDSTACK-4569) [doc] Review comments on Egress Firewall

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754562#comment-13754562
 ] 

ASF subversion and git services commented on CLOUDSTACK-4569:
-

Commit 5275618bcc94513f2e2d463adb878aa16628731f in branch refs/heads/master 
from [~radhikap]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=5275618 ]

CLOUDSTACK-4569 review comments on egress firewall and multiple ip per nic


 [doc] Review comments on Egress Firewall
 

 Key: CLOUDSTACK-4569
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4569
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.2.0
Reporter: Radhika Nair
Assignee: Radhika Nair
 Fix For: 4.2.0


 16.20.1.3. Changing the Default Egress Policy
 Change: 
 Configuring Default Egress Policy 
 2.
 You can configure the default policy of egress firewall rules in Isolated 
 Advanced networks.
 Change: 
 The default egress policy for the isolated guest network is configured using 
 the network offering. Create network offering with egress policy Allow/Deny. 
 Now use this network offering to create network.
 3.
 16.15.3. Assigning Additional IPs to a VM
 1.1. You need to specify the secondary IP address on the guest VM.  
 Change: You need to configure the IP on guest vm NIC manually.
 2.
 In point# 6: ensure that you assign IPs to NIC each time the VM reboots. 
 change:
 Ensure the ip address configuration persist on vm reboot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CLOUDSTACK-4569) [doc] Review comments on Egress Firewall

2013-08-30 Thread Radhika Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radhika Nair resolved CLOUDSTACK-4569.
--

Resolution: Fixed

 [doc] Review comments on Egress Firewall
 

 Key: CLOUDSTACK-4569
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4569
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.2.0
Reporter: Radhika Nair
Assignee: Radhika Nair
 Fix For: 4.2.0


 16.20.1.3. Changing the Default Egress Policy
 Change: 
 Configuring Default Egress Policy 
 2.
 You can configure the default policy of egress firewall rules in Isolated 
 Advanced networks.
 Change: 
 The default egress policy for the isolated guest network is configured using 
 the network offering. Create network offering with egress policy Allow/Deny. 
 Now use this network offering to create network.
 3.
 16.15.3. Assigning Additional IPs to a VM
 1.1. You need to specify the secondary IP address on the guest VM.  
 Change: You need to configure the IP on guest vm NIC manually.
 2.
 In point# 6: ensure that you assign IPs to NIC each time the VM reboots. 
 change:
 Ensure the ip address configuration persist on vm reboot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4571) Data disks attached to Windows 2008 R2 VMs are being attached as virtIO disks

2013-08-30 Thread Wei Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754564#comment-13754564
 ] 

Wei Zhou commented on CLOUDSTACK-4571:
--

In CloudStack, all attached disks are added as VIRTIO disks, no matter whether the 
guest OS supports PV or not.

The bus type is fixed (hard-coded) in 
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.attachOrDetachDisk(Connect, 
boolean, String, KVMPhysicalDisk, int):

diskdef = new DiskDef();
if (attachingPool.getType() == StoragePoolType.RBD) {
    diskdef.defNetworkBasedDisk(attachingDisk.getPath(),
            attachingPool.getSourceHost(), attachingPool.getSourcePort(),
            attachingPool.getAuthUserName(), attachingPool.getUuid(), devId,
            DiskDef.diskBus.VIRTIO, diskProtocol.RBD);
} else if (attachingDisk.getFormat() == PhysicalDiskFormat.QCOW2) {
    diskdef.defFileBasedDisk(attachingDisk.getPath(), devId,
            DiskDef.diskBus.VIRTIO, DiskDef.diskFmtType.QCOW2);
} else if (attachingDisk.getFormat() == PhysicalDiskFormat.RAW) {
    diskdef.defBlockBasedDisk(attachingDisk.getPath(), devId,
            DiskDef.diskBus.VIRTIO);
}
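
For illustration, here is a minimal, self-contained sketch of the kind of change that would address this: choose the disk bus from the guest OS description instead of always using VIRTIO. The DiskBus enum below stands in for DiskDef.diskBus, and the Windows substring check is an assumption made for this example, not code from CloudStack or from the eventual fix.

// Sketch only: mirrors the bus choice from the snippet above, but keyed on the
// guest OS description instead of being hard-coded to VIRTIO.
public class DiskBusChooser {
    enum DiskBus { VIRTIO, IDE, SCSI }

    static DiskBus busForGuest(String guestOsDescription) {
        boolean windowsGuest = guestOsDescription != null
                && guestOsDescription.toLowerCase().contains("windows");
        // Fall back to IDE for Windows guests, which may lack virtio drivers.
        return windowsGuest ? DiskBus.IDE : DiskBus.VIRTIO;
    }

    public static void main(String[] args) {
        System.out.println(busForGuest("Windows Server 2008 R2 (64-bit)")); // IDE
        System.out.println(busForGuest("CentOS 6.4 (64-bit)"));             // VIRTIO
    }
}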

 Data disks attached to Windows 2008 R2 VMs are being attached as virtIO disks
 -

 Key: CLOUDSTACK-4571
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4571
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller
Affects Versions: 4.1.0
 Environment: [root@slodev-cnkvm001 ~]# uname -a
 Linux slodev-cnkvm001 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 
 2013 x86_64 x86_64 x86_64 GNU/Linux
 [root@slodev-cnkvm001 ~]# cat /etc/redhat-release 
 CentOS release 6.4 (Final)
Reporter: danny webb

 When attaching a data disk on KVM to a Windows 2008 R2 64-bit guest, it is 
 being attached as a VirtIO disk.
 6 0 54727 1  20   0 6819540 6202704 poll_s Sl ?   326:22 
 /usr/libexec/qemu-kvm -name i-5-465-VM -S -M rhel6.4.0 -enable-kvm -m 6144 
 -smp 2,sockets=2,cores=1,threads=1 -uuid 135f41ee-ff3e-39ae-a578-e6961f830b39 
 -nodefconfig -nodefaults -chardev 
 socket,id=charmonitor,path=/var/lib/libvirt/qemu/i-5-465-VM.monitor,server,nowait
  -mon chardev=charmonitor,id=monitor,mode=control -rtc 
 base=localtime,driftfix=slew -no-shutdown -device 
 piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive 
 file=/var/lib/libvirt/images/f06dcafe-18ee-4793-808f-9ff70cf9ccd3,if=none,id=drive-ide0-0-0,format=qcow2,cache=none
  -device 
 ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 
 -drive 
 if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none 
 -device 
 ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 
 -drive 
 file=/var/lib/libvirt/images/528c7665-c93e-4f9b-b7f5-729f350b51a4,if=none,id=drive-virtio-disk1,format=qcow2,cache=none
  -device 
 virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk1,id=virtio-disk1
  -netdev tap,fd=24,id=hostnet0 -device 
 e1000,netdev=hostnet0,id=net0,mac=06:7a:92:00:05:aa,bus=pci.0,addr=0x3 
 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 
 -device usb-tablet,id=input0 -vnc 0.0.0.0:3 -vga cirrus -device 
 virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
 [root@slodev-cnkvm001 ~]# virsh dumpxml i-5-465-VM
 <domain type='kvm' id='28'>
   <name>i-5-465-VM</name>
   <uuid>135f41ee-ff3e-39ae-a578-e6961f830b39</uuid>
   <description>Windows Server 2008 R2 (64-bit)</description>
   <memory unit='KiB'>6291456</memory>
   <currentMemory unit='KiB'>6291456</currentMemory>
   <vcpu placement='static'>2</vcpu>
   <cputune>
     <shares>4000</shares>
   </cputune>
   <os>
     <type arch='x86_64' machine='rhel6.4.0'>hvm</type>
     <boot dev='cdrom'/>
     <boot dev='hd'/>
   </os>
   <features>
     <acpi/>
     <apic/>
     <pae/>
   </features>
   <clock offset='localtime'>
     <timer name='rtc' tickpolicy='catchup'/>
   </clock>
   <on_poweroff>destroy</on_poweroff>
   <on_reboot>restart</on_reboot>
   <on_crash>destroy</on_crash>
   <devices>
     <emulator>/usr/libexec/qemu-kvm</emulator>
     <disk type='file' device='disk'>
       <driver name='qemu' type='qcow2' cache='none'/>
       <source file='/var/lib/libvirt/images/f06dcafe-18ee-4793-808f-9ff70cf9ccd3'/>
       <target dev='hda' bus='ide'/>
       <alias name='ide0-0-0'/>
       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
     </disk>
     <disk type='file' device='cdrom'>
       <driver name='qemu' type='raw' cache='none'/>
       <source 
 

[jira] [Updated] (CLOUDSTACK-4550) [DOC] When upgrading KVM agents to 4.2(.1?) perform bridge renaming to have migration work

2013-08-30 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-4550:
---

Assignee: Jessica Tomechak

 [DOC] When upgrading KVM agents to 4.2(.1?) perform bridge renaming to have 
 migration work
 --

 Key: CLOUDSTACK-4550
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4550
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc, KVM, Upgrade
Affects Versions: 4.2.0
Reporter: Prasanna Santhanam
Assignee: Jessica Tomechak
Priority: Critical

 See CLOUDSTACK-4405 for the original bug. This is the doc to be prepared as
 part of the upgrade section of the release notes once the fix for the bug is verified to work.
 Since the network bridge naming changed from cloudVirBrVLAN to brem1-VLAN, rename
 the bridges so that migration works between hosts added before the upgrade and
 those added after the upgrade.
 This can be done by running the cloudstack-agent-upgrade script.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4405) (Upgrade) Migrate failed between existing hosts and new hosts

2013-08-30 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-4405:
---

Labels: ReleaseNote  (was: )

 (Upgrade) Migrate failed between existing hosts and new hosts
 -

 Key: CLOUDSTACK-4405
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4405
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.1.0, 4.2.0
 Environment: CS 4.1
Reporter: Wei Zhou
Assignee: edison su
Priority: Blocker
  Labels: ReleaseNote
 Fix For: 4.1.1, 4.2.0, 4.2.1


 There are two hosts (cs-kvm001, cs-kvm002) in the old 2.2.14 environment.
 After the upgrade from 2.2.14 to 4.1, I added two new hosts (cs-kvm003, 
 cs-kvm004).
 Migration between cs-kvm001 and cs-kvm002, or between cs-kvm003 and cs-kvm004, 
 succeeds.
 However, migration from cs-kvm001/002 to the new hosts (cs-kvm003, 
 cs-kvm004) failed.
 2013-08-19 16:57:31,051 DEBUG [kvm.resource.BridgeVifDriver] 
 (agentRequest-Handler-1:null) nic=[Nic:Guest-10.11.110.231-vlan://110]
 2013-08-19 16:57:31,051 DEBUG [kvm.resource.BridgeVifDriver] 
 (agentRequest-Handler-1:null) creating a vlan dev and bridge for guest 
 traffic per traffic label cloudbr0
 2013-08-19 16:57:31,051 DEBUG [utils.script.Script] 
 (agentRequest-Handler-1:null) Executing: /bin/bash -c brctl show | grep 
 cloudVirBr110
 2013-08-19 16:57:31,063 DEBUG [utils.script.Script] 
 (agentRequest-Handler-1:null) Exit value is 1
 2013-08-19 16:57:31,063 DEBUG [utils.script.Script] 
 (agentRequest-Handler-1:null)
 2013-08-19 16:57:31,063 DEBUG [kvm.resource.BridgeVifDriver] 
 (agentRequest-Handler-1:null) Executing: 
 /usr/share/cloudstack-common/scripts/vm/network/vnet/modifyvlan.sh -v 110 -p 
 em1 -b brem1-110 -o add
 2013-08-19 16:57:31,121 DEBUG [kvm.resource.BridgeVifDriver] 
 (agentRequest-Handler-1:null) Execution is successful.
 2013-08-19 16:57:31,122 DEBUG [kvm.resource.BridgeVifDriver] 
 (agentRequest-Handler-1:null) Set name-type for VLAN subsystem. Should be 
 visible in /proc/net/vlan/config
 This is because the bridge name on the old hosts is cloudVirBr110, while it is 
 brem1-110 on the new hosts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (CLOUDSTACK-4500) [VMWARE][UI]System VM's are failed to deploy on Standard vSwitch when DVS is enabled at the zone level (Failed to create public port group)

2013-08-30 Thread Sailaja Mada (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailaja Mada closed CLOUDSTACK-4500.



This is fixed with the latest build. Hence closing the bug. 

 [VMWARE][UI]System VM's are failed to deploy on Standard vSwitch when DVS is 
 enabled at the zone level (Failed to create public port group)
 ---

 Key: CLOUDSTACK-4500
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4500
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI, VMware
Affects Versions: 4.2.1
Reporter: Sailaja Mada
Assignee: Sateesh Chodapuneedi
Priority: Blocker
 Fix For: 4.2.1

 Attachments: overrideUIZome.png, systemvmissue.rar


 Steps:
 1. Configure an Adv zone with DVS enabled in the global settings.
 2. Physical network 1 (all the traffic is on a single physical network 
 labeled as vSwitch0). 
 3. While adding the cluster, Override options are there, but there is no drop-down 
 listed to select Standard vSwitch for Public and Guest. So just enabled 
 the override option.
 4. Enable the zone.
 Observation:
 System VMs fail to deploy on Standard vSwitch when DVS is enabled at 
 the zone level.
 Note:
 1. It could be a UI issue, as we are not getting any drop-down while adding the 
 zone to override cloud-level settings (DVS).
 2. This issue needs to be looked at by both Core and UI dev.
 3. It failed to create the public port group.
 2013-08-26 14:47:26,394 INFO  [vmware.resource.VmwareResource] 
 (DirectAgent-206:10.102.192.13) Prepare network on vmwaresvs 
 P[vSwitch0:untagged] with name prefix: cloud.private
 2013-08-26 14:47:26,404 INFO  [storage.resource.VmwareStorageLayoutHelper] 
 (DirectAgent-103:10.102.192.13) sync [13ce52911b9f3d36a9a34e8a51450925] 
 ROOT-72.vmdk-[13ce52911b9f3d36a9a34e8a51450925] s-72-VM/ROOT-72.vmdk
 2013-08-26 14:47:26,462 INFO  [vmware.mo.HypervisorHostHelper] 
 (DirectAgent-206:10.102.192.13) Network cloud.private.untagged.0.1-vSwitch0 
 is ready on vSwitch vSwitch0
 2013-08-26 14:47:26,462 INFO  [vmware.resource.VmwareResource] 
 (DirectAgent-206:10.102.192.13) Preparing NIC device on network 
 cloud.private.untagged.0.1-vSwitch0
 2013-08-26 14:47:26,462 DEBUG [vmware.resource.VmwareResource] 
 (DirectAgent-206:10.102.192.13) Prepare NIC at new device 
 {operation:ADD,device:{addressType:Manual,macAddress:06:78:1a:00:00:04,key:-4,backing:{network:{value:network-11702,type:Network},deviceName:cloud.private.untagged.0.1-vSwitch0},connectable:{startConnected:true,allowGuestControl:true,connected:true},unitNumber:1}}
 2013-08-26 14:47:26,463 INFO  [vmware.resource.VmwareResource] 
 (DirectAgent-206:10.102.192.13) Prepare NIC device based on NicTO: 
 {deviceId:2,networkRateMbps:-1,defaultNic:true,uuid:f0729e58-8bd1-438f-8bfb-e07ee1322506,ip:10.102.196.221,netmask:255.255.255.0,gateway:10.102.196.1,mac:06:1e:b6:00:00:06,dns1:10.103.128.15,broadcastType:Vlan,type:Public,broadcastUri:vlan://100,isolationUri:vlan://100,isSecurityGroupEnabled:false,name:vSwitch0}
 2013-08-26 14:47:26,466 INFO  [vmware.resource.VmwareResource] 
 (DirectAgent-206:10.102.192.13) Prepare network on vmwaredvs 
 P[vSwitch0:untagged] with name prefix: cloud.public
 2013-08-26 14:47:26,480 ERROR [vmware.mo.HypervisorHostHelper] 
 (DirectAgent-206:10.102.192.13) Unable to find distributed vSwitch null
 2013-08-26 14:47:26,482 WARN  [vmware.resource.VmwareResource] 
 (DirectAgent-206:10.102.192.13) StartCommand failed due to Exception: 
 java.lang.Exception
 Message: Unable to find distributed vSwitch null
 java.lang.Exception: Unable to find distributed vSwitch null
 at 
 com.cloud.hypervisor.vmware.mo.HypervisorHostHelper.prepareNetwork(HypervisorHostHelper.java:528)
 at 
 com.cloud.hypervisor.vmware.resource.VmwareResource.prepareNetworkFromNicInfo(VmwareResource.java:3308)
 at 
 com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareResource.java:2904)
 at 
 com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:514)
 at 
 com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.java:186)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 at java.util.concurrent.FutureTask.run(FutureTask.java:166)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
 at 
 

[jira] [Commented] (CLOUDSTACK-4569) [doc] Review comments on Egress Firewall

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754573#comment-13754573
 ] 

ASF subversion and git services commented on CLOUDSTACK-4569:
-

Commit 91ef76fb5d9ffe61d1d657bdf11bf5fa4faed818 in branch refs/heads/4.2 from 
[~radhikap]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=91ef76f ]

CLOUDSTACK-4569 review comments on egress firewall and multiple ip per nic


 [doc] Review comments on Egress Firewall
 

 Key: CLOUDSTACK-4569
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4569
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.2.0
Reporter: Radhika Nair
Assignee: Radhika Nair
 Fix For: 4.2.0


 16.20.1.3. Changing the Default Egress Policy
 Change: 
 Configuring Default Egress Policy 
 2.
 You can configure the default policy of egress firewall rules in Isolated 
 Advanced networks.
 Change: 
 The default egress policy for the isolated guest network is configured using 
 the network offering. Create network offering with egress policy Allow/Deny. 
 Now use this network offering to create network.
 3.
 16.15.3. Assigning Additional IPs to a VM
 1.1. You need to specify the secondary IP address on the guest VM.  
 Change: You need to configure the IP on guest vm NIC manually.
 2.
 In point# 6: ensure that you assign IPs to NIC each time the VM reboots. 
 change:
 Ensure the ip address configuration persist on vm reboot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4533) permission issue in usage server and it failed to start after upgrade from 3.0.4 to 4.2

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754572#comment-13754572
 ] 

ASF subversion and git services commented on CLOUDSTACK-4533:
-

Commit cc2f76e1d81e127d271876a55bc538efa5d391b1 in branch refs/heads/4.2 from 
[~radhikap]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=cc2f76e ]

CLOUDSTACK-4533 usage server issue


 permission issue in usage server and it failed to start after upgrade from 
 3.0.4 to 4.2
 ---

 Key: CLOUDSTACK-4533
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4533
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Packaging, Upgrade, Usage
Affects Versions: 4.2.1
 Environment: 
Reporter: shweta agarwal
  Labels: ReleaseNote
 Fix For: 4.2.1

 Attachments: cloudstack-usage.err, cloudstack-usage.err, 
 cloudstack-usage.out, cloudstack-usage.out, usage.log


 did an upgrade from 3.0.4 to 4.2  and then start usage server. 
 Usage server failed to start
 giving following exception :
 log4j:ERROR setFile(null,true) call failed.
 java.io.FileNotFoundException: /var/log/cloudstack/usage/usage.log 
 (Permission denied)
 at java.io.FileOutputStream.openAppend(Native Method)
 at java.io.FileOutputStream.init(FileOutputStream.java:207)
 at java.io.FileOutputStream.init(FileOutputStream.java:131)
 at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
 at 
 org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
 at 
 org.apache.log4j.rolling.RollingFileAppender.activateOptions(RollingFileAppender.java:179)
 at 
 org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
 at 
 org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
 at 
 org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
 at 
 org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
 at 
 org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:523)
 at 
 org.apache.log4j.xml.DOMConfigurator.parseRoot(DOMConfigurator.java:492)
 at 
 org.apache.log4j.xml.DOMConfigurator.parse(DOMConfigurator.java:1001)
 at 
 org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:867)
 at 
 org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:773)
 at 
 org.apache.log4j.xml.DOMConfigurator.configure(DOMConfigurator.java:901)
 at 
 org.springframework.util.Log4jConfigurer.initLogging(Log4jConfigurer.java:69)
 at com.cloud.usage.UsageServer.initLog4j(UsageServer.java:89)
 at com.cloud.usage.UsageServer.init(UsageServer.java:52)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at 
 org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
 log4j:ERROR setFile(null,true) call failed.
 java.io.FileNotFoundException: /var/log/cloudstack/usage/usage.log 
 (Permission denied)
 at java.io.FileOutputStream.openAppend(Native Method)
 at java.io.FileOutputStream.init(FileOutputStream.java:207)
 at java.io.FileOutputStream.init(FileOutputStream.java:131)
 at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
 at 
 org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
 at 
 org.apache.log4j.rolling.RollingFileAppender.activateOptions(RollingFileAppender.java:179)
 at 
 org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
 at 
 org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
 at 
 org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
 at 
 org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
 at 
 org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:523)
 at 
 org.apache.log4j.xml.DOMConfigurator.parseRoot(DOMConfigurator.java:492)
 at 
 org.apache.log4j.xml.DOMConfigurator.parse(DOMConfigurator.java:1001)
 at 
 org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:867)
 at 
 org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:755)
 

[jira] [Commented] (CLOUDSTACK-3765) [packaging][document] unable to upgrade cp 4.2 build on centos5.5

2013-08-30 Thread Radhika Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754657#comment-13754657
 ] 

Radhika Nair commented on CLOUDSTACK-3765:
--

Please check whether the RN instruction fixes the problem .

 [packaging][document] unable to upgrade cp 4.2 build on centos5.5
 -

 Key: CLOUDSTACK-3765
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3765
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc, Packaging
Affects Versions: 4.2.0
 Environment: Centos5.5
Reporter: shweta agarwal
Assignee: frank zhang
Priority: Critical
 Fix For: 4.2.0

 Attachments: Apache_CloudStack-4.2.0-Release_Notes-en-US.pdf


 When I am trying to install 
 http://repo-ccp.citrix.com/releases/ASF/rhel/5/4.2/CP4.2-dbupgrade-44-rhel5.tar.gz
  
  I am hitting a JSVC dependency error.
 When I tried to install jsvc via rpm, it also failed, giving this error:
 rpm -Uvh 
 http://mirror.centos.org/centos/6/os/x86_64/Packages/jakarta-commons-daemon-jsvc-1.0.1-8.9.el6.x86_64.rpm
 Retrieving 
 http://mirror.centos.org/centos/6/os/x86_64/Packages/jakarta-commons-daemon-jsvc-1.0.1-8.9.el6.x86_64.rpm
 warning: /var/tmp/rpm-xfer.qxSLPS: Header V3 RSA/SHA256 signature: NOKEY, key 
 ID c105b9de
 error: Failed dependencies:
 rpmlib(FileDigests) <= 4.6.0-1 is needed by 
 jakarta-commons-daemon-jsvc-1.0.1-8.9.el6.x86_64
 rpmlib(PayloadIsXz) <= 5.2-1 is needed by 
 jakarta-commons-daemon-jsvc-1.0.1-8.9.el6.x86_64
 The CentOS 5.5 repo does not contain any jsvc package.
 In fact, at the rpmfind location too, a JSVC package only exists for CentOS 6.4:
 http://rpmfind.net/linux/rpm2html/search.php?query=jsvc

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CLOUDSTACK-3765) [packaging][document] unable to upgrade cp 4.2 build on centos5.5

2013-08-30 Thread Radhika Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754657#comment-13754657
 ] 

Radhika Nair edited comment on CLOUDSTACK-3765 at 8/30/13 12:49 PM:


Please check whether the RN instruction fixes the problem, or whether something 
extra needs to be done.

  was (Author: radhikap):
Please check whether the RN instruction fixes the problem .
  
 [packaging][document] unable to upgrade cp 4.2 build on centos5.5
 -

 Key: CLOUDSTACK-3765
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3765
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc, Packaging
Affects Versions: 4.2.0
 Environment: Centos5.5
Reporter: shweta agarwal
Assignee: frank zhang
Priority: Critical
 Fix For: 4.2.0

 Attachments: Apache_CloudStack-4.2.0-Release_Notes-en-US.pdf


 When I am trying to install
 http://repo-ccp.citrix.com/releases/ASF/rhel/5/4.2/CP4.2-dbupgrade-44-rhel5.tar.gz
 I am hitting a JSVC dependency error.
 When I tried to install jsvc via rpm, it also failed with an error:
 rpm -Uvh http://mirror.centos.org/centos/6/os/x86_64/Packages/jakarta-commons-daemon-jsvc-1.0.1-8.9.el6.x86_64.rpm
 Retrieving http://mirror.centos.org/centos/6/os/x86_64/Packages/jakarta-commons-daemon-jsvc-1.0.1-8.9.el6.x86_64.rpm
 warning: /var/tmp/rpm-xfer.qxSLPS: Header V3 RSA/SHA256 signature: NOKEY, key ID c105b9de
 error: Failed dependencies:
         rpmlib(FileDigests) <= 4.6.0-1 is needed by jakarta-commons-daemon-jsvc-1.0.1-8.9.el6.x86_64
         rpmlib(PayloadIsXz) <= 5.2-1 is needed by jakarta-commons-daemon-jsvc-1.0.1-8.9.el6.x86_64
 The CentOS 5.5 repo does not contain any jsvc package.
 In fact, at the rpmfind location as well, a JSVC package exists only for CentOS 6.4:
 http://rpmfind.net/linux/rpm2html/search.php?query=jsvc

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4327) [Storage Maintenance] SSVM, CPVM and routerVMs are running even after storage entered into maintenance.

2013-08-30 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta updated CLOUDSTACK-4327:


Assignee: Nitin Mehta

 [Storage Maintenance] SSVM, CPVM and routerVMs are running even after storage 
 entered into maintenance.
 ---

 Key: CLOUDSTACK-4327
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4327
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller
Affects Versions: 4.2.0
 Environment: commit id # 8df22d1818c120716bea5fce39854da38f61055b
Reporter: venkata swamybabu budumuru
Assignee: Nitin Mehta
 Fix For: 4.2.0

 Attachments: logs.tgz


 Steps to reproduce: 
 1. Have the latest CloudStack setup with at least 1 advanced zone. 
 2. The above setup was created with the Marvin framework using APIs. 
 3. During the creation of the zone, I added 2 cluster-wide primary storages: 
 - PS0 
 - PS1 
 mysql> select * from storage_pool where id<3\G 
 *** 1. row *** 
id: 1 
  name: PS0 
  uuid: 5458182e-bfcb-351c-97ed-e7223bca2b8e 
 pool_type: NetworkFilesystem 
  port: 2049 
data_center_id: 1 
pod_id: 1 
cluster_id: 1 
used_bytes: 4218878263296 
capacity_bytes: 5902284816384 
  host_address: 10.147.28.7 
 user_info: NULL 
  path: /export/home/swamy/primary.campo.kvm.1.zone 
   created: 2013-08-14 07:10:01 
   removed: NULL 
   update_time: NULL 
status: Maintenance 
 storage_provider_name: DefaultPrimary 
 scope: CLUSTER 
hypervisor: NULL 
   managed: 0 
 capacity_iops: NULL 
 *** 2. row *** 
id: 2 
  name: PS1 
  uuid: 94634fe1-55f7-3fa8-aad9-5adc25246072 
 pool_type: NetworkFilesystem 
  port: 2049 
data_center_id: 1 
pod_id: 1 
cluster_id: 1 
used_bytes: 4217960071168 
capacity_bytes: 5902284816384 
  host_address: 10.147.28.7 
 user_info: NULL 
  path: /export/home/swamy/primary.campo.kvm.2.zone 
   created: 2013-08-14 07:10:02 
   removed: NULL 
   update_time: NULL 
status: Maintenance 
 storage_provider_name: DefaultPrimary 
 scope: CLUSTER 
hypervisor: NULL 
   managed: 0 
 capacity_iops: NULL 
 2 rows in set (0.00 sec) 
 Observations: 
 (i) SSVM and CPVM volumes got created on pool_id=1 
 4. Zone got setup without any issues. 
 5. Added following zone wide primary storages 
 - test1 
 - test2 
 mysql> select * from storage_pool where id>7\G 
 *** 1. row *** 
id: 8 
  name: test1 
  uuid: 4e612995-3cb1-344e-ba19-3992e3d37d3f 
 pool_type: NetworkFilesystem 
  port: 2049 
data_center_id: 1 
pod_id: NULL 
cluster_id: NULL 
used_bytes: 4214658203648 
capacity_bytes: 5902284816384 
  host_address: 10.147.28.7 
 user_info: NULL 
  path: /export/home/swamy/test1 
   created: 2013-08-14 09:49:56 
   removed: NULL 
   update_time: NULL 
status: Up 
 storage_provider_name: DefaultPrimary 
 scope: ZONE 
hypervisor: KVM 
   managed: 0 
 capacity_iops: NULL 
 *** 2. row *** 
id: 9 
  name: test2 
  uuid: 43a95e23-1ad6-30a9-9903-f68231dacec5 
 pool_type: NetworkFilesystem 
  port: 2049 
data_center_id: 1 
pod_id: NULL 
cluster_id: NULL 
used_bytes: 4214658793472 
capacity_bytes: 5902284816384 
  host_address: 10.147.28.7 
 user_info: NULL 
  path: /export/home/swamy/test2 
   created: 2013-08-14 09:50:12 
   removed: NULL 
   update_time: NULL 
status: Up 
 storage_provider_name: DefaultPrimary 
 scope: ZONE 
hypervisor: KVM 
   managed: 0 
 capacity_iops: NULL 
 6. Have created a non-ROOT domain user and deployed VMs 
 7. Create 5 volumes as above users 

[jira] [Commented] (CLOUDSTACK-4568) Need to add this to the release note of 4.2

2013-08-30 Thread Abhinandan Prateek (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754681#comment-13754681
 ] 

Abhinandan Prateek commented on CLOUDSTACK-4568:


Bharat, can you look at the current release notes and provide more information, such as which section this should go in?

 Need to add this to the release note of 4.2
 ---

 Key: CLOUDSTACK-4568
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4568
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.2.0
Reporter: Bharat Kumar
  Labels: releasenotes
 Fix For: 4.2.0


 After the upgrade to 4.2, mem.overprovisioning.factor and cpu.overprovisioning.factor 
 are set to one (the default value) and are now defined at the cluster level.
 If someone was using mem.overprovisioning.factor and cpu.overprovisioning.factor 
 prior to 4.2, these values are reset to one after the upgrade and can be changed 
 by editing the cluster settings.
 All clusters created after the upgrade are created, by default, with the 
 overcommit values specified in the global configuration. 
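
For reference, a minimal Marvin-style sketch of changing the factor for one cluster through the API, in the same client style as the createNetwork calls quoted elsewhere in this digest; the updateConfiguration parameters, in particular the clusterid scope, are an assumption and should be checked against the 4.2 API reference:

from marvin.cloudstackAPI import updateConfiguration

def set_cluster_overcommit(apiclient, cluster_id, factor="2.0"):
    # Hypothetical helper: raise cpu.overprovisioning.factor for a single cluster
    # after the 4.2 upgrade has reset it to 1.
    cmd = updateConfiguration.updateConfigurationCmd()
    cmd.name = "cpu.overprovisioning.factor"   # or mem.overprovisioning.factor
    cmd.value = factor
    cmd.clusterid = cluster_id                 # cluster-scoped override (assumed parameter)
    return apiclient.updateConfiguration(cmd)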

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-1579) List view widget: Support actions on multiple rows

2013-08-30 Thread Chris Suich (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Suich updated CLOUDSTACK-1579:


Attachment: Screen Shot 2013-08-30 at 8.46.11 AM.png
Screen Shot 2013-08-30 at 8.46.36 AM.png
Screen Shot 2013-08-30 at 8.46.29 AM.png

Initial concepts for informal review.

 List view widget: Support actions on multiple rows
 --

 Key: CLOUDSTACK-1579
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-1579
 Project: CloudStack
  Issue Type: New Feature
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Reporter: Brian Federle
Assignee: Chris Suich
 Fix For: Future

 Attachments: Screen Shot 2013-08-30 at 8.46.11 AM.png, Screen Shot 
 2013-08-30 at 8.46.29 AM.png, Screen Shot 2013-08-30 at 8.46.36 AM.png

   Original Estimate: 144h
  Remaining Estimate: 144h

 Currently, actions can only be executed manually, one row at a time. We need to 
 implement the ability to select multiple list view items and perform actions on 
 them at once:
 - Add checkboxes to the left side of the list view
 - If one or more rows are checked, add a toolbar menu under the table header with 
 the supported actions to apply to all selected rows
 Technical requirements:
 - Need a design for the new layout and UX for executing multi-row actions
 - Refactor the list view widget code to support selection of multiple list items, 
 and passing multiple items of data to the API call code.
 - Change API calls to support multiple row objects passed through the context 
 object, in all sections where it is useful:
 -- Instances page
 -- Events/alerts (for deleting multiple events/alerts)
 -- Storage page
 -- etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4568) Need to add this to the release note of 4.2

2013-08-30 Thread Bharat Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754698#comment-13754698
 ] 

Bharat Kumar commented on CLOUDSTACK-4568:
--

Hi Abhi, 
This needs to be added to the overcommit section. Also, in VMware, 
mem.overprovisioning.factor was used to reserve memory, not to overcommit.


 Need to add this to the release note of 4.2
 ---

 Key: CLOUDSTACK-4568
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4568
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.2.0
Reporter: Bharat Kumar
  Labels: releasenotes
 Fix For: 4.2.0


 After the upgrade to 4.2, mem.overprovisioning.factor and cpu.overprovisioning.factor 
 are set to one (the default value) and are now defined at the cluster level.
 If someone was using mem.overprovisioning.factor and cpu.overprovisioning.factor 
 prior to 4.2, these values are reset to one after the upgrade and can be changed 
 by editing the cluster settings.
 All clusters created after the upgrade are created, by default, with the 
 overcommit values specified in the global configuration. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-1579) List view widget: Support actions on multiple rows

2013-08-30 Thread David La Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754697#comment-13754697
 ] 

David La Motta commented on CLOUDSTACK-1579:


I will be in all-day training the 27th and 28th of August (Wednesday and 
Thursday), with no access to email.  Please expect a reply to your message 
after business hours, or Friday at the latest.

--David


 List view widget: Support actions on multiple rows
 --

 Key: CLOUDSTACK-1579
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-1579
 Project: CloudStack
  Issue Type: New Feature
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Reporter: Brian Federle
Assignee: Chris Suich
 Fix For: Future

 Attachments: Screen Shot 2013-08-30 at 8.46.11 AM.png, Screen Shot 
 2013-08-30 at 8.46.29 AM.png, Screen Shot 2013-08-30 at 8.46.36 AM.png

   Original Estimate: 144h
  Remaining Estimate: 144h

 Currently, actions can only be executed manually, one row at a time. We need to 
 implement the ability to select multiple list view items and perform actions on 
 them at once:
 - Add checkboxes to the left side of the list view
 - If one or more rows are checked, add a toolbar menu under the table header with 
 the supported actions to apply to all selected rows
 Technical requirements:
 - Need a design for the new layout and UX for executing multi-row actions
 - Refactor the list view widget code to support selection of multiple list items, 
 and passing multiple items of data to the API call code.
 - Change API calls to support multiple row objects passed through the context 
 object, in all sections where it is useful:
 -- Instances page
 -- Events/alerts (for deleting multiple events/alerts)
 -- Storage page
 -- etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (CLOUDSTACK-4561) DeployVm failed after upgrading from earlier version having a private zone to 4.2

2013-08-30 Thread Abhinav Roy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinav Roy closed CLOUDSTACK-4561.
---


Closing the issue after fix validation

 DeployVm failed after upgrading from earlier version having a private zone to 
 4.2 
 --

 Key: CLOUDSTACK-4561
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4561
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Management Server
Affects Versions: 4.2.0
Reporter: Prachi Damle
Assignee: Prachi Damle
Priority: Blocker
 Fix For: 4.2.0, 4.2.1


 1. Upgraded from earlier CS version having a private zone to 4.2 
 2. After upgrade deploy VM failed.
 3. Observed the below exception: Failed to deploy VM, Zone zone1 not 
 available for the user domain Acct[2-admin]
 2013-08-29 14:15:16,480 DEBUG [cloud.network.NetworkModelImpl] 
 (catalina-exec-15:null) Service SecurityGroup is not supported in the network 
 id=204
 2013-08-29 14:15:16,524 DEBUG [cloud.async.AsyncJobManagerImpl] 
 (catalina-exec-15:null) submit async job-12 = [ 
 558412e7-bfdc-4c76-b775-2298bf4d383f ], details: AsyncJobVO {id:12, userId: 
 2, accountId: 2, sessionKey: null, instanceType: VirtualMachine, instanceId: 
 5, cmd: org.apache.cloudstack.api.command.user.vm.DeployVMCmd, cmdOriginator: 
 null, cmdInfo: 
 {sessionkey:f2rCMyZ44a8fKjicWTJywoaSp/4\u003d,cmdEventType:VM.CREATE,ctxUserId:2,serviceOfferingId:e28329c9-4ab8-419a-9699-131e8d703844,httpmethod:GET,zoneId:5dfdd7e0-c312-49eb-9fb6-b09d09def631,templateId:4,response:json,id:5,networkIds:caed3f96-c465-4e99-a67a-ca528a396dcc,hypervisor:KVM,name:v242,_:1377765915488,ctxAccountId:2,ctxStartEventId:51,displayname:v242},
  cmdVersion: 0, callbackType: 0, callbackAddress: null, status: 0, 
 processStatus: 0, resultCode: 0, result: null, initMsid: 7672187322550, 
 completeMsid: null, lastUpdated: null, lastPolled: null, created: null}
 2013-08-29 14:15:16,528 DEBUG [cloud.api.ApiServlet] (catalina-exec-15:null) 
 ===END=== 10.252.192.18 -- GET 
 command=deployVirtualMachine&zoneId=5dfdd7e0-c312-49eb-9fb6-b09d09def631&templateId=4&hypervisor=KVM&serviceOfferingId=e28329c9-4ab8-419a-9699-131e8d703844&networkIds=caed3f96-c465-4e99-a67a-ca528a396dcc&displayname=v242&name=v242&response=json&sessionkey=f2rCMyZ44a8fKjicWTJywoaSp%2F4%3D&_=1377765915488
 2013-08-29 14:15:16,534 DEBUG [cloud.async.AsyncJobManagerImpl] 
 (Job-Executor-1:job-12 = [ 558412e7-bfdc-4c76-b775-2298bf4d383f ]) Executing 
 org.apache.cloudstack.api.command.user.vm.DeployVMCmd for job-12 = [ 
 558412e7-bfdc-4c76-b775-2298bf4d383f ]
 2013-08-29 14:15:16,556 DEBUG [cloud.api.ApiDispatcher] 
 (Job-Executor-1:job-12 = [ 558412e7-bfdc-4c76-b775-2298bf4d383f ]) 
 InfrastructureEntity name is:com.cloud.offering.ServiceOffering
 2013-08-29 14:15:16,556 DEBUG [cloud.api.ApiDispatcher] 
 (Job-Executor-1:job-12 = [ 558412e7-bfdc-4c76-b775-2298bf4d383f ]) 
 ControlledEntity name is:com.cloud.template.VirtualMachineTemplate
 2013-08-29 14:15:16,564 DEBUG [cloud.api.ApiDispatcher] 
 (Job-Executor-1:job-12 = [ 558412e7-bfdc-4c76-b775-2298bf4d383f ]) 
 ControlledEntity name is:com.cloud.network.Network
 2013-08-29 14:15:16,638 DEBUG [cloud.network.NetworkModelImpl] 
 (Job-Executor-1:job-12 = [ 558412e7-bfdc-4c76-b775-2298bf4d383f ]) Service 
 SecurityGroup is not supported in the network id=204
 2013-08-29 14:15:16,657 DEBUG [cloud.network.NetworkModelImpl] 
 (Job-Executor-1:job-12 = [ 558412e7-bfdc-4c76-b775-2298bf4d383f ]) Service 
 SecurityGroup is not supported in the network id=204
 2013-08-29 14:15:16,713 DEBUG [cloud.vm.UserVmManagerImpl] 
 (Job-Executor-1:job-12 = [ 558412e7-bfdc-4c76-b775-2298bf4d383f ]) Destroying 
 vm VM[User|v242] as it failed to create on Host with Id:null
 2013-08-29 14:15:16,730 DEBUG [cloud.capacity.CapacityManagerImpl] 
 (Job-Executor-1:job-12 = [ 558412e7-bfdc-4c76-b775-2298bf4d383f ]) VM state 
 transitted from :Stopped to Error with event: OperationFailedToErrorvm's 
 original host id: null new host id: null host id before state transition: null
 2013-08-29 14:15:16,760 WARN [apache.cloudstack.alerts] 
 (Job-Executor-1:job-12 = [ 558412e7-bfdc-4c76-b775-2298bf4d383f ]) 
 alertType:: 8 // dataCenterId:: 1 // podId:: null // clusterId:: null // 
 message:: Failed to deploy Vm with Id: 5, on Host with Id: null
 2013-08-29 14:15:16,824 ERROR [cloud.async.AsyncJobManagerImpl] 
 (Job-Executor-1:job-12 = [ 558412e7-bfdc-4c76-b775-2298bf4d383f ]) Unexpected 
 exception while executing 
 org.apache.cloudstack.api.command.user.vm.DeployVMCmd
 com.cloud.utils.exception.CloudRuntimeException: Failed to deploy VM, Zone 
 zone1 not available for the user domain Acct[2-admin]
 at 
 

[jira] [Commented] (CLOUDSTACK-4327) [Storage Maintenance] SSVM, CPVM and routerVMs are running even after storage entered into maintenance.

2013-08-30 Thread Nitin Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754727#comment-13754727
 ] 

Nitin Mehta commented on CLOUDSTACK-4327:
-

In my setup I had a VM deployed, with system VMs up and running. At this point I 
had only one PS in the cluster - call it PS1.
I then introduced PS2 in the same cluster. 
I put PS1 into maintenance. I observed that the system VMs (SSVM, CPVM and router) all 
started on PS1 again, and PS1 successfully transitioned into maintenance. 
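
For context, a minimal Marvin-style sketch of the maintenance step described above, assuming the auto-generated enableStorageMaintenance command module follows the same pattern as the createNetwork calls quoted later in this digest:

from marvin.cloudstackAPI import enableStorageMaintenance

def put_pool_in_maintenance(apiclient, pool_id):
    # Hypothetical helper: put a primary storage pool (e.g. PS1) into maintenance.
    cmd = enableStorageMaintenance.enableStorageMaintenanceCmd()
    cmd.id = pool_id          # UUID of the primary storage pool
    return apiclient.enableStorageMaintenance(cmd)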



 [Storage Maintenance] SSVM, CPVM and routerVMs are running even after storage 
 entered into maintenance.
 ---

 Key: CLOUDSTACK-4327
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4327
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller
Affects Versions: 4.2.0
 Environment: commit id # 8df22d1818c120716bea5fce39854da38f61055b
Reporter: venkata swamybabu budumuru
Assignee: Nitin Mehta
 Fix For: 4.2.0

 Attachments: logs.tgz


 Steps to reproduce: 
 1. Have the latest CloudStack setup with at least 1 advanced zone. 
 2. The above setup was created with the Marvin framework using APIs. 
 3. During the creation of the zone, I added 2 cluster-wide primary storages: 
 - PS0 
 - PS1 
 mysql> select * from storage_pool where id<3\G 
 *** 1. row *** 
id: 1 
  name: PS0 
  uuid: 5458182e-bfcb-351c-97ed-e7223bca2b8e 
 pool_type: NetworkFilesystem 
  port: 2049 
data_center_id: 1 
pod_id: 1 
cluster_id: 1 
used_bytes: 4218878263296 
capacity_bytes: 5902284816384 
  host_address: 10.147.28.7 
 user_info: NULL 
  path: /export/home/swamy/primary.campo.kvm.1.zone 
   created: 2013-08-14 07:10:01 
   removed: NULL 
   update_time: NULL 
status: Maintenance 
 storage_provider_name: DefaultPrimary 
 scope: CLUSTER 
hypervisor: NULL 
   managed: 0 
 capacity_iops: NULL 
 *** 2. row *** 
id: 2 
  name: PS1 
  uuid: 94634fe1-55f7-3fa8-aad9-5adc25246072 
 pool_type: NetworkFilesystem 
  port: 2049 
data_center_id: 1 
pod_id: 1 
cluster_id: 1 
used_bytes: 4217960071168 
capacity_bytes: 5902284816384 
  host_address: 10.147.28.7 
 user_info: NULL 
  path: /export/home/swamy/primary.campo.kvm.2.zone 
   created: 2013-08-14 07:10:02 
   removed: NULL 
   update_time: NULL 
status: Maintenance 
 storage_provider_name: DefaultPrimary 
 scope: CLUSTER 
hypervisor: NULL 
   managed: 0 
 capacity_iops: NULL 
 2 rows in set (0.00 sec) 
 Observations: 
 (i) SSVM and CPVM volumes got created on pool_id=1 
 4. Zone got setup without any issues. 
 5. Added following zone wide primary storages 
 - test1 
 - test2 
 mysql> select * from storage_pool where id>7\G 
 *** 1. row *** 
id: 8 
  name: test1 
  uuid: 4e612995-3cb1-344e-ba19-3992e3d37d3f 
 pool_type: NetworkFilesystem 
  port: 2049 
data_center_id: 1 
pod_id: NULL 
cluster_id: NULL 
used_bytes: 4214658203648 
capacity_bytes: 5902284816384 
  host_address: 10.147.28.7 
 user_info: NULL 
  path: /export/home/swamy/test1 
   created: 2013-08-14 09:49:56 
   removed: NULL 
   update_time: NULL 
status: Up 
 storage_provider_name: DefaultPrimary 
 scope: ZONE 
hypervisor: KVM 
   managed: 0 
 capacity_iops: NULL 
 *** 2. row *** 
id: 9 
  name: test2 
  uuid: 43a95e23-1ad6-30a9-9903-f68231dacec5 
 pool_type: NetworkFilesystem 
  port: 2049 
data_center_id: 1 
pod_id: NULL 
cluster_id: NULL 
used_bytes: 4214658793472 
capacity_bytes: 5902284816384 
  host_address: 10.147.28.7 
 user_info: NULL 
  path: /export/home/swamy/test2 
   

[jira] [Commented] (CLOUDSTACK-4534) [object_store_refactor] Deleting uploaded volume is not deleting the volume from backend

2013-08-30 Thread Nitin Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754753#comment-13754753
 ] 

Nitin Mehta commented on CLOUDSTACK-4534:
-

This will be happening for all hypervisors, but only for uploaded volumes.

 [object_store_refactor] Deleting uploaded volume is not deleting the volume 
 from backend
 

 Key: CLOUDSTACK-4534
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4534
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller, Volumes
Affects Versions: 4.2.1
 Environment: git rev-parse HEAD~5
 1f46bc3fb09aead2cf1744d358fea7adba7df6e1
 Cluster: VMWare
 Storage: NFS
Reporter: Sanjeev N
 Fix For: 4.2.1

 Attachments: cloud.dmp, cloud.dmp, management-server.rar, 
 management-server.rar


 Deleting uploaded volume is not deleting the volume from backend and not 
 marking removed field in volumes table.
 Steps to Reproduce:
 
 1.Bring up CS with vmware cluster using NFS for both primary and secondary 
 storage
 2.Upload one volume using uploadVolume API
 3.When the volume is in Uploaded state try to delete the volume
 Result:
 ==
 The volume entry was deleted from volume_store_ref, but the volume was not deleted 
 from secondary storage and the removed field was not set in the volumes table.
 Observations:
 ===
 Log snippet from management server log file as follows:
 2013-08-28 03:18:08,269 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
 ===START===  10.146.0.131 -- GET  
 command=deleteVolume&id=e9ee6c0d-d149-4771-a494-6efda849b2ce&response=json&sessionkey=vNQ7kc2GdEuxzKje8MQ2xSAqbAQ%3D&_=1377674288184
 2013-08-28 03:18:08,414 DEBUG [cloud.user.AccountManagerImpl] 
 (catalina-exec-20:null) Access granted to Acct[2-admin] to Domain:1/ by 
 AffinityGroupAccessChecker_EnhancerByCloudStack_86df51a8
 2013-08-28 03:18:08,421 INFO  [cloud.resourcelimit.ResourceLimitManagerImpl] 
 (catalina-exec-20:null) Discrepency in the resource count (original 
 count=77179526656 correct count = 78867689472) for type secondary_storage for 
 account ID 2 is fixed during resource count recalculation.
 2013-08-28 03:18:08,446 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
 ===END===  10.146.0.131 -- GET  
 command=deleteVolume&id=e9ee6c0d-d149-4771-a494-6efda849b2ce&response=json&sessionkey=vNQ7kc2GdEuxzKje8MQ2xSAqbAQ%3D&_=1377674288184
 2013-08-28 03:18:32,766 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Storage pool garbage collector found 0 
 templates to clean up in storage pool: pri_esx_306
 2013-08-28 03:18:32,772 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Secondary storage garbage collector found 0 
 templates to cleanup on template_store_ref for store: 
 37f6be5b-0899-48b4-9fd8-1fe483f47c0e
 2013-08-28 03:18:32,774 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Secondary storage garbage collector found 0 
 snapshots to cleanup on snapshot_store_ref for store: 
 37f6be5b-0899-48b4-9fd8-1fe483f47c0e
 2013-08-28 03:18:32,776 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Secondary storage garbage collector found 1 
 volumes to cleanup on volume_store_ref for store: 
 37f6be5b-0899-48b4-9fd8-1fe483f47c0e
 2013-08-28 03:18:32,777 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Deleting volume store DB entry: 
 VolumeDataStore[2-20-2volumes/2/20/7e5778fd-c4bf-35b3-9e7a-9ab8500ab469.ova]
 Volume in the backend:
 [root@Rhel63-Sanjeev 20]# pwd
 /tmp/nfs/sec_306/volumes/2/20
 [root@Rhel63-Sanjeev 20]# ls -l
 total 898008
 -rwxrwxrwx+ 1 root root 459320832 Aug 27 13:57 
 7e5778fd-c4bf-35b3-9e7a-9ab8500ab469.ova
 -rwxrwxrwx+ 1 root root 459312128 Sep 17  2010 CentOS5.3-x86_64-disk1.vmdk
 -rwxrwxrwx+ 1 root root   147 Sep 17  2010 CentOS5.3-x86_64.mf
 -rwxrwxrwx+ 1 root root  5340 Sep 17  2010 CentOS5.3-x86_64.ovf
 -rwxrwxrwx+ 1 root root   340 Aug 27 13:58 volume.properties
 [root@Rhel63-Sanjeev 20]#
 Attaching management server log file and cloud db.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (CLOUDSTACK-3115) [DOC] Cisco VNMC provider known issues to document

2013-08-30 Thread Sailaja Mada (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailaja Mada closed CLOUDSTACK-3115.



This is documented now. Hence closing the ticket.

 [DOC] Cisco VNMC provider known issues to document 
 ---

 Key: CLOUDSTACK-3115
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3115
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.2.0
Reporter: Sailaja Mada
Assignee: Radhika Nair
 Fix For: 4.2.0

 Attachments: Apache_CloudStack-4.2.0-Installation_Guide-en-US.pdf


 Please find the list of known issues with the Cisco VNMC provider (ASA 1000v 
 firewall) to document:
 1. The public IP address range should be from a single subnet. Adding a range from 
 a different subnet is not allowed.
 2. One ASA instance per VLAN; multiple VLANs cannot be trunked to ASA ports.
 3. Auto spin-up of an ASA instance is not supported.
 4. When a guest network uses Cisco VNMC as the provider, an additional public IP is 
 acquired by default along with the Source NAT IP and is used as the ASA outside IP. 
 This IP should not be released; it is required because of a configuration issue in 
 ASA when the Source NAT IP is used as the ASA outside IP.
 5. No side-by-side support for LB; only inline.
 6. The Cisco ASA firewall is not supported in VPC.
 7. Cisco ASA is supported only with isolated guest networks in an advanced zone.
 8. Load balancing is not certified with the Cisco ASA firewall.
 9. In ASA, the firewall rule is not tied to a specific public IP, i.e. the 
 destination filter IP is 'Any IP'.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4568) Need to add this to the release note of 4.2

2013-08-30 Thread Bharat Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754841#comment-13754841
 ] 

Bharat Kumar commented on CLOUDSTACK-4568:
--

The same is the case for cpu.overprovisioning.factor, which is used for CPU reservation.

 Need to add this to the release note of 4.2
 ---

 Key: CLOUDSTACK-4568
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4568
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.2.0
Reporter: Bharat Kumar
Assignee: Jessica Tomechak
  Labels: releasenotes
 Fix For: 4.2.1


 After the upgrade to 4.2, mem.overprovisioning.factor and cpu.overprovisioning.factor 
 are set to one (the default value) and are now defined at the cluster level.
 If someone was using mem.overprovisioning.factor and cpu.overprovisioning.factor 
 prior to 4.2, these values are reset to one after the upgrade and can be changed 
 by editing the cluster settings.
 All clusters created after the upgrade are created, by default, with the 
 overcommit values specified in the global configuration. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4534) [object_store_refactor] Deleting uploaded volume is not deleting the volume from backend

2013-08-30 Thread Min Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754839#comment-13754839
 ] 

Min Chen commented on CLOUDSTACK-4534:
--

Based on our design, deleteVolumeCmd should trigger the object store driver to 
delete the volume from secondary storage; that is why we removed the logic for 
deleting the volume from the backend in the scavenger thread. This seems to be a 
bug in handling deleteVolume for an uploaded volume. If you restart the management 
server or restart the SSVM, the volume sync triggered there should remove it from 
secondary storage.
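
For reference, a minimal Marvin-style sketch of the reported reproduction (upload a volume, then delete it while it is still in the Uploaded state); the uploadVolume/deleteVolume command modules and the async handling are assumed to follow the same pattern as the createNetwork call shown in the tracebacks elsewhere in this digest:

from marvin.cloudstackAPI import uploadVolume, deleteVolume

def upload_then_delete(apiclient, zone_id, url):
    # Hypothetical repro helper: upload a volume, then delete it while Uploaded.
    up = uploadVolume.uploadVolumeCmd()
    up.name = "uploaded-vol-test"     # illustrative name
    up.zoneid = zone_id
    up.format = "OVA"                 # VMware cluster with NFS, as in the report
    up.url = url                      # link to the .ova volume being uploaded
    vol = apiclient.uploadVolume(up)

    rm = deleteVolume.deleteVolumeCmd()
    rm.id = vol.id                    # delete while the volume is still in Uploaded state
    return apiclient.deleteVolume(rm)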

 [object_store_refactor] Deleting uploaded volume is not deleting the volume 
 from backend
 

 Key: CLOUDSTACK-4534
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4534
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller, Volumes
Affects Versions: 4.2.1
 Environment: git rev-parse HEAD~5
 1f46bc3fb09aead2cf1744d358fea7adba7df6e1
 Cluster: VMWare
 Storage: NFS
Reporter: Sanjeev N
 Fix For: 4.2.1

 Attachments: cloud.dmp, cloud.dmp, management-server.rar, 
 management-server.rar


 Deleting uploaded volume is not deleting the volume from backend and not 
 marking removed field in volumes table.
 Steps to Reproduce:
 
 1.Bring up CS with vmware cluster using NFS for both primary and secondary 
 storage
 2.Upload one volume using uploadVolume API
 3.When the volume is in Uploaded state try to delete the volume
 Result:
 ==
 The volume entry was deleted from volume_store_ref, but the volume was not deleted 
 from secondary storage and the removed field was not set in the volumes table.
 Observations:
 ===
 Log snippet from management server log file as follows:
 2013-08-28 03:18:08,269 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
 ===START===  10.146.0.131 -- GET  
 command=deleteVolume&id=e9ee6c0d-d149-4771-a494-6efda849b2ce&response=json&sessionkey=vNQ7kc2GdEuxzKje8MQ2xSAqbAQ%3D&_=1377674288184
 2013-08-28 03:18:08,414 DEBUG [cloud.user.AccountManagerImpl] 
 (catalina-exec-20:null) Access granted to Acct[2-admin] to Domain:1/ by 
 AffinityGroupAccessChecker_EnhancerByCloudStack_86df51a8
 2013-08-28 03:18:08,421 INFO  [cloud.resourcelimit.ResourceLimitManagerImpl] 
 (catalina-exec-20:null) Discrepency in the resource count (original 
 count=77179526656 correct count = 78867689472) for type secondary_storage for 
 account ID 2 is fixed during resource count recalculation.
 2013-08-28 03:18:08,446 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
 ===END===  10.146.0.131 -- GET  
 command=deleteVolume&id=e9ee6c0d-d149-4771-a494-6efda849b2ce&response=json&sessionkey=vNQ7kc2GdEuxzKje8MQ2xSAqbAQ%3D&_=1377674288184
 2013-08-28 03:18:32,766 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Storage pool garbage collector found 0 
 templates to clean up in storage pool: pri_esx_306
 2013-08-28 03:18:32,772 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Secondary storage garbage collector found 0 
 templates to cleanup on template_store_ref for store: 
 37f6be5b-0899-48b4-9fd8-1fe483f47c0e
 2013-08-28 03:18:32,774 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Secondary storage garbage collector found 0 
 snapshots to cleanup on snapshot_store_ref for store: 
 37f6be5b-0899-48b4-9fd8-1fe483f47c0e
 2013-08-28 03:18:32,776 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Secondary storage garbage collector found 1 
 volumes to cleanup on volume_store_ref for store: 
 37f6be5b-0899-48b4-9fd8-1fe483f47c0e
 2013-08-28 03:18:32,777 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Deleting volume store DB entry: 
 VolumeDataStore[2-20-2volumes/2/20/7e5778fd-c4bf-35b3-9e7a-9ab8500ab469.ova]
 Volume in the backend:
 [root@Rhel63-Sanjeev 20]# pwd
 /tmp/nfs/sec_306/volumes/2/20
 [root@Rhel63-Sanjeev 20]# ls -l
 total 898008
 -rwxrwxrwx+ 1 root root 459320832 Aug 27 13:57 
 7e5778fd-c4bf-35b3-9e7a-9ab8500ab469.ova
 -rwxrwxrwx+ 1 root root 459312128 Sep 17  2010 CentOS5.3-x86_64-disk1.vmdk
 -rwxrwxrwx+ 1 root root   147 Sep 17  2010 CentOS5.3-x86_64.mf
 -rwxrwxrwx+ 1 root root  5340 Sep 17  2010 CentOS5.3-x86_64.ovf
 -rwxrwxrwx+ 1 root root   340 Aug 27 13:58 volume.properties
 [root@Rhel63-Sanjeev 20]#
 Attaching management server log file and cloud db.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (CLOUDSTACK-4358) [Automation] [vmware] Few VM deployment failed with IndexOutOfBoundsException at VolumeManagerImpl.java:2534

2013-08-30 Thread Rayees Namathponnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rayees Namathponnan closed CLOUDSTACK-4358.
---


This issue was not found in the latest runs.

 [Automation] [vmware] Few VM deployment failed with IndexOutOfBoundsException 
 at VolumeManagerImpl.java:2534
 --

 Key: CLOUDSTACK-4358
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4358
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation, VMware
Affects Versions: 4.2.0
Reporter: Rayees Namathponnan
Assignee: Kelven Yang
Priority: Blocker
 Fix For: 4.2.1

 Attachments: CLOUDSTACK-4358_Log2.rar, CLOUDSTACK-4358.rar, 
 management-server.log.2013-08-19.gz


 This issue observed while executing test case  
 integration.component.test_stopped_vm.TestDeployVMFromTemplate.test_deploy_vm_password_enabled
 VM deployment failed with below error in MS
 2013-08-15 06:25:04,040 DEBUG [storage.volume.VolumeServiceImpl] 
 (Job-Executor-151:job-1405 = [ 0c22bfca-fc58-42a0-a5eb-23def296accf ]) 
 Acquire lock on VMTemplateStoragePool 11 with timeout 3600 seconds
 2013-08-15 06:25:04,041 INFO  [storage.volume.VolumeServiceImpl] 
 (Job-Executor-151:job-1405 = [ 0c22bfca-fc58-42a0-a5eb-23def296accf ]) lock 
 is acquired for VMTemplateStoragePool 11
 2013-08-15 06:25:04,051 DEBUG [storage.motion.AncientDataMotionStrategy] 
 (Job-Executor-151:job-1405 = [ 0c22bfca-fc58-42a0-a5eb-23def296accf ]) 
 copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
 2013-08-15 06:25:04,065 DEBUG [agent.transport.Request] 
 (Job-Executor-151:job-1405 = [ 0c22bfca-fc58-42a0-a5eb-23def296accf ]) Seq 
 9-1658651433: Sending  { Cmd , MgmtId: 90928106758026, via: 9, Ver: v1, 
 Flags: 100011, [{org.apache.cloud
 stack.storage.command.CopyCommand:{srcTO:{org.apache.cloudstack.storage.to.TemplateObjectTO:{path:template/tmpl/282/230/3e85ab63-d22b-33b3-9e5e-e246b7413fff.ova,origUrl:http://nfs1.lab.vmops.com/templates/ubuntu/ubuntu-12.04.
 1-desktop-i386-nest-13.02.04.ova,uuid:74cf2a78-5935-4c31-baca-d237f4fcf974,id:230,format:OVA,accountId:282,checksum:63d4a4350424504f416fcd989b6ef1b2,hvm:true,displayText:Cent
  OS Template,imageDataStore:{com.clou
 d.agent.api.to.NfsTO:{_url:nfs://10.223.110.232:/export/home/automation/SC-CLOUD-QA03/secondary1,_role:Image}},name:230-282-0c4f6793-b7ab-334a-9a13-41f5580cdf90,hypervisorType:VMware}},destTO:{org.apache.cloudstack.st
 orage.to.TemplateObjectTO:{origUrl:http://nfs1.lab.vmops.com/templates/ubuntu/ubuntu-12.04.1-desktop-i386-nest-13.02.04.ova,uuid:74cf2a78-5935-4c31-baca-d237f4fcf974,id:230,format:OVA,accountId:282,checksum:63d4a43504
 24504f416fcd989b6ef1b2,hvm:true,displayText:Cent OS 
 Template,imageDataStore:{org.apache.cloudstack.storage.to.PrimaryDataStoreTO:{uuid:4faf04c2-6dd8-3025-b43f-65d32cc49d02,id:1,poolType:NetworkFilesystem,host:10.2
 23.110.232,path:/export/home/automation/SC-CLOUD-QA03/primary1,port:2049}},name:230-282-0c4f6793-b7ab-334a-9a13-41f5580cdf90,hypervisorType:VMware}},executeInSequence:false,wait:10800}}]
  }
 2013-08-15 06:25:04,099 DEBUG [agent.transport.Request] 
 (AgentManager-Handler-3:null) Seq 9-1658651433: Processing:  { Ans: , MgmtId: 
 90928106758026, via: 9, Ver: v1, Flags: 10, 
 [{org.apache.cloudstack.storage.command.CopyCmdAnswer:{r
 esult:false,details:Unable to copy template to primary storage due to 
 exception:Exception: java.lang.IndexOutOfBoundsException\nMessage: Index: 0, 
 Size: 0\n,wait:0}}] }
 2013-08-15 06:25:04,100 DEBUG [agent.transport.Request] 
 (Job-Executor-151:job-1405 = [ 0c22bfca-fc58-42a0-a5eb-23def296accf ]) Seq 
 9-1658651433: Received:  { Ans: , MgmtId: 90928106758026, via: 9, Ver: v1, 
 Flags: 10, { CopyCmdAnswer } }
 2013-08-15 06:25:04,108 INFO  [storage.volume.VolumeServiceImpl] 
 (Job-Executor-151:job-1405 = [ 0c22bfca-fc58-42a0-a5eb-23def296accf ]) 
 releasing lock for VMTemplateStoragePool 11
 2013-08-15 06:25:04,108 WARN  [utils.db.Merovingian2] 
 (Job-Executor-151:job-1405 = [ 0c22bfca-fc58-42a0-a5eb-23def296accf ]) Was 
 unable to find lock for the key template_spool_ref11 and thread id 557745274
 2013-08-15 06:25:04,108 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-151:job-1405 = [ 0c22bfca-fc58-42a0-a5eb-23def296accf ]) Unable 
 to create Vol[442|vm=388|ROOT]:Unable to copy template to primary storage due 
 to exception:Exce
 ption: java.lang.IndexOutOfBoundsException
 Message: Index: 0, Size: 0
 2013-08-15 06:25:04,108 INFO  [cloud.vm.VirtualMachineManagerImpl] 
 (Job-Executor-151:job-1405 = [ 0c22bfca-fc58-42a0-a5eb-23def296accf ]) Unable 
 to contact resource.
 

[jira] [Reopened] (CLOUDSTACK-4335) [Automation] Test case test_deployVmSharedNetworkWithoutIpRange failed due subnet is overlapped with subnet in other network

2013-08-30 Thread Rayees Namathponnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rayees Namathponnan reopened CLOUDSTACK-4335:
-


Still fails with latest runs

Execute cmd: createnetwork failed, due to: errorCode: 431, errorText:The IP 
range already has IPs that overlap with the new range. Please specify a 
different start IP/end IP.
  begin captured logging  
testclient.testcase.TestSharedNetworkWithoutIp: DEBUG: Fetching default shared 
network offering from nw offerings
testclient.testcase.TestSharedNetworkWithoutIp: DEBUG: Shared netwrk offering: 
DefaultSharedNetworkOffering
testclient.testcase.TestSharedNetworkWithoutIp: DEBUG: Creating a network from 
shared network offering
-  end captured logging  -
Stacktrace

  File /usr/local/lib/python2.7/unittest/case.py, line 318, in run
testMethod()
  File 
/Repo_30X/ipcl/cloudstack/test/integration/component/test_shared_network_offering.py,
 line 195, in test_deployVmSharedNetworkWithoutIpRange
zoneid=self.zone.id
  File /usr/local/lib/python2.7/site-packages/marvin/integration/lib/base.py, 
line 1959, in create
return Network(apiclient.createNetwork(cmd).__dict__)
  File 
/usr/local/lib/python2.7/site-packages/marvin/cloudstackAPI/cloudstackAPIClient.py,
 line 1708, in createNetwork
response = self.connection.marvin_request(command, response_type=response, 
method=method)
  File /usr/local/lib/python2.7/site-packages/marvin/cloudstackConnection.py, 
line 222, in marvin_request
response = jsonHelper.getResultObj(response.json(), response_type)
  File /usr/local/lib/python2.7/site-packages/marvin/jsonHelper.py, line 148, 
in getResultObj
raise cloudstackException.cloudstackAPIException(respname, errMsg)
Execute cmd: createnetwork failed, due to: errorCode: 431, errorText:The IP 
range already has IPs that overlap with the new range. Please specify a 
different start IP/end IP.
  begin captured logging  
testclient.testcase.TestSharedNetworkWithoutIp: DEBUG: Fetching default shared 
network offering from nw offerings
testclient.testcase.TestSharedNetworkWithoutIp: DEBUG: Shared netwrk offering: 
DefaultSharedNetworkOffering
testclient.testcase.TestSharedNetworkWithoutIp: DEBUG: Creating a network from 
shared network offering
-  end captured logging  -

 [Automation] Test case test_deployVmSharedNetworkWithoutIpRange  failed due 
 subnet is overlapped with subnet in other network 
 --

 Key: CLOUDSTACK-4335
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4335
 Project: CloudStack
  Issue Type: Test
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
Affects Versions: 4.2.0
 Environment: Automation
Reporter: Rayees Namathponnan
Priority: Critical
 Fix For: 4.2.0


 Below test case failed 
 integration.component.test_shared_network_offering.TestSharedNetworkWithoutIp.test_deployVmSharedNetworkWithoutIpRange
 Error Message
 Execute cmd: createnetwork failed, due to: errorCode: 431, errorText:This 
 subnet is overlapped with subnet in other network 332 in zone Adv-KVM-Zone1 . 
 Please specify a different gateway/netmask.
   begin captured logging  
 testclient.testcase.TestSharedNetworkWithoutIp: DEBUG: Fetching default 
 shared network offering from nw offerings
 testclient.testcase.TestSharedNetworkWithoutIp: DEBUG: Shared netwrk 
 offering: DefaultSharedNetworkOffering
 testclient.testcase.TestSharedNetworkWithoutIp: DEBUG: Creating a network 
 from shared network offering
 -  end captured logging  -
 Stacktrace
   File /usr/local/lib/python2.7/unittest/case.py, line 318, in run
 testMethod()
   File 
 /Repo_30X/ipcl/cloudstack/test/integration/component/test_shared_network_offering.py,
  line 195, in test_deployVmSharedNetworkWithoutIpRange
 zoneid=self.zone.id
   File 
 /usr/local/lib/python2.7/site-packages/marvin/integration/lib/base.py, line 
 1940, in create
 return Network(apiclient.createNetwork(cmd).__dict__)
   File 
 /usr/local/lib/python2.7/site-packages/marvin/cloudstackAPI/cloudstackAPIClient.py,
  line 1709, in createNetwork
 response = self.connection.marvin_request(command, 
 response_type=response, method=method)
   File 
 /usr/local/lib/python2.7/site-packages/marvin/cloudstackConnection.py, line 
 222, in marvin_request
 response = jsonHelper.getResultObj(response.json(), response_type)
   File /usr/local/lib/python2.7/site-packages/marvin/jsonHelper.py, line 

[jira] [Created] (CLOUDSTACK-4573) Aquire IP address above domain limit in VPC

2013-08-30 Thread Daan Hoogland (JIRA)
Daan Hoogland created CLOUDSTACK-4573:
-

 Summary: Aquire IP address above domain limit in VPC
 Key: CLOUDSTACK-4573
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4573
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.1.1
Reporter: Daan Hoogland


It is possible to acquire more public IP addresses than allowed by the domain 
limit. The steps are as follows:

The user has a limit of 2 IPs.
The domain has a limit of 5 IPs.

1) Create a VPC; this acquires a public IP address for source NAT.
2) Create a network (not in the VPC) and acquire an IP address; we are now at 
the maximum of two allowed public IP addresses.
3) Create one or more networks on the VPC.
4) Under IP addresses (VPC configuration), acquire an IP address.
We now have 3 IP addresses acquired. I tested further and was allowed up to 7, at 
which point there were no more free IP addresses available in CloudStack.

Conclusion: the non-VPC network correctly adheres to the domain limit, but 
the VPC does not, and IP addresses on the VPC are not counted when checking 
the domain limit.
The strange thing, though, is that CloudStack does check the IP limit during the 
creation of a VPC: you cannot create a VPC when you have already reached your 
IP limit.
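
For illustration, a minimal Marvin-style sketch of step 4 above (acquiring an additional public IP on the VPC), which is the path where the domain limit does not seem to be enforced; the associateIpAddress parameters come from the public API and the client style is an assumption:

from marvin.cloudstackAPI import associateIpAddress

def acquire_ip_on_vpc(apiclient, vpc_id):
    # Hypothetical helper for step 4: acquire a public IP directly on the VPC.
    cmd = associateIpAddress.associateIpAddressCmd()
    cmd.vpcid = vpc_id    # acquire the IP for the VPC rather than a guest network
    return apiclient.associateIpAddress(cmd)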

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4573) Aquire IP address above domain limit in VPC

2013-08-30 Thread Prasanna Santhanam (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanna Santhanam updated CLOUDSTACK-4573:
---

Labels: integration-test  (was: )

 Aquire IP address above domain limit in VPC
 ---

 Key: CLOUDSTACK-4573
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4573
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.1.1
Reporter: Daan Hoogland
  Labels: integration-test

 It is possible to acquire more public IP addresses than allowed by the domain 
 limit. The steps are as follows:
 The user has a limit of 2 IPs.
 The domain has a limit of 5 IPs.
 1) Create a VPC; this acquires a public IP address for source NAT.
 2) Create a network (not in the VPC) and acquire an IP address; we are now at 
 the maximum of two allowed public IP addresses.
 3) Create one or more networks on the VPC.
 4) Under IP addresses (VPC configuration), acquire an IP address.
 We now have 3 IP addresses acquired. I tested further and was allowed up to 7, at 
 which point there were no more free IP addresses available in CloudStack.
 Conclusion: the non-VPC network correctly adheres to the domain limit, 
 but the VPC does not, and IP addresses on the VPC are not counted when 
 checking the domain limit.
 The strange thing, though, is that CloudStack does check the IP limit during the 
 creation of a VPC: you cannot create a VPC when you have already reached your 
 IP limit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4550) [DOC] When upgrading KVM agents to 4.2(.1?) perform bridge renaming to have migration work

2013-08-30 Thread Jessica Tomechak (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754873#comment-13754873
 ] 

Jessica Tomechak commented on CLOUDSTACK-4550:
--

Not sure I understand the description given. Can anyone (Edison?) please 
explain this further? The description seems to say you have to rename the 
bridges after you rename the bridges.

 [DOC] When upgrading KVM agents to 4.2(.1?) perform bridge renaming to have 
 migration work
 --

 Key: CLOUDSTACK-4550
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4550
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc, KVM, Upgrade
Affects Versions: 4.2.0
Reporter: Prasanna Santhanam
Assignee: Jessica Tomechak
Priority: Critical

 See CLOUDSTACK-4405 for the original bug. This is the doc to be prepared as
 part of the upgrade section of the release notes once the fix for the bug is verified to work.
 After the network bridges have been renamed from cloudVirBrVLAN to brem1-VLAN, rename
 the bridges so that migration works between hosts added before the upgrade and
 those added after the upgrade.
 This can be done by running the cloudstack-agent-upgrade script.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4572) findHostsForMigration API does not return correct host list

2013-08-30 Thread Saksham Srivastava (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754879#comment-13754879
 ] 

Saksham Srivastava commented on CLOUDSTACK-4572:


Fix available for review at : https://reviews.apache.org/r/13911/

 findHostsForMigration API does not return correct host list
 ---

 Key: CLOUDSTACK-4572
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4572
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.2.0
Reporter: Saksham Srivastava
Assignee: Saksham Srivastava
 Fix For: 4.2.1


 Create a multi-cluster setup.
 Tag a host in one cluster with host tag t1.
 Create a service offering using the host tag t1.
 Deploy a VM using the tagged service offering.
 Even if tagged/untagged hosts are available across different clusters, the API 
 does not list the correct hosts for migration for the deployed VM.
 Expected behavior:
 The API should return the list of suitable/unsuitable hosts.
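
A minimal sketch of exercising the API in question, assuming Marvin exposes an auto-generated findHostsForMigration command module in the same style as the other API calls quoted in this digest:

from marvin.cloudstackAPI import findHostsForMigration

def hosts_for_migration(apiclient, vm_id):
    # Hypothetical helper: list migration candidates for the tagged-offering VM.
    cmd = findHostsForMigration.findHostsForMigrationCmd()
    cmd.virtualmachineid = vm_id
    return apiclient.findHostsForMigration(cmd)   # expected: suitable and unsuitable hosts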

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4572) findHostsForMigration API does not return correct host list

2013-08-30 Thread Saksham Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saksham Srivastava updated CLOUDSTACK-4572:
---

Status: Ready To Review  (was: In Progress)

 findHostsForMigration API does not return correct host list
 ---

 Key: CLOUDSTACK-4572
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4572
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.2.0
Reporter: Saksham Srivastava
Assignee: Saksham Srivastava
 Fix For: 4.2.1


 Create a multi-cluster setup.
 Tag a host in one cluster with host tag t1.
 Create a service offering using the host tag t1.
 Deploy a VM using the tagged service offering.
 Even if tagged/untagged hosts are available across different clusters, the API 
 does not list the correct hosts for migration for the deployed VM.
 Expected behavior:
 The API should return the list of suitable/unsuitable hosts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4362) VM's are failing to start after its DATA volume is migrated to other primary storage

2013-08-30 Thread Sailaja Mada (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailaja Mada updated CLOUDSTACK-4362:
-

Attachment: management-server.log
apilog.log

 VM's are failing to start after its DATA volume is migrated to other primary 
 storage 
 -

 Key: CLOUDSTACK-4362
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4362
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller, VMware
Affects Versions: 4.2.0
Reporter: Sailaja Mada
Assignee: Sateesh Chodapuneedi
Priority: Critical
 Fix For: 4.2.0

 Attachments: alllogs.rar, apilog.log, apilog.log, db1.sql, 
 management-server.log, management-server.log, ssvmlogs.rar


 Steps:
 1. Configure an Adv zone with 2 zone-wide primary storages
 2. Create a new account and deploy an instance using this account
 3. Add a new DATA volume and attach it to this instance
 4. Resize this volume from 5 GB to 7 GB
 5. As admin, migrate this volume from Storage 1 to Storage 2 (zone-wide 
 primary)
 6. Stop and start this instance
 Observation:
 VMs are failing to start after the DATA volume is migrated to the second zone-wide 
 primary storage 
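
A minimal Marvin-style sketch of step 5 above (migrating the attached DATA volume between the two zone-wide primary storages); the parameter names follow the public migrateVolume API and the client style is an assumption:

from marvin.cloudstackAPI import migrateVolume

def migrate_data_volume(apiclient, volume_id, target_pool_id):
    # Hypothetical helper for step 5: migrate the DATA volume to the second
    # zone-wide primary storage.
    cmd = migrateVolume.migrateVolumeCmd()
    cmd.volumeid = volume_id
    cmd.storageid = target_pool_id    # UUID of the destination primary storage
    return apiclient.migrateVolume(cmd)
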
 2013-08-16 12:04:48,465 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Checking 
 if we need to prepare 2 volumes for VM[User|inst2]
 2013-08-16 12:04:48,471 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Mismatch 
 in storage pool Pool[1|NetworkFilesystem] assigned by deploymentPlanner and 
 the one associated with volume Vol[14|vm=5|DATADISK]
 2013-08-16 12:04:48,471 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Shared 
 volume Vol[14|vm=5|DATADISK] will be migrated on storage pool 
 Pool[1|NetworkFilesystem] assigned by deploymentPlanner
 2013-08-16 12:04:48,524 DEBUG [storage.motion.AncientDataMotionStrategy] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) copyAsync 
 inspecting src type VOLUME copyAsync inspecting dest type VOLUME
 2013-08-16 12:04:48,528 DEBUG [cache.allocator.StorageCacheRandomAllocator] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Can't 
 find staging storage in zone: 1
 2013-08-16 12:04:48,591 DEBUG [agent.transport.Request] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Seq 
 3-1725105201: Sending  { Cmd , MgmtId: 187767034175903, via: 3, Ver: v1, 
 Flags: 100011, 
 [{org.apache.cloudstack.storage.command.CopyCommand:{srcTO:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid:b55516ba-da2e-454a-8cee-7d1a927ce25a,volumeType:DATADISK,dataStore:{org.apache.cloudstack.storage.to.PrimaryDataStoreTO:{uuid:b33e996a-444e-3685-9070-0865067454c4,id:2,poolType:NetworkFilesystem,host:10.102.192.100,path:/cpg_vol/sailaja/sailajaps2,port:2049}},name:new32,size:7516192768,path:a04a400289624aad99ab92e7c089343d,volumeId:14,vmName:i-3-5-VM,accountId:3,format:OVA,id:14,hypervisorType:VMware}},destTO:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid:b55516ba-da2e-454a-8cee-7d1a927ce25a,volumeType:DATADISK,dataStore:{com.cloud.agent.api.to.NfsTO:{_url:nfs://10.102.192.100/cpg_vol/sailaja/sailajass1,_role:Image}},name:new32,size:7516192768,path:volumes/3/14,volumeId:14,vmName:i-3-5-VM,accountId:3,format:OVA,id:14,hypervisorType:VMware}},executeInSequence:false,wait:10800}}]
  }
 2013-08-16 12:04:48,700 DEBUG [agent.transport.Request] 
 (AgentManager-Handler-1:null) Seq 3-1725105201: Processing:  { Ans: , MgmtId: 
 187767034175903, via: 3, Ver: v1, Flags: 10, 
 [{org.apache.cloudstack.storage.command.CopyCmdAnswer:{result:false,details:copy
  volume from primary to secondary failed due to exception: Exception: 
 java.lang.NullPointerException\nMessage: null\n,wait:0}}] }
 2013-08-16 12:04:48,702 DEBUG [agent.transport.Request] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Seq 
 3-1725105201: Received:  { Ans: , MgmtId: 187767034175903, via: 3, Ver: v1, 
 Flags: 10, { CopyCmdAnswer } }
 2013-08-16 12:04:48,706 DEBUG [storage.motion.AncientDataMotionStrategy] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) copy to 
 image store failed: copy volume from primary to secondary failed due to 
 exception: Exception: java.lang.NullPointerException
 Message: null
 2013-08-16 12:04:48,728 DEBUG [storage.image.BaseImageStoreDriverImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Unable to 
 destoy VOLUME: 14
 java.lang.NullPointerException
 at 
 

[jira] [Closed] (CLOUDSTACK-4471) [Vmware] Error Instances leaving ROOT Volumes in expunging state and not getting updated as removed even after having the instances expunged

2013-08-30 Thread Sailaja Mada (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailaja Mada closed CLOUDSTACK-4471.



Regression-tested with the latest builds. This is fixed now; hence closing the bug.

 [Vmware] Error Instances leaving ROOT Volumes in expunging state and not 
 getting updated as removed even after having the instances expunged
 ---

 Key: CLOUDSTACK-4471
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4471
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller, VMware
Affects Versions: 4.2.1
Reporter: Sailaja Mada
Assignee: Likitha Shetty
Priority: Critical
 Fix For: 4.2.0, 4.2.1

 Attachments: errovolumedb.sql, mslogs.rar, volumelogs.rar


 Steps:
 1. Configure an Adv zone with VMware
 2. Set the expunge interval to a lower value
 3. Set the storage.cleanup interval to a lower value (a sketch of adjusting these settings follows after the listVolumes output below)
 4. There is a case where user instances failed to deploy and went into ERROR state
 Observation:
 1. A ROOT disk is created as part of this deployment. When the VM got expunged, these ROOT disks were moved to the Expunging state, but they are not updated as removed.
 2. As a result, listVolumes still displays all these volumes:
 <volume><id>fe7cc918-dd31-4925-9dd0-9917807d051a</id><name>ROOT-36</name><zoneid>efb00e64-4f4d-4582-818c-cb80446d5e5c</zoneid><zonename>307XenZone1</zonename><type>ROOT</type><deviceid>0</deviceid><virtualmachineid>1043592d-9df7-490b-a408-0dd7cdb80239</virtualmachineid><vmname>1043592d-9df7-490b-a408-0dd7cdb80239</vmname><vmstate>Expunging</vmstate><size>2147483648</size><created>2013-08-22T14:47:16+0530</created><state>Allocated</state><account>vmwareuser1</account><domainid>ca370a5c-0b19-4358-a757-58549a2c29ed</domainid><domain>cdc</domain><storagetype>shared</storagetype><destroyed>false</destroyed><serviceofferingid>3c1dacb5-6629-432b-b093-f260067a078c</serviceofferingid><serviceofferingname>host13</serviceofferingname><serviceofferingdisplaytext>host13</serviceofferingdisplaytext><isextractable>true</isextractable><displayvolume>false</displayvolume></volume><volume>
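 For illustration, a hedged sketch of lowering the cleanup-related globals referred to in steps 2 and 3; the setting names are the standard expunge.interval, expunge.delay, and storage.cleanup.interval globals, but the values and the management-server address below are placeholders, and a real request must be authenticated (session key or API-key signature):
 # Hypothetical values for a test setup; do not treat these as recommendations.
 curl "http://<management-server>:8080/client/api?command=updateConfiguration&name=expunge.interval&value=120"
 curl "http://<management-server>:8080/client/api?command=updateConfiguration&name=expunge.delay&value=120"
 curl "http://<management-server>:8080/client/api?command=updateConfiguration&name=storage.cleanup.interval&value=120"
 # A management server restart may be needed for some of these settings to take effect.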

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4550) [DOC] When upgrading KVM agents to 4.2(.1?) perform bridge renaming to have migration work

2013-08-30 Thread Prasanna Santhanam (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanna Santhanam updated CLOUDSTACK-4550:
---

Description: 
See CLOUDSTACK-4405 for the original bug. This is the doc to be prepared as part of the upgrade section of the release notes once the fix for the bug is verified to work.

After the network bridges were renamed from cloudVirBrVLAN to brem1-VLAN to support the same VLAN on multiple physical networks, migration of VMs from hosts added prior to the upgrade to hosts added after the upgrade will fail.

In order to fix this, renaming the bridges is required to allow migration to work.

This can be done by running the cloudstack-agent-upgrade script. The original bug is still undergoing testing, but these are the initial instructions.



  was:
See CLOUDSTACK-4405 for the original bug. This is the doc to be prepared as
part of upgrade in release notes once the fix for the bug is verified to work

After network bridges being renamed from cloudVirBrVLAN to brem1-VLAN rename
the bridges to allow migration to work between host added before upgrade to
those added after upgrade

This can be done by running the cloudstack-agent-upgrade script




 [DOC] When upgrading KVM agents to 4.2(.1?) perform bridge renaming to have 
 migration work
 --

 Key: CLOUDSTACK-4550
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4550
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc, KVM, Upgrade
Affects Versions: 4.2.0
Reporter: Prasanna Santhanam
Assignee: Jessica Tomechak
Priority: Critical

 See CLOUDSTACK-4405 for the original bug. This is the doc to be prepared as part of the upgrade section of the release notes once the fix for the bug is verified to work.
 After the network bridges were renamed from cloudVirBrVLAN to brem1-VLAN to support the same VLAN on multiple physical networks, migration of VMs from hosts added prior to the upgrade to hosts added after the upgrade will fail.
 In order to fix this, renaming the bridges is required to allow migration to work.
 This can be done by running the cloudstack-agent-upgrade script. The original bug is still undergoing testing, but these are the initial instructions; a hedged command sketch follows below.
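 For illustration only, a minimal sketch of the procedure, assuming the packaged cloudstack-agent-upgrade script is on the PATH of each KVM host and that the agent service is named cloudstack-agent; the exact steps should follow the final release-note text once the fix is verified.
 # Run on each upgraded KVM host (assumptions noted above).
 cloudstack-agent-upgrade            # rename the cloudVirBrVLAN-style bridges to the brem1-VLAN form
 service cloudstack-agent restart    # restart the agent so it uses the renamed bridges
 brctl show                          # optional check that the expected bridge names are present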

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (CLOUDSTACK-2654) VPC UI Missing information

2013-08-30 Thread Brian Federle (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Federle closed CLOUDSTACK-2654.
-

Resolution: Fixed

This was just a placeholder; thus closing the bug.

 VPC UI Missing information 
 ---

 Key: CLOUDSTACK-2654
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2654
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.0.2
Reporter: Sonny Chhen
Assignee: Sonny Chhen
  Labels: ui
 Fix For: 4.2.0

   Original Estimate: 168h
  Remaining Estimate: 168h

 VPC UI needs to include information pertaining to new nTier features.
 The look and feel needs to be modified to include the following information:
 -Network ACL lists
 -Internal load balancers
 -Public load balancers
 Additionally the structure of the chart needs to better reflect the new nTier 
 functionality.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4362) VM's are failing to start after its DATA volume is migrated to other primary storage

2013-08-30 Thread Sailaja Mada (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailaja Mada updated CLOUDSTACK-4362:
-

Attachment: alllogs.rar

 VM's are failing to start after its DATA volume is migrated to other primary 
 storage 
 -

 Key: CLOUDSTACK-4362
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4362
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller, VMware
Affects Versions: 4.2.0
Reporter: Sailaja Mada
Assignee: Sateesh Chodapuneedi
Priority: Critical
 Fix For: 4.2.0

 Attachments: alllogs.rar, apilog.log, db1.sql, management-server.log, 
 ssvmlogs.rar


 Steps:
 1. Configure an Adv zone with 2 zone-wide primary storages
 2. Create a new account and deploy an instance using this account
 3. Add a new DATA volume and attach it to this instance
 4. Resize this volume from 5 GB to 7 GB
 5. As admin, migrate this volume from Storage 1 to Storage 2 (zone-wide primary)
 6. Stop and start this instance
 Observation:
 VMs are failing to start after their DATA volume is migrated to the second zone-wide primary storage
 2013-08-16 12:04:48,465 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Checking 
 if we need to prepare 2 volumes for VM[User|inst2]
 2013-08-16 12:04:48,471 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Mismatch 
 in storage pool Pool[1|NetworkFilesystem] assigned by deploymentPlanner and 
 the one associated with volume Vol[14|vm=5|DATADISK]
 2013-08-16 12:04:48,471 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Shared 
 volume Vol[14|vm=5|DATADISK] will be migrated on storage pool 
 Pool[1|NetworkFilesystem] assigned by deploymentPlanner
 2013-08-16 12:04:48,524 DEBUG [storage.motion.AncientDataMotionStrategy] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) copyAsync 
 inspecting src type VOLUME copyAsync inspecting dest type VOLUME
 2013-08-16 12:04:48,528 DEBUG [cache.allocator.StorageCacheRandomAllocator] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Can't 
 find staging storage in zone: 1
 2013-08-16 12:04:48,591 DEBUG [agent.transport.Request] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Seq 
 3-1725105201: Sending  { Cmd , MgmtId: 187767034175903, via: 3, Ver: v1, 
 Flags: 100011, 
 [{org.apache.cloudstack.storage.command.CopyCommand:{srcTO:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid:b55516ba-da2e-454a-8cee-7d1a927ce25a,volumeType:DATADISK,dataStore:{org.apache.cloudstack.storage.to.PrimaryDataStoreTO:{uuid:b33e996a-444e-3685-9070-0865067454c4,id:2,poolType:NetworkFilesystem,host:10.102.192.100,path:/cpg_vol/sailaja/sailajaps2,port:2049}},name:new32,size:7516192768,path:a04a400289624aad99ab92e7c089343d,volumeId:14,vmName:i-3-5-VM,accountId:3,format:OVA,id:14,hypervisorType:VMware}},destTO:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid:b55516ba-da2e-454a-8cee-7d1a927ce25a,volumeType:DATADISK,dataStore:{com.cloud.agent.api.to.NfsTO:{_url:nfs://10.102.192.100/cpg_vol/sailaja/sailajass1,_role:Image}},name:new32,size:7516192768,path:volumes/3/14,volumeId:14,vmName:i-3-5-VM,accountId:3,format:OVA,id:14,hypervisorType:VMware}},executeInSequence:false,wait:10800}}]
  }
 2013-08-16 12:04:48,700 DEBUG [agent.transport.Request] 
 (AgentManager-Handler-1:null) Seq 3-1725105201: Processing:  { Ans: , MgmtId: 
 187767034175903, via: 3, Ver: v1, Flags: 10, 
 [{org.apache.cloudstack.storage.command.CopyCmdAnswer:{result:false,details:copy
  volume from primary to secondary failed due to exception: Exception: 
 java.lang.NullPointerException\nMessage: null\n,wait:0}}] }
 2013-08-16 12:04:48,702 DEBUG [agent.transport.Request] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Seq 
 3-1725105201: Received:  { Ans: , MgmtId: 187767034175903, via: 3, Ver: v1, 
 Flags: 10, { CopyCmdAnswer } }
 2013-08-16 12:04:48,706 DEBUG [storage.motion.AncientDataMotionStrategy] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) copy to 
 image store failed: copy volume from primary to secondary failed due to 
 exception: Exception: java.lang.NullPointerException
 Message: null
 2013-08-16 12:04:48,728 DEBUG [storage.image.BaseImageStoreDriverImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Unable to 
 destoy VOLUME: 14
 java.lang.NullPointerException
 at 
 org.apache.cloudstack.storage.volume.VolumeObject.getPath(VolumeObject.java:338)
 at 
 

[jira] [Updated] (CLOUDSTACK-4568) Need to add this to the release note of 4.2

2013-08-30 Thread Abhinandan Prateek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinandan Prateek updated CLOUDSTACK-4568:
---

Assignee: Jessica Tomechak

 Need to add this to the release note of 4.2
 ---

 Key: CLOUDSTACK-4568
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4568
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.2.0
Reporter: Bharat Kumar
Assignee: Jessica Tomechak
  Labels: releasenotes
 Fix For: 4.2.0


 After the upgrade to 4.2, mem.overprovisioning.factor and cpu.overprovisioning.factor will be set to 1 (the default value) and are now applied at the cluster level.
 If someone was using mem.overprovisioning.factor and cpu.overprovisioning.factor prior to 4.2, these will be reset to 1 after the upgrade and can be changed by editing the cluster settings.
 All clusters created after the upgrade will be created with the overcommit values specified in the global configuration by default.
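 As an illustration, a minimal sketch for verifying these settings after the upgrade; it assumes shell access to the management server and the default cloud database name and user, which are deployment-specific assumptions.
 # Global defaults that newly created clusters will inherit:
 mysql -u cloud -p cloud -e "select name, value from configuration where name like '%overprovisioning.factor%';"
 # Per-cluster values can then be adjusted from the UI in the cluster's settings.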

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CLOUDSTACK-4572) findHostsForMigration API does not return correct host list

2013-08-30 Thread Saksham Srivastava (JIRA)
Saksham Srivastava created CLOUDSTACK-4572:
--

 Summary: findHostsForMigration API does not return correct host 
list
 Key: CLOUDSTACK-4572
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4572
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.2.0
Reporter: Saksham Srivastava
Assignee: Saksham Srivastava
 Fix For: 4.2.1


Create a multi-cluster setup.
Tag a host in one cluster with host tag t1.
Create a service offering using the host tag t1.
Deploy a VM using the tagged service offering.
Even if tagged/untagged hosts are available across different clusters, the API does not list the correct hosts for migration for the deployed VM.

Expected behavior:
The API should return the list of suitable/unsuitable hosts; an example call is sketched below.
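For reference, a hedged example of calling the API named in this report; the virtualmachineid value and the management-server address are placeholders, and a real request must be authenticated (session key or API-key signature).

# Hypothetical invocation (placeholders in angle brackets):
curl "http://<management-server>:8080/client/api?command=findHostsForMigration&virtualmachineid=<vm-uuid>&response=json"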

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4089) Provide a drop down to specify VLAN,Switch type, Traffic label name while configuring Zone(VMWARE)

2013-08-30 Thread Sailaja Mada (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailaja Mada updated CLOUDSTACK-4089:
-

Fix Version/s: (was: Future)
   4.2.0

 Provide a drop down to specify VLAN,Switch type, Traffic label name while 
 configuring Zone(VMWARE)
 --

 Key: CLOUDSTACK-4089
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4089
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.2.0
Reporter: Sailaja Mada
Assignee: Jessica Wang
 Fix For: 4.2.0

 Attachments: 2013-08-06-A.jpg, 
 addClusterCmd_wrongDuplicateParameterNames.jpg, dropPN.png, 
 edit-traffic-type-vmware.jpg


 Observation:
 Setup: VMware
 1. While configuring a zone, during physical network creation, there is currently a single text field to specify the VLAN ID for the traffic, the traffic label name, and the switch type (vmwaresvs, vmwaredvs, nexusdvs).
 2. Because it is a free-form text field, there is a possibility of missing some of the parameters.
 3. While adding a cluster, we have an option to specify the traffic label name and a drop-down to select the switch type.
 This is a request to provide a drop-down to specify the VLAN, switch type, and traffic label name while configuring a zone (VMware); an example of the current free-form format is shown below. This will avoid a lot of confusion between zone-level and cluster-level configuration.
 It also simplifies the configuration process.
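 For context, a hedged illustration of what currently has to be typed into that single text field; the label syntax should be confirmed against the CloudStack VMware networking documentation, and the switch name and VLAN ID below are invented examples.
 # Traffic label as typed today: <vSwitch/dvSwitch name>,<VLAN ID>,<virtual switch type>
 dvSwitch0,200,vmwaredvs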

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4089) Provide a drop down to specify VLAN,Switch type, Traffic label name while configuring Zone(VMWARE)

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754970#comment-13754970
 ] 

ASF subversion and git services commented on CLOUDSTACK-4089:
-

Commit 3b14b66b20e06b30005be169617637f3635aa89d in branch refs/heads/master 
from [~jessicawang]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=3b14b66 ]

CLOUDSTACK-4089: UI > zone wizard > hypervisor VMware > multiple physical networks > edit Public/Guest traffic type > fix a bug that vSwitch Type dropdown selection didn't remain after Public/Guest traffic type is dragged to another physical network.


 Provide a drop down to specify VLAN,Switch type, Traffic label name while 
 configuring Zone(VMWARE)
 --

 Key: CLOUDSTACK-4089
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4089
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.2.0
Reporter: Sailaja Mada
Assignee: Jessica Wang
 Fix For: 4.2.0

 Attachments: 2013-08-06-A.jpg, 
 addClusterCmd_wrongDuplicateParameterNames.jpg, dropPN.png, 
 edit-traffic-type-vmware.jpg


 Observation:
 Setup: VMware
 1. While configuring a zone, during physical network creation, there is currently a single text field to specify the VLAN ID for the traffic, the traffic label name, and the switch type (vmwaresvs, vmwaredvs, nexusdvs).
 2. Because it is a free-form text field, there is a possibility of missing some of the parameters.
 3. While adding a cluster, we have an option to specify the traffic label name and a drop-down to select the switch type.
 This is a request to provide a drop-down to specify the VLAN, switch type, and traffic label name while configuring a zone (VMware). This will avoid a lot of confusion between zone-level and cluster-level configuration.
 It also simplifies the configuration process.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4089) Provide a drop down to specify VLAN,Switch type, Traffic label name while configuring Zone(VMWARE)

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754962#comment-13754962
 ] 

ASF subversion and git services commented on CLOUDSTACK-4089:
-

Commit 2c2ebee3f7395a6541088eefe91ceaa1b02c70d5 in branch 
refs/heads/4.2-forward from [~jessicawang]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=2c2ebee ]

CLOUDSTACK-4089: UI > zone wizard > hypervisor VMware > multiple physical networks > edit Public/Guest traffic type > fix a bug that vSwitch Type dropdown selection didn't remain after Public/Guest traffic type is dragged to another physical network.


 Provide a drop down to specify VLAN,Switch type, Traffic label name while 
 configuring Zone(VMWARE)
 --

 Key: CLOUDSTACK-4089
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4089
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.2.0
Reporter: Sailaja Mada
Assignee: Jessica Wang
 Fix For: 4.2.0

 Attachments: 2013-08-06-A.jpg, 
 addClusterCmd_wrongDuplicateParameterNames.jpg, dropPN.png, 
 edit-traffic-type-vmware.jpg


 Observation:
 Setup: VMware
 1. While configuring a zone, during physical network creation, there is currently a single text field to specify the VLAN ID for the traffic, the traffic label name, and the switch type (vmwaresvs, vmwaredvs, nexusdvs).
 2. Because it is a free-form text field, there is a possibility of missing some of the parameters.
 3. While adding a cluster, we have an option to specify the traffic label name and a drop-down to select the switch type.
 This is a request to provide a drop-down to specify the VLAN, switch type, and traffic label name while configuring a zone (VMware). This will avoid a lot of confusion between zone-level and cluster-level configuration.
 It also simplifies the configuration process.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4190) [Object_store_refactor] volume should be deleted from staging storage after successful volume migration

2013-08-30 Thread Sanjeev N (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjeev N updated CLOUDSTACK-4190:
--

Attachment: management-server.rar
cloud.rar
cloud.dmp

Attached the latest management server log file, cloud.log from the SSVM, and the cloud DB dump.

 [Object_store_refactor] volume should be deleted from staging storage after 
 successful volume migration
 

 Key: CLOUDSTACK-4190
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4190
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller, Volumes
Affects Versions: 4.2.0
 Environment: Latest build from ACS 4.2 branch
 Storage: S3 for image store, NFS for secondary staging and primary storage
Reporter: Sanjeev N
Assignee: Min Chen
Priority: Critical
 Fix For: 4.2.0

 Attachments: cloud.dmp, cloud.dmp, cloud.rar, management-server.rar, 
 management-server.rar


 The volume copied to the secondary staging storage during volume migration should be deleted after migration.
 Steps to Reproduce:
 =
 1. Bring up CS with a Xen cluster using S3 for the image store and NFS for secondary staging and primary storage
 2. Deploy a guest VM using the default CentOS template with both Root and Data disks.
 3. Add another NFS-based primary storage to the cluster
 4. Detach the data disk from the VM
 5. Migrate the data disk to the primary storage created at step 3
 Result:
 ==
 Volume migration was successful, but the volume copied to the secondary staging storage during the migration process did not get deleted. Only the volume on the source primary storage got deleted.
 Observations:
 
 Following is the log snippet during volume migration:
 2013-08-08 08:34:49,766 DEBUG [cloud.api.ApiServlet] (catalina-exec-13:null) 
 ===START===  10.146.0.20 -- GET  
 command=migrateVolumestorageid=29a0c990-7100-3a8d-b570-ba9f84ca78bcvolumeid=e1eb0b93-3fba-4437-ad50-b12bf1d6f1efresponse=jsonsessionkey=qXfe5TLEOA5koD0qobFirCKKbOY%3D_=1375965252817
 2013-08-08 08:34:49,954 DEBUG [cloud.async.AsyncJobManagerImpl] 
 (catalina-exec-13:null) submit async job-44 = [ 
 3c1fd226-af63-47fe-9a5c-fc4770f1a6f5 ], details: AsyncJobVO {id:44, userId: 
 2, accountId: 2, sessionKey: null, instanceType: None, instanceId: null, cmd: 
 org.apache.cloudstack.api.command.user.volume.MigrateVolumeCmd, 
 cmdOriginator: null, cmdInfo: 
 {response:json,sessionkey:qXfe5TLEOA5koD0qobFirCKKbOY\u003d,cmdEventType:VOLUME.MIGRATE,ctxUserId:2,storageid:29a0c990-7100-3a8d-b570-ba9f84ca78bc,httpmethod:GET,volumeid:e1eb0b93-3fba-4437-ad50-b12bf1d6f1ef,_:1375965252817,ctxAccountId:2,ctxStartEventId:194},
  cmdVersion: 0, callbackType: 0, callbackAddress: null, status: 0, 
 processStatus: 0, resultCode: 0, result: null, initMsid: 6615759585382, 
 completeMsid: null, lastUpdated: null, lastPolled: null, created: null}
 2013-08-08 08:34:49,957 DEBUG [cloud.api.ApiServlet] (catalina-exec-13:null) 
 ===END===  10.146.0.20 -- GET  
 command=migrateVolumestorageid=29a0c990-7100-3a8d-b570-ba9f84ca78bcvolumeid=e1eb0b93-3fba-4437-ad50-b12bf1d6f1efresponse=jsonsessionkey=qXfe5TLEOA5koD0qobFirCKKbOY%3D_=1375965252817
 2013-08-08 08:34:49,961 DEBUG [cloud.async.AsyncJobManagerImpl] 
 (Job-Executor-45:job-44 = [ 3c1fd226-af63-47fe-9a5c-fc4770f1a6f5 ]) Executing 
 org.apache.cloudstack.api.command.user.volume.MigrateVolumeCmd for job-44 = [ 
 3c1fd226-af63-47fe-9a5c-fc4770f1a6f5 ]
 2013-08-08 08:34:50,032 DEBUG [storage.motion.AncientDataMotionStrategy] 
 (Job-Executor-45:job-44 = [ 3c1fd226-af63-47fe-9a5c-fc4770f1a6f5 ]) copyAsync 
 inspecting src type VOLUME copyAsync inspecting dest type VOLUME
 2013-08-08 08:34:50,061 DEBUG [storage.motion.AncientDataMotionStrategy] 
 (Job-Executor-45:job-44 = [ 3c1fd226-af63-47fe-9a5c-fc4770f1a6f5 ]) copyAsync 
 inspecting src type VOLUME copyAsync inspecting dest type VOLUME
 2013-08-08 08:34:50,083 DEBUG [agent.transport.Request] 
 (Job-Executor-45:job-44 = [ 3c1fd226-af63-47fe-9a5c-fc4770f1a6f5 ]) Seq 
 2-1303576995: Sending  { Cmd , MgmtId: 6615759585382, via: 2, Ver: v1, Flags: 
 100011, 
 

[jira] [Reopened] (CLOUDSTACK-4190) [Object_store_refactor] volume should be deleted from staging storage after successful volume migration

2013-08-30 Thread Sanjeev N (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjeev N reopened CLOUDSTACK-4190:
---


This issue is observed again in the latest build: deleting the volume that was copied to the staging storage during the Extract Volume operation failed.
Tried this scenario on VMware.

Log snippet from SSVM log:
2013-08-30 17:56:12,957 INFO  [vmware.mo.VirtualMachineMO] 
(agentRequest-Handler-5:null) volss: copy vmdk and ovf file starts 1377885372957
2013-08-30 17:56:12,958 INFO  [vmware.mo.HypervisorHostHelper] 
(agentRequest-Handler-5:null) Resolving host name in url through vCenter, url: 
https://10.147.40.13/nfc/5207a809-22f9-1196-9a18-4d74877a2bd4/disk-0.vmdk
2013-08-30 17:56:12,958 INFO  [vmware.mo.HypervisorHostHelper] 
(agentRequest-Handler-5:null) host name in url is already in IP address, url: 
https://10.147.40.13/nfc/5207a809-22f9-1196-9a18-4d74877a2bd4/disk-0.vmdk
2013-08-30 17:56:12,959 INFO  [vmware.mo.VirtualMachineMO] 
(agentRequest-Handler-5:null) Download VMDK file for export. url: 
https://10.147.40.13/nfc/5207a809-22f9-1196-9a18-4d74877a2bd4/disk-0.vmdk
2013-08-30 17:56:12,975 INFO  [vmware.util.VmwareContext] 
(agentRequest-Handler-5:null) Connected, conn: 
sun.net.www.protocol.https.DelegateHttpsURLConnection:https://10.147.40.13/nfc/5207a809-22f9-1196-9a18-4d74877a2bd4/disk-0.vmdk,
 retry: 0
2013-08-30 17:56:22,512 INFO  [storage.template.S3TemplateDownloader] 
(s3-transfer-manager-worker-1:null) download completed
2013-08-30 17:56:22,514 INFO  [storage.template.S3TemplateDownloader] 
(s3-transfer-manager-worker-1:null) download completed
2013-08-30 17:56:22,514 INFO  [storage.template.DownloadManagerImpl] 
(pool-1-thread-4:null) Download Completion for jobId: 
b9ae8901-5e24-4a0a-af8b-d4e54957e73f, status=DOWNLOAD_FINISHED
2013-08-30 17:59:28,805 INFO  [vmware.mo.VirtualMachineMO] 
(agentRequest-Handler-5:null) volss: copy vmdk and ovf file finishes 
1377885568805
2013-08-30 17:59:28,806 INFO  [vmware.mo.HttpNfcLeaseMO] 
(agentRequest-Handler-5:null) close ProgressReporter, interrupt reporter runner 
to let it quit
2013-08-30 17:59:28,806 INFO  [vmware.mo.HttpNfcLeaseMO] (Thread-13:null) 
ProgressReporter is interrupted, quiting
2013-08-30 17:59:28,837 INFO  [vmware.mo.HttpNfcLeaseMO] (Thread-13:null) 
ProgressReporter stopped
2013-08-30 17:59:35,540 DEBUG [cloud.agent.Agent] (agentRequest-Handler-5:null) 
Seq 4-453574766:  { Ans: , MgmtId: 6615759585382, via: 4, Ver: v1, Flags: 110, 
[{org.apache.cloudstack.storage.command.CopyCmdAnswer:{newData:{org.apache.cloudstack.storage.to.VolumeObjectTO:{path:volumes/2/8/4c7e8fc27a134a039b317bf0aa9382a5,accountId:0,id:0}},result:true,wait:0}}]
 }
2013-08-30 17:59:35,652 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) 
Request:Seq 4-453574770:  { Cmd , MgmtId: 6615759585382, via: 4, Ver: v1, 
Flags: 100111, 
[{org.apache.cloudstack.storage.command.CopyCommand:{srcTO:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid:3c4c5331-4764-4a13-8a96-cc4893bf6361,volumeType:DATADISK,dataStore:{com.cloud.agent.api.to.NfsTO:{_url:nfs://10.147.28.7/export/home/sanjeev/sec_xen_os,_role:ImageCache}},name:test,size:5368709120,path:volumes/2/8/4c7e8fc27a134a039b317bf0aa9382a5,volumeId:8,accountId:2,format:OVA,id:8,hypervisorType:VMware}},destTO:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid:3c4c5331-4764-4a13-8a96-cc4893bf6361,volumeType:DATADISK,dataStore:{com.cloud.agent.api.to.S3TO:{id:2,uuid:6074c28c-9ae5-43be-a45e-89489efe887c,endPoint:10.147.29.56:8080,bucketName:imagestore,httpsFlag:false,created:Aug
 30, 2013 10:32:03 
AM,enableRRS:false}},name:test,size:5368709120,path:volumes/2/8,volumeId:8,accountId:2,format:OVA,id:8,hypervisorType:VMware}},executeInSequence:true,wait:10800}}]
 }
2013-08-30 17:59:35,652 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) 
Processing command: org.apache.cloudstack.storage.command.CopyCommand
2013-08-30 17:59:35,660 DEBUG [vmware.manager.VmwareStorageManagerImpl] 
(agentRequest-Handler-1:null) Executing: sudo sync
2013-08-30 17:59:35,995 DEBUG [vmware.manager.VmwareStorageManagerImpl] 
(agentRequest-Handler-1:null) Execution is successful.
2013-08-30 17:59:35,995 INFO  [vmware.manager.VmwareStorageManagerImpl] 
(agentRequest-Handler-1:null) Package OVA with commmand: tar -cf 
4c7e8fc27a134a039b317bf0aa9382a5.ova 4c7e8fc27a134a039b317bf0aa9382a5.ovf 
4c7e8fc27a134a039b317bf0aa9382a5-disk0.vmdk
2013-08-30 17:59:35,995 DEBUG [vmware.manager.VmwareStorageManagerImpl] 
(agentRequest-Handler-1:null) Executing: tar -cf 
4c7e8fc27a134a039b317bf0aa9382a5.ova 4c7e8fc27a134a039b317bf0aa9382a5.ovf 
4c7e8fc27a134a039b317bf0aa9382a5-disk0.vmdk
2013-08-30 17:59:36,068 DEBUG [vmware.manager.VmwareStorageManagerImpl] 
(agentRequest-Handler-1:null) Execution is successful.
2013-08-30 17:59:36,079 DEBUG [cloud.utils.S3Utils] 
(agentRequest-Handler-1:null) Sending file 4c7e8fc27a134a039b317bf0aa9382a5.ova 
as S3 object 

[jira] [Updated] (CLOUDSTACK-4475) [ZWPS] attaching an uploaded volume to a VM is always going to first primary storage added

2013-08-30 Thread Ram Ganesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ram Ganesh updated CLOUDSTACK-4475:
---

Labels: ReleaseNote  (was: )

 [ZWPS] attaching an uploaded volume to a VM is always going to first primary 
 storage added
 --

 Key: CLOUDSTACK-4475
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4475
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller
Affects Versions: 4.2.1
 Environment: vmware esxi 5.1
Reporter: Srikanteswararao Talluri
Assignee: edison su
  Labels: ReleaseNote
 Fix For: 4.2.1


 Steps to reproduce:
 ==
 1. Have an advanced zone deployment with two clusters, each with one host and one cluster-scoped primary storage
 2. Add two more zone-wide primary storages
 3. Create a deployment on zone-scoped primary storage
 4. Upload a volume.
 5. Attach the uploaded volume to the VM created in step 3.
 Observation:
 =
 While attaching the volume, the volume is always copied to the first available primary storage in the storage_pool table; as a result, attaching a volume created on cluster-scoped primary storage to a VM whose root volume is on zone-wide primary storage fails.
 mysql> select * from storage_pool;
 ++--+--+---+--++++---++--+---++-+-+-+-+---+-++-+---+
 | id | name | uuid | pool_type
  | port | data_center_id | pod_id | cluster_id | used_bytes| 
 capacity_bytes | host_address | user_info | path  
  | created | removed | update_time | status  | 
 storage_provider_name | scope   | hypervisor | managed | capacity_iops |
 ++--+--+---+--++++---++--+---++-+-+-+-+---+-++-+---+
 |  1 | primaryclus1 | 722e6181-8497-3d31-9933-a0a267ae376c | 
 NetworkFilesystem | 2049 |  1 |  1 |  1 | 
 1678552014848 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/primary  | 2013-08-23 12:11:12 | NULL   
  | NULL| Maintenance | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  2 | pimaryclu2   | 9fd9b0fc-c9fd-39b8-8d66-06372c5ff6d2 | 
 NetworkFilesystem | 2049 |  1 |  1 |  2 | 
 1676566495232 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus1primary | 2013-08-23 12:18:14 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  3 | clus1p2  | 22e0c3fe-a390-38fa-8ff7-e1d965a36309 | 
 NetworkFilesystem | 2049 |  1 |  1 |  1 | 
 1660903886848 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus1p2  | 2013-08-23 14:30:32 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  4 | clus1p3  | f2d9fb6b-c433-3c03-acf8-8f73eac48fae | 
 NetworkFilesystem | 2049 |  1 |  1 |  1 | 
 1660901400576 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus1p3  | 2013-08-23 14:31:05 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  5 | clus2p2  | 13bf579c-51f3-317b-893a-98ff6ca8f486 | 
 NetworkFilesystem | 2049 |  1 |  1 |  2 | 
 1660900147200 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus2p2  | 2013-08-23 14:31:38 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  7 | clus2p3  | 294ae9ff-cb02-33a0-8f31-21fdd8ff34db | 
 NetworkFilesystem | 2049 |  1 |  1 |  2 | 
 1660894195712 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus2p3  | 2013-08-23 14:33:03 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  8 | z1   | 

[jira] [Updated] (CLOUDSTACK-4475) [ZWPS] attaching an uploaded volume to a VM is always going to first primary storage added

2013-08-30 Thread Ram Ganesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ram Ganesh updated CLOUDSTACK-4475:
---

Component/s: Doc

 [ZWPS] attaching an uploaded volume to a VM is always going to first primary 
 storage added
 --

 Key: CLOUDSTACK-4475
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4475
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc, Storage Controller
Affects Versions: 4.2.1
 Environment: vmware esxi 5.1
Reporter: Srikanteswararao Talluri
Assignee: edison su
  Labels: ReleaseNote
 Fix For: 4.2.1


 Steps to reproduce:
 ==
 1. Have an advanced zone deployment with two clusters, each with one host and one cluster-scoped primary storage
 2. Add two more zone-wide primary storages
 3. Create a deployment on zone-scoped primary storage
 4. Upload a volume.
 5. Attach the uploaded volume to the VM created in step 3.
 Observation:
 =
 While attaching the volume, the volume is always copied to the first available primary storage in the storage_pool table; as a result, attaching a volume created on cluster-scoped primary storage to a VM whose root volume is on zone-wide primary storage fails.
 mysql> select * from storage_pool;
 ++--+--+---+--++++---++--+---++-+-+-+-+---+-++-+---+
 | id | name | uuid | pool_type
  | port | data_center_id | pod_id | cluster_id | used_bytes| 
 capacity_bytes | host_address | user_info | path  
  | created | removed | update_time | status  | 
 storage_provider_name | scope   | hypervisor | managed | capacity_iops |
 ++--+--+---+--++++---++--+---++-+-+-+-+---+-++-+---+
 |  1 | primaryclus1 | 722e6181-8497-3d31-9933-a0a267ae376c | 
 NetworkFilesystem | 2049 |  1 |  1 |  1 | 
 1678552014848 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/primary  | 2013-08-23 12:11:12 | NULL   
  | NULL| Maintenance | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  2 | pimaryclu2   | 9fd9b0fc-c9fd-39b8-8d66-06372c5ff6d2 | 
 NetworkFilesystem | 2049 |  1 |  1 |  2 | 
 1676566495232 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus1primary | 2013-08-23 12:18:14 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  3 | clus1p2  | 22e0c3fe-a390-38fa-8ff7-e1d965a36309 | 
 NetworkFilesystem | 2049 |  1 |  1 |  1 | 
 1660903886848 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus1p2  | 2013-08-23 14:30:32 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  4 | clus1p3  | f2d9fb6b-c433-3c03-acf8-8f73eac48fae | 
 NetworkFilesystem | 2049 |  1 |  1 |  1 | 
 1660901400576 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus1p3  | 2013-08-23 14:31:05 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  5 | clus2p2  | 13bf579c-51f3-317b-893a-98ff6ca8f486 | 
 NetworkFilesystem | 2049 |  1 |  1 |  2 | 
 1660900147200 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus2p2  | 2013-08-23 14:31:38 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  7 | clus2p3  | 294ae9ff-cb02-33a0-8f31-21fdd8ff34db | 
 NetworkFilesystem | 2049 |  1 |  1 |  2 | 
 1660894195712 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus2p3  | 2013-08-23 14:33:03 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  8 | z1   | 

[jira] [Updated] (CLOUDSTACK-4570) Doc: service cloud-management wrongly named

2013-08-30 Thread Sudha Ponnaganti (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudha Ponnaganti updated CLOUDSTACK-4570:
-

Summary: Doc: service cloud-management wrongly named  (was: service 
cloud-management wrongly named)

 Doc: service cloud-management wrongly named
 ---

 Key: CLOUDSTACK-4570
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4570
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.2.0
Reporter: Pavan Kumar Bandarupally
Priority: Critical
 Fix For: 4.2.0


 4.2.6 LDAP User Authentication: Limitation
 "service cloud-management restart" should be changed to "service cloudstack-management restart"; the corrected command is repeated below.
 Apart from that, there is a minor spelling mistake in section 3.7, About Secondary Storage.
 In the last-but-one paragraph of that section, "Swift" is misspelled as "Swoft":
 "The NFS storage in each zone acts as a staging area through which all templates and other secondary storage data pass before being forwarded to Swoft or S3"
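 For clarity, the corrected command as suggested in this report, assuming the standard 4.2 packaging where the management server service is named cloudstack-management:
 # Run on the management server host.
 service cloudstack-management restart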

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (CLOUDSTACK-4539) [VMWARE] vmware.create.full.clone is set to true in upgraded setup;default nature of vms are full clone

2013-08-30 Thread Chandan Purushothama (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandan Purushothama closed CLOUDSTACK-4539.



Verified it with 4.2 Build:

4.2 Upgraded Setup:

mysql> select * from configuration where name like '%clone%';
+--+--+---+--+---+--+
| category | instance | component | name | value | 
description  |
+--+--+---+--+---+--+
| Advanced | DEFAULT  | UserVmManager | vmware.create.full.clone | false | If 
set to true, creates VMs as full clones on ESX hypervisor |
+--+--+---+--+---+--+
1 row in set (0.00 sec)

mysql> select * from version;
++--+-+--+
| id | version  | updated | step |
++--+-+--+
|  1 | 3.0.6.20121222035904 | 2013-08-27 15:39:16 | Complete |
|  2 | 3.0.7| 2013-08-29 23:59:19 | Complete |
|  3 | 4.1.0| 2013-08-29 23:59:19 | Complete |
|  4 | 4.2.0| 2013-08-29 23:59:19 | Complete |
++--+-+--+
4 rows in set (0.01 sec)


4.2 Fresh Installation:

mysql> select * from configuration where name like '%clone%';
+--+--+---+--+---+--+
| category | instance | component | name | value | 
description  |
+--+--+---+--+---+--+
| Advanced | DEFAULT  | UserVmManager | vmware.create.full.clone | true  | If 
set to true, creates VMs as full clones on ESX hypervisor |
+--+--+---+--+---+--+
1 row in set (0.00 sec)

mysql> select * from version;
++-+-+--+
| id | version | updated | step |
++-+-+--+
|  1 | 4.0.0   | 2013-08-30 09:38:46 | Complete |
|  2 | 4.1.0   | 2013-08-30 13:41:09 | Complete |
|  3 | 4.2.0   | 2013-08-30 13:41:09 | Complete |
++-+-+--+
3 rows in set (0.00 sec)


 [VMWARE] vmware.create.full.clone is set to true in upgraded setup;default 
 nature of vms are full clone
 ---

 Key: CLOUDSTACK-4539
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4539
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: VMware
Affects Versions: 4.2.0
Reporter: prashant kumar mishra
Assignee: Venkata Siva Vijayendra Bhamidipati
Priority: Critical
 Fix For: 4.2.0, 4.2.1

 Attachments: Logs_DB.rar


 In an upgraded setup, VMs should get deployed as linked clones; the default value of the global parameter vmware.create.full.clone should be false after the upgrade. An example of checking and correcting the setting is sketched below.
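 As an illustration, a hedged sketch for checking and, if desired, correcting the global setting named in this report; the database credentials and management-server address are placeholders, and a real API request must be authenticated.
 # Verify the current value (mirrors the verification query used in the comment above):
 mysql -u cloud -p cloud -e "select name, value from configuration where name = 'vmware.create.full.clone';"
 # Set it back to false if the upgrade left it at true (placeholder endpoint, unauthenticated form shown for brevity):
 curl "http://<management-server>:8080/client/api?command=updateConfiguration&name=vmware.create.full.clone&value=false"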

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4558) Add network offering UI problem

2013-08-30 Thread Marty Sweet (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marty Sweet updated CLOUDSTACK-4558:


Component/s: UI

 Add network offering UI problem
 ---

 Key: CLOUDSTACK-4558
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4558
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.2.0
 Environment: MacOS 10.8.4 / Macbook Pro
Reporter: Soheil Eizadi
Priority: Minor

 I have an overlapping window problem on a modal dialog box that is shown.
 The problem happens on Safari, Firefox, and Chrome on MacOS. I tried to reproduce the problem on Win7 with IE, but it worked fine there.
 The window overlaps only if you create two offerings back to back: I create one, enable the service, go back, and create another one, and then I have the problem.
 I had seen this UI problem when I was working on 4.2, and now that I have merged to the latest trunk on 4.3, I still see this problem.
 Here is a snapshot of the problem:
 https://sites.google.com/site/opencloudstack/_/rsrc/1377725503233/home/Screen%20Shot%202013-08-28%20at%209.46.29%20AM.png

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4362) VM's are failing to start after its DATA volume is migrated to other primary storage

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754980#comment-13754980
 ] 

ASF subversion and git services commented on CLOUDSTACK-4362:
-

Commit e362f51f37b718466f2d80d9193e58e1fafcb8fb in branch 
refs/heads/4.2-forward from [~kelveny]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=e362f51 ]

CLOUDSTACK-4362: always honor vCenter on-disk meta data to work with live 
migration better


 VM's are failing to start after its DATA volume is migrated to other primary 
 storage 
 -

 Key: CLOUDSTACK-4362
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4362
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller, VMware
Affects Versions: 4.2.0
Reporter: Sailaja Mada
Assignee: Sateesh Chodapuneedi
Priority: Critical
 Fix For: 4.2.0

 Attachments: alllogs.rar, apilog.log, apilog.log, db1.sql, 
 management-server.log, management-server.log, ssvmlogs.rar


 Steps:
 1. Configure an Adv zone with 2 zone-wide primary storages
 2. Create a new account and deploy an instance using this account
 3. Add a new DATA volume and attach it to this instance
 4. Resize this volume from 5 GB to 7 GB
 5. As admin, migrate this volume from Storage 1 to Storage 2 (zone-wide primary)
 6. Stop and start this instance
 Observation:
 VMs are failing to start after their DATA volume is migrated to the second zone-wide primary storage
 2013-08-16 12:04:48,465 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Checking 
 if we need to prepare 2 volumes for VM[User|inst2]
 2013-08-16 12:04:48,471 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Mismatch 
 in storage pool Pool[1|NetworkFilesystem] assigned by deploymentPlanner and 
 the one associated with volume Vol[14|vm=5|DATADISK]
 2013-08-16 12:04:48,471 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Shared 
 volume Vol[14|vm=5|DATADISK] will be migrated on storage pool 
 Pool[1|NetworkFilesystem] assigned by deploymentPlanner
 2013-08-16 12:04:48,524 DEBUG [storage.motion.AncientDataMotionStrategy] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) copyAsync 
 inspecting src type VOLUME copyAsync inspecting dest type VOLUME
 2013-08-16 12:04:48,528 DEBUG [cache.allocator.StorageCacheRandomAllocator] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Can't 
 find staging storage in zone: 1
 2013-08-16 12:04:48,591 DEBUG [agent.transport.Request] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Seq 
 3-1725105201: Sending  { Cmd , MgmtId: 187767034175903, via: 3, Ver: v1, 
 Flags: 100011, 
 [{org.apache.cloudstack.storage.command.CopyCommand:{srcTO:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid:b55516ba-da2e-454a-8cee-7d1a927ce25a,volumeType:DATADISK,dataStore:{org.apache.cloudstack.storage.to.PrimaryDataStoreTO:{uuid:b33e996a-444e-3685-9070-0865067454c4,id:2,poolType:NetworkFilesystem,host:10.102.192.100,path:/cpg_vol/sailaja/sailajaps2,port:2049}},name:new32,size:7516192768,path:a04a400289624aad99ab92e7c089343d,volumeId:14,vmName:i-3-5-VM,accountId:3,format:OVA,id:14,hypervisorType:VMware}},destTO:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid:b55516ba-da2e-454a-8cee-7d1a927ce25a,volumeType:DATADISK,dataStore:{com.cloud.agent.api.to.NfsTO:{_url:nfs://10.102.192.100/cpg_vol/sailaja/sailajass1,_role:Image}},name:new32,size:7516192768,path:volumes/3/14,volumeId:14,vmName:i-3-5-VM,accountId:3,format:OVA,id:14,hypervisorType:VMware}},executeInSequence:false,wait:10800}}]
  }
 2013-08-16 12:04:48,700 DEBUG [agent.transport.Request] 
 (AgentManager-Handler-1:null) Seq 3-1725105201: Processing:  { Ans: , MgmtId: 
 187767034175903, via: 3, Ver: v1, Flags: 10, 
 [{org.apache.cloudstack.storage.command.CopyCmdAnswer:{result:false,details:copy
  volume from primary to secondary failed due to exception: Exception: 
 java.lang.NullPointerException\nMessage: null\n,wait:0}}] }
 2013-08-16 12:04:48,702 DEBUG [agent.transport.Request] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Seq 
 3-1725105201: Received:  { Ans: , MgmtId: 187767034175903, via: 3, Ver: v1, 
 Flags: 10, { CopyCmdAnswer } }
 2013-08-16 12:04:48,706 DEBUG [storage.motion.AncientDataMotionStrategy] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) copy to 
 image store failed: copy volume from primary to secondary failed due to 
 exception: Exception: 

[jira] [Resolved] (CLOUDSTACK-4362) VM's are failing to start after its DATA volume is migrated to other primary storage

2013-08-30 Thread Kelven Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kelven Yang resolved CLOUDSTACK-4362.
-

Resolution: Fixed

 VM's are failing to start after its DATA volume is migrated to other primary 
 storage 
 -

 Key: CLOUDSTACK-4362
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4362
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller, VMware
Affects Versions: 4.2.0
Reporter: Sailaja Mada
Assignee: Sateesh Chodapuneedi
Priority: Critical
 Fix For: 4.2.0

 Attachments: alllogs.rar, apilog.log, apilog.log, db1.sql, 
 management-server.log, management-server.log, ssvmlogs.rar


 Steps:
 1. Configure an Adv zone with 2 zone-wide primary storages
 2. Create a new account and deploy an instance using this account
 3. Add a new DATA volume and attach it to this instance
 4. Resize this volume from 5 GB to 7 GB
 5. As admin, migrate this volume from Storage 1 to Storage 2 (zone-wide primary)
 6. Stop and start this instance
 Observation:
 VMs are failing to start after their DATA volume is migrated to the second zone-wide primary storage
 2013-08-16 12:04:48,465 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Checking 
 if we need to prepare 2 volumes for VM[User|inst2]
 2013-08-16 12:04:48,471 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Mismatch 
 in storage pool Pool[1|NetworkFilesystem] assigned by deploymentPlanner and 
 the one associated with volume Vol[14|vm=5|DATADISK]
 2013-08-16 12:04:48,471 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Shared 
 volume Vol[14|vm=5|DATADISK] will be migrated on storage pool 
 Pool[1|NetworkFilesystem] assigned by deploymentPlanner
 2013-08-16 12:04:48,524 DEBUG [storage.motion.AncientDataMotionStrategy] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) copyAsync 
 inspecting src type VOLUME copyAsync inspecting dest type VOLUME
 2013-08-16 12:04:48,528 DEBUG [cache.allocator.StorageCacheRandomAllocator] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Can't 
 find staging storage in zone: 1
 2013-08-16 12:04:48,591 DEBUG [agent.transport.Request] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Seq 
 3-1725105201: Sending  { Cmd , MgmtId: 187767034175903, via: 3, Ver: v1, 
 Flags: 100011, 
 [{org.apache.cloudstack.storage.command.CopyCommand:{srcTO:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid:b55516ba-da2e-454a-8cee-7d1a927ce25a,volumeType:DATADISK,dataStore:{org.apache.cloudstack.storage.to.PrimaryDataStoreTO:{uuid:b33e996a-444e-3685-9070-0865067454c4,id:2,poolType:NetworkFilesystem,host:10.102.192.100,path:/cpg_vol/sailaja/sailajaps2,port:2049}},name:new32,size:7516192768,path:a04a400289624aad99ab92e7c089343d,volumeId:14,vmName:i-3-5-VM,accountId:3,format:OVA,id:14,hypervisorType:VMware}},destTO:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid:b55516ba-da2e-454a-8cee-7d1a927ce25a,volumeType:DATADISK,dataStore:{com.cloud.agent.api.to.NfsTO:{_url:nfs://10.102.192.100/cpg_vol/sailaja/sailajass1,_role:Image}},name:new32,size:7516192768,path:volumes/3/14,volumeId:14,vmName:i-3-5-VM,accountId:3,format:OVA,id:14,hypervisorType:VMware}},executeInSequence:false,wait:10800}}]
  }
 2013-08-16 12:04:48,700 DEBUG [agent.transport.Request] 
 (AgentManager-Handler-1:null) Seq 3-1725105201: Processing:  { Ans: , MgmtId: 
 187767034175903, via: 3, Ver: v1, Flags: 10, 
 [{org.apache.cloudstack.storage.command.CopyCmdAnswer:{result:false,details:copy
  volume from primary to secondary failed due to exception: Exception: 
 java.lang.NullPointerException\nMessage: null\n,wait:0}}] }
 2013-08-16 12:04:48,702 DEBUG [agent.transport.Request] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Seq 
 3-1725105201: Received:  { Ans: , MgmtId: 187767034175903, via: 3, Ver: v1, 
 Flags: 10, { CopyCmdAnswer } }
 2013-08-16 12:04:48,706 DEBUG [storage.motion.AncientDataMotionStrategy] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) copy to 
 image store failed: copy volume from primary to secondary failed due to 
 exception: Exception: java.lang.NullPointerException
 Message: null
 2013-08-16 12:04:48,728 DEBUG [storage.image.BaseImageStoreDriverImpl] 
 (Job-Executor-37:job-74 = [ 039d4ba9-0507-4dea-b658-5a784fdc0588 ]) Unable to 
 destoy VOLUME: 14
 java.lang.NullPointerException
 at 
 org.apache.cloudstack.storage.volume.VolumeObject.getPath(VolumeObject.java:338)

[jira] [Commented] (CLOUDSTACK-4190) [Object_store_refactor] volume should be deleted from staging storage after successful volume migration

2013-08-30 Thread Sanjeev N (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754987#comment-13754987
 ] 

Sanjeev N commented on CLOUDSTACK-4190:
---

Also, storage artifacts like snapshots and templates are not getting deleted from the staging storage when a template is created from a snapshot.

The following actions take place when a template is created from a snapshot:
1. The snapshot gets downloaded from S3 to the snapshots directory in the NFS staging storage
2. Then it gets copied to the template directory in the NFS staging storage
3. Finally it is uploaded to S3
After a successful operation, the snapshot and template copied to the staging storage (in steps 1 and 2) should be deleted; a sketch for checking the leftovers follows below.
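As a minimal sketch, one way to check for the leftover artifacts described above; the mount point is hypothetical, the NFS export is taken from the SSVM log earlier in this comment thread, and the snapshots/ and template/ layout is the conventional staging-store layout mentioned above.

# Mount the staging store and look for files that should have been cleaned up.
mkdir -p /mnt/staging
mount -t nfs 10.147.28.7:/export/home/sanjeev/sec_xen_os /mnt/staging
ls -lR /mnt/staging/snapshots   # snapshot copy from step 1 should be gone after the operation
ls -lR /mnt/staging/template    # template copy from step 2 should be gone as well
umount /mnt/staging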


 [Object_store_refactor] volume should be deleted from staging storage after 
 successful volume migration
 

 Key: CLOUDSTACK-4190
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4190
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller, Volumes
Affects Versions: 4.2.0
 Environment: Latest build from ACS 4.2 branch
 Storage: S3 for image store, NFS for secondary staging and primary storage
Reporter: Sanjeev N
Assignee: Min Chen
Priority: Critical
 Fix For: 4.2.0

 Attachments: cloud.dmp, cloud.dmp, cloud.rar, management-server.rar, 
 management-server.rar


 The volume copied to the secondary staging storage during volume migration should be deleted after migration.
 Steps to Reproduce:
 =
 1. Bring up CS with a Xen cluster using S3 for the image store and NFS for secondary staging and primary storage
 2. Deploy a guest VM using the default CentOS template with both Root and Data disks.
 3. Add another NFS-based primary storage to the cluster
 4. Detach the data disk from the VM
 5. Migrate the data disk to the primary storage created at step 3
 Result:
 ==
 Volume migration was successful, but the volume copied to the secondary staging storage during the migration process did not get deleted. Only the volume on the source primary storage got deleted.
 Observations:
 
 Following is the log snippet during volume migration:
 2013-08-08 08:34:49,766 DEBUG [cloud.api.ApiServlet] (catalina-exec-13:null) 
 ===START===  10.146.0.20 -- GET  
 command=migrateVolumestorageid=29a0c990-7100-3a8d-b570-ba9f84ca78bcvolumeid=e1eb0b93-3fba-4437-ad50-b12bf1d6f1efresponse=jsonsessionkey=qXfe5TLEOA5koD0qobFirCKKbOY%3D_=1375965252817
 2013-08-08 08:34:49,954 DEBUG [cloud.async.AsyncJobManagerImpl] 
 (catalina-exec-13:null) submit async job-44 = [ 
 3c1fd226-af63-47fe-9a5c-fc4770f1a6f5 ], details: AsyncJobVO {id:44, userId: 
 2, accountId: 2, sessionKey: null, instanceType: None, instanceId: null, cmd: 
 org.apache.cloudstack.api.command.user.volume.MigrateVolumeCmd, 
 cmdOriginator: null, cmdInfo: 
 {response:json,sessionkey:qXfe5TLEOA5koD0qobFirCKKbOY\u003d,cmdEventType:VOLUME.MIGRATE,ctxUserId:2,storageid:29a0c990-7100-3a8d-b570-ba9f84ca78bc,httpmethod:GET,volumeid:e1eb0b93-3fba-4437-ad50-b12bf1d6f1ef,_:1375965252817,ctxAccountId:2,ctxStartEventId:194},
  cmdVersion: 0, callbackType: 0, callbackAddress: null, status: 0, 
 processStatus: 0, resultCode: 0, result: null, initMsid: 6615759585382, 
 completeMsid: null, lastUpdated: null, lastPolled: null, created: null}
 2013-08-08 08:34:49,957 DEBUG [cloud.api.ApiServlet] (catalina-exec-13:null) 
 ===END===  10.146.0.20 -- GET  
 command=migrateVolumestorageid=29a0c990-7100-3a8d-b570-ba9f84ca78bcvolumeid=e1eb0b93-3fba-4437-ad50-b12bf1d6f1efresponse=jsonsessionkey=qXfe5TLEOA5koD0qobFirCKKbOY%3D_=1375965252817
 2013-08-08 08:34:49,961 DEBUG [cloud.async.AsyncJobManagerImpl] 
 (Job-Executor-45:job-44 = [ 3c1fd226-af63-47fe-9a5c-fc4770f1a6f5 ]) Executing 
 org.apache.cloudstack.api.command.user.volume.MigrateVolumeCmd for job-44 = [ 
 3c1fd226-af63-47fe-9a5c-fc4770f1a6f5 ]
 2013-08-08 08:34:50,032 DEBUG [storage.motion.AncientDataMotionStrategy] 
 (Job-Executor-45:job-44 = [ 3c1fd226-af63-47fe-9a5c-fc4770f1a6f5 ]) copyAsync 
 inspecting src type VOLUME copyAsync inspecting dest type VOLUME
 2013-08-08 08:34:50,061 DEBUG [storage.motion.AncientDataMotionStrategy] 
 (Job-Executor-45:job-44 = [ 3c1fd226-af63-47fe-9a5c-fc4770f1a6f5 ]) copyAsync 
 inspecting src type VOLUME copyAsync inspecting dest type VOLUME
 2013-08-08 08:34:50,083 DEBUG [agent.transport.Request] 
 (Job-Executor-45:job-44 = [ 3c1fd226-af63-47fe-9a5c-fc4770f1a6f5 ]) Seq 
 2-1303576995: Sending  { Cmd , MgmtId: 6615759585382, via: 2, Ver: v1, Flags: 
 100011, 
 

[jira] [Commented] (CLOUDSTACK-3556) Add NIC icon is not appearing in UI

2013-08-30 Thread Marty Sweet (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755033#comment-13755033
 ] 

Marty Sweet commented on CLOUDSTACK-3556:
-

I have never seen this feature in the CloudStack UI, although I know it is possible 
via the API.
Could you give more details about this issue? Was the 'Add NIC' icon already 
there in 4.0.2?
Marty

 Add NIC icon is not appearing in UI
 ---

 Key: CLOUDSTACK-3556
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3556
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: KVM, UI
Affects Versions: 4.1.0, 4.1.1
 Environment: Ubuntu (KVM)
Reporter: Raafat Mhamed
Priority: Blocker

 After upgrading from version 4.0.2 to version 4.1, I can't see the Add NIC 
 icon in the interface tab.
 See the image at http://postimg.org/image/ekb5olse9/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4574) MS:NPE: Unexpected exception while executing org.apache.cloudstack.api.command.user.vm.DestroyVMCmd java.lang.NullPointerException

2013-08-30 Thread Parth Jagirdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Jagirdar updated CLOUDSTACK-4574:
---

Attachment: destroyVM.txt

 MS:NPE: Unexpected exception while executing 
 org.apache.cloudstack.api.command.user.vm.DestroyVMCmd 
 java.lang.NullPointerException
 --

 Key: CLOUDSTACK-4574
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4574
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: API, Management Server
Affects Versions: 4.2.1
 Environment: VMWare
Reporter: Parth Jagirdar
 Attachments: destroyVM.txt


 NPE while attempting to destroy a VM.
 The VM is destroyed successfully, but an NPE is raised in the logs.
 Logs attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CLOUDSTACK-4574) MS:NPE: Unexpected exception while executing org.apache.cloudstack.api.command.user.vm.DestroyVMCmd java.lang.NullPointerException

2013-08-30 Thread Parth Jagirdar (JIRA)
Parth Jagirdar created CLOUDSTACK-4574:
--

 Summary: MS:NPE: Unexpected exception while executing 
org.apache.cloudstack.api.command.user.vm.DestroyVMCmd 
java.lang.NullPointerException
 Key: CLOUDSTACK-4574
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4574
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: API, Management Server
Affects Versions: 4.2.1
 Environment: VMWare
Reporter: Parth Jagirdar
 Attachments: destroyVM.txt

NPE while attempting to destroy a VM.

The VM is destroyed successfully, but an NPE is raised in the logs.

Logs attached.






--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4089) Provide a drop down to specify VLAN,Switch type, Traffic label name while configuring Zone(VMWARE)

2013-08-30 Thread Jessica Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754943#comment-13754943
 ] 

Jessica Wang commented on CLOUDSTACK-4089:
--

From: Sateesh Chodapuneedi 
Sent: Friday, August 30, 2013 10:31 AM
To: Jessica Wang
Cc: Ram Ganesh
Subject: RE: If vmware.use.dvswitch is false and vmware.use.nexus.vswitch 
is true, then what is default vswitch type? (CLOUDSTACK-4089 Provide a drop 
down to specify VLAN,Switch type, Traffic label name while configuring Zone)

Hi Jessica,

If vmware.use.dvswitch is false and vmware.use.nexus.vswitch is true, then 
what is the default vswitch type?

If vmware.use.dvswitch is false, then the vmware.use.nexus.vswitch flag should be 
ignored. In that case, the default vswitch is the standard vswitch.

Regards,
Sateesh
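
A minimal sketch of the precedence described above (illustrative only; the enum and method are not the actual CloudStack code, and the two boolean flags stand for the vmware.use.dvswitch and vmware.use.nexus.vswitch global settings):

// Illustrative-only sketch of the precedence Sateesh describes; the real
// resolution lives in the VMware-specific manager code and differs in detail.
public class DefaultVswitchSketch {

    enum VirtualSwitchType { VMWARE_STANDARD_VSWITCH, VMWARE_DVSWITCH, NEXUS_DVSWITCH }

    static VirtualSwitchType defaultSwitchType(boolean useDvSwitch, boolean useNexusVswitch) {
        if (!useDvSwitch) {
            // vmware.use.dvswitch=false: vmware.use.nexus.vswitch is ignored,
            // the default is the standard vswitch (traffic label type vmwaresvs).
            return VirtualSwitchType.VMWARE_STANDARD_VSWITCH;
        }
        // vmware.use.dvswitch=true: the nexus flag picks between nexusdvs and vmwaredvs.
        return useNexusVswitch ? VirtualSwitchType.NEXUS_DVSWITCH : VirtualSwitchType.VMWARE_DVSWITCH;
    }

    public static void main(String[] args) {
        // The case asked about in the mail: dvswitch=false, nexus=true -> standard vswitch.
        System.out.println(defaultSwitchType(false, true));
    }
}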


 Provide a drop down to specify VLAN,Switch type, Traffic label name while 
 configuring Zone(VMWARE)
 --

 Key: CLOUDSTACK-4089
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4089
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: UI
Affects Versions: 4.2.0
Reporter: Sailaja Mada
Assignee: Jessica Wang
 Fix For: 4.2.0

 Attachments: 2013-08-06-A.jpg, 
 addClusterCmd_wrongDuplicateParameterNames.jpg, dropPN.png, 
 edit-traffic-type-vmware.jpg


 Observation:
 Setup: VMware
 1. While configuring a zone, during physical network creation, there is currently 
 a text field to specify the VLAN ID for the traffic, the traffic label 
 name, and the switch type (vmwaresvs, vmwaredvs, nexusdvs).
 2. Because it is a free-form text field, there is a possibility of missing some of the 
 parameters.
 3. While adding a cluster, we have an option to specify the traffic label name 
 and a drop-down to select the switch type.
 This is a request to provide a drop-down to specify the VLAN, switch type, and 
 traffic label name while configuring a zone (VMware). This will avoid a lot of 
 confusion between zone-level and cluster-level configuration.
 It also simplifies the configuration process.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-4475) [ZWPS] attaching an uploaded volume to a VM is always going to first primary storage added

2013-08-30 Thread edison su (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

edison su updated CLOUDSTACK-4475:
--

Status: Open  (was: Ready To Review)

 [ZWPS] attaching an uploaded volume to a VM is always going to first primary 
 storage added
 --

 Key: CLOUDSTACK-4475
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4475
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller
Affects Versions: 4.2.1
 Environment: vmware esxi 5.1
Reporter: Srikanteswararao Talluri
Assignee: edison su
 Fix For: 4.2.1


 Steps to reproduce:
 ==
 1. Have an advanced zone deployment with two clusters, each with one host and one 
 cluster-scoped primary storage.
 2. Add two more zone-wide primary storages.
 3. Create a deployment on zone-wide primary storage.
 4. Upload a volume.
 5. Attach the uploaded volume to the VM created in step 3.
 Observation:
 =
 While attaching the volume, the volume is always copied to the first available primary 
 storage in the storage_pool table. As a result, attaching a volume created 
 on cluster-scoped primary storage to a VM with its root volume on zone-wide 
 primary storage fails.
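
 For illustration, here is a minimal sketch of the kind of scope-aware pool selection the attach path would need instead of taking the first storage_pool row; the types and fields are hypothetical, and the sample pool names mirror the table below:

import java.util.List;
import java.util.Optional;

// Illustrative sketch only: the point of the bug is that the copy target should
// not simply be the first row of storage_pool, but a pool compatible with the
// VM the volume is being attached to. Types and fields here are hypothetical.
public class AttachTargetPoolSketch {

    enum Scope { CLUSTER, ZONE }

    record Pool(long id, String name, Scope scope, Long clusterId, boolean up) {}

    /** Choose where the uploaded volume should be copied before attach. */
    static Optional<Pool> chooseTargetPool(List<Pool> pools, Pool rootVolumePool) {
        return pools.stream()
                .filter(Pool::up)                                  // skip pools in Maintenance etc.
                .filter(p -> p.scope() == rootVolumePool.scope())  // match the root volume's scope
                .filter(p -> p.scope() == Scope.ZONE
                        || p.clusterId().equals(rootVolumePool.clusterId()))
                .findFirst();
    }

    public static void main(String[] args) {
        Pool zoneWide = new Pool(8, "z1", Scope.ZONE, null, true);
        List<Pool> pools = List.of(
                new Pool(1, "primaryclus1", Scope.CLUSTER, 1L, false), // in Maintenance in the table below
                new Pool(2, "pimaryclu2", Scope.CLUSTER, 2L, true),
                zoneWide);
        // The VM from step 3 has its root volume on the zone-wide pool, so the
        // uploaded volume should land there, not on pool id 1.
        System.out.println(chooseTargetPool(pools, zoneWide));
    }
}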
 mysql> select * from storage_pool;
 ++--+--+---+--++++---++--+---++-+-+-+-+---+-++-+---+
 | id | name | uuid | pool_type
  | port | data_center_id | pod_id | cluster_id | used_bytes| 
 capacity_bytes | host_address | user_info | path  
  | created | removed | update_time | status  | 
 storage_provider_name | scope   | hypervisor | managed | capacity_iops |
 ++--+--+---+--++++---++--+---++-+-+-+-+---+-++-+---+
 |  1 | primaryclus1 | 722e6181-8497-3d31-9933-a0a267ae376c | 
 NetworkFilesystem | 2049 |  1 |  1 |  1 | 
 1678552014848 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/primary  | 2013-08-23 12:11:12 | NULL   
  | NULL| Maintenance | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  2 | pimaryclu2   | 9fd9b0fc-c9fd-39b8-8d66-06372c5ff6d2 | 
 NetworkFilesystem | 2049 |  1 |  1 |  2 | 
 1676566495232 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus1primary | 2013-08-23 12:18:14 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  3 | clus1p2  | 22e0c3fe-a390-38fa-8ff7-e1d965a36309 | 
 NetworkFilesystem | 2049 |  1 |  1 |  1 | 
 1660903886848 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus1p2  | 2013-08-23 14:30:32 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  4 | clus1p3  | f2d9fb6b-c433-3c03-acf8-8f73eac48fae | 
 NetworkFilesystem | 2049 |  1 |  1 |  1 | 
 1660901400576 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus1p3  | 2013-08-23 14:31:05 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  5 | clus2p2  | 13bf579c-51f3-317b-893a-98ff6ca8f486 | 
 NetworkFilesystem | 2049 |  1 |  1 |  2 | 
 1660900147200 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus2p2  | 2013-08-23 14:31:38 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  7 | clus2p3  | 294ae9ff-cb02-33a0-8f31-21fdd8ff34db | 
 NetworkFilesystem | 2049 |  1 |  1 |  2 | 
 1660894195712 |  590228480 | 10.147.28.7  | NULL  | 
 /export/home/talluri/vmware.campo/clus2p3  | 2013-08-23 14:33:03 | NULL   
  | NULL| Up  | DefaultPrimary| CLUSTER | NULL   | 
   0 |  NULL |
 |  8 | z1   | 

[jira] [Commented] (CLOUDSTACK-4572) findHostsForMigration API does not return correct host list

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755066#comment-13755066
 ] 

ASF subversion and git services commented on CLOUDSTACK-4572:
-

Commit 6354604eedff0c5f4ddef4940ce02df80adb656c in branch 
refs/heads/4.2-forward from [~saksham]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=6354604 ]

CLOUDSTACK-4572: findHostsForMigration API does not return correct host list

Changes:
Expected behavior:
The API should return the list of suitable/unsuitable hosts.
Added a fix that creates a deep copy of the variable allHosts and prevents a 
faulty host list from being returned.
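
A minimal sketch of the bug class this commit addresses (hypothetical names, not the actual management server code): filtering hosts in place on the shared allHosts list also corrupts the other list the API returns, so the fix filters a copy instead:

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only. Removing unsuitable hosts directly from the same
// list object that is also returned as "all hosts" shrinks both results; working
// on a copy of allHosts keeps the two lists independent.
public class HostsForMigrationSketch {

    record Host(long id, String tag) {}

    static List<Host> suitableHosts(List<Host> allHosts, String requiredTag) {
        // Copy first, then filter the copy; allHosts is left untouched.
        List<Host> candidates = new ArrayList<>(allHosts);
        candidates.removeIf(h -> !requiredTag.equals(h.tag()));
        return candidates;
    }

    public static void main(String[] args) {
        List<Host> allHosts = new ArrayList<>(List.of(new Host(1, "t1"), new Host(2, null)));
        List<Host> suitable = suitableHosts(allHosts, "t1");
        System.out.println("all=" + allHosts.size() + " suitable=" + suitable.size()); // all=2 suitable=1
    }
}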


 findHostsForMigration API does not return correct host list
 ---

 Key: CLOUDSTACK-4572
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4572
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.2.0
Reporter: Saksham Srivastava
Assignee: Saksham Srivastava
 Fix For: 4.2.1


 Create a multi-cluster setup.
 Tag a host in one cluster with host tag t1.
 Create a service offering using the host tag t1.
 Deploy a VM using the tagged service offering.
 Even if tagged/untagged hosts are available across different clusters, the API 
 does not list the correct hosts for migration for the deployed VM.
 Expected behavior:
 The API should return the list of suitable/unsuitable hosts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CLOUDSTACK-4572) findHostsForMigration API does not return correct host list

2013-08-30 Thread Prachi Damle (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prachi Damle resolved CLOUDSTACK-4572.
--

Resolution: Fixed

 findHostsForMigration API does not return correct host list
 ---

 Key: CLOUDSTACK-4572
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4572
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.2.0
Reporter: Saksham Srivastava
Assignee: Prachi Damle
 Fix For: 4.2.1


 Create a multi-cluster setup.
 Tag a host in one cluster with host tag t1.
 Create a service offering using the host tag t1.
 Deploy a VM using the tagged service offering.
 Even if tagged/untagged hosts are available across different clusters, the API 
 does not list the correct hosts for migration for the deployed VM.
 Expected behavior:
 The API should return the list of suitable/unsuitable hosts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CLOUDSTACK-4572) findHostsForMigration API does not return correct host list

2013-08-30 Thread Prachi Damle (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prachi Damle reassigned CLOUDSTACK-4572:


Assignee: Saksham Srivastava  (was: Prachi Damle)

 findHostsForMigration API does not return correct host list
 ---

 Key: CLOUDSTACK-4572
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4572
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
Affects Versions: 4.2.0
Reporter: Saksham Srivastava
Assignee: Saksham Srivastava
 Fix For: 4.2.1


 Create a multi-cluster setup.
 Tag a host in one cluster with host tag t1.
 Create a service offering using the host tag t1.
 Deploy a VM using the tagged service offering.
 Even if tagged/untagged hosts are available across different clusters, the API 
 does not list the correct hosts for migration for the deployed VM.
 Expected behavior:
 The API should return the list of suitable/unsuitable hosts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CLOUDSTACK-1525) Add section on how to ssh in to system VMs

2013-08-30 Thread Marty Sweet (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marty Sweet resolved CLOUDSTACK-1525.
-

Resolution: Fixed

Applied to master and 4.2-forward
https://reviews.apache.org/r/13798/

 Add section on how to ssh in to system VMs
 --

 Key: CLOUDSTACK-1525
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-1525
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Doc
Affects Versions: 4.0.0, 4.1.0
Reporter: Jessica Tomechak
Assignee: Marty Sweet
Priority: Minor
 Fix For: 4.2.0


 In the "Working with System VMs" section of the Admin Guide, there is no 
 "Accessing System VMs" section. There should be one, similar to the 
 "Accessing VMs" section in the earlier "Working with Virtual Machines" 
 section. You can access system VMs through the UI, in the Infrastructure tab. 
 You can also ssh in, using the following techniques.
 To access a system VM directly over the network, use one of the following 
 techniques, depending on the hypervisor.
 XenServer or KVM:
 SSH in by using the link local IP address of the system VM. For example, in 
 the command below, substitute your own path to the private key used to log in 
 to the system VM and your own link local IP.
 Run the following command on the XenServer or KVM host on which the system VM 
 is present:
 # ssh -i <private-key-path> <link-local-ip> -p 3922
 Now you can run commands on the system VM. For example, to check the software 
 version:
 # cat /etc/cloudstack-release
 The output should be like the following:
 Cloudstack Release 4.0 Mon Feb 6 15:10:04 PST 2013
 ESXi:
 SSH in using the private IP address of the system VM. For example, in the 
 command below, substitute your own path to the private key used to log in to 
 the system VM and your own private IP.
 Run the following command on the Management Server:
 # ssh -i <private-key-path> <private-ip> -p 3922
 Now you can run commands on the system VM. For example, to check the software 
 version:
 # cat /etc/cloudstack-release
 The output should be like the following:
 Cloudstack Release 4.0 Mon Feb 6 15:10:04 PST 2013

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4534) [object_store_refactor] Deleting uploaded volume is not deleting the volume from backend

2013-08-30 Thread Min Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755062#comment-13755062
 ] 

Min Chen commented on CLOUDSTACK-4534:
--

I just verified that restarting the SSVM or restarting the MS will remove the uploaded volume from 
secondary storage. The only remaining issue is that the destroyed volume is 
still showing up in the UI because the removed column is not set for this volume. 
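
For illustration, a minimal JDBC sketch of the missing bookkeeping described above; the volumes table and its removed column come from this bug report, while the connection details are placeholders, the MySQL Connector/J driver is assumed to be on the classpath, and the proper fix belongs in the volume expunge path rather than in ad-hoc SQL:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Sketch only: the observable symptom is that volumes.removed stays NULL, so the
// UI keeps listing the destroyed volume. Credentials below are placeholders; the
// UUID is the one from the deleteVolume call in the log further down.
public class MarkVolumeRemovedSketch {
    public static void main(String[] args) throws Exception {
        String volumeUuid = "e9ee6c0d-d149-4771-a494-6efda849b2ce";
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/cloud", "cloud", "password");
             PreparedStatement ps = conn.prepareStatement(
                "UPDATE volumes SET removed = NOW() WHERE uuid = ? AND removed IS NULL")) {
            ps.setString(1, volumeUuid);
            System.out.println("rows marked removed: " + ps.executeUpdate());
        }
    }
}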

 [object_store_refactor] Deleting uploaded volume is not deleting the volume 
 from backend
 

 Key: CLOUDSTACK-4534
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4534
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Storage Controller, Volumes
Affects Versions: 4.2.1
 Environment: git rev-parse HEAD~5
 1f46bc3fb09aead2cf1744d358fea7adba7df6e1
 Cluster: VMWare
 Storage: NFS
Reporter: Sanjeev N
 Fix For: 4.2.1

 Attachments: cloud.dmp, cloud.dmp, management-server.rar, 
 management-server.rar


 Deleting an uploaded volume does not delete the volume from the backend and does not 
 mark the removed field in the volumes table.
 Steps to Reproduce:
 
 1. Bring up CS with a VMware cluster using NFS for both primary and secondary 
 storage.
 2. Upload one volume using the uploadVolume API.
 3. When the volume is in the Uploaded state, try to delete the volume.
 Result:
 ==
 The volume entry was deleted from volume_store_ref, but the volume was not deleted 
 from secondary storage and the removed field was not set in the volumes table.
 Observations:
 ===
 Log snippet from management server log file as follows:
 2013-08-28 03:18:08,269 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
 ===START===  10.146.0.131 -- GET  
 command=deleteVolume&id=e9ee6c0d-d149-4771-a494-6efda849b2ce&response=json&sessionkey=vNQ7kc2GdEuxzKje8MQ2xSAqbAQ%3D&_=1377674288184
 2013-08-28 03:18:08,414 DEBUG [cloud.user.AccountManagerImpl] 
 (catalina-exec-20:null) Access granted to Acct[2-admin] to Domain:1/ by 
 AffinityGroupAccessChecker_EnhancerByCloudStack_86df51a8
 2013-08-28 03:18:08,421 INFO  [cloud.resourcelimit.ResourceLimitManagerImpl] 
 (catalina-exec-20:null) Discrepency in the resource count (original 
 count=77179526656 correct count = 78867689472) for type secondary_storage for 
 account ID 2 is fixed during resource count recalculation.
 2013-08-28 03:18:08,446 DEBUG [cloud.api.ApiServlet] (catalina-exec-20:null) 
 ===END===  10.146.0.131 -- GET  
 command=deleteVolume&id=e9ee6c0d-d149-4771-a494-6efda849b2ce&response=json&sessionkey=vNQ7kc2GdEuxzKje8MQ2xSAqbAQ%3D&_=1377674288184
 2013-08-28 03:18:32,766 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Storage pool garbage collector found 0 
 templates to clean up in storage pool: pri_esx_306
 2013-08-28 03:18:32,772 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Secondary storage garbage collector found 0 
 templates to cleanup on template_store_ref for store: 
 37f6be5b-0899-48b4-9fd8-1fe483f47c0e
 2013-08-28 03:18:32,774 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Secondary storage garbage collector found 0 
 snapshots to cleanup on snapshot_store_ref for store: 
 37f6be5b-0899-48b4-9fd8-1fe483f47c0e
 2013-08-28 03:18:32,776 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Secondary storage garbage collector found 1 
 volumes to cleanup on volume_store_ref for store: 
 37f6be5b-0899-48b4-9fd8-1fe483f47c0e
 2013-08-28 03:18:32,777 DEBUG [cloud.storage.StorageManagerImpl] 
 (StorageManager-Scavenger-1:null) Deleting volume store DB entry: 
 VolumeDataStore[2-20-2volumes/2/20/7e5778fd-c4bf-35b3-9e7a-9ab8500ab469.ova]
 Volume in the backend:
 [root@Rhel63-Sanjeev 20]# pwd
 /tmp/nfs/sec_306/volumes/2/20
 [root@Rhel63-Sanjeev 20]# ls -l
 total 898008
 -rwxrwxrwx+ 1 root root 459320832 Aug 27 13:57 
 7e5778fd-c4bf-35b3-9e7a-9ab8500ab469.ova
 -rwxrwxrwx+ 1 root root 459312128 Sep 17  2010 CentOS5.3-x86_64-disk1.vmdk
 -rwxrwxrwx+ 1 root root   147 Sep 17  2010 CentOS5.3-x86_64.mf
 -rwxrwxrwx+ 1 root root  5340 Sep 17  2010 CentOS5.3-x86_64.ovf
 -rwxrwxrwx+ 1 root root   340 Aug 27 13:58 volume.properties
 [root@Rhel63-Sanjeev 20]#
 Attaching management server log file and cloud db.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (CLOUDSTACK-4456) [Automation] Vm deployment from template is failed; due to some race condition in KVM

2013-08-30 Thread Rayees Namathponnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rayees Namathponnan closed CLOUDSTACK-4456.
---


This issue was not found in the latest runs.

 [Automation] Vm deployment from template is failed; due to some race 
 condition in KVM
 -

 Key: CLOUDSTACK-4456
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4456
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation, KVM, Management Server, Template
Affects Versions: 4.2.0
 Environment: Automation 
 4.2
Reporter: Rayees Namathponnan
Assignee: edison su
Priority: Critical
 Fix For: 4.2.1

 Attachments: CLOUDSTACK-4456.rar


 This issue was observed during an automation run. In the attached log I can see multiple 
 deployments failing due to the same issue; here is one affected test case: 
 integration.component.test_blocker_bugs.TestTemplate.test_01_create_template
 The test case performs the following operations: 
 1) Create a template from an HTTP URL (ubuntu-10-04-64bit-server.qcow2)
 2) Deploy a VM from this template 
 Actual Result 
 -
 Deployment failed with the error below; it looks like the MS deleted the template after 
 creating the volume. 
 In the MS log, search for job-899 
 2013-08-21 22:07:12,383 DEBUG [cloud.network.NetworkManagerImpl] 
 (Job-Executor-66:job-899 = [ 461fc03f-6d0a-488f-9cf3-7cf1a537b26c ]) Asking 
 BareMetalUserdata to prepare for 
 Nic[194-188-97ae23d5-67ea-46c1-91e6-4e6ef35c22ae-10.223.250.231]
 2013-08-21 22:07:12,389 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-66:job-899 = [ 461fc03f-6d0a-488f-9cf3-7cf1a537b26c ]) Checking 
 if we need to prepare 1 volumes for 
 VM[User|daf71095-b010-45f0-9e46-3bf51ee6a59c]
 2013-08-21 22:07:12,389 DEBUG [cloud.storage.VolumeManagerImpl] 
 (Job-Executor-66:job-899 = [ 461fc03f-6d0a-488f-9cf3-7cf1a537b26c ]) No need 
 to recreate the volume: Vol[238|vm=188|ROOT], since it already has a pool 
 assigned: 1, adding disk to VM
 2013-08-21 22:07:12,408 DEBUG [agent.transport.Request] 
 (Job-Executor-66:job-899 = [ 461fc03f-6d0a-488f-9cf3-7cf1a537b26c ]) Seq 
 2-949159112: Sending  { Cmd , MgmtId: 73187150500751, via: 2, Ver: v1, Flags: 
 100011, 
 [{com.cloud.agent.api.StartCommand:{vm:{id:188,name:i-287-188-TestVM,type:User,cpus:1,minSpeed:100,maxSpeed:100,minRam:134217728,maxRam:134217728,arch:x86_64,os:CentOS
  5.5 
 (64-bit),bootArgs:,rebootOnCrash:false,enableHA:false,limitCpuUse:false,enableDynamicallyScaleVm:false,vncPassword:a79bd4d41d554b1a,params:{Message.ReservedCapacityFreed.Flag:false},uuid:daf71095-b010-45f0-9e46-3bf51ee6a59c,disks:[{data:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid:eba8fc74-aa6e-43a9-8dae-5501a0dd490f,volumeType:ROOT,dataStore:{org.apache.cloudstack.storage.to.PrimaryDataStoreTO:{uuid:9d41-b138-3a13-974c-fa506683da4d,id:1,poolType:NetworkFilesystem,host:nfs1-ccp.citrix.com,path:/home/common/automation/SC_QA_AUTO5/primary1,port:2049}},name:ROOT-188,size:5368709120,path:5ce3b7d0-07c1-4d3b-9c00-2c6ad8716635,volumeId:238,vmName:i-287-188-TestVM,accountId:287,format:QCOW2,id:238,hypervisorType:KVM}},diskSeq:0,type:ROOT},{data:{org.apache.cloudstack.storage.to.TemplateObjectTO:{id:0,format:ISO,accountId:0,hvm:false}},diskSeq:3,type:ISO}],nics:[{deviceId:0,networkRateMbps:200,defaultNic:true,uuid:3cf9efdc-212d-46f6-b818-071450e602fd,ip:10.223.250.231,netmask:255.255.255.192,gateway:10.223.250.193,mac:06:5f:a0:00:00:24,dns1:8.8.8.8,broadcastType:Native,type:Guest,broadcastUri:vlan://untagged,isolationUri:ec2://untagged,isSecurityGroupEnabled:true}]},hostIp:10.223.250.195,executeInSequence:false,wait:0}}]
  }
 2013-08-21 22:07:12,957 DEBUG [agent.transport.Request] 
 (AgentManager-Handler-7:null) Seq 2-949159112: Processing:  { Ans: , MgmtId: 
 73187150500751, via: 2, Ver: v1, Flags: 10, 
 [{com.cloud.agent.api.StartAnswer:{vm:{id:188,name:i-287-188-TestVM,type:User,cpus:1,minSpeed:100,maxSpeed:100,minRam:134217728,maxRam:134217728,arch:x86_64,os:CentOS
  5.5 
 

[jira] [Closed] (CLOUDSTACK-3024) Test suite test_host_high_availability failed with error Exception during cleanup : 'cloudConnection' object has no attribute 'close'

2013-08-30 Thread Rayees Namathponnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rayees Namathponnan closed CLOUDSTACK-3024.
---


 Test suite test_host_high_availability failed with error Exception during 
 cleanup : 'cloudConnection' object has no attribute 'close'
 -

 Key: CLOUDSTACK-3024
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3024
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Automation
Affects Versions: 4.2.0
 Environment: Automation
Reporter: Rayees Namathponnan
Assignee: Prasanna Santhanam
 Fix For: 4.2.0


 Test suite test_host_high_availability failed with error 
 Warning: Exception during cleanup : 'cloudConnection' object has no attribute 
 'close'
   begin captured logging  
 testclient.testcase.TestHostHighAvailability: DEBUG: Enabling maintenance 
 mode for host dcc3383a-9ac7-44e1-b086-e279ce9c7390
 testclient.testcase.TestHostHighAvailability: DEBUG: Waiting for VM to come up
 -  end captured logging  -
 Stacktrace
   File /usr/local/lib/python2.7/unittest/case.py, line 345, in run
 self.tearDown()
   File 
 /Repo_30X/ipcl/cloudstack/test/integration/component/test_host_high_availability.py,
  line 148, in tearDown
 raise Exception(Warning: Exception during cleanup : %s % e)
 Warning: Exception during cleanup : 'cloudConnection' object has no attribute 
 'close'
   begin captured logging  
 testclient.testcase.TestHostHighAvailability: DEBUG: Enabling maintenance 
 mode for host dcc3383a-9ac7-44e1-b086-e279ce9c7390
 testclient.testcase.TestHostHighAvailability: DEBUG: Waiting for VM to come up
 -  end captured logging  -

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CLOUDSTACK-4575) [Portable IP] disassociating a transferred public IP is failing with exception

2013-08-30 Thread Chiradeep Vittal (JIRA)
Chiradeep Vittal created CLOUDSTACK-4575:


 Summary: [Portable IP] disassociating a transferred public IP is 
failing with exception
 Key: CLOUDSTACK-4575
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4575
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Network Controller
Reporter: Chiradeep Vittal


Steps to reproduce: 

1. Have the latest CloudStack with at least 2 advanced zones. 
2. Go to Regions -> local -> portable IP -> add an IP range like the one below 

Gateway : 10.147.33.1 
startIp : 10.147.33.3 
endip : 10.147.33.10 
vlan : 33 
subnet : 255.255.255.128 

3. login as a non-ROOT admin 

username : dom1User1 
password : password 
domain : dom1 

4. create the following isolated networks in each zone 

- Network1Zone1 
- Network1Zone2 

5. deploy the following VMs in each network 

- vm1Zone1 connected to Network1Zone1 
- vm1Zone2 connected to Network1Zone2 

6. Acquire and associate a portable IP to Network1Zone1 

7. Enable static NAT on the above portable IP, associate it with vm1Zone2 of 
Network1Zone2, and add a firewall rule for the SSH port. 

Observations: 

(i) The portable IP got transferred from Zone1 to Network1Zone2 successfully, and 
I was able to ssh to the portable IP without any issues. 

8. Disassociate the above portable IP from Network1Zone2. 

Observations: 

(ii) The following sequence of events happened: 

- The disassociate happened without any issues and cleaned up the eth interface on 
the router, but 
- it then initiated an IPASSOC on its own for the same portable IP, which resulted 
in the following error; as a result this IP is stuck in the Releasing state forever. 

(iii) The above behaviour caused all further IPASSOCs to fail. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CLOUDSTACK-4575) [Portable IP] disassociating a transferred public IP is failing with exception

2013-08-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755159#comment-13755159
 ] 

ASF subversion and git services commented on CLOUDSTACK-4575:
-

Commit a98eb12549a900c7f88acc68457957a4a955fecd in branch 
refs/heads/4.2-forward from [~chiradeep]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=a98eb12 ]

CLOUDSTACK-4575: Portable IP: disassociating a transferred public IP fails
The code is excessively complicated and convoluted.
 DisassociateIP ->
   Revoke Rule -> {FW, PF{incl SNAT}, LB, RA VPN} ->
     -> Send IpAssoc(false) to VR
   Send all config to VR again
     -> Send IpAssoc(false) to VR again   <-- fails here since it cannot 
find the VLAN for the IP since it is already gone
   -> Mark IP as released

The workaround fix would be to not throw an exception in CitrixResourceBase if 
the call is a disassociate and the VLAN does not exist on the XS host.

Signed-off-by: Chiradeep Vittal chirad...@apache.org
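
A minimal sketch of the shape of that workaround (hypothetical names and signature, not the actual CitrixResourceBase code): treat a missing VLAN as success only on the disassociate path:

// Illustrative sketch of the workaround described in the commit message above.
public class IpAssocWorkaroundSketch {

    static class VlanNotFoundException extends RuntimeException {}

    /** Returns true when the IP (de)association should be treated as successful. */
    static boolean assignPublicIp(boolean add, String vlanId) {
        try {
            lookupVlanNetwork(vlanId); // throws if the VLAN network is gone from the XS host
        } catch (VlanNotFoundException e) {
            if (!add) {
                // Disassociate case: the VLAN was already torn down together with
                // the last IP, so there is nothing left to unplumb - report success
                // instead of failing the whole DisassociateIP job.
                return true;
            }
            throw e;
        }
        // ... plug/unplug the IP on the found network ...
        return true;
    }

    static void lookupVlanNetwork(String vlanId) {
        throw new VlanNotFoundException(); // stand-in for the failing XenServer lookup
    }

    public static void main(String[] args) {
        System.out.println(assignPublicIp(false, "vlan-33")); // disassociate path -> true
    }
}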


 [Portable IP] disassociating a transferred public IP is failing with exception
 --

 Key: CLOUDSTACK-4575
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4575
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public(Anyone can view this level - this is the 
 default.) 
  Components: Network Controller
Reporter: Chiradeep Vittal

 Steps to reproduce: 
 1. Have latest CloudStack with at least 2 advanced zone. 
 2. Go to Regions - local - portable IP - add an ip range like below 
 Gateway : 10.147.33.1 
 startIp : 10.147.33.3 
 endip : 10.147.33.10 
 vlan : 33 
 subnet : 255.255.255.128 
 3. login as a non-ROOT admin 
 username : dom1User1 
 password : password 
 domain : dom1 
 4. create the following isolated networks in each zone 
 - Network1Zone1 
 - Network1Zone2 
 5. deploy the following VMs in each network 
 - vm1Zone1 connected to Network1Zone1 
 - vm1Zone2 connected to Network1Zone2 
 6. Acquire and associate a portable IP to Network1Zone1 
 7. enable staticNAT on the above portableIP and associate it to vm1Zone2 of 
 Network1Zone2 and add firewall rule for ssh port 
 Observations: 
 (i) portable IP got transferred from Zone1 to Network1Zone2 successfully and 
 able ssh to the portable IP without any issuees. 
 8. disassociate above portable IP from Network1Zone2. 
 Observations: 
 (ii) sequence of things happened as mentioned below 
 - disassociate happened without any issues which cleaned the eth interface 
 from router etc.., but, 
 - it again initiated IPASSOC on its own for the same portable IP which 
 resulted in the following error and thus this IP stuck in release state 
 forever. 
 (iii) above behaviour made all further IPASSOCs to fail. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

