[jira] [Commented] (CLOUDSTACK-4627) HA not working, User VM wasn't Migrated

2014-08-19 Thread Valery Ciareszka (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102987#comment-14102987
 ] 

Valery Ciareszka commented on CLOUDSTACK-4627:
--

worked for me in 4.2.1, seems to work in 4.3.0

> HA not working, User VM wasn't Migrated
> ---
>
> Key: CLOUDSTACK-4627
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4627
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Hypervisor Controller, KVM, Management Server
>Affects Versions: 4.2.0
> Environment: CentOS 6.3 64bit
>Reporter: Naoki Sakamoto
>Assignee: edison su
> Fix For: 4.2.1
>
> Attachments: 20130906_HA_SystemVM_Migration_OK_But_UserVM_NG.zip, 
> 20130909_HA_UserVM_Migration_NG.zip
>
>
> 1. We powered off one of the KVM hosts by pushing its hardware power button for a 
> High Availability test.
> 2. The Virtual Router / Secondary Storage VM / Console Proxy VM were migrated, 
>    but the User VM wasn't.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CLOUDSTACK-4867) NullPointerException on agent while remounting primary storage

2013-10-15 Thread Valery Ciareszka (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13795016#comment-13795016
 ] 

Valery Ciareszka commented on CLOUDSTACK-4867:
--

ok, I have closed this issue as dup

> NullPointerException on agent while remounting primary storage
> --
>
> Key: CLOUDSTACK-4867
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4867
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Hypervisor Controller
>Affects Versions: 4.2.0
> Environment: KVM (CentOS 6.4)/CloudStack 4.2
>Reporter: Valery Ciareszka
> Fix For: 4.2.1
>
>
> This issue appeared suddenly; I have no idea how it happened.
> Symptoms:
> - no new virtual machines are created on one of the hypervisor servers
> - there are NullPointerExceptions in the agent log on the problem server
> - virsh shows no pools
> After doing some debugging I was able to reproduce this bug manually (see below), 
> but I still have no idea how it occurred initially.
> Here are steps to reproduce this bug:
> I have two primary storages mounted via NFS:
> 10.6.20.1:/GIGO1/p1   7.2T   90G  7.1T   2% 
> /mnt/c59065c8-4d4c-3276-9d12-f170e4cd445e
> 10.6.20.2:/GIGO2/p2   7.3T   31G  7.3T   1% 
> /mnt/bd32f762-a1f0-3a65-b9bc-fdb6d1d681b5
> You should have at least one VM running from NFS storage to reproduce this 
> issue.
> [root@ad111 libvirt]# virsh  pool-list
> Name State  Autostart
> -
> 63cacc3d-185f-45f0-981c-5c4d9d79d665 active no
> bd32f762-a1f0-3a65-b9bc-fdb6d1d681b5 active no
> c59065c8-4d4c-3276-9d12-f170e4cd445e active no
> For now everything is OK; I can see local storage and the two NFS shares in pool-list.
> Let's restart libvirtd:
> [root@ad111 ~]# /etc/init.d/libvirtd restart
> Stopping libvirtd daemon:  [  OK  ]
> Starting libvirtd daemon:  [  OK  ]
> And pools are gone:
> [root@ad111 ~]# virsh  pool-list
> Name State  Autostart
> -
> [root@ad111 ~]#
> According to the agent log, the agent tries to add the pool to libvirt, but this 
> fails because libvirt tries to mount the share (which is already mounted) when 
> the pool is added:
> [root@ad111 ~]# cat << _EOF > pool.xml
> <pool type='netfs'>
>   <name>c59065c8-4d4c-3276-9d12-f170e4cd445e</name>
>   <uuid>c59065c8-4d4c-3276-9d12-f170e4cd445e</uuid>
>   <capacity>7869416079360</capacity>
>   <allocation>95770640384</allocation>
>   <available>7773645438976</available>
>   <source>
>     <host name='10.6.20.1'/>
>     <dir path='/GIGO1/p1'/>
>     <format type='auto'/>
>   </source>
>   <target>
>     <path>/mnt/c59065c8-4d4c-3276-9d12-f170e4cd445e</path>
>     <permissions>
>       <mode>0755</mode>
>       <owner>-1</owner>
>       <group>-1</group>
>     </permissions>
>   </target>
> </pool>
> _EOF
> [root@ad111 ~]# virsh pool-create pool.xml
> error: Failed to create pool from pool.xml
> error: Requested operation is not valid: Target 
> '/mnt/c59065c8-4d4c-3276-9d12-f170e4cd445e' is already mounted
> The agent loops with java.lang.NullPointerExceptions, and a restart does not 
> help. As a result, no new VMs can be created on this host.
> I was able to resolve this issue in the following way:
> - migrated all VMs to another node
> - enabled maintenance mode on the problem host
> - unmounted all NFS shares
> - disabled maintenance mode on the problem host
> Logs:
> 2013-10-14 15:25:29,770 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-3:null) Processing command: 
> com.cloud.agent.api.GetVmStatsCommand
> 2013-10-14 15:25:29,771 DEBUG [kvm.resource.LibvirtConnection] 
> (agentRequest-Handler-3:null) Connection with libvirtd is broken, due to 
> Cannot write data: Broken pipe
> 2013-10-14 15:25:33,091 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-4:null) Processing command: 
> com.cloud.agent.api.GetHostStatsCommand
> 2013-10-14 15:25:33,092 DEBUG [kvm.resource.LibvirtComputingResource] 
> (agentRequest-Handler-4:null) Executing: /bin/bash -c idle=$(top -b -n 1|grep 
> Cpu\(s\):|cut -d% -f4|cut -d, -f2);echo $idle
> 2013-10-14 15:25:33,224 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-5:null) Processing command: 
> com.cloud.agent.api.GetStorageStatsCommand
> 2013-10-14 15:25:33,228 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-5:null) can't get storage pool
> org.libvirt.LibvirtException: Storage pool not found: no pool with matching 
> uuid
> at org.libvirt.ErrorHandler.processError(Unknown Source)
> at org.libvirt.Connect.processError(Unknown Source)
> at org.libvirt.Connect.storagePoolLookupByUUIDString(Unknown Source)
> at 
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.getStoragePool(LibvirtStorageAdaptor.java:363)
> at 
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.getStoragePool(KVMStoragePoolManager.java:104)
> at 
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.j

[jira] [Closed] (CLOUDSTACK-4867) NullPointerException on agent while remounting primary storage

2013-10-15 Thread Valery Ciareszka (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Valery Ciareszka closed CLOUDSTACK-4867.


   Resolution: Duplicate
Fix Version/s: 4.2.1

> NullPointerException on agent while remounting primary storage
> --
>
> Key: CLOUDSTACK-4867
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4867
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Hypervisor Controller
>Affects Versions: 4.2.0
> Environment: KVM (CentOS 6.4)/CloudStack 4.2
>Reporter: Valery Ciareszka
> Fix For: 4.2.1
>
>
> This issue appeared suddenly; I have no idea how it happened.
> Symptoms:
> - no new virtual machines are created on one of the hypervisor servers
> - there are NullPointerExceptions in the agent log on the problem server
> - virsh shows no pools
> After doing some debugging I was able to reproduce this bug manually (see below), 
> but I still have no idea how it occurred initially.
> Here are steps to reproduce this bug:
> I have two primary storages mounted via NFS:
> 10.6.20.1:/GIGO1/p1   7.2T   90G  7.1T   2% 
> /mnt/c59065c8-4d4c-3276-9d12-f170e4cd445e
> 10.6.20.2:/GIGO2/p2   7.3T   31G  7.3T   1% 
> /mnt/bd32f762-a1f0-3a65-b9bc-fdb6d1d681b5
> You should have at least one VM running from NFS storage to reproduce this 
> issue.
> [root@ad111 libvirt]# virsh  pool-list
> Name State  Autostart
> -
> 63cacc3d-185f-45f0-981c-5c4d9d79d665 active no
> bd32f762-a1f0-3a65-b9bc-fdb6d1d681b5 active no
> c59065c8-4d4c-3276-9d12-f170e4cd445e active no
> For now everything is OK; I can see local storage and the two NFS shares in pool-list.
> Let's restart libvirtd:
> [root@ad111 ~]# /etc/init.d/libvirtd restart
> Stopping libvirtd daemon:  [  OK  ]
> Starting libvirtd daemon:  [  OK  ]
> And pools are gone:
> [root@ad111 ~]# virsh  pool-list
> Name State  Autostart
> -
> [root@ad111 ~]#
> According to the agent log, the agent tries to add the pool to libvirt, but this 
> fails because libvirt tries to mount the share (which is already mounted) when 
> the pool is added:
> [root@ad111 ~]# cat << _EOF > pool.xml
> <pool type='netfs'>
>   <name>c59065c8-4d4c-3276-9d12-f170e4cd445e</name>
>   <uuid>c59065c8-4d4c-3276-9d12-f170e4cd445e</uuid>
>   <capacity>7869416079360</capacity>
>   <allocation>95770640384</allocation>
>   <available>7773645438976</available>
>   <source>
>     <host name='10.6.20.1'/>
>     <dir path='/GIGO1/p1'/>
>     <format type='auto'/>
>   </source>
>   <target>
>     <path>/mnt/c59065c8-4d4c-3276-9d12-f170e4cd445e</path>
>     <permissions>
>       <mode>0755</mode>
>       <owner>-1</owner>
>       <group>-1</group>
>     </permissions>
>   </target>
> </pool>
> _EOF
> [root@ad111 ~]# virsh pool-create pool.xml
> error: Failed to create pool from pool.xml
> error: Requested operation is not valid: Target 
> '/mnt/c59065c8-4d4c-3276-9d12-f170e4cd445e' is already mounted
> The agent loops with java.lang.NullPointerExceptions, and a restart does not 
> help. As a result, no new VMs can be created on this host.
> I was able to resolve this issue in the following way:
> - migrated all VMs to another node
> - enabled maintenance mode on the problem host
> - unmounted all NFS shares
> - disabled maintenance mode on the problem host
> Logs:
> 2013-10-14 15:25:29,770 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-3:null) Processing command: 
> com.cloud.agent.api.GetVmStatsCommand
> 2013-10-14 15:25:29,771 DEBUG [kvm.resource.LibvirtConnection] 
> (agentRequest-Handler-3:null) Connection with libvirtd is broken, due to 
> Cannot write data: Broken pipe
> 2013-10-14 15:25:33,091 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-4:null) Processing command: 
> com.cloud.agent.api.GetHostStatsCommand
> 2013-10-14 15:25:33,092 DEBUG [kvm.resource.LibvirtComputingResource] 
> (agentRequest-Handler-4:null) Executing: /bin/bash -c idle=$(top -b -n 1|grep 
> Cpu\(s\):|cut -d% -f4|cut -d, -f2);echo $idle
> 2013-10-14 15:25:33,224 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-5:null) Processing command: 
> com.cloud.agent.api.GetStorageStatsCommand
> 2013-10-14 15:25:33,228 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-5:null) can't get storage pool
> org.libvirt.LibvirtException: Storage pool not found: no pool with matching 
> uuid
> at org.libvirt.ErrorHandler.processError(Unknown Source)
> at org.libvirt.Connect.processError(Unknown Source)
> at org.libvirt.Connect.storagePoolLookupByUUIDString(Unknown Source)
> at 
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.getStoragePool(LibvirtStorageAdaptor.java:363)
> at 
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.getStoragePool(KVMStoragePoolManager.java:104)
> at 
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2466)
> at 
> com.cloud.hyper

[jira] [Commented] (CLOUDSTACK-4867) NullPointerException on agent while remounting primary storage

2013-10-15 Thread Valery Ciareszka (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13794999#comment-13794999
 ] 

Valery Ciareszka commented on CLOUDSTACK-4867:
--

Commit was made on Sep 17, and 4.2.0 was released on Oct 1. It is still 
unfixed in 4.2.0:

    // if error is that pool is mounted, try to handle it
    if (e.toString().contains("already mounted")) {
        s_logger.error("Attempting to unmount old mount libvirt is unaware of at " + targetPath);
        String result = Script.runSimpleBashScript("umount " + targetPath);

How could it be possible?
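For context, the recovery path the quoted snippet implements can be sketched as a self-contained example. The class, interface, and exception names below are illustrative assumptions, not CloudStack's actual types; the point is only the control flow: on an "already mounted" failure, unmount the stale target and retry the pool creation once.

```java
// Illustrative sketch of the recovery pattern only; not CloudStack code.
// PoolOps abstracts the libvirt pool-create call and a shell umount.
public class StalePoolRecovery {

    /** Thrown by the (hypothetical) pool-create call when libvirt refuses. */
    static class PoolCreateException extends Exception {
        PoolCreateException(String msg) { super(msg); }
    }

    interface PoolOps {
        void createPool(String poolXml) throws PoolCreateException;
        int umount(String targetPath); // exit code, 0 on success
    }

    /** Try to create the pool; on "already mounted", unmount and retry once. */
    static boolean createWithRecovery(PoolOps ops, String poolXml, String targetPath) {
        try {
            ops.createPool(poolXml);
            return true;
        } catch (PoolCreateException e) {
            String msg = e.getMessage();
            if (msg != null && msg.contains("already mounted") && ops.umount(targetPath) == 0) {
                try {
                    ops.createPool(poolXml);
                    return true;
                } catch (PoolCreateException retryFailed) {
                    return false; // still failing after the unmount: give up
                }
            }
            return false;
        }
    }
}
```

The bug discussed in this thread is precisely that this handler existed in the repository but was not in the shipped 4.2.0 code.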



> NullPointerException on agent while remounting primary storage
> --
>
> Key: CLOUDSTACK-4867
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4867
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Hypervisor Controller
>Affects Versions: 4.2.0
> Environment: KVM (CentOS 6.4)/CloudStack 4.2
>Reporter: Valery Ciareszka
>
> This issue appeared suddenly; I have no idea how it happened.
> Symptoms:
> - no new virtual machines are created on one of the hypervisor servers
> - there are NullPointerExceptions in the agent log on the problem server
> - virsh shows no pools
> After doing some debugging I was able to reproduce this bug manually (see below), 
> but I still have no idea how it occurred initially.
> Here are steps to reproduce this bug:
> I have two primary storages mounted via NFS:
> 10.6.20.1:/GIGO1/p1   7.2T   90G  7.1T   2% 
> /mnt/c59065c8-4d4c-3276-9d12-f170e4cd445e
> 10.6.20.2:/GIGO2/p2   7.3T   31G  7.3T   1% 
> /mnt/bd32f762-a1f0-3a65-b9bc-fdb6d1d681b5
> You should have at least one VM running from NFS storage to reproduce this 
> issue.
> [root@ad111 libvirt]# virsh  pool-list
> Name State  Autostart
> -
> 63cacc3d-185f-45f0-981c-5c4d9d79d665 active no
> bd32f762-a1f0-3a65-b9bc-fdb6d1d681b5 active no
> c59065c8-4d4c-3276-9d12-f170e4cd445e active no
> For now everything is OK; I can see local storage and the two NFS shares in pool-list.
> Let's restart libvirtd:
> [root@ad111 ~]# /etc/init.d/libvirtd restart
> Stopping libvirtd daemon:  [  OK  ]
> Starting libvirtd daemon:  [  OK  ]
> And pools are gone:
> [root@ad111 ~]# virsh  pool-list
> Name State  Autostart
> -
> [root@ad111 ~]#
> According to the agent log, the agent tries to add the pool to libvirt, but this 
> fails because libvirt tries to mount the share (which is already mounted) when 
> the pool is added:
> [root@ad111 ~]# cat << _EOF > pool.xml
> <pool type='netfs'>
>   <name>c59065c8-4d4c-3276-9d12-f170e4cd445e</name>
>   <uuid>c59065c8-4d4c-3276-9d12-f170e4cd445e</uuid>
>   <capacity>7869416079360</capacity>
>   <allocation>95770640384</allocation>
>   <available>7773645438976</available>
>   <source>
>     <host name='10.6.20.1'/>
>     <dir path='/GIGO1/p1'/>
>     <format type='auto'/>
>   </source>
>   <target>
>     <path>/mnt/c59065c8-4d4c-3276-9d12-f170e4cd445e</path>
>     <permissions>
>       <mode>0755</mode>
>       <owner>-1</owner>
>       <group>-1</group>
>     </permissions>
>   </target>
> </pool>
> _EOF
> [root@ad111 ~]# virsh pool-create pool.xml
> error: Failed to create pool from pool.xml
> error: Requested operation is not valid: Target 
> '/mnt/c59065c8-4d4c-3276-9d12-f170e4cd445e' is already mounted
> The agent loops with java.lang.NullPointerExceptions, and a restart does not 
> help. As a result, no new VMs can be created on this host.
> I was able to resolve this issue in the following way:
> - migrated all VMs to another node
> - enabled maintenance mode on the problem host
> - unmounted all NFS shares
> - disabled maintenance mode on the problem host
> Logs:
> 2013-10-14 15:25:29,770 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-3:null) Processing command: 
> com.cloud.agent.api.GetVmStatsCommand
> 2013-10-14 15:25:29,771 DEBUG [kvm.resource.LibvirtConnection] 
> (agentRequest-Handler-3:null) Connection with libvirtd is broken, due to 
> Cannot write data: Broken pipe
> 2013-10-14 15:25:33,091 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-4:null) Processing command: 
> com.cloud.agent.api.GetHostStatsCommand
> 2013-10-14 15:25:33,092 DEBUG [kvm.resource.LibvirtComputingResource] 
> (agentRequest-Handler-4:null) Executing: /bin/bash -c idle=$(top -b -n 1|grep 
> Cpu\(s\):|cut -d% -f4|cut -d, -f2);echo $idle
> 2013-10-14 15:25:33,224 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-5:null) Processing command: 
> com.cloud.agent.api.GetStorageStatsCommand
> 2013-10-14 15:25:33,228 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-5:null) can't get storage pool
> org.libvirt.LibvirtException: Storage pool not found: no pool with matching 
> uuid
> at org.libvirt.ErrorHandler.processError(Unknown Source)
> at org.libvirt.Connect.processError(Unknown Source)
> at org.libvirt.Connect

[jira] [Created] (CLOUDSTACK-4868) incorrect cpu usage values in dashboard

2013-10-15 Thread Valery Ciareszka (JIRA)
Valery Ciareszka created CLOUDSTACK-4868:


 Summary: incorrect cpu usage values in dashboard
 Key: CLOUDSTACK-4868
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4868
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.2.0
 Environment: KVM (CentOS 6.4) + CloudStack 4.2.0
Reporter: Valery Ciareszka


If cpu.overprovisioning.factor for a cluster is changed, the dashboard reports 
incorrect values until all VMs are restarted (stop/start).

For example, I configured a cluster, set cpu.overprovisioning.factor to 6, and 
started a number of VMs. The current usage was reported correctly.

Then change cpu.overprovisioning.factor to 12 in the cluster settings.
Wait 5-10 minutes and you will see that the reported CPU usage has doubled.

If you stop/start all VMs (including system VMs / virtual routers), the 
dashboard will show the true CPU usage.
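The observed doubling is consistent with a simple rescaling model. To be clear, this model is an assumption made for illustration, not CloudStack's actual capacity accounting: if each running VM's CPU figure was normalized with the overprovisioning factor in effect when it started, and the dashboard applies the cluster's current factor when reporting, then raising the factor from 6 to 12 doubles the reported usage until the VMs are restarted and pick up the new factor.

```java
// Hypothetical model of the symptom; NOT CloudStack's real accounting code.
// A VM keeps the normalization from the factor in effect at its start time,
// while the dashboard re-applies the cluster's *current* factor.
public class OverprovisionModel {
    /** Reported CPU usage in MHz for one VM under this assumed model. */
    static double reportedMhz(double vmSpeedMhz, double factorAtStart, double currentFactor) {
        return vmSpeedMhz / factorAtStart * currentFactor;
    }
}
```

Under this model a 1200 MHz VM started with factor 6 shows up as 2400 MHz after the factor is raised to 12, and as 1200 MHz again once it is restarted under the new factor, matching the behavior described above.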




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CLOUDSTACK-4867) NullPointerException on agent while remounting primary storage

2013-10-15 Thread Valery Ciareszka (JIRA)
Valery Ciareszka created CLOUDSTACK-4867:


 Summary: NullPointerException on agent while remounting primary 
storage
 Key: CLOUDSTACK-4867
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4867
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Hypervisor Controller
Affects Versions: 4.2.0
 Environment: KVM (CentOS 6.4)/CloudStack 4.2
Reporter: Valery Ciareszka


This issue appeared suddenly; I have no idea how it happened.
Symptoms:
- no new virtual machines are created on one of the hypervisor servers
- there are NullPointerExceptions in the agent log on the problem server
- virsh shows no pools

After doing some debugging I was able to reproduce this bug manually (see below), 
but I still have no idea how it occurred initially.


Here are steps to reproduce this bug:

I have two primary storages mounted via NFS:

10.6.20.1:/GIGO1/p1   7.2T   90G  7.1T   2% 
/mnt/c59065c8-4d4c-3276-9d12-f170e4cd445e
10.6.20.2:/GIGO2/p2   7.3T   31G  7.3T   1% 
/mnt/bd32f762-a1f0-3a65-b9bc-fdb6d1d681b5

You should have at least one VM running from NFS storage to reproduce this 
issue.



[root@ad111 libvirt]# virsh  pool-list
Name State  Autostart
-
63cacc3d-185f-45f0-981c-5c4d9d79d665 active no
bd32f762-a1f0-3a65-b9bc-fdb6d1d681b5 active no
c59065c8-4d4c-3276-9d12-f170e4cd445e active no

For now everything is OK; I can see local storage and the two NFS shares in pool-list.

Let's restart libvirtd:

[root@ad111 ~]# /etc/init.d/libvirtd restart
Stopping libvirtd daemon:  [  OK  ]
Starting libvirtd daemon:  [  OK  ]

And pools are gone:

[root@ad111 ~]# virsh  pool-list
Name State  Autostart
-

[root@ad111 ~]#


According to the agent log, the agent tries to add the pool to libvirt, but this 
fails because libvirt tries to mount the share (which is already mounted) when 
the pool is added:


[root@ad111 ~]# cat << _EOF > pool.xml
<pool type='netfs'>
  <name>c59065c8-4d4c-3276-9d12-f170e4cd445e</name>
  <uuid>c59065c8-4d4c-3276-9d12-f170e4cd445e</uuid>
  <capacity>7869416079360</capacity>
  <allocation>95770640384</allocation>
  <available>7773645438976</available>
  <source>
    <host name='10.6.20.1'/>
    <dir path='/GIGO1/p1'/>
    <format type='auto'/>
  </source>
  <target>
    <path>/mnt/c59065c8-4d4c-3276-9d12-f170e4cd445e</path>
    <permissions>
      <mode>0755</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>
_EOF


[root@ad111 ~]# virsh pool-create pool.xml
error: Failed to create pool from pool.xml
error: Requested operation is not valid: Target 
'/mnt/c59065c8-4d4c-3276-9d12-f170e4cd445e' is already mounted



The agent loops with java.lang.NullPointerExceptions, and a restart does not 
help. As a result, no new VMs can be created on this host.

I was able to resolve this issue in the following way:
- migrated all VMs to another node
- enabled maintenance mode on the problem host
- unmounted all NFS shares
- disabled maintenance mode on the problem host
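The manual workaround above amounts to making sure nothing asks libvirt to NFS-mount a target that is already mounted. A minimal sketch of such a pre-check (illustrative only; the class name and parsing are assumptions, not CloudStack code) reads /proc/mounts and reports whether a target path is already a mount point:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Illustrative pre-check, not CloudStack code: parse /proc/mounts and report
// whether targetPath is already a mount point, so a caller can umount it
// before asking libvirt to create a netfs pool over it.
public class MountCheck {
    /** Pure helper over the lines of /proc/mounts, for testability. */
    static boolean isMounted(List<String> procMountsLines, String targetPath) {
        for (String line : procMountsLines) {
            // /proc/mounts format: device mountpoint fstype options dump pass
            String[] fields = line.split("\\s+");
            if (fields.length >= 2 && fields[1].equals(targetPath)) {
                return true;
            }
        }
        return false;
    }

    /** Convenience wrapper reading the live /proc/mounts (Linux only). */
    static boolean isMounted(String targetPath) throws IOException {
        return isMounted(Files.readAllLines(Paths.get("/proc/mounts")), targetPath);
    }
}
```

A caller could then unmount the path (or skip the mount step) before issuing the equivalent of virsh pool-create, avoiding the "Target ... is already mounted" failure shown above.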


Logs:


2013-10-14 15:25:29,770 DEBUG [cloud.agent.Agent] (agentRequest-Handler-3:null) 
Processing command: com.cloud.agent.api.GetVmStatsCommand
2013-10-14 15:25:29,771 DEBUG [kvm.resource.LibvirtConnection] 
(agentRequest-Handler-3:null) Connection with libvirtd is broken, due to Cannot 
write data: Broken pipe
2013-10-14 15:25:33,091 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:null) 
Processing command: com.cloud.agent.api.GetHostStatsCommand
2013-10-14 15:25:33,092 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-4:null) Executing: /bin/bash -c idle=$(top -b -n 1|grep 
Cpu\(s\):|cut -d% -f4|cut -d, -f2);echo $idle
2013-10-14 15:25:33,224 DEBUG [cloud.agent.Agent] (agentRequest-Handler-5:null) 
Processing command: com.cloud.agent.api.GetStorageStatsCommand
2013-10-14 15:25:33,228 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-5:null) can't get storage pool
org.libvirt.LibvirtException: Storage pool not found: no pool with matching uuid
at org.libvirt.ErrorHandler.processError(Unknown Source)
at org.libvirt.Connect.processError(Unknown Source)
at org.libvirt.Connect.storagePoolLookupByUUIDString(Unknown Source)
at 
com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.getStoragePool(LibvirtStorageAdaptor.java:363)
at 
com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.getStoragePool(KVMStoragePoolManager.java:104)
at 
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2466)
at 
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1230)
at com.cloud.agent.Agent.processRequest(Agent.java:525)
at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:852)
at com.cloud.utils.nio.Task.run(Task.java:83)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at 
java.util.concurrent.ThreadPoolExecutor$Worker

[jira] [Created] (CLOUDSTACK-4838) proper messaging of checkAccess exceptions

2013-10-09 Thread Valery Ciareszka (JIRA)
Valery Ciareszka created CLOUDSTACK-4838:


 Summary: proper messaging of checkAccess exceptions
 Key: CLOUDSTACK-4838
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4838
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.2.0
 Environment: KVM(CentOS 6.4)
Reporter: Valery Ciareszka
Priority: Minor


If you try to deploy a virtual machine via the root domain API from a 
non-public template and specify a non-privileged user as its owner, it will fail.

I.e. curl 
"http://localhost:8096/client/?command=deployVirtualMachine&serviceofferingid=2b45be75-0ec8-4683-91a0-d95414da310d&zoneid=4a5bc8e5-bab9-4f92-9249-d57ef8a0f9f8&templateid=94013c8f-b615-467f-8df2-635ac4c5efb5&networkids=5928684b-f9fc-4c2f-a74b-d6af622250f3&account=vdc3880&domainid=2744e9b6-8633-4e8d-bb4d-860fe5e7e744";

Response is:

<errorcode>531</errorcode>
<cserrorcode>4365</cserrorcode>
<errortext>Acct[ebcf2919-a842-4986-a8ed-a3806dfbd8f2-vdc3880] does not have 
permission to operate with resource 
Acct[9d9ef909-2469-11e3-9901-90e2ba51b336-admin]</errortext>


It is unclear what the reason for the PermissionDeniedException was. After 
modifying the source code and adding more debug messages I figured out that it 
was caused by the template being non-public, but that is not obvious.

It would be great if such exceptions could provide more information about their 
actual reasons.
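As a sketch of what this report asks for (a hypothetical helper, not CloudStack's API), the denial message could carry the resource type and the concrete reason alongside the account identifiers:

```java
// Illustrative only: one way to surface the denial reason in the exception
// message, as the report requests. Names are hypothetical, not CloudStack's.
public class AccessDenied {
    static String denialMessage(String caller, String resourceType,
                                String resourceId, String reason) {
        return String.format(
            "Acct[%s] does not have permission to operate with %s %s: %s",
            caller, resourceType, resourceId, reason);
    }
}
```

With a message like this, the example above would have said outright that the template is not public instead of only naming the two accounts.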





[jira] [Created] (CLOUDSTACK-4828) remove nic fails if dhcp wasn't enabled in network offering

2013-10-08 Thread Valery Ciareszka (JIRA)
Valery Ciareszka created CLOUDSTACK-4828:


 Summary: remove nic fails if dhcp wasn't enabled in network 
offering
 Key: CLOUDSTACK-4828
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4828
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.2.0
 Environment: KVM (CentOS 6.4) with CloudStack 4.2
Reporter: Valery Ciareszka
Priority: Critical


How to reproduce:
1. Create a network offering without DHCP.
2. Add a NIC to a VM from the network created at step 1.
3. Try to remove the NIC from the VM. It fails.

VM deletion also fails: the VM will stay in the "Expunging" state forever unless 
the NIC is deleted manually through MySQL:
mysql> delete from nics where state='Deallocating' and ip4_address='10.2.2.226';

error in logs:

2013-10-03 14:24:42,616 WARN  [cloud.vm.UserVmManagerImpl] 
(UserVm-Scavenger-1:null) Unable to expunge VM[User|newIvanVm]
com.cloud.exception.UnsupportedServiceException: Service Dhcp is not supported 
in the network id=222
at 
com.cloud.network.dao.NetworkServiceMapDaoImpl.getProviderForServiceInNetwork(NetworkServiceMapDaoImpl.java:127)
at 
com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
at 
com.cloud.network.NetworkManagerImpl.getDhcpServiceProvider(NetworkManagerImpl.java:3681)
at 
com.cloud.network.NetworkManagerImpl.isDhcpAccrossMultipleSubnetsSupported(NetworkManagerImpl.java:2522)
at 
com.cloud.network.NetworkManagerImpl.removeNic(NetworkManagerImpl.java:2507)
at 
com.cloud.network.NetworkManagerImpl.cleanupNics(NetworkManagerImpl.java:2463)
at 
com.cloud.vm.VirtualMachineManagerImpl.advanceExpunge(VirtualMachineManagerImpl.java:475)
at com.cloud.vm.UserVmManagerImpl.expunge(UserVmManagerImpl.java:1600)
at 
com.cloud.vm.UserVmManagerImpl$ExpungeTask.run(UserVmManagerImpl.java:1769)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at 
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)





[jira] [Commented] (CLOUDSTACK-4777) NullPointerException instead of working KVM HA

2013-10-03 Thread Valery Ciareszka (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13784895#comment-13784895
 ] 

Valery Ciareszka commented on CLOUDSTACK-4777:
--

I was able to resolve this issue by modifying 
a/server/src/com/cloud/storage/VolumeManagerImpl.java

--- a/server/src/com/cloud/storage/VolumeManagerImpl.java
+++ b/server/src/com/cloud/storage/VolumeManagerImpl.java
@@ -2657,7 +2657,8 @@ public class VolumeManagerImpl extends ManagerBase implements VolumeManager {
     public boolean canVmRestartOnAnotherServer(long vmId) {
         List<VolumeVO> vols = _volsDao.findCreatedByInstance(vmId);
         for (VolumeVO vol : vols) {
-            if (!vol.isRecreatable() && !vol.getPoolType().isShared()) {
+            StoragePoolVO storagePoolVO = _storagePoolDao.findById(vol.getPoolId());
+            if (!vol.isRecreatable() && storagePoolVO != null && storagePoolVO.getPoolType() != null && !(storagePoolVO.getPoolType().isShared())) {
                 return false;
             }
         }

this is commit from https://issues.apache.org/jira/browse/CLOUDSTACK-4627
But this bug is still present in latest cloudstack 4.2.0 source code (as well 
as in packages from http://cloudstack.apt-get.eu/rhel/4.2/)

> NullPointerException instead of working KVM HA
> --
>
> Key: CLOUDSTACK-4777
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4777
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Hypervisor Controller, KVM, Management Server
>Affects Versions: 4.2.0
> Environment: KVM (CentOS 6.4) with CloudStack 4.2
>Reporter: Valery Ciareszka
>Priority: Critical
>
> If a KVM host goes down, the CloudStack management server does not start 
> HA-enabled VMs on the other hosts. There is a NullPointerException in 
> management.log:
> 2013-09-24 11:21:25,500 ERROR [cloud.ha.HighAvailabilityManagerImpl] 
> (HA-Worker-4:work-4) Terminating HAWork[4-HA-6-Running-Scheduled]
> java.lang.NullPointerException
> at 
> com.cloud.storage.VolumeManagerImpl.canVmRestartOnAnotherServer(VolumeManagerImpl.java:2641)
> at 
> com.cloud.ha.HighAvailabilityManagerImpl.restart(HighAvailabilityManagerImpl.java:516)
> at 
> com.cloud.ha.HighAvailabilityManagerImpl$WorkerThread.run(HighAvailabilityManagerImpl.java:831)
> see full log at http://pastebin.com/upnEA601





[jira] [Commented] (CLOUDSTACK-4627) HA not working, User VM wasn't Migrated

2013-10-02 Thread Valery Ciareszka (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13784006#comment-13784006
 ] 

Valery Ciareszka commented on CLOUDSTACK-4627:
--

Was it really committed to the 4.2.0 branch? I see the old version in the latest 
source package:

wget 
http://www.eu.apache.org/dist/cloudstack/releases/4.2.0/apache-cloudstack-4.2.0-src.tar.bz2
tar jxfv apache-cloudstack-4.2.0-src.tar.bz2

[root@ad011d apache-cloudstack-4.2.0-src]# grep -A9 canVmRestartOnAnotherServer \
server/src/com/cloud/storage/VolumeManagerImpl.java
    public boolean canVmRestartOnAnotherServer(long vmId) {
        List<VolumeVO> vols = _volsDao.findCreatedByInstance(vmId);
        for (VolumeVO vol : vols) {
            if (!vol.isRecreatable() && !vol.getPoolType().isShared()) {
                return false;
            }
        }
        return true;
    }


> HA not working, User VM wasn't Migrated
> ---
>
> Key: CLOUDSTACK-4627
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4627
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Hypervisor Controller, KVM, Management Server
>Affects Versions: 4.2.0
> Environment: CentOS 6.3 64bit
>Reporter: Naoki Sakamoto
>Assignee: edison su
> Attachments: 20130906_HA_SystemVM_Migration_OK_But_UserVM_NG.zip, 
> 20130909_HA_UserVM_Migration_NG.zip
>
>
> 1. We powered off one of the KVM hosts by pushing its hardware power button for a 
> High Availability test.
> 2. The Virtual Router / Secondary Storage VM / Console Proxy VM were migrated, 
>    but the User VM wasn't.





[jira] [Created] (CLOUDSTACK-4777) NullPointerException instead of working KVM HA

2013-10-01 Thread Valery Ciareszka (JIRA)
Valery Ciareszka created CLOUDSTACK-4777:


 Summary: NullPointerException instead of working KVM HA
 Key: CLOUDSTACK-4777
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4777
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Hypervisor Controller, KVM, Management Server
Affects Versions: 4.2.0
 Environment: KVM (CentOS 6.4) with CloudStack 4.2
Reporter: Valery Ciareszka
Priority: Critical


If a KVM host goes down, the CloudStack management server does not start 
HA-enabled VMs on the other hosts. There is a NullPointerException in 
management.log:

2013-09-24 11:21:25,500 ERROR [cloud.ha.HighAvailabilityManagerImpl] 
(HA-Worker-4:work-4) Terminating HAWork[4-HA-6-Running-Scheduled]
java.lang.NullPointerException
at 
com.cloud.storage.VolumeManagerImpl.canVmRestartOnAnotherServer(VolumeManagerImpl.java:2641)
at 
com.cloud.ha.HighAvailabilityManagerImpl.restart(HighAvailabilityManagerImpl.java:516)
at 
com.cloud.ha.HighAvailabilityManagerImpl$WorkerThread.run(HighAvailabilityManagerImpl.java:831)

see full log at http://pastebin.com/upnEA601





[jira] [Created] (CLOUDSTACK-4349) vm hangs in expunged state when static nat is enabled

2013-08-15 Thread Valery Ciareszka (JIRA)
Valery Ciareszka created CLOUDSTACK-4349:


 Summary: vm hangs in expunged state when static nat is enabled
 Key: CLOUDSTACK-4349
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4349
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.1.0, 4.1.1
 Environment: CS 4.1.1 advanced mode
CentOS 6.4 64-bit / KVM
Reporter: Valery Ciareszka
Priority: Minor


The VM hangs in the expunged state when static NAT is enabled, raising a 
java.lang.NullPointerException in the management server logs.
Steps to reproduce the bug:
1. create VM
2. go to the network for this vm, acquire ip and make static nat mapping to 
this VM
3. create permissive firewall rules for this ip (net 0.0.0.0/0 , ports 1-65535 
/ icmptypes -1 for tcp/udp/icmp)
4. try to destroy VM

some logs:

2013-08-15 09:32:53,417 DEBUG [cloud.capacity.CapacityManagerImpl] 
(UserVm-Scavenger-1:null) VM state transitted from :Expunging to Expunging with 
event: ExpungeOperationvm's original host id: 12 new host id: null host id 
before state transition: null
2013-08-15 09:32:53,417 DEBUG [cloud.vm.VirtualMachineManagerImpl] 
(UserVm-Scavenger-1:null) Destroying vm VM[User|test411-3-expunge]
2013-08-15 09:32:53,417 DEBUG [cloud.vm.VirtualMachineManagerImpl] 
(UserVm-Scavenger-1:null) Cleaning up NICS
2013-08-15 09:32:53,417 DEBUG [cloud.network.NetworkManagerImpl] 
(UserVm-Scavenger-1:null) Cleaning network for vm: 6307
2013-08-15 09:32:53,418 DEBUG [cloud.storage.StorageManagerImpl] 
(UserVm-Scavenger-1:null) Cleaning storage for vm: 6307
2013-08-15 09:32:53,420 DEBUG [cloud.vm.VirtualMachineManagerImpl] 
(UserVm-Scavenger-1:null) Expunged VM[User|test411-3-expunge]
2013-08-15 09:32:53,420 DEBUG [cloud.vm.UserVmManagerImpl] 
(UserVm-Scavenger-1:null) Starting cleaning up vm VM[User|test411-3-expunge] 
resources...
2013-08-15 09:32:53,427 DEBUG [network.firewall.FirewallManagerImpl] 
(UserVm-Scavenger-1:null) No firewall rules are found for vm id=6307
2013-08-15 09:32:53,427 DEBUG [cloud.vm.UserVmManagerImpl] 
(UserVm-Scavenger-1:null) Firewall rules are removed successfully as a part of 
vm id=6307 expunge
2013-08-15 09:32:53,431 DEBUG [network.rules.RulesManagerImpl] 
(UserVm-Scavenger-1:null) No port forwarding rules are found for vm id=6307
2013-08-15 09:32:53,431 DEBUG [cloud.vm.UserVmManagerImpl] 
(UserVm-Scavenger-1:null) Port forwarding rules are removed successfully as a 
part of vm id=6307 expunge
2013-08-15 09:32:53,432 DEBUG [cloud.vm.UserVmManagerImpl] 
(UserVm-Scavenger-1:null) Removed vm id=6307 from all load balancers as a part 
of expunge process
2013-08-15 09:32:53,433 DEBUG [agent.manager.AgentManagerImpl] 
(AgentManager-Handler-9:null) SeqA 46-808: Sending Seq 46-808:  { Ans: , 
MgmtId: 161603152803976, via: 46, Ver: v1, Flags: 100010, 
[{"AgentControlAnswer":{"result":true,"wait":0}}] }
2013-08-15 09:32:53,435 DEBUG [network.rules.RulesManagerImpl] 
(UserVm-Scavenger-1:null) Revoking all Firewallrules as a part of disabling 
static nat for public IP id=1217
2013-08-15 09:32:53,437 DEBUG [network.firewall.FirewallManagerImpl] 
(UserVm-Scavenger-1:null) Releasing 3 firewall rules for ip id=1217
2013-08-15 09:32:53,438 WARN  [cloud.vm.UserVmManagerImpl] 
(UserVm-Scavenger-1:null) Unable to expunge VM[User|test411-3-expunge]
java.lang.NullPointerException
at 
com.cloud.event.ActionEventUtils.getDomainId(ActionEventUtils.java:186)
at 
com.cloud.event.ActionEventUtils.persistActionEvent(ActionEventUtils.java:142)
at 
com.cloud.event.ActionEventUtils.onStartedActionEvent(ActionEventUtils.java:104)
at 
com.cloud.event.ActionEventInterceptor.interceptStart(ActionEventInterceptor.java:47)
at 
com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:119)
at 
com.cloud.network.firewall.FirewallManagerImpl.revokeFirewallRulesForIp(FirewallManagerImpl.java:734)
at 
com.cloud.network.rules.RulesManagerImpl.disableStaticNat(RulesManagerImpl.java:1194)
at 
com.cloud.vm.UserVmManagerImpl.cleanupVmResources(UserVmManagerImpl.java:1856)
at com.cloud.vm.UserVmManagerImpl.expunge(UserVmManagerImpl.java:1787)
at 
com.cloud.vm.UserVmManagerImpl$ExpungeTask.run(UserVmManagerImpl.java:2416)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at 
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadP
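The interesting frame in the trace above is ActionEventUtils.getDomainId(). One plausible reading is that the event owner's account record is looked up and dereferenced without a null check, and by the time the background scavenger expunges the VM that record can already be gone. A hypothetical null-safe sketch with a default-domain fallback (the names echo the stack trace but this is illustrative code, not the real CloudStack implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a null-safe domain lookup for event persistence.
public class EventDomainLookup {
    static final long ROOT_DOMAIN_ID = 1L;   // assumed default/system domain

    // Stand-in for the account DAO: account id -> domain id.
    static final Map<Long, Long> accountDomains = new ConcurrentHashMap<>();

    /**
     * The crash pattern is an event being persisted for an owner whose
     * account row no longer exists (the expunge scavenger runs with no
     * caller context), so the lookup result must be checked before use.
     */
    static long getDomainId(long accountId) {
        Long domainId = accountDomains.get(accountId);
        if (domainId == null) {
            // Guard: account already removed; fall back to the root domain
            // rather than throwing an NPE out of the expunge task.
            return ROOT_DOMAIN_ID;
        }
        return domainId;
    }
}
```

As with the HA case, the design point is that bookkeeping for a deleted object should fall back to a safe default instead of aborting the cleanup worker.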

[jira] [Updated] (CLOUDSTACK-2213) russian language select failure

2013-04-26 Thread Valery Ciareszka (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Valery Ciareszka updated CLOUDSTACK-2213:
-

Attachment: syntaxerror.jpg

> russian language select failure
> ---
>
> Key: CLOUDSTACK-2213
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2213
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.0.1, 4.0.2
> Environment: centos 6.4 / kvm
>Reporter: Valery Ciareszka
>Priority: Minor
> Attachments: css_missing.jpg, messages_ru_RU.properties.jpg, 
> syntaxerror.jpg
>
>
> 1. Set up CloudStack management.
> 2. Open it in a web browser.
> 3. Choose the Russian language.
> 4. Try to log in.
> You will see a blank page.
> This issue is caused by (at least) 2 problems:
> 1. An incorrect translation string variable, 
> message.action.change.service.warning.for.router, in 
> /usr/share/cloud/management/webapps/client/WEB-INF/classes/resources/messages_ru_RU.properties 
> - it contains a \n symbol instead of the translated text.
> 2. A missing 
> /usr/share/cloud/management/webapps/client/css/cloudstack3.ru_RU.css file.
> As a quick fix, I replaced the variable 
> message.action.change.service.warning.for.router with the proper text and 
> copied the CSS from 
> /usr/share/cloud/management/webapps/client/css/cloudstack3.ja.css.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CLOUDSTACK-2213) russian language select failure

2013-04-26 Thread Valery Ciareszka (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Valery Ciareszka updated CLOUDSTACK-2213:
-

Attachment: css_missing.jpg

> russian language select failure
> ---
>
> Key: CLOUDSTACK-2213
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2213
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.0.1, 4.0.2
> Environment: centos 6.4 / kvm
>Reporter: Valery Ciareszka
>Priority: Minor
> Attachments: css_missing.jpg, messages_ru_RU.properties.jpg, 
> syntaxerror.jpg
>
>
> 1. Set up CloudStack management.
> 2. Open it in a web browser.
> 3. Choose the Russian language.
> 4. Try to log in.
> You will see a blank page.
> This issue is caused by (at least) 2 problems:
> 1. An incorrect translation string variable, 
> message.action.change.service.warning.for.router, in 
> /usr/share/cloud/management/webapps/client/WEB-INF/classes/resources/messages_ru_RU.properties 
> - it contains a \n symbol instead of the translated text.
> 2. A missing 
> /usr/share/cloud/management/webapps/client/css/cloudstack3.ru_RU.css file.
> As a quick fix, I replaced the variable 
> message.action.change.service.warning.for.router with the proper text and 
> copied the CSS from 
> /usr/share/cloud/management/webapps/client/css/cloudstack3.ja.css.



[jira] [Updated] (CLOUDSTACK-2213) russian language select failure

2013-04-26 Thread Valery Ciareszka (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Valery Ciareszka updated CLOUDSTACK-2213:
-

Attachment: messages_ru_RU.properties.jpg

There is a \n symbol instead of the proper text - I believe it should look like:
message.action.change.service.warning.for.router=Для изменения текущего 
служебного ресурса ваш роутер должен быть остановлен.
(In English: "To change the current service offering, your router must be stopped.")

> russian language select failure
> ---
>
> Key: CLOUDSTACK-2213
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2213
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.0.1, 4.0.2
> Environment: centos 6.4 / kvm
>Reporter: Valery Ciareszka
>Priority: Minor
> Attachments: messages_ru_RU.properties.jpg
>
>
> 1. Set up CloudStack management.
> 2. Open it in a web browser.
> 3. Choose the Russian language.
> 4. Try to log in.
> You will see a blank page.
> This issue is caused by (at least) 2 problems:
> 1. An incorrect translation string variable, 
> message.action.change.service.warning.for.router, in 
> /usr/share/cloud/management/webapps/client/WEB-INF/classes/resources/messages_ru_RU.properties 
> - it contains a \n symbol instead of the translated text.
> 2. A missing 
> /usr/share/cloud/management/webapps/client/css/cloudstack3.ru_RU.css file.
> As a quick fix, I replaced the variable 
> message.action.change.service.warning.for.router with the proper text and 
> copied the CSS from 
> /usr/share/cloud/management/webapps/client/css/cloudstack3.ja.css.



[jira] [Created] (CLOUDSTACK-2213) russian language select failure

2013-04-26 Thread Valery Ciareszka (JIRA)
Valery Ciareszka created CLOUDSTACK-2213:


 Summary: russian language select failure
 Key: CLOUDSTACK-2213
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2213
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: UI
Affects Versions: 4.0.1, 4.0.2
 Environment: centos 6.4 / kvm
Reporter: Valery Ciareszka
Priority: Minor


1. Set up CloudStack management.
2. Open it in a web browser.
3. Choose the Russian language.
4. Try to log in.

You will see a blank page.

This issue is caused by (at least) 2 problems:

1. An incorrect translation string variable, 
message.action.change.service.warning.for.router, in 
/usr/share/cloud/management/webapps/client/WEB-INF/classes/resources/messages_ru_RU.properties 
- it contains a \n symbol instead of the translated text.

2. A missing 
/usr/share/cloud/management/webapps/client/css/cloudstack3.ru_RU.css file.

As a quick fix, I replaced the variable 
message.action.change.service.warning.for.router with the proper text and 
copied the CSS from 
/usr/share/cloud/management/webapps/client/css/cloudstack3.ja.css.
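The quick fix above can be sketched as a small helper. The paths and the property key come from this report's stock install layout; the helper itself, its name, and the UTF-8 charset handling are my assumptions, not CloudStack code. Run it against a copy first, since it rewrites files in the deployed webapp.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the reporter's two-step workaround.
public class RuLocaleQuickFix {
    public static void apply(Path clientRoot, String translatedText) throws IOException {
        // Step 1: supply the missing ru_RU stylesheet by cloning the Japanese one.
        Path css = clientRoot.resolve("css");
        Files.copy(css.resolve("cloudstack3.ja.css"),
                   css.resolve("cloudstack3.ru_RU.css"),
                   StandardCopyOption.REPLACE_EXISTING);

        // Step 2: replace the bogus "\n" value of the broken translation key.
        Path props = clientRoot.resolve(
            "WEB-INF/classes/resources/messages_ru_RU.properties");
        String key = "message.action.change.service.warning.for.router";
        List<String> fixed = Files.readAllLines(props, StandardCharsets.UTF_8).stream()
            .map(line -> line.startsWith(key + "=") ? key + "=" + translatedText : line)
            .collect(Collectors.toList());
        Files.write(props, fixed, StandardCharsets.UTF_8);
    }
}
```

One caveat worth checking before relying on this: classic java.util.Properties files are read as ISO-8859-1 with \uXXXX escapes, so the Russian text may need escaping depending on how the webapp loads the bundle.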
