[jira] [Commented] (CLOUDSTACK-8313) Local Storage overprovisioning should be possible

2015-03-10 Thread Star Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355202#comment-14355202
 ] 

Star Guo commented on CLOUDSTACK-8313:
--

Add SharedMountPoint overprovisioning for KVM too? 
https://issues.apache.org/jira/login.jsp?os_destination=%2Fbrowse%2FCLOUDSTACK-8313
 

> Local Storage overprovisioning should be possible
> -
>
> Key: CLOUDSTACK-8313
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8313
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Storage Controller
>Affects Versions: 4.4.0, 4.4.2, 4.3.2
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
> Fix For: 4.6.0
>
>
> Currently we don't allow local storage to be overprovisioned.
> On KVM we use QCOW2, which is sparsely allocated and also uses clones for 
> deploying from templates.
> This makes it possible to overprovision local storage as well, but currently 
> StorageManagerImpl does not apply local storage overprovisioning.
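
A minimal sketch of how an overprovisioning factor could also be honoured for a
local, QCOW2-backed pool; the class and method names are hypothetical and this is
not the actual StorageManagerImpl change:

// Hypothetical illustration only; not the actual StorageManagerImpl code.
public class LocalStorageCapacityCalculator {

    // Capacity the allocator may hand out for a pool. The overprovisioning
    // factor (e.g. storage.overprovisioning.factor) is applied only when the
    // pool is local and QCOW2-backed, mirroring what the issue asks for.
    public static long allocatableCapacity(long totalCapacityBytes,
                                           double overprovisioningFactor,
                                           boolean isLocalQcow2Pool) {
        double factor = isLocalQcow2Pool ? overprovisioningFactor : 1.0;
        return (long) (totalCapacityBytes * factor);
    }

    public static void main(String[] args) {
        long oneTb = 1024L * 1024L * 1024L * 1024L;
        // With a factor of 2.0, a 1 TB local QCOW2 pool would advertise 2 TB.
        System.out.println(allocatableCapacity(oneTb, 2.0, true));
    }
}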



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CLOUDSTACK-8313) Local Storage overprovisioning should be possible

2015-03-10 Thread Star Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355202#comment-14355202
 ] 

Star Guo edited comment on CLOUDSTACK-8313 at 3/10/15 4:59 PM:
---

Add SharedMountPoint overprovisioning for KVM too? 
https://github.com/apache/cloudstack/pull/74


was (Author: starg):
Add SharedMountPoint overprovisioning for KVM too? 
https://issues.apache.org/jira/login.jsp?os_destination=%2Fbrowse%2FCLOUDSTACK-8313
 

> Local Storage overprovisioning should be possible
> -
>
> Key: CLOUDSTACK-8313
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8313
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Storage Controller
>Affects Versions: 4.4.0, 4.4.2, 4.3.2
>Reporter: Wido den Hollander
>Assignee: Wido den Hollander
> Fix For: 4.6.0
>
>
> Currently we don't allow local storage to be overprovisioned.
> On KVM we use QCOW2, which is sparsely allocated and also uses clones for 
> deploying from templates.
> This makes it possible to overprovision local storage as well, but currently 
> StorageManagerImpl does not apply local storage overprovisioning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8302) About Snapshot Operation on KVM with RBD

2015-03-05 Thread Star Guo (JIRA)
Star Guo created CLOUDSTACK-8302:


 Summary: About Snapshot Operation on KVM with RBD
 Key: CLOUDSTACK-8302
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8302
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: KVM, Snapshot, Storage Controller
Affects Versions: 4.4.2, 4.4.1, 4.4.0
 Environment: CloudStack 4.4.2 + KVM on CentOS 6.6 + Ceph/RBD 0.80.8
Reporter: Star Guo


I just built a lab with CloudStack 4.4.2 + CentOS 6.6 KVM + Ceph/RBD 0.80.8.
I deployed an instance on RBD and created snapshots of the ROOT volume. When I 
delete a snapshot, the UI shows OK, but the snapshot of the volume still exists 
in the RBD pool.
I found the following code in 
com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java: 
…
@Override
public Answer deleteSnapshot(DeleteCommand cmd) {
    return new Answer(cmd);
}
…
deleteSnapshot() is not implemented. I also found this code:
...
@Override
public Answer createTemplateFromSnapshot(CopyCommand cmd) {
    return null; // To change body of implemented methods use File | Settings | File Templates.
}
...
Neither is createTemplateFromSnapshot(). I also looked in the MASTER branch, but it 
is not implemented there yet either. Does the CloudStack dev team plan to implement 
this? Thanks.
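
For reference, a minimal sketch of how an RBD snapshot could be removed on the KVM
side, assuming the rados-java bindings (Rados, IoCTX, Rbd, RbdImage) are available
and expose snapRemove(); this is only an illustration, not CloudStack's actual
implementation:

import com.ceph.rados.IoCTX;
import com.ceph.rados.Rados;
import com.ceph.rbd.Rbd;
import com.ceph.rbd.RbdImage;

public class RbdSnapshotCleaner {

    // Connects to the Ceph cluster and removes one snapshot of an RBD image.
    // Monitor host, auth id/key, pool, image and snapshot name would come from
    // the primary storage pool and snapshot metadata.
    public static void deleteSnapshot(String monHost, String authId, String authKey,
                                      String pool, String image, String snapshot) throws Exception {
        Rados rados = new Rados(authId);
        rados.confSet("mon_host", monHost);
        rados.confSet("key", authKey);
        rados.connect();
        IoCTX io = rados.ioCtxCreate(pool);
        try {
            Rbd rbd = new Rbd(io);
            RbdImage rbdImage = rbd.open(image);
            try {
                rbdImage.snapRemove(snapshot); // drop the RBD snapshot
            } finally {
                rbd.close(rbdImage);
            }
        } finally {
            rados.ioCtxDestroy(io);
        }
    }
}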



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CLOUDSTACK-7477) [LXC] the workaround for cpu,cpuacct are co-mounted problem of libvirt is removed when agent shutsdown and restarted

2015-03-04 Thread Star Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Star Guo updated CLOUDSTACK-7477:
-
Comment: was deleted

(was: How to solve the co-mounted problem with libvirt+qemu-kvm on RHEL7?)

> [LXC] the workaround for cpu,cpuacct are co-mounted problem of libvirt is 
> removed when agent shutsdown and restarted
> 
>
> Key: CLOUDSTACK-7477
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7477
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.0
>Reporter: shweta agarwal
>Assignee: Kishan Kavala
>Priority: Blocker
>
> Repro steps:
> 1. Create an LXC setup
> 2. Create a few VMs
> 3. Shut down the host's LXC agent
> 4. Once the agent state becomes Disconnected, start the host again
> Bug:
> Note all the mounts on the host:
> mount
> proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
> sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
> devtmpfs on /dev type devtmpfs 
> (rw,nosuid,seclabel,size=3981488k,nr_inodes=995372,mode=755)
> securityfs on /sys/kernel/security type securityfs 
> (rw,nosuid,nodev,noexec,relatime)
> tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
> devpts on /dev/pts type devpts 
> (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
> tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
> tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,seclabel,mode=755)
> cgroup on /sys/fs/cgroup/systemd type cgroup 
> (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
> pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
> cgroup on /sys/fs/cgroup/cpuset type cgroup 
> (rw,nosuid,nodev,noexec,relatime,cpuset)
> cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup 
> (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
> cgroup on /sys/fs/cgroup/memory type cgroup 
> (rw,nosuid,nodev,noexec,relatime,memory)
> cgroup on /sys/fs/cgroup/devices type cgroup 
> (rw,nosuid,nodev,noexec,relatime,devices)
> cgroup on /sys/fs/cgroup/freezer type cgroup 
> (rw,nosuid,nodev,noexec,relatime,freezer)
> cgroup on /sys/fs/cgroup/net_cls type cgroup 
> (rw,nosuid,nodev,noexec,relatime,net_cls)
> cgroup on /sys/fs/cgroup/blkio type cgroup 
> (rw,nosuid,nodev,noexec,relatime,blkio)
> cgroup on /sys/fs/cgroup/perf_event type cgroup 
> (rw,nosuid,nodev,noexec,relatime,perf_event)
> cgroup on /sys/fs/cgroup/hugetlb type cgroup 
> (rw,nosuid,nodev,noexec,relatime,hugetlb)
> configfs on /sys/kernel/config type configfs (rw,relatime)
> /dev/mapper/rhel_rack3pod1host49-root on / type xfs 
> (rw,relatime,seclabel,attr2,inode64,noquota)
> selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
> systemd-1 on /proc/sys/fs/binfmt_misc type autofs 
> (rw,relatime,fd=35,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
> mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
> debugfs on /sys/kernel/debug type debugfs (rw,relatime)
> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
> sunrpc on /proc/fs/nfsd type nfsd (rw,relatime)
> /dev/mapper/rhel_rack3pod1host49-home on /home type xfs 
> (rw,relatime,seclabel,attr2,inode64,noquota)
> /dev/sda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
> 10.147.28.7:/export/home/shweta/goleta.lxc.primary on 
> /mnt/dfa2ec3c-d133-3284-8583-0a0845aa4424 type nfs 
> (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.147.28.7,mountvers=3,mountport=47246,mountproto=udp,local_lock=none,addr=10.147.28.7)
> fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
> It again contains all those cgroup mount points that we deleted earlier as 
> part of the LXC agent installation.
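
A minimal sketch of how an agent could re-check for co-mounted cpu,cpuacct
controllers at startup before re-applying its workaround; purely illustrative,
not the LXC agent's actual code:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class CgroupCoMountCheck {

    // Returns true if cpu and cpuacct are co-mounted on a single cgroup
    // hierarchy (e.g. /sys/fs/cgroup/cpu,cpuacct), the situation the LXC
    // agent's workaround has to undo.
    public static boolean cpuCpuacctCoMounted() throws IOException {
        try (Stream<String> mounts = Files.lines(Paths.get("/proc/mounts"))) {
            return mounts.anyMatch(line -> {
                String[] fields = line.split(" ");
                if (fields.length < 4 || !"cgroup".equals(fields[2])) {
                    return false;
                }
                String options = fields[3];
                return options.contains("cpuacct") && options.matches(".*\\bcpu\\b.*");
            });
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("cpu,cpuacct co-mounted: " + cpuCpuacctCoMounted());
    }
}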



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7477) [LXC] the workaround for cpu,cpuacct are co-mounted problem of libvirt is removed when agent shutsdown and restarted

2015-03-04 Thread Star Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346783#comment-14346783
 ] 

Star Guo commented on CLOUDSTACK-7477:
--

How to solve the co-mounted problem with libvirt+qemu-kvm on RHEL7?

> [LXC] the workaround for cpu,cpuacct are co-mounted problem of libvirt is 
> removed when agent shutsdown and restarted
> 
>
> Key: CLOUDSTACK-7477
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7477
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.0
>Reporter: shweta agarwal
>Assignee: Kishan Kavala
>Priority: Blocker
>
> Repro steps:
> 1. Create an LXC setup
> 2. Create a few VMs
> 3. Shut down the host's LXC agent
> 4. Once the agent state becomes Disconnected, start the host again
> Bug:
> Note all the mounts on the host:
> mount
> proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
> sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
> devtmpfs on /dev type devtmpfs 
> (rw,nosuid,seclabel,size=3981488k,nr_inodes=995372,mode=755)
> securityfs on /sys/kernel/security type securityfs 
> (rw,nosuid,nodev,noexec,relatime)
> tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
> devpts on /dev/pts type devpts 
> (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
> tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
> tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,seclabel,mode=755)
> cgroup on /sys/fs/cgroup/systemd type cgroup 
> (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
> pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
> cgroup on /sys/fs/cgroup/cpuset type cgroup 
> (rw,nosuid,nodev,noexec,relatime,cpuset)
> cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup 
> (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
> cgroup on /sys/fs/cgroup/memory type cgroup 
> (rw,nosuid,nodev,noexec,relatime,memory)
> cgroup on /sys/fs/cgroup/devices type cgroup 
> (rw,nosuid,nodev,noexec,relatime,devices)
> cgroup on /sys/fs/cgroup/freezer type cgroup 
> (rw,nosuid,nodev,noexec,relatime,freezer)
> cgroup on /sys/fs/cgroup/net_cls type cgroup 
> (rw,nosuid,nodev,noexec,relatime,net_cls)
> cgroup on /sys/fs/cgroup/blkio type cgroup 
> (rw,nosuid,nodev,noexec,relatime,blkio)
> cgroup on /sys/fs/cgroup/perf_event type cgroup 
> (rw,nosuid,nodev,noexec,relatime,perf_event)
> cgroup on /sys/fs/cgroup/hugetlb type cgroup 
> (rw,nosuid,nodev,noexec,relatime,hugetlb)
> configfs on /sys/kernel/config type configfs (rw,relatime)
> /dev/mapper/rhel_rack3pod1host49-root on / type xfs 
> (rw,relatime,seclabel,attr2,inode64,noquota)
> selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
> systemd-1 on /proc/sys/fs/binfmt_misc type autofs 
> (rw,relatime,fd=35,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
> mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
> debugfs on /sys/kernel/debug type debugfs (rw,relatime)
> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
> sunrpc on /proc/fs/nfsd type nfsd (rw,relatime)
> /dev/mapper/rhel_rack3pod1host49-home on /home type xfs 
> (rw,relatime,seclabel,attr2,inode64,noquota)
> /dev/sda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
> 10.147.28.7:/export/home/shweta/goleta.lxc.primary on 
> /mnt/dfa2ec3c-d133-3284-8583-0a0845aa4424 type nfs 
> (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.147.28.7,mountvers=3,mountport=47246,mountproto=udp,local_lock=none,addr=10.147.28.7)
> fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
> It again contains all those cgroup mount points that we deleted earlier as 
> part of the LXC agent installation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7477) [LXC] the workaround for cpu,cpuacct are co-mounted problem of libvirt is removed when agent shutsdown and restarted

2015-03-04 Thread Star Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346782#comment-14346782
 ] 

Star Guo commented on CLOUDSTACK-7477:
--

How to solve the co-mounted problem with libvirt+qemu-kvm on RHEL7?

> [LXC] the workaround for cpu,cpuacct are co-mounted problem of libvirt is 
> removed when agent shutsdown and restarted
> 
>
> Key: CLOUDSTACK-7477
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7477
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.5.0
>Reporter: shweta agarwal
>Assignee: Kishan Kavala
>Priority: Blocker
>
> Repro steps:
> 1. Create an LXC setup
> 2. Create a few VMs
> 3. Shut down the host's LXC agent
> 4. Once the agent state becomes Disconnected, start the host again
> Bug:
> Note all the mounts on the host:
> mount
> proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
> sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
> devtmpfs on /dev type devtmpfs 
> (rw,nosuid,seclabel,size=3981488k,nr_inodes=995372,mode=755)
> securityfs on /sys/kernel/security type securityfs 
> (rw,nosuid,nodev,noexec,relatime)
> tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
> devpts on /dev/pts type devpts 
> (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
> tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
> tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,seclabel,mode=755)
> cgroup on /sys/fs/cgroup/systemd type cgroup 
> (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
> pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
> cgroup on /sys/fs/cgroup/cpuset type cgroup 
> (rw,nosuid,nodev,noexec,relatime,cpuset)
> cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup 
> (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
> cgroup on /sys/fs/cgroup/memory type cgroup 
> (rw,nosuid,nodev,noexec,relatime,memory)
> cgroup on /sys/fs/cgroup/devices type cgroup 
> (rw,nosuid,nodev,noexec,relatime,devices)
> cgroup on /sys/fs/cgroup/freezer type cgroup 
> (rw,nosuid,nodev,noexec,relatime,freezer)
> cgroup on /sys/fs/cgroup/net_cls type cgroup 
> (rw,nosuid,nodev,noexec,relatime,net_cls)
> cgroup on /sys/fs/cgroup/blkio type cgroup 
> (rw,nosuid,nodev,noexec,relatime,blkio)
> cgroup on /sys/fs/cgroup/perf_event type cgroup 
> (rw,nosuid,nodev,noexec,relatime,perf_event)
> cgroup on /sys/fs/cgroup/hugetlb type cgroup 
> (rw,nosuid,nodev,noexec,relatime,hugetlb)
> configfs on /sys/kernel/config type configfs (rw,relatime)
> /dev/mapper/rhel_rack3pod1host49-root on / type xfs 
> (rw,relatime,seclabel,attr2,inode64,noquota)
> selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
> systemd-1 on /proc/sys/fs/binfmt_misc type autofs 
> (rw,relatime,fd=35,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
> mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
> debugfs on /sys/kernel/debug type debugfs (rw,relatime)
> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
> sunrpc on /proc/fs/nfsd type nfsd (rw,relatime)
> /dev/mapper/rhel_rack3pod1host49-home on /home type xfs 
> (rw,relatime,seclabel,attr2,inode64,noquota)
> /dev/sda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
> 10.147.28.7:/export/home/shweta/goleta.lxc.primary on 
> /mnt/dfa2ec3c-d133-3284-8583-0a0845aa4424 type nfs 
> (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.147.28.7,mountvers=3,mountport=47246,mountproto=udp,local_lock=none,addr=10.147.28.7)
> fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
> It again contains all those cgroup mount points that we deleted earlier as 
> part of the LXC agent installation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CLOUDSTACK-8237) add nic with instance throw java.lang.NullPointerException

2015-02-10 Thread Star Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315328#comment-14315328
 ] 

Star Guo edited comment on CLOUDSTACK-8237 at 2/11/15 1:10 AM:
---

Hi,

  Has anyone else met this issue? Thanks.

Best Regards,
Star Guo

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




was (Author: starg):
Hi,

  Has anyone else met this issue? Thanks.

Best Regards,
Star Guo

-----Original Message-----
From: Star Guo (JIRA) [mailto:j...@apache.org] 
Sent: February 10, 2015 20:45
To: cloudstack-iss...@incubator.apache.org
Subject: [jira] [Created] (CLOUDSTACK-8237) add nic with instance throw 
java.lang.NullPointerException

Star Guo created CLOUDSTACK-8237:


 Summary: add nic with instance throw 
java.lang.NullPointerException 
 Key: CLOUDSTACK-8237
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8237
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.4.3
 Environment: CentOS 7 with Java 1.7.0_75, running the simulator
Reporter: Star Guo


In the simulator environment, I create a service offering with custom CPU & memory, and deploy 
an instance with this service offering. After that, I add a NIC to this instance 
and it throws this issue:


WARN  [c.c.u.d.Merovingian2] (API-Job-Executor-2:ctx-abcb085b job-26 
ctx-c85611f5) Was unable to find lock for the key vm_instance6 and thread id 
2089552368 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
(Work-Job-Executor-2:ctx-754aeed8 job-26/job-27) Add job-27 into job monitoring 
ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-2:ctx-754aeed8 
job-26/job-27 ctx-26199438) Invocation exception, caused by: 
java.lang.NullPointerException INFO  [c.c.v.VmWorkJobHandlerProxy] 
(Work-Job-Executor-2:ctx-754aeed8 job-26/job-27 ctx-26199438) Rethrow exception 
java.lang.NullPointerException ERROR [c.c.v.VmWorkJobDispatcher] 
(Work-Job-Executor-2:ctx-754aeed8 job-26/job-27) Unable to complete AsyncJobVO 
{id:27, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: 
com.cloud.vm.VmWorkAddVmToNetwork, cmdInfo: 
rO0ABXNyACFjb20uY2xvdWQudm0uVm1Xb3JrQWRkVm1Ub05ldHdvcmt6-m3bkApgrQIAAkwACW5ldHdvcmtJZHQAEExqYXZhL2xhbmcvTG9uZztMABJyZXF1c3RlZE5pY1Byb2ZpbGV0ABlMY29tL2Nsb3VkL3ZtL05pY1Byb2ZpbGU7eHIAE2NvbS5jbG91ZC52bS5WbVdvcmufmbZW8CVnawIABEoACWFjY291bnRJZEoABnVzZXJJZEoABHZtSWRMAAtoYW5kbGVyTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO3hwAAIAAgAGdAAZVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbHNyAA5qYXZhLmxhbmcuTG9uZzuL5JDMjyPfAgABSgAFdmFsdWV4cgAQamF2YS5sYW5nLk51bWJlcoaslR0LlOCLAgAAeHAAzXNyABdjb20uY2xvdWQudm0uTmljUHJvZmlsZUVY7kYs6AbAAgAeWgAKZGVmYXVsdE5pY0oAAmlkWgAWaXNTZWN1cml0eUdyb3VwRW5hYmxlZEoACW5ldHdvcmtJZEoABHZtSWRMAA1icm9hZGNhc3RUeXBldAAwTGNvbS9jbG91ZC9uZXR3b3JrL05ldHdvcmtzJEJyb2FkY2FzdERvbWFpblR5cGU7TAAMYnJvYWRjYXN0VXJpdAAOTGphdmEvbmV0L1VSSTtMAAhkZXZpY2VJZHQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAARkbnMxcQB-AARMAARkbnMycQB-AARMAAZmb3JtYXR0ACpMY29tL2Nsb3VkL25ldHdvcmsvTmV0d29ya3MkQWRkcmVzc0Zvcm1hdDtMAAdnYXRld2F5cQB-AARMAAppcDRBZGRyZXNzcQB-AARMAAppcDZBZGRyZXNzcQB-AARMAAdpcDZDaWRycQB-AARMAAdpcDZEbnMxcQB-AARMAAdpcDZEbnMycQB-AARMAAppcDZHYXRld2F5cQB-AARMAAxpc29sYXRpb25VcmlxAH4ADEwACm1hY0FkZHJlc3NxAH4ABEwABG1vZGV0ACFMY29tL2Nsb3VkL25ldHdvcmsvTmV0d29ya3MkTW9kZTtMAARuYW1lcQB-AARMAAduZXRtYXNrcQB-AARMAAtuZXR3b3JrUmF0ZXEAfgANTAANcmVxdWVzdGVkSXB2NHEAfgAETAANcmVxdWVzdGVkSXB2NnEAfgAETAANcmVzZXJ2YXRpb25JZHEAfgAETAAIc3RyYXRlZ3l0ACZMY29tL2Nsb3VkL3ZtL05pYyRSZXNlcnZhdGlvblN0cmF0ZWd5O0wAC3RyYWZmaWNUeXBldAAoTGNvbS9jbG91ZC9uZXR3b3JrL05ldHdvcmtzJFRyYWZmaWNUeXBlO0wABHV1aWRxAH4ABHhwAABwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBw,
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 8796758677527, completeMsid: null, lastUpdated: null, 
lastPolled: null, created: Tue Feb 10 07:20:35 EST 2015}, job origin:26 
java.lang.NullPointerException
at 
com.cloud.hypervisor.HypervisorGuruBase.toVirtualMachineTO(HypervisorGuruBase.java:125)
at com.cloud.simulator.SimulatorGuru.implement(SimulatorGuru.java:46)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateAddVmToNetwork(VirtualMachineManagerImpl.java:3467)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateAddVmToNetwork(VirtualMachineManagerImpl.java:5288)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
  

[jira] [Comment Edited] (CLOUDSTACK-8237) add nic with instance throw java.lang.NullPointerException

2015-02-10 Thread Star Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315328#comment-14315328
 ] 

Star Guo edited comment on CLOUDSTACK-8237 at 2/11/15 1:11 AM:
---

Has anyone else met this issue? Thanks.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




was (Author: starg):
Hi,

  Has anyone else met this issue? Thanks.

Best Regards,
Star Guo

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



> add nic with instance throw java.lang.NullPointerException 
> ---
>
> Key: CLOUDSTACK-8237
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8237
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.4.3
> Environment: CentOS 7 with Java 1.7.0_75, running the simulator
>Reporter: Star Guo
>
> In the simulator environment, I create a service offering with custom CPU & memory, and deploy 
> an instance with this service offering. After that, I add a NIC to this instance 
> and it throws this issue:
> 
> WARN  [c.c.u.d.Merovingian2] (API-Job-Executor-2:ctx-abcb085b job-26 
> ctx-c85611f5) Was unable to find lock for the key vm_instance6 and thread id 
> 2089552368
> INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-2:ctx-754aeed8 
> job-26/job-27) Add job-27 into job monitoring
> ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-2:ctx-754aeed8 
> job-26/job-27 ctx-26199438) Invocation exception, caused by: 
> java.lang.NullPointerException
> INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-2:ctx-754aeed8 
> job-26/job-27 ctx-26199438) Rethrow exception java.lang.NullPointerException
> ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-2:ctx-754aeed8 
> job-26/job-27) Unable to complete AsyncJobVO {id:27, userId: 2, accountId: 2, 
> instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkAddVmToNetwork, 
> cmdInfo: 
> rO0ABXNyACFjb20uY2xvdWQudm0uVm1Xb3JrQWRkVm1Ub05ldHdvcmt6-m3bkApgrQIAAkwACW5ldHdvcmtJZHQAEExqYXZhL2xhbmcvTG9uZztMABJyZXF1c3RlZE5pY1Byb2ZpbGV0ABlMY29tL2Nsb3VkL3ZtL05pY1Byb2ZpbGU7eHIAE2NvbS5jbG91ZC52bS5WbVdvcmufmbZW8CVnawIABEoACWFjY291bnRJZEoABnVzZXJJZEoABHZtSWRMAAtoYW5kbGVyTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO3hwAAIAAgAGdAAZVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbHNyAA5qYXZhLmxhbmcuTG9uZzuL5JDMjyPfAgABSgAFdmFsdWV4cgAQamF2YS5sYW5nLk51bWJlcoaslR0LlOCLAgAAeHAAzXNyABdjb20uY2xvdWQudm0uTmljUHJvZmlsZUVY7kYs6AbAAgAeWgAKZGVmYXVsdE5pY0oAAmlkWgAWaXNTZWN1cml0eUdyb3VwRW5hYmxlZEoACW5ldHdvcmtJZEoABHZtSWRMAA1icm9hZGNhc3RUeXBldAAwTGNvbS9jbG91ZC9uZXR3b3JrL05ldHdvcmtzJEJyb2FkY2FzdERvbWFpblR5cGU7TAAMYnJvYWRjYXN0VXJpdAAOTGphdmEvbmV0L1VSSTtMAAhkZXZpY2VJZHQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAARkbnMxcQB-AARMAARkbnMycQB-AARMAAZmb3JtYXR0ACpMY29tL2Nsb3VkL25ldHdvcmsvTmV0d29ya3MkQWRkcmVzc0Zvcm1hdDtMAAdnYXRld2F5cQB-AARMAAppcDRBZGRyZXNzcQB-AARMAAppcDZBZGRyZXNzcQB-AARMAAdpcDZDaWRycQB-AARMAAdpcDZEbnMxcQB-AARMAAdpcDZEbnMycQB-AARMAAppcDZHYXRld2F5cQB-AARMAAxpc29sYXRpb25VcmlxAH4ADEwACm1hY0FkZHJlc3NxAH4ABEwABG1vZGV0ACFMY29tL2Nsb3VkL25ldHdvcmsvTmV0d29ya3MkTW9kZTtMAARuYW1lcQB-AARMAAduZXRtYXNrcQB-AARMAAtuZXR3b3JrUmF0ZXEAfgANTAANcmVxdWVzdGVkSXB2NHEAfgAETAANcmVxdWVzdGVkSXB2NnEAfgAETAANcmVzZXJ2YXRpb25JZHEAfgAETAAIc3RyYXRlZ3l0ACZMY29tL2Nsb3VkL3ZtL05pYyRSZXNlcnZhdGlvblN0cmF0ZWd5O0wAC3RyYWZmaWNUeXBldAAoTGNvbS9jbG91ZC9uZXR3b3JrL05ldHdvcmtzJFRyYWZmaWNUeXBlO0wABHV1aWRxAH4ABHhwAABwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBw,
>  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
> null, initMsid: 8796758677527, completeMsid: null, lastUpdated: null, 
> lastPolled: null, created: Tue Feb 10 07:20:35 EST 2015}, job origin:26
> java.lang.NullPointerException
>   at 
> com.cloud.hypervisor.HypervisorGuruBase.toVirtualMachineTO(HypervisorGuruBase.java:125)
>   at com.cloud.simulator.SimulatorGuru.implement(SimulatorGuru.java:46)
>   at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateAddVmToNetwork(VirtualMachineManagerImpl.java:3467)
>   at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateAddVmToNetwork(VirtualMachineManagerImpl.java:5288)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
>   at 
> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:5346)
>   at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.jav

[jira] [Commented] (CLOUDSTACK-8237) add nic with instance throw java.lang.NullPointerException

2015-02-10 Thread Star Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315328#comment-14315328
 ] 

Star Guo commented on CLOUDSTACK-8237:
--

Hi,

  Has anyone else met this issue? Thanks.

Best Regards,
Star Guo

-----Original Message-----
From: Star Guo (JIRA) [mailto:j...@apache.org] 
Sent: February 10, 2015 20:45
To: cloudstack-iss...@incubator.apache.org
Subject: [jira] [Created] (CLOUDSTACK-8237) add nic with instance throw 
java.lang.NullPointerException

Star Guo created CLOUDSTACK-8237:


 Summary: add nic with instance throw 
java.lang.NullPointerException 
 Key: CLOUDSTACK-8237
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8237
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.4.3
 Environment: CentOS 7 with Java 1.7.0_75, running the simulator
Reporter: Star Guo


In the simulator environment, I create a service offering with custom CPU & memory, and deploy 
an instance with this service offering. After that, I add a NIC to this instance 
and it throws this issue:


WARN  [c.c.u.d.Merovingian2] (API-Job-Executor-2:ctx-abcb085b job-26 
ctx-c85611f5) Was unable to find lock for the key vm_instance6 and thread id 
2089552368 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
(Work-Job-Executor-2:ctx-754aeed8 job-26/job-27) Add job-27 into job monitoring 
ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-2:ctx-754aeed8 
job-26/job-27 ctx-26199438) Invocation exception, caused by: 
java.lang.NullPointerException INFO  [c.c.v.VmWorkJobHandlerProxy] 
(Work-Job-Executor-2:ctx-754aeed8 job-26/job-27 ctx-26199438) Rethrow exception 
java.lang.NullPointerException ERROR [c.c.v.VmWorkJobDispatcher] 
(Work-Job-Executor-2:ctx-754aeed8 job-26/job-27) Unable to complete AsyncJobVO 
{id:27, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: 
com.cloud.vm.VmWorkAddVmToNetwork, cmdInfo: 
rO0ABXNyACFjb20uY2xvdWQudm0uVm1Xb3JrQWRkVm1Ub05ldHdvcmt6-m3bkApgrQIAAkwACW5ldHdvcmtJZHQAEExqYXZhL2xhbmcvTG9uZztMABJyZXF1c3RlZE5pY1Byb2ZpbGV0ABlMY29tL2Nsb3VkL3ZtL05pY1Byb2ZpbGU7eHIAE2NvbS5jbG91ZC52bS5WbVdvcmufmbZW8CVnawIABEoACWFjY291bnRJZEoABnVzZXJJZEoABHZtSWRMAAtoYW5kbGVyTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO3hwAAIAAgAGdAAZVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbHNyAA5qYXZhLmxhbmcuTG9uZzuL5JDMjyPfAgABSgAFdmFsdWV4cgAQamF2YS5sYW5nLk51bWJlcoaslR0LlOCLAgAAeHAAzXNyABdjb20uY2xvdWQudm0uTmljUHJvZmlsZUVY7kYs6AbAAgAeWgAKZGVmYXVsdE5pY0oAAmlkWgAWaXNTZWN1cml0eUdyb3VwRW5hYmxlZEoACW5ldHdvcmtJZEoABHZtSWRMAA1icm9hZGNhc3RUeXBldAAwTGNvbS9jbG91ZC9uZXR3b3JrL05ldHdvcmtzJEJyb2FkY2FzdERvbWFpblR5cGU7TAAMYnJvYWRjYXN0VXJpdAAOTGphdmEvbmV0L1VSSTtMAAhkZXZpY2VJZHQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAARkbnMxcQB-AARMAARkbnMycQB-AARMAAZmb3JtYXR0ACpMY29tL2Nsb3VkL25ldHdvcmsvTmV0d29ya3MkQWRkcmVzc0Zvcm1hdDtMAAdnYXRld2F5cQB-AARMAAppcDRBZGRyZXNzcQB-AARMAAppcDZBZGRyZXNzcQB-AARMAAdpcDZDaWRycQB-AARMAAdpcDZEbnMxcQB-AARMAAdpcDZEbnMycQB-AARMAAppcDZHYXRld2F5cQB-AARMAAxpc29sYXRpb25VcmlxAH4ADEwACm1hY0FkZHJlc3NxAH4ABEwABG1vZGV0ACFMY29tL2Nsb3VkL25ldHdvcmsvTmV0d29ya3MkTW9kZTtMAARuYW1lcQB-AARMAAduZXRtYXNrcQB-AARMAAtuZXR3b3JrUmF0ZXEAfgANTAANcmVxdWVzdGVkSXB2NHEAfgAETAANcmVxdWVzdGVkSXB2NnEAfgAETAANcmVzZXJ2YXRpb25JZHEAfgAETAAIc3RyYXRlZ3l0ACZMY29tL2Nsb3VkL3ZtL05pYyRSZXNlcnZhdGlvblN0cmF0ZWd5O0wAC3RyYWZmaWNUeXBldAAoTGNvbS9jbG91ZC9uZXR3b3JrL05ldHdvcmtzJFRyYWZmaWNUeXBlO0wABHV1aWRxAH4ABHhwAABwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBw,
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 8796758677527, completeMsid: null, lastUpdated: null, 
lastPolled: null, created: Tue Feb 10 07:20:35 EST 2015}, job origin:26 
java.lang.NullPointerException
at 
com.cloud.hypervisor.HypervisorGuruBase.toVirtualMachineTO(HypervisorGuruBase.java:125)
at com.cloud.simulator.SimulatorGuru.implement(SimulatorGuru.java:46)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateAddVmToNetwork(VirtualMachineManagerImpl.java:3467)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateAddVmToNetwork(VirtualMachineManagerImpl.java:5288)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at 
com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:5346)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
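
The trace points at HypervisorGuruBase.toVirtualMachineTO(); one plausible but
unconfirmed trigger is that a custom compute offering leaves CPU count, speed and
RAM unset until deploy time. Below is a generic defensive-defaults sketch of that
idea; the names are hypothetical and this is not the actual CloudStack code or fix:

import java.util.HashMap;
import java.util.Map;

public class CustomOfferingDefaults {

    // Resolves an effective CPU/speed/RAM value: a value fixed on the offering
    // wins, otherwise the per-VM detail (set at deploy time for a "custom"
    // offering) is used, otherwise a safe fallback.
    public static int resolve(Integer offeringValue, Map<String, String> vmDetails,
                              String detailKey, int fallback) {
        if (offeringValue != null) {
            return offeringValue;
        }
        String detail = (vmDetails == null) ? null : vmDetails.get(detailKey);
        return (detail != null) ? Integer.parseInt(detail) : fallback;
    }

    public static void main(String[] args) {
        Map<String, String> details = new HashMap<String, String>();
        details.put("cpuNumber", "2");
        details.put("memory", "2048");
        System.out.println(resolve(null, details, "cpuNumber", 1)); // 2
        System.out.println(resolve(null, details, "cpuSpeed", 500)); // 500 (fallback)
    }
}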
  

[jira] [Created] (CLOUDSTACK-8237) add nic with instance throw java.lang.NullPointerException

2015-02-10 Thread Star Guo (JIRA)
Star Guo created CLOUDSTACK-8237:


 Summary: add nic with instance throw 
java.lang.NullPointerException 
 Key: CLOUDSTACK-8237
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8237
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.4.3
 Environment: CentOS 7 with Java 1.7.0_75, running the simulator
Reporter: Star Guo


In the simulator environment, I create a service offering with custom CPU & memory, and deploy 
an instance with this service offering. After that, I add a NIC to this instance 
and it throws this issue:


WARN  [c.c.u.d.Merovingian2] (API-Job-Executor-2:ctx-abcb085b job-26 
ctx-c85611f5) Was unable to find lock for the key vm_instance6 and thread id 
2089552368
INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-2:ctx-754aeed8 
job-26/job-27) Add job-27 into job monitoring
ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-2:ctx-754aeed8 
job-26/job-27 ctx-26199438) Invocation exception, caused by: 
java.lang.NullPointerException
INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-2:ctx-754aeed8 
job-26/job-27 ctx-26199438) Rethrow exception java.lang.NullPointerException
ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-2:ctx-754aeed8 
job-26/job-27) Unable to complete AsyncJobVO {id:27, userId: 2, accountId: 2, 
instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkAddVmToNetwork, 
cmdInfo: 
rO0ABXNyACFjb20uY2xvdWQudm0uVm1Xb3JrQWRkVm1Ub05ldHdvcmt6-m3bkApgrQIAAkwACW5ldHdvcmtJZHQAEExqYXZhL2xhbmcvTG9uZztMABJyZXF1c3RlZE5pY1Byb2ZpbGV0ABlMY29tL2Nsb3VkL3ZtL05pY1Byb2ZpbGU7eHIAE2NvbS5jbG91ZC52bS5WbVdvcmufmbZW8CVnawIABEoACWFjY291bnRJZEoABnVzZXJJZEoABHZtSWRMAAtoYW5kbGVyTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO3hwAAIAAgAGdAAZVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wbHNyAA5qYXZhLmxhbmcuTG9uZzuL5JDMjyPfAgABSgAFdmFsdWV4cgAQamF2YS5sYW5nLk51bWJlcoaslR0LlOCLAgAAeHAAzXNyABdjb20uY2xvdWQudm0uTmljUHJvZmlsZUVY7kYs6AbAAgAeWgAKZGVmYXVsdE5pY0oAAmlkWgAWaXNTZWN1cml0eUdyb3VwRW5hYmxlZEoACW5ldHdvcmtJZEoABHZtSWRMAA1icm9hZGNhc3RUeXBldAAwTGNvbS9jbG91ZC9uZXR3b3JrL05ldHdvcmtzJEJyb2FkY2FzdERvbWFpblR5cGU7TAAMYnJvYWRjYXN0VXJpdAAOTGphdmEvbmV0L1VSSTtMAAhkZXZpY2VJZHQAE0xqYXZhL2xhbmcvSW50ZWdlcjtMAARkbnMxcQB-AARMAARkbnMycQB-AARMAAZmb3JtYXR0ACpMY29tL2Nsb3VkL25ldHdvcmsvTmV0d29ya3MkQWRkcmVzc0Zvcm1hdDtMAAdnYXRld2F5cQB-AARMAAppcDRBZGRyZXNzcQB-AARMAAppcDZBZGRyZXNzcQB-AARMAAdpcDZDaWRycQB-AARMAAdpcDZEbnMxcQB-AARMAAdpcDZEbnMycQB-AARMAAppcDZHYXRld2F5cQB-AARMAAxpc29sYXRpb25VcmlxAH4ADEwACm1hY0FkZHJlc3NxAH4ABEwABG1vZGV0ACFMY29tL2Nsb3VkL25ldHdvcmsvTmV0d29ya3MkTW9kZTtMAARuYW1lcQB-AARMAAduZXRtYXNrcQB-AARMAAtuZXR3b3JrUmF0ZXEAfgANTAANcmVxdWVzdGVkSXB2NHEAfgAETAANcmVxdWVzdGVkSXB2NnEAfgAETAANcmVzZXJ2YXRpb25JZHEAfgAETAAIc3RyYXRlZ3l0ACZMY29tL2Nsb3VkL3ZtL05pYyRSZXNlcnZhdGlvblN0cmF0ZWd5O0wAC3RyYWZmaWNUeXBldAAoTGNvbS9jbG91ZC9uZXR3b3JrL05ldHdvcmtzJFRyYWZmaWNUeXBlO0wABHV1aWRxAH4ABHhwAABwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBw,
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 8796758677527, completeMsid: null, lastUpdated: null, 
lastPolled: null, created: Tue Feb 10 07:20:35 EST 2015}, job origin:26
java.lang.NullPointerException
at 
com.cloud.hypervisor.HypervisorGuruBase.toVirtualMachineTO(HypervisorGuruBase.java:125)
at com.cloud.simulator.SimulatorGuru.implement(SimulatorGuru.java:46)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateAddVmToNetwork(VirtualMachineManagerImpl.java:3467)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateAddVmToNetwork(VirtualMachineManagerImpl.java:5288)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at 
com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:5346)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:501)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at

[jira] [Commented] (CLOUDSTACK-5891) [VMware] Template detail cpu.corespersocket's value is not honoured

2015-02-08 Thread Star Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311678#comment-14311678
 ] 

Star Guo commented on CLOUDSTACK-5891:
--

I run CS 4.4.3 + vSphere 5.5, and I add "cpu.corespersocket=2" as a template tag. 
When I deploy a virtual machine with 4 vCPUs from this template, the CPU setting 
of the instance in the vSphere client is still 4 sockets. Is this not supported on 
vSphere 5.5?
The deploy log is: 
2015-02-06 18:03:38,839 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(API-Job-Executor-16:ctx-b666324d job-508 ctx-35215523) Complete async job-508, 
jobStatus: SUCCEEDED, resultCode: 0, result: 
org.apache.cloudstack.api.response.UserVmResponse/virtualmachine/{"id":"c9f31a6e-936f-46d9-9f52-beddd96a2c10","name":"c5","displayname":"c5","account":"star","domainid":"067c5eb0-a515-11e4-9f46-005056b951a2","domain":"ROOT","created":"2015-02-06T18:01:19+0800","state":"Stopped","haenable":false,"groupid":"1010425d-41f7-4f2d-a57c-af3becdcd1a9","group":"test","zoneid":"f64c91a7-5cc7-495e-a925-1d0c64549027","zonename":"zone1","templateid":"90e77401-03c8-4dcc-a11b-57bcdd7a9bfc","templatename":"CentOS-6.6-x64-vmware","templatedisplaytext":"CentOS-6.6-x64-v1.0","passwordenabled":false,"serviceofferingid":"99d3934e-fb0d-4333-83d2-fb19b7aab4f0","serviceofferingname":"4core-4g-vmware","cpunumber":4,"cpuspeed":2000,"memory":4096,"cpuused":"0%","networkkbsread":0,"networkkbswrite":0,"guestosid":"373f90d0-a515-11e4-9f46-005056b951a2","rootdeviceid":0,"rootdevicetype":"ROOT","securitygroup":[],"nic":[{"id":"f0773cd5-4ece-491b-9fae-913744a48234","networkid":"40e899a8-b855-4799-8002-659c3f77ed31","networkname":"demo-network1","netmask":"255.255.255.0","gateway":"10.89.13.254","ipaddress":"10.89.13.98","isolationuri":"vlan://613","broadcasturi":"vlan://613","traffictype":"Guest","type":"Shared","isdefault":true,"macaddress":"06:ae:66:00:00:dc"}],"hypervisor":"VMware","tags":[{"key":"cpu.corespersocket","value":"2","resourcetype":"UserVm","resourceid":"c9f31a6e-936f-46d9-9f52-beddd96a2c10","account":"guohuaxing","domainid":"067c5eb0-a515-11e4-9f46-005056b951a2","domain":"ROOT"}],"affinitygroup":[],"displayvm":true,"isdynamicallyscalable":true,"ostypeid":228,"jobid":"34f5572b-7495-40ba-a057-85bfc95fbd76","jobstatus":0}

> [VMware] Template detail cpu.corespersocket's value is not honoured
> ---
>
> Key: CLOUDSTACK-5891
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5891
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.3.0
>Reporter: Likitha Shetty
>Assignee: Likitha Shetty
> Fix For: 4.4.0
>
>
> If a template has been registered and "cpu.corespersocket=X" template details 
> have been added for it, then any instance deployed from that template should 
> have X cores per socket.
> This allows creation of an instance with multiple cores per socket on 
> XenServer. Allow the same for VMware. 
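
A minimal sketch of what honouring the detail could look like on the VMware side,
assuming the vSphere vim25 Java API's VirtualMachineConfigSpec, which exposes
numCoresPerSocket since vSphere 5 / virtual hardware version 8; illustrative only,
not CloudStack's actual implementation:

import com.vmware.vim25.VirtualMachineConfigSpec;

public class CoresPerSocketSpecBuilder {

    // Builds a reconfigure spec that splits the requested vCPUs into sockets
    // of "cpu.corespersocket" cores each (the template detail value).
    public static VirtualMachineConfigSpec build(int totalVcpus, int coresPerSocket) {
        if (coresPerSocket <= 0 || totalVcpus % coresPerSocket != 0) {
            coresPerSocket = 1; // fall back to one core per socket
        }
        VirtualMachineConfigSpec spec = new VirtualMachineConfigSpec();
        spec.setNumCPUs(totalVcpus);               // e.g. 4 vCPUs
        spec.setNumCoresPerSocket(coresPerSocket); // e.g. 2 -> 2 sockets x 2 cores
        return spec;
    }
}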



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8088) VM scale up is failing in vmware with Unable to execute ScaleVmCommand due to java.lang.NullPointerException

2014-12-18 Thread Star Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14251395#comment-14251395
 ] 

Star Guo commented on CLOUDSTACK-8088:
--

Hi, which version?

> VM scale up is failing in vmware with Unable to execute ScaleVmCommand due to 
> java.lang.NullPointerException
> 
>
> Key: CLOUDSTACK-8088
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8088
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Saksham Srivastava
>Assignee: Saksham Srivastava
>
> Setup
> ---
> vCenter 5.5 (ESXi 5.5) setup
> Steps to reproduce
> 
> 1. Enable the zone-level setting enable.dynamic.scale.vm
> 2. Deploy a VM which is dynamically scalable
> 3. Try to scale the VM up to the medium SO (#cpu=1, cpu=1000, mem=1024)
> Expected
> ---
> Scaling up should be successful
> Actual
> ---
> Scaling up fails with the error:
> com.cloud.utils.exception.CloudRuntimeException: Unable to scale vm due to 
> Unable to execute ScaleVmCommand due to java.lang.NullPointerException



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)