[one-users] OpenNebula events in schedule for 2015
Dear community,

We have events already scheduled for 2015: http://opennebula.org/2015-opennebula-events-plan/

If you have any questions about OpenNebula events, please contact us at eve...@opennebula.org

Thank you for your support!

--
Ignacio M. Llorente, PhD, MBA
Project Director
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | imllore...@opennebula.org | @OpenNebula http://twitter.com/opennebula

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] Rotating the econe-server.log and the sunstone.log
Hi:

Try adding the option /copytruncate to your logrotate setup.

By default, logrotate moves the log file and creates a new one. But the file handle that the process holds still points to the moved (renamed) old log file instead of the newly created one, so the process keeps writing to the old log file. With the copytruncate option, the old log file is first copied and then truncated (i.e., emptied) instead of moved. This way, the file handle still points to the right file.

Please note that the action triggered by the copytruncate option is much slower than the default move of the file, so it is possible to lose a few log entries while it runs.

I hope this helps you.

Cheers

On 01/02/2015 06:30 PM, Steven Timm wrote:

I have a logrotate setup for /var/log/one/oned.log and sched.log that works just fine:

[timm@snowball logrotate.d]$ more oned
/var/log/one/*.log {
    missingok
    daily
    notifempty
    sharedscripts
    postrotate
        killall -HUP oned
        killall -HUP mm_sched
    endscript
}

However, when I try to have it also rotate econe-server.log and sunstone.log, it doesn't work. The new log file appears as scheduled, but econe-server keeps writing to the old dated one and the new log file stays blank. Sending a HUP signal to econe-server does not appear to make any difference. Suggestions?

Steve

--
Steven C. Timm, Ph.D (630) 840-8525
t...@fnal.gov http://home.fnal.gov/~timm/
Office: Wilson Hall room 804
Fermilab Scientific Computing Division, Scientific Computing Facilities Quadrant, Experimental Computing Facilities Dept., Project Lead for Virtual Facility Project.
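Putting the advice in this thread together, a minimal logrotate stanza for the two problem logs might look like the following. This is a sketch assuming the stock OpenNebula log locations under /var/log/one/; note that the option name is copytruncate, without the leading slash (as the follow-up message in this thread corrects):

```
/var/log/one/sunstone.log /var/log/one/econe-server.log {
    missingok
    daily
    notifempty
    copytruncate
}
```

Because copytruncate leaves the original file (and the process's file handle) in place, no postrotate HUP signal is needed for these two services.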
Re: [one-users] Rotating the econe-server.log and the sunstone.log
Sorry for the mistake: the right option is copytruncate, not /copytruncate.

On 01/05/2015 09:27 AM, rdiez wrote:
[...]
Re: [one-users] Bulk delete of vCenter VM's leaves stray VM's
Hi Javier,

The bug concerning the bulk creation of VMs works as expected now. Do you have an idea of what the problem is while bulk deleting VMs?

Best regards,

Sebastiaan Smit

From: Javier Fontan [mailto:jfon...@opennebula.org]
Sent: Friday, November 14, 2014 15:44
To: Sebastiaan Smit; users@lists.opennebula.org
Subject: Re: [one-users] Bulk delete of vCenter VM's leaves stray VM's

There was a bug in the driver that caused errors when deploying several VMs at the same time. To fix it, change the file /var/lib/one/remotes/vmm/vcenter/vcenter_driver.rb at line 120 from this code:

def find_vm_template(uuid)
    vms = @dc.vmFolder.childEntity.grep(RbVmomi::VIM::VirtualMachine)
    return vms.find{ |v| v.config.uuid == uuid }
end

to this other one:

def find_vm_template(uuid)
    vms = @dc.vmFolder.childEntity.grep(RbVmomi::VIM::VirtualMachine)
    return vms.find{ |v| v.config && v.config.uuid == uuid }
end

We are still looking into the problem when deleting several VMs. Thanks for telling us.

On Thu Nov 13 2014 at 12:59:55 PM Javier Fontan jfon...@opennebula.org wrote:

Hi,

We have opened an issue to track this problem: http://dev.opennebula.org/issues/3334

Meanwhile you can decrease the number of actions sent by changing, in /etc/one/oned.conf, the parameter -t (number of threads) for the VM driver. For example:

VM_MAD = [
    name       = "vcenter",
    executable = "one_vmm_sh",
    arguments  = "-p -t 2 -r 0 vcenter -s sh",
    type       = "xml" ]

Cheers

On Wed Nov 12 2014 at 5:40:00 PM Sebastiaan Smit b...@echelon.nl wrote:

Hi list,

We're testing the vCenter functionality in version 4.10 and see some strange behaviour while doing bulk actions. Deleting VMs sometimes leaves stray VMs on our cluster. We see the following in the VM log:

Sun Nov 9 15:51:34 2014 [Z0][LCM][I]: New VM state is RUNNING
Wed Nov 12 17:30:36 2014 [Z0][LCM][I]: New VM state is CLEANUP.
Wed Nov 12 17:30:36 2014 [Z0][VMM][I]: Driver command for 60 cancelled
Wed Nov 12 17:30:36 2014 [Z0][DiM][I]: New VM state is DONE
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Command execution fail: /var/lib/one/remotes/vmm/vcenter/cancel '423cdcae-b6b3-07c1-def6-96b9f3f4b7b3' 'demo-01' 60 demo-01
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Cancel of VM 423cdcae-b6b3-07c1-def6-96b9f3f4b7b3 on host demo-01 failed due to ManagedObjectNotFound: The object has already been deleted or has not been completely created
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 ExitCode: 255
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Failed to execute virtualization driver operation: cancel.
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Successfully execute network driver operation: clean.
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: CLEANUP SUCCESS 60

We see different behaviour while bulk-creating VMs (20+ at a time):

Sun Nov 9 16:01:34 2014 [Z0][DiM][I]: New VM state is ACTIVE.
Sun Nov 9 16:01:34 2014 [Z0][LCM][I]: New VM state is PROLOG.
Sun Nov 9 16:01:34 2014 [Z0][LCM][I]: New VM state is BOOT
Sun Nov 9 16:01:34 2014 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/81/deployment.0
Sun Nov 9 16:01:34 2014 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Sun Nov 9 16:01:36 2014 [Z0][VMM][I]: Command execution fail: /var/lib/one/remotes/vmm/vcenter/deploy '/var/lib/one/vms/81/deployment.0' 'demo-01' 81 demo-01
Sun Nov 9 16:01:36 2014 [Z0][VMM][I]: Deploy of VM 81 on host demo-01 with /var/lib/one/vms/81/deployment.0 failed due to undefined method `uuid' for nil:NilClass
Sun Nov 9 16:01:36 2014 [Z0][VMM][I]: ExitCode: 255
Sun Nov 9 16:01:36 2014 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Sun Nov 9 16:01:36 2014 [Z0][VMM][E]: Error deploying virtual machine
Sun Nov 9 16:01:36 2014 [Z0][DiM][I]: New VM state is FAILED
Wed Nov 12 17:30:19 2014 [Z0][DiM][I]: New VM state is DONE.
Wed Nov 12 17:30:19 2014 [Z0][LCM][E]: epilog_success_action, VM in a wrong state

I think these have two different root causes. The cluster is not under load. Has anyone else seen this behaviour?

Best regards,

--
Sebastiaan Smit
Echelon BV
E: b...@echelon.nl
W: www.echelon.nl
T: (088) 3243566 (new number)
T: (088) 3243505 (service desk)
F: (053) 4336222
KVK: 06055381
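For context on the driver fix quoted earlier in this thread: during concurrent create/delete operations, vCenter can return a VirtualMachine object whose config has not been populated yet (nil), and calling .uuid on nil is exactly the "undefined method `uuid' for nil:NilClass" error in the deploy log above. A minimal standalone Ruby sketch of the guard pattern, using hypothetical Struct stand-ins instead of the real RbVmomi objects:

```ruby
# Stand-ins for RbVmomi objects: a VM's config may be nil while
# vCenter is still creating (or already deleting) the machine.
Config = Struct.new(:uuid)
VM     = Struct.new(:config)

vms = [
  VM.new(nil),                     # config not populated yet
  VM.new(Config.new("423cdcae")),  # the VM we are looking for
  VM.new(Config.new("deadbeef"))
]

# Broken form: v.config.uuid raises NoMethodError on the nil config.
# vms.find { |v| v.config.uuid == "423cdcae" }

# Fixed form: the v.config guard short-circuits and skips
# VMs whose config is not populated, instead of crashing.
found = vms.find { |v| v.config && v.config.uuid == "423cdcae" }
puts found.config.uuid  # => "423cdcae"
```

Ruby's && short-circuits, so the right-hand side is only evaluated when config is non-nil; Enumerable#find then returns the first VM for which the whole predicate is truthy.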