Re: Wiki access
May I get access to edit pages as well? username: rajanik -- Thanks, Rajani

-----Original Message-----
From: Chip Childers <chip.child...@sungard.com>
Reply-to: dev@cloudstack.apache.org
To: dev@cloudstack.apache.org
Subject: Re: Wiki access
Date: Mon, 7 Oct 2013 16:54:54 -0400

On Mon, Oct 7, 2013 at 4:52 PM, David Ortiz <dpor...@outlook.com> wrote: Hello, Is it possible to get edit permission? Username is dortiz. Thanks, David Ortiz

Done
Re: High CPU utilization on KVM hosts while doing RBD snapshot - was Re: snapshot caused host disconnected
On 10/08/2013 04:59 AM, Indra Pramana wrote: Dear Wido and all, I performed some further tests last night:

(1) CPU utilization of the KVM host while an RBD snapshot is running still shoots up high, even after I set the global setting concurrent.snapshots.threshold.perhost to 2.
(2) Most of the concurrent snapshot processes will fail, either stuck in the Creating state or with a CreatedOnPrimary error message.

Hmm, that is odd. It uses rados-java to call the RBD library to create the snapshot and afterwards it copies it to Secondary Storage. I'm leaving for the Ceph Days and the Build a Cloud Day afterwards in London now, so I won't be able to look at this for the coming 2 days.

(3) I also have adjusted some other related global settings such as backup.snapshot.wait and job.expire.minutes, without any luck. Any advice on what causes the high CPU utilization is greatly appreciated.

You might want to set the Agent log to debug and see if the RBD snapshot was created; it should log that ("Attempting to create RBD snapshot"): https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob;f=plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java;h=1b883519073acc7514b66857e080a464714c4324;hb=4.2#l1091 If that succeeds, the problem lies with backing up the snapshot to Secondary Storage. Wido

Looking forward to your reply, thank you. Cheers.

On Mon, Oct 7, 2013 at 11:03 PM, Indra Pramana in...@sg.or.id wrote: Dear all, I also found out that when the RBD snapshot is being run, the CPU utilisation on the KVM host shoots up very high, which might explain why the host becomes disconnected.
top - 22:49:32 up 3 days, 19:31, 1 user, load average: 7.85, 4.97, 3.47
Tasks: 297 total, 3 running, 294 sleeping, 0 stopped, 0 zombie
Cpu(s): 4.5%us, 1.2%sy, 0.0%ni, 94.1%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 264125244k total, 77203460k used, 186921784k free, 154888k buffers
Swap: 545788k total, 0k used, 545788k free, 60677092k cached

  PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
18161 root  20  0 3871m  31m 8444 S  101  0.0 301:58.09 kvm
 2790 root  20  0 43.5g 1.6g  19m S   97  0.7  45:52.42 jsvc
24544 root  20  0 4583m  31m 8364 S   97  0.0 425:29.48 kvm
 6537 root  20  0     0    0    0 R   71  0.0   0:17.49 kworker/3:2
22546 root  20  0 6143m 2.0g 8452 S   26  0.8  55:14.07 kvm
 4219 root  20  0 7671m 4.0g 8524 S    6  1.6 106:12.26 kvm
 5989 root  20  0 43.2g 1.6g  232 D    6  0.6   0:08.13 jsvc
 5993 root  20  0 43.3g 1.6g  224 D    6  0.6   0:08.36 jsvc

Is it normal that, when a snapshot is being run on a VM on that host, the host's CPU utilisation is higher than usual? How can I limit the CPU resources used by the snapshot? Looking forward to your reply, thank you. Cheers.

On Mon, Oct 7, 2013 at 7:18 PM, Indra Pramana in...@sg.or.id wrote: Dear all, I did some tests on snapshots since they are now supported for my Ceph RBD primary storage in CloudStack 4.2. When I ran a snapshot for a particular VM instance earlier, I noticed that it caused the host (where the VM is) to become disconnected. Here's the excerpt from the agent.log: http://pastebin.com/dxVV7stu The management-server.log doesn't show much other than detecting that the host was down and HA being activated: http://pastebin.com/UeLiSm9K Can anyone advise what is causing the problem? So far only one user is doing the snapshotting and it has already caused issues on the host; I can't imagine what would happen if multiple users tried to snapshot at the same time.
I read about snapshot job throttling, which is described in the manual: http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Admin_Guide/working-with-snapshots.html But I am not sure whether this will resolve the problem, since only one user is performing a snapshot and we already encounter the problem. Can anyone advise how I can troubleshoot further and find a solution? Looking forward to your reply, thank you. Cheers.
Re: DOC ACS 4.2 - Need of CSP with xenserver 6.2 for EIP
Hi Travis, Great! Thanks for the response. Regards, Benoit 2013/10/7 Travis Graham tgra...@tgraham.us CSP was included in XS 6.1 forward, so there's no need to install it. Travis On Oct 7, 2013, at 11:58 AM, benoit lair kurushi4...@gmail.com wrote: Hi, I'm reading the docs of ACS 4.2, about the XenServer CSP (ch 8.2.7: http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Installation_Guide/citrix-xenserver-installation.html ) Do we still need a CSP for XenServer 6.2 (for example for EIP and ELB) with ACS 4.2? If yes, what is the link providing the right CSP? Thanks a lot. Regards, Benoit.
[4.2] [xenserver] [system vms] Xentools
Hi all, The current XenServer system VM template doesn't seem to include XenServer Tools - is this by design? systemvmtemplate-2013-07-12-master-xen.vhd.bz2

Regards, Paul Angus
Senior Consultant / Cloud Architect
S: +44 20 3603 0540 | T: CloudyAngus
paul.an...@shapeblue.com | www.shapeblue.com | Twitter: @shapeblue
ShapeBlue Ltd, 53 Chandos Place, Covent Garden, London, WC2N 4HS

Apache CloudStack Bootcamp training courses (http://www.shapeblue.com/cloudstack-bootcamp-training-course/):
02/03 October, London
13/14 November, London
27/28 November, Bangalore
08/09 January 2014, London

This email and any attachments to it may be confidential and are intended solely for the use of the individual to whom it is addressed. Any views or opinions expressed are solely those of the author and do not necessarily represent those of Shape Blue Ltd or related companies. If you are not the intended recipient of this email, you must neither take any action based upon its contents, nor copy or show it to anyone. Please contact the sender if you believe you have received this email in error. Shape Blue Ltd is a company incorporated in England & Wales. ShapeBlue Services India LLP is a company incorporated in India and is operated under license from Shape Blue Ltd. Shape Blue Brasil Consultoria Ltda is a company incorporated in Brasil and is operated under license from Shape Blue Ltd. ShapeBlue is a registered trademark.
RE: [DISCUSS] Breaking out Marvin from CloudStack
Comments inline.

-----Original Message-----
From: Edison Su [mailto:edison...@citrix.com]
Sent: Tuesday, October 08, 2013 4:18 AM
To: dev@cloudstack.apache.org
Subject: RE: [DISCUSS] Breaking out Marvin from CloudStack

A few questions:

1. About the more object-oriented CloudStack API Python binding: is the proposed API good enough? For example, the current hand-written create virtual machine looks like:

class VirtualMachine(object):
    @classmethod
    def create(cls, apiclient, services, templateid=None, accountid=None, domainid=None, zoneid=None, networkids=None, serviceofferingid=None, securitygroupids=None, projectid=None, startvm=None, diskofferingid=None, affinitygroupnames=None, group=None, hostid=None, keypair=None, mode='basic', method='GET'):

The proposed API may look like:

class VirtualMachine(object):
    def create(self, apiclient, accountId, templateId, **kwargs)

The proposed API will look better than the previous one, and it's automatically generated, so it is easy to maintain. But as a consumer of the API, how do people know what kind of parameters should be passed in? Will you have an online document for your API? Or do you assume people will look at the API docs generated by CloudStack? Or why not make the API itself self-contained? For example, add docs before the create method:

class VirtualMachine(object):
    '''
    Args:
        accountId: whatever
        templateId: whatever
        networkids: whatever
    '''
    '''
    Response:
    '''
    def create(self, apiclient, accountId, templateId, **kwargs)

All the API documents should be included in API discovery already, so it should be easy to add them in your API binding.

[Santhosh]: Each verb, as an action on an entity, will have provision as earlier for all required as well as optional arguments. Regarding doc strings: if the API docs provide this, we will add them as corresponding doc strings during generation of the Python binding and of the entities. As you rightly mentioned, it will be good to add this.
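Edison's self-documenting suggestion can be fleshed out as a runnable sketch. Note this is illustrative only: the class layout, argument names, and docstring contents are assumptions, not the actual generated binding.

```python
class VirtualMachine(object):
    """Hypothetical auto-generated binding entity for a CloudStack VM."""

    def create(self, apiclient, accountid, templateid, **kwargs):
        """Deploy a virtual machine.

        Required args:
            apiclient:  connection to the management server
            accountid:  account that will own the VM
            templateid: template to deploy from

        Optional kwargs (mirroring deployVirtualMachine, pulled from
        API discovery at generation time): zoneid, networkids,
        serviceofferingid, keypair, and so on.
        """
        params = {'account': accountid, 'templateid': templateid}
        params.update(kwargs)
        # A real binding would sign and send the request here; returning
        # the assembled parameter map keeps the sketch self-contained.
        return params
```

With the docstring generated from API discovery, `help(VirtualMachine.create)` would give the consumer the parameter list without leaving the interpreter.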
We will make sure to get it in. Adding adequate doc strings applies while writing test features and libraries as well; it will improve ease of use, readability, etc. In any case, a wiki page and additional pydoc documents posted online will be there.

2. Regarding the data factories: from the proposed factories, does the test writer in each test case still need to write the code to get data, such as code to get an account during setUpClass? I looked at some of the existing test cases; most of them have the same code snippet:

class Services:
    def __init__(self):
        self.services = {
            "account": {
                "email": "t...@test.com",
                "firstname": "Test",
                "lastname": "User",
                "username": "test",
                "password": "password",
            },
            "virtual_machine": {
                "displayname": "Test VM",
                "username": "root",
                "password": "password",
                "ssh_port": 22,
                "hypervisor": 'XenServer',
                "privateport": 22,
                "publicport": 22,
                "protocol": 'TCP',
            },

With the data factories, the code will look like the following?

class TestFoo:
    def setupClass():
        account = UserAccount(apiclient)
        vm = UserVM(apiclient)

And if I want to customize the default data factories, I should be able to use something like UserAccount(apiclient, username='myfoo')? And the data factories should be customizable based on the test environment, right? For example, the current ISO test cases are hardcoded to test against http://people.apache.org/~tsp/dummy.iso, but that won't work for devcloud or in an internal network. The ISO data factory should be able to return a URL based on the test environment, so ISO test cases can be reused.

[Santhosh]: Currently, as you mentioned, a Services class is part of many test modules; this is basically the data part of the test. We are separating this with the factory approach, thus segregating data from test. Compare the earlier mention of the Services class with the test code below, which has no Services class.
class TestVpcLifeCycle(cloudstackTestCase):
    def setUp(self):
        self.apiclient = super(TestVpcLifeCycle, self).getClsTestClient().getApiClient()
        self.zoneid = get_zone(self.apiclient).id
        self.templateid = get_template(self.apiclient).id
        self.serviceofferingid = get_service_offering(self.apiclient).id
        self.account = UserAccount(
            apiclient=self.apiclient
        )  # <--- data factory creation

    @attr(tags='debug')
    def test_deployvm(self):
        vm = VpcVirtualMachine(
            apiclient=self.apiclient,
            account=self.account.name,
            domainid=self.account.domainid,
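The data/test separation discussed in this thread can be illustrated with a minimal, self-contained factory sketch. UserAccountFactory and its defaults are made up for illustration; Marvin's real factories are generated.

```python
import random
import string

class UserAccountFactory(object):
    """Illustrative data factory: carries the default account data that
    used to live in the per-module Services dict, and generates a unique
    username so repeated runs don't collide."""

    defaults = {
        'email': 'test@test.com',
        'firstname': 'Test',
        'lastname': 'User',
        'password': 'password',
    }

    def __init__(self, **overrides):
        data = dict(self.defaults)
        # A random suffix avoids duplicate-account failures across runs.
        data['username'] = 'test-' + ''.join(
            random.choice(string.ascii_lowercase) for _ in range(8))
        data.update(overrides)  # e.g. UserAccountFactory(username='myfoo')
        self.__dict__.update(data)
```

A test then asks for `UserAccountFactory()` in setUp instead of carrying its own JSON header, and overrides individual attributes only when the test actually cares about them.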
Re: [DISCUSS] Components in JIRA and bug assignment
On Mon, Oct 7, 2013 at 10:50 PM, Sheng Yang sh...@yasker.org wrote: One basic rule for a component maintainer to assign the ticket can be: fix your own code. I agree with Sheng, and with Ilya on the point that I think we will have to revisit this procedure soon. What I am thinking of is the reporter's expectation that, on entering a feature request or bug, follow-up will happen. I still don't like the approach but have no better alternative, so I have to: +1 regards, Daan
Re: Review Request 14468: CLOUDSTACK-702: Added test for verifying dns service on alias IP
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14468/#review26767 --- Ship it! Ship It! - venkata swamy babu budumuru On Oct. 3, 2013, 11:49 a.m., sanjeev n wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14468/ --- (Updated Oct. 3, 2013, 11:49 a.m.) Review request for cloudstack, venkata swamy babu budumuru, SrikanteswaraRao Talluri, and Prasanna Santhanam. Repository: cloudstack-git Description --- 1. Moved some code from test to setupClass method since it is required for all the tests 2. Added new test which will deploy vm in new cidr and verifies dns service on alias ip on VR. Diffs - test/integration/component/maint/test_multiple_ip_ranges.py 782957c Diff: https://reviews.apache.org/r/14468/diff/ Testing --- Yes Thanks, sanjeev n
Re: Review Request 14468: CLOUDSTACK-702: Added test for verifying dns service on alias IP
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14468/#review26768 --- Commit 1efd544ee27fd0c7c9eac4649568647c0dcbc85b in branch refs/heads/4.2-forward from sanjeevneelarapu [ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=1efd544 ] CLOUDSTACK-702: 1. Moved common code to setupClass method 2. Added a test to deply vm in new CIDR and verify dns service on alias IP Conflicts: test/integration/component/maint/test_multiple_ip_ranges.py Signed-off-by: sanjeevneelarapu sanjeev.neelar...@citrix.com Signed-off-by: venkataswamybabu budumuru venkataswamybabu.budum...@citrix.com - ASF Subversion and Git Services On Oct. 3, 2013, 11:49 a.m., sanjeev n wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14468/ --- (Updated Oct. 3, 2013, 11:49 a.m.) Review request for cloudstack, venkata swamy babu budumuru, SrikanteswaraRao Talluri, and Prasanna Santhanam. Repository: cloudstack-git Description --- 1. Moved some code from test to setupClass method since it is required for all the tests 2. Added new test which will deploy vm in new cidr and verifies dns service on alias ip on VR. Diffs - test/integration/component/maint/test_multiple_ip_ranges.py 782957c Diff: https://reviews.apache.org/r/14468/diff/ Testing --- Yes Thanks, sanjeev n
Re: Review Request 14468: CLOUDSTACK-702: Added test for verifying dns service on alias IP
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14468/#review26769 --- Commit f6c6f03fad5dafa2f28a7a8e8b9d8ab89bf22bf8 in branch refs/heads/master from sanjeevneelarapu [ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=f6c6f03 ] CLOUDSTACK-702: 1. Moved common code to setupClass method 2. Added a test to deply vm in new CIDR and verify dns service on alias IP Conflicts: test/integration/component/maint/test_multiple_ip_ranges.py Signed-off-by: sanjeevneelarapu sanjeev.neelar...@citrix.com Signed-off-by: venkataswamybabu budumuru venkataswamybabu.budum...@citrix.com (cherry picked from commit 1efd544ee27fd0c7c9eac4649568647c0dcbc85b) - ASF Subversion and Git Services On Oct. 3, 2013, 11:49 a.m., sanjeev n wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14468/ --- (Updated Oct. 3, 2013, 11:49 a.m.) Review request for cloudstack, venkata swamy babu budumuru, SrikanteswaraRao Talluri, and Prasanna Santhanam. Repository: cloudstack-git Description --- 1. Moved some code from test to setupClass method since it is required for all the tests 2. Added new test which will deploy vm in new cidr and verifies dns service on alias ip on VR. Diffs - test/integration/component/maint/test_multiple_ip_ranges.py 782957c Diff: https://reviews.apache.org/r/14468/diff/ Testing --- Yes Thanks, sanjeev n
Re: Review Request 14334: CLOUDSTACK 4705: Fixed domain memory limits test cases
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14334/#review26770 --- Ship it! Ship It! - abhinav roy On Sept. 25, 2013, 11:06 a.m., Gaurav Aradhye wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14334/ --- (Updated Sept. 25, 2013, 11:06 a.m.) Review request for cloudstack, sailaja mada and Prasanna Santhanam. Repository: cloudstack-git Description --- Fixed CLOUDSTACK 4705: Removed attribute error and fixed indentation issues. Added update_resource_count method to update the resource count after upgrading and downgrading the service offering so as to get latest count. Issue was found that it was showing old resource count without calling this API. Diffs - test/integration/component/memory_limits/test_domain_limits.py 479ec0b Diff: https://reviews.apache.org/r/14334/diff/ Testing --- Thanks, Gaurav Aradhye
RE: [4.2] [xenserver] [system vms] Xentools
Hi, Earlier there was discussion on putting xen tools in systemvms. Please look into that. http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201309.mbox/%3cce676527.4398b%25abhinandan.prat...@citrix.com%3E From: Paul Angus [mailto:paul.an...@shapeblue.com] Sent: Tuesday, October 08, 2013 1:36 PM To: dev@cloudstack.apache.org Subject: [4.2] [xenserver] [system vms] Xentools Hi all, The current XenServer system VM template doesn't seem to include XenServer Tools - is this by design? systemvmtemplate-2013-07-12-master-xen.vhd.bz2 Regards Paul Angus
Review Request 14531: CLOUDSTACK-702: Verify Userdata and Password service on alias ip on VR
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14531/ --- Review request for cloudstack, venkata swamy babu budumuru, SrikanteswaraRao Talluri, and Prasanna Santhanam. Repository: cloudstack-git Description --- Verify Userdata and Password service on alias IP on VR 1. Added two tests to verify userdata and password service after IP alias creation on VR. 2. Tests will deploy a VM in the new CIDR, check alias creation, and verify services on alias IP addresses on VR. Diffs - test/integration/component/maint/test_multiple_ip_ranges.py 68b5979 Diff: https://reviews.apache.org/r/14531/diff/ Testing --- yes Thanks, sanjeev n
Re: [DISCUSS] Return ssh publickeys in listSSHKeyPairs
On Oct 5, 2013, at 3:41 PM, Ian Duffy i...@ianduffy.ie wrote: Hi, With the development of gClouds, a google compute interface for cloudstack I have found the need to get access to the ssh public keys that Cloudstack generates as part of a keypair. The publickeys are currently not exposed in any way. As a result of this I'm implementing a hacky workaround to segment ssh public keys across tags on an instance which is far from ideal. Does anybody have any objections towards modifying listSSHKeyPairs to return the public key along with the fingerprint and key name? Thanks, Ian. that's a +1 from me since it is returned during the createSSHKeyPair call. There might be a security reason for not returning the public key on a list call, but I don't see it. -sebastien
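For concreteness, here is a sketch of what an extended listSSHKeyPairs entry could look like once the public key is returned alongside the existing name and fingerprint fields. The field name `publickey` and the key material below are placeholders illustrating the proposal, not the actual API change.

```python
# Hypothetical entry from an extended listSSHKeyPairs response.
keypair_entry = {
    'name': 'mykey',
    'fingerprint': 'aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99',
    'publickey': 'ssh-rsa AAAA... user@host',  # placeholder, not a real key
}

def public_keys(response):
    """Collect public keys from a list of keypair entries, so a client
    such as the gClouds interface no longer needs to stash them in
    instance tags as a workaround."""
    return [entry['publickey'] for entry in response]
```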
Re: marvin over https
Hi Prasanna, I didn't get around to this for a few days. Cloudmonkey works through the same connection. I will find some time in the coming days to test this with debug-enabled marvin. regards, Daan

On Thu, Oct 3, 2013 at 6:22 AM, Prasanna Santhanam t...@apache.org wrote: On Thu, Sep 26, 2013 at 04:21:38PM +0200, Daan Hoogland wrote: Hi, I have some trouble getting marvin to connect to cloudstack over https. I am supposing the following should work:

conn = cloudConnection(mgmtip, apiKey=apikey, securityKey=secretkey, logging=log, port=443, scheme='https')
lz = listZones.listZonesCmd()
conn.marvin_request(lz)

Is this a valid assumption? I can browse to https://mgmtip/client/ and log in to retrieve the keys used, but on running the code above I get:

requests.exceptions.ConnectionError: HTTPSConnectionPool(host='10.200.23.16', port=443): Max retries exceeded with url: /client/api?apiKey=JGvIQPeIVsbgEhVC3shZ51r9buYwClB4ToJZX9Cxs9e3NZbRoJLNyANnWEKgsmgt1uoF_eLdL31GHMwcss6Zyw&command=listZones&signature=KL93r9GYIr6%2FRcbNHuaOj3jUF6o%3D&response=json (Caused by <class 'socket.error'>: [Errno 111] Connection refused)

In the loglevel() method in CloudConnection.py, switch the logging to logging.DEBUG. That will spew out more verbose logging as to what's happening here. I've never tried it on an https-enabled cloudstack, so there might be a bug. Does cloudmonkey work for you on this endpoint? If yes, then I don't see why marvin shouldn't. Both use the same request mechanism. I am not sure where to look: at marvin, the HTTP request, or the setup of my env. Hints? thanks, Daan -- Prasanna., Powered by BigRock.com
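Prasanna's logging suggestion can also be achieved from the calling script without editing marvin itself. A hedged sketch of the generic Python knobs (marvin internals may differ, and on Python 2, which marvin used at the time, the module is `httplib` rather than `http.client`):

```python
import logging
import http.client

# Log everything, including the urllib3/requests connection chatter,
# at DEBUG level to stderr.
logging.basicConfig(level=logging.DEBUG)

# Ask the underlying HTTP layer to print raw request/response headers;
# requests sits on top of http.client, so this surfaces the wire traffic
# for the failing HTTPS call.
http.client.HTTPConnection.debuglevel = 1

logging.getLogger('marvin').debug('debug logging enabled')
```

With this in place, rerunning the `marvin_request(lz)` snippet should show whether the client ever reaches port 443 or fails earlier.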
RE: [DISCUSS] Components in JIRA and bug assignment
+1. I think we need another tool to do this. Maybe set up trac or RT. Sent from my Windows Phone From: Alena Prokharchyk alena.prokharc...@citrix.com Sent: 10/4/2013 10:12 PM To: dev@cloudstack.apache.org; Musayev, Ilya imusa...@webmd.net Subject: Re: [DISCUSS] Components in JIRA and bug assignment On 10/4/13 10:37 AM, Musayev, Ilya imusa...@webmd.net wrote: On Fri, Oct 04, 2013 at 05:11:32PM +, Musayev, Ilya wrote: Question to JIRA-experienced admins: can we preserve the assign to me option, and if unassigned go to the component maintainer? Absolutely. Initial assignment does not equal the actual assignee. Component-based assignment is just a way to skip the unassigned phase, but people can reassign to themselves or others. -chip Chip, thanks for the answer. So far, I've yet to see someone speaking negatively of this proposal. We do need better structure - that will also help us be productive. Please kindly respond with +1, 0 or -1. If -1, please explain why. Thanks, ilya +1 -Alena.
Re: ACS 4.2 - Error when trying to declare a LB rule in a vpc to a tier network with lb offering
Hello! Any ideas on this problem? Thanks for your help. Regards, Benoit. 2013/10/7 benoit lair kurushi4...@gmail.com Hi, I'm working with CS 4.2 and XenServer 6.2 on CentOS 6.3. I deployed a VPC with multiple tiers, each with a network offering with LB activated. When I navigate to the VPC summary page, I click on the Public IP addresses button on the VPC virtual router item. I click on acquire new IP (this one is 10.14.6.5), then click on it and go to the configuration tab. I click on load balancing and try to create a very simple LB rule: just a name, public port 80, private port 80, algorithm least connections, no stickiness, no health check, no autoscale; I just select 2 VMs already deployed and running. When I try to create my LB rule, I get this error message in the UI: Failed to create load balancer rule: lb_rule_mano_frontal1 When I look into my mgmt server log:

2013-10-07 11:54:46,591 DEBUG [cloud.network.NetworkManagerImpl] (catalina-exec-21:null) Associating ip Ip[10.14.6.5-1] to network Ntwk[204|Guest|13]
2013-10-07 11:54:46,598 DEBUG [cloud.network.NetworkManagerImpl] (catalina-exec-21:null) Successfully associated ip address 10.14.6.5 to network Ntwk[204|Guest|13]
2013-10-07 11:54:46,604 WARN [network.lb.LoadBalancingRulesManagerImpl] (catalina-exec-21:null) Failed to create load balancer due to com.cloud.exception.InvalidParameterValueException: Scheme Public is not supported by the network offering [Network Offering [13-Guest-DefaultIsolatedNetworkOfferingForVpcNetworksWithInternalLB]
at com.cloud.network.lb.LoadBalancingRulesManagerImpl.isLbServiceSupportedInNetwork(LoadBalancingRulesManagerImpl.java:2136)
at com.cloud.network.lb.LoadBalancingRulesManagerImpl.createPublicLoadBalancer(LoadBalancingRulesManagerImpl.java:1432)
at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
at
com.cloud.network.lb.LoadBalancingRulesManagerImpl.createPublicLoadBalancerRule(LoadBalancingRulesManagerImpl.java:1360)
at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
at org.apache.cloudstack.api.command.user.loadbalancer.CreateLoadBalancerRuleCmd.create(CreateLoadBalancerRuleCmd.java:282)
at com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:104)
at com.cloud.api.ApiServer.queueCommand(ApiServer.java:460)
at com.cloud.api.ApiServer.handleRequest(ApiServer.java:372)
at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:305)
at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)

2013-10-07 11:54:46,619 DEBUG [network.vpc.VpcManagerImpl] (catalina-exec-21:null) Releasing VPC ip address Ip[10.14.6.5-1] from vpc network id=204
2013-10-07 11:54:46,626 DEBUG [network.vpc.VpcManagerImpl] (catalina-exec-21:null) IP address Ip[10.14.6.5-1] is no longer associated with the network inside vpc id=1
2013-10-07 11:54:46,626 DEBUG [network.vpc.VpcManagerImpl] (catalina-exec-21:null) Successfully released VPC ip address Ip[10.14.6.5-1] back to VPC pool
2013-10-07 11:54:46,632 ERROR [cloud.api.ApiServer] (catalina-exec-21:null) unhandled exception executing api command: createLoadBalancerRule com.cloud.utils.exception.CloudRuntimeException: Failed to create load balancer rule: lb_rule_mano_frontal1 at
marvin create network offering incomplete?
Hi, I am building an integration test with marvin and the following data:

network_offering: {
    name: 'Test Network offering',
    displaytext: 'Test Network offering',
    guestiptype: 'Isolated',
    supportedservices: 'Connectivity',
    traffictype: 'GUEST',
    availability: 'Optional',
    specifyvlan: False,
    specifyipranges: False,
    serviceproviderlist: { Connectivity: NiciraNVP },
    conservemode: False,
    tags: [ nicira-based ]
},
network: {
    name: Test Network,
    displaytext: Test Network,
    tags: nicira-based
},

In the code I put:

self.network_offering = NetworkOffering.create(
    self.apiclient,
    self.testdata[network_offering]
)
# Enable Network offering
self.network_offering.update(self.apiclient, state='Enabled')
self.testdata[network][zoneid] = self.zone.id
self.testdata[network][networkoffering] = self.network_offering.id
self.network = Network.create(
    self.apiclient,
    self.testdata[network]
)

The network doesn't get created: 431, errorText: More than one physical networks exist in zone id=1 and no tags are specified in order to make a choice. When inspecting, indeed the tags are not set on the offering and neither is conservemode. Am I hunting a bug or did I misconfigure my test? Daan
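One thing worth ruling out (an assumption on my part, not a confirmed diagnosis): `tags` is given as a Python list in the data above, while the tags parameter on the createNetworkOffering call is ultimately sent as a single value, so a comma-separated string may be what actually reaches the server. A small normalization sketch:

```python
def normalize_tags(tags):
    """Accept a list/tuple of tags or an already comma-separated string
    and return the single string form an API parameter expects."""
    if isinstance(tags, (list, tuple)):
        return ','.join(tags)
    return tags

# Applied to the offering data from the test above:
network_offering = {
    'name': 'Test Network offering',
    'guestiptype': 'Isolated',
    'specifyvlan': False,
    'conservemode': False,
    'tags': normalize_tags(['nicira-based']),
}
```

If the offering still comes back without tags after this change, the list-vs-string shape was not the cause and the bug hunt moves server-side.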
Re: [MERGE] marvin-refactor to master
Edison - thanks for the review! I've answered inline. (I've brought the technical review to the right thread from the one about marvin's repo separation)

Few questions: 1. About the more object-oriented CloudStack API python binding: Is the proposed api good enough?

As long as the cloudstack API retains its compatibility as it does now, by not altering required arguments, we are good to go. The current implementation of VirtualMachine is bloated and does too many things, like SSH connections, NAT creation, security group creation etc. The new method will provide such special cases as factory hierarchies instead. So you'll have the regular VirtualMachine, VpcVirtualMachine, VirtualMachineWithNAT, VirtualMachineWithIngress, etc.

For example, the current hand-written create virtual machine looks like: class VirtualMachine(object): @classmethod def create(cls, apiclient, services, templateid=None, accountid=None, domainid=None, zoneid=None, networkids=None, serviceofferingid=None, securitygroupids=None, projectid=None, startvm=None, diskofferingid=None, affinitygroupnames=None, group=None, hostid=None, keypair=None, mode='basic', method='GET'): the proposed api may look like: class VirtualMachine(object): def create(self, apiclient, accountId, templateId, **kwargs) The proposed api will look better than previous one, and it's automatically generated, so easy to maintain. But as a consumer of the api, how do people know what kind of parameters should be passed in? Will you have an online document for your api? Or you assume people will look at the api docs generated by CloudStack? Or why not make the api itself as self-contained? For example, add docs before create method:

All **kwargs will be spelt out as docstrings in the entity's methods. This is something I haven't got to yet. It's in the TODO list doc on the branch however. I recognize the difficulty in understanding kwargs for someone looking at the API. I will fix before merge.
My concern however is that factories are appropriately documented, since they are user-written. Those will need to be caught via review.

2. Regarding the data factories: from the proposed factories, in each test case, does the test writer still need to write the code to get data, such as code to get an account during setupclass?

No, this is not required anymore. All data is represented as a factory. So to get account data you simply import the necessary factory. You don't have to imagine the structure of this data and json anymore.

from marvin.factory.data import UserAccount
...
def setUp():
    account = UserAccount(apiclient)

So those crufty json headers should altogether disappear.

With the data factories, the code will look like the following? class TestFoo: def setupClass(): account = UserAccount(apiclient) vm = UserVM(apiclient) And if I want to customize the default data factories, I should be able to use something like UserAccount(apiclient, username='myfoo')?

Yes, this will create a new user account with an overridden username. You may override any attribute of the data this way. This, however, doesn't check for duplicates; if a username 'myfoo' already exists, that account creation will fail. If you use the factory, since it generates a random sequence, you won't have the problem of collisions.

And the data factories should be able to be customized based on the test environment, right? For example, the current iso test cases are hardcoded to test against http://people.apache.org/~tsp/dummy.iso, but it won't work for devcloud, or in an internal network. The ISO data factory should be able to return a url based on different test environments, thus iso test cases can be reused.

Yes, we'll have to create a LocalIsoFactory which represents an ISO available on the internal network. It is customizable. Maybe we can represent it to look for a file within devcloud itself?
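The environment-dependent ISO factory discussed here could be sketched like this. The class names and the internal URL are hypothetical; only the dummy.iso URL comes from the thread.

```python
class IsoFactory(object):
    """Default ISO data: the public dummy ISO used by the current tests."""
    url = 'http://people.apache.org/~tsp/dummy.iso'

class LocalIsoFactory(IsoFactory):
    """Override for environments without internet access (e.g. devcloud);
    the host and path here are assumed examples, not real endpoints."""
    url = 'http://192.168.56.10/isos/dummy.iso'

def iso_url(environment):
    """Pick the ISO source for the given test environment name, so the
    same ISO test cases can be reused unchanged."""
    return (LocalIsoFactory if environment == 'devcloud' else IsoFactory).url
```

A test then calls `iso_url(env_name)` instead of hardcoding the public URL, and each deployment supplies its own factory override.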
Thanks, On Wed, Oct 02, 2013 at 10:12:40PM +0530, Prasanna Santhanam wrote: Once upon a time [1] I had propagated the idea of refactoring marvin to make test case writing simpler. At the time, however, there weren't enough people writing tests using marvin. Now, as focus on testing has become much more important for the stability of our releases, I would like to bring back the discussion and review the refactoring of marvin which I've been doing in the marvin_refactor branch. The key goal of this refactor was to simplify test case writing. In doing so I've transformed the library from its brittle hand-written nature to a completely auto-generated set of libraries. In that sense, marvin is much closer to cloudmonkey now. The two important changes in this refactor are: 1. data represented in an object-oriented fashion presented as factories 2. test case writing using entities and their operations rather than a sequence of disconnected
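As a toy illustration of the entity-and-operations style the refactor aims for (a hypothetical class, not the generated marvin code):

```python
class VirtualMachine(object):
    """Toy entity: operations are methods on the object rather than a
    series of standalone API command objects assembled by the test."""
    def __init__(self, apiclient):
        self.apiclient = apiclient
        self.state = "Created"
    def deploy(self):
        self.state = "Running"  # placeholder for the deployVirtualMachine call
        return self
    def stop(self):
        self.state = "Stopped"  # placeholder for the stopVirtualMachine call

vm = VirtualMachine(apiclient=None).deploy()
vm.stop()
```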
Re: Review Request 14325: Contrail network virtualization plugin.
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14325/#review26771 --- Ship it! This has been pushed to the contrail branch. IP clearance process complete. - Chip Childers On Sept. 24, 2013, 11:38 p.m., Pedro Marques wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14325/ --- (Updated Sept. 24, 2013, 11:38 p.m.) Review request for cloudstack. Repository: cloudstack-git Description --- Plugin for contrail virtual network controller. https://cwiki.apache.org/confluence/display/CLOUDSTACK/Contrail+network+plugin Diffs - api/src/com/cloud/network/Network.java 49f380b client/pom.xml 119c96e client/tomcatconf/applicationContext.xml.in 9b6636a client/tomcatconf/commands.properties.in 58c770d client/tomcatconf/componentContext.xml.in 315c95b plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java 6b81c25 plugins/network-elements/juniper-contrail/pom.xml PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/api/command/CreateServiceInstanceCmd.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/api/response/ServiceInstanceResponse.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailElement.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailElementImpl.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailGuru.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailManager.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailManagerImpl.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/DBSyncGeneric.java PRE-CREATION 
plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/EventUtils.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ManagementNetworkGuru.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ModelDatabase.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServerDBSync.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServerDBSyncImpl.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServerEventHandler.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServerEventHandlerImpl.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServiceManager.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServiceManagerImpl.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServiceVirtualMachine.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/FloatingIpModel.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/FloatingIpPoolModel.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/InstanceIpModel.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/ModelController.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/ModelObject.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/ModelObjectBase.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/ServiceInstanceModel.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/VMInterfaceModel.java PRE-CREATION 
plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/VirtualMachineModel.java PRE-CREATION plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/VirtualNetworkModel.java PRE-CREATION plugins/network-elements/juniper-contrail/test/net/juniper/contrail/management/MockAccountManager.java PRE-CREATION plugins/network-elements/juniper-contrail/test/net/juniper/contrail/management/NetworkProviderTest.java PRE-CREATION
Re: Review Request 14076: New test added for template copy feature
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14076/ --- (Updated Oct. 8, 2013, 1:45 p.m.) Review request for cloudstack, Girish Shilamkar, Harikrishna Patnala, and Prasanna Santhanam. Changes --- Updated as per review. Repository: cloudstack-git Description --- Added one missing test for test_templates.py from old QA repo to Cloudstack def test_02_copy_template Diffs (updated) - test/integration/component/test_templates.py ea4b277 tools/marvin/marvin/integration/lib/base.py 0d52224 Diff: https://reviews.apache.org/r/14076/diff/ Testing (updated) --- Run log: == client.log == 2013-10-08 00:23:44,306 - DEBUG - test_02_copy_template (test_templates_fixed.TestTemplates) - Copying template from zone: 79ed0a9e-469c-460d-b674-0a2 to 10c91f7e-06ed-47e4-a1fd-26d9102e9d6f == result.log == skipped 'Skip' test_01_create_template_volume (test_templates_fixed.TestTemplates) Test Create template from volume ... skipped 'Skip' test_02_copy_template (test_templates_fixed.TestTemplates) Test for copy template from one zone to another ... ok test_03_delete_template (test_templates_fixed.TestTemplates) Test Delete template ... skipped 'Skip' test_04_template_from_snapshot (test_templates_fixed.TestTemplates) Create Template from snapshot ... skipped 'Skip' -- Ran 5 tests in 567.935s OK (skipped=4) Thanks, Ashutosh Kelkar
Re: Review Request 14076: New test added for template copy feature
On Oct. 6, 2013, 7:47 p.m., Nitin Mehta wrote: test/integration/component/test_templates.py, line 441 https://reviews.apache.org/r/14076/diff/1/?file=350749#file350749line441 Is this a blocking call? Have you tested this? Does it return success? Added a method in base.py so the call is sync now. I have run the test case and tested it. On Oct. 6, 2013, 7:47 p.m., Nitin Mehta wrote: test/integration/component/test_templates.py, line 476 https://reviews.apache.org/r/14076/diff/1/?file=350749#file350749line476 timeout generally is not a count... what is the value you will typically keep? Using 10 as its value. The loop will check 10 times whether the template is copied or not, each time waiting for a fixed interval of time. On Oct. 6, 2013, 7:47 p.m., Nitin Mehta wrote: test/integration/component/test_templates.py, line 478 https://reviews.apache.org/r/14076/diff/1/?file=350749#file350749line478 how long will the sleep be for? The sleep timer is 30 secs. There is no global setting parameter for a timeout value related to this; 30 secs should be long enough to wait. - Ashutosh --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14076/#review26713 --- On Oct. 8, 2013, 1:45 p.m., Ashutosh Kelkar wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14076/ --- (Updated Oct. 8, 2013, 1:45 p.m.) Review request for cloudstack, Girish Shilamkar, Harikrishna Patnala, and Prasanna Santhanam. 
Repository: cloudstack-git Description --- Added one missing test for test_templates.py from old QA repo to Cloudstack def test_02_copy_template Diffs - test/integration/component/test_templates.py ea4b277 tools/marvin/marvin/integration/lib/base.py 0d52224 Diff: https://reviews.apache.org/r/14076/diff/ Testing --- Run log: == client.log == 2013-10-08 00:23:44,306 - DEBUG - test_02_copy_template (test_templates_fixed.TestTemplates) - Copying template from zone: 79ed0a9e-469c-460d-b674-0a2 to 10c91f7e-06ed-47e4-a1fd-26d9102e9d6f == result.log == skipped 'Skip' test_01_create_template_volume (test_templates_fixed.TestTemplates) Test Create template from volume ... skipped 'Skip' test_02_copy_template (test_templates_fixed.TestTemplates) Test for copy template from one zone to another ... ok test_03_delete_template (test_templates_fixed.TestTemplates) Test Delete template ... skipped 'Skip' test_04_template_from_snapshot (test_templates_fixed.TestTemplates) Create Template from snapshot ... skipped 'Skip' -- Ran 5 tests in 567.935s OK (skipped=4) Thanks, Ashutosh Kelkar
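The count-based timeout Ashutosh describes in his replies above amounts to the following polling pattern (illustrative names, not the exact code added to base.py): retry up to `timeout` times, sleeping a fixed interval between checks, for a worst case of timeout * interval seconds (300 s with the values discussed).

```python
import time

def wait_until(condition, timeout=10, interval=30):
    """Poll `condition` up to `timeout` times, sleeping `interval`
    seconds between attempts. Returns True as soon as the condition
    holds, False once all attempts are exhausted."""
    for _ in range(timeout):
        if condition():
            return True
        time.sleep(interval)
    return False
```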
Re: [DISCUSS] Components in JIRA and bug assignment
I don't think we need to add another tool for this. I'm sure Jira is fully capable of doing what's needed, it's just finding the right configuration and the person who knows how to get things setup to work. Travis On Oct 8, 2013, at 9:15 AM, Frankie Onuonga fran...@angani.co wrote: +1 I think we need another tool to do this . Maybe set up trac or RT. Sent from my Windows Phone From: Alena Prokharchykmailto:alena.prokharc...@citrix.com Sent: 10/4/2013 10:12 PM To: dev@cloudstack.apache.orgmailto:dev@cloudstack.apache.org; Musayev, Ilyamailto:imusa...@webmd.net Subject: Re: [DISCUSS] Components in JIRA and bug assignment On 10/4/13 10:37 AM, Musayev, Ilya imusa...@webmd.net wrote: On Fri, Oct 04, 2013 at 05:11:32PM +, Musayev, Ilya wrote: Question to JIRA experienced admins, we can preserve assign to me option, and if unassigned goto component maintainer? Absolutely. Initial assignment does not equal the actual assignee. Component-based assignment is just a way to skip the unassigned phase, but people can reassign to themselves or others. -chip Chip, thanks for the answer. So far, I've yet to see someone speaking negatively on this proposal. We do need better structure - that will also help us being productive. Please kindly respond with +1, 0 or -1 If -1, please explain why. Thanks ilya +1 -Alena.
Re: [DISCUSS] Breaking out Marvin from CloudStack
All - thanks for your detailed thoughts on this so far. It seems like this should've come with a fair bit of notice, so I will hold on this separation for now. Maybe in a couple more releases, once we see the level of participation in QA, and if it still feels like a hindrance, we'll discuss separating it later. For now I think it should suffice to reorganize the current folder structure of marvin so that all its bits can be found in a single place. This will be proposed separately. I am closing this DISCUSS thread as of now by not splitting out marvin. I will however address some additional points which were raised so far, inline. On Fri, Oct 04, 2013 at 04:52:45PM -0400, Chip Childers wrote: Fair warning - some of this is a straw man argument to explore the situation, and a little bit of ranting at the end. On Fri, Oct 04, 2013 at 05:57:58PM +0530, Prasanna Santhanam wrote: I'll summarize and address the concerns raised so far. Marvin has been in this repo for a long time for us to start writing tests. The only tests I've seen coming are from a specific set of people focused on QA efforts. I agree - and that's a problem. New features should *ALL* have tests before they merge into master. I think that assuming that the only test writers are the group of folks who write the tests today is actually a larger problem. It's a bit more prevalent than test cases alone. Docs, for example, are written by doc writers, and there's a whole bit of process of doc review: someone writes the stuff in an email and then someone else goes and corrects it. If you have repo access and notice a problem, please help fix it rather than create additional process. I'm not for such segmentation, but there are roles in a corporate setup that are probably causing this presumption that marvin tests are written by QA alone. I want to reduce the impediment for people who are writing tests *today*. 
Those looking to get started in the near future won't have any new learning to do; it's just that their code goes in an alternate repo that is pointed at the right infrastructure. Automated testing also very often works in a push-to-production style. Testers need to run their tests on deployed environment(s) quickly to be able to ensure each test is valid and passes. By making them go through reviewboard each time for each test we massively slow down the process (tons of fixes to tests are on rb today, not just new tests). We don't know if they run until they run on the environment. I want to be clear about this part - a different repo doesn't change the need for someone to be a committer to commit. Yes - I thought this would be a problem, as David also mentioned earlier. There are no ACLs within a project for controlling repo access. Had that been possible, I would've looked at providing access faster to those who are contributing. The reason for tests and framework to go together is simple. If I go look at the jclouds repository today I find tests for the rackspace cloud, openstack cloud, cloudstack cloud, and euca clouds in the jclouds repository, not in the respective provider/project repositories. A newcomer to the marvin repository will be someone interested in writing tests, and he will thus also be able to find tests in the marvin repository. This also allows for more heterogeneous testing of cloudstack. No one needs to be tied down to a framework / tool to write integration tests. If python is not your forte, use Chip's ruby client, or perhaps in the near future Chiradeep's stackmate to write your test, or even jclouds. But that's actually true today, right? I mean if I wanted to write an integration test using some other method, I'd do that... but would it be useful for others? Probably not! That's because the way that we do testing of this type is via Marvin. 
The Citrix infra wouldn't be set up for whatever other framework I used, and the community as a whole would get less benefit than if I was consistent. Somehow I sense that as a problem with our test writing. The tests are written to assume a certain infrastructure. But a lot of the API is also admin-only, which requires certain infra to be in place. The other tools (jclouds live tests, for example) cannot assume admin access to the cloud because they are only testing the user api. Tests written by other tools would only be provided an endpoint and API/secret keys and would test whatever they can. Specific tools required for those tests can be provisioned automatically on the machine that runs the tests. Now, on the question of supporting older versions of marvin against newer versions of cloudstack: marvin now fully auto-generates itself (see the design in the proposal) based on the endpoint, so you have the marvin version that works with your endpoint only. As for being backwards compatible (also addressed in the design doc) - no old tests are broken; they will still run perfectly fine.
Re: marvin create network offering incomplete?
On Tue, Oct 08, 2013 at 03:27:10PM +0200, Daan Hoogland wrote: H, I am building an integration test with marvin and the following data: network_offering: { name: 'Test Network offering', displaytext: 'Test Network offering', guestiptype: 'Isolated', supportedservices: 'Connectivity', traffictype: 'GUEST', availability: 'Optional', specifyvlan: False, specifyipranges: False, serviceproviderlist: { Connectivity: 'NiciraNVP' }, conservemode: False, tags: [ 'nicira-based' ] }, This looks fine. network: { name: 'Test Network', displaytext: 'Test Network', tags: 'nicira-based' }, in the code I put: self.network_offering = NetworkOffering.create( self.apiclient, self.testdata['network_offering'] ) # Enable Network offering self.network_offering.update(self.apiclient, state='Enabled') self.testdata['network']['zoneid'] = self.zone.id self.testdata['network']['networkoffering'] = self.network_offering.id self.network = Network.create( self.apiclient, self.testdata['network'] ) The network doesn't get created: 431, errorText: More than one physical networks exist in zone id=1 and no tags are specified in order to make a choice When inspecting, indeed the tags are not set on the offering and neither is conservemode. Is that a resource-tag for the network offering? What is the API request that is going to the management server from marvin? How does it compare against what you sent through via the UI? Am I hunting a bug or did I misconfigure my test? I'm not sure. But it could just be that the marvin-created network offering is not the same as the network offering created via the UI. Daan -- Prasanna., Powered by BigRock.com
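One detail that may be worth checking here (an assumption on my part, not a confirmed diagnosis): the createNetworkOffering API takes tags as a comma-separated string, while the offering test data above passes a Python list, so a naive serializer could send a different query parameter than the UI does:

```python
# The API expects tags as a comma-separated string; a Python list may
# serialize differently depending on how the client builds the request.
tags_list = ["nicira-based"]
tags_param = ",".join(tags_list)  # what the UI would send
naive_param = str(tags_list)      # what a naive serializer might send instead
```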
Re: ACS 4.2 - Error when trying to declare a LB rule in a vpc to a tier network with lb offering
Hello! I don't understand what is going wrong: when I look into the official docs, I see that vpc is still declared to be able to do lb only on one tier?? However, https://issues.apache.org/jira/browse/CLOUDSTACK-2367 says that this feature is implemented. I have already configured several tiers in a vpc with internal lb service for each, and I deployed several vms into 2 different tiers. But when I try to create a lb rule choosing two vms in a tier, I get the error message I mentioned two messages ago. If somebody has an idea, I would really appreciate it. Thanks. Benoit. 2013/10/8 benoit lair kurushi4...@gmail.com Hello! Any ideas for this problem? Thanks for your help. Regards, Benoit. 2013/10/7 benoit lair kurushi4...@gmail.com Hi, I'm working with CS 4.2, Xenserver 6.2 on a centos 6.3. Deployed a VPC, multiple tiers, each with a network offering with LB activated. When I navigate to the vpc summary page, I click on the button Public ip addresses on the Vpc virtual router item, I click on acquire new ip, this one is 10.14.6.5, I click on this one and go to the configuration tab. 
I click on load balacing, try to create a lb rule very simple : just a name, port public 80, private port 80, algorithm least connections, no stickiness, no health check, no autoscale, just select 2 vms already deployed and running : I try to create my lb rule, i got this error message in the UI : Failed to create load balancer rule: lb_rule_mano_frontal1 When i look into my mgmt server log : 2013-10-07 11:54:46,591 DEBUG [cloud.network.NetworkManagerImpl] (catalina-exec-21:null) Associating ip Ip[10.14.6.5-1] to network Ntwk[204|Guest|13] 2013-10-07 11:54:46,598 DEBUG [cloud.network.NetworkManagerImpl] (catalina-exec-21:null) Successfully associated ip address 10.14.6.5 to network Ntwk[204|Guest|13] 2013-10-07 11:54:46,604 WARN [network.lb.LoadBalancingRulesManagerImpl] (catalina-exec-21:null) Failed to create load balancer due to com.cloud.exception.InvalidParameterValueException: Scheme Public is not supported by the network offering [Network Offering [13-Guest-DefaultIsolatedNetworkOfferingForVpcNetworksWithInternalLB] at com.cloud.network.lb.LoadBalancingRulesManagerImpl.isLbServiceSupportedInNetwork(LoadBalancingRulesManagerImpl.java:2136) at com.cloud.network.lb.LoadBalancingRulesManagerImpl.createPublicLoadBalancer(LoadBalancingRulesManagerImpl.java:1432) at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125) at com.cloud.network.lb.LoadBalancingRulesManagerImpl.createPublicLoadBalancerRule(LoadBalancingRulesManagerImpl.java:1360) at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125) at org.apache.cloudstack.api.command.user.loadbalancer.CreateLoadBalancerRuleCmd.create(CreateLoadBalancerRuleCmd.java:282) at com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:104) at com.cloud.api.ApiServer.queueCommand(ApiServer.java:460) at 
com.cloud.api.ApiServer.handleRequest(ApiServer.java:372) at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:305) at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889) at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:724) 2013-10-07 11:54:46,619 DEBUG [network.vpc.VpcManagerImpl] (catalina-exec-21:null) Releasing VPC ip address Ip[10.14.6.5-1] from vpc network id=204 2013-10-07
Re: Review Request 13841: Missing tests from QA repo to ASF - 3 tests from test_vmware_drs.py
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/13841/ --- (Updated Oct. 8, 2013, 2:14 p.m.) Review request for cloudstack, Girish Shilamkar, Harikrishna Patnala, and Prasanna Santhanam. Repository: cloudstack-git Description --- New file added: test_vmware_drs.py Tests added: def test_vm_creation_in_fully_automated_mode(self): def test_vmware_anti_affinity(self): def test_vmware_affinity(self): The tests need manual setup and have therefore been marked as WIP and skipped for the moment. Diffs (updated) - test/integration/component/test_vmware_drs.py PRE-CREATION Diff: https://reviews.apache.org/r/13841/diff/ Testing (updated) --- Tested locally. One test case is working, two are skipped (one due to the unavailability of a particular setup and the other due to a feature not yet available in cloudstack). Run Log: == client.log == 2013-10-08 02:27:43,371 - DEBUG - test_vm_creation_in_fully_automated_mode (test_vmware_drs.TestVMPlacement) - max memory: 14911 == result.log == test_vmware_affinity (test_vmware_drs.TestAffinityRules) Test Set up affinity rules ... skipped 'Skip' test_vmware_anti_affinity (test_vmware_drs.TestAntiAffinityRules) Test Set up anti-affinity rules ... skipped 'Skip' test_vm_creation_in_fully_automated_mode (test_vmware_drs.TestVMPlacement) Test VM Creation in automation mode = Fully automated ... == client.log == 2013-10-08 02:28:53,855 - DEBUG - test_vm_creation_in_fully_automated_mode (test_vmware_drs.TestVMPlacement) - Deploying VM in account: test-R7LY31 == result.log == ok -- Ran 3 tests in 241.612s OK (skipped=2) Thanks, Ashutosh Kelkar
Contrail plugin
As stated, I've imported the contrail plugin donation into the contrail branch. I've taken the time to add the ASF license header to all of the new files in that branch. I think we have to complete the following in order to merge into master. 1) I'd like to see the package structure changed to match org.apache.cloudstack, instead of the Juniper namespace. We only have com.cloud namespaces for legacy reasons, and are trying to consolidate into the apache ns. 2) Folks with past experience with network plugins need to review the plugin's code and provide comments or +1s for a merge. Chiradeep and Hugo, you've been randomly selected to help on this... ;-) Pedro, I'll assume that you will be happy to provide patches via reviewboard against this branch if changes are requested (including the package structure noted above). 3) I'd love if we could get some consensus on what additional tests and / or changes to the test approach are needed. Prasanna - as with Hugo and Chiradeep, you've been randomly selected to at least provide some input here. Anything I'm missing? -chip
Re: marvin over https
Ok, this is a bug. requests lib is verifying SSL by default while cloudmonkey is probably ignoring SSL. There are two options 1) Fix marvin to accept SSL while detecting your default certs in /etc/ssl/certs? Or use an env variable 2) Ignore SSL auth from marvin. Can you please file a bug report? It should be a simple fix, so you can run with it or I'll get to it tomorrow. Ref: http://www.python-requests.org/en/latest/user/advanced/#ssl-cert-verification On Tue, Oct 08, 2013 at 04:14:46PM +0200, Daan Hoogland wrote: H Prasanna, $ ./zoneCommand.py Traceback (most recent call last): File ./zoneCommand.py, line 91, in module print zones: + repr(blub.listZones(conn)) File ./zoneCommand.py, line 42, in listZones resp = conn.marvin_request(lz) File /usr/lib/python2.7/site-packages/marvin/cloudstackConnection.py, line 218, in marvin_request cmdname, self.auth, payload=payload, method=method) File /usr/lib/python2.7/site-packages/marvin/cloudstackConnection.py, line 153, in request raise c requests.exceptions.SSLError: [Errno 1] _ssl.c:508: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed This is using the keys that are also used by cloudmonkey any hint? Daan On Tue, Oct 8, 2013 at 2:31 PM, Daan Hoogland daan.hoogl...@gmail.comwrote: H Prasanne, I didn't get around this bit a few days. Cloudmonkey works throught the same connection. I will find some time the coming days to test this with debug enabled marvin. regards, Daan On Thu, Oct 3, 2013 at 6:22 AM, Prasanna Santhanam t...@apache.org wrote: On Thu, Sep 26, 2013 at 04:21:38PM +0200, Daan Hoogland wrote: H, I have some trouble getting marvin to connect to cloudstack over https. I am supposing the following should work conn = cloudConnection(mgmtip, apiKey=apikey, securityKey=secretkey, logging=log, port=443, scheme=https) lz = listZones.listZonesCmd() conn.marvin_request(lz) is this a valid assumption? 
I can browse to the https://mgmtip/client/ and login to retrieve the keys used, but on running the code above i get requests.exceptions.ConnectionError: HTTPSConnectionPool(host='10.200.23.16', port=443): Max retries exceeded with url: /client/api?apiKey=JGvIQPeIVsbgEhVC3shZ51r9buYwClB4ToJZX9Cxs9e3NZbRoJLNyANnWEKgsmgt1uoF_eLdL31GHMwcss6Zywcommand=listZonessignature=KL93r9GYIr6%2FRcbNHuaOj3jUF6o%3Dresponse=json (Caused by class 'socket.error': [Errno 111] Connection refused) In the loglevel() method in CloudConnection.py, switch the logging to logging.DEBUG. That will spew out more verbose logging as to what's happening here. I've never tried it on an https enabled cloudstack so there might be a bug. Does cloudmonkey work for you on this endpoint? If yes, then I don't see why marvin shouldn't. Both use the same request mechanism. I am not sure where to look. at marvin, httprequest or the setup of my env. Hints? thanks, Daan -- Prasanna., Powered by BigRock.com -- Prasanna., Powered by BigRock.com
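The two options Prasanna lists above can be sketched with the stdlib ssl module (requests, which marvin uses per the traceback, exposes the same choice through its verify keyword argument):

```python
import ssl

# Option 1: verify the server against the platform's default CA certificates
# (an env variable or a bundle under /etc/ssl/certs could supply custom certs).
verified_ctx = ssl.create_default_context()

# Option 2: skip verification entirely, mirroring what cloudmonkey
# effectively does today.
unverified_ctx = ssl.create_default_context()
unverified_ctx.check_hostname = False   # must be disabled before CERT_NONE
unverified_ctx.verify_mode = ssl.CERT_NONE
```

With requests, option 1 corresponds to verify="/path/to/ca-bundle" and option 2 to verify=False on the request or session.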
Re: Contrail plugin
On Tue, Oct 08, 2013 at 10:23:32AM -0400, Chip Childers wrote: As stated, I've imported the contrail plugin donation into the contrail branch. I've taken the time to add the ASF license header to all of the new files in that branch. I think we have to complete the following in order to merge into master. 1) I'd like to see the package structure changed to match org.apache.cloudstack, instead of the Juniper namespace. We only have com.cloud namespaces for legacy reasons, and are trying to consolidate into the apache ns. 2) Folks with past experience with network plugins need to review the plugin's code and provide comments or +1s for a merge. Chiradeep and Hugo, you've been randomly selected to help on this... ;-) Pedro, I'll assume that you will be happy to provide patches via reviewboard against this branch if changes are requested (including the package structure noted above). 3) I'd love if we could get some consensus on what additional tests and / or changes to the test approach are needed. Prasanna - as with Hugo and Chiradeep, you've been randomly selected to at least provide some input here. I saw the thread earlier about a mysql db generated for performing an integration test. If someone can point me to the spec/docs/readme on how to run these presumably without the contrail device I'm happy to take a look. Anything I'm missing? -chip -- Prasanna., Powered by BigRock.com
Re: Contrail plugin
On Tue, Oct 08, 2013 at 07:59:24PM +0530, Prasanna Santhanam wrote: On Tue, Oct 08, 2013 at 10:23:32AM -0400, Chip Childers wrote: 3) I'd love if we could get some consensus on what additional tests and / or changes to the test approach are needed. Prasanna - as with Hugo and Chiradeep, you've been randomly selected to at least provide some input here. I saw the thread earlier about a mysql db generated for performing an integration test. If someone can point me to the spec/docs/readme on how to run these presumably without the contrail device I'm happy to take a look. Pedro, perhaps you can provide some guidance here. Prasanna - it appears that most of the stuff is in plugins/network-elements/juniper-contrail/test/ There are mysql start and stop .sh scripts, and unit tests in there as well. -chip
Re: ACS 4.2 - Error when trying to declare a LB rule in a vpc to a tier network with lb offering
On 08/10/13 7:41 PM, benoit lair kurushi4...@gmail.com wrote: Hello! I don't understand what is going wrong: when I look into the official docs, I see that vpc is still declared to be able to do lb only on one tier?? However, https://issues.apache.org/jira/browse/CLOUDSTACK-2367 says that this feature is implemented. Both external and internal LB are supported. Please see [1]. The two functionalities are mutually exclusive within a tier. From the exception it appears that you are trying to do external LB on a tier created with the 'DefaultIsolatedNetworkOfferingForVpcNetworksWithInternalLB' offering, which does not support it. Try creating a tier with a network offering with lb type 'public lb'. [1] https://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Installation_Guide/configure-vpc.html#add-loadbalancer-rule-vpc I have already configured several tiers in a vpc with internal lb service for each, and I deployed several vms into 2 different tiers. But when I try to create a lb rule choosing two vms in a tier, I get the error message I mentioned two messages ago. If somebody has an idea, I would really appreciate it. Thanks. Benoit. 2013/10/8 benoit lair kurushi4...@gmail.com Hello! Any ideas for this problem? Thanks for your help. Regards, Benoit. 2013/10/7 benoit lair kurushi4...@gmail.com Hi, I'm working with CS 4.2, Xenserver 6.2 on a centos 6.3. Deployed a VPC, multiple tiers, each with a network offering with LB activated. When I navigate to the vpc summary page, I click on the button Public ip addresses on the Vpc virtual router item, I click on acquire new ip, this one is 10.14.6.5, I click on this one and go to the configuration tab. 
I click on load balancing and try to create a very simple lb rule: just a name, public port 80, private port 80, algorithm least connections, no stickiness, no health check, no autoscale; I just select 2 vms already deployed and running. When I try to create my lb rule, I get this error message in the UI: Failed to create load balancer rule: lb_rule_mano_frontal1. When I look into my mgmt server log:

2013-10-07 11:54:46,591 DEBUG [cloud.network.NetworkManagerImpl] (catalina-exec-21:null) Associating ip Ip[10.14.6.5-1] to network Ntwk[204|Guest|13]
2013-10-07 11:54:46,598 DEBUG [cloud.network.NetworkManagerImpl] (catalina-exec-21:null) Successfully associated ip address 10.14.6.5 to network Ntwk[204|Guest|13]
2013-10-07 11:54:46,604 WARN [network.lb.LoadBalancingRulesManagerImpl] (catalina-exec-21:null) Failed to create load balancer due to com.cloud.exception.InvalidParameterValueException: Scheme Public is not supported by the network offering [Network Offering [13-Guest-DefaultIsolatedNetworkOfferingForVpcNetworksWithInternalLB]
    at com.cloud.network.lb.LoadBalancingRulesManagerImpl.isLbServiceSupportedInNetwork(LoadBalancingRulesManagerImpl.java:2136)
    at com.cloud.network.lb.LoadBalancingRulesManagerImpl.createPublicLoadBalancer(LoadBalancingRulesManagerImpl.java:1432)
    at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
    at com.cloud.network.lb.LoadBalancingRulesManagerImpl.createPublicLoadBalancerRule(LoadBalancingRulesManagerImpl.java:1360)
    at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
    at org.apache.cloudstack.api.command.user.loadbalancer.CreateLoadBalancerRuleCmd.create(CreateLoadBalancerRuleCmd.java:282)
    at com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:104)
    at com.cloud.api.ApiServer.queueCommand(ApiServer.java:460)
    at
com.cloud.api.ApiServer.handleRequest(ApiServer.java:372)
    at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:305)
    at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
    at
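The failure in the trace comes down to a scheme check: an offering built for internal LB rejects a rule with the Public scheme. A minimal sketch of that check follows; the names `LbScheme`, `NetworkOfferingSketch`, and `checkScheme` are illustrative assumptions mirroring the log message, not the actual CloudStack source (the real logic lives in `LoadBalancingRulesManagerImpl.isLbServiceSupportedInNetwork`).

```java
// Hypothetical, simplified model of the check behind
// "Scheme Public is not supported by the network offering ...WithInternalLB".
enum LbScheme { PUBLIC, INTERNAL }

class NetworkOfferingSketch {
    final String name;
    final LbScheme supportedScheme;   // an offering supports one LB scheme, not both

    NetworkOfferingSketch(String name, LbScheme supportedScheme) {
        this.name = name;
        this.supportedScheme = supportedScheme;
    }
}

class LbSchemeCheck {
    // Throws when the requested scheme does not match what the offering supports,
    // which matches the InvalidParameterValueException in the log above.
    static void checkScheme(NetworkOfferingSketch offering, LbScheme requested) {
        if (offering.supportedScheme != requested) {
            throw new IllegalArgumentException("Scheme " + requested
                    + " is not supported by the network offering " + offering.name);
        }
    }
}
```

Under this model, creating the tier with an offering whose LB type is 'public lb' is what makes a Public-scheme rule pass the check, as suggested in the reply.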
Re: [DISCUSS] Return ssh publickeys in listSSHKeyPairs
On Tue, Oct 08, 2013 at 01:05:32PM +, Frankie Onuonga wrote: Hi guys, From my fundamentals of security, I do not think returning a public key is wrong. What is sensitive is the private key. As long as that is not exposed in any way, all should be well. +1 to Frankie's comment
Re: Wiki access
On Tue, Oct 08, 2013 at 06:14:39AM +, Rajani Karuturi wrote: May I get access to edit pages as well? username: rajanik Looks like this was already taken care of.
Re: Contrail plugin
On Oct 8, 2013, at 7:36 AM, Chip Childers wrote: On Tue, Oct 08, 2013 at 07:59:24PM +0530, Prasanna Santhanam wrote: On Tue, Oct 08, 2013 at 10:23:32AM -0400, Chip Childers wrote: 3) I'd love if we could get some consensus on what additional tests and / or changes to the test approach are needed. Prasanna - as with Hugo and Chiradeep, you've been randomly selected to at least provide some input here. I saw the thread earlier about a mysql db generated for performing an integration test. If someone can point me to the spec/docs/readme on how to run these presumably without the contrail device I'm happy to take a look. The integration tests are being automatically executed by maven (mvn -pl :cloud-plugin-network-contrail clean test). They spawn an instance of mysql on a dynamically allocated port in order to ensure that the database contents are always initialized with the same content and that this set of tests does not leave content in the database that could influence other tests... Pedro, perhaps you can provide some guidance here. Prasanna - it appears that most of the stuff is in plugins/network-elements/juniper-contrail/test/ correct. There are mysql start and stop .sh scripts, and unit tests in there as well. Yes. The shell scripts start and stop a mysql instance. The scripts are invoked from a JUnit static initializer (@BeforeClass). -chip
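The pattern Pedro describes — a JUnit static initializer that boots a throwaway MySQL instance on a free port before the tests and tears it down afterwards — can be sketched roughly like this. The class name `ContrailDbTestBase` and the commented script invocations are assumptions for illustration, not the plugin's actual code.

```java
import java.io.IOException;
import java.net.ServerSocket;

// Hypothetical sketch of a test harness that allocates a free port and would
// start/stop a private MySQL instance around the test run, as described above.
class ContrailDbTestBase {
    static int mysqlPort;

    // Ask the OS for a free port by binding port 0 and reading back what we got.
    static int allocateFreePort() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        }
    }

    // In the real plugin these would carry @BeforeClass / @AfterClass and run the
    // start/stop shell scripts under plugins/network-elements/juniper-contrail/test/.
    static void startDb() throws IOException {
        mysqlPort = allocateFreePort();
        // Script path below is an assumption:
        // new ProcessBuilder("sh", "mysql_db_start.sh", String.valueOf(mysqlPort))
        //         .inheritIO().start();
    }

    static void stopDb() {
        // new ProcessBuilder("sh", "mysql_db_stop.sh").inheritIO().start();
    }
}
```

Allocating the port dynamically is what lets several test runs (or other services) coexist on one build machine without colliding on MySQL's default port.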
Re: Latest Master DB issue
It is not a small issue. The is_default field was added to the table as a part of the 4.1 to 4.2 db upgrade. Looks like the code tries to retrieve the system user before the db upgrade is completed. DB upgrade is a major part of the system integrity check; no queries to the DB should be made before it's completed. Francois, did you start seeing this problem just recently? -Alena. On 10/8/13 8:04 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Hi, I compiled Master this morning, and there is a small DB issue. One field is missing in the account table (field default). CS will not start because of that.

2013-10-08 11:01:42,623 FATAL [o.a.c.c.CallContext] (Timer-2:null) Exiting the system because we're unable to register the system call context.
com.cloud.utils.exception.CloudRuntimeException: DB Exception on: com.mysql.jdbc.JDBC4PreparedStatement@4c1aa2e9: SELECT account.id, account.account_name, account.type, account.domain_id, account.state, account.removed, account.cleanup_needed, account.network_domain, account.uuid, account.default_zone_id, account.default FROM account WHERE account.id = 1 AND account.removed IS NULL
    at com.cloud.utils.db.GenericDaoBase.findById(GenericDaoBase.java:986)
    at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
    at com.cloud.utils.db.GenericDaoBase.lockRow(GenericDaoBase.java:963)
    at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
    at com.cloud.utils.db.GenericDaoBase.findById(GenericDaoBase.java:926)
    at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
    at com.cloud.dao.EntityManagerImpl.findById(EntityManagerImpl.java:45)
    at org.apache.cloudstack.context.CallContext.register(CallContext.java:166)
    at org.apache.cloudstack.context.CallContext.registerSystemCallContextOnceOnly(CallContext.java:141)
    at org.apache.cloudstack.context.CallContextListener.onEnterContext(CallContextListener.java:36)
    at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:83)
    at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
    at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
    at org.apache.cloudstack.managed.context.ManagedContextTimerTask.run(ManagedContextTimerTask.java:27)
    at java.util.TimerThread.mainLoop(Timer.java:534)
    at java.util.TimerThread.run(Timer.java:484)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'account.default' in 'field list'
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
    at com.mysql.jdbc.Util.getInstance(Util.java:386)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1053)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4074)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4006)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2468)
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2629)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2719)
    at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155)
    at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2318)
    at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
    at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
    at com.cloud.utils.db.GenericDaoBase.findById(GenericDaoBase.java:983)
    ... 27 more
-- Francois Gaudreault Architecte de Solution Cloud | Cloud Solutions Architect fgaudrea...@cloudops.com 514-629-6775 - - - CloudOps 420 rue Guy Montréal QC H3J 1S6 www.cloudops.com @CloudOps_
Re: Contrail plugin
Chip, On Oct 8, 2013, at 7:23 AM, Chip Childers wrote: As stated, I've imported the contrail plugin donation into the contrail branch. I've taken the time to add the ASF license header to all of the new files in that branch. I think we have to complete the following in order to merge into master. 1) I'd like to see the package structure changed to match org.apache.cloudstack, instead of the Juniper namespace. We only have com.cloud namespaces for legacy reasons, and are trying to consolidate into the apache ns. Will do. 2) Folks with past experience with network plugins need to review the plugin's code and provide comments or +1s for a merge. Chiradeep and Hugo, you've been randomly selected to help on this... ;-) Pedro, I'll assume that you will be happy to provide patches via reviewboard against this branch if changes are requested (including the package structure noted above). Yes. There is a team of us at Juniper that will be working on the contrail plugin. We will be more than happy to follow the structure that you recommend. 3) I'd love if we could get some consensus on what additional tests and / or changes to the test approach are needed. Prasanna - as with Hugo and Chiradeep, you've been randomly selected to at least provide some input here. Our plan at the moment is to: - Add unit tests to cover all the ObjectModel classes. - Create additional integration tests to cover the plugin integration with the CloudStack NetworkManager. At the moment we are struggling a bit with the changes from 4.1 (where we did most of the development) to 4.2 to 4.3... Pedro.
RE: Call for 4.3 and 4.2.1 Release Managers!
-Original Message- From: Chip Childers chip.child...@sungard.com Sent: Monday, October 07, 2013 8:22 AM To: dev@cloudstack.apache.org Subject: Re: Call for 4.3 and 4.2.1 Release Managers! I think you might be able to help though! We basically need to structure the work around various cat herding [1] activities (forgive the metaphor, but it's appropriate... ;-) ). I'm sure that there is something... perhaps helping keep on top of reviewboard submissions, to politely nudge relevant committers to review specific submissions. Whoever wants to take the lead for 4.3 should probably (IMO) start a discussion on (1) coordination of work and [Animesh] Since we are getting closer to the code freeze date and we cannot find overall release management, I can pick up 4.3, call out the roles, and ask for volunteers for them. (2) re-thinking the schedule to reset based on both historic performance and the reality of the calendar (we've really been working on 4.2 when we should have moved on to 4.3). [Animesh] Chip, my preference would be to stick to the timeline of 4.3, which will make it a smaller release but will give us the opportunity to clear up our technical debt. That should give us a view into where help is needed! -chip [1] http://www.urbandictionary.com/define.php?term=herding%20cats On Mon, Oct 7, 2013 at 11:17 AM, Frankie Onuonga fran...@angani.co wrote: I would volunteer, but I think I am still too junior. Kind regards, Frankie Onuonga Sent from my Windows Phone From: Chip Childers chip.child...@sungard.com Sent: 10/7/2013 5:34 PM To: dev@cloudstack.apache.org Subject: Re: Call for 4.3 and 4.2.1 Release Managers!
On Sat, Sep 21, 2013 at 03:55:27PM -0400, Chip Childers wrote: On Sep 20, 2013, at 1:27 PM, Animesh Chaturvedi animesh.chaturv...@citrix.com wrote: -Original Message- From: Daan Hoogland [mailto:daan.hoogl...@gmail.com] Sent: Friday, September 20, 2013 12:51 AM To: dev Subject: Re: Call for 4.3 and 4.2.1 Release Managers! Hi Animesh and the rest, I had some consults at home and at Schuberg Philis. The conclusion is that it is not wise to take up the task of release manager right now. I will be glad to take it up in some future iteration. Sorry to lay this burden back, Daan [Animesh] OK, anyone else want to step up to the plate? I'm willing to do 4.3 if nobody else can / wants to. I'm actually going to have to pass on this. Also, given the idea of breaking up the work, perhaps someone can volunteer the list of roles, and others can step up to take each of the partial RM roles for 4.3? -chip
Re: marvin create network offering incomplete?
I've used marvin without the testdata construct fine, never the UI. I'll look up the code once back at my desk, next afternoon. Thanks Prasanna (formerly known as Prussia, almost called Praag by the spellchecker) mobile bilingual spell checker used On 8 Oct. 2013 16:01, Prasanna Santhanam t...@apache.org wrote: On Tue, Oct 08, 2013 at 03:27:10PM +0200, Daan Hoogland wrote: Hi, I am building an integration test with marvin and the following data:

network_offering: {
    name: 'Test Network offering',
    displaytext: 'Test Network offering',
    guestiptype: 'Isolated',
    supportedservices: 'Connectivity',
    traffictype: 'GUEST',
    availability: 'Optional',
    specifyvlan: False,
    specifyipranges: False,
    serviceproviderlist: { Connectivity: NiciraNVP },
    conservemode: False,
    tags: [ nicira-based ]
},

This looks fine.

network: { name: Test Network, displaytext: Test Network, tags: nicira-based },

in the code I put:

self.network_offering = NetworkOffering.create(self.apiclient, self.testdata[network_offering])
# Enable Network offering
self.network_offering.update(self.apiclient, state='Enabled')
self.testdata[network][zoneid] = self.zone.id
self.testdata[network][networkoffering] = self.network_offering.id
self.network = Network.create(self.apiclient, self.testdata[network])

The network doesn't get created: 431, errorText: More than one physical networks exist in zone id=1 and no tags are specified in order to make a choice. When inspecting, indeed the tags are not set on the offering and neither is conservemode. Is that a resource-tag for the network offering? What is the API request that is going to the management server from marvin? How does it compare against what you sent through via the UI? Am I hunting a bug or did I misconfigure my test? I'm not sure. But it could just be that the marvin-created network offering is not the same as the network offering created via the UI. Daan -- Prasanna., Powered by BigRock.com
Re: Call for 4.3 and 4.2.1 Release Managers!
On Tue, Oct 08, 2013 at 04:49:24PM +, Animesh Chaturvedi wrote: Whomever wants to take the lead for 4.3 should probably (IMO), start a discussion on (1) coordination of work and [Animesh] Since we are getting closer to code freeze date and we cannot find overall release management I can pick up 4.3 and call out the roles and ask for volunteers for them. Sounds good. (2) re-thinking the schedule to reset based on both historic performance and reality of the calendar (we've really been working on 4.2 when we should have moved on to 4.3). [Animesh] Chip my preference would be to stick to the timeline of 4.3 which will make it a smaller release but will give us the opportunity to clear up our technical debt. +1 to being smaller and tech-debt focused. -chip
Re: Contrail plugin
On Tue, Oct 08, 2013 at 09:43:39AM -0700, Pedro Roque Marques wrote: Chip, On Oct 8, 2013, at 7:23 AM, Chip Childers wrote: As stated, I've imported the contrail plugin donation into the contrail branch. I've taken the time to add the ASF license header to all of the new files in that branch. I think we have to complete the following in order to merge into master. 1) I'd like to see the package structure changed to match org.apache.cloudstack, instead of the Juniper namespace. We only have com.cloud namespaces for legacy reasons, and are trying to consolidate into the apache ns. Will do. Fantastic, thanks! 2) Folks with past experience with network plugins need to review the plugin's code and provide comments or +1s for a merge. Chiradeep and Hugo, you've been randomly selected to help on this... ;-) Pedro, I'll assume that you will be happy to provide patches via reviewboard against this branch if changes are requested (including the package structure noted above). Yes. There is a team of us at Juniper that will be working on the contrail plugin. We will be more than happy to follow the structure that you recommend. Hopefully we can get others to provide feedback on the plugin code itself... I think that would be useful for all involved. 3) I'd love if we could get some consensus on what additional tests and / or changes to the test approach are needed. Prasanna - as with Hugo and Chiradeep, you've been randomly selected to at least provide some input here. Our plan at the moment is to: - Add unit tests to cover all the ObjectModel classes. - Create additional integration tests to cover the plugin integration with the CloudStack NetworkManager. At the moment we are struggling a bit with the changes from 4.1 (where we did most of the development) to 4.2 to 4.3... That sounds like a great plan. One request - please be sure that the individuals doing the work are submitting their patches to the project as individuals.
That'll eliminate any issues with going through the clearance process, and give them the rightful credit for their contributions. Pedro. Thanks Pedro! -chip
Re: Latest Master DB issue
It's a fresh master RPM install. Francois On 10/8/2013, 12:40 PM, Alena Prokharchyk wrote: It is not a small issue. is_default filed was added to the table as a part of the 41-42 db upgrade. Looks like the code tries to retrieve system user before the db upgrade is completed. DB upgrade is a major part of system integrity check; no queries to the DB should be made before its completed. Francois, did you start seeing this problem just recently? -Alena. On 10/8/13 8:04 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Hi, I compiled Master this morning, and there is a small DB issue. One field is missing in the account table (field default). CS will not start because of that. [snip] -- Francois Gaudreault Architecte de Solution Cloud | Cloud Solutions Architect fgaudrea...@cloudops.com 514-629-6775 - - - CloudOps 420 rue Guy Montréal QC H3J 1S6 www.cloudops.com @CloudOps_
Re: [MERGE] spring-modularization to master - Spring Modularization
From what I can gather, it seems that master currently fails the BVT (and when I say BVT, I mean that black box that apparently exists somewhere doing something, but I have no clue what it really means). So in turn my spring modularization branch will additionally fail BVT. Citrix internal QA ran some tests against my branch and they mostly passed, but some failed. It's quite difficult to sort through this all because tests are failing on master. So I don't know what to do at this point. At least my branch won't completely blow up everything. I just know the longer it takes to merge this, the more painful it will be. Honestly, this is all quite frustrating for myself, being new to contributing to ACS. I feel somewhat lost in the whole process of how to get features in. I'll refrain from venting my frustrations. Darren
Re: Contrail plugin
I'll take some time and review this code too. I already know there's going to be a conflict with the stuff I did in the spring modularization branch. Moving to full Spring, we have gotten rid of the custom ACS AOP for the mgmt server. This code relies on that framework, so it will have to move to being a standard org.aopalliance.intercept.MethodInterceptor. I don't particularly care for the fact that functionally it is keyed off of ActionEvents (or AOP in general). I'll need to review the code further to provide more useful feedback, but just giving the heads up that the AOP stuff will have to change a bit. Darren
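For readers unfamiliar with the target shape Darren mentions, here is a rough sketch of an ActionEvent-style wrapper rewritten as a standard aopalliance interceptor. The two interfaces below are minimal local stand-ins for `org.aopalliance.intercept.MethodInvocation` and `MethodInterceptor` (in real code you would import those from the aopalliance jar), and the event-publishing bodies are illustrative assumptions, not the plugin's actual logic.

```java
import java.lang.reflect.Method;

// Local stand-ins mirroring the org.aopalliance.intercept types.
interface MethodInvocation {
    Method getMethod();
    Object[] getArguments();
    Object proceed() throws Throwable;   // runs the intercepted method
}

interface MethodInterceptor {
    Object invoke(MethodInvocation invocation) throws Throwable;
}

// Sketch: publish started/completed/failed events around the intercepted call,
// roughly what an ActionEvent-keyed AOP hook would become under plain aopalliance.
class ActionEventInterceptor implements MethodInterceptor {
    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        System.out.println("event: started " + invocation.getMethod().getName());
        try {
            Object result = invocation.proceed();
            System.out.println("event: completed " + invocation.getMethod().getName());
            return result;
        } catch (Throwable t) {
            System.out.println("event: failed " + invocation.getMethod().getName());
            throw t;
        }
    }
}
```

The key design difference from the custom ACS AOP is that `proceed()` gives the container, not the plugin, control over chaining, so multiple interceptors compose without a bespoke dispatcher.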
Re: Latest Master DB issue
Hmm... I just checked the DB version and it's 4.0??? It should be 4.3.0, no?

mysql> select * from version;
+----+---------+---------------------+----------+
| id | version | updated             | step     |
+----+---------+---------------------+----------+
|  1 | 4.0.0   | 2013-10-08 10:58:49 | Complete |
+----+---------+---------------------+----------+
1 row in set (0.00 sec)

I installed cloudstack-management-4.3.0:

[root@eng-testing-cstack_master ~]# rpm -qf /usr/bin/cloudstack-setup-databases
cloudstack-management-4.3.0-SNAPSHOT.el6.x86_64

Francois On 10/8/2013, 1:04 PM, Francois Gaudreault wrote: It's a fresh master RPM install. Francois On 10/8/2013, 12:40 PM, Alena Prokharchyk wrote: It is not a small issue. [snip]
Re: [DISCUSS] Pluggable VM snapshot related operations?
My only comment is that having the return type as boolean and using that to indicate quiesce behaviour seems obscure and will probably lead to a problem later. You're basically saying the result of takeVMSnapshot will only ever need to communicate back whether unquiesce needs to happen. Maybe some result object would be more extensible. Actually, I think I have more comments. This seems a bit odd to me. Why would a storage driver in ACS implement VM snapshot functionality? A VM snapshot is really a hypervisor-orchestrated operation. So it seems like we're trying to implement a poor man's VM snapshot. Maybe if I understood what NetApp was trying to do it would make more sense, but it's all odd. To do a proper VM snapshot you need to snapshot memory and disk at the exact same time. How are we going to do that if ACS is orchestrating the VM snapshot and delegating to storage providers? It's not like you are going to pause the VM, or are you? Darren On Mon, Oct 7, 2013 at 11:59 AM, Edison Su edison...@citrix.com wrote: I created a design document page at https://cwiki.apache.org/confluence/display/CLOUDSTACK/Pluggable+VM+snapshot+related+operations, feel free to add items to it. And a new branch, pluggable_vm_snapshot, has been created. -Original Message- From: SuichII, Christopher [mailto:chris.su...@netapp.com] Sent: Monday, October 07, 2013 10:02 AM To: dev@cloudstack.apache.org Subject: Re: [DISCUSS] Pluggable VM snapshot related operations? I'm a fan of option 2 - this gives us the most flexibility (as you stated). The option is given to completely override the way VM snapshots work AND storage providers are given the opportunity to work within the default VM snapshot workflow. I believe this option should satisfy your concern, Mike. The snapshot and quiesce strategy would be in charge of communicating with the hypervisor. Storage providers should be able to leverage the default strategies and simply perform the storage operations.
I don't think it should be much of an issue that new methods on the storage driver interface may not apply to everyone. In fact, that is already the case. Some methods such as un/maintain(), attachToXXX() and takeSnapshot() are already not implemented by every driver - they just return false when asked if they can handle the operation. -- Chris Suich chris.su...@netapp.com NetApp Software Engineer Data Center Platforms - Cloud Solutions Citrix, Cisco Red Hat On Oct 5, 2013, at 12:11 AM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Well, my first thought on this is that the storage driver should not be telling the hypervisor to do anything. It should be responsible for creating/deleting volumes, snapshots, etc. on its storage system only. On Fri, Oct 4, 2013 at 5:57 PM, Edison Su edison...@citrix.com wrote: In 4.2, we added VM snapshot for VMware/XenServer. The current workflow is as follows: createVMSnapshot api -> VMSnapshotManagerImpl: createVMSnapshot -> send CreateVMSnapshotCommand to the hypervisor to create the vm snapshot. If anybody wants to change the workflow, they need to either change VMSnapshotManagerImpl directly or subclass VMSnapshotManagerImpl. Neither is an ideal choice, as VMSnapshotManagerImpl should be able to handle different ways to take a vm snapshot, instead of hard-coding one. The requirements for pluggable VM snapshot come from: storage vendors may have their own optimization, such as NetApp; a VM snapshot can be implemented in a totally different way (for example, I could just send a command to the guest VM, to tell my application to flush disk and hold disk writes, then come to the hypervisor to take a volume snapshot). If we agree on enabling pluggable VM snapshot, then we can move on to discuss how to implement it. The possible options: 1. Coarse-grained interface.
Add a VMSnapshotStrategy interface with the following methods:

VMSnapshot takeVMSnapshot(VMSnapshot vmSnapshot);
boolean revertVMSnapshot(VMSnapshot vmSnapshot);
boolean deleteVMSnapshot(VMSnapshot vmSnapshot);

The workflow will be: createVMSnapshot API -> VMSnapshotManagerImpl:createVMSnapshot -> VMSnapshotStrategy:takeVMSnapshot. VMSnapshotManagerImpl will manage VM state and do the sanity checks, then hand over to VMSnapshotStrategy. A VMSnapshotStrategy implementation may just send a create/revert/delete VMSnapshotCommand to the hypervisor host, or perform any special operations.

2. Fine-grained interface. Not only add a VMSnapshotStrategy interface, but also add certain methods on the storage driver. The VMSnapshotStrategy interface will be the same as in option 1. The following methods would be added to the storage driver: /* volumesBelongToVM is the list of volumes of the VM that were created on this storage; the storage vendor can either take one snapshot for these volumes in one shot, or take snapshot for each volume
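To make the discussion above concrete, here is a minimal sketch of what the proposed strategy interface could look like with Darren's result-object suggestion folded in. All names (VMSnapshotResult, canHandle, DefaultVMSnapshotStrategy) are illustrative assumptions, not the actual CloudStack API:

```java
// Hypothetical sketch of the VMSnapshotStrategy idea discussed above.
// A result object replaces the bare boolean so the strategy can report
// more than just "unquiesce needed" back to the manager later on.
interface VMSnapshot {
    String getName();
}

// Extensible result object instead of a boolean return value.
final class VMSnapshotResult {
    private final boolean success;
    private final boolean needsUnquiesce;
    private final String details;

    VMSnapshotResult(boolean success, boolean needsUnquiesce, String details) {
        this.success = success;
        this.needsUnquiesce = needsUnquiesce;
        this.details = details;
    }

    boolean isSuccess() { return success; }
    boolean needsUnquiesce() { return needsUnquiesce; }
    String getDetails() { return details; }
}

interface VMSnapshotStrategy {
    // Mirrors how storage drivers already opt out of operations they
    // do not implement: return false if this strategy cannot handle it.
    boolean canHandle(VMSnapshot snapshot);

    VMSnapshotResult takeVMSnapshot(VMSnapshot snapshot);
    boolean revertVMSnapshot(VMSnapshot snapshot);
    boolean deleteVMSnapshot(VMSnapshot snapshot);
}

// Default strategy: simply delegate to the hypervisor.
class DefaultVMSnapshotStrategy implements VMSnapshotStrategy {
    public boolean canHandle(VMSnapshot snapshot) { return true; }

    public VMSnapshotResult takeVMSnapshot(VMSnapshot snapshot) {
        // A real implementation would send CreateVMSnapshotCommand
        // to the hypervisor host here.
        return new VMSnapshotResult(true, false, "hypervisor snapshot taken");
    }

    public boolean revertVMSnapshot(VMSnapshot snapshot) { return true; }
    public boolean deleteVMSnapshot(VMSnapshot snapshot) { return true; }
}
```

Under this sketch, VMSnapshotManagerImpl would iterate registered strategies, pick the first whose canHandle() returns true, and inspect the result object rather than a raw boolean.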
Re: Latest Master DB issue
Ok, this is what's going on - the DB upgrade procedure is different on a developer's setup and when deployed using cloudstack-setup-databases.

On a developer's setup:
1) You deploy the code.
2) Deploy the DB using 'mvn -P developer -pl developer -Ddeploydb'. As a part of this step, the DataBaseUpgradeChecker:
* first deploys the base DB version - 4.0.0
* then checks the current version of the code, and performs the DB upgrade if needed.

So on master, the version table looks like this after the DB is deployed:

mysql> select * from version;
+----+---------+---------------------+----------+
| id | version | updated             | step     |
+----+---------+---------------------+----------+
|  1 | 4.0.0   | 2013-10-08 10:34:47 | Complete |
|  2 | 4.1.0   | 2013-10-08 17:35:22 | Complete |
|  3 | 4.2.0   | 2013-10-08 17:35:22 | Complete |
|  4 | 4.3.0   | 2013-10-08 17:35:22 | Complete |
+----+---------+---------------------+----------+
4 rows in set (0.00 sec)

3) Start the management server.

When deployed from RPM:
1) You deploy the code.
2) Run cloudstack-setup-databases. As the result of this step, the 4.0.0 base version of the DB is deployed. That's why you see only the 4.0.0 record in the DB.
3) Start the management server. DataBaseUpgradeChecker is invoked as a part of it, and performs the DB upgrade to the version of the current code. Only after that are all the managers invoked and the system caller context initialized.

Looks like the load order for step 3) got broken recently, and the system context gets initialized before the DB upgrade is finished. So we either need to fix the order, or invoke DataBaseUpgradeChecker as a part of cloudstack-setup-databases, so that at the point when the management server starts up, the DB is already upgraded to the latest version.

-Alena.

On 10/8/13 10:25 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Hmm... I just checked the DB version and it's 4.0??? It should be 4.3.0, no?
mysql> select * from version;
+----+---------+---------------------+----------+
| id | version | updated             | step     |
+----+---------+---------------------+----------+
|  1 | 4.0.0   | 2013-10-08 10:58:49 | Complete |
+----+---------+---------------------+----------+
1 row in set (0.00 sec)

I installed cloudstack-management-4.3.0:

[root@eng-testing-cstack_master ~]# rpm -qf /usr/bin/cloudstack-setup-databases
cloudstack-management-4.3.0-SNAPSHOT.el6.x86_64

Francois

On 10/8/2013, 1:04 PM, Francois Gaudreault wrote: It's a fresh master RPM install. Francois

On 10/8/2013, 12:40 PM, Alena Prokharchyk wrote: It is not a small issue. The is_default field was added to the table as a part of the 4.1-4.2 DB upgrade. Looks like the code tries to retrieve the system user before the DB upgrade is completed. The DB upgrade is a major part of the system integrity check; no queries to the DB should be made before it's completed. Francois, did you start seeing this problem just recently? -Alena.

On 10/8/13 8:04 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Hi, I compiled master this morning, and there is a small DB issue. One field is missing in the account table (field default). CS will not start because of that.

2013-10-08 11:01:42,623 FATAL [o.a.c.c.CallContext] (Timer-2:null) Exiting the system because we're unable to register the system call context.
com.cloud.utils.exception.CloudRuntimeException: DB Exception on: com.mysql.jdbc.JDBC4PreparedStatement@4c1aa2e9: SELECT account.id, account.account_name, account.type, account.domain_id, account.state, account.removed, account.cleanup_needed, account.network_domain, account.uuid, account.default_zone_id, account.default FROM account WHERE account.id = 1 AND account.removed IS NULL
    at com.cloud.utils.db.GenericDaoBase.findById(GenericDaoBase.java:986)
    at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
    at com.cloud.utils.db.GenericDaoBase.lockRow(GenericDaoBase.java:963)
    at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
    at com.cloud.utils.db.GenericDaoBase.findById(GenericDaoBase.java:926)
    at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
    at com.cloud.dao.EntityManagerImpl.findById(EntityManagerImpl.java:45)
    at org.apache.cloudstack.context.CallContext.register(CallContext.java:166)
    at org.apache.cloudstack.context.CallContext.registerSystemCallContextOnceOnly(CallContext.java:141)
    at org.apache.cloudstack.context.CallContextListener.onEnterContext(CallContextListener.java:36)
    at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:83)
    at
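The upgrade flow Alena describes boils down to computing the chain of upgrade steps between the version recorded in the version table and the version of the running code. A simplified, illustrative sketch of that logic (the class and method names here are assumptions for illustration, not the actual DataBaseUpgradeChecker code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the upgrade-path logic described above:
// compare the latest row in the version table against the code
// version and return the upgrade steps still to be applied.
// Not the real DataBaseUpgradeChecker implementation.
class UpgradePath {
    // Known upgrade sequence, oldest to newest.
    static final List<String> SEQUENCE =
            Arrays.asList("4.0.0", "4.1.0", "4.2.0", "4.3.0");

    static List<String> pendingUpgrades(String dbVersion, String codeVersion) {
        int from = SEQUENCE.indexOf(dbVersion);
        int to = SEQUENCE.indexOf(codeVersion);
        if (from < 0 || to < 0 || from > to) {
            throw new IllegalArgumentException("unknown or inverted versions");
        }
        // Everything after the DB's current version, up to and including
        // the code version, must be applied before any manager or DAO
        // is allowed to query the schema.
        return new ArrayList<>(SEQUENCE.subList(from + 1, to + 1));
    }
}
```

The RPM failure in this thread is exactly the case where the pending list is non-empty (DB at 4.0.0, code at 4.3.0) but CallContext registration runs before those steps have been applied.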
Re: Latest Master DB issue
Just throwing in a pitch for my spring-modularization branch. The DB initialization stuff should all be fixed in the spring-modularization branch. If it isn't, we can better address it there anyhow. The current initialization of Spring in master is pure madness. When it works, it's because it just happens to work at that time. Adding new beans and dependencies can easily break the initialization order, as seen over and over again with the DB upgrades.

Darren

On Tue, Oct 8, 2013 at 10:47 AM, Alena Prokharchyk alena.prokharc...@citrix.com wrote: Ok, this is what's going on - the DB upgrade procedure is different on a developer's setup and when deployed using cloudstack-setup-databases [...]
Re: Latest Master DB issue
Thanks Alena for the explanation. What is the better path to fix this on our setup? Should I wait for a fix in master, or should I manually run the deploydb with mvn? I guess the second option won't work since I used RPM?

Francois

On 10/8/2013, 1:47 PM, Alena Prokharchyk wrote: Ok, this is what's going on - the DB upgrade procedure is different on a developer's setup and when deployed using cloudstack-setup-databases [...]
Re: Hypervisor Questions
OK, so, I've collected a bunch of info with regards to how CS handles HA, DRS, FT, and LM. Can people confirm or deny this information?

CloudStack 4.2:

XenServer:
HA: Request made by CloudStack (handled by hypervisor). No prioritization*.
FT: No such feature (warm secondary VM...HA with essentially no downtime).
DRS: Handled by CloudStack (only when initially deploying a VM).
LM**: Only within a XenServer Resource Pool.

VMware:
HA: Request made by CloudStack (handled by hypervisor). No prioritization*.
FT: No support for the VMware Fault Tolerance feature (warm secondary VM...HA with essentially no downtime).
DRS: Handled by CloudStack (only when initially deploying a VM).
LM**: Can be across VMware Clusters (within the same CS Pod).

KVM:
HA: Request made by CloudStack (handled by hypervisor). No prioritization*.
FT: No such feature (warm secondary VM...HA with essentially no downtime).
DRS: Handled by CloudStack (only when initially deploying a VM).
LM**: Only within a CS KVM Cluster.

* There is no way in CloudStack to specify that one VM should be started over another if there is an insufficient number of resources remaining in the cluster to start all VMs that were running on a failed host.
** LM = Live Migration

On the topic of HA, will CloudStack ever restart a VM on a host in a different cluster from the one the VMs were running on when that host went offline?

Thanks!

On Fri, Oct 4, 2013 at 10:19 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Right, Marcus, what you say makes sense. You brought up a point I was going to ask about: if you can migrate between clusters, what's the point of a cluster? That point aside, I was really more interested (for the purposes of this e-mail thread) to find out how this works for KVM today (not to recommend adding a new feature if cross-cluster support is not currently in CloudStack for KVM). By your response, can I assume you can only migrate a VM from one KVM host to another in the same cluster?
Thanks

On Fri, Oct 4, 2013 at 9:48 PM, Marcus Sorensen shadow...@gmail.com wrote: From a KVM standpoint, a cluster has admin-defined meaning. It's not always going to map to some externally defined cluster. I can make my whole data center a cluster, and not use zone-wide storage, or still use zone-wide storage. If you have zone-wide storage, maybe you want to keep VMs within small defined clusters for some non-storage reason. It makes sense to me to keep the cluster as an entity that confines where a VM can run, regardless of zone-wide storage; otherwise is there a purpose to it at all? Maybe we can offer a migrate-VM-between-clusters function if zone-wide storage is in use? Also keep in mind that there's a pod level. You'd still need to be in the same pod if you wanted to jump between clusters.

On Oct 4, 2013 5:05 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Perhaps Marcus can answer this from a KVM standpoint?

On Fri, Oct 4, 2013 at 5:03 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: This implies live migration between VMware clusters is supported in CloudStack: https://issues.apache.org/jira/browse/CLOUDSTACK-4265

On Fri, Oct 4, 2013 at 4:56 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: For anyone who is interested in the outcome of this thread, this seems to answer the XenServer part: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Enabling+Storage+XenMotion+for+XenServer "CloudStack currently allows live migration of a virtual machine from one host to another only within a cluster." The document was last updated June 28, 2013.

On Fri, Oct 4, 2013 at 12:41 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Yeah, that's kind of what I was interested in learning about. Now that we have zone-wide primary storage, does that mean CS is able to issue the live migration of a VM from one cluster to another (or are we still confined to clusters)?
On Fri, Oct 4, 2013 at 11:55 AM, Travis Graham tgra...@tgraham.us wrote: Was that a limitation caused by the primary storage only being available to a single cluster, and not zone-wide like 4.2.0 provides? Travis

On Oct 4, 2013, at 1:52 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Maybe this is a silly question, but if CS handles live migrations, are we still constrained to migrating VMs from one host to another in the same cluster? Same question for HA.

On Wed, Oct 2, 2013 at 8:42 AM, Clayton Weise cwe...@keyinfo.com wrote: AFAIK, no, but it's a great RFE that I would vote for.

-----Original Message----- From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com] Sent: Tuesday, October 01, 2013 9:26 PM To: dev@cloudstack.apache.org Subject: Re: Hypervisor Questions

Oh, and, yes, when I referred to HA, it was (as you said) with the meaning of a host going offline and VMs being restarted on other hosts (perhaps in a prioritized
Re: Latest Master DB issue
I'm not familiar with the RPM install, but I don't think that the workaround would be valid for everyone - QA folks will not have mvn installed on their machines, so this issue is a blocker for them. You can try it on your setup though to unblock yourself. It does have to be fixed in master, either by the spring-modularization branch merge, or by applying some temporary checkin till the merge is done, if too many people are blocked by that.

-Alena.

On 10/8/13 10:55 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Thanks Alena for the explanation. What is the better path to fix this on our setup? Should I wait for a fix in master or should I manually run the deploydb with mvn? I guess the second option won't work since I used RPM? [...]
Re: Latest Master DB issue
Deploying the db from maven will drop all the tables. Not sure if this is a fresh install or not. For master, running mvn will be your best bet. Otherwise you can look at running com.cloud.upgrade.DatabaseCreator manually, if you're adventurous.

Darren

On Tue, Oct 8, 2013 at 10:55 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Thanks Alena for the explanation. What is the better path to fix this on our setup? Should I wait for a fix in master or should I manually run the deploydb with mvn? I guess the second option won't work since I used RPM? [...]
Re: [MERGE] spring-modularization to master - Spring Modularization
I'm not getting any notifications of BVT test failures. Where do I subscribe?

On 10/8/13 10:20 AM, Darren Shepherd darren.s.sheph...@gmail.com wrote: From what I can gather, it seems that master currently fails the BVT (and when I say BVT, I mean that black box that apparently exists somewhere doing something, but I have no clue what it really means). So in turn my spring-modularization branch will additionally fail BVT. Citrix internal QA ran some tests against my branch and they mostly passed, but some failed. It's quite difficult to sort through this all because tests are failing on master. So I don't know what to do at this point. At least my branch won't completely blow up everything. I just know the longer it takes to merge this, the more painful it will be. Honestly, this is all quite frustrating for myself, being new to contributing to ACS. I feel somewhat lost in the whole process of how to get features in. I'll refrain from venting my frustrations. Darren
Re: Latest Master DB issue
I guess in my case, it's fine. It was a fresh install... Francois On 10/8/2013, 2:08 PM, Darren Shepherd wrote: Deploy db from maven will drop all the tables. Not sure if this is fresh install or not. For master, running mvn will be your best bet. Otherwise you can look at running com.cloud.upgrade.DatabaseCreator manually if your adventurous. Darren On Tue, Oct 8, 2013 at 10:55 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Thanks Alena for the explaination. What is the better path to fix this on our setup? Should I wait for a fix in master or should I manually run the deploydb with mvn? I guess the second option won't work since I used RPM? Francois On 10/8/2013, 1:47 PM, Alena Prokharchyk wrote: Ok, this is what going on - the DB upgrade procedure is different on developer's setup and when deployed using cloudstack-setup-databases On developers setup: 1) you deploy the code 2) Deploy the DB using 'mvn -P developer -pl developer -Ddeploydb'. As a part of this step, the DataBaseUpgradeChecker: * first deploys the base DB version - 4.0.0 * then checks the current version of the code, and performs the db upgrade if needed. So on master, version table looks like this after the db is deployed: mysql select * from version; ++-+-+--+ | id | version | updated | step | ++-+-+--+ | 1 | 4.0.0 | 2013-10-08 10:34:47 | Complete | | 2 | 4.1.0 | 2013-10-08 17:35:22 | Complete | | 3 | 4.2.0 | 2013-10-08 17:35:22 | Complete | | 4 | 4.3.0 | 2013-10-08 17:35:22 | Complete | ++-+-+--+ 4 rows in set (0.00 sec) 3) Start management server. When deployed from rpm: 1) you deploy the code 2) run cloudstack-setup-databases. As the result of this step, 4.0.0 base version of the DB is deployed. Thats why you see only 4.0.0 record in the DB. 3) Start management server. DataBaseUpgradeChecker is being invoked as a part of it, and performs the db upgrade to the version of the current code. Only after that all the managers get invoked + system caller context get initialized. 
Looks like the load order for step 3) got broken recently, and system context gets initialized before the db upgrade is finished. So we either need to fix the order, or invoke DataBaseUpgradeChecker as a part of cloudstack-setup-databases so at the point when management server starts up, the DB already upgraded to the latest version. -Alena. On 10/8/13 10:25 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Hmm... I just checked the DB version and it's 4.0??? It should be 4.3.0 no? mysql select * from version; ++-+-+--+ | id | version | updated | step | ++-+-+--+ | 1 | 4.0.0 | 2013-10-08 10:58:49 | Complete | ++-+-+--+ 1 row in set (0.00 sec) I installed cloudstack-management-4.3.0: [root@eng-testing-cstack_master ~]# rpm -qf /usr/bin/cloudstack-setup-databases cloudstack-management-4.3.0-SNAPSHOT.el6.x86_64 Francois On 10/8/2013, 1:04 PM, Francois Gaudreault wrote: It's a fresh master RPM install. Francois On 10/8/2013, 12:40 PM, Alena Prokharchyk wrote: It is not a small issue. is_default filed was added to the table as a part of the 41-42 db upgrade. Looks like the code tries to retrieve system user before the db upgrade is completed. DB upgrade is a major part of system integrity check; no queries to the DB should be made before its completed. Francois, did you start seeing this problem just recently? -Alena. On 10/8/13 8:04 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Hi, I compiled Master this morning, and there is a small DB issue. One field is missing in the account table (field default). CS will not start because of that. 2013-10-08 11:01:42,623 FATAL [o.a.c.c.CallContext] (Timer-2:null) Exiting the system because we're unable to register the system call context. 
com.cloud.utils.exception.CloudRuntimeException: DB Exception on: com.mysql.jdbc.JDBC4PreparedStatement@4c1aa2e9: SELECT account.id, account.account_name, account.type, account.domain_id, account.state, account.removed, account.cleanup_needed, account.network_domain, account.uuid, account.default_zone_id, account.default FROM account WHERE account.id = 1 AND account.removed IS NULL at com.cloud.utils.db.GenericDaoBase.findById(GenericDaoBase.java:986) at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125) at com.cloud.utils.db.GenericDaoBase.lockRow(GenericDaoBase.java:963) at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125) at com.cloud.utils.db.GenericDaoBase.findById(GenericDaoBase.java:926) at
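Alena's point in the thread above is an ordering invariant: no component may query the database before the upgrade chain has completed. The sketch below illustrates that invariant in Python; it is a hypothetical illustration only, not CloudStack's actual Java code, and the class and field names are invented stand-ins.

```python
# Hypothetical sketch of the startup-ordering invariant described above:
# the upgrade checker must run to completion before any component (such
# as the call context) is allowed to touch the database.
class DatabaseUpgradeChecker:
    """Applies each pending upgrade step in order, e.g. 4.0.0 -> 4.1.0 -> ..."""
    def __init__(self, current_db_version, code_version, upgrade_path):
        self.db_version = current_db_version
        self.code_version = code_version
        self.upgrade_path = upgrade_path
        self.complete = False

    def check(self):
        start = self.upgrade_path.index(self.db_version)
        end = self.upgrade_path.index(self.code_version)
        for version in self.upgrade_path[start + 1 : end + 1]:
            self.db_version = version  # stand-in for running the upgrade SQL
        self.complete = True

class CallContext:
    """Refuses to initialize until the schema is current."""
    def __init__(self, checker):
        if not checker.complete:
            raise RuntimeError("DB upgrade not finished; refusing to query the DB")
        self.ready = True

checker = DatabaseUpgradeChecker("4.0.0", "4.3.0",
                                 ["4.0.0", "4.1.0", "4.2.0", "4.3.0"])
checker.check()            # must happen first
context = CallContext(checker)  # only now is a DB consumer allowed in
```

Constructing `CallContext` before `checker.check()` raises, which is the Python analogue of the FATAL error Francois hit when the load order regressed.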
[New Feature FS] SSL Offload Support for Cloudstack
Hi, I have been working on adding SSL offload functionality to CloudStack and making it work for NetScaler. I have an initial design documented at https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSL+Offloading+Support and I would really love your feedback. The bug for this is https://issues.apache.org/jira/browse/CLOUDSTACK-4821 . Thanks, -Syed
Re: Call for 4.3 and 4.2.1 Release Managers!
+1. We should get a call out to the community to see who is expecting to merge for 4.3. Although 10/31 is the feature freeze date, proposals and branches should be in already IMHO. On 10/8/13 9:54 AM, Chip Childers chip.child...@sungard.com wrote: On Tue, Oct 08, 2013 at 04:49:24PM +, Animesh Chaturvedi wrote: Whomever wants to take the lead for 4.3 should probably (IMO), start a discussion on (1) coordination of work and [Animesh] Since we are getting closer to code freeze date and we cannot find overall release management I can pick up 4.3 and call out the roles and ask for volunteers for them. Sounds good. (2) re-thinking the schedule to reset based on both historic performance and reality of the calendar (we've really been working on 4.2 when we should have moved on to 4.3). [Animesh] Chip my preference would be to stick to the timeline of 4.3 which will make it a smaller release but will give us the opportunity to clear up our technical debt. +1 to being smaller and tech-debt focused. -chip
RE: [DISCUSS] Breaking out Marvin from CloudStack
-Original Message- From: Santhosh Edukulla [mailto:santhosh.eduku...@citrix.com] Sent: Tuesday, October 08, 2013 1:28 AM To: dev@cloudstack.apache.org Subject: RE: [DISCUSS] Breaking out Marvin from CloudStack Comments Inline. -Original Message- From: Edison Su [mailto:edison...@citrix.com] Sent: Tuesday, October 08, 2013 4:18 AM To: dev@cloudstack.apache.org Subject: RE: [DISCUSS] Breaking out Marvin from CloudStack Few questions: 1. About the more object-oriented CloudStack API python binding: Is the proposed api good enough? For example, the current hand-written create virtual machine looks like: class VirtualMachine(object): @classmethod def create(cls, apiclient, services, templateid=None, accountid=None, domainid=None, zoneid=None, networkids=None, serviceofferingid=None, securitygroupids=None, projectid=None, startvm=None, diskofferingid=None, affinitygroupnames=None, group=None, hostid=None, keypair=None, mode='basic', method='GET'): the proposed api may look like: class VirtualMachine(object): def create(self, apiclient, accountId, templateId, **kwargs) The proposed api will look better than the previous one, and it's automatically generated, so it's easy to maintain. But as a consumer of the api, how do people know what kind of parameters should be passed in? Will you have an online document for your api? Or do you assume people will look at the api docs generated by CloudStack? Or why not make the api itself self-contained? For example, add docs before the create method: class VirtualMachine(object): ''' Args: accountId: what ever templateId: whatever networkids: whatever ''' ''' Response: ''' def create(self, apiclient, accountId, templateId, **kwargs) All the api documents should be included in api discovery already, so it should be easy to add them in your api binding. [Santhosh]: Each verb, as an action on an entity, will have provision as before for all required as well as optional arguments.
Regarding doc strings: if the API docs provide this, we will add them as corresponding doc strings during generation of the Python binding, and of the entities as well. As you rightly mentioned, it would be good to add this; we will make sure to do it. Adding adequate doc strings applies while writing test features/libs as well; it will improve ease of use, readability, usage, etc. In any case, a wiki page and additional pydoc documents will be posted online. That's great! The way you separate required parameters from optional parameters in the method signature is quite a good idea. 2. Regarding data factories: from the proposed factories, does the test writer still need to write code in each test case to get data, such as code to get an account during setupClass? I looked at some of the existing test cases; most of them have the same code snippet: class Services: def __init__(self): self.services = { account: { email: t...@test.com, firstname: Test, lastname: User, username: test, password: password, }, virtual_machine: { displayname: Test VM, username: root, password: password, ssh_port: 22, hypervisor: 'XenServer', privateport: 22, publicport: 22, protocol: 'TCP', }, With the data factories, the code will look like the following? Class TestFoo: Def setupClass(): Account = UserAccount(apiclient) VM = UserVM(apiClient) And if I want to customize the default data factories, I should be able to use something like UserAccount(apiclient, username='myfoo')? And the data factories should be able to be customized based on the test environment, right? For example, the current iso test cases are hardcoded to test against http://people.apache.org/~tsp/dummy.iso, but that won't work for devcloud or in an internal network. The ISO data factory should be able to return a url based on the test environment, so the iso test cases can be reused.
[Santhosh]: Currently, as you mentioned, the Services class is part of many test modules; it is basically the data part of the test. We are separating this with a factory approach, thus segregating data from test. Compare the earlier test code, which uses the Services class, with the test code below, which does not: class TestVpcLifeCycle(cloudstackTestCase): def setUp(self): self.apiclient = super(TestVpcLifeCycle, self).getClsTestClient().getApiClient() self.zoneid = get_zone(self.apiclient).id
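The generated-binding and data-factory ideas discussed in this thread can be sketched together. This is a hypothetical illustration of the proposed style, not Marvin's actual generated code; `FakeApiClient`, `user_account_factory`, and all parameter names are invented stand-ins.

```python
# Sketch of the proposed binding style: required arguments are explicit,
# everything else goes through **kwargs, and the docstring carries the
# parameter documentation pulled from API discovery.
class FakeApiClient:
    """Stand-in client that records the last request, so the example is self-checking."""
    def __init__(self):
        self.last = None

    def request(self, command, params):
        self.last = (command, params)
        return {"id": "vm-1", **params}

class VirtualMachine:
    def create(self, apiclient, accountId, templateId, **kwargs):
        """
        Args:
            accountId: account to own the VM (required)
            templateId: template to deploy from (required)
            **kwargs: optional API parameters, e.g. networkids, zoneid
        """
        params = {"accountid": accountId, "templateid": templateId}
        params.update(kwargs)
        return apiclient.request("deployVirtualMachine", params)

def user_account_factory(apiclient, **overrides):
    """Data factory: sensible defaults, overridable per test or per environment."""
    data = {"username": "test", "password": "password", "email": "test@test.com"}
    data.update(overrides)
    return data

client = FakeApiClient()
vm = VirtualMachine().create(client, "acct-1", "tmpl-1", zoneid="zone-1")
account = user_account_factory(client, username="myfoo")
```

The factory pattern also answers the ISO-URL question above: an environment-aware factory can substitute a reachable URL per deployment instead of hardcoding one in every test module.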
Re: [DISCUSS] Pluggable VM snapshot related operations?
So in the implementation, when we say quiesce, is that actually implemented as a VM snapshot (memory and disk)? And then when you say unquiesce, are you talking about deleting the VM snapshot? In NetApp, what are you snapshotting? The whole NetApp volume (I don't know the correct term), a file on NFS, an iSCSI volume? I don't know a whole heck of a lot about the NetApp snapshot capabilities. I know storage solutions can snapshot better and faster than hypervisors can with COW files. I've personally just always been perplexed about what's the best way to implement it. For storage solutions that are block based, it's really easy to have the storage do the snapshot. For shared file systems, like NFS, it seems way more complicated, as you don't want to snapshot the entire filesystem in order to snapshot one file. Darren On Tue, Oct 8, 2013 at 11:10 AM, SuichII, Christopher chris.su...@netapp.com wrote: I can comment on the second half. Through storage operations, storage providers can create backups much faster than hypervisors, and over time their snapshots are more efficient than the snapshot chains that hypervisors create. It is true that a VM snapshot taken at the storage level is slightly different, as it would be pseudo-quiesced and not have its memory snapshotted. This is accomplished through hypervisor snapshots: 1) VM snapshot request (let's say VM 'A') 2) Create hypervisor snapshot (optional) - VM 'A' is snapshotted, creating active VM 'A*' - All disk traffic now goes to VM 'A*' and 'A' is a snapshot of 'A*' 3) Storage driver(s) take snapshots of each volume 4) Undo hypervisor snapshot (optional) - VM snapshot 'A' is rolled back into VM 'A*' so the hypervisor snapshot no longer exists Now, a couple of notes: - The reason this is optional is that not all users necessarily care about the memory or disk consistency of their VMs and would prefer faster snapshots to consistency.
- Preemptively, yes, we are actually taking hypervisor snapshots, which means there isn't actually a performance gain from taking storage snapshots when quiescing the VM. However, the performance gain will come both during restoring the VM and during normal operations, as described above. Although you can think of it as a poor man's VM snapshot, I would think of it more as a consistent multi-volume snapshot. Again, the difference being that this snapshot was not truly quiesced like a hypervisor snapshot would be. -- Chris Suich chris.su...@netapp.com NetApp Software Engineer Data Center Platforms – Cloud Solutions Citrix, Cisco Red Hat On Oct 8, 2013, at 1:47 PM, Darren Shepherd darren.s.sheph...@gmail.com wrote: My only comment is that having the return type as boolean and using that to indicate quiesce behaviour seems obscure and will probably lead to a problem later. You're basically saying the result of takeVMSnapshot will only ever need to communicate back whether unquiesce needs to happen. Maybe some result object would be more extensible. Actually, I think I have more comments. This seems a bit odd to me. Why would a storage driver in ACS implement VM snapshot functionality? VM snapshot is really a hypervisor-orchestrated operation. So it seems like we're trying to implement a poor man's VM snapshot. Maybe if I understood what NetApp was trying to do it would make more sense, but it's all odd. To do a proper VM snapshot you need to snapshot memory and disk at the exact same time. How are we going to do that if ACS is orchestrating the VM snapshot and delegating to storage providers? It's not like you are going to pause the VM, or are you? Darren On Mon, Oct 7, 2013 at 11:59 AM, Edison Su edison...@citrix.com wrote: I created a design document page at https://cwiki.apache.org/confluence/display/CLOUDSTACK/Pluggable+VM+snapshot+related+operations, feel free to add items on it. And a new branch pluggable_vm_snapshot is created.
-Original Message- From: SuichII, Christopher [mailto:chris.su...@netapp.com] Sent: Monday, October 07, 2013 10:02 AM To: dev@cloudstack.apache.org Subject: Re: [DISCUSS] Pluggable VM snapshot related operations? I'm a fan of option 2 - this gives us the most flexibility (as you stated). The option is given to completely override the way VM snapshots work AND storage providers are given the opportunity to work within the default VM snapshot workflow. I believe this option should satisfy your concern, Mike. The snapshot and quiesce strategy would be in charge of communicating with the hypervisor. Storage providers should be able to leverage the default strategies and simply perform the storage operations. I don't think it should be much of an issue that new methods in the storage driver interface may not apply to everyone. In fact, that is already the case. Some methods such as un/maintain(), attachToXXX() and takeSnapshot() are already not implemented by every driver - they just
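Darren's suggestion above - return a result object from takeVMSnapshot rather than a bare boolean - can be sketched briefly. This is a hypothetical illustration in Python (the actual interface under discussion is Java); every field and function name here is invented for the example.

```python
# Sketch of the "result object instead of boolean" idea: the snapshot
# call can report more than just "does unquiesce need to happen", and
# new fields can be added later without breaking callers.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VMSnapshotResult:
    success: bool
    needs_unquiesce: bool = False          # the single bit the boolean carried
    volume_snapshot_ids: List[str] = field(default_factory=list)
    error: Optional[str] = None            # room to grow, e.g. diagnostics

def take_vm_snapshot(volumes, quiesce=True):
    """Pretend storage-driver path: snapshot each volume, report back."""
    ids = [f"snap-{v}" for v in volumes]
    return VMSnapshotResult(success=True,
                            needs_unquiesce=quiesce,
                            volume_snapshot_ids=ids)

result = take_vm_snapshot(["vol-1", "vol-2"])
```

The design point is extensibility: a caller that today only reads `needs_unquiesce` keeps working when later fields (per-volume errors, timings) are added, whereas a boolean return would force a signature change.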
RE: Latest Master DB issue
Here is the defect created for this issue: https://issues.apache.org/jira/browse/CLOUDSTACK-4825 Regards, Rayees -Original Message- From: Francois Gaudreault [mailto:fgaudrea...@cloudops.com] Sent: Tuesday, October 08, 2013 11:15 AM To: dev@cloudstack.apache.org Subject: Re: Latest Master DB issue I guess in my case, it's fine. It was a fresh install... Francois On 10/8/2013, 2:08 PM, Darren Shepherd wrote: Deploy db from maven will drop all the tables. Not sure if this is a fresh install or not. For master, running mvn will be your best bet. Otherwise you can look at running com.cloud.upgrade.DatabaseCreator manually if you're adventurous. Darren On Tue, Oct 8, 2013 at 10:55 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Thanks Alena for the explanation. What is the better path to fix this on our setup? Should I wait for a fix in master or should I manually run the deploydb with mvn? I guess the second option won't work since I used RPM? Francois On 10/8/2013, 1:47 PM, Alena Prokharchyk wrote: Ok, this is what's going on - the DB upgrade procedure is different on a developer's setup and when deployed using cloudstack-setup-databases. On a developer's setup: 1) you deploy the code 2) deploy the DB using 'mvn -P developer -pl developer -Ddeploydb'. As a part of this step, the DataBaseUpgradeChecker: * first deploys the base DB version - 4.0.0 * then checks the current version of the code, and performs the db upgrade if needed. So on master, the version table looks like this after the db is deployed:

mysql> select * from version;
+----+---------+---------------------+----------+
| id | version | updated             | step     |
+----+---------+---------------------+----------+
|  1 | 4.0.0   | 2013-10-08 10:34:47 | Complete |
|  2 | 4.1.0   | 2013-10-08 17:35:22 | Complete |
|  3 | 4.2.0   | 2013-10-08 17:35:22 | Complete |
|  4 | 4.3.0   | 2013-10-08 17:35:22 | Complete |
+----+---------+---------------------+----------+
4 rows in set (0.00 sec)

3) Start management server. When deployed from rpm: 1) you deploy the code 2) run cloudstack-setup-databases. As the result of this step, the 4.0.0 base version of the DB is deployed.
That's why you see only the 4.0.0 record in the DB. 3) Start management server. DataBaseUpgradeChecker is invoked as a part of it, and performs the db upgrade to the version of the current code. Only after that are all the managers invoked and the system caller context initialized. Looks like the load order for step 3) got broken recently, and the system context gets initialized before the db upgrade is finished. So we either need to fix the order, or invoke DataBaseUpgradeChecker as a part of cloudstack-setup-databases so that by the time the management server starts up, the DB is already upgraded to the latest version. -Alena. On 10/8/13 10:25 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Hmm... I just checked the DB version and it's 4.0??? It should be 4.3.0, no?

mysql> select * from version;
+----+---------+---------------------+----------+
| id | version | updated             | step     |
+----+---------+---------------------+----------+
|  1 | 4.0.0   | 2013-10-08 10:58:49 | Complete |
+----+---------+---------------------+----------+
1 row in set (0.00 sec)

I installed cloudstack-management-4.3.0: [root@eng-testing-cstack_master ~]# rpm -qf /usr/bin/cloudstack-setup-databases cloudstack-management-4.3.0-SNAPSHOT.el6.x86_64 Francois On 10/8/2013, 1:04 PM, Francois Gaudreault wrote: It's a fresh master RPM install. Francois On 10/8/2013, 12:40 PM, Alena Prokharchyk wrote: It is not a small issue. The is_default field was added to the table as a part of the 4.1-4.2 db upgrade. Looks like the code tries to retrieve the system user before the db upgrade is completed. The DB upgrade is a major part of the system integrity check; no queries to the DB should be made before it's completed. Francois, did you start seeing this problem just recently? -Alena. On 10/8/13 8:04 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Hi, I compiled master this morning, and there is a small DB issue. One field is missing in the account table (field default). CS will not start because of that. 2013-10-08 11:01:42,623 FATAL [o.a.c.c.CallContext] (Timer-2:null) Exiting the system because we're unable to register the system call context.
com.cloud.utils.exception.CloudRuntimeException: DB Exception on: com.mysql.jdbc.JDBC4PreparedStatement@4c1aa2e9: SELECT account.id, account.account_name, account.type, account.domain_id, account.state, account.removed, account.cleanup_needed, account.network_domain, account.uuid, account.default_zone_id, account.default FROM account WHERE account.id = 1 AND account.removed IS NULL at com.cloud.utils.db.GenericDaoBase.findById(GenericDaoBase.java:986) at
Re: [New Feature FS] SSL Offload Support for Cloudstack
Technicality here, can we call the functionality SSL termination? While technically we are offloading SSL from the VM, offloading typically carries a connotation that it's being done in hardware. So we are really talking about SSL termination. A couple of comments. I wouldn't want to assume anything about SSL based on port numbers. So instead, specify the protocol (http/https/ssl/tcp) for the front and back side of the load balancer. Additionally, I'd prefer the chain not be in the cert. When configuring some backends you need the cert and chain separate. It would be easier if they were stored that way. Otherwise you have to do the logic of parsing all the certs in the keystore and looking for the one that matches the key. Otherwise, awesome feature. I'll tell you, from an impl perspective, parsing and validating the SSL certs is a pain. I can probably find some java code to help out here, as I've done this before. Darren On Tue, Oct 8, 2013 at 11:14 AM, Syed Ahmed sah...@cloudops.com wrote: Hi, I have been working on adding SSL offload functionality to CloudStack and making it work for NetScaler. I have an initial design documented at https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSL+Offloading+Support and I would really love your feedback. The bug for this is https://issues.apache.org/jira/browse/CLOUDSTACK-4821 . Thanks, -Syed
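Darren's suggestion to store the server certificate and its chain separately can be sketched with plain string handling: split a PEM bundle on certificate boundaries, treating the first certificate as the leaf and the rest as the chain. This is a hedged illustration only - a toy splitter, not a real X.509 parser, and the ordering assumption (leaf first) is a convention, not something the parser verifies.

```python
# Sketch: split a PEM bundle into (leaf, chain). Assumes the leaf
# certificate appears first in the bundle, which is the common PEM
# convention but is NOT validated here.
def split_pem_bundle(bundle):
    marker = "-----BEGIN CERTIFICATE-----"
    parts = [marker + p for p in bundle.split(marker) if p.strip()]
    leaf, chain = parts[0], parts[1:]
    return leaf, chain

bundle = (
    "-----BEGIN CERTIFICATE-----\nleaf-data\n-----END CERTIFICATE-----\n"
    "-----BEGIN CERTIFICATE-----\nintermediate-data\n-----END CERTIFICATE-----\n"
)
leaf, chain = split_pem_bundle(bundle)
```

Storing leaf and chain separately, as suggested, avoids exactly this kind of parsing on the backend side: each device gets the pieces it needs, and devices that want a combined bundle can simply concatenate them.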
Typo in KVM docs
Hi, I see the KVM install guide says: tcp_port = 16059 I'm wondering if this is correct or if it should be 16509, which is what is in /etc/libvirt/libvirtd.conf by default. Thanks, -- *Mike Tutkowski* *Senior CloudStack Developer, SolidFire Inc.* e: mike.tutkow...@solidfire.com o: 303.746.7302 Advancing the way the world uses the cloudhttp://solidfire.com/solution/overview/?video=play *™*
Re: Typo in KVM docs
I have some other KVM docs that I've been updating as I do my development work, so I should be able to modify this as well. Thanks! On Tue, Oct 8, 2013 at 12:51 PM, Travis Graham tgra...@tgraham.us wrote: Yep, that's a typo. Should be 16059 like libvirtd.conf has by default. If you'll open a Jira for it I'll submit a patch for the docs. Travis On Oct 8, 2013, at 2:44 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Hi, I see the KVM install guide says: tcp_port = 16059 I'm wondering if this is correct or if it should be 16509, which is what is in /etc/libvirt/libvirtd.conf by default. Thanks, -- *Mike Tutkowski* *Senior CloudStack Developer, SolidFire Inc.* e: mike.tutkow...@solidfire.com o: 303.746.7302 Advancing the way the world uses the cloud http://solidfire.com/solution/overview/?video=play *™*
Re: Typo in KVM docs
On Tue, Oct 08, 2013 at 12:44:43PM -0600, Mike Tutkowski wrote: Hi, I see the KVM install guide says: tcp_port = 16059 I'm wondering if this is correct or if it should be 16509, which is what is in /etc/libvirt/libvirtd.conf by default. Which doc version? I found these bug reports, all showing as closed: https://issues.apache.org/jira/browse/CLOUDSTACK-2094 https://issues.apache.org/jira/browse/CLOUDSTACK-1193 https://issues.apache.org/jira/browse/CLOUDSTACK-990
Re: Typo in KVM docs
Careful which branch you are working on, Mike. I think that David's plan is that we are baselined on 4.2 in the new docs repo, and he was going to then pull from 4.2 into master (again, in the new repo). On Tue, Oct 8, 2013 at 2:54 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: I have some other KVM docs that I've been updating as I do my development work, so I should be able to modify this as well. Thanks! On Tue, Oct 8, 2013 at 12:51 PM, Travis Graham tgra...@tgraham.us wrote: Yep, that's a typo. Should be 16059 like libvirtd.conf has by default. If you'll open a Jira for it I'll submit a patch for the docs. Travis On Oct 8, 2013, at 2:44 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Hi, I see the KVM install guide says: tcp_port = 16059 I'm wondering if this is correct or if it should be 16509, which is what is in /etc/libvirt/libvirtd.conf by default. Thanks, -- *Mike Tutkowski* *Senior CloudStack Developer, SolidFire Inc.* e: mike.tutkow...@solidfire.com o: 303.746.7302 Advancing the way the world uses the cloud http://solidfire.com/solution/overview/?video=play *™*
Re: Typo in KVM docs
I was actually looking at what's on the web for 4.2 (even though I'm developing on master). When I went to find this issue in 4.3, it appears the problem has been corrected. On Tue, Oct 8, 2013 at 12:56 PM, Chip Childers chip.child...@sungard.com wrote: Careful which branch you are working on, Mike. I think that David's plan is that we are baselined on 4.2 in the new docs repo, and he was going to then pull from 4.2 into master (again, in the new repo). On Tue, Oct 8, 2013 at 2:54 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: I have some other KVM docs that I've been updating as I do my development work, so I should be able to modify this as well. Thanks! On Tue, Oct 8, 2013 at 12:51 PM, Travis Graham tgra...@tgraham.us wrote: Yep, that's a typo. Should be 16059 like libvirtd.conf has by default. If you'll open a Jira for it I'll submit a patch for the docs. Travis On Oct 8, 2013, at 2:44 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Hi, I see the KVM install guide says: tcp_port = 16059 I'm wondering if this is correct or if it should be 16509, which is what is in /etc/libvirt/libvirtd.conf by default. Thanks, -- *Mike Tutkowski* *Senior CloudStack Developer, SolidFire Inc.* e: mike.tutkow...@solidfire.com o: 303.746.7302 Advancing the way the world uses the cloud http://solidfire.com/solution/overview/?video=play *™*
VM Backup plug-in framework
Hi, Can anyone tell me if there are plans to implement a more comprehensive mechanism to back up data within guest VMs, or to provide a framework for backup vendors to plug into? This is needed because: 1. The native volume snapshot functionality is slow and does not provide the required feature set (incremental forever, dedup, file level indexing, tape support, etc.) 2. Integrating 3rd party tools is painful and can only be done at the guest/hypervisor layers, which is outside of CCP. Restoring entire VMs is problematic and billing even more so. What would be ideal is a plug-in framework that lets backup vendors integrate their software with CS, so that CS-aware backup and restore can be scheduled and executed by end users. I'm hoping we are not the only people out there who are struggling with this! Many thanks, Simon Murphy Solutions Architect ViFX | Cloud Infrastructure Level 7, 57 Fort Street, Auckland, New Zealand 1010 PO Box 106700, Auckland, New Zealand 1143 M +64 21 285 4519 | S simon_a_murphy www.vifx.co.nz follow us on twitter https://twitter.com/ViFX Auckland | Wellington | Christchurch experience. expertise. execution. This email and any files transmitted with it are confidential, without prejudice and may contain information that is subject to legal privilege. It is intended solely for the use of the individual/s to whom it is addressed in accordance with the provisions of the Privacy Act (1993). The content contained in this email does not, necessarily, reflect the official policy position of ViFX nor does ViFX have any responsibility for any alterations to the contents of this email that may occur following transmission. If you are not the addressee it may be unlawful for you to read, copy, distribute, disclose or otherwise use the information contained within this email. If you are not the intended recipient, please notify the sender prior to deleting this email message from your system.
Re: [DISCUSS] Pluggable VM snapshot related operations?
-- Chris Suich chris.su...@netapp.com NetApp Software Engineer Data Center Platforms – Cloud Solutions Citrix, Cisco Red Hat On Oct 8, 2013, at 2:24 PM, Darren Shepherd darren.s.sheph...@gmail.com wrote: So in the implementation, when we say quiesce, is that actually implemented as a VM snapshot (memory and disk)? And then when you say unquiesce, are you talking about deleting the VM snapshot? If the VM snapshot is not going to the hypervisor, then yes, it will actually be a hypervisor snapshot. Just to be clear, the unquiesce is not quite a delete - it is a collapse of the VM snapshot and the active VM back into one file. In NetApp, what are you snapshotting? The whole NetApp volume (I don't know the correct term), a file on NFS, an iSCSI volume? I don't know a whole heck of a lot about the NetApp snapshot capabilities. Essentially we are using internal APIs to create file level backups - don't worry too much about the terminology. I know storage solutions can snapshot better and faster than hypervisors can with COW files. I've personally just always been perplexed about what's the best way to implement it. For storage solutions that are block based, it's really easy to have the storage do the snapshot. For shared file systems, like NFS, it seems way more complicated, as you don't want to snapshot the entire filesystem in order to snapshot one file. With filesystems like NFS, things are certainly more complicated, but that is taken care of by our controller's operating system, Data ONTAP, and we simply use APIs to communicate with it. Darren On Tue, Oct 8, 2013 at 11:10 AM, SuichII, Christopher chris.su...@netapp.com wrote: I can comment on the second half. Through storage operations, storage providers can create backups much faster than hypervisors, and over time their snapshots are more efficient than the snapshot chains that hypervisors create.
It is true that a VM snapshot taken at the storage level is slightly different, as it would be pseudo-quiesced and not have its memory snapshotted. This is accomplished through hypervisor snapshots: 1) VM snapshot request (let's say VM 'A') 2) Create hypervisor snapshot (optional) - VM 'A' is snapshotted, creating active VM 'A*' - All disk traffic now goes to VM 'A*' and 'A' is a snapshot of 'A*' 3) Storage driver(s) take snapshots of each volume 4) Undo hypervisor snapshot (optional) - VM snapshot 'A' is rolled back into VM 'A*' so the hypervisor snapshot no longer exists Now, a couple of notes: - The reason this is optional is that not all users necessarily care about the memory or disk consistency of their VMs and would prefer faster snapshots to consistency. - Preemptively, yes, we are actually taking hypervisor snapshots, which means there isn't actually a performance gain from taking storage snapshots when quiescing the VM. However, the performance gain will come both during restoring the VM and during normal operations, as described above. Although you can think of it as a poor man's VM snapshot, I would think of it more as a consistent multi-volume snapshot. Again, the difference being that this snapshot was not truly quiesced like a hypervisor snapshot would be. -- Chris Suich chris.su...@netapp.com NetApp Software Engineer Data Center Platforms – Cloud Solutions Citrix, Cisco Red Hat On Oct 8, 2013, at 1:47 PM, Darren Shepherd darren.s.sheph...@gmail.com wrote: My only comment is that having the return type as boolean and using that to indicate quiesce behaviour seems obscure and will probably lead to a problem later. You're basically saying the result of takeVMSnapshot will only ever need to communicate back whether unquiesce needs to happen. Maybe some result object would be more extensible. Actually, I think I have more comments. This seems a bit odd to me. Why would a storage driver in ACS implement VM snapshot functionality?
VM snapshot is really a hypervisor-orchestrated operation. So it seems like we're trying to implement a poor man's VM snapshot. Maybe if I understood what NetApp was trying to do it would make more sense, but it's all odd. To do a proper VM snapshot you need to snapshot memory and disk at the exact same time. How are we going to do that if ACS is orchestrating the VM snapshot and delegating to storage providers? It's not like you are going to pause the VM, or are you? Darren On Mon, Oct 7, 2013 at 11:59 AM, Edison Su edison...@citrix.com wrote: I created a design document page at https://cwiki.apache.org/confluence/display/CLOUDSTACK/Pluggable+VM+snapshot+related+operations, feel free to add items on it. And a new branch pluggable_vm_snapshot is created. -Original Message- From: SuichII, Christopher [mailto:chris.su...@netapp.com] Sent: Monday, October 07, 2013 10:02 AM To: dev@cloudstack.apache.org Subject: Re: [DISCUSS] Pluggable VM snapshot related operations? I'm a fan
Re: Typo in KVM docs
Ok, now I am mixed up :P libvirtd.conf has 16509 by default (at least on CentOS). So is it 16509 or 16059? :P Francois On 10/8/2013, 2:58 PM, Mike Tutkowski wrote: I was actually looking at what's on the web for 4.2 (even though I'm developing on master). When I went to find this issue in 4.3, it appears the problem has been corrected. On Tue, Oct 8, 2013 at 12:56 PM, Chip Childers chip.child...@sungard.com wrote: Careful which branch you are working on, Mike. I think that David's plan is that we are baselined on 4.2 in the new docs repo, and he was going to then pull from 4.2 into master (again, in the new repo). On Tue, Oct 8, 2013 at 2:54 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: I have some other KVM docs that I've been updating as I do my development work, so I should be able to modify this as well. Thanks! On Tue, Oct 8, 2013 at 12:51 PM, Travis Graham tgra...@tgraham.us wrote: Yep, that's a typo. Should be 16059 like libvirtd.conf has by default. If you'll open a Jira for it I'll submit a patch for the docs. Travis On Oct 8, 2013, at 2:44 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Hi, I see the KVM install guide says: tcp_port = 16059 I'm wondering if this is correct or if it should be 16509, which is what is in /etc/libvirt/libvirtd.conf by default. Thanks, -- *Mike Tutkowski* *Senior CloudStack Developer, SolidFire Inc.* e: mike.tutkow...@solidfire.com o: 303.746.7302 Advancing the way the world uses the cloud http://solidfire.com/solution/overview/?video=play *™* -- Francois Gaudreault Architecte de Solution Cloud | Cloud Solutions Architect fgaudrea...@cloudops.com 514-629-6775 - - - CloudOps 420 rue Guy Montréal QC H3J 1S6 www.cloudops.com @CloudOps_
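To settle the back-and-forth above: libvirt's shipped libvirtd.conf uses tcp_port 16509, and 16059 is the transposed-digit typo in the install guide. A tiny parser makes the check mechanical; the sample config text below is invented for illustration, but the 16509 default matches what the thread itself reports from /etc/libvirt/libvirtd.conf.

```python
# Sketch: read tcp_port out of a libvirtd.conf-style fragment so the
# documented value can be checked against the shipped default (16509).
def read_tcp_port(conf_text):
    for line in conf_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if line.startswith("tcp_port"):
            return line.split("=", 1)[1].strip().strip('"')
    return None

sample = 'listen_tls = 0\nlisten_tcp = 1\ntcp_port = "16509"  # libvirt default\n'
port = read_tcp_port(sample)
```

Running the same function over the doc's claimed value ("16059") versus the shipped conf makes the mismatch obvious at a glance.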
Re: Review Request 14522: [CLOUDSTACK-4771] Support Revert VM Disk from Snapshot
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14522/#review26786 --- ui/scripts/storage.js https://reviews.apache.org/r/14522/#comment52129 The UI change here - is there a way to disable it from the UI if the storage provider is not NetApp? Or move the UI change into your plugin? - edison su On Oct. 7, 2013, 8:26 p.m., Chris Suich wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14522/ --- (Updated Oct. 7, 2013, 8:26 p.m.) Review request for cloudstack, Brian Federle and edison su. Repository: cloudstack-git Description --- After the last batch of work to the revertSnapshot API, SnapshotServiceImpl was not tied into the workflow to be used by storage providers. I have added the logic in a similar fashion to takeSnapshot(), backupSnapshot() and deleteSnapshot(). I have also added a 'Revert to Snapshot' action to the volume snapshots list in the UI. Diffs - api/src/org/apache/cloudstack/api/command/user/snapshot/RevertSnapshotCmd.java 946eebd client/WEB-INF/classes/resources/messages.properties f92b85a client/tomcatconf/commands.properties.in 58c770d engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/SnapshotServiceImpl.java c09adca server/src/com/cloud/server/ManagementServerImpl.java 0a0fcdc server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java 0b53cfd ui/dictionary.jsp f93f9dc ui/scripts/storage.js 88fb9f2 Diff: https://reviews.apache.org/r/14522/diff/ Testing --- I have tested all of this locally with a custom storage provider. Unfortunately, I'm still in the middle of figuring out how to properly unit test this type of code. If anyone has any recommendations, please let me know. Thanks, Chris Suich
Re: [New Feature FS] SSL Offload Support for Cloudstack
A question about implementation. I was looking at other commands, and the execute() method for each of the other commands seems to call a service (_lbservice, for example) which takes care of updating the DB and calling the resource layer. Should the certificate management be implemented as a service, or is there something else that I can use? An example would be immensely helpful. Thanks -Syed On Tue 08 Oct 2013 03:22:14 PM EDT, Syed Ahmed wrote: Thanks for the feedback guys. Really appreciate it. 1) Changing the name to SSL Termination. I don't have a problem with that. I was looking at Netscaler all the time and they call it SSL offloading, but I agree that termination is a more general term. I have changed the name. The new page is at https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSL+Termination+Support 2) Specify the protocol type. Currently the protocol type of a load balancer gets set by checking the source and destination port (see getNetScalerProtocol() in NetscalerResource.java). So we should change that and add another optional field in createLoadBalancerRule for the protocol. 3) Certificate chain as a separate parameter. Again, I was looking at Netscaler as an example, but separating the chain and certificate makes sense. I have updated the document accordingly. I was assuming that the certificate parsing/validation would be done by the device and we would just pass the certificate data as-is. But if we are adding chains separately, we should have the ability to parse and combine the chain and certificate for some devices, as you mentioned. Thanks -Syed On Tue 08 Oct 2013 02:49:52 PM EDT, Chip Childers wrote: On Tue, Oct 08, 2013 at 11:41:42AM -0700, Darren Shepherd wrote: Technicality here, can we call the functionality SSL termination? While technically we are offloading SSL from the VM, offloading typically carries a connotation that it's being done in hardware. So we are really talking about SSL termination. +1 - completely agree.
There's certainly the possibility of an *implementation* being true offloading, but I'd generalize to termination to account for a non-hardware offload of the crypto processing. Couple comments. I wouldn't want to assume anything about SSL based on port numbers. So instead, specify the protocol (http/https/ssl/tcp) for the front and back side of the load balancer. Additionally, I'd prefer the chain not be in the cert. When configuring some backends you need the cert and chain separate. It would be easier if they were stored that way. Otherwise you have to parse all the certs in the keystore and look for the one that matches the key. Also +1 to this. Cert chains may be optional, certainly, but should actually be separate from the actual cert in the configuration. The implementation may need to combine them into one document, but that's implementation specific. Otherwise, awesome feature. I'll tell you, from an impl perspective, parsing and validating the SSL certs is a pain. I can probably find some Java code to help out here, as I've done this before. Yes, this is a sorely needed feature. I'm happy to see this added to the Netscaler plugin, and await a time when HAProxy has a stable release that includes SSL termination. Darren On Tue, Oct 8, 2013 at 11:14 AM, Syed Ahmed sah...@cloudops.com wrote: Hi, I have been working on adding SSL offload functionality to CloudStack and making it work for Netscaler. I have an initial design documented at https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSL+Offloading+Support and I would really love your feedback. The bug for this is https://issues.apache.org/jira/browse/CLOUDSTACK-4821 . Thanks, -Syed
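The "don't guess from the port" suggestion can be sketched as follows. This is a hypothetical illustration, not CloudStack code: the names (LbProtocolResolver, LbProtocol) are assumptions. The point is that an explicit, optional protocol parameter takes precedence, with the legacy port heuristic kept only as a fallback.

```java
// Hypothetical sketch: the caller's explicit protocol wins; the port
// heuristic (the getNetScalerProtocol-style guess) is only a fallback.
public class LbProtocolResolver {
    public enum LbProtocol { HTTP, HTTPS, SSL, TCP }

    public static LbProtocol resolve(String requested, int frontendPort) {
        if (requested != null && !requested.isEmpty()) {
            return LbProtocol.valueOf(requested.toUpperCase());
        }
        // Legacy heuristic, kept for backwards compatibility only.
        switch (frontendPort) {
            case 80:  return LbProtocol.HTTP;
            case 443: return LbProtocol.HTTPS;
            default:  return LbProtocol.TCP;
        }
    }

    public static void main(String[] args) {
        // SSL termination on a non-standard port: the heuristic alone
        // would classify this as plain TCP.
        System.out.println(resolve("https", 8443));
        System.out.println(resolve(null, 80));
    }
}
```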
Re: [New Feature FS] SSL Offload Support for Cloudstack
The API should do input validation on the SSL cert, key and chain. Getting those three pieces of info right is usually difficult for most people, as they don't really know what those three things are. There's about an 80% chance most calls will fail. If you rely on the provider, it will probably just give back some general failure message that we won't be able to map back to the user as useful information. I would implement the cert management as a separate CertificateService. Darren On Tue, Oct 8, 2013 at 1:31 PM, Syed Ahmed syed1.mush...@gmail.com wrote: [...]
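Storing the chain separately from the leaf cert, as suggested above, implies being able to split a combined PEM bundle that a user might paste in. A minimal sketch of that splitting step (names are mine, not CloudStack's; this only separates the PEM blocks, it does not validate the base64 payloads):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: split a PEM bundle into individual certificate
// blocks, so the leaf cert and the chain can be stored separately.
public class PemSplitter {
    private static final Pattern CERT_BLOCK = Pattern.compile(
        "-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
        Pattern.DOTALL);

    /** Returns each certificate block in order (leaf first by convention). */
    public static List<String> split(String pemBundle) {
        List<String> certs = new ArrayList<>();
        Matcher m = CERT_BLOCK.matcher(pemBundle);
        while (m.find()) {
            certs.add(m.group());
        }
        return certs;
    }
}
```

Actual content validation (does the key match the leaf cert, does the chain link up) would still be needed on top of this, e.g. via java.security.cert.CertificateFactory.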
Re: [DISCUSS] Pluggable VM snapshot related operations?
Who is going to decide whether the hypervisor snapshot should actually happen or not? Or how? Darren On Tue, Oct 8, 2013 at 12:38 PM, SuichII, Christopher chris.su...@netapp.com wrote: -- Chris Suich chris.su...@netapp.com NetApp Software Engineer Data Center Platforms – Cloud Solutions Citrix, Cisco Red Hat On Oct 8, 2013, at 2:24 PM, Darren Shepherd darren.s.sheph...@gmail.com wrote: So in the implementation, when we say quiesce, is that actually being implemented as a VM snapshot (memory and disk)? And then when you say unquiesce, are you talking about deleting the VM snapshot? If the VM snapshot is not going to the hypervisor, then yes, it will actually be a hypervisor snapshot. Just to be clear, the unquiesce is not quite a delete - it is a collapse of the VM snapshot and the active VM back into one file. In NetApp, what are you snapshotting? The whole NetApp volume (I don't know the correct term), a file on NFS, an iSCSI volume? I don't know a whole heck of a lot about the NetApp snapshot capabilities. Essentially we are using internal APIs to create file-level backups - don't worry too much about the terminology. I know storage solutions can snapshot better and faster than hypervisors can with COW files. I've personally just always been perplexed about what's the best way to implement it. For storage solutions that are block based, it's really easy to have the storage do the snapshot. For shared file systems, like NFS, it seems way more complicated, as you don't want to snapshot the entire filesystem in order to snapshot one file. With filesystems like NFS, things are certainly more complicated, but that is taken care of by our controller's operating system, Data ONTAP, and we simply use APIs to communicate with it. Darren On Tue, Oct 8, 2013 at 11:10 AM, SuichII, Christopher chris.su...@netapp.com wrote: I can comment on the second half.
Through storage operations, storage providers can create backups much faster than hypervisors, and over time their snapshots are more efficient than the snapshot chains that hypervisors create. It is true that a VM snapshot taken at the storage level is slightly different, as it would be pseudo-quiesced and would not have its memory snapshotted. This is accomplished through hypervisor snapshots:

1) VM snapshot request (let's say VM 'A')
2) Create hypervisor snapshot (optional)
   - VM 'A' is snapshotted, creating active VM 'A*'
   - All disk traffic now goes to VM 'A*' and 'A' is a snapshot of 'A*'
3) Storage driver(s) take snapshots of each volume
4) Undo hypervisor snapshot (optional)
   - VM snapshot 'A' is rolled back into VM 'A*' so the hypervisor snapshot no longer exists

Now, a couple of notes:
- The reason this is optional is that not all users necessarily care about the memory or disk consistency of their VMs and would prefer faster snapshots to consistency.
- Preemptively, yes, we are actually taking hypervisor snapshots, which means there isn't actually a performance gain from taking storage snapshots when quiescing the VM. However, the performance gain will come both during restoring the VM and during normal operations, as described above.

Although you can think of it as a poor man's VM snapshot, I would think of it more as a consistent multi-volume snapshot. Again, the difference being that this snapshot was not truly quiesced like a hypervisor snapshot would be. -- Chris Suich chris.su...@netapp.com NetApp Software Engineer Data Center Platforms – Cloud Solutions Citrix, Cisco Red Hat On Oct 8, 2013, at 1:47 PM, Darren Shepherd darren.s.sheph...@gmail.com wrote: My only comment is that having the return type as boolean and using that to indicate quiesce behaviour seems obscure and will probably lead to a problem later. You're basically saying the result of takeVMSnapshot will only ever need to communicate back whether unquiesce needs to happen.
Maybe some result object would be more extensible. Actually, I think I have more comments. This seems a bit odd to me. Why would a storage driver in ACS implement VM snapshot functionality? A VM snapshot is really a hypervisor-orchestrated operation. So it seems like we're trying to implement a poor man's VM snapshot. Maybe if I understood what NetApp was trying to do it would make more sense, but it's all odd. To do a proper VM snapshot you need to snapshot memory and disk at the exact same time. How are we going to do that if ACS is orchestrating the VM snapshot and delegating to storage providers? It's not like you are going to pause the VM, or are you? Darren On Mon, Oct 7, 2013 at 11:59 AM, Edison Su edison...@citrix.com wrote: I created a design document page at https://cwiki.apache.org/confluence/display/CLOUDSTACK/Pluggable+VM+snapshot+related+operations, feel free to add items to it. And a new branch pluggable_vm_snapshot has been created. -Original Message- From: SuichII,
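The optional-quiesce flow described in this thread, combined with the "result object instead of a bare boolean" suggestion, can be sketched like this. All names (VmSnapshotFlow, TakeSnapshotResult) are hypothetical; a list records the call order purely for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the 4-step flow: optional hypervisor snapshot
// (quiesce), storage-driver snapshot, optional collapse (unquiesce).
public class VmSnapshotFlow {
    /** Result object instead of a boolean, so future fields (errors,
     *  timings) can be added without changing the method signature. */
    public static final class TakeSnapshotResult {
        final boolean needsUnquiesce;
        TakeSnapshotResult(boolean needsUnquiesce) { this.needsUnquiesce = needsUnquiesce; }
    }

    final List<String> steps = new ArrayList<>(); // records call order for the test

    TakeSnapshotResult takeVmSnapshot(boolean quiesce) {
        if (quiesce) {
            steps.add("hypervisor-snapshot");  // step 2 (optional)
        }
        steps.add("storage-driver-snapshot");  // step 3
        return new TakeSnapshotResult(quiesce); // tells the caller about step 4
    }

    void finishVmSnapshot(TakeSnapshotResult r) {
        if (r.needsUnquiesce) {
            steps.add("collapse-hypervisor-snapshot"); // step 4 (optional)
        }
    }
}
```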
Re: [New Feature FS] SSL Offload Support for Cloudstack
Thanks Darren for your reply. Do you happen to have any info on a library that I can use for certificate validation? Thanks, -Syed On Tue 08 Oct 2013 04:53:40 PM EDT, Darren Shepherd wrote: [...]
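Darren's "separate CertificateService" suggestion could take roughly this shape. This is a sketch under assumed names (InMemoryCertificateService, SslCert), not a CloudStack API: the point is that the service validates inputs up front so API callers get a specific error, rather than relying on the device to reject bad material.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical certificate service sketch: validate-then-store, with the
// chain kept as a separate optional field, per the discussion above.
public class InMemoryCertificateService {
    public static final class SslCert {
        final String cert, key, chain; // chain may be null (optional)
        SslCert(String cert, String key, String chain) {
            this.cert = cert; this.key = key; this.chain = chain;
        }
    }

    private final Map<String, SslCert> store = new ConcurrentHashMap<>();

    /** Reject obviously malformed input early with a useful message.
     *  (Real validation would parse via CertificateFactory and check
     *  that the key matches the cert.) */
    public String upload(String id, String cert, String key, String chain) {
        if (cert == null || !cert.contains("BEGIN CERTIFICATE"))
            throw new IllegalArgumentException("cert is not PEM-encoded");
        if (key == null || !key.contains("PRIVATE KEY"))
            throw new IllegalArgumentException("key is not a PEM private key");
        store.put(id, new SslCert(cert, key, chain));
        return id;
    }

    public SslCert get(String id) { return store.get(id); }
}
```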
Re: [DISCUSS] Pluggable VM snapshot related operations?
Whether the hypervisor snapshot happens depends on whether the 'quiesce' option is specified with the snapshot request. If a user doesn't care about the consistency of their backup, then the hypervisor snapshot/quiesce step can be skipped altogether. This, of course, is not the case if the default provider is being used, in which case a hypervisor snapshot is the only way of creating a backup, since it can't be offloaded to the storage driver. -- Chris Suich chris.su...@netapp.com NetApp Software Engineer Data Center Platforms – Cloud Solutions Citrix, Cisco Red Hat On Oct 8, 2013, at 4:57 PM, Darren Shepherd darren.s.sheph...@gmail.com wrote: [...]
Re: Latest Master DB issue
Hey, I halfway introduced this issue in a really long and roundabout way. I don't think there's a good simple fix unless we merge the spring-modularization branch. I'm going to look further into it. But here's the background of why we are seeing this. I introduced the Managed Context framework, which wraps all the background threads and manages the thread locals. This was the union of CallContext, ServerContext, and AsyncJob*Context into one simple framework. The problem with ACS, though, is that A LOT of background threads are spawned at all different random times during initialization. So what is happening is that during the initialization of some bean, it's kicking off a background thread that tries to access the database before the database upgrade has run. Now the CallContext has a strange suicidal behaviour (this was already there, I didn't change this): if it can't find account 1, it does a System.exit(1). So since this one background thread is failing, the whole JVM shuts down. Before, CallContext only existed on some threads, but with the addition of the Managed Context framework, it is now on almost all threads. Now in the spring-modularization branch there is a very strict and (mostly) deterministic initialization order. The database upgrade class will be initialized and run before any other bean in CloudStack is even initiated. So this works around all these DB problems. The current Spring setup in master is very, very fragile. As I said before, it is really difficult to ensure certain aspects are initialized before others, and since we moved to doing DB schema upgrades purely on startup of the mgmt server (which I don't really agree with), we now have to be extra careful about initialization order.
Darren On Tue, Oct 8, 2013 at 11:34 AM, Rayees Namathponnan rayees.namathpon...@citrix.com wrote: Here is the defect created for this issue: https://issues.apache.org/jira/browse/CLOUDSTACK-4825 Regards, Rayees -Original Message- From: Francois Gaudreault [mailto:fgaudrea...@cloudops.com] Sent: Tuesday, October 08, 2013 11:15 AM To: dev@cloudstack.apache.org Subject: Re: Latest Master DB issue I guess in my case it's fine. It was a fresh install... Francois On 10/8/2013, 2:08 PM, Darren Shepherd wrote: Deploy db from maven will drop all the tables. Not sure if this is a fresh install or not. For master, running mvn will be your best bet. Otherwise you can look at running com.cloud.upgrade.DatabaseCreator manually if you're adventurous. Darren On Tue, Oct 8, 2013 at 10:55 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Thanks Alena for the explanation. What is the better path to fix this on our setup? Should I wait for a fix in master or should I manually run the deploydb with mvn? I guess the second option won't work since I used RPM? Francois On 10/8/2013, 1:47 PM, Alena Prokharchyk wrote: Ok, this is what's going on - the DB upgrade procedure is different on a developer's setup and when deployed using cloudstack-setup-databases. On a developer's setup: 1) you deploy the code 2) Deploy the DB using 'mvn -P developer -pl developer -Ddeploydb'. As a part of this step, the DatabaseUpgradeChecker: * first deploys the base DB version - 4.0.0 * then checks the current version of the code, and performs the db upgrade if needed. So on master, the version table looks like this after the db is deployed:

mysql> select * from version;
+----+---------+---------------------+----------+
| id | version | updated             | step     |
+----+---------+---------------------+----------+
|  1 | 4.0.0   | 2013-10-08 10:34:47 | Complete |
|  2 | 4.1.0   | 2013-10-08 17:35:22 | Complete |
|  3 | 4.2.0   | 2013-10-08 17:35:22 | Complete |
|  4 | 4.3.0   | 2013-10-08 17:35:22 | Complete |
+----+---------+---------------------+----------+
4 rows in set (0.00 sec)

3) Start management server.
When deployed from rpm: 1) you deploy the code 2) run cloudstack-setup-databases. As the result of this step, the 4.0.0 base version of the DB is deployed. That's why you see only the 4.0.0 record in the DB. 3) Start management server. DatabaseUpgradeChecker is invoked as a part of it, and performs the db upgrade to the version of the current code. Only after that do all the managers get invoked and the system caller context get initialized. Looks like the load order for step 3) got broken recently, and the system context gets initialized before the db upgrade is finished. So we either need to fix the order, or invoke DatabaseUpgradeChecker as a part of cloudstack-setup-databases, so that at the point when the management server starts up, the DB is already upgraded to the latest version. -Alena. On 10/8/13 10:25 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Hmm... I just checked the DB version and it's 4.0??? It should be 4.3.0, no? mysql> select * from version; | id | version | updated | step
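The fix the thread converges on later (start background threads only from an explicit lifecycle method, never from a constructor or static initializer, so nothing touches the database before the upgrade runs) can be sketched as follows. The class name is illustrative, not the actual ACS code:

```java
import java.util.Timer;
import java.util.TimerTask;

// Hypothetical sketch: the Timer (and its background thread) is created
// only in start(), a lifecycle method called after DB upgrade and wiring
// finish - not at class-load time and not in the constructor.
public class ContextPool {
    private Timer timer; // null until start() runs: constructing spawns no thread

    public synchronized void start() {
        if (timer != null) return; // idempotent
        timer = new Timer("context-pool-cleanup", true /* daemon */);
        timer.schedule(new TimerTask() {
            @Override public void run() { /* recycle idle contexts here */ }
        }, 1000L, 1000L);
    }

    public synchronized boolean isStarted() { return timer != null; }

    public synchronized void stop() {
        if (timer != null) {
            timer.cancel();
            timer = null;
        }
    }
}
```

With Spring, start() would be annotated @PostConstruct (or hooked into ComponentLifecycle.start(), per the suggestion in this thread), guaranteeing the database upgrade bean has already run.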
Re: VM Backup plug-in framework
Hi Chris, Correct, I am talking about a way to orchestrate backing up and restoring an entire VM (or set of VMs), but using more efficient techniques at the back end to transfer data. Today we do the following: - When a CS account is created we create a corresponding vSphere folder, and we ensure that all VMs for that account are placed in the folder. Access is restricted to that customer. - We configure Veeam to back up all VMs in the vSphere folder using a schedule that we have agreed with the customer. CS has no knowledge that these backups are being taken. - Veeam uses vSphere integrated backups (VADP, CBT) and performs incremental-forever backups to our backup server - To restore, we give customers access to the Veeam Enterprise Manager and they can perform file-level or entire-VM restores - CS seems to reference the VM name and not the UID, so this does appear to work; however, it hasn't been tested at scale This is less than ideal for the following reasons: - VM placement is prone to user error until we can write a script that automates this - Veeam backups occur without the knowledge of CS - We can't integrate billing (we use CPBM) - There is a separate console for self-service restore - We need to implement the backup schedule on behalf of the customer, which can be time consuming What I feel is missing is an integrated way to provide 'enterprise grade' data protection for CS VMs. Our customers expect this, and some have long retention requirements (up to 7 years!), so the native snapshot function just isn't fit for purpose. It makes sense to me that CS would orchestrate the backup and restore operations, and hand off to a 3rd party system (Commvault, Veeam, SnapManager, etc.) for the actual data transfer and long-term storage Thanks!
Simon Murphy Solutions Architect ViFX | Cloud Infrastructure Level 7, 57 Fort Street, Auckland, New Zealand 1010 M +64 21 285 4519 www.vifx.co.nz On 9/10/13 10:02 AM, SuichII, Christopher chris.su...@netapp.com wrote: http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201309.mbox/%3C18A67ED7-9CB0-486A-BE80-E16152F33...%40netapp.com%3E
Re: Latest Master DB issue
Some more info about this. What specifically is happening is that the VmwareContextPool call is creating a Timer during the constructor of the class which is being constructed in a static block from VmwareContextFactory. So when the VmwareContextFactory class is loaded by the class loader, the background thread is created. Which is way, way before the Database upgrade happens. This will still be fixed if we merge the spring modularization, but this vmware code should change regardless. Background threads should only be launched from a @PostConstruct or ComponentLifecycle.start() method. They should not be started when a class is constructed or loaded. Darren On Tue, Oct 8, 2013 at 2:22 PM, Darren Shepherd darren.s.sheph...@gmail.com wrote: Hey, I half way introduced this issue in a really long and round about way. I don't think there's a good simple fix unless we merge the spring-modularization branch. I'm going to look further into it. But here's the background of why we are seeing this. I introduced Managed Context framework that will wrap all the background threads and manage the thread locals. This was the union of CallContext, ServerContext, and AsyncJob*Context into one simple framework. The problem with ACS though is that A LOT of background threads are spawned at all different random times of the initialization. So what is happening is that during the initialization of some bean its kicking off a background thread that tries to access the database before the database upgrade has ran. Now the CallContext has a strange suicidal behaviour (this was already there, I didn't change this), if it can't find account 1, it does a System.exit(1). So since this one background thread is failing, the whole JVM shuts down. Before CallContext only existed on some threads, but the addition of the Managed Context framework, it is now on almost all threads. Now in the spring-modularization branch there is a very strict and (mostly) deterministic initialization order. 
The database upgrade class will be initialized and run before any other bean in CloudStack is even initiated. So this works around all these DB problems. The current Spring setup in master is very, very fragile. As I said before, it is really difficult to ensure certain aspects are initialized before others, and since we moved to doing DB schema upgrades purely on startup of the mgmt server (which I don't really agree with), we now have to be extra careful about initialization order. Darren On Tue, Oct 8, 2013 at 11:34 AM, Rayees Namathponnan rayees.namathpon...@citrix.com wrote: Here is the defect created for this issue: https://issues.apache.org/jira/browse/CLOUDSTACK-4825 Regards, Rayees -Original Message- From: Francois Gaudreault [mailto:fgaudrea...@cloudops.com] Sent: Tuesday, October 08, 2013 11:15 AM To: dev@cloudstack.apache.org Subject: Re: Latest Master DB issue I guess in my case, it's fine. It was a fresh install... Francois On 10/8/2013, 2:08 PM, Darren Shepherd wrote: Deploy db from maven will drop all the tables. Not sure if this is a fresh install or not. For master, running mvn will be your best bet. Otherwise you can look at running com.cloud.upgrade.DatabaseCreator manually if you're adventurous. Darren On Tue, Oct 8, 2013 at 10:55 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Thanks Alena for the explanation. What is the better path to fix this on our setup? Should I wait for a fix in master or should I manually run the deploydb with mvn? I guess the second option won't work since I used RPM? Francois On 10/8/2013, 1:47 PM, Alena Prokharchyk wrote: Ok, this is what's going on - the DB upgrade procedure is different on a developer's setup and when deployed using cloudstack-setup-databases. On a developer's setup: 1) you deploy the code 2) Deploy the DB using 'mvn -P developer -pl developer -Ddeploydb'.
As a part of this step, the DataBaseUpgradeChecker: * first deploys the base DB version - 4.0.0 * then checks the current version of the code, and performs the db upgrade if needed. So on master, the version table looks like this after the db is deployed:

mysql> select * from version;
+----+---------+---------------------+----------+
| id | version | updated             | step     |
+----+---------+---------------------+----------+
|  1 | 4.0.0   | 2013-10-08 10:34:47 | Complete |
|  2 | 4.1.0   | 2013-10-08 17:35:22 | Complete |
|  3 | 4.2.0   | 2013-10-08 17:35:22 | Complete |
|  4 | 4.3.0   | 2013-10-08 17:35:22 | Complete |
+----+---------+---------------------+----------+
4 rows in set (0.00 sec)

3) Start management server. When deployed from RPM: 1) you deploy the code 2) run cloudstack-setup-databases. As a result of this step, the 4.0.0 base version of the DB is deployed. That's why you see only the 4.0.0 record in the DB. 3) Start management server. DataBaseUpgradeChecker is being invoked as a part of
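The class-loading hazard Darren describes at the top of this thread is easy to reproduce in isolation. Below is a minimal sketch (class names are illustrative, not the actual VMware code): a Timer scheduled from a static initializer fires as soon as the class loader touches the class, before any lifecycle start() or @PostConstruct method, and in ACS's case before the DB upgrade, has had a chance to run.

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;

public class StaticTimerDemo {
    static final CountDownLatch ran = new CountDownLatch(1);

    static class EagerPool {
        // Anti-pattern: background work launched from a static initializer.
        // It runs the moment the class loader initializes this class,
        // regardless of whether the application is "started" yet.
        static {
            new Timer(true).schedule(new TimerTask() {
                @Override
                public void run() {
                    ran.countDown();
                }
            }, 0);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Merely referencing the class triggers the background task --
        // no lifecycle start() was ever called.
        new EagerPool();
        ran.await();
        System.out.println("background task ran at class-load time");
    }
}
```

With the fix suggested in the thread, the Timer would instead be created inside a start()/@PostConstruct method, so the container decides when background work begins.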
Latest automation result on master
Here is the BVT automation result on KVM. You can see the result @ http://jenkins.buildacloud.org/view/cloudstack-qa/job/test-smoke-matrix/ We had a 100% pass rate in 4.2, but in latest master it's reduced to around 85%; observed the below issues with the latest master run: https://issues.apache.org/jira/browse/CLOUDSTACK-4835 https://issues.apache.org/jira/browse/CLOUDSTACK-4834 https://issues.apache.org/jira/browse/CLOUDSTACK-4833

Automation result on master:

Configuration Name                                      All  Failed  Passed  Defect
suite=test_volumes                                        2       2       0  CLOUDSTACK-4834
suite=test_vm_snapshots                                   3       0       0
suite=test_vm_life_cycle                                 10       1       0
suite=test_templates                                      8       1       1  CLOUDSTACK-4833
suite=test_ssvm                                          10       0       0
suite=test_service_offerings                              4       0       0
suite=test_scale_vm                                       1       0       0
suite=test_routers                                        9       0       0
suite=test_resource_detail                                1       1       0
suite=test_reset_vm_on_reboot                             1       0       0
suite=test_regions                                        1       0       0
suite=test_pvlan                                          1       0       0
suite=test_public_ip_range                                1       0       0
suite=test_privategw_acl                                  1       1       0
suite=test_portable_publicip                              2       0       0
suite=test_non_contigiousvlan                             1       0       0
suite=test_nic                                            1       0       0
suite=test_network_acl                                    1       0       0
suite=test_network                                        7       0       0
suite=test_multipleips_per_nic                            1       0       0
suite=test_loadbalance                                    3       0       0
suite=test_iso                                            2       1       0  CLOUDSTACK-4833
suite=test_internal_lb                                    1       0       0
suite=test_guest_vlan_range                               2       2       0
suite=test_global_settings                                2       2       0  CLOUDSTACK-4835
suite=test_disk_offerings                                 3       0       0
suite=test_deploy_vms_with_varied_deploymentplanners      3       0       0
suite=test_deploy_vm_with_userdata                        2       0       0
suite=test_deploy_vm                                      1       0       0
suite=test_affinity_groups                                1       0       0

Regards, Rayees
Re: JIRA - release 4.3 required in Affected and Fix version list
Done. On Tue, Oct 8, 2013 at 3:57 PM, Rayees Namathponnan rayees.namathpon...@citrix.com wrote: Hi Jira Admins, I want to create a defect against master, but there is no option to select 4.3 in the Affected and Fix Version lists; could someone with admin access please add this? Regards, Rayees
Re: Latest automation result on master
Thanks Rayees - this is helpful. --David On Tue, Oct 8, 2013 at 5:41 PM, Rayees Namathponnan rayees.namathpon...@citrix.com wrote: Here is the BVT automation result on KVM. You can see the result @ http://jenkins.buildacloud.org/view/cloudstack-qa/job/test-smoke-matrix/ [quoted automation results trimmed]
Re: [DISCUSS] Pluggable VM snapshot related operations?
A hypervisor snapshot will snapshot memory also. So determining whether to do the hypervisor snapshot from the quiesce option does not seem proper. Sorry for all the questions; I'm trying to get to the point of understanding if this functionality makes sense at this point of the code or if maybe there is a different approach. This is what I'm seeing; what if we state it this way: 1) VM snapshots, AFAIK, are not backed up today and exist solely on primary. What if we added a backup phase to VM snapshots that can be optionally supported by the storage providers to possibly back up the VM snapshot volumes. 2) Additionally, you want to be able to back up multiple disks at once, regardless of VM snapshot. Why don't we add the ability to put volumeIds in the snapshot cmd, so that if the storage provider supports it, it will get a batch of volumeIds. Now I know we talked about 2 and there were some concerns about it (mostly from me), but I think we could work through those concerns (forgot what they were...). Right now I just get the feeling we are shoehorning some functionality into VM snapshot that isn't quite the right fit. The no-quiesce flow just doesn't seem to make sense to me. Darren On Tue, Oct 8, 2013 at 2:05 PM, SuichII, Christopher chris.su...@netapp.com wrote: Whether the hypervisor snapshot happens depends on whether the 'quiesce' option is specified with the snapshot request. If a user doesn't care about the consistency of their backup, then the hypervisor snapshot/quiesce step can be skipped altogether. This of course is not the case if the default provider is being used, in which case a hypervisor snapshot is the only way of creating a backup, since it can't be offloaded to the storage driver. -- Chris Suich chris.su...@netapp.com NetApp Software Engineer Data Center Platforms – Cloud Solutions Citrix, Cisco Red Hat On Oct 8, 2013, at 4:57 PM, Darren Shepherd darren.s.sheph...@gmail.com wrote: Who is going to decide whether the hypervisor snapshot should actually happen or not?
Or how? Darren On Tue, Oct 8, 2013 at 12:38 PM, SuichII, Christopher chris.su...@netapp.com wrote: -- Chris Suich chris.su...@netapp.com NetApp Software Engineer Data Center Platforms – Cloud Solutions Citrix, Cisco Red Hat On Oct 8, 2013, at 2:24 PM, Darren Shepherd darren.s.sheph...@gmail.com wrote: So in the implementation, when we say quiesce is that actually being implemented as a VM snapshot (memory and disk). And then when you say unquiesce you are talking about deleting the VM snapshot? If the VM snapshot is not going to the hypervisor, then yes, it will actually be a hypervisor snapshot. Just to be clear, the unquiesce is not quite a delete - it is a collapse of the VM snapshot and the active VM back into one file. In NetApp, what are you snapshotting? The whole netapp volume (I don't know the correct term), a file on NFS, an iscsi volume? I don't know a whole heck of a lot about the netapp snapshot capabilities. Essentially we are using internal APIs to create file level backups - don't worry too much about the terminology. I know storage solutions can snapshot better and faster than hypervisors can with COW files. I've personally just been always perplexed on whats the best way to implement it. For storage solutions that are block based, its really easy to have the storage doing the snapshot. For shared file systems, like NFS, its seems way more complicated as you don't want to snapshot the entire filesystem in order to snapshot one file. With filesystems like NFS, things are certainly more complicated, but that is taken care of by our controller's operating system, Data ONTAP, and we simply use APIs to communicate with it. Darren On Tue, Oct 8, 2013 at 11:10 AM, SuichII, Christopher chris.su...@netapp.com wrote: I can comment on the second half. Through storage operations, storage providers can create backups much faster than hypervisors and over time, their snapshots are more efficient than the snapshot chains that hypervisors create. 
It is true that a VM snapshot taken at the storage level is slightly different, as it would be pseudo-quiesced and not have its memory snapshotted. This is accomplished through hypervisor snapshots:

1) VM snapshot request (let's say VM 'A')
2) Create hypervisor snapshot (optional)
   - VM 'A' is snapshotted, creating active VM 'A*'
   - All disk traffic now goes to VM 'A*' and 'A' is a snapshot of 'A*'
3) Storage driver(s) take snapshots of each volume
4) Undo hypervisor snapshot (optional)
   - VM snapshot 'A' is rolled back into VM 'A*' so the hypervisor snapshot no longer exists

Now, a couple notes:
- The reason this is optional is that not all users necessarily care about the memory or disk consistency of their VMs and would prefer faster snapshots to consistency.
- Preemptively, yes, we are actually taking hypervisor snapshots which means there isn't actually a performance of taking
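The optional quiesce/unquiesce flow in the four steps above can be sketched as follows. All interface and method names here are illustrative stand-ins for discussion, not the actual CloudStack storage SPI:

```java
import java.util.List;

public class SnapshotFlowDemo {
    // Hypothetical interfaces standing in for hypervisor and storage-driver calls.
    interface Hypervisor {
        void createDiskSnapshot(String vm); // step 2: disk traffic moves to A*
        void collapseSnapshot(String vm);   // step 4: fold snapshot A back into A*
    }

    interface StorageDriver {
        void snapshotVolume(String volume); // step 3: driver-side volume snapshot
    }

    static void takeVmSnapshot(Hypervisor h, StorageDriver s, String vm,
                               List<String> volumes, boolean quiesce) {
        if (quiesce) {
            h.createDiskSnapshot(vm);       // optional hypervisor snapshot
        }
        for (String v : volumes) {
            s.snapshotVolume(v);            // storage provider does the real work
        }
        if (quiesce) {
            h.collapseSnapshot(vm);         // optional "unquiesce" collapse
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Hypervisor h = new Hypervisor() {
            public void createDiskSnapshot(String vm) { log.append("quiesce "); }
            public void collapseSnapshot(String vm) { log.append("unquiesce"); }
        };
        StorageDriver s = v -> log.append("snap:").append(v).append(" ");
        takeVmSnapshot(h, s, "A", List.of("root", "data"), true);
        System.out.println(log);
    }
}
```

Passing quiesce=false skips steps 2 and 4, giving the faster but crash-consistent backup discussed in the thread.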
Re: Review Request 14381: KVM: add connect/disconnect capabilities to StorageAdaptors so that external storage services can attach/detach devices on-demand
So...got some good news: I spent a couple hours setting up a KVM environment on Ubuntu 12.04.1 from scratch (installing SSH, Open iSCSI, Java 7, KVM, Git, CloudStack, CloudStack DEBs, KVM system template, etc.) and I can now add this KVM host to CloudStack (on a related note, no errors in agent.err either). I have no idea what is messed up with my old KVM install on Ubuntu, but the new one works. That being the case, I can close out the JIRA ticket I logged a while back and start integrating your code into mine. On Mon, Oct 7, 2013 at 7:46 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Right...yeah, I didn't mean we'd commit to 4.2, but maybe I should work off of 4.2 since master seems to be unstable in this regard. I plan to set up a machine in the lab tomorrow with Ubuntu 12.04 from scratch to see if it works when I start clean, but - if it doesn't - I should just use 4.2 for development. On Mon, Oct 7, 2013 at 7:05 PM, Marcus Sorensen shadow...@gmail.com wrote: We can't. This patch will never see 4.2. You can still start working on your plugin on 4.2, but the change represented by this patch can only go into master. On Oct 7, 2013 5:01 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: So, now that I'm getting back to this, do you think I should just try to make this work with 4.2 (like we originally talked about)? I updated again from master, rebuilt, redeployed DEBs and still get this JNA error message: log4j:WARN No appenders could be found for logger (org.apache.commons.httpclient.params.DefaultHttpParams). log4j:WARN Please initialize the log4j system properly. log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
Caused by: java.lang.UnsatisfiedLinkError: Can't obtain updateLastError method for class com.sun.jna.Native
    at com.sun.jna.Native.initIDs(Native Method)
    at com.sun.jna.Native.<clinit>(Native.java:139)
    at org.libvirt.jna.Libvirt.<clinit>(Unknown Source)
    at org.libvirt.Library.<clinit>(Unknown Source)
    at org.libvirt.Connect.init(Unknown Source)
    at com.cloud.hypervisor.kvm.resource.LibvirtConnection.getConnection(LibvirtConnection.java:44)
    at com.cloud.hypervisor.kvm.resource.LibvirtConnection.getConnection(LibvirtConnection.java:37)
    at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:733)
    at com.cloud.agent.Agent.init(Agent.java:161)
    at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:415)
    at com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:370)
    at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:351)
    at com.cloud.agent.AgentShell.start(AgentShell.java:448)
    ... 5 more
Cannot start daemon
Service exit with a return value of 5

On Mon, Oct 7, 2013 at 2:31 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Sure, that's a good plan. I'll get to it. On Mon, Oct 7, 2013 at 2:29 PM, Marcus Sorensen shadow...@gmail.com wrote: I know you mentioned you might need some minor changes to it, as well as other minor changes just for master (attach volume switched to pool vs adapter or something). My hope was that you would be able to send an update that works for your plugin on master, I'll test against existing libvirtd storage and apply it.
On Oct 7, 2013 1:49 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14381/ This looks reasonable to me, Marcus. When do you think you might start the process of getting this into master? - Mike Tutkowski On September 30th, 2013, 5:14 p.m. UTC, Marcus Sorensen wrote: Review request for cloudstack, edison su and Mike Tutkowski. By Marcus Sorensen. *Updated Sept. 30, 2013, 5:14 p.m.* *Repository: * cloudstack-git Description With custom storage plugins comes the need to prep the KVM host prior to utilizing the disks. e.g. an iscsi initiator needs to log into the target and scan for the lun before it can be used on the host. This patch is an example I developed against 4.2, minor changes may be necessary to apply to master, but I want to share with others who are working on storage so they can ensure it works for them. Please tweak as you see fit. MigrateCommand: pass vmTO object so we can see which disks/storage pool types belong to the vm when migrating a VM. This facilitates being able to call disconnectPhysicalDisksViaVmSpec VirtualMachineManagerImpl: pass VirtualMachineTO when migrating so that we can see
Re: Review Request 14381: KVM: add connect/disconnect capabilities to StorageAdaptors so that external storage services can attach/detach devices on-demand
Although the host is added to KVM, I do see the following issues in the CS MS console (any thoughts on this?):

WARN [c.c.u.d.Merovingian2] (secstorage-1:ctx-c1c573ee) Was unable to find lock for the key template_spool_ref2 and thread id 2049868806
INFO [c.c.v.VirtualMachineManagerImpl] (secstorage-1:ctx-c1c573ee) Unable to contact resource.
com.cloud.exception.StorageUnavailableException: Resource [StoragePool:1] is unreachable: Unable to create Vol[1|vm=1|ROOT]:com.cloud.utils.exception.CloudRuntimeException: org.libvirt.LibvirtException: internal error Child process (/bin/mount 192.168.233.10:/mnt/secondary/template/tmpl/1/3 /mnt/334b3c4e-764b-362a-be2c-ebe8c490d0a9) status unexpected: exit status 32
    at org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.recreateVolume(VolumeOrchestrator.java:1027)
    at org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.prepare(VolumeOrchestrator.java:1069)
    at com.cloud.vm.VirtualMachineManagerImpl.advanceStart(VirtualMachineManagerImpl.java:830)
    at com.cloud.vm.VirtualMachineManagerImpl.advanceStart(VirtualMachineManagerImpl.java:649)
    at com.cloud.storage.secondary.SecondaryStorageManagerImpl.startSecStorageVm(SecondaryStorageManagerImpl.java:261)
    at com.cloud.storage.secondary.SecondaryStorageManagerImpl.allocCapacity(SecondaryStorageManagerImpl.java:693)
    at com.cloud.storage.secondary.SecondaryStorageManagerImpl.expandPool(SecondaryStorageManagerImpl.java:1265)
    at com.cloud.secstorage.PremiumSecondaryStorageManagerImpl.scanPool(PremiumSecondaryStorageManagerImpl.java:123)
    at com.cloud.secstorage.PremiumSecondaryStorageManagerImpl.scanPool(PremiumSecondaryStorageManagerImpl.java:50)
    at com.cloud.vm.SystemVmLoadScanner.loadScan(SystemVmLoadScanner.java:101)
    at com.cloud.vm.SystemVmLoadScanner.access$100(SystemVmLoadScanner.java:33)
    at com.cloud.vm.SystemVmLoadScanner$1.reallyRun(SystemVmLoadScanner.java:78)
    at com.cloud.vm.SystemVmLoadScanner$1.runInContext(SystemVmLoadScanner.java:71)
    at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
    at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
    at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
    at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
    at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)

On Tue, Oct 8, 2013 at 3:58 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: [earlier messages in this thread quoted verbatim; trimmed]
Re: [New Feature FS] SSL Offload Support for Cloudstack
BouncyCastle - it's already in ACS. Off-list I'll send you some sample code on how to validate this stuff. Darren On Tue, Oct 8, 2013 at 1:58 PM, Syed Ahmed sah...@cloudops.com wrote: Thanks Darren for your reply. Do you happen to have any info on a library that I can use for certificate validation? Thanks, -Syed On Tue 08 Oct 2013 04:53:40 PM EDT, Darren Shepherd wrote: The API should do input validation on the SSL cert, key and chain. Getting those three pieces of info right is usually difficult, as most people don't really know what those three things are. There's about an 80% chance most calls will fail. If you rely on the provider, it will probably just give back some general failure message that we won't be able to map back to the user as useful information. I would implement the cert management as a separate CertificateService. Darren On Tue, Oct 8, 2013 at 1:31 PM, Syed Ahmed syed1.mush...@gmail.com wrote: A question about implementation: I was looking at other commands, and the execute() method for each of the other commands seems to call a service (_lbservice, for example) which takes care of updating the DB and calling the resource layer. Should the certificate management be implemented as a service, or is there something else that I can use? An example would be immensely helpful. Thanks -Syed On Tue 08 Oct 2013 03:22:14 PM EDT, Syed Ahmed wrote: Thanks for the feedback guys. Really appreciate it. 1) Changing the name to SSL Termination. I don't have a problem with that. I was looking at NetScaler all the time and they call it SSL offloading, but I agree that termination is a more general term. I have changed the name. The new page is at https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSL+Termination+Support 2) Specify the protocol type. Currently the protocol type of a load balancer gets set by checking the source and destination port (see getNetScalerProtocol() in NetscalerResource.java).
So, we should change that and add another optional field in createLoadBalancerRule for the protocol. 3) Certificate chain as a separate parameter. Again, I was looking at NetScaler as an example, but separating the chain and certificate makes sense. I have updated the document accordingly. I was assuming that the certificate parsing/validation would be done by the device and we would just pass the certificate data as-is, but if we are adding chains separately, we should have the ability to parse and combine the chain and certificate for some devices, as you mentioned. Thanks -Syed On Tue 08 Oct 2013 02:49:52 PM EDT, Chip Childers wrote: On Tue, Oct 08, 2013 at 11:41:42AM -0700, Darren Shepherd wrote: Technicality here: can we call the functionality SSL termination? While technically we are offloading SSL from the VM, offloading typically carries a connotation that it's being done in hardware. So we are really talking about SSL termination. +1 - completely agree. There's certainly the possibility of an *implementation* being true offloading, but I'd generalize to termination to account for a non-hardware offload of the crypto processing. Couple comments. I wouldn't want to assume anything about SSL based on port numbers. So instead specify the protocol (http/https/ssl/tcp) for the front and back side of the load balancer. Additionally, I'd prefer the chain not be in the cert. When configuring some backends you need the cert and chain separate. It would be easier if they were stored that way. Otherwise you have to do the logic of parsing all the certs in the keystore and looking for the one that matches the key. Also +1 to this. Cert chains may be optional, certainly, but should actually be separate from the actual cert in the configuration. The implementation may need to combine them into one document, but that's implementation-specific. Otherwise, awesome feature. I'll tell you, from an impl perspective, parsing and validating SSL certs is a pain.
I can probably find some Java code to help out here, as I've done this before. Yes, this is a sorely needed feature. I'm happy to see this be added to the NetScaler plugin, and await a time when HAProxy has a stable release that includes SSL termination. Darren On Tue, Oct 8, 2013 at 11:14 AM, Syed Ahmed sah...@cloudops.com wrote: Hi, I have been working on adding SSL offload functionality to CloudStack and making it work for NetScaler. I have an initial design documented at https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSL+Offloading+Support and I would really love your feedback. The bug for this is https://issues.apache.org/jira/browse/CLOUDSTACK-4821 . Thanks, -Syed
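Since the proposal stores the certificate and its chain separately, the API layer would need to split (or at least recognize) a concatenated PEM bundle when a caller uploads one. A minimal sketch of just the splitting step; the class and method names are illustrative assumptions, and real validation should go through something like java.security.cert.CertificateFactory or BouncyCastle, as suggested in the thread:

```java
import java.util.ArrayList;
import java.util.List;

public class PemSplitter {
    static final String BEGIN = "-----BEGIN CERTIFICATE-----";
    static final String END = "-----END CERTIFICATE-----";

    /** Split a concatenated PEM bundle into individual PEM blocks. */
    static List<String> splitPem(String bundle) {
        List<String> blocks = new ArrayList<>();
        int from = 0;
        while (true) {
            int b = bundle.indexOf(BEGIN, from);
            if (b < 0) break;
            int e = bundle.indexOf(END, b);
            if (e < 0) break; // malformed trailing block; ignore it
            blocks.add(bundle.substring(b, e + END.length()));
            from = e + END.length();
        }
        return blocks;
    }

    public static void main(String[] args) {
        // Dummy PEM-shaped input (not real certificate data).
        String bundle = BEGIN + "\nAAAA\n" + END + "\n"
                      + BEGIN + "\nBBBB\n" + END + "\n";
        List<String> blocks = splitPem(bundle);
        // Per the proposal: first block = server cert, remaining blocks = chain,
        // stored as separate parameters rather than one combined document.
        System.out.println("blocks=" + blocks.size());
    }
}
```

The inverse direction (combining cert and chain into one document for backends that want them together) is then a simple concatenation, which matches Chip's point that combining is implementation-specific.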
Re: [MERGE] spring-modularization to master - Spring Modularization
So what's the verdict? What would it take for everyone to feel warm and fuzzy about this merge, given the troubled past this community has had with Spring? I'm not saying the code is perfect, but so far it's not terribly bad :) Darren On Tue, Oct 8, 2013 at 11:10 AM, Chiradeep Vittal chiradeep.vit...@citrix.com wrote: I'm not getting any notifications of BVT test failures. Where do I subscribe? On 10/8/13 10:20 AM, Darren Shepherd darren.s.sheph...@gmail.com wrote: From what I can gather, it seems that master currently fails the BVT (and note, when I say BVT, I mean that black box that apparently exists somewhere doing something, but I have no clue what it really means). So in turn my spring modularization branch will additionally fail BVT. Citrix internal QA ran some tests against my branch and they mostly passed, but some failed. It's quite difficult to sort through this all because tests are failing on master. So I don't know what to do at this point. At least my branch won't completely blow up everything. I just know the longer it takes to merge this, the more painful it will be. Honestly, this is all quite frustrating for myself, being new to contributing to ACS. I feel somewhat lost in the whole process of how to get features in. I'll refrain from venting my frustrations. Darren
RE: Latest automation result on master
Correction: this result is on Xen, not KVM. KVM automation stopped in http://jenkins.buildacloud.org due to a KVM blocker; I will publish the result once it's available. Regards, Rayees From: Rayees Namathponnan Sent: Tuesday, October 08, 2013 2:41 PM To: dev@cloudstack.apache.org Subject: Latest automation result on master Here is the BVT automation result on KVM. You can see the result @ http://jenkins.buildacloud.org/view/cloudstack-qa/job/test-smoke-matrix/ [quoted automation results trimmed]
[DISCUSS] make commands.properties the exception, not the rule
I would like to largely remove commands.properties. I think most API commands naturally have a default ACL that should be applied, so it makes sense to add flags to @APICommand for user, domain, and admin. Then, as an override mechanism, people can edit commands.properties to change the default ACL. This would make it such that people could add new commands without the need to edit commands.properties. Thoughts? How will this play with whatever is being done with RBAC? Darren
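A hedged sketch of what the proposal might look like. The `authorized` element and the role names below are assumptions for illustration only, not the existing @APICommand annotation; the point is just that the annotation carries a default ACL that a properties file could later override:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AclDemo {
    enum RoleType { User, DomainAdmin, Admin }

    // Hypothetical extension of @APICommand: a default ACL baked into the
    // annotation, so commands.properties becomes the exception, not the rule.
    @Retention(RetentionPolicy.RUNTIME)
    @interface APICommand {
        String name();
        RoleType[] authorized() default { RoleType.Admin };
    }

    @APICommand(name = "listVirtualMachines",
                authorized = { RoleType.User, RoleType.DomainAdmin, RoleType.Admin })
    static class ListVMsCmd {}

    public static void main(String[] args) {
        // The dispatcher would read the default ACL via reflection and only
        // consult commands.properties when an operator overrides it.
        APICommand a = ListVMsCmd.class.getAnnotation(APICommand.class);
        System.out.println(a.name() + " roles=" + a.authorized().length);
    }
}
```

New commands would then ship with a sensible default ACL and require no edit to commands.properties at all.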
Re: Review Request 14381: KVM: add connect/disconnect capabilities to StorageAdaptors so that external storage services can attach/detach devices on-demand
Can you mount the secondary storage from your KVM host? On Oct 8, 2013 4:01 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Although the host is added to KVM, I do see the following issues in the CS MS console (any thoughts on this?): WARN [c.c.u.d.Merovingian2] (secstorage-1:ctx-c1c573ee) Was unable to find lock for the key template_spool_ref2 and thread id 2049868806 INFO [c.c.v.VirtualMachineManagerImpl] (secstorage-1:ctx-c1c573ee) Unable to contact resource. com.cloud.exception.StorageUnavailableException: Resource [StoragePool:1] is unreachable: Unable to create Vol[1|vm=1|ROOT]:com.cloud.utils.exception.CloudRuntimeException: org.libvirt.LibvirtException: internal error Child process (/bin/mount 192.168.233.10:/mnt/secondary/template/tmpl/1/3 /mnt/334b3c4e-764b-362a-be2c-ebe8c490d0a9) status unexpected: exit status 32 at org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.recreateVolume(VolumeOrchestrator.java:1027) at org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.prepare(VolumeOrchestrator.java:1069) at com.cloud.vm.VirtualMachineManagerImpl.advanceStart(VirtualMachineManagerImpl.java:830) at com.cloud.vm.VirtualMachineManagerImpl.advanceStart(VirtualMachineManagerImpl.java:649) at com.cloud.storage.secondary.SecondaryStorageManagerImpl.startSecStorageVm(SecondaryStorageManagerImpl.java:261) at com.cloud.storage.secondary.SecondaryStorageManagerImpl.allocCapacity(SecondaryStorageManagerImpl.java:693) at com.cloud.storage.secondary.SecondaryStorageManagerImpl.expandPool(SecondaryStorageManagerImpl.java:1265) at com.cloud.secstorage.PremiumSecondaryStorageManagerImpl.scanPool(PremiumSecondaryStorageManagerImpl.java:123) at com.cloud.secstorage.PremiumSecondaryStorageManagerImpl.scanPool(PremiumSecondaryStorageManagerImpl.java:50) at com.cloud.vm.SystemVmLoadScanner.loadScan(SystemVmLoadScanner.java:101) at com.cloud.vm.SystemVmLoadScanner.access$100(SystemVmLoadScanner.java:33) at 
com.cloud.vm.SystemVmLoadScanner$1.reallyRun(SystemVmLoadScanner.java:78) at com.cloud.vm.SystemVmLoadScanner$1.runInContext(SystemVmLoadScanner.java:71) at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49) at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56) at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103) at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53) at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:724) On Tue, Oct 8, 2013 at 3:58 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: So...got some good news: I spent a couple hours setting up a KVM environment on Ubuntu 12.04.1 from scratch (Installing SSH, Open iSCSI, Java 7, KVM, Git, CloudStack, CloudStack DEBs, KVM system template, etc.) and I can now add this KVM host to CloudStack (on a related note, no errors in agent.err either). I have no idea what is messed up with my old KVM install on Ubuntu, but the new one works. That being the case, I can close out the JIRA ticket I logged a while back and start integrating your code into mine. 
On Mon, Oct 7, 2013 at 7:46 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Right...yeah, I didn't mean we'd commit to 4.2, but maybe I should work off of 4.2 since master seems to be unstable in this regard. I plan to set up a machine in the lab tomorrow with Ubuntu 12.04 from scratch to see if it works when I start clean, but - if it doesn't - I should just use 4.2 for development. On Mon, Oct 7, 2013 at 7:05 PM, Marcus Sorensen shadow...@gmail.com wrote: We can't. This patch will never see 4.2. You can still start working on your plugin on 4.2, but the change represented by this patch can only go into master. On Oct 7, 2013 5:01 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: So, now that I'm getting back to this, do you think I should just try to make this work with 4.2 (like we originally talked about)? I updated again from master, rebuilt, redeployed DEBs and still get this JNA error message: log4j:WARN No appenders could be found for logger (org.apache.commons.httpclient.params.DefaultHttpParams).
RE: [DISCUSS] Pluggable VM snapshot related operations?
-Original Message- From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com] Sent: Tuesday, October 08, 2013 2:54 PM To: dev@cloudstack.apache.org Subject: Re: [DISCUSS] Pluggable VM snapshot related operations?

A hypervisor snapshot will snapshot memory also. So determining whether

The memory is optional for a hypervisor VM snapshot, a.k.a. the disk-only snapshot: http://support.citrix.com/proddocs/topic/xencenter-61/xs-xc-vms-snapshots-about.html It's supported by xenserver/kvm/vmware.

to do the hypervisor snapshot from the quiesce option does not seem proper. Sorry for all the questions, I'm trying to get to the point of understanding whether this functionality makes sense at this point of the code or if maybe there is a different approach. This is what I'm seeing; what if we state it this way: 1) VM snapshots, AFAIK, are not backed up today and exist solely on primary. What if we added a backup phase to VM snapshots that can be optionally supported by the storage providers to possibly back up the VM snapshot volumes.

It's not about backing up the VM snapshot, it's about how to take the VM snapshot. Usually, take/revert VM snapshot is handled by the hypervisor itself, but in the NetApp (or other storage vendor) case, they want to change the default behavior of the hypervisor-based VM snapshot. Some examples:

1. Take hypervisor-based VM snapshots; on primary storage, the hypervisor will maintain the snapshot chain.
2. Take the VM snapshot through NetApp:
   a. First, quiesce the VM if the user specified it. There is no separate API to quiesce a VM on the hypervisor, so here we will take a VM snapshot through a hypervisor API call; the hypervisor will take a volume snapshot on each volume of the VM. Let's say, on the primary storage, the disk chain looks like:

          base-image
              |
              V
         Parent disk
           /      \
          V        V
   Current disk  snapshot-a

   b. From snapshot-a, find out its parent disk, then take a snapshot through NetApp.
   c. Un-quiesce the VM: here, go to the hypervisor and delete snapshot snapshot-a; the hypervisor should be able to consolidate the current disk and parent disk into one disk, thus from the hypervisor's point of view there is always, at most, only one snapshot for the VM.

For revert VM snapshot, as long as the VM is stopped, NetApp can revert the snapshot created on NetApp storage easily and efficiently. The benefit of this whole process, as Chris pointed out: if the snapshot chain is quite long, hypervisor-based VM snapshots will take a performance hit.

2) Additionally you want to be able to back up multiple disks at once, regardless of VM snapshot. Why don't we add the ability to put volumeIds in the snapshot cmd so that, if the storage provider supports it, it will get a batch of volumeIds. Now I know we talked about 2 and there were some concerns about it (mostly from me), but I think we could work through those concerns (forgot what they were...). Right now I just get the feeling we are shoehorning some functionality into VM snapshot that isn't quite the right fit. The no-quiesce flow just doesn't seem to make sense to me.

Not sure whether the above NetApp-proposed workflow makes sense to you or to anybody else. If this workflow is only specific to NetApp, then we don't need to enforce the whole process for everybody.

Darren

On Tue, Oct 8, 2013 at 2:05 PM, SuichII, Christopher chris.su...@netapp.com wrote: Whether the hypervisor snapshot happens depends on whether the 'quiesce' option is specified with the snapshot request. If a user doesn't care about the consistency of their backup, then the hypervisor snapshot/quiesce step can be skipped altogether. This of course is not the case if the default provider is being used, in which case a hypervisor snapshot is the only way of creating a backup since it can't be offloaded to the storage driver.
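The NetApp-style flow described above (quiesce via a hypervisor snapshot, array snapshot of the frozen parent disk, delete the hypervisor snapshot to consolidate) can be sketched as a standalone illustration. The interfaces and method names below are hypothetical, not CloudStack's actual hypervisor or storage-driver APIs:

```java
import java.util.ArrayList;
import java.util.List;

public class VmSnapshotFlow {
    // Hypothetical facades; names are illustrative, not CloudStack interfaces.
    interface Hypervisor {
        String takeDiskOnlySnapshot(String vmId);            // returns a snapshot id, e.g. "snapshot-a"
        void deleteSnapshot(String vmId, String snapshotId); // triggers chain consolidation
    }
    interface StorageDriver {
        void snapshotParentDisk(String vmId); // array-side snapshot of the frozen parent disk
    }

    static void takeVmSnapshot(Hypervisor hv, StorageDriver storage, String vmId, boolean quiesce) {
        // a) there is no separate quiesce API, so quiesce by taking a hypervisor
        //    snapshot, which freezes the parent disk
        String snap = quiesce ? hv.takeDiskOnlySnapshot(vmId) : null;
        // b) snapshot the parent disk on the array while it is frozen
        storage.snapshotParentDisk(vmId);
        // c) un-quiesce: delete the hypervisor snapshot so current disk and parent
        //    disk consolidate back into one, keeping the chain length at one
        if (snap != null) {
            hv.deleteSnapshot(vmId, snap);
        }
    }

    public static void main(String[] args) {
        List<String> calls = new ArrayList<>();
        Hypervisor hv = new Hypervisor() {
            public String takeDiskOnlySnapshot(String vmId) { calls.add("hv.take"); return "snapshot-a"; }
            public void deleteSnapshot(String vmId, String s) { calls.add("hv.delete:" + s); }
        };
        StorageDriver st = vmId -> calls.add("storage.snapshot");
        takeVmSnapshot(hv, st, "vm-1", true);
        System.out.println(String.join(",", calls));
    }
}
```

Running the sketch prints the call order, which is the point of the thread: the hypervisor snapshot brackets the array snapshot, so the hypervisor never accumulates more than one snapshot per VM.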
-- Chris Suich chris.su...@netapp.com NetApp Software Engineer Data Center Platforms - Cloud Solutions Citrix, Cisco, Red Hat On Oct 8, 2013, at 4:57 PM, Darren Shepherd darren.s.sheph...@gmail.com wrote: Who is going to decide whether the hypervisor snapshot should actually happen or not? Or how? Darren On Tue, Oct 8, 2013 at 12:38 PM, SuichII, Christopher chris.su...@netapp.com wrote: -- Chris Suich chris.su...@netapp.com NetApp Software Engineer Data Center Platforms - Cloud Solutions Citrix, Cisco, Red Hat On Oct 8, 2013, at 2:24 PM, Darren Shepherd darren.s.sheph...@gmail.com wrote: So in the implementation, when we say quiesce, is that actually being implemented as a VM snapshot (memory and disk)? And then when you say unquiesce, you are talking about deleting the VM snapshot? If the VM snapshot is not going to the hypervisor, then yes, it will actually be a hypervisor snapshot. Just to be clear, the unquiesce is not quite a delete - it is
RE: [New Feature FS] SSL Offload Support for Cloudstack
There is a command in ACS, UploadCustomCertificateCmd, which can receive SSL cert, key and chain as input. Maybe we can share some code? -Original Message- From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com] Sent: Tuesday, October 08, 2013 1:54 PM To: dev@cloudstack.apache.org Subject: Re: [New Feature FS] SSL Offload Support for Cloudstack The API should do input validation on the SSL cert, key and chain. Getting those three pieces of info right is usually difficult for most people, as they don't really know what those three things are. There's about an 80% chance most calls will fail. If you rely on the provider, it will probably just give back some general failure message that we won't be able to map back to the user as useful information. I would implement the cert management as a separate CertificateService. Darren On Tue, Oct 8, 2013 at 1:31 PM, Syed Ahmed syed1.mush...@gmail.com wrote: A question about implementation. I was looking at other commands, and the execute() method for each of the other commands seems to call a service ( _lbservice for example ) which takes care of updating the DB and calling the resource layer. Should the Certificate management be implemented as a service, or is there something else that I can use? An example would be immensely helpful. Thanks -Syed On Tue 08 Oct 2013 03:22:14 PM EDT, Syed Ahmed wrote: Thanks for the feedback guys. Really appreciate it. 1) Changing the name to SSL Termination. I don't have a problem with that. I was looking at Netscaler all the time and they call it SSL offloading. But I agree that termination is a more general term. I have changed the name. The new page is at https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSL+Termination+Support 2) Specify the protocol type. Currently the protocol type of a load balancer gets set by checking the source and destination port ( see getNetScalerProtocol() in NetscalerResouce.java ). So, we should change that and add another optional field in createLoadBalancerRule for the protocol. 3) Certificate chain as a separate parameter. Again, I was looking at Netscaler as an example, but separating the chain and certificate makes sense. I have updated the document accordingly. I was assuming that the certificate parsing/validation would be done by the device and we would just pass the certificate data as-is. But if we are adding chains separately, we should have the ability to parse and combine the chain and certificate for some devices, as you mentioned. Thanks -Syed On Tue 08 Oct 2013 02:49:52 PM EDT, Chip Childers wrote: On Tue, Oct 08, 2013 at 11:41:42AM -0700, Darren Shepherd wrote: Technicality here, can we call the functionality SSL termination? While technically we are offloading SSL from the VM, offloading typically carries a connotation that it's being done in hardware. So we are really talking about SSL termination. +1 - completely agree. There's certainly the possibility of an *implementation* being true offloading, but I'd generalize to termination to account for a non-hardware offload of the crypto processing. Couple comments. I wouldn't want to assume anything about SSL based on port numbers. So instead specify the protocol (http/https/ssl/tcp) for the front and back side of the load balancer. Additionally, I'd prefer the chain not be in the cert. When configuring some backends you need the cert and chain separate. It would be easier if they were stored that way. Otherwise you have to do the logic of parsing all the certs in the keystore and looking for the one that matches the key. Also +1 to this. Cert chains may be optional, certainly, but should actually be separate from the actual cert in the configuration. The implementation may need to combine them into one document, but that's implementation specific. Otherwise, awesome feature. I'll tell you, from an impl perspective, parsing and validating the SSL certs is a pain.
I can probably find some java code to help out here on this, as I've done this before in the past. Yes, this is a sorely needed feature. I'm happy to see this be added to the Netscaler plugin, and await a time when HA proxy has a stable release that includes SSL term. Darren On Tue, Oct 8, 2013 at 11:14 AM, Syed Ahmed sah...@cloudops.com wrote: Hi, I have been working on adding SSL offload functionality to cloudstack and making it work for Netscaler. I have an initial design documented at https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSL+Offloading+Support and I would really love your feedback. The bug for this is https://issues.apache.org/jira/browse/CLOUDSTACK-4821 . Thanks, -Syed
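Darren's point about validating the cert at the API layer, before anything reaches the device, can be sketched using only JDK classes. `isPemCertificate` below is a hypothetical helper for illustration, not an existing CloudStack method:

```java
import java.io.ByteArrayInputStream;
import java.security.cert.CertificateException;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.Base64;

public class CertInputValidation {
    // Hypothetical helper: returns true only if the PEM body parses as an
    // X.509 certificate, so the API can reject bad input with a specific
    // message instead of surfacing a vague provider failure later.
    static boolean isPemCertificate(String pem) {
        String body = pem.replace("-----BEGIN CERTIFICATE-----", "")
                         .replace("-----END CERTIFICATE-----", "")
                         .replaceAll("\\s", "");
        try {
            byte[] der = Base64.getDecoder().decode(body);
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate cert = (X509Certificate) cf.generateCertificate(new ByteArrayInputStream(der));
            return cert != null;
        } catch (IllegalArgumentException | CertificateException e) {
            // bad Base64 or not a certificate
            return false;
        }
    }

    public static void main(String[] args) {
        // With the chain as a separate parameter (as Darren and Chip suggest),
        // each element would get the same check, plus an ordering check:
        // chain[i].getIssuerX500Principal() should equal
        // chain[i+1].getSubjectX500Principal().
        System.out.println(isPemCertificate("-----BEGIN CERTIFICATE-----\nnot-a-cert\n-----END CERTIFICATE-----"));
    }
}
```

The example feeds in a malformed PEM body and prints `false`, showing the rejection path that would otherwise surface as an opaque device error.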
Re: [DISCUSS] make commands.properties the exception, not the rule
On 10/8/13 3:23 PM, Darren Shepherd darren.s.sheph...@gmail.com wrote: I would like to largely remove commands.properties. I think most API commands naturally have a default ACL that should be applied. I think it makes sense to add to the @APICommand flags for user, domain, admin. Then, as an override mechanism, people can edit commands.properties to change the default ACL. This would make it such that people could add new commands without the need to edit commands.properties. Thoughts? How will this play with whatever is being done with RBAC? Darren

Darren, what if an admin wants to disable a certain API command, how does he do it with this new approach?
RE: [DISCUSS] make commands.properties the exception, not the rule
I think commands.properties is not just providing ACL on the API - it also serves as a whitelist of APIs available on the deployment. It can be a one-step configuration option to disable certain functionality. Prachi -Original Message- From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com] Sent: Tuesday, October 08, 2013 3:24 PM To: dev@cloudstack.apache.org Subject: [DISCUSS] make commands.properties the exception, not the rule I would like to largely remove commands.properties. I think most API commands naturally have a default ACL that should be applied. I think it makes sense to add to the @APICommand flags for user, domain, admin. Then, as an override mechanism, people can edit commands.properties to change the default ACL. This would make it such that people could add new commands without the need to edit commands.properties. Thoughts? How will this play with whatever is being done with RBAC? Darren
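Darren's proposal, a default ACL baked into the annotation with commands.properties as the override, can be sketched as a self-contained illustration. The `authorized` flag and `aclFor` lookup below are hypothetical, not CloudStack's real @APICommand contract; an operator could still narrow or effectively disable a command (Prachi's whitelist concern) by supplying an override entry:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Properties;

public class AclDefaults {
    // Hypothetical role flags on the command annotation (Darren's proposal);
    // CloudStack's real @APICommand does not carry these.
    @Retention(RetentionPolicy.RUNTIME)
    @interface APICommand {
        String name();
        String[] authorized() default {"admin"}; // default ACL lives in code
    }

    @APICommand(name = "listVolumes", authorized = {"user", "domain", "admin"})
    static class ListVolumesCmd {}

    // A commands.properties entry, when present, overrides the annotation default.
    static String[] aclFor(Class<?> cmd, Properties overrides) {
        APICommand a = cmd.getAnnotation(APICommand.class);
        String override = overrides.getProperty(a.name());
        return override != null ? override.split(",") : a.authorized();
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        // no override: the annotation default applies
        System.out.println(String.join(",", aclFor(ListVolumesCmd.class, p)));
        // operator restricts the command to admins only via the properties file
        p.setProperty("listVolumes", "admin");
        System.out.println(String.join(",", aclFor(ListVolumesCmd.class, p)));
    }
}
```

New commands then ship with sensible defaults and need no commands.properties edit, while the file keeps its role as a deployment-level override.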
Re: Typo in KVM docs
That's what I get for not fact checking. Dagnabbit. On Tuesday, October 8, 2013, Francois Gaudreault wrote: Ok great, I wasn't sure since Travis kinda made the same typo as the docs ;P Thanks! Francois On 10/8/2013, 4:27 PM, Mike Tutkowski wrote: We decided it is 16509 (which is the default and was - at one point - written incorrectly in the documentation as 16059). On Tue, Oct 8, 2013 at 2:19 PM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Ok now I am mixed up :P libvirtd.conf has 16509 by default (at least on CentOS). So is it 16509 or 16059? :P Francois On 10/8/2013, 2:58 PM, Mike Tutkowski wrote: I was actually looking at what's on the web for 4.2 (even though I'm developing on master). When I went to find this issue in 4.3, it appears the problem has been corrected. On Tue, Oct 8, 2013 at 12:56 PM, Chip Childers chip.child...@sungard.com wrote: Careful which branch you are working on, Mike. I think that David's plan is that we are baselined on 4.2 in the new docs repo, and he was going to then pull from 4.2 into master (again, in the new repo). On Tue, Oct 8, 2013 at 2:54 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: I have some other KVM docs that I've been updating as I do my development work, so I should be able to modify this as well. Thanks! On Tue, Oct 8, 2013 at 12:51 PM, Travis Graham tgra...@tgraham.us wrote: Yep, that's a typo. Should be 16059 like libvirtd.conf has by default. If you'll open a Jira for it I'll submit a patch for the docs. Travis On Oct 8, 2013, at 2:44 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Hi, I see the KVM install guide says: tcp_port = 16059 I'm wondering if this is correct or if it should be 16509, which is what is in /etc/libvirt/libvirtd.conf by default.
Thanks, -- *Mike Tutkowski* *Senior CloudStack Developer, SolidFire Inc.* e: mike.tutkow...@solidfire.com o: 303.746.7302 Advancing the way the world uses the cloud http://solidfire.com/solution/overview/?video=play *™* -- Francois Gaudreault Architecte de Solution Cloud | Cloud Solutions Architect fgaudrea...@cloudops.com 514-629-6775 - - - CloudOps 420 rue Guy Montréal QC H3J 1S6 www.cloudops.com @CloudOps_
[DISCUSS] listAll and recursive parameters for BaseListDomainResourceCmd should have default value as TRUE
Hi there, In working with RBAC design, I am really puzzled by the two query parameters listAll and recursive for all BaseListDomainResourceCmd:

@Parameter(name = ApiConstants.LIST_ALL, type = CommandType.BOOLEAN, description = "If set to false, "
        + "list only resources belonging to the command's caller; if set to true - list resources that the caller is authorized to see. Default value is false")
private Boolean listAll;

@Parameter(name = ApiConstants.IS_RECURSIVE, type = CommandType.BOOLEAN, description = "defaults to false, "
        + "but if true, lists all resources from the parent specified by the domainId till leaves.")
private Boolean recursive;

IMHO, if a caller invokes a list API without passing any specific query parameter, he/she should see all resources that he/she is authorized to see. In CloudStack, we have implicit authorization rules as follows: 1. Root admin should be able to see all the resources under the Root domain. 2. Domain admin should be able to see all the resources under its own domain tree. 3. Normal user should only see the resources owned by him. 4. Project account should be able to see resources assigned to that project. Based on the current AccountManager.buildACLSearchParameters implementation, we are not observing the passed listAll and recursive values at all; it seems we always treat listAll=true and recursive=true. Thus, I am proposing that we change the default value of listAll and recursive to TRUE instead of the current FALSE. Any objections? Thanks -min
Re: [DISCUSS] listAll and recursive parameters for BaseListDomainResourceCmd should have default value as TRUE
On 10/8/13 4:28 PM, Min Chen min.c...@citrix.com wrote: Hi there, In working with RBAC design, I am really puzzled by the two query parameters listAll and recursive for all BaseListDomainResourceCmd:

@Parameter(name = ApiConstants.LIST_ALL, type = CommandType.BOOLEAN, description = "If set to false, "
        + "list only resources belonging to the command's caller; if set to true - list resources that the caller is authorized to see. Default value is false")
private Boolean listAll;

@Parameter(name = ApiConstants.IS_RECURSIVE, type = CommandType.BOOLEAN, description = "defaults to false, "
        + "but if true, lists all resources from the parent specified by the domainId till leaves.")
private Boolean recursive;

IMHO, if a caller invokes a list API without passing any specific query parameter, he/she should see all resources that he/she is authorized to see. In CloudStack, we have implicit authorization rules as follows: 1. Root admin should be able to see all the resources under the Root domain. 2. Domain admin should be able to see all the resources under its own domain tree. 3. Normal user should only see the resources owned by him.

listAll doesn't impact user calls.

4. Project account should be able to see resources assigned to that project.

Project account can't make the calls. Any CS account assigned to the project + admin can list project resources. When listAll is passed in, all resources except project resources will be returned to the caller. When projectId=-1 is passed in, all resources of all projects in the system that the caller is authorized to see will be returned to the caller.

Based on the current AccountManager.buildACLSearchParameters implementation, we are not observing the passed listAll and recursive values at all; it seems we always treat listAll=true and recursive=true.

recursive=false is respected when passed along with the domainId. In this case, it will list all the resources under this domain only, without subdomains.
When recursive=true is passed with domainId, the resources of domains + subdomains will be returned. Thus, I am proposing that we change the default value of listAll and recursive to TRUE instead of current FALSE. Any objections? The main objection - it will break all the partners/third party apps/UIs built on the current CS behavior. Thanks -min
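The disagreement comes down to how an omitted Boolean parameter is resolved; a minimal sketch of the current fallback versus the proposed one (illustrative accessors, not the actual CloudStack code):

```java
public class ListDefaults {
    // Current behavior: an omitted Boolean parameter falls back to false.
    static boolean listAll(Boolean passedValue) {
        return passedValue != null ? passedValue : false;
    }

    // Min's proposal: fall back to true, so a bare list call returns everything
    // the caller is authorized to see. Existing clients that omit the parameter
    // and expect "own resources only" would silently change behavior, which is
    // Alena's compatibility objection.
    static boolean listAllProposed(Boolean passedValue) {
        return passedValue != null ? passedValue : true;
    }

    public static void main(String[] args) {
        // omitted parameter: the two defaults diverge
        System.out.println(listAll(null) + " " + listAllProposed(null));
        // explicit values always win under either scheme
        System.out.println(listAll(true) + " " + listAllProposed(false));
    }
}
```

Only callers that never send the parameter are affected by the flip, which is exactly the population of third-party apps and UIs that cannot be audited from the server side.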
Re: [New Feature FS] SSL Offload Support for Cloudstack
Thanks Edison for the reply. I see that there is already an implementation of KeystoreManager which does certificate validation and saves it in the keystore table. Also, the API (UploadCustomCertificate) is only callable from admin. I could add functionality to this class for handling the certificate chain and also make sure the table stores the account_id as well. We could avoid creating another table by reusing the keystore table. I have a question about terminology: what is a service and what is a manager? I see them both being used. In my case, I assume that my CertificateService will have the KeystoreManager injected, and the Service will serve as a proxy between the Resource layer and the KeystoreManager, which is the DB layer. Will this approach work? Thanks -Syed

On Tue 08 Oct 2013 06:56:34 PM EDT, Edison Su wrote: There is a command in ACS, UploadCustomCertificateCmd, which can receive SSL cert, key and chain as input. Maybe we can share some code?
Review Request 14549: Rename net.juniper.contrail to org.apache.cloudstack.network.contrail
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14549/
---

Review request for cloudstack.

Repository: cloudstack-git

Description
---

Rename net.juniper.contrail to org.apache.cloudstack.network.contrail.

Diffs
---

  client/tomcatconf/applicationContext.xml.in 0ab2515
  client/tomcatconf/componentContext.xml.in 157ad5a
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/api/command/CreateServiceInstanceCmd.java 92f5eeb
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/api/response/ServiceInstanceResponse.java 1b7a7d8
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailElement.java 885a60f
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailElementImpl.java 3a38020
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailGuru.java c655b0b
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailManager.java 5195793
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailManagerImpl.java 8a3ca1b
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/DBSyncGeneric.java d169b37
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/EventUtils.java acd1bed
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ManagementNetworkGuru.java bad2502
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ModelDatabase.java f9e7c24
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServerDBSync.java 4c8c2e9
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServerDBSyncImpl.java 06daf12
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServerEventHandler.java 6f0ecf2
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServerEventHandlerImpl.java aa4e9d5
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServiceManager.java f3884fb
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServiceManagerImpl.java b90792c
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServiceVirtualMachine.java 9c8b61d
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/FloatingIpModel.java ca90666
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/FloatingIpPoolModel.java 8e238fd
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/InstanceIpModel.java ff08560
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/ModelController.java 7abb40a
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/ModelObject.java 7cd420c
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/ModelObjectBase.java 4b05e96
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/ServiceInstanceModel.java f65bfc7
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/VMInterfaceModel.java 0ec7c9e
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/VirtualMachineModel.java df40025
  plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/VirtualNetworkModel.java 99ab944
  plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/api/command/CreateServiceInstanceCmd.java PRE-CREATION
  plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/api/response/ServiceInstanceResponse.java PRE-CREATION
  plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/management/ContrailElement.java PRE-CREATION
  plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/management/ContrailElementImpl.java PRE-CREATION
  plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/management/ContrailGuru.java PRE-CREATION
  plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/management/ContrailManager.java PRE-CREATION
  plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/management/ContrailManagerImpl.java PRE-CREATION
  plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/management/DBSyncGeneric.java PRE-CREATION
  plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/management/EventUtils.java PRE-CREATION
  plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/management/ManagementNetworkGuru.java
questions about registerIso API and updateIsoPermissions API
Hi, I have questions about the registerIso API and updateIsoPermissions API. (1) A normal user is allowed to specify the isextractable property when registering an ISO (through the registerIso API), but NOT allowed to update the isextractable property when updating an ISO (through the updateIsoPermissions API). Is this by design or is it just an API bug? (2) A normal user is NOT allowed to specify the isfeatured property when registering an ISO (through the registerIso API), but is allowed to update the isfeatured property when updating an ISO (through the updateIsoPermissions API)? Is this by design or is it just an API bug? Jessica
Re: questions about registerIso API and updateIsoPermissions API
On 10/8/13 5:10 PM, Jessica Wang jessica.w...@citrix.com wrote: Hi, I have questions about the registerIso API and updateIsoPermissions API. (1) A normal user is allowed to specify the isextractable property when registering an ISO (through the registerIso API), but NOT allowed to update the isextractable property when updating an ISO (through the updateIsoPermissions API). Is this by design or is it just an API bug? Jessica, did you mean updateIso? As UpdateIsoPermissions updates permissions only. Answering your question - if the user can specify the flag when registering the template, he should be allowed to update it. (2) A normal user is NOT allowed to specify the isfeatured property when registering an ISO (through the registerIso API), but is allowed to update the isfeatured property when updating an ISO (through the updateIsoPermissions API)? Is this by design or is it just an API bug? Again, should be updateIso. And yes, it's a bug if he can update the flag on an existing object but can't create the object with this flag by default. Jessica
Re: questions about registerIso API and updateIsoPermissions API
Answers inline. From: Jessica Wang jessica.w...@citrix.com Date: Tuesday 8 October 2013 5:10 PM To: dev@cloudstack.apache.org Cc: Alena Prokharchyk alena.prokharc...@citrix.com, Nitin Mehta nitin.me...@citrix.com, Shweta Agarwal shweta.agar...@citrix.com Subject: questions about registerIso API and updateIsoPermissions API Hi, I have questions about the registerIso API and updateIsoPermissions API. (1) A normal user is allowed to specify the isextractable property when registering an ISO (through the registerIso API), but NOT allowed to update the isextractable property when updating an ISO (through the updateIsoPermissions API). Is this by design or is it just an API bug? [Nitin] This is a grey area. This was done for templates (ISOs just inherited it) because derived templates may or may not belong to the same user, and we want to follow the principle of least privilege. At the moment, I think that for ISOs we should allow editing it, so I would call it an API bug. (2) A normal user is NOT allowed to specify the isfeatured property when registering an ISO (through the registerIso API), but is allowed to update the isfeatured property when updating an ISO (through the updateIsoPermissions API)? Is this by design or is it just an API bug? [Nitin] Register ISO does provide an option to mark an ISO featured. I see that in the latest master. Jessica
Re: Latest Master DB issue
The problem, it seems to me, is whether or not a background job that touches the database respects the bootstrap initialization order. As for VmwareContextPool itself, its background job does everything fully within its own territory (no database, no references outside), and the vmware-base package was originally designed to run on its own without assuming any container that offers unified lifecycle management. I don't think this type of background job has anything to do with the failure in this particular case.

However, I do agree that we need to clean up and unify a few things inside CloudStack, especially life-cycle management and all background jobs whose execution path touches component life-cycle, auto-wiring, AOP, etc. Until the spring modularization merge lands, we just need to figure out which background job triggers all this and get it fixed. It used to work before, even if it was fragile, so I don't think the problem is impossible to fix. Is anyone working on this issue?

Kelven

On 10/8/13 2:35 PM, Darren Shepherd darren.s.sheph...@gmail.com wrote:

Some more info about this. What specifically is happening is that VmwareContextPool creates a Timer in its constructor, and the class is constructed in a static block in VmwareContextFactory. So when the VmwareContextFactory class is loaded by the class loader, the background thread is created, which is way, way before the database upgrade happens. This will still be fixed if we merge the spring modularization, but this vmware code should change regardless. Background threads should only be launched from a @PostConstruct or ComponentLifecycle.start() method. They should not be started when a class is constructed or loaded.

Darren

On Tue, Oct 8, 2013 at 2:22 PM, Darren Shepherd darren.s.sheph...@gmail.com wrote:

Hey, I halfway introduced this issue in a really long and roundabout way.
I don't think there's a good simple fix unless we merge the spring-modularization branch. I'm going to look further into it, but here's the background of why we are seeing this. I introduced the Managed Context framework, which wraps all the background threads and manages the thread locals. This was the union of CallContext, ServerContext, and AsyncJob*Context into one simple framework. The problem with ACS, though, is that A LOT of background threads are spawned at all different random times during initialization. So what is happening is that during the initialization of some bean, it's kicking off a background thread that tries to access the database before the database upgrade has run.

Now, CallContext has a strange suicidal behaviour (this was already there, I didn't change it): if it can't find account 1, it does a System.exit(1). So since this one background thread is failing, the whole JVM shuts down. Before, CallContext only existed on some threads, but with the addition of the Managed Context framework it is now on almost all threads.

In the spring-modularization branch there is a very strict and (mostly) deterministic initialization order: the database upgrade class is initialized and run before any other bean in CloudStack is even instantiated, which works around all these DB problems. The current spring setup in master is very, very fragile. As I said before, it is really difficult to ensure certain things are initialized before others, and since we moved to doing DB schema upgrades purely on startup of the mgmt server (which I don't really agree with), we now have to be extra careful about initialization order.
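The fix Darren recommends for the vmware code is to defer background-thread creation from class-load time to an explicit lifecycle call. A minimal sketch of that pattern, with an invented class name (the real VmwareContextPool internals are not shown in the thread):

```java
import java.util.Timer;
import java.util.TimerTask;

// Hypothetical sketch: instead of starting the idle-check Timer in a
// constructor or static block (which runs as soon as the class is loaded,
// long before the DB upgrade), the pool exposes a start() method that a
// lifecycle hook (@PostConstruct / ComponentLifecycle.start()) calls later.
class ContextPoolSketch {
    private Timer idleCheckTimer; // created lazily, never in a static block

    public synchronized void start() {
        if (idleCheckTimer != null) {
            return; // idempotent: already started
        }
        idleCheckTimer = new Timer("ContextPool-IdleCheck", true);
        idleCheckTimer.schedule(new TimerTask() {
            @Override
            public void run() {
                // recycle idle contexts here
            }
        }, 1000L, 1000L);
    }

    public synchronized boolean isStarted() {
        return idleCheckTimer != null;
    }

    public synchronized void stop() {
        if (idleCheckTimer != null) {
            idleCheckTimer.cancel();
            idleCheckTimer = null;
        }
    }
}
```

The key property is that merely loading or constructing the class spawns no thread; the container decides when start() runs, so it can be ordered after the database upgrade.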
Darren

On Tue, Oct 8, 2013 at 11:34 AM, Rayees Namathponnan rayees.namathpon...@citrix.com wrote:

Here is the defect created for this issue: https://issues.apache.org/jira/browse/CLOUDSTACK-4825 Regards, Rayees

-Original Message- From: Francois Gaudreault [mailto:fgaudrea...@cloudops.com] Sent: Tuesday, October 08, 2013 11:15 AM To: dev@cloudstack.apache.org Subject: Re: Latest Master DB issue

I guess in my case it's fine. It was a fresh install... Francois

On 10/8/2013, 2:08 PM, Darren Shepherd wrote: Deploy db from maven will drop all the tables. Not sure if this is a fresh install or not. For master, running mvn will be your best bet. Otherwise you can look at running com.cloud.upgrade.DatabaseCreator manually if you're adventurous. Darren

On Tue, Oct 8, 2013 at 10:55 AM, Francois Gaudreault fgaudrea...@cloudops.com wrote: Thanks Alena for the explanation. What is the better path to fix this on our setup? Should I wait for a fix in master, or should I manually run the deploydb with mvn? I guess the second option won't work since I used RPMs? Francois

On 10/8/2013, 1:47 PM, Alena Prokharchyk wrote: Ok, this is what's going on - the DB upgrade procedure is different on a developer's setup and when deployed using cloudstack-setup-databases. On a developer's setup: 1) you deploy the code 2)
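For readers following along, the "deploy db from maven" option Darren mentions is, on a developer setup, commonly invoked as below. This is a hedged sketch: the profile/flag names are the ones documented for CloudStack developer builds of this era, and the DatabaseCreator invocation is abbreviated, not a complete command line.

```shell
# From a CloudStack source checkout: drops and recreates the cloud databases.
mvn -P developer -pl developer -Ddeploydb

# The "adventurous" manual route Darren refers to (classpath and
# properties-file arguments elided; consult the class itself):
# java -cp ... com.cloud.upgrade.DatabaseCreator ...
```

As noted in the thread, neither applies cleanly to an RPM-based install, where cloudstack-setup-databases drives the schema instead.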