Re: Fault percentage value of CPU usage in Cloud Platform
Hello guys, can someone suggest a solution to my issue? Any help would be appreciated.

Best Regards,
Anil.

On Thu, Nov 24, 2016 at 12:14 PM, anil lakineni <anilkumar459.lakin...@gmail.com> wrote:
> [snip]
Re: Fault percentage value of CPU usage in Cloud Platform
Dear Will,

Good afternoon, I hope everything is fine at your end. Please find my comments on your questions below.

- do you have VMs allocated, but turned off? They will count towards the provisioned CPU even though they are not running because they could be started at any time and are expecting to have the resources to start.

*Yes, I have a few VMs in the shutdown state. But as far as I know, CPU is not counted towards VPSes that are shut down, because when I turn any VPS off or on, the allocated CPU value and the corresponding percentage change accordingly.*

- do you have more than one cluster? The dashboard only shows the most used cluster, but if you drill down it shows the whole environments resources, so if you have more than one cluster, that could explain the difference.

*Yes, I have two clusters. As I mentioned in my previous e-mails, I can see the true allocated value on the Dashboard (i.e., 800 GHz/2000 GHz), and the same value under the whole resources (zone level) as well. But when it comes to the percentage, the Dashboard shows a wrong value (91%), whereas the whole resources tab shows 40%, which is correct, since 800/2000 is 40%.*

*So the issue is with the percentage of allocated CPU shown on the Dashboard. Why is it wrong? It is causing our deployments to fail, since the cloud platform validates against the allocated-CPU percentage from the Dashboard, not from the whole resources tab.*

- are you trying to deploy to a specific cluster with a service offering tag? SvcOffering:WinL? Is that the most used cluster?

*Yes, I am deploying to the second cluster (the WinL tag), and the two clusters have roughly the same usage ratio.*

Is it a bug? My Cloud version is 4.5.
Do I need to restart any services on the management server to get the actual percentage value on the Dashboard?
Do I need to modify the database directly?
*Please let me know if you need more information to help me on issue resolving. Thanks.* Best Regards, Anil. On Tue, Nov 22, 2016 at 3:22 PM, Will Stevens <williamstev...@gmail.com> wrote: > A couple things. > - do you have VMs allocated, but turned off? They will count towards the > provisioned CPU even though they are not running because they could be > started at any time and are expecting to have the resources to start. > - do you have more than one cluster? The dashboard only shows the most used > cluster, but if you drill down it shows the whole environments resources, > so if you have more than one cluster, that could explain the difference. > - are you trying to deploy to a specific cluster with a service offering > tag? SvcOffering:WinL? Is that the most used cluster? > > Let us know. > > On Nov 22, 2016 6:51 AM, "anil lakineni" <anilkumar459.lakin...@gmail.com> > wrote: > > > Hi Sudharma, > > > > I verified the management server logs when the VPS got failed to deploy > and > > i found that the value of CPU is exceeding than the threshold value So > that > > VPS deployment has been failed. > > Then i have changed the CPU disable & alert threshold value to above 90% > > and i was able to deploy the VPS. > > > > Please check *http://pastebin.com/irrS0TTg <http://pastebin.com/irrS0TTg > >* > > for the management server log when the VM deployment was failed. 
> > > > *The brief content of the log is-* > > > > 2016-11-17 12:46:34,100 DEBUG [c.c.d.DeploymentPlanningManagerImpl] > > (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) > (logid:393001e5) > > DeploymentPlanner allocation algorithm: > > com.cloud.deploy.FirstFitPlanner@5a32f393 > > 2016-11-17 12:46:34,101 DEBUG [c.c.d.DeploymentPlanningManagerImpl] > > (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) > (logid:393001e5) > > Trying to allocate a host and storage pools from dc:1, > > pod:null,cluster:null, requested cpu: 38400, requested ram: 68719476736 > > 2016-11-17 12:46:34,101 DEBUG [c.c.d.DeploymentPlanningManagerImpl] > > (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) > (logid:393001e5) > > Is ROOT volume READY (pool already allocated)?: No > > 2016-11-17 12:46:34,101 DEBUG [c.c.d.FirstFitPlanner] > > (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) > (logid:393001e5) > > Searching all possible resources under this Zone: 1 > > 2016-11-17 12:46:34,104 DEBUG [c.c.d.FirstFitPlanner] > > (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) > (logid:393001e5) > > Listing pods in order of aggregate capacity, that have (atleast one host > > with) enough CPU and RAM capacity under this Zone: 1 > > 2016-11-17 12:46:34,111 DEBUG [c.c.d.FirstFitPlanner] > >
Re: Fault percentage value of CPU usage in Cloud Platform
af38dbf FirstFitRoutingAllocator) (logid:393001e5) Host Allocator returning 0 suitable hosts
2016-11-17 12:46:34,170 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) (logid:393001e5) No suitable hosts found
2016-11-17 12:46:34,170 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) (logid:393001e5) No suitable hosts found under this Cluster: 1
2016-11-17 12:46:34,174 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) (logid:393001e5) Could not find suitable Deployment Destination for this VM under any clusters, returning.
2016-11-17 12:46:34,174 DEBUG [c.c.d.FirstFitPlanner] (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) (logid:393001e5) Searching all possible resources under this Zone: 1
2016-11-17 12:46:34,177 DEBUG [c.c.d.FirstFitPlanner] (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) (logid:393001e5) Listing pods in order of aggregate capacity, that have (atleast one host with) enough CPU and RAM capacity under this Zone: 1
2016-11-17 12:46:34,184 DEBUG [c.c.d.FirstFitPlanner] (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) (logid:393001e5) Removing from the podId list these pods from avoid set: []
2016-11-17 12:46:34,188 DEBUG [c.c.d.FirstFitPlanner] (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) (logid:393001e5) Checking resources under Pod: 1
2016-11-17 12:46:34,189 DEBUG [c.c.d.FirstFitPlanner] (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) (logid:393001e5) Listing clusters in order of aggregate capacity, that have (atleast one host with) enough CPU and RAM capacity under this Pod: 1
2016-11-17 12:46:34,196 DEBUG [c.c.d.FirstFitPlanner] (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) (logid:393001e5) Removing from the clusterId list these clusters from avoid set: [1]
2016-11-17 12:46:34,205 DEBUG [c.c.d.FirstFitPlanner]
(API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) (logid:393001e5) *Cannot allocate cluster list [5] for vm creation since their allocated percentage crosses the disable capacity threshold defined at each cluster/ at global value for capacity Type : 1, skipping these clusters*
2016-11-17 12:46:34,205 DEBUG [c.c.d.FirstFitPlanner] (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) (logid:393001e5) No clusters found after removing disabled clusters and clusters in avoid list, returning.
2016-11-17 12:46:34,212 DEBUG [c.c.v.UserVmManagerImpl] (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) (logid:393001e5) Destroying vm VM[User|i-91-736-VM] as it failed to create on Host with Id:null
2016-11-17 12:46:34,252 DEBUG [c.c.c.CapacityManagerImpl] (API-Job-Executor-22:ctx-f48bbb10 job-98412 ctx-daf38dbf) (logid:393001e5) VM state transitted from :Stopped to Error with event: OperationFailedToErrorvm's original host id: null new host id: null host id before state transition: null

Please let me know if you require more information.

Best Regards,
Anil.

On Tue, Nov 22, 2016 at 12:10 PM, Sudharma Jain <sudharma@gmail.com> wrote:
> Hi Anil,
>
> There could be a bug with the dashboard, but it has nothing to do with the
> deployment failure. Check your management server logs.
>
> Thanks,
> Sudharma
>
> On Tue, Nov 22, 2016 at 1:25 PM, anil lakineni <anilkumar459.lakin...@gmail.com> wrote:
> > [snip]
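An aside on how both percentages can be "right" at the same time: the Dashboard reports only the most used cluster, the zone's Resources tab reports the zone aggregate, and the planner applies the disable threshold per cluster. Below is a minimal sketch with hypothetical numbers chosen to mirror this thread; the global setting name `cluster.cpu.allocated.capacity.disablethreshold` and its 0.85 default are from stock CloudStack, so verify them under Global Settings in your install.

```python
# Illustrative sketch: why the Dashboard (most used cluster) and the zone
# Resources tab (aggregate) can show very different CPU percentages.
# Cluster IDs and capacities below are hypothetical.

clusters = {
    # cluster_id: (allocated_mhz, total_mhz)
    1: (72_000, 1_200_000),   # lightly allocated cluster -> 6%
    5: (728_000, 800_000),    # heavily allocated cluster -> 91%
}

def pct(alloc: int, total: int) -> float:
    return 100.0 * alloc / total

# Zone Resources tab: aggregate over all clusters -> 40%
total_alloc = sum(a for a, _ in clusters.values())
total_cap = sum(t for _, t in clusters.values())
aggregate = pct(total_alloc, total_cap)

# Dashboard: only the most used cluster is shown -> 91%
most_used = max(pct(a, t) for a, t in clusters.values())

# The planner skips any cluster whose own percentage crosses the disable
# threshold (global setting cluster.cpu.allocated.capacity.disablethreshold,
# default 0.85), regardless of what the zone aggregate says.
DISABLE_THRESHOLD = 0.85
deployable = [cid for cid, (a, t) in clusters.items()
              if pct(a, t) / 100.0 <= DISABLE_THRESHOLD]

print(f"aggregate={aggregate:.0f}% most_used={most_used:.0f}% deployable={deployable}")
```

With these numbers the zone aggregate is 40% while the most used cluster sits at 91%, so the Dashboard shows 91% and the planner refuses cluster 5 — the same combination described in this thread. Raising the threshold hides the symptom but permits genuine over-allocation on that cluster.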
Re: Fault percentage value of CPU usage in Cloud Platform
Good morning,

@Will - but we don't have a support contract.

@Bharat - True, but the allocated CPU percentage is showing wrong on the Dashboard, whereas in the zone's Resources *(path: 'Infrastructure' -> 'Zones' -> 'click on the desired zone name' -> 'Resources')* the percentage is correct.

Total CPU allocated is 800 GHz out of 2000 GHz, so the percentage should be around 40%, but in my case the Dashboard shows 91%, which causes new deployments to fail. The same value in the zone's Resources shows the accurate 40%.

For new VPS or VM deployments the cloud uses the Dashboard percentage, not the one in the zone's Resources. Would you help me fix this issue?

Best Regards,
Anil.

On Mon, Nov 21, 2016 at 7:20 PM, Bharat Kumar <bharat.ku...@accelerite.com> wrote:
> Hi,
>
> There may be a difference in what you have allocated and what is being
> actually used. The dashboard shows what is allocated.
>
> Regards,
> Bharat.
>
> On 11/21/16, 9:44 PM, "williamstev...@gmail.com on behalf of Will
> Stevens" <williamstev...@gmail.com on behalf of wstev...@cloudops.com>
> wrote:
>
> >You will have to contact Accelerite for support with ACP (previously CCP).
> >We have no visibility into the ACP code or how to support you.
> >
> >https://support.accelerite.com/hc/en-us
> >
> >Best of luck...
> >
> >*Will STEVENS*
> >Lead Developer
> >
> ><https://goo.gl/NYZ8KK>
> >
> >On Mon, Nov 21, 2016 at 3:44 AM, anil lakineni <
> >anilkumar459.lakin...@gmail.com> wrote:
> >
> >> [snip]
Fault percentage value of CPU usage in Cloud Platform
Dear All,

On the CloudPlatform Dashboard our CPU usage shows a wrong (high, 91%) value, which in turn is not allowing us to provision new VMs. In fact, only 40% of the available CPU is utilized, and even on the Dashboard only the percentage is a false metric; the CPU usage value itself is accurate (800/2000 GHz).

In addition, when we check the CPU status at the zone level we see the accurate CPU usage percentage in all zones; only the Dashboard reports the false percentage (which causes new deployments to fail).

- Our CCP version is 4.5.0
- Hypervisors are XenServer 6.2 & 6.5

Please help me sort out this issue, and let me know if any additional information is needed.

Best Regards,
Anil.
Re: Cloud Platform Dashboard's System Capacity is not updated with newly added resources
Hi Will,

Thank you for your inputs and clarifications. Yes, I have already seen the resources reflected on the Zone tab. I thought it was an issue with my CloudPlatform when they were not reflected on the Dashboard; I can let this go now, since the Dashboard does not behave as expected in many cases.

Regards,
Anil.

On Wed, Sep 7, 2016 at 4:05 PM, Will Stevens <williamstev...@gmail.com> wrote:
> I don't have the ui in front of me, so I can't give you specifics.
>
> The dashboard does not work how most people expect. The dashboard only
> shows the most used cluster, which is why it did not change. When you click
> on the dashboard, it will take you to a second screen full of horizontal
> bars. Those should reflect the total capacity of the system. Hope that
> helps.
>
> Will
>
> On Sep 7, 2016 5:45 AM, "anil lakineni" <anilkumar459.lakin...@gmail.com>
> wrote:
>
> [snip]
Re: Cloud Platform Dashboard's System Capacity is not updated with newly added resources
Hi All,

Can anyone help me with this issue? *Newly added resources are not updated in the Dashboard's System Capacity, but they are updated in the Infrastructure tab's zone resources (Infrastructure --> Zone --> Resources).*

Thanks,
Anil.

On Sun, Sep 4, 2016 at 12:13 PM, anil lakineni <anilkumar459.lakin...@gmail.com> wrote:
> [snip]
Cloud Platform Dashboard's System Capacity is not updated with newly added resources
Greetings All,

Cloud Platform version is 4.5.0. The existing cloud-managed hypervisor version is XenServer 6.2; the new cloud-managed hypervisor version is XenServer 6.5.

I have successfully added a new cluster with XenServer 6.5 into Cloud Platform, but the Dashboard's System Capacity is not updated with the new CPU and memory resources, although I do see the storage resources updated with the new values.

Also, I can see the newly added resources in the Infrastructure tab's zone resources (Infrastructure --> Zone --> Resources), but I cannot see them on the Dashboard tab.

Can anyone suggest what I am missing to get the changes updated on the Dashboard?

*P.S:* I have restarted the Cloud Management service with no luck, and I am able to deploy new VMs in the newly added cluster.

Looking forward to your valuable suggestions.

Thanks,
Anil.
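When the Dashboard and the zone tab disagree, the recorded capacity rows can also be inspected directly in the management database. This is a sketch only: the table and column names below (`op_host_capacity`, `used_capacity`, `reserved_capacity`, `total_capacity`, `capacity_type` with 1 = CPU, matching "capacity Type : 1" in the planner log) are from the stock CloudStack/CCP 4.5 schema and should be verified against your own database before running anything.

```python
# Hedged sketch: build a read-only diagnostic query against the cloud
# database's op_host_capacity table (CloudStack 4.5 schema assumed) to
# compare the recorded per-cluster capacity with what the UI reports.
# capacity_type: 0 = memory, 1 = CPU.

def capacity_query(capacity_type: int) -> str:
    return (
        "SELECT cluster_id, "
        "       SUM(used_capacity + reserved_capacity) AS allocated, "
        "       SUM(total_capacity) AS total, "
        "       100 * SUM(used_capacity + reserved_capacity) "
        "           / SUM(total_capacity) AS pct "
        "FROM cloud.op_host_capacity "
        f"WHERE capacity_type = {capacity_type} "
        "GROUP BY cluster_id;"
    )

# Print the CPU query; run it read-only via the mysql client against the
# management database.
print(capacity_query(1))
```

If the rows here disagree with the Dashboard, the capacity statistics are stale rather than wrong in the database; CloudStack recalculates them periodically (see the `capacity.check.period` global setting), so a management-server restart or the next check interval should refresh them.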
Re: Getting unauthorized error when using sync command in cloud monkey
Timothy, I'm using CCP's admin keys; please suggest if anything needs to be changed to fix the issue.

Suneel, the signature version parameter is only available in the CloudMonkey 5.3.2 config file, so I installed 5.3.2 as well and changed it to 2. After that I was able to discover APIs with the sync command. But when I try to migrate a virtual machine, it throws an error like the one below:

*(local) > sync*
*506 APIs discovered and cached*
*(local) > migrate virtualmachine virtualmachineid=693c4bca-ad01-4369-82c3-0384c1780d7e hostid=824da2e7-2de6-498e-97de-bfec530d7d24*
*Error 401 Authentication error*
*errorcode = 401*
*errortext = unable to verify user credentials and/or request signature*
*uuidList:*
*(local) >*

I believe the migrate virtual machine API is available to the ROOT account only, so we should use only the 'admin' account's keys in the CloudMonkey configuration file. When I tried the same VM's account keys, the '*migratevirtualmachine*' API was not shown, as it was not a ROOT account.

Thanks,
Anil.

On Tue, May 17, 2016 at 3:13 PM, mvs babu <mvsbabu0...@outlook.com> wrote:
> It's a problem with the signature version. Change the signature version to 2.
>
> Thank you,
> Suneel.
> AxiomIO
>
> From: Timothy Lothering
> Sent: Tuesday, May 17, 2016 4:40 PM
> To: us...@cloudstack.apache.org, dev@cloudstack.apache.org
>
> Hi Anil,
>
> Are you using the admin keys for CCP or CPBM?
>
> Kind Regards,
> Timothy Lothering
> Solutions Architect
> Managed Services
>
> [snip]
Re: Getting unauthorized error when using sync command in cloud monkey
Thanks Glenn, Timothy for responses. I tried both ways which you posted here, but same error is coming. We have CPBM in front of CCP, will that be cause. For this kind of environment do i need to follow any other steps to fix the issue as API and Secret keys are enabled by CPBM I'm using admin user in cloudmonkey configuration file. Please help me out. Thanks, Anil. On Tue, May 17, 2016 at 12:18 PM, Timothy Lothering < tlother...@datacentrix.co.za> wrote: > Hi Anil, > > Your file should look similar below (looking at yours, the [LOCAL] section > is there, but I am not sure if some of the config can be in a single line) > > [core] > profile = local > asyncblock = false > paramcompletion = true > history_file = //.cloudmonkey/history > cache_file = //.cloudmonkey/cache > log_file = //.cloudmonkey/log > > [ui] > color = true > prompt = 🠵 > > display = default > > [local] > username = admin > apikey = > url = http://:8080/client/api > expires = 600 > secretkey = > timeout = 3600 > password = > > Thanks > > Kind Regards, > Timothy Lothering > > -Original Message- > From: anil lakineni [mailto:anilkumar459.lakin...@gmail.com] > Sent: Tuesday, 17 May 2016 10:46 AM > To: us...@cloudstack.apache.org; dev@cloudstack.apache.org > Subject: Getting unauthorized error when using sync command in cloud monkey > > Hi All, > > I am unable to sync API's in CloudMonkey and getting below error, > > > sync > *Unauthorized: None* > *Failed to sync apis, please check your config?* > *Note: `sync` requires api discovery service enabled on the CloudStack > management server* > > Cloud Monkey version: 5.2.0 > Citrix Cloud Platform version: 4.5.0 > > This is my cloud monkey configuration file, vi ~/.cloudmonkey/config > > [core] > profile = local > asyncblock = true > paramcompletion = false > history_file = /root/.cloudmonkey/history cache_file = > /root/.cloudmonkey/cache log_file = /root/.cloudmonkey/log > > [ui] > color = true > prompt = > > display = default > > [local] > apikey = > url 
= http://8080/client/api expires = 600 secretkey = > timeout = 3600 username = xx password = xx > > On both servers, management and cloudmonkey the iptables are in off state. > > Please help me to fix this unauthorized issue, and let me know if any > information needed. > > Thanks, > Anil. > Timothy Lothering > Solutions Architect > Managed Services > > T: +27877415535 > F: +27877415100 > C: +27824904099 > E: tlother...@datacentrix.co.za >
Getting unauthorized error when using sync command in cloud monkey
Hi All,

I am unable to sync APIs in CloudMonkey; I am getting the error below:

> sync
*Unauthorized: None*
*Failed to sync apis, please check your config?*
*Note: `sync` requires api discovery service enabled on the CloudStack management server*

CloudMonkey version: 5.2.0
Citrix Cloud Platform version: 4.5.0

This is my CloudMonkey configuration file (vi ~/.cloudmonkey/config):

[core]
profile = local
asyncblock = true
paramcompletion = false
history_file = /root/.cloudmonkey/history
cache_file = /root/.cloudmonkey/cache
log_file = /root/.cloudmonkey/log

[ui]
color = true
prompt = >
display = default

[local]
apikey =
url = http://8080/client/api
expires = 600
secretkey =
timeout = 3600
username = xx
password = xx

On both servers (management and CloudMonkey), iptables is off.

Please help me fix this unauthorized issue, and let me know if any information is needed.

Thanks,
Anil.
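For reference, a 401 "unable to verify user credentials and/or request signature" means the server recomputed the signature and got a different value than the client sent, which is why a signature-version mismatch produces it. A sketch of CloudStack's default signing scheme is below (sort the parameters by name, URL-encode the values, lower-case the whole query string, HMAC-SHA1 it with the secret key, base64-encode); the endpoint, keys, and parameter values here are placeholders, not taken from this environment.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_request(params: dict, secret_key: str) -> str:
    """Compute a CloudStack API signature (default scheme): sort params
    by name, URL-encode values, lower-case the whole query string,
    HMAC-SHA1 with the secret key, then base64-encode the digest."""
    query = "&".join(
        f"{k}={quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode("utf-8"),
        query.lower().encode("utf-8"),
        hashlib.sha1,
    ).digest()
    return base64.b64encode(digest).decode("utf-8")

# Placeholder credentials -- substitute the admin account's real keys.
params = {
    "command": "listCapacity",
    "type": 1,              # 1 = CPU
    "response": "json",
    "apikey": "PLACEHOLDER_API_KEY",
}
signature = sign_request(params, "PLACEHOLDER_SECRET_KEY")
print(signature)
```

The computed signature must itself be URL-encoded when appended as `&signature=...` to the request. If CloudMonkey and the server disagree on the scheme variant (the signature version), the bytes being HMAC'd differ and the server returns exactly the 401 above.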
Adding new host to existing cluster which has Fiber Channel storage as primary storage
Hi All,

Could somebody guide me on adding a new host to an existing cluster in CloudStack?

Host: XenServer 6.2
CloudPlatform version: 4.5

My existing cluster is configured with Fibre Channel storage, and I need to add a new Xen host to it. I was able to add the new host to the existing cluster, but the newly added host is unable to attach the primary storage. I followed these steps to add the host:

- In XenCenter, I added the new host to the existing Xen pool.
- I added the host in the CloudPlatform UI.
- After adding the new host to the existing cluster in the UI, I can see the UUID generated by the cloud for local storage.
- Now I cannot see the existing primary storage LUNs on the newly added server, though I can see them on the old servers. We are also seeing an alert in the CloudPlatform UI: "Unable to attach storage pool15 to the host28".

Do I need to follow any procedure before adding the host in CCP to get the storage mapped to the new host? Please suggest.

Thanks,
Anil.
mysql-bin log files eating more space and DB server root fs filling up now at 98%
Hi All, On the cloud DB server, the root file system has reached 98%, and I found that */var/lib/mysql/* is consuming most of the space. Inside that directory, the *mysql bin logs* are taking up the space, and the files have been accumulating for a year. My environment has DB replication enabled. Is it safe to purge the older mysql bin logs? If yes, could you please post steps that will not affect replication, since some blogs say replication will be affected if we purge? *Please recommend a solution that has already worked in a production environment.* Please also suggest a process that lets MySQL clean up old bin logs automatically. P.S. I have verified the other directories and logs; they consume very little space compared with the bin log directory (it is not the *ibdata1* file). Cloud version is 4.5 and MySQL version is 5.1.73-log. My MySQL configuration file is: #cat /etc/my.cnf [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 innodb_rollback_on_timeout=1 innodb_lock_wait_timeout=600 max_connections=1400 log-bin=mysql-bin binlog-format = 'ROW' innodb_buffer_pool_size=5500m default-character-set=utf8 default-collation=utf8_unicode_ci character-set-server=utf8 collation-server=utf8_unicode_ci default-time-zone='+03:00' # for Master / Slave server-id = 1 [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid Please let me know if any other information is needed. Hope I will get some help here. Regards, Anil.
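A hedged sketch of the purge the message asks about (editor's illustration; the binlog file name and retention window are assumptions, and the DB should be backed up first). The safe order on a replicating pair is: find the oldest binlog the slave still needs, then purge only up to that file on the master:

```shell
# On the SLAVE: the binlog file the SQL thread is still applying.
mysql -e "SHOW SLAVE STATUS\G" | grep Relay_Master_Log_File
# On the MASTER: purge everything older than that file (name is a placeholder).
mysql -e "PURGE BINARY LOGS TO 'mysql-bin.000123';"
# Or purge by age instead:
mysql -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;"
# To have MySQL expire old binlogs automatically, add under [mysqld]:
#   expire_logs_days = 7
```

`PURGE BINARY LOGS` only removes files the server has rotated away from, but it does not know about slaves, which is why checking `SHOW SLAVE STATUS` first matters.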
Re: Getting Full Volume Snapshots every day
Hello guys, Any ideas on the request? Please.. Regards, Anil. On Thu, Nov 5, 2015 at 6:48 PM, mvs babu wrote: > Hi Team, > > > We have scheduled daily volume snapshots in ACS 4.3.1, with XenServer 6.2 > SP1 as the hypervisor. In secondary storage, we are getting full backups every > day instead of incremental backups. > > After restarting the management service and SQL service, we found that the > next backup was incremental, but the backups after that were full > again. > > We have two management servers and have configured database HA. > > > Please find DB information for one volume below, > > http://pastebin.com/kXc28QpM > > http://pastebin.com/iFMDtAZH > > > Thank you, > Suneel Mallela
Re: Download of ROOT Volume failed with exception after some time in CCP 4.5
Hello guys, please help me. We are experiencing a failure while downloading a ROOT volume on CS 4.5 with Xen 6.2 hosts. The download failed with the error "*Failed to copy the volume from the source primary storage pool to secondary storage*" after 12 hours, even though the timeout parameters (migratewait, copy.volume.wait, job.cancel.threshold.minutes and job.expire.minutes) are set to 36 hours, and I can see in the secondary storage volume directory that nearly 137 GB of data was copied. The provisioned volume disk size is 250 GB. Any idea how to fix this? Has anyone recovered from this kind of issue before? How do I resolve it? Regards, Anil. On Wed, Oct 21, 2015 at 6:48 PM, anil lakineni < anilkumar459.lakin...@gmail.com> wrote: > Hi CloudStackers, > > I am using CloudPlatform 4.5.0 and XenServer 6.2.0. > > I initiated a volume download of a ROOT volume, but due to the file size > I got a timeout exception: > > *Unable to Serialize: Job is cancelled as it has been blocking others for > too long* > > *After the exception I took the steps below:* > > 1. Changed some parameters in Global Settings to extend the time allowed to > complete the download, to prevent a timeout on the retry > (because I initiated the volume download again). > > > 2. Restarted the management service so the Global Settings changes take effect. > > 3. Before re-initiating the download, I changed the state of the volume > to 'Ready' in the *volumes* table of the cloud database, because > the volume was stuck in the 'migrating' state after the job failed; > this made the download-volume button visible again so I could > re-initiate the download. > > But it failed again with "*Failed to copy the volume from the source > primary storage pool to secondary storage*" > > 4. 
Later, checking the cloud *volume_store_ref* table, I saw that the status > of the entry was still 'Creating', so I deleted that row > from the table. > But I got the same error when I initiated the volume download again: 'Failed to > copy the volume from the source primary storage pool to secondary storage' > > 5. I then manually removed the previously downloaded .vhd file > from the volumes directory in secondary storage. > > 6. After that I succeeded in re-initiating the volume download, and it is > downloading now. > > *My questions are:* > > a) Were all the steps I took correct? > > b) If a job fails with the error I mentioned, should I wait for > CloudStack to clean up all the jobs automatically before > re-initiating? > > c) If a job fails with this type of timeout exception, does CloudStack > keep running it in the background regardless of the exception? I noticed > that after the exception the size of the .vhd file in the volume > directory of secondary storage kept increasing, so I assume the job > was still running in the background. Am I right? > > > Could anyone please post your recommendations on this scenario? > Your answers to my questions are much appreciated. > > Regards, > Anil. >
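The manual DB surgery described in steps 3 and 4 can be sketched as follows (editor's illustration; the IDs are placeholders, table names are as in CloudStack/CloudPlatform 4.x, and the cloud database should be backed up before any manual update):

```shell
# Inspect the volume and its secondary-storage reference:
mysql cloud -e "SELECT id, name, state FROM volumes WHERE id = <volume-id>;"
mysql cloud -e "SELECT * FROM volume_store_ref WHERE volume_id = <volume-id>\G"
# Step 3 above: reset a volume stuck in 'Migrating' back to 'Ready':
mysql cloud -e "UPDATE volumes SET state = 'Ready' WHERE id = <volume-id>;"
```

Editing state columns by hand bypasses the state machine the management server maintains, which is why the thread asks whether waiting for the automatic job cleanup is the safer route.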
Download of ROOT Volume failed with exception after some time in CCP 4.5
Hi CloudStackers, I am using CloudPlatform 4.5.0 and XenServer 6.2.0. I initiated a volume download of a ROOT volume, but due to the file size I got a timeout exception: *Unable to Serialize: Job is cancelled as it has been blocking others for too long* *After the exception I took the steps below:* 1. Changed some parameters in Global Settings to extend the time allowed to complete the download, to prevent a timeout on the retry (because I initiated the volume download again). 2. Restarted the management service so the Global Settings changes take effect. 3. Before re-initiating the download, I changed the state of the volume to 'Ready' in the *volumes* table of the cloud database, because the volume was stuck in the 'migrating' state after the job failed; this made the download-volume button visible again so I could re-initiate the download. But it failed again with "*Failed to copy the volume from the source primary storage pool to secondary storage*" 4. Later, checking the cloud *volume_store_ref* table, I saw that the status of the entry was still 'Creating', so I deleted that row from the table. But I got the same error when I initiated the volume download again: 'Failed to copy the volume from the source primary storage pool to secondary storage' 5. I then manually removed the previously downloaded .vhd file from the volumes directory in secondary storage. 6. After that I succeeded in re-initiating the volume download, and it is downloading now. *My questions are:* a) Were all the steps I took correct? b) If a job fails with the error I mentioned, should I wait for CloudStack to clean up all the jobs automatically before re-initiating? c) If a job fails with this type of timeout exception, does CloudStack keep running it in the background regardless of the exception? I noticed that after the exception the size of the .vhd file in the volume directory of secondary storage kept increasing, so I assume the job was still running in the background. Am I right? Could anyone please post your recommendations on this scenario? Your answers to my questions are much appreciated. Regards, Anil.
Usage service not working in CS 4.3.1
Dears, For the past three days, I have been facing issues with the usage service in CS 4.3.1 with XenServer 6.2.0 hosts. On the CS management server, checking the usage service status says it is running, but usage records are not being generated in the cloud_usage database. In management-server.log I am getting the error below: *2015-09-16 02:48:09,180 INFO [c.c.h.HighAvailabilityManagerImpl] (HA-5:ctx-9e572072) checking health of usage server* *2015-09-16 02:48:09,184 DEBUG [c.c.h.HighAvailabilityManagerImpl] (HA-5:ctx-9e572072) usage server running? false, heartbeat: Mon Sep 14 22:46:09 GMT-08:00 2015* *2015-09-16 02:48:09,184 WARN [o.a.c.alerts] (HA-5:ctx-9e572072) alertType:: 13 // dataCenterId:: 0 // podId:: 0 // clusterId:: null // message:: No usage server process running* *2015-09-16 02:48:09,187 DEBUG [c.c.a.AlertManagerImpl] (HA-5:ctx-9e572072) Have already sent: 1 emails for alert type '13' -- skipping send email* In /var/log/cloudstack/usage/cloudstack-usage.err, the following error is showing: *com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up* *Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure* The full cloudstack-usage.err log is pasted at http://pastebin.com/y9t7ebN4 I have checked that the mysql-connector-java.jar symbolic link is available in the usage server's lib directory. Please help me if I am missing something to fix this issue, and let me know if you need any other information from my side. Thanks in advance, Regards, Anil.
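Some first checks for the 'Communications link failure' above, as a hedged sketch (editor's illustration; the paths follow the usual 4.3 packaging and the DB host is a placeholder, so both may differ on a given install):

```shell
# The usage server reads its DB settings from its own db.properties:
grep '^db.cloud' /etc/cloudstack/usage/db.properties
# Verify the usage host can actually reach MySQL with those credentials:
mysql -h <db-host> -u cloud -p -e "SELECT 1;"
# Restart the usage service and watch its log:
service cloudstack-usage restart
tail -f /var/log/cloudstack/usage/usage.log
```

If the `mysql` test fails from the usage host, the problem is network/grants rather than the connector jar.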
CCP-4.5.0: Adding additional interfaces are not accepting on VMs if the VM has vm snapshots
Hi All, We are facing issues when adding additional interfaces to VMs that have VM snapshots. The issue is that the VM refuses the additional interface because of the VM snapshots. The error is: *Add netowrk to VM Action failed - Service Exception : NIC cannot be added to VM with VM Snapshots* I believe this is not correct behaviour for CloudPlatform, because we are able to add additional interfaces to VMs with VM snapshots in other versions of CloudStack. Is it a bug in CCP 4.5.0? Please let me know if I have to change/modify any values in CCP. I will be waiting for your valuable inputs. Thanks in advance. Using Citrix CloudPlatform 4.5.0 and XenServer 6.2 hosts. Regards, Anil.
Facing BOOT issues (if DATA drives are attached) on VMs in CCP 4.5.0
Hi All, We are facing boot issues on VMs that have DATA drives. *After a VM is shut down, we are unable to power it on again unless we detach the DATA drive.* We are using Citrix CloudPlatform 4.5.0 and XenServer 6.2. *Note:* We are not facing this issue on some of the VMs that also have DATA drives; only certain VMs show this boot issue. If anyone has had this kind of issue, please let us know a permanent fix. Thanks in advance. Regards, Anil.
Re: Facing BOOT issues (if DATA drives are attached) on VMs in CCP 4.5.0
Dave, Thanks for the reply and suggestion. My understanding is that when CS deploys a fresh VM from a template or ISO, a ROOT drive (C drive) is created that acts as the primary drive for the whole VM life cycle (an entry is surely created in the cloud database tables to track this), right? So the ROOT drive should always be first in the BIOS boot order, and drives added or attached later get numbers 2, 3 and so on. In XenCenter we can see the VM's drive numbers as 0, 1, 2, 3 (here *0* is the primary drive, i.e. the ROOT/C drive). We were not facing this issue when rebooting a VM from the CloudPortal UI; it happens only when the VM is shut down and then booted again. Please recommend any suggestions on this issue. Regards, Anil. On Wed, Sep 2, 2015 at 7:28 PM, Dave Dunaway <dave.duna...@gmail.com> wrote: > Check the boot order in the BIOS and make sure the VM is not trying to boot > off the DATA disk. This is maybe why you don't see this issue when there is > only one disk attached. > > hth. > > dave. > > On Wed, Sep 2, 2015 at 7:38 AM, anil lakineni < > anilkumar459.lakin...@gmail.com> wrote: > > > Hi Rajani, > > > > Host is *XenServer 6.2* > > > > Guest OS is *Windows 2008 R2 & Windows 2012 R2* > > > > Regards, > > Anil. > > > > On Wed, Sep 2, 2015 at 3:42 PM, Rajani Karuturi <raj...@apache.org> > wrote: > > > > > whats the host and guest os? > > > > > > ~Rajani > > > > > > On Wed, Sep 2, 2015 at 2:33 PM, anil lakineni < > > > anilkumar459.lakin...@gmail.com> wrote: > > > > > > > Hi All, > > > > > > > > We are facing boot issues when VMs have DATA drives. 
> > > > > > > > > > > > *After VM was shut down we were unable to power on server unless we > > > detach > > > > Data Drive.* > > > > > > > > We are using Citrix Cloud Portal 4.5.0 and XenServer 6.2 > > > > > > > > *Note: *We have not facing this issue on some of the VM's which also > > have > > > > DATA drives but facing this kind of boot issue on only some VMs. > > > > > > > > If any one would have this kind of issue, please let us know the > > > persistent > > > > solution to fix the issue. > > > > Thanks in advance. > > > > > > > > Regards, > > > > Anil. > > > > > > > > > >
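Dave's boot-order suggestion can be checked from the XenServer CLI, sketched here as an editor's illustration (all UUIDs are placeholders):

```shell
# Show which VBDs the VM has and which are marked bootable:
xe vbd-list vm-uuid=<vm-uuid> params=userdevice,bootable,vdi-name-label
# For HVM guests (Windows), the boot order lives in HVM-boot-params:
xe vm-param-get uuid=<vm-uuid> param-name=HVM-boot-params
# Ensure only the ROOT disk's VBD (userdevice 0) is bootable:
xe vbd-param-set uuid=<root-vbd-uuid> bootable=true
xe vbd-param-set uuid=<data-vbd-uuid> bootable=false
```

If a DATA disk's VBD is flagged bootable, the BIOS may try it first on a cold boot, which would match the symptom of reboots working while shutdown-then-start fails.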
Re: Facing BOOT issues (if DATA drives are attached) on VMs in CCP 4.5.0
Hi Rajani, Host is *XenServer 6.2* Guest OS is *Windows 2008 R2 & Windows 2012 R2* Regards, Anil. On Wed, Sep 2, 2015 at 3:42 PM, Rajani Karuturi <raj...@apache.org> wrote: > whats the host and guest os? > > ~Rajani > > On Wed, Sep 2, 2015 at 2:33 PM, anil lakineni < > anilkumar459.lakin...@gmail.com> wrote: > > > Hi All, > > > > We are facing boot issues when VMs have DATA drives. > > > > > > *After VM was shut down we were unable to power on server unless we > detach > > Data Drive.* > > > > We are using Citrix Cloud Portal 4.5.0 and XenServer 6.2 > > > > *Note: *We have not facing this issue on some of the VM's which also have > > DATA drives but facing this kind of boot issue on only some VMs. > > > > If any one would have this kind of issue, please let us know the > persistent > > solution to fix the issue. > > Thanks in advance. > > > > Regards, > > Anil. > > >
Urgent Question at VPC router:
Hi All, I have a web server inside a cloud VPC. It usually gets 2500 user hits on average at any time, but I need to increase the user connections, for example to 5000 hits on the web server. Do you have any idea which configurations I should change on the VPC VR to achieve this requirement? Also, does CloudPlatform with XenServer support this requirement? Please help me with your suggestions. Thanks in advance. Regards, Anil.
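One place concurrent connections are commonly capped on a Linux virtual router is netfilter connection tracking; as a hedged sketch (editor's illustration: the value is an example, and changes made by hand inside the VR are lost when the VR is rebuilt):

```shell
# Current vs. maximum tracked connections on the VR:
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
# Raise the ceiling temporarily (example value):
sysctl -w net.netfilter.nf_conntrack_max=131072
```

If `nf_conntrack_count` sits near `nf_conntrack_max` under load, new connections through the VR are dropped regardless of the web server's own limits.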
Re: [PROPOSAL] Snapshot Improvements
Hi Anshul and All, I know this is off topic here, but you guys have good knowledge of snapshots in CloudStack, so I am raising my issues in this thread; please help me. I would like to know the necessary steps to take after getting the following errors while creating volume snapshots (a scheduled, recurring daily volume snapshot policy) in the CloudStack web UI: a. Alert: I remove the entries from the UI and update any related entries in the cloud database to the Destroyed state. If removing from the UI does not work, I edit the database and set the snapshot's state to Destroyed (in the snapshots and snapshot_store_ref tables). b. Allocated: I remove the entries from the UI and update any related database entries to the Destroyed state. In this case, removing the snapshot from the UI throws errors, so I update the snapshot's state in the database to BackedUp; then I am able to remove it from the UI (with some exceptions), after which I update the snapshots table's state to Destroyed. There are no snapshot_store_ref entries for such a snapshot. c. BackingUp: I remove the entries from the UI and update the cloud database to the Destroyed state. In this case, entries exist in both the snapshots and snapshot_store_ref tables, so I update the state in both tables to Destroyed, then delete the vhd file from secondary storage, as it was not a fully completed snapshot. d. No snapshot information in the UI or database at all: In this case I restart the cloud management service; after that, the next scheduled volume snapshots are created, but the volume snapshots already skipped that day are not retried. We have not been able to identify the reason to date; even the management logs show no information. In all the above cases, the cause could be network disruptions or something else; we are still investigating (please let us know if you suspect any reasons). 
My questions on the above cases are: 1. Is it possible to re-initiate or resume snapshot creation in any of the error cases listed above, given that these are scheduled (daily) snapshot policies? 2. Is the process I follow to remove the errored snapshots correct? If not, please suggest solutions. Please review and suggest solutions. Thanks, Anil. On Fri, Jul 24, 2015 at 11:12 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote: Is there currently any mechanism in place to prevent users from trying to take VM snapshots and volume snapshots at the same time? For example, is there a check in place in the relevant APIs that throws an exception if you try to get yourself into this situation? On Fri, Jul 24, 2015 at 1:54 AM, Anshul Gangwar anshul.gang...@citrix.com wrote: Rajani, Because there were observed issues leading to VM corruption if both types of snapshots are allowed to exist together. I will be fixing issues to make sure VM corruption does not happen. Regards, Anshul On 24-Jul-2015, at 12:36 pm, Rajani Karuturi raj...@apache.org wrote: Hi Anshul, Do you know why volume and VM snapshots weren't allowed to co-exist? What is being done to resolve this? ~Rajani On Thu, Jul 23, 2015 at 2:57 PM, Anshul Gangwar anshul.gang...@citrix.com wrote: I am working on improving the snapshot experience in CloudStack. The FS is available at https://cwiki.apache.org/confluence/display/CLOUDSTACK/Snapshot+Improvements . Please review and provide comments/suggestions. Regards, Anshul -- *Mike Tutkowski* *Senior CloudStack Developer, SolidFire Inc.* e: mike.tutkow...@solidfire.com o: 303.746.7302 Advancing the way the world uses the cloud http://solidfire.com/solution/overview/?video=play *™*
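The per-case cleanup described above can be sketched as DB statements (editor's illustration; the snapshot ID is a placeholder, table names as in 4.x, and the cloud DB should be backed up before any manual update):

```shell
# Find snapshots stuck outside the normal terminal states:
mysql cloud -e "SELECT id, name, state FROM snapshots WHERE state NOT IN ('BackedUp','Destroyed');"
# Cases a-c above: mark a stuck snapshot destroyed in both tables:
mysql cloud -e "UPDATE snapshots SET state = 'Destroyed' WHERE id = <snapshot-id>;"
mysql cloud -e "UPDATE snapshot_store_ref SET state = 'Destroyed' WHERE snapshot_id = <snapshot-id>;"
```

As with any manual state edit, this bypasses the management server's state machine, so it is worth doing only after the UI path has failed, as the message describes.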
Getting error while Uploading Windows vhd as template to CloudPortal
Hi All, When trying to upload a Windows VHD file as a template to Citrix CloudPlatform, I get the following error: *Template content is unsupported, or mismatch between selected format and template content. Found : data* Please suggest how to get rid of this error message. Regards, Anil.
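A quick local sanity check before re-uploading, as a hedged sketch (editor's illustration; the file name is a placeholder): a valid VHD carries the cookie `conectix` at the start of its 512-byte footer, and 'Found : data' suggests the uploaded file is not being recognized as a VHD at all (for example because it is compressed, truncated, or a raw image renamed to .vhd).

```shell
# The last 512 bytes of a VHD are its footer; the first 8 bytes of the
# footer should read "conectix":
tail -c 512 template.vhd | head -c 8; echo
# `file` should also identify a recognizable format rather than plain "data":
file template.vhd
```

If the cookie is missing, re-export the disk as a VHD (or decompress it) before uploading, and make sure the format selected in the UI matches the file.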
Re: Unable to mount Secondary Storage on SSVM
Hi All, Finally I am able to mount the secondary storage path on the SSVM. Our NFS server version is v3 and the SSVM (client) currently defaults to v4. So I tried mounting manually with version 3 and it worked!!! (On the SSVM, *man mount.nfs* says that a plain *#mount -t nfs serverIP:/path /mountpoint* will fall back to v3/v2, but the SSVM did not fall back; I don't know why NFS behaves strangely on the SSVM like this.) The existing SSVM was created recently from an old template, and older SSVMs created from the same template used to mount automatically. I couldn't understand why the issue suddenly showed up for this new SSVM. Any thoughts on this? Also, where can we find the auto-mount scripts in CloudStack, the SSVM scripts, and the CloudStack scripts on XenServer? Thanks, Anil. On Mon, May 11, 2015 at 6:12 PM, anil lakineni anilkumar459.lakin...@gmail.com wrote: Hi ilya, No luck on mount, after Stopping iptables on SSVM. Any further suggestions please Thanks, Anil. On Sun, May 10, 2015 at 2:20 AM, ilya ilya.mailing.li...@gmail.com wrote: Disregard my previous message, i see its already done. Try mounting it in the shell of SSVM by hand. If fails, disable iptables and try again. Also check the routing table to make sure all is proper. Like Andrei mentioned you may need it to be NFSv3 compatible as it will make use of no_root_squash (which must also be enabled on your share). Regards ilya On 5/9/15 1:46 PM, ilya wrote: Make sure that management network/range of IPs that VM and Storage Network (if you have one), is whitelisted as NFS access clients on the export. If that fails, try also adding public ip range to see if that helps - as a test. Regards ilya On 5/8/15 6:50 AM, Andrei Mikhailovsky wrote: Srini, you need to make sure the nfs server is configured properly to allow access from SSVM, as you can see, the access is denied. please check that the nfs server supports the same protocol version as ssvm is requesting. 
Most likely, it's nfs v3. Check logs on the nfs server to verify why it is denying access. Once this is fixed, you should be good to go. Andrei - Original Message - From: srinivas niddapu sr...@axiomio.com To: Rohit Yadav rohit.ya...@shapeblue.com, dev@cloudstack.apache.org Cc: us...@cloudstack.apache.org Sent: Friday, 8 May, 2015 2:03:10 PM Subject: RE: Unable to mount Secondary Storage on SSVM Appreciated info Rohit. As we verified our NFS storage there is no permission restrictions (* FULL ACCESS). Validated the same NFS share on the Cloud Stack Hypervisors, already mounted and data visible. We try to mount the NFS volume in the SSVM manually but its throwing error. Unable to mount. mount.nfs: access denied by server while mounting While restoring snapshot in the CloudStack UI getting below error. Status Failed to create templatecom.cloud.utils.exception.CloudRuntimeException: GetRootDir for nfs://172.30.36.51/vS02304090GCSP_NAS07 failed due to com.cloud.utils.exception.CloudRuntimeException: Unable to mount 172.30.36.51:/vS02304090GCSP_NAS07 at /mnt/SecStorage/1c7f122c-e72e-3daa-a54a-3693b89d4015 due to mount.nfs: access denied by server while mounting 172.30.36.51:/vS02304090GCSP_NAS07 at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.getRootDir(NfsSecondaryStorageResource.java:1956) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.copySnapshotToTemplateFromNfsToNfsXenserver(NfsSecondaryStorageResource.java:377) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.copySnapshotToTemplateFromNfsToNfs(NfsSecondaryStorageResource.java:444) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.createTemplateFromSnapshot(NfsSecondaryStorageResource.java:553) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.execute(NfsSecondaryStorageResource.java:632) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.executeRequest(NfsSecondaryStorageResource.java:236) 
at com.cloud.storage.resource.PremiumSecondaryStorageResource.defaultAction(PremiumSecondaryStorageResource.java:63) at com.cloud.storage.resource.PremiumSecondaryStorageResource.executeRequest(PremiumSecondaryStorageResource.java:59) at com.cloud.agent.Agent.processRequest(Agent.java:498) at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:806) at com.cloud.utils.nio.Task.run(Task.java:83) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679) Any suggestions. Thanks, Srini. -Original Message- From: Rohit Yadav [mailto:rohit.ya...@shapeblue.com] Sent: Friday, May 08, 2015 5:57 PM To: dev@cloudstack.apache.org Cc: us...@cloudstack.apache.org; srinivas niddapu Subject: Re: Unable to mount
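The fix found in the first message of this thread (forcing NFSv3) can be sketched like this (editor's illustration; the server IP, export path, and mount point are placeholders):

```shell
# Force an NFSv3 mount from the SSVM shell:
mount -t nfs -o vers=3 <nfs-server-ip>:/export/secondary /mnt/secstorage
# Confirm the negotiated NFS version on existing mounts:
nfsstat -m
```

If a plain `mount -t nfs` attempts v4 and the server only exports v3 (or restricts v4 clients), the server answers with exactly the `access denied by server while mounting` error quoted above.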
Re: Unable to mount Secondary Storage on SSVM
Hi Andrei, Our SSVM is using NFSv4, as we can see here: http://pastebin.com/HWvtYg8z When we try to mount the same path (CS secondary storage), it mounts perfectly on all the other machines (CS hosts) except the SSVM. We also have a recurring snapshot policy scheduled on our CloudStack, and those snapshots are backed up perfectly to the same secondary storage path that we are trying to mount manually on the SSVM. (I have tested and mounted the same secondary storage path on other machines, and in the snapshots folder I can see the scheduled snapshot vhd files.) So if the secondary storage path fails to mount on the SSVM, how are the snapshot vhd files reaching the same secondary storage location? Any suggestions please. Thanks, Anil. On Sun, May 10, 2015 at 2:20 AM, ilya ilya.mailing.li...@gmail.com wrote: Disregard my previous message, i see its already done. Try mounting it in the shell of SSVM by hand. If fails, disable iptables and try again. Also check the routing table to make sure all is proper. Like Andrei mentioned you may need it to be NFSv3 compatible as it will make use of no_root_squash (which must also be enabled on your share). Regards ilya On 5/9/15 1:46 PM, ilya wrote: Make sure that management network/range of IPs that VM and Storage Network (if you have one), is whitelisted as NFS access clients on the export. If that fails, try also adding public ip range to see if that helps - as a test. Regards ilya On 5/8/15 6:50 AM, Andrei Mikhailovsky wrote: Srini, you need to make sure the nfs server is configured properly to allow access from SSVM, as you can see, the access is denied. please check that the nfs server supports the same protocol version as ssvm is requesting. Most likely, it's nfs v3. Check logs on the nfs server to verify why it is denying access. Once this is fixed, you should be good to go. 
Andrei - Original Message - From: srinivas niddapu sr...@axiomio.com To: Rohit Yadav rohit.ya...@shapeblue.com, dev@cloudstack.apache.org Cc: us...@cloudstack.apache.org Sent: Friday, 8 May, 2015 2:03:10 PM Subject: RE: Unable to mount Secondary Storage on SSVM Appreciated info Rohit. As we verified our NFS storage there is no permission restrictions (* FULL ACCESS). Validated the same NFS share on the Cloud Stack Hypervisors, already mounted and data visible. We try to mount the NFS volume in the SSVM manually but its throwing error. Unable to mount. mount.nfs: access denied by server while mounting While restoring snapshot in the CloudStack UI getting below error. Status Failed to create templatecom.cloud.utils.exception.CloudRuntimeException: GetRootDir for nfs://172.30.36.51/vS02304090GCSP_NAS07 failed due to com.cloud.utils.exception.CloudRuntimeException: Unable to mount 172.30.36.51:/vS02304090GCSP_NAS07 at /mnt/SecStorage/1c7f122c-e72e-3daa-a54a-3693b89d4015 due to mount.nfs: access denied by server while mounting 172.30.36.51:/vS02304090GCSP_NAS07 at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.getRootDir(NfsSecondaryStorageResource.java:1956) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.copySnapshotToTemplateFromNfsToNfsXenserver(NfsSecondaryStorageResource.java:377) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.copySnapshotToTemplateFromNfsToNfs(NfsSecondaryStorageResource.java:444) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.createTemplateFromSnapshot(NfsSecondaryStorageResource.java:553) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.execute(NfsSecondaryStorageResource.java:632) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.executeRequest(NfsSecondaryStorageResource.java:236) at com.cloud.storage.resource.PremiumSecondaryStorageResource.defaultAction(PremiumSecondaryStorageResource.java:63) at 
com.cloud.storage.resource.PremiumSecondaryStorageResource.executeRequest(PremiumSecondaryStorageResource.java:59) at com.cloud.agent.Agent.processRequest(Agent.java:498) at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:806) at com.cloud.utils.nio.Task.run(Task.java:83) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679) Any suggestions. Thanks, Srini. -Original Message- From: Rohit Yadav [mailto:rohit.ya...@shapeblue.com] Sent: Friday, May 08, 2015 5:57 PM To: dev@cloudstack.apache.org Cc: us...@cloudstack.apache.org; srinivas niddapu Subject: Re: Unable to mount Secondary Storage on SSVM On 08-May-2015, at 2:05 pm, anil lakineni anilkumar459.lakin...@gmail.com wrote: and this Secondary Storage path is mounting and working with all other servers except with SSVM.. Getting error mount.nfs: access denied by server while
Re: Unable to mount Secondary Storage on SSVM
Hi ilya, No luck on mount, after Stopping iptables on SSVM. Any further suggestions please Thanks, Anil. On Sun, May 10, 2015 at 2:20 AM, ilya ilya.mailing.li...@gmail.com wrote: Disregard my previous message, i see its already done. Try mounting it in the shell of SSVM by hand. If fails, disable iptables and try again. Also check the routing table to make sure all is proper. Like Andrei mentioned you may need it to be NFSv3 compatible as it will make use of no_root_squash (which must also be enabled on your share). Regards ilya On 5/9/15 1:46 PM, ilya wrote: Make sure that management network/range of IPs that VM and Storage Network (if you have one), is whitelisted as NFS access clients on the export. If that fails, try also adding public ip range to see if that helps - as a test. Regards ilya On 5/8/15 6:50 AM, Andrei Mikhailovsky wrote: Srini, you need to make sure the nfs server is configured properly to allow access from SSVM, as you can see, the access is denied. please check that the nfs server supports the same protocol version as ssvm is requesting. Most likely, it's nfs v3. Check logs on the nfs server to verify why it is denying access. Once this is fixed, you should be good to go. Andrei - Original Message - From: srinivas niddapu sr...@axiomio.com To: Rohit Yadav rohit.ya...@shapeblue.com, dev@cloudstack.apache.org Cc: us...@cloudstack.apache.org Sent: Friday, 8 May, 2015 2:03:10 PM Subject: RE: Unable to mount Secondary Storage on SSVM Appreciated info Rohit. As we verified our NFS storage there is no permission restrictions (* FULL ACCESS). Validated the same NFS share on the Cloud Stack Hypervisors, already mounted and data visible. We try to mount the NFS volume in the SSVM manually but its throwing error. Unable to mount. mount.nfs: access denied by server while mounting While restoring snapshot in the CloudStack UI getting below error. 
Status Failed to create templatecom.cloud.utils.exception.CloudRuntimeException: GetRootDir for nfs://172.30.36.51/vS02304090GCSP_NAS07 failed due to com.cloud.utils.exception.CloudRuntimeException: Unable to mount 172.30.36.51:/vS02304090GCSP_NAS07 at /mnt/SecStorage/1c7f122c-e72e-3daa-a54a-3693b89d4015 due to mount.nfs: access denied by server while mounting 172.30.36.51:/vS02304090GCSP_NAS07 at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.getRootDir(NfsSecondaryStorageResource.java:1956) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.copySnapshotToTemplateFromNfsToNfsXenserver(NfsSecondaryStorageResource.java:377) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.copySnapshotToTemplateFromNfsToNfs(NfsSecondaryStorageResource.java:444) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.createTemplateFromSnapshot(NfsSecondaryStorageResource.java:553) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.execute(NfsSecondaryStorageResource.java:632) at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.executeRequest(NfsSecondaryStorageResource.java:236) at com.cloud.storage.resource.PremiumSecondaryStorageResource.defaultAction(PremiumSecondaryStorageResource.java:63) at com.cloud.storage.resource.PremiumSecondaryStorageResource.executeRequest(PremiumSecondaryStorageResource.java:59) at com.cloud.agent.Agent.processRequest(Agent.java:498) at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:806) at com.cloud.utils.nio.Task.run(Task.java:83) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679) Any suggestions. Thanks, Srini. 
-----Original Message-----
From: Rohit Yadav [mailto:rohit.ya...@shapeblue.com]
Sent: Friday, May 08, 2015 5:57 PM
To: dev@cloudstack.apache.org
Cc: us...@cloudstack.apache.org; srinivas niddapu
Subject: Re: Unable to mount Secondary Storage on SSVM

On 08-May-2015, at 2:05 pm, anil lakineni anilkumar459.lakin...@gmail.com wrote:

and this Secondary Storage path is mounting and working with all other servers except with SSVM.. Getting error mount.nfs: access denied by server while mounting xx.xx.xx.xx:/

Check your NFS exports file and do a chmod 777 on the mount points, such as /export/secondary or /export/primary.

Regards, Rohit Yadav
Software Architect, ShapeBlue
M. +91 88 262 30892 | rohit.ya...@shapeblue.com
Blog: bhaisaab.org | Twitter: @_bhaisaab
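Rohit's suggestion to check the exports file, combined with ilya's note about no_root_squash, can be illustrated with a sketch of /etc/exports. The paths and the client network range below are placeholders; substitute your own management/storage ranges.

```
# /etc/exports on the NFS server (illustrative; substitute your own paths and ranges)
# rw             : read-write access for the SSVM
# no_root_squash : the SSVM mounts as root, so root must not be squashed
# insecure       : accept client source ports above 1023, which some clients use
/export/secondary 10.0.0.0/24(rw,async,no_root_squash,insecure)
/export/primary   10.0.0.0/24(rw,async,no_root_squash,insecure)
```

After editing, apply the changes with `exportfs -ra` on the server, then re-check from the SSVM with `showmount -e` before retrying the mount.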
Unable to mount Secondary Storage on SSVM
Hi All, I have a strange issue: our secondary storage (NFS) is not mounting on the SSVM, although it worked with the SSVM before. The CloudStack version is 4.3.1 and the XenServer version is 6.2.0. This secondary storage path mounts and works with all other servers except the SSVM. The error is: mount.nfs: access denied by server while mounting xx.xx.xx.xx:/ Please help with this. Looking forward to your replies. Thank You, Anil.
Re: Centralized Management console for all tenants on CS 4.3.1..??
Somesh, Thank you for your reply. My CS environment is an Advanced Zone without the Security Groups option. Do we have other possibilities, like blocking communication between VMs at the virtual router (VR) level, or anything else? If the blocking can be done at the VR level, can you please elaborate on the process (any commands that need to be executed on the VR)? Looking forward to your replies. Thank You, Anil.

On Wed, May 6, 2015 at 3:59 AM, Somesh Naidu somesh.na...@citrix.com wrote:

You could use Security Groups to achieve this. Somesh
CloudPlatform Escalations
Citrix Systems, Inc.

-----Original Message-----
From: anil lakineni [mailto:anilkumar459.lakin...@gmail.com]
Sent: Tuesday, May 05, 2015 4:32 PM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: Centralized Management console for all tenants on CS 4.3.1..??

Hi All, Comments are inline. A quick question: is there any possibility to stop communication between two VMs that are using a shared network? (I need communication between some VMs on this shared network, but I do not want communication between certain other VMs on the same shared network.) Waiting for your valuable replies. Thank You, Anil.

On Tue, May 5, 2015 at 7:50 PM, anil lakineni anilkumar459.lakin...@gmail.com wrote:

Hi All, I need some help. I want to monitor all the tenant VMs from a centralized VM within CloudStack. My test plan is: I have two isolated accounts, and each account contains a VM. I will be deploying a VM (the centralized management VM) under the ROOT account. Now I want to monitor those two isolated accounts' VMs from the ROOT account VM, and the main concern is that no two tenants' VMs should communicate. Can anyone please suggest the best possible ways to accomplish this? My CS version is 4.3.1 and my XenServer version is 6.2.0. Looking forward to your valuable comments. Best Regards, Anil.
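As a rough sketch of what VR-level blocking could look like (an assumption, not a supported configuration: CloudStack programs the virtual router's iptables rules itself, so manual entries like these are lost whenever the VR is restarted or reconfigured), with hypothetical guest IPs:

```shell
# On the virtual router; 10.1.1.10 and 10.1.1.20 are placeholder guest IPs.
# Drop forwarded traffic between the two VMs in both directions:
iptables -I FORWARD -s 10.1.1.10 -d 10.1.1.20 -j DROP
iptables -I FORWARD -s 10.1.1.20 -d 10.1.1.10 -j DROP
```

Note also that VMs on the same shared subnet typically talk to each other directly at layer 2 and never traverse the VR, so FORWARD rules on the router would not catch that traffic; this is why hypervisor-level filtering (security groups, as Somesh suggests) is the usual answer.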
Re: Centralized Management console for all tenants on CS 4.3.1..??
Hi ilya, Thanks for the additional information. We already have an existing CS environment (with XenServer hosts) that does not have the security groups option enabled. So, do we have any options to enable security groups for an existing zone that is already in production? Thank You, Anil.

On Wed, May 6, 2015 at 9:15 AM, ilya ilya.mailing.li...@gmail.com wrote:

A little more context to what Somesh mentioned. If you are running Xen/KVM, you can deploy a CloudStack zone with Security Groups. This means CloudStack will manage the iptables rules on the hypervisors and push only the ACL rules you define in CloudStack. This is supposed to be very scalable, and it addresses common firewall-management challenges as well as the need for VLAN isolation. It is a really powerful concept, to say the least, in very large setups, since it abstracts away a lot of firewall- and switch-level complexity.

On 5/5/15 3:29 PM, Somesh Naidu wrote:

You could use Security Groups to achieve this. Somesh
CloudPlatform Escalations
Citrix Systems, Inc.

-----Original Message-----
From: anil lakineni [mailto:anilkumar459.lakin...@gmail.com]
Sent: Tuesday, May 05, 2015 4:32 PM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: Centralized Management console for all tenants on CS 4.3.1..??

Hi All, Comments are inline. A quick question: is there any possibility to stop communication between two VMs that are using a shared network? (I need communication between some VMs on this shared network, but I do not want communication between certain other VMs on the same shared network.) Waiting for your valuable replies. Thank You, Anil.

On Tue, May 5, 2015 at 7:50 PM, anil lakineni anilkumar459.lakin...@gmail.com wrote:

Hi All, I need some help. I want to monitor all the tenant VMs from a centralized VM within CloudStack. My test plan is: I have two isolated accounts, and each account contains a VM. I will be deploying a VM (the centralized management VM) under the ROOT account.
Now I want to monitor those two isolated accounts' VMs from the ROOT account VM, and the main concern is that no two tenants' VMs should communicate. Can anyone please suggest the best possible ways to accomplish this? My CS version is 4.3.1 and my XenServer version is 6.2.0. Looking forward to your valuable comments. Best Regards, Anil.
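For reference, in a zone that does have security groups enabled, the hypervisor-level rules ilya describes are managed through the createSecurityGroup and authorizeSecurityGroupIngress APIs. A hedged CloudMonkey sketch, where the group name and the central management VM's IP are placeholders:

```shell
# Allow only the central ROOT-account monitoring VM (placeholder IP 10.1.1.5)
# to reach member VMs on port 22; other ingress stays blocked by default.
cloudmonkey create securitygroup name=central-monitoring \
    description="ingress from the ROOT monitoring VM only"
cloudmonkey authorize securitygroupingress securitygroupname=central-monitoring \
    protocol=TCP startport=22 endport=22 cidrlist=10.1.1.5/32
```

Security groups deny all ingress by default, so tenant VMs placed in such a group would accept connections only from the monitoring VM and could not reach each other unless a rule explicitly allowed it.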
Centralized Management console for all tenants on CS 4.3.1..??
Hi All, I need some help. I want to monitor all the tenant VMs from a centralized VM within CloudStack. My test plan is: I have two isolated accounts, and each account contains a VM. I will be deploying a VM (the centralized management VM) under the ROOT account. Now I want to monitor those two isolated accounts' VMs from the ROOT account VM, and the main concern is that no two tenants' VMs should communicate. Can anyone please suggest the best possible ways to accomplish this? My CS version is 4.3.1 and my XenServer version is 6.2.0. Looking forward to your valuable comments. Best Regards, Anil.
Re: Centralized Management console for all tenants on CS 4.3.1..??
Hi All, Comments are inline. A quick question: is there any possibility to stop communication between two VMs that are using a shared network? (I need communication between some VMs on this shared network, but I do not want communication between certain other VMs on the same shared network.) Waiting for your valuable replies. Thank You, Anil.

On Tue, May 5, 2015 at 7:50 PM, anil lakineni anilkumar459.lakin...@gmail.com wrote:

Hi All, I need some help. I want to monitor all the tenant VMs from a centralized VM within CloudStack. My test plan is: I have two isolated accounts, and each account contains a VM. I will be deploying a VM (the centralized management VM) under the ROOT account. Now I want to monitor those two isolated accounts' VMs from the ROOT account VM, and the main concern is that no two tenants' VMs should communicate. Can anyone please suggest the best possible ways to accomplish this? My CS version is 4.3.1 and my XenServer version is 6.2.0. Looking forward to your valuable comments. Best Regards, Anil.
Re: Migrating to VPC/Site-to-Site VPN
Logan, Create a new account in that customer's domain and then create a network (VPC only, no other networks) under that account. Then shut down the VMs in the existing account; there is a move-to-another-account option (if the ACS version is 4.2.0 or above). Now you can move the VMs to the new account. Thanks, Anil.

On Wed, Apr 1, 2015 at 3:45 AM, Logan Barfield lbarfi...@tqhosting.com wrote:

We have a customer that is currently set up on an isolated network in an advanced zone. They recently mentioned that they need site-to-site VPN connectivity for their application. Is it possible to move an existing isolated network into a VPC for site-to-site VPN functionality? Or is there another way to set up a site-to-site VPN using the existing network? Thank You, Logan Barfield, Tranquil Hosting
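The "move option" mentioned above corresponds to the assignVirtualMachine API, which works on stopped VMs. A rough CloudMonkey sketch of the steps, with every ID and account name as a placeholder to be looked up in your own environment (e.g. via listVirtualMachines and listNetworks):

```shell
# 1. Stop the VM in the old account.
cloudmonkey stop virtualmachine id=<vm-id>
# 2. Re-assign it to the new account and attach it to the VPC tier network.
cloudmonkey assign virtualmachine virtualmachineid=<vm-id> \
    account=<new-account> domainid=<domain-id> networkids=<vpc-tier-network-id>
# 3. Start it again under the new account.
cloudmonkey start virtualmachine id=<vm-id>
```

Note that the VM's NIC is re-created on the target network, so it will typically get a new guest IP inside the VPC tier.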