Thanks Geoff, good article, that solved my question.
/Sonali
-Original Message-
From: Somesh Naidu [mailto:somesh.na...@citrix.com]
Sent: Tuesday, January 6, 2015 11:39 PM
To: users@cloudstack.apache.org
Subject: RE: Xenserver pool HA in Cloudstack
Nice article Geoff! I am surprised this information is not present in standard ACS documentation.
If the host is in maintenance mode and you are not seeing the 'x' option, try the
deleteHost API.
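For reference, with CloudMonkey (configured with your API keys) the call would look roughly like this; the host UUID is a placeholder taken from the list output:

  # hedged sketch: look the host up, then remove it via the deleteHost API
  cloudmonkey list hosts name=<hostname> filter=id,name,state,resourcestate
  # forced=true is only needed if the host is not cleanly in Maintenance
  cloudmonkey delete host id=<host-uuid> forced=true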
On 1/7/15, 5:01 AM, "Somesh Naidu" wrote:
>When you put the host in maintenance, it does not always enter
>maintenance mode; sometimes it is stuck in "PrepareMaintenance" mode.
>
>So you might try it again and make sure the status is "Maintenance".
1. Make sure you have 'no_root_squash' in your /etc/exports.
2. Make sure you have portmap listening on all interfaces.
3. Make sure you have NFSv4 specifically turned off.
4. Lastly, make sure iptables allows the NFS related ports. You can disable
the firewall for the time being until you can add the NFS rules; a quick sketch of these checks follows below.
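For illustration, a minimal sketch of those checks on the NFS server (export path and network are placeholders; auxiliary rpc port handling varies by distro):

  # /etc/exports - export the primary storage with root squashing disabled
  /export/primary  10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)

  # re-export and verify what is being offered
  exportfs -ra
  showmount -e localhost

  # quick NFSv3 test mount from the XenServer host (XS 6.x NFS SRs use v3)
  mount -t nfs -o vers=3 <nfs-server>:/export/primary /mnt

  # allow the main NFS port through iptables; the auxiliary rpc ports vary
  # unless pinned in the distro's NFS config
  iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
  iptables -A INPUT -p udp --dport 2049 -j ACCEPT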
Hi Chris,
You have configured the IP address 172.16.0.101 on bond0 (management &
storage). Is that configured on the bond network only, or on a bridge? What is the
gateway for the 172.16.0.0/24 CIDR which you have given in the pod setup? You
can verify that in your SSVM details (Infrastructure -> System VMs).
When you put the host in maintenance, it does not always enter maintenance
mode; sometimes it is stuck in "PrepareMaintenance" mode.
So you might try it again and make sure the status is "Maintenance". At that
point, you might also want to do a screen refresh just in case the UI was
behaving funny.
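If you prefer to drive this through the API rather than the UI, a rough CloudMonkey equivalent would be the following (host UUID is a placeholder, and the verb form is from memory):

  # re-issue maintenance and then confirm the resource state shows "Maintenance"
  cloudmonkey prepare hostformaintenance id=<host-uuid>
  cloudmonkey list hosts id=<host-uuid> filter=name,state,resourcestate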
I had it in maintenance mode for a while, then I took it out and removed it
from the pool in XenCenter. I never saw the "X" to remove it gracefully from
the cluster.
At this moment the status is set to "Disconnected".
Thanks,
Motty
On 01/06/2015 03:16 PM, Somesh Naidu wrote:
It should show you an "X" button to remove the host.
It should show you an "X" button to remove the host. Have you checked if the
host has indeed entered maintenance mode? What does the status of the host
show as?
-Original Message-
From: Motty Cruz [mailto:motty.c...@gmail.com]
Sent: Tuesday, January 06, 2015 5:52 PM
To: users@cloudstack.apache.org
Hello All,
I would like to gracefully remove a XenServer host from the CloudStack
cluster, but I can't find the steps to do so. I set the server in
maintenance mode but there are no options to remove it. Perhaps a novice
question; I am using CloudStack 4.4.1.
Thanks,
Motty
We are currently running a single location CloudStack deployment:
- 1 Hardware firewall
- 1 Management/Database Server
- 1 NFS staging store (for S3 secondary storage)
- Ceph RBD for primary storage
- 4 Hypervisors
- 1 Zone/Pod/Cluster
We are looking to expand our deployment to other datacenters...
Thanks for the info.
I finally ended up restarting the management servers since there was a
DB split-brain issue underneath.
FG
On 2015-01-06 12:32 PM, ilya musayev wrote:
Here are my notes on cleaning up stuck VmWorkJobQueue; I use MySQL
Workbench to make changes interactively...
Messages in log file: ...
Hi Gopalakrishnan,
The management and storage range is 172.16.0.0/24
The public range is xxx.xxx.61.0/24
Thanks in advance!
Agreed. Very good article. I have a question though - is it really
appropriate to describe this as “self-healing”, when there are manual
steps to be performed in case of failure of the Pool Master?
Typically, I use this term to refer to situations where the master fails,
a different host gets elected...
Nice article Geoff! I am surprised this information is not present in standard
ACS documentation.
-Original Message-
From: Geoff Higginbottom [mailto:geoff.higginbot...@shapeblue.com]
Sent: Tuesday, January 06, 2015 11:48 AM
To: users@cloudstack.apache.org
Subject: RE: Xenserver pool HA
Here are my notes on cleaning up stuck VmWorkJobQueue; I use MySQL
Workbench to make changes interactively...
Messages in log file:
(1013,VmWorkJobQueue, 2375) is reaching concurrency limit 1
(1013,VmWorkJobQueue, 2375) is reaching concurrency limit 1
(1433,VmWorkJobQueue, 2470) is reaching concurrency limit 1
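The rest of the notes are cut off here, but the general shape of the cleanup is removing the stuck work job and its queue items from the cloud database. A very rough sketch, going from memory of the 4.x schema (back up the DB and ideally stop the management server first; the IDs come from log lines like the ones above):

  # inspect the stuck VM work jobs
  mysql -u cloud -p cloud -e "SELECT id, job_cmd, job_status FROM async_job
      WHERE job_dispatcher='VmWorkJobDispatcher' AND job_status=0;"

  # remove one stuck job and its queue entry (IDs are placeholders)
  mysql -u cloud -p cloud -e "DELETE FROM sync_queue_item WHERE content_id=<job-id>;
      DELETE FROM vm_work_job WHERE id=<job-id>;
      DELETE FROM async_job WHERE id=<job-id>;"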
@Suresh,
The link you referenced is a design doc for a possible future feature; this is
NOT in CloudStack as of today.
@Sonali,
As promised, here is a link to the article I have been working on today
covering the use of XenServer HA with CloudStack,
http://shapeblue.com/cloudstack/xenserver-n
+1 for implementing this option, which will reduce the upgrade time window as
well.
-Original Message-
From: Somesh Naidu [mailto:somesh.na...@citrix.com]
Sent: 06 January 2015 21:31
To: users@cloudstack.apache.org
Subject: RE: How to reduce the size of the cloud_usage database
Just saw this ...
Hello,
Is there a way to implement XenServer's guest VIF locking in ACS? Or
maybe VIF locking is planned for a future release?
http://docs.vmd.citrix.com/XenServer/6.2.0/1.0/en_gb/reference.html#networking-standalone_host_changing_network_config
Thanks.
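For reference, on the XenServer host itself the XS 6.2 CLI for VIF locking looks roughly like this (UUIDs and the IP are placeholders; see the linked reference for the exact parameter names):

  # hedged sketch: lock a guest's VIF down to a single allowed source IP
  xe vif-param-set uuid=<vif-uuid> locking-mode=locked
  xe vif-param-set uuid=<vif-uuid> ipv4-allowed=<allowed-ip>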
From XenServer 6.2 onwards CloudStack depends on (leverages) XenServer's native HA
capabilities. Please check the link below:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/User+VM+HA+using+native+XS+HA+capabilities
regards
sadhu
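For context, XenServer pool HA itself is switched on at the pool level with something like the following (the heartbeat SR UUID is a placeholder; the wiki page above covers the CloudStack-side settings):

  # hedged sketch: enable XS HA using a shared SR for the heartbeat
  xe pool-ha-enable heartbeat-sr-uuids=<shared-sr-uuid>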
-Original Message-
From: Somesh Naidu [mailto:somesh.na...@citrix.com]
Can you try with *async* instead?
If it still fails, check SMlog (/var/log/SMlog on the XenServer host) to see if there are more
details on why the mount is failing when creating the SR.
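That is, roughly this change in /etc/exports on the NFS server, followed by a re-export (the path is the one from this thread; adjust to your setup):

  /mnt/data/VM-TT/primary *(rw,async,no_root_squash,no_subtree_check)
  exportfs -ra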
-Original Message-
From: Tejas Gadaria [mailto:refond.g...@gmail.com]
Sent: Tuesday, January 06, 2015 7:44 AM
To: d...@cloudstack.apache.org
C
Just saw this ...
> new (global) configuration option to limit size
+1 for the above.
It would be useful to have similar functionality developed for cloud_usage as
we have for alerts and events.
Sonali, what version of XS are you using?
-Original Message-
From: Geoff Higginbottom [mailto:geoff.higginbot...@shapeblue.com]
Sent: Tuesday, January 06, 2015 9:03 AM
To: users@cloudstack.apache.org
Subject: RE: Xenserver pool HA in Cloudstack
Sonali,
There have been some changes recently...
Dave,
Deleting records from cloud_usage is safe and permitted. These records are not
used by any CS functionality. This data (raw usage data) is consumed by
entities external to CS (billing software, etc.). This data is exposed by CS via
the listUsage API. Deleting records from the cloud_usage table will not affect CloudStack itself.
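If you do prune it, the statement involved is roughly the following; the schema details are from memory, so back the database up first and keep whatever window your billing integration still reads:

  # hedged sketch: back up, then drop raw usage records older than six months
  mysqldump -u cloud -p cloud_usage > cloud_usage_backup.sql
  mysql -u cloud -p cloud_usage -e "DELETE FROM cloud_usage
      WHERE end_date < DATE_SUB(NOW(), INTERVAL 6 MONTH);"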
Sonali,
There have been some changes recently and as luck would have it today I am
working on a blog article for www.shapeblue.com covering the correct way to
configure HA in XenServer. It's due to go live later today; I'll respond to
this thread once it's live.
Regards
Geoff Higginbottom
Hi,
I have a XenServer cluster set up with 4 hosts in it. I was wondering about the HA
configuration of the XenServer pool. In some places I saw that CloudStack HA
and XenServer HA are different, and that one shouldn't use XenServer HA since
CloudStack HA is active by default. Is that true?
Yours sincerely,
Hi Tejas.
I am able to mount it on the XS host through the CLI using the "mount.nfs4" command.
I am using Ubuntu 12.04 Server as nfs server.
Regards,
Tejas
On Tue, Jan 6, 2015 at 12:54 PM, Tejas Sheth wrote:
> Hi Tejas,
>
>Have you checked whether you are able to manually mount the NFS share to the Xen
> server or not?
Hi Prashant,
The /etc/exports file content is given below:
/mnt/data/VM-TT/primary *(rw,sync,no_root_squash,no_subtree_check)
The problem still persists.
I have an Ubuntu 12.04 server acting as the NFS server.
Regards,
Tejas
On Tue, Jan 6, 2015 at 12:40 PM, Prashant Kumar Mishra <
prashantkumar.mis...@citrix.com> wrote:
Hi David,
There is no official way to reduce the size of the cloud_usage database
as it gets populated from the cloud database.
Can you share why you want to reduce it? Are you facing any MySQL
performance issues?
If your DB size is causing problems we could work with you to develop a
new (global) configuration option to limit its size.
Hi all,
Is there an "official" procedure on how to reduce the size of the cloud_usage
database?
Thanks,
Dave
Restarting the MS should work. If you don't want to restart, clean it up manually
in the DB; it seems that because of one unfinished job, the other jobs are also
pending.
On 1/6/15, 4:09 AM, "Francois Gaudreault" wrote:
>These are 60min and 1440min for standard jobs and 600sec for VM related
>jobs, right? I have t...