It appears you have pending jobs that are not letting you go through.
Usually stopping and starting the management service should be enough.
However, try running this script from your management server to show us
what you have running in the async_job table.
> #!/bin/bash
> DATESTAMP=$(date +%m%d%y-%H%M%S)
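(The script got cut off above; a rough sketch of how the query part might continue - the MySQL user and the exact columns are assumptions, adjust for your deployment:)
> # list jobs that are still pending (job_status = 0) in the async_job table
> mysql -u cloud -p cloud \
>   -e "SELECT id, job_cmd, job_status, created FROM async_job WHERE job_status = 0;" \
>   > "async_job-report-${DATESTAMP}.txt"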
You need another cert for the proxy host.
In theory, you could have done it all with one SSL cert in front of
ha-proxy, then restricted communication on port 8080 via iptables from the MS to
ha-proxy.
Ideally, though, SSL across the board is better.
With that said, get one more cert for ha-proxy.
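For the iptables part, a minimal sketch on the ha-proxy host (the management server address 10.1.1.5 is only a placeholder - use your own):
# allow plain-HTTP 8080 only from the management server, drop everything else
iptables -A INPUT -p tcp --dport 8080 -s 10.1.1.5 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP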
On 4/13
Hi ilya and all,
Good day to you, and thank you for your reply.
Yes, I was able to access the second management server using http. To
resolve the problem, I ended up purchasing another SSL certificate for the
second management server, converting it to PKCS12 format, and enabling
SSL on the server.
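(A rough sketch of the PKCS12 conversion step, with placeholder file names rather than the exact ones used here:)
openssl pkcs12 -export -in second-ms.crt -inkey second-ms.key \
  -certfile ca-chain.crt -out second-ms.p12 -name cloud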
Rafael
Please see response in-line:
On 4/12/16 3:15 PM, Rafael Weingärtner wrote:
> Ilya that is interesting.
> By multiple CloudStack environments, you mean environments that do not have
> any link between them? I mean, they are not “regions” of one another or
> something like that.
>
> Do you
Indra
Both MGMT servers should be accessible via web browser.
However, in your case, since you did not enable SSL on the second server (as
evident by port 8080), you need to use http and not https.
Try http://second-management-server:8080/client/
Also, you can get away with a single SSL cert for both MG
Just curious - is acpid installed on the VM?
On 4/13/16 6:18 AM, Simon Godard wrote:
> Hi,
>
> I am trying to understand why a destroyVirtualMachine API call would take
> around 1 hour to get a successful async job result. From CloudStack log, I
> can see that the StopVmCmd occurred right away, but
Gabriel
What I'm mentioning is all going to be donated to ACS (and kindly
developed by the great team @ ShapeBlue).
We have many more things in the pipeline to make ACS better - I just can't
speak of them as we haven't finalized the internal feature roadmap.
CloudStack Manager will be developed in house init
1- Run the SSVM health check script which resides on the SSVM, and check the
report for any errors (see the sketch below).
2- Monitor the SSVM log to find the exact issue.
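A rough sketch of both steps, run from the host carrying the SSVM (the link-local IP is a placeholder, and the key path, port and log file follow the usual system-VM defaults - verify them on your version):
# run the built-in health check inside the SSVM
ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@<ssvm-link-local-ip> /usr/local/cloud/systemvm/ssvm-check.sh
# then watch the SSVM agent log for errors while reproducing the problem
ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@<ssvm-link-local-ip> tail -f /var/log/cloud/cloud.out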
On 4/14/16, 8:00 AM, "Tim Mackey" wrote:
>Umm, a thought. Has the secondary storage VM started (view on
>infrastructure tab). If not, you'll want to debug that first.
Umm, a thought. Has the secondary storage VM started (view on
infrastructure tab). If not, you'll want to debug that first. Here's some
debugging tips for SSVM:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSVM,+templates,+Secondary+storage+troubleshooting
On Wed, Apr 13, 2016 at 10:20 P
I did not understand - did you mount the NFS folder on the MS?
I also did not understand the link between the user that is used to run the MS
and the point regarding the SSVM.
The MS sends the command to download the ISO to the SSVM, of that I am almost
sure. Haven't you checked the SSVM? Have you checked
Thanks for your time Rafael, my answers are between the lines...
On Wed, Apr 13, 2016 at 10:47 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:
> Well, I saw your other email with the error.
>
> Did you try to mount the secondary storage at the MS?
>
I tried it just now:
cloud01:/primary on /mnt/85d7
Well, I saw your other email with the error.
Did you try to mount the secondary storage on the MS?
If I am not wrong, the secondary storage VM is the one that mounts the
storage and downloads the ISOs and VHDs. Have you tried to access it and
check its connectivity? Did you set Google's DNS
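A quick way to test the mount from the MS (a sketch only - cloud01:/secondary is a guess at your export path, use the actual export configured for the zone):
mkdir -p /mnt/secstorage
mount -t nfs cloud01:/secondary /mnt/secstorage
ls /mnt/secstorage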
On Wed, Apr 13, 2016 at 10:23 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:
> how are you adding the ISO?
> I mean uploading or registering a link? After you add, is there any
> progress on the download? Didn't you see any other log on
> management-server.log?
>
Only with "Register
Hi Tim, thanks for your time...
On Wed, Apr 13, 2016 at 10:06 PM, Tim Mackey wrote:
> There are a few possibilities, but the first thing to know is that
> catalina.out isn't the log you should be looking at. Take a look here for
> some tips on troubleshooting:
>
> http://docs.cloudstack.apache.
How are you adding the ISO?
I mean, uploading it or registering a link? After you add it, is there any
progress on the download? Didn't you see anything else in
management-server.log?
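In case it helps, something like this follows the download activity in the log (assuming the default 4.x log location):
tail -f /var/log/cloudstack/management/management-server.log | grep -iE 'download|template|iso'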
On Wed, Apr 13, 2016 at 10:06 PM, Tim Mackey wrote:
> Welcome, Carlos.
>
> There are a few possibilities, but the first
Welcome, Carlos.
There are a few possibilities, but the first thing to know is that
catalina.out isn't the log you should be looking at. Take a look here for
some tips on troubleshooting:
http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.8/troubleshooting.html.
The one thin
Hello everyone!
I have a clean installation of the latest CloudStack version on a single
server; everything works fine (installation process).
When I try to add/register an ISO the process in the admin UI finishes OK, but in my
catalina.out I saw this:
INFO [o.a.c.s.d.l.CloudStackImageStoreLifeCycleImpl]
(catalina-e
Hi Simon,
The following work was done but unfortunately the problem is not fixed:
on management server:
service cloudstack-management stop
on the host:
service cloudstack-agent stop
service libvirtd restart
service cloudstack-agent start
on management server:
service cloudstack-manageme
I finally found the problem and resolved the issue. The problem was in the
Python code change I made. I had a flag variable that indicated to save data
when it was changed while processing a list. This worked fine as long as it
executed the logic and defined the flag variable. The problem wa
Ilya
Everything sounds very interesting. I am happy to hear about projects
working towards the improvement of ACS. Are there prospects of this
being donated to the ACS project?
Just curious - the usage metrics view you mentioned, is this related to the
work presented by Rohit in [
https://cwiki
Gabriel,
With regard to operation issues and SLA(s), there are several initiatives
that come to mind:
1) Rewrite and enhance CloudStack HA (being worked on) - the specs are
posted on Confluence, targeting KVM primarily, as Xen and VMware have
this problem solved
2) Distributed Resource Scheduler
Well, here goes one possible explanation. If I had to bet, I would bet on
this one, and not on some chunk of code that might be synchronized.
When you use the destroy command, ACS first stops the VM. The stop
process is the one that can be slow. The OS of the VM might have taken a
long time
You need to mount it, usually from /dev/xvdd
Try: mount /dev/xvdd /mnt
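Once it is mounted, a rough sketch of the next step (the installer path inside the xs-tools ISO varies between XenServer versions, so check what ls shows first):
ls /mnt                 # look for a Linux/ directory with an install script
/mnt/Linux/install.sh   # path is a guess - adjust to what is actually on the ISO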
Erik
On Wednesday, 13 April 2016, Stavros Konstantaras
wrote:
> Thank you for the idea Sanjeev.
>
> Tried this already but the iso does not appear in my VM (nor /mnt or
> /media). I downloaded xen-tools 4.6.2, compile
The stop operation seems to be as quick as usual. Again, we don't have slow
destroys on all VMs. It occurred twice in a short time frame but we haven't
experienced it since then. I just want to understand the root cause, to see if
the management server performance was at fault or if it's a concurren
If you just use the stop option, is it taking a long time too?
On Wed, Apr 13, 2016 at 10:37 AM, Simon Godard wrote:
> We are using XenServer 6.2.
>
> Most VM destroy (expunge=true) are fairly quick. Is there anything else I
> could be looking for? At the time of the slow destroy, there weren’t
Thank you for the idea Sanjeev.
Tried this already but the ISO does not appear in my VM (not in /mnt or
/media). I downloaded xen-tools 4.6.2, compiled them and installed them, but still can't
manage to mount the second volume.
Any other software package that could help solve this problem?
Kind
Xen-tools includes the required PV drivers, and you don't have to download
anything for this. Try the attach ISO option on a VM from the CS UI and you will see
the xen-tools ISO in the drop-down list.
Best Regards,
Sanjeev N
Chief Product Engineer, Accelerite
Off: +91 40 6722 9368 | EMail: sanjeev.neelar...@ac
We are using XenServer 6.2.
Most VM destroys (expunge=true) are fairly quick. Is there anything else I could
be looking for? At the time of the slow destroy, there wasn't a very high
number of async jobs ongoing. I suspect it could be related to a DB concurrency
issue; looking at this log I jus
Hi,
Check the comparisons here: http://ark.intel.com/compare/83356,81705,92981,91767
With the new options, I would opt for the E5-2630v4, more cores for the same
price.
Regards,
Timothy Lothering
Solutions Architect
Managed Services
T: +27877415535
F: +27877415100
C: +278249
Thanks for the idea. Do the xen-tools include the required PV drivers?
I know that there is no official RPM of xen-tools, so I need to download them
and install them manually, correct?
Regards
Stavros
> On 13 Apr 2016, at 15:22, Erik Weber wrote:
>
> Install xentools.
>
> I have no idea wha
I believe that after you install the drivers you have to change the OS type for
your VM. That information is used when creating the VMM. So, you have to
choose a type that indicates to ACS that it can create a "PV" VMM.
On Wed, Apr 13, 2016 at 10:20 AM, Stavros Konstantaras <
s.konstanta...@uva.nl> w
Install xentools.
I have no idea what centos-release-xen is, but apparently it is not enough
for your hypervisors
--
Erik
On Wed, Apr 13, 2016 at 3:20 PM, Stavros Konstantaras wrote:
> Hi all ACS members.
>
> Did anyone face in the past any issue while attaching a second volume on a
> CentOS7
What hypervisor are you using?
Is every single VM in your environment presenting this behavior?
On Wed, Apr 13, 2016 at 10:18 AM, Simon Godard wrote:
> Hi,
>
> I am trying to understand why a destroyVirtualMachine API call would take
> around 1 hour to get a successful async job result. From
Hi all ACS members.
Has anyone faced any issue in the past while attaching a second volume to a
CentOS7 VM? I receive the following message:
"Failed to attach volume testVolume5 to VM CentOS7VM; Failed to attach volume
for uuid: aa763fd8-02dc-42a1-bfe1-4e44201e487f due to You attempted an
oper
Hi,
I am trying to understand why a destroyVirtualMachine API call would take
around 1 hour to get a successful async job result. From CloudStack log, I
can see that the StopVmCmd occurred right away, but the DestroyVmCmd took 1
hour to complete.
Do you know what could cause such delays?
The onl
Many thanks to Timothy :)
So the last choice left is:
E5-2650v4 or E5-2630v4
The price difference is ±80%, the performance difference only ±10%.
Of course, the 2650 has more pCPUs/vCPUs, but is it worth paying double the price?
Regards,
Mindaugas Milinavičius
UAB STARNITA
Director
http://www.clustspace.com
LT: +
Very thorough Tim :)
Kind regards,
Paul Angus
paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HSUK
@shapeblue
-Original Message-
From: Timothy Lothering [mailto:tlother...@datacentrix.co.za]
Sent: 13 April 2016 09:42
To
Hi Mindaugas,
As per previous responses, the trend is to use memory as your base benchmark
for VM density; from personal experience, memory is always the limiting factor.
vCPUs are rarely a bottleneck for general workloads (there are, however, specific
instances where CPUs are the limiting fac
:) yes.
It's the converged stuff that catches people out.
Kind regards,
Paul Angus
paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HSUK
@shapeblue
-Original Message-
From: Mindaugas Milinavičius [mailto:uabstarn...@gmail
The server/blade has 8Gb FC HBA connectivity to the switch. It's more than enough :)
On 13 Apr 2016 at 10:57, "Paul Angus"
wrote:
> I'd agree with that. Memory is nearly always the limiting factor when it
> comes to VMs per host.
>
> -- unless you're talking about blades, and then you have to
I'd agree with that. Memory is nearly always the limiting factor when it comes
to VMs per host.
-- unless you're talking about blades, and then you have to start looking
carefully at the connectivity between the chassis and the switch fabric.
Kind regards,
Paul Angus