I tend to agree with the comments on the links. Perhaps deep blue over
white for links? Otherwise it looks great.
Osay
On 9 April 2015 at 08:09, Vadim Kimlaychuk
wrote:
> +1
> Much better than the current one. More information on the screen. I
> have only one remark: "Notes" -- white text
Hi Kyle,
In my setup I have observed this for a stopped VM: the nics table has
ip4_address set to 'null'.
After that I am not able to reproduce the issue.
I will keep looking into my setup for this issue.
Can you please send the output of the below command from your setup?
#select instance_id,ip4_address f
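The condition described in this thread can be sketched with an in-memory database. This is only an illustration: sqlite3 stands in for the MySQL cloud database, the `nics` table and `ip4_address` column names are taken from the email, and the instance IDs are made up.

```python
import sqlite3

# Illustrative stand-in for the management server database: a stopped VM
# whose nics row has ip4_address set to NULL, next to a healthy row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nics (instance_id INTEGER, ip4_address TEXT)")
conn.execute("INSERT INTO nics VALUES (101, NULL)")        # stopped VM, address released
conn.execute("INSERT INTO nics VALUES (102, '10.1.1.5')")  # running VM

# Rows where ip4_address is NULL are the suspect ones the email asks about.
suspect = conn.execute(
    "SELECT instance_id FROM nics WHERE ip4_address IS NULL"
).fetchall()
print(suspect)  # -> [(101,)]
```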
With Docker being the craze these days, I just want to make sure you know that
we now have a simulator image on Docker Hub.
There are a few out there, but our branches are now automatically used to build
the images under the cloudstack organization on Docker Hub
thanks to Pierre-Luc Dion fo
On Apr 9, 2015, at 2:24 AM, Phillip Kent wrote:
> Hi Sebastian,
>
> I would add to the documentation section:
>
> - Review and improve the API Reference documentation
>
> (I tried to login to wiki and edit, but did not seem to have edit privileges
> for that page)
>
I just granted you access
I have one instance against which I can no longer run stop/start/reboot or any
other instance command. The VM itself is still fully functioning in VMware.
From vCenter directly I can fully manage it (start/stop/reboot etc.). When
I try to manage it from CloudStack it throws an unhandled exception
On 04/08/2015 07:04 PM, Sanjeev N wrote:
> Try migrating the vm to the second host. This should create GRE
> tunnel between the two hosts. On Apr 8, 2015 8:17 PM, "Daniel
> Kollmer" wrote:
>
Unfortunately it does not.
In syslog I can see the output
We currently have ACS 4.5.1 RC2 voting and testing going on. Unfortunately,
this time too we’ve found a few issues, which have now been fixed.
I’ve shared this repo with everyone to help test 4.5.1 for regressions and
blockers. If we’re good, we should have an RC3 next week, and if everything
g
Hi Erik,
What API are you executing, and which argument in the API are you trying to
auto-complete? It’s quite likely that cloudmonkey is simply printing what the
mgmt server is replying (HTTP error code 431, and the rest is the message;
cloudmonkey does not have any such error message as it works in a g
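As a rough illustration of the pass-through behaviour described above (the client relaying whatever the management server replies), here is a hedged sketch of pulling the error code and message out of a CloudStack-style JSON error body. The wrapper key and sample error text are illustrative, not taken from cloudmonkey's actual code:

```python
import json

def extract_error(body):
    """Pull errorcode/errortext out of a CloudStack-style JSON error reply.

    CloudStack wraps the response object in a single top-level key
    (e.g. "listvirtualmachinesresponse"); this is a sketch, not
    cloudmonkey's implementation.
    """
    payload = json.loads(body)
    (inner,) = payload.values()  # unwrap the single top-level key
    return inner.get("errorcode"), inner.get("errortext")

body = ('{"listvirtualmachinesresponse": {"errorcode": 431, '
        '"errortext": "Unable to execute API command due to invalid value"}}')
code, text = extract_error(body)
print(code)  # -> 431
```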
Please do.
I still have a fair bit of work to do to create more generalized
cloud-set-guest-password/sshkey and userdata scripts that will behave as
expected regardless of the *N?X used (I'm starting some work to get this
playbook working on FreeBSD) but the idea I'm running with is to have
these scripts
Excellent suggestion. That worked perfectly!
> From: [email protected]
> To: [email protected]
> Subject: RE: Stuck in expunging state
> Date: Mon, 6 Apr 2015 18:06:07 +
>
> Hi,
>
> Check VM details first. Here example vm has id as 8143.
>
> Confirm VM state: Expunging from
Hi Erik,
Looking at the NPE and the code, it looks like the volumePool returned using the
volume’s pool ID was null. The surface issue seems solvable by simply
adding a != null check. Can you share whether the volume (you were trying to
detach) has NULL as the pool_id in the db, or whether the storage pool
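The guard discussed in this thread would land in the management server's Java code; the following Python sketch only illustrates the pattern of failing with a clear message instead of an unhandled NPE. All names here are hypothetical:

```python
def detach_volume(volume, find_pool_by_id):
    """Illustrative guard: if the volume's pool_id does not resolve to a
    storage pool, raise a descriptive error instead of letting a later
    dereference blow up. 'volume' is a dict and 'find_pool_by_id' a lookup
    callable; both are stand-ins for the real DAO objects."""
    pool = find_pool_by_id(volume.get("pool_id"))
    if pool is None:
        raise ValueError(
            "No storage pool found for volume %s (pool_id=%s); cannot detach"
            % (volume.get("id"), volume.get("pool_id"))
        )
    return pool

# Resolver that finds the pool -> the happy path returns it.
print(detach_volume({"id": 42, "pool_id": 7}, lambda pid: {"id": pid}))
```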
Thanks everyone, I'll try your upstart configs and see how it goes :-)
--
Erik
On Thu, Apr 9, 2015 at 12:16 AM, Jeff Moody wrote:
> I have scripts and an Ansible playbook for bundling templates at
> https://github.com/fifthecho/CloudStack-Template
>
>
>
> On April 8, 2015 6:00:10 PM Erik Weber
While testing 4.5.1 RC2 I see this when trying to detach a volume:
http://pastebin.com/E4kRdVBr
Can't say if it's a regression or not, but any help solving it is appreciated
:-)
--
Erik
Hi,
We always create VM instances in project context, so that all resources are
assigned to the project instead of individual accounts. I have a script that
calls the listProjects API to show each project’s resource limits and current
usage.
Recently, I noticed that for a couple of projects, my script report
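For context, a minimal sketch of how such a script would typically sign a listProjects call, following the standard CloudStack signing scheme (sort the field=value pairs, lowercase the whole string, HMAC-SHA1 it with the secret key, base64-encode, URL-encode). The keys below are placeholders, and error handling is omitted:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, secret_key):
    """Return a signed CloudStack API query string (sketch).

    Standard scheme: sort the field=value pairs by field name, lowercase
    the joined string, HMAC-SHA1 with the secret key, base64-encode, and
    URL-encode the signature before appending it.
    """
    query = "&".join(
        "%s=%s" % (k, urllib.parse.quote(str(v), safe="*"))
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = urllib.parse.quote(base64.b64encode(digest), safe="")
    return query + "&signature=" + signature

# Placeholder credentials; a real script would append this to the
# management server's /client/api endpoint.
qs = sign_request(
    {"command": "listProjects", "response": "json", "apikey": "PLACEHOLDER"},
    "PLACEHOLDER-SECRET",
)
```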
One final thing I had to do was set the ROOT volume associated with this
instance to removed. For some reason there were two ROOT volumes listed for
this instance in the volumes table. One was removed, the other was not. I ran
the following command, which got rid of it: mysql> update volumes set rem
Even though the identification steps don't match, disabling intremap in XS
6.5 seems to have solved the issue for us:
http://support.citrix.com/article/CTX136517
2015-04-08 22:15 GMT+02:00 Tomasz Zięba :
> Yes, XenTools65
>
> Regards,
> Tomasz Zięba
>
> Twitter: @TZieba
> LinkedIn: pl.linkedin.c