Hi,
you shouldn't use the latest master IPA version with ironic from the Mitaka
release.
The ironic API endpoint it tries to contact (v1/lookup...) was introduced
during Newton development and is thus present in the ironic API from the
Newton release onwards. The fallback to the old lookup endpoint (implemen
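A quick way to check which side of that boundary your cloud is on is to ask
the API for its microversion range; a minimal sketch with python requests,
assuming the usual ironic port (your endpoint URL may differ, and you may
need a token depending on your auth settings):

    import requests

    # Ironic advertises its supported microversion range in response
    # headers; a Mitaka API will report a maximum below the version
    # that introduced the /v1/lookup endpoint.
    resp = requests.get('http://controller:6385/v1/')
    print(resp.headers.get('X-OpenStack-Ironic-API-Minimum-Version'))
    print(resp.headers.get('X-OpenStack-Ironic-API-Maximum-Version'))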
no, that's not my question.
I'm already overcommitting, but now I need to prioritize one instance above
others in terms of performance.
From: Eugene Nikanorov
Sent: January 12, 2017 1:13:27 AM
To: Ivan Derbenev; openstack@lists.openstack.org
Subject: Re: [Openstack
Hi, All,
I'm a newcomer to Openstack Ironic. Recently, I've been working on deploying
ironic manually, and I found that the node is 100% *blocked in the `callback
wait` status* until it times out. The ironic-api log shows:
2017-01-12 10:21:00.626 158262 INFO keystonemiddleware.auth_token [-]
Rejecting reque
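This looks like the ramdisk's callback being rejected before it reaches
ironic. A hedged way to reproduce what the agent does, with python requests
(the URL, MAC address, and microversion below are assumptions for
illustration):

    import requests

    # IPA calls the lookup endpoint; a 401 here means keystonemiddleware
    # is rejecting the unauthenticated request the same way it rejects
    # the ramdisk, while a 404 points at a microversion/endpoint mismatch.
    resp = requests.get('http://controller:6385/v1/lookup',
                        params={'addresses': '52:54:00:aa:bb:cc'},
                        headers={'X-OpenStack-Ironic-API-Version': '1.22'})
    print(resp.status_code, resp.text[:200])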
Ivan,
see if it provides an answer:
https://ask.openstack.org/en/question/55307/overcommitting-value-in-novaconf/
Regards,
Eugene.
On Wed, Jan 11, 2017 at 1:55 PM, James Downs wrote:
> On Wed, Jan 11, 2017 at 09:34:32PM +0000, Ivan Derbenev wrote:
>
> > if both vms start using all 64gb memory,
On Wed, Jan 11, 2017 at 09:34:32PM +0000, Ivan Derbenev wrote:
> if both vms start using all 64gb memory, both of them start using swap
Don't overcommit RAM.
> So, the question is - is it possible to prioritize the 1st vm above the 2nd? So
> the second one will fail before the 1st, to leave maximum pos
Hello, guys!
Imagine we have a compute node with the KVM hypervisor installed.
It has 64 GB of RAM and a quad-core processor.
We create 2 machines in nova on this host, both with 64 GB and 4 VCPUs.
If both VMs start using all 64 GB of memory, both of them start using swap;
the same goes for CPU - they use it equally.
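One knob that can express this kind of priority on a KVM host is the libvirt
quota:cpu_shares flavor extra spec, which weights CPU time between contending
guests (it does nothing for memory). A sketch with python-novaclient; flavor
names, sizes, and credentials are placeholders:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    sess = session.Session(auth=v3.Password(
        auth_url='http://controller:5000/v3', username='admin',
        password='secret', project_name='admin',
        user_domain_name='Default', project_domain_name='Default'))
    nova = client.Client('2', session=sess)

    # Same sizing, different CPU weight: under contention the scheduler
    # gives the high-priority guest roughly 4x the CPU time.
    high = nova.flavors.create('m1.prio-high', ram=65536, vcpus=4, disk=40)
    low = nova.flavors.create('m1.prio-low', ram=65536, vcpus=4, disk=40)
    high.set_keys({'quota:cpu_shares': '2048'})
    low.set_keys({'quota:cpu_shares': '512'})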
Hi all,
I have openstack liberty running and everything works fine.
The VM console from the dashboard is also working fine using port 6080.
When I click the console, the URL is:
http://controller_vip:6080/vnc_auto.html?token=XXX&title=VM_name(openstack_id)
I want to customize the dashboard console port to
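For context, the port in that URL comes from nova's novncproxy_base_url
option rather than from horizon; the dashboard just shows what nova hands
out. A sketch that retrieves the same URL with python-novaclient (the VM
name and credentials are placeholders):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    sess = session.Session(auth=v3.Password(
        auth_url='http://controller:5000/v3', username='admin',
        password='secret', project_name='admin',
        user_domain_name='Default', project_domain_name='Default'))
    nova = client.Client('2', session=sess)

    server = nova.servers.find(name='my-vm')  # hypothetical VM name
    console = nova.servers.get_vnc_console(server, 'novnc')
    # The host:port here is whatever novncproxy_base_url says, so
    # changing the console port means changing that option and the port
    # the nova-novncproxy service listens on.
    print(console['console']['url'])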
Mohammed,
It looks like you may be right. I just found the permissions issue in the
nova log on the compute node.
4-e8f52e4fbcfb 691caf1c10354efab3e3c8ed61b7d89a
49bc5e5bf2684bd0948d9f94c7875027 - - -] Performing standard snapshot
because direct snapshot failed: no write permission on storage pool
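A hedged way to confirm the missing capability directly is to try a write
through the python rados bindings; the cephx user and pool name below are
assumptions based on the log above:

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.nova')  # assumed cephx user
    cluster.connect()
    ioctx = cluster.open_ioctx('images')  # assumed glance pool
    try:
        # Raises an error if this client's caps lack write on the pool.
        ioctx.write_full('perm-probe', b'x')
        ioctx.remove_object('perm-probe')
        print('write OK')
    finally:
        ioctx.close()
        cluster.shutdown()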
Hi John,
It just works for us with Mitaka. You might be running into issues with
permissions, where the Nova user might not be able to write to the images pool.
Turn debug on in your nova-compute and snapshot a machine on it; you'll see the
logs, and if it's turning it off, it's probably b
Have you or anyone else implemented this in Mitaka?
Yes, we are also running Mitaka, and I also read Sebastien Han's blogs ;-)
Our snapshots are not happening at the RBD level;
they are being copied and uploaded to glance, which takes up a lot of space
and is very slow.
Unfortunately, that's w
Hi,
Per
https://github.com/openstack/kuryr-kubernetes/blob/master/devstack/local.conf.sample
when installing kubernetes as part of the devstack installation (this is the
local.conf default), the issue is that there are no workers (nodes) defined:
[stack@comp1 devstack]$ kubectl get node
[stac
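For what it's worth, the same check through the kubernetes python client
(assuming a working kubeconfig on the devstack host) shows the empty node
list programmatically:

    from kubernetes import client, config

    # Mirrors `kubectl get node`: with the sample local.conf this comes
    # back empty because no worker nodes are registered.
    config.load_kube_config()
    v1 = client.CoreV1Api()
    print([n.metadata.name for n in v1.list_node().items])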
Hi Eugen,
Thanks for the response! That makes a lot of sense and is what I figured
was going on, but I missed it in the documentation. We use Ceph as well, and
I had considered doing the snapshots at the RBD level, but I was hoping
there was some way to accomplish this via nova. I came across this Se
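For reference, a snapshot at the RBD level is close to instant because it is
copy-on-write inside Ceph; a sketch with the python rbd bindings (pool,
image, and snapshot names are placeholders):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')  # assumed nova pool
    try:
        image = rbd.Image(ioctx, 'instance-000000a1_disk')  # hypothetical
        try:
            image.create_snap('manual-snap')  # copy-on-write, no upload
        finally:
            image.close()
    finally:
        ioctx.close()
        cluster.shutdown()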
Hi,
this seems to be expected; the docs say:
"Shut down the source VM before you take the snapshot to ensure that
all data is flushed to disk."
So if the VM is not shut down, it's frozen to prevent data loss (I
guess). Depending on your storage backend, there are other ways to
perform