Hi,
I checked that both of the methods you proposed work well.
After I added the 'should_delete_outdated_entities' function to InstanceDriver,
it took about 10 minutes to clear the old Instance.
I also added the two lines you mentioned to nova-cpu.conf, so the Vitrage
collector now receives notifications correctly.
Thanks
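For context, the notification settings Vitrage needs from Nova are usually a couple of lines in the oslo.messaging notifications section; a minimal sketch, assuming the commonly documented topic name (section and option names can vary by OpenStack release):

```ini
# Sketch of the nova-cpu.conf notification settings Vitrage expects;
# verify the section/option names against your release's Vitrage docs.
[oslo_messaging_notifications]
driver = messagingv2
topics = notifications,vitrage_notifications
```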
Hello Team,
I have seen the steps for starting the kuryr-libnetwork container on a compute
node. But if I need to run the same container inside a VM running on the
compute node, is it possible to do that?
I am not sure how I can map /var/run/openvswitch inside the nested VM,
because this is present
I started one Fedora 29 VM on OpenStack using the official qcow2 image.
I followed this doc for the kuryr installation inside the VM:
https://docs.openstack.org/kuryr-libnetwork/latest/readme.html
My objective is to run nested Docker, i.e. Docker inside the VM.
While installing kuryr-libnetwork from the
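For reference, on a bare-metal compute node the kuryr-libnetwork container typically gets at Open vSwitch through a bind mount of the OVS run directory. A sketch of such an invocation (image name and mount set are illustrative, not taken from this thread); note that inside a nested VM `/var/run/openvswitch` only exists if an OVS instance actually runs in the VM itself:

```shell
# Illustrative only: bind-mount the host's OVS socket directory and the
# Docker plugin directory into a kuryr-libnetwork container. Inside a
# nested VM, this presumes OVS is installed and running in the VM.
docker run --name kuryr-libnetwork --net host --privileged \
  -v /var/run/openvswitch:/var/run/openvswitch \
  -v /run/docker/plugins:/run/docker/plugins \
  kuryr/libnetwork:latest
```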
Hi Feilong,
Thanks for your reply.
Kindly find the below outputs.
[root@packstack1 ~]# rpm -qa | grep -i magnum
python-magnum-7.0.1-1.el7.noarch
openstack-magnum-conductor-7.0.1-1.el7.noarch
openstack-magnum-ui-5.0.1-1.el7.noarch
openstack-magnum-api-7.0.1-1.el7.noarch
Hi Vikrant,
Before we dig in more, it would be nice if you could let us know the versions
of your Magnum and Heat. Cheers.
On 30/11/18 12:12 AM, Vikrant Aggarwal wrote:
> Hello Team,
>
> Trying to deploy K8s on Fedora Atomic.
>
> Here is the output of cluster template:
> ~~~
> [root@packstack1
On 29. 11. 18 20:20, Fox, Kevin M wrote:
If the base layers are shared, you won't pay extra for the separate puppet
container
Yes, and that's the state we're in right now.
unless you have another container also installing ruby in an upper layer.
Not just Ruby but also Puppet and Systemd.
Oh, rereading the conversation again, the concern is having shared deps move up
layers? So more systemd-related than Ruby?
The conversation about --nodeps makes it sound like it's not actually used, just
an artifact of how the RPMs are built... What about creating a dummy package
that
If the base layers are shared, you won't pay extra for the separate puppet
container unless you have another container also installing Ruby in an upper
layer. With OpenStack, that's unlikely.
The apparent size of a container is not equal to its actual size.
Thanks,
Kevin
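The layer-sharing point above can be illustrated with a small sketch (image and file names are hypothetical, not from the thread): images built from the same base share that base's layers on disk, so summing per-image "apparent" sizes overstates actual disk usage.

```shell
# Hypothetical illustration of shared base layers. Only the RUN layer
# below is unique to this image; the centos:7 layers are shared with
# every other image built FROM centos:7 on the same host.
cat > Dockerfile.puppet <<'EOF'
FROM centos:7
RUN yum install -y puppet ruby
EOF
docker build -t demo/puppet -f Dockerfile.puppet .
docker images demo/puppet   # SIZE here is apparent (layer-inclusive) size
docker system df -v         # breaks image sizes into shared vs. unique
```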
On 28. 11. 18 18:29, Bogdan Dobrelya wrote:
On 11/28/18 6:02 PM, Jiří Stránský wrote:
Reiterating again on previous points:
- I'd be fine removing systemd. But let's do it properly and not via 'rpm
-ev --nodeps'.
- Puppet and Ruby *are* required for configuration. We can certainly put
them in
Hello Team,
Trying to deploy K8s on Fedora Atomic.
Here is the output of cluster template:
~~~
[root@packstack1 k8s_fedora_atomic_v1(keystone_admin)]# magnum
cluster-template-show 16eb91f7-18fe-4ce3-98db-c732603f2e57
WARNING: The magnum client is deprecated and will be removed in a future
Hello,
This got lost way down in my mailbox.
I think we have a consensus about getting rid of the newton branches.
Does anybody on the Stable release team have time to deprecate the
stable/newton branches?
Best regards
Tobias
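For reference, OpenStack's conventional end-of-life process for a stable branch is roughly: tag the branch tip with an EOL tag, then delete the branch. A hedged sketch of the git side (the tag and remote names follow common OpenStack convention; the actual push and deletion require Gerrit permissions that only release managers hold):

```shell
# Sketch of the conventional stable-branch EOL steps; requires Gerrit
# ACLs held by the release team, so this is illustrative only.
git checkout stable/newton
git tag -s newton-eol -m "Newton EOL"    # mark the final commit
git push gerrit newton-eol
git push gerrit --delete stable/newton   # remove the branch itself
```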
On 11/19/2018 06:21 PM, Alex Schultz wrote:
On Mon, Nov 19, 2018 at
On 11/28/18 8:55 PM, Doug Hellmann wrote:
I thought the preferred solution for more complex settings was config maps. Did
that approach not work out?
Regardless, now that the driver work is done if someone wants to take another
stab at etcd integration it’ll be more straightforward today.
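For context on the config-map approach mentioned above, a minimal Kubernetes ConfigMap carrying a whole config file (to be mounted into a pod) might look like this; all names and values here are illustrative, not taken from the thread:

```yaml
# Illustrative ConfigMap holding a full config file as one data key;
# a pod would mount this as a volume to get /etc/.../service.conf.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-service-config
data:
  service.conf: |
    [DEFAULT]
    debug = true
    transport_url = rabbit://guest:guest@rabbitmq:5672/
```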
Hi,
I’ll be on vacation till Dec 17th. I’ll be replying to emails, but with delays.
Thanks
Renat Akhmerov
@Nokia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: