Re: [Openstack-operators] Fwd: Nova hypervisor uuid

2018-11-29 Thread Ignazio Cassano
Many thanks, Matt.
If the issue happens again,
I hope the openstack commands will show the duplicate entries and let me clean them.
When it happened, the nova hypervisor-list command did not show duplicated
entries; I saw them only in the database.
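
(For reference, I spotted the stale rows only with the select quoted later
in this thread, something like:

  MariaDB [nova]> select hypervisor_hostname,uuid,deleted from compute_nodes;

while the CLI view looked clean.)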
Regards
Ignazio

On Thu, Nov 29, 2018 at 17:28, Matt Riedemann wrote:

> On 11/29/2018 10:27 AM, Ignazio Cassano wrote:
> > I did it in the DB directly.
> > I am using Queens now.
> > Is there any python client command to delete the old records, or must I
> > use the API?
>
> You can use the CLI:
>
>
> https://docs.openstack.org/python-novaclient/latest/cli/nova.html#nova-service-delete
>
>
> https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/compute-service.html#compute-service-delete
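>
> e.g. (a sketch, using the ID from "openstack compute service list"):
>
>   openstack compute service delete <service-id>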
>
> --
>
> Thanks,
>
> Matt
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Re: [Openstack-operators] Fwd: Nova hypervisor uuid

2018-11-29 Thread Ignazio Cassano
Hi Matt,
I did it in the DB directly.
I am using Queens now.
Is there any python client command to delete the old records, or must I use
the API?

Thanks & Regards
Ignazio


On Thu, Nov 29, 2018 at 16:28, Matt Riedemann wrote:

> On 11/29/2018 12:49 AM, Ignazio Cassano wrote:
> > Hello Matt,
> > Yes, I mean sometimes I have the same host/node names with different
> > UUIDs in the compute_nodes table in the nova database.
> > I must delete the nodes whose UUIDs do not match the nova hypervisor-list
> > command output.
> > At this time I have the following:
> > MariaDB [nova]> select hypervisor_hostname,uuid,deleted from
> > compute_nodes;
> > +---------------------+--------------------------------------+---------+
> > | hypervisor_hostname | uuid                                 | deleted |
> > +---------------------+--------------------------------------+---------+
> > | tst2-kvm02          | 802b21c2-11fb-4426-86b9-bf25c8a5ae1d |       0 |
> > | tst2-kvm01          | ce27803b-06cd-44a7-b927-1fa42c813b0f |       0 |
> > +---------------------+--------------------------------------+---------+
> > 2 rows in set (0.00 sec)
> >
> >
> > But sometimes the old UUIDs get inserted in the table again.
> > I deleted them again.
> > I restarted the KVM nodes and now the table is ok.
> > I also restarted each controller and the tables are ok.
> > I do not know why, 3 days ago, I had the same compute node names with
> > different UUIDs.
> >
> > Thanks and Regards
> > Ignazio
>
> OK, I guess if it happens again, please get the
> host/hypervisor_hostname/uuid/deleted values from the compute_nodes
> table before you clean up any entries.
>
> Also, when you're deleting the resources from the DB, are you doing it
> in the DB directly or via the DELETE /os-services/{service_id} API?
> Because the latter cleans up other resources related to the nova-compute
> service (the services table record, the compute_nodes table record, the
> related resource_providers table record in placement, and the
> host_mappings table record in the nova API DB). The resource
> provider/host mappings cleanup when deleting a compute service is a more
> recent bug fix though which depending on your release you might not have:
>
> https://review.openstack.org/#/q/I7b8622b178d5043ed1556d7bdceaf60f47e5ac80
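>
> If you want to hit the API directly instead of the CLI, it's roughly (a
> sketch, assuming an admin token in $TOKEN and your compute endpoint in
> $NOVA_URL):
>
>   curl -X DELETE -H "X-Auth-Token: $TOKEN" \
>     "$NOVA_URL/os-services/<service_id>"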
>
> --
>
> Thanks,
>
> Matt
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Re: [Openstack-operators] Fwd: Nova hypervisor uuid

2018-11-28 Thread Ignazio Cassano
Hello Matt,
Yes, I mean sometimes I have the same host/node names with different UUIDs
in the compute_nodes table in the nova database.
I must delete the nodes whose UUIDs do not match the nova hypervisor-list
command output.
At this time I have the following:
MariaDB [nova]> select hypervisor_hostname,uuid,deleted from compute_nodes;
+---------------------+--------------------------------------+---------+
| hypervisor_hostname | uuid                                 | deleted |
+---------------------+--------------------------------------+---------+
| tst2-kvm02          | 802b21c2-11fb-4426-86b9-bf25c8a5ae1d |       0 |
| tst2-kvm01          | ce27803b-06cd-44a7-b927-1fa42c813b0f |       0 |
+---------------------+--------------------------------------+---------+
2 rows in set (0.00 sec)


But sometimes the old UUIDs get inserted in the table again.
I deleted them again.
I restarted the KVM nodes and now the table is ok.
I also restarted each controller and the tables are ok.
I do not know why, 3 days ago, I had the same compute node names with
different UUIDs.

Thanks and Regards
Ignazio

On Wed, Nov 28, 2018 at 17:54, Matt Riedemann wrote:

> On 11/28/2018 4:19 AM, Ignazio Cassano wrote:
> > Hi Matt, sorry but I lost your answer and Gianpiero forwarded it to me.
> > I am sure the KVM node names have not changed.
> > The tables where UUIDs are duplicated are:
> > resource_providers in the nova_api db
> > compute_nodes in the nova db
> > Regards
> > Ignazio
>
> It would be easier if you simply dumped the result of a select query on
> the compute_nodes table where the duplicate nodes exist (you said
> duplicate UUIDs but I think you mean duplicate host/node names with
> different UUIDs, correct?).
>
> There is a unique constraint on host/hypervisor_hostname
> (nodename)/deleted:
>
>  schema.UniqueConstraint(
>  'host', 'hypervisor_hostname', 'deleted',
>  name="uniq_compute_nodes0host0hypervisor_hostname0deleted"),
>
> So I'm wondering if the deleted field is not 0 on one of those because
> if one is marked as deleted, then the compute service will create a new
> compute_nodes table record on startup (and associated resource provider).
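>
> Something like this should show it (the same kind of select as above, a
> sketch off the top of my head):
>
>   MariaDB [nova]> select host,hypervisor_hostname,uuid,deleted from
>   compute_nodes where hypervisor_hostname like 'tst2-kvm%';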
>
> --
>
> Thanks,
>
> Matt
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Re: [Openstack-operators] Fwd: Nova hypervisor uuid

2018-11-28 Thread Ignazio Cassano
Hi Matt, sorry but I lost your answer and Gianpiero forwarded it to me.
I am sure the KVM node names have not changed.
The tables where UUIDs are duplicated are:
resource_providers in the nova_api db
compute_nodes in the nova db
Regards
Ignazio

On 28/Nov/2018 11:09 AM, "Gianpiero Ardissono" wrote:

>
> -- Forwarded message -
> From: Matt Riedemann 
> Date: Tue, Nov 27, 2018, 19:03
> Subject: Re: [Openstack-operators] Nova hypervisor uuid
> To: 
>
>
> On 11/27/2018 11:32 AM, Ignazio Cassano wrote:
> > Hi All,
> > Does anyone know where the hypervisor UUID is retrieved from?
> > Sometimes, updating KVM nodes with yum update, it changes, and in the
> > nova database 2 UUIDs are assigned to the same node.
> > Regards
> > Ignazio
> >
> >
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
> To be clear, do you mean the compute_nodes.uuid column value in the
> cell database? Which is also used for the GET /os-hypervisors response
> 'id' value if using microversion >= 2.53. If so, that is generated
> randomly* when the compute_nodes table record is created:
>
>
> https://github.com/openstack/nova/blob/8545ba2af7476e0884b5e7fb90965bef92d605bc/nova/compute/resource_tracker.py#L588
>
>
> https://github.com/openstack/nova/blob/8545ba2af7476e0884b5e7fb90965bef92d605bc/nova/objects/compute_node.py#L312
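>
> (With the CLI you can see that UUID as the hypervisor 'id' with, e.g.,
> assuming a new enough novaclient:
>
>   nova --os-compute-api-version 2.53 hypervisor-list
>
> which may help when comparing against the DB.)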
>
> When you hit this problem, are you sure the hostname on the compute host
> is not changing? Because when nova-compute starts up, it should look for
> the existing compute node record by host name and node name, which for
> the libvirt driver should be the same. That lookup code is here:
>
>
> https://github.com/openstack/nova/blob/8545ba2af7476e0884b5e7fb90965bef92d605bc/nova/compute/resource_tracker.py#L815
>
> So the only way nova-compute should create a new compute_nodes table
> record for the same host is if the host/node name changes during the
> upgrade. Is the deleted value in the database the same (0) for both of
> those records?
>
> * The exception to this is for the ironic driver which re-uses the
> ironic node uuid as of this change:
> https://review.openstack.org/#/c/571535/
>
> --
>
> Thanks,
>
> Matt
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

[Openstack-operators] Nova hypervisor uuid

2018-11-27 Thread Ignazio Cassano
Hi All,
Does anyone know where the hypervisor UUID is retrieved from?
Sometimes, updating KVM nodes with yum update, it changes, and in the nova
database 2 UUIDs are assigned to the same node.
Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Openstack zun on centos???

2018-11-14 Thread Ignazio Cassano
Hello again,
I read the Zun documents very quickly.
After installing Zun (with or without kolla), do containers run directly
on the compute nodes or in virtual machines?
Magnum can run them both ways, but Magnum seems very different.
Ignazio


On Wed, Nov 14, 2018 at 20:00, Fox, Kevin M wrote:

> kolla installs it via containers.
>
> Thanks,
> Kevin
> --
> *From:* Ignazio Cassano [ignaziocass...@gmail.com]
> *Sent:* Wednesday, November 14, 2018 10:48 AM
> *To:* Eduardo Gonzalez
> *Cc:* OpenStack Operators
> *Subject:* Re: [Openstack-operators] Openstack zun on centos???
>
> Hi Eduardo,
> does it mean openstack kolla installs zun using pip?
> I did not find any zun rpm package.
> Regards
> Ignazio
>
> On Wed, Nov 14, 2018 at 18:38, Eduardo Gonzalez wrote:
>
>> Hi Cassano, you can use zun in centos deployed by kolla-ansible.
>>
>> https://docs.openstack.org/kolla-ansible/latest/reference/zun-guide.html
>>
>> Regards
>>
>> On Wed, Nov 14, 2018 at 17:11, Ignazio Cassano wrote:
>>
>>> Hi All,
>>> I'd like to know if openstack zun will be released for centos.
>>> Reading the documentation at docs.openstack.org, only the Ubuntu
>>> installation is reported.
>>> Many thanks
>>> Ignazio
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Openstack zun on centos???

2018-11-14 Thread Ignazio Cassano
Thanks Kevin and Eduardo.
Unfortunately I installed openstack using ansible playbooks and some
projects are already in production.
PROBABLY KOLLA MIGRATION MEANS OPENSTACK REINSTALLATION
;-(
Ignazio

On Wed, Nov 14, 2018 at 20:00, Fox, Kevin M wrote:

> kolla installs it via containers.
>
> Thanks,
> Kevin
> --
> *From:* Ignazio Cassano [ignaziocass...@gmail.com]
> *Sent:* Wednesday, November 14, 2018 10:48 AM
> *To:* Eduardo Gonzalez
> *Cc:* OpenStack Operators
> *Subject:* Re: [Openstack-operators] Openstack zun on centos???
>
> Hi Eduardo,
> does it mean openstack kolla installs zun using pip?
> I did not find any zun rpm package.
> Regards
> Ignazio
>
> On Wed, Nov 14, 2018 at 18:38, Eduardo Gonzalez wrote:
>
>> Hi Cassano, you can use zun in centos deployed by kolla-ansible.
>>
>> https://docs.openstack.org/kolla-ansible/latest/reference/zun-guide.html
>>
>> Regards
>>
>> On Wed, Nov 14, 2018 at 17:11, Ignazio Cassano wrote:
>>
>>> Hi All,
>>> I'd like to know if openstack zun will be released for centos.
>>> Reading the documentation at docs.openstack.org, only the Ubuntu
>>> installation is reported.
>>> Many thanks
>>> Ignazio
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Openstack zun on centos???

2018-11-14 Thread Ignazio Cassano
Hi Eduardo,
does it mean openstack kolla installs zun using pip?
I did not find any zun rpm package.
Regards
Ignazio

On Wed, Nov 14, 2018 at 18:38, Eduardo Gonzalez wrote:

> Hi Cassano, you can use zun in centos deployed by kolla-ansible.
>
> https://docs.openstack.org/kolla-ansible/latest/reference/zun-guide.html
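>
> In short (a sketch from that guide, assuming default settings), you enable
> it in /etc/kolla/globals.yml:
>
>   enable_zun: "yes"
>   enable_kuryr: "yes"
>   enable_etcd: "yes"
>   docker_configure_for_zun: "yes"
>
> and then redeploy with kolla-ansible.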
>
> Regards
>
> On Wed, Nov 14, 2018 at 17:11, Ignazio Cassano wrote:
>
>> Hi All,
>> I'd like to know if openstack zun will be released for centos.
>> Reading the documentation at docs.openstack.org, only the Ubuntu
>> installation is reported.
>> Many thanks
>> Ignazio
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Openstack zun on centos???

2018-11-14 Thread Ignazio Cassano
Hi All,
I'd like to know if openstack zun will be released for centos.
Reading the documentation at docs.openstack.org, only the Ubuntu
installation is reported.
Many thanks
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] upgrade pike to queens get `xxx DBConnectionError SELECT 1`

2018-11-13 Thread Ignazio Cassano
Hi all,
upgrading from pike to queens, I got a lot of errors in nova-api.log:

upgrade pike to queens get `xxx DBConnectionError SELECT 1`

My issue is the same reported at:

https://bugs.launchpad.net/oslo.db/+bug/1774544

The difference is that I am using centos 7.
I am also using haproxy to balance the galera cluster.
Modifying nova.conf to exclude haproxy and specify only a single galera
cluster member, the problem disappears.
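
If it is the haproxy idle timeout killing pooled connections, one thing I
am going to try (an assumption on my part, not a confirmed fix) is letting
oslo.db recycle connections before haproxy drops them, in nova.conf:

  [database]
  connection_recycle_time = 280

with the haproxy "timeout client"/"timeout server" values set above that.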

Please, any help?

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] queens: vnc password console does not work anymore

2018-11-13 Thread Ignazio Cassano
Hi All,
before upgrading to queens, we used the vnc_password parameter in qemu.conf.
Using the dashboard, the console asked me for the password.
After upgrading to queens it does not work anymore (unable to negotiate
security with server).
Removing the vnc_password from qemu.conf and restarting libvirt and the
virtual machine, it works again.
What changed in queens?
Thanks
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Hi Chris,
many thanks for your answer.
It solved the issue.
Regards
Ignazio

On Tue, Nov 13, 2018 at 03:46, Chris Apsey <bitskr...@bitskrieg.net> wrote:

> Did you change the nova_metadata_ip option to nova_metadata_host in
> metadata_agent.ini?  The former value was deprecated several releases ago
> and now no longer functions as of pike.  The metadata service will throw
> 500 errors if you don't change it.
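>
> i.e., in /etc/neutron/metadata_agent.ini something like:
>
>   [DEFAULT]
>   # nova_metadata_ip = <controller>   <- deprecated
>   nova_metadata_host = <controller>
>
> and then restart the neutron-metadata-agent.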
>
> On November 12, 2018 19:00:46 Ignazio Cassano 
> wrote:
>
>> Any other suggestions?
>> It does not work.
>> Nova metadata is listening on port 8775, but no way to solve this issue.
>> Thanks
>> Ignazio
>>
>> On Mon, Nov 12, 2018 at 22:40, Slawomir Kaplonski <skapl...@redhat.com> wrote:
>>
>>> Hi,
>>>
>>> From the logs which You attached, it looks like Your
>>> neutron-metadata-agent can’t connect to the nova-api service. Please
>>> check if the nova-metadata-api is reachable from the node where Your
>>> neutron-metadata-agent is running.
>>>
>>> > On 12.11.2018, at 22:34, Ignazio Cassano wrote:
>>> >
>>> > Hello again,
>>> > I have another installation of ocata.
>>> > On ocata the metadata for a network id is displayed by ps -afe like
>>> this:
>>> >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy
>>> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
>>> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
>>> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
>>> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
>>> --metadata_proxy_group=993
>>> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
>>> --log-dir=/var/log/neutron
>>> >
>>> > On queens like this:
>>> >  haproxy -f
>>> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
>>> >
>>> > Is it the correct behaviour?
>>>
>>> Yes, that is correct. It was changed some time ago, see
>>> https://bugs.launchpad.net/neutron/+bug/1524916
>>>
>>> >
>>> > Regards
>>> > Ignazio
>>> >
>>> >
>>> >
>>> > On Mon, Nov 12, 2018 at 21:37, Slawomir Kaplonski <skapl...@redhat.com> wrote:
>>> > Hi,
>>> >
>>> > Can You share the logs from Your haproxy-metadata-proxy service which
>>> > is running in the qdhcp namespace? There should be some info about the
>>> > reason for those 500 errors.
>>> >
>>> > > On 12.11.2018, at 19:49, Ignazio Cassano wrote:
>>> > >
>>> > > Hi All,
>>> > > I upgraded my centos 7 openstack ocata to pike manually.
>>> > > All worked fine.
>>> > > Then I upgraded from pike to Queens, and instances stopped reaching
>>> > > the metadata on 169.254.169.254 with error 500.
>>> > > I am using isolated metadata true in my dhcp conf, and in the dhcp
>>> > > namespace port 80 is listening.
>>> > > Please, can anyone help me?
>>> > > Regards
>>> > > Ignazio
>>> > >
>>> > > ___
>>> > > OpenStack-operators mailing list
>>> > > OpenStack-operators@lists.openstack.org
>>> > >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>> >
>>> > —
>>> > Slawek Kaplonski
>>> > Senior software engineer
>>> > Red Hat
>>> >
>>>
>>> —
>>> Slawek Kaplonski
>>> Senior software engineer
>>> Red Hat
>>>
>>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Hello, I am going to check it.
Thanks
Ignazio

On Tue, Nov 13, 2018 at 03:46, Chris Apsey wrote:

> Did you change the nova_metadata_ip option to nova_metadata_host in
> metadata_agent.ini?  The former value was deprecated several releases ago
> and now no longer functions as of pike.  The metadata service will throw
> 500 errors if you don't change it.
>
> On November 12, 2018 19:00:46 Ignazio Cassano 
> wrote:
>
>> Any other suggestions?
>> It does not work.
>> Nova metadata is listening on port 8775, but no way to solve this issue.
>> Thanks
>> Ignazio
>>
>> On Mon, Nov 12, 2018 at 22:40, Slawomir Kaplonski <skapl...@redhat.com> wrote:
>>
>>> Hi,
>>>
>>> From the logs which You attached, it looks like Your
>>> neutron-metadata-agent can’t connect to the nova-api service. Please
>>> check if the nova-metadata-api is reachable from the node where Your
>>> neutron-metadata-agent is running.
>>>
>>> > On 12.11.2018, at 22:34, Ignazio Cassano wrote:
>>> >
>>> > Hello again,
>>> > I have another installation of ocata.
>>> > On ocata the metadata for a network id is displayed by ps -afe like
>>> this:
>>> >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy
>>> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
>>> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
>>> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
>>> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
>>> --metadata_proxy_group=993
>>> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
>>> --log-dir=/var/log/neutron
>>> >
>>> > On queens like this:
>>> >  haproxy -f
>>> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
>>> >
>>> > Is it the correct behaviour?
>>>
>>> Yes, that is correct. It was changed some time ago, see
>>> https://bugs.launchpad.net/neutron/+bug/1524916
>>>
>>> >
>>> > Regards
>>> > Ignazio
>>> >
>>> >
>>> >
>>> > On Mon, Nov 12, 2018 at 21:37, Slawomir Kaplonski <skapl...@redhat.com> wrote:
>>> > Hi,
>>> >
>>> > Can You share the logs from Your haproxy-metadata-proxy service which
>>> > is running in the qdhcp namespace? There should be some info about the
>>> > reason for those 500 errors.
>>> >
>>> > > On 12.11.2018, at 19:49, Ignazio Cassano wrote:
>>> > >
>>> > > Hi All,
>>> > > I upgraded my centos 7 openstack ocata to pike manually.
>>> > > All worked fine.
>>> > > Then I upgraded from pike to Queens, and instances stopped reaching
>>> > > the metadata on 169.254.169.254 with error 500.
>>> > > I am using isolated metadata true in my dhcp conf, and in the dhcp
>>> > > namespace port 80 is listening.
>>> > > Please, can anyone help me?
>>> > > Regards
>>> > > Ignazio
>>> > >
>>> > > ___
>>> > > OpenStack-operators mailing list
>>> > > OpenStack-operators@lists.openstack.org
>>> > >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>> >
>>> > —
>>> > Slawek Kaplonski
>>> > Senior software engineer
>>> > Red Hat
>>> >
>>>
>>> —
>>> Slawek Kaplonski
>>> Senior software engineer
>>> Red Hat
>>>
>>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Any other suggestions?
It does not work.
Nova metadata is listening on port 8775, but no way to solve this issue.
Thanks
Ignazio

On Mon, Nov 12, 2018 at 22:40, Slawomir Kaplonski <skapl...@redhat.com> wrote:

> Hi,
>
> From the logs which You attached, it looks like Your neutron-metadata-agent
> can’t connect to the nova-api service. Please check if the nova-metadata-api
> is reachable from the node where Your neutron-metadata-agent is running.
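>
> A quick check would be something like (run from that node):
>
>   curl -sv http://<nova-api-host>:8775/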
>
> > On 12.11.2018, at 22:34, Ignazio Cassano wrote:
> >
> > Hello again,
> > I have another installation of ocata.
> > On ocata the metadata for a network id is displayed by ps -afe like this:
> >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy
> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
> --metadata_proxy_group=993
> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
> --log-dir=/var/log/neutron
> >
> > On queens like this:
> >  haproxy -f
> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> >
> > Is it the correct behaviour?
>
> Yes, that is correct. It was changed some time ago, see
> https://bugs.launchpad.net/neutron/+bug/1524916
>
> >
> > Regards
> > Ignazio
> >
> >
> >
> > On Mon, Nov 12, 2018 at 21:37, Slawomir Kaplonski <skapl...@redhat.com> wrote:
> > Hi,
> >
> > Can You share the logs from Your haproxy-metadata-proxy service which is
> > running in the qdhcp namespace? There should be some info about the
> > reason for those 500 errors.
> >
> > > On 12.11.2018, at 19:49, Ignazio Cassano wrote:
> > >
> > > Hi All,
> > > I upgraded my centos 7 openstack ocata to pike manually.
> > > All worked fine.
> > > Then I upgraded from pike to Queens, and instances stopped reaching
> > > the metadata on 169.254.169.254 with error 500.
> > > I am using isolated metadata true in my dhcp conf, and in the dhcp
> > > namespace port 80 is listening.
> > > Please, can anyone help me?
> > > Regards
> > > Ignazio
> > >
> > > ___
> > > OpenStack-operators mailing list
> > > OpenStack-operators@lists.openstack.org
> > >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > —
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> >
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Yes, sorry.
Port 8775 is also reachable from the neutron metadata agent.
Regards
Ignazio

On Mon, Nov 12, 2018 at 23:08, Slawomir Kaplonski <skapl...@redhat.com> wrote:

> Hi,
>
> > On 12.11.2018, at 22:55, Ignazio Cassano wrote:
> >
> > Hello,
> > the nova api is on the same controller on port 8774 and it can be
> > reached from the metadata agent
>
> Nova-metadata-api is running on port 8775 IIRC.
>
> > No firewall is present
> > Regards
> >
> > On Mon, Nov 12, 2018 at 22:40, Slawomir Kaplonski <skapl...@redhat.com> wrote:
> > Hi,
> >
> > From the logs which You attached, it looks like Your
> > neutron-metadata-agent can’t connect to the nova-api service. Please
> > check if the nova-metadata-api is reachable from the node where Your
> > neutron-metadata-agent is running.
> >
> > > On 12.11.2018, at 22:34, Ignazio Cassano wrote:
> > >
> > > Hello again,
> > > I have another installation of ocata.
> > > On ocata the metadata for a network id is displayed by ps -afe like
> this:
> > >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy
> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
> --metadata_proxy_group=993
> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
> --log-dir=/var/log/neutron
> > >
> > > On queens like this:
> > >  haproxy -f
> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> > >
> > > Is it the correct behaviour?
> >
> > Yes, that is correct. It was changed some time ago, see
> https://bugs.launchpad.net/neutron/+bug/1524916
> >
> > >
> > > Regards
> > > Ignazio
> > >
> > >
> > >
> > > On Mon, Nov 12, 2018 at 21:37, Slawomir Kaplonski <skapl...@redhat.com> wrote:
> > > Hi,
> > >
> > > Can You share the logs from Your haproxy-metadata-proxy service which
> > > is running in the qdhcp namespace? There should be some info about the
> > > reason for those 500 errors.
> > >
> > > > On 12.11.2018, at 19:49, Ignazio Cassano wrote:
> > > >
> > > > Hi All,
> > > > I upgraded my centos 7 openstack ocata to pike manually.
> > > > All worked fine.
> > > > Then I upgraded from pike to Queens, and instances stopped reaching
> > > > the metadata on 169.254.169.254 with error 500.
> > > > I am using isolated metadata true in my dhcp conf, and in the dhcp
> > > > namespace port 80 is listening.
> > > > Please, can anyone help me?
> > > > Regards
> > > > Ignazio
> > > >
> > > > ___
> > > > OpenStack-operators mailing list
> > > > OpenStack-operators@lists.openstack.org
> > > >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> > >
> > > —
> > > Slawek Kaplonski
> > > Senior software engineer
> > > Red Hat
> > >
> >
> > —
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> >
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines
first_packet = self.connection._read_packet()
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 987, in
_read_packet
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines
packet_header = self._read_bytes(4)
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1033, in
_read_bytes
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines
CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines
DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection
to MySQL server during query') [SQL: u'SELECT 1'] (Background on this error
at: http://sqlalche.me/e/e3q8)


Did anything go wrong in the nova database during the upgrade to queens?

# nova-manage db online_data_migrations

/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Running batches of 50 until complete
1 rows matched query migrate_instances_add_request_spec, 0 migrated
+----------------------------------------------+--------------+-----------+
|                  Migration                   | Total Needed | Completed |
+----------------------------------------------+--------------+-----------+
| delete_build_requests_with_no_instance_uuid  |      0       |     0     |
|    migrate_aggregate_reset_autoincrement     |      0       |     0     |
|              migrate_aggregates              |      0       |     0     |
|      migrate_instance_groups_to_api_db       |      0       |     0     |
|     migrate_instances_add_request_spec       |      1       |     0     |
|          migrate_keypairs_to_api_db          |      0       |     0     |
|       migrate_quota_classes_to_api_db        |      0       |     0     |
|        migrate_quota_limits_to_api_db        |      0       |     0     |
|          migration_migrate_to_uuid           |      0       |     0     |
|     populate_missing_availability_zones      |      0       |     0     |
|                populate_uuids                |      0       |     0     |
|     service_uuids_online_data_migration      |      0       |     0     |
+----------------------------------------------+--------------+-----------+




On Mon, Nov 12, 2018 at 22:40, Slawomir Kaplonski <skapl...@redhat.com> wrote:

> Hi,
>
> From the logs which You attached, it looks like Your neutron-metadata-agent
> can’t connect to the nova-api service. Please check if the nova-metadata-api
> is reachable from the node where Your neutron-metadata-agent is running.
>
> > On 12.11.2018, at 22:34, Ignazio Cassano wrote:
> >
> > Hello again,
> > I have another installation of ocata.
> > On ocata the metadata for a network id is displayed by ps -afe like this:
> >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy
> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
> --metadata_proxy_group=993
> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
> --log-dir=/var/log/neutron
> >
> > On queens like this:
> >  haproxy -f
> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> >
> > Is it the correct behaviour?
>
> Yes, that is correct. It was changed some time ago, see
> https://bugs.launchpad.net/neutron/+bug/1524916
>
> >
> > Regards
> > Ignazio
> >
> >
> >
> > On Mon, Nov 12, 2018 at 21:37, Slawomir Kaplonski <skapl...@redhat.com> wrote:
> > Hi,
> >
> > Can You share the logs from Your haproxy-metadata-proxy service which is
> > running in the qdhcp namespace? There should be some info about the
> > reason for those 500 errors.
> >
> > > On 12.11.2018, at 19:49, Ignazio Cassano wrote:
> > >
> > > Hi All,
> > > I upgraded my centos 7 openstack ocata to pike manually.
> > > All worked fine.
> > > Then I upgraded from pike to Queens, and instances stopped reaching
> > > the metadata on 169.254.169.254 with error 500.
> > > I am using isolated metadata true in my dhcp conf, and in the dhcp
> > > namespace port 80 is listening.
> > > Please, can anyone help me?
> > > Regards
> > > Ignazio
> > >
> > > ___
> > > OpenStack-operators mailing list
> > > OpenStack-operators@lists.openstack.org
> > >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > —
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> >
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Hello again, at the same time I tried to create an instance and the
nova-api log reports:

2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines
[req-e4799f40-eeab-482d-9717-cb41be8ffde2 89f76bc5de5545f381da2c10c7df7f15
59f1f232ce28409593d66d8f6495e434 - default default] Database connection was
found disconnected; reconnecting: DBConnectionError:
(pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server
during query') [SQL: u'SELECT 1'] (Background on this error at:
http://sqlalche.me/e/e3q8)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines Traceback
(most recent call last):
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 73,
in _connect_ping_listener
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines
connection.scalar(select([1]))
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 880,
in scalar
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines return
self.execute(object, *multiparams, **params).scalar()
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 948,
in execute
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines return
meth(self, multiparams, params)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269,
in _execute_on_connection
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines return
connection._execute_clauseelement(self, multiparams, params)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060,
in _execute_clauseelement
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines
compiled_sql, distilled_params
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200,
in _execute_context
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines context)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1409,
in _handle_dbapi_exception
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines
util.raise_from_cause(newraise, exc_info)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 203,
in raise_from_cause
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines
reraise(type(exception), exception, tb=exc_tb, cause=cause)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193,
in _execute_context
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines context)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line
507, in do_execute
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines
cursor.execute(statement, parameters)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines result =
self._query(query)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _quer




I never lost connections to the db before upgrading
:-(

On Mon, Nov 12, 2018 at 22:40, Slawomir Kaplonski <skapl...@redhat.com> wrote:

> Hi,
>
> From the logs which You attached, it looks like Your neutron-metadata-agent
> can’t connect to the nova-api service. Please check if the nova-metadata-api
> is reachable from the node where Your neutron-metadata-agent is running.
>
> > On 12.11.2018, at 22:34, Ignazio Cassano wrote:
> >
> > Hello again,
> > I have another installation of ocata.
> > On ocata the metadata for a network id is displayed by ps -afe like this:
> >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy
> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
> --metadata_proxy_group=993
> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
> --log-dir=/var/log/neutron
> >
> > On queens like this:
> >  haproxy -f
>

Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Hello,
the nova api is on the same controller on port 8774, and it can be reached
from the metadata agent.
No firewall is present
Regards

On Mon, Nov 12, 2018 at 22:40, Slawomir Kaplonski <skapl...@redhat.com> wrote:

> Hi,
>
> From the logs which You attached, it looks like Your neutron-metadata-agent
> can’t connect to the nova-api service. Please check if the nova-metadata-api
> is reachable from the node where Your neutron-metadata-agent is running.
>
> > On 12.11.2018, at 22:34, Ignazio Cassano wrote:
> >
> > Hello again,
> > I have another installation of ocata.
> > On ocata the metadata for a network id is displayed by ps -afe like this:
> >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy
> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
> --metadata_proxy_group=993
> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
> --log-dir=/var/log/neutron
> >
> > On queens like this:
> >  haproxy -f
> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> >
> > Is it the correct behaviour?
>
> Yes, that is correct. It was changed some time ago, see
> https://bugs.launchpad.net/neutron/+bug/1524916
>
> >
> > Regards
> > Ignazio
> >
> >
> >
> > On Mon, Nov 12, 2018 at 21:37, Slawomir Kaplonski <skapl...@redhat.com> wrote:
> > Hi,
> >
> > Can You share the logs from Your haproxy-metadata-proxy service which is
> > running in the qdhcp namespace? There should be some info about the
> > reason for those 500 errors.
> >
> > > On 12.11.2018, at 19:49, Ignazio Cassano wrote:
> > >
> > > Hi All,
> > > I upgraded my centos 7 openstack ocata to pike manually.
> > > All worked fine.
> > > Then I upgraded from pike to Queens, and instances stopped reaching
> > > the metadata on 169.254.169.254 with error 500.
> > > I am using isolated metadata true in my dhcp conf, and in the dhcp
> > > namespace port 80 is listening.
> > > Please, can anyone help me?
> > > Regards
> > > Ignazio
> > >
> > > ___
> > > OpenStack-operators mailing list
> > > OpenStack-operators@lists.openstack.org
> > >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > —
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> >
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Hello again,
I have another installation of ocata.
On ocata the metadata for a network id is displayed by ps -afe like this:
 /usr/bin/python2 /bin/neutron-ns-metadata-proxy
--pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
--metadata_proxy_socket=/var/lib/neutron/metadata_proxy
--network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
--state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
--metadata_proxy_group=993
--log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
--log-dir=/var/log/neutron

On queens like this:
 haproxy -f
/var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf

Is it the correct behaviour?

Regards
Ignazio



On Mon, Nov 12, 2018 at 21:37, Slawomir Kaplonski <skapl...@redhat.com> wrote:

> Hi,
>
> Can You share the logs from Your haproxy-metadata-proxy service which is
> running in the qdhcp namespace? There should be some info about the reason
> for those 500 errors.
>
> > On 12.11.2018, at 19:49, Ignazio Cassano wrote:
> >
> > Hi All,
> > I upgraded my centos 7 openstack ocata to pike manually.
> > All worked fine.
> > Then I upgraded from pike to Queens, and instances stopped reaching the
> > metadata on 169.254.169.254 with error 500.
> > I am using isolated metadata true in my dhcp conf, and in the dhcp
> > namespace port 80 is listening.
> > Please, can anyone help me?
> > Regards
> > Ignazio
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
PS
Thanks for your help

On Mon, Nov 12, 2018 at 22:15, Ignazio Cassano <ignaziocass...@gmail.com> wrote:

> Hello,
> attached is the log file.
>
> Connecting to an instance created before the upgrade, I also tried:
> wget http://169.254.169.254/2009-04-04/meta-data/instance-id
>
> The following is the output
>
> --2018-11-12 22:14:45--
> http://169.254.169.254/2009-04-04/meta-data/instance-id
> Connecting to 169.254.169.254:80... connected.
> HTTP request sent, awaiting response... 500 Internal Server Error
> 2018-11-12 22:14:45 ERROR 500: Internal Server Error
>
>
> On Mon, Nov 12, 2018 at 21:37, Slawomir Kaplonski <skapl...@redhat.com> wrote:
>
>> Hi,
>>
>> Can You share the logs from Your haproxy-metadata-proxy service which is
>> running in the qdhcp namespace? There should be some info about the
>> reason for those 500 errors.
>>
>> > On 12.11.2018, at 19:49, Ignazio Cassano wrote:
>> >
>> > Hi All,
>> > I upgraded my centos 7 openstack ocata to pike manually.
>> > All worked fine.
>> > Then I upgraded from pike to Queens, and instances stopped reaching
>> > the metadata on 169.254.169.254 with error 500.
>> > I am using isolated metadata true in my dhcp conf, and in the dhcp
>> > namespace port 80 is listening.
>> > Please, can anyone help me?
>> > Regards
>> > Ignazio
>> >
>> > ___
>> > OpenStack-operators mailing list
>> > OpenStack-operators@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>> —
>> Slawek Kaplonski
>> Senior software engineer
>> Red Hat
>>
>>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Hi All,
I upgraded my centos 7 openstack ocata to pike manually.
All worked fine.
Then I upgraded from pike to Queens, and instances stopped reaching the
metadata on 169.254.169.254 with error 500.
I am using isolated metadata true in my dhcp conf, and in the dhcp namespace
port 80 is listening.
Please, can anyone help me?
Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] nova_api resource_providers table issues on ocata

2018-10-18 Thread Ignazio Cassano
Hello, sorry for my late answer.
The following is the content of my ocata repo file:

[centos-openstack-ocata]
name=CentOS-7 - OpenStack ocata
baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-ocata/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
exclude=sip,PyQt4


EPEL is not enabled, as suggested in the documentation.
Regards
Ignazio

On Thu, Oct 18, 2018 at 10:24, Sylvain Bauza wrote:

>
>
> On Wed, Oct 17, 2018 at 4:46 PM Ignazio Cassano 
> wrote:
>
>> Hello, here is the select you suggested:
>>
>> MariaDB [nova]> select * from shadow_services;
>> Empty set (0,00 sec)
>>
>> MariaDB [nova]> select * from shadow_compute_nodes;
>> Empty set (0,00 sec)
>>
>> As far as the upgrade tooling is concerned, we are only using yum update
>> on the old compute nodes, to have the same packages installed as on the
>> new compute nodes.
>>
>
>
> Well, to be honest, I was looking at some other bug for OSP
> https://bugzilla.redhat.com/show_bug.cgi?id=1636463 which is pretty
> identical so you're not alone :-)
> For some reason, yum update modifies something in the DB that I don't know
> yet. Which exact packages are you using? RDO ones?
>
> I marked the downstream bug as NOTABUG since I wasn't able to reproduce it
> and given I also provided a SQL query for fixing it, but maybe we should
> try to see which specific package has a problem...
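>
> The fix query was along these lines (a sketch, not the verbatim one from
> the bug -- test on a copy and back up both DBs first):
>
>   UPDATE nova_api.resource_providers rp
>     JOIN nova.compute_nodes cn ON rp.name = cn.hypervisor_hostname
>     SET rp.uuid = cn.uuid
>     WHERE cn.deleted = 0;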
>
> -Sylvain
>
>
> Procedure used
>> We have an openstack with 3 compute nodes : podto1-kvm01, podto1-kvm02,
>> podto1-kvm03
>> 1) install a new compute node (podto1-kvm04)
>> 2) On the controller we discovered the new compute node: su -s /bin/sh -c
>> "nova-manage cell_v2 discover_hosts --verbose" nova
>> 3) Evacuate podto1-kvm01
>> 4) yum update on podto1-kvm01 and reboot it
>> 5) Evacuate podto1-kvm02
>> 6) yum update on podto1-kvm02 and reboot it
>> 7) Evacuate podto1-kvm03
>> 8) yum update podto1-kvm03 and reboot it
>>
>>
>>
>> On Wed, Oct 17, 2018 at 16:37, Matt Riedemann <mriede...@gmail.com> wrote:
>>
>>> On 10/17/2018 9:13 AM, Ignazio Cassano wrote:
>>> > Hello Sylvain, here is the output of some selects:
>>> > MariaDB [nova]> select host,hypervisor_hostname from compute_nodes;
>>> > +--------------+---------------------+
>>> > | host         | hypervisor_hostname |
>>> > +--------------+---------------------+
>>> > | podto1-kvm01 | podto1-kvm01        |
>>> > | podto1-kvm02 | podto1-kvm02        |
>>> > | podto1-kvm03 | podto1-kvm03        |
>>> > | podto1-kvm04 | podto1-kvm04        |
>>> > | podto1-kvm05 | podto1-kvm05        |
>>> > +--------------+---------------------+
>>> >
>>> > MariaDB [nova]> select host from compute_nodes where
>>> host='podto1-kvm01'
>>> > and hypervisor_hostname='podto1-kvm01';
>>> > +--------------+
>>> > | host         |
>>> > +--------------+
>>> > | podto1-kvm01 |
>>> > +--------------+
>>>
>>> Does your upgrade tooling run a db archive/purge at all? It's possible
>>> that the actual services table record was deleted via the os-services
>>> REST API for some reason, which would delete the compute_nodes table
>>> record, and then a restart of the nova-compute process would recreate
>>> the services and compute_nodes table records, but with a new compute
>>> node uuid and thus a new resource provider.
>>>
>>> Maybe query your shadow_services and shadow_compute_nodes tables for
>>> "podto1-kvm01" and see if a record existed at one point, was deleted and
>>> then archived to the shadow tables.
>>>
>>> --
>>>
>>> Thanks,
>>>
>>> Matt
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] nova_api resource_providers table issues on ocata

2018-10-17 Thread Ignazio Cassano
Hello, here is the select you suggested:

MariaDB [nova]> select * from shadow_services;
Empty set (0,00 sec)

MariaDB [nova]> select * from shadow_compute_nodes;
Empty set (0,00 sec)

As far as the upgrade tooling is concerned, we are only using yum update on
the old compute nodes, to have the same packages installed as on the new
compute nodes.
Procedure used
We have an openstack with 3 compute nodes : podto1-kvm01, podto1-kvm02,
podto1-kvm03
1) install a new compute node (podto1-kvm04)
2) On the controller we discovered the new compute node: su -s /bin/sh -c
"nova-manage cell_v2 discover_hosts --verbose" nova
3) Evacuate podto1-kvm01
4) yum update on podto1-kvm01 and reboot it
5) Evacuate podto1-kvm02
6) yum update on podto1-kvm02 and reboot it
7) Evacuate podto1-kvm03
8) yum update podto1-kvm03 and reboot it



On Wed, Oct 17, 2018 at 16:37, Matt Riedemann wrote:

> On 10/17/2018 9:13 AM, Ignazio Cassano wrote:
> > Hello Sylvain, here is the output of some selects:
> > MariaDB [nova]> select host,hypervisor_hostname from compute_nodes;
> > +--------------+---------------------+
> > | host         | hypervisor_hostname |
> > +--------------+---------------------+
> > | podto1-kvm01 | podto1-kvm01        |
> > | podto1-kvm02 | podto1-kvm02        |
> > | podto1-kvm03 | podto1-kvm03        |
> > | podto1-kvm04 | podto1-kvm04        |
> > | podto1-kvm05 | podto1-kvm05        |
> > +--------------+---------------------+
> >
> > MariaDB [nova]> select host from compute_nodes where host='podto1-kvm01'
> > and hypervisor_hostname='podto1-kvm01';
> > +--------------+
> > | host         |
> > +--------------+
> > | podto1-kvm01 |
> > +--------------+
>
> Does your upgrade tooling run a db archive/purge at all? It's possible
> that the actual services table record was deleted via the os-services
> REST API for some reason, which would delete the compute_nodes table
> record, and then a restart of the nova-compute process would recreate
> the services and compute_nodes table records, but with a new compute
> node uuid and thus a new resource provider.
>
> Maybe query your shadow_services and shadow_compute_nodes tables for
> "podto1-kvm01" and see if a record existed at one point, was deleted and
> then archived to the shadow tables.
>
> --
>
> Thanks,
>
> Matt
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] nova_api resource_providers table issues on ocata

2018-10-17 Thread Ignazio Cassano
Hello, I am sure we are not using nova-compute with duplicate names.
As I told you previously, we tried on 3 different openstack installations
and we faced the same issue.
Procedure used
We have an openstack with 3 compute nodes : podto1-kvm01, podto1-kvm02,
podto1-kvm03
1) install a new compute node (podto1-kvm04)
2) On the controller we discovered the new compute node: su -s /bin/sh -c
"nova-manage cell_v2 discover_hosts --verbose" nova
3) Evacuate podto1-kvm01
4) yum update on podto1-kvm01 and reboot it
5) Evacuate podto1-kvm02
6) yum update on podto1-kvm02 and reboot it
7) Evacuate podto1-kvm03
8) yum update podto1-kvm03 and reboot it

Regards


On Wed, Oct 17, 2018 at 16:19, Jay Pipes wrote:

> On 10/17/2018 01:41 AM, Ignazio Cassano wrote:
> > Hello Jay, when I add a new compute node I run nova-manage cell_v2
> > discover_hosts.
> > Is it possible this command updates the old host uuid in the resource
> > table?
>
> No, not unless you already had a nova-compute installed on a host with
> the exact same hostname... which, from looking at the output of your
> SELECT from compute_nodes table, doesn't seem to be the case.
>
> In short, I think both Sylvain and I are stumped as to how your
> placement resource_providers table ended up with these phantom records :(
>
> -jay
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] nova_api resource_providers table issues on ocata

2018-10-17 Thread Ignazio Cassano
Hello Sylvain, here is the output of some selects:
MariaDB [nova]> select host,hypervisor_hostname from compute_nodes;
+--------------+---------------------+
| host         | hypervisor_hostname |
+--------------+---------------------+
| podto1-kvm01 | podto1-kvm01        |
| podto1-kvm02 | podto1-kvm02        |
| podto1-kvm03 | podto1-kvm03        |
| podto1-kvm04 | podto1-kvm04        |
| podto1-kvm05 | podto1-kvm05        |
+--------------+---------------------+

MariaDB [nova]> select host from compute_nodes where host='podto1-kvm01'
and hypervisor_hostname='podto1-kvm01';
+--------------+
| host         |
+--------------+
| podto1-kvm01 |
+--------------+



On Wed, Oct 17, 2018 at 16:02, Sylvain Bauza wrote:

>
>
> On Wed, Oct 17, 2018 at 12:56 AM Jay Pipes  wrote:
>
>> On 10/16/2018 10:11 AM, Sylvain Bauza wrote:
>> > On Tue, Oct 16, 2018 at 3:28 PM Ignazio Cassano
>> > <ignaziocass...@gmail.com> wrote:
>> >
>> > Hi everybody,
>> > when on my ocata installation based on centos7 I update (only update,
>> > not changing openstack version) some kvm compute nodes, I
>> > discovered the uuids in the resource_providers nova_api db table are
>> > different from the uuids in the compute_nodes nova db table.
>> > This causes several errors in the nova-compute service, because it is
>> > not able to receive instances anymore.
>> > Aligning the uuids from compute_nodes solves this problem.
>> > Could anyone tell me if it is a bug?
>> >
>> >
>> > What do you mean by "updating some compute nodes"? In Nova, we
>> consider
>> > uniqueness of compute nodes by a tuple (host, hypervisor_hostname)
>> where
>> > host is your nova-compute service name for this compute host, and
>> > hypervisor_hostname is in the case of libvirt the 'hostname' reported
>> by
>> > the libvirt API [1]
>> >
>> > If somehow one of the two values change, then the Nova Resource Tracker
>> > will consider this new record as a separate compute node, hereby
>> > creating a new compute_nodes table record, and then a new UUID.
>> > Could you please check your compute_nodes table and see whether some
>> > entries were recently created?
>>
>> The compute_nodes table has no unique constraint on the
>> hypervisor_hostname field unfortunately, even though it should. It's not
>> like you can have two compute nodes with the same hostname. But, alas,
>> this is one of those vestigial tails in nova due to poor initial table
>> design and coupling between the concept of a nova-compute service worker
>> and the hypervisor resource node itself.
>>
>>
> Sorry if I was unclear, but I meant we have a UK for (host,
> hypervisor_hostname, deleted) (I didn't explain about deleted, but meh).
>
> https://github.com/openstack/nova/blob/01c33c5/nova/db/sqlalchemy/models.py#L116-L118
>
> But yeah, we don't have any UK for just (hypervisor_hostname, deleted),
> sure.
>
> Ignazio, I was tempted to say you may have run into this:
>>
>> https://bugs.launchpad.net/nova/+bug/1714248
>>
>> But then I see you're not using Ironic... I'm not entirely sure how you
>> ended up with duplicate hypervisor_hostname records for the same compute
>> node, but some of those duplicate records must have had the deleted
>> field set to a non-zero value, given the constraint we currently have on
>> (host, hypervisor_hostname, deleted).
>>
>> This means that your deployment script or some external scripts must
>> have been deleting compute node records somehow, though I'm not entirely
>> sure how...
>>
>>
> Yeah, that's why I asked for the compute_nodes records. Ignazio, could you
> please verify this?
> Do you have multiple records for the same (host, hypervisor_hostname)
> tuple?
>
> 'select * from compute_nodes where host=XXX and hypervisor_hostname=YYY'
>
>
> -Sylvain
>
> Best,
>> -jay
>>
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] nova_api resource_providers table issues on ocata

2018-10-16 Thread Ignazio Cassano
Hello Iain, it is not possible.
I checked the hostnames several times.
No changes.
I tried the same procedure on 3 different ocata installations, because we
have 3 distinct openstacks. Same results.
Regards
Ignazio

On Tue, Oct 16, 2018 at 17:20, iain MacDonnell wrote:

>
> Is it possible that the hostnames of the nodes changed when you updated
> them? e.g. maybe they were using fully-qualified names before and
> changed to short-form, or vice versa ?
>
>  ~iain
>
>
> On 10/16/2018 07:22 AM, Ignazio Cassano wrote:
> > Hi Sylvain,
> > I mean launching "yum update" on compute nodes.
> > Now I am going to describe what happened.
> > We had an environment made up of 3 kvm nodes.
> > We added two new compute nodes.
> > Since the addition was made 3 or 4 months after the first openstack
> > installation, the 2 new compute nodes were updated to the most recent
> > ocata packages.
> > So we launched a yum update also on the 3 old compute nodes.
> > After the above operations, the resource_providers table contained wrong
> > uuids for the 3 old nodes and they stopped working.
> > Updating the resource_providers uuids with the values from the
> > compute_nodes table, the 3 old nodes went back to working fine.
> > Regards
> > Ignazio
> >
> > Il giorno mar 16 ott 2018 alle ore 16:11 Sylvain Bauza
> > mailto:sba...@redhat.com>> ha scritto:
> >
> >
> >
> > On Tue, Oct 16, 2018 at 3:28 PM Ignazio Cassano
> > mailto:ignaziocass...@gmail.com>> wrote:
> >
> > Hi everybody,
> > when on my ocata installation based on centos7 I update (only
> > update, not changing openstack version) some kvm compute nodes,
> > I discovered the uuids in the resource_providers nova_api db table are
> > different from the uuids in the compute_nodes nova db table.
> > This causes several errors in the nova-compute service, because it is
> > not able to receive instances anymore.
> > Aligning the uuids from compute_nodes solves this problem.
> > Could anyone tell me if it is a bug ?
> >
> >
> > What do you mean by "updating some compute nodes" ? In Nova, we
> > consider uniqueness of compute nodes by a tuple (host,
> > hypervisor_hostname) where host is your nova-compute service name
> > for this compute host, and hypervisor_hostname is in the case of
> > libvirt the 'hostname' reported by the libvirt API [1]
> >
> > If somehow one of the two values change, then the Nova Resource
> > Tracker will consider this new record as a separate compute node,
> > hereby creating a new compute_nodes table record, and then a new
> > UUID.
> > Could you please check your compute_nodes table and see whether some
> > entries were recently created ?
> >
> > -Sylvain
> >
> > [1]
> > https://libvirt.org/docs/libvirt-appdev-guide-python/en-US/html/libvirt_application_development_guide_using_python-Connections-Host_Info.html
> >
> > Regards
> > Ignazio
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] nova_api resource_providers table issues on ocata

2018-10-16 Thread Ignazio Cassano
Hi Sylvain,
I mean launching "yum update" on compute nodes.
Now I am going to describe what happened.
We had an environment made up of 3 kvm nodes.
We added two new compute nodes.
Since the addition was made 3 or 4 months after the first openstack
installation, the 2 new compute nodes were updated to the most recent
ocata packages.
So we launched a yum update also on the 3 old compute nodes.
After the above operations, the resource_providers table contained wrong
uuids for the 3 old nodes and they stopped working.
Updating the resource_providers uuids with the values from the
compute_nodes table, the 3 old nodes went back to working fine (see the
sketch below).
Regards
Ignazio
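
A sketch of that manual realignment, assuming both schemas live on the same
MariaDB server under the default names ("nova" and "nova_api") and that the
resource provider name matches the hypervisor hostname, as it does with the
libvirt driver; take a database backup before attempting anything like this:

# Re-point each resource provider at the uuid of the matching,
# non-deleted compute_nodes row.
mysql -u root -p -e "
  UPDATE nova_api.resource_providers rp
  JOIN nova.compute_nodes cn ON cn.hypervisor_hostname = rp.name
  SET rp.uuid = cn.uuid
  WHERE cn.deleted = 0;"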

Il giorno mar 16 ott 2018 alle ore 16:11 Sylvain Bauza 
ha scritto:

>
>
> On Tue, Oct 16, 2018 at 3:28 PM Ignazio Cassano 
> wrote:
>
>> Hi everybody,
>> when on my ocata installation based on centos7 I update (only update, not
>> changing openstack version) some kvm compute nodes, I discovered the uuids
>> in the resource_providers nova_api db table are different from the uuids
>> in the compute_nodes nova db table.
>> This causes several errors in the nova-compute service, because it is not
>> able to receive instances anymore.
>> Aligning the uuids from compute_nodes solves this problem.
>> Could anyone tell me if it is a bug ?
>>
>>
> What do you mean by "updating some compute nodes" ? In Nova, we consider
> uniqueness of compute nodes by a tuple (host, hypervisor_hostname) where
> host is your nova-compute service name for this compute host, and
> hypervisor_hostname is in the case of libvirt the 'hostname' reported by
> the libvirt API [1]
>
> If somehow one of the two values change, then the Nova Resource Tracker
> will consider this new record as a separate compute node, hereby creating a
> new compute_nodes table record, and then a new UUID.
> Could you please check your compute_nodes table and see whether some
> entries were recently created ?
>
> -Sylvain
>
> [1]
> https://libvirt.org/docs/libvirt-appdev-guide-python/en-US/html/libvirt_application_development_guide_using_python-Connections-Host_Info.html
>
> Regards
>> Ignazio
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] nova_api resource_providers table issues on ocata

2018-10-16 Thread Ignazio Cassano
Hi everybody,
when on my ocata installation based on centos7 I update (only update, not
changing openstack version) some kvm compute nodes, I discovered the uuids
in the resource_providers nova_api db table are different from the uuids in
the compute_nodes nova db table.
This causes several errors in the nova-compute service, because it is not
able to receive instances anymore.
Aligning the uuids from compute_nodes solves this problem.
Could anyone tell me if it is a bug ?

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ocata nova /etc/nova/policy.json

2018-09-06 Thread Ignazio Cassano
Thanks, but I made a mistake because I forgot to change the user environment
variables before deleting the instance.
A user with the user role cannot delete instances of other projects.
Sorry for my mistake.
Regards
Ignazio

Il giorno gio 6 set 2018 alle ore 16:41 iain MacDonnell <
iain.macdonn...@oracle.com> ha scritto:

>
>
> On 09/06/2018 06:31 AM, Ignazio Cassano wrote:
> > I installed openstack ocata on centos and I saw /etc/nova/policy.json
> > contains the following:
> > {
> > }
> >
> > I created an instance in a project "admin" with user admin that
> > belongs to the admin project
> >
> > I created a demo project with a user demo with "user" role.
> >
> > Using command lines (openstack server list --all-projects) the user demo
> > can list the admin instances and can also delete one of them.
> >
> > I think this is a bug and a nova policy.json must be created with some
> > rules to avoid the above.
>
> See
>
> https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html
>
> You have something else going on ...
>
>  ~iain
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] ocata nova /etc/nova/policy.json

2018-09-06 Thread Ignazio Cassano
Hi everyone,
I installed openstack ocata on centos and I saw /etc/nova/policy.json
contains the following:
{
}

I created an instance in a project "admin" with user admin that belongs
to the admin project

I created a demo project with a user demo with "user" role.

Using command lines (openstack server list --all-projects) the user demo
can list the admin instances and can also delete one of them.

I think this is a bug and a nova policy.json must be created with some
rules to avoid the above (see the sketch below).
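
If explicit rules are wanted here, a hypothetical override might look like
the sketch below; note that with policy-in-code an empty policy.json simply
means "use the defaults", and the rule names below are nova's in-code
defaults, so verify them for your release:

# Restrict cross-project listing of servers to admins (hypothetical
# override; rule names per nova's policy-in-code defaults).
cat > /etc/nova/policy.json <<'EOF'
{
    "os_compute_api:servers:index:get_all_tenants": "rule:admin_api",
    "os_compute_api:servers:detail:get_all_tenants": "rule:admin_api"
}
EOF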

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] dashboard show only project after upgrading

2018-07-31 Thread Ignazio Cassano
Hello, purging and reinstalling the dashboard solved it.
thanks
ignazio
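
For reference, the purge and reinstall on a CentOS/RDO controller looks
roughly like this sketch (package name per RDO; back up
/etc/openstack-dashboard/local_settings first):

yum -y remove openstack-dashboard
rm -rf /usr/share/openstack-dashboard   # clears stale static/compressed files
yum -y install openstack-dashboard
systemctl restart httpd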


Il Lun 30 Lug 2018 17:53 Chris Apsey  ha scritto:

> Ignazio,
>
> Are your horizon instances in separate containers/VMS?  If so, I'd highly
> recommend completely wiping them and rebuilding from scratch since horizon
> itself is stateless.  I am not a fan of upgrades for reasons like this.
>
> If that's not possible, a purge of the horizon packages on your controller
> and a reinstallation should fix it.
>
> Chris
>
> On July 30, 2018 11:38:03 Ignazio Cassano 
> wrote:
>
>> Sorry, I sent a wrong image.
>> The correct screenshot is attached here.
>> Regards
>>
>> 2018-07-30 17:33 GMT+02:00 Ignazio Cassano :
>>
>>> Hello everyone,
>>> I upgraded openstack centos 7 from ocata to pike ad command line work
>>> fine
>>> but dashboard does not show any menu on the left .
>>> I missed the following menus:
>>>
>>> Project
>>> Admin
>>> Identity
>>>
>>> You can find the image attached here.
>>>
>>> Could anyone help me ?
>>> Regards
>>> Ignazio
>>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] manila-ui does not work after upgrading from ocata to pike

2018-07-31 Thread Ignazio Cassano
Hello Tom, I will upgrade from pike to Queens asap.
I upgraded from ocata to pike to go step by step, but if it is useful for
the community I can open a bug.
What do you think ?
Thanks
Ignazio


Il Mar 31 Lug 2018 19:12 Tom Barron  ha scritto:

> On 31/07/18 14:23 +0200, Ignazio Cassano wrote:
> >Hi everyone,
> >I upgraded my centos 7 openstack from ocata to pike.
> >Openstack dashboard works fine only if I remove the openstack manila ui
> >package.
> >With the manila ui it gives me an internal server error.
> >
> >In httpd error log I read:
> >
> >
> > KeyError:  >
> >
> >Please, anyone has solved this issue yet ?
> >
> >Regards
> >Ignazio
>
> >___
> >OpenStack-operators mailing list
> >OpenStack-operators@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> Seems likely to be a packaging issue.  Might be the issue that this
> (queens) patch [1] addressed.  To get someone to work on the pike
> issue please file a BZ against RDO [2].
>
> -- Tom Barron (tbarron)
>
> [1] https://review.rdoproject.org/r/#/c/14049/
>
> [2]
> https://bugzilla.redhat.com/enter_bug.cgi?product=RDO&component=openstack-manila-ui
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] manila-ui does not work after upgrading from ocata to pike

2018-07-31 Thread Ignazio Cassano
Hi everyone,
I upgraded my centos 7 openstack from ocata to pike.
Openstack dashboard works fine only if I remove the openstack manila ui
package.
With the manila ui it gives me an internal server error.

In httpd error log I read:


 KeyError:
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] dashboard show only project after upgrading

2018-07-30 Thread Ignazio Cassano
Ok, I will check.
thanks

Il Lun 30 Lug 2018 23:40 Mathieu Gagné  ha scritto:

> Try enabling DEBUG in local_settings.py. Some dashboard or panel might
> fail loading for some reasons.
> I had a similar behavior last week and enabling DEBUG should show the
> error.
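
On a packaged CentOS install that is a one-liner, as a sketch (path per
RDO; adjust if your local_settings lives elsewhere):

# Turn on Django DEBUG so the failing dashboard/panel gets reported.
sed -i 's/^DEBUG = False/DEBUG = True/' /etc/openstack-dashboard/local_settings
systemctl restart httpd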
>
> --
> Mathieu
>
>
> On Mon, Jul 30, 2018 at 11:35 AM, Ignazio Cassano
>  wrote:
> > Sorry, I sent a wrong image.
> > The correct screenshot is attached here.
> > Regards
> >
> > 2018-07-30 17:33 GMT+02:00 Ignazio Cassano :
> >>
> >> Hello everyone,
> >> I upgraded openstack centos 7 from ocata to pike and the command line
> >> works fine,
> >> but the dashboard does not show any menu on the left.
> >> I am missing the following menus:
> >>
> >> Project
> >> Admin
> >> Identity
> >>
> >> You can find the image attached here.
> >>
> >> Could anyone help me ?
> >> Regards
> >> Ignazio
> >
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] dashboard show only project after upgrading

2018-07-30 Thread Ignazio Cassano
Hello Chris, I am not using containers yet. I will try to purge it.
Many thanks.
Ignazio

Il Lun 30 Lug 2018 17:53 Chris Apsey  ha scritto:

> Ignazio,
>
> Are your horizon instances in separate containers/VMS?  If so, I'd highly
> recommend completely wiping them and rebuilding from scratch since horizon
> itself is stateless.  I am not a fan of upgrades for reasons like this.
>
> If that's not possible, a purge of the horizon packages on your controller
> and a reinstallation should fix it.
>
> Chris
>
> On July 30, 2018 11:38:03 Ignazio Cassano 
> wrote:
>
>> Sorry, I sent a wrong image.
>> The correct screenshot is attached here.
>> Regards
>>
>> 2018-07-30 17:33 GMT+02:00 Ignazio Cassano :
>>
>>> Hello everyone,
>>> I upgraded openstack centos 7 from ocata to pike and the command line
>>> works fine,
>>> but the dashboard does not show any menu on the left.
>>> I am missing the following menus:
>>>
>>> Project
>>> Admin
>>> Identity
>>>
>>> You can find the image attached here.
>>>
>>> Could anyone help me ?
>>> Regards
>>> Ignazio
>>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] diskimage-builder error

2018-07-03 Thread Ignazio Cassano
Hi Tony, the message I sent is waiting for moderator approval because it is
big. :-(

2018-07-03 6:36 GMT+02:00 Tony Breeds :

> On Tue, Jul 03, 2018 at 06:12:21AM +0200, Ignazio Cassano wrote:
> > Hi Tony, I sent log file and script yesterday. I hope you received them.
>
> Sorry I can't find them in any of my inboxes :(
>
> Yours Tony.
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] diskimage-builder error

2018-07-02 Thread Ignazio Cassano
Hi Tony, I sent log file and script yesterday. I hope you received them.
Ignazio

Il Mar 3 Lug 2018 02:49 Tony Breeds  ha scritto:

> On Mon, Jul 02, 2018 at 08:13:39AM +0200, Ignazio Cassano wrote:
> > Tony, do you mean the script I am using to create the image ?
>
> Yup, it'd be good to try and reproduce this outside your environment as
> that'll make fixing the underlying bug quicker.
>
> Yours Tony.
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] diskimage-builder error

2018-07-02 Thread Ignazio Cassano
Tony, do you mean the script I am using to create the image ?


2018-07-02 8:11 GMT+02:00 Tony Breeds :

> On Mon, Jul 02, 2018 at 07:55:13AM +0200, Ignazio Cassano wrote:
> > Hi Tony,
> > applying the patch reported here (https://review.openstack.org/#/c/561740/)
> > the issue is solved.
> > The above patch was related to another issue (distutils) but it also
> > solves the cleanup error.
> > In any case I could unpatch the diskimage code and send you the log.
> > What do you think ?
>
> That would be helpful.  Or if it's easier you can let us know how you're
> running DIB
>
>
> Yours Tony.
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] diskimage-builder error

2018-07-01 Thread Ignazio Cassano
Hi Tony,
applying the patch reported here (https://review.openstack.org/#/c/561740/)
the issue is solved.
The above patch was related to another issue (distutils) but it also solves
the cleanup error.
In any case I could unpatch the diskimage code and send you the log.
What do you think ?
Regards
Ignazio


2018-07-02 3:34 GMT+02:00 Tony Breeds :

> On Sun, Jul 01, 2018 at 12:25:24PM +0200, Ignazio Cassano wrote:
> > Hi All,
> > I just installed disk-image builder on my centos 7.
> > For creating a centos7 image I am using the same command used 3 or 4
> > months ago, but with the latest diskimage-builder installed with pip I
> > got the following error:
> >
> > + /usr/share/diskimage-builder/lib/common-functions:tmpfs_check:20 :   '[' -r /proc/meminfo ']'
> > ++ /usr/share/diskimage-builder/lib/common-functions:tmpfs_check:21 :   awk '/^MemTotal/ { print $2 }' /proc/meminfo
> > + /usr/share/diskimage-builder/lib/common-functions:tmpfs_check:21 :   total_kB=8157200
> > + /usr/share/diskimage-builder/lib/common-functions:tmpfs_check:24 :   RAM_NEEDED=4
> > + /usr/share/diskimage-builder/lib/common-functions:tmpfs_check:25 :   '[' 8157200 -lt 4194304 ']'
> > + /usr/share/diskimage-builder/lib/common-functions:tmpfs_check:25 :   return 0
> > + /usr/share/diskimage-builder/lib/common-functions:cleanup_image_dir:211 :   timeout 120 sh -c 'while ! sudo umount -f /tmp/dib_image.azW5Wi4F; do sleep 1; done'
> > + /usr/share/diskimage-builder/lib/common-functions:cleanup_image_dir:216 :   rm -rf --one-file-system /tmp/dib_image.azW5Wi4F
> > + /usr/share/diskimage-builder/lib/img-functions:trap_cleanup:46 :   exit 1
> >
> >
> >
> > Does anyone know if it is a bug ?
>
> Not one I know of.  It looks like the image based install was close to
> completion, the tmpfs was unmounted but for some reason the removal of
> the (empty) directory failed.  The only reason I can think of is if a
> FS was still mounted.
>
> Are you able to send the full logs?  It's possible something failed
> earlier and we're just seeing a secondary failure.
>
> If you can supply the logs, can you supply your command line and link to
> the base image you're using?
>
> Yours Tony.
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] diskimage-builder error

2018-07-01 Thread Ignazio Cassano
Hi All,
I just installed disk-image builder on my centos 7.
For creating a centos7 image I am using the same command used 3 or 4 months
ago, but with the latest diskimage-builder installed with pip I got the
following error:

+ /usr/share/diskimage-builder/lib/common-functions:tmpfs_check:20 :   '[' -r /proc/meminfo ']'
++ /usr/share/diskimage-builder/lib/common-functions:tmpfs_check:21 :   awk '/^MemTotal/ { print $2 }' /proc/meminfo
+ /usr/share/diskimage-builder/lib/common-functions:tmpfs_check:21 :   total_kB=8157200
+ /usr/share/diskimage-builder/lib/common-functions:tmpfs_check:24 :   RAM_NEEDED=4
+ /usr/share/diskimage-builder/lib/common-functions:tmpfs_check:25 :   '[' 8157200 -lt 4194304 ']'
+ /usr/share/diskimage-builder/lib/common-functions:tmpfs_check:25 :   return 0
+ /usr/share/diskimage-builder/lib/common-functions:cleanup_image_dir:211 :   timeout 120 sh -c 'while ! sudo umount -f /tmp/dib_image.azW5Wi4F; do sleep 1; done'
+ /usr/share/diskimage-builder/lib/common-functions:cleanup_image_dir:216 :   rm -rf --one-file-system /tmp/dib_image.azW5Wi4F
+ /usr/share/diskimage-builder/lib/img-functions:trap_cleanup:46 :   exit 1



Does anyone know if it is a bug ?
Please, help me.
Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] security groups loggin on ocata

2018-06-22 Thread Ignazio Cassano
Dear All,
I read that the neutron service_plugins support logging for security groups.
I did not understand if the above feature is available only from the queens
release
or if there is a way to enable it also on ocata.
In any case, could anyone suggest a way to log security groups ?
Many Thanks and Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] openstack ocata gnocchi statsd errors

2018-06-15 Thread Ignazio Cassano
Many thanks for your explanation.
Regards
Ignazio

2018-06-14 17:35 GMT+02:00 gordon chung :

>
>
> On 2018-06-14 1:30 AM, Ignazio Cassano wrote:
> > Hello Gordon, what do you mean by "coordination service" ?
>
> it is what handles locking/coordination across multiple workers. you
> configure it via a config option in your gnocchi.conf file.
>
> as you didn't define one it is using your indexer by default and creates
> locks on, in your case, mariadb.
>
>
> i would upgrade your tooz as there was a bug which caused it to create a
> lot of open connections. you can also increase your mariadb connection
> limit.
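
A sketch of moving coordination off the indexer, assuming a reachable redis
server and that your gnocchi 3.x build reads coordination_url from the
[storage] section (verify the option location for your exact version):

# Point gnocchi at a dedicated coordinator instead of MariaDB locks.
crudini --set /etc/gnocchi/gnocchi.conf storage coordination_url redis://controller:6379
systemctl restart openstack-gnocchi-metricd openstack-gnocchi-statsd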
>
> cheers,
>
>
> --
> gord
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] openstack ocata gnocchi statsd errors

2018-06-13 Thread Ignazio Cassano
Hello Gordon, what do you mean by "coordination service" ?
I installed ceilometer and gnocchi following the ocata community documentation.
The versions are the following:

openstack-gnocchi-statsd-3.1.16-1.el7.noarch
python2-gnocchiclient-3.1.0-1.el7.noarch
python-gnocchi-3.1.16-1.el7.noarch
openstack-gnocchi-api-3.1.16-1.el7.noarch
openstack-gnocchi-indexer-sqlalchemy-3.1.16-1.el7.noarch
openstack-gnocchi-common-3.1.16-1.el7.noarch
openstack-gnocchi-metricd-3.1.16-1.el7.noarch

The indexer is on mariadb.

Thanks and Regards
Ignazio

2018-06-14 0:51 GMT+02:00 gordon chung :

> i would probably ask this question on gnocchi[1] as i don't know how many
> devs read this. you should also add what numerical version of gnocchi
> you're using and what coordination service you're using.
>
> if i were to quickly guess, i'm going to assume you're not using a
> dedicated coordination service and are relying on the db based on your
> comment. if so, i would add the version of tooz you have as there are
> certain versions of tooz that do not handle locking with SQL well. (don't
> have the exact versions of tooz that are broken handy)
>
> [1] https://github.com/gnocchixyz/gnocchi/issues
> --
> gord
>
> ________
> From: Ignazio Cassano 
> Sent: June 13, 2018 7:36 AM
> To: OpenStack Operators
> Subject: [Openstack-operators] openstack ocata gnocchi statsd errors
>
> Hello everyone,
> I installed centos 7 openstack ocata with gnocchi.
> The gnocchi backand is "file" and /var/lib/gnocchi is on netapp nfs.
> Sometime a lot of locks are created on nfs and the statsd.log reports
> the following:
>
> 2018-03-05 08:56:34.743 58931 ERROR trollius [-] Exception in callback
> _flush() at /usr/lib/python2.7/site-packages/gnocchi/statsd.py:179
> handle:  /usr/lib/python2.7/site-packages/gnocchi/statsd.py:179>
> 2018-03-05 08:56:34.743 58931 ERROR trollius Traceback (most recent call
> last):
> 2018-03-05 08:56:34.743 58931 ERROR trollius   File
> "/usr/lib/python2.7/site-packages/trollius/events.py", line 136, in _run
> 2018-03-05 08:56:34.743 58931 ERROR trollius
>  self._callback(*self._args)
> 2018-03-05 08:56:34.743 58931 ERROR trollius   File
> "/usr/lib/python2.7/site-packages/gnocchi/statsd.py", line 181, in _flush
> 2018-03-05 08:56:34.743 58931 ERROR trollius stats.flush()
> 2018-03-05 08:56:34.743 58931 ERROR trollius   File
> "/usr/lib/python2.7/site-packages/gnocchi/statsd.py", line 93, in flush
> 2018-03-05 08:56:34.743 58931 ERROR trollius with_metrics=True)
> 2018-03-05 08:56:34.743 58931 ERROR trollius   File
> "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
> 2018-03-05 08:56:34.743 58931 ERROR trollius ectxt.value = e.inner_exc
> 2018-03-05 08:56:34.743 58931 ERROR trollius   File
> "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in
> __exit__
> 2018-03-05 08:56:34.743 58931 ERROR trollius self.force_reraise()
> 2018-03-05 08:56:34.743 58931 ERROR trollius   File
> "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in
> force_reraise
> 2018-03-05 08:56:34.743 58931 ERROR trollius six.reraise(self.type_,
> self.value, self.tb)
> 2018-03-05 08:56:34.743 58931 ERROR trollius   File
> "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 139, in wrapper
> 2018-03-05 08:56:34.743 58931 ERROR trollius return f(*args, **kwargs)
> 2018-03-05 08:56:34.743 58931 ERROR trollius   File
> "/usr/lib/python2.7/site-packages/gnocchi/indexer/sqlalchemy.py", line
> 932, in get_resource
> 2018-03-05 08:56:34.743 58931 ERROR trollius session,
> resource_type)['resource']
> 2018-03-05 08:56:34.743 58931 ERROR trollius   File
> "/usr/lib/python2.7/site-packages/gnocchi/indexer/sqlalchemy.py", line
> 574, in _resource_type_to_mappers
>
>
> Could anyone help me, please?
> What's happening ?
>
> Regards
> Ignazio
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] openstack ocata gnocchi statsd errors

2018-06-13 Thread Ignazio Cassano
Hello everyone,
I installed centos 7 openstack ocata with gnocchi.
The gnocchi backend is "file" and /var/lib/gnocchi is on netapp nfs.
Sometimes a lot of locks are created on nfs and the statsd.log reports
the following:

2018-03-05 08:56:34.743 58931 ERROR trollius [-] Exception in callback
_flush() at /usr/lib/python2.7/site-packages/gnocchi/statsd.py:179
handle: 
2018-03-05 08:56:34.743 58931 ERROR trollius Traceback (most recent call
last):
2018-03-05 08:56:34.743 58931 ERROR trollius   File
"/usr/lib/python2.7/site-packages/trollius/events.py", line 136, in _run
2018-03-05 08:56:34.743 58931 ERROR trollius self._callback(*self._args)
2018-03-05 08:56:34.743 58931 ERROR trollius   File
"/usr/lib/python2.7/site-packages/gnocchi/statsd.py", line 181, in _flush
2018-03-05 08:56:34.743 58931 ERROR trollius stats.flush()
2018-03-05 08:56:34.743 58931 ERROR trollius   File
"/usr/lib/python2.7/site-packages/gnocchi/statsd.py", line 93, in flush
2018-03-05 08:56:34.743 58931 ERROR trollius with_metrics=True)
2018-03-05 08:56:34.743 58931 ERROR trollius   File
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
2018-03-05 08:56:34.743 58931 ERROR trollius ectxt.value = e.inner_exc
2018-03-05 08:56:34.743 58931 ERROR trollius   File
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in
__exit__
2018-03-05 08:56:34.743 58931 ERROR trollius self.force_reraise()
2018-03-05 08:56:34.743 58931 ERROR trollius   File
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in
force_reraise
2018-03-05 08:56:34.743 58931 ERROR trollius six.reraise(self.type_,
self.value, self.tb)
2018-03-05 08:56:34.743 58931 ERROR trollius   File
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 139, in wrapper
2018-03-05 08:56:34.743 58931 ERROR trollius return f(*args, **kwargs)
2018-03-05 08:56:34.743 58931 ERROR trollius   File
"/usr/lib/python2.7/site-packages/gnocchi/indexer/sqlalchemy.py", line 932,
in get_resource
2018-03-05 08:56:34.743 58931 ERROR trollius session,
resource_type)['resource']
2018-03-05 08:56:34.743 58931 ERROR trollius   File
"/usr/lib/python2.7/site-packages/gnocchi/indexer/sqlalchemy.py", line 574,
in _resource_type_to_mappers


Could anyone help me, please?
What's happening ?

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] openstack ocata swift

2018-06-12 Thread Ignazio Cassano
Hello everyone,
I configured ocata with three vm storage backends
and one proxy.
When a storage backend goes down, the proxy continues to redirect write
requests to it. Is this the default behaviour ?
Please, could anyone help me ?
I expect the proxy not to redirect write requests automatically to a node
that is shut down.
In the documentation I read that I must exclude it manually.
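
For what it is worth, with 3 replicas the proxy only needs a quorum of
backends, and writes aimed at a dead node land on handoff nodes, so writes
continuing to succeed with one node down is expected. The manual exclusion
is done in the rings; a sketch, with a placeholder device search value:

cd /etc/swift
# Zero the failed node's weight in each ring, rebalance, then copy the
# regenerated *.ring.gz files to every proxy and storage node.
swift-ring-builder object.builder set_weight d1 0
swift-ring-builder container.builder set_weight d1 0
swift-ring-builder account.builder set_weight d1 0
swift-ring-builder object.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder account.builder rebalance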
Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] openstack ocata swift 404

2018-06-11 Thread Ignazio Cassano
Hi everyone,
I have just installed openstack ocata with 3 swift servers for testing
purposes.
When all swift servers are up I can create containers.
If one server is down, I can still create containers, but sometimes only
after trying a lot of times.
Probably some attempts land on the server that is shut down and sometimes I
reach the other servers.

Is it the correct behaviour ?

Or must I modify my configuration to exclude a server from the cluster ?

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] community vs founation membership

2018-05-22 Thread Ignazio Cassano
Hi Jeremy, thanks for your help.
I am interested in openstack testing (not in contributing code).
Does becoming a community member give me any advantage ?
At this time I am testing ocata on centos 7.
My environment is in HA with pacemaker (3 controllers) and 5 kvm nodes.
Regards
Ignazio

2018-05-22 15:56 GMT+02:00 Jeremy Stanley <fu...@yuggoth.org>:

> On 2018-05-22 15:32:36 +0200 (+0200), Ignazio Cassano wrote:
> > please, what's the difference between community and foundation
> > membership ?
>
> The "community" setting is just a means of indicating that you have
> a profile/account for any of various purposes (scheduling, speaker
> submissions, et cetera) but are not officially an Individual Member
> of the OpenStack Foundation. A foundation membership is necessary
> for some official activities, particularly for participating in
> elections (board of directors, user committee, technical committee,
> project team lead) as either a candidate or voter. Joining the
> OpenStack Foundation as an Individual Member comes with no cost
> other than a minute or two of your time to provide contact
> information at https://www.openstack.org/join/ but does obligate you
> to at least vote in OpenStack Foundation Board of Directors
> elections once you are eligible to do so.
> --
> Jeremy Stanley
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] community vs founation membership

2018-05-22 Thread Ignazio Cassano
Hi all,
please, what's the difference between community and foundation membership ?

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] ocata octavia http loadbalancer error 503 Service Unavailable

2018-05-16 Thread Ignazio Cassano
Hi everyone,
I am using octavia on centos7 ocata.
When I define a http load balancer it does not work.
Accessing the load balancer address returns a 503 error.
The above happens when the load balancer protocol specified is HTTP.

If the load balancer protocol specified is TCP, it works.

Probably the haproxy in the amphora instance is facing some issue with the
http health check ?

Could anyone help me, please ?

Thanks & Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ocata gnocchi file system : erasing old data

2018-05-16 Thread Ignazio Cassano
Many thanks
Ignazio

2018-05-16 16:45 GMT+02:00 gordon chung <g...@live.ca>:

>
>
> On 2018-05-15 2:40 AM, Ignazio Cassano wrote:
> > gnocchi resource delete instance id
> >
> >
> > Does the above procedure remove data both from the database and from
> > the /var/lib/gnocchi directory ?
>
> not immediately, it will mark the data for deletion. there is a
> 'janitor' service that runs periodically that will remove the data. this
> is defined by the `metric_cleanup_delay` in the configuration file.
>
> cheers,
>
> --
> gord
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ocata gnocchi file system : erasing old data

2018-05-15 Thread Ignazio Cassano
Hi Gordon, please, let me understand better.
I am collecting only instance metrics:
I need to remove data about instances that have been removed.
I could do that with:
gnocchi resource list --type instance -c id -f value
and if the instance has been deleted I could:

gnocchi resource delete instance id
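
A sketch tying those two commands together (it assumes admin credentials
are sourced and that, for ceilometer-created resources of type instance,
the gnocchi resource id equals the instance uuid):

# Delete gnocchi instance resources whose backing server no longer exists.
for r in $(gnocchi resource list --type instance -c id -f value); do
    openstack server show "$r" >/dev/null 2>&1 || gnocchi resource delete "$r"
done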


Does the above procedure remove data both from the database and from the
/var/lib/gnocchi directory ?


Any suggestion ?

Thanks and Regards
Ignazio







2018-05-15 3:33 GMT+02:00 gordon chung <g...@live.ca>:

>
>
> On 2018-05-14 10:16 AM, Ignazio Cassano wrote:
> > Hi everyone,
> > I am using ocata on centos 7 with ceilometer and gnocchi.
> > The gnocchi backend is nfs and I would like to know if it is possible
> > remove old data on the backend file system.
> > Some directories on the backend are 6 months old.
> >
> > Please, any suggestion ?
>
> there isn't a way to do this without manually modifying the files (which
> can get a bit sketchy). Gnocchi is designed to capture (at most) the
> amount of data you define in your policy and does not prune data based
> on 'now' so it won't shrink on its own over time.
>
> i guess we could support a tool that could prune based on 'now' but that
> doesn't exist currently.
>
> the safest way to clean old data is to delete the metric. if you want to
> delete only some of the metric that will be difficult. it'll require you
> figuring out what time range of data is stored in a given file (which is
> not difficult if you look at code), then properly deserialising,
> pruning, and reserialising (probably difficult or at least annoying).
>
> cheers,
>
> --
> gord
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] ocata gnocchi file system : erasing old data

2018-05-14 Thread Ignazio Cassano
Hi everyone,
I am using ocata on centos 7 with ceilometer and gnocchi.
The gnocchi backend is nfs and I would like to know if it is possible
remove old data on the backend file system.
Some directories on the backend are 6 months old.

Please, any suggestion ?

Thanks and Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] ocata octavia apmhora ssl error

2018-05-11 Thread Ignazio Cassano
Hi everyone,
I am trying to configure octavia lbaas on ocata centos 7.

When I create the load balancer a vm is created from the amphora image but
the worker log reports:

2018-05-11 17:38:56.013 125607 WARNING
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to
instance. Retrying.
2018-05-11 17:39:01.016 125607 WARNING
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to
instance. Retrying.


I think it is trying to connect to the amphora instance on port 9443, but
in the amphora instance /var/log/amphora-agent.log file the following is
reported:

[2018-05-11 15:38:45 +0000] [900] [DEBUG] Failed to send error message.
[2018-05-11 15:38:50 +0000] [900] [DEBUG] Error processing SSL request.
[2018-05-11 15:38:50 +0000] [900] [DEBUG] Invalid request from ip=:::10.138.176.96: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake failure (_ssl.c:1977)
[2018-05-11 15:38:50 +0000] [900] [DEBUG] Failed to send error message.
[2018-05-11 15:38:55 +0000] [900] [DEBUG] Error processing SSL request.
[2018-05-11 15:38:55 +0000] [900] [DEBUG] Invalid request from ip=:::10.138.176.96: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake failure (_ssl.c:1977)
[2018-05-11 15:38:55 +0000] [900] [DEBUG] Failed to send error message.
[2018-05-11 15:39:00 +0000] [900] [DEBUG] Error processing SSL request.
[2018-05-11 15:39:00 +0000] [900] [DEBUG] Invalid request from ip=:::10.138.176.96: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake failure (_ssl.c:1977)
[2018-05-11 15:39:00 +0000] [900] [DEBUG] Failed to send error message.

10.138.176.96 is the address of my controller worker.

Security groups allow any protocol and any port,
and there aren't connection problems between the networks.
Probably there are some errors in the certificate creation.
Can anyone help me, please?

Is it possible to disable ssl for testing ?
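
One way to reproduce the handshake outside Octavia, as a sketch (the IP is
a placeholder for the amphora's management address, and the certificate
paths are the common defaults from octavia.conf [haproxy_amphora]):

AMPHORA_IP=192.0.2.10   # placeholder
# client.pem normally bundles the client cert and key; a handshake failure
# here points at mismatched CA/client certificates.
openssl s_client -connect "$AMPHORA_IP:9443" \
    -cert /etc/octavia/certs/client.pem \
    -key /etc/octavia/certs/client.pem

As far as I know the amphora agent always speaks TLS, so rather than
disabling ssl the usual fix is to regenerate the whole certificate set and
rebuild the amphorae.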

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] octavia amphora instances on ocata

2018-05-11 Thread Ignazio Cassano
Hi everyone,
I installed octavia on ocata centos 7 and now when I create a load balancer
amphora instances are
automatically created but there are some problems:

1) the amphora-agent on the amphora instances is in an error state because
it needs certificates
(Must I create the amphora image with certificates on it ? Or are the
certificates copied during instance deployment ?)

2) health-manager.log reports:

Amphora 4e6d19d3-bc19-4882-aeca-4772b069c53b health message reports 0
listeners when 1 expected


Please, could anyone explain what is happening ?

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] octavia on ocata no amphora instaces are created

2018-05-11 Thread Ignazio Cassano
Hi everyone,
I installed octavia on ocata centos 7.
Load balancer, listener and pool are created and they are active but I
cannot see any amphora instance.
No errors in octavia logs.
Could anyone help me ?

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Octavia on ocata centos 7

2018-05-10 Thread Ignazio Cassano
Many thanks for your help.
Ignazio

Il Gio 10 Mag 2018 21:05 iain MacDonnell <iain.macdonn...@oracle.com> ha
scritto:

>
>
> On 05/10/2018 10:45 AM, Ignazio Cassano wrote:
> > I am moving from lbaas v2 based on haproxy driver to octavia on centos 7
> > ocata.
> [snip]
> > On the octavia server all services are active, amphora images are
> > installed, but when I try to create a load balancer:
> >
> > nuutron lbaas-loadbalancer-create --name lb1 private-subnet
> >
> > it tries to connect to 127.0.0.1:5000
>
> Google found:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1434904 =>
> https://bugzilla.redhat.com/show_bug.cgi?id=1433728
>
> Seems that you may be missing the service_auth section from
> neutron_lbaas.conf or/and octavia.conf ?
>
> I've been through the frustration of trying to get Octavia working. The
> docs are a bit iffy, and it's ... "still maturing" (from my observation).
>
> I think I did have it working with neutron_lbaasv2 at one point. My
> neutron_lbaas.conf included:
>
> [service_auth]
> auth_url = http://mykeystonehost:35357/v3
> admin_user = neutron
> admin_tenant_name = service
> admin_password = n0ttell1nU
> admin_user_domain = default
> admin_project_domain = default
> region = myregion
>
> and octavia.conf:
>
> [service_auth]
> memcached_servers = mymemcachedhost:11211
> auth_url = http://mykeystonehost:35357
> auth_type = password
> project_domain_name = default
> project_name = service
> user_domain_name = default
> username = octavia
> password = n0ttell1nU
>
>
> Not sure how correct those are, but IIRC it did basically work.
>
> I've since moved to pure Octavia on Queens, where there is no
> neutron_lbaas.
>
> GL!
>
>  ~iain
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Octavia on ocata centos 7

2018-05-10 Thread Ignazio Cassano
Hi everyone,
I am moving from lbaas v2 based on haproxy driver to octavia on centos 7
ocata.

I installed a new host with octavia following the documentation.
I removed all old load balancers, stopped lbaas agent and configured
neutron following this link:

https://docs.openstack.org/octavia/queens/contributor/guides/dev-quick-start.html


On the octavia server all services are active, amphora images are
installed, but when I try to create a load balancer:

neutron lbaas-loadbalancer-create --name lb1 private-subnet

it tries to connect to 127.0.0.1:5000

In both octavia.conf and neutron.conf the section for keystone is
correctly configured

to reach the controller address.

The old lbaas v2 based on the haproxy driver worked fine before changing
the configuration, but

it was not possible to protect lbaas addresses with security groups (this
is a very old problem) because security groups are applied only to vm
ports.

Since the Octavia load balancer is based on a vm derived from the amphora
image, I'd like to use it to improve my security.

Any suggestions for my octavia configuration, or alternatives to improve
security on lbaas ?

Thanks and Regards

Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] octavia worker on ocata

2018-05-10 Thread Ignazio Cassano
I am sorry,
I forgot to set the topic attribute in the oslo_messaging section.
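
For reference, the setting in question as a sketch (the value is the one
used in the Octavia quick-start docs; confirm it against your own
octavia.conf, and the service name may differ per distro):

# Set the worker topic and restart the worker (crudini is in EPEL).
crudini --set /etc/octavia/octavia.conf oslo_messaging topic octavia_prov
systemctl restart octavia-worker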

Regards
Ignazio

2018-05-10 11:58 GMT+02:00 Ignazio Cassano <ignaziocass...@gmail.com>:

> Hello everyone,
> I've just installed octavia on ocata.
> All octavia services are running except worker.
> It reports the following error in worker.log:
>
> 2018-05-10 11:33:27.404 121193 ERROR oslo_service.service InvalidTarget: A
> server's target must have topic and server names specified: <Target server=podto2-octavia>
> 2018-05-10 11:33:27.404 121193 ERROR oslo_service.service
>
> Could anyone help me ?
> Regards
> Ignazio
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] octavia worker on ocata

2018-05-10 Thread Ignazio Cassano
Hello everyone,
I've just installed octavia on ocata.
All octavia services are running except worker.
It reports the following error in worker.log:

2018-05-10 11:33:27.404 121193 ERROR oslo_service.service InvalidTarget: A
server's target must have topic and server names specified: <Target server=podto2-octavia>
2018-05-10 11:33:27.404 121193 ERROR oslo_service.service

Could anyone help me ?
Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] ocata /usr/bin/octavia-diskimage-create.sh -i centos fails

2018-05-09 Thread Ignazio Cassano
Hi all,
I am trying to create an octavia amphora image on ocata using the package
openstack-octavia-diskimage-create on centos 7, but it fails:

diskimage-builder fails to create the disk image - it cannot uninstall
virtualenv.
I read this is a bug and a workaround could be setting
DIB_INSTALLTYPE_pip_and_virtualenv to "package".
In this case the command reported: Command "python setup.py egg_info" failed
with error code 1 in /tmp/pip-obMDl9-build/
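
For reference, the workaround as it is usually written (note the variable
name is a single token; the value "package" tells diskimage-builder to use
distro packages instead of pip/virtualenv):

export DIB_INSTALLTYPE_pip_and_virtualenv=package
/usr/bin/octavia-diskimage-create.sh -i centos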

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] octavia on ocata

2018-05-07 Thread Ignazio Cassano
Many thanks


2018-05-07 9:17 GMT+02:00 Shake Chen <shake.c...@gmail.com>:

> in kolla ocata, Octavia works.
>
> On Mon, May 7, 2018 at 3:11 PM, Ignazio Cassano <ignaziocass...@gmail.com>
> wrote:
>
>> Hello everyone,
>> I'd like to know if anyone has tried to install octavia lbaas on the
>> ocata centos 7 release.
>> If yes, does it work ?
>>
>> Regards
>> Ignazio
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
>
> --
> Shake Chen
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] octavia on ocata

2018-05-07 Thread Ignazio Cassano
Hello everyone,
I'd like to know if anyone has tried to install octavia lbaas on the ocata
centos 7 release.
If yes, does it work ?

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Diskimage-builder lvm

2018-03-27 Thread Ignazio Cassano
Many thanks for your help
Ignazio


Il Mar 27 Mar 2018 09:52 Andreas Scheuring <scheu...@linux.vnet.ibm.com> ha
scritto:

> Hi, I recently did it like this:
>
>
> For Ubuntu: Cherry picked: https://review.openstack.org/#/c/504588/
>
>
> export DIB_BLOCK_DEVICE_CONFIG='
>   - local_loop:
>       name: image0
>       size: 3GB
>
>   - partitioning:
>       base: image0
>       label: mbr
>       partitions:
>         - name: root
>           flags: [ boot, primary ]
>           size: 100%
>
>   - lvm:
>       name: lvm
>       pvs:
>         - name: pv
>           options: ["--force"]
>           base: root
>
>       vgs:
>         - name: vg
>           base: ["pv"]
>           options: ["--force"]
>
>       lvs:
>         - name: lv_root
>           base: vg
>           size: 1800M
>
>         - name: lv_tmp
>           base: vg
>           size: 100M
>
>         - name: lv_var
>           base: vg
>           size: 500M
>
>         - name: lv_log
>           base: vg
>           size: 100M
>
>         - name: lv_audit
>           base: vg
>           size: 100M
>
>         - name: lv_home
>           base: vg
>           size: 200M
>
>   - mkfs:
>       name: root_fs
>       base: lv_root
>       label: cloudimage-root
>       type: ext4
>       mount:
>         mount_point: /
>         fstab:
>           options: "defaults"
>           fsck-passno: 1
> '
>
> # See https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1573982
> # See https://bugs.launchpad.net/diskimage-builder/+bug/1715686
> # need to specify lvm2 module to include it into the ramdisk
> disk-image-create -p lvm2 ubuntu-minimal
>
>
> # -> During boot the kernel was still not able to find the root disk
> # (identified by uuid)
> # Executing the following command in the initramfs emergency shell made
> the volume appear for me
> $ vgchange -ay
> $ blkid
> # /dev/vda1: UUID="O11hwE-0Efq-SwO7-Bko8-hVe6-jyOT-7UAiza"
> TYPE="LVM2_member" PARTUUID="7885c310-01"
> # /dev/mapper/vg-lv_root: LABEL="cloudimage-root"
> UUID="bd6d2d2c-3ca6-4032-99fc-bf20e246e22a" TYPE=“ext4”
> exit
>
>
>
> Maybe things are going more smoothly with other distros, or maybe things
> have been fixed in the meanwhile… Or I did something wrong…
>
> Good luck!
>
> ---
> Andreas Scheuring (andreas_s)
>
>
>
> On 26. Mar 2018, at 17:43, Ignazio Cassano <ignaziocass...@gmail.com>
> wrote:
>
> Hi all,
> I read the diskimage-builder documentation but it is not clear to me how I
> can supply the lvm configuration to the command.
> Some yaml files are supplied but diskimage-builder seems to expect a json
> file.
> Please, could anyone post an example?
> Regards
> Ignazio
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Diskimage-builder lvm

2018-03-26 Thread Ignazio Cassano
Hi all,
I read the diskimage-builder documentation but it is not clear to me how I
can supply the lvm configuration to the command.
Some yaml files are supplied but diskimage-builder seems to expect a json
file.
Please, could anyone post an example?
Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ocata security groups don't work with LBaaS v2 ports

2018-03-26 Thread Ignazio Cassano
Hello Saverio,
The neutron-lbaasv2-agent should apply iptables rules but it does not work.
The same issue also exists in redhat, as reported here:

https://bugzilla.redhat.com/show_bug.cgi?id=1500118

Regards

2018-03-26 9:32 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:

> Hello Ignazio,
>
> it would be interesting to know how this works. For instance ports,
> those ports are created by openvswitch on the compute nodes, where the
> neutron-agent will take care of the security groups enforcement (via
> iptables or openvswitch rules).
>
> the LBaaS is a namespace that lives where the neutron-lbaasv2-agent is
> running.
>
> The question is whether the neutron-lbaasv2-agent is capable of setting
> iptables rules. I would start by reading the code there.
>
> Cheers,
>
> Saverio
>
>
> 2018-03-23 13:51 GMT+01:00 Ignazio Cassano <ignaziocass...@gmail.com>:
> > Hi all,
> > following the ocata documentation, I am trying to apply a security group
> > to a
> > lbaas v2 port but
> > it seems not to work because no filter is applied.
> > The Port Security Enabled flag is True on the lbaas port, so I expect
> > applying a security group should work.
> > Is this a bug ?
> > Regards
> > Ignazio
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] fwaas v2

2018-03-22 Thread Ignazio Cassano
Hello all,
I am trying to use fwaas v2 on centos 7 openstack ocata.
After creating firewall rules and a policy, I am trying to create a
firewall group.
I am able to create the firewall group, but it does not work when I try to
set the ports on it.

openstack firewall group set --port
87173e27-c2b3-4a67-83d0-d8645d9f309b  prova
Failed to set firewall group 'prova': Firewall Group Port
87173e27-c2b3-4a67-83d0-d8645d9f309b is invalid
Neutron server returns request_ids:
['req-9ef8ad1e-9fad-4956-8aff-907c32d01e1f']

I tried with both router and vm ports but the result is always the same.
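
As far as I know, the Ocata fwaas v2 driver only accepts router ports (vm
port support arrived in later releases), so a sketch for picking one; the
port id is a placeholder:

# Only router interface ports are valid firewall group ports in Ocata.
openstack port list --device-owner network:router_interface -c ID -c Name
openstack firewall group set --port ROUTER_PORT_ID prova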

Could anyone help me, please ?
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] zabbix templates for openstack

2018-01-30 Thread Ignazio Cassano
Hello guys,
I am searching for zabbix templates for openstack, for monitoring both the
openstack infrastructure and openstack instances.
I read the zcp ceilometer proxy could be useful, but I'd like to know if it
works with gnocchi because I am using the ocata release.
At this time zcp seems to work with ceilometer and mongo.
Please, has anyone found a template or tool for the above purpose ?

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] ocata ceiolometer compute can not get info from libvirt: 'perf.cpu_cycles'

2017-11-15 Thread Ignazio Cassano
Hello Guys,
after upgrading libvirt to version 3.2, ceilometer on the compute nodes
gives some errors, for example:

can not get info from libvirt: 'perf.cpu_cycles'


Must I modify some configuration on ceilometer or libvirt ?
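
One thing worth checking is whether libvirt and the kernel actually expose
those perf events for the guests; a sketch with a placeholder domain name
(virsh 3.x has a perf subcommand):

# Show and, if needed, enable the perf events ceilometer polls.
virsh perf instance-00000001
virsh perf instance-00000001 --enable cpu_cycles --live

If the events cannot be enabled, dropping the perf.* meters from
ceilometer's polling configuration should silence the errors.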

After the upgrade, gnocchi does not show measures for the instances.

Please, help me
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Centos Ocata fwaas v2 not working

2017-11-03 Thread Ignazio Cassano
Hi guys,
I installed the fwaas module on ocata on centos 7.3, but after enabling
version 2 some
errors are reported in l3-agent.log:

Could not load
neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas_v2.IptablesFwaasDriver

and then a lot of errors:

2017-11-03 10:36:59.292 12576 ERROR
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent
Traceback (most recent call last):
2017-11-03 10:36:59.292 12576 ERROR
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent   File
"/usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent.py",
line 220, in add_router
2017-11-03 10:36:59.292 12576 ERROR
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent
self._process_router_add(new_router)
2017-11-03 10:36:59.292 12576 ERROR
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent   File
"/usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent.py",
line 195, in _process_router_add
2017-11-03 10:36:59.292 12576 ERROR
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent
fw_list = self.fwplugin_rpc.get_firewalls_for_tenant(ctx)
2017-11-03 10:36:59.292 12576 ERROR
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent   File
"/usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent.py",
line 46, in get_firewalls_for_tenant
2017-11-03 10:36:59.292 12576 ERROR
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent
return cctxt.call(context, 'get_firewalls_for_tenant', host=self.host)
2017-11-03 10:36:59.292 12576 ERROR
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent   File
"/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 151, in call
2017-11-03 10:36:59.292 12576 ERROR
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent
return self._original_context.call(ctxt, method, **kwargs)


Please, could anyone help me in this step ?

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] cannot attach new volume to an instance

2017-07-27 Thread Ignazio Cassano
The instance boots from a volume.
Attaching a second volume fails.
Someone suggested using hw_disk_bus=virtio and it works.
Regards

Il 27/Lug/2017 05:33 PM, "Chris Friesen" <chris.frie...@windriver.com> ha
scritto:

> On 07/27/2017 08:44 AM, Ignazio Cassano wrote:
>
>> Hello All,
>> instances created from images with the following metadata:
>>
>> hw_disk_bus=scsi
>> hw_scsi_model=virtio-scsi
>>
>> do not allow attaching a new volume.
>>
>> Deleting the above metadata in the image and creating a new instance, it
>> allows
>> new volume attachment.
>>
>> I got the same behaviour on all openstack releases from liberty to ocata.
>> Regards
>> Ignazio
>>
>
> How does it fail?  Does it behave differently if you try to boot an
> instance from such a volume?  (As opposed to attaching a volume to a
> running instance.)
>
> Chris
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] cannot attach new volume to an instance

2017-07-27 Thread Ignazio Cassano
Hello All,
instances created from images with the following metadata:

hw_disk_bus=scsi
hw_scsi_model=virtio-scsi

do not allow attaching a new volume.

Deleting the above metadata in the image and creating a new instance, it
allows new volume attachment.

I got the same behaviour on all openstack releases from liberty to ocata.
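
In case it helps others reproduce this, the properties can be set or removed
with the standard client ('myimage' is just an example name):

openstack image set --property hw_disk_bus=scsi --property hw_scsi_model=virtio-scsi myimage
openstack image unset --property hw_disk_bus --property hw_scsi_model myimage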
Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Tricircle

2017-07-11 Thread Ignazio Cassano
We need either multicloud based on openstack, or DR.
Ignazio

On 11/Jul/2017 09:16 PM, "Curtis" <serverasc...@gmail.com> wrote:

> On Tue, Jul 11, 2017 at 1:13 PM, Curtis <serverasc...@gmail.com> wrote:
> > On Tue, Jul 11, 2017 at 12:43 PM, Ignazio Cassano
> > <ignaziocass...@gmail.com> wrote:
> >> Many thanks.
> >> As far as networking is concerned in a multipod environment, tricircle
> is
> >> the official project that openstack will support?
> >
> > It's one project. :)
> >
> > I believe there were also some sessions at the last summit regarding
> > "neutron multi site" in that the neutron project is looking at ways to
> > accomplish this as well. I didn't attend any, but if you google that
> > you'll probably find some etherpads.
> >
> > Potentially there would be solutions available via SDN controllers,
> > and each solution would be different.
> >
> >> Cinder,  for example, is another element to consider: ceph or other
> storage
> >> replication solutions are
> >> not enough for moving an instance from one pod to another because they have
> their
> >> own cinder db.
> >> Probably in this case we should export the volume from a pod and import
> to
> >> another; this could take a long time.
> >
> > Yeah I'm not sure what has happened in that area either, but I'm
> > fairly sure that Ceph can backup to another Ceph cluster and it would
> > be interesting if OpenStack could somehow be made aware of that, but
> > of course you have all kinds of other issues in terms of IP migration
> > etc. I would imagine many people would suggest having applications
> > that can exist across multiple clouds as opposed to doing some kind of
> > "openstack aware disaster recovery" though. It would be superb if
> > there was an OpenStack way of managing that kind of
> > deployment...perhaps there is and I'm just not aware of it. :)
>
> Sorry, just to clarify, by "that kind of deployment" I mean
> multi-cloud applications not openstack aware disaster recovery. :)
>
> >
> > Thanks,
> > Curtis.
> >
> >> A Heat stack is more than a single vm and it could require an important
> effort
> >> for moving.
> >> Could a stretched cluster be a solution in case of well connected pods ?
> >> Regards
> >> Ignazio
> >>
> >>
> >> On 11/Jul/2017 08:10 PM, "Curtis" <serverasc...@gmail.com> wrote:
> >>>
> >>> On Tue, Jul 11, 2017 at 10:47 AM, Ignazio Cassano
> >>> <ignaziocass...@gmail.com> wrote:
> >>> > I would like both: networking is important, but a heat multi-pod
> >>> > orchestrator could be fantastic.
> >>>
> >>> So like a higher level system that can manage multiple clouds via
> >>> their heat API?
> >>>
> >>> I'm only familiar with some work in the NFV area around MANO
> >>> (management and orchestration). There are several systems that can
> >>> manage multiple clouds, some using heat and others using the standard
> >>> APIs. One OpenStack related example would be the Tacker system.
> >>>
> >>> I think a higher level heat system that could manage other heat
> >>> systems would be interesting. I see some mention of heat multicloud
> >>> but I'm not sure where that ended up. I should look into that...
> >>>
> >>> Thanks,
> >>> Curtis.
> >>>
> >>>
> >>> > Regards
> >>> > Ignazio
> >>> >
> >>> > On 11/Jul/2017 06:20 PM, "Curtis" <serverasc...@gmail.com> wrote:
> >>> >>
> >>> >> On Tue, Jul 11, 2017 at 9:54 AM, Ignazio Cassano
> >>> >> <ignaziocass...@gmail.com> wrote:
> >>> >> > Hi openstackers,
> >>> >> > is anyone using tricircle in a production environment?
> >>> >> > Any alternative for a multi-pod openstack like tricircle?
> >>> >>
> >>> >> What is it you want to do?
> >>> >>
> >>> >> AFAIK tricircle has pivoted recently to accomplish networking across
> >>> >> multi-region openstack deployments.
> >>> >>
> >>> >> Are you mostly looking for networking across clouds or are you
> looking
> >>> >> to tie a bunch of clouds together with some higher level
> abstraction?
> >>> >> Or both. :)
> >>> >>
> >>> >> Thanks,
> >>> >> Curtis.
> >>> >>
> >>> >> > Regards
> >>> >> > Ignazio
> >>> >> >
> >>> >> >
> >>> >> > ___
> >>> >> > OpenStack-operators mailing list
> >>> >> > OpenStack-operators@lists.openstack.org
> >>> >> >
> >>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack-operators
> >>> >> >
> >>> >>
> >>> >>
> >>> >>
> >>> >> --
> >>> >> Blog: serverascode.com
> >>>
> >>>
> >>>
> >>> --
> >>> Blog: serverascode.com
> >
> >
> >
> > --
> > Blog: serverascode.com
>
>
>
> --
> Blog: serverascode.com
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Fwd: Tricircle

2017-07-11 Thread Ignazio Cassano
-- Messaggio inoltrato --
Da: "Ignazio Cassano" <ignaziocass...@gmail.com>
Data: 11/Lug/2017 05:54 PM
Oggetto: Tricircle
A: "OpenStack Operators" <openstack-operators@lists.openstack.org>
Cc:

Hi openstackers,
is anyone using tricircle in a production environment?
Any alternative for a multi-pod openstack like tricircle?
Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Tricircle

2017-07-11 Thread Ignazio Cassano
Hi openstackers,
is anyone using tricircle in a production environment?
Any alternative for a multi-pod openstack like tricircle?
Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ocata diskimage-builder heat issues

2017-06-29 Thread Ignazio Cassano
Hello Amit, tomorrow I'll try with trusty. Centos7 is working.
Some questions:
does your instance created by heat with SoftwareDeployment have os-collect-config
running?
If yes, launching os-refresh-config and going under /var/lib/heat-config/ on
your instance, you should see the deployment scripts.

Regards
Ignazio


2017-06-29 13:47 GMT+02:00 Amit Kumar <ebiib...@gmail.com>:

> Hi,
>
> I tried to create an Ubuntu Trusty image using diskimage-builder tag 1.28.0;
> dib-run-parts got included in the VM, so os-refresh-config should have worked,
> but SoftwareDeployment still didn't work with the cloud image.
>
> Regards,
> Amit
>
>
> On Jun 29, 2017 5:08 PM, "Matteo Panella" <matteo.pane...@cnaf.infn.it>
> wrote:
>
> On 29/06/2017 12:11, Ignazio Cassano wrote:
> > Hello all,
> > the new version of diskimage-builder (I am testing for centos 7) does
> > not install dib-utils and jq in te image.
> > The above are required by os-refresh-config .
>
> Yup, I reverted diskimage-builder to 2.2.0 (the last tag before
> dib-run-parts was not injected anymore) and os-refresh-config works
> correctly.
>
> os-refresh-config should probably be modified to depend on
> dib-run-parts, however:
> a) dib-run-parts provides package-style installation with an
> RPM-specific package name
> b) the package does not exist for Ubuntu 14.04 and there are no
> source-style installation scripts
>
> Regards,
> --
> Matteo Panella
> INFN CNAF
> Via Ranzani 13/2 c - 40127 Bologna, Italy
> Phone: +39 051 609 2903 <+39%20051%20609%202903>
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ocata diskimage-builder heat issues

2017-06-29 Thread Ignazio Cassano
Hello all,
the new version of diskimage-builder (I am testing for centos 7) does not
install dib-utils and jq in the image.
The above are required by os-refresh-config.
If os-refresh-config misbehaves, heat SoftwareDeployment does not work.
Ignazio
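
For the record, a rebuild sketch that pins diskimage-builder to 2.2.0 (the
last tag reported to work elsewhere in this thread). The element names are
the usual software-config ones; the ELEMENTS_PATH entries are assumptions
that depend on where the packages put the elements:

pip install 'diskimage-builder==2.2.0' dib-utils
export ELEMENTS_PATH=/usr/share/tripleo-image-elements:/usr/share/openstack-heat-templates/hot/software-config/elements
disk-image-create centos7 vm os-collect-config os-refresh-config os-apply-config \
  heat-config heat-config-script -o centos7-software-config.qcow2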

2017-06-29 10:34 GMT+02:00 Ignazio Cassano <ignaziocass...@gmail.com>:

> Hello Amit,
> I found and am trying what is suggested here:
>
> https://bugs.launchpad.net/heat/+bug/1691964
>
> I would like to underline again that images created 3 months ago work
> fine, so the problem could be diskimage-builder.
> I think support for diskimage-builder can be obtained on the openstack
> developer mailing list, and you could try to write to them, so they will read
> my and your email.
> Regards
> Ignazio
>
> 2017-06-28 19:39 GMT+02:00 Amit Kumar <ebiib...@gmail.com>:
>
>> Hi,
>>
>> Probably the same problem I am facing. You replied to one of my e-mails
>> regarding SoftwareDeployment as well. Let me know if you find any working
>> version of diskimage-builder.
>>
>> Regards,
>> Amit
>>
>> On Jun 28, 2017 10:26 PM, "Ignazio Cassano" <ignaziocass...@gmail.com>
>> wrote:
>>
Hi All, last March I used the diskimage-builder tool to generate a centos7
image with tools for heat software deployment. That image worked fine with
heat software deployment in newton and works today with ocata.
After upgrading diskimage-builder and recreating the image, heat software
deployment does not work because of some errors in os-refresh-config.
I do not remember what version of diskimage-builder I used in March. At
this time I am trying with 2.6.1 and 2.6.2 with the same issues.
Does anyone know what has changed ?
>> Regards
>> Ignazio
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ocata diskimage-builder heat issues

2017-06-29 Thread Ignazio Cassano
Hello Amit,
I found and am trying what is suggested here:

https://bugs.launchpad.net/heat/+bug/1691964

I would like to underline again that images created 3 months ago work
fine, so the problem could be diskimage-builder.
I think support for diskimage-builder can be obtained on the openstack
developer mailing list, and you could try to write to them, so they will read
my and your email.
Regards
Ignazio

2017-06-28 19:39 GMT+02:00 Amit Kumar <ebiib...@gmail.com>:

> Hi,
>
> Probably the same problem I am facing. You replied to one of my e-mails
> regarding SoftwareDeployment as well. Let me know if you find any working
> version of diskimage-builder.
>
> Regards,
> Amit
>
> On Jun 28, 2017 10:26 PM, "Ignazio Cassano" <ignaziocass...@gmail.com>
> wrote:
>
> Hi All, last March I used the diskimage-builder tool to generate a centos7
> image with tools for heat software deployment. That image worked fine with
> heat software deployment in newton and works today with ocata.
> After upgrading diskimage-builder and recreating the image, heat software
> deployment does not work because of some errors in os-refresh-config.
> I do not remember what version of diskimage-builder I used in March. At
> this time I am trying with 2.6.1 and 2.6.2 with the same issues.
> Does anyone know what has changed ?
> Regards
> Ignazio
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ocata diskimage-builder heat issues

2017-06-28 Thread Ignazio Cassano
Yes, I remember your request and I am grateful, because it made me test
software deployment again.
As I wrote, an image I built 3 months ago works fine and os-refresh-config
does not give errors. After updating diskimage-builder, new images (centos and
ubuntu) do not work.
I think the problem is not related to the openstack configuration, because the
old image works fine with heat SoftwareConfig and SoftwareDeployment.
I also wrote an email to the developers mailing list and I am waiting for some
help.
Regards
Ignazio

On 28/Jun/2017 19:39, "Amit Kumar" <ebiib...@gmail.com> wrote:

> Hi,
>
> Probably the same problem I am facing. You replied to one of my e-mails
> regarding SoftwareDeployment as well. Let me know if you find any working
> version of diskimage-builder.
>
> Regards,
> Amit
>
> On Jun 28, 2017 10:26 PM, "Ignazio Cassano" <ignaziocass...@gmail.com>
> wrote:
>
> Hi All, last March I used the diskimage-builder tool to generate a centos7
> image with tools for heat software deployment. That image worked fine with
> heat software deployment in newton and works today with ocata.
> After upgrading diskimage-builder and recreating the image, heat software
> deployment does not work because of some errors in os-refresh-config.
> I do not remember what version of diskimage-builder I used in March. At
> this time I am trying with 2.6.1 and 2.6.2 with the same issues.
> Does anyone know what has changed ?
> Regards
> Ignazio
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ocata diskimage-builder heat issues

2017-06-28 Thread Ignazio Cassano
Hi All, last March I used the diskimage-builder tool to generate a centos7
image with tools for heat software deployment. That image worked fine with
heat software deployment in newton and works today with ocata.
After upgrading diskimage-builder and recreating the image, heat software
deployment does not work because of some errors in os-refresh-config.
I do not remember what version of diskimage-builder I used in March. At
this time I am trying with 2.6.1 and 2.6.2 with the same issues.
Does anyone know what has changed ?
Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ocata instance snapshot

2017-06-23 Thread Ignazio Cassano
Hi All,
using the ocata dashboard I am trying to create instance snapshots.
If the instance has an attached volume, the created image is empty.
If the instance does not have an attached volume, the created image is not
empty.
Is this by design, or must I change my configuration ?
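
For what it's worth, for volume-backed instances nova normally creates a
zero-byte glance image whose block_device_mapping points at a cinder volume
snapshot, so the image looking empty can be expected. A quick check
('snap-image' is a placeholder name):

openstack image show snap-image -c properties
openstack volume snapshot list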

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] trove ocata error

2017-06-19 Thread Ignazio Cassano
Hello All,
I've just installed ocata with trove and I found the same issue shown at
the following link:

https://bugs.launchpad.net/openstack-ansible/+bug/1672452


The issue seems the same: in my log I found:

2017-06-19 21:07:33.959 39029 ERROR oslo_service.periodic_task
ConnectFailure: Unable to establish connection to
http://0.0.0.0:5000/v2.0/tokens

I am sure the above url is not specified in my configuration; it specifies
the following:

trove_auth_url =  http://10.102.184.83:5000/v2.0
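
One thing that may be worth checking (an assumption based on that bug report:
the 0.0.0.0 url usually comes from an auth option left at its default in
another section) is that every auth-related section carries explicit URLs,
e.g.:

[keystone_authtoken]
auth_uri = http://10.102.184.83:5000
auth_url = http://10.102.184.83:35357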

Has anyone solved this problem ?
Thanks & Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ocata manila netapp ontap

2017-06-17 Thread Ignazio Cassano
Hi All,
I solved it by creating /var/lock/manila on the controllers.
It seems ocata does not create it in the installation phase.
Regards
Ignazio
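
If anyone hits the same thing, a sketch of a persistent fix (since /var/lock
is typically a tmpfs, the directory vanishes on reboot; owner and mode here
are assumptions):

mkdir -p /var/lock/manila
chown manila:manila /var/lock/manila
echo 'd /var/lock/manila 0755 manila manila -' > /etc/tmpfiles.d/manila.conf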

2017-06-17 10:21 GMT+02:00 Ignazio Cassano <ignaziocass...@gmail.com>:

> Hi  All,
> I configured manila with netapp ontap driver and I am able to create
> shares, but when I apply an access ip rule it remains in the queued to apply state
> and I cannot remove the share.
> Similar configuration works fine on newton.
> What could be wrong ?
> Thanks & Regards
> Ignazio
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ocata ceilometer-upgrade.log shows Unauthorized (HTTP 401)

2017-06-14 Thread Ignazio Cassano
Many thanks, Gordon 
It seems to work adding OS_AUTH_TYPE=password
I a going to modify glance, compute etc etc to verify if it collects
information
Regards
Ignazio
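
For reference, the resulting admin-openrc would then look something like this
(a sketch following the install guide; values are placeholders):

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_TYPE=password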

2017-06-14 15:06 GMT+02:00 gordon chung <g...@live.ca>:

>
>
> On 14/06/17 05:15 AM, Ignazio Cassano wrote:
> >
> > source admin-openrc
> >
> > gnocchi metric list
> >
> > The request you have made requires authentication. (HTTP 401)
>
> i have no idea what admin-openrc sets, i'm guessing this?
> https://docs.openstack.org/ocata/install-guide-rdo/keystone-openrc.html
>
> the only thing i notice missing is it should have:
>
> export OS_AUTH_TYPE=password
>
> please read docs here:
> http://gnocchi.xyz/gnocchiclient/shell.html?highlight=os_auth_type
>
> cheers,
>
> --
> gord
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ocata ceilometer-upgrade.log shows Unauthorized (HTTP 401)

2017-06-14 Thread Ignazio Cassano
Hi Gordon, I followed the documentation https://docs.openstack.org/
project-install-guide/telemetry/ocata/install-base-rdo.html
and added auth_mode = keystone
Now ceilometer-upgrade works, but the following command reports an error:

source admin-openrc

gnocchi metric list

The request you have made requires authentication. (HTTP 401)


:-(
Ignazio


2017-06-13 21:39 GMT+02:00 gordon chung <g...@live.ca>:

>
>
> On 13/06/17 01:17 PM, Ignazio Cassano wrote:
> > Hi Gordon, I had no enough time for trying with devstack.
>
> you gave up after a day?
>
> how about paste your ceilometer.conf and gnocchi.conf. did you follow
> the steps here?
> https://docs.openstack.org/project-install-guide/
> telemetry/ocata/install-base-rdo.html
>
> the only step i see missing is that it doesn't tell you to set auth_mode
> = keystone in gnocchi.conf (if you're using gnocchi3.1+)
>
> cheers,
> --
> gord
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ocata ceilometer-upgrade.log shows Unauthorized (HTTP 401)

2017-06-13 Thread Ignazio Cassano
Hi Gordon, I did not have enough time to try with devstack.
In any case I tried to create an environment file ceil-openrc with the same
credentials stored in ceilometer.conf under the service_credentials section.
Doing:
source ceil-openrc
openstack user list

It shows the users correctly, so I think the credentials are correct, but
ceilometer-upgrade reports unauthorized in the log file.
Searching on google I found an unanswered question about the same issue:
https://ask.openstack.org/en/question/105833/ceilometer-upgradelog-shows-unauthorized-http-401/

I would be grateful if anyone could send me a configuration example that is
working.
Regards
Ignazio


On 12/Jun/2017 17:13, "gordon chung" <g...@live.ca> wrote:

>
>
> On 12/06/17 10:30 AM, Ignazio Cassano wrote:
> > 2017-06-12 15:43:09.594 35526 CRITICAL ceilometer [-] Unauthorized:
> Unauthorized (HTTP 401)
>
> the credentials you're using aren't valid.
>
> my advice is to launch devstack with gnocchi and ceilometer enabled.
> from there, take a look at the configuration settings for both which
> should help you understand how to connect the two services.
>
> cheers,
>
> --
> gord
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ocata ceilometer-upgrade.log shows Unauthorized (HTTP 401)

2017-06-12 Thread Ignazio Cassano
I'll try with your suggestion.
thx
Ignazio

2017-06-12 17:10 GMT+02:00 gordon chung <g...@live.ca>:

>
>
> On 12/06/17 10:30 AM, Ignazio Cassano wrote:
> > 2017-06-12 15:43:09.594 35526 CRITICAL ceilometer [-] Unauthorized:
> Unauthorized (HTTP 401)
>
> the credentials you're using aren't valid.
>
> my advice is to launch devstack with gnocchi and ceilometer enabled.
> from there, take a look at the configuration settings for both which
> should help you understand how to connect the two services.
>
> cheers,
>
> --
> gord
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] ocata ceilometer-upgrade.log shows Unauthorized (HTTP 401)

2017-06-12 Thread Ignazio Cassano
Hi All,
installing ocata on centos 7.3 with gnocchi and ceilometer, when I run:

ceilometer-upgrade --skip-metering-database

it exits with status = 1 and in the file
/var/log/ceilometer/ceilometer-upgrade.log

I found the following lines:

2017-06-12 15:43:08.982 35526 WARNING
oslo_reports.guru_meditation_report [-] Guru meditation now registers
SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1
will no longer be registered in a future release, so please use
SIGUSR2 to generate reports.
2017-06-12 15:43:08.982 35526 INFO ceilometer.cmd.storage [-] Skipping
metering database upgrade
2017-06-12 15:43:09.594 35526 CRITICAL ceilometer [-] Unauthorized:
Unauthorized (HTTP 401)
2017-06-12 15:43:09.594 35526 ERROR ceilometer Traceback (most recent
call last):
2017-06-12 15:43:09.594 35526 ERROR ceilometer   File
"/usr/bin/ceilometer-upgrade", line 10, in 
2017-06-12 15:43:09.594 35526 ERROR ceilometer sys.exit(upgrade())
2017-06-12 15:43:09.594 35526 ERROR ceilometer   File
"/usr/lib/python2.7/site-packages/ceilometer/cmd/storage.py", line 53,
in upgrade
2017-06-12 15:43:09.594 35526 ERROR ceilometer
gnocchi_client.upgrade_resource_types(conf)
2017-06-12 15:43:09.594 35526 ERROR ceilometer   File
"/usr/lib/python2.7/site-packages/ceilometer/gnocchi_client.py", line
116, in upgrade_resource_types
2017-06-12 15:43:09.594 35526 ERROR ceilometer
gnocchi.resource_type.create(resource_type=rt)
2017-06-12 15:43:09.594 35526 ERROR ceilometer   File
"/usr/lib/python2.7/site-packages/gnocchiclient/v1/resource_type.py",
line 35, in create


Please, has anyone solved this issue yet ?

Regards

Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] gnocchi api bind address and port on ocata centos 7

2017-06-12 Thread Ignazio Cassano
Hello All,
I am trying to configure ocata with gnocchi on a cluster environment on
centos 7 controllers.
If I do not modify /usr/lib/systemd/system/openstack-gnocchi-api.service,
gnocchi-api tries to bind on port 8000.
If I want it to bind on port 8041 I must modify
/usr/lib/systemd/system/openstack-gnocchi-api.service as follows:
[Unit]
Description=OpenStack ceilometer API service
After=syslog.target network.target

[Service]
Type=simple
User=gnocchi
ExecStart=/usr/bin/gnocchi-api --port 8041 -- --logfile
/var/log/gnocchi/api.log
Restart=on-failure

[Install]
WantedBy=multi-user.target

In any case it listens on all server interfaces, while I want it only on one
interface (the internal api or management interface).

I tried to modify /etc/gnocchi/gnocchi.conf as follows:

[api]
port = 8041
host = myadress

but it does not work.
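
A possible alternative, assuming gnocchi ships its WSGI entry point at
gnocchi/rest/app.wsgi (worth verifying on the installed version), is to run
gnocchi-api under httpd instead of the systemd unit, so it binds only to the
internal API address:

Listen 10.102.184.70:8041
<VirtualHost 10.102.184.70:8041>
  WSGIDaemonProcess gnocchi-api processes=2 threads=4 user=gnocchi group=gnocchi
  WSGIProcessGroup gnocchi-api
  WSGIScriptAlias / /usr/lib/python2.7/site-packages/gnocchi/rest/app.wsgi
  WSGIApplicationGroup %{GLOBAL}
  ErrorLog /var/log/gnocchi/api-error.log
</VirtualHost>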

Could anyone help me, please ?
Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ocata centos 7 placement api not working

2017-06-07 Thread Ignazio Cassano
Hello, I solved the problem.
The haproxy section that balances the placement api must contain the following directive:
listen nova_compute_placemente_cluster
  bind 10.102.184.83:8778
  http-request del-header X-Forwarded-Proto
  server tst-controller-01 10.102.184.70:8778 check fall 5 inter 2000 rise 2
  server tst-controller-02 10.102.184.71:8778 check fall 5 inter 2000 rise 2
  server tst-controller-03 10.102.184.72:8778 check fall 5 inter 2000 rise 2

(del-header X-Forwarded-Proto is what solves it)
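
A quick sanity check from any controller is a plain request against the vip
(the root of the placement API returns its version document without
authentication):

curl -i http://10.102.184.83:8778/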

Regards
Ignazio

2017-06-07 15:00 GMT+02:00 Curtis <serverasc...@gmail.com>:

> On Wed, Jun 7, 2017 at 6:45 AM, Ignazio Cassano
> <ignaziocass...@gmail.com> wrote:
> > Hello All,
> > I just installed ocata on centos 7 and, verifying the nova installation,
> > I ran the command:
> >
> > nova-status -d upgrade check
> >
> >
> > It returns:
> >
> > Error:
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 456,
> in
> > main
> > ret = fn(*fn_args, **fn_kwargs)
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 386,
> in
> > check
> > result = func(self)
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 201,
> in
> > _check_placement
> > versions = self._placement_get("/")
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 189,
> in
> > _placement_get
> > return client.get(path, endpoint_filter=ks_filter).json()
> >   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> > 758, in get
> > return self.request(url, 'GET', **kwargs)
> >   File "/usr/lib/python2.7/site-packages/positional/__init__.py", line
> 101,
> > in inner
> > return wrapped(*args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> > 616, in request
> > resp = send(**kwargs)
> >   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> > 690, in _send_request
> > raise exceptions.ConnectFailure(msg)
> > ConnectFailure: Unable to establish connection to
> > http://10.102.184.83:8778/: ('Connection aborted.',
> BadStatusLine("''",))
> >
> >
> >
> > The following is my /etc/httpd/conf.d/00-nova-placement-api.conf:
> >
> > Listen 10.102.184.70:8778
> >
> > <VirtualHost *:8778>
> >   WSGIProcessGroup nova-placement-api
> >   WSGIApplicationGroup %{GLOBAL}
> >   WSGIPassAuthorization On
> >   WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova
> > group=nova
> >   WSGIScriptAlias / /usr/bin/nova-placement-api
> >   <IfVersion >= 2.4>
> >     ErrorLogFormat "%M"
> >   </IfVersion>
> >   ErrorLog /var/log/nova/nova-placement-api.log
> >   #SSLEngine On
> >   #SSLCertificateFile ...
> >   #SSLCertificateKeyFile ...
> >   <Directory /usr/bin>
> >     <IfVersion >= 2.4>
> >       Require all granted
> >     </IfVersion>
> >     <IfVersion < 2.4>
> >       Order allow,deny
> >       Allow from all
> >     </IfVersion>
> >   </Directory>
> > </VirtualHost>
> >
> > Alias /nova-placement-api /usr/bin/nova-placement-api
> > <Location /nova-placement-api>
> >   SetHandler wsgi-script
> >   Options +ExecCGI
> >   WSGIProcessGroup nova-placement-api
> >   WSGIApplicationGroup %{GLOBAL}
> >   WSGIPassAuthorization On
> > </Location>
> >
> >
> >
> > Could anyone help me to solve this problem ?
>
> The IPs above are different. Presumably there is some kind of haproxy
> or other loadbalancer on 10.102.184.83 that forwards to the actual
> service and it's configured and working? Also, is the placement
> endpoint in the keystone catalog...where does it point to?
>
> Just some thoughts, thanks,
> Curtis.
>
> >
> > Regards
> > Ignazio
> >
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
>
>
> --
> Blog: serverascode.com
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ocata centos 7 placement api not working

2017-06-07 Thread Ignazio Cassano
Hello again Curtis, do you need my haproxy configuration ?
Do you think the placement api is not supported in this scenario ?
Regards
Ignazio

2017-06-07 15:00 GMT+02:00 Curtis <serverasc...@gmail.com>:

> On Wed, Jun 7, 2017 at 6:45 AM, Ignazio Cassano
> <ignaziocass...@gmail.com> wrote:
> > Hello All,
> > I just installed ocata on centos 7 and, verifying the nova installation,
> > I ran the command:
> >
> > nova-status -d upgrade check
> >
> >
> > It returns:
> >
> > Error:
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 456,
> in
> > main
> > ret = fn(*fn_args, **fn_kwargs)
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 386,
> in
> > check
> > result = func(self)
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 201,
> in
> > _check_placement
> > versions = self._placement_get("/")
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 189,
> in
> > _placement_get
> > return client.get(path, endpoint_filter=ks_filter).json()
> >   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> > 758, in get
> > return self.request(url, 'GET', **kwargs)
> >   File "/usr/lib/python2.7/site-packages/positional/__init__.py", line
> 101,
> > in inner
> > return wrapped(*args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> > 616, in request
> > resp = send(**kwargs)
> >   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> > 690, in _send_request
> > raise exceptions.ConnectFailure(msg)
> > ConnectFailure: Unable to establish connection to
> > http://10.102.184.83:8778/: ('Connection aborted.',
> BadStatusLine("''",))
> >
> >
> >
> > The following is my /etc/httpd/conf.d/00-nova-placement-api.conf:
> >
> > Listen 10.102.184.70:8778
> >
> > <VirtualHost *:8778>
> >   WSGIProcessGroup nova-placement-api
> >   WSGIApplicationGroup %{GLOBAL}
> >   WSGIPassAuthorization On
> >   WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova
> > group=nova
> >   WSGIScriptAlias / /usr/bin/nova-placement-api
> >   <IfVersion >= 2.4>
> >     ErrorLogFormat "%M"
> >   </IfVersion>
> >   ErrorLog /var/log/nova/nova-placement-api.log
> >   #SSLEngine On
> >   #SSLCertificateFile ...
> >   #SSLCertificateKeyFile ...
> >   <Directory /usr/bin>
> >     <IfVersion >= 2.4>
> >       Require all granted
> >     </IfVersion>
> >     <IfVersion < 2.4>
> >       Order allow,deny
> >       Allow from all
> >     </IfVersion>
> >   </Directory>
> > </VirtualHost>
> >
> > Alias /nova-placement-api /usr/bin/nova-placement-api
> > <Location /nova-placement-api>
> >   SetHandler wsgi-script
> >   Options +ExecCGI
> >   WSGIProcessGroup nova-placement-api
> >   WSGIApplicationGroup %{GLOBAL}
> >   WSGIPassAuthorization On
> > </Location>
> >
> >
> >
> > Could anyone help me to solve this problem ?
>
> The IPs above are different. Presumably there is some kind of haproxy
> or other loadbalancer on 10.102.184.83 that forwards to the actual
> service and it's configured and working? Also, is the placement
> endpoint in the keystone catalog...where does it point to?
>
> Just some thoughts, thanks,
> Curtis.
>
> >
> > Regards
> > Ignazio
> >
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
>
>
> --
> Blog: serverascode.com
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ocata centos 7 placement api not working

2017-06-07 Thread Ignazio Cassano
PS: the placement endpoint is on the vip.

Il 07/Giu/2017 15:00, "Curtis" <serverasc...@gmail.com> ha scritto:

> On Wed, Jun 7, 2017 at 6:45 AM, Ignazio Cassano
> <ignaziocass...@gmail.com> wrote:
> > Hello All,
> > I just installed ocata on centos 7 and, verifying the nova installation,
> > I ran the command:
> >
> > nova-status -d upgrade check
> >
> >
> > It returns:
> >
> > Error:
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 456,
> in
> > main
> > ret = fn(*fn_args, **fn_kwargs)
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 386,
> in
> > check
> > result = func(self)
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 201,
> in
> > _check_placement
> > versions = self._placement_get("/")
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 189,
> in
> > _placement_get
> > return client.get(path, endpoint_filter=ks_filter).json()
> >   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> > 758, in get
> > return self.request(url, 'GET', **kwargs)
> >   File "/usr/lib/python2.7/site-packages/positional/__init__.py", line
> 101,
> > in inner
> > return wrapped(*args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> > 616, in request
> > resp = send(**kwargs)
> >   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> > 690, in _send_request
> > raise exceptions.ConnectFailure(msg)
> > ConnectFailure: Unable to establish connection to
> > http://10.102.184.83:8778/: ('Connection aborted.',
> BadStatusLine("''",))
> >
> >
> >
> > The following is my /etc/httpd/conf.d/00-nova-placement-api.conf:
> >
> > Listen 10.102.184.70:8778
> >
> > <VirtualHost *:8778>
> >   WSGIProcessGroup nova-placement-api
> >   WSGIApplicationGroup %{GLOBAL}
> >   WSGIPassAuthorization On
> >   WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova
> > group=nova
> >   WSGIScriptAlias / /usr/bin/nova-placement-api
> >   <IfVersion >= 2.4>
> >     ErrorLogFormat "%M"
> >   </IfVersion>
> >   ErrorLog /var/log/nova/nova-placement-api.log
> >   #SSLEngine On
> >   #SSLCertificateFile ...
> >   #SSLCertificateKeyFile ...
> >   <Directory /usr/bin>
> >     <IfVersion >= 2.4>
> >       Require all granted
> >     </IfVersion>
> >     <IfVersion < 2.4>
> >       Order allow,deny
> >       Allow from all
> >     </IfVersion>
> >   </Directory>
> > </VirtualHost>
> >
> > Alias /nova-placement-api /usr/bin/nova-placement-api
> > <Location /nova-placement-api>
> >   SetHandler wsgi-script
> >   Options +ExecCGI
> >   WSGIProcessGroup nova-placement-api
> >   WSGIApplicationGroup %{GLOBAL}
> >   WSGIPassAuthorization On
> > </Location>
> >
> >
> >
> > Could anyone help me to solve this problem ?
>
> The IPs above are different. Presumably there is some kind of haproxy
> or other loadbalancer on 10.102.184.83 that forwards to the actual
> service and it's configured and working? Also, is the placement
> endpoint in the keystone catalog...where does it point to?
>
> Just some thoughts, thanks,
> Curtis.
>
> >
> > Regards
> > Ignazio
> >
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
>
>
> --
> Blog: serverascode.com
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ocata centos 7 placement api not working

2017-06-07 Thread Ignazio Cassano
Hello, thanks for your answer.
I have 3 controllers in a cluster.
On each controller haproxy is installed.
10.102.184.83 is the vip and it redirects to each controller via haproxy.
It works for all services; the only issue is with the placement api.
Regards



2017-06-07 15:00 GMT+02:00 Curtis <serverasc...@gmail.com>:

> On Wed, Jun 7, 2017 at 6:45 AM, Ignazio Cassano
> <ignaziocass...@gmail.com> wrote:
> > Hello All,
> > I just installed ocata on centos 7 and, verifying the nova installation,
> > I ran the command:
> >
> > nova-status -d upgrade check
> >
> >
> > It returns:
> >
> > Error:
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 456,
> in
> > main
> > ret = fn(*fn_args, **fn_kwargs)
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 386,
> in
> > check
> > result = func(self)
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 201,
> in
> > _check_placement
> > versions = self._placement_get("/")
> >   File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 189,
> in
> > _placement_get
> > return client.get(path, endpoint_filter=ks_filter).json()
> >   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> > 758, in get
> > return self.request(url, 'GET', **kwargs)
> >   File "/usr/lib/python2.7/site-packages/positional/__init__.py", line
> 101,
> > in inner
> > return wrapped(*args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> > 616, in request
> > resp = send(**kwargs)
> >   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> > 690, in _send_request
> > raise exceptions.ConnectFailure(msg)
> > ConnectFailure: Unable to establish connection to
> > http://10.102.184.83:8778/: ('Connection aborted.',
> BadStatusLine("''",))
> >
> >
> >
> > The following is my /etc/httpd/conf.d/00-nova-placement-api.conf:
> >
> > Listen 10.102.184.70:8778
> >
> > <VirtualHost *:8778>
> >   WSGIProcessGroup nova-placement-api
> >   WSGIApplicationGroup %{GLOBAL}
> >   WSGIPassAuthorization On
> >   WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova
> > group=nova
> >   WSGIScriptAlias / /usr/bin/nova-placement-api
> >   <IfVersion >= 2.4>
> >     ErrorLogFormat "%M"
> >   </IfVersion>
> >   ErrorLog /var/log/nova/nova-placement-api.log
> >   #SSLEngine On
> >   #SSLCertificateFile ...
> >   #SSLCertificateKeyFile ...
> >   <Directory /usr/bin>
> >     <IfVersion >= 2.4>
> >       Require all granted
> >     </IfVersion>
> >     <IfVersion < 2.4>
> >       Order allow,deny
> >       Allow from all
> >     </IfVersion>
> >   </Directory>
> > </VirtualHost>
> >
> > Alias /nova-placement-api /usr/bin/nova-placement-api
> > <Location /nova-placement-api>
> >   SetHandler wsgi-script
> >   Options +ExecCGI
> >   WSGIProcessGroup nova-placement-api
> >   WSGIApplicationGroup %{GLOBAL}
> >   WSGIPassAuthorization On
> > </Location>
> >
> >
> >
> > Could anyone help me to solve this problem ?
>
> The IPs above are different. Presumably there is some kind of haproxy
> or other loadbalancer on 10.102.184.83 that forwards to the actual
> service and it's configured and working? Also, is the placement
> endpoint in the keystone catalog...where does it point to?
>
> Just some thoughts, thanks,
> Curtis.
>
> >
> > Regards
> > Ignazio
> >
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
>
>
> --
> Blog: serverascode.com
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] ocata centos 7 placement api not working

2017-06-07 Thread Ignazio Cassano
Hello All,
I just installed ocata on centos 7 and, verifying the nova installation, I ran
the command:

nova-status -d upgrade check


It returns:

Error:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 456, in
main
ret = fn(*fn_args, **fn_kwargs)
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 386, in
check
result = func(self)
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 201, in
_check_placement
versions = self._placement_get("/")
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 189, in
_placement_get
return client.get(path, endpoint_filter=ks_filter).json()
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
758, in get
return self.request(url, 'GET', **kwargs)
  File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101,
in inner
return wrapped(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
616, in request
resp = send(**kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
690, in _send_request
raise exceptions.ConnectFailure(msg)
ConnectFailure: Unable to establish connection to http://10.102.184.83:8778/:
('Connection aborted.', BadStatusLine("''",))



The following is my /etc/httpd/conf.d/00-nova-placement-api.conf:

Listen 10.102.184.70:8778

<VirtualHost *:8778>
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova
group=nova
  WSGIScriptAlias / /usr/bin/nova-placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/nova/nova-placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
  <Directory /usr/bin>
    <IfVersion >= 2.4>
      Require all granted
    </IfVersion>
    <IfVersion < 2.4>
      Order allow,deny
      Allow from all
    </IfVersion>
  </Directory>
</VirtualHost>

Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>



Could anyone help me to solve this problem ?

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] newton trove image creation

2017-05-30 Thread Ignazio Cassano
Hi All,
reading the trove documentation I found it is not clear enough how to create
trove images.
At the following link

https://docs.openstack.org/developer/trove/dev/building_guest_images.html

I do not understand how to initialize some variables:

export HOST_USERNAME
export HOST_SCP_USERNAME
export GUEST_USERNAME
export CONTROLLER_IP
export TROVESTACK_SCRIPTS
export SERVICE_TYPE
export PATH_TROVE
export ESCAPED_PATH_TROVE
export SSH_DIR
export GUEST_LOGDIR
export ESCAPED_GUEST_LOGDIR
export DIB_CLOUD_INIT_DATASOURCES="ConfigDrive"
export DATASTORE_PKG_LOCATION
export BRANCH_OVERRIDE
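
For what it's worth, a plausible initialization, loosely following the
trovestack scripts (all values here are examples, not authoritative):

export HOST_USERNAME=root
export HOST_SCP_USERNAME=root
export GUEST_USERNAME=ubuntu
export CONTROLLER_IP=controller
export TROVESTACK_SCRIPTS="/opt/stack/trove/integration/scripts"
export SERVICE_TYPE=mysql
export PATH_TROVE="/opt/stack/trove"
export ESCAPED_PATH_TROVE=$(echo $PATH_TROVE | sed 's/\//\\\//g')
export SSH_DIR="/root/.ssh"
export GUEST_LOGDIR="/var/log/trove/"
export ESCAPED_GUEST_LOGDIR=$(echo $GUEST_LOGDIR | sed 's/\//\\\//g')
export DIB_CLOUD_INIT_DATASOURCES="ConfigDrive"
export DATASTORE_PKG_LOCATION=""
export BRANCH_OVERRIDE=stable/newton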

Can anyone help me ?

Thanks & Regards

Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Newton lbaas v2 remote_addr

2017-05-15 Thread Ignazio Cassano
Hi All, I installed newton with lbaas v2 haproxy.
Creating an http load balancer, the remote_addr shown by each balanced
apache is always the load balancer ip. Is there any option to show the
client address ?
Regards
Ignazio
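
In case it helps: if the listener's haproxy inserts X-Forwarded-For (an
assumption; on the haproxy side this corresponds to 'option forwardfor'),
apache can be told to trust that header via mod_remoteip, roughly:

LoadModule remoteip_module modules/mod_remoteip.so
RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 10.102.184.0/24

(the RemoteIPInternalProxy range is a placeholder for the load balancer
address), after which remote_addr reflects the real client.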
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Remote pacemaker on compute nodes

2017-05-15 Thread Ignazio Cassano
Hello,
adding remote compute with:
pcs resource create computenode1 remote server=10.102.184.91

instead of:
pcs resource create computenode1 ocf:pacemaker:remote reconnect_interval=60
op monitor interval=20

SOLVES the issue when an unexpected compute node reboot happens.
The node returns online and works fine.
Thanks all for the help.
Regards
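
A quick way to confirm the remote node has really rejoined after a reboot
(resource name as above) might be:

pcs status nodes
pcs resource show computenode1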

2017-05-13 16:55 GMT+02:00 Sam P <sam47pr...@gmail.com>:

> Hi,
>
>  This might not what exactly you are looking for... but... you may extend
> this.
>  In Masakari [0], we use pacemaker-remote in masakari-monitors[1] to
> monitor node failures.
>  In [1], there is hostmonitor.sh, which will gonna deprecate in next
> cycle, but straightforward way to do this.
>  [0] https://wiki.openstack.org/wiki/Masakari
>  [1] https://github.com/openstack/masakari-monitors/tree/master/
> masakarimonitors/hostmonitor
>
>  Then there is pacemaker-resources agents,
>  https://github.com/openstack/openstack-resource-agents/tree/master/ocf
>
> > I have already tried "pcs resource cleanup" but it cleans fine all
> resources
> > but not remote nodes.
> > Anycase on monday I'll send what you requested.
> Hope we can get more details on Monday.
>
> --- Regards,
> Sampath
>
>
>
> On Sat, May 13, 2017 at 9:52 PM, Ignazio Cassano
> <ignaziocass...@gmail.com> wrote:
> > Thanks Curtis.
> > I have already tried "pcs resource cleanup" but it cleans fine all
> resources
> > but not remote nodes.
> > Anycase on monday I'll send what you requested.
> > Regards
> > Ignazio
> >
> > On 13/May/2017 14:27, "Curtis" <serverasc...@gmail.com> wrote:
> >
> > On Fri, May 12, 2017 at 10:23 PM, Ignazio Cassano
> > <ignaziocass...@gmail.com> wrote:
> >> Hi Curtis, at this time I am using remote pacemaker only for controlling
> >> openstack services on compute nodes (neutron openvswitch-agent,
> >> nova-compute, ceilometer compute). I wrote my own ansible playbooks to
> >> install and configure all components.
> >> Second step could  be expand it for vm high availability.
> >> I did not find any procedure for cleaning up compute node after
> rebooting
> >> and I googled a lot without luck.
> >
> > Can you paste some putput of something like "pcs status" and I can try
> > to take a look?
> >
> > I've only used pacemaker a little, but I'm fairly sure it's going to
> > be something like "pcs resource cleanup "
> >
> > Thanks,
> > Curtis.
> >
> >> Regards
> >> Ignazio
> >>
> >> On 13/May/2017 00:32, "Curtis" <serverasc...@gmail.com> wrote:
> >>
> >> On Fri, May 12, 2017 at 8:51 AM, Ignazio Cassano
> >> <ignaziocass...@gmail.com> wrote:
> >>> Hello All,
> >>> I installed openstack newton p
> >>> with a pacemaker cluster made up of 3 controllers and 2 compute nodes.
> >>> All
> >>> computer have centos 7.3.
> >>> Compute nodes are provided with remote pacemaker ocf resource.
> >>> If before shutting down a compute node I disable the compute node
> >>> resource
> >>> in the cluster and enable it when the compute returns up, it work fine
> >>> and
> >>> cluster shows it online.
> >>> If the compute node goes down before disabling the compute node
> resource
> >>> in
> >>> the cluster, it remains offline also after it is powered up.
> >>> The only solution I found is removing the compute node resource in the
> >>> cluster and add it again with a different name (adding this new name in
> >>> all
> >>> controllers /etc/hosts file).
> >>> With the above workaround it returns online for the cluster and all its
> >>> resources  (openstack-nova-compute etc etc) return to work fine.
> >>> Please,  does anyone know a better solution ?
> >>
> >> What are you using pacemaker for on the compute nodes? I have not done
> >> that personally, but my impression is that sometimes people do that in
> >> order to have virtual machines restarted somewhere else should the
> >> compute node go down outside of a maintenance window (ie. "instance
> >> high availability"). Is that your use case? If so, I would imagine
> >> there is some kind of clean up procedure to put the compute node back
> >> into use when pacemaker thinks it has failed. Did you use some kind of
> >> openstack distribution or follow a particular installation document to
> >> enabl

Re: [Openstack-operators] Remote pacemaker on compute nodes

2017-05-14 Thread Ignazio Cassano
Hello again: meanwhile I connected to my lab and read some documentation.
Adding remote compute with the following command:

pcs resource create computenode1 remote server=10.102.184.91
pcs property set --node computenode1 osprole=compute

instead of:

 pcs resource create compute-1 ocf:pacemaker:remote reconnect_interval=60
op monitor interval=20
 pcs property set --node compute-1 osprole=compute

I can avoid inserting aliases every time an unexpected compute reboot
happens.

In any case I would prefer to avoid this workaround.
I expect the remote compute resource to return online like any other resource
in case of failure.
Regards
Ignazio



2017-05-13 16:55 GMT+02:00 Sam P <sam47pr...@gmail.com>:

> Hi,
>
>  This might not what exactly you are looking for... but... you may extend
> this.
>  In Masakari [0], we use pacemaker-remote in masakari-monitors[1] to
> monitor node failures.
>  In [1], there is hostmonitor.sh, which will gonna deprecate in next
> cycle, but straightforward way to do this.
>  [0] https://wiki.openstack.org/wiki/Masakari
>  [1] https://github.com/openstack/masakari-monitors/tree/master/
> masakarimonitors/hostmonitor
>
>  Then there is pacemaker-resources agents,
>  https://github.com/openstack/openstack-resource-agents/tree/master/ocf
>
> > I have already tried "pcs resource cleanup" but it cleans fine all
> resources
> > but not remote nodes.
> > Anycase on monday I'll send what you requested.
> Hope we can get more details on Monday.
>
> --- Regards,
> Sampath
>
>
>
> On Sat, May 13, 2017 at 9:52 PM, Ignazio Cassano
> <ignaziocass...@gmail.com> wrote:
> > Thanks Curtis.
> > I have already tried "pcs resource cleanup" but it cleans fine all
> resources
> > but not remote nodes.
> > Anycase on monday I'll send what you requested.
> > Regards
> > Ignazio
> >
> > On 13/May/2017 14:27, "Curtis" <serverasc...@gmail.com> wrote:
> >
> > On Fri, May 12, 2017 at 10:23 PM, Ignazio Cassano
> > <ignaziocass...@gmail.com> wrote:
> >> Hi Curtis, at this time I am using remote pacemaker only for controlling
> >> openstack services on compute nodes (neutron openvswitch-agent,
> >> nova-compute, ceilometer compute). I wrote my own ansible playbooks to
> >> install and configure all components.
> >> Second step could  be expand it for vm high availability.
> >> I did not find any procedure for cleaning up compute node after
> rebooting
> >> and I googled a lot without luck.
> >
> > Can you paste some putput of something like "pcs status" and I can try
> > to take a look?
> >
> > I've only used pacemaker a little, but I'm fairly sure it's going to
> > be something like "pcs resource cleanup "
> >
> > Thanks,
> > Curtis.
> >
> >> Regards
> >> Ignazio
> >>
> >> On 13/May/2017 00:32, "Curtis" <serverasc...@gmail.com> wrote:
> >>
> >> On Fri, May 12, 2017 at 8:51 AM, Ignazio Cassano
> >> <ignaziocass...@gmail.com> wrote:
> >>> Hello All,
> >>> I installed openstack newton p
> >>> with a pacemaker cluster made up of 3 controllers and 2 compute nodes.
> >>> All
> >>> computer have centos 7.3.
> >>> Compute nodes are provided with remote pacemaker ocf resource.
> >>> If before shutting down a compute node I disable the compute node
> >>> resource
> >>> in the cluster and enable it when the compute returns up, it work fine
> >>> and
> >>> cluster shows it online.
> >>> If the compute node goes down before disabling the compute node
> resource
> >>> in
> >>> the cluster, it remains offline also after it is powered up.
> >>> The only solution I found is removing the compute node resource in the
> >>> cluster and add it again with a different name (adding this new name in
> >>> all
> >>> controllers /etc/hosts file).
> >>> With the above workaround it returns online for the cluster and all its
> >>> resources  (openstack-nova-compute etc etc) return to work fine.
> >>> Please,  does anyone know a better solution ?
> >>
> >> What are you using pacemaker for on the compute nodes? I have not done
> >> that personally, but my impression is that sometimes people do that in
> >> order to have virtual machines restarted somewhere else should the
> >> compute node go down outside of a maintenance window (ie. "instance
> >> high availability"). Is th

Re: [Openstack-operators] Remote pacemaker on compute nodes

2017-05-14 Thread Ignazio Cassano
Hello, this morning I connected to my office remotely to send the information
you requested.

Attached here are:
status: output of the command pcs status
resources: output of the command pcs resource
hosts: the controllers' /etc/hosts, where I added aliases for compute nodes
every time I rebooted a compute node simulating an unexpected reboot.
As you can see, the last simulation was on compute-node1. In fact it is marked
offline but its remote pacemaker service is online.

[root@compute-1 ~]# systemctl status pacemaker_remote.service
● pacemaker_remote.service - Pacemaker Remote Service
   Loaded: loaded (/usr/lib/systemd/system/pacemaker_remote.service;
enabled; vendor preset: disabled)
   Active: active (running) since ven 2017-05-12 09:30:08 EDT; 1 day 20h ago
 Docs: man:pacemaker_remoted

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Pacemaker_Remote/index.html
 Main PID: 3756 (pacemaker_remot)
   CGroup: /system.slice/pacemaker_remote.service
   └─3756 /usr/sbin/pacemaker_remoted

mag 12 09:30:08 compute-1 systemd[1]: Started Pacemaker Remote Service.
mag 12 09:30:08 compute-1 systemd[1]: Starting Pacemaker Remote Service...
mag 12 09:30:08 compute-1 pacemaker_remoted[3756]:   notice: Additional
loggi...
mag 12 09:30:08 compute-1 pacemaker_remoted[3756]:   notice: Starting a tls
l...
mag 12 09:30:08 compute-1 pacemaker_remoted[3756]:   notice: Listening on
add...
Hint: Some lines were ellipsized, use -l to show in full.

Regards Ignazio


2017-05-13 16:55 GMT+02:00 Sam P <sam47pr...@gmail.com>:

> Hi,
>
>  This might not what exactly you are looking for... but... you may extend
> this.
>  In Masakari [0], we use pacemaker-remote in masakari-monitors[1] to
> monitor node failures.
>  In [1], there is hostmonitor.sh, which will gonna deprecate in next
> cycle, but straightforward way to do this.
>  [0] https://wiki.openstack.org/wiki/Masakari
>  [1] https://github.com/openstack/masakari-monitors/tree/master/
> masakarimonitors/hostmonitor
>
>  Then there is pacemaker-resources agents,
>  https://github.com/openstack/openstack-resource-agents/tree/master/ocf
>
> > I have already tried "pcs resource cleanup" but it cleans fine all
> resources
> > but not remote nodes.
> > Anycase on monday I'll send what you requested.
> Hope we can get more details on Monday.
>
> --- Regards,
> Sampath
>
>
>
> On Sat, May 13, 2017 at 9:52 PM, Ignazio Cassano
> <ignaziocass...@gmail.com> wrote:
> > Thanks Curtis.
> > I have already tried "pcs resource cleanup" but it cleans fine all
> resources
> > but not remote nodes.
> > Anycase on monday I'll send what you requested.
> > Regards
> > Ignazio
> >
> > On 13/May/2017 14:27, "Curtis" <serverasc...@gmail.com> wrote:
> >
> > On Fri, May 12, 2017 at 10:23 PM, Ignazio Cassano
> > <ignaziocass...@gmail.com> wrote:
> >> Hi Curtis, at this time I am using remote pacemaker only for controlling
> >> openstack services on compute nodes (neutron openvswitch-agent,
> >> nova-compute, ceilometer compute). I wrote my own ansible playbooks to
> >> install and configure all components.
> >> Second step could  be expand it for vm high availability.
> >> I did not find any procedure for cleaning up compute node after
> rebooting
> >> and I googled a lot without luck.
> >
> > Can you paste some putput of something like "pcs status" and I can try
> > to take a look?
> >
> > I've only used pacemaker a little, but I'm fairly sure it's going to
> > be something like "pcs resource cleanup "
> >
> > Thanks,
> > Curtis.
> >
> >> Regards
> >> Ignazio
> >>
> >> On 13/May/2017 00:32, "Curtis" <serverasc...@gmail.com> wrote:
> >>
> >> On Fri, May 12, 2017 at 8:51 AM, Ignazio Cassano
> >> <ignaziocass...@gmail.com> wrote:
> >>> Hello All,
> >>> I installed openstack newton p
> >>> with a pacemaker cluster made up of 3 controllers and 2 compute nodes.
> >>> All
> >>> computer have centos 7.3.
> >>> Compute nodes are provided with remote pacemaker ocf resource.
> >>> If before shutting down a compute node I disable the compute node
> >>> resource
> >>> in the cluster and enable it when the compute returns up, it work fine
> >>> and
> >>> cluster shows it online.
> >>> If the compute node goes down before disabling the compute node
> resource
> >>> in
> >>> the cluster, it remains offline also after it is powered up.
> >>> The only solution I found is 

Re: [Openstack-operators] Openvswitch flat and provider in the same bond

2017-04-19 Thread Ignazio Cassano
Thx Kris.
If I map the flat network to br-flat and add the bond.567 interface to
br-flat, would it not work ?
Regards
Ignazio


On 19/Apr/2017 21:20, "Kris G. Lindgren" <klindg...@godaddy.com> wrote:

We handle this a different way; I want to see if we can redo it so that it
requires less pre-configuration on the host.  However, this is what we
currently do (and have been doing for ~4 years):
http://www.dorm.org/blog/wp-content/uploads/2015/10/ovs-wiring-676x390.png

From there, inside neutron ml2 you just specify bridge mapping entries of the
form <physnet>:<bridge>.

We have the HV vlan being the native vlan.  However, you can simply change
to the config for mgmt0 to have tag=.  If I were to change this I
would think about using vlan networks instead of flat networks for
everything VM related.  So that OVS, would create the vlan specific
interfaces, instead of us doing it ahead of time and telling neutron to use
what we created.
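
As a sketch of the neutron side of those mappings (flat_networks and
bridge_mappings are the standard ml2 / OVS-agent options; the physnet
names here are illustrative, not from this message):

  # ml2_conf.ini
  [ml2_type_flat]
  flat_networks = physnet499,physnet500

  # openvswitch_agent.ini
  [ovs]
  bridge_mappings = physnet499:br499,physnet500:br500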



Under redhat we use the following network-scripts to make this happen on
boot:
/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
# The next 2 lines are required on first
# boot to work around some kudzu stupidity.
ETHTOOL_OPTS="wol g autoneg on"
HWADDR=F0:4D:A2:0A:E4:26
unset HWADDR

/etc/sysconfig/network-scripts/ifcfg-eth2:
DEVICE=eth2
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
ETHTOOL_OPTS="wol g autoneg on"
# The next 2 lines are required on first
# boot to work around some kudzu stupidity.
HWADDR=F0:4D:A2:0A:E4:2A
unset HWADDR

/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
TYPE=OVSBond
DEVICETYPE=ovs
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
BOND_IFACES="eth0 eth2"
OVS_BRIDGE=br-ext
OVS_EXTRA="set interface eth0 other-config:enable-vlan-splinters=true -- set interface eth2 other-config:enable-vlan-splinters=true"

/etc/sysconfig/network-scripts/ifcfg-br-ext:
DEVICE=br-ext
TYPE=OVSBridge
DEVICETYPE="ovs"
BOOTPROTO=none
ONBOOT=yes
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-mgmt0:
DEVICE=mgmt0
TYPE=OVSIntPort
DEVICETYPE="ovs"
ONBOOT=yes
USERCTL=no
OVS_OPTIONS="vlan_mode=native-untagged"
OVS_EXTRA=""
OVS_BRIDGE=br-ext
IPADDR=<ip-address>
NETMASK=<netmask>

/etc/sysconfig/network-scripts/ifcfg-ext-vlan-499:
DEVICE=ext-vlan-499
TYPE=OVSPort
DEVICETYPE="ovs"
ONBOOT=yes
USERCTL=no
OVS_OPTIONS="tag=499"
OVS_EXTRA="set interface $DEVICE type=patch -- set interface $DEVICE options:peer=br499-ext"
OVS_BRIDGE=br-ext

/etc/sysconfig/network-scripts/ifcfg-br499-ext:
DEVICE=br499-ext
TYPE=OVSIntPort
DEVICETYPE="ovs"
ONBOOT=yes
USERCTL=no
OVS_EXTRA="set interface $DEVICE type=patch -- set interface $DEVICE options:peer=ext-vlan-499"
OVS_BRIDGE=br499

/etc/sysconfig/network-scripts/ifcfg-br499:
DEVICE=br499
TYPE=OVSBridge
DEVICETYPE="ovs"
BOOTPROTO=none
ONBOOT=yes
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-ext-vlan-500:
DEVICE=ext-vlan-500
TYPE=OVSPort
DEVICETYPE="ovs"
ONBOOT=yes
USERCTL=no
OVS_OPTIONS="tag=500"
OVS_EXTRA="set interface $DEVICE type=patch -- set interface $DEVICE options:peer=br500-ext"
OVS_BRIDGE=br-ext

/etc/sysconfig/network-scripts/ifcfg-br500-ext:
DEVICE=br500-ext
TYPE=OVSIntPort
DEVICETYPE="ovs"
ONBOOT=yes
USERCTL=no
OVS_EXTRA="set interface $DEVICE type=patch -- set interface $DEVICE options:peer=ext-vlan-500"
OVS_BRIDGE=br500

/etc/sysconfig/network-scripts/ifcfg-br500:
DEVICE=br500
TYPE=OVSBridge
DEVICETYPE="ovs"
BOOTPROTO=none
ONBOOT=yes
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-br-int:
DEVICE=br-int
TYPE=OVSBridge
DEVICETYPE="ovs"
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
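
A quick sanity check after boot, using standard Open vSwitch commands
(bond0 and br-ext as defined above):

  ovs-vsctl show                # bridges, the bond, and the patch-port pairs
  ovs-appctl bond/show bond0    # member and LACP status of the bond
  ovs-vsctl list-ports br-ext   # should include bond0, mgmt0, ext-vlan-499/500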



___

Kris Lindgren

Senior Linux Systems Engineer

GoDaddy



From: Ignazio Cassano <ignaziocass...@gmail.com>
Date: Wednesday, April 19, 2017 at 1:06 PM
To: Dan Sneddon <dsned...@redhat.com>
Cc: OpenStack Operators <openstack-operators@lists.openstack.org>
Subject: Re: [Openstack-operators] Openvswitch flat and provider in the
same bond



Hi Dan, on the physical switch 567 is not the native vlan; it is tagged
like 555 and 556.
I know I could set 567 as the native vlan to receive it untagged.
But what if I would like more than one flat network?
I am not skilled in networking, but I think only one native vlan can be
set on a switch port.
Any further solution or suggestion?

Regards
Ignazio
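
One common answer (sketched here; the network name and physnet are
placeholders, not from the thread) is to keep everything tagged and model
each would-be flat network as a vlan provider network instead:

  openstack network create net567 \
    --provider-network-type vlan \
    --provider-physical-network physnet1 \
    --provider-segment 567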



On 19/Apr/2017 20:19, "Dan Sneddon" <dsned...@redhat.com> wrote:

On 04/19/2017 09:02 AM, Ignazio Cassano wrote:
> Dear All, in my openstack Newton installation, compute and controller
> nodes have a separate management network nic and an LACP bond0 where
> provider vlans (555, 556) and a flat vlan (567) are trunked.
> Since I cannot spec

Re: [Openstack-operators] Openvswitch flat and provider in the same bond

2017-04-19 Thread Ignazio Cassano
Many Thanks, Dan
Ignazio

On 19/Apr/2017 21:20, "Kris G. Lindgren" <klindg...@godaddy.com> wrote:

> [Kris's full reply, including the network-scripts configuration, is
> quoted verbatim in the message above and snipped here.]
