Re: [Openstack] Cinder - Could not start Cinder Volume service-Ocata

2017-06-20 Thread Adhi Priharmanto
Try populating the Block Storage database:

# su -s /bin/sh -c "cinder-manage db sync" cinder


as described in
https://docs.openstack.org/ocata/install-guide-rdo/cinder-controller-install.html#install-and-configure-components
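
If the sync was already run, a quick sanity check before restarting the service looks roughly like this (a sketch; the service name assumes a systemd-based distro):

```shell
# Show the schema version cinder-manage currently sees in the database
su -s /bin/sh -c "cinder-manage db version" cinder

# Re-run the migrations, then restart the volume service
su -s /bin/sh -c "cinder-manage db sync" cinder
systemctl restart cinder-volume
```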





On Wed, Jun 21, 2017 at 1:37 PM, Arne Wiebalck wrote:

> I saw something similar when preparing the Ocata upgrade: in our case this
> was caused by stale entries in the 'services' table.
> So, you may want to check the entries in that table and clean things up if
> necessary.
>
> Cheers,
>  Arne
>
>
> On 17 Jun 2017, at 19:59, SGopinath s.gopinath  wrote:
>
>
> Hi,
>
> I could not start the cinder-volume service and I'm getting the following
> error.
>
> I pasted here my /etc/cinder/cinder.conf.
>
> Please help.
>
> Thanks,
> S.Gopinath
>
> /etc/cinder/cinder.conf
> ==
>
>
> [DEFAULT]
> rootwrap_config = /etc/cinder/rootwrap.conf
> api_paste_confg = /etc/cinder/api-paste.ini
> iscsi_helper = tgtadm
> volume_name_template = volume-%s
> #volume_group = cinder-volumes
> verbose = True
> auth_strategy = keystone
> state_path = /var/lib/cinder
> lock_path = /var/lock/cinder
> volumes_dir = /var/lib/cinder/volumes
> ###SXG
> transport_url = rabbit://openstack:OpenStack123@controller
> auth_strategy = keystone
> my_ip = 10.163.61.22
> enabled_backends = lvm
> glance_api_servers = http://controller:9292
>
> ###SXG
> [database]
> connection = mysql+pymysql://cinder:CinderOpenStack123@controller/cinder
>
> ###SXG
> [keystone_authtoken]
> auth_uri = http://controller:5000
> auth_url = http://controller:35357
> memcached_servers = controller:11211
> auth_type = password
> project_domain_name = default
> user_domain_name = default
> project_name = service
> username = cinder
> password = CinderOpenStack123
>
> ###SXG
> [lvm]
> volume_drivers = cinder.volume.drivers.lvm.LVMVolumeDriver
> volume_group = cinder-volumes
> iscsi_protocol = iscsi
> iscsi_helper = tgtadm
>
> ###SXG
> [oslo_concurrency]
> lock_path = /var/lib/cinder/tmp
>
> 
> ###END##
>
> root@cloud1:/etc/cinder# vgs
>   VG #PV #LV #SN Attr   VSize  VFree
>   cinder-volumes   1   0   0 wz--n- 37.22g 37.22g
> root@cloud1:/etc/cinder# pvs
>   PV VG Fmt  Attr PSize  PFree
>   /dev/sda4  cinder-volumes lvm2 a--  37.22g 37.22g
>
>
> 
> ###
> 2017-06-18 05:11:50.953 5230 WARNING oslo_reports.guru_meditation_report
> [-] Guru mediation now registers SIGUSR1 and SIGUSR2 by default for
> backward compatibility. SIGUSR1 will no longer be registered in a future
> release, so please use SIGUSR2 to generate reports.
> 2017-06-18 05:11:51.105 5230 INFO root [-] Generating grammar tables from
> /usr/lib/python2.7/lib2to3/Grammar.txt
> 2017-06-18 05:11:51.183 5230 INFO root [-] Generating grammar tables from
> /usr/lib/python2.7/lib2to3/PatternGrammar.txt
> 2017-06-18 05:11:51.238 5230 WARNING py.warnings
> [req-85789cbc-b26c-47c5-be34-035fae86e504 - - - - -]
> /usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py:241:
> NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
>   exception.NotSupportedWarning
>
> 2017-06-18 05:11:51.357 5230 INFO cinder.rpc 
> [req-85789cbc-b26c-47c5-be34-035fae86e504
> - - - - -] Automatically selected cinder-scheduler objects version 1.21 as
> minimum service version.
> 2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume
> [req-85789cbc-b26c-47c5-be34-035fae86e504 - - - - -] Volume service
> cloud1@lvm failed to start.
> 2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume Traceback (most
> recent call last):
> 2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume   File
> "/usr/lib/python2.7/dist-packages/cinder/cmd/volume.py", line 81, in main
> 2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume
> binary='cinder-volume')
> 2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume   File
> "/usr/lib/python2.7/dist-packages/cinder/service.py", line 268, in create
> 2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume
> service_name=service_name)
> 2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume   File
> "/usr/lib/python2.7/dist-packages/cinder/service.py", line 150, in
> __init__
> 2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume *args, **kwargs)
> 2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume   File
> "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 235, in
> __init__
> 2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume *args, **kwargs)
> 2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume   File
> "/usr/lib/python2.7/dist-packages/cinder/manager.py", line 156, in
> __init__
> 2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume
> self.scheduler_rpcapi = scheduler_rpcapi.SchedulerAPI()
> 2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume   File
> "/usr/lib/python2.7/dist-packages/cinder/rpc.py", line

Re: [Openstack] Cinder - Could not start Cinder Volume service-Ocata

2017-06-20 Thread Arne Wiebalck
I saw something similar when preparing the Ocata upgrade: in our case this was
caused by stale entries in the 'services' table.
So, you may want to check the entries in that table and clean things up if 
necessary.
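
For reference, a rough sketch of how one might inspect and clean that table (the host name 'oldnode' below is hypothetical; verify carefully before removing anything):

```shell
# List services as Cinder sees them (binary, host, state, last heartbeat)
su -s /bin/sh -c "cinder-manage service list" cinder

# Look at the raw rows in the 'services' table directly
mysql -u cinder -p cinder -e "SELECT id, host, topic, deleted FROM services;"

# Remove a stale cinder-volume registration for the hypothetical host 'oldnode'
su -s /bin/sh -c "cinder-manage service remove cinder-volume oldnode" cinder
```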

Cheers,
 Arne


On 17 Jun 2017, at 19:59, SGopinath s.gopinath <s.gopin...@gov.in> wrote:


Hi,

I could not start the cinder-volume service and I'm getting the following error.

I pasted here my /etc/cinder/cinder.conf.

Please help.

Thanks,
S.Gopinath

/etc/cinder/cinder.conf
==


[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
#volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
###SXG
transport_url = rabbit://openstack:OpenStack123@controller
auth_strategy = keystone
my_ip = 10.163.61.22
enabled_backends = lvm
glance_api_servers = http://controller:9292

###SXG
[database]
connection = mysql+pymysql://cinder:CinderOpenStack123@controller/cinder

###SXG
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CinderOpenStack123

###SXG
[lvm]
volume_drivers = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

###SXG
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

###END##

root@cloud1:/etc/cinder# vgs
  VG #PV #LV #SN Attr   VSize  VFree
  cinder-volumes   1   0   0 wz--n- 37.22g 37.22g
root@cloud1:/etc/cinder# pvs
  PV VG Fmt  Attr PSize  PFree
  /dev/sda4  cinder-volumes lvm2 a--  37.22g 37.22g


###
2017-06-18 05:11:50.953 5230 WARNING oslo_reports.guru_meditation_report [-] 
Guru mediation now registers SIGUSR1 and SIGUSR2 by default for backward 
compatibility. SIGUSR1 will no longer be registered in a future release, so 
please use SIGUSR2 to generate reports.
2017-06-18 05:11:51.105 5230 INFO root [-] Generating grammar tables from 
/usr/lib/python2.7/lib2to3/Grammar.txt
2017-06-18 05:11:51.183 5230 INFO root [-] Generating grammar tables from 
/usr/lib/python2.7/lib2to3/PatternGrammar.txt
2017-06-18 05:11:51.238 5230 WARNING py.warnings 
[req-85789cbc-b26c-47c5-be34-035fae86e504 - - - - -] 
/usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py:241: 
NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning

2017-06-18 05:11:51.357 5230 INFO cinder.rpc 
[req-85789cbc-b26c-47c5-be34-035fae86e504 - - - - -] Automatically selected 
cinder-scheduler objects version 1.21 as minimum service version.
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume 
[req-85789cbc-b26c-47c5-be34-035fae86e504 - - - - -] Volume service cloud1@lvm 
failed to start.
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume Traceback (most recent 
call last):
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume   File 
"/usr/lib/python2.7/dist-packages/cinder/cmd/volume.py", line 81, in main
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume binary='cinder-volume')
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume   File 
"/usr/lib/python2.7/dist-packages/cinder/service.py", line 268, in create
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume 
service_name=service_name)
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume   File 
"/usr/lib/python2.7/dist-packages/cinder/service.py", line 150, in __init__
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume *args, **kwargs)
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume   File 
"/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 235, in 
__init__
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume *args, **kwargs)
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume   File 
"/usr/lib/python2.7/dist-packages/cinder/manager.py", line 156, in __init__
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume self.scheduler_rpcapi 
= scheduler_rpcapi.SchedulerAPI()
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume   File 
"/usr/lib/python2.7/dist-packages/cinder/rpc.py", line 188, in __init__
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume serializer = 
base.CinderObjectSerializer(obj_version_cap)
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume   File 
"/usr/lib/python2.7/dist-packages/cinder/objects/base.py", line 412, in __init__
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume raise 
exception.CappedVersionUnknown(version=version_cap)
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume CappedVersionUnknown: 
Unrecovera

[Openstack] [Magnum] Kube-controller-manager&scheduler doesn't work normally in kubernetes by deploying magnum

2017-06-20 Thread 손길산
Hello stack users.
I was deploying a Kubernetes environment using OpenStack Magnum (Newton) in VirtualBox.
I followed the official install guide and installed it.
But kube-scheduler and kube-controller-manager were constantly dying and being
recreated.
And I found that the scheduler port (10251) and the controller-manager port
(10252) are not open.
 
I think the kube-scheduler and kube-controller-manager containers don't work
normally.
So I tried many things (modifying kube-controller-manager.yaml and
kube-scheduler.yaml, and changing the Magnum template's TLS_disabled from
False to True).
But it's still behaving abnormally.
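
For anyone hitting the same symptom, a rough way to check from the master node (assuming a Newton-era image where the components run as Docker containers; container names may differ):

```shell
# Are the scheduler (10251) and controller-manager (10252) ports listening?
netstat -tlnp | grep -E ':1025[12]'

# Inspect the restarting containers and their most recent logs
docker ps -a --filter name=kube-scheduler
docker logs --tail 50 $(docker ps -aq --filter name=kube-scheduler | head -n1)
```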
 
Here is my cluster template; dns_nameserver is 8.8.8.8.
 
Are there any people who have experienced something similar to me?
If there is, please share your solution. 
 
 
 
 
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Production openstack on 5 node

2017-06-20 Thread Remo Mattei
I did a deployment with CS9 (HP); it was pretty bad. I hope the new one does
better.

Nevertheless, I do not see many using HP out there. Maybe different regions like
EMEA do better with that.

Sent from my iPhone

> On 20 Jun 2017, at 21:10, John van Ommen wrote:
> 
> At HPE we originally used TripleO but switched to a 'flat' model.
> 
> I personally didn't see any advantage to Triple O. In theory, it should be 
> easier to manage and upgrade. In the real world, Helion 3.0 and 4.0 are 
> superior in every respect.
> 
> John
> 
>> On Jun 20, 2017 9:02 PM, "Remo Mattei"  wrote:
>> I worked for Red Hat, and they really want to get OOO going because the 
>> installation tools never worked as everyone was hoping. Before Red Hat I 
>> was at Mirantis, and the Fuel installer was nice but is now dead. I know OOO 
>> will go into containers in the next couple of releases, but kolla-ansible is 
>> one of the emerging solutions now to get it out fast. 
>> 
>> I am doing a project now where I am working on deploying OOO; I just 
>> finished the doc for the Ocata undercloud. 
>> 
>> Just my two cents to concur with Mike's statement. 
>> 
>> Remo 
>> 
>> Sent from my iPhone
>> 
>>> On 20 Jun 2017, at 20:51, Mike Smith wrote:
>>> 
>>> There are definitely 1,001 opinions on what is “best”.  We use RDO at 
>>> Overstock and we use home-grown puppet modules because we do our own puppet 
>>> modules for everything else we do here.  We based everything around the 
>>> official Openstack install documents and we do it because we want to 
>>> *fully* understand everything we can instead of treating it like a black 
>>> box that knows how to do the magic.
>>> 
>>> However, there are lots of options out there - ansible, kolla, puppet plus 
>>> vendor-specific options too like those provided by Mirantis.  If there are 
>>> config management tools (ansible, puppet, etc) that you already use, you 
>>> may want to check out the Openstack options for those.  You are correct 
>>> that that packstack is more of a ‘all-in-one-server’ installer for a quick 
>>> POC.  It can to more, but I think RDO recommends “Triple-O” (which stands 
>>> for Openstack-on-Openstack) for production RDO deploys.  Since they are 
>>> affiliated with RedHat, they would also lean heavily towards the Ansible 
>>> option as well.
>>> 
>>> Good luck!
>>> 
>>> 
>>> Mike Smith
>>> Overstock Cloud Team
>>> 
>>> 
>>> 
 On Jun 20, 2017, at 5:49 PM, Erik McCormick wrote:
 
 This is a religious discussion for most, but I would suggest Kolla. It 
 takes out a lot of the guess work, has a good upgrade mechanism, and is 
 well supported by the community via mailing list and IRC. Take a look.
 
 -Erik
 
 
> On Jun 20, 2017 7:15 PM, "Satish Patel"  wrote:
> We are deploying a 5-node OpenStack cloud for internal use and wondering
> what method we should use. Our initial test was on RDO Packstack, but I
> heard Packstack isn't good for production, and some people on Google
> suggest using TripleO. I found it a little complicated because you
> need one extra server running an undercloud OpenStack to deploy the
> overcloud OpenStack. Should I really use TripleO, or is there another,
> easier method that will also allow us to upgrade in the future?
> 
> ___
> Mailing list: 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> 
>>> ___
>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to : openstack@lists.openstack.org
>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> 
>> 
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> 
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Production openstack on 5 node

2017-06-20 Thread John van Ommen
At HPE we originally used TripleO but switched to a 'flat' model.

I personally didn't see any advantage to TripleO. In theory, it should be
easier to manage and upgrade. In the real world, Helion 3.0 and 4.0 are
superior in every respect.

John

On Jun 20, 2017 9:02 PM, "Remo Mattei"  wrote:

> I worked for Red Hat, and they really want to get OOO going because the
> installation tools never worked as everyone was hoping. Before Red Hat I
> was at Mirantis, and the Fuel installer was nice but is now dead. I know OOO
> will go into containers in the next couple of releases, but kolla-ansible is
> one of the emerging solutions now to get it out fast.
>
> I am doing a project now where I am working on deploying OOO; I just
> finished the doc for the Ocata undercloud.
>
> Just my two cents to concur with Mike's statement.
>
> Remo
>
> Sent from my iPhone
>
> On 20 Jun 2017, at 20:51, Mike Smith wrote:
>
> There are definitely 1,001 opinions on what is “best”.  We use RDO at
> Overstock and we use home-grown puppet modules because we do our own puppet
> modules for everything else we do here.  We based everything around the
> official Openstack install documents and we do it because we want to
> *fully* understand everything we can instead of treating it like a black
> box that knows how to do the magic.
>
> However, there are lots of options out there - ansible, kolla, puppet plus
> vendor-specific options too like those provided by Mirantis.  If there are
> config management tools (ansible, puppet, etc) that you already use, you
> may want to check out the Openstack options for those.  You are correct
> that packstack is more of an 'all-in-one-server' installer for a quick
> POC.  It can do more, but I think RDO recommends “Triple-O” (which stands
> for Openstack-on-Openstack) for production RDO deploys.  Since they are
> affiliated with RedHat, they would also lean heavily towards the Ansible
> option as well.
>
> Good luck!
>
>
> Mike Smith
> Overstock Cloud Team
>
>
>
> On Jun 20, 2017, at 5:49 PM, Erik McCormick wrote:
>
> This is a religious discussion for most, but I would suggest Kolla. It
> takes out a lot of the guess work, has a good upgrade mechanism, and is
> well supported by the community via mailing list and IRC. Take a look.
>
> -Erik
>
>
> On Jun 20, 2017 7:15 PM, "Satish Patel"  wrote:
>
>> We are deploying a 5-node OpenStack cloud for internal use and wondering
>> what method we should use. Our initial test was on RDO Packstack, but I
>> heard Packstack isn't good for production, and some people on Google
>> suggest using TripleO. I found it a little complicated because you
>> need one extra server running an undercloud OpenStack to deploy the
>> overcloud OpenStack. Should I really use TripleO, or is there another,
>> easier method that will also allow us to upgrade in the future?
>>
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Production openstack on 5 node

2017-06-20 Thread Remo Mattei
I worked for Red Hat, and they really want to get OOO going because the 
installation tools never worked as everyone was hoping. Before Red Hat I was 
at Mirantis, and the Fuel installer was nice but is now dead. I know OOO will go 
into containers in the next couple of releases, but kolla-ansible is one of the 
emerging solutions now to get it out fast. 

I am doing a project now where I am working on deploying OOO; I just finished 
the doc for the Ocata undercloud. 

Just my two cents to concur with Mike's statement. 

Remo 

Sent from my iPhone

> On 20 Jun 2017, at 20:51, Mike Smith wrote:
> 
> There are definitely 1,001 opinions on what is “best”.  We use RDO at 
> Overstock and we use home-grown puppet modules because we do our own puppet 
> modules for everything else we do here.  We based everything around the 
> official Openstack install documents and we do it because we want to *fully* 
> understand everything we can instead of treating it like a black box that 
> knows how to do the magic.
> 
> However, there are lots of options out there - ansible, kolla, puppet plus 
> vendor-specific options too like those provided by Mirantis.  If there are 
> config management tools (ansible, puppet, etc) that you already use, you may 
> want to check out the Openstack options for those.  You are correct that that 
> packstack is more of a ‘all-in-one-server’ installer for a quick POC.  It can 
> to more, but I think RDO recommends “Triple-O” (which stands for 
> Openstack-on-Openstack) for production RDO deploys.  Since they are 
> affiliated with RedHat, they would also lean heavily towards the Ansible 
> option as well.
> 
> Good luck!
> 
> 
> Mike Smith
> Overstock Cloud Team
> 
> 
> 
>> On Jun 20, 2017, at 5:49 PM, Erik McCormick wrote:
>> 
>> This is a religious discussion for most, but I would suggest Kolla. It takes 
>> out a lot of the guess work, has a good upgrade mechanism, and is well 
>> supported by the community via mailing list and IRC. Take a look.
>> 
>> -Erik
>> 
>> 
>>> On Jun 20, 2017 7:15 PM, "Satish Patel"  wrote:
>>> We are deploying a 5-node OpenStack cloud for internal use and wondering
>>> what method we should use. Our initial test was on RDO Packstack, but I
>>> heard Packstack isn't good for production, and some people on Google
>>> suggest using TripleO. I found it a little complicated because you
>>> need one extra server running an undercloud OpenStack to deploy the
>>> overcloud OpenStack. Should I really use TripleO, or is there another,
>>> easier method that will also allow us to upgrade in the future?
>>> 
>>> ___
>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to : openstack@lists.openstack.org
>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Production openstack on 5 node

2017-06-20 Thread Mike Smith
There are definitely 1,001 opinions on what is “best”.  We use RDO at Overstock 
and we use home-grown puppet modules because we do our own puppet modules for 
everything else we do here.  We based everything around the official Openstack 
install documents and we do it because we want to *fully* understand everything 
we can instead of treating it like a black box that knows how to do the magic.

However, there are lots of options out there - ansible, kolla, puppet plus 
vendor-specific options too like those provided by Mirantis.  If there are 
config management tools (ansible, puppet, etc) that you already use, you may 
want to check out the Openstack options for those.  You are correct that that 
packstack is more of a ‘all-in-one-server’ installer for a quick POC.  It can 
to more, but I think RDO recommends “Triple-O” (which stands for 
Openstack-on-Openstack) for production RDO deploys.  Since they are affiliated 
with RedHat, they would also lean heavily towards the Ansible option as well.

Good luck!


Mike Smith
Overstock Cloud Team



On Jun 20, 2017, at 5:49 PM, Erik McCormick <emccorm...@cirrusseven.com> wrote:

This is a religious discussion for most, but I would suggest Kolla. It takes 
out a lot of the guess work, has a good upgrade mechanism, and is well 
supported by the community via mailing list and IRC. Take a look.

-Erik
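
To illustrate the Kolla route, an all-in-one kolla-ansible run looks roughly like this (a sketch based on the project's quickstart; the share-directory paths are assumptions and vary by install method):

```shell
pip install kolla-ansible

# Copy the sample configuration and generate service passwords
cp -r /usr/local/share/kolla-ansible/etc_examples/kolla /etc/kolla
cp /usr/local/share/kolla-ansible/ansible/inventory/all-in-one .
kolla-genpwd

# Bootstrap the host, sanity-check it, then deploy the containers
kolla-ansible -i all-in-one bootstrap-servers
kolla-ansible -i all-in-one prechecks
kolla-ansible -i all-in-one deploy
```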


On Jun 20, 2017 7:15 PM, "Satish Patel" <satish@gmail.com> wrote:
We are deploying a 5-node OpenStack cloud for internal use and wondering
what method we should use. Our initial test was on RDO Packstack, but I
heard Packstack isn't good for production, and some people on Google
suggest using TripleO. I found it a little complicated because you
need one extra server running an undercloud OpenStack to deploy the
overcloud OpenStack. Should I really use TripleO, or is there another,
easier method that will also allow us to upgrade in the future?

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Cinder -Ocata - Cinder volume could not start

2017-06-20 Thread SGopinath s.gopinath

Hi, 

I could not start the cinder-volume service and I'm getting the following error.

I pasted here my /etc/cinder/cinder.conf. 

Please help.

Thanks, 
S.Gopinath 

/etc/cinder/cinder.conf 
== 


[DEFAULT] 
rootwrap_config = /etc/cinder/rootwrap.conf 
api_paste_confg = /etc/cinder/api-paste.ini 
iscsi_helper = tgtadm 
volume_name_template = volume-%s 
#volume_group = cinder-volumes 
verbose = True 
auth_strategy = keystone 
state_path = /var/lib/cinder 
lock_path = /var/lock/cinder 
volumes_dir = /var/lib/cinder/volumes 
###SXG 
transport_url = rabbit://openstack:OpenStack123@controller 
auth_strategy = keystone 
my_ip = 10.163.61.22 
enabled_backends = lvm 
glance_api_servers = http://controller:9292 

###SXG 
[database] 
connection = mysql+pymysql://cinder:CinderOpenStack123@controller/cinder 

###SXG 
[keystone_authtoken] 
auth_uri = http://controller:5000 
auth_url = http://controller:35357 
memcached_servers = controller:11211 
auth_type = password 
project_domain_name = default 
user_domain_name = default 
project_name = service 
username = cinder 
password = CinderOpenStack123 

###SXG 
[lvm] 
volume_drivers = cinder.volume.drivers.lvm.LVMVolumeDriver 
volume_group = cinder-volumes 
iscsi_protocol = iscsi 
iscsi_helper = tgtadm 

###SXG 
[oslo_concurrency] 
lock_path = /var/lib/cinder/tmp 

###END##
 

root@cloud1:/etc/cinder# vgs 
  VG #PV #LV #SN Attr   VSize  VFree 
  cinder-volumes   1   0   0 wz--n- 37.22g 37.22g 
root@cloud1:/etc/cinder# pvs 
  PV VG Fmt  Attr PSize  PFree 
  /dev/sda4  cinder-volumes lvm2 a--  37.22g 37.22g 


###
 
2017-06-18 05:11:50.953 5230 WARNING oslo_reports.guru_meditation_report [-] 
Guru mediation now registers SIGUSR1 and SIGUSR2 by default for backward 
compatibility. SIGUSR1 will no longer be registered in a future release, so 
please use SIGUSR2 to generate reports. 
2017-06-18 05:11:51.105 5230 INFO root [-] Generating grammar tables from 
/usr/lib/python2.7/lib2to3/Grammar.txt 
2017-06-18 05:11:51.183 5230 INFO root [-] Generating grammar tables from 
/usr/lib/python2.7/lib2to3/PatternGrammar.txt 
2017-06-18 05:11:51.238 5230 WARNING py.warnings 
[req-85789cbc-b26c-47c5-be34-035fae86e504 - - - - -] 
/usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py:241: 
NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported 
exception.NotSupportedWarning 

2017-06-18 05:11:51.357 5230 INFO cinder.rpc 
[req-85789cbc-b26c-47c5-be34-035fae86e504 - - - - -] Automatically selected 
cinder-scheduler objects version 1.21 as minimum service version. 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume 
[req-85789cbc-b26c-47c5-be34-035fae86e504 - - - - -] Volume service cloud1@lvm 
failed to start. 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume Traceback (most recent 
call last): 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume File 
"/usr/lib/python2.7/dist-packages/cinder/cmd/volume.py", line 81, in main 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume binary='cinder-volume') 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume File 
"/usr/lib/python2.7/dist-packages/cinder/service.py", line 268, in create 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume service_name=service_name) 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume File 
"/usr/lib/python2.7/dist-packages/cinder/service.py", line 150, in __init__ 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume *args, **kwargs) 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume File 
"/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 235, in 
__init__ 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume *args, **kwargs) 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume File 
"/usr/lib/python2.7/dist-packages/cinder/manager.py", line 156, in __init__ 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume self.scheduler_rpcapi = 
scheduler_rpcapi.SchedulerAPI() 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume File 
"/usr/lib/python2.7/dist-packages/cinder/rpc.py", line 188, in __init__ 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume serializer = 
base.CinderObjectSerializer(obj_version_cap) 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume File 
"/usr/lib/python2.7/dist-packages/cinder/objects/base.py", line 412, in 
__init__ 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume raise 
exception.CappedVersionUnknown(version=version_cap) 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume CappedVersionUnknown: 
Unrecoverable Error: Versioned Objects in DB are capped to unknown version 
1.21. 
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume 
2017-06-18 05:11:51.360 5230 ERROR cinder.cmd.volume 
[req-85789cbc-b26c-47c5-be34-035fae

Re: [Openstack] Openstack Ocata Error Message

2017-06-20 Thread Alex Evonosky
Chris-

I enabled debugging and also brought up my compute node1 (which I had set
admin-down earlier):


2017-06-20 20:33:35.438 18169 DEBUG oslo_messaging._drivers.amqpdriver [-]
received message msg_id: dd5f438571494a8499980c12a2a90116 reply to
reply_137c1eb50cf64fceb71cecc336b4773d __call__
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:194
2017-06-20 20:33:36.986 18169 DEBUG oslo_concurrency.lockutils
[req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
664dc5e6023140eca0faeb2d0ecc31c2 - - -] Lock "(u'openstack-compute1',
u'openstack-compute1')" acquired by
"nova.scheduler.host_manager._locked_update" :: waited 0.000s inner
/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273
2017-06-20 20:33:37.004 18169 DEBUG nova.scheduler.host_manager
[req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
664dc5e6023140eca0faeb2d0ecc31c2 - - -] Update host state from compute
node: ComputeNode(cpu_allocation_ratio=16.0,cpu_info='{"vendor": "AMD",
"model": "cpu64-rhel6", "arch": "x86_64", "features": ["pge", "avx",
"clflush", "sep", "syscall", "sse4a", "msr", "xsave", "cmov", "nx", "pat",
"lm", "tsc", "3dnowprefetch", "fpu", "fxsr", "sse4.1", "pae", "sse4.2",
"pclmuldq", "cmp_legacy", "vme", "mmx", "osxsave", "cx8", "mce",
"fxsr_opt", "cr8legacy", "ht", "pse", "pni", "abm", "popcnt", "mca",
"apic", "sse", "mmxext", "lahf_lm", "rdtscp", "aes", "sse2", "hypervisor",
"misalignsse", "ssse3", "de", "cx16", "pse36", "mtrr", "x2apic"],
"topology": {"cores": 2, "cells": 1, "threads": 1, "sockets":
1}}',created_at=2017-06-21T00:18:37Z,current_workload=0,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=6,free_disk_gb=12,free_ram_mb=2495,host='openstack-compute1',host_ip=10.10.10.8,hypervisor_hostname='openstack-compute1',hypervisor_type='QEMU',hypervisor_version=2008000,id=9,local_gb=12,local_gb_used=5,memory_mb=3007,memory_mb_used=626,metrics='[]',numa_topology='{"nova_object.version":
"1.2", "nova_object.changes": ["cells"], "nova_object.name":
"NUMATopology", "nova_object.data": {"cells": [{"nova_object.version":
"1.2", "nova_object.changes": ["cpu_usage", "memory_usage", "cpuset",
"mempages", "pinned_cpus", "memory", "siblings", "id"], "nova_object.name":
"NUMACell", "nova_object.data": {"cpu_usage": 0, "memory_usage": 0,
"cpuset": [0, 1], "pinned_cpus": [], "siblings": [], "memory": 3007,
"mempages": [{"nova_object.version": "1.1", "nova_object.changes":
["total", "used", "reserved", "size_kb"], "nova_object.name":
"NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 769991,
"reserved": 0, "size_kb": 4}, "nova_object.namespace": "nova"},
{"nova_object.version": "1.1", "nova_object.changes": ["total", "used",
"reserved", "size_kb"], "nova_object.name": "NUMAPagesTopology",
"nova_object.data": {"used": 0, "total": 0, "reserved": 0, "size_kb":
2048}, "nova_object.namespace": "nova"}], "id": 0},
"nova_object.namespace": "nova"}]}, "nova_object.namespace":
"nova"}',pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.5,running_vms=0,service_id=None,stats={},supported_hv_specs=[HVSpec,HVSpec],updated_at=2017-06-21T00:32:52Z,uuid=9fd1b365-5ff9-4f75-a771-777fbe7a54ad,vcpus=2,vcpus_used=0)
_locked_update
/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:168
2017-06-20 20:33:37.209 18169 DEBUG nova.scheduler.host_manager
[req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
664dc5e6023140eca0faeb2d0ecc31c2 - - -] Update host state with aggregates:
[] _locked_update
/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:171
2017-06-20 20:33:37.217 18169 DEBUG nova.scheduler.host_manager
[req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
664dc5e6023140eca0faeb2d0ecc31c2 - - -] Update host state with service
dict: {'binary': u'nova-compute', 'deleted': False, 'created_at':
datetime.datetime(2017, 5, 17, 3, 26, 12, tzinfo=),
'updated_at': datetime.datetime(2017, 6, 21, 0, 33, 34,
tzinfo=), 'report_count': 96355, 'topic': u'compute', 'host':
u'openstack-compute1', 'version': 16, 'disabled': False, 'forced_down':
False, 'last_seen_up': datetime.datetime(2017, 6, 21, 0, 33, 34,
tzinfo=), 'deleted_at': None, 'disabled_reason': None, 'id':
7} _locked_update
/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:174
2017-06-20 20:33:37.218 18169 DEBUG nova.scheduler.host_manager
[req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
664dc5e6023140eca0faeb2d0ecc31c2 - - -] Update host state with instances:
{} _locked_update
/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:177
2017-06-20 20:33:37.219 18169 DEBUG oslo_concurrency.lockutils
[req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
664dc5e6023140eca0faeb2d0ecc31c2 - - -] Lock "(u'openstack-compute1',
u'openstack-compute1')" released by
"nova.scheduler.host_manager._locked_update" :: held 0.232s inner
/usr/lib/python2.7/dist-packages/oslo_conc

Re: [Openstack] Production openstack on 5 node

2017-06-20 Thread Erik McCormick
This is a religious discussion for most, but I would suggest Kolla. It
takes out a lot of the guesswork, has a good upgrade mechanism, and is
well supported by the community via the mailing list and IRC. Take a look.

-Erik


On Jun 20, 2017 7:15 PM, "Satish Patel"  wrote:

> We are deploying a 5-node OpenStack cloud for internal use and wondering
> what deployment method we should use. Our initial test was on RDO
> Packstack, but I heard Packstack isn't good for production; some people
> on Google suggest using TripleO. I found it a little complicated because
> you need one extra server running an undercloud OpenStack to deploy the
> overcloud. Should I really use TripleO, or is there another, easier
> method that would also allow us to upgrade in the future?
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Production openstack on 5 node

2017-06-20 Thread Satish Patel
We are deploying a 5-node OpenStack cloud for internal use and wondering
what deployment method we should use. Our initial test was on RDO
Packstack, but I heard Packstack isn't good for production; some people
on Google suggest using TripleO. I found it a little complicated because
you need one extra server running an undercloud OpenStack to deploy the
overcloud. Should I really use TripleO, or is there another, easier
method that would also allow us to upgrade in the future?



Re: [Openstack] Openstack Ocata Error Message

2017-06-20 Thread Alex Evonosky
Chris-

I have not enabled debug on the scheduler yet, but I will tonight. Thank
you for the feedback.





Sent from my Samsung S7 Edge


On Jun 20, 2017 4:22 PM, "Chris Friesen" 
wrote:

> On 06/20/2017 01:45 PM, Alex Evonosky wrote:
>
>> Openstackers-
>>
>> I am getting the familiar *No hosts found* error when launching an
>> instance. After some research I found many reports of issues like this
>> going back at least to 2015. However, the solutions that were presented
>> did not really seem to help in my case, so I am checking whether my
>> error may be a more common one that can be fixed.
>>
>
> 
>
> some from nova-scheduler:
>>
>> 2017-06-20 15:18:14.879 11720 INFO nova.filters
>> [req-128bca26-06da-49de-9d14-ad1ae967d084 7e7176b79f94483c8b802a7004466e66
>> 5f8b2c83921b4b3eb74e448667b267b1 - - -] Filter RetryFilter returned 0 hosts
>> 2017-06-20 15:18:14.888 11720 INFO nova.filters
>> [req-128bca26-06da-49de-9d14-ad1ae967d084 7e7176b79f94483c8b802a7004466e66
>> 5f8b2c83921b4b3eb74e448667b267b1 - - -] Filtering removed all hosts for the
>> request with instance ID '1a461902-4b93-40e5-9a95-76bb9ccbae63'. Filter
>> results: ['RetryFilter: (start: 0, end: 0)']
>> 2017-06-20 15:19:10.930 11720 INFO nova.scheduler.host_manager
>> [req-003eec5d-441a-45af-9784-0d857a9d111a - - - - -] Successfully synced
>> instances from host 'openstack-compute2'.
>> 2017-06-20 15:20:15.113 11720 INFO nova.scheduler.host_manager
>> [req-b1c4044c-6973-4f28-94bc-5b40c957ff48 - - - - -] Successfully synced
>> instances from host 'openstack-compute3'.
>>
>
> If you haven't already, enable debug logs on nova-scheduler.  You should
> be able to see which filter is failing and hopefully why.
>
> In your example, look for nova-scheduler logs with
> "req-c4d5e734-ba41-4fe8-9397-4b4165f4a133" in them since that is the
> failed request.  The timestamp should be around 15:30:56.
>
> Chris
>
>


Re: [Openstack] Openstack Ocata Error Message

2017-06-20 Thread Chris Friesen

On 06/20/2017 01:45 PM, Alex Evonosky wrote:

Openstackers-

I am getting the familiar *No hosts found* error when launching an instance.
After some research I found many reports of issues like this going back at
least to 2015. However, the solutions that were presented did not really seem
to help in my case, so I am checking whether my error may be a more common one
that can be fixed.





some from nova-scheduler:

2017-06-20 15:18:14.879 11720 INFO nova.filters
[req-128bca26-06da-49de-9d14-ad1ae967d084 7e7176b79f94483c8b802a7004466e66
5f8b2c83921b4b3eb74e448667b267b1 - - -] Filter RetryFilter returned 0 hosts
2017-06-20 15:18:14.888 11720 INFO nova.filters
[req-128bca26-06da-49de-9d14-ad1ae967d084 7e7176b79f94483c8b802a7004466e66
5f8b2c83921b4b3eb74e448667b267b1 - - -] Filtering removed all hosts for the
request with instance ID '1a461902-4b93-40e5-9a95-76bb9ccbae63'. Filter
results: ['RetryFilter: (start: 0, end: 0)']
2017-06-20 15:19:10.930 11720 INFO nova.scheduler.host_manager
[req-003eec5d-441a-45af-9784-0d857a9d111a - - - - -] Successfully synced
instances from host 'openstack-compute2'.
2017-06-20 15:20:15.113 11720 INFO nova.scheduler.host_manager
[req-b1c4044c-6973-4f28-94bc-5b40c957ff48 - - - - -] Successfully synced
instances from host 'openstack-compute3'.


If you haven't already, enable debug logs on nova-scheduler.  You should be able 
to see which filter is failing and hopefully why.


In your example, look for nova-scheduler logs with 
"req-c4d5e734-ba41-4fe8-9397-4b4165f4a133" in them since that is the failed 
request.  The timestamp should be around 15:30:56.
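Chris's suggestion of isolating one request's log lines can also be scripted; here is a minimal sketch (the helper name and sample text are illustrative, not part of nova - in a real deployment you would read something like /var/log/nova/nova-scheduler.log, whose path varies by distro):

```python
# Collect every scheduler log line belonging to one request ID.
def lines_for_request(log_text, req_id):
    return [line for line in log_text.splitlines() if req_id in line]

# Sample reuses log lines quoted earlier in this thread.
sample = """\
2017-06-20 15:18:14.879 11720 INFO nova.filters [req-128bca26-06da-49de-9d14-ad1ae967d084 7e7176b79f94483c8b802a7004466e66 5f8b2c83921b4b3eb74e448667b267b1 - - -] Filter RetryFilter returned 0 hosts
2017-06-20 15:19:10.930 11720 INFO nova.scheduler.host_manager [req-003eec5d-441a-45af-9784-0d857a9d111a - - - - -] Successfully synced instances from host 'openstack-compute2'.
"""

matches = lines_for_request(sample, "req-128bca26-06da-49de-9d14-ad1ae967d084")
print(len(matches))  # 1
```

On the controller, a plain grep for the request ID in the scheduler log does the same job.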


Chris




[Openstack] Openstack Ocata Error Message

2017-06-20 Thread Alex Evonosky
Openstackers-

I am getting the familiar *No hosts found* error when launching an instance.
After some research I found many reports of issues like this going back at
least to 2015. However, the solutions that were presented did not really
seem to help in my case, so I am checking whether my error may be a more
common one that can be fixed.

+----+---------------------+-----------------+--------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP      | State |
+----+---------------------+-----------------+--------------+-------+
|  4 | compute3            | QEMU            | 10.10.10.123 | up    |
|  5 | compute2            | QEMU            | 10.10.10.122 | up    |
|  6 | compute1            | QEMU            | 10.10.10.8   | down  |
+----+---------------------+-----------------+--------------+-------+

+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host                  | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+
| 11f6c380-d5ce-4705-a10c-8d9b8e918a7a | Metadata agent     | openstack-controller1 | None              | True  | UP    | neutron-metadata-agent    |
| 1ad11613-bf10-4bf9-b7b2-8e7f9d020cb1 | DHCP agent         | openstack-controller1 | nova              | True  | UP    | neutron-dhcp-agent        |
| 8860a2ae-e19b-4305-8db4-291ddbbdbda8 | Linux bridge agent | openstack-compute2    | None              | True  | UP    | neutron-linuxbridge-agent |
| 8a03860c-fca1-4fb7-8a2b-f8476c281896 | Linux bridge agent | openstack-compute1    | None              | False | UP    | neutron-linuxbridge-agent |
| 9e9b2730-3b95-45ed-a593-02412c14bb4b | Linux bridge agent | openstack-compute3    | None              | True  | UP    | neutron-linuxbridge-agent |
| dcf331b5-fdf2-443c-a05b-dde8b54d2ca4 | Linux bridge agent | openstack-controller1 | None              | True  | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+


Now, *compute1* I have shut down myself, so that part is expected.


Some portions of the nova-conductor DEBUG log:

2017-06-20 15:30:54.904 13306 DEBUG oslo_messaging._drivers.amqpdriver
[req-8ab0ebdd-0ef2-4812-be84-c1733ffafd71 - - - - -] sending reply msg_id:
c3f21dae7c764e338b2263d444b4edba reply queue:
reply_640e0d85401e4c1eb76a67af5ce2d42d time elapsed: 0.0311933740013s
_send_reply
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:73
2017-06-20 15:30:56.714 13307 DEBUG oslo_messaging._drivers.amqpdriver [-]
received reply msg_id: 77c4bce37866466d92e4bf6b6aaf492a __call__
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:299
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manager
[req-c4d5e734-ba41-4fe8-9397-4b4165f4a133 7e7176b79f94483c8b802a7004466e66
664dc5e6023140eca0faeb2d0ecc31c2 - - -] Failed to schedule instances
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manager Traceback (most
recent call last):
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manager   File
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 866, in
schedule_and_build_instances
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manager
request_specs[0].to_legacy_filter_properties_dict())
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manager   File
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 597, in
_schedule_instances
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manager hosts =
self.scheduler_client.select_destinations(context, spec_obj)
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manager   File
"/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py", line 371, in
wrapped
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manager return
func(*args, **kwargs)
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manager   File
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line
51, in select_destinations
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manager return
self.queryclient.select_destinations(context, spec_obj)
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manager   File
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line
37, in __run_method
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manager return
getattr(self.instance, __name)(*args, **kwargs)
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manager   File
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/query.py", line 32,
in select_destinations
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manager return
self.scheduler_rpcapi.select_destinations(context, spec_obj)
2017-06-20 15:30:56.801 13307 ERROR nova.conductor.manag

[Openstack] Horizon, Barbican and LBAASv2

2017-06-20 Thread Blane Bramble
Hi, I'm having a problem getting the lbaasv2 dashboard functioning
correctly with Barbican. I've followed all the guides I can find, and can
get as far as this:

Barbican installed and working - I can create secrets and containers from
the CLI fine.
Horizon installed and working.
lbaasv2 ui installed (ngloadbalancersv2) and mostly working.

My issue is that TERMINATED_HTTPS is still greyed out. Checking the
network requests being made, I can see AJAX requests to
/dashboard/api/barbican/certificates/ and then to
/dashboard/api/barbican/secrets/ - both fail with a 404 Not Found
response.

Have I got something missing or misconfigured, or is this functionality
just not ready yet?

Thanks,

Blane.


Re: [Openstack] Policy enforcement in Glance (Ocata Release)

2017-06-20 Thread Markus Hentsch
After some more thorough analysis of Glance, I think we finally got this
figured out.

The access control to Glance seems to be a two-staged process.

The first stage is the policy check. This stage determines which API
requests are accepted. Nothing special here: the rules defined in
"policy.json" are applied.

The second stage is a hardcoded process that can get confusing. At this
stage, the requesting user is checked to determine whether they are the
"glance admin" or not. This is true if either (or both) of the following
conditions are met:

  * the user matches the rule defined as "context_is_admin" in the
"policy.json"
  * the user has a role that matches the "admin_role" entry in the
"glance-api.conf"

As soon as the user is identified as a "glance admin", they are allowed
to access and manipulate images at will, not being bound to the project
context at all. In any other case the project context seems to be
enforced automatically in the second stage, so a check for "project_id"
or "owner" in the policies is unnecessary.
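The two-stage check described above can be sketched in plain Python. This is NOT Glance's actual implementation - the rule language is reduced to bare sets of role names purely for illustration:

```python
# Simplified sketch of Glance's two-stage access control, as described above.
# Rules are modeled as {action: set-of-roles}; real policy.json rules are richer.

def passes_policy(rules, action, user_roles):
    # Stage 1: the policy check -- does any of the user's roles
    # satisfy the rule registered for this API action?
    return bool(rules.get(action, set()) & set(user_roles))

def is_glance_admin(rules, admin_role, user_roles):
    # Stage 2: the user is the "glance admin" if they match the
    # context_is_admin rule OR carry the admin_role from glance-api.conf.
    return (passes_policy(rules, "context_is_admin", user_roles)
            or admin_role in user_roles)

rules = {
    "context_is_admin": {"admin"},
    "delete_image": {"admin", "project-admin"},
}

# A project-admin may call delete_image (stage 1) but is not the
# glance admin (stage 2), so the project context still applies to them.
print(passes_policy(rules, "delete_image", ["project-admin"]))  # True
print(is_glance_admin(rules, "admin", ["project-admin"]))       # False
```

This mirrors why the "project-admin" role below can delete images yet stays scoped to its own project.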


This leads to the solution to our problem:

We simply created a new role called "project-admin" and defined
corresponding rules and restrictions within the "policy.json":

...
"context_is_admin":  "role:admin",
"admin_restriction": "role:admin or role:project-admin",
...
"delete_image": "rule:admin_restriction",

It is important to note that the "role:admin" within
"admin_restriction" is still absolutely necessary for the global admin
to pass the first stage, even if "context_is_admin" is appropriately
defined. "context_is_admin", on the other hand, allows the global
admin to pass the second stage as the "glance admin" but is out of scope
for the first stage.

The "project-admin" role now allows project-scoped admins to delete
images while normal project members cannot, still retaining the global
(and hence Glance) admin's omnipotence.


The thing to be learned from this is that the statement "You can use the
policies to control what users can make the call, but you can't use the
policies to determine what will be included in the response" has one
exception: "context_is_admin", since it (together with the "admin_role"
config entry) influences the second stage that follows the actual policy
check.

Mind you, none of this may apply to other OpenStack components at all,
since this analysis was Glance-specific. There is currently no
consistent way of identifying the "global admin" across components,
so each of them might implement its own way of handling this.
However, it seems there is some movement going on - see the following
(quite recent) blog post for more details:
http://adam.younglogic.com/2017/05/fixing-bug-96869/#bug-968696


Kind regards,

Markus Hentsch
Cloud&Heat Technologies

> Dear Brian,
>
> thanks for your answer to my previous question!
>
> I've tried changing the definition of "admin_role" in the
> configuration of the Glance API component but it did not change the
> behavior in the way I was expecting it.
>
> This role is evaluated and sets the "self.is_admin" property, see:
>
> 
> https://github.com/openstack/glance/blob/0a2074ecef67beb7a3b3531534c9d97b1c3ea828/glance/api/middleware/context.py#L195
>
> However before that check, the policy check possibly alters the same
> "self.is_admin" property in the RequestContext constructor, see:
>
> 
> https://github.com/openstack/glance/blob/0a2074ecef67beb7a3b3531534c9d97b1c3ea828/glance/context.py#L36
>
> The "check_is_admin()" in turn evaluates the policy rule for
> "context_is_admin".
>
> If I interpret this correctly, the value of the "self.is_admin"
> property determines who is what you describe as the "glance admin".
> However, due to the code cited above, the assignment of who is the
> "glance admin" seems to be influenced by /both/ the "admin_role"
> config entry /as well as/ the "context_is_admin" policy rule.
>
> This leads to the problem that is affecting my setup. Every project
> member that has the "admin" role within their project, seems to be
> automatically treated as a glance admin (self.is_admin). This explains
> why admins from specific projects can still see _private_ images of
> other projects they are not even a member of, just because their role
> happens to be called "admin". In my case this is a serious security
> issue and I'd like to prevent this.
> I want to prevent users from seeing or accessing private images of
> projects they are not assigned to!
>
> Normally, I would solve this by changing
>
> "context_is_admin":  "role:admin"
>
> to
>
> "context_is_admin":  "is_admin:True"
>
>
> which actually does restrict the set of images that is returned from
> the "image list" call for every user to the desired project context.
> However the "is_admin:True" bit which is intended to identify the
> global admin only, does not work. Even the global
> (project-independent) admin is not able to see all images an

Re: [Openstack] Floating IP issues in multiple cloud installs.

2017-06-20 Thread Tomáš Vondra
Hi!
Do you have Neutron DVR enabled? Multiple Network nodes?
Tomas

-Original Message-
From: Brian Haley [mailto:haleyb@gmail.com] 
Sent: Tuesday, June 20, 2017 3:30 AM
To: Ken D'Ambrosio
Cc: Openstack
Subject: Re: [Openstack] Floating IP issues in multiple cloud installs.

On 06/19/2017 08:51 AM, Ken D'Ambrosio wrote:
> Hi, all.  We've got two Canonical Newton installs using VLANs and 
> we're having intermittent issues we simply can't figure out.  (Note 
> that a third installation using flat networks is not having this 
> issue.) Floating IPs set up and work... sporadically.
> 
> * Stateful connections (e.g., SSH) often drop after seconds of use to 
> both the FIP and when SSH'd in from the
> * We see RSTs in our TCP dumps
> * Pings work for a while, then don't.
> * We see lots of ARP requests -- even one right after another -- to 
> resolve hosts on the internal subnets:
> 05:43:25.859448 ARP, Request who-has 80.0.0.3 tell 80.0.0.1, length 28
> 05:43:25.859563 ARP, Reply 80.0.0.3 is-at fa:16:3e:28:af:77, length 28
> 05:43:25.964417 ARP, Request who-has 80.0.0.3 tell 80.0.0.1, length 46
> 05:43:25.964572 ARP, Reply 80.0.0.3 is-at fa:16:3e:28:af:77, length 28
> 05:43:26.963989 ARP, Request who-has 80.0.0.3 tell 80.0.0.1, length 46
> 05:43:26.964156 ARP, Reply 80.0.0.3 is-at fa:16:3e:28:af:77, length 28

Was that run with '-i any' or on a single interface?  I would check the ARP
cache to make sure the entries are in a complete/reachable state, or even
check syslog for any other errors.

> 80.0.0.1 is the qrouter.  I can't imagine why it asked -- and was 
> ACK'd in each case -- three times in just over a second.  In 
> hindsight, I should have checked to have seen if the ACK showed up in 
> the qrouter's ARP table.  Next time...

I'd also specify -e to tcpdump to see the MACs involved.  Possibly there is 
something else configured with the same IP on the VLAN (shouldn't happen, but 
worth checking).
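The rapid-fire requests in the dump above can also be spotted mechanically. A small sketch (the line format is assumed to match the tcpdump output quoted earlier; the helper name is illustrative):

```python
# Count ARP "who-has" requests per target IP in tcpdump-style output.
# A target queried several times within about a second -- as in the dump
# above -- suggests replies are not sticking in the requester's ARP cache.
import re
from collections import Counter

def arp_request_counts(dump_text):
    counts = Counter()
    for line in dump_text.splitlines():
        m = re.search(r"ARP, Request who-has (\S+) tell", line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Sample reuses lines from the dump quoted above.
dump = """\
05:43:25.859448 ARP, Request who-has 80.0.0.3 tell 80.0.0.1, length 28
05:43:25.859563 ARP, Reply 80.0.0.3 is-at fa:16:3e:28:af:77, length 28
05:43:25.964417 ARP, Request who-has 80.0.0.3 tell 80.0.0.1, length 46
05:43:26.963989 ARP, Request who-has 80.0.0.3 tell 80.0.0.1, length 46
"""

print(arp_request_counts(dump))  # Counter({'80.0.0.3': 3})
```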

-Brian


