[Openstack] [neutron] ICMP host unreachable - admin prohibited

2016-06-27 Thread Adhi Priharmanto
Hi all, I've set up the Liberty release with neutron-openvswitch using GRE
tunnels on CentOS. I have a problem when the iptables service is started on
the network and compute nodes.
Instances can't get their internal (DHCP) IP address at boot. If I dump
packets with tcpdump on the tunnel interface of either node, I see this:

13:03:08.164944 IP 10.24.0.23 > opstcomp1-srg.dev.jcamp.net: ICMP host
10.24.0.23 unreachable - admin prohibited, length 106

10.24.0.0/24 is my tunnel IP network. I've already added these rules on
both nodes, but no luck:

iptables -A INPUT -p gre -j ACCEPT
iptables -A FORWARD -p gre -j ACCEPT
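
For what it's worth, a likely explanation, assuming the stock CentOS
firewall rules: the default INPUT chain ends with a catch-all
"-j REJECT --reject-with icmp-host-prohibited", which produces exactly the
"admin prohibited" message above, and rules appended with -A land *after*
that REJECT, so they never match. A minimal sketch of the fix is to insert
the rules at the top of the chains instead:

iptables -I INPUT 1 -p gre -j ACCEPT
iptables -I FORWARD 1 -p gre -j ACCEPT
service iptables save    # persist; assumes the iptables-services init scripts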

Can someone help me solve this problem?

-- 
Cheers,



Adhi Priharmanto
about.me/a_dhi

+62-812-82121584


Re: [Openstack] [Keystone] Why not OAuth 2.0 provider?

2016-06-27 Thread Steve Martinelli
So, the os-oauth routes you mention in the documentation do not make
keystone a proper oauth provider. We simply perform delegation (one user
handing some level of permission on a project to another entity) with the
standard flow established in the oauth1.0a specification.

Historically we chose oauth1.0 because one of the implementers was very
much against a flow based on oauth2.0 (though the names are similar, these
can be treated as two very different beasts, you can read about it here
[1]). Even amongst popular service providers the choice is split down the
middle, with some providing support for both [2].

We haven't bothered to implement support for oauth2.0 since there has been
no feedback or desire from operators to do so. Mostly, we don't want
yet-another-delegation mechanism in keystone, we have trusts and oauth1.0;
should an enticing use case arise to include another, then we can revisit
the discussion.

[1] https://hueniverse.com/2012/07/26/oauth-2-0-and-the-road-to-hell/
[2] https://en.wikipedia.org/wiki/List_of_OAuth_providers
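
For reference, a sketch of the delegation flow those os-oauth routes
implement, per the v3 OS-OAUTH1 extension (the host and payload below are
illustrative assumptions, and steps 2-4 additionally require OAuth1 request
signing):

KEYSTONE=http://controller:5000/v3

# 1. an admin registers the consumer that will receive the delegation
curl -s -H "X-Auth-Token: $ADMIN_TOKEN" -H "Content-Type: application/json" \
  -d '{"consumer": {"description": "automation agent"}}' \
  $KEYSTONE/OS-OAUTH1/consumers

# 2. the consumer obtains a request token:  POST $KEYSTONE/OS-OAUTH1/request_token
# 3. the delegating user authorizes it:     PUT  $KEYSTONE/OS-OAUTH1/authorize/{request_token_id}
# 4. the consumer trades it for an access token and then a regular,
#    role-limited keystone token:
#      POST $KEYSTONE/OS-OAUTH1/access_token
#      POST $KEYSTONE/auth/tokens           (auth method "oauth1")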


On Mon, Jun 27, 2016 at 11:15 PM, 林自均  wrote:

> Hi all,
>
> When I searched for an OAuth provider in Keystone, I found only OAuth
> 1.0. I am a little bit curious about the decision of 1.0 over 2.0. I failed
> to see the reason in the documentation and this blueprint.
> Is OAuth 2.0 not compatible with the design of Keystone?
>
> John
>


Re: [Openstack] [openstack] [tools] OpenStack client in a Docker container

2016-06-27 Thread Gerard Braad
> Usage is as easy as:
> $ docker pull gbraad/openstack-client:centos

Just now I added an Alpine-based image

$ docker pull gbraad/openstack-client:alpine

Hope this is also useful to you.


-- 

   Gerard Braad | http://gbraad.nl
   [ Doing Open Source Matters ]



Re: [Openstack] [swift] Storage node failure modes

2016-06-27 Thread John Dickinson
A few years ago, I gave this talk at LCA which covers a lot of these details.

https://www.youtube.com/watch?v=_sUvfGKhaMo&list=PLIr7I80Leee5NpoYTd9ffNvWq0pG18CN3&index=9

--John




On 27 Jun 2016, at 17:36, Mark Kirkwood wrote:

> Hi,
>
> I'm in the process of documenting failure modes (for ops documentation etc). 
> Now I think I understand the intent:
>
> - swift tries to ensure you always have the number of configured replicas
>
> In the case of missing or unmounted devices I'm seeing the expected behaviour 
> i.e:
>
> - new object creation results in the configured number of replicas (some 
> stored on handoff nodes)
> - existing objects replicated on handoff to produce the correct replica number
>
> In the case of a node (or a region) I'm *not* seeing analogous behaviour for 
> *existing* objects, i.e. I am a replica down after shutting down one of my 
> nodes and waiting a while.
>
> I am testing using swift 2.7 on a small cluster of VMs (4 nodes, 4 devices, 2 
> regions) - now it may be that my setup is just too trivial (or maybe I 
> haven't waited long enough for swift to decide my storage node is really down). 
> Any thoughts? I'd like to understand precisely what is supposed to happen 
> when a node (and also an entire region) is unavailable.
>
> Cheers
>
> Mark
>




[Openstack] [openstack] [tools] OpenStack client in a Docker container

2016-06-27 Thread Gerard Braad
Hi all,


When you reinstall workstations or test environments as often as I do,
you would like to automate everything... or containerize it. So, I
packaged the OpenStack client in a Docker container on Ubuntu and
CentOS. And to make it more convenient, I added Lars's 'stack' helper
tool. Just have a look at the registry [1] or the source [2].

Usage is as easy as:

Store your stackrc in ~/.stack, named after an endpoint; e.g. ~/.stack/trystack

$ docker pull gbraad/openstack-client:centos
$ alias stack='docker run -it --rm -v ~/.stack:/root/.stack gbraad/openstack-client:centos stack'
$ stack trystack openstack server list

Comments welcomed...

regards,


Gerard

[1] https://hub.docker.com/r/gbraad/openstack-client/
[2] https://github.com/gbraad/docker-openstack-client/

-- 

   Gerard Braad | http://gbraad.nl
   [ Doing Open Source Matters ]



[Openstack] [Keystone] Why not OAuth 2.0 provider?

2016-06-27 Thread 林自均
Hi all,

When I searched for an OAuth provider in Keystone, I found only OAuth 1.0.
I am a little bit curious about the decision of 1.0 over 2.0. I failed to
see the reason in the documentation and this blueprint.
Is OAuth 2.0 not compatible with the design of Keystone?

John


[Openstack] Issue with IPsec ESP packets dropped even if the security-groups and port security are disabled (using openstack-mitaka release on CentOS 7.2 system)

2016-06-27 Thread Chinmaya Dwibedy
Hi All,


I have installed the openstack-mitaka release on a CentOS 7.2 system. I have
disabled the security groups and port security for all the Neutron
ports/all VMs using the script below.

ML2 port security is enabled in /etc/neutron/plugins/ml2/ml2_conf.ini:

extension_drivers = port_security

#!/bin/bash
for port in $(neutron port-list -c id -c port_security_enabled -c fixed_ips | grep True | cut -d '|' -f2); do
    echo "Removing security-groups and port_security for port: $port"
    neutron port-update --no-security-groups --port_security_enabled=False $port
done
echo "Completed"


Thereafter, when I send IPsec ESP traffic from one VM (VM1) to another (VM2),
it is received and captured (by tcpdump) on the corresponding tap device,
but it is not received on the Linux bridge (qbrxxx) or qvbxxx (of VM1). Note
that if I send UDP traffic I do not see any issue; it is forwarded to VM2.


VM1's eth0 interface is connected to a Linux tap device, tap2caa3b0e-e3,
which is plugged into a Linux bridge, qbr2caa3b0e-e3. No iptables filtering
is applied to packets passing into or out of the Linux bridge. Can anyone
please suggest what the issue might be and how to solve it? Thank you in
advance for your time and support. The configuration follows; please feel
free to let me know if you need any additional information.
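
One thing worth ruling out first, as a minimal diagnostic sketch (it
assumes the kernel's bridge-netfilter hooks are loaded, which is common on
compute nodes): even with port security off, bridged frames may still be
handed to iptables, where ESP (IP protocol 50) can be dropped while UDP
passes.

# if this prints 1, frames crossing qbrXXX still traverse iptables
sysctl net.bridge.bridge-nf-call-iptables

# then look for anything matching or rejecting ESP in the FORWARD chain
iptables -S FORWARD | grep -i esp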




[root@stag48 ~(keystone_admin)]# brctl show
bridge name     bridge id           STP enabled  interfaces
qbr2caa3b0e-e3  8000.1ec72d90a310   no           qvb2caa3b0e-e3
                                                 tap2caa3b0e-e3
qbr408fa3a3-b4  8000.e6f0e680f28f   no           qvb408fa3a3-b4
                                                 tap408fa3a3-b4
qbr5fa991b5-de  8000.02c32f416df0   no           qvb5fa991b5-de
                                                 tap5fa991b5-de
qbraf134785-23  8000.46e43737b69f   no           qvbaf134785-23
                                                 tapaf134785-23
qbre698fa07-9c  8000.5ea17f458f55   no           qvbe698fa07-9c
                                                 tape698fa07-9c
qbrf6756f4d-08  8000.b2f79fe90f20   no           qvbf6756f4d-08
                                                 tapf6756f4d-08

[root@stag48 ~(keystone_admin)]# iptables -S | grep tap2caa3b0e-e3

[root@stag48 ~(keystone_admin)]#

[root@stag48 ~(keystone_admin)]# neutron security-group-rule-list
+--------------------------------------+----------------+-----------+-----------+---------------+------------------+
| id                                   | security_group | direction | ethertype | port/protocol | remote           |
+--------------------------------------+----------------+-----------+-----------+---------------+------------------+
| 16c2d8c8-a286-4b71-8045-94cd303b5c02 | default        | ingress   | IPv4      | 22/tcp        | 0.0.0.0/0 (CIDR) |
| 2332057f-8c66-4aa6-8700-561b26a5b906 | default        | ingress   | IPv4      | any           | default (group)  |
| 4798772b-561f-4960-85b2-2453613d527e | default        | ingress   | IPv6      | any           | default (group)  |
| 5142e3b2-d2ff-40c5-87eb-5d646852f2d4 | default        | ingress   | IPv4      | icmp          | 0.0.0.0/0 (CIDR) |
| 7179fc0a-5533-433a-8cc9-3099eeff5a4b | default        | egress    | IPv4      | any           | any              |
| 7cb2f140-6c97-499a-b5f7-6bcc16f6c9a3 | default        | ingress   | IPv6      | any           | default (group)  |
| 829e7607-463a-4c7a-b162-8357f47924d1 | default        | ingress   | IPv4      | 1-65535/udp   | 0.0.0.0/0 (CIDR) |
| 9f1b8571-3c46-4f53-ac80-835d2186a3c0 | default        | egress    | IPv6      | any           | any              |
| bd46535b-6311-46f6-9b5c-cda78194ac01 | default        | egress    | IPv4      | any           | any              |
| e1b7ab35-8426-4c07-b5bc-d5760b291520 | default        | ingress   | IPv4      | any           | default (group)  |
| e82da2bf-f2e1-4d33-916b-ecb90b5db857 | default        | egress    | IPv6      | any           | any              |
+--------------------------------------+----------------+-----------+-----------+---------------+------------------+

[root@stag48 ~(keystone_admin)]# nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
|             |           |         |           | default      |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| udp         | 1         | 65535   | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
|             |           |         |           | default      |
+-------------+-----------+---------+-----------+--------------+
[root@stag48 ~(keystone_admin)]#





[root@stag48 ~(keystone_admin)]# nova list

Re: [Openstack] [Keystone] Source IP address in tokens

2016-06-27 Thread 林自均
Hi Craig,

Okay, I will read some documents on how to implement such mechanism. Thanks!

John

Craig A Lee wrote on Monday, June 27, 2016 at 3:38 PM:

> All,
>
>
>
> This issue of *delegation of trust* (user -> nova -> glance, i.e.,
> enabling nova to auth to glance on behalf of the user) is a fundamental
> capability.  This is precisely why PKI *proxy certs* (RFC 3820) were
> developed back in the grid era enabling chains of trust to be established
> up to a specifiable length.  The OAuth approach essentially enables one
> step of delegation but is certainly getting more widely used.  What’s the
> best approach for Keystone, however, is not going to be simple to pin down.
>
>
>
> --Craig
>
>
>
> *From:* Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
> *Sent:* Sunday, June 26, 2016 11:11 PM
> *To:* 林自均 
> *Cc:* openstack@lists.openstack.org
> *Subject:* Re: [Openstack] [Keystone] Source IP address in tokens
>
>
>
>
> On Jun 26, 2016 19:39, "林自均"  wrote:
> >
> > Hi all,
> >
> > I have the following scenario:
> >
> > 1. On client machine A, a user obtains an auth token with a username and
> password.
> > 2. The user can use the auth token to do operations on client machine A.
> > 3. A thief steals the auth token, and do operations on client machine B.
> >
> > Can Keystone check the auth token's source IP (which is client machine A
> in the above example) to prevent thieves from using it? Does this feature
> exist? Or is it a work in progress? Thanks for the help!
> >
> > John
> >
>
> Unfortunately, validating tokens in this way will induce a number of
> failures. The user's token is passed through from one service to another
> for subsequent actions (e.g. nova talking to glance to get the proper
> image).
>
> We are working on changing how AuthZ is handled when it is service to
> service (nova to glance or cinder) vs when it is user to service.
>
> While we have had the concept of token binding (requiring an x509 client
> cert for example) the above mentioned limitation has made the feature a
> non-starter. Generally speaking bearer tokens are known to have this issue
> and keystone tokens are bearer tokens.
>
> The best mitigation is to use TLS for communication to the endpoints (user
> -> service) and limit the life span of the tokens to the shortest window
> possible (making replay attacks significantly more difficult as the tokens
> expire quickly).
>
> Eventually we can work on solving this, but there is a bunch of work
> needed before it can be worked on/explored.
>
> --Morgan
>


Re: [Openstack] how to change the admin password

2016-06-27 Thread Adam Young

On 06/27/2016 10:37 AM, Venkatesh Kotipalli wrote:

Hi All,


I want to change the admin password for OpenStack Mitaka by using the CLI.

I installed on CentOS 7.

When I tried to change the password in admin-openrc, after changing the 
password I am unable to log in with the password I changed; I am getting 
the error below:


"Discovering versions from the identity service failed when creating 
the password plugin. Attempting to determine version from URL.

Internal Server Error (HTTP 500)"


A 500 error means something is wrong with your server. Look in 
/var/log/keystone.log and the /var/log/httpd/error_log files to see what 
error your server is actually reporting. You might also be able to see 
an error message if you just run: curl $OS_AUTH_URL





Can someone help me with this as soon as possible, with straightforward 
commands to execute on the command line?


Regards,
Venkatesh.k









[Openstack] [swift] Storage node failure modes

2016-06-27 Thread Mark Kirkwood

Hi,

I'm in the process of documenting failure modes (for ops documentation 
etc). Now I think I understand the intent:


- swift tries to ensure you always have the number of configured replicas

In the case of missing or unmounted devices I'm seeing the expected 
behaviour i.e:


- new object creation results in the configured number of replicas (some 
stored on handoff nodes)
- existing objects replicated on handoff to produce the correct replica 
number


In the case of a node (or a region) I'm *not* seeing analogous behaviour 
for *existing* objects, i.e. I am a replica down after shutting down one 
of my nodes and waiting a while.


I am testing using swift 2.7 on a small cluster of VMs (4 nodes, 4 
devices, 2 regions) - now it may be that my setup is just too trivial 
(or maybe I haven't waited long enough for swift to decide my storage node 
is really down). Any thoughts? I'd like to understand precisely what is 
supposed to happen when a node (and also an entire region) is unavailable.


Cheers

Mark



Re: [Openstack] Networking - next step?

2016-06-27 Thread Remo Mattei
Quick question: how was this installed? CentOS Packstack, DevStack, etc.?

What does nova service-list show?

How about neutron agent-list? 

Remo

Sent from my iPhone

> On 27 Jun 2016, at 15:25, Turbo Fredriksson wrote:
> 
> I'm not sure what to do next. I've finally got my first
> instance up and running, but it doesn't get a DHCP address.
> That's the first thing I can't figure out.
> 
> I assume(d) that the Control node is [going to be] the gateway
> to the rest of the network (because the Control node is also
> the Network node) and that the Compute node should route all traffic
> coming from the VMs to that host.
> 
> 
> In Openstack I have created the "physical" (provider) network,
> with an allocation pool of IP addresses that are available on the,
> surprise, surprise, physical network (which is eventually
> NATed out to the Internet), where everything else not related
> to Openstack is located.
> 
> I also have three tenant networks, which won't be routed outside
> of Openstack.
> 
> There is an Openstack router, with a leg (port) on each of these
> networks. Unfortunately, all ports on that router are "Down".
> That's the second thing I can't figure out how to change. I can't
> seem to find a way to do anything about it, and I don't see
> anything obvious about it in the logs:
> 
> - s n i p -
> bladeA01b:~# grep 57fa1869-fc0d-4c5c-924c-402782b5bd24 
> /var/log/neutron/neutron-openvswitch-agent.log
> 2016-06-27 10:50:17.575 17559 INFO neutron.agent.common.ovs_lib 
> [req-6627cbfc-f9c4-4cf8-b07f-92b53eba1ccc - - - - -] Port 
> 57fa1869-fc0d-4c5c-924c-402782b5bd24 not present in bridge br-physical
> 2016-06-27 10:50:18.385 17559 INFO 
> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
> [req-6627cbfc-f9c4-4cf8-b07f-92b53eba1ccc - - - - -] Port 
> 57fa1869-fc0d-4c5c-924c-402782b5bd24 was not found on the integration bridge 
> and will therefore not be processed
> 2016-06-27 10:50:19.329 17559 INFO neutron.agent.securitygroups_rpc 
> [req-6627cbfc-f9c4-4cf8-b07f-92b53eba1ccc - - - - -] Preparing filters for 
> devices set([u'57fa1869-fc0d-4c5c-924c-402782b5bd24', 
> u'657fbe47-babe-4a0e-afd6-5dbfd05d5748', 
> u'1e7c4621-a4ff-4057-8ce7-3ecdca717b27', 
> u'1b37164c-834d-4765-9829-87c621b2dc8c'])
> 2016-06-27 10:50:47.293 17559 INFO neutron.agent.common.ovs_lib 
> [req-6627cbfc-f9c4-4cf8-b07f-92b53eba1ccc - - - - -] Port 
> 57fa1869-fc0d-4c5c-924c-402782b5bd24 not present in bridge br-physical
> 2016-06-27 10:50:48.103 17559 INFO 
> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
> [req-6627cbfc-f9c4-4cf8-b07f-92b53eba1ccc - - - - -] Port 
> 57fa1869-fc0d-4c5c-924c-402782b5bd24 was not found on the integration bridge 
> and will therefore not be processed
> 2016-06-27 10:50:49.044 17559 INFO neutron.agent.securitygroups_rpc 
> [req-6627cbfc-f9c4-4cf8-b07f-92b53eba1ccc - - - - -] Preparing filters for 
> devices set([u'57fa1869-fc0d-4c5c-924c-402782b5bd24', 
> u'657fbe47-babe-4a0e-afd6-5dbfd05d5748', 
> u'1e7c4621-a4ff-4057-8ce7-3ecdca717b27', 
> u'1b37164c-834d-4765-9829-87c621b2dc8c'])
> 2016-06-27 11:15:26.635 20929 INFO 
> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
> [req-430be11d-8f34-4750-9aef-71af9fb8994d - - - - -] Port 
> 57fa1869-fc0d-4c5c-924c-402782b5bd24 updated. Details: {u'profile': {}, 
> u'network_qos_policy_id': None, u'qos_policy_id': None, 
> u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
> u'eadb3df0-3c4b-46e5-afb7-fe5d2ef09328', u'segmentation_id': None, 
> u'device_owner': u'network:router_gateway', u'physical_network': u'external', 
> u'mac_address': u'fa:16:3e:46:b8:f2', u'device': 
> u'57fa1869-fc0d-4c5c-924c-402782b5bd24', u'port_security_enabled': False, 
> u'port_id': u'57fa1869-fc0d-4c5c-924c-402782b5bd24', u'fixed_ips': 
> [{u'subnet_id': u'172bdf64-9291-415a-8930-455f1f59453f', u'ip_address': 
> u'10.0.0.200'}], u'network_type': u'flat', u'security_groups': []}
> 2016-06-27 11:15:28.833 20929 INFO 
> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
> [req-430be11d-8f34-4750-9aef-71af9fb8994d - - - - -] Configuration for 
> devices up [u'57fa1869-fc0d-4c5c-924c-402782b5bd24'] and devices down [] 
> completed.
> 2016-06-27 17:07:15.302 23086 INFO 
> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
> [req-06aee021-d73f-4984-b5fc-7ccb73edf20f - - - - -] Port 
> 57fa1869-fc0d-4c5c-924c-402782b5bd24 updated. Details: {u'profile': {}, 
> u'network_qos_policy_id': None, u'qos_policy_id': None, 
> u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
> u'eadb3df0-3c4b-46e5-afb7-fe5d2ef09328', u'segmentation_id': None, 
> u'device_owner': u'network:router_gateway', u'physical_network': u'external', 
> u'mac_address': u'fa:16:3e:46:b8:f2', u'device': 
> u'57fa1869-fc0d-4c5c-924c-402782b5bd24', u'port_security_enabled': False, 
> u'port_id': u'57fa1869-fc0d-4c5c-924c-402782b5bd24', u'fixed_ips': 
> [{u'subnet_id': u'172bdf64-9291-415a-89

Re: [Openstack] Using multiple compute drivers in Nova?

2016-06-27 Thread Turbo Fredriksson
On Jun 27, 2016, at 11:33 PM, Chris Friesen wrote:

> Without some way to divide the resources between your two nova-compute 
> instances, they're both going to think that they have access to the whole 
> system and won't know what resources are used by the other one.


I completely understand that; I'm just saying that the documentation
indicate(d) that it is/should be possible to use multiple hypervisors.

Although, the config file DOES say "(string value)", not "(list value)".


It was the contradiction between the documentation (as I read/understood
it) and the configuration file that made me hope that maybe, just maybe,
I'd be in luck this time.
-- 
Em - The battle cry of the chronic masturbator.
- Charlie Harper




[Openstack] Networking - next step?

2016-06-27 Thread Turbo Fredriksson
I'm not sure what to do next. I've finally got my first
instance up and running, but it doesn't get a DHCP address.
That's the first thing I can't figure out.

I assume(d) that the Control node is [going to be] the gateway
to the rest of the network (because the Control node is also
the Network node) and that the Compute node should route all traffic
coming from the VMs to that host.


In Openstack I have created the "physical" (provider) network,
with an allocation pool of IP addresses that are available on the,
surprise, surprise, physical network (which is eventually
NATed out to the Internet), where everything else not related
to Openstack is located.

I also have three tenant networks, which won't be routed outside
of Openstack.

There is an Openstack router, with a leg (port) on each of these
networks. Unfortunately, all ports on that router are "Down".
That's the second thing I can't figure out how to change. I can't
seem to find a way to do anything about it, and I don't see
anything obvious about it in the logs:

- s n i p -
bladeA01b:~# grep 57fa1869-fc0d-4c5c-924c-402782b5bd24 
/var/log/neutron/neutron-openvswitch-agent.log
2016-06-27 10:50:17.575 17559 INFO neutron.agent.common.ovs_lib 
[req-6627cbfc-f9c4-4cf8-b07f-92b53eba1ccc - - - - -] Port 
57fa1869-fc0d-4c5c-924c-402782b5bd24 not present in bridge br-physical
2016-06-27 10:50:18.385 17559 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-6627cbfc-f9c4-4cf8-b07f-92b53eba1ccc - - - - -] Port 
57fa1869-fc0d-4c5c-924c-402782b5bd24 was not found on the integration bridge 
and will therefore not be processed
2016-06-27 10:50:19.329 17559 INFO neutron.agent.securitygroups_rpc 
[req-6627cbfc-f9c4-4cf8-b07f-92b53eba1ccc - - - - -] Preparing filters for 
devices set([u'57fa1869-fc0d-4c5c-924c-402782b5bd24', 
u'657fbe47-babe-4a0e-afd6-5dbfd05d5748', 
u'1e7c4621-a4ff-4057-8ce7-3ecdca717b27', 
u'1b37164c-834d-4765-9829-87c621b2dc8c'])
2016-06-27 10:50:47.293 17559 INFO neutron.agent.common.ovs_lib 
[req-6627cbfc-f9c4-4cf8-b07f-92b53eba1ccc - - - - -] Port 
57fa1869-fc0d-4c5c-924c-402782b5bd24 not present in bridge br-physical
2016-06-27 10:50:48.103 17559 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-6627cbfc-f9c4-4cf8-b07f-92b53eba1ccc - - - - -] Port 
57fa1869-fc0d-4c5c-924c-402782b5bd24 was not found on the integration bridge 
and will therefore not be processed
2016-06-27 10:50:49.044 17559 INFO neutron.agent.securitygroups_rpc 
[req-6627cbfc-f9c4-4cf8-b07f-92b53eba1ccc - - - - -] Preparing filters for 
devices set([u'57fa1869-fc0d-4c5c-924c-402782b5bd24', 
u'657fbe47-babe-4a0e-afd6-5dbfd05d5748', 
u'1e7c4621-a4ff-4057-8ce7-3ecdca717b27', 
u'1b37164c-834d-4765-9829-87c621b2dc8c'])
2016-06-27 11:15:26.635 20929 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-430be11d-8f34-4750-9aef-71af9fb8994d - - - - -] Port 
57fa1869-fc0d-4c5c-924c-402782b5bd24 updated. Details: {u'profile': {}, 
u'network_qos_policy_id': None, u'qos_policy_id': None, 
u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
u'eadb3df0-3c4b-46e5-afb7-fe5d2ef09328', u'segmentation_id': None, 
u'device_owner': u'network:router_gateway', u'physical_network': u'external', 
u'mac_address': u'fa:16:3e:46:b8:f2', u'device': 
u'57fa1869-fc0d-4c5c-924c-402782b5bd24', u'port_security_enabled': False, 
u'port_id': u'57fa1869-fc0d-4c5c-924c-402782b5bd24', u'fixed_ips': 
[{u'subnet_id': u'172bdf64-9291-415a-8930-455f1f59453f', u'ip_address': 
u'10.0.0.200'}], u'network_type': u'flat', u'security_groups': []}
2016-06-27 11:15:28.833 20929 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-430be11d-8f34-4750-9aef-71af9fb8994d - - - - -] Configuration for devices 
up [u'57fa1869-fc0d-4c5c-924c-402782b5bd24'] and devices down [] completed.
2016-06-27 17:07:15.302 23086 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-06aee021-d73f-4984-b5fc-7ccb73edf20f - - - - -] Port 
57fa1869-fc0d-4c5c-924c-402782b5bd24 updated. Details: {u'profile': {}, 
u'network_qos_policy_id': None, u'qos_policy_id': None, 
u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
u'eadb3df0-3c4b-46e5-afb7-fe5d2ef09328', u'segmentation_id': None, 
u'device_owner': u'network:router_gateway', u'physical_network': u'external', 
u'mac_address': u'fa:16:3e:46:b8:f2', u'device': 
u'57fa1869-fc0d-4c5c-924c-402782b5bd24', u'port_security_enabled': False, 
u'port_id': u'57fa1869-fc0d-4c5c-924c-402782b5bd24', u'fixed_ips': 
[{u'subnet_id': u'172bdf64-9291-415a-8930-455f1f59453f', u'ip_address': 
u'10.0.0.200'}], u'network_type': u'flat', u'security_groups': []}
2016-06-27 17:07:17.037 23086 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-06aee021-d73f-4984-b5fc-7ccb73edf20f - - - - -] Configuration for devices 
up [u'57fa1869-fc0d-4c5c-924c-402782b5bd24'] and devices down [] completed.
2016-06-27 17:07:20.473 23086 INFO 
neutron.plugins.ml2.drivers

Re: [Openstack] Cannot able to connect to putty from another system while installing controller node in openstack

2016-06-27 Thread Steve Martinelli
It would also be helpful to append --debug to the commands and compare the
output.

On Mon, Jun 27, 2016 at 6:13 PM, Kaustubh Kelkar <
kaustubh.kel...@casa-systems.com> wrote:

>
>
>
>
> *From:* venkat boggarapu [mailto:venkat.boggar...@gmail.com]
> *Sent:* Monday, June 27, 2016 5:44 AM
> *To:* openstack@lists.openstack.org
> *Subject:* [Openstack] Cannot able to connect to putty from another
> system while installing controller node in openstack
>
>
>
> Hi All,
>
>
>
> We are installing openstack mitaka,
>
>
>
> Centos 7.2
>
>
>
> When we open a PuTTY session from the controller node itself, we are able to install:
>
>
>
>  openstack user create --domain default  --password-prompt admin
>
> Missing parameter(s):
>
> new password:
>
>
>
>
>
> If we connect to the same controller node with another PuTTY session from
> another system on the same network, we get the error below:
>
>
>
>
>
>  openstack user create --domain default  --password-prompt admin
>
> Missing parameter(s):
>
> Set a username with --os-username, OS_USERNAME, or auth.username
>
> Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url
>
> Set a scope, such as a project or domain, set a project scope with
> --os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope
> with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name
>
> *[Kaustubh] Those environment variables (OS_AUTH_URL, OS_USERNAME, etc.)
> are visible to the current shell only. You need to export the values again
> when you log in with a new session. Create a credentials file to source
> (http://docs.openstack.org/mitaka/install-guide-rdo/keystone-openrc.html)
> so that you won’t have to set them explicitly every time you log in.*
>
>
>
>
>
> Can someone please help solve this issue? Waiting for a reply.
>
>
>
>
>
> With regards
> venkat
>


Re: [Openstack] python-keystoneclient (2.3.1-2) make wrong URI call for keystone api V3

2016-06-27 Thread Adam Young

On 06/24/2016 03:16 AM, Soputhi Sea wrote:


Hi,


I'm using the Mitaka release (the very latest public release, from 
Jun-02), and I'm having an issue with listing projects in Horizon. In my 
case I have multiple projects created, and when I log in to Horizon the 
drop-down list of projects (in the top left corner) doesn't populate 
properly; it lists only one project. As I use Apache WSGI as a service 
instead of the keystone Python web service, I checked the Apache log, and 
here is what I found:



 [23/Jun/2016:17:09:37 +0700] "GET /v3/tenants HTTP/1.1" 404 93 "-" 
"python-keystoneclient"
 [23/Jun/2016:18:47:18 +0700] "POST /v3/tokens HTTP/1.1" 404 93 "-" 
"keystoneauth1/2.4.1 python-requests/2.10.0 CPython/2.7.5"


You can see here that the URI "/v3/tenants" should be "/v2.0/tenants" or 
"/v3/projects" (I think), and /v3/tokens should be "/v2.0/tokens" or 
"/v3/auth/tokens".


So I wonder: is this a bug in python-keystoneclient, or is there 
any configuration I can apply to force the client/keystone/horizon to use 
the proper URI?



As an aside, I applied a workaround for this issue by creating redirect 
rules in Apache as follows:


RewriteEngine on
Redirect /v3/tenants /v2.0/tenants
Redirect /v3/tokens /v2.0/tokens


Set the API version explicitly to 3. It looks like the auth URL is set to 
/v3 but the API version to 2.0.
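
A minimal sketch of that, assuming a stock Horizon install (the settings
file is commonly /etc/openstack-dashboard/local_settings, and the URL below
is a placeholder for your endpoint):

OPENSTACK_API_VERSIONS = {"identity": 3}
OPENSTACK_KEYSTONE_URL = "http://controller:5000/v3"

With the identity version pinned to 3, the clients Horizon uses should stop
issuing v2.0-style paths (/tenants, /tokens) against a /v3 root.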




Thanks in advance for any help.

Puthi









Re: [Openstack] Using multiple compute drivers in Nova?

2016-06-27 Thread Chris Friesen

On 06/27/2016 09:31 AM, Turbo Fredriksson wrote:

On Jun 27, 2016, at 4:07 PM, Chris Friesen wrote:


How about running two containers on your host, with one nova-compute in each?


And those two are configured as docker and kvm respectively? And
in those containers I run either a container or a KVM?

Sounds way too complicated to actually work in the long run..


Without some way to divide the resources between your two nova-compute 
instances, they're both going to think that they have access to the whole system 
and won't know what resources are used by the other one.


Chris




Re: [Openstack] Cannot able to connect to putty from another system while installing controller node in openstack

2016-06-27 Thread Kaustubh Kelkar


From: venkat boggarapu [mailto:venkat.boggar...@gmail.com]
Sent: Monday, June 27, 2016 5:44 AM
To: openstack@lists.openstack.org
Subject: [Openstack] Cannot able to connect to putty from another system while 
installing controller node in openstack

Hi All,

We are installing openstack mitaka,

Centos 7.2

When we open a PuTTY session from the controller node itself, we are able to install:

 openstack user create --domain default  --password-prompt admin
Missing parameter(s):
new password:


If we connect to the same controller node with another PuTTY session from 
another system on the same network, we get the error below:


 openstack user create --domain default  --password-prompt admin
Missing parameter(s):
Set a username with --os-username, OS_USERNAME, or auth.username
Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url
Set a scope, such as a project or domain, set a project scope with 
--os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope 
with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name
[Kaustubh] Those environment variables (OS_AUTH_URL, OS_USERNAME, etc.) are 
visible to the current shell only. You need to export the values again when 
you log in with a new session. Create a credentials file to source 
(http://docs.openstack.org/mitaka/install-guide-rdo/keystone-openrc.html) so 
that you won’t have to set them explicitly every time you log in.
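
A minimal sketch of such a file, following the Mitaka install guide
(substitute your own password and controller host):

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

Then run "source admin-openrc" in every new PuTTY session before using the
openstack client.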


Can someone please help solve this issue? Waiting for a reply.


With regards
venkat


[Openstack] [openstack][glance] glance-scrubber dead but subsys locked

2016-06-27 Thread Mick McCarthy
Hi there, 

I’m having an issue enabling Glance scrubber in Icehouse:

$ service openstack-glance-scrubber status
openstack-glance-scrubber dead but subsys locked
I’ve tried:
- checking the PID, but there are no running processes found:
$ cat /var/run/glance/glance-scrubber.pid
2834
$ kill -9 2834
bash: kill: (2834) - No such process
- removing the lockfile and restarting the process, but it goes back to “dead 
but subsys locked” immediately:

$ service openstack-glance-scrubber status
openstack-glance-scrubber dead but subsys locked
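
For reference, a minimal sketch of the cleanup attempted above, assuming
the stock RHEL init-script conventions (subsys lock under /var/lock/subsys/,
PID file under /var/run/glance/):

rm -f /var/lock/subsys/openstack-glance-scrubber
rm -f /var/run/glance/glance-scrubber.pid
service openstack-glance-scrubber start

# running it in the foreground can also surface the real failure:
glance-scrubber --config-file /etc/glance/glance-scrubber.conf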


My Glance scrubber config is as follows:

[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True

# Show debugging output in logs (sets DEBUG log level output)
debug = False


# Log to this directory
log_dir=/var/log/glance

# Format string to use for log messages with context. (string
# value)
logging_context_format_string=%(asctime)s.%(msecs)03d %(levelname)s %(message)s 
pid[%(process)s] %(request_id)s %(instance)s
# Format string to use for log messages without context.
# (string value)
logging_default_format_string=%(asctime)s.%(msecs)03d %(levelname)s %(message)s 
pid[%(process)s]

# Data to append to log format when level is DEBUG. (string
# value)
logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format.
# (string value)
logging_exception_prefix=%(asctime)s.%(msecs)03d TRACE instance[%(instance)s]

# Should we run our own loop or rely on cron/scheduler to run us
daemon = False

# Loop time between checking the db for new items to schedule for delete
wakeup_time = 300

# Directory that the scrubber will use to remind itself of what to delete
# Make sure this is also set in glance-api.conf
scrubber_datadir = /data/var/lib/glance/scrubber

# Only one server in your deployment should be designated the cleanup host
cleanup_scrubber = False

# pending_delete items older than this time are candidates for cleanup
cleanup_scrubber_time = 86400

# Address to find the registry server for cleanups
registry_host = os-controller.wd5-cedev21.az2.eng.pdx.wd

# Port the registry server is listening on
registry_port = 9191

#  Filesystem Store Options 

# Directory that the Filesystem backend store
# writes image data to
filesystem_store_datadir = /data/var/lib/glance/images


When I attempted to launch “glance-scrubber”, the output to scrubber.log is as 
follows:

2016-06-27 13:34:36.953 WARNING Deprecated: glance.store.rbd.Store not found in 
`known_store`. Stores need to be explicitly enabled in the configuration file. 
pid[3983]
2016-06-27 13:34:36.954 WARNING Failed to configure store correctly: Store 
gridfs could not be configured correctly. Reason: Missing dependencies: pymongo 
Disabling add method. pid[3983]
2016-06-27 13:34:36.955 WARNING Deprecated: glance.store.gridfs.Store not found 
in `known_store`. Stores need to be explicitly enabled in the configuration 
file. pid[3983]
2016-06-27 13:34:36.993 WARNING Failed to configure store correctly: Store 
cinder could not be configured correctly. Reason: Cinder storage requires a 
context. Disabling add method. pid[3983]
2016-06-27 13:34:36.994 WARNING Deprecated: glance.store.cinder.Store not found 
in `known_store`. Stores need to be explicitly enabled in the configuration 
file. pid[3983]
2016-06-27 13:34:37.000 ERROR Could not find swift_store_auth_address in 
configuration options. pid[3983]
2016-06-27 13:34:37.001 WARNING Failed to configure store correctly: Store 
swift could not be configured correctly. Reason: Could not find 
swift_store_auth_address in configuration options. Disabling add method. 
pid[3983]
2016-06-27 13:34:37.001 WARNING Deprecated: glance.store.swift.Store not found 
in `known_store`. Stores need to be explicitly enabled in the configuration 
file. pid[3983]
2016-06-27 13:34:37.004 WARNING Failed to configure store correctly: Store s3 
could not be configured correctly. Reason: Could not find s3_store_host in 
configuration options. Disabling add method. pid[3983]
2016-06-27 13:34:37.005 WARNING Deprecated: glance.store.s3.Store not found in 
`known_store`. Stores need to be explicitly enabled in the configuration file. 
pid[3983]
2016-06-27 13:34:37.007 INFO Initializing scrubber with configuration: 
{'cleanup_time': 86400, 'registry_host': 
'os-controller.wd5-cedev21.az2.eng.pdx.wd', 'cleanup': False, 
'scrubber_datadir': '/data/var/lib/glance/scrubber', 'registry_port': 9191} 
pid[3983]
Despite glance-scrubber not working, there is not a massive discrepancy between 
active glance images & those stored locally on the filesystem:

$ ls /data/var/lib/glance/images/ | wc -l
1094
$ glance image-list --all-tenants | grep active | wc -l
1083
My questions are:

- i) Based upon my config and stack trace, does anyone have any ideas on why 
the Glance scrubber process would be stuck in a "dead but subsys locked” state
- ii) If glance-scrubber is not working, are these images be

Re: [Openstack] lbaas not showing in neutron agent-list

2016-06-27 Thread Turbo Fredriksson
On Jun 20, 2016, at 5:32 AM, Priyanka wrote:

> However, when I do "neutron agent-list" it does not show lbaas running.

Did you ever figure this out? I'm now having the same problem..
--
Realizing your own significance is like getting a mite to grasp
that it is only visible under a microscope
- Arne Anka




Re: [Openstack] Using multiple compute drivers in Nova?

2016-06-27 Thread Turbo Fredriksson
On Jun 27, 2016, at 4:07 PM, Chris Friesen wrote:

> How about running two containers on your host, with one nova-compute in each?

And those two are configured as docker and kvm respectively? And
in those containers I run either a container or a KVM?

Sounds way too complicated to actually work in the long run..
-- 
I love deadlines. I love the whooshing noise they
make as they go by.
- Douglas Adams




Re: [Openstack] how to change the admin password

2016-06-27 Thread Turbo Fredriksson
On Jun 27, 2016, at 3:37 PM, Venkatesh Kotipalli wrote:

> when I tried to change the password in admin-openrc

That doesn't actually change the password! It will only change the
password you're using to connect, not the one that's actually used!
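
To actually change it, a minimal sketch (run with credentials that still
work, i.e. the OLD password still in your openrc):

$ source admin-openrc
$ openstack user set --password NEW_PASS admin

Only after that succeeds should you put NEW_PASS into admin-openrc.
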
-- 
With a decent iron pipe you can astonish the whole world
- Sockerconny




Re: [Openstack] Using multiple compute drivers in Nova?

2016-06-27 Thread Chris Friesen

On 06/24/2016 05:47 PM, Turbo Fredriksson wrote:

On Jun 25, 2016, at 12:22 AM, Clint Byrum wrote:


Excerpts from Turbo Fredriksson's message of 2016-06-24 22:50:40 +0100:

The page 
http://docs.openstack.org/mitaka/config-reference/compute/hypervisors.html
states:

  Most installations use only one hypervisor. However, you can use
  ComputeFilter and ImagePropertiesFilter to schedule different
  hypervisors within the same installation.


If you want to use different compute drivers on one machine, you need
to run two copies of nova-compute on that machine.


Yeah, that's what I've heard before. I just found that information in the
documentation, and it suggests otherwise..


One has to ask though... what two hypervisors are you wanting to use on
the same box?


libvirt (KVM) and nova-docker. I have need for both containers and real
VMs.

I'd very much like to limit my power/cooling requirements by only
running the physical machines that are absolutely necessary. Having to
specify one+ host for containers and one+ host for VMs will mean that
these two+ hosts will individually run "empty" for the most part..

Yes, they will fill up eventually, but I'd rather only have ONE Compute
running if the containers and VMs fit on ONE..



How about running two containers on your host, with one nova-compute in each?

Chris



[Openstack] how to change the admin password

2016-06-27 Thread Venkatesh Kotipalli
Hi All,


I want to change the admin password for OpenStack Mitaka by using the CLI.

I installed on CentOS 7.

When I tried to change the password in admin-openrc, after changing the
password I am unable to log in with the password I changed; I am getting
the error below:

"Discovering versions from the identity service failed when creating the
password plugin. Attempting to determine version from URL.
Internal Server Error (HTTP 500)"


Can someone help me with this as soon as possible, with straightforward
commands to execute on the command line?

Regards,
Venkatesh.k


Re: [Openstack] Projects deals tricky job

2016-06-27 Thread Eugen Block
Thanks for the information, I'll definitely get to it. But right now  
I'm having some trouble with domain_id in the keystone_policy.json. I  
believe I'm also affected by this bug  
https://bugs.launchpad.net/python-openstackclient/+bug/1538804


I switched to the stable/liberty policy.v3cloudsample.json because the  
value "token.is_admin_project:True or domain_id:admin_domain_id" led  
to errors in authentication. Using "rule:admin_required and  
domain_id:default" works if I use Horizon (I see the output in  
keystone.log), but it fails to authenticate when using the CLI, because  
for some reason "domain_id" is never read by the client.

As a workaround I changed the rule to

"cloud_admin": "rule:admin_required and (domain_id:default or user_domain_id:default)"

That seems to work fine, and I have already tried it with user_id instead  
of domain_id, but I can't predict the consequences. What is the  
recommendation here until the CLI client is able to read domain_id?


Regards,
Eugen


Quoting Timothy Symanczyk:


We implemented something here at Symantec that sounds very similar to what
you're both talking about. We have three levels of Admin - Cloud, Domain,
and Project. If you're interested in checking it out, we actually
presented on this topic in Austin.

The presentation : https://www.youtube.com/watch?v=v79kNddKbLc

All the referenced files can be found in our github here :
https://github.com/Symantec/Openstack_RBAC

Specifically you may want to check out our keystone policy file that
defines cloud_admin, domain_admin, and project_admin:
https://github.com/Symantec/Openstack_RBAC/blob/master/keystone/policy.json

Tim

On 6/20/16, 5:17 AM, "Eugen Block"  wrote:


I believe you are trying to accomplish the same configuration as I do,
so I think domains are the answer. You can divide your cloud into
different domains and grant admin rights to specific users, who are
not authorized to see the other domains. Although I'm still not sure
if I did it correctly and it's not fully resolved yet, here is a
thread I started a few days ago:

http://lists.openstack.org/pipermail/openstack/2016-June/016454.html

Regards,
Eugen

Quoting Venkatesh Kotipalli:


Hi Folks,

Is it possible to create a project admin in OpenStack?

As we identified, whenever we create a project admin it shows the
entire cloud (like: other users and all services, with complete admin
access), but I want to see only that particular project's users and
admins, and to control all the services.

Guys, please help me with this part. I am really very confused.

Regards,
Venkatesh.k




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983









--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983




[Openstack] ADMIN PASSWORD CHANGE

2016-06-27 Thread venkat boggarapu
Hi All,

How do I change my admin password, in order to log in to the dashboard,
using the command line?

Presently I am using the Mitaka version, installed on CentOS 7.


With regards
venkat


Re: [Openstack] Launching an instance by using NFS shared volume

2016-06-27 Thread Turbo Fredriksson
On Jun 27, 2016, at 2:32 PM, Jean-Pierre Ribeauville wrote:

> As stated on page 17 of this document:
>  
> If you specify NFS, you must specify a list of NFS exports to mount. For 
> example:
> ip-address:/export-name
> Enter a single or comma separated list of NFS exports to use
> with Cinder [^([\d]{1,3}\.){3}[\d]{1,3}:/.*]:
>  
>  
> 1)  Do I have to specify a different volume for each instance, or is one 
> volume enough?

One is enough. Each instance will create a file on that share/volume.

> 2)  By doing that, does it mean that LVM is not enabled and that all 
> instances will be located on NFS?

You can select that at creation time. Well, you're supposed to; I haven't
figured out how to do that in the web GUI.

But I have both LVM and NFS enabled, and I can at least create an OS volume
on either of them and then attach it to the instance..

bladeA01b:~# cinder service-list
+------------------+---------------+------+---------+-------+------------------------+-----------------+
|      Binary      |      Host     | Zone |  Status | State |       Updated_at       | Disabled Reason |
+------------------+---------------+------+---------+-------+------------------------+-----------------+
|  cinder-backup   |   bladeA01b   | nova | enabled |   up  | 2016-06-27T13:50:22.00 |        -        |
| cinder-scheduler |   bladeA01b   | nova | enabled |   up  | 2016-06-27T13:50:19.00 |        -        |
|  cinder-volume   | bladeA01b@lvm | nova | enabled |   up  | 2016-06-27T13:50:18.00 |        -        |
|  cinder-volume   | bladeA01b@nfs | nova | enabled |   up  | 2016-06-27T13:50:22.00 |        -        |
+------------------+---------------+------+---------+-------+------------------------+-----------------+

So I should be able to specify host=bladeA01b@lvm or host=bladeA01b@nfs
(bladeA01 is my controller with everything but Nova Compute on it) when
creating a volume..

I think LVM is my default (because I have that in the "default_volume_type"
setting in cinder.conf).

To have them both, I use:

/etc/cinder/cinder.conf:enabled_backends = lvm,nfs

And then the lvm/nfs block in the same file:

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = blade_center
iscsi_protocol = iscsi
#iscsi_helper = tgtadm
iscsi_helper = lioadm
volume_backend_name = LVM_iSCSI
lvm_type = default

# NFS driver
[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_group = blade_center
volume_backend_name = nfsbackend
nfs_shares_config = /etc/cinder/nfs.conf
nfs_sparsed_volumes = true
#nfs_mount_options =

You might need more, but those are the most obvious things.
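
For completeness, a minimal sketch of the shares file referenced by
nfs_shares_config above, one export per line (the host and path here are
assumptions):

# /etc/cinder/nfs.conf
10.0.0.5:/export/cinder

Cinder mounts each export listed there and stores its volumes as files on it.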

> 3)  Is it possible to deploy using the default Packstack parameters and 
> set the cinder and nova storage to NFS afterwards?

I have heard so many bad things about all these different installers, so when
I started my Openstack adventure (and it has been an adventure!! :), I
decided NOT to use any such thing.

I chose to do this manually! By hand! Well, almost. I use Debian GNU/Linux
Sid (unstable), which comes with Mitaka, and those packages do a lot
automatically. But I still have had to do _A LOT_ (!!) manually.

So I can't really answer that question..
-- 
Try not. Do. Or do not. There is no try!
- Yoda



Re: [Openstack] Launching an instance by using NFS shared volume

2016-06-27 Thread Jean-Pierre Ribeauville
Hi,

Sorry for being so late in thanking you for the info.

I’m currently deploying the following configuration:


-  2 compute nodes (RHEL 7.2 physical servers)
-  1 controller node (RHEL 7.2 virtual machine)
-  Red Hat OpenStack Platform v7 (Kilo, I presume?)

My first goal is to deploy it the Packstack way (following this document):

Red_Hat_Enterprise_Linux_OpenStack_Platform-6-Deploying_OpenStack_Proof_of_Concept_Environments-en-US.pdf

I want to make cinder use NFS shared storage to be able to migrate instances 
between my two compute nodes.

As told page 17 of this document :

If you specify NFS, you must specify a list of NFS exports to mount. For 
example:
ip-address:/export-name
Enter a single or comma seprated list of NFS exports to use
with Cinder [^([\d]{1,3}\.){3}[\d]{1,3}:/.*]:



1)  Do I have to specify a different volume for each instance, or is one
volume enough?

2)  By doing that, does it mean that LVM is not enabled and that all
instances will be located on NFS?

3)  Is it possible to deploy using the default Packstack parameters and set
Cinder and Nova storage to NFS afterwards?


Thx for help.

Regards,


J.P.

From: Tzach Shefi [mailto:tsh...@redhat.com]
Sent: jeudi 16 juin 2016 14:39
To: Jean-Pierre Ribeauville 
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] Launching an instance by using NFS shared volume

Hey Jean,
Sorry I'm a bit late; I'll try to guide/help you in the right direction.
First of all, let your peers know which version of OpenStack you're
using: Juno (6), Kilo (7), Liberty (8), or Mitaka (9)?

"I just add NFS as backend." -> cool nice job, but by this what do you mean  
NFS backend for what? For Cinder Glance or Nova or for all of them?
You want the instance to live migrate to another compute node, right?
For this all you need is shared Nova storage (if I recall you can do without,
but it's easier with); no need to use a Cinder volume for the instance, unless
you want to do so.
Check this link for migration details; check your version and which hypervisor
you use, as each has its own settings and special needs.
http://docs.openstack.org/admin-guide/compute-configuring-migrations.html
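
As a sketch of the shared-storage part (the server address and export path
are placeholders), every compute node mounts the same export at Nova's
default instances path:

# /etc/fstab entry on each compute node:
nfs-server:/export/nova-instances  /var/lib/nova/instances  nfs  defaults  0  0
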
I didn't get this part ("i.e. I boot from an .iso image").
You know that the image must be either a live-CD ISO, which is a bootable OS,
or a qcow2 image of a preinstalled OS.

From your command I take it you want to boot an instance from an image, which
will create a Cinder volume.
nova boot --image CentOS-7-ISO --block-device 
source=blank,dest=volume,size=10,shutdown=preserve --nic 
net-id=73d765a3-edca-4ac4-822d-1d0f30cc959b --flavor 2 CentOS-7-Bootable-Image

If your Cinder (or Nova, in case you don't want/have to use a Cinder volume)
is configured to use NFS, you're done.
You don't have to tell the nova boot command to use NFS; it will automatically
use the Cinder (or Nova) backend storage.
Say you have Nova running over NFS: if you nova boot an instance, its disk
will be created on Nova's backend, which is NFS.
Same for Cinder: if the backend is configured for NFS, boot an instance and
create a volume, and the volume will be created on the NFS backend.
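
One way to see this for Cinder (the path is the NFS driver's default
mount_point_base; the hash directory names vary per share):

# Each NFS share is mounted under /var/lib/cinder/mnt/<hash> on the Cinder
# node, and volumes appear there as volume-<uuid> files:
ls /var/lib/cinder/mnt/*/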

Hope I helped a bit.
If you have more questions fire away.
Tzach

On Fri, Jun 10, 2016 at 10:38 AM, Jean-Pierre Ribeauville
<jpribeauvi...@axway.com> wrote:
Hi,

I just added NFS as a backend.

Now, I want to launch an instance so it will be bootable from an NFS shared
device (and then, I hope, it may migrate to another compute node).
(i.e. I boot from an .iso image)


I don’t find how, in the following nova boot command, to specify that an
"nfs" volume should be created to hold this instance:


nova boot --image CentOS-7-ISO --block-device 
source=blank,dest=volume,size=10,shutdown=preserve --nic 
net-id=73d765a3-edca-4ac4-822d-1d0f30cc959b --flavor 2 CentOS-7-Bootable-Image


Thx for help

Regards,

Jean-Pierre RIBEAUVILLE

+33 1 4717 2049

[axway_logo_tagline_87px]


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



--
Tzach Shefi
Quality Engineer, Redhat OSP
+972-54-4701080
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [openstack-dev] [Fuel] Failed to build the bootstrap image

2016-06-27 Thread Maksim Malchuk
Hi,

The error about an Internet connection is only the generic hint. You should
check the actual error in the log file /var/log/fuel-bootstrap-image-build.log.
You can also share this file with us via any public service (Google Drive,
Dropbox, etc.) and we will check it for you.
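
In short (the log path and the rebuild command are taken from the warning
itself; tail is just one way to read the log):

tail -n 100 /var/log/fuel-bootstrap-image-build.log   # find the real error first
fuel-bootstrap build --activate                       # rebuild once it is fixed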

On Mon, Jun 27, 2016 at 4:08 PM, Maksim Malchuk wrote:

> Hi,
>
> The error about an Internet connection is only the generic hint. You should
> check the actual error in the log file /var/log/fuel-bootstrap-image-build.log.
> You can also share this file with us via any public service (Google Drive,
> Dropbox, etc.) and we will check it for you.
>
>
> On Mon, Jun 27, 2016 at 1:43 PM, Alioune  wrote:
>
>> hi all,
>>
>> I'm trying to install Fuel in VirtualBox. The Fuel master is correctly
>> running, but I'm receiving this error in the Fuel GUI:
>>
>> WARNING: Failed to build the bootstrap image, see
>> /var/log/fuel-bootstrap-image-build.log for details.
>> Perhaps your Internet connection is broken. Please fix the problem and
>> run `fuel-bootstrap build --activate`.
>> While you don't activate any bootstrap - new nodes cannot be discovered
>> and added to cluster.
>> For more information please visit
>> https://docs.mirantis.com/openstack/fuel/fuel-master/
>>
>> The Fuel server has access to the Internet, even though it could not find
>> repositories during install. I ran "yum update" and rebooted, but I still
>> get this error.
>> I also configured a Fuel slave server to boot from the network attached to
>> the Fuel master, but that process failed as well.
>> Any help to solve this, please?
>>
>> Regards,
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards,
> Maksim Malchuk,
> Senior DevOps Engineer,
> MOS: Product Engineering,
> Mirantis, Inc
> 
>



-- 
Best Regards,
Maksim Malchuk,
Senior DevOps Engineer,
MOS: Product Engineering,
Mirantis, Inc

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Reg: not able to start openstack-nova-compute.service in compute node

2016-06-27 Thread Turbo Fredriksson
On Jun 27, 2016, at 12:32 PM, venkat boggarapu wrote:

> Check login credentials

Seems pretty straightforward to me. You have the wrong username/password
for connecting to RabbitMQ.

Try this:

rgrep -E '^rabbit_' /etc
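
If they really are wrong, here is a sketch of resetting them on the controller
(the "openstack" user and RABBIT_PASS follow the install guide's convention;
substitute your own values):

rabbitmqctl add_user openstack RABBIT_PASS         # if the user doesn't exist yet
rabbitmqctl change_password openstack RABBIT_PASS  # if it already does
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Then make sure nova.conf on the compute node matches, e.g.:

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
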
--
Choose a job you love, and you will never have
to work a day in your life.


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Reg: not able to start openstack-nova-compute.service in compute node

2016-06-27 Thread venkat boggarapu
Hi All,

We are installing Mitaka OpenStack on CentOS 7.2.

As per the document
http://docs.openstack.org/mitaka/install-guide-rdo/nova-compute-install.html
.

We are trying to install the compute node, but we are getting an error
while starting openstack-nova-compute.service.

Both the RabbitMQ and conductor services are running on the controller
node.

We found this in the controller node's log:

2016-06-27 16:54:34.759 4591 ERROR oslo.messaging._drivers.impl_rabbit
[req-2873d7c7-31b1-4b99-a2ed-ee9e23a4d72e - - - - -] AMQP server
172.16.2.152:5672 closed the connection. Check login credentials: Socket
closed
2016-06-27 16:55:09.863 4591 ERROR oslo.messaging._drivers.impl_rabbit
[req-2873d7c7-31b1-4b99-a2ed-ee9e23a4d72e - - - - -] AMQP server
172.16.2.152:5672 closed the connection. Check login credentials: Socket
closed
2016-06-27 16:55:44.974 4591 ERROR oslo.messaging._drivers.impl_rabbit
[req-2873d7c7-31b1-4b99-a2ed-ee9e23a4d72e - - - - -] AMQP server
172.16.2.152:5672 closed the connection. Check login credentials: Socket
closed
2016-06-27 16:56:20.077 4591 ERROR oslo.messaging._drivers.impl_rabbit
[req-2873d7c7-31b1-4b99-a2ed-ee9e23a4d72e - - - - -] AMQP server
172.16.2.152:5672 closed the connection. Check login credentials: Socket
closed.

and this in the compute node's log:

2016-06-27 06:57:15.438 13757 ERROR oslo.messaging._drivers.impl_rabbit
[req-d85c17be-79a6-4ac2-991b-c1577cb58fc2 - - - - -] AMQP server
172.16.2.152:5672 closed the connection. Check login credentials: Socket
closed
2016-06-27 06:57:30.479 13757 ERROR oslo.messaging._drivers.impl_rabbit
[req-d85c17be-79a6-4ac2-991b-c1577cb58fc2 - - - - -] AMQP server
172.16.2.152:5672 closed the connection. Check login credentials: Socket
closed
2016-06-27 06:57:47.510 13757 ERROR oslo.messaging._drivers.impl_rabbit
[req-d85c17be-79a6-4ac2-991b-c1577cb58fc2 - - - - -] AMQP server
172.16.2.152:5672 closed the connection. Check login credentials: Socket
closed
2016-06-27 06:58:06.555 13757 ERROR oslo.messaging._drivers.impl_rabbit
[req-d85c17be-79a6-4ac2-991b-c1577cb58fc2 - - - - -] AMQP server
172.16.2.152:5672 closed the connection. Check login credentials: Socket
closed

Can someone help me find a solution as soon as possible?

With regards
venkat
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [openstack-dev] [Fuel] Failed to build the bootstrap image

2016-06-27 Thread Alioune
hi all,

I'm trying to install Fuel in VirtualBox. The Fuel master is correctly
running, but I'm receiving this error in the Fuel GUI:

WARNING: Failed to build the bootstrap image, see
/var/log/fuel-bootstrap-image-build.log for details.
Perhaps your Internet connection is broken. Please fix the problem and run
`fuel-bootstrap build --activate`.
While you don't activate any bootstrap - new nodes cannot be discovered and
added to cluster.
For more information please visit
https://docs.mirantis.com/openstack/fuel/fuel-master/

The Fuel server has access to the Internet, even though it could not find
repositories during install. I ran "yum update" and rebooted, but I still get
this error.
I also configured a Fuel slave server to boot from the network attached to
the Fuel master, but that process failed as well.
Any help to solve this, please?

Regards,
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Cannot able to connect to putty from another system while installing controller node in openstack

2016-06-27 Thread Turbo Fredriksson
On Jun 27, 2016, at 10:44 AM, venkat boggarapu wrote:

> Set a username with --os-username, OS_USERNAME, or auth.username
> Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url
> Set a scope, such as a project or domain, set a project scope with
> --os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope
> with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name


You need to create and source an "admin-openrc" file with the following
information (this is mine; yours might look different depending on how
you set it up):

export OS_AUTH_URL="http://controller:35357/v3";
export OS_IDENTITY_API_VERSION="3"
export OS_IMAGE_API_VERSION="2"
export OS_PASSWORD="SecretAdminPassword"
export OS_PROJECT_DOMAIN_NAME="default"
export OS_PROJECT_NAME="admin"
export OS_USERNAME="admin"
export OS_USER_DOMAIN_NAME="default"

Not all of these are necessary, but I'm not good enough yet to tell you
which ones you DON'T need :).
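
Once sourced, a quick sanity check (assuming the openstack client is installed):

source admin-openrc
openstack token issue
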
--
As soon as you find a product that you really like,
they will stop making it.
- Wilson's Law


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Cannot able to connect to putty from another system while installing controller node in openstack

2016-06-27 Thread venkat boggarapu
Hi All,

We are installing OpenStack Mitaka on CentOS 7.2.

When we open one PuTTY session to the controller node, we are able to install:

 openstack user create --domain default  --password-prompt admin
Missing parameter(s):
new password:


If we try to reach the same controller node with another PuTTY session from
another system on the same network, we get the error below:


 openstack user create --domain default  --password-prompt admin
Missing parameter(s):
Set a username with --os-username, OS_USERNAME, or auth.username
Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url
Set a scope, such as a project or domain, set a project scope with
--os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope
with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name


Can someone please help solve this issue? Waiting for a reply.


With regards
venkat
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] DHCP Offer doesn't reach the VM

2016-06-27 Thread Vaidyanath Manogaran
Hi All,
I have a compute node and a controller node configured in my lab.
I am using VMware as my hypervisor.

The controller node runs nova, neutron, the DHCP agent and the metadata agent;
the compute node runs just nova-compute.

root@controller:~# ovs-vsctl show
0b8220bf-6e38-46ed-8abd-e96939485ff5
Bridge br-int
fail_mode: secure
Port "tapb8a9ad98-30"
Interface "tapb8a9ad98-30"
type: internal
Port br-int
Interface br-int
type: internal
Port "eth1"
Interface "eth1"
Bridge br-dvs
Port br-dvs
Interface br-dvs
type: internal
ovs_version: "2.5.0"



root@controller:~# ip netns
qdhcp-8eb9fc31-0f12-4df5-b41b-31be0b9f95c6

root@controller:~#   ip netns exec
qdhcp-8eb9fc31-0f12-4df5-b41b-31be0b9f95c6 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
16: tapb8a9ad98-30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UNKNOWN group default
link/ether fa:16:3e:c4:23:19 brd ff:ff:ff:ff:ff:ff
inet 192.168.14.40/24 brd 192.168.14.255 scope global tapb8a9ad98-30
   valid_lft forever preferred_lft forever
inet 169.254.169.254/16 brd 169.254.255.255 scope global tapb8a9ad98-30
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fec4:2319/64 scope link
   valid_lft forever preferred_lft forever

Jun 27 13:33:27 dnsmasq-dhcp[17906]: 361721128 available DHCP subnet:
192.168.14.0/255.255.255.0
Jun 27 13:33:27 dnsmasq-dhcp[17906]: 361721128 vendor class: MSFT 5.0
Jun 27 13:33:27 dnsmasq-dhcp[17906]: 361721128 client provides name:
blb44cehvrt463
Jun 27 13:33:27 dnsmasq-dhcp[17906]: 361721128 DHCPDISCOVER(tapb8a9ad98-30)
00:50:56:b8:6c:e6 ignored
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 available DHCP subnet:
192.168.14.0/255.255.255.0
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 client provides name:
devstack
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134
DHCPDISCOVER(tapb8a9ad98-30) fa:16:3e:4e:3e:b2
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 tags: tag0, known,
tapb8a9ad98-30
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 DHCPOFFER(tapb8a9ad98-30)
192.168.14.41 fa:16:3e:4e:3e:b2
Jun 27 13:33:29 dnsmasq-dhcp[17906]: Ignoring duplicate dhcp-option 26
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 requested options:
1:netmask, 28:broadcast, 2:time-offset, 3:router,
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 requested options:
15:domain-name, 6:dns-server, 119:domain-search,
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 requested options:
12:hostname, 44:netbios-ns, 47:netbios-scope,
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 requested options: 26:mtu,
121:classless-static-route, 42:ntp-server
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 next server: 192.168.14.40
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size:  1 option: 53
message-type  2
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size:  4 option: 54
server-identifier  192.168.14.40
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size:  4 option: 51
lease-time  1d
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size:  4 option: 58 T1
 12h
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size:  4 option: 59 T2
 21h
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size:  4 option:  1
netmask  255.255.255.0
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size:  4 option: 28
broadcast  192.168.14.255
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size: 14 option: 15
domain-name  openstacklocal
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size:  4 option:  3
router  192.168.14.1
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size: 14 option:121
classless-static-route  20:a9:fe:a9:fe:c0:a8:0e:28:00:c0:a8:0e:01
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size:  4 option:  6
dns-server  192.168.13.12
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size:  2 option: 26
mtu  1454
Jun 27 13:33:31 dnsmasq-dhcp[17906]: 361721128 available DHCP subnet:
192.168.14.0/255.255.255.0
Jun 27 13:33:31 dnsmasq-dhcp[17906]: 361721128 vendor class: MSFT 5.0
Jun 27 13:33:31 dnsmasq-dhcp[17906]: 361721128 client provides name:
blb44cehvrt463
Jun 27 13:33:31 dnsmasq-dhcp[17906]: 361721128 DHCPDISCOVER(tapb8a9ad98-30)
00:50:56:b8:6c:e6 ignored
Jun 27 13:33:39 dnsmasq-dhcp[17906]: 361721128 available DHCP subnet:
192.168.14.0/255.255.255.0
Jun 27 13:33:39 dnsmasq-dhcp[17906]: 361721128 vendor class: MSFT 5.0
Jun 27 13:33:39 dnsmasq-dhcp[17906]: 361721128 client provides name:
blb44cehvrt463
Jun 27 13:33:39 dnsmasq-dhcp[17906]: 361721128 DHCPDISCOVER(tapb8a9ad98-30)
00:50:56:b8:6c:e6 ignored


My compute node has the following:

root@compute01:~# brctl show

bridge name bridge id   STP enabled interfaces

virbr0  8000.52540066f515   yes   

Re: [Openstack] [Keystone] Source IP address in tokens

2016-06-27 Thread 林自均
Hi Steve & Morgan,

Thank you for your reply! I see the reasons not to validate tokens against
their source IP addresses.

One more question for Morgan: you mentioned that I should use the shortest
life span for tokens (perhaps 1 hour?), but that will make users type in
their usernames and passwords too often. Say I want to provide a
"Remember me for 30 days" checkbox; is there a better way than setting the
life span of tokens to 30 days?
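
That is, I guess I could just raise the lifetime in keystone.conf (a sketch;
the value is in seconds, so 2592000 is roughly 30 days):

[token]
# default is 3600 (1 hour); long-lived bearer tokens widen the replay window
expiration = 2592000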

John

Morgan Fainberg wrote on Monday, June 27, 2016 at 2:11 PM:

>
> On Jun 26, 2016 19:39, "林自均"  wrote:
> >
> > Hi all,
> >
> > I have the following scenario:
> >
> > 1. On client machine A, a user obtains an auth token with a username and
> password.
> > 2. The user can use the auth token to do operations on client machine A.
> > 3. A thief steals the auth token, and do operations on client machine B.
> >
> > Can Keystone check the auth token's source IP (which is client machine A
> in the above example) to prevent thieves to use it? Does this feature
> exist? Or is it a work in progress? Thanks for the help!
> >
> > John
> >
>
> > ___
> > Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > Post to : openstack@lists.openstack.org
> > Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >
>
> Unfortunately, validating tokens in this way will induce a number of
> failures. The user's token is passed through from one service to another
> for subsequent actions (e.g. nova talking to glance to get the proper
> image).
>
> We are working on changing how AuthZ is handled when it is service to
> service (nova to glance or cinder) vs when it is user to service.
>
> While we have had the concept of token binding (requiring an x509 client
> cert for example) the above mentioned limitation has made the feature a
> non-starter. Generally speaking bearer tokens are known to have this issue
> and keystone tokens are bearer tokens.
>
> The best mitigation is to use TLS for communication to the endpoints (user
> -> service) and limit the life span of the tokens to the shortest window
> possible (making replay attacks significantly more difficult as the tokens
> expire quickly).
>
> Eventually we can work on solving this, but there is a bunch of work
> needed before it can be worked on/explored.
>
> --Morgan
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack