Re: [Openstack] (no subject)

2017-10-14 Thread Trinath Somanchi
What do you really want to achieve? What setup are you trying?


Best Regards,
Trinath Somanchi | NXP | HSDC, INDIA

From: Raja Siddharth Raju [mailto:rsrajuoffic...@gmail.com]
Sent: Saturday, October 14, 2017 3:19 PM
To: openstack@lists.openstack.org
Subject: [Openstack] (no subject)


I have been asked to put a host IP during the installation of
OpenStack. I am running Ubuntu 16.04 in Oracle VirtualBox.

Can you please help me with the solution ?
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] Subject: [Keystone][Tempest][QA] Tempest full fails with policy.v3cloudsample.json and gate is using old policy.json

2017-01-24 Thread Rodrigo Duarte
Hi Liam,

As you said, this is a known issue with the "policy.v3cloudsample.json"
policy file. The cloud_admin rule is supposed to mean something like: either I
have a project-scoped token for the "is_admin" project with the admin role,
or I have a domain-scoped token for the specified domain with the admin
role.

Currently, we are exploring the possibility of merging both files. We also
hold weekly meetings focused only on policy [1].

[1] http://eavesdrop.openstack.org/#Keystone_Policy_Meeting

On Tue, Jan 24, 2017 at 9:16 AM, Liam Young 
wrote:

> Hi,
>
> Firstly, apologies for the cross post from openstack@l.o.o but I think
> this is a more appropriate mailing list and I'd like to add some more
> information.
>
> I have been running tempest full against a Keystone v3 enabled cloud using
> the stable newton policy.v3cloudsample.json *1 and it is failing for me. I
> then checked what was happening at Keystone gate *2 and saw that the v3
> gate jobs appear to be using the old policy.json *3, which I assume is
> deprecated for v3, since granting the admin role on anything in effect gives a
> user cloud-admin.
>
> My questions are:
> 1) Should gate be using policy.v3cloudsample.json to run v3 tests?
> 2) Should I expect a tempest full run to pass against a Newton deployment
> using policy.v3cloudsample.json ?
>
> What I'm seeing is that some tests (like tempest.api.compute.admin.test_quotas)
> fail when they call list_domains. This seems to be because the test
> creates:
>
> 1) A new project in the admin domain
> 2) A new user in the admin domain
> 3) Grants the admin role on the new project to the new user.
>
> The test then authenticates with the new users credentials and attempts to
> list_domains. The policy.json, however, has:
>
>
> "cloud_admin": "role:admin and (token.is_admin_project:True or
> domain_id:363ab68785c24c81a784edca1bceb935)",
> ...
> "identity:list_domains": "rule:cloud_admin",
>
> From tempest I see:
>
> ==
> FAIL: tempest.api.compute.admin.test_quotas.QuotasAdminTestJSON.test_
> delete_quota[id-389d04f0-3a41-405f-9317-e5f86e3c44f0]
> tags: worker-0
> --
> Empty attachments:
>   stderr
>   stdout
>
> pythonlogging:'': {{{2017-01-23 15:57:09,806 2014 INFO
> [tempest.lib.common.rest_client] Request 
> (QuotasAdminTestJSON:test_delete_quota):
> 403 GET http://10.5.36.109:35357/v3/domains?name=admin_domain 0.066s}}}
>
> Traceback (most recent call last):
>   File "tempest/api/compute/admin/test_quotas.py", line 128, in
> test_delete_quota
> project = self.identity_utils.create_project(name=project_name,
>   File "tempest/test.py", line 470, in identity_utils
> project_domain_name=domain)
>   File "tempest/lib/common/cred_client.py", line 210, in get_creds_client
> roles_client, domains_client, project_domain_name)
>   File "tempest/lib/common/cred_client.py", line 142, in __init__
> name=domain_name)['domains'][0]
>   File "tempest/lib/services/identity/v3/domains_client.py", line 57, in
> list_domains
> resp, body = self.get(url)
>   File "tempest/lib/common/rest_client.py", line 290, in get
> return self.request('GET', url, extra_headers, headers)
>   File "tempest/lib/common/rest_client.py", line 663, in request
> self._error_checker(resp, resp_body)
>   File "tempest/lib/common/rest_client.py", line 755, in _error_checker
> raise exceptions.Forbidden(resp_body, resp=resp)
> tempest.lib.exceptions.Forbidden: Forbidden
> Details: {u'message': u'You are not authorized to perform the requested
> action: identity:list_domains', u'code': 403, u'title': u'Forbidden'}
>
> In the keystone log I see:
>
> (keystone.policy.backends.rules): 2017-01-23 15:35:57,198 DEBUG enforce
> identity:list_domains: {'is_delegated_auth': False,
> 'access_token_id': None,
> 'user_id': u'3fd9e70825d648d996080d855cf9c181',
> 'roles': [u'Admin'],
> 'user_domain_id': u'363ab68785c24c81a784edca1bceb935',
> 'consumer_id': None,
> 'trustee_id': None,
> 'is_domain': False,
> 'trustor_id': None,
> 'token': <KeystoneToken (audit_chain_id=4cQHEfwhSvuvibK4TAjKUw)
> at 0x7fbcceaa33c8>,
> 'project_id': u'b48ba24e96d84de4a48077b9310faac7',
> 'trust_id': None,
> 'project_domain_id': u'363ab68785c24c81a784edca1bceb935'}
> (keystone.common.wsgi): 2017-01-23 15:35:57,199 WARNING You are not
> authorized to perform the requested action: identity:list_domains
>
> This appears to be project scoped. If I update the policy.json to grant
> cloud_admin when the project is in the admin domain, then that seems to fix
> things. The change I'm trying is:
>
>  3c3,4
> < "cloud_admin": "role:admin and (token.is_admin_project:True or
> domain_id:admin_domain_id)",
> ---
> > "bob": "project_domain_id:363ab68785c24c81a784edca1bceb935 or
> domain_id:363ab68785c24c81a784edca1bceb935",
> > "cloud_admin": "role:admin and (token.is_admin_project:True or
> rule:bob)",
>
> 
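To make the proposed change concrete, here is a hedged sketch (not keystone's actual oslo.policy enforcement code) of evaluating the old cloud_admin rule versus the proposed "bob" variant against the credentials seen in the keystone log above; the domain id is the one from the thread:

```python
# Hedged sketch: old vs. proposed cloud_admin rule, evaluated against the
# project-scoped credentials from the keystone debug log above.
ADMIN_DOMAIN = '363ab68785c24c81a784edca1bceb935'
creds = {
    'roles': ['Admin'],
    'domain_id': None,                  # project-scoped token: no domain_id
    'project_domain_id': ADMIN_DOMAIN,  # but the project lives in admin_domain
}
has_admin = any(r.lower() == 'admin' for r in creds['roles'])
# Old rule only checks domain_id, so a project-scoped token is refused.
old_rule = has_admin and creds['domain_id'] == ADMIN_DOMAIN
# Proposed rule also accepts a matching project_domain_id.
new_rule = has_admin and (creds['domain_id'] == ADMIN_DOMAIN
                          or creds['project_domain_id'] == ADMIN_DOMAIN)
print(old_rule, new_rule)  # → False True
```

This mirrors why the 403 disappears after the change: the token is project-scoped, so only the project_domain_id check can match.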

Re: [Openstack] (no subject)

2016-12-21 Thread Atif Munir
Thanks, everyone, for the replies.

The problem got resolved by adding

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

in the
/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py
file, and I was able to launch my first instance.
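For context, a minimal sketch of the relevant local_settings.py lines; the memcached LOCATION value is an assumption and should match your deployment's CACHES setting:

```python
# Minimal sketch of the session/cache settings in local_settings.py.
# The memcached host/port here is illustrative, not from the thread.
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    },
}
print(SESSION_ENGINE)
```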


This is really a great product and a game changer for the next 5-10 years.


Regards,
Atif



On Wed, Dec 21, 2016 at 7:37 PM, Jose Manuel Ferrer Mosteiro <
jmferrer.paradigmatecnolog...@gmail.com> wrote:

> Double-check that you have closed all ' and " quotes in the variables you changed.
>
> Maybe the problem could be in the values of OPENSTACK_KEYSTONE_DEFAULT_DOMAIN,
> CACHES['default']['LOCATION'], OPENSTACK_HOST or TIME_ZONE?
>
> You can use meld to compare original configuration file and your
> configuration file.
>
> Regards,
>   Jose Manuel
>
>
>
>
> El 2016-12-21 12:57, Neil Jerram escribió:
>
> Hi Atif,
>
> There is incorrect Python indentation in the local_settings file,
> /usr/share/openstack-dashboard/openstack_dashboard/
> local/local_settings.py.
>
> From the perspective of the vanilla Horizon project, I believe
> local_settings.py is a file that the user can create and/or modify in order
> to influence how their own web UI looks.  So it could be that you created
> that file yourself, or it could be that it was created by the install
> method that you are using.
>
> But either way, you can just open the file yourself and see if you can see
> and fix the indentation problem.
>
> Regards,
> Neil
>
>
> On Wed, Dec 21, 2016 at 11:45 AM Atif Munir  wrote:
>
>>
>> After a successful installation of OpenStack, I am getting this error when
>> I open http://controller/horizon. The error message is from the
>> Apache2 error logs. Please advise. Thanks
>>
>> Atif
>>
>>

Re: [Openstack] (no subject)

2016-12-21 Thread Jose Manuel Ferrer Mosteiro
 

Double-check that you have closed all ' and " quotes in the variables you changed.

Maybe the problem could be in the values of
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN, CACHES['default']['LOCATION'],
OPENSTACK_HOST or TIME_ZONE?

You can use meld to compare original configuration file and your
configuration file. 
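The "compare against the original file" advice can also be done with the standard library's difflib (meld does the same thing interactively); the settings strings below are stand-ins, including a deliberately unclosed quote of the kind this thread warns about:

```python
# Sketch: diff a modified settings snippet against the original to spot
# an unclosed quote. The strings are illustrative stand-ins.
import difflib

original = "OPENSTACK_HOST = '127.0.0.1'\nTIME_ZONE = 'UTC'\n"
modified = "OPENSTACK_HOST = 'controller\nTIME_ZONE = 'UTC'\n"  # unclosed quote

diff = list(difflib.unified_diff(original.splitlines(), modified.splitlines(),
                                 'original', 'modified', lineterm=''))
print('\n'.join(diff))
```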

Regards,
 Jose Manuel 

El 2016-12-21 12:57, Neil Jerram escribió: 

> Hi Atif,
> 
> There is incorrect Python indentation in the local_settings file, 
> /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py.
> 
> From the perspective of the vanilla Horizon project, I believe 
> local_settings.py is a file that the user can create and/or modify in order 
> to influence how their own web UI looks. So it could be that you created that 
> file yourself, or it could be that it was created by the install method that 
> you are using.
> 
> But either way, you can just open the file yourself and see if you can see 
> and fix the indentation problem.
> 
> Regards, Neil
> 
> On Wed, Dec 21, 2016 at 11:45 AM Atif Munir  wrote: 
> 
>> After a successful installation of OpenStack, I am getting this error when I
>> open http://controller/horizon. The error message is from the
>> Apache2 error logs. Please advise. Thanks
>> 
>> Atif 
>> 

Re: [Openstack] (no subject)

2016-12-21 Thread Neil Jerram
Hi Atif,

There is incorrect Python indentation in the local_settings file,
/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py.

From the perspective of the vanilla Horizon project, I believe
local_settings.py is a file that the user can create and/or modify in order
to influence how their own web UI looks.  So it could be that you created
that file yourself, or it could be that it was created by the install
method that you are using.

But either way, you can just open the file yourself and see if you can see
and fix the indentation problem.
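The failure mode from the traceback can be reproduced and located without loading all of Horizon, since compile() reports the offending line; the snippet below uses a stand-in string and file name:

```python
# Sketch: detect the unexpected indent that broke local_settings.py.
# The source string and file name are stand-ins for the real file.
source = "LAUNCH_INSTANCE_DEFAULTS = {}\n    'profile_support': None,\n"
try:
    compile(source, 'local_settings.py', 'exec')
    error_line = None
except IndentationError as err:
    # err.lineno points at the badly indented line, as in the Apache log.
    error_line = err.lineno
print(error_line)  # → 2
```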

Regards,
Neil


On Wed, Dec 21, 2016 at 11:45 AM Atif Munir  wrote:

>
> After a successful installation of OpenStack, I am getting this error when
> I open http://controller/horizon. The error message is from the
> Apache2 error logs. Please advise. Thanks
>
> Atif
>
>
> [Wed Dec 21 16:30:36.170646 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754] mod_wsgi (pid=5302): Target
> WSGI script
> '/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi'
> cannot be loaded as Python module.
> [Wed Dec 21 16:30:36.170708 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754] mod_wsgi (pid=5302):
> Exception occurred processing WSGI script
> '/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi'.
> [Wed Dec 21 16:30:36.170734 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754] Traceback (most recent call
> last):
> [Wed Dec 21 16:30:36.170757 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754]   File
> "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi", line
> 16, in 
> [Wed Dec 21 16:30:36.170790 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754] application =
> get_wsgi_application()
> [Wed Dec 21 16:30:36.170803 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754]   File
> "/usr/lib/python2.7/dist-packages/django/core/wsgi.py", line 14, in
> get_wsgi_application
> [Wed Dec 21 16:30:36.170820 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754] django.setup()
> [Wed Dec 21 16:30:36.170830 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754]   File
> "/usr/lib/python2.7/dist-packages/django/__init__.py", line 17, in setup
> [Wed Dec 21 16:30:36.170844 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754]
> configure_logging(settings.LOGGING_CONFIG, settings.LOGGING)
> [Wed Dec 21 16:30:36.170853 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754]   File
> "/usr/lib/python2.7/dist-packages/django/conf/__init__.py", line 48, in
> __getattr__
> [Wed Dec 21 16:30:36.170868 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754] self._setup(name)
> [Wed Dec 21 16:30:36.170894 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754]   File
> "/usr/lib/python2.7/dist-packages/django/conf/__init__.py", line 44, in
> _setup
> [Wed Dec 21 16:30:36.170910 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754] self._wrapped =
> Settings(settings_module)
> [Wed Dec 21 16:30:36.170920 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754]   File
> "/usr/lib/python2.7/dist-packages/django/conf/__init__.py", line 92, in
> __init__
> [Wed Dec 21 16:30:36.170933 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754] mod =
> importlib.import_module(self.SETTINGS_MODULE)
> [Wed Dec 21 16:30:36.170943 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754]   File
> "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
> [Wed Dec 21 16:30:36.170957 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754] __import__(name)
> [Wed Dec 21 16:30:36.170973 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754]   File
> "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/settings.py",
> line 317, in 
> [Wed Dec 21 16:30:36.170991 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754] from local.local_settings
> import *  # noqa
> [Wed Dec 21 16:30:36.171047 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754]   File
> "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/local/local_settings.py",
> line 322
> [Wed Dec 21 16:30:36.171061 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754] 'profile_support': None,
> [Wed Dec 21 16:30:36.171066 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754] ^
> [Wed Dec 21 16:30:36.171071 2016] [wsgi:error] [pid 5302:tid
> 140489127257856] [remote 172.16.72.2:40754] IndentationError: unexpected
> indent
>
> 

Re: [Openstack] (no subject)

2016-12-21 Thread Turbo Fredriksson
On 21 Dec 2016, at 11:33, Atif Munir  wrote:

> [Wed Dec 21 16:30:36.170646 2016] [wsgi:error] [pid 5302:tid 140489127257856] 
> [remote 172.16.72.2:40754] mod_wsgi (pid=5302): Target WSGI script 
> '/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi' cannot 
> be loaded as Python module.

Does this file exist? Is it executable? Is it a Python script?

> [Wed Dec 21 16:30:36.171047 2016] [wsgi:error] [pid 5302:tid 140489127257856] 
> [remote 172.16.72.2:40754]   File 
> "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/local/local_settings.py",
>  line 322
> [Wed Dec 21 16:30:36.171061 2016] [wsgi:error] [pid 5302:tid 140489127257856] 
> [remote 172.16.72.2:40754] 'profile_support': None,
> [Wed Dec 21 16:30:36.171066 2016] [wsgi:error] [pid 5302:tid 140489127257856] 
> [remote 172.16.72.2:40754] ^
> [Wed Dec 21 16:30:36.171071 2016] [wsgi:error] [pid 5302:tid 140489127257856] 
> [remote 172.16.72.2:40754] IndentationError: unexpected indent

What do the ten lines before and after line 322 look like?
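Answering "what do the lines around 322 look like" can be scripted: print a window of context around the reported error line. The file contents below are a toy stand-in:

```python
# Sketch: print a context window around a reported error line
# (the thread's real error was at line 322 of local_settings.py).
def context(lines, errline, radius=10):
    start = max(errline - radius, 1)
    stop = min(errline + radius, len(lines))
    return [(i, lines[i - 1]) for i in range(start, stop + 1)]

lines = ['line %d' % i for i in range(1, 41)]   # toy file contents
window = context(lines, 20, radius=3)
print(window[0], window[-1])
```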


Re: [Openstack] (no subject)

2016-11-15 Thread Hakan Unal
Hello Prasad, is your deployment a production or a test
environment?

This is the first time I have seen a Tacker question. I think Tacker will be a
most important part of OpenStack.
Hakan.

Regards & best wishes


---- Original message ----
From: prasad kokkula 
Date: 15/11/2016 19:31 (GMT+03:00)
To: openstack@lists.openstack.org
Subject: [Openstack] (no subject)

Hi,

[Tacker]   I have tried to launch a VNF instance using Tacker. The VNF is
launched successfully and I am able to SSH into it.

I have hit an issue: the connection points (CP2, CP3) are not getting an IP
address, except the management CP (CP1). Could you please let me know whether
this is a Tacker issue or a configuration mismatch.

I have installed the OpenStack Newton release on CentOS 7. Please let me know
if you need any other configuration.



=
Below are the net-list ip's

[root@localhost (keystone_admin)]# neutron net-list
+--------------------------------------+-------------+-------------------------------------------------------+
| id                                   | name        | subnets                                               |
+--------------------------------------+-------------+-------------------------------------------------------+
| 55077c0e-8291-4730-99b4-f280967cb69e | public      | 39256aad-d075-4c38-bf2c-14613df2252e 172.24.4.224/28  |
| 73bbaf70-9bdd-4359-a3a2-09dbd5734341 | private     | 09b9018c-ca3b-46ee-9a4e-507e5124139f 10.0.0.0/24      |
| d0560ee9-9ab0-4df8-a0d2-14064950a17c | vnf_mgmt    | 01d2b67c-ee28-4875-92e0-a8e51fdf8401 192.168.200.0/24 |
| f98f38b8-8b6c-4adb-b0e9-a265ce969acf | vnf_private | 61d39f59-2ff7-4292-afd9-536f007fd30c 192.168.201.0/24 |
+--------------------------------------+-------------+-------------------------------------------------------+
[root@localhost (keystone_admin)]#

Tosca file used for vnf creation.


[root@localhost (keystone_admin)]# cat sample-vnfd.yaml

tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: Demo vCPE example

metadata:
  template_name: sample-tosca-vnfd

topology_template:
  node_templates:
VDU1:
  type: tosca.nodes.nfv.VDU.Tacker
  capabilities:
nfv_compute:
  properties:
num_cpus: 1
mem_size: 512 MB
disk_size: 1 GB
  properties:
image: cirros1
availability_zone: nova
mgmt_driver: noop
user_data_format: RAW
config: |
  param0: key1
  param1: key2

CP1:
  type: tosca.nodes.nfv.CP.Tacker
  properties:
management: true
  requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU1

CP2:
  type: tosca.nodes.nfv.CP.Tacker
  properties:
anti_spoofing_protection: false
  requirements:
- virtualLink:
node: VL2
- virtualBinding:
node: VDU1

CP3:
  type: tosca.nodes.nfv.CP.Tacker
  properties:
anti_spoofing_protection: false
  requirements:
- virtualLink:
node: VL3
- virtualBinding:
node: VDU1

VL1:
  type: tosca.nodes.nfv.VL
  properties:
network_name: vnf_mgmt
vendor: Tacker

VL2:
  type: tosca.nodes.nfv.VL
  properties:
network_name: vnf_private
vendor: Tacker

VL3:
  type: tosca.nodes.nfv.VL
  properties:
network_name: private
vendor: Tacker

===
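Reading the template above, each connection point attaches to a network via its virtualLink requirement; this small sketch (my summary, not Tacker code) makes the wiring explicit, since CP2 and CP3 are exactly the ports the poster reports as missing addresses:

```python
# Hedged summary of the TOSCA template's CP→VL→network wiring.
cp_to_vl = {'CP1': 'VL1', 'CP2': 'VL2', 'CP3': 'VL3'}
vl_to_network = {'VL1': 'vnf_mgmt', 'VL2': 'vnf_private', 'VL3': 'private'}

# Resolve each connection point to the neutron network it should land on.
cp_networks = {cp: vl_to_network[vl] for cp, vl in cp_to_vl.items()}
print(cp_networks['CP2'], cp_networks['CP3'])  # → vnf_private private
```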

Regards,
Varaprasad


Re: [OpenStack-Infra] Subject: Re: [openstack-manuals] Create a stable/newton branch for openstack-manuals

2016-11-12 Thread Olena Logvinova
Thanks Clark! And thanks Andreas for the update!

Cheers
Olena

On Sat, Nov 12, 2016 at 8:53 PM, Andreas Jaeger  wrote:

> On 11/11/2016 12:41 PM, Olena Logvinova wrote:
>
>> Thanks Clark!
>>
>> The HEAD git ref is fine.
>>
>
>
> FYI, Clark just did this - thanks!
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>


-- 
Best regards,
Olena Logvinova,
Technical Writer | Mirantis | 38, Nauki av., Kharkiv, Ukraine
ologvin...@mirantis.com | +380950903196
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Subject: Re: [openstack-manuals] Create a stable/newton branch for openstack-manuals

2016-11-12 Thread Andreas Jaeger

On 11/11/2016 12:41 PM, Olena Logvinova wrote:

Thanks Clark!

The HEAD git ref is fine.



FYI, Clark just did this - thanks!

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [Openstack] (no subject)

2016-06-23 Thread Eugen Block

Before you can execute any administrative tasks you have to authenticate.
So according to the docs I use
(http://docs.openstack.org/mitaka/install-guide-obs/keystone-services.html)
you need some credentials in your environment, at least:


OS_TOKEN (only for initializing the identity service)
OS_URL
OS_IDENTITY_API_VERSION

The example looks like this:

export OS_TOKEN=294a4c8a8a475f9b9836
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

The token is created in a previous step, so make sure you have  
followed the guide you are using, otherwise you won't get very far. ;-)
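A quick way to confirm the three bootstrap variables are actually set before running the client; the values below are the guide's placeholders, not real credentials:

```python
# Sketch: verify the bootstrap credentials the guide exports are present.
# Values are the install guide's placeholders, not real credentials.
import os

os.environ.update({
    'OS_TOKEN': '294a4c8a8a475f9b9836',
    'OS_URL': 'http://controller:35357/v3',
    'OS_IDENTITY_API_VERSION': '3',
})
missing = [k for k in ('OS_TOKEN', 'OS_URL', 'OS_IDENTITY_API_VERSION')
           if not os.environ.get(k)]
print('missing:', missing)  # → missing: []
```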


Regards,
Eugen


Zitat von venkat boggarapu :


Hi All,

We are getting the below error while installing glance service in our
environment.


[root@controller ~]# openstack user create --domain default
--password-prompt glance
Missing parameter(s):
Set a username with --os-username, OS_USERNAME, or auth.username
Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url
Set a scope, such as a project or domain, set a project scope with
--os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope
with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name.

Can someone please help with this issue?


With regards
venkat




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983




Re: [Openstack] (no subject)

2016-06-23 Thread zhaolihuisky
You should execute the command ". admin-openrc" first.

---- Original message ----
From: venkat boggarapu
Sent: Thursday, 23 June 2016 19:12 (GMT+03:00)
To: openstack
Subject: [Openstack] (no subject)
Hi All,
We are getting the below error while installing glance service in our 
environment.

[root@controller ~]# openstack user create --domain default --password-prompt glance
Missing parameter(s):
Set a username with --os-username, OS_USERNAME, or auth.username
Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url
Set a scope, such as a project or domain, set a project scope with
--os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope
with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name.

Can someone please help with this issue?

With regards
venkat
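Sourcing admin-openrc fixes the error because it exports the OS_* variables the client said were missing. The file contents below are hypothetical illustrative values, parsed here just to show which variables such a file sets:

```python
# Hypothetical admin-openrc contents (illustrative values only); parsing it
# shows which OS_* variables sourcing the file would export.
rc_text = """\
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_PROJECT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v3
"""
env = {}
for line in rc_text.splitlines():
    if line.startswith('export ') and '=' in line:
        key, value = line[len('export '):].split('=', 1)
        env[key] = value
print(sorted(env))
```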



Re: [Openstack] (no subject)

2015-10-09 Thread nithish B
Hi Ayushi,
The below link should answer your question:

https://ask.openstack.org/en/question/25815/can-i-build-a-vm-in-swift-object-storage/

Regards,
Nitish B.

On Fri, Oct 9, 2015 at 12:59 AM, Ayushi Kumar 
wrote:

> Hi,
>
> Why can't we use object storage for launching a VM? Please help.
>
>
> Regards
> Ayushi
>
>
>


Re: [Openstack] (no subject)

2015-09-29 Thread Abhishek Shrivastava
Hi Twinkle,

You can use the following link for contributing to OpenStack nova:

   - https://wiki.openstack.org/wiki/How_To_Contribute


On Tue, Sep 29, 2015 at 3:56 PM, Twinkle Chawla 
wrote:

> Hello,
> I am an OpenStack aspirant seeking to contribute to 'nova', but as I am
> new to this I could not find a way in. I need help!
>
> Regards,
> Twinkle Chawla
>
>
>


-- 


*Thanks & Regards,*
*Abhishek*
*Cloudbyte Inc. *


Re: [Openstack] (no subject)

2015-09-28 Thread Alexandra Settle
Hi Vaishali,

Thanks for your email!
Please join us in the #openstack-opw channel on
irc.freenode.org. We can chat about different
projects and help you get started with OpenStack.

Thanks!

Alexandra Settle
Information Developer II
Rackspace Private Cloud, Australia



alexandra.set...@rackspace.com
phone: +61 1800 722 577
mobile: +61 437 234 494

www.rackspace.com.au





From: Vaishali Sharma
Date: Monday, 28 September 2015 10:11 pm
To: "openstack@lists.openstack.org"
Subject: [Openstack] (no subject)


Hello... I am an Outreachy 2015 aspirant, and I was looking for some bugs I
could work on. As I am totally new to this, I am having difficulty choosing
one. Can anyone help me with it?





Re: [Openstack] (no subject)

2015-02-22 Thread Mark collier
If you are attempting to sign in to vote on summit sessions, please email 
sum...@openstack.org with the details. That will open a support ticket, and our 
summit team will follow up. 




 On Feb 21, 2015, at 11:55 PM, Israel Koffman isra...@runcom.com wrote:
 
  
 I still can't sign in to the OpenStack website with my email and password.
 Please check!
  
 
  
 
  
 
  
 
 Best Regards,
 
  
 
 Israel Koffman
 
 CEO
 
 Runcom Technologies. Ltd.
 
 Direct Phone:+972-3-9428874
 
 Office Phone:+972-3-942
 
 Mobile Phone:+972-545-303110
 
 USA Mobile Phone: 1-646-530-1502
 
 Skype: Israel.Koffman
 
 FAX:+972-3-9528805
 
 Websites: www.runcom.com and 
 http://rf-mw.org/multiple_access_method_ofdma.html
  
  Please consider the environment before printing this e-mail. Thank you.
  


Re: [Openstack] (no subject)

2015-01-19 Thread Nikesh Kumar Mahalka
Actually, Swift was disabled in my devstack local.conf, but the entries
below were left uncommented:
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5

SWIFT_REPLICAS=1

SWIFT_DATA_DIR=$DEST/data


So after unstack and clean, I commented out these entries, ran stack again,
re-ran the tempest tests, and now there is no error.
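For reference, a hedged local.conf sketch that disables Swift outright (the service names follow devstack conventions; the hash value is only a placeholder):

```ini
[[local|localrc]]
# Disable all Swift services explicitly...
disable_service s-proxy s-object s-container s-account
# ...and leave the SWIFT_* variables commented out so stack.sh
# never tries to configure Swift:
# SWIFT_HASH=<random-unique-string>
# SWIFT_REPLICAS=1
# SWIFT_DATA_DIR=$DEST/data
```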


Regards
Nikesh

On Tue, Jan 20, 2015 at 12:04 AM, Anne Gentle a...@openstack.org wrote:
 Is it related to needing to cap boto version 2.35.0?

 Read through this for more:

 https://bugs.launchpad.net/nova/+bug/1408987

 On Mon, Jan 19, 2015 at 12:11 PM, Nikesh Kumar Mahalka
 nikeshmaha...@vedams.com wrote:

 The test case below is failing on LVM in a Kilo devstack:

 ==
 FAIL: tearDownClass
 (tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest)
 --
 Traceback (most recent call last):
 _StringException: Traceback (most recent call last):
   File /opt/stack/tempest/tempest/test.py, line 301, in tearDownClass
 teardown()
   File /opt/stack/tempest/tempest/thirdparty/boto/test.py, line 272, in
 resource_cleanup
 raise exceptions.TearDownException(num=fail_count)
 TearDownException: 1 cleanUp operation failed



 did any one face this?




 Regards
 nikesh






Re: [Openstack] (no subject)

2014-11-07 Thread Sadia Bashir
Hi All,

I fixed the "stack_user_domain ID not set in heat.conf, falling back to
using default" error by following the links given below:

http://hardysteven.blogspot.fr/2014/04/heat-auth-model-updates-part-2-stack.html
http://docs.openstack.org/developer/keystone/cli_examples.html

but soon after this I started getting an oslo rootwrap error in the nova.log,
openvswitch-agent.log and ceilometer-api.log files, while the glance, keystone
and heat commands and services are working correctly.

I followed the link:
https://ask.openstack.org/en/question/29758/why-does-nova-api-fail-icehouse-usrbinnova-rootwrap-not-found/
to resolve the problem, but even after reinstalling and reconfiguring the nova
and neutron services and verifying the existence of the nova and neutron
rootwrap files in /usr/bin/, I am still getting these errors:

nova-api.log:

2014-11-07 13:40:30.609 30639 TRACE nova Traceback (most recent call last):
2014-11-07 13:40:30.609 30639 TRACE nova   File /usr/bin/nova-api, line
10, in module
2014-11-07 13:40:30.609 30639 TRACE nova sys.exit(main())
2014-11-07 13:40:30.609 30639 TRACE nova   File
/usr/lib/python2.7/dist-packages/nova/cmd/api.py, line 53, in main
2014-11-07 13:40:30.609 30639 TRACE nova server =
service.WSGIService(api, use_ssl=should_use_ssl)
2014-11-07 13:40:30.609 30639 TRACE nova   File
/usr/lib/python2.7/dist-packages/nova/service.py, line 330, in __init__
2014-11-07 13:40:30.609 30639 TRACE nova self.manager =
self._get_manager()
2014-11-07 13:40:30.609 30639 TRACE nova   File
/usr/lib/python2.7/dist-packages/nova/service.py, line 374, in
_get_manager
2014-11-07 13:40:30.609 30639 TRACE nova return manager_class()
2014-11-07 13:40:30.609 30639 TRACE nova   File
/usr/lib/python2.7/dist-packages/nova/api/manager.py, line 30, in __init__
2014-11-07 13:40:30.609 30639 TRACE nova
self.network_driver.metadata_accept()
2014-11-07 13:40:30.609 30639 TRACE nova   File
/usr/lib/python2.7/dist-packages/nova/network/linux_net.py, line 666, in
metadata_accept
2014-11-07 13:40:30.609 30639 TRACE nova iptables_manager.apply()
2014-11-07 13:40:30.609 30639 TRACE nova   File
/usr/lib/python2.7/dist-packages/nova/network/linux_net.py, line 434, in
apply
2014-11-07 13:40:30.609 30639 TRACE nova self._apply()
2014-11-07 13:40:30.609 30639 TRACE nova   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py, line
249, in inner
2014-11-07 13:40:30.609 30639 TRACE nova return f(*args, **kwargs)
2014-11-07 13:40:30.609 30639 TRACE nova   File
/usr/lib/python2.7/dist-packages/nova/network/linux_net.py, line 454, in
_apply
2014-11-07 13:40:30.609 30639 TRACE nova attempts=5)
2014-11-07 13:40:30.609 30639 TRACE nova   File
/usr/lib/python2.7/dist-packages/nova/network/linux_net.py, line 1211, in
_execute
2014-11-07 13:40:30.609 30639 TRACE nova return utils.execute(*cmd,
**kwargs)
2014-11-07 13:40:30.609 30639 TRACE nova   File
/usr/lib/python2.7/dist-packages/nova/utils.py, line 165, in execute
2014-11-07 13:40:30.609 30639 TRACE nova return
processutils.execute(*cmd, **kwargs)
2014-11-07 13:40:30.609 30639 TRACE nova   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py,
line 195, in execute
2014-11-07 13:40:30.609 30639 TRACE nova cmd=sanitized_cmd)
2014-11-07 13:40:30.609 30639 TRACE nova ProcessExecutionError: Unexpected
error while running command.
2014-11-07 13:40:30.609 30639 TRACE nova Command: sudo nova-rootwrap
/etc/nova/rootwrap.conf iptables-save -c
2014-11-07 13:40:30.609 30639 TRACE nova Exit code: 1
2014-11-07 13:40:30.609 30639 TRACE nova Stdout: u''
2014-11-07 13:40:30.609 30639 TRACE nova Stderr: u'Traceback (most recent
call last):\n  File /usr/bin/nova-rootwrap, line 6, in module\n  $
2014-11-07 13:40:30.609 30639 TRACE nova
2014-11-07 13:40:30.701 30652 INFO nova.openstack.common.service [-] Parent
process has died unexpectedly, exiting
2014-11-07 13:40:30.702 30647 INFO nova.openstack.common.service [-] Parent
process has died unexpectedly, exiting
2014-11-07 13:40:30.702 30645 INFO nova.openstack.common.service [-] Parent
process has died unexpectedly, exiting
2014-11-07 13:40:30.702 30652 INFO nova.wsgi [-] Stopping WSGI server.
2014-11-07 13:40:30.702 30647 INFO nova.wsgi [-] Stopping WSGI server.
2014-11-07 13:40:30.702 30645 INFO nova.wsgi [-] Stopping WSGI server.
2014-11-07 13:40:30.702 30652 INFO nova.wsgi [-] WSGI server has stopped.
2014-11-07 13:40:30.703 30647 INFO nova.wsgi [-] WSGI server has stopped.
2014-11-07 13:40:30.703 30645 INFO nova.wsgi [-] WSGI server has stopped.
2014-11-07 13:40:30.705 30651 INFO nova.openstack.common.service [-] Parent
process has died unexpectedly, exiting
2014-11-07 13:40:30.706 30650 INFO nova.openstack.common.service [-] Parent
process has died unexpectedly, exiting
2014-11-07 13:40:30.701 30649 INFO nova.openstack.common.service [-] Parent
process has died unexpectedly, exiting
2014-11-07 13:40:30.706 30651 INFO nova.wsgi [-] Stopping WSGI server.


Re: [Openstack] (no subject)

2014-04-17 Thread Ken Peng

于 2014-4-17 18:29, Soumaya Almorabeti 写道:

this command gives me an error

ERROR: HTTPConnectionPool(host='controller', port=8774): request time out



Can you run this command:
telnet controller 8774

It may be due to network issues with the API host.
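The same reachability check can be scripted. This is a hedged sketch; the hostname "controller" and port 8774 are taken from the HTTPConnectionPool error quoted above:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or name resolution failed
        return False

if __name__ == "__main__":
    # 8774 is the nova-api port from the error message above.
    print(port_open("controller", 8774))
```

If this returns False, the problem is connectivity or the nova-api service not listening, rather than anything in the client configuration.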



Re: [Openstack] (no subject)

2014-03-28 Thread Roman Kravets
Adam,

I understand that, but I see that Swift always runs object replication for
all the data on the cluster.
That puts a big load on the server's hard drives.
Is that normal?

--
Best regards,
Roman Kravets


On Fri, Mar 28, 2014 at 5:29 AM, Adam Lawson alaw...@aqorn.com wrote:

 Swift is said to be eventually consistent because the data is stored then
 eventually distributed in a balanced way. You don't need to manually
 re-balance the rings constantly. Swift will do that for you. Re-balancing
 rings is usually initiated after you *change the ring structure*(add/remove 
 regions, add/remove zones, change device weights, etc).

 In your case since you only have one node, Swift will distribute the
 replicas across all 3 zones assuming you've configured 3x replication. When
 you add a node and update the rings, yes you'll want to re-balance. That
 will tell Swift to put a replica on the new node since Swift default
 behavior is to keep replica placements as unique as possible. That's the
 actual Swift vernacular everyone uses. ; )

 Unique replica placement strategy is as follows:

 Region (if defined) -> Zone -> Node -> Device -> Device with fewest replicas


 Good luck.

 Adam


 *Adam Lawson*
 AQORN, Inc.
 427 North Tatnall Street
 Ste. 58461
 Wilmington, Delaware 19801-2230
 Toll-free: (888) 406-7620



 On Thu, Mar 27, 2014 at 12:48 PM, Roman Kravets soft...@gmail.com wrote:

 Dear Adam,

 I have one storage server and 12 hard drives on it.
  For testing I split the disks into 4 zones. If I understood rightly, Swift
  moves data during the ring re-balance and places new data on the correct
  node right away.

 --
 Best regards,
 Roman Kravets


 On Thu, Mar 27, 2014 at 10:05 PM, Adam Lawson alaw...@aqorn.com wrote:

 Probably has to do with the fact you (I'm guessing) don't have very many
 drives on that server. Is that a correct statement? I know that even with
 50 drives across a cluster (still very small), rings balance is at 100%
 until the rings are adequately balanced. Look at your ring stats, drive
 count and 5 zones for more consistent reports.


 *Adam Lawson*
 AQORN, Inc.
 427 North Tatnall Street
 Ste. 58461
 Wilmington, Delaware 19801-2230
 Toll-free: (888) 406-7620



 On Thu, Mar 27, 2014 at 10:20 AM, Кравец Роман soft...@gmail.com wrote:

 Hello.

 I installed Openstack Swift to test server and upload 50 gb data.
 Now I see it in the log:
 root@storage1:/var/staff/softded# tail -n 1000 -f /var/log/syslog  |
 grep  replicated
 Mar 27 19:44:24 storage1 object-replicator 112746/187053 (60.27%)
 partitions replicated in 300.01s (375.81/sec, 3m remaining)
 Mar 27 19:47:44 storage1 object-replicator 187053/187053 (100.00%)
 partitions replicated in 499.71s (374.32/sec, 0s remaining)
 Mar 27 19:53:14 storage1 object-replicator 112863/187068 (60.33%)
 partitions replicated in 300.01s (376.20/sec, 3m remaining)
 Mar 27 19:56:29 storage1 object-replicator 187068/187068 (100.00%)
 partitions replicated in 494.53s (378.27/sec, 0s remaining)
 Mar 27 20:01:59 storage1 object-replicator 112343/187080 (60.05%)
 partitions replicated in 300.01s (374.47/sec, 3m remaining)
 Mar 27 20:05:18 storage1 object-replicator 187080/187080 (100.00%)
 partitions replicated in 498.55s (375.25/sec, 0s remaining)
 Mar 27 20:10:48 storage1 object-replicator 112417/187092 (60.09%)
 partitions replicated in 300.01s (374.71/sec, 3m remaining)

 Why object-replicator show different percent every time?

 Thank you!

 --
 Best regards,
 Roman Kravets








Re: [Openstack] (no subject)

2014-03-27 Thread Adam Lawson
Probably has to do with the fact you (I'm guessing) don't have very many
drives on that server. Is that a correct statement? I know that even with
50 drives across a cluster (still very small), rings balance is at 100%
until the rings are adequately balanced. Look at your ring stats, drive
count and 5 zones for more consistent reports.


*Adam Lawson*
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (888) 406-7620



On Thu, Mar 27, 2014 at 10:20 AM, Кравец Роман soft...@gmail.com wrote:

 Hello.

 I installed Openstack Swift to test server and upload 50 gb data.
 Now I see it in the log:
 root@storage1:/var/staff/softded# tail -n 1000 -f /var/log/syslog  |
 grep  replicated
 Mar 27 19:44:24 storage1 object-replicator 112746/187053 (60.27%)
 partitions replicated in 300.01s (375.81/sec, 3m remaining)
 Mar 27 19:47:44 storage1 object-replicator 187053/187053 (100.00%)
 partitions replicated in 499.71s (374.32/sec, 0s remaining)
 Mar 27 19:53:14 storage1 object-replicator 112863/187068 (60.33%)
 partitions replicated in 300.01s (376.20/sec, 3m remaining)
 Mar 27 19:56:29 storage1 object-replicator 187068/187068 (100.00%)
 partitions replicated in 494.53s (378.27/sec, 0s remaining)
 Mar 27 20:01:59 storage1 object-replicator 112343/187080 (60.05%)
 partitions replicated in 300.01s (374.47/sec, 3m remaining)
 Mar 27 20:05:18 storage1 object-replicator 187080/187080 (100.00%)
 partitions replicated in 498.55s (375.25/sec, 0s remaining)
 Mar 27 20:10:48 storage1 object-replicator 112417/187092 (60.09%)
 partitions replicated in 300.01s (374.71/sec, 3m remaining)

 Why object-replicator show different percent every time?

 Thank you!

 --
 Best regards,
 Roman Kravets




Re: [Openstack] (no subject)

2014-03-27 Thread Clay Gerrard
because it's on repeat.
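In other words, the replicator runs in a continuous loop and its counters reset at the start of every pass, which is why the percentage climbs and then jumps back down. A hedged sketch of parsing one of the quoted log lines to extract the per-pass progress:

```python
import re

line = ("Mar 27 19:44:24 storage1 object-replicator 112746/187053 (60.27%) "
        "partitions replicated in 300.01s (375.81/sec, 3m remaining)")

# done/total partitions and the percentage for the *current* pass only
m = re.search(r"(\d+)/(\d+) \((\d+\.\d+)%\)", line)
done, total, pct = int(m.group(1)), int(m.group(2)), float(m.group(3))
print(done, total, pct)
```

Comparing successive passes this way makes it clear that each pass starts over from zero rather than the percentage describing overall cluster health.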


On Thu, Mar 27, 2014 at 10:20 AM, Кравец Роман soft...@gmail.com wrote:

 Hello.

 I installed Openstack Swift to test server and upload 50 gb data.
 Now I see it in the log:
 root@storage1:/var/staff/softded# tail -n 1000 -f /var/log/syslog  |
 grep  replicated
 Mar 27 19:44:24 storage1 object-replicator 112746/187053 (60.27%)
 partitions replicated in 300.01s (375.81/sec, 3m remaining)
 Mar 27 19:47:44 storage1 object-replicator 187053/187053 (100.00%)
 partitions replicated in 499.71s (374.32/sec, 0s remaining)
 Mar 27 19:53:14 storage1 object-replicator 112863/187068 (60.33%)
 partitions replicated in 300.01s (376.20/sec, 3m remaining)
 Mar 27 19:56:29 storage1 object-replicator 187068/187068 (100.00%)
 partitions replicated in 494.53s (378.27/sec, 0s remaining)
 Mar 27 20:01:59 storage1 object-replicator 112343/187080 (60.05%)
 partitions replicated in 300.01s (374.47/sec, 3m remaining)
 Mar 27 20:05:18 storage1 object-replicator 187080/187080 (100.00%)
 partitions replicated in 498.55s (375.25/sec, 0s remaining)
 Mar 27 20:10:48 storage1 object-replicator 112417/187092 (60.09%)
 partitions replicated in 300.01s (374.71/sec, 3m remaining)

 Why object-replicator show different percent every time?

 Thank you!

 --
 Best regards,
 Roman Kravets




Re: [Openstack] (no subject)

2014-03-27 Thread Roman Kravets
Dear Adam,

I have one storage server and 12 hard drives on it.
For testing I split the disks into 4 zones. If I understood rightly, Swift
moves data during the ring re-balance and places new data on the correct node
right away.

--
Best regards,
Roman Kravets


On Thu, Mar 27, 2014 at 10:05 PM, Adam Lawson alaw...@aqorn.com wrote:

 Probably has to do with the fact you (I'm guessing) don't have very many
 drives on that server. Is that a correct statement? I know that even with
 50 drives across a cluster (still very small), rings balance is at 100%
 until the rings are adequately balanced. Look at your ring stats, drive
 count and 5 zones for more consistent reports.


 *Adam Lawson*
 AQORN, Inc.
 427 North Tatnall Street
 Ste. 58461
 Wilmington, Delaware 19801-2230
 Toll-free: (888) 406-7620



 On Thu, Mar 27, 2014 at 10:20 AM, Кравец Роман soft...@gmail.com wrote:

 Hello.

 I installed Openstack Swift to test server and upload 50 gb data.
 Now I see it in the log:
 root@storage1:/var/staff/softded# tail -n 1000 -f /var/log/syslog  |
 grep  replicated
 Mar 27 19:44:24 storage1 object-replicator 112746/187053 (60.27%)
 partitions replicated in 300.01s (375.81/sec, 3m remaining)
 Mar 27 19:47:44 storage1 object-replicator 187053/187053 (100.00%)
 partitions replicated in 499.71s (374.32/sec, 0s remaining)
 Mar 27 19:53:14 storage1 object-replicator 112863/187068 (60.33%)
 partitions replicated in 300.01s (376.20/sec, 3m remaining)
 Mar 27 19:56:29 storage1 object-replicator 187068/187068 (100.00%)
 partitions replicated in 494.53s (378.27/sec, 0s remaining)
 Mar 27 20:01:59 storage1 object-replicator 112343/187080 (60.05%)
 partitions replicated in 300.01s (374.47/sec, 3m remaining)
 Mar 27 20:05:18 storage1 object-replicator 187080/187080 (100.00%)
 partitions replicated in 498.55s (375.25/sec, 0s remaining)
 Mar 27 20:10:48 storage1 object-replicator 112417/187092 (60.09%)
 partitions replicated in 300.01s (374.71/sec, 3m remaining)

 Why object-replicator show different percent every time?

 Thank you!

 --
 Best regards,
 Roman Kravets






Re: [Openstack] (no subject)

2014-03-27 Thread Кравец Роман
If I understood rightly, Swift will check all the data on all hard disks
to make sure that every node has correct data?
That puts a very heavy load on the hard drives even when the node is idle,
and it causes problems when users upload data to the cluster (it reduces
upload speed). Is this a normal situation for OpenStack Swift? Or can I
slow this process down to take the unnecessary load off the hard drives?
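The replicator can be throttled in the object server configuration. A hedged sketch (the option names are per the Swift deployment guide of that era; the values here are illustrative only, not recommendations):

```ini
# /etc/swift/object-server.conf
[object-replicator]
# fewer worker greenthreads -> less concurrent disk I/O
concurrency = 1
# seconds to sleep between replication passes
run_pause = 300
```

Restart the object-replicator service after changing these for the new values to take effect.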

--
Best regards,
Roman Kravets


On Thu, Mar 27, 2014 at 10:12 PM, Clay Gerrard clay.gerr...@gmail.com wrote:
 because it's on repeat.


 On Thu, Mar 27, 2014 at 10:20 AM, Кравец Роман soft...@gmail.com wrote:

 Hello.

 I installed Openstack Swift to test server and upload 50 gb data.
 Now I see it in the log:
 root@storage1:/var/staff/softded# tail -n 1000 -f /var/log/syslog  |
 grep  replicated
 Mar 27 19:44:24 storage1 object-replicator 112746/187053 (60.27%)
 partitions replicated in 300.01s (375.81/sec, 3m remaining)
 Mar 27 19:47:44 storage1 object-replicator 187053/187053 (100.00%)
 partitions replicated in 499.71s (374.32/sec, 0s remaining)
 Mar 27 19:53:14 storage1 object-replicator 112863/187068 (60.33%)
 partitions replicated in 300.01s (376.20/sec, 3m remaining)
 Mar 27 19:56:29 storage1 object-replicator 187068/187068 (100.00%)
 partitions replicated in 494.53s (378.27/sec, 0s remaining)
 Mar 27 20:01:59 storage1 object-replicator 112343/187080 (60.05%)
 partitions replicated in 300.01s (374.47/sec, 3m remaining)
 Mar 27 20:05:18 storage1 object-replicator 187080/187080 (100.00%)
 partitions replicated in 498.55s (375.25/sec, 0s remaining)
 Mar 27 20:10:48 storage1 object-replicator 112417/187092 (60.09%)
 partitions replicated in 300.01s (374.71/sec, 3m remaining)

 Why object-replicator show different percent every time?

 Thank you!

 --
 Best regards,
 Roman Kravets






Re: [Openstack] (no subject)

2014-03-27 Thread Adam Lawson
Swift is said to be eventually consistent because the data is stored then
eventually distributed in a balanced way. You don't need to manually
re-balance the rings constantly. Swift will do that for you. Re-balancing
rings is usually initiated after you *change the ring
structure*(add/remove regions, add/remove zones, change device
weights, etc).

In your case since you only have one node, Swift will distribute the
replicas across all 3 zones assuming you've configured 3x replication. When
you add a node and update the rings, yes you'll want to re-balance. That
will tell Swift to put a replica on the new node since Swift default
behavior is to keep replica placements as unique as possible. That's the
actual Swift vernacular everyone uses. ; )

Unique replica placement strategy is as follows:

Region (if defined) -> Zone -> Node -> Device -> Device with fewest replicas
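The "as unique as possible" placement order Adam describes can be sketched in code. This is a hedged illustration only, not Swift's actual ring-builder code; the device dictionaries and their field names are made up for the example:

```python
from collections import Counter

def pick_device(candidates, placed):
    """Pick the next replica location: prefer an unused region, then an
    unused zone, then an unused node, then an unused device, and finally
    the device holding the fewest replicas."""
    used = {k: {d[k] for d in placed} for k in ("region", "zone", "node", "dev")}
    counts = Counter(d["dev"] for d in placed)

    def score(d):
        # False sorts before True, so "not yet used" wins each tier.
        return (d["region"] in used["region"],
                d["zone"] in used["zone"],
                d["node"] in used["node"],
                d["dev"] in used["dev"],
                counts[d["dev"]])

    return min(candidates, key=score)

devs = [
    {"region": 1, "zone": 1, "node": "n1", "dev": "sda"},
    {"region": 1, "zone": 2, "node": "n2", "dev": "sdb"},
    {"region": 2, "zone": 1, "node": "n3", "dev": "sdc"},
]
# With one replica already on sda, the next pick is the unused region.
print(pick_device(devs, [devs[0]])["dev"])
```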


Good luck.

Adam


*Adam Lawson*
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (888) 406-7620



On Thu, Mar 27, 2014 at 12:48 PM, Roman Kravets soft...@gmail.com wrote:

 Dear Adam,

 I have one storage server and 12 hard drives on it.
 For testing I split the disks into 4 zones. If I understood rightly, Swift
 moves data during the ring re-balance and places new data on the correct
 node right away.

 --
 Best regards,
 Roman Kravets


 On Thu, Mar 27, 2014 at 10:05 PM, Adam Lawson alaw...@aqorn.com wrote:

 Probably has to do with the fact you (I'm guessing) don't have very many
 drives on that server. Is that a correct statement? I know that even with
 50 drives across a cluster (still very small), rings balance is at 100%
 until the rings are adequately balanced. Look at your ring stats, drive
 count and 5 zones for more consistent reports.


 *Adam Lawson*
 AQORN, Inc.
 427 North Tatnall Street
 Ste. 58461
 Wilmington, Delaware 19801-2230
 Toll-free: (888) 406-7620



 On Thu, Mar 27, 2014 at 10:20 AM, Кравец Роман soft...@gmail.com wrote:

 Hello.

 I installed Openstack Swift to test server and upload 50 gb data.
 Now I see it in the log:
 root@storage1:/var/staff/softded# tail -n 1000 -f /var/log/syslog  |
 grep  replicated
 Mar 27 19:44:24 storage1 object-replicator 112746/187053 (60.27%)
 partitions replicated in 300.01s (375.81/sec, 3m remaining)
 Mar 27 19:47:44 storage1 object-replicator 187053/187053 (100.00%)
 partitions replicated in 499.71s (374.32/sec, 0s remaining)
 Mar 27 19:53:14 storage1 object-replicator 112863/187068 (60.33%)
 partitions replicated in 300.01s (376.20/sec, 3m remaining)
 Mar 27 19:56:29 storage1 object-replicator 187068/187068 (100.00%)
 partitions replicated in 494.53s (378.27/sec, 0s remaining)
 Mar 27 20:01:59 storage1 object-replicator 112343/187080 (60.05%)
 partitions replicated in 300.01s (374.47/sec, 3m remaining)
 Mar 27 20:05:18 storage1 object-replicator 187080/187080 (100.00%)
 partitions replicated in 498.55s (375.25/sec, 0s remaining)
 Mar 27 20:10:48 storage1 object-replicator 112417/187092 (60.09%)
 partitions replicated in 300.01s (374.71/sec, 3m remaining)

 Why object-replicator show different percent every time?

 Thank you!

 --
 Best regards,
 Roman Kravets







Re: [openstack-dev] Subject: [nova][vmware] VMwareAPI sub-team status 2013-12-08

2013-12-18 Thread Gary Kotton
Could an additional core please take a look at
https://review.openstack.org/#/c/51793/?
Thanks
Gary

From: Shawn Hartsock harts...@acm.org
Date: Wednesday, December 18, 2013 6:32 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Subject: [openstack-dev][nova][vmware] VMwareAPI sub-team status 
2013-12-08


Greetings Stackers!

BTW: Reviews by fitness at the end.

It's Wednesday so it's time for me to cheer-lead for our VMwareAPI subteam. Go
team! Our next two normal Wednesday meetings fall on December 25th and January
1st, so there will be no meetings until January 8th. If there's a really strong
objection to that we can organize an impromptu meeting.

Here's the community priorities so far for IceHouse.

== Blueprint priorities ==

Icehouse-2

Nova

*. 
https://blueprints.launchpad.net/nova/+spec/vmware-image-cache-management

*. 
https://blueprints.launchpad.net/nova/+spec/vmware-vsan-support

*. 
https://blueprints.launchpad.net/nova/+spec/autowsdl-repair

Glance

*. 
https://blueprints.launchpad.net/glance/+spec/vmware-datastore-storage-backend

Icehouse-3

*. 
https://blueprints.launchpad.net/nova/+spec/config-validation-script


== Bugs by priority: ==

The priority here is an aggregate, Nova Priority / VMware Driver priority where 
the priorities are determined independently.


* High/Critical, needs review : 'vmware driver does not work with more than one 
datacenter in vC'

https://review.openstack.org/62587

* High/High, needs one more +2/approval : 'VMware: NotAuthenticated occurred in 
the call to RetrievePropertiesEx'

https://review.openstack.org/61555

* High/High, needs review : 'VMware: spawning large amounts of VMs concurrently 
sometimes causes VMDK lock error'

https://review.openstack.org/58598

* High/High, needs review : 'VMWare: AssertionError: Trying to re-send() an 
already-triggered event.'

https://review.openstack.org/54808

* High/High, needs review : 'VMware: timeouts due to nova-compute stuck at 100% 
when using deploying 100 VMs'


Re: [Openstack] (no subject)

2013-11-05 Thread Giuliano
I solved my problem. The problem was that I forgot to update the repository
with the command

add-apt-repository cloud-archive:havana

so the config files were still in an old format.
Giuliano
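The format difference Giuliano ran into can be demonstrated directly: the old flagfile style (`--libvirt_type=kvm`) is not valid INI, while the Havana-era style is. A hedged sketch using Python's stdlib parser:

```python
import configparser

new_style = "[DEFAULT]\nlibvirt_type=kvm\n"   # Havana-era INI format
old_style = "--libvirt_type=kvm\n"            # pre-Havana flagfile format

cp = configparser.ConfigParser()
cp.read_string(new_style)
print(cp["DEFAULT"]["libvirt_type"])          # kvm

try:
    configparser.ConfigParser().read_string(old_style)
    old_is_valid_ini = True
except configparser.Error:                    # no section header -> error
    old_is_valid_ini = False
print(old_is_valid_ini)                       # False
```

So appending INI-format sections to a flagfile-format file mixes two incompatible formats, which is why updating the repository (and thus the packages that ship the new-format file) resolved it.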



2013/11/5 Giuliano ine...@gmail.com

 Hi stackers, i have a doubt on configuration files in Havana.

 Following the OVS multinode 
 guidehttps://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rstI
  noticed that I have to modify the /etc/nova/nova-compute.conf file.

 However, that file in my host (compute node) already has the following
 content:

 --libvirt_type=kvm

 It doesn't seem to be the classic INI format. I suppose it's a parameter
 that is appended when the program is executed.


 Is it safe to append to it the following content (as specified in the
 mentioned  guide, chapter 4.6)?

 [DEFAULT]
 libvirt_type=kvm
 libvirt_ovs_bridge=br-int
 libvirt_vif_type=ethernet
 libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
 libvirt_use_virtio_for_bridges=True


 Thanks,
 Giuliano





Re: [Openstack] (no subject)

2013-09-25 Thread Aaron Rosen
Hi Albert,

Are you sure this is happening? I'm positive that neutron's dhcp agent will
only hand out ip addresses for ports that it knows about, and I'm sure
nova-network does the same as well.

Aaron

On Wed, Sep 25, 2013 at 12:17 PM, Albert Vonpupp vonp...@gmail.com wrote:

 Hello,

 I'm trying DevStack at the university lab. When I tried to deploy a VM I
 noticed that all the machines from the lab started renewing their leases
 with the DevStack DHCP server. That is inconvenient for me since I'm not
 the only user of this lab and it could cause troubles. I thought that
 perhaps changing the default port on the controller as on the compute nodes
 would work, but I don't know how to do that.

 How can I change the dnsmasq DHCP port on DevStack? (controller and
 compute nodes)

 Thanks a lot!

 Albert.





Re: [Openstack] (no subject)

2013-09-25 Thread John Griffith
On Wed, Sep 25, 2013 at 1:26 PM, Aaron Rosen aro...@nicira.com wrote:

 Hi Albert,

 Are you sure this is happening? I'm positive that neutron's dhcp agent
 will only hand out ip addresses for ports that it knows about, and I'm sure
 nova-network does the same as well.

 Aaron

 On Wed, Sep 25, 2013 at 12:17 PM, Albert Vonpupp vonp...@gmail.comwrote:

 Hello,

 I'm trying DevStack at the university lab. When I tried to deploy a VM I
 noticed that all the machines from the lab started renewing their leases
 with the DevStack DHCP server. That is inconvenient for me since I'm not
 the only user of this lab and it could cause troubles. I thought that
 perhaps changing the default port on the controller as on the compute nodes
 would work, but I don't know how to do that.

 How can I change the dnsmasq DHCP port on DevStack? (controller and
 compute nodes)

 Thanks a lot!

 Albert.





 Hi Albert,

I inadvertently did this once in our lab.  The issue, I believe (if my
memory is correct), is that you're probably using nova-network and you've
configured FlatDHCP.  The problem is that your public network is accessing
your internal/private network (check your bridge setting), so the result is
that external DHCP requests can be received on your OpenStack private
network.

It might be helpful if you include your localrc file and some info
regarding your systems nics and how they're configured.

John


Re: [Openstack] (no subject)

2013-09-25 Thread Albert Vonpupp
Thanks Aaron and John for your fast answers.

Unfortunately I forgot to include the subject in this message.

To be honest I'm not totally sure what is going on, but what I notice is
that when I start a VM on top of OpenStack, after the bridge br100 is
created, I cannot log in to the rest of the machines in the lab (those which
are not related to the OpenStack tests, but are on the same physical network).

*Here is a cat of my /var/log/messages on the controller node*

[root@*controller* ~]# *tail -20 /var/log*/messages

Sep 25 16:51:40 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
e0:b9:ba:ae:78:dd no address
available

Sep 25 16:51:41 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
4c:bc:a5:92:e1:c7 no address
available

Sep 25 16:51:42 controller dnsmasq-dhcp[5860]: DHCPINFORM(br100)
192.168.30.52
00:1b:b1:28:64:ae

Sep 25 16:51:42 controller dnsmasq-dhcp[5860]: DHCPACK(br100) 192.168.30.52
00:1b:b1:28:64:ae
Leliane-NB

Sep 25 16:51:42 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
30:39:26:83:7c:6a no address
available

Sep 25 16:51:42 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
84:8f:69:c7:45:26 no address
available

Sep 25 16:51:44 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
4c:b1:99:83:77:82 no address available
Sep 25 16:51:45 controller dnsmasq-dhcp[5860]: BOOTP(br100)
14:5a:05:1e:43:1a no address available
Sep 25 16:51:48 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
18:3f:47:ba:0b:a8 no address available
Sep 25 16:51:48 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
e0:b9:ba:ae:78:dd no address available
Sep 25 16:51:48 controller dnsmasq-dhcp[5860]: DHCPREQUEST(br100) 10.0.0.2
fa:16:3e:64:cd:82
Sep 25 16:51:48 controller dnsmasq-dhcp[5860]: DHCPACK(br100) 10.0.0.2
fa:16:3e:64:cd:82 ubuntu01
Sep 25 16:51:50 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
30:39:26:83:7c:6a no address available
Sep 25 16:51:51 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
84:8f:69:c7:45:26 no address available
Sep 25 16:51:52 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
7c:c3:a1:ac:a1:ed no address available
Sep 25 16:51:53 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
4c:b1:99:83:77:82 no address available
Sep 25 16:51:53 controller dnsmasq-dhcp[5860]: DHCPINFORM(br100)
192.168.60.2 00:25:11:cf:f0:54
Sep 25 16:51:53 controller dnsmasq-dhcp[5860]: DHCPACK(br100) 192.168.60.2
00:25:11:cf:f0:54 sed06
Sep 25 16:51:53 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
7c:c3:a1:ac:a1:ed no address available
Sep 25 16:51:55 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
7c:c3:a1:ac:a1:ed no address available

I don't think I'm using neutron; as far as I know I'm using nova-network.
In the example I booted an Ubuntu VM and it got the IP 10.0.0.2, and then I
couldn't log on to another computer from the lab.

*Here are some lines from dmesg from a regular workstation that was already
logged in while DevStack has a running VM:*

albert ~ $ dmesg
[23898.208033] lockd: server lua.eclipse.ime.usp.br not responding, still
trying
[23898.212053] lockd: server lua.eclipse.ime.usp.br not responding, still
trying
[23907.504018] lockd: server lua.eclipse.ime.usp.br not responding, still
trying
[23907.504038] lockd: server lua.eclipse.ime.usp.br not responding, still
trying
[23907.504041] lockd: server lua.eclipse.ime.usp.br not responding, still
trying
[23921.824036] lockd: server lua.eclipse.ime.usp.br not responding, still
trying
[24041.754304] lockd: server lua.eclipse.ime.usp.br OK
[24041.754316] lockd: server lua.eclipse.ime.usp.br OK
[24101.753756] lockd: server lua.eclipse.ime.usp.br OK
[24161.753178] lockd: server lua.eclipse.ime.usp.br OK
[24161.808018] lockd: server lua.eclipse.ime.usp.br not responding, still
trying

*Here is my localrc (controller)*

[stack@*controller* ~]$ *cat devstack/localrc*
#VIRT_DRIVER=docker

#SERVICE_HOST=10.11.0.40 # REMOVE THIS LINE FOR THE CONTROLLER
# Stop DevStack polluting /opt/stack
DESTDIR=/opt/stack/src/openstack

# Switch to use QPid instead of RabbitMQ
disable_service rabbit
disable_service n-cpu
enable_service qpid
#enable_service qpid, n-cpu,n-net,n-api,n-vol

# Replace with your primary interface name
HOST_IP_IFACE=em1
PUBLIC_INTERFACE=em1
VLAN_INTERFACE=em1
FLAT_INTERFACE=em1

# Replace with whatever password you wish to use
MYSQL_PASSWORD=badpassword
SERVICE_TOKEN=badpassword
SERVICE_PASSWORD=badpassword
ADMIN_PASSWORD=badpassword

# Pre-populate glance with a minimal image and a Fedora 17 image
IMAGE_URLS="http://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-uec.tar.gz,http://berrange.fedorapeople.org/images/2012-11-15/f17-x86_64-openstack-sda.qcow2"

#ENABLED_SERVICES=n-cpu,n-net,n-api,n-vol # REMOVE THIS LINE FOR THE CONTROLLER

*Here is my localrc on any compute node:*

[stack@*compute02* ~]$ cat *devstack/localrc*
#SERVICE_HOST=10.7.22.7 # REMOVE THIS LINE FOR THE CONTROLLER
SERVICE_HOST=marte.eclipse.ime.usp.br # REMOVE THIS LINE FOR THE CONTROLLER
# Stop DevStack polluting /opt/stack

Re: [Openstack] (no subject)

2013-09-25 Thread Calvin Austin
It is also worth looking at which dnsmasq processes you have (ps -ef |
grep dnsmasq). Two (and only two) dnsmasq processes are configured/launched
by nova-network with a listen address that should be the IP address of the
br100 bridge, e.g. 192.168.0.1. The only interface it explicitly excludes
is lo (loopback).

The suspicious thing for me is that the server also did an ack for
192.168.30.* and 192.168.60.* as well as the 10 network.

regards
calvin
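Calvin's check can be mechanized: the log quoted in this thread already contains everything needed to separate ACKs on nova's fixed range from ACKs handed to lab machines. A rough sketch (stdlib only; the 10.0.0.0/24 range is an assumption read off the log and the dnsmasq --dhcp-range flag shown later in the thread):

```python
import ipaddress
import re

FIXED_RANGE = ipaddress.ip_network("10.0.0.0/24")  # assumed nova fixed range
ACK = re.compile(r"DHCPACK\(br100\) (\d{1,3}(?:\.\d{1,3}){3})")

def foreign_acks(log_lines):
    """Return IPs that dnsmasq ACKed on br100 but that lie outside nova's range."""
    hits = []
    for line in log_lines:
        m = ACK.search(line)
        if m and ipaddress.ip_address(m.group(1)) not in FIXED_RANGE:
            hits.append(m.group(1))
    return hits

# Sample lines copied from the /var/log/messages excerpt in this thread.
log = [
    "Sep 25 16:51:42 controller dnsmasq-dhcp[5860]: DHCPACK(br100) 192.168.30.52 00:1b:b1:28:64:ae Leliane-NB",
    "Sep 25 16:51:48 controller dnsmasq-dhcp[5860]: DHCPACK(br100) 10.0.0.2 fa:16:3e:64:cd:82 ubuntu01",
    "Sep 25 16:51:53 controller dnsmasq-dhcp[5860]: DHCPACK(br100) 192.168.60.2 00:25:11:cf:f0:54 sed06",
]
print(foreign_acks(log))  # ['192.168.30.52', '192.168.60.2']
```

Any non-empty result means the OpenStack dnsmasq is acknowledging leases (or DHCPINFORMs) for machines that were never its clients, which is exactly the symptom Albert describes.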




On Wed, Sep 25, 2013 at 1:46 PM, Albert Vonpupp vonp...@gmail.com wrote:

 Thanks Aaron and John for your fast answers.

 Unfortunately I forgot to include the subject in this message.

 To be honest I'm not totally sure what is going on, but what I notice is
 that when I start a VM on top of OpenStack, after the bridge br100 is
 created, I cannot log in to the rest of the machines in the lab (those which
 are not related to the OpenStack tests, but are on the same physical network).

 *Here is a cat of my /var/log/messages on the controller node*

 [root@*controller* ~]# *tail -20 /var/log*/messages

 Sep 25 16:51:40 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 e0:b9:ba:ae:78:dd no address
 available

 Sep 25 16:51:41 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 4c:bc:a5:92:e1:c7 no address
 available

 Sep 25 16:51:42 controller dnsmasq-dhcp[5860]: DHCPINFORM(br100)
 192.168.30.52
 00:1b:b1:28:64:ae

 Sep 25 16:51:42 controller dnsmasq-dhcp[5860]: DHCPACK(br100)
 192.168.30.52 00:1b:b1:28:64:ae
 Leliane-NB

 Sep 25 16:51:42 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 30:39:26:83:7c:6a no address
 available

 Sep 25 16:51:42 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 84:8f:69:c7:45:26 no address
 available

 Sep 25 16:51:44 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 4c:b1:99:83:77:82 no address available
 Sep 25 16:51:45 controller dnsmasq-dhcp[5860]: BOOTP(br100)
 14:5a:05:1e:43:1a no address available
 Sep 25 16:51:48 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 18:3f:47:ba:0b:a8 no address available
 Sep 25 16:51:48 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 e0:b9:ba:ae:78:dd no address available
 Sep 25 16:51:48 controller dnsmasq-dhcp[5860]: DHCPREQUEST(br100) 10.0.0.2
 fa:16:3e:64:cd:82
 Sep 25 16:51:48 controller dnsmasq-dhcp[5860]: DHCPACK(br100) 10.0.0.2
 fa:16:3e:64:cd:82 ubuntu01
 Sep 25 16:51:50 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 30:39:26:83:7c:6a no address available
 Sep 25 16:51:51 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 84:8f:69:c7:45:26 no address available
 Sep 25 16:51:52 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 7c:c3:a1:ac:a1:ed no address available
 Sep 25 16:51:53 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 4c:b1:99:83:77:82 no address available
 Sep 25 16:51:53 controller dnsmasq-dhcp[5860]: DHCPINFORM(br100)
 192.168.60.2 00:25:11:cf:f0:54
 Sep 25 16:51:53 controller dnsmasq-dhcp[5860]: DHCPACK(br100) 192.168.60.2
 00:25:11:cf:f0:54 sed06
 Sep 25 16:51:53 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 7c:c3:a1:ac:a1:ed no address available
 Sep 25 16:51:55 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 7c:c3:a1:ac:a1:ed no address available

 I don't think I'm using neutron, as far as I know I'm using nova-network.
 On the example I booted an ubuntu VM and it got the IP 10.0.0.2, and then I
 cound't logon on another computer from the lab.

 *Here are some lines from dmesg from a regular workstation that was
 already logged in while DevStack has a running VM:*

 albert ~ $ dmesg
 [23898.208033] lockd: server lua.eclipse.ime.usp.br not responding, still
 trying
 [23898.212053] lockd: server lua.eclipse.ime.usp.br not responding, still
 trying
 [23907.504018] lockd: server lua.eclipse.ime.usp.br not responding, still
 trying
 [23907.504038] lockd: server lua.eclipse.ime.usp.br not responding, still
 trying
 [23907.504041] lockd: server lua.eclipse.ime.usp.br not responding, still
 trying
 [23921.824036] lockd: server lua.eclipse.ime.usp.br not responding, still
 trying
 [24041.754304] lockd: server lua.eclipse.ime.usp.br OK
 [24041.754316] lockd: server lua.eclipse.ime.usp.br OK
 [24101.753756] lockd: server lua.eclipse.ime.usp.br OK
 [24161.753178] lockd: server lua.eclipse.ime.usp.br OK
 [24161.808018] lockd: server lua.eclipse.ime.usp.br not responding, still
 trying

 *Here is my localrc (controller)*

 [stack@*controller* ~]$ *cat devstack/localrc*
 #VIRT_DRIVER=docker

 #SERVICE_HOST=10.11.0.40 # REMOVE THIS LINE FOR THE CONTROLLER
 # Stop DevStack polluting /opt/stack
 DESTDIR=/opt/stack/src/openstack

 # Switch to use QPid instead of RabbitMQ
 disable_service rabbit
 disable_service n-cpu
 enable_service qpid
 #enable_service qpid, n-cpu,n-net,n-api,n-vol

 # Replace with your primary interface name
 HOST_IP_IFACE=em1
 PUBLIC_INTERFACE=em1
 VLAN_INTERFACE=em1
 FLAT_INTERFACE=em1

 # Replace with whatever password you wish to use
 MYSQL_PASSWORD=badpassword
 SERVICE_TOKEN=badpassword
 SERVICE_PASSWORD=badpassword
 

Re: [Openstack] (no subject)

2013-09-25 Thread Albert Vonpupp
Thanks for your answer Calvin,

It seems OK to me; what do you think?

[stack@*controller* ~]$ ps aux | grep dnsmasq
nobody   24107  0.0  0.0  13160   688 ?S11:55   0:00
/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
--pid-file=/opt/stack/data/nova/networks/nova-br100.pid
--listen-address=10.0.0.1 --except-interface=lo
--dhcp-range=set:private,10.0.0.2,static,255.255.255.0,120s
--dhcp-lease-max=256
--dhcp-hostsfile=/opt/stack/data/nova/networks/nova-br100.conf
--dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro --domain=novalocal
root 24108  0.0  0.0  13160   324 ?S11:55   0:00
/sbin/dnsmasq --strict-order --bind-interfaces --conf-file=
--pid-file=/opt/stack/data/nova/networks/nova-br100.pid
--listen-address=10.0.0.1 --except-interface=lo
--dhcp-range=set:private,10.0.0.2,static,255.255.255.0,120s
--dhcp-lease-max=256
--dhcp-hostsfile=/opt/stack/data/nova/networks/nova-br100.conf
--dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro --domain=novalocal
stack24247  0.0  0.0 109404   912 tty1 S+   12:01   0:00 grep
--color=auto dnsmasq

Regards.
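For what it's worth, the flags in that ps output can be picked apart mechanically rather than by eye. A small sketch (stdlib only; the command line is abridged from the output above):

```python
import shlex

# dnsmasq command line as shown in the ps output above (abridged).
cmdline = ("/sbin/dnsmasq --strict-order --bind-interfaces "
           "--listen-address=10.0.0.1 --except-interface=lo "
           "--dhcp-range=set:private,10.0.0.2,static,255.255.255.0,120s "
           "--dhcp-lease-max=256")

def dnsmasq_opts(cmd):
    """Map each --key[=value] flag to its value ('' for bare flags)."""
    opts = {}
    for tok in shlex.split(cmd)[1:]:
        key, _, val = tok.lstrip("-").partition("=")
        opts[key] = val
    return opts

opts = dnsmasq_opts(cmdline)
print(opts["listen-address"])       # 10.0.0.1
print("bind-interfaces" in opts)    # True
```

So the processes do look correctly scoped: with --bind-interfaces plus a 10.0.0.1 listen address, dnsmasq should only answer on br100. That points back to John's explanation — the lab machines' DHCP broadcasts are most likely reaching br100 through the bridge, rather than dnsmasq listening too broadly.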


On Wed, Sep 25, 2013 at 6:33 PM, Calvin Austin caus...@bitglass.com wrote:

 It is also worth looking at which dnsmasq processes you have (ps -ef |
 grep dnsmasq). Two (and only two) dnsmasq processes are configured/launched
 by nova-network with a listen address that should be the IP address of the
 br100 bridge, e.g. 192.168.0.1. The only interface it explicitly excludes
 is lo (loopback).

 The suspicious thing for me is that the server also did an ack for
 192.168.30.* and 192.168.60.* as well as the 10 network.

 regards
 calvin




 On Wed, Sep 25, 2013 at 1:46 PM, Albert Vonpupp vonp...@gmail.com wrote:

 Thanks Aaron and John for your fast answers.

 Unfortunately I forgot to include the subject in this message.

 To be honest I'm not totally sure what is going on, but what I notice is
 that when I start a VM on top of OpenStack, after the bridge br100 is
 created, I cannot log in to the rest of the machines in the lab (those which
 are not related to the OpenStack tests, but are on the same physical network).

 *Here is a cat of my /var/log/messages on the controller node*

 [root@*controller* ~]# *tail -20 /var/log*/messages

 Sep 25 16:51:40 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 e0:b9:ba:ae:78:dd no address
 available

 Sep 25 16:51:41 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 4c:bc:a5:92:e1:c7 no address
 available

 Sep 25 16:51:42 controller dnsmasq-dhcp[5860]: DHCPINFORM(br100)
 192.168.30.52
 00:1b:b1:28:64:ae

 Sep 25 16:51:42 controller dnsmasq-dhcp[5860]: DHCPACK(br100)
 192.168.30.52 00:1b:b1:28:64:ae
 Leliane-NB

 Sep 25 16:51:42 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 30:39:26:83:7c:6a no address
 available

 Sep 25 16:51:42 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 84:8f:69:c7:45:26 no address
 available

 Sep 25 16:51:44 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 4c:b1:99:83:77:82 no address available
 Sep 25 16:51:45 controller dnsmasq-dhcp[5860]: BOOTP(br100)
 14:5a:05:1e:43:1a no address available
 Sep 25 16:51:48 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 18:3f:47:ba:0b:a8 no address available
 Sep 25 16:51:48 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 e0:b9:ba:ae:78:dd no address available
 Sep 25 16:51:48 controller dnsmasq-dhcp[5860]: DHCPREQUEST(br100)
 10.0.0.2 fa:16:3e:64:cd:82
 Sep 25 16:51:48 controller dnsmasq-dhcp[5860]: DHCPACK(br100) 10.0.0.2
 fa:16:3e:64:cd:82 ubuntu01
 Sep 25 16:51:50 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 30:39:26:83:7c:6a no address available
 Sep 25 16:51:51 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 84:8f:69:c7:45:26 no address available
 Sep 25 16:51:52 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 7c:c3:a1:ac:a1:ed no address available
 Sep 25 16:51:53 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 4c:b1:99:83:77:82 no address available
 Sep 25 16:51:53 controller dnsmasq-dhcp[5860]: DHCPINFORM(br100)
 192.168.60.2 00:25:11:cf:f0:54
 Sep 25 16:51:53 controller dnsmasq-dhcp[5860]: DHCPACK(br100)
 192.168.60.2 00:25:11:cf:f0:54 sed06
 Sep 25 16:51:53 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 7c:c3:a1:ac:a1:ed no address available
 Sep 25 16:51:55 controller dnsmasq-dhcp[5860]: DHCPDISCOVER(br100)
 7c:c3:a1:ac:a1:ed no address available

 I don't think I'm using neutron, as far as I know I'm using nova-network.
 On the example I booted an ubuntu VM and it got the IP 10.0.0.2, and then I
 cound't logon on another computer from the lab.

 *Here are some lines from dmesg from a regular workstation that was
 already logged in while DevStack has a running VM:*

 albert ~ $ dmesg
 [23898.208033] lockd: server lua.eclipse.ime.usp.br not responding,
 still trying
 [23898.212053] lockd: server lua.eclipse.ime.usp.br not responding,
 still trying
 [23907.504018] lockd: server lua.eclipse.ime.usp.br not responding,
 still trying
 [23907.504038] lockd: server