Re: [Openstack-operators] Flavors

2017-03-16 Thread Chris Friesen

On 03/16/2017 07:06 PM, Blair Bethwaite wrote:


Statement: breaks bin packing / have to match flavor dimensions to hardware
dimensions.
Comment: neither of these rings true to me given that most operators tend to
agree that memory is their first constraining resource dimension and it is
difficult to achieve high CPU utilisation before memory is exhausted. Plus
virtualisation is inherently about resource sharing and over-provisioning;
unless you have very detailed knowledge of your workloads a priori (or some
cycle-stealing/back-filling mechanism) you will always have under-utilisation
(possibly quite high on average) in some resource dimensions.


I think this would be highly dependent on the workload.  A virtual router is 
going to run out of CPU/network bandwidth far before memory is exhausted.


For similar reasons I'd disagree that virtualization is inherently about 
over-provisioning and suggest that (in some cases at least) it's more about 
flexibility over time.  Our customers generally care about maximizing 
performance and so nothing is over-provisioned...disk, NICs, CPUs, RAM are 
generally all exclusively allocated.


Chris

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [nova] [neutron] Hooks for instance actions like creation, deletion

2017-03-16 Thread Masha Atakova

Hi everyone,

Is there any up-to-date functionality in nova / neutron that allows running
some additional code triggered by instance changes, like creating
or deleting an instance?


I see that nova hooks are deprecated as of Nova 13:

https://github.com/openstack/nova/blob/master/nova/hooks.py#L19

While it's hard to find the reason for this deprecation, I also struggle
to find whether there's any up-to-date alternative to those hooks.
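For what it's worth, the usual replacement is to consume nova's versioned notifications off the message bus rather than hook into the service itself. A minimal sketch of an endpoint follows; the event names, topic, and transport URL are assumptions to check against your deployment, and the oslo.messaging wiring is shown only as a comment since it needs a running broker:

```python
# Events this hypothetical handler cares about; nova emits names like
# 'instance.create.end' and 'instance.delete.end' on its notification bus.
EVENTS_OF_INTEREST = {'instance.create.end', 'instance.delete.end'}

class InstanceEventEndpoint:
    """Collects matching instance lifecycle notifications."""

    def __init__(self):
        self.seen = []

    # Method signature expected by oslo.messaging notification listeners.
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        if event_type in EVENTS_OF_INTEREST:
            self.seen.append((event_type, payload))

# Wiring sketch (requires oslo.messaging and a reachable RabbitMQ;
# kept as a comment because it cannot run standalone):
#
#   import oslo_messaging
#   from oslo_config import cfg
#   transport = oslo_messaging.get_notification_transport(
#       cfg.CONF, url='rabbit://guest:guest@controller:5672/')
#   targets = [oslo_messaging.Target(topic='versioned_notifications')]
#   listener = oslo_messaging.get_notification_listener(
#       transport, targets, [InstanceEventEndpoint()], executor='threading')
#   listener.start()
#   listener.wait()
```

The endpoint is a plain object, so the filtering logic can be exercised without any bus at all.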


Thanks




Re: [Openstack-operators] Flavors

2017-03-16 Thread Blair Bethwaite
There have been previous proposals (and if memory serves, even some
blueprints) for API extensions to allow this but they have apparently
stagnated. On the face of it I think OpenStack should support this (more
choice = win!) - doesn't mean that every cloud needs to use the feature. Is
it worth trying to resurrect some feature development around this? Sounds
like a potential forum session? We have already seen responses here from a
number of active and prominent operators, some seem to be quite emotive,
which could indicate this hits on some sore-points/bugbears.

Some comments about points raised already in this thread...

Statement: makes pricing hard because users can monopolise a subset of
infrastructure resource dimensions (e.g. memory, disk IOPs) leading to some
dimensions (e.g. CPU) being underutilised.
Comment: it may exacerbate the problem if users can request resource
dimensions with no limits or dependencies imposed, but that problem
typically already exists to some extent in any general-purpose deployment -
as others have mentioned they mostly find themselves constrained by memory.
That does not seem like a hard problem to work around, e.g., dimensions
could have both absolute lower and upper bounds plus dynamic bounds based
on the setting of other dimensions.
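That workaround could look something like the following sketch, where each dimension gets absolute bounds and RAM additionally gets a dynamic bound derived from the requested vCPU count. The limits are invented for illustration, not taken from any OpenStack API:

```python
# Absolute per-dimension bounds (illustrative numbers only).
ABS_BOUNDS = {'vcpus': (1, 32), 'ram_gb': (1, 256)}
# Dynamic bound: allowed RAM per requested vCPU (also an assumption).
RAM_PER_VCPU_GB = (1, 16)

def validate_request(vcpus, ram_gb):
    """Accept a custom flavor request only if every dimension is within
    its absolute bounds and RAM is within the per-vCPU band."""
    lo, hi = ABS_BOUNDS['vcpus']
    if not lo <= vcpus <= hi:
        return False
    lo, hi = ABS_BOUNDS['ram_gb']
    if not lo <= ram_gb <= hi:
        return False
    per_lo, per_hi = RAM_PER_VCPU_GB
    return per_lo * vcpus <= ram_gb <= per_hi * vcpus
```

For example, 4 vCPUs with 16 GB passes, while 1 vCPU with 64 GB fails the dynamic bound even though 64 GB is within the absolute limit.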

Statement: breaks bin packing / have to match flavor dimensions to hardware
dimensions.
Comment: neither of these rings true to me given that most operators tend to
agree that memory is their first constraining resource dimension and it is
difficult to achieve high CPU utilisation before memory is exhausted. Plus
virtualisation is inherently about resource sharing and over-provisioning;
unless you have very detailed knowledge of your workloads a priori (or some
cycle-stealing/back-filling mechanism) you will always have
under-utilisation (possibly quite high on average) in some resource
dimensions.

Other thoughts...

A feature such as this also opens up the interesting possibility of
soft/fuzzy resource requests, which could be very useful in a private (i.e.
constrained) cloud environment, e.g., "give me an instance with 2-4 cores
and 8-16GB RAM and at least 500 IOPs".
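One way such a fuzzy request might be resolved against a host's free capacity, as a sketch (no such API exists in nova today; all names and the greedy "take the most within range" policy are invented for illustration):

```python
def fit(request, free):
    """Return the largest allocation within the requested ranges that
    fits the host's free capacity, or None if the host cannot satisfy
    the lower bounds or the IOPs floor."""
    cores = min(request['cores'][1], free['cores'])
    ram = min(request['ram_gb'][1], free['ram_gb'])
    if cores < request['cores'][0] or ram < request['ram_gb'][0]:
        return None
    if free['iops'] < request['min_iops']:
        return None
    return {'cores': cores, 'ram_gb': ram}

# "give me an instance with 2-4 cores, 8-16GB RAM and at least 500 IOPs"
req = {'cores': (2, 4), 'ram_gb': (8, 16), 'min_iops': 500}
```

A host with 3 free cores, plenty of RAM, and 800 IOPs would yield a 3-core / 16 GB instance; a host with only 1 free core would be rejected.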

Some of the statements in this thread also lend credence to the need for
preemptible instances, which would offer one way to back-fill compute/CPU
resources.

Cheers,

On 17 March 2017 at 04:21, Tomáš Vondra  wrote:

> We at Homeatcloud.com do exactly this in our VPS service. The user can
> configure the VPS with any combination of CPU, RAM, and disk. However a)
> the configurations are all about 10% the size of the physical machines and
> b) the disks are in a SAN array, provisioned as volumes. So I give the
> users some flexibility and can better see what configurations they actually
> want and build new hypervisors with that in mind. They mostly want up to 4
> GB RAM anyway, so it's not a big deal.
>
> Tomas Vondra
>
>
>
> *From:* Adam Lawson [mailto:alaw...@aqorn.com]
> *Sent:* Thursday, March 16, 2017 5:57 PM
> *To:* Jonathan D. Proulx
> *Cc:* OpenStack Operators
> *Subject:* Re: [Openstack-operators] Flavors
>
>
>
> One way I know some providers work around this when using OpenStack is by
> fronting the VM request with some code in the web server that checks whether
> the requested spec has an existing flavor: if so, use the flavor; if not, use
> an admin account that creates a new flavor, assign it to that user request,
> then remove it when the build is complete. This naturally impacts
> your control over hardware efficiency but it makes your scenario possible
> (for better or for worse). I also hate being forced to do what someone else
> decided was going to be best for me. That's my decision and thankfully with
> OpenStack, this kind of thing is rather easy to do.
>
>
>
> //adam
>
>
>
> *Adam Lawson*
>
>
>
> Principal Architect, CEO
>
> Office: +1-916-794-5706
>
>
>
> On Thu, Mar 16, 2017 at 7:52 AM, Jonathan D. Proulx 
> wrote:
>
>
> I have always hated flavors and so do many of my users.
>
> On Wed, Mar 15, 2017 at 03:22:48PM -0700, James Downs wrote:
> :On Wed, Mar 15, 2017 at 10:10:00PM +, Fox, Kevin M wrote:
> :> I think the really short answer is something like: It greatly
> simplifies scheduling and billing.
> :
> :The real answer is that once you buy hardware, it's in a fixed ratio of
> CPU/Ram/Disk/IOPS, etc.
>
> This, while apparently reasonable, is BS (at least in private cloud
> space).  What users request and what they actually use are wildly
> divergent.
>
> *IF* usage of claimed resources were at or near optimal then this might
> be true.  But if people are claiming 32G of RAM because that's how much
> you assigned to a 16 vCPU instance type but really just need 16
> threads with 2G or 4G then your packing still sucks.
>
> I'm mostly bound on memory so I mostly have my users select on that
> basis and over provide and over provision CPU since that can be
> effectively shared between VMs where memory needs 

Re: [Openstack-operators] Milan Ops Midcycle - Cells v2 session

2017-03-16 Thread Matt Riedemann

On 3/14/2017 4:11 AM, Arne Wiebalck wrote:

A first list of topics for the Cells v2 session is available here:

https://etherpad.openstack.org/p/MIL-ops-cellsv2

Please feel free to add items you’d like to see discussed.

Thanks!
 Belmiro & Arne

--
Arne Wiebalck
CERN IT






Hi,

I've gone through the MIL ops midcycle etherpad for cells v2 [1] and 
left some notes, answers, links to the PTG cells v2 recap, and some 
questions/feedback of my own.


Specifically, there was a request that some nova developers be at the ops
meetup session; as noted in the etherpad, the fact this was happening came
as a late surprise to several of us. The developers are already trying to
get funding for the PTG and the summit (if they are lucky), and throwing in
a third travel venue is tough, especially with little to no advance notice.
Please ping us in IRC or by direct email, or put it on the weekly nova
meeting agenda as a reminder. Then we can try to get someone there if
possible.


If you're going to be in Boston for the Forum and are interested in 
talking about Nova, our topic brainstorming etherpad is here [2].


[1] https://etherpad.openstack.org/p/MIL-ops-cellsv2
[2] https://etherpad.openstack.org/p/BOS-Nova-brainstorming

--

Thanks,

Matt



Re: [Openstack-operators] [gnocchi] Gnocchi, keystone, and Openstack Mitaka

2017-03-16 Thread Tracy Comstock Roesler
I just saw this email, apologize for the delay.  I installed the openstack
version, rather than the version in pip:

[r...@openstack-controller01.a.pc.ostk.com ~] # rpm -qa | grep
python-gnocchi
python-gnocchiclient-2.2.0-1.el7.noarch
python-gnocchi-2.1.4-1.el7.noarch




[r...@openstack-controller01.a.pc.ostk.com ~] # rpm -qa | grep oslo-policy
python2-oslo-policy-1.6.0-1.el7.noarch



On 2/27/17, 1:17 PM, "gordon chung"  wrote:

>can you add what version of gnocchi, gnocchiclient and oslo.policy you
>have? might be easier if you open a bug [1]. i don't see anything wrong at
>first glance and i don't recall there being a similar issue in the past.
>
>[1] https://bugs.launchpad.net/gnocchi
>
>On 23/02/17 11:54 AM, Tracy Comstock Roesler wrote:
>> I've run into a problem with the gnocchi CLI.  Whenever I run 'gnocchi
>> status' I get a 403 Forbidden, but I can run other commands like
>> 'gnocchi resource create' no problem.
>>
>> I've checked the policy.json and it looks like "admin" has rights to get
>> status, the same as create resources.  I cannot figure out why get
>> status would show a 403 forbidden, but I can run other commands just
>> fine.
>>
>> [root ~] # gnocchi status --debug
>> REQ: curl -g -i -X GET http://keystone:35357/v3 -H "Accept:
>> application/json" -H "User-Agent: keystoneauth1/2.4.1
>> python-requests/2.10.0 CPython/2.7.5"
>> Starting new HTTP connection (1): keystone
>> "GET /v3 HTTP/1.1" 200 277
>> RESP: [200] Content-Type: application/json Content-Length: 277
>> Connection: keep-alive Date: Thu, 23 Feb 2017 16:52:40 GMT Server:
>> Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5 Vary: X-Auth-Token
>> x-openstack-request-id: req-189a8db8-6210-4735-bc66-b2dc90b00a38
>> RESP BODY: {"version": {"status": "stable", "updated":
>> "2016-04-04T00:00:00Z", "media-types": [{"base": "application/json",
>> "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.6",
>> "links": [{"href": "http://keystone:35357/v3/";, "rel": "self"}]}}
>>
>> Making authentication request to http://keystone:35357/v3/auth/tokens
>> "POST /v3/auth/tokens HTTP/1.1" 201 3874
>> REQ: curl -g -i -X GET http://gnocchi:8041/v1/status -H "User-Agent:
>> keystoneauth1/2.4.1 python-requests/2.10.0 CPython/2.7.5" -H "Accept:
>> application/json, */*" -H "X-Auth-Token: {SHA1}AAA"
>> Starting new HTTP connection (1): gnocchi
>> "GET /v1/status HTTP/1.1" 403 54
>> RESP: [403] Content-Type: application/json; charset=UTF-8
>> Content-Length: 54 Connection: keep-alive Server: Werkzeug/0.9.1
>> Python/2.7.5 Date: Thu, 23 Feb 2017 16:52:40 GMT
>> RESP BODY: {"code": 403, "description": "", "title": "Forbidden"}
>>
>> Forbidden (HTTP 403)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/cliff/app.py", line 346, in
>> run_subcommand
>> result = cmd.run(parsed_args)
>>   File "/usr/lib/python2.7/site-packages/cliff/display.py", line 79, in
>>run
>> column_names, data = self.take_action(parsed_args)
>>   File
>> "/usr/lib/python2.7/site-packages/gnocchiclient/v1/status_cli.py", line
>> 21, in take_action
>> status = self.app.client.status.get()
>>   File "/usr/lib/python2.7/site-packages/gnocchiclient/v1/status.py",
>> line 21, in get
>> return self._get(self.url).json()
>>   File "/usr/lib/python2.7/site-packages/gnocchiclient/v1/base.py", line
>> 37, in _get
>> return self.client.api.get(*args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line
>> 173, in get
>> return self.request(url, 'GET', **kwargs)
>>   File "/usr/lib/python2.7/site-packages/gnocchiclient/client.py", line
>> 38, in request
>> raise exceptions.from_response(resp, method)
>> Forbidden: Forbidden (HTTP 403)
>> Traceback (most recent call last):
>>   File "/usr/bin/gnocchi", line 10, in 
>> sys.exit(main())
>>   File "/usr/lib/python2.7/site-packages/gnocchiclient/shell.py", line
>> 211, in main
>> return GnocchiShell().run(args)
>>   File "/usr/lib/python2.7/site-packages/cliff/app.py", line 226, in run
>> result = self.run_subcommand(remainder)
>>   File "/usr/lib/python2.7/site-packages/cliff/app.py", line 346, in
>> run_subcommand
>> result = cmd.run(parsed_args)
>>   File "/usr/lib/python2.7/site-packages/cliff/display.py", line 79, in
>>run
>> column_names, data = self.take_action(parsed_args)
>>   File
>> "/usr/lib/python2.7/site-packages/gnocchiclient/v1/status_cli.py", line
>> 21, in take_action
>> status = self.app.client.status.get()
>>   File "/usr/lib/python2.7/site-packages/gnocchiclient/v1/status.py",
>> line 21, in get
>> return self._get(self.url).json()
>>   File "/usr/lib/python2.7/site-packages/gnocchiclient/v1/base.py", line
>> 37, in _get
>> return self.client.api.get(*args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line
>> 173, in get
>> return self.request(url, 'GET', **kwargs)
>>   File "/usr/lib/python2.7/site-packages/gnocchiclient/client.py", line
>> 38, in request
>> 

Re: [Openstack-operators] need input on log translations

2017-03-16 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2017-03-10 09:38:43 -0500:
> There is a discussion on the -dev mailing list about the i18n team
> decision to stop translating log messages [1]. The policy change means
> that we may be able to clean up quite a lot of "clutter" throughout the
> service code, because without anyone actually translating the messages
> there is no need for the markup code used to tag those strings.
> 
> If we do remove the markup from log messages, we will be effectively
> removing "multilingual logs" as a feature. Given the amount of work
> and code churn involved in the first roll out, I would not expect
> us to restore that feature later.
> 
> Therefore, before we take what would almost certainly be an
> irreversible action, we would like some input about whether log
> message translations are useful to anyone. Please let us know if
> you or your customers use them.
> 
> Thanks,
> Doug
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113365.html
> 

Thank you all for your input; this has been very valuable.

Based on the apparent lack of interest in the feature, I have
recommended that teams go ahead and start removing the translation
markup from log messages, while keeping it for error messages associated
with the API.

Doug



Re: [Openstack-operators] [HA] MIL Ops meetup -- Rabbitmq Pitfalls with HA etherpad

2017-03-16 Thread Adam Spiers

Rochelle Grober  wrote:

Please add your comments, suggestions, info, etc. to the Rabbitmq with HA 
pitfalls etherpad.  There are a few things out there already that will 
hopefully induce more and better items to include.

See you tomorrow!

https://etherpad.openstack.org/p/MIL-ops-rabbitmq-pitfalls-ha


Thanks for organising this discussion, which looks like it was an
interesting one - I'm sad to have missed it.

I'd like to make a few related points:

- In case anyone wasn't aware, there is an #openstack-ha IRC channel
  on Freenode where discussions on all things HA are very welcome.
  RabbitMQ HA war stories would certainly be considered on topic :-)

- Similarly, there is a weekly IRC meeting where all OpenStack HA
  topics are up for discussion:

https://wiki.openstack.org/wiki/Meetings/HATeamMeeting

- We have just started an initiative to revamp the HA Guide:


http://specs.openstack.org/openstack/docs-specs/specs/ocata/improve-ha-guide.html

  Any input regarding content relating to RabbitMQ HA or indeed any
  other aspects of OpenStack HA are most welcome.  For example, it
  might be great to capture some of these war stories in the guide,
  or maybe put them in the Operations Guide and then cross-link from
  the HA Guide.  If you're interested in getting involved in
  improving the HA Guide in any way (no matter how small), please
  join #openstack-docs and say hi :-)

Cheers,
Adam



Re: [Openstack-operators] Flavors

2017-03-16 Thread Tomáš Vondra
We at Homeatcloud.com do exactly this in our VPS service. The user can
configure the VPS with any combination of CPU, RAM, and disk. However a) the
configurations are all about 10% the size of the physical machines and b) the
disks are in a SAN array, provisioned as volumes. So I give the users some
flexibility and can better see what configurations they actually want and build
new hypervisors with that in mind. They mostly want up to 4 GB RAM anyway, so
it's not a big deal.

Tomas Vondra

 

From: Adam Lawson [mailto:alaw...@aqorn.com] 
Sent: Thursday, March 16, 2017 5:57 PM
To: Jonathan D. Proulx
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] Flavors

 

One way I know some providers work around this when using OpenStack is by
fronting the VM request with some code in the web server that checks whether the
requested spec has an existing flavor: if so, use the flavor; if not, use an
admin account that creates a new flavor, assign it to that user request, then
remove it when the build is complete. This naturally impacts your control
over hardware efficiency but it makes your scenario possible (for better or for
worse). I also hate being forced to do what someone else decided was going to
be best for me. That's my decision and thankfully with OpenStack, this kind of
thing is rather easy to do.

 

//adam





Adam Lawson

 

Principal Architect, CEO

Office: +1-916-794-5706

 

On Thu, Mar 16, 2017 at 7:52 AM, Jonathan D. Proulx  wrote:


I have always hated flavors and so do many of my users.

On Wed, Mar 15, 2017 at 03:22:48PM -0700, James Downs wrote:
:On Wed, Mar 15, 2017 at 10:10:00PM +, Fox, Kevin M wrote:
:> I think the really short answer is something like: It greatly simplifies 
scheduling and billing.
:
:The real answer is that once you buy hardware, it's in a fixed ratio of
CPU/Ram/Disk/IOPS, etc.

This, while apparently reasonable, is BS (at least in private cloud
space).  What users request and what they actually use are wildly
divergent.

*IF* usage of claimed resources were at or near optimal then this might
be true.  But if people are claiming 32G of RAM because that's how much
you assigned to a 16 vCPU instance type but really just need 16
threads with 2G or 4G then your packing still sucks.

I'm mostly bound on memory so I mostly have my users select on that
basis and over-provide and over-provision CPU since that can be
effectively shared between VMs where memory needs to be dedicated
(well mostly)

I'm sure I've ranted about this before but as you see from other
responses we seem to be in the minority position so mostly I rant at
the walls while my office mates look on perplexed (actually they're
pretty used to it by now and ignore me :) )

-Jon



 



Re: [Openstack-operators] Flavors

2017-03-16 Thread Adam Lawson
One way I know some providers work around this when using OpenStack is by
fronting the VM request with some code in the web server that checks whether
the requested spec has an existing flavor: if so, use the flavor; if not, use
an admin account that creates a new flavor, assign it to that user request,
then remove it when the build is complete. This naturally impacts
your control over hardware efficiency but it makes your scenario possible
(for better or for worse). I also hate being forced to do what someone else
decided was going to be best for me. That's my decision and thankfully with
OpenStack, this kind of thing is rather easy to do.
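That workflow could be sketched with openstacksdk roughly as below. This is a sketch under assumptions: the exact-match policy, the temporary flavor name, and the helper itself are invented, and the compute-proxy calls should be verified against your SDK version:

```python
def boot_with_exact_spec(conn, name, image, network, vcpus, ram_mb, disk_gb):
    """Find a flavor matching the requested spec exactly, or create a
    throwaway one (requires an admin connection), boot the server, then
    clean up the temporary flavor."""
    flavor = next((f for f in conn.compute.flavors()
                   if f.vcpus == vcpus and f.ram == ram_mb
                   and f.disk == disk_gb), None)
    temporary = flavor is None
    if temporary:
        flavor = conn.compute.create_flavor(
            name='tmp-%dc-%dm-%dg' % (vcpus, ram_mb, disk_gb),
            vcpus=vcpus, ram=ram_mb, disk=disk_gb)
    try:
        return conn.compute.create_server(
            name=name, image_id=image, flavor_id=flavor.id,
            networks=[{'uuid': network}])
    finally:
        # Remove the flavor once the build request is submitted, as
        # described above; existing flavors are left alone.
        if temporary:
            conn.compute.delete_flavor(flavor)
```

In a real deployment you would get `conn` from `openstack.connect()` with admin credentials; the logic itself only assumes an object with `compute.flavors`, `create_flavor`, `create_server`, and `delete_flavor`.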

//adam


*Adam Lawson*

Principal Architect, CEO
Office: +1-916-794-5706

On Thu, Mar 16, 2017 at 7:52 AM, Jonathan D. Proulx 
wrote:

>
> I have always hated flavors and so do many of my users.
>
> On Wed, Mar 15, 2017 at 03:22:48PM -0700, James Downs wrote:
> :On Wed, Mar 15, 2017 at 10:10:00PM +, Fox, Kevin M wrote:
> :> I think the really short answer is something like: It greatly
> simplifies scheduling and billing.
> :
> :The real answer is that once you buy hardware, it's in a fixed ratio of
> CPU/Ram/Disk/IOPS, etc.
>
> This, while apparently reasonable, is BS (at least in private cloud
> space).  What users request and what they actually use are wildly
> divergent.
>
> *IF* usage of claimed resources were at or near optimal then this might
> be true.  But if people are claiming 32G of RAM because that's how much
> you assigned to a 16 vCPU instance type but really just need 16
> threads with 2G or 4G then your packing still sucks.
>
> I'm mostly bound on memory so I mostly have my users select on that
> basis and over-provide and over-provision CPU since that can be
> effectively shared between VMs where memory needs to be dedicated
> (well mostly)
>
> I'm sure I've ranted about this before but as you see from other
> responses we seem to be in the minority position so mostly I rant at
> the walls while my office mates look on perplexed (actually they're
> pretty used to it by now and ignore me :) )
>
> -Jon
>


[Openstack-operators] [Telecom-NFV] Telecom/NFV Forum brainstorming - final request for discussion topics

2017-03-16 Thread Kathy Cacciatore

Please wrap up your telecom/NFV Forum discussion topics this week, adding them 
here [1]. Also, help prioritize the topics already there with your +1s and -1s. 
There are great topics already! Everyone, all working groups, especially 
Telecom/NFV Operators and the LCOO are welcome to participate in this 
category-based etherpad (if you don't have your own). Thank you.
 
[1] https://etherpad.openstack.org/p/BOS-UC-brainstorming-Telecom&NFV
-- 
Regards, 

Kathy Cacciatore
Consulting Marketing Manager
OpenStack Foundation
1-512-970-2807 (mobile)
Part time: Monday - Thursday, 9am - 2pm US CT
kat...@openstack.org


Re: [Openstack-operators] Flavors

2017-03-16 Thread Jonathan D. Proulx

I have always hated flavors and so do many of my users.

On Wed, Mar 15, 2017 at 03:22:48PM -0700, James Downs wrote:
:On Wed, Mar 15, 2017 at 10:10:00PM +, Fox, Kevin M wrote:
:> I think the really short answer is something like: It greatly simplifies 
scheduling and billing.
:
:The real answer is that once you buy hardware, it's in a fixed ratio of
CPU/Ram/Disk/IOPS, etc.

This, while apparently reasonable, is BS (at least in private cloud
space).  What users request and what they actually use are wildly
divergent.

*IF* usage of claimed resources were at or near optimal then this might
be true.  But if people are claiming 32G of RAM because that's how much
you assigned to a 16 vCPU instance type but really just need 16
threads with 2G or 4G then your packing still sucks.
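With assumed numbers, the arithmetic of that mismatch looks like this (the host size and per-instance figures are illustrative, not from the thread):

```python
# Illustrative numbers only (assumptions, not anyone's real hardware).
HOST_RAM_GB = 256   # physical memory per hypervisor
CLAIMED_GB = 32     # flavor-claimed memory per instance
USED_GB = 4         # memory each workload actually touches

instances = HOST_RAM_GB // CLAIMED_GB                  # what the scheduler packs
real_utilization = instances * USED_GB / HOST_RAM_GB   # fraction of RAM in use
```

The scheduler declares the host "full" at 8 instances while only an eighth of its memory is actually used.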

I'm mostly bound on memory so I mostly have my users select on that
basis and over-provide and over-provision CPU since that can be
effectively shared between VMs where memory needs to be dedicated
(well mostly)

I'm sure I've ranted about this before but as you see from other
responses we seem to be in the minority position so mostly I rant at
the walls while my office mates look on perplexed (actually they're
pretty used to it by now and ignore me :) )

-Jon



[Openstack-operators] [openstack][openstack-ansible] VMs not able to access external n/w

2017-03-16 Thread Amit Kumar
Hi All,

I have deployed the OpenStack Newton release using OpenStack-Ansible 14.0.8
with target hosts (Controller and Compute) running Ubuntu 16.04. I want to
ping/access my external lab n/w from the VMs (instances on the OpenStack
Compute node) but am not able to do so. Here are my environment and the
respective configurations.

Basically, I had tried to create the example test n/w configuration
(https://docs.openstack.org/project-deploy-guide/openstack-ansible/newton/app-config-test.html#test-environment-config)
with some changes in the /etc/network/interfaces file, because each Compute
and Controller node in my setup has two NICs. As in the example environment,
I also have 1 Compute and 1 Controller node. eth0 of each machine is
connected to a switch for any kind of communication b/w the Compute and
Controller nodes. eth1 of each machine is connected to my lab n/w
(192.168.255.XXX). My */etc/network/interfaces* files from the Controller
and Compute nodes are attached to this e-mail, along with the
*openstack_user_config.yml* file I am using.

My requirement is to provide external connectivity to VMs running inside the
OpenStack environment. Could you please have a look at my network interfaces
files and openstack_user_config.yml to see if anything in these
configurations is blocking external connectivity to my VMs? A few things
which might be helpful in analyzing these files:

   - My lab n/w (192.168.255.XXX) is not a tagged n/w; it doesn't expect VLAN
   tagged packets. So, do I need to create a flat external n/w? As you can
   notice in my openstack_user_config.yml file, I have commented out the flat
   n/w section from the provider networks section. This was done because when
   I first created this setup I was unable to launch VMs; after discussion on
   the openstack-ansible channel and looking at logs, it was found that the
   "eth12" interface was non-existent on the Compute node and hence errors
   were appearing in the logs. So, folks in the openstack-ansible channel
   suggested commenting out the flat n/w configuration from
   openstack_user_config.yml, re-configuring neutron, and giving it a try.
   After this, I was able to launch VMs. But now, with the requirement to
   ping/access the outside lab n/w, it seems that the flat n/w configuration
   is required again. Please also suggest what changes are required for the
   flat n/w configuration to be successful.
   - One more thing: if you look at my network interfaces file, br-vlan has
   eth0 as its bridge-port, but my eth0 is not connected to the outside
   world, i.e. the lab n/w. Shouldn't br-vlan use eth1 instead of eth0 as its
   bridge-port, considering that br-vlan provides connectivity to the
   external n/w? This may seem strange, but when I delete eth0 from br-vlan
   and add eth1 to br-vlan on both the Controller and Compute nodes, these
   hosts are (after some time) no longer reachable from or able to ping any
   other lab machine on the 192.168.255.XXX n/w, whereas with eth0 in
   br-vlan, the Compute and Controller nodes and the lab machines can reach
   each other.


Just to include more information, if required, I am adding the commands I
used to create n/ws.
*My VMs are on INTERNAL_NET1 created using following commands:*
 - openstack network create --provider-network-type vlan INTERNAL_NET1
 - openstack subnet create INTERNAL_SUBNET_1_1 --network INTERNAL_NET1
--subnet-range 192.168.2.0/24

*EXTERNAL_NET is created using following commands:*
 - neutron net-create --provider:physical_network=flat
--provider:network_type=flat --shared --router:external=true GATEWAY_NET
 - neutron subnet-create GATEWAY_NET 192.168.255.0/24 --name GATEWAY_SUBNET
--gateway=192.168.255.1 --allocation-pool
start=192.168.255.81,end=192.168.255.100

* Earlier I had tried --provider:physical_network=vlan
--provider:network_type=vlan and things didn't work because lab n/w seems
not expecting VLAN tagged packets. So, thinking of having flat n/w now.*

*Router is created and GW and interfaces are set using following commands:*
 - neutron router-create NEUTRON-ROUTER
 - neutron router-gateway-set NEUTRON-ROUTER GATEWAY_NET
 - neutron router-interface-add NEUTRON-ROUTER INTERNAL_SUBNET_1_1
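For reference, the ML2 pieces a flat provider network normally needs look roughly like this. This is a sketch assuming the linuxbridge agent and a provider label of "flat"; the label and the interface name must match what is in openstack_user_config.yml and the actual NIC layout:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (neutron server)
[ml2]
type_drivers = flat,vlan,vxlan

[ml2_type_flat]
# provider label used in: --provider:physical_network=flat
flat_networks = flat

# linuxbridge agent config (compute / network hosts)
[linux_bridge]
# map the provider label to the NIC attached to the 192.168.255.x lab n/w
physical_interface_mappings = flat:eth1
```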

Thanks.

Regards,
Amit
# Controller Node (Infra).
# This illustrates the configuration of the first
# Infrastructure host and the IP addresses assigned should be adapted
# for implementation on the other hosts.
#
# After implementing this configuration, the host will need to be
# rebooted.

# Physical interface
auto eth0
iface eth0 inet manual

# Container/Host management VLAN interface
auto eth0.10
iface eth0.10 inet manual
vlan-raw-device eth0

# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
auto eth0.30
iface eth0.30 inet manual
vlan-raw-device eth0

# Storage network VLAN interface (optional)
auto eth0.20
iface eth0.20 inet manual
vlan-raw-device eth0


Re: [Openstack-operators] need input on log translations

2017-03-16 Thread Akihiro Motoki
2017-03-14 0:15 GMT+09:00 Doug Hellmann :
> Excerpts from Doug Hellmann's message of 2017-03-10 09:38:43 -0500:
>> There is a discussion on the -dev mailing list about the i18n team
>> decision to stop translating log messages [1]. The policy change means
>> that we may be able to clean up quite a lot of "clutter" throughout the
>> service code, because without anyone actually translating the messages
>> there is no need for the markup code used to tag those strings.
>>
>> If we do remove the markup from log messages, we will be effectively
>> removing "multilingual logs" as a feature. Given the amount of work
>> and code churn involved in the first roll out, I would not expect
>> us to restore that feature later.
>>
>> Therefore, before we take what would almost certainly be an
>> irreversible action, we would like some input about whether log
>> message translations are useful to anyone. Please let us know if
>> you or your customers use them.
>>
>> Thanks,
>> Doug
>>
>> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113365.html
>>
>
> We've heard from several European operators on this thread, and their
> input makes it clear that log translations are not useful to them.
>
> IIRC, the translation feature was originally added for some of our APAC
> users. Is anyone from APAC actively using the feature?

As a former Japanese language team coordinator,
I can say that Japanese OpenStack operators are not interested in
translated log messages.
I ran a quick survey on translations at OpenStack Days Tokyo two
years ago (2015).
There were around 300-400 attendees and nobody raised a hand for log
translations.
(I am not sure how many operators were there, but I saw at least
several major OpenStack operators from Japan.)

Akihiro

>
> Doug
>



Re: [Openstack-operators] need input on log translations

2017-03-16 Thread Jaesuk Ahn
I've heard from several Korean operators, and I too was involved in
OpenStack operation. I cannot represent everyone here in Korea, but it is
mostly clear that log translations are not beneficial for operators, at
least here in Korea.

Thanks.

Jaesuk Ahn, Ph.D.

Software Defined Infra. Laboratory, SK Telecom

On Tue, 14 Mar 2017 at 00:15, Doug Hellmann wrote:

Excerpts from Doug Hellmann's message of 2017-03-10 09:38:43 -0500:

> There is a discussion on the -dev mailing list about the i18n team
> decision to stop translating log messages [1]. The policy change means
> that we may be able to clean up quite a lot of "clutter" throughout the
> service code, because without anyone actually translating the messages
> there is no need for the markup code used to tag those strings.
>
> If we do remove the markup from log messages, we will be effectively
> removing "multilingual logs" as a feature. Given the amount of work
> and code churn involved in the first roll out, I would not expect
> us to restore that feature later.
>
> Therefore, before we take what would almost certainly be an
> irreversible action, we would like some input about whether log
> message translations are useful to anyone. Please let us know if
> you or your customers use them.
>
> Thanks,
> Doug
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113365.html

We've heard from several European operators on this thread, and their
input makes it clear that log translations are not useful to them.

IIRC, the translation feature was originally added for some of our APAC
users. Is anyone from APAC actively using the feature?

Doug
