[Openstack-operators] [openstack] [Octavia] unable to contact instance

2016-06-29 Thread Joris S’heeren
Hi all,

We are in the process of configuring the octavia lbaasv2 service.
In our Mitaka environment we are using Neutron DVR with Open vSwitch.  We have 
multiple controllers (3, to be exact).
We're following 
https://github.com/openstack/octavia/blob/stable/mitaka/devstack/plugin.sh as a 
guide.

After creating a load balancer, the amphora instance is booted.  It receives 
IPs in the lb-net and the private network.

The octavia-worker log shows:
DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url 
plug/vip/192.168.1.126 request 
/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:218
DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url 
https://192.168.246.11:9443/0.5/plug/vip/192.168.1.126 request 
/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:221
WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect 
to instance. Retrying.
WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect 
to instance. Retrying.
etc...

I can ping the amphora instance on the lb-net, but not on the private net, as 
it has not received the request to activate the second interface.  I can SSH 
into the amphora instance from the lb-net namespace.
The amphora agent is running inside the amphora instance and listening on port 
9443, and the correct security group is attached to the amphora instance, so 
that should not be an issue.

We can also see in dmesg:
[216682.378571] BLOCKED OUTPUT : IN= OUT=eth0 SRC=1.2.3.4 DST=192.168.246.11 
LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=19651 DF PROTO=TCP SPT=34464 DPT=9443 
WINDOW=29200 RES=0x00 SYN URGP=0

We can see the worker trying to contact the amphora instance:
octavia-w 31502 octavia   13u  IPv4 27647551  0t0  TCP 
1.2.3.4:35304->192.168.246.11:9443 (SYN_SENT)

Do the Octavia processes try to contact the amphora instance directly at its 
lb-net IP, rather than through a namespace?
For this to work, do we need to configure the health manager ports as described 
here:
https://github.com/openstack/octavia/blob/stable/mitaka/devstack/plugin.sh#L123
Or set up some iptables rules or routes?
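
For what it's worth, a quick way to narrow this down, independent of Octavia 
itself, is to test the TCP path to the amphora agent by hand from the host 
running octavia-worker (IP and port taken from the log above; the namespace 
name is only an example, check `ip netns list` on your nodes):

```shell
# From the octavia-worker host: does a plain TLS connection reach the agent?
curl -k -m 5 https://192.168.246.11:9443/ ; echo "exit=$?"

# Same test from inside a namespace that sits on the lb-mgmt network
# (the namespace name below is a placeholder):
ip netns exec qdhcp-<lb-mgmt-net-id> \
    curl -k -m 5 https://192.168.246.11:9443/ ; echo "exit=$?"
```

If the first command times out while the second succeeds, that would suggest 
the worker needs a direct route (or the interface/iptables plumbing the 
devstack plugin sets up) to the lb-mgmt network, since as far as I understand 
it contacts the amphora directly rather than through a namespace.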

Kind regards
Joris S'heeren

--
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Reaching VXLAN tenant networks from outside (without floating IPs)

2016-06-29 Thread Mike Spreitzer
Gustavo Randich  wrote on 06/29/2016 03:17:54 
PM:

> Hi operators...
> 
> Transitioning from nova-network to Neutron (Mitaka), one of the key 
> issues we are facing is how to reach VMs in VXLAN tenant networks 
> without using precious floating IPs.
> 
> Things that are outside Neutron in our case are:
> 
> - in-house made application orchestrator: needs SSH access to 
> instances to perform various tasks (start / shutdown apps, configure
> filesystems, etc.)
> 
> - various centralized and external monitoring/metrics pollers: need 
> SNMP / SSH access to gather status and trends
> 
> - internal customers: need SSH access to instance from non-openstack
> VPN service
> 
> - ideally, non-VXLAN aware traffic balancer appliances
> 
> We have considered these approaches:
> 
> - putting some of the external components inside a Network Node: 
> inviable because components need access to multiple Neutron deployments
> 
> - Neutron's VPNaaS: cannot figure how to configure a client-to-site 
> VPN topology
> 
> - integrate hardware switches capable of VXLAN VTEP: for us in this 
> stage, it is complex and expensive
> 
> - other?

You know Neutron includes routers that can route between tenant networks 
and external networks, right?  You could use those, if your tenant 
networks use disjoint IP subnets.
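
As a rough sketch with the Mitaka neutron CLI (names and IDs are 
placeholders), wiring a tenant network to an external network via a router 
looks like:

```shell
neutron router-create tenant-router
neutron router-interface-add tenant-router <tenant-subnet-id>
# Attach the router's gateway to the external network; --disable-snat
# (admin only) keeps tenant addresses visible to the outside.
neutron router-gateway-set tenant-router <external-net-id> --disable-snat
```

External systems then need a static route for the tenant subnet pointing at 
the router's gateway address on the external network.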

Regards,
Mike





[Openstack-operators] [app-catalog] App Catalog IRC meeting Thursday June 30th

2016-06-29 Thread Christopher Aedo
Join us Thursday for our weekly meeting, scheduled for June 30th at
17:00UTC in #openstack-meeting-3

The agenda can be found here, and please add to if you want to discuss
something with the Community App Catalog team:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Tomorrow we expect to talk in some detail about our next steps with
implementing GLARE as a back-end for the Community App Catalog.
(Mirantis has thrown several resources at making this happen, thanks!)

Hope to see you there tomorrow!



[Openstack-operators] Seeking feedback: Active User Contributor (AUC) eligibility requirements

2016-06-29 Thread Shamail Tahir
Hi everyone,

The AUC Recognition WG has been hard at work on milestone-4 of our plan
which is to identify the eligibility criteria for each community
contributor role that is covered by AUC.  We had a great mix of community
people involved in defining these thresholds but we wanted to also open
this up for broader community feedback before we propose them to the user
committee.  AUC is a new concept and we hope to make iterative improvements
going forward... you can consider the guidelines below as "version 1" and I
am certain they will evolve as lessons are learned.  Thank you in advance
for your feedback!

·  Official User Group organizers

o   Listed as an organizer or coordinator for an official OpenStack user
group

·  Active members of official UC Working Groups

o   Attended at least 25% of the IRC meetings and spoken more than 25 times,
OR spoken more than 100 times regardless of attendance, over the last six
months

o   WGs that do not use IRC for their meetings will depend on the meeting
chair(s) to identify active participation from attendees

·  Ops meetup moderators

o   Moderate a session at the operators meetup over the last six
months AND/OR

o   Host the operators meetup (limit 2 people from the hosting
organization) over the last six months

·  Contributions to any repository under UC governance (ops
repositories, user stories repository, etc.)

o   Submitted two or more patches to a UC governed repository over the last
six months

·  Track chairs for OpenStack Summits

o   Identified track chair for the upcoming OpenStack Summit (based on when
data is gathered) [this is a forward-facing metric]

·  Contributors to Superuser (articles, interviews, user stories, etc.)

o   Listed as author in at least one publication at superuser.openstack.org
over the last six months

·  Submission for eligibility to AUC review panel

o   No formal criteria, anyone can self-nominate, and nominations will be
reviewed per guidance established in milestone-5

·  Active moderators on ask.openstack

o   Listed as moderator on Ask OpenStack and have over 500 karma

There is additional information available in the etherpad[1] that the AUC
recognition WG has been using for this task, which includes Q&A (questions
and answers) between team members.

[1] https://etherpad.openstack.org/p/uc-recog-metrics
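
For what it's worth, the IRC threshold above can be written as a simple
predicate; this is purely illustrative (the function name and interface are
mine, not part of the proposal):

```shell
# Returns success (0) if a member meets the IRC activity criteria:
# spoke >100 times, OR attended >=25% of meetings and spoke >25 times.
irc_active() {
    # $1 = meetings attended, $2 = meetings held (last six months),
    # $3 = number of times spoken
    local attended=$1 held=$2 spoken=$3
    [ "$spoken" -gt 100 ] && return 0
    [ $((attended * 4)) -ge "$held" ] && [ "$spoken" -gt 25 ]
}
```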

-- 
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time


Re: [Openstack-operators] [NFV] [Tacker] Problem installing Tacker on Mitaka when creating DB

2016-06-29 Thread Sridhar Ramaswamy
Hi Pedro,

We have a bug open for this:

https://bugs.launchpad.net/tacker/+bug/1594807

.. and a fix is in the works. Meanwhile, the workaround is to downgrade to
an older MariaDB version.

- Sridhar

On Wed, Jun 29, 2016 at 10:27 AM, Pedro Sousa  wrote:

> Hi all,
>
> I'm trying to install tacker on mitaka Centos 7.2 RDO following this
> howto:
> http://docs.openstack.org/developer/tacker/install/manual_installation.html
> .
>
> I'm having a DB error  when I run the command "tacker-db-manage
> --config-file /etc/tacker/tacker.conf upgrade head":
>
> root@overcloud-controller-0 tacker-0.3.1]# tacker-db-manage --config-file 
> /etc/tacker/tacker.conf upgrade head
> INFO  [alembic.runtime.migration] Context impl MySQLImpl.
> INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
> INFO  [alembic.runtime.migration] Running upgrade  -> 1c6b0d82afcd, add 
> tables for tacker framework
> INFO  [alembic.runtime.migration] Running upgrade 1c6b0d82afcd -> 
> 81ffa86020d, rpc_proxy
> INFO  [alembic.runtime.migration] Running upgrade 81ffa86020d -> 
> 4c31092895b8, empty message
> INFO  [alembic.runtime.migration] Running upgrade 4c31092895b8 -> 
> 13c0e0661015, add descrition to vnf
> INFO  [alembic.runtime.migration] Running upgrade 13c0e0661015 -> 
> 5958429bcb3c, modify datatype of value
> INFO  [alembic.runtime.migration] Running upgrade 5958429bcb3c -> 
> 12a57080b277, Add Service related dbs
> INFO  [alembic.runtime.migration] Running upgrade 12a57080b277 -> 
> 12a57080b278, Alter devices
> Traceback (most recent call last):
>   File "/bin/tacker-db-manage", line 10, in 
> sys.exit(main())
>   File "/usr/lib/python2.7/site-packages/tacker/db/migration/cli.py", line 
> 153, in main
> CONF.command.func(config, CONF.command.name)
>   File "/usr/lib/python2.7/site-packages/tacker/db/migration/cli.py", line 
> 67, in do_upgrade_downgrade
> do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
>   File "/usr/lib/python2.7/site-packages/tacker/db/migration/cli.py", line 
> 45, in do_alembic_command
> getattr(alembic_command, cmd)(config, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/alembic/command.py", line 174, in 
> upgrade
> script.run_env()
>   File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line 397, 
> in run_env
> util.load_python_file(self.dir, 'env.py')
>   File "/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, 
> in load_python_file
> module = load_module_py(module_id, path)
>   File "/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 79, in 
> load_module_py
> mod = imp.load_source(module_id, path, fp)
>   File 
> "/usr/lib/python2.7/site-packages/tacker/db/migration/alembic_migrations/env.py",
>  line 84, in 
> run_migrations_online()
>   File 
> "/usr/lib/python2.7/site-packages/tacker/db/migration/alembic_migrations/env.py",
>  line 76, in run_migrations_online
> context.run_migrations()
>   File "", line 8, in run_migrations
>   File "/usr/lib/python2.7/site-packages/alembic/runtime/environment.py", 
> line 797, in run_migrations
> self.get_context().run_migrations(**kw)
>   File "/usr/lib/python2.7/site-packages/alembic/runtime/migration.py", line 
> 312, in run_migrations
> step.migration_fn(**kw)
>   File 
> "/usr/lib/python2.7/site-packages/tacker/db/migration/alembic_migrations/versions/12a57080b278_alter_devices.py",
>  line 36, in upgrade
> nullable=False)
>   File "", line 8, in alter_column
>   File "", line 3, in alter_column
>   File "/usr/lib/python2.7/site-packages/alembic/operations/ops.py", line 
> 1414, in alter_column
> return operations.invoke(alt)
>   File "/usr/lib/python2.7/site-packages/alembic/operations/base.py", line 
> 318, in invoke
> return fn(self, operation)
>   File "/usr/lib/python2.7/site-packages/alembic/operations/toimpl.py", line 
> 53, in alter_column
> **operation.kw
>   File "/usr/lib/python2.7/site-packages/alembic/ddl/mysql.py", line 66, in 
> alter_column
> else existing_autoincrement
>   File "/usr/lib/python2.7/site-packages/alembic/ddl/impl.py", line 118, in 
> _exec
> return conn.execute(construct, *multiparams, **params)
>   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
> 914, in execute
> return meth(self, multiparams, params)
>   File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 68, 
> in _execute_on_connection
> return connection._execute_ddl(self, multiparams, params)
>   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
> 968, in _execute_ddl
> compiled
>   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
> 1146, in _execute_context
> context)
>   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
> 1341, in _handle_dbapi_exception
> exc_info
>   File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 
> 200, in 

Re: [Openstack-operators] Bandwidth limitations

2016-06-29 Thread Matt Fischer
We automate all our flavor creation and don't allow people to make their
own, so everyone gets a flavor with some restriction. That may not fit your
use case, though.

On Wed, Jun 29, 2016 at 12:03 PM, Daniel Levy  wrote:

> Thanks for the responses; I'm aware of the QOS policies in Openstack,
> however I'd like them to be applied automatically. Using predefined flavors
> as described by Matt Fischer above seems like a good approach, are there
> any solutions for non-predefined flavors?
>
>
> - Original message -
> From: Assaf Muller 
> To: Joseph Bajin 
> Cc: Daniel Levy/Austin/IBM@IBMUS, OpenStack Operators <
> openstack-operators@lists.openstack.org>
> Subject: Re: [Openstack-operators] Bandwidth limitations
> Date: Wed, Jun 29, 2016 12:46 PM
>
> On Wed, Jun 29, 2016 at 12:43 PM, Joseph Bajin 
> wrote:
> > Hi there,
> >
> > It looks like QOS is already available within the Mitaka release.
> Maybe it
> > doesn't have all the features you need, but looks to be a good start.
> > http://docs.openstack.org/mitaka/networking-guide/adv-config-qos.html
>
> It's available from Neutron's Liberty release even. The new feature
> provides a new QoS bandwidth limitation API, which when using the OVS
> agent, is implemented via an OVS feature as such [1].
>
> It sets the 'ingress_policing_rate' and 'ingress_policing_burst'
> attributes on the VM's interface record in the ovsdb. Internally to
> OVS that is implemented via 'tc' and by dropping packets over the
> specified rate as detailed here [2].
>
> [1]
> https://github.com/openstack/neutron/blob/stable/liberty/neutron/agent/common/ovs_lib.py#L539
> [2] http://openvswitch.org/support/config-cookbooks/qos-rate-limiting/
>
> >
> > I haven't used it yet, but maybe someone else will pipe up with some
> > expierence.
> >
> > --Joe
> >
> > On Wed, Jun 29, 2016 at 12:36 PM, Daniel Levy  wrote:
> >>
> >> Hi all,
> >> I'd like to learn about potential solutions anyone out there is using
> for
> >> bandwidth limitations on VMs. Potentially applying QOS (quality of
> service)
> >> rules on the VM ports in an automated fashion.
> >> If there are no current solutions, I might submit a blue print to tackle
> >> this issue
> >>
> >>
> >> ___
> >> OpenStack-operators mailing list
> >> OpenStack-operators@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>


Re: [Openstack-operators] Bandwidth limitations

2016-06-29 Thread Daniel Levy
Thanks for the responses; I'm aware of the QoS policies in OpenStack, but I'd
like them to be applied automatically. Using predefined flavors as described
by Matt Fischer above seems like a good approach; are there any solutions for
non-predefined flavors?
 
 




Re: [Openstack-operators] Bandwidth limitations

2016-06-29 Thread Assaf Muller
On Wed, Jun 29, 2016 at 12:43 PM, Joseph Bajin  wrote:
> Hi there,
>
> It looks like QOS is already available within the Mitaka release.   Maybe it
> doesn't have all the features you need, but looks to be a good start.
> http://docs.openstack.org/mitaka/networking-guide/adv-config-qos.html

It's available even from Neutron's Liberty release. The new feature
provides a QoS bandwidth-limit API which, when using the OVS agent, is
implemented via an OVS feature [1].

It sets the 'ingress_policing_rate' and 'ingress_policing_burst'
attributes on the VM's interface record in the ovsdb. Internally to
OVS that is implemented via 'tc' and by dropping packets over the
specified rate as detailed here [2].

[1] 
https://github.com/openstack/neutron/blob/stable/liberty/neutron/agent/common/ovs_lib.py#L539
[2] http://openvswitch.org/support/config-cookbooks/qos-rate-limiting/
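
To make that concrete, the hand-applied equivalent of what the agent does is
roughly the following (interface name and rates are examples; rate/burst are
in kbit/s, and setting both back to 0 removes the limit):

```shell
# Police traffic the VM sends into the switch to ~10 Mbit/s
ovs-vsctl set interface tap0 ingress_policing_rate=10000
ovs-vsctl set interface tap0 ingress_policing_burst=1000
# Inspect the result
ovs-vsctl list interface tap0 | grep ingress_policing
```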

>
> I haven't used it yet, but maybe someone else will pipe up with some
> expierence.
>
> --Joe
>
> On Wed, Jun 29, 2016 at 12:36 PM, Daniel Levy  wrote:
>>
>> Hi all,
>> I'd like to learn about potential solutions anyone out there is using for
>> bandwidth limitations on VMs. Potentially applying QOS (quality of service)
>> rules on the VM ports in an automated fashion.
>> If there are no current solutions, I might submit a blue print to tackle
>> this issue
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



[Openstack-operators] [NFV] [Tacker] Problem installing Tacker on Mitaka when creating DB

2016-06-29 Thread Pedro Sousa
Hi all,

I'm trying to install Tacker on Mitaka (CentOS 7.2, RDO) following this howto:
http://docs.openstack.org/developer/tacker/install/manual_installation.html

I'm having a DB error when I run the command "tacker-db-manage
--config-file /etc/tacker/tacker.conf upgrade head":

[root@overcloud-controller-0 tacker-0.3.1]# tacker-db-manage
--config-file /etc/tacker/tacker.conf upgrade head
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> 1c6b0d82afcd,
add tables for tacker framework
INFO  [alembic.runtime.migration] Running upgrade 1c6b0d82afcd ->
81ffa86020d, rpc_proxy
INFO  [alembic.runtime.migration] Running upgrade 81ffa86020d ->
4c31092895b8, empty message
INFO  [alembic.runtime.migration] Running upgrade 4c31092895b8 ->
13c0e0661015, add descrition to vnf
INFO  [alembic.runtime.migration] Running upgrade 13c0e0661015 ->
5958429bcb3c, modify datatype of value
INFO  [alembic.runtime.migration] Running upgrade 5958429bcb3c ->
12a57080b277, Add Service related dbs
INFO  [alembic.runtime.migration] Running upgrade 12a57080b277 ->
12a57080b278, Alter devices
Traceback (most recent call last):
  File "/bin/tacker-db-manage", line 10, in 
sys.exit(main())
  File "/usr/lib/python2.7/site-packages/tacker/db/migration/cli.py",
line 153, in main
CONF.command.func(config, CONF.command.name)
  File "/usr/lib/python2.7/site-packages/tacker/db/migration/cli.py",
line 67, in do_upgrade_downgrade
do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  File "/usr/lib/python2.7/site-packages/tacker/db/migration/cli.py",
line 45, in do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/alembic/command.py", line
174, in upgrade
script.run_env()
  File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line
397, in run_env
util.load_python_file(self.dir, 'env.py')
  File "/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py",
line 81, in load_python_file
module = load_module_py(module_id, path)
  File "/usr/lib/python2.7/site-packages/alembic/util/compat.py", line
79, in load_module_py
mod = imp.load_source(module_id, path, fp)
  File 
"/usr/lib/python2.7/site-packages/tacker/db/migration/alembic_migrations/env.py",
line 84, in 
run_migrations_online()
  File 
"/usr/lib/python2.7/site-packages/tacker/db/migration/alembic_migrations/env.py",
line 76, in run_migrations_online
context.run_migrations()
  File "", line 8, in run_migrations
  File "/usr/lib/python2.7/site-packages/alembic/runtime/environment.py",
line 797, in run_migrations
self.get_context().run_migrations(**kw)
  File "/usr/lib/python2.7/site-packages/alembic/runtime/migration.py",
line 312, in run_migrations
step.migration_fn(**kw)
  File 
"/usr/lib/python2.7/site-packages/tacker/db/migration/alembic_migrations/versions/12a57080b278_alter_devices.py",
line 36, in upgrade
nullable=False)
  File "", line 8, in alter_column
  File "", line 3, in alter_column
  File "/usr/lib/python2.7/site-packages/alembic/operations/ops.py",
line 1414, in alter_column
return operations.invoke(alt)
  File "/usr/lib/python2.7/site-packages/alembic/operations/base.py",
line 318, in invoke
return fn(self, operation)
  File "/usr/lib/python2.7/site-packages/alembic/operations/toimpl.py",
line 53, in alter_column
**operation.kw
  File "/usr/lib/python2.7/site-packages/alembic/ddl/mysql.py", line
66, in alter_column
else existing_autoincrement
  File "/usr/lib/python2.7/site-packages/alembic/ddl/impl.py", line
118, in _exec
return conn.execute(construct, *multiparams, **params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py",
line 914, in execute
return meth(self, multiparams, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py",
line 68, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py",
line 968, in _execute_ddl
compiled
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1146, in _execute_context
context)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1341, in _handle_dbapi_exception
exc_info
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py",
line 200, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1139, in _execute_context
context)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py",
line 450, in do_execute
cursor.execute(statement, parameters)
  File "/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line
174, in execute
self.errorhandler(self, exc, value)
  File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py",

Re: [Openstack-operators] Bandwidth limitations

2016-06-29 Thread Kris G. Lindgren
I would also look at how it's doing it.  In the past, what it did was drop 
packets over a specific threshold, which is really, really terrible.  We do 
some traffic policing on some of our VMs, but we do it outside of OpenStack 
via a qemu hook, setting up our own qdisc and ifb device for each tap device 
that we want to police.

https://github.com/godaddy/openstack-traffic-shaping
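
For anyone curious, the qdisc/ifb approach the hook implements looks roughly
like this per tap device (device names and rates are examples only; see the
repo above for the real thing):

```shell
# Create an ifb device so "ingress" (VM transmit) traffic can be queued
modprobe ifb numifbs=1
ip link set ifb0 up
# Redirect tap0 ingress into ifb0 so it can be shaped instead of dropped
tc qdisc add dev tap0 handle ffff: ingress
tc filter add dev tap0 parent ffff: protocol all u32 match u32 0 0 \
    action mirred egress redirect dev ifb0
# Shape on ifb0 with a token bucket filter (~100 Mbit/s)
tc qdisc add dev ifb0 root tbf rate 100mbit burst 256kbit latency 400ms
```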

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Joseph Bajin >
Date: Wednesday, June 29, 2016 at 10:43 AM
To: Daniel Levy >
Cc: OpenStack Operators 
>
Subject: Re: [Openstack-operators] Bandwidth limitations

Hi there,

It looks like QOS is already available within the Mitaka release.   Maybe it 
doesn't have all the features you need, but looks to be a good start.
http://docs.openstack.org/mitaka/networking-guide/adv-config-qos.html

I haven't used it yet, but maybe someone else will pipe up with some expierence.

--Joe

On Wed, Jun 29, 2016 at 12:36 PM, Daniel Levy 
> wrote:
Hi all,
I'd like to learn about potential solutions anyone out there is using for 
bandwidth limitations on VMs. Potentially applying QOS (quality of service) 
rules on the VM ports in an automated fashion.
If there are no current solutions, I might submit a blue print to tackle this 
issue






Re: [Openstack-operators] Bandwidth limitations

2016-06-29 Thread Matt Fischer
We've been using this for some time now (since at least Kilo). We set them
per flavor, not per instance.

https://wiki.openstack.org/wiki/InstanceResourceQuota

Bandwidth limits

Nova Extra Specs keys:

   - vif_inbound_average
   - vif_outbound_average
   - vif_inbound_peak
   - vif_outbound_peak
   - vif_inbound_burst
   - vif_outbound_burst
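
For example (flavor name is a placeholder; the values are kilobytes/second,
passed through to libvirt's bandwidth settings, if I read the wiki correctly):

```shell
# ~10 MB/s sustained each way, with 2x peak headroom
nova flavor-key m1.limited set quota:vif_inbound_average=10240
nova flavor-key m1.limited set quota:vif_outbound_average=10240
nova flavor-key m1.limited set quota:vif_inbound_peak=20480
nova flavor-key m1.limited set quota:vif_outbound_peak=20480
nova flavor-key m1.limited set quota:vif_inbound_burst=10240
nova flavor-key m1.limited set quota:vif_outbound_burst=10240
```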



On Wed, Jun 29, 2016 at 10:36 AM, Daniel Levy  wrote:

> Hi all,
> I'd like to learn about potential solutions anyone out there is using for
> bandwidth limitations on VMs. Potentially applying QOS (quality of service)
> rules on the VM ports in an automated fashion.
> If there are no current solutions, I might submit a blue print to tackle
> this issue
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>


[Openstack-operators] Bandwidth limitations

2016-06-29 Thread Daniel Levy
Hi all,
I'd like to learn about potential solutions anyone out there is using for
bandwidth limitations on VMs, potentially applying QoS (quality of service)
rules to VM ports in an automated fashion.
If there are no current solutions, I might submit a blueprint to tackle this
issue.




Re: [Openstack-operators] Bandwidth limitations

2016-06-29 Thread Joseph Bajin
Hi there,

It looks like QoS is already available within the Mitaka release.  Maybe
it doesn't have all the features you need, but it looks to be a good start:
http://docs.openstack.org/mitaka/networking-guide/adv-config-qos.html
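
From that guide, the basic workflow is roughly (policy name and port ID are
placeholders):

```shell
neutron qos-policy-create bw-limiter
neutron qos-bandwidth-limit-rule-create bw-limiter \
    --max-kbps 3000 --max-burst-kbps 300
neutron port-update <port-id> --qos-policy bw-limiter
```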

I haven't used it yet, but maybe someone else will pipe up with some
experience.

--Joe

On Wed, Jun 29, 2016 at 12:36 PM, Daniel Levy  wrote:

> Hi all,
> I'd like to learn about potential solutions anyone out there is using for
> bandwidth limitations on VMs. Potentially applying QOS (quality of service)
> rules on the VM ports in an automated fashion.
> If there are no current solutions, I might submit a blue print to tackle
> this issue
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>


Re: [Openstack-operators] [HA] RFC: user story including hypervisor reservation / host maintenance / storage AZs / event history (fwd)

2016-06-29 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Thanks again for the help and comments, Adam.

I need to look at those other discussions you have linked here. That will take
some time, as I am going on holiday on Friday and coming back in August.

Meanwhile I am beginning to think that this new field in Nova would really be
just for maintenance, and that there may be no need for a URL to something
external. Any external tool could consume the notification anyway, and further
logic could live inside the tool. The downside is that, as the Nova team did
not want a big change for this ("just one new field"), it is not that usable
for carrying different kinds of maintenance state information. Some different
"states" one might need:
- Maintenance window (begin time - end time; if the end time is missing, the HW
is not coming back. This is needed if VMs would be left on the host during
maintenance)
- In maintenance (visible to VMs left on the host)
- Test (only the operator can use this host after maintenance, to test that it
works. This needs a new "MaintenanceModeFilter" for that purpose)

Looking at these three "states", two could be reserved words that one can
expect:
- In maintenance
- Test
For the normal running situation we would know that there is no value, but the
"maintenance window" case could be tricky. One might also want to give more
details about it, meaning it would sit behind some URL. Then one might need to
tell the difference between a not-yet-maintained and a maintained system, to
launch a VM on one or the other. Expressing that as some kind of state might be
as ugly as a running version number, and it is not convenient if again some
"MaintenanceModeFilter" would need to map to it.

I need to keep looking for the best solution, and will discuss with the Nova
folks and in review when I am back from holiday.

Br,
Tomi

> -Original Message-
> From: Adam Spiers [mailto:aspi...@suse.com]
> Sent: Tuesday, June 28, 2016 6:42 PM
> To: Juvonen, Tomi (Nokia - FI/Espoo) 
> Cc: openstack-operators mailing list  operat...@lists.openstack.org>
> Subject: Re: [Openstack-operators] [HA] RFC: user story including
> hypervisor reservation / host maintenance / storage AZs / event history
> (fwd)
> 
> Juvonen, Tomi (Nokia - FI/Espoo)  wrote:
> > Thank you very much from the interest. Need to look over other
> > discussion and perhaps have a session in Barcelona to look the
> > way forward after change in Nova.
> 
> Indeed, sounds good!
> 
> > > -Original Message-
> > > From: Adam Spiers [mailto:aspi...@suse.com]
> > > Sent: Monday, June 20, 2016 4:43 PM
> > > To: Juvonen, Tomi (Nokia - FI/Espoo) 
> > > Cc: openstack-operators mailing list  > > operat...@lists.openstack.org>
> > > Subject: Re: [Openstack-operators] [HA] RFC: user story including
> > > hypervisor reservation / host maintenance / storage AZs / event history
> > > (fwd)
> > >
> > > Hi Tomi,
> > >
> > > Juvonen, Tomi (Nokia - FI/Espoo)  wrote:
> > > > I'm working in the OPNFV Doctor project that is about fault
> > > > management and maintenance (NFV). The goal of the project is to
> > > > build fault management and maintenance framework for high
> > > > availability of Network Services on top of virtualized
> > > > infrastructure.
> > > >
> > > > https://wiki.opnfv.org/display/doctor
> > > >
> > > > Currently there is already effort landed in OpenStack to have the
> > > > ability to detect failures fast, change states in OpenStack (Nova),
> > > > add state information that was missing, and also expose that to the
> > > > owner of a VM. An alarm is triggered as well. With all this one can
> > > > now rely on the states and get notice about faults in a split second.
> > > > Surely, with the system configured to monitor different faults, one
> > > > can take actions based on configured policies, or leave some actions
> > > > to the consumers of the alarms raised.
> > >
> > > Sounds very interesting - thanks.  Does this really have to be limited
> > > to OPNFV though?  It sounds like it would be very useful within
> > > OpenStack generally.
> > Surely not just for OPNFV, but for all operators.
> 
> Right - so why is it part of the OPNFV project?  That gives the
> impression that it would only be usable in NFV contexts.
> 
> > If playing with the idea of having a link to some external tool, to have
> > more than "host_maintenance_reason", it now seems there could be some
> > more generic "host_details", where one could have an external REST API
> > to call for any host-specific details that one would like to expose
> > also to the tenant/owner of the server.
> 
> Sounds like you are talking about some kind of "whiteboard" feature
> per instance which would act as a sort of communication channel
> between the project user/owner and the cloud operator?  Can you
> describe a use case which is unrelated to maintenance?
> 
> > If we had that tool, it could also have maintenance-specific or
> > host-failure-specific scenarios implemented. The admin could do things
> > manually, or the tool could be configured per VNF / instance to take
> > some actions..
> 
> I think we 

Re: [Openstack-operators] Data-Migration Juno -> Mitaka

2016-06-29 Thread Michael Stang
Hi Blair,
 
thank you for your answer. The tool Roland suggested is what we are looking
for; we want to migrate the end-user data from one cloud to another.
 
Your suggestion with the database transfer also sounds interesting, but if I
dump my Juno DB and import it into the Mitaka test DB, would this work? AFAIK
the DB schema also changes between versions of OpenStack; is it possible to
import an "old" DB and get it to work in a newer version?
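What I have in mind would roughly be the following (a sketch only; database names, the Mitaka DB host and credentials are placeholders, and each service has its own migration tool):

```shell
# Rough sketch only: DB names, hosts and credentials are placeholders.
# Dump one service DB from Juno (repeat for glance, neutron, cinder, ...):
mysqldump --single-transaction -u root -p nova > nova-juno.sql

# Load it into the Mitaka test environment's database server:
mysql -u root -p -h mitaka-db -e 'DROP DATABASE IF EXISTS nova; CREATE DATABASE nova;'
mysql -u root -p -h mitaka-db nova < nova-juno.sql

# Then let the Mitaka-level code upgrade the schema, e.g. for nova:
nova-manage db sync
```

But I am unsure whether the schema migrations can jump straight from Juno to Mitaka, or whether they would have to be stepped through the intermediate releases.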
 
Regards,
Michael
 
 
 

> Blair Bethwaite  hat am 29. Juni 2016 um 02:43
> geschrieben:
>
>
> Hi Roland -
>
> GUTS looks cool! But I took Michael's question to be more about
> control plane data than end-user instances etc...?
>
> Michael - If that's the case then you probably want to start with
> dumping your present Juno DBs, importing into your Mitaka test DB and
> then attempting the migrations to get to Mitaka, if they work then you
> might be able to bring up a "clone cloud" (of course there is probably
> a whole lot of network specific config in there that won't work unless
> you are doing this in a separate/isolated name-and-address space,
> there's also all the config files...). Also, as others have noted on
> this list recently, live upgrades are only supported/tested(?) between
> successive versions.
>
> Cheers,
>
> On 29 June 2016 at 09:54, Roland Chan  wrote:
> > Hi Michael
> >
> > We built a tool called GUTS to migrate various assets between OpenStack
> > deployments (and other things as well). You can check it out at
> > https://github.com/aptira/guts. It can migrate Instances, Volumes, Networks,
> > Tenants, Users and Security Groups from one OpenStack to another.
> >
> > It's a work in progress, but we're always happy to accept input.
> >
> > Hope this helps, feel free to contact me if you need anything.
> >
> > Roland
> >
> >
> >
> > On 28 June 2016 at 16:07, Michael Stang 
> > wrote:
> >>
> >> Hello all,
> >>
> >>
> >>
> >> we set up a small test environment of Mitaka to learn about the
> >> installation and the new features. Before we try the upgrade of our Juno
> >> production environment, we want to migrate all its data to the Mitaka
> >> installation as a backup and also to run tests.
> >>
> >>
> >>
> >> Is there an easy way to migrate the data from the Juno environment to the
> >> Mitaka environment, or does this have to be done manually piece by piece?
> >> I found a tool named CloudFerry, but the instructions for using it are
> >> sparse and there seems to be no Mitaka support yet. Is there any other
> >> software/tool to help with migrating data?
> >>
> >>
> >>
> >> Thanks and kind regards,
> >>
> >> Michael
> >>
> >>
> >> ___
> >> OpenStack-operators mailing list
> >> OpenStack-operators@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>
> >
> >
> >
>
>
>
> --
> Cheers,
> ~Blairo
Best regards,

Michael Stang
Laboringenieur, Dipl. Inf. (FH)

Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim
ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
Fachbereich Informatik, Fakultät Technik
Coblitzallee 1-9
68163 Mannheim

Tel.: +49 (0)621 4105 - 1367
michael.st...@dhbw-mannheim.de
http://www.dhbw-mannheim.de