Re: [Openstack-operators] database hoarding

2014-10-31 Thread Daniele Venzano

On 10/30/14 23:30, Abel Lopez wrote:
> As an operator, I'd prefer to have time based criteria over number of rows, too.
> I envision something like `nova-manage db purge [days]` where we can leave it
> up to the administrator to decide how much of their old data (if any) they'd be
> OK losing.
I would certainly use this feature, but please, please, please: make it
work across all OpenStack databases, not just for Nova, but also for
Cinder, Neutron, etc.
Let's try to get something integrated project-wide instead of having a
number of poorly documented, slightly inconsistent commands.
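
For illustration, a minimal sketch of what driving this could look like,
assuming the `nova-manage db purge [days]` syntax above and hypothetical
equivalents in the other services (none of these subcommands exist today):

    # Hypothetical only: the thread proposes `nova-manage db purge [days]`;
    # the cinder/glance variants assume the same pattern and do not exist yet.
    nova-manage db purge 90      # drop soft-deleted rows older than 90 days
    cinder-manage db purge 90    # assumed Cinder equivalent
    glance-manage db purge 90    # assumed Glance equivalent

    # e.g. run weekly from cron:
    # 0 3 * * 0  nova-manage db purge 90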






Re: [Openstack-operators] floating ip issue

2014-10-31 Thread George Shuklin
I was wrong, sorry. Floating IPs are assigned as /32 on the external
interface inside the network namespace. The single idea I have now is to
remove all the iptables NAT rules (this is destructive until the network
node reboots or the router is deleted/recreated) and check whether the
address replies to ping.


If 'yes', the problem is in routing/NAT.
If 'no', the problem is outside the OpenStack router (external net,
provider routing, etc.).
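
A rough sketch of that test (assuming the device-id used in the port-list
below, 8725dd16-8831-4a09-ae98-6c5342ea501f, is the router's UUID, so its
namespace on the network node is qrouter-<that uuid>):

    # List, then flush, the router namespace's NAT rules.
    # Flushing is destructive until the L3 agent rebuilds the rules
    # (network node reboot or router delete/create).
    ip netns exec qrouter-8725dd16-8831-4a09-ae98-6c5342ea501f iptables -t nat -S
    ip netns exec qrouter-8725dd16-8831-4a09-ae98-6c5342ea501f iptables -t nat -F

    # Then, from a host on the external network:
    ping -c 3 192.168.122.194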


On 10/29/2014 06:23 PM, Paras pradhan wrote:

Hi George,


You mean .193 and .194 should be in different subnets?
192.168.122.193/24 is reserved from the allocation pool and
192.168.122.194/32 is the floating IP.


Here are the outputs of the commands:

*neutron port-list --device-id=8725dd16-8831-4a09-ae98-6c5342ea501f*

+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                               |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------+
| 6f835de4-c15b-44b8-9002-160ff4870643 |      | fa:16:3e:85:dc:ee | {"subnet_id": "0189699c-8ffc-44cb-aebc-054c8d6001ee", "ip_address": "192.168.122.193"} |
| be3c4294-5f16-45b6-8c21-44b35247d102 |      | fa:16:3e:72:ae:da | {"subnet_id": "d01a6522-063d-40ba-b4dc-5843177aab51", "ip_address": "10.10.0.1"}       |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------+

*neutron floatingip-list*

+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 55b00e9c-5b79-4553-956b-e342ae0a430a | 10.10.0.9        | 192.168.122.194     | 82bcbb91-827a-41aa-9dd9-cb7a4f8e7166 |
+--------------------------------------+------------------+---------------------+--------------------------------------+


*neutron net-list*

+--------------------------------------+----------+-------------------------------------------------------+
| id                                   | name     | subnets                                               |
+--------------------------------------+----------+-------------------------------------------------------+
| dabc2c18-da64-467b-a2ba-373e460444a7 | demo-net | d01a6522-063d-40ba-b4dc-5843177aab51 10.10.0.0/24     |
| ceaaf189-5b6f-4215-8686-fbdeae87c12d | ext-net  | 0189699c-8ffc-44cb-aebc-054c8d6001ee 192.168.122.0/24 |
+--------------------------------------+----------+-------------------------------------------------------+


*neutron subnet-list*

+--------------------------------------+-------------+------------------+---------------------------------------------------------+
| id                                   | name        | cidr             | allocation_pools                                        |
+--------------------------------------+-------------+------------------+---------------------------------------------------------+
| d01a6522-063d-40ba-b4dc-5843177aab51 | demo-subnet | 10.10.0.0/24     | {"start": "10.10.0.2", "end": "10.10.0.254"}            |
| 0189699c-8ffc-44cb-aebc-054c8d6001ee | ext-subnet  | 192.168.122.0/24 | {"start": "192.168.122.193", "end": "192.168.122.222"}  |
+--------------------------------------+-------------+------------------+---------------------------------------------------------+


P.S.: The external subnet is 192.168.122.0/24 and the internal VM
instances' subnet is 10.10.0.0/24.
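
For completeness, the VM port the floating IP is mapped to can be
inspected using the port_id from the floatingip-list output above:

    neutron port-show 82bcbb91-827a-41aa-9dd9-cb7a4f8e7166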



Thanks

Paras.


On Mon, Oct 27, 2014 at 5:51 PM, George Shuklin
george.shuk...@gmail.com wrote:



> I don't like this:
>
> 15: qg-d351f21a-08: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>     state UNKNOWN group default
>     inet 192.168.122.193/24 brd 192.168.122.255 scope global qg-d351f21a-08
>        valid_lft forever preferred_lft forever
>     inet 192.168.122.194/32 brd 192.168.122.194 scope global qg-d351f21a-08
>        valid_lft forever preferred_lft forever
>
> Why do you have two IPs on the same interface with different netmasks?
>
> I just rechecked it on our installations - it should not happen.
>
> Next: either this is a bug, or an uncleaned network node (a lesser
> bug), or someone is messing with neutron.
>
> Start from neutron:
>
> show the ports for the router:
>
> neutron port-list --device-id=router-uuid-here
> neutron 

Re: [Openstack-operators] database hoarding

2014-10-31 Thread Maish Saidel-Keesing

On 31/10/2014 11:01, Daniele Venzano wrote:
> On 10/30/14 23:30, Abel Lopez wrote:
>> As an operator, I'd prefer to have time based criteria over number of
>> rows, too.
>> I envision something like `nova-manage db purge [days]` where we can
>> leave it up to the administrator to decide how much of their old data
>> (if any) they'd be OK losing.
> I would certainly use this feature, but please, please, please: make
> it work across all OpenStack databases, not just for Nova, but also
> for Cinder, Neutron, etc.
A huge +1 for doing this across all projects!
Maish
> Let's try to get something integrated project-wide instead of having a
> number of poorly documented, slightly inconsistent commands.

-- 
Maish Saidel-Keesing




Re: [Openstack-operators] database hoarding

2014-10-31 Thread Britt Houser (bhouser)
That would imply some extensible plugin-type architecture, because all
projects change over time and from setup to setup. Each project would
define its own cleanup routines, but those routines could be called
with a common set of parameters from a single tool.
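
As a rough sketch of that idea (entirely hypothetical; neither the
wrapper nor the per-service subcommands exist today):

    #!/bin/sh
    # Hypothetical single tool: common parameters in, per-project routines out.
    DAYS=${1:-90}
    for svc in nova cinder neutron glance; do
        "${svc}-manage" db purge "$DAYS"
    done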

Thx,
britt

On 10/31/14, 8:19 AM, Maish Saidel-Keesing
maishsk+openst...@maishsk.com wrote:

> On 31/10/2014 11:01, Daniele Venzano wrote:
>> On 10/30/14 23:30, Abel Lopez wrote:
>>> As an operator, I'd prefer to have time based criteria over number of
>>> rows, too.
>>> I envision something like `nova-manage db purge [days]` where we can
>>> leave it up to the administrator to decide how much of their old data
>>> (if any) they'd be OK losing.
>> I would certainly use this feature, but please, please, please: make
>> it work across all OpenStack databases, not just for Nova, but also
>> for Cinder, Neutron, etc.
> A huge +1 for doing this across all projects!
> Maish
>> Let's try to get something integrated project-wide instead of having a
>> number of poorly documented, slightly inconsistent commands.
>
> --
> Maish Saidel-Keesing




Re: [Openstack-operators] database hoarding

2014-10-31 Thread Abel Lopez
I've added the same blueprint to cinder/glance/etc. Yes, the plan is for
this to be a ${service}-manage feature.

On Friday, October 31, 2014, Britt Houser (bhouser) bhou...@cisco.com
wrote:

> That would imply some extensible plugin-type architecture, because all
> projects change over time and from setup to setup. Each project would
> define its own cleanup routines, but those routines could be called
> with a common set of parameters from a single tool.
>
> Thx,
> britt
>
> On 10/31/14, 8:19 AM, Maish Saidel-Keesing
> maishsk+openst...@maishsk.com wrote:
>
>> On 31/10/2014 11:01, Daniele Venzano wrote:
>>> On 10/30/14 23:30, Abel Lopez wrote:
>>>> As an operator, I'd prefer to have time based criteria over number of
>>>> rows, too.
>>>> I envision something like `nova-manage db purge [days]` where we can
>>>> leave it up to the administrator to decide how much of their old data
>>>> (if any) they'd be OK losing.
>>> I would certainly use this feature, but please, please, please: make
>>> it work across all OpenStack databases, not just for Nova, but also
>>> for Cinder, Neutron, etc.
>> A huge +1 for doing this across all projects!
>> Maish
>>> Let's try to get something integrated project-wide instead of having a
>>> number of poorly documented, slightly inconsistent commands.
>>
>> --
>> Maish Saidel-Keesing



[Openstack-operators] [NFV][Telco] Working Group Session

2014-10-31 Thread Steve Gordon
Hi all,

I wanted to highlight that, as mentioned in the OpenStack NFV subteam meeting 
yesterday [1], there is an Ops Summit session aiming to bring together those 
interested in using OpenStack to provide the infrastructure for 
communication services. This includes, but is not limited to:

- Communication service providers
- Network equipment providers
- Developers

Anyone else interested in this space is also of course welcome to join. Among 
other things we will discuss use cases and requirements, both for new 
functionality and improving existing functionality to meet the needs of this 
sector. We will also discuss ways to work more productively with the OpenStack 
community.

The session is currently scheduled for 9 AM on Thursday in the 
Batignolles room at the Hyatt hotel near the summit venue (this hotel also 
hosts a number of other sessions). For more details, see the session entry in 
the sched.org schedule [2].

Thanks,

Steve

[1] http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-10-30-16.03.html
[2] http://kilodesignsummit.sched.org/event/b3ccf1464e335b703fc126f068142792
