Re: [Openstack-operators] [openstack-dev][openstack-operators]flush expired tokens and moves deleted instance

2015-01-27 Thread Fischer, Matt
On 1/27/15, 10:21 AM, gustavo panizzo (gfa) g...@zumbi.com.ar wrote:



On 01/28/2015 01:13 AM, Fischer, Matt wrote:
 Our keystone database is clustered across regions, so we have this job
 running on node1 in each site on alternating hours. I don't think you'd
 want a bunch of cron jobs firing off all at once to clean up tokens on
 multiple clustered nodes. That's one reason I know not to put this in
 the code.

I prefer a cron job to something in the code that I have to test,
configure, and possibly troubleshoot.

Besides, I think it's well documented. I don't see a problem there.


Maybe distributions could ship the script in /etc/cron.daily by
default? I would remove it in my case, but it's a good default for simple
OpenStack installs.

--
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333


Just to be clear, we're also using cron jobs; we just configure and set them
up with Puppet, so it's not really custom code. If the package starts
installing them, we'll just remove them with Puppet and do our own again
anyway.

What I'd like to know is whether there are reasons that someone might want
to keep their tokens and not have this running by default.



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] RHEL 7 / CentOS 7 instances losing their network gateway

2015-01-27 Thread Joe Topjian
Thanks, Kris. I'm going to see if there are any oddities between the version
of dnsmasq packaged with 12.04/Icehouse and systemd-dhcp.

On Tue, Jan 27, 2015 at 9:25 AM, Kris G. Lindgren klindg...@godaddy.com
wrote:

  I can't help as we use config-drive to set networking and are just
 starting to roll out Cent7 VMs.  However, a huge change from Cent6 to
 Cent7 was the switch from upstart/dhclient to systemd/systemd-dhcp.
  

 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.



   From: Joe Topjian j...@topjian.net
 Date: Tuesday, January 27, 2015 at 9:08 AM
 To: openstack-operators@lists.openstack.org 
 openstack-operators@lists.openstack.org
 Subject: [Openstack-operators] RHEL 7 / CentOS 7 instances losing their
 network gateway

   Hello,

  I have run into two different OpenStack clouds where instances running
 either RHEL 7 or CentOS 7 images are randomly losing their network gateway.

  There's nothing in the logs that shows any indication of why. There's no
 DHCP hiccup or anything like that. The gateway has just disappeared.

  If I log into the instance via another instance (so on the same subnet
 since there's no gateway), I can manually re-add the gateway and everything
 works... until it loses it again.

  One cloud is running Havana and the other is running Icehouse. Both are
 using nova-network and both are Ubuntu 12.04.

  On the Havana cloud, we decided to install the dnsmasq package from
 Ubuntu 14.04. This looks to have resolved the issue as this was back in
 November and I haven't heard an update since.

  However, we don't want to do that just yet on the Icehouse cloud. We'd
 like to understand exactly why this is happening and why updating dnsmasq
 resolves an issue that only one specific type of image is having.

  I can make my way around CentOS, but I'm not as familiar with it as I am
 with Ubuntu (especially CentOS 7). Does anyone know what change in
 RHEL7/CentOS7 might be causing this? Or does anyone have any other ideas on
 how to troubleshoot the issue?

  I currently have access to two instances in this state, so I'd be happy
 to act as remote hands and eyes. :)

  Thanks,
 Joe

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Configuration file validator

2015-01-27 Thread j
 

IMHO this should be built in per daemon, e.g. apachectl -t.

On 2015-01-27 02:45, Christian Berendt wrote: 

 Do you think there is a need for a configuration file validator?
 
 Sometimes I have nasty issues in manually created configuration files
 (e.g. a parameter in a wrong section, or a mistyped name of a section or
 parameter).
 
 The services themselves do not validate the configuration files for
 logical issues. They check whether the syntax is valid. They do not check
 whether the parameter or section names make sense (e.g. from time to time
 parameters change sections or are deprecated in a new release) or whether
 the values used for a parameter are correct.
 
 I started implementing a quick and dirty proof-of-concept script (I
 called it validus) to validate configuration files based on oslo.config
 and stevedore.
 
 If there is a need, I will clean up, complete, and publish my
 proof-of-concept script on Stackforge. I just want to clarify first
 whether a need exists.
 
 Christian.
 ___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Specifying multiple tenants for aggregate_multitenancy_isolation_filter

2015-01-27 Thread Mathieu Gagné

On 2015-01-27 5:03 PM, Jesse Keating wrote:


Which one do you think is better?



What do the other various things that take lists expect? I'd say that's
more of a consideration too, uniformity across the inputs.



+1 again

--
Mathieu

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Specifying multiple tenants for aggregate_multitenancy_isolation_filter

2015-01-27 Thread Mathieu Gagné

On 2015-01-27 4:54 PM, Sam Morrison wrote:

Hi operators,

I have a review up to fix this filter to allow multiple tenants, there
are 2 proposed ways in which this can be specified.

1. Using a comma, e.g. tenantid1,tenantid2
2. Using a JSON list, e.g. [“tenantid1”, “tenantid2”]

Which one do you think is better?

https://review.openstack.org/148807



Is there a similar precedent when listing values in the config? I don't 
recall any configs using JSON notation.


I personally use Puppet to populate similar config values and simply 
join values with ','.


I don't know of any operators managing config files by hand.

--
Mathieu

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Packaging sample config versions

2015-01-27 Thread Tom Fifield
Hi all,

Based on Gustavo's excellent work below, talking with many ops, and
after brief chats with Jeremy and a few other TC folks, here's what
I'd propose as an end goal:


* A git repository that has raw, sample configs in it for each project
that will be automagically updated

* Raw configs distributed in the tar files we make as part of the release


Does that seem acceptable for us all?


Regards,



Tom



On 21/01/15 13:22, gustavo panizzo (gfa) wrote:
 
 
 On 12/18/2014 09:57 AM, Jeremy Stanley wrote:
 4. Set up a service that periodically regenerates sample
 configuration and tracks it over time. This attempts to address the
 stated desire to be able to see how sample configurations change,
 but note that this is a somewhat artificial presentation since there
 are a lot of variables (described earlier) influencing the contents
 of such samples--any attempt to render it as a linear/chronological
 series could be misleading.
 
 
 I've set up a GitHub repo where I dump sample config files for the
 projects that autogenerate them; because I know nothing about RPM build
 tools, I only do it for Debian and Ubuntu packages.
 
 If you build your own deb packages, you can use my very simple and basic
 scripts to autogenerate the sample config files.
 
 
 The repo is here:
 
 https://github.com/gfa/os-sample-configs
 
 I will happily move it to osops or another community repo.
 


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Specifying multiple tenants for aggregate_multitenancy_isolation_filter

2015-01-27 Thread Sam Morrison
Hi operators,

I have a review up to fix this filter to allow multiple tenants, there are 2 
proposed ways in which this can be specified.

1. Using a comma, e.g. tenantid1,tenantid2
2. Using a JSON list, e.g. [“tenantid1”, “tenantid2”]

Which one do you think is better?

https://review.openstack.org/148807
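
For context, the filter reads the tenant list from the aggregate's metadata, 
so with option 1 setting it would look something like this (aggregate name 
hypothetical):

  nova aggregate-set-metadata agg1 filter_tenant_id=tenantid1,tenantid2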

Thanks,
Sam
 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Specifying multiple tenants for aggregate_multitenancy_isolation_filter

2015-01-27 Thread Jesse Keating

On 1/27/15 1:54 PM, Sam Morrison wrote:

Hi operators,

I have a review up to fix this filter to allow multiple tenants, there
are 2 proposed ways in which this can be specified.

1. Using a comma, e.g. tenantid1,tenantid2
2. Using a JSON list, e.g. [“tenantid1”, “tenantid2”]

Which one do you think is better?

https://review.openstack.org/148807



Is this intended to be written by a human using the command line tools, 
or to be filled in via other methods?


For command line tools, making it JSON seems wrong, and a comma list 
would be more friendly. Internally, the command line tools could easily 
format it as a JSON list to pass along.
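
For illustration, a client could accept the comma form and emit JSON with a 
one-liner like this (hypothetical):

  python -c 'import json,sys; print(json.dumps(sys.argv[1].split(",")))' tenantid1,tenantid2
  # prints ["tenantid1", "tenantid2"]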


What do the other various things that take lists expect? I'd say that's 
more of a consideration too, uniformity across the inputs.


--
-jlk

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] RHEL 7 / CentOS 7 instances losing their network gateway

2015-01-27 Thread George Shuklin
How many network interfaces does your instance have? If more than one, check 
the settings for the second network (subnet). It can have its own DHCP 
settings, which may mess with the routes for the main network.


On 01/27/2015 06:08 PM, Joe Topjian wrote:

Hello,

I have run into two different OpenStack clouds where instances running 
either RHEL 7 or CentOS 7 images are randomly losing their network 
gateway.


There's nothing in the logs that shows any indication of why. There's 
no DHCP hiccup or anything like that. The gateway has just disappeared.


If I log into the instance via another instance (so on the same subnet 
since there's no gateway), I can manually re-add the gateway and 
everything works... until it loses it again.


One cloud is running Havana and the other is running Icehouse. Both 
are using nova-network and both are Ubuntu 12.04.


On the Havana cloud, we decided to install the dnsmasq package from 
Ubuntu 14.04. This looks to have resolved the issue as this was back 
in November and I haven't heard an update since.


However, we don't want to do that just yet on the Icehouse cloud. We'd 
like to understand exactly why this is happening and why updating 
dnsmasq resolves an issue that only one specific type of image is having.


I can make my way around CentOS, but I'm not as familiar with it as I 
am with Ubuntu (especially CentOS 7). Does anyone know what change in 
RHEL7/CentOS7 might be causing this? Or does anyone have any other 
ideas on how to troubleshoot the issue?


I currently have access to two instances in this state, so I'd be 
happy to act as remote hands and eyes. :)


Thanks,
Joe


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] RHEL 7 / CentOS 7 instances losing their network gateway

2015-01-27 Thread Joe Topjian
Hi George,

All instances have only a single interface.

Thanks,
Joe

On Tue, Jan 27, 2015 at 1:38 PM, George Shuklin george.shuk...@gmail.com
wrote:

  How many network interfaces does your instance have? If more than one, check
 the settings for the second network (subnet). It can have its own DHCP
 settings, which may mess with the routes for the main network.


 On 01/27/2015 06:08 PM, Joe Topjian wrote:

 Hello,

  I have run into two different OpenStack clouds where instances running
 either RHEL 7 or CentOS 7 images are randomly losing their network gateway.

  There's nothing in the logs that shows any indication of why. There's no
 DHCP hiccup or anything like that. The gateway has just disappeared.

  If I log into the instance via another instance (so on the same subnet
 since there's no gateway), I can manually re-add the gateway and everything
 works... until it loses it again.

  One cloud is running Havana and the other is running Icehouse. Both are
 using nova-network and both are Ubuntu 12.04.

  On the Havana cloud, we decided to install the dnsmasq package from
 Ubuntu 14.04. This looks to have resolved the issue as this was back in
 November and I haven't heard an update since.

  However, we don't want to do that just yet on the Icehouse cloud. We'd
 like to understand exactly why this is happening and why updating dnsmasq
 resolves an issue that only one specific type of image is having.

  I can make my way around CentOS, but I'm not as familiar with it as I am
 with Ubuntu (especially CentOS 7). Does anyone know what change in
 RHEL7/CentOS7 might be causing this? Or does anyone have any other ideas on
 how to troubleshoot the issue?

  I currently have access to two instances in this state, so I'd be happy
 to act as remote hands and eyes. :)

  Thanks,
 Joe


 ___
  OpenStack-operators mailing list
  OpenStack-operators@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Specifying multiple tenants for aggregate_multitenancy_isolation_filter

2015-01-27 Thread John Dewey


On Tuesday, January 27, 2015 at 2:03 PM, Jesse Keating wrote:

 On 1/27/15 1:54 PM, Sam Morrison wrote:
  Hi operators,
   
  I have a review up to fix this filter to allow multiple tenants, there
  are 2 proposed ways in which this can be specified.
   
  1. Using a comma, e.g. tenantid1,tenantid2
  2. Using a JSON list, e.g. [“tenantid1”, “tenantid2”]
   
  Which one do you think is better?
   
  https://review.openstack.org/148807
  
 Is this intended to be written by a human using the command line tools,  
 or to be filled in via other methods?
  
 For command line tools, making it JSON seems wrong, and a comma list  
 would be more friendly. Internally, the command line tools could easily  
 format it as a JSON list to pass along.
  
  

Yeah, I agree with this perspective.
  
  
 What do the other various things that take lists expect? I'd say that's  
 more of a consideration too, uniformity across the inputs.
  
 --  
 -jlk
  
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
  
  


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] :document an OpenStack production environment

2015-01-27 Thread evanlinwood evanlinwood

 
  
  Hi Dani,
  I’m currently working on development of a product that supports complex hardware solution design, resulting in ‘active diagrams’ that have detailed knowledge of configured riser cards, adapters, processor trays, drive bays, physical cabling, etc. It also assists management of order fulfilment / equipment implementation activities and can update integrated monitoring & help desk / CMDB platforms.
  http://www.arenacore.com
  At this stage there is no specific OpenStack support (although I’m excited about OpenStack and working towards that), but the data centre context is specifically where ArenaCore is intended to be used.
  If you’re exploring this area, I’d be very interested in any comments or suggestions, particularly re equipment management in an OpenStack context.
  Regards, Evan
  Please excuse the current website; it’s a bit wordy and needs some pruning!
  
   On 28 January 2015 at 10:07 Daniel Comnea comnea.d...@gmail.com wrote:

   If anyone can share more info, it will be much appreciated.

   Thanks,
   Dani

   On Mon, Jan 26, 2015 at 8:55 PM, Daniel Comnea comnea.d...@gmail.com wrote:

    Excellent input, please keep it going.

    Maybe someone from HP will shed more light on their CMDB?

    Dani

    On Mon, Jan 26, 2015 at 4:28 PM, j j...@no-io.net wrote:

     I use LibreOffice Draw for architecture diagrams. VRT Network Equipment
     pretty much blows everything out of the water.

     For text documentation that I manually create (Chef, for example, does
     this automatically) I'll put in plaintext files per environment that
     include basic stuff such as hostname, IP, and server function.

     On 2015-01-25 09:15, Daniel Comnea wrote:

      Hi all,

      Can anyone who runs OpenStack in a production environment / data center
      share how you document the whole infrastructure, what tools are used
      for drawing diagrams (I guess you need some pictures, otherwise it is
      hard to understand :)), maybe even an inventory, etc.?

      Thanks,
      Dani

      P.S. In the past - 10+ years ago - I used to maintain a red book, but I
      suspect the situation is different in 2015.

   ___
   OpenStack-operators mailing list
   OpenStack-operators@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] :document an OpenStack production environment

2015-01-27 Thread Daniel Comnea
If anyone can share more info, it will be much appreciated.

Thanks,
Dani

On Mon, Jan 26, 2015 at 8:55 PM, Daniel Comnea comnea.d...@gmail.com
wrote:

 Excellent input, please keep it going.

 Maybe someone from HP will shed more light on their CMDB?


 Dani

 On Mon, Jan 26, 2015 at 4:28 PM, j j...@no-io.net wrote:

  I use LibreOffice Draw for architecture diagrams. VRT Network Equipment
 http://extensions.libreoffice.org/extension-center/vrt-network-equipment
 pretty much blows everything out of the water.

 For text documentation that I manually create (Chef, for example, does this
 automatically) I'll put in plaintext files per environment that include
 basic stuff such as hostname, IP, and server function.
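
 A hypothetical example of one such per-environment file:

   # prod-east (illustrative)
   compute01   10.1.0.11   nova-compute
   compute02   10.1.0.12   nova-compute
   control01   10.1.0.5    keystone / glance / nova-api
   network01   10.1.0.21   nova-network / dnsmasq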


 On 2015-01-25 09:15, Daniel Comnea wrote:

   Hi all,

 Can anyone who runs OpenStack in a production environment / data center
 share how you document the whole infrastructure, what tools are used for
 drawing diagrams (I guess you need some pictures, otherwise it is hard to
 understand :)), maybe even an inventory, etc.?



 Thanks,
 Dani



 P.S. In the past - 10+ years ago - I used to maintain a red book, but I
 suspect the situation is different in 2015.

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [Telco][NFV] Meeting facilitator for January 28th

2015-01-27 Thread Steve Gordon
- Original Message -
 From: Marc Koderer m...@koderer.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-...@lists.openstack.org
 
 Hi Steve,
 
 I can host it.
 
 Regards
 Marc

Thanks Marc!

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [Telco][NFV] Meeting reminder - Wednesday 28th @ 1400 UTC in #openstack-meeting-alt

2015-01-27 Thread Steve Gordon
Hi all,

Just a friendly reminder that this week's OpenStack Telco Working Group meeting 
is tomorrow, Wednesday the 28th, at 1400 UTC in #openstack-meeting-alt. Please 
add any items you wish to discuss to the agenda at:

https://etherpad.openstack.org/p/nfv-meeting-agenda

Marc Koderer has kindly stepped up to run the meeting in my absence.

Thanks,

Steve

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Configuration file validator

2015-01-27 Thread Daniele Venzano
An external tool would probably be able to cross-check across different 
OpenStack services. But in any form, I would welcome something that tells me:

-  OK this configuration is consistent and makes sense

-  This option doesn’t do anything in this context

-  Setting X is missing, feature Y will not work

-  This is totally broken
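
Pending such a tool, a crude approximation of the first two checks can be 
scripted against a generated sample config, along these lines (the sample 
path and file layout are assumptions; distributions vary):

  sample=/usr/share/doc/nova/nova.conf.sample   # assumed location
  awk -F= '!/^\[/ && /^[a-zA-Z_]+ *=/ {gsub(/ /,"",$1); print $1}' /etc/nova/nova.conf |
    sort -u | while read -r opt; do
      grep -Eq "^#? *${opt} *=" "$sample" || echo "not in sample config: ${opt}"
    done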

 

Dan

 

From: j [mailto:j...@no-io.net] 
Sent: Tuesday 27 January 2015 09:02
To: Christian Berendt
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] Configuration file validator

 

IMHO this should be built in per daemon, e.g. apachectl -t.

 

 

On 2015-01-27 02:45, Christian Berendt wrote:

Do you think there is a need for a configuration file validator?
 
Sometimes I have nasty issues in manually created configuration files
(e.g. a parameter in a wrong section, or a mistyped name of a section or
parameter).
 
The services themselves do not validate the configuration files for
logical issues. They check whether the syntax is valid. They do not check
whether the parameter or section names make sense (e.g. from time to time
parameters change sections or are deprecated in a new release) or whether
the values used for a parameter are correct.
 
I started implementing a quick and dirty proof-of-concept script (I
called it validus) to validate configuration files based on oslo.config
and stevedore.
 
If there is a need, I will clean up, complete, and publish my
proof-of-concept script on Stackforge. I just want to clarify first
whether a need exists.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [Large deployment] Meetings

2015-01-27 Thread Matt Van Winkle
Hey folks,
I dropped the ball following the holidays and didn't get a doodle out to pick a 
time for the APAC-friendly meeting this month.  And I missed the 3rd Thursday 
to boot – sorry folks.

That being said, I'd still like to get together this week to catch up for 
January.  We can find out if anyone in the group went to the Nova mid-cycle 
and/or has caught up with attendees (I'll be trying to sync with some of our 
devs on Thursday as well).  Also, we can start planning what we want, as a 
working group, from the Ops mid-cycle.

Let's get together at 02:00 UTC on Friday, January 30th (20:00 Central on 
Thursday, January 29th) in #openstack-meeting-4.

We agreed in the Dec meeting that 3rd Thursdays/Fridays are best.  We also 
agreed to alternate between the 16:00 UTC / 10:00 Central slot and an 
APAC-friendly time slot.  To be fair, I'll get a doodle out for the March 
meeting in case 02:00 UTC / 20:00 Central isn't ideal.

I'll get a rough agenda put together and get another update out tomorrow.

Thanks!
Matt
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] RHEL 7 / CentOS 7 instances losing their network gateway

2015-01-27 Thread Joe Topjian
Hello,

I have run into two different OpenStack clouds where instances running
either RHEL 7 or CentOS 7 images are randomly losing their network gateway.

There's nothing in the logs that shows any indication of why. There's no
DHCP hiccup or anything like that. The gateway has just disappeared.

If I log into the instance via another instance (so on the same subnet
since there's no gateway), I can manually re-add the gateway and everything
works... until it loses it again.

One cloud is running Havana and the other is running Icehouse. Both are
using nova-network and both are Ubuntu 12.04.

On the Havana cloud, we decided to install the dnsmasq package from Ubuntu
14.04. This looks to have resolved the issue as this was back in November
and I haven't heard an update since.

However, we don't want to do that just yet on the Icehouse cloud. We'd like
to understand exactly why this is happening and why updating dnsmasq
resolves an issue that only one specific type of image is having.

I can make my way around CentOS, but I'm not as familiar with it as I am
with Ubuntu (especially CentOS 7). Does anyone know what change in
RHEL7/CentOS7 might be causing this? Or does anyone have any other ideas on
how to troubleshoot the issue?
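
In the meantime, one low-tech way to catch the moment it happens (interface
name assumed) is to watch DHCP traffic and poll the default route from inside
an affected instance:

  # log DHCP chatter in the background, then poll the default route
  tcpdump -nli eth0 port 67 or port 68 > /tmp/dhcp.log 2>&1 &
  while sleep 30; do
    echo "$(date -u) $(ip route | grep '^default' || echo 'NO DEFAULT ROUTE')"
  done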

I currently have access to two instances in this state, so I'd be happy to
act as remote hands and eyes. :)

Thanks,
Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] How to allow users to list services by modifying the policy.json file of Keystone

2015-01-27 Thread Fischer, Matt
On 1/26/15, 8:46 AM, Christian Berendt bere...@b1-systems.de wrote:

On 01/26/2015 04:02 PM, Fischer, Matt wrote:
 Is there any reason that the user can't just run keystone catalog which
 does not require admin permissions?

Matt, this is just an example. We tried it with different list methods,
and it is also not working.

We read your blog post about the topic
(http://www.mattfischer.com/blog/?p=524) and it is working for us for
services like Neutron or Nova. But not for Keystone.

Christian.


Christian,

I discussed this with the keystone folks this morning and it looks like
there are some internal checks for admin that are causing this. One of the
keystone team will be following up with more details.
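
For reference, the kind of policy.json change being attempted looks like this
(the relaxed rule is illustrative only; as noted above, Keystone's internal
admin checks may still override it):

  "identity:list_services": "rule:admin_required or role:service_viewer",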


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev][openstack-operators]flush expired tokens and moves deleted instance

2015-01-27 Thread Fischer, Matt
Our keystone database is clustered across regions, so we have this job running 
on node1 in each site on alternating hours. I don’t think you’d want a bunch of 
cron jobs firing off all at once to clean up tokens on multiple clustered 
nodes. That’s one reason I know not to put this in the code.
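
Concretely, the staggering looks something like this in each site's keystone 
crontab (hours and paths illustrative):

  # site A, node1
  0 0-22/2 * * * /usr/bin/keystone-manage token_flush >> /var/log/keystone/token_flush.log 2>&1
  # site B, node1
  0 1-23/2 * * * /usr/bin/keystone-manage token_flush >> /var/log/keystone/token_flush.log 2>&1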

Are there other reasons that an operator might like to keep old tokens? 
Auditing?

From: Tim Bell tim.b...@cern.ch
Date: Sunday, January 25, 2015 at 11:10 PM
To: Mike Smith mism...@overstock.com, Daniel Comnea comnea.d...@gmail.com
Cc: OpenStack Development Mailing List (not for usage questions) 
openstack-...@lists.openstack.org, openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev][openstack-operators]flush 
expired tokens and moves deleted instance

This is often mentioned as one of those items which catches every OpenStack 
cloud operator at some time. It’s not clear to me that there could not be a 
scheduled job built into the system with a default frequency (configurable, 
ideally).

If we are all configuring this as a cron job, is there a reason that it could 
not be built into the code?

Tim

From: Mike Smith [mailto:mism...@overstock.com]
Sent: 24 January 2015 18:08
To: Daniel Comnea
Cc: OpenStack Development Mailing List (not for usage questions); 
openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev][openstack-operators]flush 
expired tokens and moves deleted instance

It is still mentioned in the Juno installation docs:

By default, the Identity service stores expired tokens in the database 
indefinitely. The
accumulation of expired tokens considerably increases the database size and 
might degrade
service performance, particularly in environments with limited resources.
We recommend that you use cron to configure a periodic task that purges expired 
tokens
hourly:
# (crontab -l -u keystone 2>&1 | grep -q token_flush) || \
echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' \
>> /var/spool/cron/keystone



Mike Smith
Principal Engineer, Website Systems
Overstock.com


On Jan 24, 2015, at 10:03 AM, Daniel Comnea comnea.d...@gmail.com wrote:

Hi all,

I just bumped into Sebastien's blog, where he suggests a cron job should run in 
production to tidy up expired tokens - see the blog post [1].
Could you please remind me if this is still required in Icehouse / Juno? (I 
kind of remember seeing some work being done in this direction, but I can't 
find the emails.)

Thanks,
Dani

[1] 
http://www.sebastien-han.fr/blog/2014/08/18/a-must-have-cron-job-on-your-openstack-cloud/
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators






___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Configuration file validator

2015-01-27 Thread Anne Gentle
On Tue, Jan 27, 2015 at 1:45 AM, Christian Berendt bere...@b1-systems.de
wrote:

 Do you think there is a need for a configuration file validator?


This topic has come up at the last couple of operators sessions I've been
to, both at the summit and the midcycle, so I think there's a need you'll
meet with one! Sounds great, thanks Christian for being willing to put it
on Stackforge.

Anne



 Sometimes I have nasty issues in manually created configuration files
 (e.g. a parameter in a wrong section, or a mistyped name of a section or
 parameter).

 The services themselves do not validate the configuration files for
 logical issues. They check whether the syntax is valid. They do not check
 whether the parameter or section names make sense (e.g. from time to time
 parameters change sections or are deprecated in a new release) or whether
 the values used for a parameter are correct.

 I started implementing a quick and dirty proof-of-concept script (I
 called it validus) to validate configuration files based on oslo.config
 and stevedore.

 If there is a need, I will clean up, complete, and publish my
 proof-of-concept script on Stackforge. I just want to clarify first
 whether a need exists.

 Christian.

 --
 Christian Berendt
 Cloud Solution Architect
 Mail: bere...@b1-systems.de

 B1 Systems GmbH
 Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
 GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] RHEL 7 / CentOS 7 instances losing their network gateway

2015-01-27 Thread Kris G. Lindgren
I can't help as we use config-drive to set networking and are just starting to 
roll out Cent7 VMs.  However, a huge change from Cent6 to Cent7 was the switch 
from upstart/dhclient to systemd/systemd-dhcp.


Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.



From: Joe Topjian j...@topjian.net
Date: Tuesday, January 27, 2015 at 9:08 AM
To: openstack-operators@lists.openstack.org 
openstack-operators@lists.openstack.org
Subject: [Openstack-operators] RHEL 7 / CentOS 7 instances losing their network 
gateway

Hello,

I have run into two different OpenStack clouds where instances running either 
RHEL 7 or CentOS 7 images are randomly losing their network gateway.

There's nothing in the logs that shows any indication of why. There's no DHCP 
hiccup or anything like that. The gateway has just disappeared.

If I log into the instance via another instance (so on the same subnet since 
there's no gateway), I can manually re-add the gateway and everything works... 
until it loses it again.

One cloud is running Havana and the other is running Icehouse. Both are using 
nova-network and both are Ubuntu 12.04.

On the Havana cloud, we decided to install the dnsmasq package from Ubuntu 
14.04. This looks to have resolved the issue as this was back in November and I 
haven't heard an update since.

However, we don't want to do that just yet on the Icehouse cloud. We'd like to 
understand exactly why this is happening and why updating dnsmasq resolves an 
issue that only one specific type of image is having.

I can make my way around CentOS, but I'm not as familiar with it as I am with 
Ubuntu (especially CentOS 7). Does anyone know what change in RHEL7/CentOS7 
might be causing this? Or does anyone have any other ideas on how to 
troubleshoot the issue?

I currently have access to two instances in this state, so I'd be happy to act 
as remote hands and eyes. :)

Thanks,
Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev][openstack-operators]flush expired tokens and moves deleted instance

2015-01-27 Thread Jesse Keating

On 1/27/15 9:13 AM, Fischer, Matt wrote:

Our keystone database is clustered across regions, so we have this job
running on node1 in each site on alternating hours. I don’t think you’d
want a bunch of cron jobs firing off all at once to clean up tokens on
multiple clustered nodes. That’s one reason I know not to put this in
the code.

Are there other reasons that an operator might like to keep old tokens?
Auditing?


Well, without knowing keystone code (yay great start to the email) I 
would imagine this would be like other periodic tasks, where the task 
gets generated and one of the workers picks it up, wherever that worker 
may be. But maybe that's wishful thinking.


--
-jlk

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev][openstack-operators]flush expired tokens and moves deleted instance

2015-01-27 Thread Jesse Keating

On 1/27/15 9:21 AM, gustavo panizzo (gfa) wrote:

I prefer a cron job to something in the code that I have to test,
configure, and possibly troubleshoot.

Besides, I think it's well documented. I don't see a problem there.


Maybe distributions could ship the script in /etc/cron.daily by
default? I would remove it in my case, but it's a good default for simple
OpenStack installs.


Packagers should ship it as a documentation sample, not install it into 
the crontab directly. Sites that run multiple keystone servers would 
wind up with competing cron jobs, which is not a great situation.
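
The script itself is tiny; a sketch of what such a documentation sample might 
contain (paths hypothetical, and it would still need a guard so that only one 
clustered node actually runs it):

  #!/bin/sh
  # flush expired keystone tokens; run from cron on exactly one node
  exec /usr/bin/keystone-manage token_flush >> /var/log/keystone/keystone-tokenflush.log 2>&1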


--
-jlk

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators