Re: [Openstack-operators] glance directory traversal bug and havana

2015-01-07 Thread George Shuklin
I spent a few hours trying to backport the fix to Havana, but then I found that
Havana seems to be immune to the bug. I'm not 100% sure, though, so I'd advise
someone else to take a look as well.

The bug was that Icehouse and later accept all supported schemes for the
location field; the fix excludes the 'bad' schemes. Havana, however, has an
explicit list of accepted schemes for that field, and the 'bad' schemes are not in it.
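
For illustration only, here is a minimal sketch of the kind of scheme
whitelisting described above. It is not the actual Glance patch; the helper
name and the allowed-scheme list are assumptions, and the real list depends on
which stores a deployment has enabled.

# Illustrative sketch, not the actual Glance fix: validate a user-supplied
# location URI against an explicit whitelist of schemes instead of accepting
# every supported scheme.
import urlparse  # Python 2, as used by Glance at the time

# Assumed whitelist; adjust to the stores you actually enable.
ALLOWED_LOCATION_SCHEMES = ('http', 'https', 'swift', 'rbd')

def validate_location(uri):
    """Reject location URIs whose scheme is not explicitly whitelisted."""
    scheme = urlparse.urlparse(uri).scheme.lower()
    if scheme not in ALLOWED_LOCATION_SCHEMES:
        raise ValueError("location scheme '%s' is not allowed" % scheme)
    return uri

# Example: validate_location('file:///etc/passwd') raises ValueError, i.e. a
# scheme outside the whitelist is rejected before it reaches the store layer.
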
On Jan 6, 2015 8:34 PM, Jesse Keating j...@bluebox.net wrote:

 Hopefully all of you have seen http://seclists.org/oss-sec/2015/q1/64,
 which describes the Glance v2 API directory traversal bug. Upstream has fixed
 master (Kilo) and Juno, but Havana has not been fixed.

 We, unfortunately, have a few havana installs out there and we'd like to
 patch this ahead of our planned upgrade to Juno. I'm curious if anybody
 else out there is in the same situation and is working on backporting the
 glance patch. If not, I'll share the patch when I'm done, but if so I'd
 love to share in the work and help the effort.

 Cheers, and happy patching!

 --
 -jlk


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Keystone] LDAP Assignment Backend Use Survey

2015-01-07 Thread Morgan Fainberg
As a note, since I've seen some responses about users and/or groups on this 
survey, I will be sending a survey about identity out today. This survey is 
strictly about projects/tenants and roles/role assignments in LDAP. 

Sent via mobile

 On Jan 6, 2015, at 11:22, Morgan Fainberg morgan.fainb...@gmail.com wrote:
 
 The Keystone team is evaluating the support of the LDAP Assignment backend 
 within OpenStack and how it is used in deployments. The assignment backend 
 covers “Projects/Tenants”, “Roles/Grants”, and in the case of SQL “Domains”.
 
 There is a concern that the assignment backend implemented against LDAP is 
 falling further and further behind the SQL implementation. To get a good read 
 on the deployments and how the LDAP assignment backend is being used, the 
 Keystone development team would appreciate feedback from the community.  
 Please fill out the following form and let us know if you are using LDAP 
 Assignment, what it provides you that the SQL assignment backend is not 
 providing, and the release of OpenStack (specifically Keystone) you are using.
 
 http://goo.gl/forms/xz6xJQOQf5
 
  This poll is only meant to gather information on the use of the LDAP Assignment
  backend, which covers only Projects/Tenants and Roles/Grants.
 
 Cheers,
 Morgan Fainberg
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Neutron DVR HA

2015-01-07 Thread Pedro Sousa
Hi all,

After some more tests, it seems to be a gratuitous ARP issue, because if I start
a new connection (ping) from an inside instance to an external host like
Google, it works.

This means the instance advertises to the switch that something has in fact
changed, so the switch updates its ARP table.
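
To test that theory, here is a rough, assumption-laden diagnostic sketch (not
something from this thread): it sends a gratuitous ARP for the floating IP from
the node that currently owns it, so the upstream switch refreshes its ARP/MAC
tables. It assumes Scapy is installed and run as root; the addresses and
interface name below are placeholders you would need to replace.

# Rough diagnostic sketch: send a gratuitous ARP for a floating IP so the
# upstream switch updates its ARP/MAC tables. All values are placeholders.
from scapy.all import ARP, Ether, sendp

FLOATING_IP = "172.16.28.32"      # placeholder: the floating IP in question
ROUTER_MAC = "fa:16:3e:00:00:01"  # placeholder: MAC that should own that IP now
IFACE = "eth2"                    # placeholder: external/provider interface

garp = Ether(dst="ff:ff:ff:ff:ff:ff", src=ROUTER_MAC) / ARP(
    op=2,                  # ARP reply ("is-at"), the usual gratuitous ARP form
    hwsrc=ROUTER_MAC,
    psrc=FLOATING_IP,
    hwdst="ff:ff:ff:ff:ff:ff",
    pdst=FLOATING_IP,      # sender and target IP are identical in a gratuitous ARP
)
sendp(garp, iface=IFACE, verbose=False)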

Has anyone seen this behavior?

Thanks



On Tue, Dec 30, 2014 at 6:56 PM, Pedro Sousa pgso...@gmail.com wrote:

 Hi,

 as I stated, if I ping from an OpenStack instance, the request appears on
 *Compute01* and is *OK*:

 18:50:38.721115 IP 10.0.30.23 > 172.16.28.32: ICMP echo request, id 29956, seq 36, length 64
 18:50:38.721304 IP 172.16.28.32 > 10.0.30.23: ICMP echo reply, id 29956, seq 36, length 64


 If I ping from outside, from my outside network, the request appears on
 *Compute02* and is *NOT OK*:

 18:50:40.104025 ethertype IPv4, IP 192.168.8.4 > 172.16.28.32: ICMP echo request, id 13981, seq 425, length 64
 18:50:40.104025 ethertype IPv4, IP 192.168.8.4 > 172.16.28.32: ICMP echo request, id 13981, seq 425, length 64


 I'd appreciate it if someone could help me with this.

 Thanks.







 On Tue, Dec 30, 2014 at 3:17 PM, Pedro Sousa pgso...@gmail.com wrote:

 Hi Assaf,

  Another update: if I ping the floating IP from my instance, it works. If I
  ping from the outside/provider network, from my PC, it doesn't.

 Thanks

 On Tue, Dec 30, 2014 at 11:35 AM, Pedro Sousa pgso...@gmail.com wrote:

 Hi Assaf,

  Following your instructions, I can confirm that I have l2pop disabled.

  Meanwhile, I've made another test: yesterday when I left the office this
  wasn't working, but when I arrived this morning it was pinging again, and I
  hadn't changed or touched anything. So my interpretation is that this is some
  sort of timeout issue.

 Thanks






 On Tue, Dec 30, 2014 at 11:27 AM, Assaf Muller amul...@redhat.com
 wrote:

  Sorry, I can't open zip files in this email. You need l2pop to not be present
  in the ML2 mechanism drivers list in neutron.conf on the node where the
  Neutron server runs, and you need l2population = False in each OVS agent.
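
  As a quick way to double-check both settings on each node, something like the
  following sketch could help. The file paths are assumptions (they vary by
  distribution), and it assumes the upstream option names of that era:
  mechanism_drivers under [ml2] and the OVS agent's l2_population flag under
  [agent].

# Sanity-check sketch for the l2pop settings described above. Paths are
# assumptions; adjust them to your distribution's layout.
import ConfigParser  # Python 2, matching the deployments discussed here

ML2_CONF = "/etc/neutron/plugins/ml2/ml2_conf.ini"                          # assumed path
OVS_AGENT_CONF = "/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"  # assumed path

def check_l2pop():
    ml2 = ConfigParser.SafeConfigParser()
    ml2.read(ML2_CONF)
    drivers = ""
    if ml2.has_section("ml2") and ml2.has_option("ml2", "mechanism_drivers"):
        drivers = ml2.get("ml2", "mechanism_drivers")
    if "l2population" in [d.strip() for d in drivers.split(",")]:
        print("l2population is still listed in mechanism_drivers: %s" % drivers)

    agent = ConfigParser.SafeConfigParser()
    agent.read(OVS_AGENT_CONF)
    if agent.has_section("agent") and agent.has_option("agent", "l2_population"):
        if agent.getboolean("agent", "l2_population"):
            print("OVS agent still has l2_population = True")

if __name__ == "__main__":
    check_l2pop()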

 - Original Message -
 
 
   Hi Assaf,
 
  I think I disabled it, but maybe you can check my conf files? I've
 attached
  the zip.
 
  Thanks
 
  On Tue, Dec 30, 2014 at 8:27 AM, Assaf Muller  amul...@redhat.com 
 wrote:
 
 
 
 
  - Original Message -
   Hi Britt,
  
   some update on this after running tcpdump:
  
    I have the keepalived master running on controller01. If I reboot this
    server, it fails over to controller02, which now becomes the keepalived
    master, and then I see ping packets arriving at controller02; this is good.
   
    However, when controller01 comes back online, I see that ping requests stop
    being forwarded to controller02 and start being sent to controller01, which
    is now in the backup state, so it stops working.
  
 
  If traffic is being forwarded to a backup node, that sounds like
 L2pop is on.
  Is that true by chance?
 
   Any hint for this?
  
   Thanks
  
  
  
   On Mon, Dec 29, 2014 at 11:06 AM, Pedro Sousa  pgso...@gmail.com
  wrote:
  
  
  
   Yes,
  
    I was using l2pop; I disabled it, but the issue remains.
   
    I also stopped the bogus VRRP messages by configuring a user/password for
    keepalived, but when I reboot the servers, I see the keepalived process
    running on them, yet I cannot ping the virtual router IP address anymore.
   
    So I rebooted the node that is running keepalived as master, and it starts
    pinging again, but when that node comes back online, everything stops
    working. Has anyone experienced this?
  
   Thanks
  
  
   On Tue, Dec 23, 2014 at 5:03 PM, David Martin  dmart...@gmail.com
  wrote:
  
  
  
   Are you using l2pop? Until
 https://bugs.launchpad.net/neutron/+bug/1365476
   is
   fixed it's pretty broken.
  
   On Tue, Dec 23, 2014 at 10:48 AM, Britt Houser (bhouser) 
   bhou...@cisco.com
wrote:
  
  
  
    Unfortunately I've not had a chance yet to play with Neutron router HA, so
    no hints from me. =( Can you give a little more detail about how it stops
    working? I.e., do you see packets dropped while controller1 is down? Do
    packets begin flowing before controller1 comes back online? Does
    controller1 come back online successfully? Do packets begin to flow after
    controller1 comes back online? Perhaps that will help.
  
   Thx,
   britt
  
   From: Pedro Sousa  pgso...@gmail.com 
   Date: Tuesday, December 23, 2014 at 11:14 AM
   To: Britt Houser  bhou...@cisco.com 
   Cc:  OpenStack-operators@lists.openstack.org  
   OpenStack-operators@lists.openstack.org 
   Subject: Re: [Openstack-operators] Neutron DVR HA
  
   I understand Britt, thanks.
  
    So I disabled DVR and tried to test L3_HA, but it's not working properly;
    it seems to be a keepalived issue. I see that it's running on 3 nodes:
   
    [root@controller01 keepalived]# neutron l3-agent-list-hosting-router harouter
 

[Openstack-operators] [Cinder] volume / host relation

2015-01-07 Thread Arne Wiebalck
Hi,

Will a Cinder volume creation request ever time out and be rescheduled if the
host running the volume service it has been scheduled to is not consuming the
corresponding message?

Similarly: if the host the volume was created on, and to which the deletion
request is later scheduled, has disappeared (e.g. has meanwhile been retired),
will the scheduler try to schedule to another host?

From what I see, the answer to both of these questions seems to be 'no'. Things
can get stuck in these scenarios and can only be unblocked by resurrecting the
down host or by manually changing the Cinder database.

Is my understanding correct?

Is there a way to tag hosts so that any of my Cinder hosts can pick up the
creation (and in particular deletion) message? I tried the “host” parameter in
cinder.conf, which seems to “work”, but it is probably not meant for this, in
particular as it touches the services database and makes the hosts
indistinguishable (which in turn breaks cinder-manage).

How do people deal with this issue?

Thanks!
 Arne

—
Arne Wiebalck
CERN IT



 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Cinder] volume / host relation

2015-01-07 Thread Warren Wang
Your understanding is correct. I have the same problem as well. For now, my
plan is to just move cinder-volume to our more robust hosts, and run
database changes to modify the host, as needed.

I have noticed a growing trend of replacing the host parameter with a generic
value, but I agree that this presents other problems as well. This option may
be just as problematic as having to modify the database in the event of a
cinder-volume host outage. It is probably worth having a discussion with the
Cinder dev community.
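
For reference, the kind of database change being described is, roughly,
re-pointing the host column of the affected rows in the volumes table, while
the generic-name approach usually means setting the same value for the host
option in the [DEFAULT] section of cinder.conf on every cinder-volume node
(with the caveats Arne raised). The sketch below is an assumption-laden
illustration, not an official procedure: the connection URL, host names and
exact schema depend on your deployment and Cinder release, so check your own
database (and back it up) before running anything like this.

# Assumption-laden sketch: re-point Cinder volumes from a dead volume host to
# a new one by editing the database directly. Verify table/column names
# against your own schema first.
from sqlalchemy import create_engine, text

CINDER_DB_URL = "mysql://cinder:secret@dbhost/cinder"  # assumed connection string
OLD_HOST = "cinder-old.example.com"                    # host recorded on the stuck volumes
NEW_HOST = "cinder-new.example.com"                    # host that should take them over

def repoint_volumes(db_url=CINDER_DB_URL, old=OLD_HOST, new=NEW_HOST):
    engine = create_engine(db_url)
    with engine.begin() as conn:  # one transaction: commit on success, roll back on error
        result = conn.execute(
            text("UPDATE volumes SET host = :new WHERE host = :old"),
            {"new": new, "old": old},
        )
        print("re-pointed %d volume rows from %s to %s" % (result.rowcount, old, new))

if __name__ == "__main__":
    repoint_volumes()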

--
Warren

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Mar 9-10, 2015, Philadelphia, USA - Next Ops Meetup

2015-01-07 Thread Ignace Mouzannar
Hey Tom,

Thanks for organizing this. We're really looking forward to this event.

On 29/12/14 02:53 AM, Tom Fifield wrote:
 Registration and details to come. We'll do our regular etherpad
 brainstorming, but since some of us are having a well-earned break at
 the moment, let's hold off on that for a few days at least.

I was just wondering whether I had missed the registration link, or whether it
has not been sent out yet.

Anyhow, thanks for your help.

Cheers,
 Ignace M
-- 
// eNovance Inc.  http://enovance.com
// ✉ ign...@enovance.com  ☎ (+1)-514-907-0068
// 127 rue Saint-Pierre, Montréal, Québec H2Y 2L6



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Cinder] volume / host relation

2015-01-07 Thread Amit Das
+1 on the issue faced.

We too had to use a generic name and modify DB records to make this work.
However, we had to struggle for a couple of days on a multi-node setup before
digging out the root cause.

It is worth looping in the dev group to see if they have any alternate suggestions.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Cinder] volume / host relation

2015-01-07 Thread Mike Smith
+1 in that we experienced this as well. We implemented an HA iSCSI Cinder setup,
and failover from one node to the other works great until you have to do some
administrative action on the volume; the mismatched name in the database then
messes that up. We’ve had to change hostnames in the DB or fail back to make it
work. We’ve been meaning to genericize the name on our side, just like we do for
Nova with the nova-network host.



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators