[openstack-dev] [Devstack] Devstack Install broken

2015-02-20 Thread Eduard Matei
Hi,

I'm trying to install devstack and it keeps failing with:

2015-02-20 07:26:12.693 | ++ is_fedora
2015-02-20 07:26:12.693 | ++ [[ -z Ubuntu ]]
2015-02-20 07:26:12.693 | ++ '[' Ubuntu = Fedora ']'
2015-02-20 07:26:12.693 | ++ '[' Ubuntu = 'Red Hat' ']'
2015-02-20 07:26:12.693 | ++ '[' Ubuntu = CentOS ']'
2015-02-20 07:26:12.693 | ++ '[' Ubuntu = OracleServer ']'
2015-02-20 07:26:12.693 | + [[ -d /var/cache/pip ]]
2015-02-20 07:26:12.693 | + [[ ! -d /opt/stack/.wheelhouse ]]
2015-02-20 07:26:12.693 | + source tools/build_wheels.sh
2015-02-20 07:26:12.693 | /opt/devstack/stack.sh: line 688:
tools/build_wheels.sh: No such file or directory
2015-02-20 07:26:12.693 | ++ exit_trap
2015-02-20 07:26:12.693 | ++ local r=1
2015-02-20 07:26:12.693 | +++ jobs -p
2015-02-20 07:26:12.693 | ++ jobs=
2015-02-20 07:26:12.693 | ++ [[ -n '' ]]
2015-02-20 07:26:12.694 | ++ kill_spinner
2015-02-20 07:26:12.694 | ++ '[' '!' -z '' ']'
2015-02-20 07:26:12.694 | ++ [[ 1 -ne 0 ]]
2015-02-20 07:26:12.694 | ++ echo 'Error on exit'
2015-02-20 07:26:12.694 | Error on exit
2015-02-20 07:26:12.694 | ++ [[ -z /opt/stack/logs ]]
2015-02-20 07:26:12.694 | ++ /opt/devstack/tools/worlddump.py -d /opt/stack/logs
2015-02-20 07:26:12.767 | ++ exit 1


Any ideas how to get around this?


Eduard

-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*

Disclaimer:
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed.
If you are not the named addressee or an employee or agent responsible
for delivering this message to the named addressee, you are hereby
notified that you are not authorized to read, print, retain, copy or
disseminate this message or any part of it. If you have received this
email in error we request you to notify us by reply e-mail and to
delete all electronic files of the message. If you are not the
intended recipient you are notified that disclosing, copying,
distributing or taking any action in reliance on the contents of this
information is strictly prohibited.
E-mail transmission cannot be guaranteed to be secure or error free as
information could be intercepted, corrupted, lost, destroyed, arrive
late or incomplete, or contain viruses. The sender therefore does not
accept liability for any errors or omissions in the content of this
message, and shall have no liability for any loss or damage suffered
by the user, which arise as a result of e-mail transmission.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] Devstack Install broken

2015-02-20 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 02/20/2015 12:02 PM, Eduard Matei wrote:
 
 Hi,
 
 I'm trying to install devstack and it keeps failing with:
 
 2015-02-20 07:26:12.693 | ++ is_fedora
 2015-02-20 07:26:12.693 | ++ [[ -z Ubuntu ]]
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = Fedora ']'
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = 'Red Hat' ']'
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = CentOS ']'
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = OracleServer ']'
 2015-02-20 07:26:12.693 | + [[ -d /var/cache/pip ]]
 2015-02-20 07:26:12.693 | + [[ ! -d /opt/stack/.wheelhouse ]]
 2015-02-20 07:26:12.693 | + source tools/build_wheels.sh
 2015-02-20 07:26:12.693 | /opt/devstack/stack.sh: line 688: tools/build_wheels.sh: No such file or directory
 2015-02-20 07:26:12.693 | ++ exit_trap
 2015-02-20 07:26:12.693 | ++ local r=1
 2015-02-20 07:26:12.693 | +++ jobs -p
 2015-02-20 07:26:12.693 | ++ jobs=
 2015-02-20 07:26:12.693 | ++ [[ -n '' ]]
 2015-02-20 07:26:12.694 | ++ kill_spinner
 2015-02-20 07:26:12.694 | ++ '[' '!' -z '' ']'
 2015-02-20 07:26:12.694 | ++ [[ 1 -ne 0 ]]
 2015-02-20 07:26:12.694 | ++ echo 'Error on exit'
 2015-02-20 07:26:12.694 | Error on exit
 2015-02-20 07:26:12.694 | ++ [[ -z /opt/stack/logs ]]
 2015-02-20 07:26:12.694 | ++ /opt/devstack/tools/worlddump.py -d /opt/stack/logs
 2015-02-20 07:26:12.767 | ++ exit 1
 
 
 Any ideas how to get around this?
 

The build_wheels.sh script was added to devstack in change
Idff1ea69a5ca12ba56098e664dbf6924fe6a2e47. Do you have that commit in
your checkout?

 
 Eduard
 
 --
 
 *Eduard Biceri Matei, Senior Software Developer* 
 www.cloudfounders.com | eduard.ma...@cloudfounders.com
 
 *CloudFounders, The Private Cloud Software Company*
 
 Disclaimer: This email and any files transmitted with it are
 confidential and intended solely for the use of the individual or
 entity to whom they are addressed. If you are not the named
 addressee or an employee or agent responsible for delivering this
 message to the named addressee, you are hereby notified that you
 are not authorized to read, print, retain, copy or disseminate this
 message or any part of it. If you have received this email in error
 we request you to notify us by reply e-mail and to delete all
 electronic files of the message. If you are not the intended 
 recipient you are notified that disclosing, copying, distributing
 or taking any action in reliance on the contents of this
 information is strictly prohibited. E-mail transmission cannot be
 guaranteed to be secure or error free as information could be
 intercepted, corrupted, lost, destroyed, arrive late or incomplete,
 or contain viruses. The sender therefore does not accept liability
 for any errors or omissions in the content of this message, and
 shall have no liability for any loss or damage suffered by the
 user, which arise as a result of e-mail transmission.
 

Please avoid sending those disclaimers to public mailing lists. ^^

/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJU5xZTAAoJEC5aWaUY1u57lfIH/0u7daEg9PotXku0k1UctExq
JKQOXSLGWE+pdmB9kvxMCUuaMe6ABNCkVpJB+qn1RP5gL4sQnB3+KyoYn10iM1Ck
EKZuPuXk6CzVRlyIavUxBBkQcApjyHTJgNCTePTYkSHVGNGf/Lm+d1UjiGdVLEm+
kG7sU5xvby3oUAdGNcZcqSY77IqJE9slBqBpwYcaOwoegnUI4zlS2NKU5Eda+kAk
+jw6pFogyFNoF09f1FnSjwP26zCsAI2cvukrs65gfRGYFtIBnExp+WoqcEiyMTrh
xfJsu1rr6TPsSCbhWC0ronphYStheuUFTGsHIv2SJYAzYwkN8W+M/1WIqffsKbo=
=q8TL
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] qemu-img disk corruption bug to patch or not?

2015-02-20 Thread Philipp Marek
  Potentially corrupted images are bad; depending on the affected data it 
  might only be diagnosed some time after installation, so IMO the fix is 
  needed.
 
 Sure but there is a (potentially hefty) performance impact.
Well, do you want fast or working/consistent images?

  Only a minor issue: I'd like to see it have a check for the qemu version, 
  and only then kick in.
 
 That doesn't work as $distro may have a qemu 2.0.x that has the fix
 backported. That's why the workarounds group was created. You specify the
 safe default and if a distributor knows its package is safe it can alter
 the default.
Oh, okay... with the safe default being derived from the qemu version ;)


 At some point you can invert that or remove the option altogether.
Yeah, 5 years down the line ... ;[
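
For anyone not familiar with the workarounds group being discussed: it is
essentially an oslo.config option group shipping a safe default that a
distributor can flip. A minimal sketch, with a made-up option name rather
than the real flag:

# Sketch only: the option name and code paths are illustrative, not the
# actual Cinder/Nova workaround flag.
from oslo_config import cfg

CONF = cfg.CONF

workarounds_opts = [
    cfg.BoolOpt('limit_qemu_img_operations',
                default=True,  # safe default shipped upstream
                help='Work around a qemu-img corruption bug. Distributors '
                     'whose qemu package carries the backported fix can '
                     'set this to False.'),
]
CONF.register_opts(workarounds_opts, group='workarounds')


def copy_image(src, dst):
    if CONF.workarounds.limit_qemu_img_operations:
        pass  # slower-but-safe code path
    else:
        pass  # distributor has vouched for its qemu; use the fast path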

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-dev] [PATCH 8/8] [RFC] [neutron] ovn: Start work on design documentation.

2015-02-20 Thread Miguel Ángel Ajo

On Thursday, 19 February 2015 at 23:15, Kyle Mestery wrote:
 [Adding neutron tag to subject, comments below.]
  
 On Thu, Feb 19, 2015 at 3:55 PM, Ben Pfaff b...@nicira.com 
 (mailto:b...@nicira.com) wrote:
  [moving this conversation to openstack-dev because it's more
  interesting there and that avoids crossposting to a subscribers-only
  list]
   
  On Thu, Feb 19, 2015 at 10:57:02AM +0100, Miguel Ángel Ajo wrote:
   I especially liked the VIF port lifecycle, it looks good to me. I only miss
   some "port_security" concepts we have in neutron, which I guess could have
   been deliberately omitted for a start.
  
  In neutron we have something called security groups, and every port
   belongs to 1 or more security groups. Each security group has a list of
   rules to control traffic at port level in a very fine-grained fashion
   (ingress/egress, protocol/flags/etc., remote_ip/mask or security_group ID).
  
   I guess we could build & render security_group ID to multiple IPs for each
   port, but then we will miss the ingress/egress and protocol flags (like
   type of protocol, ports, etc. [1]).
  
   Also, be aware that without security group ID references from neutron,
   when lots of ports belong to the same security group we end up with an
   exponential growth of rules / OF entries per port. We solved this in the
   server-agent communication for the reference OVS solution by keeping lists
   of IPs belonging to security group IDs, and then, separately, having the
   references from the rules.
   
  Thanks a lot for the comment.
   
  We aim to fully support security groups in OVN.  The current documents
  don't capture that intent.  (That's partly because we're planning to
  implement them in terms of some new connection tracking code for OVS
  that's still in the process of getting committed, and partly because I
  haven't fully thought them through yet.)
   
Ah, yes, I know about it; I'm tracking that effort to benchmark and
share some numbers on OVS+OF vs OVS+veths+LB+iptables
stateful firewalling/security groups.

I guess benchmarking namespace-less routing would make sense too.

  My initial reaction is that we can implement security groups as
  another action in the ACL table that is similar to allow but in
  addition permits reciprocal inbound traffic.  Does that sound
  sufficient to you?
Yes, having fine-grained allows (matching on protocols, ports, and
remote IPs) would satisfy the neutron use case.

Also we use connection tracking to allow reciprocal inbound traffic
via ESTABLISHED/RELATED, any equivalent solution would do.

For reference, our SG implementation is currently able to match on
combinations of:

* direction: ingress/egress
* protocol: icmp/tcp/udp/raw number
* port_range:  min-max   (it’s always dst)
* L2 packet ethertype: IPv4, IPv6, etc...
* remote_ip_prefix: as a CIDR, or
* remote_group_id (to reference all other IPs in a certain group)

All of them assume connection tracking, so packets of known connections
are allowed back in the other direction.
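
For illustration, a single rule carrying those fields looks roughly like
the dict the Neutron API returns (the field names are the real API
attributes; the values here are made up):

# One Neutron security-group rule, as a plain dict.
example_rule = {
    'direction': 'ingress',             # ingress/egress
    'protocol': 'tcp',                  # icmp/tcp/udp/raw number
    'port_range_min': 80,               # port_range: min-max (always dst)
    'port_range_max': 443,
    'ethertype': 'IPv4',                # L2 packet ethertype
    'remote_ip_prefix': '10.0.0.0/24',  # a CIDR, or...
    'remote_group_id': None,            # ...a reference to another group's members
}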

   
  Is the exponential explosion due to cross-producting, that is, because
  you have, say, n1 source addresses and n2 destination addresses and so
  you need n1*n2 flows to specify all the combinations?  We aim to solve
  that in OVN by giving the CMS direct support for more sophisticated
  matching rules, so that it can say something like:
   
  ip.src in {a, b, c, ...} && ip.dst in {d, e, f, ...} &&
   (tcp.src in {80, 443, 8080} || tcp.dst in {80, 443, 8080})

That sounds good and very flexible.
   
  and let OVN implement it in OVS via the conjunctive match feature
  recently added, which is like a set membership match but more
  powerful.  
Hmm, where can I find examples of that feature? It sounds interesting.
  
  It might still be nice to support lists of IPs (or
  whatever), since these lists could still recur in a number of
  circumstances, but my guess is that this will help a lot even without
  that.
   
As far as I understood, given the way megaflows resolve rules via hashes,
even if we had lots of rules with different IP addresses that would be very
fast, probably as fast as or faster than our current ipset solution.

The only caveat would be having to update lots of flow rules when a port goes
in or out of a security group, since you have to go and clear/add the rules on
each single port in the same security group (as long as they have a rule
referencing the SG).

  Thoughts?
   
 This all sounds really good to me Ben. I look forward to seeing the 
 connection tracking code land  
 and some design details on the security groups aspects of OVN published once 
 that happens!
  
  
  
  
  
  

  
 Thanks,
 Kyle
   
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
  

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Sean Dague
On 02/20/2015 12:26 AM, Adam Gandelman wrote:
 It's more than just the naming. In the original proposal, requirements.txt
 is the compiled list of all pinned deps (direct and transitive), while
 requirements.in reflects what people will actually use. Whatever is in
 requirements.txt affects the egg's requires.txt. Instead, we can keep
 requirements.txt unchanged and have it still be the canonical list of
 dependencies, while requirements.out/requirements.gate/requirements.whatever
 is an upstream utility we produce and use to keep things sane on our slaves.
 
 Maybe all we need is:
 
 * update the existing post-merge job on the requirements repo to produce
 a requirements.txt (as it does now) as well the compiled version.  
 
 * modify devstack in some way with a toggle to have it process
 dependencies from the compiled version when necessary
 
 I'm not sure how the second bit jibes with the existing devstack
 installation code, specifically with the libraries-from-git-or-master part,
 but we can probably add something to warm the system with dependencies
 from the compiled version prior to calling pip/setup.py/etc.

It sounds like you are suggesting we take the tool we use to ensure that
all of OpenStack is installable together in a unified way, and change
its installation so that it doesn't do that any more.

Which I'm fine with.

But if we are doing that we should just whole hog give up on the idea
that OpenStack can be run all together in a single environment, and just
double down on the devstack venv work instead.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Behavior of default security group

2015-02-20 Thread Hirofumi Ichihara
Neutron experts,

I caught a bug report[1].

Currently, Neutron enables an admin to delete the default security group. But
Neutron doesn't allow the default security group to stay deleted: Neutron
regenerates the default security group the next time the security group API is
called. I have two questions about this behavior.

1. Why does Neutron regenerate the default security group? If the default
security group is essential, we shouldn't let an admin delete it.
2. Why is the security group named "default" essential? Users may want to
change its name.
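
For context, the regeneration amounts to a get-or-create check that runs on
security group API calls; a rough sketch of the shape of it (not Neutron's
actual code, and the db helper names are made up):

# Rough sketch of the regeneration behaviour described above.
def ensure_default_security_group(db, tenant_id):
    default = db.get_security_group_by_name(tenant_id, 'default')
    if default is None:
        # The group was deleted (e.g. by an admin), so it is silently
        # recreated the next time any security group API call is made
        # for this tenant.
        default = db.create_security_group(tenant_id, name='default',
                                           description='default')
    return default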

I'd like neutron experts' suggestions.

[1]: https://bugs.launchpad.net/neutron/+bug/1423475

Thanks,
Hirofumi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] Devstack Install broken

2015-02-20 Thread Silvan Kaiser
Hmm, I had the same issue about three hours ago but not any more.
Either DevStack was updated in the last hour/minutes or my fiddling with the
cwd settings of my ansible scripts did the job, not sure which.
If you're running stack.sh from a script rather than manually, my hint is to
check your environment's cwd settings.

Silvan

2015-02-20 12:02 GMT+01:00 Eduard Matei eduard.ma...@cloudfounders.com:


 Hi,

 I'm trying to install devstack and it keeps failing with:

 2015-02-20 07:26:12.693 | ++ is_fedora
 2015-02-20 07:26:12.693 | ++ [[ -z Ubuntu ]]
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = Fedora ']'
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = 'Red Hat' ']'
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = CentOS ']'
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = OracleServer ']'
 2015-02-20 07:26:12.693 | + [[ -d /var/cache/pip ]]
 2015-02-20 07:26:12.693 | + [[ ! -d /opt/stack/.wheelhouse ]]
 2015-02-20 07:26:12.693 | + source tools/build_wheels.sh
 2015-02-20 07:26:12.693 | /opt/devstack/stack.sh: line 688: 
 tools/build_wheels.sh: No such file or directory
 2015-02-20 07:26:12.693 | ++ exit_trap
 2015-02-20 07:26:12.693 | ++ local r=1
 2015-02-20 07:26:12.693 | +++ jobs -p
 2015-02-20 07:26:12.693 | ++ jobs=
 2015-02-20 07:26:12.693 | ++ [[ -n '' ]]
 2015-02-20 07:26:12.694 | ++ kill_spinner
 2015-02-20 07:26:12.694 | ++ '[' '!' -z '' ']'
 2015-02-20 07:26:12.694 | ++ [[ 1 -ne 0 ]]
 2015-02-20 07:26:12.694 | ++ echo 'Error on exit'
 2015-02-20 07:26:12.694 | Error on exit
 2015-02-20 07:26:12.694 | ++ [[ -z /opt/stack/logs ]]
 2015-02-20 07:26:12.694 | ++ /opt/devstack/tools/worlddump.py -d 
 /opt/stack/logs
 2015-02-20 07:26:12.767 | ++ exit 1


 Any ideas how to get around this?


 Eduard

 --

 *Eduard Biceri Matei, Senior Software Developer*
 www.cloudfounders.com
  | eduard.ma...@cloudfounders.com



 *CloudFounders, The Private Cloud Software Company*

 Disclaimer:
 This email and any files transmitted with it are confidential and intended 
 solely for the use of the individual or entity to whom they are addressed.
 If you are not the named addressee or an employee or agent responsible for 
 delivering this message to the named addressee, you are hereby notified that 
 you are not authorized to read, print, retain, copy or disseminate this 
 message or any part of it. If you have received this email in error we 
 request you to notify us by reply e-mail and to delete all electronic files 
 of the message. If you are not the intended recipient you are notified that 
 disclosing, copying, distributing or taking any action in reliance on the 
 contents of this information is strictly prohibited.
 E-mail transmission cannot be guaranteed to be secure or error free as 
 information could be intercepted, corrupted, lost, destroyed, arrive late or 
 incomplete, or contain viruses. The sender therefore does not accept 
 liability for any errors or omissions in the content of this message, and 
 shall have no liability for any loss or damage suffered by the user, which 
 arise as a result of e-mail transmission.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

--
*Quobyte* GmbH
Boyenstr. 41 - 10115 Berlin-Mitte - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] Devstack Install broken

2015-02-20 Thread Sean Dague
Yes, it looks like a bug that was merged which the gate didn't catch:
https://review.openstack.org/#/c/157720/ is the fix (I just pushed it up).

Could you share your localrc or local.conf? It would also be worth
tracking down which piece of devstack is changing your working directory
differently than how it is in the gate config.
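
As an aside on the cwd sensitivity: the failure mode is the usual
relative-path-versus-anchored-path issue. A small Python illustration of the
idea (not devstack's code, which is bash; the directory is an assumption):

# Why a relative "tools/build_wheels.sh" only resolves when the current
# working directory is the devstack checkout, while a path anchored on the
# tree location works from anywhere.
import os

DEVSTACK_DIR = '/opt/devstack'  # where stack.sh lives (assumption)

relative = 'tools/build_wheels.sh'  # resolved against os.getcwd()
anchored = os.path.join(DEVSTACK_DIR, 'tools', 'build_wheels.sh')

print(os.path.exists(relative))  # False unless cwd == /opt/devstack
print(os.path.exists(anchored))  # True regardless of cwd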

-Sean

On 02/20/2015 06:02 AM, Eduard Matei wrote:
 
 Hi,
 
 I'm trying to install devstack and it keeps failing with:
 
 2015-02-20 07:26:12.693 | ++ is_fedora
 2015-02-20 07:26:12.693 | ++ [[ -z Ubuntu ]]
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = Fedora ']'
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = 'Red Hat' ']'
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = CentOS ']'
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = OracleServer ']'
 2015-02-20 07:26:12.693 | + [[ -d /var/cache/pip ]]
 2015-02-20 07:26:12.693 | + [[ ! -d /opt/stack/.wheelhouse ]]
 2015-02-20 07:26:12.693 | + source tools/build_wheels.sh
 2015-02-20 07:26:12.693 | /opt/devstack/stack.sh: line 688: 
 tools/build_wheels.sh: No such file or directory
 2015-02-20 07:26:12.693 | ++ exit_trap
 2015-02-20 07:26:12.693 | ++ local r=1
 2015-02-20 07:26:12.693 | +++ jobs -p
 2015-02-20 07:26:12.693 | ++ jobs=
 2015-02-20 07:26:12.693 | ++ [[ -n '' ]]
 2015-02-20 07:26:12.694 | ++ kill_spinner
 2015-02-20 07:26:12.694 | ++ '[' '!' -z '' ']'
 2015-02-20 07:26:12.694 | ++ [[ 1 -ne 0 ]]
 2015-02-20 07:26:12.694 | ++ echo 'Error on exit'
 2015-02-20 07:26:12.694 | Error on exit
 2015-02-20 07:26:12.694 | ++ [[ -z /opt/stack/logs ]]
 2015-02-20 07:26:12.694 | ++ /opt/devstack/tools/worlddump.py -d 
 /opt/stack/logs
 2015-02-20 07:26:12.767 | ++ exit 1
 
 
 Any ideas how to get around this?
 
 
 Eduard
 
 -- 
 
 *Eduard Biceri Matei, Senior Software Developer*
 www.cloudfounders.com | eduard.ma...@cloudfounders.com
 
 *CloudFounders, The Private Cloud Software Company*
 
 Disclaimer:
 This email and any files transmitted with it are confidential and
 intended solely for the use of the individual or entity to whom they are
 addressed.
 If you are not the named addressee or an employee or agent responsible
 for delivering this message to the named addressee, you are hereby
 notified that you are not authorized to read, print, retain, copy or
 disseminate this message or any part of it. If you have received this
 email in error we request you to notify us by reply e-mail and to delete
 all electronic files of the message. If you are not the intended
 recipient you are notified that disclosing, copying, distributing or
 taking any action in reliance on the contents of this information is
 strictly prohibited. 
 E-mail transmission cannot be guaranteed to be secure or error free as
 information could be intercepted, corrupted, lost, destroyed, arrive
 late or incomplete, or contain viruses. The sender therefore does not
 accept liability for any errors or omissions in the content of this
 message, and shall have no liability for any loss or damage suffered by
 the user, which arise as a result of e-mail transmission.
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][vmware][ironic] Configuring active/passive HA Nova compute

2015-02-20 Thread Matthew Booth
Gary Kotton came across a doozy of a bug recently:

https://bugs.launchpad.net/nova/+bug/1419785

In short, when you start a Nova compute, it will query the driver for
instances and compare that against the expected host of the the instance
according to the DB. If the driver is reporting an instance the DB
thinks is on a different host, it assumes the instance was evacuated
while Nova compute was down, and deletes it on the hypervisor. However,
Gary found that you trigger this when starting up a backup HA node which
has a different `host` config setting. i.e. You fail over, and the first
thing it does is delete all your instances.
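
A simplified sketch of the startup check described above (not nova's actual
code; the driver/db helpers are made up) shows why a mismatched `host` value
is fatal:

# Simplified sketch of the behaviour described above.
def destroy_evacuated_instances(driver, db, conf_host):
    for instance in driver.list_instances_on_hypervisor():
        db_instance = db.get_instance(instance.uuid)
        if db_instance.host != conf_host:
            # Nova assumes the instance was evacuated elsewhere while this
            # compute was down... but if conf_host simply changed (a backup
            # HA node with a different `host` setting), this deletes live VMs.
            driver.destroy(instance)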

Gary and I both agree on a couple of things:

1. Deleting all your instances is bad
2. HA nova compute is highly desirable for some drivers

We disagree on the approach to fixing it, though. Gary posted this:

https://review.openstack.org/#/c/154029/

I've already outlined my objections to this approach elsewhere, but to
summarise I think this fixes 1 symptom of a design problem, and leaves
the rest untouched. If the value of nova compute's `host` changes, then
the assumption that instances associated with that compute can be
identified by the value of instance.host becomes invalid. This
assumption is pervasive, so it breaks a lot of stuff. The worst one is
_destroy_evacuated_instances(), which Gary found, but if you scan
nova/compute/manager for the string 'self.host' you'll find lots of
them. For example, all the periodic tasks are broken, including image
cache management, and the state of ResourceTracker will be unusual.
Worse, whenever a new instance is created it will have a different value
of instance.host, so instances running on a single hypervisor will
become partitioned based on which nova compute was used to create them.

In short, the system may appear to function superficially, but it's
unsupportable.

I had an alternative idea. The current assumption is that the `host`
managing a single hypervisor never changes. If we break that assumption,
we break Nova, so we could assert it at startup and refuse to start if
it's violated. I posted this VMware-specific POC:

https://review.openstack.org/#/c/154907/

However, I think I've had a better idea. Nova creates ComputeNode
objects for its current configuration at startup which, amongst other
things, are a map of host:hypervisor_hostname. We could assert when
creating a ComputeNode that hypervisor_hostname is not already
associated with a different host, and refuse to start if it is. We would
give an appropriate error message explaining that this is a
misconfiguration. This would prevent the user from hitting any of the
associated problems, including the deletion of all their instances.
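
A sketch of what that assertion could look like (illustrative only; the
lookup helper is made up, not an actual nova patch):

class HostMismatch(Exception):
    pass


def assert_hypervisor_ownership(db, conf_host, hypervisor_hostname):
    # Refuse to start if another `host` already claims this hypervisor.
    existing = db.get_compute_node_by_hypervisor(hypervisor_hostname)
    if existing and existing.host != conf_host:
        raise HostMismatch(
            "hypervisor %s is already managed by host %s; refusing to start "
            "with host=%s. If this is an HA pair, configure both nodes with "
            "the same `host` value." % (hypervisor_hostname, existing.host,
                                        conf_host))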

We can still do active/passive HA!

If we configure both nodes in the active/passive cluster identically,
including with the same value of `host`, I don't see why this shouldn't
work today. I don't even think the configuration is onerous. All we
would be doing is preventing the user from accidentally running a
misconfigured HA which leads to inconsistent state, and will eventually
require manual cleanup.

We would still have to be careful that we don't bring up both nova
computes simultaneously. The VMware driver, at least, has hardcoded
assumptions that it is the only writer in certain circumstances. That
problem would have to be handled separately, perhaps at the messaging layer.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic]

2015-02-20 Thread Dmitry Tantsur

On 02/20/2015 06:14 AM, Ganapathy, Sandhya wrote:

Hi All,

I would like to discuss the Chassis Discovery Tool Blueprint -
https://review.openstack.org/#/c/134866/

The blueprint suggests Hardware enrollment and introspection for
properties at the Chassis layer. Suitable for micro-servers that have an
active chassis to query for details.

Initially, the idea was proposed as an API change at the Ironic layer.
We found many complexities, such as interaction with the conductor and the
fact that nodes in a chassis may be mapped to different conductors.

So, decision was taken to keep it as a separate tool above the Ironic
API layer.  It is a generic tool that can be plugged in for specific
hardware.


I'll reiterate over my points:

0. /tool directory is a no-go, we have development tooling there. We 
won't run tests from there, distributions won't package it, etc.


So valid options are:

1. create a driver vendor passthru, not sure why you want to care about 
node mapping here


2. create a new proper CLI. It does not feel right to create too specific a
tool (which will actually be vendor-specific for a long time, or forever)


3. create a new repo (ironic-extras?) for cool tools for Ironic. that's 
the way we went with ironic-discoverd, and that's my vote if you can't 
create a vendor passthru.


I see Ironic as a bare metal API, not just a set of tools, so that e.g.
every feature added to Ironic can be consumed from a UI. If it should be a
tool, I see no reason for the Ironic core team to start handling it (we have
enough reviews, honestly :).


Dmitry.



There are different opinions from the community on this and it will be
good to come to a consensus.

I have also added the topic as an agenda in the Ironic IRC meeting.

Thanks,

Sandhya



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [UI] Sorting and filtering of node list

2015-02-20 Thread Oleg Gelbukh
Aleksey,

Thank you for clarification. Personally, I'm more interested in IP-based
display/grouping/filtering of deployed nodes.

And yes, it would be super-useful to have filtering in back-end and API,
not only in UI.

--
Best regards,
Oleg

On Fri, Feb 20, 2015 at 12:30 PM, Aleksey Kasatkin akasat...@mirantis.com
wrote:

 Oleg, The problem with IP addresses (for all networks but admin-pxe) is
 that they are not available until deployment is started or
 /clusters/(?P<cluster_id>\d+)/orchestrator/deployment/defaults/ is called.
 Nailgun just doesn't allocate them in advance. It was discussed some time
 before (
 https://blueprints.launchpad.net/fuel/+spec/assign-ips-on-nodes-addition
 ) but not planned yet. There is no problem with admin-pxe addresses though.

 I agree that filtering is better done in the backend, but it seems that it
 will not be done soon. AFAIC, it will not be in 6.1.
 We didn't even decide what to do with API versioning yet.


 Aleksey Kasatkin


 On Thu, Feb 19, 2015 at 12:05 PM, Vitaly Kramskikh 
 vkramsk...@mirantis.com wrote:

 I think all these operations for nodes (grouping, sorting, filtering) can
 be done on the backend, but we can do it completely on the UI side and
 shouldn't wait for backend implementation. We can switch to it after it
 becomes available.

 2015-02-17 19:44 GMT+07:00 Sergey Vasilenko svasile...@mirantis.com:

 +1, sorting should be there...

 Pagination may be too, but not activated by default.


 /sv




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Vitaly Kramskikh,
 Fuel UI Tech Lead,
 Mirantis, Inc.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [UI] Sorting and filtering of node list

2015-02-20 Thread Aleksey Kasatkin
Oleg, The problem with IP addresses (for all networks but admin-pxe) is
that they are not available until deployment is started or
/clusters/(?P<cluster_id>\d+)/orchestrator/deployment/defaults/ is called.
Nailgun just doesn't allocate them in advance. It was discussed some time
before (
https://blueprints.launchpad.net/fuel/+spec/assign-ips-on-nodes-addition )
but not planned yet. There is no problem with admin-pxe addresses though.

I agree that filtering is better done in the backend, but it seems that it
will not be done soon. AFAIC, it will not be in 6.1.
We didn't even decide what to do with API versioning yet.


Aleksey Kasatkin


On Thu, Feb 19, 2015 at 12:05 PM, Vitaly Kramskikh vkramsk...@mirantis.com
wrote:

 I think all these operations for nodes (grouping, sorting, filtering) can
 be done on the backend, but we can do it completely on the UI side and
 shouldn't wait for backend implementation. We can switch to it after it
 becomes available.

 2015-02-17 19:44 GMT+07:00 Sergey Vasilenko svasile...@mirantis.com:

 +1, sorting should be there...

 Pagination may be too, but not activated by default.


 /sv



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Vitaly Kramskikh,
 Fuel UI Tech Lead,
 Mirantis, Inc.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New version of python-neutronclient release: 2.3.11

2015-02-20 Thread Thierry Carrez
Matt Riedemann wrote:
 On 2/19/2015 6:06 PM, Henry Gessau wrote:
 The fix: https://review.openstack.org/157606
 
 That's busted by other things at the moment, it sounds like the solution
 starts here:
 
 https://review.openstack.org/#/c/157535/

Do you know where it ends? We could set up Depends lines on those
requirements stable/* reviews and line them up so that they are ready to
merge when openstackclient is fixed in devstack.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New version of python-neutronclient release: 2.3.11

2015-02-20 Thread Akihiro Motoki
This is not a topic specific to neutronclient.
We need a more consistent versioning policy on client releases.
Assuming x.y.z as the version number, just incrementing z should keep
backward compat.
For a client release for a new cycle (like this one), it is better to bump
at least the y version.
By doing so, a client release compatible with some stable release can
be defined as < x.(y+1),
and we can reserve version numbers for important fixes.
(I believe the 2.4 version cap for icehouse expects a policy like this.)
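
As a concrete illustration of why a z-only bump slips under such a cap while
a y bump does not (the version numbers are only examples), using the
packaging library:

from packaging.specifiers import SpecifierSet
from packaging.version import Version

# A stable-branch style cap: allow patch releases, reserve the next minor.
cap = SpecifierSet('>=2.3.4,<2.4')

print(Version('2.3.11') in cap)  # True  -> picked up by stable branches
print(Version('2.4.0') in cap)   # False -> reserved for the new cycle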

Thanks,
Akihiro

2015-02-20 8:44 GMT+09:00 Joe Gordon joe.gord...@gmail.com:
 neutronclient is requiring a keystone client that is way too new for
 icehouse. 2.3.11 was released (And breaks with semver), but icehouse has a
 limit of 2.4. So global-requirements for icehouse needs to be fixed.

 2015-02-19 22:21:21.419 | ERROR: openstackclient.shell Exception raised:
 (python-keystoneclient 0.11.2 (/usr/local/lib/python2.7/dist-packages),
 Requirement.parse('python-keystoneclient>=1.1.0'),
 set(['python-neutronclient']))


 Note: I am not pushing the patch to fix this myself, we need more people who
 are able to monitor and fix these types of issues.


 On Thu, Feb 19, 2015 at 3:35 PM, Joe Gordon joe.gord...@gmail.com wrote:

 And this just broke icehouse jobs. Which means devstack-gate is broken.


 http://logs.openstack.org/53/157553/1/check/check-tempest-dsvm-full-icehouse/6c63b71//logs/devstacklog.txt.gz#_2015-02-19_22_21_21_419

 http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt?h=stable/icehouse#n89

 On Thu, Feb 19, 2015 at 1:35 PM, Kyle Mestery mest...@mestery.com wrote:

 The Neutron team is proud to announce the release of the latest version
 of python-neutronclient. This release includes the following bug fixes and
 improvements:

 3e5c6ba Updated from global requirements
 a774e84 Add unit tests for agentscheduler related commands
 069b14c Fix for incorrect parameter in user-id error message in shell.py
 57adb7f Fix CSV formatting of fixed_ips field in port-list command
 0be3b62 Implement LBaaS object model v2
 3d6769c Fix typo in test_cli20_agentschedulers filename
 e1633ed Add ip_version to extra dhcp opts
 59d7564 Skip None id when getting security_group_ids
 6f7cd14 Reverse order of tests to avoid incompatibility
 b0923a3 Utility method for boolean argument
 68fc402 Split base function of v2_0.Client into a separate class
 2dce00b Updated from global requirements
 51d2a23 Add parser options for port-update and port-create
 5b1c45a Add floating-ip-address to floatingip-create
 845f461 Fix KeyError when filtering SG rule listing
 30bd81c Updated from global requirements
 86fede6 Remove unreachable code from test_cli20 class
 cb5d462 Parse provider network attributes in net_create
 78b6310 Parameter support both id and name
 096fd1b Add '--router:external' option to 'net-create'
 aed3faf Fix TypeError for six.text_type
 d6e40b5 Add Python 3 classifiers
 4fa57fe Namespace of arguments is incorrectly used
 4beadef Fix True/False to accept Camel and Lower case
 799e288 Use adapter from keystoneclient
 5822d61 Use requests_mock instead of mox
 4b181cd Updated from global requirements
 04a0ec8 firewall policy update for a rule is not working
 0560f85 Fix columns setup base on csv formatter
 187c36c Correct the bash completion of CLI
 2f23623 Workflow documentation is now in infra-manual
 62063c1 Fix issues with Unicode compatibility for Py3

 For more details on the release, please see the git log history in the
 release notes in the LP page here:

 https://launchpad.net/python-neutronclient/+milestone/2.3.11

 Please report any bugs in LP.

 Thanks!
 Kyle


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Akihiro Motoki amot...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] Devstack Install broken

2015-02-20 Thread Sean Dague
On 02/20/2015 06:14 AM, Sean Dague wrote:
 Yes, it looks like a bug that was merged which the gate didn't catch:
 https://review.openstack.org/#/c/157720/ is the fix (I just pushed it up).
 
 Could you share your localrc or local.conf? It would also be worth
 tracking down which piece of devstack is changing your working directory
 differently than how it is in the gate config.

This was just sent into the gate, hopefully merged in ~ 1hr.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New version of python-neutronclient release: 2.3.11

2015-02-20 Thread Alan Pevec
 https://review.openstack.org/#/c/157535/

 Do you know where it ends ? We could set up Depends lines on those
 requirements stable/* reviews and line them up so that they are ready to
 merge when openstackclient is fixed in devstack.

An alternative workaround is https://review.openstack.org/157654, which is
blocked on a swift-dsvm-functional issue fixed by
https://review.openstack.org/157670, which is in turn blocked on
neutronclient, i.e. we have a cyclic dep here which will require a ninja
merge to resolve.

I suggest to start with ninja merging 157670 which looks the most innocent.

Once we get icehouse working again we can look at backporting venv
patch series to devstack icehouse.


Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] The strange case of osapi_compute_unique_server_name_scope

2015-02-20 Thread Matthew Booth
On 19/02/15 18:57, Jay Pipes wrote:
 On 02/19/2015 05:18 AM, Matthew Booth wrote:
 Nova contains a config variable osapi_compute_unique_server_name_scope.
 Its help text describes it pretty well:

 When set, compute API will consider duplicate hostnames invalid within
 the specified scope, regardless of case. Should be empty, project or
 global.

 So, by default hostnames are not unique, but depending on this setting
 they could be unique either globally or in the scope of a project.

 Ideally a unique constraint would be enforced by the database but,
 presumably because this is a config variable, that isn't the case here.
 Instead it is enforced in code, but the code which does this predictably
 races. My first attempt to fix this using the obvious SQL solution
 appeared to work, but actually fails in MySQL as it doesn't support that
 query structure[1][2]. SQLite and PostgreSQL do support it, but they
 don't support the query structure which MySQL supports. Note that this
 isn't just a syntactic thing. It looks like it's still possible to do
 this if we compound the workaround with a second workaround, but I'm
 starting to wonder if we'd be better fixing the design.

 First off, do we need this config variable? Is anybody actually using
 it? I suspect the answer's going to be yes, but it would be extremely
 convenient if it's not.

 Assuming this configurability is required, is there any way we can
 instead use it to control a unique constraint in the db at service
 startup? This would be something akin to a db migration. How do we
 manage those?

 Related to the above, I'm not 100% clear on which services run this
 code. Is it possible for different services to have a different
 configuration of this variable, and does that make sense? If so, that
 would preclude a unique constraint in the db.
 
 Is there a bug that you are attempting to fix here? If not, I'd suggest
 just leaving this code as it is...

The bug is the race. If a user sets
osapi_compute_unique_server_name_scope they're presumably expecting the
associated uniqueness guarantees, but we're not providing them.

John suggested I deprecate it and see who complains:

https://review.openstack.org/157731

In the mean time, I'm starting to think the most prudent course of
action would be to not fix the race: it's not the most important race in
that function, it's been broken for a long time, and I suspect few
people are using it. We could document that it's broken... In fact, I
might add that to the deprecation notice.
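
For anyone wondering what the race looks like, it is the classic
check-then-insert pattern; a rough Python sketch of the shape of the
problem, not nova's actual code (the session helpers are made up):

class DuplicateName(Exception):
    pass


def create_instance(session, hostname, scope_filter):
    # Check first...
    existing = session.query_instances(hostname=hostname, **scope_filter)
    if existing:
        raise DuplicateName(hostname)
    # ...but another request can insert the same hostname right here,
    # between the check and the insert, because nothing in the database
    # enforces the uniqueness that osapi_compute_unique_server_name_scope
    # promises.
    session.insert_instance(hostname=hostname)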

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New version of python-neutronclient release: 2.3.11

2015-02-20 Thread Alan Pevec
 (I believe the 2.4 version cap for icehouse expects a policy like
 this.)

Yes, that assumed Semantic Versioning (semver.org) which 2.3.11 broke.


Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] Devstack Install broken

2015-02-20 Thread M Ranga Swami Reddy
I too got this problem very frequently.



On Fri, Feb 20, 2015 at 4:32 PM, Eduard Matei
eduard.ma...@cloudfounders.com wrote:

 Hi,

 I'm trying to install devstack and it keeps failing with:

 2015-02-20 07:26:12.693 | ++ is_fedora
 2015-02-20 07:26:12.693 | ++ [[ -z Ubuntu ]]
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = Fedora ']'
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = 'Red Hat' ']'
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = CentOS ']'
 2015-02-20 07:26:12.693 | ++ '[' Ubuntu = OracleServer ']'
 2015-02-20 07:26:12.693 | + [[ -d /var/cache/pip ]]
 2015-02-20 07:26:12.693 | + [[ ! -d /opt/stack/.wheelhouse ]]
 2015-02-20 07:26:12.693 | + source tools/build_wheels.sh
 2015-02-20 07:26:12.693 | /opt/devstack/stack.sh: line 688:
 tools/build_wheels.sh: No such file or directory
 2015-02-20 07:26:12.693 | ++ exit_trap
 2015-02-20 07:26:12.693 | ++ local r=1
 2015-02-20 07:26:12.693 | +++ jobs -p
 2015-02-20 07:26:12.693 | ++ jobs=
 2015-02-20 07:26:12.693 | ++ [[ -n '' ]]
 2015-02-20 07:26:12.694 | ++ kill_spinner
 2015-02-20 07:26:12.694 | ++ '[' '!' -z '' ']'
 2015-02-20 07:26:12.694 | ++ [[ 1 -ne 0 ]]
 2015-02-20 07:26:12.694 | ++ echo 'Error on exit'
 2015-02-20 07:26:12.694 | Error on exit
 2015-02-20 07:26:12.694 | ++ [[ -z /opt/stack/logs ]]
 2015-02-20 07:26:12.694 | ++ /opt/devstack/tools/worlddump.py -d
 /opt/stack/logs
 2015-02-20 07:26:12.767 | ++ exit 1


 Any ideas how to get around this?


 Eduard

 --

 Eduard Biceri Matei, Senior Software Developer
 www.cloudfounders.com
  | eduard.ma...@cloudfounders.com



 CloudFounders, The Private Cloud Software Company

 Disclaimer:
 This email and any files transmitted with it are confidential and intended
 solely for the use of the individual or entity to whom they are addressed.
 If you are not the named addressee or an employee or agent responsible for
 delivering this message to the named addressee, you are hereby notified that
 you are not authorized to read, print, retain, copy or disseminate this
 message or any part of it. If you have received this email in error we
 request you to notify us by reply e-mail and to delete all electronic files
 of the message. If you are not the intended recipient you are notified that
 disclosing, copying, distributing or taking any action in reliance on the
 contents of this information is strictly prohibited.
 E-mail transmission cannot be guaranteed to be secure or error free as
 information could be intercepted, corrupted, lost, destroyed, arrive late or
 incomplete, or contain viruses. The sender therefore does not accept
 liability for any errors or omissions in the content of this message, and
 shall have no liability for any loss or damage suffered by the user, which
 arise as a result of e-mail transmission.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Joe Gordon
On Fri, Feb 20, 2015 at 12:10 PM, Doug Hellmann d...@doughellmann.com
wrote:



 On Fri, Feb 20, 2015, at 02:07 PM, Joe Gordon wrote:
  On Fri, Feb 20, 2015 at 7:27 AM, Doug Hellmann d...@doughellmann.com
  wrote:
 
  
  
   On Fri, Feb 20, 2015, at 06:06 AM, Sean Dague wrote:
On 02/20/2015 12:26 AM, Adam Gandelman wrote:
 Its more than just the naming.  In the original proposal,
 requirements.txt is the compiled list of all pinned deps (direct
 and
 transitive), while requirements.in http://requirements.in
 reflects
 what people will actually use.  Whatever is in requirements.txt
 affects
 the egg's requires.txt. Instead, we can keep requirements.txt
 unchanged
 and have it still be the canonical list of dependencies, while
 reqiurements.out/requirements.gate/requirements.whatever is an
 upstream
 utility we produce and use to keep things sane on our slaves.

 Maybe all we need is:

 * update the existing post-merge job on the requirements repo to
   produce
 a requirements.txt (as it does now) as well the compiled version.

 * modify devstack in some way with a toggle to have it process
 dependencies from the compiled version when necessary

 I'm not sure how the second bit jives with the existing devstack
 installation code, specifically with the libraries from
 git-or-master
 but we can probably add something to warm the system with
 dependencies
 from the compiled version prior to calling pip/setup.py/etc
 http://setup.py/etc
   
It sounds like you are suggesting we take the tool we use to ensure
 that
all of OpenStack is installable together in a unified way, and change
 its installation so that it doesn't do that any more.
   
Which I'm fine with.
   
But if we are doing that we should just whole hog give up on the idea
that OpenStack can be run all together in a single environment, and
 just
double down on the devstack venv work instead.
  
   I don't disagree with your conclusion, but that's not how I read what
 he
   proposed. :-)
  
  
  Sean was reading between the lines here. We are doing all this extra work
  to make sure OpenStack can be run together in a single environment, but
  it
  seems like more and more people are moving away from deploying with that
  model anyway. Moving to this model would require a little more then just
  installing everything in separate venvs.  We would need to make sure we
  don't cap oslo libraries etc. in order to prevent conflicts inside a
  single

 Something I've noticed in this discussion: We should start talking about
 our libraries, not just Oslo libraries. Oslo isn't the only project
 managing libraries used by more than one other team any more. It never
 really was, if you consider the clients, but we have PyCADF and various
 middleware and other things now, too. We can base our policies on what
 we've learned from Oslo, but we need to apply them to *all* libraries,
 no matter which team manages them.


My mistake, you are correct. I was incorrectly using oslo as a shorthand
for all openstack libraries.


  service. And we would still need a story around what to do with stable
  branches, how do we make sure new libraries don't break stable branches
  --
  which in turn can break master via grenade and other jobs.

 I'm comfortable using simple caps based on minor number increments for
 stable branches. New libraries won't end up in the stable branches
 unless they are a patch release. We can set up test jobs for stable
 branches of libraries to run tempest just like we do against master, but
 using all stable branch versions of the source files (AFAIK, we don't
 have a job like that now, but I could be wrong).


In general I agree, this is the right way forward for openstack libraries.
But as made clear this week, we will have to be a little more careful about
what is a valid patch release.



 I'm less confident that we have identified all of the issues with more
 limited pins, so I'm reluctant to back that approach for now. That may
 be an excess of caution on my part, though.

 
 
 
   Joe wanted requirements.txt to be the pinned requirements computed from
   the list of all global requirements that work together. Pinning to a
   single version works in our gate, but makes installing everything else
   together *outside* of the gate harder because if the projects don't all
   sync all requirements changes pretty much at the same time they won't
 be
   compatible.
  
   Adam suggested leaving requirements.txt alone and creating a different
   list of pinned requirements that is *only* used in our gate. That way
 we
   still get the pinning for our gate, and the values are computed from
 the
   requirements used in the projects but they aren't propagated back out
 to
   the projects in a way that breaks their PyPI or distro packages.
  
   Another benefit of Adam's proposal is that we would only need to keep
   the list of pins in 

Re: [openstack-dev] [Ironic] *ED states strike back

2015-02-20 Thread John Villalovos
Ruby,

What you say makes sense to me on keeping things consistent. So it sounds
good to me to always use them and not have them be optional.

John

On Thu, Feb 19, 2015 at 9:32 AM, Ruby Loo rlooya...@gmail.com wrote:

 I think that if there is a use case for an *ED state, then we should have
 it. And if we have one *ED state, I think it makes sense (and is
 consistent) to have them for all the active states.

 If we have *ED states, I would prefer that we add them in when the active
 state is added. So add stateING, stateED, stateFAIL. If a particular
 driver has nothing it wants to do in an *ED state, it can cause a
 transition from the *ED state to the passive/stable state.

 I don't want the *ED states to be optional because that puts the onus on
 the developer that needs the *ED state, to add it in (assuming they are
 aware that this is possible) and put in whatever plumbing might be needed.
 Which may mean that they'd have to modify code in another driver, that
 didn't need *ED in the first place. (If an *ED state is added, all drivers
 using that active state should handle the *ED state too because it is in
 the state machine, and I'd rather not complicate things by having both
 state-ING -> state-ED -> stable-state and state-ING -> stable-state.)

 --ruby

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Doug Hellmann


On Fri, Feb 20, 2015, at 03:36 PM, Joe Gordon wrote:
 On Fri, Feb 20, 2015 at 12:10 PM, Doug Hellmann d...@doughellmann.com
 wrote:
 
 
 
  On Fri, Feb 20, 2015, at 02:07 PM, Joe Gordon wrote:
   On Fri, Feb 20, 2015 at 7:27 AM, Doug Hellmann d...@doughellmann.com
   wrote:
  
   
   
On Fri, Feb 20, 2015, at 06:06 AM, Sean Dague wrote:
 On 02/20/2015 12:26 AM, Adam Gandelman wrote:
  Its more than just the naming.  In the original proposal,
  requirements.txt is the compiled list of all pinned deps (direct
  and
  transitive), while requirements.in http://requirements.in
  reflects
  what people will actually use.  Whatever is in requirements.txt
  affects
  the egg's requires.txt. Instead, we can keep requirements.txt
  unchanged
  and have it still be the canonical list of dependencies, while
  reqiurements.out/requirements.gate/requirements.whatever is an
  upstream
  utility we produce and use to keep things sane on our slaves.
 
  Maybe all we need is:
 
  * update the existing post-merge job on the requirements repo to
produce
  a requirements.txt (as it does now) as well the compiled version.
 
  * modify devstack in some way with a toggle to have it process
  dependencies from the compiled version when necessary
 
  I'm not sure how the second bit jives with the existing devstack
  installation code, specifically with the libraries from
  git-or-master
  but we can probably add something to warm the system with
  dependencies
  from the compiled version prior to calling pip/setup.py/etc
  http://setup.py/etc

 It sounds like you are suggesting we take the tool we use to ensure
  that
 all of OpenStack is installable together in a unified way, and change
 its installation so that it doesn't do that any more.

 Which I'm fine with.

 But if we are doing that we should just whole hog give up on the idea
 that OpenStack can be run all together in a single environment, and
  just
 double down on the devstack venv work instead.
   
I don't disagree with your conclusion, but that's not how I read what
  he
proposed. :-)
   
   
   Sean was reading between the lines here. We are doing all this extra work
   to make sure OpenStack can be run together in a single environment, but
   it
   seems like more and more people are moving away from deploying with that
   model anyway. Moving to this model would require a little more then just
   installing everything in separate venvs.  We would need to make sure we
   don't cap oslo libraries etc. in order to prevent conflicts inside a
   single
 
  Something I've noticed in this discussion: We should start talking about
  our libraries, not just Oslo libraries. Oslo isn't the only project
  managing libraries used by more than one other team any more. It never
  really was, if you consider the clients, but we have PyCADF and various
  middleware and other things now, too. We can base our policies on what
  we've learned from Oslo, but we need to apply them to *all* libraries,
  no matter which team manages them.
 
 
 My mistake, you are correct. I was incorrectly using oslo as a shorthand
 for all openstack libraries.

Yeah, I've been doing it, too, but the thing with neutronclient today
made me realize we shouldn't. :-)

 
 
   service. And we would still need a story around what to do with stable
   branches, how do we make sure new libraries don't break stable branches
   --
   which in turn can break master via grenade and other jobs.
 
  I'm comfortable using simple caps based on minor number increments for
  stable branches. New libraries won't end up in the stable branches
  unless they are a patch release. We can set up test jobs for stable
  branches of libraries to run tempest just like we do against master, but
  using all stable branch versions of the source files (AFAIK, we don't
  have a job like that now, but I could be wrong).
 
 
 In general I agree, this is the right way forward for openstack
 libraries.
 But as made clear this week, we will have to be a little more careful
 about
 what is a valid patch release.

Sure. With caps in place, and incrementing the minor version at the
start of each cycle, I think the issues that come up can be minimized
though.

 
 
 
  I'm less confident that we have identified all of the issues with more
  limited pins, so I'm reluctant to back that approach for now. That may
  be an excess of caution on my part, though.
 
  
  
  
Joe wanted requirements.txt to be the pinned requirements computed from
the list of all global requirements that work together. Pinning to a
single version works in our gate, but makes installing everything else
together *outside* of the gate harder because if the projects don't all
sync all requirements changes pretty much at the same time they won't
  be
compatible.
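
 For illustration only -- the package names and versions below are
 hypothetical -- the split being discussed looks roughly like this:

    # requirements.in: what projects actually declare (ranges)
    foolib>=1.6.0
    python-barclient>=1.2.0,<1.3.0

    # requirements.txt: the compiled, fully pinned set (direct and
    # transitive) known to install together in the gate
    foolib==1.6.1
    python-barclient==1.2.3
    bazlib==0.9.2        # transitive dependency, pinned as well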
   
Adam suggested leaving requirements.txt 

Re: [openstack-dev] [nova] Outcome of the nova FFE meeting for Kilo

2015-02-20 Thread Sourabh Patwardhan (sopatwar)
Nova core reviewers,

May I request an FFE for Cisco VIF driver:
https://review.openstack.org/#/c/157616/

This is a small isolated change similar to the vhostuser / open contrail vif 
drivers for which FFE has been granted.

Thanks,
Sourabh


From: Christopher Yeoh cbky...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, February 17, 2015 at 3:34 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Outcome of the nova FFE meeting for Kilo



On Wed, Feb 18, 2015 at 6:18 AM, Matt Riedemann 
mrie...@linux.vnet.ibm.com wrote:


On 2/16/2015 9:57 PM, Jay Pipes wrote:
Hi Mikal, sorry for top-posting. What was the final decision regarding
the instance tagging work?

Thanks,
-jay

On 02/16/2015 09:44 PM, Michael Still wrote:
Hi,

we had a meeting this morning to try and work through all the FFE
requests for Nova. The meeting was pretty long -- two hours or so --
and we did it in the nova IRC channel in an attempt to be as open as
possible. The agenda for the meeting was the list of FFE requests at
https://etherpad.openstack.org/p/kilo-nova-ffe-requests

I recognise that this process is difficult for all, and that it is
frustrating when your FFE request is denied. However, we have tried
very hard to balance distractions from completing priority tasks and
getting as many features into Kilo as possible. I ask for your
patience as we work to finalize the Kilo release.

That said, here's where we ended up:

Approved:

 vmware: ephemeral disk support
 API: Keypair support for X509 public key certificates

We were also presented with a fair few changes which are relatively
trivial (single patch, not very long) and isolated to a small part of
the code base. For those, we've selected the ones with the greatest
benefit. These ones are approved so long as we can get the code merged
before midnight on 20 February 2015 (UTC). The deadline has been
introduced because we really are trying to focus on priority work and
bug fixes for the remainder of the release, so I want to time box the
amount of distraction these patches cause.

Those approved in this way are:

 ironic: Pass the capabilities to ironic node instance_info
 libvirt: Nova vif driver plugin for opencontrail
 libvirt: Quiescing filesystems with QEMU guest agent during image
snapshotting
 libvirt: Support vhost user in libvirt vif driver
 libvirt: Support KVM/libvirt on System z (S/390) as a hypervisor
platform

It should be noted that there was one request which we decided didn't
need a FFE as it isn't feature work. That may proceed:

 hyperv: unit tests refactoring

Finally, there were a couple of changes we were uncomfortable merging
this late in the release as we think they need time to bed down
before a release we consider stable for a long time. We'd like to see
these merge very early in Liberty:

 libvirt: use libvirt storage pools
 libvirt: Generic Framework for Securing VNC and SPICE
Proxy-To-Compute-Node Connections

Thanks again to everyone with their patience with our process, and
helping to make Kilo an excellent Nova release.

Michael


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


There are notes in the etherpad,

https://etherpad.openstack.org/p/kilo-nova-ffe-requests

but I think we wanted to get cyeoh and Ken'ichi's thoughts on the v2 and/or 
v2.1 question about the change, i.e. should it be v2.1 only with microversions 
or if that is going to block it, is it fair to keep out the v2 change that's 
already in the patch?


So if it can be fully merged by end of week I'm ok with it going into v2 and 
v2.1. Otherwise I think it needs to wait for microversions. I'd like to see 
v2.1 enabled next Monday (I don't want it to go in just before a weekend). And the 
first microversion change (which is ready to go) a couple of days after. And 
we want a bit of an API freeze while that is happening.

Chris



--

Thanks,

Matt Riedemann



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

[openstack-dev] [nova][cinder][neutron][security] Rootwrap discussions at OSSG mid-cycle

2015-02-20 Thread Lucas Fisher

All,
We spent some time at the OSSG mid-cycle meet-up this week discussing 
rootwrap, looking at the existing code, and considering some of the mailing 
list discussions.

Summary of our discussions: 
https://github.com/hyakuhei/OSSG-Security-Practices/blob/master/ossg_rootwrap.md

The one-line summary is that we like the idea of a privileged daemon with 
higher-level interfaces to the commands being run. It has a number of 
advantages: it is easier to audit, enables better input sanitization, offers 
cleaner interfaces, and makes it easier to take advantage of Linux 
capabilities, SELinux, AppArmor, etc. The write-up has some more details.
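
To make the idea concrete, here is a minimal sketch -- not the OSSG proposal
itself, and not rootwrap code -- of a privileged helper that exposes one
narrow, validated operation instead of executing arbitrary command lines
handed to it. The socket path and the operation are made up for illustration:

    import json
    import re
    import socket
    import subprocess

    SOCKET_PATH = "/run/privhelper.sock"   # made-up path, illustration only

    def add_vlan(interface, vlan_id):
        # High-level operation: arguments are validated here, inside the
        # privileged process, before anything touches the system.
        if not re.match(r"^[a-z0-9_.-]{1,15}$", interface):
            raise ValueError("bad interface name")
        vlan_id = int(vlan_id)
        if not 1 <= vlan_id <= 4094:
            raise ValueError("bad VLAN id")
        subprocess.check_call(
            ["ip", "link", "add", "link", interface,
             "name", "%s.%d" % (interface, vlan_id),
             "type", "vlan", "id", str(vlan_id)])

    COMMANDS = {"add_vlan": add_vlan}   # the only operations callers may request

    def serve():
        server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        server.bind(SOCKET_PATH)
        server.listen(1)
        while True:
            conn, _ = server.accept()
            request = json.loads(conn.recv(65536).decode("utf-8"))
            try:
                COMMANDS[request["cmd"]](*request["args"])
                conn.sendall(b'{"ok": true}')
            except Exception as exc:
                conn.sendall(json.dumps(
                    {"ok": False, "error": str(exc)}).encode("utf-8"))
            conn.close()

Auditing such a daemon means reviewing a handful of named operations and
their validation, rather than every command line a caller might construct.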

--
Lucas Fisher
Senior Security Software Engineer | Nebula Inc.
lucas.fis...@nebula.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [CINDER] Exception request : Making Introducing micro_states for create workflow part of K-3

2015-02-20 Thread Vilobh Meshram
As discussed in the Cinder weekly meeting on 02/12, the deadline for K-3
(kilo-3: Cinder https://launchpad.net/cinder/+milestone/kilo-3) is Feb 28
(please correct me if I am wrong). I have a working prototype for the
micro-states feature https://review.openstack.org/#/c/124205 which has
already been out for review for quite some time now; if it gets the needed
attention, it should definitely be able to make it to K-3. I see a lot of
features that are planned for K-3 still in Started or Needs code review,
so I thought it would be reasonable to request an exception for this one too.

Please let me know your thoughts regarding the same.

Thanks,
Vilobh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] FWaaS - question about drivers

2015-02-20 Thread Sumit Naiksatam
Inline...

On Wed, Feb 18, 2015 at 7:48 PM, Vikram Choudhary
vikram.choudh...@huawei.com wrote:
 Hi,

 You can write your own driver. You can refer to below links for getting some 
 idea about the architecture.

 https://wiki.openstack.org/wiki/Neutron/ServiceTypeFramework

This is a legacy construct and should not be used.

 https://wiki.openstack.org/wiki/Neutron/LBaaS/Agent


The above pointer is to a LBaaS Agent which is very different from a
FWaaS driver (which was the original question in the email).

FWaaS does use pluggable drivers and the default is configured here:
https://github.com/openstack/neutron-fwaas/blob/master/etc/fwaas_driver.ini

For example for FWaaS driver implementation you can check here:
https://github.com/openstack/neutron-fwaas/tree/master/neutron_fwaas/services/firewall/drivers
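
For anyone wanting to plug in their own backend, a rough skeleton of what
such a driver looks like. The base class and the exact method signatures
below are assumptions from memory of the Kilo-era tree linked above; check
fwaas_base.py in that repo for the real interface before relying on this:

    # Assumed base class/path and signatures; verify against fwaas_base.py.
    from neutron_fwaas.services.firewall.drivers import fwaas_base

    class MyRouterFwaasDriver(fwaas_base.FwaasDriverBase):
        """Pushes firewall rules to a router backend that has no L3 agent."""

        def create_firewall(self, agent_mode, apply_list, firewall):
            # apply_list: routers/ports to program; firewall: dict carrying
            # a 'firewall_rule_list' with the allow/deny rules.
            self._push_rules(apply_list, firewall)

        def update_firewall(self, agent_mode, apply_list, firewall):
            self._push_rules(apply_list, firewall)

        def delete_firewall(self, agent_mode, apply_list, firewall):
            self._remove_rules(apply_list)

        def apply_default_policy(self, agent_mode, apply_list, firewall):
            self._remove_rules(apply_list)   # e.g. block-all until rules arrive

        def _push_rules(self, apply_list, firewall):
            raise NotImplementedError("talk to your router's API here")

        def _remove_rules(self, apply_list):
            raise NotImplementedError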

 Thanks
 Vikram

 -Original Message-
 From: Sławek Kapłoński [mailto: ]
 Sent: 19 February 2015 02:33
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Neutron] FWaaS - question about drivers

 Hello,

 I'm looking to use FWaaS service plugin with my own router solution (I'm not 
 using L3 agent at all). If I want to use FWaaS plugin also, should I write 
 own driver to it, or should I write own service plugin?
 I will be grateful for any links to some description about this FWaaS and 
 it's architecture :) Thx a lot for any help


 --
 Best regards
 Sławek Kapłoński
 sla...@kaplonski.pl

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] FWaaS - question about drivers

2015-02-20 Thread Sławek Kapłoński
Hello,

Thanks for the tips. I have one more question. You pointed me to the 
neutron-fwaas project, which looks to me like a different project than 
neutron. I saw the fwaas service plugin directly in neutron in Juno. So which 
version should I use: neutron-fwaas or the service plugin from neutron? Or 
maybe they are the same and I am misunderstanding something?

--
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

Dnia piątek, 20 lutego 2015 14:44:21 Sumit Naiksatam pisze:
 Inline...
 
 On Wed, Feb 18, 2015 at 7:48 PM, Vikram Choudhary
 
 vikram.choudh...@huawei.com wrote:
  Hi,
  
  You can write your own driver. You can refer to below links for getting
  some idea about the architecture.
  
  https://wiki.openstack.org/wiki/Neutron/ServiceTypeFramework
 
 This is a legacy construct and should not be used.
 
  https://wiki.openstack.org/wiki/Neutron/LBaaS/Agent
 
 The above pointer is to a LBaaS Agent which is very different from a
 FWaaS driver (which was the original question in the email).
 
 FWaaS does use pluggable drivers and the default is configured here:
 https://github.com/openstack/neutron-fwaas/blob/master/etc/fwaas_driver.ini
 
 For example for FWaaS driver implementation you can check here:
 https://github.com/openstack/neutron-fwaas/tree/master/neutron_fwaas/service
 s/firewall/drivers
  Thanks
  Vikram
  
  -Original Message-
  From: Sławek Kapłoński [mailto: ]
  Sent: 19 February 2015 02:33
  To: openstack-dev@lists.openstack.org
  Subject: [openstack-dev] [Neutron] FWaaS - question about drivers
  
  Hello,
  
  I'm looking to use FWaaS service plugin with my own router solution (I'm
  not using L3 agent at all). If I want to use FWaaS plugin also, should I
  write own driver to it, or should I write own service plugin? I will be
  grateful for any links to some description about this FWaaS and it's
  architecture :) Thx a lot for any help
  
  
  --
  Best regards
  Sławek Kapłoński
  sla...@kaplonski.pl
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] HTTP-based switch config/mgmt with API control

2015-02-20 Thread Adam Lawson
Howdy folks.

Does anyone know of any open source projects that facilitate managing
switches from within a browser (not CLI)? I know there are mechanism
drivers today that allow the ML2 plug-in to connect directly to a switch
via SSH and add/remove VLANs. I'm wondering if this has been implemented
somewhere where it can be done within a browser, or with some guts governed
by some kind of API.

We are wondering if we need to analyze how Neutron goes about this and
create something from scratch or if we can build upon the efforts of others.

Mahalo,
Adam


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] FWaaS - question about drivers

2015-02-20 Thread Doug Wiegley
Same project, shiny new repo.

doug


 On Feb 20, 2015, at 4:05 PM, Sławek Kapłoński sla...@kaplonski.pl wrote:
 
 Hello,
 
 Thx for tips. I have one more question. You point me fo neutron-fwaas project 
 which for me looks like different project then neutron. I saw fwaas service 
 plugin directly in neutron in Juno. So which version should I use: this 
 neutron-fwaas or service plugin from neutron? Or maybe it is the same or I 
 misunderstand something?
 
 --
 Pozdrawiam / Best regards
 Sławek Kapłoński
 sla...@kaplonski.pl
 
 Dnia piątek, 20 lutego 2015 14:44:21 Sumit Naiksatam pisze:
 Inline...
 
 On Wed, Feb 18, 2015 at 7:48 PM, Vikram Choudhary
 
 vikram.choudh...@huawei.com wrote:
 Hi,
 
 You can write your own driver. You can refer to below links for getting
 some idea about the architecture.
 
 https://wiki.openstack.org/wiki/Neutron/ServiceTypeFramework
 
 This is a legacy construct and should not be used.
 
 https://wiki.openstack.org/wiki/Neutron/LBaaS/Agent
 
 The above pointer is to a LBaaS Agent which is very different from a
 FWaaS driver (which was the original question in the email).
 
 FWaaS does use pluggable drivers and the default is configured here:
 https://github.com/openstack/neutron-fwaas/blob/master/etc/fwaas_driver.ini
 
 For example for FWaaS driver implementation you can check here:
 https://github.com/openstack/neutron-fwaas/tree/master/neutron_fwaas/service
 s/firewall/drivers
 Thanks
 Vikram
 
 -Original Message-
 From: Sławek Kapłoński [mailto: ]
 Sent: 19 February 2015 02:33
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Neutron] FWaaS - question about drivers
 
 Hello,
 
 I'm looking to use FWaaS service plugin with my own router solution (I'm
 not using L3 agent at all). If I want to use FWaaS plugin also, should I
 write own driver to it, or should I write own service plugin? I will be
 grateful for any links to some description about this FWaaS and it's
 architecture :) Thx a lot for any help
 
 
 --
 Best regards
 Sławek Kapłoński
 sla...@kaplonski.pl
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] FWaaS - question about drivers

2015-02-20 Thread Sumit Naiksatam
Hi Slawek,

During the Kilo development cycle the repositories of the three
services - FWaaS, LBaaS, VPNaaS, have split from the main neutron
repo. The pointers I posted were from this split repo. What you are
referring to is the Juno version of the code (during which all the
code was in one Neutron repo). FWaaS driver behavior is same in both
cases, so it really depends on which OpenStack release you want to
work with. Hope that clarifies.

Thanks,
~Sumit.

On Fri, Feb 20, 2015 at 3:05 PM, Sławek Kapłoński sla...@kaplonski.pl wrote:
 Hello,

 Thx for tips. I have one more question. You point me fo neutron-fwaas project
 which for me looks like different project then neutron. I saw fwaas service
 plugin directly in neutron in Juno. So which version should I use: this
 neutron-fwaas or service plugin from neutron? Or maybe it is the same or I
 misunderstand something?

 --
 Pozdrawiam / Best regards
 Sławek Kapłoński
 sla...@kaplonski.pl

 Dnia piątek, 20 lutego 2015 14:44:21 Sumit Naiksatam pisze:
 Inline...

 On Wed, Feb 18, 2015 at 7:48 PM, Vikram Choudhary

 vikram.choudh...@huawei.com wrote:
  Hi,
 
  You can write your own driver. You can refer to below links for getting
  some idea about the architecture.
 
  https://wiki.openstack.org/wiki/Neutron/ServiceTypeFramework

 This is a legacy construct and should not be used.

  https://wiki.openstack.org/wiki/Neutron/LBaaS/Agent

 The above pointer is to a LBaaS Agent which is very different from a
 FWaaS driver (which was the original question in the email).

 FWaaS does use pluggable drivers and the default is configured here:
 https://github.com/openstack/neutron-fwaas/blob/master/etc/fwaas_driver.ini

 For example for FWaaS driver implementation you can check here:
 https://github.com/openstack/neutron-fwaas/tree/master/neutron_fwaas/service
 s/firewall/drivers
  Thanks
  Vikram
 
  -Original Message-
  From: Sławek Kapłoński [mailto: ]
  Sent: 19 February 2015 02:33
  To: openstack-dev@lists.openstack.org
  Subject: [openstack-dev] [Neutron] FWaaS - question about drivers
 
  Hello,
 
  I'm looking to use FWaaS service plugin with my own router solution (I'm
  not using L3 agent at all). If I want to use FWaaS plugin also, should I
  write own driver to it, or should I write own service plugin? I will be
  grateful for any links to some description about this FWaaS and it's
  architecture :) Thx a lot for any help
 
 
  --
  Best regards
  Sławek Kapłoński
  sla...@kaplonski.pl
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New version of python-neutronclient release: 2.3.11

2015-02-20 Thread Matt Riedemann



On 2/20/2015 6:23 AM, Alan Pevec wrote:

https://review.openstack.org/#/c/157535/


Do you know where it ends ? We could set up Depends lines on those
requirements stable/* reviews and line them up so that they are ready to
merge when openstackclient is fixed in devstack.


Alternative workaround is https://review.openstack.org/157654 which is
blocked on swift-dsvm-functional issue fixed by
https://review.openstack.org/157670 which is blocked on neutronclient
i.e. we got a cyclic dep here which will require ninja merge to
resolve.

I suggest to start with ninja merging 157670 which looks the most innocent.

Once we get icehouse working again we can look at backporting venv
patch series to devstack icehouse.


Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



We disabled the swift functional job on stable/icehouse since it doesn't 
have the tox target and shouldn't have been running on stable/icehouse:


https://review.openstack.org/#/c/157825/

That freed up Adam's devstack workaround change:

https://review.openstack.org/#/c/157654/

So we can now focus on the neutronclient cap:

https://review.openstack.org/#/c/157606/

That hit a problem with an uncapped pyghmi version in the gate, so we're 
now capping that library as well.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] FWaaS - question about drivers

2015-02-20 Thread Sławek Kapłoński
Hello,

Thanks guys. Now it is clear to me :)
One more question. I saw that in this service plugin there is a hardcoded 
quota of 1 firewall per tenant. Do you know why it is limited this way? Is 
there any important reason for that?
And a second thing. As there is only one firewall per tenant, all rules from 
it will be applied on all routers (L3 agents) of this tenant and for all 
tenant networks, am I right? If yes, how are the firewall rules set when, for 
example, a new router is created? Does the L3 agent ask for the rules via RPC, 
or does FWaaS send such a notification to the L3 agent?
Sorry if my questions are silly, but I haven't done anything with these 
service plugins yet :)

--
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

Dnia piątek, 20 lutego 2015 16:27:33 Doug Wiegley pisze:
 Same project, shiny new repo.
 
 doug
 
  On Feb 20, 2015, at 4:05 PM, Sławek Kapłoński sla...@kaplonski.pl wrote:
  
  Hello,
  
  Thx for tips. I have one more question. You point me fo neutron-fwaas
  project which for me looks like different project then neutron. I saw
  fwaas service plugin directly in neutron in Juno. So which version
  should I use: this neutron-fwaas or service plugin from neutron? Or maybe
  it is the same or I misunderstand something?
  
  --
  Pozdrawiam / Best regards
  Sławek Kapłoński
  sla...@kaplonski.pl
  
  Dnia piątek, 20 lutego 2015 14:44:21 Sumit Naiksatam pisze:
  Inline...
  
  On Wed, Feb 18, 2015 at 7:48 PM, Vikram Choudhary
  
  vikram.choudh...@huawei.com wrote:
  Hi,
  
  You can write your own driver. You can refer to below links for getting
  some idea about the architecture.
  
  https://wiki.openstack.org/wiki/Neutron/ServiceTypeFramework
  
  This is a legacy construct and should not be used.
  
  https://wiki.openstack.org/wiki/Neutron/LBaaS/Agent
  
  The above pointer is to a LBaaS Agent which is very different from a
  FWaaS driver (which was the original question in the email).
  
  FWaaS does use pluggable drivers and the default is configured here:
  https://github.com/openstack/neutron-fwaas/blob/master/etc/fwaas_driver.i
  ni
  
  For example for FWaaS driver implementation you can check here:
  https://github.com/openstack/neutron-fwaas/tree/master/neutron_fwaas/serv
  ice s/firewall/drivers
  
  Thanks
  Vikram
  
  -Original Message-
  From: Sławek Kapłoński [mailto: ]
  Sent: 19 February 2015 02:33
  To: openstack-dev@lists.openstack.org
  Subject: [openstack-dev] [Neutron] FWaaS - question about drivers
  
  Hello,
  
  I'm looking to use FWaaS service plugin with my own router solution (I'm
  not using L3 agent at all). If I want to use FWaaS plugin also, should I
  write own driver to it, or should I write own service plugin? I will be
  grateful for any links to some description about this FWaaS and it's
  architecture :) Thx a lot for any help
  
  
  --
  Best regards
  Sławek Kapłoński
  sla...@kaplonski.pl
  
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] FWaaS - question about drivers

2015-02-20 Thread Sumit Naiksatam
Inline...

On Fri, Feb 20, 2015 at 3:38 PM, Sławek Kapłoński sla...@kaplonski.pl wrote:
 Hello,

 Thx guys. Now it is clear for me :)
 One more question. I saw that in this service plugin there is hardcoded quota
 1 firewall per tenant. Do you know why it is so limited? Is there any
 important reason for that?

This is a current limitation of the reference implementation, since we
associate the FWaaS firewall resource with all the neutron routers.
Note that this is not a limitation of the FWaaS model, hence, if your
backend can support it, you can override this limitation.

 And second thing. As there is only one firewall per tenant so all rules from
 it will be applied on all routers (L3 agents) from this tenant and for all
 tenant networks, am I right? If yes, how it is solved to set firewall rules

In general, this limitation is going away in the Kilo release. See the
following patch under review which removes the limitation of one
router per tenant:
https://review.openstack.org/#/c/152697/

 when for example new router is created? L3 agent is asking about rules via rpc
 or FwaaS is sending such notification to L3 agent?

In the current implementation this is automatically reconciled.
Whenever a new router comes up, the FWaaS agent pulls the rules, and
applies them on the interfaces of the new router.

 Sorry if my questions are silly but I didn't do anything with this service
 plugins yet :)

 --
 Pozdrawiam / Best regards
 Sławek Kapłoński
 sla...@kaplonski.pl

 Dnia piątek, 20 lutego 2015 16:27:33 Doug Wiegley pisze:
 Same project, shiny new repo.

 doug

  On Feb 20, 2015, at 4:05 PM, Sławek Kapłoński sla...@kaplonski.pl wrote:
 
  Hello,
 
  Thx for tips. I have one more question. You point me fo neutron-fwaas
  project which for me looks like different project then neutron. I saw
  fwaas service plugin directly in neutron in Juno. So which version
  should I use: this neutron-fwaas or service plugin from neutron? Or maybe
  it is the same or I misunderstand something?
 
  --
  Pozdrawiam / Best regards
  Sławek Kapłoński
  sla...@kaplonski.pl
 
  Dnia piątek, 20 lutego 2015 14:44:21 Sumit Naiksatam pisze:
  Inline...
 
  On Wed, Feb 18, 2015 at 7:48 PM, Vikram Choudhary
 
  vikram.choudh...@huawei.com wrote:
  Hi,
 
  You can write your own driver. You can refer to below links for getting
  some idea about the architecture.
 
  https://wiki.openstack.org/wiki/Neutron/ServiceTypeFramework
 
  This is a legacy construct and should not be used.
 
  https://wiki.openstack.org/wiki/Neutron/LBaaS/Agent
 
  The above pointer is to a LBaaS Agent which is very different from a
  FWaaS driver (which was the original question in the email).
 
  FWaaS does use pluggable drivers and the default is configured here:
  https://github.com/openstack/neutron-fwaas/blob/master/etc/fwaas_driver.i
  ni
 
  For example for FWaaS driver implementation you can check here:
  https://github.com/openstack/neutron-fwaas/tree/master/neutron_fwaas/serv
  ice s/firewall/drivers
 
  Thanks
  Vikram
 
  -Original Message-
  From: Sławek Kapłoński [mailto: ]
  Sent: 19 February 2015 02:33
  To: openstack-dev@lists.openstack.org
  Subject: [openstack-dev] [Neutron] FWaaS - question about drivers
 
  Hello,
 
  I'm looking to use FWaaS service plugin with my own router solution (I'm
  not using L3 agent at all). If I want to use FWaaS plugin also, should I
  write own driver to it, or should I write own service plugin? I will be
  grateful for any links to some description about this FWaaS and it's
  architecture :) Thx a lot for any help
 
 
  --
  Best regards
  Sławek Kapłoński
  sla...@kaplonski.pl
 
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 

Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-20 Thread Deepak Shetty
On Feb 21, 2015 12:20 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-02-20 16:29:31 +0100 (+0100), Deepak Shetty wrote:
  Couldn't find anything strong in the logs to back the reason for
  OOM. At the time OOM happens, mysqld and java processes have the
  most RAM hence OOM selects mysqld (4.7G) to be killed.
 [...]

 Today I reran it after you rolled back some additional tests, and it
 runs for about 117 minutes before the OOM killer shoots nova-compute
 in the head. At your request I've added /var/log/glusterfs into the
 tarball this time: http://fungi.yuggoth.org/tmp/logs2.tar

Thanks Jeremy, can we get SSH access to one of these environments to debug?

Thanks
Deepak

 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-20 Thread Deepak Shetty
On Feb 21, 2015 12:26 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Fri, Feb 20, 2015 at 7:29 AM, Deepak Shetty dpkshe...@gmail.com
wrote:

 Hi Jeremy,
   Couldn't find anything strong in the logs to back the reason for OOM.
 At the time OOM happens, mysqld and java processes have the most RAM
hence OOM selects mysqld (4.7G) to be killed.

 From a glusterfs backend perspective, i haven't found anything
suspicious, and we don't have the logs of glusterfs (which is typically in
/var/log/glusterfs) so can't delve inside glusterfs too much :(

 BharatK (in CC) also tried to re-create the issue in local VM setup, but
it hasn't yet!

 Having said that, we do know that we started seeing this issue after we
enabled the nova-assisted-snapshot tests (by changing nova' s policy.json
to enable non-admin to create hyp-assisted snaps). We think that enabling
online snaps might have added to the number of tests and memory load, and
that's the only clue we have as of now!


 It looks like OOM killer hit while qemu was busy and during
a ServerRescueTest. Maybe libvirt logs would be useful as well?

Thanks for the data point, will look at this test to understand more what's
happening


 And I don't see any tempest tests calling assisted-volume-snapshots

Maybe it still hasn't reached it yet.

Thanks
Deepak


 Also this looks odd: Feb 19 18:47:16
devstack-centos7-rax-iad-916633.slave.openstack.org libvirtd[3753]: missing
__com.redhat_reason in disk io error event



 So :

   1) BharatK  has merged the patch (
https://review.openstack.org/#/c/157707/ ) to revert the policy.json in the
glusterfs job. So no more nova-assisted-snap tests.

   2) We also are increasing the timeout of our job in patch (
https://review.openstack.org/#/c/157835/1 ) so that we can get a full run
without timeouts to do a good analysis of the logs (logs are not posted if
the job times out)

 Can you please re-enable our job, so that we can confirm that disabling
online snap TCs is helping the issue, which if it does, can help us narrow
down the issue.

 We also plan to monitor and debug over the weekend, hence having the job
enabled can help us a lot.

 thanx,
 deepak


 On Thu, Feb 19, 2015 at 10:37 PM, Jeremy Stanley fu...@yuggoth.org
wrote:

 On 2015-02-19 17:03:49 +0100 (+0100), Deepak Shetty wrote:
 [...]
  For some reason we are seeing the centos7 glusterfs CI job getting
  aborted/ killed either by Java exception or the build getting
  aborted due to timeout.
 [...]
  Hoping to root cause this soon and get the cinder-glusterfs CI job
  back online soon.

 I manually reran the same commands this job runs on an identical
 virtual machine and was able to reproduce some substantial
 weirdness.

 I temporarily lost remote access to the VM around 108 minutes into
 running the job (~17:50 in the logs) and the out of band console
 also became unresponsive to carriage returns. The machine's IP
 address still responded to ICMP ping, but attempts to open new TCP
 sockets to the SSH service never got a protocol version banner back.
 After about 10 minutes of that I went out to lunch but left
 everything untouched. To my excitement it was up and responding
 again when I returned.

 It appears from the logs that it runs well past the 120-minute mark
 where devstack-gate tries to kill the gate hook for its configured
 timeout. Somewhere around 165 minutes in (18:47) you can see the
 kernel out-of-memory killer starts to kick in and kill httpd and
 mysqld processes according to the syslog. Hopefully this is enough
 additional detail to get you a start at finding the root cause so
 that we can reenable your job. Let me know if there's anything else
 you need for this.

 [1] http://fungi.yuggoth.org/tmp/logs.tar
 --
 Jeremy Stanley


__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][cisco][apic] entry point for APIC agent

2015-02-20 Thread Ivar Lazzaro
Hi Ihar,

That was missed for the Juno release, I'll post a patch on master and then
backport it to stable/juno.

Thanks for catching that,
Ivar.

On Fri Feb 20 2015 at 11:06:11 AM Ihar Hrachyshka ihrac...@redhat.com
wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi,

 does anyone know why we don't maintain an entry point for APIC agent
 in setup.cfg? The code in [1] looks like there is a main() function
 for the agent, but for some reason it's not exposed to any
 console_script during installation of neutron.

 Is there any reason not to do it?

 [1]:
 http://git.openstack.org/cgit/openstack/neutron/tree/neutron
 /plugins/ml2/drivers/cisco/apic/apic_topology.py#n320

 /Ihar
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1

 iQEcBAEBAgAGBQJU54WDAAoJEC5aWaUY1u57m/oH/1sUvwf9v/sKYbfZXU23h0I4
 GJpPfI70l0NktkFIObO2tikshaSKygC0wv7zk6HGEiE2b0ATC5fRv0VaNtk7WMKu
 PqCWK6PV6IvuphZbt4f6A7mJ7JpSn06SQe0TEPABx9DUhybjXJ6iP0hSSb/Te+2M
 MP17IlepgHNasegiCD1VsWKAy3ZmnC5GwM+H6qKIe2pmn7NjBqXh8uRxbv/IzGjJ
 3YxHhS35xHd31neR9B7V16peXy1lTjwFkyw8XlJNufAmOhCVsN0uIDAhwv3XJRHF
 +9MOgpB0fpVqxbEWrflW1Lmy06Hr/scq/t7bQt4Lntu3A+PQEQJ0kx4aHElreyw=
 =K7In
 -END PGP SIGNATURE-

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-20 Thread Victor Stinner
Hi,

Davanum Srinivas wrote:
 +1 to fix Oslo's service module any ways, irrespective of this bug.

By the way, the Service class is a blocker for the implementation of the 
asyncio and threads specs:

   https://review.openstack.org/#/c/153298/
   https://review.openstack.org/#/c/156711/

We may allow executing a function before fork() to explicitly share some 
things with all child processes. But most things (instantiating the 
application, opening DB connections, etc.) should be done after the fork.
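
A minimal sketch of that split, with made-up helper names: shared setup
happens once in the parent, per-process resources are created only after
fork(), inside each child:

    import os

    def prepare_shared_state():
        # Done once, before fork(): things that are safe (or meant) to be
        # shared, e.g. parsed config or a listening socket children inherit.
        return {"workers": 2}

    def per_child_setup(shared):
        # Done after fork(), in each child: DB connections, the application
        # object, anything that must not be shared across processes.
        return object()   # stand-in for the real application

    def launch(shared):
        pids = []
        for _ in range(shared["workers"]):
            pid = os.fork()
            if pid == 0:
                app = per_child_setup(shared)
                # ... run the worker loop with `app` ...
                os._exit(0)
            pids.append(pid)
        return pids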

Well, it looks like everyone agrees. We just need someone to implement the idea 
:-)

We may write a new class instead of modifying the existing class, so as not to 
break applications. Doug Hellmann even once proposed having an abstraction of 
the concurrency model (eventlet, threads, asyncio). I don't know if it's worth it.

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] The strange case of osapi_compute_unique_server_name_scope

2015-02-20 Thread Johannes Erdfelt
On Thu, Feb 19, 2015, Matthew Booth mbo...@redhat.com wrote:
 Assuming this configurability is required, is there any way we can
 instead use it to control a unique constraint in the db at service
 startup? This would be something akin to a db migration. How do we
 manage those?

Ignoring if this particular feature is useful or not, this is possible.

With sqlalchemy-migrate, there could be code to check the config
option at startup and add/remove the unique constraint. This would leave
some schema management out of the existing scripts, which would be
mildly ugly.
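
A rough sketch of what that could look like, assuming sqlalchemy-migrate's
UniqueConstraint changeset helper and an illustrative table/constraint name
(a real version would also need to check whether the constraint already
exists before creating or dropping it):

    from migrate import UniqueConstraint
    from sqlalchemy import MetaData, Table

    def sync_unique_name_constraint(engine, unique_scope):
        # Called once at service startup, after reading the config option.
        meta = MetaData(bind=engine)
        instances = Table('instances', meta, autoload=True)
        constraint = UniqueConstraint(
            'project_id', 'hostname', table=instances,
            name='uniq_instances_project_hostname')   # illustrative name
        if unique_scope:
            constraint.create()   # enforce uniqueness for the configured scope
        else:
            constraint.drop()     # scope unset: no uniqueness enforced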

With my online schema changes patch, this is all driven by the model.
Similar code could add/remove the unique constraint to the model. At
startup, the schema could be compared against the model to ensure
everything matches.

Adding or removing a unique constraint at an arbitrary time leaves open some
user experience problems: existing data that violates the constraint will
prevent it from being created.

Presumably a tool could help operators deal with that.

All that said, it's kind of messy and nontrivial work, so I'd avoid
trying to support a feature like this if we really don't need to :)

JE


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-20 Thread Deepak Shetty
Hi Jeremy,
  Couldn't find anything strong in the logs to back the reason for OOM.
At the time OOM happens, mysqld and java processes have the most RAM hence
OOM selects mysqld (4.7G) to be killed.

From a glusterfs backend perspective, i haven't found anything suspicious,
and we don't have the logs of glusterfs (which is typically in
/var/log/glusterfs) so can't delve inside glusterfs too much :(

BharatK (in CC) also tried to re-create the issue in a local VM setup, but it
hasn't reproduced yet!

Having said that, *we do know* that we started seeing this issue after we
enabled the nova-assisted-snapshot tests (by changing nova's policy.json
to enable non-admin to create hyp-assisted snaps). We think that enabling
online snaps might have added to the number of tests and memory load, and
that's the only clue we have as of now!

So :

  1) BharatK  has merged the patch (
https://review.openstack.org/#/c/157707/ ) to revert the policy.json in the
glusterfs job. So no more nova-assisted-snap tests.

  2) We also are increasing the timeout of our job in patch (
https://review.openstack.org/#/c/157835/1 ) so that we can get a full run
without timeouts to do a good analysis of the logs (logs are not posted if
the job times out)

Can you please re-enable our job, so that we can confirm whether disabling
the online snapshot test cases helps the issue; if it does, that will help us
narrow down the root cause.

We also plan to monitor and debug over the weekend, hence having the job
enabled can help us a lot.

thanx,
deepak


On Thu, Feb 19, 2015 at 10:37 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-02-19 17:03:49 +0100 (+0100), Deepak Shetty wrote:
 [...]
  For some reason we are seeing the centos7 glusterfs CI job getting
  aborted/ killed either by Java exception or the build getting
  aborted due to timeout.
 [...]
  Hoping to root cause this soon and get the cinder-glusterfs CI job
  back online soon.

 I manually reran the same commands this job runs on an identical
 virtual machine and was able to reproduce some substantial
 weirdness.

 I temporarily lost remote access to the VM around 108 minutes into
 running the job (~17:50 in the logs) and the out of band console
 also became unresponsive to carriage returns. The machine's IP
 address still responded to ICMP ping, but attempts to open new TCP
 sockets to the SSH service never got a protocol version banner back.
 After about 10 minutes of that I went out to lunch but left
 everything untouched. To my excitement it was up and responding
 again when I returned.

 It appears from the logs that it runs well past the 120-minute mark
 where devstack-gate tries to kill the gate hook for its configured
 timeout. Somewhere around 165 minutes in (18:47) you can see the
 kernel out-of-memory killer starts to kick in and kill httpd and
 mysqld processes according to the syslog. Hopefully this is enough
 additional detail to get you a start at finding the root cause so
 that we can reenable your job. Let me know if there's anything else
 you need for this.

 [1] http://fungi.yuggoth.org/tmp/logs.tar
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New version of python-neutronclient release: 2.3.11

2015-02-20 Thread Doug Hellmann


On Fri, Feb 20, 2015, at 07:01 AM, Alan Pevec wrote:
  (I believe the version cap for icehouse 2.4 is expected to a policy like 
  this).
 
 Yes, that assumed Semantic Versioning (semver.org) which 2.3.11 broke.

To elaborate, the issue here is that the dependencies between 2.3.10 and
2.3.11 changed, and that should have required at least a minor version
change.

The pbr docs include details about our use of SemVer [1], taken from the
upstream version and modified a bit based on what we and our distro
partners have more or less agreed on. I recommend all library
maintainers familiarize themselves with those policies because we're
building a lot of tooling in the CI infrastructure and elsewhere that
assumes that all libraries are following those rules. The section
relevant to this particular case is in the FAQ [2].

As Akihito points out elsewhere in the thread, it's a good idea to
increment the minor version for the first release at the beginning of
each cycle. We're moving to cap the requirements in stable branches
based on the SemVer rules, using the next minor version as the cap.
Incrementing the minor version for the first release in a cycle 
automatically puts the new release outside of the range of versions used
by the stable branches, and leaves the patch version series open for
actual bug fixes that need to be back-ported in clients and other
libraries. Version numbers are free, and we can easily get more of them.
:-)
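
As a concrete toy example of why that works (the versions are hypothetical;
this uses the `packaging` library purely for illustration):

    from packaging.specifiers import SpecifierSet

    stable_cap = SpecifierSet(">=2.3.10,<2.4.0")  # what a capped stable branch allows

    print("2.3.12" in stable_cap)  # True: a patch release can still be back-ported
    print("2.4.0" in stable_cap)   # False: first release of the new cycle stays out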

That SemVer policy document is not exactly a page turner, and does get a
bit complicated in some of the interpretations of edge cases. If you are
preparing a library release and want someone to double-check your
proposal for the next version number, ping me on IRC (dhellmann -- I'm
usually in openstack-dev and openstack-oslo, among other channels). I
will be more than happy to offer any advice I can.

Doug

[1] http://docs.openstack.org/developer/pbr/semver.html
[2]
http://docs.openstack.org/developer/pbr/semver.html#what-should-i-do-if-i-update-my-own-dependencies-without-changing-the-public-api
 
 
 Cheers,
 Alan
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Deprecation of Eventlet deployment in Kilo (Removal for M-release)

2015-02-20 Thread Chmouel Boudjnah
Morgan Fainberg morgan.fainb...@gmail.com writes:

 The Keystone development team is planning to deprecate deployment of Keystone
 under Eventlet during the Kilo cycle. Support for deploying under eventlet 
 will
 be dropped as of the “M”-release of OpenStack.

Great! Glad there is one project that is starting to make deployment under
Apache the default advised way; I look forward to others following
(if it makes sense).

Chmouel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] Devstack Install broken

2015-02-20 Thread Eduard Matei
Hi,

Seems to be fixed now. Your comments gave me an idea and I included a cd
/opt/devstack in the command for stack.sh; now it looks like this:
HOME=/opt/stack && cd /opt/devstack && sudo -u stack /opt/devstack/stack.sh

Just as a side note, it was working yesterday without the cd /opt/devstack,
and it had been working for at least a month, so I'm not sure what changed,
but it was on the devstack side.

Thanks,
Eduard

On Fri, Feb 20, 2015 at 3:48 PM, M Ranga Swami Reddy swamire...@gmail.com
wrote:

 I too got this problem very frequently.



 On Fri, Feb 20, 2015 at 4:32 PM, Eduard Matei
 eduard.ma...@cloudfounders.com wrote:
 
  Hi,
 
  I'm trying to install devstack and it keeps failing with:
 
  2015-02-20 07:26:12.693 | ++ is_fedora
  2015-02-20 07:26:12.693 | ++ [[ -z Ubuntu ]]
  2015-02-20 07:26:12.693 | ++ '[' Ubuntu = Fedora ']'
  2015-02-20 07:26:12.693 | ++ '[' Ubuntu = 'Red Hat' ']'
  2015-02-20 07:26:12.693 | ++ '[' Ubuntu = CentOS ']'
  2015-02-20 07:26:12.693 | ++ '[' Ubuntu = OracleServer ']'
  2015-02-20 07:26:12.693 | + [[ -d /var/cache/pip ]]
  2015-02-20 07:26:12.693 | + [[ ! -d /opt/stack/.wheelhouse ]]
  2015-02-20 07:26:12.693 | + source tools/build_wheels.sh
  2015-02-20 07:26:12.693 | /opt/devstack/stack.sh: line 688:
  tools/build_wheels.sh: No such file or directory
  2015-02-20 07:26:12.693 | ++ exit_trap
  2015-02-20 07:26:12.693 | ++ local r=1
  2015-02-20 07:26:12.693 | +++ jobs -p
  2015-02-20 07:26:12.693 | ++ jobs=
  2015-02-20 07:26:12.693 | ++ [[ -n '' ]]
  2015-02-20 07:26:12.694 | ++ kill_spinner
  2015-02-20 07:26:12.694 | ++ '[' '!' -z '' ']'
  2015-02-20 07:26:12.694 | ++ [[ 1 -ne 0 ]]
  2015-02-20 07:26:12.694 | ++ echo 'Error on exit'
  2015-02-20 07:26:12.694 | Error on exit
  2015-02-20 07:26:12.694 | ++ [[ -z /opt/stack/logs ]]
  2015-02-20 07:26:12.694 | ++ /opt/devstack/tools/worlddump.py -d
  /opt/stack/logs
  2015-02-20 07:26:12.767 | ++ exit 1
 
 
  Any ideas how to get around this?
 
 
  Eduard
 
  --
 
  Eduard Biceri Matei, Senior Software Developer
  www.cloudfounders.com
   | eduard.ma...@cloudfounders.com
 
 
 
  CloudFounders, The Private Cloud Software Company
 
  Disclaimer:
  This email and any files transmitted with it are confidential and
 intended
  solely for the use of the individual or entity to whom they are
 addressed.
  If you are not the named addressee or an employee or agent responsible
 for
  delivering this message to the named addressee, you are hereby notified
 that
  you are not authorized to read, print, retain, copy or disseminate this
  message or any part of it. If you have received this email in error we
  request you to notify us by reply e-mail and to delete all electronic
 files
  of the message. If you are not the intended recipient you are notified
 that
  disclosing, copying, distributing or taking any action in reliance on the
  contents of this information is strictly prohibited.
  E-mail transmission cannot be guaranteed to be secure or error free as
  information could be intercepted, corrupted, lost, destroyed, arrive
 late or
  incomplete, or contain viruses. The sender therefore does not accept
  liability for any errors or omissions in the content of this message, and
  shall have no liability for any loss or damage suffered by the user,
 which
  arise as a result of e-mail transmission.
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*


Re: [openstack-dev] [ovs-dev] [PATCH 8/8] [RFC] [neutron] ovn: Start work on design ocumentation.

2015-02-20 Thread Ben Pfaff
On Fri, Feb 20, 2015 at 12:45:46PM +0100, Miguel Ángel Ajo wrote:
 On Thursday, 19 de February de 2015 at 23:15, Kyle Mestery wrote:  
  On Thu, Feb 19, 2015 at 3:55 PM, Ben Pfaff b...@nicira.com 
  (mailto:b...@nicira.com) wrote:
   My initial reaction is that we can implement security groups as
   another action in the ACL table that is similar to allow but in
   addition permits reciprocal inbound traffic.  Does that sound
   sufficient to you?
 Yes, having fine-grained allows (matching on protocols, ports, and
 remote IPs) would satisfy the neutron use case.
 
 Also we use connection tracking to allow reciprocal inbound traffic
 via ESTABLISHED/RELATED, any equivalent solution would do.
 
 For reference, our SG implementation, currently is able to match on
 combinations of:
 
 * direction: ingress/egress
 * protocol: icmp/tcp/udp/raw number
 * port_range:  min-max   (it’s always dst)
 * L2 packet ethertype: IPv4, IPv6, etc...
 * remote_ip_prefix: as a CIDR, or
 * remote_group_id (to reference all other IPs in a certain group)
 
 All of them assume connection tracking so known connection packets will
 go the other way around.

OK.  All of those should work OK.  (I don't know for sure whether we'll
have explicit groups; initially, probably not.)

   Is the exponential explosion due to cross-producting, that is, because
   you have, say, n1 source addresses and n2 destination addresses and so
   you need n1*n2 flows to specify all the combinations?  We aim to solve
   that in OVN by giving the CMS direct support for more sophisticated
   matching rules, so that it can say something like:

   ip.src in {a, b, c, ...} && ip.dst in {d, e, f, ...} &&
   (tcp.src in {80, 443, 8080} || tcp.dst in {80, 443, 8080})
 
 That sounds good and very flexible.

   and let OVN implement it in OVS via the conjunctive match feature
   recently added, which is like a set membership match but more
   powerful.  
 Hmm, where can I find examples about that feature, sounds interesting.

If you look at ovs-ofctl(8) in a development version of OVS, such as
http://benpfaff.org/~blp/dist-docs/ovs-ofctl.8.pdf
search for conjunction, which explains the implementation.  (This
isn't the form that Neutron would use with OVN; that is the Boolean
expression syntax above.)
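
For anyone else curious, a rough illustration -- written from memory, so the
exact field and action syntax may be off; the ovs-ofctl(8) text mentioned
above is the authoritative reference -- of how a 2x2 set-membership match
decomposes into conjunction flows instead of a cross product:

    nw_src=10.0.0.1     actions=conjunction(1, 1/2)
    nw_src=10.0.0.2     actions=conjunction(1, 1/2)
    tcp,tp_dst=80       actions=conjunction(1, 2/2)
    tcp,tp_dst=443      actions=conjunction(1, 2/2)
    conj_id=1           actions=<whatever "allow" maps to>
    # n1 + n2 + 1 flows instead of n1 * n2; the savings grow with set size.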

   It might still be nice to support lists of IPs (or
   whatever), since these lists could still recur in a number of
   circumstances, but my guess is that this will help a lot even without
   that.

 As far as I understand, given the way megaflows resolve rules via hashes,
 even if we had lots of rules with different IP addresses that would be very
 fast, probably as fast as or faster than our current ipset solution.
 
 The only caveat would be having to update lots of flow rules when a port goes
 in or out of a security group, since you have to go and clear/add the rules 
 to each
 single port on the same security group (as long as they have 1 rule 
 referencing the sg).

That sounds like another good argument for allowing explicit groups.  I
have a design in mind for that but I doubt it's the first thing to
implement.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-20 Thread Doug Hellmann


On Thu, Feb 19, 2015, at 09:45 PM, Mike Bayer wrote:
 
 
 Doug Hellmann d...@doughellmann.com wrote:
 
  5) Allow this sort of connection sharing to continue for a deprecation
  period with appropriate logging, then make it a hard failure.
  
  This would provide services time to find and fix any sharing problems
  they might have, but would delay the timeframe for a final fix.
  
  6-ish) Fix oslo-incubator service.py to close all file descriptors after
  forking.
  
  
  I'm not sure why 6 is slower, can someone elaborate on that?
 
 So, option “A”, they call engine.dispose() the moment they’re in a fork,
 the activity upon requesting a connection from the pool is: look in pool,
 no connections present, create a connection and return it.

This feels like something we could do in the service manager base class,
maybe by adding a post fork hook or something.
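
For illustration, the bare-bones version of what such a hook would do in the
child. Only engine.dispose() and os.fork() are real APIs here; the hook list
itself and the connection URL are made up:

    import os

    import sqlalchemy

    engine = sqlalchemy.create_engine('sqlite://')   # placeholder URL

    postfork_callbacks = [engine.dispose]   # hypothetical hook list on the service

    pid = os.fork()
    if pid == 0:
        # child: drop every pooled connection inherited from the parent so
        # the pool lazily opens fresh, child-owned sockets on first use
        for cb in postfork_callbacks:
            cb()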

Josh's patch to forcibly close all file descriptors may be something
else we want, but if we can reset open connections cleanly when we
know how, that feels better than relying on detecting broken sockets.

 
 Option “5”, the way the patch is right now, is to auto-invalidate on
 detection of a new fork; the activity upon requesting a connection from
 the pool is: look in pool, connection present, check that os.getpid()
 matches what we’ve associated with the connection record, if not, we
 raise an exception indicating “invalid”, this is immediately caught, sets
 the connection record as “invalid”, the connection record then
 immediately disposes of that file descriptor, makes a new connection and
 returns that.
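
 (For reference, that pid check is essentially the recipe from SQLAlchemy's
 pooling documentation for use with os.fork(); a standalone sketch, with a
 placeholder engine URL:

    import os

    import sqlalchemy
    from sqlalchemy import event, exc

    engine = sqlalchemy.create_engine('sqlite://')   # placeholder URL

    @event.listens_for(engine, 'connect')
    def remember_owner_pid(dbapi_conn, connection_record):
        connection_record.info['pid'] = os.getpid()

    @event.listens_for(engine, 'checkout')
    def reject_foreign_connections(dbapi_conn, connection_record, connection_proxy):
        if connection_record.info['pid'] != os.getpid():
            # created in another process: invalidate so the pool reconnects
            connection_record.invalidate()
            raise exc.DisconnectionError(
                'connection belongs to pid %s, not %s' %
                (connection_record.info['pid'], os.getpid()))
 )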
 
 Option “6”, the new fork starts, the activity upon requesting a
 connection from the pool is: look in pool, connection present, perform
 the oslo.db “ping” event, ping event emits “SELECT 1” to the MySQLdb
 driver, driver attempts to emit this statement on the socket, socket
 communication fails, MySQLdb converts to an exception, exception is
 raised, SQLAlchemy catches the exception, sends it to a parser to
 determine the nature of the exception, we see that it’s a “disconnect”
 exception, we set the “invalidate” flag on the exception, we re-raise,
 oslo.db’s exc_filters then catch the exception, more string parsers get
 involved, we determine we need to raise an oslo.db.DBDisconnect
 exception, we raise that, the “SELECT 1” ping handler catches that, we
 then emit “SELECT 1” again so that it reconnects, we then hit the
 connection record that’s in “invalid” state so it knows to reconnect, it
 reconnects and the “SELECT 1” continues on the new connection and we
 start up.
 
 So essentially option “5” (the way the gerrit is right now) has a subset
 of the components of “6”; “6” has the additional steps of: emit a doomed
 statement on the closed socket, then when it fails raise / catch / parse
 / reraise / catch / parse / reraise that exception.   Option “5” just
 has, check the pid, raise / catch an exception.
 
 IMO the two options are: “5”, check the pid and recover or “3” make it a
 hard failure.

And I don't think we want the database library doing anything with this
case at all. Recovery code is tricky, and often prevents valid use cases
(perhaps the parent *meant* for the child to reuse the open connection
and isn't going to continue using it so there won't be a conflict).

The bug here is in the way the application, using Oslo's service module,
is forking. We should fix the service module to make it possible to fork
correctly, and to have that be the default behavior. The db library
shouldn't be concerned with whether or not it's in a forked process --
that's not its job.

Doug

  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Doug Hellmann


On Fri, Feb 20, 2015, at 06:06 AM, Sean Dague wrote:
 On 02/20/2015 12:26 AM, Adam Gandelman wrote:
  It's more than just the naming.  In the original proposal,
  requirements.txt is the compiled list of all pinned deps (direct and
  transitive), while requirements.in reflects
  what people will actually use.  Whatever is in requirements.txt affects
  the egg's requires.txt. Instead, we can keep requirements.txt unchanged
  and have it still be the canonical list of dependencies, while
  requirements.out/requirements.gate/requirements.whatever is an upstream
  utility we produce and use to keep things sane on our slaves.
  
  Maybe all we need is:
  
  * update the existing post-merge job on the requirements repo to produce
  a requirements.txt (as it does now) as well the compiled version.  
  
  * modify devstack in some way with a toggle to have it process
  dependencies from the compiled version when necessary
  
  I'm not sure how the second bit jives with the existing devstack
  installation code, specifically with the libraries from git-or-master
  but we can probably add something to warm the system with dependencies
  from the compiled version prior to calling pip/setup.py/etc
 
 It sounds like you are suggesting we take the tool we use to ensure that
 all of OpenStack is installable together in a unified way, and change
 it's installation so that it doesn't do that any more.
 
 Which I'm fine with.
 
 But if we are doing that we should just whole hog give up on the idea
 that OpenStack can be run all together in a single environment, and just
 double down on the devstack venv work instead.

I don't disagree with your conclusion, but that's not how I read what he
proposed. :-)

Joe wanted requirements.txt to be the pinned requirements computed from
the list of all global requirements that work together. Pinning to a
single version works in our gate, but makes installing everything else
together *outside* of the gate harder because if the projects don't all
sync all requirements changes pretty much at the same time they won't be
compatible.

Adam suggested leaving requirements.txt alone and creating a different
list of pinned requirements that is *only* used in our gate. That way we
still get the pinning for our gate, and the values are computed from the
requirements used in the projects but they aren't propagated back out to
the projects in a way that breaks their PyPI or distro packages.
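
A crude way to picture that gate-only pinned list (the file name below is just
one of the candidates floated in this thread): after pip has resolved and
installed the loose global requirements on a slave, freeze exactly what it
picked, e.g.

    import subprocess

    # capture the exact versions pip resolved for the loose requirements
    frozen = subprocess.check_output(['pip', 'freeze']).decode()
    with open('requirements.gate', 'w') as f:   # illustrative file name
        f.write(frozen)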

Another benefit of Adam's proposal is that we would only need to keep
the list of pins in the global requirements repository, so we would have
fewer tooling changes to make.

Doug

 
   -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-20 Thread Davanum Srinivas
+1 to fix Oslo's service module anyway, irrespective of this bug.

+1 to "The db library shouldn't be concerned with whether or not it's
in a forked process -- that's not its job"

-- dims

On Fri, Feb 20, 2015 at 10:17 AM, Doug Hellmann d...@doughellmann.com wrote:


 On Thu, Feb 19, 2015, at 09:45 PM, Mike Bayer wrote:


 Doug Hellmann d...@doughellmann.com wrote:

  5) Allow this sort of connection sharing to continue for a deprecation
  period with apppropriate logging, then make it a hard failure.
 
  This would provide services time to find and fix any sharing problems
  they might have, but would delay the timeframe for a final fix.
 
  6-ish) Fix oslo-incubator service.py to close all file descriptors after
  forking.
 
 
  I'm not sure why 6 is slower, can someone elaborate on that?

 So, option “A”, they call engine.dispose() the moment they’re in a fork,
 the activity upon requesting a connection from the pool is: look in pool,
 no connections present, create a connection and return it.

 This feels like something we could do in the service manager base class,
 maybe by adding a post fork hook or something.

 Josh's patch to forcibly close all file descriptors may be something
 else we want, but if we can reset open connections cleanly when we
 know how, that feels better than relying on detecting broken sockets.


 Option “5”, the way the patch is right now to auto-invalidate on
 detection of new fork, the activity upon requesting a connection is from
 the pool is: look in pool, connection present, check that os.pid()
 matches what we’ve associated with the connection record, if not, we
 raise an exception indicating “invalid”, this is immediately caught, sets
 the connection record as “invalid”, the connection record them
 immediately disposes that file descriptor, makes a new connection and
 returns that.

 Option “6”, the new fork starts, the activity upon requesting a
 connection from the pool is: look in pool, connection present, perform
 the oslo.db “ping” event, ping event emits “SELECT 1” to the MySQLdb
 driver, driver attempts to emit this statement on the socket, socket
 communication fails, MySQLdb converts to an exception, exception is
 raised, SQLAlchemy catches the exception, sends it to a parser to
 determine the nature of the exception, we see that it’s a “disconnect”
 exception, we set the “invalidate” flag on the exception, we re-raise,
 oslo.db’s exc_filters then catch the exception, more string parsers get
 involved, we determine we need to raise an oslo.db.DBDisconnect
 exception, we raise that, the “SELECT 1” ping handler catches that, we
 then emit “SELECT 1” again so that it reconnects, we then hit the
 connection record that’s in “invalid” state so it knows to reconnect, it
 reconnects and the “SELECT 1” continues on the new connection and we
 start up.

 So essentially option “5” (the way the gerrit is right now) has a subset
 of the components of “6”; “6” has the additional steps of: emit a doomed
 statement on the closed socket, then when it fails raise / catch / parse
 / reraise / catch / parse / reraise that exception.   Option “5” just
 has, check the pid, raise / catch an exception.

 IMO the two options are: “5”, check the pid and recover or “3” make it a
 hard failure.

 And I don't think we want the database library doing anything with this
 case at all. Recovery code is tricky, and often prevents valid use cases
 (perhaps the parent *meant* for the child to reuse the open connection
 and isn't going to continue using it so there won't be a conflict).

 The bug here is in the way the application, using Oslo's service module,
 is forking. We should fix the service module to make it possible to fork
 correctly, and to have that be the default behavior. The db library
 shouldn't be concerned with whether or not it's in a forked process --
 that's not its job.

 Doug


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][lbaas] devstack clean.sh not calling clean for external plugins

2015-02-20 Thread Al Miller
I'm working on the external devstack plugin for neutron-lbaas and found that it 
was leaving behind stale ip network namespaces after running clean.sh.  
Comparing to unstack.sh, which calls "run_phase unstack", I saw that clean.sh 
was not calling "run_phase clean", and thus my cleanup code wasn't running.  I 
just submitted https://review.openstack.org/157856 that adds this to clean.sh.  
Please have a look.

Thanks,

Al


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-20 Thread Mike Bayer


Doug Hellmann d...@doughellmann.com wrote:

 
 And I don't think we want the database library doing anything with this
 case at all. Recovery code is tricky, and often prevents valid use cases
 (perhaps the parent *meant* for the child to reuse the open connection
 and isn't going to continue using it so there won't be a conflict).
 
 The bug here is in the way the application, using Oslo's service module,
 is forking. We should fix the service module to make it possible to fork
 correctly, and to have that be the default behavior. The db library
 shouldn't be concerned with whether or not it's in a forked process --
 that's not its job.

OK.  But should the DB library at least *check* that this condition is present? 
 Because, it saves a ton of time vs. trying to understand the unpredictable and 
subtle race conditions which occur if it is not checked.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Adam Gandelman
On Fri, Feb 20, 2015 at 3:06 AM, Sean Dague s...@dague.net wrote:


 It sounds like you are suggesting we take the tool we use to ensure that
 all of OpenStack is installable together in a unified way, and change
 it's installation so that it doesn't do that any more.

 Which I'm fine with.

 But if we are doing that we should just whole hog give up on the idea
 that OpenStack can be run all together in a single environment, and just
 double down on the devstack venv work instead.

 -Sean



Not necessarily. There'd be some tweaks to the tooling but we'd still be
doing the same fundamental thing (installing everything openstack together)
except using a strict set of dependencies that we know won't break each
other when that happens.

This would help tremendously with testing around global-requirements, too.
Currently, a local devstack run likely produces a set of dependencies
different from what was tested by jenkins on the last change to
global-requirements.  If proposed changes to global-requirements produced a
compiled list of pinned dependencies and tested against that, we'd know
that the next day's devstack runs are still testing against the dependency
chain produced by the last change to GR.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-dev] [PATCH 8/8] [RFC] [neutron] ovn: Start work on design documentation.

2015-02-20 Thread Miguel Ángel Ajo
On Friday, 20 de February de 2015 at 17:06, Ben Pfaff wrote:
 On Fri, Feb 20, 2015 at 12:45:46PM +0100, Miguel Ángel Ajo wrote:
  On Thursday, 19 de February de 2015 at 23:15, Kyle Mestery wrote:  
   On Thu, Feb 19, 2015 at 3:55 PM, Ben Pfaff b...@nicira.com 
   (mailto:b...@nicira.com) wrote:
My initial reaction is that we can implement security groups as
another action in the ACL table that is similar to allow but in
addition permits reciprocal inbound traffic. Does that sound
sufficient to you?
 


   
  Yes, having fine grained allows (matching on protocols, ports, and
  remote ips would satisfy the neutron use case).
   
  Also we use connection tracking to allow reciprocal inbound traffic
  via ESTABLISHED/RELATED, any equivalent solution would do.
   
  For reference, our SG implementation, currently is able to match on
  combinations of:
   
  * direction: ingress/egress
  * protocol: icmp/tcp/udp/raw number
  * port_range: min-max (it’s always dst)
  * L2 packet ethertype: IPv4, IPv6, etc...
  * remote_ip_prefix: as a CIDR or * remote_group_id (to reference all other 
  IPs in a certain group)
   
  All of them assume connection tracking so known connection packets will
  go the other way around.
   
  
  
 OK. All of those should work OK. (I don't know for sure whether we'll
 have explicit groups; initially, probably not.)
  
  

That makes sense.

  
Is the exponential explosion due to cross-producting, that is, because
you have, say, n1 source addresses and n2 destination addresses and so
you need n1*n2 flows to specify all the combinations? We aim to solve
that in OVN by giving the CMS direct support for more sophisticated
matching rules, so that it can say something like:
 
ip.src in {a, b, c, ...} && ip.dst in {d, e, f, ...}
 && (tcp.src in {80, 443, 8080} || tcp.dst in {80, 443, 8080})
 

   
   
  That sounds good and very flexible.
 
and let OVN implement it in OVS via the conjunctive match feature
recently added, which is like a set membership match but more
powerful.  
 

   
  Hmm, where can I find examples about that feature, sounds interesting.
   
  
  
 If you look at ovs-ofctl(8) in a development version of OVS, such as
 http://benpfaff.org/~blp/dist-docs/ovs-ofctl.8.pdf
 search for conjunction, which explains the implementation.  
  
  

Amazing, yes, it seems like conjunctions will do the work quite optimally
at OpenFlow level.

My hat off… :)
 (This
 isn't the form that Neutron would use with OVN; that is the Boolean
 expression syntax above.)
  
Of course, understood, I was curious about the low level supporting the
high level above.
  
  
It might still be nice to support lists of IPs (or
whatever), since these lists could still recur in a number of
circumstances, but my guess is that this will help a lot even without
that.
 
 

   
  As afar as I understood, given the way megaflows resolve rules via hashes
  even if we had lots of rules with different ip addresses, that would be 
  very fast,
  probably as fast or more than our current ipset solution.
   
  The only caveat would be having to update lots of flow rules when a port 
  goes
  in or out of a security group, since you have to go and clear/add the rules 
  to each
  single port on the same security group (as long as they have 1 rule 
  referencing the sg).
   
  
  
 That sounds like another good argument for allowing explicit groups. I
 have a design in mind for that but I doubt it's the first thing to
 implement.
  
  

Of course, 1 step at a time. I will do a 2nd pass on your documents, looking a 
bit
more on the higher level. I’m very happy to see that the low level is very well 
tied
up and capable.

Best regards,
Miguel Ángel.
  

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] The strange case of osapi_compute_unique_server_name_scope

2015-02-20 Thread Mike Dorman
I can report that we do use this option (the 'global' setting).  We have to 
enforce name uniqueness for instances’ integration with some external 
systems (namely AD and Spacewalk) which require unique naming.

However, we also do some external name validation which I think 
effectively enforces uniqueness as well.  So if this were deprecated, I 
don’t know if it would directly affect us for our specific situation.

Other operators, anyone else using osapi_compute_unique_server_name_scope?

Mike





On 2/19/15, 3:18 AM, Matthew Booth mbo...@redhat.com wrote:

Nova contains a config variable osapi_compute_unique_server_name_scope.
Its help text describes it pretty well:

When set, compute API will consider duplicate hostnames invalid within
the specified scope, regardless of case. Should be empty, project or
global.

So, by default hostnames are not unique, but depending on this setting
they could be unique either globally or in the scope of a project.

Ideally a unique constraint would be enforced by the database but,
presumably because this is a config variable, that isn't the case here.
Instead it is enforced in code, but the code which does this predictably
races. My first attempt to fix this using the obvious SQL solution
appeared to work, but actually fails in MySQL as it doesn't support that
query structure[1][2]. SQLite and PostgreSQL do support it, but they
don't support the query structure which MySQL supports. Note that this
isn't just a syntactic thing. It looks like it's still possible to do
this if we compound the workaround with a second workaround, but I'm
starting to wonder if we'd be better fixing the design.

First off, do we need this config variable? Is anybody actually using
it? I suspect the answer's going to be yes, but it would be extremely
convenient if it's not.

Assuming this configurability is required, is there any way we can
instead use it to control a unique constraint in the db at service
startup? This would be something akin to a db migration. How do we
manage those?
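
To make that concrete, a rough sketch of a startup-time constraint keyed off
the config value (table and index names are simplified and the soft-delete
column is ignored; this is not a worked-out migration):

    from sqlalchemy import (Column, Index, Integer, MetaData, String, Table,
                            create_engine)

    engine = create_engine('sqlite://')   # placeholder URL
    meta = MetaData()
    instances = Table('instances', meta,
                      Column('id', Integer, primary_key=True),
                      Column('project_id', String(36)),
                      Column('hostname', String(255)))

    # pick the index columns from osapi_compute_unique_server_name_scope
    scope = 'project'   # stand-in for the config option
    cols = {'project': ('project_id', 'hostname'),
            'global': ('hostname',)}[scope]
    Index('uniq_instances0hostname_scope',
          *(instances.c[name] for name in cols), unique=True)
    meta.create_all(engine)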

Related to the above, I'm not 100% clear on which services run this
code. Is it possible for different services to have a different
configuration of this variable, and does that make sense? If so, that
would preclude a unique constraint in the db.

Thanks,

Matt

[1] Which has prompted me to get the test_db_api tests running on MySQL.
See this series if you're interested:
https://review.openstack.org/#/c/156299/

[2] For specifics, see my ramblings here:
https://review.openstack.org/#/c/141115/7/nova/db/sqlalchemy/api.py,cm
line 2547
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] bp serial-ports *partly* implemented?

2015-02-20 Thread Markus Zoeller
It seems to me that the blueprint serial-ports[1] didn't implement
everything which was described in its spec. If one of you could have a 
look at the following examples and help me to understand if these 
observations are right/wrong that would be great.

Example 1:
The flavor provides the extra_spec hw:serial_port_count and the image
the property hw_serial_port_count. This is used to decide how many
serial devices (with different ports) should be defined for an instance.
But the libvirt driver returns always only the *first* defined port 
(see method get_serial_console [2]). I didn't find anything in the 
code which uses the other defined ports.

Example 2:
If a user is already connected, then reject the attempt of a second
user to access the console, but have an API to forceably disconnect
an existing session. This would be particularly important to cope
with hung sessions where the client network went away before the
console was cleanly closed. [1]
I couldn't find the described API. If there is a hung session one cannot
gracefully recover from that. This could lead to a bad UX in Horizon's
serial console client implementation[3].


[1] nova bp serial-ports;

https://github.com/openstack/nova-specs/blob/master/specs/juno/implemented/serial-ports.rst
[2] libvirt driver; return only first port; 

https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2518
[3] horizon bp serial-console; 
https://blueprints.launchpad.net/horizon/+spec/serial-console


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bp serial-ports *partly* implemented?

2015-02-20 Thread Tony Breeds
On Fri, Feb 20, 2015 at 06:03:46PM +0100, Markus Zoeller wrote:
 It seems to me that the blueprint serial-ports[1] didn't implement
 everything which was described in its spec. If one of you could have a 
 look at the following examples and help me to understand if these 
 observations are right/wrong that would be great.

Nope I think you're pretty much correct.  The implementation doesn't
match the details in the spec.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] qemu-img disk corruption bug to patch or not?

2015-02-20 Thread Tony Breeds
On Fri, Feb 20, 2015 at 12:40:34PM +0100, Philipp Marek wrote:

 Well, do you want fast or working/consistent images?

Sure, I often said I'll take slow and correct over fast and buggy any day.
 
 Oh, okay... with the safe default being derived from the qemu version ;)

Umm, excuse my ignorance, but can we actually do a runtime-determined default,
or are you talking about something more like

def maybe_fsync(path):
    if not CONF.option:
        return
    if CONF.option or qemu_version() < (2, 1, 0):
        do_sync(path)
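
A runtime-determined default is at least mechanically easy; something along
these lines could back qemu_version(), assuming shelling out to qemu-img is
acceptable (cinder may already have a nicer way to get at this):

    import re
    import subprocess

    def qemu_version():
        # output looks like "qemu-img version 2.1.2, Copyright ..."
        out = subprocess.check_output(['qemu-img', '--version']).decode()
        match = re.search(r'version (\d+)\.(\d+)(?:\.(\d+))?', out)
        return tuple(int(part or 0) for part in match.groups())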

I'm happy to do what the cinder community wants/needs as long as I know what
that is up front :)

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to turn tempest CLI tests into python-*client in-tree functional tests

2015-02-20 Thread Joe Gordon
On Fri, Feb 13, 2015 at 11:57 AM, Joe Gordon joe.gord...@gmail.com wrote:


   A few months back we started the process to remove the tempest CLI
   tests from tempest [0]. Now that we have successfully pulled novaclient 
 CLI
   tests out of tempest, we have the process sorted out. We now have a 
 process
   that should be easy to follow for each project, in fact keystoneclient 
 has
   already begun as well [1].  As stated in [0], the goal is to completely
   remove CLI tests from tempest by the end of the cycle.


   [0]
   
 http://lists.openstack.org/pipermail/openstack-dev/2014-October/048089.html
[1] https://review.openstack.org/#/c/155543/


   *Steps*


- Move unit tests from */tests/ to */tests/unit
   -
   
 http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=3561772f8b0cfee746af53fa228375b2ec7dfd9d
- Add OS_TEST_PATH to testr.conf
   -
   
 http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=f197c64e05596fc59c8318813d4f69a88ac832fc
- Copy over initial set of CLI tests from tempest/cli/ and add
functional test tox endpoint. Use standard OpenStack environment variables
to get keystone auth, so the tests can be run via 'source openrc  tox
-efunctional'
   -
   
 http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=b89da9be28172319a16bece42f068e2d7f359c67
   - At this point you should be able to run the tests against a cloud
- Add client-dsvm-functional job definition using a post_test_hook
   -
   
 http://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=c4093cd6d328a87ea9a2335ac2dd4d09a598bc8e


This patch had a bug that was fixed here:
https://review.openstack.org/#/c/157845/



- Add post_test_hook for functional tests in the client repo.
   -
   
 http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=d11f960c58c523da7154b3311d6b37ec715392af
   - This patch can be tested out using the non-voting experimental
   job, just leave the comment 'check experimental'
- Make *client-dsvm-functional job gating for client
   -
   
 http://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=147f20f5003cfa4f15a372f7d16493c3bb40775b
   - At this point you should have a working gating functional test
   with a few tests.
- Copy in the rest of the tempest CLI tests
   -
   
 http://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=27cd393028a103d8d52cf25f035e3a2985572ccb
   - Unlike the first set of tests that were copied this is self
   gating.
- Remove tempest CLI tests for your client
   -
   
 http://git.openstack.org/cgit/openstack/tempest/commit/?id=0bd0adecd13e1285d0e938065280816395dbb415
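
For anyone following along, the copied-over functional tests end up looking
roughly like the sketch below (not a verbatim copy of the novaclient tests):
they shell out to the installed CLI and rely on the OS_* variables already
being in the environment, e.g. via 'source openrc'.

    import os
    import subprocess

    import testtools

    class TestNovaCLI(testtools.TestCase):

        def nova(self, *args):
            # run the installed CLI with whatever OS_* auth is in the env
            output = subprocess.check_output(('nova',) + args, env=os.environ)
            return output.decode() if isinstance(output, bytes) else output

        def test_list_has_id_column(self):
            self.assertIn('ID', self.nova('list'))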




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Prefix delegation using dibbler client

2015-02-20 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Those are good news!

I commented on the pull request. Briefly, we can fetch from git, but
would prefer an official release.

Thanks,
/Ihar

On 02/19/2015 11:26 PM, Robert Li (baoli) wrote:
 Hi Kyle, Ihar,
 
 It looks promising to have our patch upstreamed. Please take a look
 at this pull request 
 https://github.com/tomaszmrugalski/dibbler/pull/26#issuecomment-75144912.
 Most importantly, Tomek asked if it’s sufficient to have the code
 up in his master branch. I guess you guys may be able to help
 answer that question since I’m not familiar with openstack release
 process.
 
 Cheers, Robert
 
 On 2/13/15, 12:16 PM, Kyle Mestery mest...@mestery.com 
 mailto:mest...@mestery.com wrote:
 
 On Fri, Feb 13, 2015 at 10:57 AM, John Davidge (jodavidg) 
 jodav...@cisco.com mailto:jodav...@cisco.com wrote:
 
 Hi Ihar,
 
 To answer your questions in order:
 
 1. Yes, you are understanding the intention correctly. Dibbler 
 doesn't currently support client restart, as doing so causes all
 existing delegated prefixes to be released back to the PD server.
 All subnets belonging to the router would potentially receive a new
 cidr every time a subnet is added/removed.
 
 2. Option 2 cannot be implemented using the current version of 
 dibbler, but it can be done using the version we have modified.
 Option 3 could possibly be done with the current version of
 dibbler, but with some major limitations - only one single router
 namespace would be supported.
 
 Once the dibbler changes linked below are reviewed and finalised we
 will only need to merge a single patch into the upstream dibbler
 repo. No further patches are anticipated.
 
 Yes, you are correct that dibbler is not needed unless prefix 
 delegation is enabled by the deployer. It is intended as an
 optional feature that can be easily disabled (and probably will be
 by default). A test to check for the correct dibbler version would
 certainly be necessary.
 
 Testing in the gate will be an issue until the new version of 
 dibbler is merged and packaged in the various distros. I'm not sure
 if there is a way to avoid this problem, unless we have devstack
 install from our updated repo while we wait.
 
 To me, this seems like a pretty huge problem. We can't expect 
 distributions to package side-changes to upstream projects. The 
 correct way to solve this problem is to work to get the changes 
 required in the dependent packages upstream into those projects 
 first (dibbler, in this case), and then propose the changes into 
 Neutron to make use of those changes. I don't see how we can
 proceed with this work until the issues around dibbler has been
 resolved.
 
 
 John Davidge OpenStack@Cisco
 
 
 
 
 On 13/02/2015 16:01, Ihar Hrachyshka ihrac...@redhat.com 
 mailto:ihrac...@redhat.com wrote:
 
 Thanks for the write-up! See inline.
 
 On 02/13/2015 04:34 PM, Robert Li (baoli) wrote:
 Hi,
 
 while trying to integrate dibbler client with neutron to support 
 PD, we encountered a few issues with the dibbler client (and
 server). With a neutron router, we have the qg-xxx interface that
 is connected to the public network, on which a dhcp server is
 running on the delegating router. For each subnet with PD
 enabled, a router port will be created in the neutron router. As
 a result, a new PD request will be sent that asks for a prefix
 from the delegating router. Keep in mind that the subnet is added
 into the router dynamically.
 
 We thought about the following options:
 
 1. use a single dibbler client to support the above requirement. 
 This means, the client should be able to accept new requests on
 the fly either through configuration reload or other interfaces. 
 Unfortunately, dibbler client doesn¹t support it.
 
 Sorry for my ignorance on PD implementation (I will definitely look
 at it the next week), but what does this entry above mean? Do you
 want a single dibbler instance running per router serving all
 subnets plugged into it? And you want to get configuration updates
 when a new subnet is plugged in, or removed from the router?
 
 If that's the case, why not just restarting the client?
 
 2. start a dibbler client per subnet. All of the dibbler clients 
 will be using the same outgoing interface (which is the qg-xxx 
 interface). Unfortunately, dibbler client uses /etc/dibbler and 
 /var/lib/dibbler for its state (in which it saves duid file, pid 
 file, and other internal states). This means it can only support 
 one client per network node. 3. run a single dibbler client that 
 requests a smaller prefix (say /56) and splits it among the
 subnets with PD enabled (neutron subnet requires /64). Depending
 on the neutron router setup, this may result in significant waste
 of prefixes.
 
 Just to understand all options at the table: can we implement ^^ 
 option with stock dibbler?
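
 (For reference, the arithmetic behind option 3: one delegated /56 splits
 into 256 /64s, which the ipaddress module can show directly. The prefix below
 is just an example:

    import ipaddress

    delegated = ipaddress.ip_network(u'2001:db8:0:100::/56')   # example prefix
    subnets = list(delegated.subnets(new_prefix=64))
    print(len(subnets))    # 256
    print(subnets[0])      # 2001:db8:0:100::/64
 )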
 
 
 Given the significant drawback with 3, we are left with 1 and 2. 
 After looking at the dibbler source code, we found that 2 is
 

[openstack-dev] [hacking] disable a check by default

2015-02-20 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

I would like to introduce a hacking check that would be disabled by
default:

https://review.openstack.org/157894

I see the following commit in flake8:

https://gitlab.com/methane/flake8/commit/b301532636b60683b339a2d081728d22957a142f

which suggests it's now possible via specifying entry points in some
special way. But I fail to determine the proper form to achieve this.

Any ideas?
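
For context, the check itself is just a normal hacking/flake8 extension; the
only open question is the off-by-default registration. A made-up example of
the check side (the Nxxx code and the pattern are illustrative, not the one in
the review):

    import re

    _log_warn_re = re.compile(r'\bLOG\.warn\(')

    def check_no_log_warn(logical_line):
        """Nxxx - example check that could be registered as off-by-default."""
        if _log_warn_re.search(logical_line):
            yield (0, 'Nxxx: use LOG.warning() instead of LOG.warn()')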

/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-20 Thread Joshua Harlow

Doug Hellmann wrote:


On Thu, Feb 19, 2015, at 09:45 PM, Mike Bayer wrote:


Doug Hellmannd...@doughellmann.com  wrote:


5) Allow this sort of connection sharing to continue for a deprecation
period with apppropriate logging, then make it a hard failure.

This would provide services time to find and fix any sharing problems
they might have, but would delay the timeframe for a final fix.

6-ish) Fix oslo-incubator service.py to close all file descriptors after
forking.


I'm not sure why 6 is slower, can someone elaborate on that?

So, option “A”, they call engine.dispose() the moment they’re in a fork,
the activity upon requesting a connection from the pool is: look in pool,
no connections present, create a connection and return it.


This feels like something we could do in the service manager base class,
maybe by adding a post fork hook or something.


+1 to that.

I think it'd be nice to have the service __init__() maybe be something like:

     def __init__(self, threads=1000, prefork_callbacks=None,
                  postfork_callbacks=None):
         self.postfork_callbacks = postfork_callbacks or []
         self.prefork_callbacks = prefork_callbacks or []
         # always ensure we are closing any left-open fds last...
         self.prefork_callbacks.append(self._close_descriptors)
         ...



Josh's patch to forcibly close all file descriptors may be something
else we want, but if we can reset open connections cleanly when we
know how, that feels better than relying on detecting broken sockets.


Option “5”, the way the patch is right now to auto-invalidate on
detection of new fork, the activity upon requesting a connection is from
the pool is: look in pool, connection present, check that os.pid()
matches what we’ve associated with the connection record, if not, we
raise an exception indicating “invalid”, this is immediately caught, sets
the connection record as “invalid”, the connection record them
immediately disposes that file descriptor, makes a new connection and
returns that.

Option “6”, the new fork starts, the activity upon requesting a
connection from the pool is: look in pool, connection present, perform
the oslo.db “ping” event, ping event emits “SELECT 1” to the MySQLdb
driver, driver attempts to emit this statement on the socket, socket
communication fails, MySQLdb converts to an exception, exception is
raised, SQLAlchemy catches the exception, sends it to a parser to
determine the nature of the exception, we see that it’s a “disconnect”
exception, we set the “invalidate” flag on the exception, we re-raise,
oslo.db’s exc_filters then catch the exception, more string parsers get
involved, we determine we need to raise an oslo.db.DBDisconnect
exception, we raise that, the “SELECT 1” ping handler catches that, we
then emit “SELECT 1” again so that it reconnects, we then hit the
connection record that’s in “invalid” state so it knows to reconnect, it
reconnects and the “SELECT 1” continues on the new connection and we
start up.

So essentially option “5” (the way the gerrit is right now) has a subset
of the components of “6”; “6” has the additional steps of: emit a doomed
statement on the closed socket, then when it fails raise / catch / parse
/ reraise / catch / parse / reraise that exception.   Option “5” just
has, check the pid, raise / catch an exception.

IMO the two options are: “5”, check the pid and recover or “3” make it a
hard failure.


And I don't think we want the database library doing anything with this
case at all. Recovery code is tricky, and often prevents valid use cases
(perhaps the parent *meant* for the child to reuse the open connection
and isn't going to continue using it so there won't be a conflict).

The bug here is in the way the application, using Oslo's service module,
is forking. We should fix the service module to make it possible to fork
correctly, and to have that be the default behavior. The db library
shouldn't be concerned with whether or not it's in a forked process --
that's not its job.

Doug



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Joshua Harlow

Sean Dague wrote:

On 02/20/2015 12:26 AM, Adam Gandelman wrote:

Its more than just the naming.  In the original proposal,
requirements.txt is the compiled list of all pinned deps (direct and
transitive), while requirements.in reflects
what people will actually use.  Whatever is in requirements.txt affects
the egg's requires.txt. Instead, we can keep requirements.txt unchanged
and have it still be the canonical list of dependencies, while
reqiurements.out/requirements.gate/requirements.whatever is an upstream
utility we produce and use to keep things sane on our slaves.

Maybe all we need is:

* update the existing post-merge job on the requirements repo to produce
a requirements.txt (as it does now) as well the compiled version.

* modify devstack in some way with a toggle to have it process
dependencies from the compiled version when necessary

I'm not sure how the second bit jives with the existing devstack
installation code, specifically with the libraries from git-or-master
but we can probably add something to warm the system with dependencies
from the compiled version prior to calling pip/setup.py/etc


It sounds like you are suggesting we take the tool we use to ensure that
all of OpenStack is installable together in a unified way, and change
it's installation so that it doesn't do that any more.

Which I'm fine with.

But if we are doing that we should just whole hog give up on the idea
that OpenStack can be run all together in a single environment, and just
double down on the devstack venv work instead.


It'd be interesting to see what a distribution (canonical, redhat...) 
would think about this movement. I know yahoo! has been looking into it 
for similar reasons (but we are more flexible than I think a packager 
such as canonical/redhat/debian/... would/could be). With a move to 
venvs, that seems like it would just offload the work of finding the set of 
dependencies that work together (in a single install) to packagers instead.


Is that ok/desired at this point?

-Josh



-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] per-agent/driver/plugin requirements

2015-02-20 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 02/18/2015 08:45 PM, Armando M. wrote:
 
 
 On 17 February 2015 at 22:00, YAMAMOTO Takashi
 yamam...@valinux.co.jp mailto:yamam...@valinux.co.jp wrote:
 
 hi,
 
 i want to add an extra requirement specific to OVS-agent. (namely,
 I want to add ryu for ovs-ofctl-to-python blueprint. [1] but the
 question is not specific to the blueprint.) to avoid messing
 deployments without OVS-agent, such a requirement should be
 per-agent/driver/plugin/etc.  however, there currently seems no
 standard mechanism for such a requirement.
 
 some ideas:
 
 a. don't bother to make it per-agent. add it to neutron's
 requirements. (and global-requirement) simple, but this would make
 non-ovs plugin users unhappy.
 
 b. make devstack look at per-agent extra requirements file in 
 neutron tree. eg. neutron/plugins/$Q_AGENT/requirements.txt
 
 c. move OVS agent to a separate repository, just like other 
 after-decomposition vendor plugins.  and use requirements.txt
 there. for longer term, this might be a way to go.  but i don't
 want to block my work until it happens.
 
 d. follow the way how openvswitch is installed by devstack. a
 downside: we can't give a jenkins run for a patch which introduces 
 an extra requirement.  (like my patch for the mentioned blueprint 
 [2])
 
 i think b. is the most reasonable choice, at least for short/mid
 term.
 
 any comments/thoughts?
 
 
 One thing that I want to ensure we are clear on is about the
 agent's OpenFlow communication strategy going forward, because that
 determines how we make a decision based on the options you have
 outlined: do we enforce the use of ryu while ovs-ofctl goes away
 from day 0? Or do we provide an 'opt-in' type of approach where
 users can explicitly choose if/when to adopt ryu in favor of
 ovs-ofctl? The latter means that we'll keep both solutions for a
 reasonable amount of time to smooth the transition process.
 
 If we adopt the former (i.e. ryu goes in, ovs-ofctl goes out),
 then option a) makes sense to me, but I am not sure how happy
 deployers, and packagers are going to be welcoming this approach.
 There's already too much going on in Kilo right now :)

Even if we leave both options available, packagers wouldn't avoid
shipping ryu, for if the option is present in the code and can be
controlled from config files, then we should assume some users will
switch implementations, and we don't want to break them.

 
 If we adopt the latter, then I think it's desirable to have two
 separate configurations with which we test the agent. This means
 that we'll have a new job (besides the existing ones) that runs the
 agent with ryu instead of ovs-ofctl. This means that option d) is
 the viable one, where DevStack will have to install the dependency
 based on some configuration variable that is determined by the
 openstack-infra project definition.
 
 Thoughts?
 
 Cheers, Armando
 
 
 
 YAMAMOTO Takashi
 
 [1]
 https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python 
 [2] https://review.openstack.org/#/c/153946/
 
 __

 
OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __

 
OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-20 Thread Joe Gordon
On Fri, Feb 20, 2015 at 7:29 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi Jeremy,
   Couldn't find anything strong in the logs to back the reason for OOM.
 At the time OOM happens, mysqld and java processes have the most RAM hence
 OOM selects mysqld (4.7G) to be killed.

 From a glusterfs backend perspective, I haven't found anything suspicious,
 and we don't have the logs of glusterfs (which is typically in
 /var/log/glusterfs) so can't delve inside glusterfs too much :(

 BharatK (in CC) also tried to re-create the issue in a local VM setup, but
 hasn't managed to reproduce it yet!

 Having said that, *we do know* that we started seeing this issue after we
 enabled the nova-assisted-snapshot tests (by changing nova's policy.json
 to enable non-admin to create hyp-assisted snaps). We think that enabling
 online snaps might have added to the number of tests and memory load;
 that's the only clue we have as of now!


It looks like the OOM killer hit while qemu was busy, during
a ServerRescueTest. Maybe libvirt logs would be useful as well?

And I don't see any tempest tests calling assisted-volume-snapshots

Also this looks odd: Feb 19 18:47:16
devstack-centos7-rax-iad-916633.slave.openstack.org libvirtd[3753]: missing
__com.redhat_reason in disk io error event



 So :

   1) BharatK  has merged the patch (
 https://review.openstack.org/#/c/157707/ ) to revert the policy.json in
 the glusterfs job. So no more nova-assisted-snap tests.

   2) We also are increasing the timeout of our job in patch (
 https://review.openstack.org/#/c/157835/1 ) so that we can get a full run
 without timeouts to do a good analysis of the logs (logs are not posted if
 the job times out)

 Can you please re-enable our job, so that we can confirm whether disabling
 the online snapshot test cases helps; if it does, that will help us narrow
 down the issue.

 We also plan to monitor and debug over the weekend, hence having the job
 enabled would help us a lot.

 thanx,
 deepak


 On Thu, Feb 19, 2015 at 10:37 PM, Jeremy Stanley fu...@yuggoth.org
 wrote:

 On 2015-02-19 17:03:49 +0100 (+0100), Deepak Shetty wrote:
 [...]
  For some reason we are seeing the centos7 glusterfs CI job getting
  aborted/ killed either by Java exception or the build getting
  aborted due to timeout.
 [...]
  Hoping to root cause this soon and get the cinder-glusterfs CI job
  back online soon.

 I manually reran the same commands this job runs on an identical
 virtual machine and was able to reproduce some substantial
 weirdness.

 I temporarily lost remote access to the VM around 108 minutes into
 running the job (~17:50 in the logs) and the out of band console
 also became unresponsive to carriage returns. The machine's IP
 address still responded to ICMP ping, but attempts to open new TCP
 sockets to the SSH service never got a protocol version banner back.
 After about 10 minutes of that I went out to lunch but left
 everything untouched. To my excitement it was up and responding
 again when I returned.

 It appears from the logs that it runs well past the 120-minute mark
 where devstack-gate tries to kill the gate hook for its configured
 timeout. Somewhere around 165 minutes in (18:47) you can see the
 kernel out-of-memory killer starts to kick in and kill httpd and
 mysqld processes according to the syslog. Hopefully this is enough
 additional detail to get you a start at finding the root cause so
 that we can reenable your job. Let me know if there's anything else
 you need for this.

 [1] http://fungi.yuggoth.org/tmp/logs.tar
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-20 Thread Mike Bayer


Doug Hellmann d...@doughellmann.com wrote:

 
 
 On Fri, Feb 20, 2015, at 11:28 AM, Mike Bayer wrote:
 Doug Hellmann d...@doughellmann.com wrote:
 
 And I don't think we want the database library doing anything with this
 case at all. Recovery code is tricky, and often prevents valid use cases
 (perhaps the parent *meant* for the child to reuse the open connection
 and isn't going to continue using it so there won't be a conflict).
 
 The bug here is in the way the application, using Oslo's service module,
 is forking. We should fix the service module to make it possible to fork
 correctly, and to have that be the default behavior. The db library
 shouldn't be concerned with whether or not it's in a forked process --
 that's not its job.
 
 OK.  But should the DB library at least *check* that this condition is
 present?  Because, it saves a ton of time vs. trying to understand the
 unpredictable and subtle race conditions which occur if it is not
 checked.
 
 I really don't think that's the right place to deal with the situation,
 either with a fix or a check or whatever. The race today happened to be
 in the database code, but it could easily have been in the messaging
 library or something else that shares a connection to a remote service.

I’m just looking for, “a log line”.  So the next time a “SQLAlchemy is not 
loading my result correctly” bug gets sent to me, I can look in their logs, see 
this note, and know why it’s happening, rather than having to dig into all the 
querying code and making sure their mappings are correct, trying to run their 
bug under devstack, and everything else.

I’m trying to use software to give me information to make my life easier.   
What a crazy idea !






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon][heat]Vote for Openstack L summit topic The Heat Orchestration Template Builder: A demonstration

2015-02-20 Thread Aggarwal, Nikunj
Hi,

I have submitted a presentation for the OpenStack L summit:


The Heat Orchestration Template Builder: A demonstration
https://www.openstack.org/vote-vancouver/Presentation/the-heat-orchestration-template-builder-a-demonstration



Please cast your vote if you feel it is worth presenting.

Thanks & Regards,
Nikunj

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-20 Thread Jeremy Stanley
On 2015-02-20 16:29:31 +0100 (+0100), Deepak Shetty wrote:
 Couldn't find anything strong in the logs to back the reason for
 OOM. At the time OOM happens, mysqld and java processes have the
 most RAM hence OOM selects mysqld (4.7G) to be killed.
[...]

Today I reran it after you rolled back some additional tests, and it
runs for about 117 minutes before the OOM killer shoots nova-compute
in the head. At your request I've added /var/log/glusterfs into the
tarball this time: http://fungi.yuggoth.org/tmp/logs2.tar
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [gate tests] Cinder drivers being set up as jobs in infra

2015-02-20 Thread Jeremy Stanley
On 2015-02-20 09:38:22 -0800 (-0800), Mike Perez wrote:
[...]
 I was also not aware of the assistance being given to Open Source
 solutions, but makes sense to me.

To quote Harry Tuttle, "We're all in it together."
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Adam Gandelman
On Fri, Feb 20, 2015 at 10:16 AM, Joshua Harlow harlo...@outlook.com
wrote:


 It'd be interesting to see what a distribution (canonical, redhat...)
 would think about this movement. I know yahoo! has been looking into it for
 similar reasons (but we are more flexibly then I think a packager such as
 canonical/redhat/debian/... would/culd be). With a move to venv's that
 seems like it would just offload the work to find the set of dependencies
 that work together (in a single-install) to packagers instead.

 Is that ok/desired at this point?

 -Josh


I share this concern, as well. I wonder if the compiled list of pinned
dependencies will be the only thing we look at upstream. Once functional on
stable branches, will we essentially forget about the non-pinned
requirements.txt that downstreams are meant to use?

One way of looking at it, though (especially wrt stable) is that the pinned
list of compiled dependencies more closely resembles how distros are
packaging this stuff.  That is, instead of providing explicit dependencies
via a pinned list, they are providing them via a frozen package archive
(i.e., Ubuntu 14.04) that is known to provide a working set.  It'd be up to
distros to make sure that everything is functional prior to freezing that,
and I imagine they already do that.

-Adam
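
As a toy illustration of that split (a sketch only, not the proposed tooling;
the requirements.gate name is just a placeholder), the "freeze the known-good
set" step could be as small as turning the unpinned requirements.txt into a
fully pinned list taken from whatever versions are installed on the slave:

# Sketch: emit a pinned "gate" list from the currently installed,
# known-good environment, leaving the ranged requirements.txt untouched.
import pkg_resources


def pin_requirements(unpinned_path="requirements.txt",
                     pinned_path="requirements.gate"):
    installed = {dist.project_name.lower(): dist.version
                 for dist in pkg_resources.working_set}
    pins = []
    with open(unpinned_path) as src:
        for line in src:
            line = line.split("#", 1)[0].strip()
            if not line:
                continue
            try:
                name = pkg_resources.Requirement.parse(line).project_name
            except Exception:
                continue   # skip editable/URL requirements for brevity
            if name.lower() in installed:
                pins.append("%s==%s" % (name, installed[name.lower()]))
    with open(pinned_path, "w") as dst:
        dst.write("\n".join(sorted(pins)) + "\n")


if __name__ == "__main__":
    pin_requirements()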
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] disable a check by default

2015-02-20 Thread Ben Nemec

On 02/20/2015 12:07 PM, Ihar Hrachyshka wrote:
 Hi,
 
 I would like to introduce a hacking check that would be disabled
 by default:
 
 https://review.openstack.org/157894
 
 I see the following commit in flake8:
 
 https://gitlab.com/methane/flake8/commit/b301532636b60683b339a2d081728d22957a142f

  which suggests it's now possible via specifying entry points in
 some special way. But I fail to determine the proper form to
 achieve this.
 
 Any ideas?

There's a change in progress that I believe is related:
https://review.openstack.org/#/c/134052/

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-20 Thread Joshua Harlow
I started https://etherpad.openstack.org/p/service-py-replacements with
some ideas/thoughts/notes; feel free to add any you want (or adjust it
accordingly if you have better ideas).


-Josh

Victor Stinner wrote:

Hi,

Davanum Srinivas wrote:

+1 to fix Oslo's service module any ways, irrespective of this bug.


By the way, the Service class is a blocking point for the implementation of
the asyncio and threads specs:

https://review.openstack.org/#/c/153298/
https://review.openstack.org/#/c/156711/

We may allow executing a function before fork() to explicitly share some
things with all child processes. But most things (instantiating the application,
opening DB connections, etc.) should be done after the fork.

Well, it looks like everyone agrees. We just need someone to implement the idea
:-)

We may write a new class instead of modifying the existing class, so as not to
break applications. Doug Hellmann once proposed having an abstraction of the
concurrency model (eventlet, threads, asyncio). I don't know if it's worth it.

Victor
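
For illustration, a minimal sketch (not the oslo service code) of the "open
connections after the fork" pattern described above: the parent process only
forks, and each child builds its own engine once it is running in its own
process.

import os

import sqlalchemy


def run_worker(worker_id, db_url):
    # Created *after* fork(): this engine and its connection pool belong
    # to exactly one process, so nothing is shared with siblings.
    engine = sqlalchemy.create_engine(db_url)
    with engine.connect() as conn:
        value = conn.execute(sqlalchemy.text("SELECT 1")).scalar()
        print("worker %d (pid %d) got %s" % (worker_id, os.getpid(), value))


def launch(workers=2, db_url="sqlite://"):
    # The parent holds no DB state at all; it only forks and waits.
    children = []
    for i in range(workers):
        pid = os.fork()
        if pid == 0:
            run_worker(i, db_url)
            os._exit(0)
        children.append(pid)
    for pid in children:
        os.waitpid(pid, 0)


if __name__ == "__main__":
    launch()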

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][cisco][apic] entry point for APIC agent

2015-02-20 Thread Ihar Hrachyshka

Hi,

does anyone know why we don't maintain an entry point for the APIC agent
in setup.cfg? The code in [1] looks like there is a main() function
for the agent, but for some reason it's not exposed as a
console_script when neutron is installed.

Is there any reason not to do it?

[1]:
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/cisco/apic/apic_topology.py#n320

/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Joe Gordon
On Fri, Feb 20, 2015 at 7:27 AM, Doug Hellmann d...@doughellmann.com
wrote:



 On Fri, Feb 20, 2015, at 06:06 AM, Sean Dague wrote:
  On 02/20/2015 12:26 AM, Adam Gandelman wrote:
   Its more than just the naming.  In the original proposal,
   requirements.txt is the compiled list of all pinned deps (direct and
   transitive), while requirements.in http://requirements.in reflects
   what people will actually use.  Whatever is in requirements.txt affects
   the egg's requires.txt. Instead, we can keep requirements.txt unchanged
   and have it still be the canonical list of dependencies, while
   reqiurements.out/requirements.gate/requirements.whatever is an upstream
   utility we produce and use to keep things sane on our slaves.
  
   Maybe all we need is:
  
   * update the existing post-merge job on the requirements repo to
 produce
   a requirements.txt (as it does now) as well the compiled version.
  
   * modify devstack in some way with a toggle to have it process
   dependencies from the compiled version when necessary
  
   I'm not sure how the second bit jives with the existing devstack
   installation code, specifically with the libraries from git-or-master
   but we can probably add something to warm the system with dependencies
   from the compiled version prior to calling pip/setup.py/etc
 
  It sounds like you are suggesting we take the tool we use to ensure that
  all of OpenStack is installable together in a unified way, and change
  its installation so that it doesn't do that any more.
 
  Which I'm fine with.
 
  But if we are doing that we should just whole hog give up on the idea
  that OpenStack can be run all together in a single environment, and just
  double down on the devstack venv work instead.

 I don't disagree with your conclusion, but that's not how I read what he
 proposed. :-)


Sean was reading between the lines here. We are doing all this extra work
to make sure OpenStack can be run together in a single environment, but it
seems like more and more people are moving away from deploying with that
model anyway. Moving to this model would require a little more than just
installing everything in separate venvs.  We would need to make sure we
don't cap oslo libraries etc. in order to prevent conflicts inside a single
service. And we would still need a story around what to do with stable
branches, how do we make sure new libraries don't break stable branches --
which in turn can break master via grenade and other jobs.



 Joe wanted requirements.txt to be the pinned requirements computed from
 the list of all global requirements that work together. Pinning to a
 single version works in our gate, but makes installing everything else
 together *outside* of the gate harder because if the projects don't all
 sync all requirements changes pretty much at the same time they won't be
 compatible.

 Adam suggested leaving requirements.txt alone and creating a different
 list of pinned requirements that is *only* used in our gate. That way we
 still get the pinning for our gate, and the values are computed from the
 requirements used in the projects but they aren't propagated back out to
 the projects in a way that breaks their PyPI or distro packages.

 Another benefit of Adam's proposal is that we would only need to keep
 the list of pins in the global requirements repository, so we would have
 fewer tooling changes to make.

 Doug

 
-Sean
 
  --
  Sean Dague
  http://dague.net
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] [cinder] CI via infra for the DRBD Cinder driver

2015-02-20 Thread Philipp Marek
Hi all,

this is a reflection of the discussion I just had on #openstack-infra; it's 
about (re-)using the central CI infrastructure for our Open-Source DRBD 
driver too.


The current status is:
 * The DRBD driver is already in Cinder, so DRBD-replicated Cinder storage
   using iSCSI to the hypervisors does work out-of-the-box.
 * The Nova-parts didn't make it in for Kilo; we'll try to get them into L.
 * I've got a lib/backends/drbd for devstack that, together with a matching
   local.conf, can set up a node - at least for a limited set of
   distributions (as DRBD needs a kernel module, Ubuntu/Debian via DKMS are
   the easy way).
   [Please note that package installation is not done in this script yet -
   I'm not sure whether I can/may/should simply add an apt repository.]


Now, clarkb told me about two caveats:

  «Yup, so the two things I will start with is that multinode testing is 
   still really rudimentary, we only just got tempest sort of working with 
   it. So I might suggest running on a single node first to get the general 
   thing working.

   The other thing is that we don't have the zuul code to vote with 
   a different account deployed/merged yet. So initially you could run your 
   job but it wouldn't vote against, say, cinder.»


Cinder has a deadline for CI: March 19th; upon relaying that fact (or rather, a
nearly correct date) clarkb said

  «thats about 3 weeks... probably at least for the zuul thing.»

So, actually it's nearly 4 weeks; let's hope that it all works out.


Actually, the multi-node testing will only be needed when we get the Nova 
parts in, because then it would make sense to test (Nova) via both iSCSI 
and the DRBD transport; for Cinder CI a single-node setup is sufficient.


My remaining questions are:
 * Is it possible to have our driver tested via the common infrastructure?
 * Is it okay to set up another apt repository during the devstack run,
   to install the needed packages? I'm not sure whether our servers
   would simply be accessible, some firewall or filtering proxy could
   break such things easily.
 * Apart from the cinder-backend script in devstack (which I'll have to 
   finish first, see e.g. package installation), is any other information 
   needed from us?


Thank you for your feedback and any help you can offer!


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tripleo] Call for developers/contributors for kolla-k3

2015-02-20 Thread Christopher Aedo
Steve, thanks for sharing this - the decomposed blueprints are going
to make getting involved easy.  Containerizing OpenStack services
looks like a great solution to the challenge of upgrades, and the pace of
the work you and others on the Kolla team have kept up is impressive.
Regarding Fuel, we are not very biased when it comes to the deployment
framework, and deploying via containers is looking better all the
time.  Over the next few weeks you can expect a few of us to get
involved and start helping out.  I'm really hoping we are able to
integrate this with Fuel and get more folks involved with and using
Kolla.

-Christopher

On Thu, Feb 19, 2015 at 1:42 AM, Steven Dake (stdake) std...@cisco.com wrote:
 Hey folks,

 I’d like to invite the broader OpenStack community to participate in
 developing milestone #3 of Kolla – a Project to Containerize the deployment
 of OpenStack.  This is a major refactoring of Kolla to make it viable for
 use by projects such as TripleO or Fuel.

 We have an aggressive set of features the core team has identified it wants
 to develop.  Our deadline for development is March 19th.  Theoretically we
 could slip into early April, but the idea is to synchronize our schedule
 with the broader OpenStack project’s release schedule.  We have
 approximately 4-6 weeks to finish the development described in the
 blueprint below.

 The set of features the core team has identified we want to develop is
 described in this specification:

 https://github.com/stackforge/kolla/blob/master/specs/containerize-openstack.rst

 I have decomposed the specification into specific work items.  Most of the
 blueprints will take between 4 and 20 hours of work individually.  If you're
 keen to learn about containers or the future of deployment architectures in
 OpenStack, please come and join in our development.  We do our development
 in the #tripleo channel on IRC using the standard OpenStack review and
 development process.

 The decomposed blueprints are here:

 https://blueprints.launchpad.net/kolla

 Come pick one out and start developing today :)

 Regards
 -steve

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] disable a check by default

2015-02-20 Thread Ian Cordasco
On 2/20/15, 12:07, Ihar Hrachyshka ihrac...@redhat.com wrote:


Hi,

I would like to introduce a hacking check that would be disabled by
default:

https://review.openstack.org/157894

I see the following commit in flake8:

https://gitlab.com/methane/flake8/commit/b301532636b60683b339a2d081728d22957a142f

which suggests it's now possible via specifying entry points in some
special way. But I fail to determine the proper form to achieve this.

Any ideas?

/Ihar

Hey Ihar!

First, the canonical source for Flake8 is actually
https://gitlab.com/pycqa/flake8 (so
https://gitlab.com/pycqa/flake8/commit/b301532636b60683b339a2d081728d22957a142f
is the link that should be used to reference the commit).

As for having a check that is disabled by default, there’s no entry-point
magic here. The registered check itself simply needs an
‘off_by_default’ attribute defined and set to the value ‘True’. I believe
hacking was looking at adding a decorator to do this for check authors;
perhaps Joe (Gordon) can give you more details about that.

Cheers,
Ian
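
For anyone else hitting this later, here is roughly what that looks like (a
sketch only; the check and its N347 code are made up, and the check is still
registered through the usual hacking/flake8 entry point):

import re

_ASSERT_TRUE_ISINSTANCE = re.compile(r"assertTrue\(isinstance\(")


def check_no_asserttrue_isinstance(logical_line):
    """N347 - use assertIsInstance instead of assertTrue(isinstance(a, b))."""
    if _ASSERT_TRUE_ISINSTANCE.search(logical_line):
        yield (0, "N347: use assertIsInstance instead of "
                  "assertTrue(isinstance(a, b))")


# This attribute is what makes the check opt-in: flake8 skips it unless
# N347 is explicitly selected (e.g. via select= in tox.ini or --select).
check_no_asserttrue_isinstance.off_by_default = True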

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Doug Hellmann


On Fri, Feb 20, 2015, at 02:07 PM, Joe Gordon wrote:
 On Fri, Feb 20, 2015 at 7:27 AM, Doug Hellmann d...@doughellmann.com
 wrote:
 
 
 
  On Fri, Feb 20, 2015, at 06:06 AM, Sean Dague wrote:
   On 02/20/2015 12:26 AM, Adam Gandelman wrote:
It's more than just the naming.  In the original proposal,
requirements.txt is the compiled list of all pinned deps (direct and
transitive), while requirements.in reflects
what people will actually use.  Whatever is in requirements.txt affects
the egg's requires.txt. Instead, we can keep requirements.txt unchanged
and have it still be the canonical list of dependencies, while
requirements.out/requirements.gate/requirements.whatever is an upstream
utility we produce and use to keep things sane on our slaves.
   
Maybe all we need is:
   
* update the existing post-merge job on the requirements repo to
  produce
a requirements.txt (as it does now) as well the compiled version.
   
* modify devstack in some way with a toggle to have it process
dependencies from the compiled version when necessary
   
I'm not sure how the second bit jives with the existing devstack
installation code, specifically with the libraries from git-or-master
but we can probably add something to warm the system with dependencies
from the compiled version prior to calling pip/setup.py/etc
  
   It sounds like you are suggesting we take the tool we use to ensure that
   all of OpenStack is installable together in a unified way, and change
    its installation so that it doesn't do that any more.
  
   Which I'm fine with.
  
   But if we are doing that we should just whole hog give up on the idea
   that OpenStack can be run all together in a single environment, and just
   double down on the devstack venv work instead.
 
  I don't disagree with your conclusion, but that's not how I read what he
  proposed. :-)
 
 
 Sean was reading between the lines here. We are doing all this extra work
 to make sure OpenStack can be run together in a single environment, but
 it
 seems like more and more people are moving away from deploying with that
 model anyway. Moving to this model would require a little more than just
 installing everything in separate venvs.  We would need to make sure we
 don't cap oslo libraries etc. in order to prevent conflicts inside a
 single

Something I've noticed in this discussion: We should start talking about
our libraries, not just Oslo libraries. Oslo isn't the only project
managing libraries used by more than one other team any more. It never
really was, if you consider the clients, but we have PyCADF and various
middleware and other things now, too. We can base our policies on what
we've learned from Oslo, but we need to apply them to *all* libraries,
no matter which team manages them.

 service. And we would still need a story around what to do with stable
 branches, how do we make sure new libraries don't break stable branches
 --
 which in turn can break master via grenade and other jobs.

I'm comfortable using simple caps based on minor number increments for
stable branches. New libraries won't end up in the stable branches
unless they are a patch release. We can set up test jobs for stable
branches of libraries to run tempest just like we do against master, but
using all stable branch versions of the source files (AFAIK, we don't
have a job like that now, but I could be wrong).

I'm less confident that we have identified all of the issues with more
limited pins, so I'm reluctant to back that approach for now. That may
be an excess of caution on my part, though.
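
To make the "caps based on minor number increments" idea concrete, a small
illustration (an editorial sketch; it assumes the packaging library is
available, and the version numbers are invented):

# A stable-branch style cap: patch releases still get in, the next
# minor release does not.
from packaging.specifiers import SpecifierSet

cap = SpecifierSet(">=1.4.0,<1.5.0")

for candidate in ("1.4.0", "1.4.7", "1.5.0", "2.0.0"):
    print(candidate, "allowed" if candidate in cap else "blocked")
# 1.4.0 allowed
# 1.4.7 allowed   <- new patch releases can still reach the stable branch
# 1.5.0 blocked   <- new minor releases cannot
# 2.0.0 blocked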

 
 
 
  Joe wanted requirements.txt to be the pinned requirements computed from
  the list of all global requirements that work together. Pinning to a
  single version works in our gate, but makes installing everything else
  together *outside* of the gate harder because if the projects don't all
  sync all requirements changes pretty much at the same time they won't be
  compatible.
 
  Adam suggested leaving requirements.txt alone and creating a different
  list of pinned requirements that is *only* used in our gate. That way we
  still get the pinning for our gate, and the values are computed from the
  requirements used in the projects but they aren't propagated back out to
  the projects in a way that breaks their PyPI or distro packages.
 
  Another benefit of Adam's proposal is that we would only need to keep
  the list of pins in the global requirements repository, so we would have
  fewer tooling changes to make.
 
  Doug
 
  
 -Sean
  
   --
   Sean Dague
   http://dague.net
  
  
  __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  

Re: [openstack-dev] [nova] The strange case of osapi_compute_unique_server_name_scope

2015-02-20 Thread Andrew Bogott

On 2/20/15 9:06 AM, Mike Dorman wrote:

I can report that we do use this option (the ‘global’ setting).  We have to
enforce name uniqueness for instances’ integration with some external
systems (namely AD and Spacewalk) which require unique naming.

However, we also do some external name validation which I think
effectively enforces uniqueness as well.  So if this were deprecated, I
don’t know if it would directly affect us for our specific situation.

Other operators, anyone else using osapi_compute_unique_server_name_scope?

I use it!  And, in fact, added it in the first place :(

I have no real recall of what we concluded when originally discussing 
the associated race.  The feature is useful to me and I'd love it if it 
could be moved into the db to fix the race, but I concede that it's a 
pretty big can o' worms, so if no one else cries out in pain I can live 
with it being deprecated.


-Andrew




Mike





On 2/19/15, 3:18 AM, Matthew Booth mbo...@redhat.com wrote:


Nova contains a config variable osapi_compute_unique_server_name_scope.
Its help text describes it pretty well:

When set, compute API will consider duplicate hostnames invalid within
the specified scope, regardless of case. Should be empty, project or
global.

So, by default hostnames are not unique, but depending on this setting
they could be unique either globally or in the scope of a project.

Ideally a unique constraint would be enforced by the database but,
presumably because this is a config variable, that isn't the case here.
Instead it is enforced in code, but the code which does this predictably
races. My first attempt to fix this using the obvious SQL solution
appeared to work, but actually fails in MySQL as it doesn't support that
query structure[1][2]. SQLite and PostgreSQL do support it, but they
don't support the query structure which MySQL supports. Note that this
isn't just a syntactic thing. It looks like it's still possible to do
this if we compound the workaround with a second workaround, but I'm
starting to wonder if we'd be better off fixing the design.

First off, do we need this config variable? Is anybody actually using
it? I suspect the answer's going to be yes, but it would be extremely
convenient if it's not.

Assuming this configurability is required, is there any way we can
instead use it to control a unique constraint in the db at service
startup? This would be something akin to a db migration. How do we
manage those?

Related to the above, I'm not 100% clear on which services run this
code. Is it possible for different services to have a different
configuration of this variable, and does that make sense? If so, that
would preclude a unique constraint in the db.

Thanks,

Matt

[1] Which has prompted me to get the test_db_api tests running on MySQL.
See this series if you're interested:
https://review.openstack.org/#/c/156299/

[2] For specifics, see my ramblings here:
https://review.openstack.org/#/c/141115/7/nova/db/sqlalchemy/api.py,cm
line 2547
--
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
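
For illustration, a minimal sketch of the two approaches being discussed (a
simplified table in SQLAlchemy 1.4+ select() style, not the real nova schema):
the check-then-insert pattern that races, and a database-level constraint that
would close the race for the 'project' scope.

import sqlalchemy as sa

metadata = sa.MetaData()

instances = sa.Table(
    "instances", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("project_id", sa.String(36), nullable=False),
    sa.Column("hostname", sa.String(255), nullable=False),
    # DB-enforced version of the 'project' scope: two racing requests can
    # no longer both insert the same (project_id, hostname).  Note this is
    # not case-insensitive; that would need a functional/lowered index,
    # which is part of what makes a config-driven constraint awkward.
    sa.UniqueConstraint("project_id", "hostname",
                        name="uniq_instances_project_hostname"),
)


def create_instance_racy(conn, project_id, hostname):
    # A check-then-insert pattern like the one described above: another
    # request can insert the same hostname between the SELECT and INSERT.
    exists = conn.execute(
        sa.select(instances.c.id).where(
            sa.and_(instances.c.project_id == project_id,
                    sa.func.lower(instances.c.hostname) == hostname.lower()))
    ).first()
    if exists:
        raise ValueError("duplicate hostname: %s" % hostname)
    conn.execute(instances.insert().values(project_id=project_id,
                                           hostname=hostname))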

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev