Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-13 Thread ozamiatin



On 6/13/15 01:55, Clint Byrum wrote:

Excerpts from Alec Hothan (ahothan)'s message of 2015-06-12 13:41:17 -0700:

On 6/1/15, 5:03 PM, "Davanum Srinivas"  wrote:


fyi, the spec for zeromq driver in oslo.messaging is here:
https://review.openstack.org/#/c/187338/1/specs/liberty/zmq-patterns-usage.rst,unified

-- dims

I was about to provide some email comments on the above review off gerrit,
but figured maybe it would be good to take quick stock of the state of
this general effort for pushing out a better zmq driver for oslo messaging.
So I started to look around the oslo/zeromq wiki and saw a few email threads
that drew my interest.

In this email (Nov 2014), Ilya proposes getting rid of a central
broker for zmq:
http://lists.openstack.org/pipermail/openstack-dev/2014-November/050701.html
It is not clear whether Ilya already had in mind instead having a local proxy on
every node (as proposed in the above spec).


In this email (March 2014), Yatin described the prospect of using zmq in a
completely broker-less way (so not even a proxy per node), using
matchmaker rings to configure well-known ports:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030411.html
That is pretty close to what I think would be a better design (with the
variant that I'd rather see a robust and highly available name server
instead of fixed port assignments). I'd be interested to know what
happened to that proposal and why we ended up with a proxy-per-node
solution at this stage (I'll reply to the proxy-per-node design in a
separate email to complement my gerrit comments).
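For reference, the broker-less matchmaker-ring approach Yatin described was driven by configuration along these lines. This is a hedged sketch: option names are from the Juno/Kilo-era zmq driver, values and paths are illustrative, and the exact spelling varied across releases (older versions took a full class path for the matchmaker), so verify against the installed version:

```ini
# nova.conf / neutron.conf (illustrative sketch, not a verified config)
[DEFAULT]
rpc_backend = zmq
rpc_zmq_matchmaker = ring
ringfile = /etc/oslo/matchmaker_ring.json
```

The ring file then maps topics to the hosts serving them (e.g. a JSON object like `{"scheduler": ["host1", "host2"]}`), which is exactly the fixed-port / static-assignment aspect a highly available name server would replace.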


I could not find a single document that summarizes the list of issues related
to rabbitMQ deployments; all that appears is that many people are unhappy
with it, some are willing to switch to zmq, many are hesitant, and some are
decidedly skeptical. On my side I know of a number of issues related to oslo
messaging over rabbitMQ.

I think it is important for the community to understand that of the many
issues generally attributed to oslo messaging over rabbitMQ, not all of
them are caused by the choice of rabbitMQ as a transport (and hence those
will likely not be fixed if we just switched from rabbitMQ to ZMQ) and
many are actually caused by the misuse of oslo messaging by the apps
(Neutron, Nova...) and can only be fixed by modification of the app code.

I think personally that there is a strong case for a properly designed ZMQ
driver but we first need to make the expectations very clear.

One long-standing issue I can see is that the oslo messaging API
documentation sorely lacks detail on critical areas such as API
behavior under fault, load, and scale conditions.
As a result, app developers sometimes use the APIs indiscriminately,
and that will have an impact on the overall quality of openstack in
deployment conditions.
I understand that a lot of the existing code was written in a hurry and
is good enough to work properly on small setups, but some code will break
really badly under load or when things start to go south in the cloud.
That is, unless the community realizes that perhaps there is something that
needs to be done.
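To make the point concrete, here is a hedged sketch (this is not the oslo.messaging API; `MessagingTimeout` below merely stands in for the real exception class) of the kind of fault behavior the docs would need to spell out: a synchronous RPC call can time out, and the caller has to decide whether retrying is even safe, since blindly retrying a non-idempotent server method can duplicate work.

```python
import time

class MessagingTimeout(Exception):
    """Stand-in for the real messaging-timeout exception."""

def call_with_retry(call, attempts=3, delay=0.0):
    """Retry a callable on timeout. Only safe for idempotent server methods."""
    last = None
    for _ in range(attempts):
        try:
            return call()
        except MessagingTimeout as exc:
            last = exc
            time.sleep(delay)
    raise last

# Example: a flaky "server" that times out twice before replying.
state = {"n": 0}
def flaky():
    state["n"] += 1
    if state["n"] < 3:
        raise MessagingTimeout("no reply within timeout")
    return "pong"

print(call_with_retry(flaky))  # -> pong, after two swallowed timeouts
```

Without documentation of this behavior, each app reinvents (or omits) the retry/idempotency decision, which is exactly the indiscriminate usage described above.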

We are only starting to see things break under load today because we
have more lab tests at scale, more deployments at scale, and we are only
starting to see real system-level testing at scale with HA testing (the kind
of test where you inject load and cause failures of all sorts). Today we know
that openstack behaves terribly in these conditions, even in so-called HA
deployments!

As a first step, would it be useful to have one single official document
that characterizes all the issues we're trying to fix, and perhaps use
that document as a basis for showing which of those issues will be
fixed by the use of the zmq driver? I think that could help us focus
better on the type of requirements we need from this new ZMQ driver.


I think you missed "it is not tested in the gate" as a root cause for
some of the ambiguity.
It is not missed. Passing the devstack-gate is a first requirement for
approval of a new driver implementation, and it is mentioned in the spec.

Anecdotes and bug reports are super important for
knowing where to invest next, but a test suite would at least establish a
base line and prevent the sort of thrashing and confusion that comes from
such a diverse community of users feeding bug reports into the system.

Also, not having a test in the gate is a serious infraction, and will
lead to zmq's removal from oslo.messaging now that we have a ratified
policy requiring one. I suggest a first step being to strive to get a
devstack-gate job that runs using zmq instead of rabbitmq. You can
trigger it in oslo.messaging's check pipeline, and make it non-voting,
but eventually it needs to get into nova, neutron, cinder, heat, etc.
Without that, you'll find that the community of potential benefactors of
any effort you put into zmq will shrink dramatically when we are forced
to remove the driver.
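As a rough illustration of that first step, the wiring could look like the following sketch of a 2015-era project-config `layout.yaml` entry. The job name and pipeline entry are illustrative assumptions, not an existing job:

```yaml
# Illustrative only: a hypothetical non-voting zmq devstack job in
# oslo.messaging's check pipeline (names are made up for this sketch).
projects:
  - name: openstack/oslo.messaging
    check:
      - gate-oslo.messaging-dsvm-zeromq-nv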

[openstack-dev] [neutron] Missing openvswitch filter rules

2015-06-13 Thread Jeff Feng

I'm using OVSHybridIptablesFirewallDriver in ovs_neutron_plugin.ini

[securitygroup]
firewall_driver =
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

But I cannot see any related rules added in iptables after restarting
neutron-openvswitch-agent.

Has anyone seen the same issue before? This is on the Juno release.
Any idea which configuration could be wrong or missing?


# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
neutron-openvswi-INPUT all -- anywhere anywhere
FWR all -- anywhere anywhere

Chain FORWARD (policy ACCEPT)
target prot opt source destination
neutron-filter-top all -- anywhere anywhere
neutron-openvswi-FORWARD all -- anywhere anywhere

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
neutron-filter-top all -- anywhere anywhere
neutron-openvswi-OUTPUT all -- anywhere anywhere

Chain FWR (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere multiport dports 52311
ACCEPT udp -- anywhere anywhere multiport dports 52311
ACCEPT udp -- anywhere anywhere multiport dports 55400:55415
ACCEPT udp -- anywhere anywhere multiport sports 55400:55415
REJECT tcp -- anywhere anywhere tcp flags:SYN,RST,ACK/SYN reject-with icmp-port-unreachable
REJECT udp -- anywhere anywhere reject-with icmp-port-unreachable

Chain neutron-filter-top (2 references)
target prot opt source destination
neutron-openvswi-local all -- anywhere anywhere

Chain neutron-openvswi-FORWARD (1 references)
target prot opt source destination

Chain neutron-openvswi-INPUT (1 references)
target prot opt source destination

Chain neutron-openvswi-OUTPUT (1 references)
target prot opt source destination

Chain neutron-openvswi-local (1 references)
target prot opt source destination

Chain neutron-openvswi-sg-chain (0 references)
target prot opt source destination

Chain neutron-openvswi-sg-fallback (0 references)
target prot opt source destination
DROP all -- anywhere anywhere
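For comparison, when the hybrid driver is wiring ports correctly, `iptables -S` typically also shows per-port ingress/egress chains hung off `neutron-openvswi-sg-chain`. The shape is roughly as below; the port-id prefix and tap device name are purely illustrative:

```
-N neutron-openvswi-i0a1b2c3d4
-N neutron-openvswi-o0a1b2c3d4
-A neutron-openvswi-FORWARD -m physdev --physdev-out tap0a1b2c3d4 --physdev-is-bridged -j neutron-openvswi-sg-chain
-A neutron-openvswi-sg-chain -m physdev --physdev-out tap0a1b2c3d4 --physdev-is-bridged -j neutron-openvswi-i0a1b2c3d4
```

The zero references on `neutron-openvswi-sg-chain` in the output above suggest the agent never plugged any ports into the firewall, so it may be worth checking that the agent process is actually reading this ini file (it is only picked up if passed via --config-file) and that the instance VIFs are plugged through the qbr/qvb/qvo hybrid bridge devices.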

Thanks
Jeff Feng


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint

2015-06-13 Thread Hongbin Lu
Thanks Adrian. Sounds good.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: June-13-15 2:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port 
Blueprint

Hongbin,

Good use case. I suggest that we add a parameter to magnum bay-create that will 
allow the user to override the baymodel.apiserver_port attribute with a new 
value that will end up in the bay.api_address attribute as part of the URL. 
This approach assumes implementation of the magnum-api-address-url blueprint. 
This way we solve for the use case, and don't need a new attribute on the bay 
resource that requires users to concatenate multiple attribute values in order 
to get a native client tool working.
Adrian

On Jun 12, 2015, at 6:32 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
A use case could be the cloud is behind a proxy and the API port is filtered. 
In this case, users have to start the service in an alternative port.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: June-12-15 2:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port 
Blueprint

Thanks for raising this for discussion. Although I do think that the API port 
number should be expressed in a URL that the local client can immediately use 
for connecting a native client to the API, I am not convinced that this needs 
to be a separate attribute on the Bay resource.

In general, I think it’s a reasonable assumption that nova instances will have 
unique IP addresses assigned to them (public or private is not an issue here) 
so unique port numbers for running the API services on alternate ports seems 
like it may not be needed. I’d like to have input from at least one Magnum user 
explaining an actual use case for this feature before accepting this blueprint.

One possible workaround for this would be to instruct those who want to run 
nonstandard ports to copy the heat template, and specify a new heat template as 
an alternate when creating the BayModel, which can implement the port number as 
a parameter. If we learn that this happens a lot, we should revisit this as a 
feature in Magnum rather than allowing it through an external workaround.
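The workaround described above could look roughly like this HOT fragment. This is a hedged sketch only: the parameter name, default, and the way the port is injected into the master config are illustrative assumptions, not the actual Magnum templates:

```yaml
heat_template_version: 2014-10-16

parameters:
  apiserver_port:
    type: number
    default: 8080
    description: TCP port the COE API server listens on (illustrative)

resources:
  master_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config:
        str_replace:
          template: |
            #!/bin/sh
            # Hypothetical: point the API server at the chosen port.
            sed -i "s/--secure-port=.*/--secure-port=$PORT/" /etc/kubernetes/apiserver
          params:
            $PORT: {get_param: apiserver_port}
```

A user would then pass `apiserver_port` when creating the BayModel from the copied template; if that pattern shows up often, it argues for the generic key/value parameter pass-through mentioned below.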

I’d like to have a generic feature that allows for arbitrary key/value pairs 
for parameters and values to be passed to the heat stack create call so that 
this, and other values can be passed in using the standard magnum client and 
API without further modification. I’m going to look to see if we have a BP for 
this, and if not, I will make one.

Adrian



On Jun 11, 2015, at 6:05 PM, Kai Qiang Wu(Kennan) <wk...@cn.ibm.com> wrote:

If I understand the bp correctly,

the apiserver_port is for public access or the API call service endpoint. If that 
is the case, users would use that info as

http(s)://<api_address>:<apiserver_port>

so the port is good information for users.


If we believe the above assumption is right, then:

1) Some users do not need to change the port, since the Heat template has a 
default hard-coded port.

2) If some users want to change the port (through Heat, we can do that), we need 
to add such flexibility for users.
That is what bp 
https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port tries to 
solve.

It depends on how end users use magnum.

It depends on how end-users use with magnum.


More inputs about this are welcome. If many of us think it is not necessary to 
customize the ports, we can drop the bp.


Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Jay Lau <jay.lau@gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: 06/11/2015 01:17 PM
Subject: Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port 
Blueprint




I think that we have a similar bp before: 
https://blueprints.launchpad.net/magnum/+spec/override-native-rest-port

I had some discussion with Larsks before; it seems that it does not make much 
sense to customize this port, as the kubernetes/swarm/mesos cluster will be 
created by heat and end users do not need to care about the ports. Different 
kubernetes/swarm/mesos clusters will have different IP addresses, so there will 
be no port conflict.

2015-06-11 9:35 GMT+08:00 Kai Qiang Wu <wk...@cn.ibm.com>:

[openstack-dev] [os-ansible-deployment] Core team nomination

2015-06-13 Thread Kevin Carter
Hello,

I would like to nominate Ian Cordasco (sigmavirus24 on IRC) for the 
os-ansible-deployment-core team. Ian has been contributing to the OSAD project 
for some time now and has always had quality reviews[0], he's landing great 
patches[1], he's almost always in the meetings, and is simply an amazing person 
to work with. His open-source-first attitude, security mindset, and willingness 
to work cross-project are invaluable and will only stand to better the project 
and the deployers who consume it.

Please respond with your +1/-1 and any other concerns.

As a reminder, we are using the voting process outlined at [ 
https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess ] to add 
members to our core team.

Thank you.

-- 

Kevin Carter

[0] 
https://review.openstack.org/#/q/status:closed+owner:%22Ian+Cordasco%22+project:stackforge/os-ansible-deployment,n,z
[1] 
https://review.openstack.org/#/q/status:merged+owner:%22Ian+Cordasco%22+project:stackforge/os-ansible-deployment,n,z



Re: [openstack-dev] [keystone][puppet] Federation using ipsilon

2015-06-13 Thread Rich Megginson

On 06/12/2015 07:30 PM, Adam Young wrote:

On 06/12/2015 04:53 PM, Rich Megginson wrote:
I've done a first pass of setting up a puppet module to configure 
Keystone to use ipsilon for federation, using 
https://github.com/richm/puppet-apache-auth-mods, and a version of 
ipsilon-client-install with patches 
https://fedorahosted.org/ipsilon/ticket/141 and 
https://fedorahosted.org/ipsilon/ticket/142, and a heavily modified 
version of the ipa/rdo federation setup scripts - 
https://github.com/richm/rdo-vm-factory.


I would like some feedback from the Keystone and puppet folks about 
this approach.




I take it this is not WebSSO yet, but only Federation.

Around here...

https://github.com/richm/puppet-apache-auth-mods/blob/master/manifests/keystone_ipsilon.pp#L64

You would need to have the trusted dashboard, etc.


Right.  In order to do websso, there is some additional setup that needs 
to be done in the apache conf for the keystone wsgi virtual hosts (which 
is in the rdo-federation-setup script).  There is also some additional 
configuration to do to Horizon to enable federated auth and/or websso.
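A hedged sketch of what those extra websso bits look like on the Keystone side. Option names follow the Kilo-era federation config as best I recall them, and the dashboard URL and template path are illustrative, so treat this as an assumption to verify rather than a working config:

```ini
# /etc/keystone/keystone.conf (illustrative sketch)
[federation]
trusted_dashboard = https://horizon.example.com/dashboard/auth/websso/
sso_callback_template = /etc/keystone/sso_callback_template.html
```

On the Horizon side the counterpart is enabling federated auth in local_settings.py (e.g. setting WEBSSO_ENABLED), which is the additional Horizon configuration mentioned above.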





But I think that is what you intend.


Right.  What I've done so far is only the first step.


However, without an ECP setup, we really have no way to test it.






Re: [openstack-dev] [nova] How to microversion API code which is not in API layer

2015-06-13 Thread Devananda van der Veen
Yes. A new query parameter is a change in the contract, regardless of where
the code change lies.

-Deva
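A hedged sketch of what gating that contract change looks like in practice (this is not actual nova code, and the version numbers and filter names are made up for illustration): the new query parameter is only honored for clients that request a microversion at or above the one that introduced it, so older clients keep the old contract even though the underlying change lives in the db layer.

```python
def parse_filters(query_params, client_version):
    """Accept only the filters valid at the client's requested microversion."""
    base_filters = {'name', 'status', 'host'}
    if client_version >= (2, 5):  # hypothetical version that added the param
        base_filters.add('changes-since')
    unknown = set(query_params) - base_filters
    if unknown:
        raise ValueError('unsupported filter(s): %s' % sorted(unknown))
    return {k: v for k, v in query_params.items() if k in base_filters}

# Old clients never see the new behavior; new clients opt in:
print(parse_filters({'name': 'vm1'}, (2, 1)))         # {'name': 'vm1'}
print(parse_filters({'changes-since': 'x'}, (2, 5)))  # {'changes-since': 'x'}
```

The point is that the microversion check lives at the API layer even when the functional change is in the db layer, which is why the quoted change still needs a microversion.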
 On Jun 12, 2015 6:20 PM, "Chen CH Ji"  wrote:

> Hi
>  We have [1] in the db layer and it's directly used by the API
> layer; the filters come directly from the client's input.
>  In this case, when doing [2] or similar changes, do we need
> to consider microversion usage when we change options?
>  Thanks
>
> [1]
> https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L4440
> [2] https://review.openstack.org/#/c/144883
>
> Best Regards!
>
> Kevin (Chen) Ji 纪 晨
>
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
> Phone: +86-10-82454158
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> Beijing 100193, PRC
>


[openstack-dev] Where are all the research papers?

2015-06-13 Thread Joshua Harlow
Out of curiosity, is there any known listing of papers (ACM style or 
otherwise) on openstack, or evaluations of it, published on a wiki or 
elsewhere? Especially as the number of projects increases, I would expect 
more articles and papers to be published (all of them would be an 
interesting read...).


I did find one, but I'm starting to wonder if there are more, and if there 
are not, what is stopping people from writing more? (Are we not doing 
enough outreach to the people who would write papers?)


'On fault resilience of OpenStack'

https://kabru.eecs.umich.edu/papers/publications/2013/socc2013_ju.pdf

It'd be neat to somehow get more published articles about openstack 
coming from universities (even if the articles are about bugs, 
like 'What Bugs Live in the Cloud?' @ 
http://ucare.cs.uchicago.edu/pdf/socc14-cbs.pdf). Maybe we should also 
feature them on the openstack blog when/if they get published, as a 
show of good faith to the article creator/researcher/other...


Anyone have any thoughts on this?

-Josh
