Re: [openstack-dev] [oslo] instance lock and class lock

2014-09-04 Thread Zang MingJie
Does it require a blueprint or a bug report to submit an oslo.concurrency patch?


On Wed, Sep 3, 2014 at 7:15 PM, Davanum Srinivas dava...@gmail.com wrote:

 Zang MingJie,

 Can you please consider submitting a review against oslo.concurrency?


 http://git.openstack.org/cgit/openstack/oslo.concurrency/tree/oslo/concurrency

 That will help everyone who will adopt/use that library.

 thanks,
 dims

 On Wed, Sep 3, 2014 at 1:45 AM, Zang MingJie zealot0...@gmail.com wrote:
  Hi all:
 
  currently oslo provides a lock utility, but unlike other languages, it is a
  class-level lock, which prevents all instances from calling the function
  concurrently. IMO, oslo should provide an instance-level lock that only
  locks the current instance, to gain better concurrency.
 
  I have written a lock in a patch[1]; please consider picking it into oslo.
 
  [1]
 
 https://review.openstack.org/#/c/114154/4/neutron/openstack/common/lockutils.py
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Davanum Srinivas :: http://davanum.wordpress.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] instance lock and class lock

2014-09-02 Thread Zang MingJie
Hi all:

currently oslo provides a lock utility, but unlike other languages, it is a
class-level lock, which prevents all instances from calling the function
concurrently. IMO, oslo should provide an instance-level lock that only locks
the current instance, to gain better concurrency.

I have written a lock in a patch[1]; please consider picking it into oslo.

[1]
https://review.openstack.org/#/c/114154/4/neutron/openstack/common/lockutils.py
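
To illustrate the distinction, a minimal sketch using plain threading
(illustrative only, not the oslo API; all names here are made up):

import threading

_class_lock = threading.Lock()  # one lock shared by every instance

class Worker(object):
    def __init__(self):
        self._instance_lock = threading.Lock()  # one lock per instance

    def class_locked_call(self):
        # serializes calls across ALL Worker instances
        with _class_lock:
            pass  # critical section

    def instance_locked_call(self):
        # serializes calls on THIS instance only; other instances proceed
        with self._instance_lock:
            pass  # critical section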
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] [neutron] designate and neutron integration

2014-08-25 Thread Zang MingJie
I don't like the idea of using bind9 views to split networks, for the
following reasons:

designate may not know, or may find it hard to know, the router's public address
a router may not exist for some isolated networks
there are currently no routes in our dhcp namespace

I suggest running one bind9 instance for each network that has domain support,
and introducing a designate-bind9-agent to control the bind9 instances.

     +-----------+-----------+------  network
     |           |           |
+----------+ +---------+ +-------+
| instance | | dnsmasq | | bind9 |
+----------+ +---------+ +-------+
                  |           |
          +------------+ +-----------+
          | dhcp agent | | dns agent |
          +------------+ +-----------+
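
A possible shape for the control path of such a designate-bind9-agent
(purely illustrative; it assumes each network's bind9 runs inside that
network's namespace, like the dhcp agent's dnsmasq, and that the agent
drives it with rndc):

import subprocess

def reload_zone(netns, zone):
    # ask the per-network bind9 to pick up zone data pushed by designate
    subprocess.check_call(["ip", "netns", "exec", netns,
                           "rndc", "reload", zone])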

On Tue, Aug 12, 2014 at 1:41 AM, Hayes, Graham graham.ha...@hp.com wrote:

 kazuhiro MIYASHITA,

 As designate progresses with server pools, we aim to have support for a
 'private' dns server, that could run within a neutron network - is that
 the level of integration you were referring to?

 That is, for the time being, a long term goal, and not covered by Carl's
 Kilo blueprint.

 We talked with people from both Neutron and Nova in Atlanta, and
 worked out the first steps for designate / neutron integration (auto
 provisioning of records).

 For that level of integration, we are assuming that a neutron router
 will be involved in DNS queries within a network.

 Long term I would prefer to see a 'private pool' connecting directly to
 the Network2 (like any other service VM (LBaaS etc)) and have dnsmasq
 pass on only records hosted by that 'private pool' to designate.

 This is all yet to be fleshed out, so I am open to suggestions. It
 requires that we complete server pools, and that work is only just
 starting (it was the main focus of our mid-cycle 2 weeks ago).

 Graham

 On Mon, 2014-08-11 at 11:02 -0600, Carl Baldwin wrote:
  kazuhiro MIYASHITA,
 
  I have done a lot of thinking about this.  I have a blueprint on hold
  until Kilo for Neutron/Designate integration [1].
 
  However, my blueprint doesn't quite address what you are going after
  here.  An assumption that I have made is that Designate is an external
  or internet facing service so a Neutron router needs to be in the
  datapath to carry requests from dnsmasq to an external network.  The
  advantage of this is that it is how Neutron works today so there is no
  new development needed.
 
  Could you elaborate on the advantages of connecting dnsmasq directly
  to the external network where Designate will be available?
 
  Carl
 
  [1] https://review.openstack.org/#/c/88624/
 
  On Mon, Aug 11, 2014 at 7:51 AM, Miyashita, Kazuhiro
  miy...@jp.fujitsu.com wrote:
   Hi,
  
   I want to ask about neutron and designate integration.
   I think it is better if dnsmasq forwards DNS requests from instances to
   designate.
  
   +-------------------------+
   | DNS server (designate)  |
   +-------------------------+
              |
   -----------+--------+----- Network1
                       |
                  +---------+
                  | dnsmasq |
                  +---------+
                       |
   --------+-----------+----- Network2
           |
      +----------+
      | instance |
      +----------+
  
   Because it's simpler than having a virtual router connect Network1 and
   Network2. If a router connects the networks, the instance has to know
   where the DNS server is, which is complicated. dnsmasq ordinarily returns
   its own IP address as the DNS server in the DHCP reply, so I think it is
   good if dnsmasq becomes like a gateway to designate.
  
   But I can't connect dnsmasq to Network1, because of today's neutron
   design.
  
   Question:
     Does the designate design team have a plan for integration such as the
     above, or another integration design?
  
   *1: Network1 and Network2 are deployed by neutron.
   *2: neutron deploys dnsmasq as a dhcp server.
   dnsmasq can forward DNS request.
  
   Thanks,
  
   kazuhiro MIYASHITA
  
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] l2pop problems

2014-08-05 Thread Zang MingJie
Hi Mathieu:

We have deployed the new l2pop described in the previous mail in our
environment, and it works pretty well. It solved the timing problem, and
also reduced lots of l2pop RPC calls. I'm going to file a blueprint to
propose the changes.

On Fri, Jul 18, 2014 at 10:26 PM, Mathieu Rohon mathieu.ro...@gmail.com wrote:
 Hi Zang,

 On Wed, Jul 16, 2014 at 4:43 PM, Zang MingJie zealot0...@gmail.com wrote:
 Hi, all:

 While resolving the 'ovs restart rebuild br-tun flows' bug[1], we found
 several l2pop problems:

 1. L2pop depends on agent_boot_time to decide whether to send all
 port information or not, but agent_boot_time is unreliable: for
 example, if the service receives the port-up message before the agent
 status report, the agent will never receive any ports on other agents.

 you're right, there is a race condition here: if the agent has more than
 one port on the same network and sends its
 update_device_up() on every port before it sends its report_state(),
 it won't receive fdb entries concerning these networks. Is that the race
 you are mentioning above?
 Since the report_state is done in a dedicated greenthread, which is
 launched before the greenthread that manages ovsdb_monitor, the state
 of the agent should be updated before the agent becomes aware of its
 ports and sends get_device_details()/update_device_up(), am I wrong?
 So, after a restart of an agent, agent_uptime() should be less
 than the agent_boot_time configured by default in the conf when the
 agent sends its first update_device_up(); the l2pop MD will then be
 aware of this restart and trigger the cast of all fdb entries to the
 restarted agent.

 But I agree that it might rely on eventlet thread management and on an
 agent_boot_time that can be misconfigured by the provider.

 2. If openvswitch restarts, all flows are lost, including all
 l2pop flows, and the agent is unable to fetch or recreate them.

 To resolve the problems, I'm suggesting some changes:

 1. Because agent_boot_time is unreliable, the service can't decide
 whether to send the flooding entry or not. But the agent can build up
 the flooding entries from the unicast entries; this has already been
 implemented[2]

 2. Create an RPC from agent to service which fetches all fdb entries;
 the agent calls the RPC in `provision_local_vlan`, before setting up any
 port.[3]

 After these changes, the l2pop service part becomes simpler and more
 robust, with mainly two functions: first, return all fdb entries at once
 when requested; second, broadcast single fdb entries when a port goes
 up/down.

 That's an implementation that we had been thinking about during the
 l2pop development.
 Our purpose was to minimize RPC calls. But if this implementation is
 buggy due to uncontrolled thread ordering and/or bad usage of the
 agent_boot_time parameter, it's worth investigating your proposal [3].
 However, I don't get why [3] depends on [2]. Couldn't we have a
 network_sync() sent by the agent during provision_local_vlan() which
 would reconfigure ovs when the agent and/or ovs restarts?

Actually, [3] doesn't strictly depend on [2]. We have encountered l2pop
problems several times where the unicast entries were correct but the
broadcast entries failed, so we decided to completely ignore the broadcast
entries in the RPC, deal only with unicast entries, and use the unicast
entries to build the broadcast rules.



 [1] https://bugs.launchpad.net/neutron/+bug/1332450
 [2] https://review.openstack.org/#/c/101581/
 [3] https://review.openstack.org/#/c/107409/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] l2pop problems

2014-07-16 Thread Zang MingJie
Hi, all:

While resolving the 'ovs restart rebuild br-tun flows' bug[1], we found
several l2pop problems:

1. L2pop depends on agent_boot_time to decide whether to send all
port information or not, but agent_boot_time is unreliable: for
example, if the service receives the port-up message before the agent
status report, the agent will never receive any ports on other agents.

2. If openvswitch restarts, all flows are lost, including all
l2pop flows, and the agent is unable to fetch or recreate them.

To resolve the problems, I'm suggesting some changes:

1. Because agent_boot_time is unreliable, the service can't decide
whether to send the flooding entry or not. But the agent can build up the
flooding entries from the unicast entries; this has already been
implemented[2] (see the sketch below)

2. Create an RPC from agent to service which fetches all fdb entries; the
agent calls the RPC in `provision_local_vlan`, before setting up any
port.[3]

After these changes, the l2pop service part becomes simpler and more
robust, with mainly two functions: first, return all fdb entries at once
when requested; second, broadcast single fdb entries when a port goes
up/down.

[1] https://bugs.launchpad.net/neutron/+bug/1332450
[2] https://review.openstack.org/#/c/101581/
[3] https://review.openstack.org/#/c/107409/
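
For point 1, a minimal sketch of deriving the flooding entries from the
unicast entries (the data layout here is an assumption for illustration,
not the actual agent structures):

def build_flooding_entries(unicast_fdb):
    """unicast_fdb: {network_id: {mac: tunnel_ip}} learned via l2pop.

    The flooding (broadcast) entry for a network is simply the set of
    all remote tunnel endpoints hosting at least one port on it, so it
    can always be rebuilt locally from the unicast entries.
    """
    return dict((net, sorted(set(macs.values())))
                for net, macs in unicast_fdb.items())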

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev][neutron] can't notify the broadcast fdb entries

2014-07-09 Thread Zang MingJie
Hi:

We have encountered the same problem here: some of our ovs-agents
haven't received any fdb entries after a restart.

To solve the problem I'm going to add an RPC call to the l2pop mechanism
driver; when triggered, l2pop sends all fdb entries to the agent.
The agent calls the driver while starting.

On Wed, Jul 9, 2014 at 8:56 AM, Yangxurong yangxur...@huawei.com wrote:
 Hi, folks,



 Now, in order to reduce the traffic from broadcasting fdb entries, we
 apply some restrictions:



 if agent_active_ports == 1 or (
         self.get_agent_uptime(agent) < cfg.CONF.l2pop.agent_boot_time):
     # First port activated on current agent in this network,
     # we have to provide it with the whole list of fdb entries



  but this restriction brings about a new issue (fdb entries may not be
 notified) in the following cases:

 1.   Multiple neutron servers are deployed and ports are bulk-created in
 short order, so agent_active_ports will be more than 1; fdb entries are
 then not notified, which leads to failure to establish the tunnel port.

 2.   The ovs-agent's boot time exceeds cfg.CONF.l2pop.agent_boot_time, so
 fdb entries are not notified, which likewise leads to failure to establish
 the tunnel port.



 IMO, these restrictions are not perfect.

 Any good thoughts? Looking forward to your response.



 Thanks,

 XuRong Yang




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-07-03 Thread Zang MingJie
Although SNAT DVR has some trade-offs, I still think it is necessary.
Here are the pros and cons for consideration:

pros:

saves west-east bandwidth
high availability (distributed, no single point of failure)

cons:

wastes public IPs (one IP per compute node vs one IP per l3-agent, if
double-SNAT is implemented)
different tenants may share SNAT source IPs
compute nodes require a public interface

Under certain deployments the cons may not cause problems, so can we
provide SNAT DVR as an alternative option, fully controlled by the cloud
admin? The admin chooses whether to use it or not.
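
To make the IP cost concrete (illustrative numbers only): in a cloud with
100 compute nodes, SNAT DVR with double-SNAT consumes one public IP per
compute node (100), centralized double-SNAT consumes one per l3 agent (a
handful), and per-VM floating IPs would consume one per VM.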

 To resolve the problem, we use double-SNAT:

 first, set up one namespace for each router and SNAT the tenant IP ranges
 to a separate intermediate range, say 169.254.255.0/24;

 then, SNAT from 169.254.255.0/24 to the public network.

 We are already using this method and have saved tons of IPs in our
 deployment; only one public IP is required per router agent

 Functionally it could work, but it breaks the existing normal OAM pattern,
 which expects VMs from one tenant to share a public IP while sharing no IP
 with other tenants. As far as I know, at least some customers don't accept
 this; they find it very strange that VMs on different hosts appear as
 different public IPs.

 In fact I seriously doubt the value of N-S distribution in a real
 commercialized production environment, including FIP. There are many
 things that traditional N-S central nodes need to control: security,
 auditing, logging, and so on; it is not simple forwarding. We need a
 trade-off between performance and the policy control model:

 1. N-S traffic is usually much less than W-E traffic; do we really need to
 distribute N-S traffic in addition to W-E traffic?
 2. With NFV progress like Intel DPDK, we can build a very cost-effective
 service application on a commodity x86 server (simple SNAT at 10 Gbps per
 core at average Internet packet length)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-26 Thread Zang MingJie
On Wed, Jun 25, 2014 at 10:29 PM, McCann, Jack jack.mcc...@hp.com wrote:
  If every compute node is
  assigned a public ip, is it technically able to improve SNAT packets
  w/o going through the network node ?

 It is technically possible to implement default SNAT at the compute node.

 One approach would be to use a single IP address per compute node as a
 default SNAT address shared by all VMs on that compute node.  While this
 optimizes for number of external IPs consumed per compute node, the downside
 is having VMs from different tenants sharing the same default SNAT IP address
 and conntrack table.  That downside may be acceptable for some deployments,
 but it is not acceptable in others.

To resolve the problem, we use double-SNAT:

first, set up one namespace for each router and SNAT the tenant IP ranges
to a separate intermediate range, say 169.254.255.0/24;

then, SNAT from 169.254.255.0/24 to the public network.

We are already using this method and have saved tons of IPs in our
deployment; only one public IP is required per router agent.
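
A minimal sketch of the two stages (all namespace names and addresses here
are made up for illustration): stage 1 runs in the per-router namespace and
maps the tenant range onto a disjoint link-local range; stage 2 runs once
per node and maps that range onto the node's single public IP.

import subprocess

def add_snat(netns, src_cidr, to_ip):
    # append a source-NAT rule inside the given network namespace
    subprocess.check_call(
        ["ip", "netns", "exec", netns, "iptables", "-t", "nat",
         "-A", "POSTROUTING", "-s", src_cidr, "-j", "SNAT",
         "--to-source", to_ip])

add_snat("qrouter-demo", "10.0.0.0/24", "169.254.255.10")     # stage 1
add_snat("snat-external", "169.254.255.0/24", "203.0.113.5")  # stage 2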


 Another approach would be to use a single IP address per router per compute
 node.  This avoids the multi-tenant issue mentioned above, at the cost of
 consuming more IP addresses, potentially one default SNAT IP address for each
 VM on the compute server (which is the case when every VM on the compute node
 is from a different tenant and/or using a different router).  At that point
 you might as well give each VM a floating IP.

 Hence the approach taken with the initial DVR implementation is to keep
 default SNAT as a centralized service.

 - Jack

 -Original Message-
 From: Zang MingJie [mailto:zealot0...@gmail.com]
 Sent: Wednesday, June 25, 2014 6:34 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] DVR SNAT shortcut

 On Wed, Jun 25, 2014 at 5:42 PM, Yongsheng Gong gong...@unitedstack.com 
 wrote:
  Hi,
  for each compute node to have SNAT to the Internet, I think we have these
  drawbacks:
  1. SNAT is done in the router, so each router will have to consume one
  public IP on each compute node, which is money.

 SNAT can save more IPs than are wasted on floating IPs

  2. for each compute node to go out to the Internet, the compute node will
  need one more NIC connected to a physical switch, which is money too
 

 Floating IPs also need a public NIC on br-ex. Besides, we can use a
 separate VLAN to handle this network, so it is not a problem

  So personally, I like this design:
   floating IPs and 1:N SNAT still use the current network nodes, which
  will have an HA solution enabled, and we can have many l3 agents to host
  routers; but normal east/west traffic across compute nodes can use DVR.

 BTW, is the HA implementation still active? I haven't seen it touched
 for a while

 
  yong sheng gong
 
 
  On Wed, Jun 25, 2014 at 4:30 PM, Zang MingJie zealot0...@gmail.com wrote:
 
  Hi:
 
  In the current DVR design, SNAT is north/south traffic, but the packets
  have to go west/east through the network node. If every compute node is
  assigned a public IP, is it technically possible to handle SNAT packets
  without going through the network node?

  SNAT, versus floating IPs, can save tons of public IPs, in trade for
  introducing a single point of failure and limiting the bandwidth to that
  of the network node. If the SNAT performance problem can be solved, I'll
  encourage people to use SNAT over floating IPs, unless the VM is serving
  a public service
 
  --
  Zang MingJie
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-25 Thread Zang MingJie
Hi:

In the current DVR design, SNAT is north/south traffic, but the packets
have to go west/east through the network node. If every compute node is
assigned a public IP, is it technically possible to handle SNAT packets
without going through the network node?

SNAT, versus floating IPs, can save tons of public IPs, in trade for
introducing a single point of failure and limiting the bandwidth to that
of the network node. If the SNAT performance problem can be solved, I'll
encourage people to use SNAT over floating IPs, unless the VM is serving
a public service

--
Zang MingJie

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-25 Thread Zang MingJie
On Wed, Jun 25, 2014 at 5:42 PM, Yongsheng Gong gong...@unitedstack.com wrote:
 Hi,
 for each compute node to have SNAT to the Internet, I think we have these
 drawbacks:
 1. SNAT is done in the router, so each router will have to consume one
 public IP on each compute node, which is money.

SNAT can save more IPs than are wasted on floating IPs

 2. for each compute node to go out to the Internet, the compute node will
 need one more NIC connected to a physical switch, which is money too


Floating IPs also need a public NIC on br-ex. Besides, we can use a
separate VLAN to handle this network, so it is not a problem

 So personally, I like this design:
  floating IPs and 1:N SNAT still use the current network nodes, which will
 have an HA solution enabled, and we can have many l3 agents to host
 routers; but normal east/west traffic across compute nodes can use DVR.

BTW, is the HA implementation still active? I haven't seen it touched
for a while


 yong sheng gong


 On Wed, Jun 25, 2014 at 4:30 PM, Zang MingJie zealot0...@gmail.com wrote:

 Hi:

 In the current DVR design, SNAT is north/south traffic, but the packets
 have to go west/east through the network node. If every compute node is
 assigned a public IP, is it technically possible to handle SNAT packets
 without going through the network node?

 SNAT, versus floating IPs, can save tons of public IPs, in trade for
 introducing a single point of failure and limiting the bandwidth to that
 of the network node. If the SNAT performance problem can be solved, I'll
 encourage people to use SNAT over floating IPs, unless the VM is serving
 a public service

 --
 Zang MingJie

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-19 Thread Zang MingJie
Hi:

I don't like the idea of ResourceDriver and AgentDriver. I suggest
using a single worker thread to manage all the underlying setup, so a
driver should do nothing other than fire an update event to the worker.

The worker thread may look like this:

# the only variable storing all local state that survives between
# different events, including lvm, fdb or whatever
state = {}

# loop forever
while True:
    event = ev_queue.pop()
    if not event:
        sleep()  # may be interrupted when a new event comes
        continue

    old_state = state
    new_state = event.merge_state(state)

    if event.is_ovsdb_changed():
        if event.is_tunnel_changed():
            setup_tunnel(new_state, old_state, event)
        if event.is_port_tags_changed():
            setup_port_tags(new_state, old_state, event)

    if event.is_flow_changed():
        if event.is_flow_table_1_changed():
            setup_flow_table_1(new_state, old_state, event)
        if event.is_flow_table_2_changed():
            setup_flow_table_2(new_state, old_state, event)
        if event.is_flow_table_3_changed():
            setup_flow_table_3(new_state, old_state, event)
        if event.is_flow_table_4_changed():
            setup_flow_table_4(new_state, old_state, event)

    if event.is_iptable_changed():
        if event.is_iptable_nat_changed():
            setup_iptable_nat(new_state, old_state, event)
        if event.is_iptable_filter_changed():
            setup_iptable_filter(new_state, old_state, event)

    # commit the merged state only after the setup calls succeed
    state = new_state

When any part has been changed by an event, the corresponding setup_xxx
function rebuilds the whole part, then uses a restore mechanism like
`iptables-restore` or `ovs-ofctl replace-flows` to reset that part
atomically.
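
For the flow-table case, a hedged sketch (the bridge name and the
rendering of state into flow specs are assumptions for illustration):

import subprocess
import tempfile

def replace_flows(bridge, flow_lines):
    """Atomically converge the bridge to the full desired flow set.

    flow_lines: the complete desired flows, one ovs-ofctl flow spec per
    line, rebuilt from the worker's state.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".flows") as f:
        f.write("\n".join(flow_lines) + "\n")
        f.flush()
        # replace-flows diffs the file against the current table and only
        # adds/deletes/updates the differences
        subprocess.check_call(["ovs-ofctl", "replace-flows", bridge, f.name])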

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] SSL VPN Implemenatation

2014-06-17 Thread Zang MingJie
On Thu, May 29, 2014 at 6:57 AM, Nachi Ueno na...@ntti3.com wrote:
 Hi Zang

 Since the SSL-VPN for Juno bp is approved in neutron-specs,
 I would like to restart this work.
 Could you share your code if possible?
 Also, let's discuss how we can collaborate here.

Currently we are running the havana branch; I'm trying to rebase it onto master


 Best
 Nachi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Zang MingJie
Hi:

Awesome! Currently we are suffering from lots of bugs in ovs-agent, and
we also intend to rebuild a more stable, flexible agent.

From our experience with ovs-agent bugs, I think concurrency is also a
very important problem: the agent gets lots of events from different
greenlets (the RPC, the ovs monitor, the main loop).
I'd suggest serializing all events into a queue, then processing them in
a dedicated thread. The thread checks the events one by one, in order,
resolves what has changed, and then applies the corresponding changes.
If any error occurs in the thread, discard the event currently being
processed and issue a fresh-start event, which resets everything and
then applies the correct settings.

The threading model is so important, and may prevent tons of bugs in
future development, that we should describe it clearly in the
architecture.
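
A minimal sketch of the serialization idea (assuming eventlet, which the
agent already uses; all names here are illustrative):

from eventlet.queue import Queue

event_queue = Queue()

def enqueue(event):
    # called from the rpc / ovs-monitor / main-loop greenlets
    event_queue.put(event)

def process_events(handle, full_resync):
    # the single dedicated consumer: events are applied strictly in order
    while True:
        event = event_queue.get()
        try:
            handle(event)
        except Exception:
            # on any failure, discard the current event and fall back to
            # a fresh start that resets everything
            full_resync()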


On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi m...@us.ibm.com wrote:
 Following the discussions in the ML2 subgroup weekly meetings, I have added
 more information on the etherpad [1] describing the proposed architecture
 for modular L2 agents. I have also posted some code fragments at [2]
 sketching the implementation of the proposed architecture. Please have a
 look when you get a chance and let us know if you have any comments.

 [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
 [2] https://review.openstack.org/#/c/99187/


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Monitoring agent

2014-05-15 Thread Zang MingJie
Hi All:

I want to know whether there is any way to continuously monitor port
connectivity and report dead ports via SNMP trap or whatever.

Currently I have the neutron-debug utility, which can help me check port
status by hand, but it would be better to have an agent do the job
automatically, and also collect port statistics for analysis.

Looking for suggestions

Regards

--
Zang MingJie

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert implementation for LBaaS and VPN

2014-05-07 Thread Zang MingJie
+1 to implementing a modular framework where the user can choose whether
to use Barbican or a SQL DB

On Fri, May 2, 2014 at 4:28 AM, John Wood john.w...@rackspace.com wrote:
 Hello Samuel,

 Just noting that the link below shows current-state Barbican. We are in the
 process of designing SSL certificate support for Barbican via blueprints
 such as this one:
 https://wiki.openstack.org/wiki/Barbican/Blueprints/ssl-certificates
 We intend to discuss this feature in Atlanta to enable coding in earnest for
 Juno.

 The Container resource is intended to capture/store the final certificate
 details.

 Thanks,
 John


 
 From: Samuel Bercovici [samu...@radware.com]
 Sent: Thursday, May 01, 2014 12:50 PM
 To: OpenStack Development Mailing List (not for usage questions);
 os.v...@gmail.com
 Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert
 implementation for LBaaS and VPN

 Hi Vijay,



 I have looked at the Barbican APIs –
 https://github.com/cloudkeep/barbican/wiki/Application-Programming-Interface

 I was not able to see a “native” API that will accept an SSL certificate
 (private key, public key, CSR, etc.) and store it.

 We can either store the whole certificate as a single file as a secret or
 use a container and store all the certificate parts as secrets.



 I think that having LBaaS reference certificates as IDs using some service
 is the right way to go, so this might be achieved by either:

 1.   Adding to Barbican an API to store / generate certificates

 2.   Creating a new “module”, which might start by being hosted in neutron
 or keystone, that will allow managing certificates and will use Barbican
 behind the scenes to store them.

 3.   Deciding on a container structure to use in Barbican but implementing
 the way to access and arrange it as a neutron library



 Was any decision made on how to proceed?



 Regards,

 -Sam.









 From: Vijay B [mailto:os.v...@gmail.com]
 Sent: Wednesday, April 30, 2014 3:24 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron] [LBaaS][VPN] SSL cert implementation for
 LBaaS and VPN



 Hi,



 It looks like there are areas of common work across multiple efforts that
 are proceeding in parallel to implement SSL for LBaaS as well as VPN SSL
 in neutron.



 Two relevant efforts are listed below:





 https://review.openstack.org/#/c/74031/
 (https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL)



 https://review.openstack.org/#/c/58897/
 (https://blueprints.launchpad.net/openstack/?searchtext=neutron-ssl-vpn)







 Both VPN and LBaaS will use SSL certificates and keys, and this makes it
 better to implement SSL entities as first class citizens in the OS world.
 So, three points need to be discussed here:



 1. The VPN SSL implementation above is putting the SSL cert content in a
 mapping table, instead of maintaining certs separately and referencing them
 using IDs. The LBaaS implementation stores certificates in a separate table,
 but implements the necessary extensions and logic under LBaaS. We propose
 that both these implementations move away from this and refer to SSL
 entities using IDs, and that the SSL entities themselves are implemented as
 their own resources, serviced either by a core plugin or a new SSL plugin
 (assuming neutron; please also see point 3 below).



 2. The actual data store where the certs and keys are stored should be
 configurable at least globally, such that the SSL plugin code will
 singularly refer to that store alone when working with the SSL entities. The
 data store candidates currently are Barbican and a sql db. Each should have
 a separate backend driver, along with the required config values. If further
 evaluation of Barbican shows that it fits all SSL needs, we should make it a
 priority over a sqldb driver.



 3. Where should the primary entries for the SSL entities be stored? While
 the actual certs themselves will reside on Barbican or SQLdb, the entities
 themselves are currently being implemented in Neutron since they are being
 used/referenced there. However, we feel that implementing them in keystone
 would be most appropriate. We could also follow a federated model where a
 subset of keys can reside on another service such as Neutron. We are fine
 with starting an initial implementation in neutron, in a modular manner, and
 move it later to keystone.





 Please provide your inputs on this.





 Thanks,

 Regards,

 Vijay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] SSL VPN Implemenatation

2014-04-29 Thread Zang MingJie
Hi all:

Currently I'm working on SSL VPN, based on patchsets by Nachi[1] and Rajesh[2].

There are security issues pointed out by Mark: SSL private keys are
stored in plain text in the database and in the config files of the
vpn-agents. As Barbican is incubated, we can store certs and their
private keys in Barbican. But after checking the openvpn configuration
options, I don't think there is any way to avoid storing the private key
in the openvpn config files without modifying the openvpn implementation.

I have also made several changes: added an optional port field to the
sslvpn-connection table, integrated with the service plugin framework
(I'll follow the service flavor framework when it is ready), and
completed the neutronclient part. It is already deployed in our testing
environment; I'll upload my patch sooner or later.

[1] https://review.openstack.org/#/c/58897/
[2] https://review.openstack.org/#/c/70274/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Provider Framework and Flavor Framework

2014-04-17 Thread Zang MingJie
Hi Eugene:

I have several questions

1. I wonder if tags are really needed. For example, if I want an IPsec
VPN, I'll define a flavor which refers directly to the ipsec provider.
With the current design, almost all users will end up creating flavors
like this:

ipsec tags=[ipsec]
sslvpn tags=[sslvpn]

so the tags are totally useless, and I suggest replacing tags with the
provider name/uuid. It is much more straightforward and easier.

2. Currently the provider name is configured in neutron.conf:

service_provider=service_type:name:driver[:default]

The name is arbitrary; the user may set whatever he wants, and the name
is also used in the service instances stored in the database. I don't
know why we give the user the ability to name the provider: the name is
totally meaningless, and it isn't referred to anywhere else. I think each
service provider should supply an alias instead, so the user can
configure the service more flexibly:

service_provider=service_type:driver_alias[:default]

The alias can then also be used as an identifier in RPC or other places;
I don't want to see any user-configured name used as an identifier.


On Wed, Apr 16, 2014 at 5:10 AM, Eugene Nikanorov
enikano...@mirantis.com wrote:
 Hi folks,

 In Icehouse there were attempts to apply Provider Framework ('Service Type
 Framework') approach to VPN and Firewall services.
 Initially, the Provider Framework was created as a simplistic approach to
 allowing the user to choose the service implementation.
 That approach definitely didn't account for the public cloud case, where
 users should not be aware of the underlying implementation while still
 being able to request capabilities or an SLA.

 However, Provider Framework consists of two parts:
 1) API part.
 That's just 'provider' attribute of the main resource of the service plus a
 REST call to fetch available providers for a service

 2) Dispatching part
 That's a DB table that keeps mapping between resource and implementing
 provider/driver.
 With this mapping it's possible to dispatch a REST call to the particular
 driver that is implementing the service.

 As we are moving to a better API and user experience, we may want to drop
 the first part, which makes the framework non-public-cloud-friendly, but
 the second part will remain if we ever want to support more than one
 driver simultaneously.

 Flavor framework proposes choosing implementation based on capabilities, but
 the result of the choice (e.g. scheduling) is still a mapping between
 resource and the driver.
 So the second part is still needed for the Flavor Framework.

 I think it's a good time to continue the discussion on Flavor and Provider
 Frameworks.

 Some references:
 1. Flavor Framework description
 https://wiki.openstack.org/wiki/Neutron/FlavorFramework
 2. Flavor Framework PoC/example code https://review.openstack.org/#/c/83055/
 3. FWaaS integration with Provider framework:
 https://review.openstack.org/#/c/60699/
 4. VPNaaS integration with Provider framework:
 https://review.openstack.org/#/c/41827/

 I'd like to see the work on (3) and (4) continued, considering Provider
 Framework is a basis for Flavor Framework.

 Thanks,
 Eugene.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Does l2-pop sync fdb on agent start ?

2014-02-26 Thread Zang MingJie
Hi all,

I found my ovs-agent has missed some tunnels on br-tun. I have l2-pop
enabled; if some fdb entries are added while the agent is down, can they
be added back once the agent is back?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] ML2 improvement, more extensible and more modular

2013-12-04 Thread Zang MingJie
Hi, all:

I have already written a patch[1] which makes the ml2 segment more
extensible, so that segments can contain more fields than just physical
network and segmentation id, but there is still a lot of work to do to
make ml2 more extensible. Here are some of my thoughts.

First, introduce a new extension to supersede the provider extension.
Currently the provider extension only supports physical network and
segmentation id; as a result the net-create and net-show calls can't handle
any extra fields. Because we already have multi-segment support, we may
need an extension which extends the network with only one field, segments;
JSON can be used to describe the segments when accessing the API
(net-create/show). But there comes a new problem: type drivers must check
the access policy of the fields inside a segment very carefully, as there
is nowhere to enforce the access permissions other than the type driver.
The multiprovidernet extension is a good starting point, but some
modification is still required.

Second, add segment calls to the mechanism driver. There is a one-to-many
relation between a network and its segments, but it is not explicit and
hides inside the multi-segment implementation; it should be clearer and
more extensible, so people can use it wisely. I want to add some new APIs
to the mechanism manager which handle segment-related operations, e.g.
segment_create/segment_release, and separate the segment operations from
the network operations (see the sketch below).

Last, as our l2 agent (ovs-agent) only handles l2 segment operations and
does nothing with networks or subnets, I wonder if we can remove all the
network-related code from the agent implementation and only handle
segments, changing the lvm map from {network_id -> segment/ports} to
{segment_id -> segment/ports}. The goal is to make ovs-agent a pure l2
agent.

[1] https://review.openstack.org/#/c/37893/
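
A hypothetical sketch of the proposed segment calls on the mechanism
manager (the method names follow the proposal above; the signatures and
driver layout are assumptions, not existing ML2 API):

class MechanismManager(object):
    def __init__(self, drivers):
        self.drivers = drivers

    def _call_on_drivers(self, method_name, context, segment):
        for driver in self.drivers:
            getattr(driver, method_name)(context, segment)

    def segment_create(self, context, segment):
        # segment: dict carrying physical_network, segmentation_id, and
        # any extra fields a type driver chooses to expose
        self._call_on_drivers("segment_create", context, segment)

    def segment_release(self, context, segment):
        self._call_on_drivers("segment_release", context, segment)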

--
Zang MingJie
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reg : Security groups implementation using openflows in quantum ovs plugin

2013-12-03 Thread Zang MingJie
On Sat, Nov 30, 2013 at 6:32 PM, Édouard Thuleau thul...@gmail.com wrote:

 And what do you think about the performance issue I talked about?
 Do you have any thoughts on improving wildcarding to use the megaflow feature?


I have investigated a little further; here is my environment

X1 (10.0.5.1) --- OVS BR --- X2 (10.0.5.2)

I have set up several flows to make port 5000 open on X2:

$ sudo ovs-ofctl dump-flows br
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=49.672s, table=0, n_packets=7, n_bytes=496, idle_age=6, priority=256,tcp,nw_src=10.0.5.2,tp_src=5000 actions=NORMAL
 cookie=0x0, duration=29.854s, table=0, n_packets=8, n_bytes=562, idle_age=6, priority=256,tcp,nw_dst=10.0.5.2,tp_dst=5000 actions=NORMAL
 cookie=0x0, duration=2014.523s, table=0, n_packets=96, n_bytes=4032, idle_age=35, priority=512,arp actions=NORMAL
 cookie=0x0, duration=2006.462s, table=0, n_packets=51, n_bytes=4283, idle_age=40, priority=0 actions=drop

and here are the kernel flows after 2 connections have been created:

$ sudo ovs-dpctl dump-flows
skb_priority(0),in_port(8),eth(src=2e:19:44:50:9d:17,dst=ae:7f:28:4f:14:ec),eth_type(0x0800),ipv4(src=10.0.5.1/255.255.255.255,dst=10.0.5.2/255.255.255.255,proto=6/0xff,tos=0/0,ttl=64/0,frag=no/0xff),tcp(src=35789,dst=5000), packets:1, bytes:66, used:2.892s, flags:., actions:10
skb_priority(0),in_port(8),eth(src=2e:19:44:50:9d:17,dst=ae:7f:28:4f:14:ec),eth_type(0x0800),ipv4(src=10.0.5.1/255.255.255.255,dst=10.0.5.2/255.255.255.255,proto=6/0xff,tos=0/0,ttl=64/0,frag=no/0xff),tcp(src=35775,dst=5000), packets:0, bytes:0, used:never, actions:10
skb_priority(0),in_port(10),eth(src=ae:7f:28:4f:14:ec,dst=2e:19:44:50:9d:17),eth_type(0x0800),ipv4(src=10.0.5.2/255.255.255.255,dst=10.0.5.1/0.0.0.0,proto=6/0xff,tos=0/0,ttl=64/0,frag=no/0xff),tcp(src=5000/0x,dst=35789/0), packets:1, bytes:78, used:1.344s, flags:P., actions:8

conclusions:
mac-src and mac-dst can't be wildcarded, because they are used by l2
bridging and mac learning.
ip-src and port-src can't be wildcarded.
only ip-dst and port-dst can be wildcarded.

I don't know why ip-src and port-src can't be wildcarded; maybe I just hit
an ovs bug.


  Édouard.

 On Fri, Nov 29, 2013 at 1:11 PM, Zang MingJie zealot0...@gmail.com
 wrote:
  On Fri, Nov 29, 2013 at 2:25 PM, Jian Wen jian@canonical.com
 wrote:
  I don't think we can implement a stateful firewall[1] now.
 
  I don't think we need a stateful firewall; a stateless one should work
  well. If stateful conntrack is completed in the future, we can
  also benefit from it.
 
 
  Once connection tracking capability[2] is added to the Linux OVS, we
  could start to implement the ovs-firewall-driver blueprint.
 
  [1] http://en.wikipedia.org/wiki/Stateful_firewall
  [2]
 
 http://wiki.xenproject.org/wiki/Xen_Development_Projects#Add_connection_tracking_capability_to_the_Linux_OVS
 
 
  On Tue, Nov 26, 2013 at 2:23 AM, Mike Wilson geekinu...@gmail.com
 wrote:
 
  Adding Jun to this thread since gmail is failing him.
 
 
  On Tue, Nov 19, 2013 at 10:44 AM, Amir Sadoughi
  amir.sadou...@rackspace.com wrote:
 
  Yes, my work has been on ML2 with neutron-openvswitch-agent.  I’m
  interested to see what Jun Park has. I might have something ready
 before he
  is available again, but would like to collaborate regardless.
 
  Amir
 
 
 
  On Nov 19, 2013, at 3:31 AM, Kanthi P pavuluri.kan...@gmail.com
 wrote:
 
  Hi All,
 
  Thanks for the response!
  Amir, Mike: Is your implementation being done against the ML2 plugin?
 
  Regards,
  Kanthi
 
 
  On Tue, Nov 19, 2013 at 1:43 AM, Mike Wilson geekinu...@gmail.com
  wrote:
 
  Hi Kanthi,
 
  Just to reiterate what Kyle said, we do have an internal
 implementation
  using flows that looks very similar to security groups. Jun Park was
 the guy
  that wrote this and is looking to get it upstreamed. I think he'll
 be back
  in the office late next week. I'll point him to this thread when
 he's back.
 
  -Mike
 
 
  On Mon, Nov 18, 2013 at 3:39 PM, Kyle Mestery (kmestery)
  kmest...@cisco.com wrote:
 
  On Nov 18, 2013, at 4:26 PM, Kanthi P pavuluri.kan...@gmail.com
  wrote:
   Hi All,
  
   We are planning to implement quantum security groups using openflows
   for the ovs plugin instead of iptables, which is the case now.
  
   Doing so we can avoid the extra linux bridge which is connected
   between the vnet device and the ovs bridge, and which is given as a
   workaround since the ovs bridge is not compatible with iptables.
  
   We are planning to create a blueprint and work on it. Could you
   please share your views on this?
  
  Hi Kanthi:
 
  Overall, this idea is interesting and removing those extra bridges
  would certainly be nice. Some people at Bluehost gave a talk at the
 Summit
  [1] in which they explained they have done something similar, you
 may want
  to reach out to them since they have code for this internally
 already.
 
  The OVS plugin is in feature freeze during Icehouse, and will be
  deprecated in favor of ML2 [2] at the end of Icehouse. I would
 advise you to retarget your work at ML2 when running with the OVS agent
 instead.

Re: [openstack-dev] Reg : Security groups implementation using openflows in quantum ovs plugin

2013-11-29 Thread Zang MingJie
On Fri, Nov 29, 2013 at 2:25 PM, Jian Wen jian@canonical.com wrote:
 I don't think we can implement a stateful firewall[1] now.

I don't think we need a stateful firewall; a stateless one should work
well. If stateful conntrack is completed in the future, we can
also benefit from it.


 Once connection tracking capability[2] is added to the Linux OVS, we
 could start to implement the ovs-firewall-driver blueprint.

 [1] http://en.wikipedia.org/wiki/Stateful_firewall
 [2]
 http://wiki.xenproject.org/wiki/Xen_Development_Projects#Add_connection_tracking_capability_to_the_Linux_OVS


 On Tue, Nov 26, 2013 at 2:23 AM, Mike Wilson geekinu...@gmail.com wrote:

 Adding Jun to this thread since gmail is failing him.


 On Tue, Nov 19, 2013 at 10:44 AM, Amir Sadoughi
 amir.sadou...@rackspace.com wrote:

 Yes, my work has been on ML2 with neutron-openvswitch-agent.  I’m
 interested to see what Jun Park has. I might have something ready before he
 is available again, but would like to collaborate regardless.

 Amir



 On Nov 19, 2013, at 3:31 AM, Kanthi P pavuluri.kan...@gmail.com wrote:

 Hi All,

 Thanks for the response!
 Amir, Mike: Is your implementation being done against the ML2 plugin?

 Regards,
 Kanthi


 On Tue, Nov 19, 2013 at 1:43 AM, Mike Wilson geekinu...@gmail.com
 wrote:

 Hi Kanthi,

 Just to reiterate what Kyle said, we do have an internal implementation
 using flows that looks very similar to security groups. Jun Park was the 
 guy
 that wrote this and is looking to get it upstreamed. I think he'll be back
 in the office late next week. I'll point him to this thread when he's back.

 -Mike


 On Mon, Nov 18, 2013 at 3:39 PM, Kyle Mestery (kmestery)
 kmest...@cisco.com wrote:

 On Nov 18, 2013, at 4:26 PM, Kanthi P pavuluri.kan...@gmail.com
 wrote:
  Hi All,
 
  We are planning to implement quantum security groups using openflows
  for the ovs plugin instead of iptables, which is the case now.
 
  Doing so we can avoid the extra linux bridge which is connected
  between the vnet device and the ovs bridge, and which is given as a
  workaround since the ovs bridge is not compatible with iptables.
 
  We are planning to create a blueprint and work on it. Could you
  please share your views on this?
 
 Hi Kanthi:

 Overall, this idea is interesting and removing those extra bridges
 would certainly be nice. Some people at Bluehost gave a talk at the Summit
 [1] in which they explained they have done something similar, you may want
 to reach out to them since they have code for this internally already.

 The OVS plugin is in feature freeze during Icehouse, and will be
 deprecated in favor of ML2 [2] at the end of Icehouse. I would advise you 
 to
 retarget your work at ML2 when running with the OVS agent instead. The
 Neutron team will not accept new features into the OVS plugin anymore.

 Thanks,
 Kyle

 [1]
 http://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/towards-truly-open-and-commoditized-software-defined-networks-in-openstack
 [2] https://wiki.openstack.org/wiki/Neutron/ML2

  Thanks,
  Kanthi
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Cheers,
 Jian

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Campus Network Blueprint

2013-07-12 Thread Zang MingJie
Hi Filipe:

I disagree with your ml2-external-port BP.

It is unsuitable to connect multiple l2 networks directly: there may
be IP conflicts, DHCP conflicts, and other problems. Although the neutron
dhcp agent won't respond to DHCP requests from unknown sources, an
external DHCP server may respond to a VM's DHCP request. And if we move
an external port from one network to another network, how can we ensure
that the ARP caches are cleared?

It will also make the l2-population bp (
https://blueprints.launchpad.net/quantum/+spec/l2-population ) more
difficult. Our l2 forwarding works better if the device knows the
whole topology of the network, but the external part is totally
unknown.

So I suggest a layer-3 solution, where the outside world connects to the
VMs via the l3 agent.


Thank you for sharing the idea
Regards

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] [Neutron] Shared network improvement (RFC)

2013-07-08 Thread Zang MingJie
I have created a blueprint here:

https://blueprints.launchpad.net/neutron/+spec/zone-based-router

I'll complete the full specification soon
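
As a first illustration of the "ensure IP uniqueness inside a Zone" step
in the proposal quoted below, a minimal sketch (assuming netaddr; the
function and its inputs are made up for illustration):

import netaddr

def ensure_zone_cidr_unique(zone_subnet_cidrs, new_cidr):
    """Reject a subnet whose CIDR overlaps any existing subnet in the Zone.

    zone_subnet_cidrs: CIDR strings of all subnets already in the Zone.
    """
    new = netaddr.IPNetwork(new_cidr)
    for cidr in zone_subnet_cidrs:
        existing = netaddr.IPNetwork(cidr)
        # containment in either direction means the address spaces overlap
        if new in existing or existing in new:
            raise ValueError("CIDR %s overlaps an existing subnet in this"
                             " Zone" % new_cidr)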

On Sun, Jul 7, 2013 at 12:37 AM, Salvatore Orlando sorla...@nicira.com wrote:
 Thanks for sharing your thoughts on the mailing list.
 I will read them carefully and provide my comments soon.

 In the meanwhile, I advise you to file a blueprint, possibly with more
 design/implementation details.
 The blueprint you reference aimed at solving a much easier problem.
 In the spec (or the whiteboard) it was mentioned that a full solution to the
 issue of network domain sharing was out of its scope.

 Salvatore


 On 5 July 2013 16:11, Zang MingJie zealot0...@gmail.com wrote:

 Hi:
   Currently we are working on the problem of neutron network isolation
 and inter-communication. Neutron currently has private networks and
 shared networks, but they are not flexible: a private network cannot
 reach other networks, and a shared network is fully open. To solve
 this problem, we came up with another design.

   First, introduce a new concept, Zone. Basically each Zone is an
 isolated IP address space, like a VPN site in Cisco VRF, or a route
 distinguisher in MPLS VPN or BGP-VPNv4. Each network must belong to a
 Zone, and we must ensure IP addresses are unique inside every Zone,
 which means no duplicated IPs in the same Zone. By this assumption,
 given a (Zone, ip-address) tuple we can locate a unique port.

   Then, we implement our l3 agent to make sure all networks in the same
 Zone can communicate with each other, while networks in different Zones
 can't. Each tenant can create a limited number of Zones (quota) and
 share them with others for intercommunication.

   By separating Zone from tenant, we get more flexible control of the
 network scope. To maintain backward compatibility, on migration,
 create one Zone for all shared networks and create a Zone for each
 tenant.

   Data Model:
 * add a new resource: Zone (CRUD)
 * add a new parameter Zone to network
 * deprecate the 'shared' param of network
 * a network w/o a Zone that is shared belongs to the global Zone
 * a network w/o a Zone that is not shared belongs to the Zone which has
 the same id as its tenant-id

   API change:
 * add an API to grant/revoke Zone access to others (should we support
 revoke?). A tenant is only permitted to create networks in the Zones he
 can access.

   Implementation overview:
 * l3-agent should be changed to suit the requirement: don't
 launch an l3-agent per node*tenant, but per node*Zone. This should be
 very easy.
 * Ensure IP uniqueness inside a Zone when creating subnets (see the
 sketch above)

   Related BPs:
 *
 https://blueprints.launchpad.net/neutron/+spec/sharing-model-for-external-networks

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev