----- Original Message -----
> @Nachi: Yes, that could be a good improvement, to factorize the RPC mechanism.
>
> Another idea:
> What about creating an RPC topic per security group (though the scalability
> of per-group RPC topics is an open question), to which an agent subscribes
> if one of its ports is associated with that security group?
>
> Regards,
> Édouard.
Hmm, interesting. @Nachi, I'm not sure I fully understood:

SG_LIST = [SG1, SG2]
SG_RULE_LIST = [SG_Rule1, SG_Rule2] ...
port1[SG_ID1, SG_ID2], port2, port3

We probably also need to include SG_IP_LIST = [SG_IP1, SG_IP2] ... and let
the agent do all the combination work.

Would something like this make sense?

Security_Groups = {SG1: {IPs: [....], RULES: [....]},
                   SG2: {IPs: [....], RULES: [....]}}
Ports = {Port1: [SG1, SG2], Port2: [SG1], ....}

@Edouard, actually I like the idea of having the agents subscribe to the
security groups they have ports on... That would remove the need to include
all the security group information in every call. But we would need another
call to get the full information for a set of security groups at start/resync
if we don't already have it.

> On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang <ayshihanzh...@126.com> wrote:
>
> hi Miguel Ángel,
> I agree with you very much on the following points:
>
> * physical implementation on the hosts (ipsets, nftables, ...)
>   -- this can reduce the load on the compute node.
> * rpc communication mechanisms.
>   -- this can reduce the load on the neutron server.
>
> Can you help me review my BP specs?
>
>> At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo" <mangel...@redhat.com>
>> wrote:
>>
>> Hi, it's a very interesting topic. I was getting ready to raise the same
>> concerns about our security groups implementation; shihanzhang, thank you
>> for starting this topic.
>>
>> At the low level, with our default security group rules (allow all
>> incoming traffic from the 'default' sg), the iptables rules grow in ~X^2
>> for a tenant; and on top of that, the "security_group_rules_for_devices"
>> RPC call from ovs-agent to neutron-server grows to message sizes of
>> >100MB, generating serious scalability issues or timeouts/retries that
>> totally break the neutron service.
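To make the proposal above concrete, here is a rough Python sketch of the payload shape being discussed (all names and shapes are this thread's proposal, not an existing Neutron API): the server sends security-group data once, keyed by SG id, plus a port-to-SG mapping, and the agent does the combination work per port.

```python
# Hypothetical RPC payload: security-group info sent once, keyed by SG id,
# plus a port -> SG-ids mapping. The agent combines them per port, so a
# group shared by many ports is transferred only once.

security_groups = {
    "SG1": {"IPs": ["10.0.0.2", "10.0.0.3"], "RULES": ["allow tcp 22"]},
    "SG2": {"IPs": ["10.0.1.2"], "RULES": ["allow tcp 80"]},
}
ports = {
    "Port1": ["SG1", "SG2"],
    "Port2": ["SG1"],
}

def rules_for_port(port_id):
    """Combine per-SG data into the effective IP and rule sets for one port."""
    ips, rules = set(), set()
    for sg_id in ports[port_id]:
        sg = security_groups[sg_id]
        ips.update(sg["IPs"])
        rules.update(sg["RULES"])
    return ips, rules

port1_ips, port1_rules = rules_for_port("Port1")
```
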
>> (example trace of that RPC call with a few instances:
>> http://www.fpaste.org/104401/14008522/ )
>>
>> I believe we also need to review the RPC calling mechanism for the OVS
>> agent here; there are several possible approaches to breaking down (and/or
>> CIDR-compressing) the information we return via this API call.
>>
>> So we have to look at two things here:
>>
>> * physical implementation on the hosts (ipsets, nftables, ...)
>> * rpc communication mechanisms.
>>
>> Best regards,
>> Miguel Ángel.
>>
>> ----- Original Message -----
>>
>>> Did you think about nftables, which will replace {ip,ip6,arp,eb}tables?
>>> It is also based on the rule-set mechanism.
>>> The issue with that proposal is that nftables has only been stable since
>>> the beginning of the year, and requires Linux kernel 3.13.
>>> But it has a lot of pros I won't list here (it lifts iptables
>>> limitations, offers efficient rule updates, rule sets, standardization
>>> of the netfilter commands...).
>>>
>>> Édouard.
>>>
>>> On Thu, Jun 19, 2014 at 8:25 AM, henry hly <henry4...@gmail.com> wrote:
>>>
>>>> We have done some tests, but got a different result: performance is
>>>> nearly the same with empty vs. 5k rules in iptables, but there is a
>>>> huge gap between enabling and disabling the iptables hook on the Linux
>>>> bridge.
>>>>
>>>> On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang <ayshihanzh...@126.com>
>>>> wrote:
>>>>
>>>>> I don't have accurate test data yet, but I can confirm the following
>>>>> points:
>>>>>
>>>>> 1. On a compute node, the iptables chain of a VM is linear; iptables
>>>>> filters packets against it rule by rule. If a VM is in the default
>>>>> security group and that group has many members, an ipset-based chain
>>>>> takes roughly the same time to filter whether the set has one member
>>>>> or many.
>>>>>
>>>>> 2.
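As a sketch of the "CIDR compressing" idea mentioned above, Python's standard library can already collapse contiguous member addresses into fewer networks before they go on the wire (sample data is made up; this is just an illustration, not Neutron code):

```python
import ipaddress

# Member IPs of a large security group (hypothetical sample data):
# 256 contiguous addresses plus one outlier.
members = ["10.0.0.%d/32" % i for i in range(256)] + ["10.0.2.7/32"]
nets = [ipaddress.ip_network(m) for m in members]

# collapse_addresses merges adjacent/overlapping networks, so the 256
# contiguous /32s become a single /24 entry in the RPC payload.
collapsed = [str(n) for n in ipaddress.collapse_addresses(nets)]
```

A payload that carried 257 entries now carries two, which is the kind of reduction that matters for a >100MB RPC response.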
>>>>> when the iptables rule set is very large, the probability that
>>>>> iptables-save fails while saving the rules is very high.
>>>>>
>>>>>> At 2014-06-19 10:55:56, "Kevin Benton" <blak...@gmail.com> wrote:
>>>>>>
>>>>>> This sounds like a good idea to handle some of the performance issues
>>>>>> until the OVS firewall can be implemented down the line.
>>>>>>
>>>>>> Do you have any performance comparisons?
>>>>>>
>>>>>>> On Jun 18, 2014 7:46 PM, "shihanzhang" <ayshihanzh...@126.com> wrote:
>>>>>>>
>>>>>>> Hello all,
>>>>>>>
>>>>>>> Currently neutron uses iptables to implement security groups, but
>>>>>>> the performance of this implementation is very poor. There is a bug,
>>>>>>> https://bugs.launchpad.net/neutron/+bug/1302272 , reflecting this
>>>>>>> problem. In that test, with default security groups (which have a
>>>>>>> remote security group), beyond 250-300 VMs there were around 6k
>>>>>>> iptables rules on every compute node. Although the patch there can
>>>>>>> reduce the processing time, it doesn't solve the problem
>>>>>>> fundamentally. I have submitted a BP to solve this problem:
>>>>>>> https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
>>>>>>>
>>>>>>> Is anyone else interested in this?
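For reference, the ipset approach in the BP boils down to replacing N per-member iptables rules with a single set-match rule. An illustrative command sequence (set and chain names here are made up for the example, not necessarily what the BP uses):

```shell
# Create a hash-based set holding the member IPs of one security group
# (set name is illustrative).
ipset create sg-default-members hash:ip

# Adding/removing a member only touches the set, not the iptables chain,
# so no iptables-save/restore of a huge rule set is needed.
ipset add sg-default-members 10.0.0.2
ipset add sg-default-members 10.0.0.3

# One iptables rule matches against the whole set, instead of one rule per
# member, so the chain length stays constant as the group grows.
iptables -A neutron-example-chain \
    -m set --match-set sg-default-members src -j RETURN
```

This is why lookup time barely changes between one member and many: the set match is a hash lookup rather than a linear walk of the chain.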
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev