++. This might also be useful in IPAM.
I'll have a look at the patch regardless for the short term. (I had
actually suggested something similar on Eugene's patch review.)
Carl
On Jun 26, 2014 8:14 AM, "Zang MingJie" wrote:
It would be better to make the range extension dynamic, instead of creating
all entries at initialization.
For example, if vxlan range 1~1M is configured, only initialize 1~1K; when
that has been used up, extend the range to 1K~2K, and so on.
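A rough sketch of that lazy-extension idea, using the stdlib sqlite3 module purely for illustration (the table layout, chunk size, and helper names are assumptions, not Neutron's actual schema or code):

```python
import sqlite3

CHUNK = 1024  # hypothetical chunk size; the 1K suggested above

def extend_pool(conn, start, end, limit):
    """Insert the next chunk of unallocated VNIs, capped at the configured limit."""
    hi = min(end, limit)
    conn.executemany(
        "INSERT INTO vxlan_allocations (vxlan_vni, allocated) VALUES (?, 0)",
        ((vni,) for vni in range(start, hi + 1)))
    return hi

def allocate_vni(conn, configured_max):
    """Allocate a free VNI, lazily extending the pool when it runs dry."""
    row = conn.execute(
        "SELECT vxlan_vni FROM vxlan_allocations "
        "WHERE allocated = 0 LIMIT 1").fetchone()
    if row is None:
        top = conn.execute(
            "SELECT COALESCE(MAX(vxlan_vni), 0) FROM vxlan_allocations"
        ).fetchone()[0]
        if top >= configured_max:
            return None  # whole configured range is in use
        extend_pool(conn, top + 1, top + CHUNK, configured_max)
        row = conn.execute(
            "SELECT vxlan_vni FROM vxlan_allocations "
            "WHERE allocated = 0 LIMIT 1").fetchone()
    conn.execute("UPDATE vxlan_allocations SET allocated = 1 "
                 "WHERE vxlan_vni = ?", (row[0],))
    return row[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vxlan_allocations "
             "(vxlan_vni INTEGER PRIMARY KEY, allocated INTEGER)")
# Configured range is 1..1M, but the first allocation only creates 1..1024.
first = allocate_vni(conn, 1 << 20)
count = conn.execute("SELECT COUNT(*) FROM vxlan_allocations").fetchone()[0]
print(first, count)
```

The point of the sketch: initialization cost becomes proportional to the chunk size, not to the configured range.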
Hi everyone,
A new change (https://review.openstack.org/101982) has been proposed to
improve vxlan pool initiation, with an improvement on deletion of obsolete
unallocated VNIs using a single DELETE SQL command.
I've tested performance with the following (delete-only) scenario: the vxlan
range is changed
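The single-DELETE idea can be sketched with stdlib sqlite3 as follows (table name, range, and data are illustrative; the actual change is in the review linked above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vxlan_allocations "
             "(vxlan_vni INTEGER PRIMARY KEY, allocated INTEGER)")
# Pool was previously initialized for range 1..10; VNI 7 is allocated.
# The operator now shrinks the configured range to 1..5.
conn.executemany("INSERT INTO vxlan_allocations VALUES (?, ?)",
                 [(vni, 1 if vni == 7 else 0) for vni in range(1, 11)])

# One DELETE removes every VNI that is outside the configured range AND
# not allocated, instead of deleting the rows one by one from Python.
conn.execute("DELETE FROM vxlan_allocations "
             "WHERE allocated = 0 AND (vxlan_vni < ? OR vxlan_vni > ?)",
             (1, 5))

remaining = [r[0] for r in conn.execute(
    "SELECT vxlan_vni FROM vxlan_allocations ORDER BY vxlan_vni")]
print(remaining)  # in-range VNIs plus the still-allocated 7
```

Allocated VNIs outside the new range survive on purpose: they are still in use and can only be reclaimed once released.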
Mike,
Thanks a lot for your response!
Some comments:
> There's some in-Python filtering following it which does not seem
> necessary; the "alloc.vxlan_vni not in vxlan_vnis" phrase
> could just as well be a SQL "NOT IN" expression.
There we have to do a specific set intersection between configured ranges
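The set arithmetic in question can be sketched in plain Python (ranges and VNI values are made-up examples, not Neutron's actual data): the configured ranges are flattened into a set, which is then differenced against the VNIs already present in the table.

```python
# Hypothetical configured VNI ranges (there may be several).
configured_ranges = [(1, 5), (100, 103)]

# Flatten the ranges to one set of valid VNIs.
vxlan_vnis = set()
for lo, hi in configured_ranges:
    vxlan_vnis.update(range(lo, hi + 1))

# VNIs already present in the allocations table (illustrative values).
existing = {3, 4, 5, 6, 101, 200}

to_delete = existing - vxlan_vnis   # obsolete rows to remove
to_add = vxlan_vnis - existing      # missing rows to insert
print(sorted(to_delete))  # [6, 200]
print(sorted(to_add))     # [1, 2, 100, 102, 103]
```

With ranges in the millions, the trade-off is between materializing this set in Python and expressing the same range logic as SQL `BETWEEN`/`NOT IN` predicates, which is what the thread is weighing.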
On Jun 7, 2014, at 4:38 PM, Eugene Nikanorov wrote:
> Hi folks,
>
> There was a small discussion about the better way of doing sql operations for
> vni synchronization with the config.
> Initial proposal was to handle those in chunks. Carl also suggested to issue
> a single sql query.
> I've
Hi folks,
There was a small discussion about the better way of doing sql operations
for vni synchronization with the config.
Initial proposal was to handle those in chunks. Carl also suggested to
issue a single sql query.
I've done some testing with MySQL and Postgres.
I've tested the following s
Great.
I will do more tests based on Eugene Nikanorov's modification.
*Thanks,*
2014-06-05 11:01 GMT+08:00 Isaku Yamahata :
> Wow great.
> I think the same applies to the gre type driver,
> so we should create a similar one after the vxlan case is resolved.
>
> thanks,
>
>
> On Thu, Jun 05, 2014 at 12:36:54AM +0400,
Wow great.
I think the same applies to the gre type driver,
so we should create a similar one after the vxlan case is resolved.
thanks,
On Thu, Jun 05, 2014 at 12:36:54AM +0400,
Eugene Nikanorov wrote:
> We hijacked the vxlan initialization performance thread with ipam! :)
> I've tried to address initia
Subject: [openstack-dev] [Neutron] One performance issue about VXLAN pool
initiation
We hijacked the vxlan initialization performance thread with ipam! :)
I've tried to address the initial problem with some simple sqla stuff:
https://review.openstack.org/97774
With sqlite it gives ~3x benefit over existing code in master.
You are right. I did feel a bit bad about hijacking the thread. But
most of the discussion was closely enough related that I never decided
to fork into a new thread.
I think I'm done now. I'll have a look at your review and we'll put
IPAM to rest for now. :)
Carl
On Wed, Jun 4, 2014 at 2:36
We hijacked the vxlan initialization performance thread with ipam! :)
I've tried to address the initial problem with some simple sqla stuff:
https://review.openstack.org/97774
With sqlite it gives ~3x benefit over existing code in master.
Need to do a little bit more testing with real backends to make
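For contrast, here is a toy sqlite3 shape of the two insert patterns under discussion (table and function names are illustrative; the real change is in the review above, which uses SQLAlchemy):

```python
import sqlite3

def init_one_by_one(conn, vnis):
    # Pattern resembling the original loop: one INSERT statement per VNI.
    for vni in vnis:
        conn.execute("INSERT INTO vxlan_allocations VALUES (?, 0)", (vni,))

def init_bulk(conn, vnis):
    # One executemany() call; the driver batches this far more efficiently,
    # roughly what a multi-row SQLAlchemy insert achieves.
    conn.executemany("INSERT INTO vxlan_allocations VALUES (?, 0)",
                     ((v,) for v in vnis))

for init in (init_one_by_one, init_bulk):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE vxlan_allocations "
                 "(vxlan_vni INTEGER PRIMARY KEY, allocated INTEGER)")
    init(conn, range(1, 10001))
    rows = conn.execute(
        "SELECT COUNT(*) FROM vxlan_allocations").fetchone()[0]
    print(init.__name__, rows)
```

Both produce identical tables; the difference is per-statement round-trip and parsing overhead, which dominates when the range runs to millions of VNIs.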
Yes, memcached is a candidate that looks promising. First things first,
though. I think we need the abstraction of an ipam interface merged. That
will take some more discussion and work on its own.
Carl
On May 30, 2014 4:37 PM, "Eugene Nikanorov" wrote:
> > I was thinking it would be a separa
> the result(1h)?
>
> /Yalei
>
> *From:* Xurong Yang [mailto:ido...@gmail.com]
> *Sent:* Thursday, May 29, 2014 6:01 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [Neutron] One performance issue about VXLAN
> pool initiation
Hi, Folks,
When we configure the VXLAN range [1,16M], the neutron-server service takes a
long time and CPU usage is very high (100%) during initialization. One test
based on PostgreSQL verified this: more than 1h when the VXLAN range is
[1, 1M].
So, is there any good solution for this performance issue?
Hi,
I have reported a bug [1],
[1] https://bugs.launchpad.net/neutron/+bug/1324875
but there is no better idea about this issue yet; maybe it needs more
discussion.
Any thoughts?
:)
Xurong Yang
2014-05-31 6:33 GMT+08:00 Eugene Nikanorov :
> > I was thinking it would be a separate process that would communicate
> I was thinking it would be a separate process that would communicate over
the RPC channel or something.
memcached?
Eugene.
On Sat, May 31, 2014 at 2:27 AM, Carl Baldwin wrote:
> Eugene,
>
> That was part of the "whole new set of complications" that I
> dismissively waved my hands at. :)
>
>
Eugene,
That was part of the "whole new set of complications" that I
dismissively waved my hands at. :)
I was thinking it would be a separate process that would communicate
over the RPC channel or something. More complications come when you
think about making this process HA, etc. It would mea
Hi Carl,
The idea of in-memory storage was discussed for a similar problem, but it
might not work for multi-server deployments.
Some hybrid approach may be used, though, I think.
Thanks,
Eugene.
On Fri, May 30, 2014 at 8:53 PM, Carl Baldwin wrote:
> This is very similar to IPAM... There is a spa
This is very similar to IPAM... There is a space of possible ids or
addresses that can grow very large. We need to track the allocation
of individual ids or addresses from that space and be able to quickly
come up with new allocations and recycle old ones. I've had this in
the back of my mind
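The allocate/recycle pattern described above can be sketched as a tiny in-memory pool (purely illustrative, not Neutron's IPAM code): only a high-water mark and a set of released ids are tracked, so the huge id space is never fully materialized.

```python
class IdPool:
    """Toy allocator over a large id space: reuse released ids first,
    otherwise hand out the next fresh id below a high-water mark."""

    def __init__(self, first, last):
        self.last = last
        self.next_fresh = first   # lowest id never handed out
        self.recycled = set()     # ids returned by release()

    def allocate(self):
        if self.recycled:
            return self.recycled.pop()
        if self.next_fresh > self.last:
            raise RuntimeError("id space exhausted")
        vid = self.next_fresh
        self.next_fresh += 1
        return vid

    def release(self, vid):
        self.recycled.add(vid)

pool = IdPool(1, 1 << 24)   # e.g. the 24-bit VNI space
a = pool.allocate()
b = pool.allocate()
pool.release(a)
print(pool.allocate() == a)  # recycled id is reused before a fresh one
```

The hard parts the thread alludes to (persistence, multiple servers, HA of a separate allocator process) are exactly what this in-memory sketch leaves out.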
I agree with Salvatore: I don't think the optimization of that method (and
possibly others) requires a BP, but rather a bug.
Can you please file one, Xurong?
Thanks,
Kyle
On Fri, May 30, 2014 at 3:39 AM, Salvatore Orlando wrote:
> It seems that method has some room for optimization, and I suspect the
It seems that method has some room for optimization, and I suspect the same
logic has been used in other type drivers as well.
If optimization is possible, it might be the case to open a bug for it.
Salvatore
On 30 May 2014 04:58, Xurong Yang wrote:
> Hi,
> Thanks for your response, yes, i get
Hi,
Thanks for your response. Yes, I get the reason; that's why I'm asking
whether a good solution can achieve high performance with a large vxlan
range. If possible, a blueprint deserves consideration.
Thanks,
Xurong Yang
2014-05-29 18:12 GMT+08:00 ZZelle :
> Hi,
>
>
> vxlan network
Hi,
vxlan networks are inserted/verified in the DB one by one, which could
explain the time required:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_vxlan.py#L138-L172
Cédric
On Thu, May 29, 2014 at 12:01 PM, Xurong Yang wrote:
> Hi, Folks,
>
> When we configur
Hi, Folks,
When we configure the VXLAN range [1,16M], the neutron-server service takes a
long time and CPU usage is very high (100%) during initialization. One test
based on PostgreSQL verified this: more than 1h when the VXLAN range is
[1, 1M].
So, is there any good solution for this performance issue?
Thanks,
Xurong Yang