Hi Ben,
So I decided to try to find the leak with the help of AddressSanitizer, as
you recommended.
Would the following approach make sense (sketch below)?
- build OVS from source with CFLAGS="-g -O2 -fsanitize=leak -fno-omit-frame-pointer -fno-common"
- service openvswitch-switch stop
- replace the binary
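A minimal sketch of those steps (the install path and service name are
assumptions; adjust for your distro):

  # build an instrumented binary; -fno-omit-frame-pointer keeps stack traces usable
  ./boot.sh
  ./configure CFLAGS="-g -O2 -fsanitize=leak -fno-omit-frame-pointer -fno-common"
  make -j$(nproc)

  # swap in the instrumented binary (path is an assumption)
  service openvswitch-switch stop
  cp vswitchd/ovs-vswitchd /usr/sbin/ovs-vswitchd
  service openvswitch-switch start

  # LeakSanitizer prints its report at process exit, so stop the service
  # again later to collect it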
On Wed, Mar 6, 2019 at 7:01 PM Oleg Bondarev wrote:
>
> I'm thinking if this can be malloc() not returning memory to the system
> after peak loads:
> *"Occasionally, free can actually return memory to the operating system
> and make the process smaller. Usually, all it can do is allow a later
I'm wondering if this could be malloc() not returning memory to the system
after peak loads:
*"Occasionally, free can actually return memory to the operating system and
make the process smaller. Usually, all it can do is allow a later call to
malloc to reuse the space. In the meantime, the space remains in your
program as part of a free list used internally by malloc."* (from the glibc
manual)
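One way to test this hypothesis (a sketch; assumes glibc and that gdb can
attach to the process) is to ask malloc to give its free pages back and
watch whether RSS drops:

  pid=$(pidof ovs-vswitchd)
  grep VmRSS /proc/$pid/status
  # malloc_trim(0) releases unused memory from the top of the heap to the kernel
  gdb -p $pid --batch -ex 'call (int) malloc_trim(0)'
  grep VmRSS /proc/$pid/status

If RSS shrinks a lot, the memory was just sitting on malloc's free lists;
if it barely moves, the blocks are still referenced, i.e. a real leak.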
Starting from 0x30, this looks like a "minimatch" data structure, which
is a kind of compressed bitwise match against a flow.
0030: 4014
0040: fa16 3e2b c5d5 0022
0058:
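(For reference, struct minimatch and the miniflow/minimask types it is built
from are defined in the OVS tree; a quick way to pull up the layout, run
from a source checkout:)

  git grep -n 'struct minimatch' lib/match.h
  git grep -n -E 'struct (miniflow|minimask)' lib/flow.h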
On Tue, Mar 5, 2019 at 9:00 PM Ben Pfaff wrote:
> OK. That gives me something to investigate. I'll see what I can find
> out.
>
> You're running 64-bit kernel and userspace, x86-64?
>
Yes.
>
> On Tue, Mar 05, 2019 at 08:42:14PM +0400, Oleg Bondarev wrote:
> > On Tue, Mar 5, 2019 at 7:26 PM
OK. That gives me something to investigate. I'll see what I can find
out.
You're running 64-bit kernel and userspace, x86-64?
On Tue, Mar 05, 2019 at 08:42:14PM +0400, Oleg Bondarev wrote:
> On Tue, Mar 5, 2019 at 7:26 PM Ben Pfaff wrote:
>
> > You're talking about the email where you dumped
Hi,
thanks for your help!
On Tue, Mar 5, 2019 at 7:26 PM Ben Pfaff wrote:
> You're talking about the email where you dumped out a repeating sequence
> from some blocks? That might be the root of the problem, if you can
> provide some more context. I didn't see from the message where you
>
You're talking about the email where you dumped out a repeating sequence
from some blocks? That might be the root of the problem, if you can
provide some more context. I didn't see from the message where you
found the sequence (was it just at the beginning of each of the 4 MB
blocks you reported
Hi Ben,
I haven't had a chance to debug the scripts yet, but just in case you
missed my last email with examples of the repeating blocks and sequences:
do you think we still need to analyze further? Will the scripts tell us
more about the heap?
Thanks,
Oleg
On Thu, Feb 28, 2019 at 10:14 PM Ben Pfaff
I just replied to the other thread (since I'm running a different OVS
version, 2.10.1) and added the ovs-...@openvswitch.org mailing list as well.
Oleg, maybe you also want to add the dev list in case they can help on this?
Basically, I still observe the continuous memory usage increase by
On Tue, Feb 26, 2019 at 01:41:45PM +0400, Oleg Bondarev wrote:
> Hi,
>
> thanks for the scripts, so here's the output for a 24G core dump:
> https://pastebin.com/hWa3R9Fx
> there's 271 entries of 4MB - does it seem something we should take a closer
> look at?
I think that this output really just
Hi Ben,
so here are examples of what those repeating blocks look like:
https://pastebin.com/JBUaeX44
https://pastebin.com/wKreHDJf
https://pastebin.com/f41knqgn
All those blocks are mostly filled with a sequence like this:
" fa16 3e39 83c4 ..>9
0030
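(Worth noting: fa:16:3e is the default OpenStack/Neutron MAC prefix, so the
repeating pattern looks like per-port flow match data. A rough way to gauge
how much of the core is filled with it; the core file name below is a
placeholder:)

  # rough count of hexdump lines containing the OpenStack MAC OUI
  xxd -g2 core.ovs-vswitchd | grep -c 'fa16 3e'

  # most frequent 16-byte patterns in the dump
  xxd -g2 core.ovs-vswitchd | cut -d' ' -f2-9 | sort | uniq -c | sort -rn | head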
On Tue, Feb 26, 2019 at 1:41 PM Oleg Bondarev
wrote:
> Hi,
>
> thanks for the scripts, so here's the output for a 24G core dump:
> https://pastebin.com/hWa3R9Fx
> there's 271 entries of 4MB - does it seem something we should take a
> closer look at?
>
not 4 MB but ~67 MB, of course.
>
> Thanks,
>
Hi,
thanks for the scripts, so here's the output for a 24G core dump:
https://pastebin.com/hWa3R9Fx
there are 271 entries of 4 MB - does that seem like something we should take
a closer look at?
Thanks,
Oleg
On Tue, Feb 26, 2019 at 3:26 AM Ben Pfaff wrote:
> Some combinations of kernel bonding with
Some combinations of kernel bonding with Open vSwitch don't necessarily
work that well. I have forgotten which ones are problematic or why.
However, the problems are functional ones (the bonds don't work well),
not operational ones like memory leaks. A memory leak would be a bug
whether or not
I read in a few places that mixing OS networking features (like bonding)
with OVS is not a good idea and that the recommendation is to do everything
at the OVS level. That's why I assumed the configuration was not OK (even
though it worked correctly for around two years, albeit with the high memory
usage I
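(For reference, moving the bond to the OVS level would look roughly like
this; the bridge, bond, and NIC names are placeholders:)

  # replace the kernel bond with an OVS-managed bond
  ovs-vsctl add-bond br0 bond0 eth0 eth1 bond_mode=balance-slb
  # optionally, use LACP if the switch side supports it
  ovs-vsctl set port bond0 lacp=active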
Both configurations should work, so probably you did find a bug causing
a memory leak in the former configuration.
464 MB actually sounds like a lot, too.
On Sun, Feb 24, 2019 at 02:58:02PM, Fernando Casas Schössow wrote:
> Hi Ben,
>
> In my case I think I found the cause of the issue,
Hi Ben,
In my case I think I found the cause of the issue, and it was indeed a
misconfiguration on my side.
Yet I'm not really sure why the misconfiguration was causing the high memory
usage in OVS.
The server has 4 NICs, bonded in two bonds of two.
The problem, I think, was that the bonding
It's odd that two people would notice the same problem at the same time
on old branches.
Anyway, I'm attaching the scripts I have. They are rough. The second
one invokes the first one as a subprocess; it is probably the one you
should use. I might have to walk you through how to use it, or
Ah, sorry, I missed the "ovs-vswitchd memory consumption behavior" thread.
So I guess I'm also interested in the scripts for analyzing the heap in a
core dump :)
Thanks,
Oleg
On Wed, Feb 20, 2019 at 7:00 PM Oleg Bondarev
wrote:
> Hi,
>
> OVS 2.8.0, uptime 197 days, 44G RAM.
> ovs-appctl
Hi,
OVS 2.8.0, uptime 197 days, 44G RAM.
ovs-appctl memory/show reports:
"handlers:35 ofconns:4 ports:73 revalidators:13 rules:1099 udpif
keys:686"
Similar data on other nodes of the OpenStack cluster.
Usage seems to grow gradually over time.
Are there any known issues, like
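(A simple way to chart that gradual growth, as a sketch; the interval and
log path are arbitrary:)

  # sample ovs-vswitchd RSS and OVS's own accounting once an hour
  while true; do
      date >> /var/log/ovs-mem.log
      grep VmRSS /proc/$(pidof ovs-vswitchd)/status >> /var/log/ovs-mem.log
      ovs-appctl memory/show >> /var/log/ovs-mem.log
      sleep 3600
  done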