Hello there,

I'm working on a minor fork of OVS 2.1.2 on a Raspberry Pi running
Raspbian. There's a controller out there, as well as other OpenFlow
agents, also running on RPis. Together they form an SDN which I can
manage from the controller. However, this setup only survives for about
20 minutes before the ovs instances crash with a segfault.

Reading the ovs-vswitchd log, I see there's a problem with the system's
maximum number of open files, which causes it to reject connections on
the Unix socket "ovs-vswitchd.<pid>.ctl".
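
For reference, a standalone snippet like the following (nothing
OVS-specific, just a plain getrlimit(2) sketch) prints the soft and
hard descriptor limits that a process, and any child it exec's,
inherits. The usual Linux default soft limit is 1024, which is easy to
exhaust if descriptors leak somewhere:

    #include <stdio.h>
    #include <sys/resource.h>

    /* Print the soft and hard RLIMIT_NOFILE limits inherited by
     * this process (and by any child it starts). */
    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft=%llu hard=%llu\n",
               (unsigned long long) rl.rlim_cur,
               (unsigned long long) rl.rlim_max);
        return 0;
    }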

After the initial operations, the log stays silent for about 20 minutes,
and then:

2015-06-03T12:21:38.421Z|00027|unixctl|WARN|punix:/usr/local/var/run/openvswitch/ovs-vswitchd.5393.ctl:
accept failed: Too many open files
2015-06-03T12:21:38.421Z|00028|unixctl|WARN|punix:/usr/local/var/run/openvswitch/ovs-vswitchd.5393.ctl:
accept failed: Too many open files
2015-06-03T12:21:38.421Z|00029|unixctl|WARN|punix:/usr/local/var/run/openvswitch/ovs-vswitchd.5393.ctl:
accept failed: Too many open files
2015-06-03T12:21:38.422Z|00030|unixctl|WARN|punix:/usr/local/var/run/openvswitch/ovs-vswitchd.5393.ctl:
accept failed: Too many open files
2015-06-03T12:21:38.422Z|00031|unixctl|WARN|punix:/usr/local/var/run/openvswitch/ovs-vswitchd.5393.ctl:
accept failed: Too many open files
2015-06-03T12:21:38.435Z|00032|connmgr|WARN|accept failed (Too many open
files)
2015-06-03T12:21:38.436Z|00033|connmgr|WARN|accept failed (Too many open
files)
2015-06-03T12:21:38.436Z|00034|connmgr|WARN|accept failed (Too many open
files)
2015-06-03T12:21:38.436Z|00035|connmgr|WARN|accept failed (Too many open
files)
2015-06-03T12:21:38.538Z|00036|connmgr|WARN|accept failed (Too many open
files)
2015-06-03T12:21:50.451Z|00037|unixctl|WARN|Dropped 1545 log messages in
last 12 seconds (most recently, 0 seconds ago) due to excessive rate
2015-06-03T12:21:50.451Z|00038|unixctl|WARN|punix:/usr/local/var/run/openvswitch/ovs-vswitchd.5393.ctl:
accept failed: Too many open files
2015-06-03T12:22:02.456Z|00039|unixctl|WARN|Dropped 1479 log messages in
last 12 seconds (most recently, 0 seconds ago) due to excessive rate
2015-06-03T12:22:02.456Z|00040|unixctl|WARN|punix:/usr/local/var/run/openvswitch/ovs-vswitchd.5393.ctl:
accept failed: Too many open files
2015-06-03T12:22:14.458Z|00041|unixctl|WARN|Dropped 1529 log messages in
last 12 seconds (most recently, 0 seconds ago) due to excessive rate
2015-06-03T12:22:14.458Z|00042|unixctl|WARN|punix:/usr/local/var/run/openvswitch/ovs-vswitchd.5393.ctl:
accept failed: Too many open files
2015-06-03T12:22:23.623Z|00002|daemon(monitor)|ERR|1 crashes: pid 5393
died, killed (Segmentation fault), restarting


The only traffic being carried consists of custom OpenFlow statistics
messages bound for the controller. These custom messages are about
twice as long as the standard ones and are sent roughly every 10
seconds. That is, there is very little traffic going on.

It is worth noting that the RPis are dedicated to this function, so
there should be no other processes triggering this error.

Well then, do you think it's normal to hit the open-files limit under
these conditions?
Is there any way to bound this so it doesn't crash?
And what is opening all of these files?
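
In case it helps, here is a rough sketch of how the daemon's descriptor
count can be watched from outside, by listing /proc/<pid>/fd
(Linux-specific; the pid 5393 hard-coded below is just the one from the
log above, to be replaced with the real one):

    #include <dirent.h>
    #include <stdio.h>

    /* Count the open file descriptors of a process by listing
     * /proc/<pid>/fd. The pid is only an example taken from the
     * log above; substitute that of the running daemon. */
    int main(void)
    {
        const char *path = "/proc/5393/fd";
        struct dirent *ent;
        int n = 0;
        DIR *dir = opendir(path);

        if (!dir) {
            perror(path);
            return 1;
        }
        while ((ent = readdir(dir)) != NULL) {
            if (ent->d_name[0] != '.') {   /* skip "." and ".." */
                n++;
            }
        }
        closedir(dir);
        printf("%s: %d open descriptors\n", path, n);
        return 0;
    }

Reading each entry with readlink(2) would then show what every
descriptor points at (socket, regular file, and so on), which should
answer that last question.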

Thank you very much
Ferran
