Hi all,

finally I found some time to answer.

On Fri, 30 Sep 2011, Marc Kleine-Budde wrote:
> > sk->sk_wmem_alloc is increased by skb->truesize whenever the
> > application creates an skb belonging to the socket (i.e. on write)
> > and decreased by the same amount whenever the skb is passed to the
> > driver. The value of skb->truesize is sizeof(struct can_frame) +
> > sizeof(struct sk_buff), which is 200 in my case (PowerPC).
> 
> Can you check on some other ARCH, 32 and 64 bits please.

sizeof(struct sk_buff) for kernel v3.0 on different archs:
- powerpc (custom config): 184
- x86_64 (default config): 240
- x86_64 (Debian): 240
- i386 (default config): 192
- um (64bit) (default config): 208
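
The numbers can be checked with something like this trivial module
(just a sketch; it also prints sizeof(struct can_frame), i.e. the
16 bytes that together with the 184-byte sk_buff give the 200 bytes of
truesize mentioned above):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/can.h>

/* Sketch only: print the two sizes that add up to skb->truesize for a
 * single CAN frame (e.g. 184 + 16 = 200 on my PowerPC). */
static int __init skb_size_init(void)
{
	pr_info("sizeof(struct sk_buff) = %zu, sizeof(struct can_frame) = %zu\n",
		sizeof(struct sk_buff), sizeof(struct can_frame));
	return 0;
}

static void __exit skb_size_exit(void)
{
}

module_init(skb_size_init);
module_exit(skb_size_exit);
MODULE_LICENSE("GPL");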


On Fri, 30 Sep 2011, Oliver Hartkopp wrote:
> Hello all,
> 
> On 09/30/11 15:52, Marc Kleine-Budde wrote:
> 
> > On 09/30/2011 02:32 PM, Michal Sojka wrote:
> 
> 
> >> The default value of sk->sk_sndbuf is 108544, which means that for
> >> CAN this limit is reached (and the application blocks) when it has
> >> 542 CAN frames waiting to be sent to the driver. This is of course
> >> more than the 10 allowed by dev->tx_queue_len.
> >>
> >> Therefore, we propose to apply a patch like this:
> 
> 
> (..)
> 
> >> +  dev->tx_queue_len = 22;
> 
> 
> (..)
> 
> >> +  sk->sk_sndbuf = SOCK_MIN_SNDBUF;
> 
> 
> >> This sets the minimum possible sk_sndbuf, i.e. 2048, which allows
> >> 11 frames to be queued for a socket before the application blocks.
> 
> 
> (..)
> 
> > What about dynamically calculating the sk->sk_sndbuf, providing
> > room for a fixed number of CAN frames in the socket, i.e. 10 or so?
> > Maybe even make the number of CAN frames configurable at runtime.
> 
> 
> If we can modify the rcvbuf size with SO_RCVBUF, we should be able to
> use SO_SNDBUF for our needs too.
> 
> Indeed I tend to set sk_sndbuf to a size that allows storing only 3
> CAN frames for each raw socket by default.

Dynamic calculation may make sense. I just do not think it is a good
idea to set sk_sndbuf to values smaller than SOCK_MIN_SNDBUF (which
would likely happen with room for only 3 frames). Then it would be
possible to set a higher value by setsockopt(), and an attempt to set
it back to the default value would lead to EINVAL.
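
For illustration, a minimal userspace sketch of the setsockopt()
behavior I have in mind (error handling omitted; the exact numbers are
system dependent). Note that the kernel doubles the requested value and
never sets the buffer below SOCK_MIN_SNDBUF:

#include <stdio.h>
#include <sys/socket.h>
#include <linux/can.h>

int main(void)
{
	int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
	int sndbuf = 0;
	socklen_t len = sizeof(sndbuf);

	/* Read the default send buffer size (wmem_default; 108544 here). */
	getsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
	printf("default SO_SNDBUF: %d\n", sndbuf);

	/* Request a small buffer; the kernel stores 2 * 1024 = 2048,
	 * which happens to be SOCK_MIN_SNDBUF - smaller requests are
	 * rounded up to that minimum. */
	sndbuf = 1024;
	setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));

	getsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
	printf("effective SO_SNDBUF: %d\n", sndbuf);

	return 0;
}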


> 
> >> It is also necessary to slightly increase the default tx_queue_len.
> >> Increasing it to 22 allows using two applications (or better, two
> >> sockets) without seeing ENOBUFS. The third application/socket then
> >> gets ENOBUFS already on its first write().
> > 
> > Hmmm... 3 applications isn't that much, is it?
> > How many other applications are needed to deplete the standard 1000
> > tx_queue_len?

Of course, 3 was just an example to show that my hypothesis is correct.
 
> > 100k snd_buf / 2k skb+data = 50 frames per sock
> > 1000 tx_queuelen / 50 socks = 20 apps
> 
> 
> IMO the question is which delay we would like to guarantee for
> applications on the system. E.g. if we want a maximum delay of 50ms
> @500kbit/s, the tx_queue_len could be calculated to a value like 50
> or so - not very academic %-)

The question is what we want to guarantee - whether it is the delay
from an application to the bus, or just the number of frames sitting in
the TX queues. In fact, the former cannot be guaranteed at all, because
other nodes may flood the bus with high-priority frames, which can
increase the delay to infinity independently of tx_queue_len. The
latter has a sane meaning all the time.
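
To make the arithmetic behind such a bound explicit - a rough sketch,
assuming a worst case of about 135 bits on the wire for a standard
frame with 8 data bytes, including stuff bits:

#include <stdio.h>

int main(void)
{
	const double bits_per_frame = 135.0;	/* worst case, std frame, 8 bytes */
	const double bitrate = 500000.0;	/* 500 kbit/s */
	const double frame_time = bits_per_frame / bitrate;
	const int tx_queue_len = 50;

	/* Upper bound on the locally added queuing delay, assuming the
	 * node actually gets bus access for every frame. */
	printf("frame time: %.0f us, %d queued frames: %.1f ms\n",
	       frame_time * 1e6, tx_queue_len,
	       tx_queue_len * frame_time * 1e3);
	return 0;
}

So a tx_queue_len of 50 adds at most ~13.5 ms at 500 kbit/s - but only
as long as the bus is not monopolized by higher priority traffic.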

-Michal