On 19.08.2011 17:27, "Müller, René" wrote:
> On 17.08.2011 11:43, Oliver Hartkopp wrote:
>> On 17.08.2011 10:38, "Müller, René" wrote:
>>> Hi all,
>>>
>>> I have a performance issue with socketcan and an MPC5200B. My setup looks 
>>> like this:
>>>   - MPC5200B board (TQM5200)
>>>   - custom base board with two PCA82C251, one for each can controller
>>>   - linux-2.6.27.18-denx, I use the mpc52xx driver
>>>   - booted with uboot and kernel from flash
>>>   - mount root filesystem via NFS
>>>   - can0 with 1Mbit/s
>>>   - candump -l can0 to tmpfs 
>> Hi René,
>>
>> can you check whether the frames are dropped at socket level? I assume that 
>> candump is not able to write the data into tmpfs at full speed.
>>
>> See details at:
>>
>> http://www.mail-archive.com/[email protected]/msg00170.html
> Hi Oliver,
>
> thanks for the hint. I tried it, and indeed I lose my frames because I'm too 
> slow at fetching them. I will look for a better mechanism to dump the frames.
Hi Oliver,

I made some tests with an increased rx buffer size and a hacked candump. I want 
to dump every frame in a 30-second time span, so I increased the rx buffer with 
candump -r to 20000000. In candump I disabled the live dump to the log file. 
When I press a key, I set a "nothing passes" filter, so that the socket rx 
buffer contains the last ~25s (1Mbit/s with >90% busload). Then I start writing 
the frames to my candump log file until the buffer is empty. So far this 
solution works (no data is lost), but it eats my memory: for those ~25s I need 
~100MB of RAM. My board has only 128MB, so this is way too much, and I cannot 
increase the buffer further (I'm still missing ~5s).
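
For reference, here is a minimal sketch of what my hacked candump does (error 
handling omitted; the interface name and buffer size are just my test values, 
and raising SO_RCVBUF that far also needs net.core.rmem_max bumped 
accordingly):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
	struct sockaddr_can addr = { .can_family = AF_CAN };
	struct ifreq ifr;
	struct can_frame frame;
	int rcvbuf = 20000000;	/* matches candump -r 20000000 */
	int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

	setsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
	strcpy(ifr.ifr_name, "can0");
	ioctl(s, SIOCGIFINDEX, &ifr);
	addr.can_ifindex = ifr.ifr_ifindex;
	bind(s, (struct sockaddr *)&addr, sizeof(addr));

	getchar();	/* frames pile up in the socket rx buffer meanwhile */

	/* "nothing passes": a zero-length filter list disables reception */
	setsockopt(s, SOL_CAN_RAW, CAN_RAW_FILTER, NULL, 0);

	/* drain the queued frames; with MSG_DONTWAIT recv() returns -1
	 * (EAGAIN) once the buffer is empty */
	while (recv(s, &frame, sizeof(frame), MSG_DONTWAIT) > 0)
		printf("%03X [%d]\n", frame.can_id, frame.can_dlc);

	close(s);
	return 0;
}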

I took a look at the kernel code, and I think the socket buffer has some 
"slight" overhead in the CAN case: each received frame is wrapped in its own 
struct sk_buff, a huge structure compared to the 8 data bytes of one CAN 
frame. I'm afraid that this structure is needed.
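
To put rough numbers on it: at 1Mbit/s and >90% busload that is about 7000 
frames/s, so ~100MB for ~25s works out to something like 500-600 bytes 
accounted per queued frame, while the payload that actually needs to be stored 
is only this 16-byte structure from linux/can.h:

/* sizeof(struct can_frame) == 16, but each received frame is queued in
 * its own sk_buff, and the sk_buff truesize (sk_buff head plus
 * skb_shared_info plus the data) is what gets charged against
 * SO_RCVBUF: several hundred bytes per 8-byte payload, if I read the
 * code correctly. */
struct can_frame {
	canid_t can_id;		/* 32 bit CAN_ID + EFF/RTR/ERR flags */
	__u8	can_dlc;	/* data length code: 0 .. 8 */
	__u8	data[8] __attribute__((aligned(8)));
};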

I think my main problem is the system call overhead of recv/recvmsg: I tried 
to store the data in a circular buffer (from the Boost libraries) entirely in 
RAM and I'm still losing data. The only way to get the frames without data 
loss seems to be increasing the socket buffer.

At the moment I see 3 options:
1. Decrease the memory demand of the socket buffer (to fit my 30s into <30MB)
2. Put more than one CAN frame into one socket buffer structure to minimize the 
overhead (see the recvmmsg sketch below, which at least batches the system calls)
3. Try lincan, which hopefully has less system call overhead
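
Regarding option 2: since my test kernel is a 2.6.35, recvmmsg(2) might help. 
It is available since Linux 2.6.33 (if the libc does not wrap it yet, a 
syscall(__NR_recvmmsg, ...) stub should work) and fetches a whole batch of 
frames with a single system call. A rough sketch, assuming a bound raw CAN 
socket s as above:

#define _GNU_SOURCE
#include <string.h>
#include <sys/socket.h>
#include <linux/can.h>

#define BATCH 64

/* fetch up to BATCH queued CAN frames with one system call; returns
 * the number of frames received, or -1 (EAGAIN) on an empty queue */
static int recv_batch(int s, struct can_frame frames[BATCH])
{
	struct mmsghdr msgs[BATCH];
	struct iovec iovs[BATCH];
	int i;

	for (i = 0; i < BATCH; i++) {
		iovs[i].iov_base = &frames[i];
		iovs[i].iov_len = sizeof(frames[i]);
		memset(&msgs[i].msg_hdr, 0, sizeof(msgs[i].msg_hdr));
		msgs[i].msg_hdr.msg_iov = &iovs[i];
		msgs[i].msg_hdr.msg_iovlen = 1;
	}

	return recvmmsg(s, msgs, BATCH, MSG_DONTWAIT, NULL);
}

This does not shrink the per-frame sk_buff, but it would cut the number of 
recv() calls by a factor of BATCH.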

What do you think I should try? Do you see another option to get the frames at 
such a high busload without data loss?


Thanks,
René


>> As this functionality only works on 2.6.33+, I created a patch for our 
>> MPC5200-based system (which runs 2.6.28.10) that upgrades the CAN networking 
>> layer and drivers to recent functionality (including dropcount, isotp, cangw 
>> and a recent mpc52xx driver).
>>
>> I can send the (huge) patches to you if you're interested.
> Thanks for the offer. For the moment I got a 2.6.35.7 working for testing 
> purposes. If I need a backport later, I will come back to you.
>
>
> Best regards,
> René
>
>
>> Regards,
>> Oliver
>>
>>> Now I play with the busload on can0 (generated by CANalyzer and three 
>>> CANcaseXL). This leads to the following results:
>>>   - 0% to 71% busload ->  no missing frames
>>>   - 74% busload ->  5% missing frames
>>>   - 77% busload ->  10% missing frames
>>>   - 90% busload ->  45% missing frames
>>>
>>> The missing frames are measured by comparing the candump log file against 
>>> the CANalyzer log file. The indicated busload is measured by CANalyzer. The 
>>> cause seems to be very simple: the CPU load is too high (or the CPU is too 
>>> slow). When the busload is under 71%, the CPU load stays under 100%; as 
>>> soon as the busload exceeds 71%, the CPU load hits 100%, and that is when 
>>> the frames start to go missing.
>>>
>>> Has anyone else ever seen such a performance issue? Does anyone use the 
>>> MPC5200B in high-busload environments? Maybe I configured something wrong 
>>> in my Linux setup.
>>>
>>> What about lincan? It has a character device approach with very little 
>>> overhead. Is there a chance that this will solve my problem?
>>>
>>>
>>> Best regards,
>>> René
>>>