* I couldn't really find out where the "normal" Linux kernel fixes up
    the cmsg structure. After all messages have been added to the
    buffer, the original start address and the resulting length have to
    be written back to the structure. I'm currently doing this in
    rt_udp_recvmsg (just before the "return"), but I believe there is a
    better place...
You could run the kernel against a debugger (kgdb or the built-in
gdbserver of QEMU/KVM) and set a watchpoint...
Sounds interesting; I haven't used either of those two, but I should definitely set up QEMU! I think I have already figured it out, though: it happens on lines 2166 and 2169 in __sys_recvmsg (net/socket.c). The socket-level recvmsg functions operate on a copy in kernel space, and the original user structure doesn't get modified (so the original address is still available in msg_control); the __put_user call just updates the msg_controllen field.

So maybe that's something that should be changed in Xenomai? Is sys_rtdm_recvmsg (ksrc/skins/rtdm/syscall.c) the function that is always called, no matter which skin is used, or are there system call numbers defined for every skin? So if I use RTnet together with the POSIX skin and call recvmsg, __wrap_recvmsg is invoked (via the linker's --wrap mechanism), which uses:

   set_errno(XENOMAI_SKINCALL3(__rtdm_muxid, __rtdm_recvmsg, fd -
   __rtdm_fd_start, msg, flags));

which will actually resolve to a call to sys_rtdm_recvmsg?

I don't understand how to implement socket options. Is it okay to just introduce a new ioctl, or is it better to use setsockopt? I'm mainly unsure where to implement this. I wrote some code in ip_sock.c in stack/ipv4, but that's wrong; it currently doesn't work because of the level... The Linux kernel has the file sock.c, which implements a handler for the SOL_SOCKET level. Is there a simple way to hook into setsockopt in RTnet?
You receive setsockopt requests via an official IOCTL from the RTDM
layer. Watch out for _RTIOC_SETSOCKOPT.
I'm not sure which function is supposed to handle this request. In Linux, for example, I believe sock_setsock... only handles requests at the SOL_SOCKET level. I can't really figure out how this is done in RTnet. Adding SO_TIMESTAMPNS to rt_socket_common_ioctl would require some changes: I would have to handle _RTIOC_SETSOCKOPT, filter for SO_TIMESTAMPNS, and return -EOPNOTSUPP for any other option or level?

What's the difference between rt_socket_if_ioctl and rt_socket_common_ioctl? It's a little complicated to figure out what exactly happens when ioctl is called on a socket... Maybe I will try again in a few days. I think I first have to check Xenomai; then it may be obvious which function is called first. Most likely Xenomai already calls the function defined in some RTDM device structure, which in turn calls rt_socket_if_ioctl and rt_socket_common_ioctl...

On x86, the conversion will be barely visible. But on low-end archs, it
may make a little difference. However, supporting a standard interface
makes some sense to ease portability, so you should have SO_TIMESTAMPNS.
An optimized interface could be added on demand.
I think it's best to support both interfaces and let the user decide. For my task, the conversion is pointless, since I have to convert back to nanosecs_abs_t anyway. Supporting both doesn't add much complexity to the code.

First remark: You are mixing changes of different kind in your commits.
Specifically changes to generated autotools are better kept separate
from code changes (including changes to autotools input files). Same for
coding style changes. That will ease the review later on. Fortunately,
git allows you to reorganize things as often as you like.

Oh, and something is fairly broken in your tree over there. Looks like
one commit deletes all the generated autotools files...

BTW, I don't think this feature has to be optional at compile time. It
only adds a single if to the hotpath, not really expensive. And you
should really measure the difference between TIMESTAMPNS64 and
TIMESTAMPNS.
Oh, sorry. I know it's currently a mess. I plan to rebase exp and squash it into a single commit, since the actual code is only about 100 lines or so... I know that's not how you're supposed to do it, but the intermediate commits don't add any value. The actual code isn't that complicated once I have figured out how it should be done. I used git to transfer between the PC I test on and the PC I develop on, so the exp branch is really messed up...

I can remove CONFIG_RTNET_SO_TIMESTAMPNS; I just thought it was a good idea because the feature is new and could cause problems (maybe on other architectures...), but of course it decreases code readability.

I will try to keep different kinds of changes separate. I recently reviewed another repository where they were mixed, and it's really annoying, so thanks for the hint ;)

What is your target arch?
My target is x86, but the patch should be usable on all targets! That's why I think it's best to leave the decision to the user...

Thanks for your help!
_______________________________________________
RTnet-developers mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/rtnet-developers
