[Xenomai-core] Linking failure while compiling POSIX skin as built-in module
Hi folks, I detected a minor problem while compiling 2.6.14.2 with the most recent Xenomai code from SVN. The compilation quits at the linking stage if one tries to build the POSIX skin as a built-in module. Abort message:

ipc/built-in.o: In function `sem_init':
: multiple definition of `sem_init'
kernel/built-in.o:: first defined here
ld: Warning: size of symbol `sem_init' changed from 296 in kernel/built-in.o to 55 in ipc/built-in.o

The problem is the sem_init() function of the SysV IPC code in the Linux kernel, which is unluckily named exactly the same as the POSIX sem_init() but declared differently. It is declared as

void __init sem_init(void);

in SysV IPC and as

int sem_init(sem_t *sem, int pshared, unsigned value);

in POSIX. Thus the linker gets confused trying to link the two symbols into one kernel image. A workaround could be deactivating the SysV IPC code in the kernel or compiling the POSIX skin as a loadable module.

Cheers
Luotao Fu

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core
Re: [Xenomai-core] [RFC] define your own pipe heap
Philippe Gerum wrote:

Jan Kiszka wrote:

Philippe Gerum wrote:

Jan Kiszka wrote:

Jan Kiszka wrote:
...
A patch says more than a thousand words. ;) As a first approach, I picked the second variant and implemented a new function called rt_pipe_setpool. I also had to extend rt_pipe_alloc and rt_pipe_free so that the right pool is used by them.

I thought about this variant again, and it seems to me rather unsafe in case some buffer allocation takes place between rt_pipe_create and rt_pipe_setpool. So, here is a patch which extends rt_pipe_create with a new argument poolsize instead.

Yep, looks safer to me too.

Ok, I addressed most of your comments, and here is round 2 of variant 2. The only question for me is if we should allow rt_pipe_create in kernel space from RT context with poolsize=0, given that this is effectively prevented for userspace tasks?

This is not prevented for user-space, since there is an automatic switch to secondary mode caused by the lostage exec bit.

That's what I meant. Anyway, this doesn't change the situation: rt_pipe_create is not called in primary context from userspace, so the question is whether we should allow this for kernelspace and poolsize=0. So far, I deny any non-RT invocation.

You likely mean any non-Linux invocation.

Of course, I did.

Jan
Re: [Xenomai-core] [RFC] define your own pipe heap
Jan Kiszka wrote:

Hi there,

yea, I also want to join this endless pipe discussion! ;)

We ran into troubles here due to large messages that should be sent via native pipes. Large means larger than the default message heap, the system heap, of the pipe subsystem so far. That raised the question why we should not provide some interface (and the required internal patches) to maintain per-pipe heaps. At least optionally. This would allow creating pipe channels with even some megabytes of space, as we used them under RTAI.

I would provide the required patches, but I'm not yet sure if we better extend rt_pipe_create with an additional parameter bufsize (0 = default, i.e. system heap, >0 = allocate per kmalloc or vmalloc), or if we add another function like rt_pipe_bufsize to allocate a dedicated heap after creation.

A patch says more than a thousand words. ;) As a first approach, I picked the second variant and implemented a new function called rt_pipe_setpool. I also had to extend rt_pipe_alloc and rt_pipe_free so that the right pool is used by them.

Note, the patch is yet untested and also lacking documentation. It's just intended to gain some feedback.

Jan

Index: skins/native/pipe.h
===
--- skins/native/pipe.h (Revision 165)
+++ skins/native/pipe.h (Arbeitskopie)
@@ -37,6 +37,8 @@

 #ifdef __KERNEL__

+#include <nucleus/heap.h>
+
 #define XENO_PIPE_MAGIC 0x0202

 typedef xnpipe_mh_t RT_PIPE_MSG;
@@ -57,6 +59,10 @@

     RT_PIPE_MSG *buffer; /*!< Buffer used in byte stream mode. */

+    xnheap_t *bufpool; /*!< Current buffer pool. */
+
+    xnheap_t privpool; /*!< Private buffer pool. */
+
     size_t fillsz; /*!< Bytes written to the buffer. */

     u_long flushable; /*!< Flush request flag. */
@@ -87,6 +93,9 @@
                    const char *name,
                    int minor);

+int rt_pipe_setpool(RT_PIPE *pipe,
+                    size_t poolsize);
+
 int rt_pipe_delete(RT_PIPE *pipe);

 ssize_t rt_pipe_read(RT_PIPE *pipe,
@@ -113,9 +122,11 @@
                      size_t size,
                      int mode);

-RT_PIPE_MSG *rt_pipe_alloc(size_t size);
+RT_PIPE_MSG *rt_pipe_alloc(RT_PIPE *pipe,
+                           size_t size);

-int rt_pipe_free(RT_PIPE_MSG *msg);
+int rt_pipe_free(RT_PIPE *pipe,
+                 RT_PIPE_MSG *msg);

 ssize_t rt_pipe_flush(RT_PIPE *pipe);

Index: skins/native/syscall.c
===
--- skins/native/syscall.c (Revision 165)
+++ skins/native/syscall.c (Arbeitskopie)
@@ -3332,7 +3332,7 @@
     /* Zero-sized messages are allowed, so we still need to free the
        message buffer even if no data copy took place. */
-    rt_pipe_free(msg);
+    rt_pipe_free(pipe,msg);

     return err;
 }
@@ -3374,7 +3374,7 @@
     if (!__xn_access_ok(curr,VERIFY_READ,__xn_reg_arg2(regs),size))
         return -EFAULT;

-    msg = rt_pipe_alloc(size);
+    msg = rt_pipe_alloc(pipe,size);

     if (!msg)
         return -ENOMEM;
@@ -3386,7 +3386,7 @@
     if (err != size)
         /* If the operation failed, we need to free the message
           buffer by ourselves. */
-        rt_pipe_free(msg);
+        rt_pipe_free(pipe,msg);

     return err;
 }
@@ -3436,7 +3436,7 @@
     }
     else
     {
-        msg = rt_pipe_alloc(size);
+        msg = rt_pipe_alloc(pipe,size);

         if (!msg)
             return -ENOMEM;
@@ -3449,7 +3449,7 @@
     err = rt_pipe_stream(pipe,buf,size);

     if (msg)
-        rt_pipe_free(msg);
+        rt_pipe_free(pipe,msg);

     return err;
 }

Index: skins/native/pipe.c
===
--- skins/native/pipe.c (Revision 165)
+++ skins/native/pipe.c (Arbeitskopie)
@@ -50,8 +50,6 @@
 #include <native/registry.h>
 #include <native/pipe.h>

-static xnheap_t *__pipe_heap = kheap;
-
 static int __pipe_flush_apc;

 static DECLARE_XNQUEUE(__pipe_flush_q);
@@ -83,6 +81,14 @@
 #endif /* CONFIG_XENO_NATIVE_EXPORT_REGISTRY */

+static void __pipe_flush_pool (xnheap_t *heap,
+                               void *poolmem,
+                               u_long poolsize,
+                               void *cookie)
+{
+    xnarch_sysfree(poolmem,poolsize);
+}
+
 static inline ssize_t __pipe_flush (RT_PIPE *pipe)
 {
@@ -122,8 +128,10 @@
                                   size_t size,
                                   void *cookie)
 {
+    RT_PIPE *pipe = (RT_PIPE *)cookie;
+
     /* Allocate memory for the incoming message. */
-    return xnheap_alloc(__pipe_heap,size);
+    return xnheap_alloc(pipe->bufpool,size);
 }

 static int __pipe_output_handler (int bminor,
@@ -131,8 +139,10 @@
                                   int retval,
                                   void *cookie)
 {
+    RT_PIPE *pipe = (RT_PIPE *)cookie;
+
     /* Free memory from output/discarded message. */
-    xnheap_free(__pipe_heap,mh);
+    xnheap_free(pipe->bufpool,mh);

     return retval;
 }
@@ -240,6 +250,7 @@
         return -EPERM;

     pipe->buffer = NULL;
+    pipe->bufpool = kheap;
     pipe->fillsz = 0;
     pipe->flushable = 0;
     pipe->handle = 0; /* i.e. (still) unregistered pipe. */
@@ -289,6 +300,38 @@
Re: [Xenomai-core] [RFC] define your own pipe heap
Jan Kiszka wrote:
...
A patch says more than a thousand words. ;) As a first approach, I picked the second variant and implemented a new function called rt_pipe_setpool. I also had to extend rt_pipe_alloc and rt_pipe_free so that the right pool is used by them.

I thought about this variant again, and it seems to me rather unsafe in case some buffer allocation takes place between rt_pipe_create and rt_pipe_setpool. So, here is a patch which extends rt_pipe_create with a new argument poolsize instead.

Still untested...

Jan

Index: skins/native/pipe.h
===
--- skins/native/pipe.h (revision 165)
+++ skins/native/pipe.h (working copy)
@@ -37,6 +37,8 @@

 #ifdef __KERNEL__

+#include <nucleus/heap.h>
+
 #define XENO_PIPE_MAGIC 0x0202

 typedef xnpipe_mh_t RT_PIPE_MSG;
@@ -57,6 +59,10 @@

     RT_PIPE_MSG *buffer; /*!< Buffer used in byte stream mode. */

+    xnheap_t *bufpool; /*!< Current buffer pool. */
+
+    xnheap_t privpool; /*!< Private buffer pool. */
+
     size_t fillsz; /*!< Bytes written to the buffer. */

     u_long flushable; /*!< Flush request flag. */
@@ -85,8 +91,12 @@
 int rt_pipe_create(RT_PIPE *pipe,
                    const char *name,
-                   int minor);
+                   int minor,
+                   size_t poolsize);

+int rt_pipe_setpool(RT_PIPE *pipe,
+                    size_t poolsize);
+
 int rt_pipe_delete(RT_PIPE *pipe);

 ssize_t rt_pipe_read(RT_PIPE *pipe,
@@ -113,9 +123,11 @@
                      size_t size,
                      int mode);

-RT_PIPE_MSG *rt_pipe_alloc(size_t size);
+RT_PIPE_MSG *rt_pipe_alloc(RT_PIPE *pipe,
+                           size_t size);

-int rt_pipe_free(RT_PIPE_MSG *msg);
+int rt_pipe_free(RT_PIPE *pipe,
+                 RT_PIPE_MSG *msg);

 ssize_t rt_pipe_flush(RT_PIPE *pipe);

Index: skins/native/syscall.c
===
--- skins/native/syscall.c (revision 165)
+++ skins/native/syscall.c (working copy)
@@ -3194,6 +3194,7 @@
     char name[XNOBJECT_NAME_LEN];
     RT_PIPE_PLACEHOLDER ph;
     int err, minor;
+    size_t poolsize;
     RT_PIPE *pipe;

     if (!__xn_access_ok(curr,VERIFY_WRITE,__xn_reg_arg1(regs),sizeof(ph)))
@@ -3213,12 +3214,15 @@
     /* Device minor. */
     minor = (int)__xn_reg_arg3(regs);

+    /* Buffer pool size. */
+    poolsize = (size_t)__xn_reg_arg4(regs);
+
     pipe = (RT_PIPE *)xnmalloc(sizeof(*pipe));

     if (!pipe)
         return -ENOMEM;

-    err = rt_pipe_create(pipe,name,minor);
+    err = rt_pipe_create(pipe,name,minor,poolsize);

     if (err == 0)
     {
@@ -3332,7 +3336,7 @@
     /* Zero-sized messages are allowed, so we still need to free the
        message buffer even if no data copy took place. */
-    rt_pipe_free(msg);
+    rt_pipe_free(pipe,msg);

     return err;
 }
@@ -3374,7 +3378,7 @@
     if (!__xn_access_ok(curr,VERIFY_READ,__xn_reg_arg2(regs),size))
         return -EFAULT;

-    msg = rt_pipe_alloc(size);
+    msg = rt_pipe_alloc(pipe,size);

     if (!msg)
         return -ENOMEM;
@@ -3386,7 +3390,7 @@
     if (err != size)
         /* If the operation failed, we need to free the message
           buffer by ourselves. */
-        rt_pipe_free(msg);
+        rt_pipe_free(pipe,msg);

     return err;
 }
@@ -3436,7 +3440,7 @@
     }
     else
     {
-        msg = rt_pipe_alloc(size);
+        msg = rt_pipe_alloc(pipe,size);

         if (!msg)
             return -ENOMEM;
@@ -3449,7 +3453,7 @@
     err = rt_pipe_stream(pipe,buf,size);

     if (msg)
-        rt_pipe_free(msg);
+        rt_pipe_free(pipe,msg);

     return err;
 }

Index: skins/native/lib/pipe.c
===
--- skins/native/lib/pipe.c (revision 165)
+++ skins/native/lib/pipe.c (working copy)
@@ -23,7 +23,8 @@
 int rt_pipe_create (RT_PIPE *pipe,
                     const char *name,
-                    int minor)
+                    int minor,
+                    size_t poolsize)
 {
     return XENOMAI_SKINCALL3(__xeno_muxid,
                              __xeno_pipe_create,

Index: skins/native/pipe.c
===
--- skins/native/pipe.c (revision 165)
+++ skins/native/pipe.c (working copy)
@@ -50,8 +50,6 @@
 #include <native/registry.h>
 #include <native/pipe.h>

-static xnheap_t *__pipe_heap = kheap;
-
 static int __pipe_flush_apc;

 static DECLARE_XNQUEUE(__pipe_flush_q);
@@ -83,6 +81,14 @@
 #endif /* CONFIG_XENO_NATIVE_EXPORT_REGISTRY */

+static void __pipe_flush_pool (xnheap_t *heap,
+                               void *poolmem,
+                               u_long poolsize,
+                               void *cookie)
+{
+    xnarch_sysfree(poolmem,poolsize);
+}
+
 static inline ssize_t __pipe_flush (RT_PIPE *pipe)
 {
@@ -122,8 +128,10 @@
Re: [Xenomai-core] [RFC] define your own pipe heap
[EMAIL PROTECTED] wrote on 22.11.2005 11:21:09:

Jan Kiszka wrote:
...
A patch says more than a thousand words. ;) As a first approach, I picked the second variant and implemented a new function called rt_pipe_setpool. I also had to extend rt_pipe_alloc and rt_pipe_free so that the right pool is used by them.

I thought about this variant again, and it seems to me rather unsafe in case some buffer allocation takes place between rt_pipe_create and rt_pipe_setpool. So, here is a patch which extends rt_pipe_create with a new argument poolsize instead.

I haven't read the patch thoroughly yet, so just a few common remarks. IMHO, the interface would be much clearer if rt_pipe_create() is extended, since:

- we avoid a misuse where rt_pipe_alloc() is called with an old pool but rt_pipe_free() (explicitly, or implicitly internally) is called with a new one;

- rt_pipe_setpool() can be successfully called only once, and a user must take care that:
  o either it is the next call after the pipe has been created;
  o or all messages allocated from the old pool have been freed by that moment (otherwise we would need to extend the pipe::message interface so that every message knows the exact pool it has been allocated from and, moreover, implement reference counting so that a pool object exists as long as there is at least one reference on it - and we don't need such complications).

So, yep, adding an additional parameter to rt_pipe_create() would be the better solution. My humble 2 cents :o)

Once I thought about an interface that allows attaching/detaching an existing RT_HEAP object to any pipe/queue/maybe_smth_else, but here we would need to extend the underlying mechanisms as I mentioned above. And likely, such an interface is not of much avail for strict real-time environments. Keep it simple instead :o)

---
Best regards,
Dmitry