[Xenomai-core] [PULL REQUEST] analogy renaming + many bugfixes

2009-10-19 Thread Alexis Berlemont
The following changes since commit 8c847c4bf43fa65c3ec541850ecdb7e96113e94f:
  Philippe Gerum (1):
powerpc: fix commit number for 2.6.30.3-DENX

are available in the git repository at:

  ssh+git://g...@xenomai.org/xenomai-abe.git analogy

Alexis Berlemont (18):
  Remove useless wrappers (comedi_copy_*_user())
  Initialize the freshly allocated device's private area
  Fix obvious typo mistake
  Fix some error checkings in analog output command test function
  Fix various minor bugs
  Fix internal trigger via instruction (we do not need any data in the instruction structure)
  Add ai / ao trigger callback
  Add a trigger instruction
  Replace an info message by an error message
  Fix a problem in the mite configuration (only for AI)
  Add a missing EXPORT_SYMBOL() comedi_alloc_subd.
  Align fake macro declarations with real functions declarations
  Fix modules compilations issues
  Comedi4RTDM -> Analogy (first part)
  Comedi4RTDM -> Analogy (second part)
  Comedi4RTDM -> Analogy (third part, kernel side compiles)
  Comedi4RTDM -> Analogy (last part, user side compiles and runs)
  Update *_alloc_subd() after bugfix backport from comedi branch

 Makefile.in|1 +
 aclocal.m4 |4 +-
 config/Makefile.in |1 +
 configure  | 5454 +++-
 configure.in   |6 +-
 doc/Makefile.in|1 +
 doc/docbook/Makefile.in|1 +
 doc/docbook/custom-stylesheets/Makefile.in |1 +
 doc/docbook/custom-stylesheets/xsl/Makefile.in |1 +
 .../custom-stylesheets/xsl/common/Makefile.in  |1 +
 doc/docbook/custom-stylesheets/xsl/fo/Makefile.in  |1 +
 .../custom-stylesheets/xsl/html/Makefile.in|1 +
 doc/docbook/xenomai/Makefile.in|1 +
 doc/doxygen/Makefile.in|1 +
 doc/man/Makefile.in|1 +
 doc/txt/Makefile.in|1 +
 include/Makefile.am|2 +-
 include/Makefile.in|3 +-
 include/{comedi => analogy}/Makefile.am|6 +-
 include/{comedi => analogy}/Makefile.in|   13 +-
 include/analogy/analogy.h  |  152 +
 .../comedi_driver.h => analogy/analogy_driver.h}   |   16 +-
 include/{comedi => analogy}/buffer.h   |  192 +-
 include/{comedi => analogy}/channel_range.h|  162 +-
 include/{comedi => analogy}/command.h  |   34 +-
 include/{comedi => analogy}/context.h  |   32 +-
 include/{comedi => analogy}/descriptor.h   |   28 +-
 include/{comedi => analogy}/device.h   |   58 +-
 include/{comedi => analogy}/driver.h   |   34 +-
 include/analogy/instruction.h  |  225 +
 include/{comedi => analogy}/ioctl.h|   40 +-
 include/analogy/os_facilities.h|  191 +
 include/analogy/subdevice.h|  271 +
 include/analogy/transfer.h |  105 +
 include/{comedi => analogy}/types.h|   14 +-
 include/asm-arm/Makefile.in|1 +
 include/asm-arm/bits/Makefile.in   |1 +
 include/asm-blackfin/Makefile.in   |1 +
 include/asm-blackfin/bits/Makefile.in  |1 +
 include/asm-generic/Makefile.in|1 +
 include/asm-generic/bits/Makefile.in   |1 +
 include/asm-nios2/Makefile.in  |1 +
 include/asm-nios2/bits/Makefile.in |1 +
 include/asm-powerpc/Makefile.in|1 +
 include/asm-powerpc/bits/Makefile.in   |1 +
 include/asm-sim/Makefile.in|1 +
 include/asm-sim/bits/Makefile.in   |1 +
 include/asm-x86/Makefile.in|1 +
 include/asm-x86/bits/Makefile.in   |1 +
 include/comedi/comedi.h|  151 -
 include/comedi/instruction.h   |  225 -
 include/comedi/os_facilities.h |  211 -
 include/comedi/subdevice.h |  271 -
 include/comedi/transfer.h  |  105 -
 include/native/Makefile.in |1 +
 include/nucleus/Makefile.in|1 +
 include/posix/Makefile.in  |1 +
 include/posix/sys/Makefile.in  |1 +
 include/psos+/Makefile.in  |1 +
 include/rtai/Makefile.in   |1 +
 include/rtdm/Makefile.in   |1 +

Re: [Xenomai-core] [PATCH v2 1/3] nucleus: Use Linux spin lock for heap list management

2009-10-19 Thread Jan Kiszka
Jan Kiszka wrote:
> Gilles Chanteperdrix wrote:
>> Jan Kiszka wrote:
>>> No need for hard nklock protection of kheapq and the map counter, a
>>> normal spin lock suffices as all users must run over the root thread
>>> anyway.
>> At the very least, this should use rthal_spin_lock, so as to appear to
>> respect the layering.
> 
> Then the conversion would make no sense (hard interrupt lock again).
> Given that we are fiddling with Linux mm directly here, there is no hal
> involved.
> 
>> Anyway, do we really want to change this now?
>>
> 
> That's a different question. The patch alone does not buy us that much
> when we cannot reuse the lock for the heapq.

Wait, there is one advantage: We no longer have to walk kheapq under
nklock in __validate_heap_addr. I think that makes this patch
worthwhile. Will adapt it to avoid conflicts with the simulator.
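
As a minimal sketch (not the actual tree code) of what that buys, the lookup
could then look roughly like this -- the loop body is an assumption, only the
locking context and the kheapq linkage via heap->link come from the patch:

static xnheap_t *__validate_heap_addr(void *addr)
{
	xnholder_t *holder;

	/* Caller holds heapq_lock; no hard-IRQs-off section needed. */
	for (holder = getheadq(&kheapq); holder;
	     holder = nextq(&kheapq, holder)) {
		xnheap_t *heap = container_of(holder, xnheap_t, link);

		if (heap == addr)
			return heap;
	}

	return NULL;
}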

> 
> Jan
> 
> PS: More invasive changes will come anyway to plug cleanup races in the
> heap code.
> 

Jan





Re: [Xenomai-core] [PATCH v2 2/3] nucleus: Include all heaps in statistics

2009-10-19 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>> @@ -234,12 +239,65 @@ int xnheap_init(xnheap_t *heap,
>>  
>>  appendq(&heap->extents, &extent->link);
>>  
>> +vsnprintf(heap->name, sizeof(heap->name), name, args);
>> +
>> +spin_lock(&heapq_lock);
>> +appendq(&heapq, &heap->stat_link);
>> +spin_unlock(&heapq_lock);
> 
> You can not use a Linux spinlock in xnheap_init and xnheap_destroy:
> - this breaks the build for the simulator;
> - callers of xnheap_init and xnheap_destroy are not guaranteed to run on
> the root domain.

Oh, yes, unfortunately. The callers appear to be fixable, but that's
probably not worth it at this point. I will have to rewrite
heap_read_proc to periodically break out of nklock. Also not nice, but
less invasive.
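
Roughly along these lines, as a sketch only (the batching constant and the
missing restart-safe cursor are simplifications; heapq/stat_link are the names
from patch 2/3), inside heap_read_proc:

	xnholder_t *holder;
	int batch = 0;
	spl_t s;

	xnlock_get_irqsave(&nklock, s);
	for (holder = getheadq(&heapq); holder;
	     holder = nextq(&heapq, holder)) {
		xnheap_t *heap = container_of(holder, xnheap_t, stat_link);

		len += sprintf(page + len,
			       "size=%lu:used=%lu:pagesz=%lu  (%s)\n",
			       xnheap_usable_mem(heap),
			       xnheap_used_mem(heap),
			       xnheap_page_size(heap),
			       heap->name);

		/* Relax nklock every few entries to keep IRQ latency
		 * bounded; a real version must cope with the queue
		 * changing while the lock is dropped. */
		if (++batch == 4) {
			xnlock_put_irqrestore(&nklock, s);
			batch = 0;
			xnlock_get_irqsave(&nklock, s);
		}
	}
	xnlock_put_irqrestore(&nklock, s);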

Jan





Re: [Xenomai-core] [PATCH v2 1/3] nucleus: Use Linux spin lock for heap list management

2009-10-19 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>> No need for hard nklock protection of kheapq and the map counter, a
>> normal spin lock suffices as all users must run over the root thread
>> anyway.
> 
> At the very least, this should use rthal_spin_lock, so as to appear to
> respect the layering.

Then the conversion would make no sense (hard interrupt lock again).
Given that we are fiddling with Linux mm directly here, there is no hal
involved.

> 
> Anyway, do we really want to change this now?
> 

That's a different question. The patch alone does not buy us that much
when we cannot reuse the lock for the heapq.

Jan

PS: More invasive changes will come anyway to plug cleanup races in the
heap code.





Re: [Xenomai-core] [PATCH v2 2/3] nucleus: Include all heaps in statistics

2009-10-19 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
> @@ -234,12 +239,65 @@ int xnheap_init(xnheap_t *heap,
>  
>   appendq(&heap->extents, &extent->link);
>  
> + vsnprintf(heap->name, sizeof(heap->name), name, args);
> +
> + spin_lock(&heapq_lock);
> + appendq(&heapq, &heap->stat_link);
> + spin_unlock(&heapq_lock);

You can not use a Linux spinlock in xnheap_init and xnheap_destroy:
- this breaks the build for the simulator;
- callers of xnheap_init and xnheap_destroy are not guaranteed to run on
the root domain.
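
Put differently, the only pattern that stays safe regardless of the calling
domain is the hard lock already used elsewhere in heap.c -- a minimal sketch,
assuming the queue insertion stays in xnheap_init():

	spl_t s;

	/* nklock masks hard IRQs, so this section cannot spin against a
	 * Linux-domain holder even when the caller runs in primary mode. */
	xnlock_get_irqsave(&nklock, s);
	appendq(&heapq, &heap->stat_link);
	xnlock_put_irqrestore(&nklock, s);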

-- 
Gilles.



[Xenomai-core] [PATCH v2 2/3] nucleus: Include all heaps in statistics

2009-10-19 Thread Jan Kiszka
This extends /proc/xenomai/heap with statistics about all currently used
heaps.

Signed-off-by: Jan Kiszka 
---

 include/nucleus/heap.h|   12 -
 ksrc/drivers/ipc/iddp.c   |3 +
 ksrc/drivers/ipc/xddp.c   |6 ++-
 ksrc/nucleus/heap.c   |  106 ++---
 ksrc/nucleus/module.c |2 -
 ksrc/nucleus/pod.c|5 +-
 ksrc/nucleus/shadow.c |5 ++
 ksrc/skins/native/heap.c  |6 ++-
 ksrc/skins/native/pipe.c  |4 +-
 ksrc/skins/native/queue.c |6 ++-
 ksrc/skins/posix/shm.c|4 +-
 ksrc/skins/psos+/rn.c |6 ++-
 ksrc/skins/rtai/shm.c |7 ++-
 ksrc/skins/vrtx/heap.c|6 ++-
 ksrc/skins/vrtx/syscall.c |3 +
 15 files changed, 143 insertions(+), 38 deletions(-)

diff --git a/include/nucleus/heap.h b/include/nucleus/heap.h
index f39ef64..fcd9a8d 100644
--- a/include/nucleus/heap.h
+++ b/include/nucleus/heap.h
@@ -115,6 +115,10 @@ typedef struct xnheap {
 
XNARCH_DECL_DISPLAY_CONTEXT();
 
+   xnholder_t stat_link;   /* Link in heapq */
+
+   char name[48];
+
 } xnheap_t;
 
 extern xnheap_t kheap;
@@ -202,7 +206,8 @@ void xnheap_cleanup_proc(void);
 
 int xnheap_init_mapped(xnheap_t *heap,
   u_long heapsize,
-  int memflags);
+  int memflags,
+  const char *name, ...);
 
 int xnheap_destroy_mapped(xnheap_t *heap,
  void (*release)(struct xnheap *heap),
@@ -224,7 +229,10 @@ int xnheap_destroy_mapped(xnheap_t *heap,
 int xnheap_init(xnheap_t *heap,
void *heapaddr,
u_long heapsize,
-   u_long pagesize);
+   u_long pagesize,
+   const char *name, ...);
+
+void xnheap_set_name(xnheap_t *heap, const char *name, ...);
 
 int xnheap_destroy(xnheap_t *heap,
   void (*flushfn)(xnheap_t *heap,
diff --git a/ksrc/drivers/ipc/iddp.c b/ksrc/drivers/ipc/iddp.c
index a407946..b6382f1 100644
--- a/ksrc/drivers/ipc/iddp.c
+++ b/ksrc/drivers/ipc/iddp.c
@@ -559,7 +559,8 @@ static int __iddp_bind_socket(struct rtipc_private *priv,
}
 
ret = xnheap_init(&sk->privpool,
- poolmem, poolsz, XNHEAP_PAGE_SIZE);
+ poolmem, poolsz, XNHEAP_PAGE_SIZE,
+ "ippd: %d", port);
if (ret) {
xnarch_free_host_mem(poolmem, poolsz);
goto fail;
diff --git a/ksrc/drivers/ipc/xddp.c b/ksrc/drivers/ipc/xddp.c
index f62147a..a5dafef 100644
--- a/ksrc/drivers/ipc/xddp.c
+++ b/ksrc/drivers/ipc/xddp.c
@@ -703,7 +703,7 @@ static int __xddp_bind_socket(struct rtipc_private *priv,
}
 
ret = xnheap_init(&sk->privpool,
- poolmem, poolsz, XNHEAP_PAGE_SIZE);
+ poolmem, poolsz, XNHEAP_PAGE_SIZE, "");
if (ret) {
xnarch_free_host_mem(poolmem, poolsz);
goto fail;
@@ -746,6 +746,10 @@ static int __xddp_bind_socket(struct rtipc_private *priv,
sk->minor = ret;
sa->sipc_port = ret;
sk->name = *sa;
+
+   if (poolsz > 0)
+   xnheap_set_name(sk->bufpool, "xddp: %d", sa->sipc_port);
+
/* Set default destination if unset at binding time. */
if (sk->peer.sipc_port < 0)
sk->peer = *sa;
diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
index 5a17a94..81fdd7a 100644
--- a/ksrc/nucleus/heap.c
+++ b/ksrc/nucleus/heap.c
@@ -77,6 +77,7 @@ xnheap_t kstacks; /* Private stack pool */
 #endif
 
 static DEFINE_SPINLOCK(heapq_lock);
+static DEFINE_XNQUEUE(heapq);  /* Heap list for /proc reporting */
 
 static void init_extent(xnheap_t *heap, xnextent_t *extent)
 {
@@ -110,7 +111,7 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
  */
 
 /*!
- * \fn xnheap_init(xnheap_t *heap,void *heapaddr,u_long heapsize,u_long pagesize)
+ * \fn xnheap_init(xnheap_t *heap,void *heapaddr,u_long heapsize,u_long pagesize,const char *name,...)
  * \brief Initialize a memory heap.
  *
  * Initializes a memory heap suitable for time-bounded allocation
@@ -147,6 +148,10 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
  * best one for your needs. In the current implementation, pagesize
  * must be a power of two in the range [ 8 .. 32768 ] inclusive.
  *
+ * @param name Name displayed in statistic outputs. This parameter can
+ * be a format string, in which case succeeding parameters will be used
+ * to resolve the final name.
+ *
  * @return 0 is returned upon success, or one of the following error
  * codes:
  *
@@ -163,8 +168,8 @@ static void init_extent(xnheap_t *heap, xnextent_t *extent)
  * Rescheduling: never.
  */
 
-int xnheap_init(xnheap_t *heap,
-   void *heapaddr, u_long heapsize, u_long pagesize)
+static int xnheap_init_va(xnheap_t *heap,

[Xenomai-core] [PATCH v2 3/3] native: Release fastlock to the proper heap

2009-10-19 Thread Jan Kiszka
Don't assume rt_mutex_delete is only called for in-kernel users; it may
be triggered via auto-cleanup also on user space objects. So check for
the creator and release the fastlock to the correct heap.

Signed-off-by: Jan Kiszka 
---

 ksrc/skins/native/mutex.c |   14 --
 1 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/ksrc/skins/native/mutex.c b/ksrc/skins/native/mutex.c
index 20eb484..6cf7eb1 100644
--- a/ksrc/skins/native/mutex.c
+++ b/ksrc/skins/native/mutex.c
@@ -47,6 +47,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -316,8 +317,17 @@ int rt_mutex_delete(RT_MUTEX *mutex)
err = rt_mutex_delete_inner(mutex);
 
 #ifdef CONFIG_XENO_FASTSYNCH
-   if (!err)
-   xnfree(mutex->synch_base.fastlock);
+   if (!err) {
+#ifdef CONFIG_XENO_OPT_PERVASIVE
+   if (mutex->cpid) {
+   int global = xnsynch_test_flags(&mutex->synch_base,
+   RT_MUTEX_EXPORTED);
+   xnheap_free(&xnsys_ppd_get(global)->sem_heap,
+   mutex->synch_base.fastlock);
+   } else
+#endif /* CONFIG_XENO_OPT_PERVASIVE */
+   xnfree(mutex->synch_base.fastlock);
+   }
 #endif /* CONFIG_XENO_FASTSYNCH */
 
return err;




Re: [Xenomai-core] [PATCH v2 1/3] nucleus: Use Linux spin lock for heap list management

2009-10-19 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
> No need for hard nklock protection of kheapq and the map counter, a
> normal spin lock suffices as all users must run over the root thread
> anyway.

At the very least, this should use rthal_spin_lock, so as to appear to
respect the layering.

Anyway, do we really want to change this now?

-- 
Gilles.



[Xenomai-core] [PATCH v2 0/3] Sem heap statistics & mutex auto-cleanup fixes

2009-10-19 Thread Jan Kiszka
This version extends the heap statistics to include all heaps that are
initialized by the nucleus or by some skin. Some internal API modifications
were required, which makes the second patch a bit larger. Moreover, I
modified some internal locking of the nucleus heap code to use plain Linux
spin locks instead of the heavy nklock.
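
The visible part of that API change is the printf-style name now passed to
xnheap_init() and xnheap_init_mapped(); as a rough usage sketch mirroring the
iddp case in patch 2/3 (pool setup and error handling are illustrative only):

	ret = xnheap_init(&sk->privpool, poolmem, poolsz,
			  XNHEAP_PAGE_SIZE, "iddp: %d", port);
	if (ret) {
		xnarch_free_host_mem(poolmem, poolsz);
		goto fail;
	}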

Please pull the series from

git://xenomai.org/xenomai-jki.git for-upstream

if there are no concerns.

Jan Kiszka (3):
  nucleus: Use Linux spin lock for heap list management
  nucleus: Include all heaps in statistics
  native: Release fastlock to the proper heap

 include/nucleus/heap.h|   12 -
 ksrc/drivers/ipc/iddp.c   |3 +-
 ksrc/drivers/ipc/xddp.c   |6 ++-
 ksrc/nucleus/heap.c   |  132 ++--
 ksrc/nucleus/module.c |2 +-
 ksrc/nucleus/pod.c|5 +-
 ksrc/nucleus/shadow.c |5 ++-
 ksrc/skins/native/heap.c  |6 ++-
 ksrc/skins/native/mutex.c |   14 -
 ksrc/skins/native/pipe.c  |4 +-
 ksrc/skins/native/queue.c |6 ++-
 ksrc/skins/posix/shm.c|4 +-
 ksrc/skins/psos+/rn.c |6 ++-
 ksrc/skins/rtai/shm.c |7 ++-
 ksrc/skins/vrtx/heap.c|6 ++-
 ksrc/skins/vrtx/syscall.c |3 +-
 16 files changed, 167 insertions(+), 54 deletions(-)

[1] http://thread.gmane.org/gmane.linux.real-time.xenomai.devel/6559



[Xenomai-core] [PATCH v2 1/3] nucleus: Use Linux spin lock for heap list management

2009-10-19 Thread Jan Kiszka
No need for hard nklock protection of kheapq and the map counter, a
normal spin lock suffices as all users must run over the root thread
anyway.

Signed-off-by: Jan Kiszka 
---

 ksrc/nucleus/heap.c |   26 --
 1 files changed, 12 insertions(+), 14 deletions(-)

diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
index 9ca2591..5a17a94 100644
--- a/ksrc/nucleus/heap.c
+++ b/ksrc/nucleus/heap.c
@@ -76,6 +76,8 @@ EXPORT_SYMBOL_GPL(kheap);
 xnheap_t kstacks;  /* Private stack pool */
 #endif
 
+static DEFINE_SPINLOCK(heapq_lock);
+
 static void init_extent(xnheap_t *heap, xnextent_t *extent)
 {
caddr_t freepage;
@@ -1022,14 +1024,13 @@ static void __unreserve_and_free_heap(void *ptr, size_t size, int kmflags)
 static void xnheap_vmclose(struct vm_area_struct *vma)
 {
xnheap_t *heap = vma->vm_private_data;
-   spl_t s;
 
-   xnlock_get_irqsave(&nklock, s);
+   spin_lock(&heapq_lock);
 
if (atomic_dec_and_test(&heap->archdep.numaps)) {
if (heap->archdep.release) {
removeq(&kheapq, &heap->link);
-   xnlock_put_irqrestore(&nklock, s);
+   spin_unlock(&heapq_lock);
__unreserve_and_free_heap(heap->archdep.heapbase,
  xnheap_extentsize(heap),
  heap->archdep.kmflags);
@@ -1038,7 +1039,7 @@ static void xnheap_vmclose(struct vm_area_struct *vma)
}
}
 
-   xnlock_put_irqrestore(&nklock, s);
+   spin_unlock(&heapq_lock);
 }
 
 static struct vm_operations_struct xnheap_vmops = {
@@ -1068,9 +1069,8 @@ static int xnheap_ioctl(struct inode *inode,
 {
xnheap_t *heap;
int err = 0;
-   spl_t s;
 
-   xnlock_get_irqsave(&nklock, s);
+   spin_lock(&heapq_lock);
 
heap = __validate_heap_addr((void *)arg);
 
@@ -1083,7 +1083,7 @@ static int xnheap_ioctl(struct inode *inode,
 
   unlock_and_exit:
 
-   xnlock_put_irqrestore(&nklock, s);
+   spin_unlock(&heapq_lock);
 
return err;
 }
@@ -1148,7 +1148,6 @@ static int xnheap_mmap(struct file *file, struct vm_area_struct *vma)
 int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
 {
void *heapbase;
-   spl_t s;
int err;
 
/* Caller must have accounted for internal overhead. */
@@ -1172,9 +1171,9 @@ int xnheap_init_mapped(xnheap_t *heap, u_long heapsize, int memflags)
heap->archdep.heapbase = heapbase;
heap->archdep.release = NULL;
 
-   xnlock_get_irqsave(&nklock, s);
+   spin_lock(&heapq_lock);
appendq(&kheapq, &heap->link);
-   xnlock_put_irqrestore(&nklock, s);
+   spin_unlock(&heapq_lock);
 
return 0;
 }
@@ -1184,20 +1183,19 @@ int xnheap_destroy_mapped(xnheap_t *heap, void (*release)(struct xnheap *heap),
 {
int ret = 0, ccheck;
unsigned long len;
-   spl_t s;
 
ccheck = mapaddr ? 1 : 0;
 
-   xnlock_get_irqsave(&nklock, s);
+   spin_lock(&heapq_lock);
 
if (atomic_read(&heap->archdep.numaps) > ccheck) {
heap->archdep.release = release;
-   xnlock_put_irqrestore(&nklock, s);
+   spin_unlock(&heapq_lock);
return -EBUSY;
}
 
removeq(&kheapq, &heap->link); /* Prevent further mapping. */
-   xnlock_put_irqrestore(&nklock, s);
+   spin_unlock(&heapq_lock);
 
len = xnheap_extentsize(heap);
 




Re: [Xenomai-core] [PATCH 1/2] nucleus: Add semaphore heap statistics

2009-10-19 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>> This extends /proc/xenomai/heap with statistics about the global as well
>> as all per-process semaphore heaps. This is helpful to track down the
>> reason for ENOMEM (system or sem heap full?) and to find out that we
>> are leaking memory from the global heap on automatic native mutex,
>> queue, and heap deletion.
> 
> I'd rather see the xnholder_t in the xnheap structure. This way, we
> would be able to add all heaps to /proc/xenomai/heaps.
> 

Makes sense, will rework this.

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux



Re: [Xenomai-core] [PATCH 1/2] nucleus: Add semaphore heap statistics

2009-10-19 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
> This extends /proc/xenomai/heap with statistics about the global as well
> as all per-process semaphore heaps. This is helpful to track down the
> reason for ENOMEM (system or sem heap full?) and to find out that we
> are leaking memory from the global heap on automatic native mutex,
> queue, and heap deletion.

I'd rather see the xnholder_t in the xnheap structure. This way, we
would be able to add all heaps to /proc/xenomai/heaps.
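
Something along these lines, as an illustrative sketch only (field names are
guesses here, not the final change):

typedef struct xnheap {
	/* ... existing allocator state ... */

	xnholder_t stat_link;	/* link in a global heap queue */
	char name[48];		/* label shown by /proc/xenomai/heap */
} xnheap_t;

static DEFINE_XNQUEUE(heapq);	/* walked by heap_read_proc() */

/* xnheap_init()/xnheap_destroy() then only need to append/remove
 * &heap->stat_link under a suitable lock. */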

-- 
  Gilles




[Xenomai-core] [PATCH 0/2] Sem heap statistics & mutex auto-cleanup fixes

2009-10-19 Thread Jan Kiszka
This series fixes the native mutex auto-cleanup issues I reported in
[1]. It does not yet address the issues around heaps and queues as I
stumbled over more cleanup-related races, also in the nucleus, and want
to go through this with more care. Expect further patches to follow.

Also included in this pull request is a statistic enhancement for
/proc/xenomai/heap: It was lacking the sem heaps so far.

Please pull the series from

git://xenomai.org/xenomai-jki.git for-upstream

if there are no concerns.

Jan Kiszka (2):
  nucleus: Add semaphore heap statistics
  native: Release fastlock to the proper heap

 include/nucleus/sys_ppd.h |   11 +++
 ksrc/nucleus/heap.c   |   30 ++
 ksrc/nucleus/shadow.c |   23 +++
 ksrc/skins/native/mutex.c |   14 --
 4 files changed, 76 insertions(+), 2 deletions(-)

[1] http://thread.gmane.org/gmane.linux.real-time.xenomai.devel/6559



[Xenomai-core] [PATCH 2/2] native: Release fastlock to the proper heap

2009-10-19 Thread Jan Kiszka
Don't assume rt_mutex_delete is only called for in-kernel users; it may
be triggered via auto-cleanup also on user space objects. So check for
the creator and release the fastlock to the correct heap.

Signed-off-by: Jan Kiszka 
---

 ksrc/skins/native/mutex.c |   14 --
 1 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/ksrc/skins/native/mutex.c b/ksrc/skins/native/mutex.c
index 20eb484..6cf7eb1 100644
--- a/ksrc/skins/native/mutex.c
+++ b/ksrc/skins/native/mutex.c
@@ -47,6 +47,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -316,8 +317,17 @@ int rt_mutex_delete(RT_MUTEX *mutex)
err = rt_mutex_delete_inner(mutex);
 
 #ifdef CONFIG_XENO_FASTSYNCH
-   if (!err)
-   xnfree(mutex->synch_base.fastlock);
+   if (!err) {
+#ifdef CONFIG_XENO_OPT_PERVASIVE
+   if (mutex->cpid) {
+   int global = xnsynch_test_flags(&mutex->synch_base,
+   RT_MUTEX_EXPORTED);
+   xnheap_free(&xnsys_ppd_get(global)->sem_heap,
+   mutex->synch_base.fastlock);
+   } else
+#endif /* CONFIG_XENO_OPT_PERVASIVE */
+   xnfree(mutex->synch_base.fastlock);
+   }
 #endif /* CONFIG_XENO_FASTSYNCH */
 
return err;




[Xenomai-core] [PATCH 1/2] nucleus: Add semaphore heap statistics

2009-10-19 Thread Jan Kiszka
This extends /proc/xenomai/heap with statistics about the global as well
as all per-process semaphore heaps. This is helpful to track down the
reason for ENOMEM (system or sem heap full?) and to find out that we
are leaking memory from the global heap on automatic native mutex,
queue, and heap deletion.

Signed-off-by: Jan Kiszka 
---

 include/nucleus/sys_ppd.h |   11 +++
 ksrc/nucleus/heap.c   |   30 ++
 ksrc/nucleus/shadow.c |   23 +++
 3 files changed, 64 insertions(+), 0 deletions(-)

diff --git a/include/nucleus/sys_ppd.h b/include/nucleus/sys_ppd.h
index 8f8cb44..015af53 100644
--- a/include/nucleus/sys_ppd.h
+++ b/include/nucleus/sys_ppd.h
@@ -3,11 +3,17 @@
 
 #include 
 #include 
+#include 
 
 struct xnsys_ppd {
xnshadow_ppd_t ppd;
xnheap_t sem_heap;
 
+#ifdef CONFIG_XENO_OPT_PERVASIVE
+   xnholder_t link;
+   pid_t pid;
+#endif
+
 #define ppd2sys(addr) container_of(addr, struct xnsys_ppd, ppd)
 };
 
@@ -15,6 +21,11 @@ extern struct xnsys_ppd __xnsys_global_ppd;
 
 #ifdef CONFIG_XENO_OPT_PERVASIVE
 
+#include 
+
+extern xnqueue_t xnsys_ppds;
+extern spinlock_t xnsys_ppd_lock;
+
 static inline struct xnsys_ppd *xnsys_ppd_get(int global)
 {
xnshadow_ppd_t *ppd;
diff --git a/ksrc/nucleus/heap.c b/ksrc/nucleus/heap.c
index 9ca2591..4a3abd0 100644
--- a/ksrc/nucleus/heap.c
+++ b/ksrc/nucleus/heap.c
@@ -67,6 +67,7 @@ HEAP {
 #include 
 #include 
 #include 
+#include 
 #include 
 
 xnheap_t kheap;/* System heap */
@@ -1277,11 +1278,17 @@ int xnheap_destroy_mapped(xnheap_t *heap, void (*release)(struct xnheap *heap),
 #ifdef CONFIG_PROC_FS
 
 #include 
+#include 
 
 static int heap_read_proc(char *page,
  char **start,
  off_t off, int count, int *eof, void *data)
 {
+#ifdef CONFIG_XENO_OPT_PERVASIVE
+   xnholder_t *entry;
+   struct xnsys_ppd *sys_ppd;
+#endif
+   xnheap_t *sem_heap;
int len;
 
if (!xnpod_active_p())
@@ -1298,6 +1305,29 @@ static int heap_read_proc(char *page,
   xnheap_used_mem(&kstacks),
   xnheap_page_size(&kstacks));
 #endif
+   sem_heap = &__xnsys_global_ppd.sem_heap;
+   if (sem_heap)
+   len += sprintf(page + len, "size=%lu:used=%lu:pagesz=%lu  "
+  "(global sem heap)\n",
+  xnheap_usable_mem(sem_heap),
+  xnheap_used_mem(sem_heap),
+  xnheap_page_size(sem_heap));
+#ifdef CONFIG_XENO_OPT_PERVASIVE
+   spin_lock(&xnsys_ppd_lock);
+   entry = getheadq(&xnsys_ppds);
+   while (entry) {
+   sys_ppd = container_of(entry, struct xnsys_ppd, link);
+   sem_heap = &sys_ppd->sem_heap;
+   len += sprintf(page + len, "size=%lu:used=%lu:pagesz=%lu  "
+  "(private sem heap [%d])\n",
+  xnheap_usable_mem(sem_heap),
+  xnheap_used_mem(sem_heap),
+  xnheap_page_size(sem_heap),
+  sys_ppd->pid);
+   entry = nextq(&xnsys_ppds, entry);
+   }
+   spin_unlock(&xnsys_ppd_lock);
+#endif
 
len -= off;
if (len <= off + count)
diff --git a/ksrc/nucleus/shadow.c b/ksrc/nucleus/shadow.c
index d6d1203..ec69438 100644
--- a/ksrc/nucleus/shadow.c
+++ b/ksrc/nucleus/shadow.c
@@ -515,6 +515,9 @@ void xnshadow_rpi_check(void)
 
 #endif /* !CONFIG_XENO_OPT_RPIDISABLE */
 
+DEFINE_XNQUEUE(xnsys_ppds);
+DEFINE_SPINLOCK(xnsys_ppd_lock);
+
 static xnqueue_t *ppd_hash;
 #define PPD_HASH_SIZE 13
 
@@ -652,6 +655,20 @@ static inline void ppd_remove_mm(struct mm_struct *mm,
xnlock_put_irqrestore(&nklock, s);
 }
 
+static void xnsys_ppd_register(struct xnsys_ppd *sys_ppd)
+{
+   spin_lock(&xnsys_ppd_lock);
+   appendq(&xnsys_ppds, &sys_ppd->link);
+   spin_unlock(&xnsys_ppd_lock);
+}
+
+static void xnsys_ppd_unregister(struct xnsys_ppd *sys_ppd)
+{
+   spin_lock(&xnsys_ppd_lock);
+   removeq(&xnsys_ppds, &sys_ppd->link);
+   spin_unlock(&xnsys_ppd_lock);
+}
+
 static inline void request_syscall_restart(xnthread_t *thread,
   struct pt_regs *regs,
   int sysflags)
@@ -1892,10 +1909,16 @@ static void *xnshadow_sys_event(int event, void *data)
return ERR_PTR(err);
}
 
+   p->pid = current->pid;
+   xnsys_ppd_register(p);
+
return &p->ppd;
 
case XNSHADOW_CLIENT_DETACH:
p = ppd2sys((xnshadow_ppd_t *) data);
+
+   xnsys_ppd_unregister(p);
+
xnheap_destroy_mapped(&p->sem_heap, post_ppd_release, NULL);
 
return NULL;

