Re: [Xenomai-core] [PATCH, RFC] Make ioremap'ed memory available through rtdm_mmap_to_user()

2006-09-22 Thread Stelian Pop
[Sorry for the delay, I was quite busy those days and unable to test the
patch...]

On Tuesday 19 September 2006 at 17:34 +0200, Jan Kiszka wrote:
  
  Or maybe we should lower the API level a little bit, and let the user
  specify the physical address of the mapping instead of the virtual
  one
 
 How would this help? You still need the virtual address for VMA blocks
 in order to collect the pages.

Actually we need both the virtual address (for the vmalloc case) and the
physical address (for the ioremap case).

 +if (mmap_data->mem_type == RTDM_MEMTYPE_VMALLOC) {
[...]

This still won't work because for RTDM_MEMTYPE_IOREMAP
virt_to_phys((void *)vaddr) doesn't return the physical address.

So we still need to either:
a) walk the vmlist struct to find out the needed vm->phys_addr (which
probably won't work in 2.4, or even in earlier 2.6 kernels)

b) pass both the physical and the virtual addresses in the
rtdm_mmap_to_user prototype (the physical address would be used only
when RTDM_MEMTYPE_IOREMAP is specified)

c) make 'src_addr' be a virtual address in some cases, and a physical
address in the ioremap case.

d) make a special rtdm_mmap_iomem_to_user() function...

The (tested) patch below implements the b) case.
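
For illustration, a caller of the b) prototype could look roughly like the
sketch below. The helper name, register address and error handling are made
up; only the rtdm_mmap_to_user() signature follows the patch:

#include <asm/io.h>
#include <linux/mman.h>
#include <rtdm/rtdm_driver.h>

/* Hypothetical sketch: map an ioremap'ed register block to user space with
 * the extended b) prototype. The physical address is passed explicitly and
 * is only consumed because RTDM_MEMTYPE_IOREMAP is given. */
static int demo_map_io(rtdm_user_info_t *user_info,
                       unsigned long io_phys, size_t len, void **uptr)
{
    void *io_virt = (void *)ioremap(io_phys, len); /* kernel virtual address */

    if (!io_virt)
        return -ENOMEM;

    *uptr = NULL; /* no fixed user address requested */
    return rtdm_mmap_to_user(user_info, io_virt, io_phys, len,
                             RTDM_MEMTYPE_IOREMAP,
                             PROT_READ | PROT_WRITE, uptr,
                             NULL, NULL);
}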

Index: include/rtdm/rtdm_driver.h
===
--- include/rtdm/rtdm_driver.h  (revision 1652)
+++ include/rtdm/rtdm_driver.h  (working copy)
@@ -1021,6 +1021,25 @@
 
 /* --- utility functions --- */
 
+/*!
+ * @addtogroup util
+ * @{
+ */
+
+/*!
+ * @anchor RTDM_MEMTYPE_xxx @name Memory Types
+ * Flags defining the type of a memory region
+ * @{
+ */
+/** Allocated with kmalloc() */
+#define RTDM_MEMTYPE_KMALLOC    0x00
+/** Remapped physical memory */
+#define RTDM_MEMTYPE_IOREMAP    0x01
+/** Allocated with vmalloc() */
+#define RTDM_MEMTYPE_VMALLOC    0x02
+/** @} Memory Types */
+/** @} util */
+
#define rtdm_printk(format, ...)    printk(format, ##__VA_ARGS__)
 
 #ifndef DOXYGEN_CPP /* Avoid static inline tags for RTDM in doxygen */
@@ -1035,8 +1054,9 @@
 }
 
 #ifdef CONFIG_XENO_OPT_PERVASIVE
-int rtdm_mmap_to_user(rtdm_user_info_t *user_info, void *src_addr, size_t len,
-  int prot, void **pptr,
+int rtdm_mmap_to_user(rtdm_user_info_t *user_info,
+  void *src_vaddr, unsigned long src_paddr, size_t len,
+  int mem_type, int prot, void **pptr,
   struct vm_operations_struct *vm_ops,
   void *vm_private_data);
 int rtdm_munmap(rtdm_user_info_t *user_info, void *ptr, size_t len);
Index: ksrc/skins/rtdm/drvlib.c
===
--- ksrc/skins/rtdm/drvlib.c  (revision 1652)
+++ ksrc/skins/rtdm/drvlib.c  (working copy)
@@ -1368,7 +1368,9 @@
 
 #if defined(CONFIG_XENO_OPT_PERVASIVE) || defined(DOXYGEN_CPP)
 struct rtdm_mmap_data {
-void *src_addr;
+void *src_vaddr;
+unsigned long src_paddr;
+int mem_type;
 struct vm_operations_struct *vm_ops;
 void *vm_private_data;
 };
@@ -1376,17 +1378,22 @@
 static int rtdm_mmap_buffer(struct file *filp, struct vm_area_struct *vma)
 {
struct rtdm_mmap_data *mmap_data = filp->private_data;
-unsigned long vaddr, maddr, size;
+unsigned long vaddr, paddr, maddr, size;

vma->vm_ops = mmap_data->vm_ops;
vma->vm_private_data = mmap_data->vm_private_data;

-vaddr = (unsigned long)mmap_data->src_addr;
+vaddr = (unsigned long)mmap_data->src_vaddr;
maddr = vma->vm_start;
size  = vma->vm_end - vma->vm_start;

+if (mmap_data->mem_type == RTDM_MEMTYPE_IOREMAP)
+   paddr = mmap_data->src_paddr;
+else
+   paddr = virt_to_phys((void *)vaddr);
+
 #ifdef CONFIG_MMU
-if ((vaddr >= VMALLOC_START) && (vaddr < VMALLOC_END)) {
+if (mmap_data->mem_type == RTDM_MEMTYPE_VMALLOC) {
 unsigned long mapped_size = 0;
 
 XENO_ASSERT(RTDM, (vaddr == PAGE_ALIGN(vaddr)), return -EINVAL);
@@ -1403,8 +1410,7 @@
 return 0;
 } else
 #endif /* CONFIG_MMU */
-return xnarch_remap_io_page_range(vma, maddr,
-  virt_to_phys((void *)vaddr),
+return xnarch_remap_io_page_range(vma, maddr, paddr,
   size, PAGE_SHARED);
 }
 
@@ -1417,8 +1423,12 @@
  *
  * @param[in] user_info User information pointer as passed to the invoked
  * device operation handler
- * @param[in] src_addr Kernel address to be mapped
+ * @param[in] src_vaddr Kernel virtual address to be mapped
+ * @param[in] src_paddr Kernel physical address to be mapped (used only for IO
+ * memory, i.e. mem_type=RTDM_MEMTYPE_IOREMAP)
  * @param[in] len Length of the memory range
+ * @param[in] mem_type Type of the passed memory range, see
+ * @ref RTDM_MEMTYPE_xxx
  * @param[in] prot Protection flags for the user's memory range, typically
  * either PROT_READ or PROT_READ|PROT_WRITE
  * 

Re: [Xenomai-core] [PATCH, RFC] Make ioremap'ed memory available through rtdm_mmap_to_user()

2006-09-22 Thread Jan Kiszka
Stelian Pop wrote:
 [Sorry for the delay, I was quite busy those days and unable to test the
 patch...]
 
 On Tuesday 19 September 2006 at 17:34 +0200, Jan Kiszka wrote:
 Or maybe we should lower the API level a little bit, and let the user
 specify the physical address of the mapping instead of the virtual
 one
 How would this help? You still need the virtual address for VMA blocks
 in order to collect the pages.
 
 Actually we need both the virtual address (for the vmalloc case) and the
 physical address (for the ioremap case).
 
 +if (mmap_data->mem_type == RTDM_MEMTYPE_VMALLOC) {
 [...]
 
 This still won't work because for RTDM_MEMTYPE_IOREMAP
 virt_to_phys((void *)vaddr) doesn't return the physical address.
 
 So we still need to either:
   a) walk the vmlist struct to find out the needed vm->phys_addr (which
 probably won't work in 2.4, or even in earlier 2.6 kernels)

Sounds fragile, indeed.

 
   b) pass both the physical and the virtual addresses in the
 rtdm_mmap_to_user prototype (the physical address would be used only
 when RTDM_MEMTYPE_IOREMAP is specified)

Hmm, a bit ugly, this long argument list.

 
   c) make 'src_addr' be a virtual address in some cases, and a physical
 address in the ioremap case.

That's so far the cleanest solution in my eyes.

 
   d) make a special rtdm_mmap_iomem_to_user() function...

Also an option. Specifically, it wouldn't break the existing API... What
about rtdm_iomap_to_user? Would you like to work out a patch in this
direction?

Jan





Re: [Xenomai-core] [PATCH, RFC] Make ioremap'ed memory available through rtdm_mmap_to_user()

2006-09-22 Thread Stelian Pop
On Friday 22 September 2006 at 10:58 +0200, Jan Kiszka wrote:

  d) make a special rtdm_mmap_iomem_to_user() function...
 
 Also an option. Specifically, it wouldn't break the existing API... What
 about rtdm_iomap_to_user? Would you like to work out a patch in this
 direction?

Here it comes.

Index: include/rtdm/rtdm_driver.h
===
--- include/rtdm/rtdm_driver.h  (revision 1652)
+++ include/rtdm/rtdm_driver.h  (working copy)
@@ -1035,10 +1035,16 @@
 }
 
 #ifdef CONFIG_XENO_OPT_PERVASIVE
-int rtdm_mmap_to_user(rtdm_user_info_t *user_info, void *src_addr, size_t len,
+int rtdm_mmap_to_user(rtdm_user_info_t *user_info,
+  void *src_addr, size_t len,
   int prot, void **pptr,
   struct vm_operations_struct *vm_ops,
   void *vm_private_data);
+int rtdm_iomap_to_user(rtdm_user_info_t *user_info,
+   unsigned long src_addr, size_t len,
+   int prot, void **pptr,
+   struct vm_operations_struct *vm_ops,
+   void *vm_private_data);
 int rtdm_munmap(rtdm_user_info_t *user_info, void *ptr, size_t len);
 
 static inline int rtdm_read_user_ok(rtdm_user_info_t *user_info,
Index: ksrc/skins/rtdm/drvlib.c
===
--- ksrc/skins/rtdm/drvlib.c  (revision 1652)
+++ ksrc/skins/rtdm/drvlib.c  (working copy)
@@ -1368,7 +1368,8 @@
 
 #if defined(CONFIG_XENO_OPT_PERVASIVE) || defined(DOXYGEN_CPP)
 struct rtdm_mmap_data {
-void *src_addr;
+void *src_vaddr;
+unsigned long src_paddr;
 struct vm_operations_struct *vm_ops;
 void *vm_private_data;
 };
@@ -1376,16 +1377,22 @@
 static int rtdm_mmap_buffer(struct file *filp, struct vm_area_struct *vma)
 {
struct rtdm_mmap_data *mmap_data = filp->private_data;
-unsigned long vaddr, maddr, size;
+unsigned long vaddr, paddr, maddr, size;

vma->vm_ops = mmap_data->vm_ops;
vma->vm_private_data = mmap_data->vm_private_data;

-vaddr = (unsigned long)mmap_data->src_addr;
+vaddr = (unsigned long)mmap_data->src_vaddr;
+paddr = (unsigned long)mmap_data->src_paddr;
+if (!paddr)
+   /* kmalloc memory */
+   paddr = virt_to_phys((void *)vaddr);
+
maddr = vma->vm_start;
size  = vma->vm_end - vma->vm_start;

#ifdef CONFIG_MMU
+/* Catch only vmalloc memory */
if ((vaddr >= VMALLOC_START) && (vaddr < VMALLOC_END)) {
 unsigned long mapped_size = 0;
 
@@ -1403,8 +1410,7 @@
 return 0;
 } else
 #endif /* CONFIG_MMU */
-return xnarch_remap_io_page_range(vma, maddr,
-  virt_to_phys((void *)vaddr),
+return xnarch_remap_io_page_range(vma, maddr, paddr,
   size, PAGE_SHARED);
 }
 
@@ -1412,12 +1418,51 @@
 .mmap = rtdm_mmap_buffer,
 };
 
+static int __rtdm_do_mmap(rtdm_user_info_t *user_info,
+  struct rtdm_mmap_data *mmap_data,
+  size_t len, int prot, void **pptr)
+{
+struct file *filp;
+const struct file_operations *old_fops;
+void *old_priv_data;
+void *user_ptr;
+
+XENO_ASSERT(RTDM, xnpod_root_p(), return -EPERM;);
+
+filp = filp_open("/dev/zero", O_RDWR, 0);
+if (IS_ERR(filp))
+return PTR_ERR(filp);
+
+old_fops = filp->f_op;
+filp->f_op = &rtdm_mmap_fops;
+
+old_priv_data = filp->private_data;
+filp->private_data = mmap_data;
+
+down_write(&user_info->mm->mmap_sem);
+user_ptr = (void *)do_mmap(filp, (unsigned long)*pptr, len, prot,
+   MAP_SHARED, 0);
+up_write(&user_info->mm->mmap_sem);
+
+filp->f_op = (typeof(filp->f_op))old_fops;
+filp->private_data = old_priv_data;
+
+filp_close(filp, user_info->files);
+
+if (IS_ERR(user_ptr))
+return PTR_ERR(user_ptr);
+
+*pptr = user_ptr;
+return 0;
+}
+
+
 /**
  * Map a kernel memory range into the address space of the user.
  *
  * @param[in] user_info User information pointer as passed to the invoked
  * device operation handler
- * @param[in] src_addr Kernel address to be mapped
+ * @param[in] src_addr Kernel virtual address to be mapped
  * @param[in] len Length of the memory range
  * @param[in] prot Protection flags for the user's memory range, typically
  * either PROT_READ or PROT_READ|PROT_WRITE
@@ -1462,50 +1507,84 @@
  *
  * Rescheduling: possible.
  */
-int rtdm_mmap_to_user(rtdm_user_info_t *user_info, void *src_addr, size_t len,
+int rtdm_mmap_to_user(rtdm_user_info_t *user_info,
+  void *src_addr, size_t len,
   int prot, void **pptr,
   struct vm_operations_struct *vm_ops,
   void *vm_private_data)
 {
-struct rtdm_mmap_data   mmap_data = {src_addr, vm_ops, 
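
For reference, a minimal caller of the new rtdm_iomap_to_user() entry point
could look like the sketch below. The register address and length are
invented; only the prototype follows the patch above:

#include <linux/mman.h>
#include <rtdm/rtdm_driver.h>

/* Hypothetical sketch: expose a physical register block to user space
 * through the new rtdm_iomap_to_user() service. */
static int demo_map_registers(rtdm_user_info_t *user_info, void **uptr)
{
    unsigned long regs_phys = 0x80000000UL; /* invented device address */
    size_t len = 0x1000;

    *uptr = NULL; /* no fixed user address requested */
    return rtdm_iomap_to_user(user_info, regs_phys, len,
                              PROT_READ | PROT_WRITE, uptr,
                              NULL, NULL);
}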

Re: [Xenomai-core] [PATCH, RFC] Make ioremap'ed memory available through rtdm_mmap_to_user()

2006-09-22 Thread Jan Kiszka
Stelian Pop wrote:
 On Friday 22 September 2006 at 10:58 +0200, Jan Kiszka wrote:
 
 d) make a special rtdm_mmap_iomem_to_user() function...
 Also an option. Specifically, it wouldn't break the existing API... What
 about rtdm_iomap_to_user? Would you like to work out a patch in this
 direction?
 
 Here it comes.

Your patch looks very good. Assuming that you have tested it
successfully, I'm going to merge it soon.

Thanks,
Jan





Re: [Xenomai-core] [PATCH, RFC] Make ioremap'ed memory available through rtdm_mmap_to_user()

2006-09-18 Thread Stelian Pop
On Friday 15 September 2006 at 18:40 +0200, Jan Kiszka wrote:

 In case no one comes up with an easy, portable way to detect remapped
 memory as well: What about some flags the caller of rtdm_mmap_to_user
 has to pass, telling what kind of memory it is? Would simplify the RTDM
 part, and the user normally knows quite well where the memory came from.
 And I love to break APIs. :)

This would be perfect. We could even reuse the prot field for that
(PROT_READ | PROT_WRITE | PROT_VMALLOC | PROT_IOREMAP). Not the cleanest
solution, but it won't break the API this way.
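
(Just to illustrate that idea — the flag values below are invented and this
variant was never implemented; the sketch only shows how the existing
prototype would be kept:)

#include <linux/mman.h>
#include <rtdm/rtdm_driver.h>

/* Hypothetical private hints, assumed to sit above the standard PROT_* bits;
 * this variant was only discussed on the list, never adopted. */
#define PROT_VMALLOC  0x01000000
#define PROT_IOREMAP  0x02000000

/* The caller would keep the current rtdm_mmap_to_user() signature and simply
 * OR in the memory-type hint: */
static int demo_map_with_hint(rtdm_user_info_t *user_info,
                              void *io_virt, size_t len, void **uptr)
{
    *uptr = NULL;
    return rtdm_mmap_to_user(user_info, io_virt, len,
                             PROT_READ | PROT_WRITE | PROT_IOREMAP,
                             uptr, NULL, NULL);
}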

Or maybe we should lower the API level a little bit, and let the user
specify the physical address of the mapping instead of the virtual
one

Stelian.
-- 
Stelian Pop [EMAIL PROTECTED]




[Xenomai-core] [PATCH, RFC] Make ioremap'ed memory available through rtdm_mmap_to_user()

2006-09-15 Thread Stelian Pop
Hi,

I need to be able to map an IO memory buffer to userspace from an RTDM
driver.

rtdm_mmap_to_user() seems to do what I need, but it doesn't work. Its
code thinks that all virtual addresses between VMALLOC_START and
VMALLOC_END are obtained through vmalloc() and tries to call
xnarch_remap_vm_page() on them, which fails.

Virtual addresses coming from ioremap() need to go through
xnarch_remap_io_page_range(), and their physical address cannot be
obtained with a simple virt_to_phys().
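
To make the failing case concrete (device address invented): a driver that
ioremap()s a buffer gets a kernel virtual address inside the vmalloc range,
so the current code takes the vmalloc path and fails:

#include <asm/io.h>
#include <linux/mman.h>
#include <rtdm/rtdm_driver.h>

/* Sketch of the failing case. ioremap() also hands out addresses from the
 * [VMALLOC_START, VMALLOC_END) window, so rtdm_mmap_to_user() treats the
 * buffer as vmalloc'ed memory, calls xnarch_remap_vm_page() and fails;
 * virt_to_phys() on such an address is not meaningful either. */
static int demo_broken_io_mapping(rtdm_user_info_t *user_info, void **uptr)
{
    void *io_virt = (void *)ioremap(0x80000000UL, 0x1000); /* invented */

    if (!io_virt)
        return -ENOMEM;

    *uptr = NULL;
    return rtdm_mmap_to_user(user_info, io_virt, 0x1000,
                             PROT_READ | PROT_WRITE, uptr, NULL, NULL);
}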

A working patch is attached below, but there might (should ?) be a
better way to do it. Some of the code may also belong to
asm-generic/system.h instead of the RTDM skin.

Note that you may also need to EXPORT_SYMBOL(vmlist and vmlist_lock) in
mm/vmalloc.c if you want to build the RTDM skin as a module.

Comments?

Stelian.

Index: ksrc/skins/rtdm/drvlib.c
===
--- ksrc/skins/rtdm/drvlib.c  (revision 1624)
+++ ksrc/skins/rtdm/drvlib.c  (working copy)
@@ -1377,6 +1377,7 @@
 {
struct rtdm_mmap_data *mmap_data = filp->private_data;
unsigned long vaddr, maddr, size;
+struct vm_struct *vm;

vma->vm_ops = mmap_data->vm_ops;
vma->vm_private_data = mmap_data->vm_private_data;
@@ -1385,7 +1386,21 @@
maddr = vma->vm_start;
size  = vma->vm_end - vma->vm_start;

+write_lock(&vmlist_lock);
+for (vm = vmlist; vm != NULL; vm = vm->next) {
+   if (vm->addr == (void *)vaddr)
+   break;
+}
+write_unlock(&vmlist_lock);
+
+/* ioremap'ed memory */
+if (vm && vm->flags & VM_IOREMAP)
+return xnarch_remap_io_page_range(vma, maddr,
+ vm->phys_addr,
+  size, PAGE_SHARED);
+else
#ifdef CONFIG_MMU
+/* vmalloc'ed memory */
if ((vaddr >= VMALLOC_START) && (vaddr < VMALLOC_END)) {
 unsigned long mapped_size = 0;
 

-- 
Stelian Pop [EMAIL PROTECTED]




Re: [Xenomai-core] [PATCH, RFC] Make ioremap'ed memory available through rtdm_mmap_to_user()

2006-09-15 Thread Jan Kiszka
Stelian Pop wrote:
 Hi,
 
 I need to be able to map an IO memory buffer to userspace from an RTDM
 driver.
 
 rtdm_mmap_to_user() seems to do what I need, but it doesn't work. Its
 code thinks that all virtual addresses between VMALLOC_START and
 VMALLOC_END are obtained through vmalloc() and tries to call
 xnarch_remap_vm_page() on them, which fails.

Ok, interesting (one never stops learning).

 
 Virtual addresses coming from ioremap() need to go through
 xnarch_remap_io_page_range(), and their physical address cannot be
 obtained with a simple virt_to_phys().
 
 A working patch is attached below, but there might (should ?) be a
 better way to do it. Some of the code may also belong to
 asm-generic/system.h instead of the RTDM skin.
 
 Note that you may also need to EXPORT_SYMBOL(vmlist and vmlist_lock) in
 mm/vmalloc.c if you want to build the RTDM skin as a module.
 
 Comments ?

In case no one comes up with an easy, portable way to detect remapped
memory as well: What about some flags the caller of rtdm_mmap_to_user
has to pass, telling what kind of memory it is? Would simplify the RTDM
part, and the user normally knows quite well where the memory came from.
And I love to break APIs. :)

I'm CC'ing the first users of this service to ask for some feedback from
their POV. Hope gna does not -once again- mangle the CCs.

Jan


