Re: [Xenomai-core] rt-video interface

2006-03-27 Thread Rodrigo Rosenfeld Rosas
On Sunday, 26 March 2006 06:49, Jan Kiszka wrote:

>...
>Maybe deriving a subset from the full V4L2 API is the way to go. But
>let's wait and see whether you discover other interface designs.

Actually, my priorities have changed again... I'll need to finish (start, actually) 
an application using the camera in a hard real-time context in order to write 
another article for RTSS (http://www.rtss.org), which will take place in Brazil 
this year. I hope to see some of you here if my article is accepted. Then I 
will get back to my interface design research...

>...
>> This method also requires poll and select to be implemented in V4L2. We
>> should discuss how to deal with it if we stick with the V4L2 variant idea.
>
>Hmm, what file descriptors have to be monitored in parallel so that
>poll/select is required?

I didn't really understand why poll/select should be required, but the author 
says it is too important to be optional... We should ask him why ;) In any case, 
there are more efficient ways in RTDM to monitor a buffer's state and wait for 
events. I don't think we should use poll/select anyway...

>...
>> Which vision applications do you have in mind?
>
>So far "only" a subset of your scenario: one of my colleagues needs to
>synchronise frame timestamps with the timestamps of other input, e.g. from
>range sensors. The actual processing is not (yet?) hard RT, but the
>input synchronisation is essential.

I think the timestamp provided by the interface is enough for that, don't 
you?

Best Regards,

Rodrigo.








___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] rt-video interface

2006-03-26 Thread Jan Kiszka
Rodrigo Rosenfeld Rosas wrote:
> On Monday, 20 March 2006 21:24, Jan Kiszka wrote:
>> ...
>> Does your time allow to list the minimal generic services a RTDM video
>> capturing driver has to provide in a similar fashion like the serial or
>> the CAN profile? If it's mostly about copying existing Linux API specs,
>> feel free to just reference them. But the differences should certainly
>> fill up a RT-video (or so) profile, and that would be great!
> 
> If we're going to try to stay as close as possible to the V4L2 API draft, the 
> minimal generic services are already a lot to implement.
> 
> Actually, I don't know if implementing a V4L2 variant would be a good idea... 
> Maybe there are designs better suited to real-time applications. I need time 
> to investigate it more. My advisor asked me to. It's becoming really hard to 
> finish my master's thesis by June 15... :(

Maybe deriving a subset from the full V4L2 API is the way to go. But
let's wait and see whether you discover other interface designs.

> 
> Basically, I would need a clearer picture of the most common use cases in real 
> situations... For the moment, here is my design (from the user's point of view 
> for one kind of application): I'm using images to estimate the speed of a robot 
> and to identify objects. I need the timestamps of both images, and all 
> processing must be deterministic. I process the images and do the 
> calculations.
> 
> To avoid copying, I'm using the mmap facility. I process the image in the 
> same memory region when possible, to avoid allocating more memory. With an 
> NTSC camera we get fields at 60Hz, 30 odd and 30 even per second. That means 
> a full frame (odd+even) takes about 33ms to complete. It is common, though, 
> to use only half a frame in processing, not only because it is faster to 
> acquire (17ms) but also because it is faster to process, and that is 
> acceptable in most cases. So, instead of 640x480 frames we would process 
> 640x240 fields. If the aspect ratio matters, the user can do something like:
> 
> /* take every other column, so the 640x240 field keeps the 4:3
>    proportion: 320x240 effective samples */
> for (w1 = 0, w = 0; w1 < 640; w1 += 2, w++)
>   for (h = 0; h < 240; h++)
>     process_pixel(w, h);
> 
> If all the processing can be done in 17ms, we could process the odd field 
> while acquiring the even one and vice versa. Otherwise, we could keep a buffer 
> with the latest acquisitions so that we wouldn't need to wait until a frame is 
> completed.
> 
>  In summary, my control loop would be something like:
> 
> task_acquire
> {
>   new_image=acquire_image();
>   speed=get_speed(new_image,old_image);
>   old_image = new_image;
> }
> 
> task_do_pid_control 
> {
>   drive_motors(speed, desired_speed);
> }
> 
> Well, that is a use case, and I can get this behaviour with the V4L2 API, 
> although I don't know whether it is the best-suited API. Let me introduce the 
> V4L2 API, and then (in other messages) we can discuss other approaches.
> 
> There are several interfaces available in V4L2: capture, overlay and output. 
> I'll discuss only capture here, which I think is the most relevant for 
> rt-applications.
> 
> There are four I/O modes: read/write, streaming I/O (memory mapping or user 
> pointer) and asynchronous I/O.
> 
> The read/write mode is the simplest but also the least efficient, since it 
> copies the buffer content to the user. It works as one would expect. The 
> V4L2 API requires poll and select to be implemented for it, but we could 
> adapt them to a simpler and more efficient mechanism.
> 
> The user-pointer approach makes no sense for PCI framegrabbers on x86, since 
> these boards need a physically contiguous memory region for DMA. This 
> method consists of the user allocating the memory in userspace and passing 
> the pointers to the driver.
> 
> Asynchronous I/O is not defined yet, so there are really three, not four, 
> I/O modes.
> 
> The third mode is the most useful for real-time applications and is what I'm 
> currently using: streaming with memory mapping.
> See http://www.linuxtv.org/downloads/video4linux/API/V4L2_API/spec/x3303.htm
> 
> In short, the user requests a number of buffers with the VIDIOC_REQBUFS 
> ioctl, and the driver allocates or reserves them and returns the number of 
> buffers actually allocated.
> 
> There is another ioctl (VIDIOC_QUERYBUF) for querying each buffer. Along with 
> other information, the user gets a memory offset to be used in an mmap call. 
> The buffers are then queued into an input queue with the VIDIOC_QBUF ioctl. 
> When the VIDIOC_STREAMON ioctl is called, the board begins capturing into the 
> queued buffers in FIFO order, and as acquisitions complete the buffers are 
> moved to an output queue, also in FIFO order. The user dequeues a buffer from 
> the output queue with the VIDIOC_DQBUF ioctl. When the user has finished with 
> that buffer's contents, (s)he enqueues it again. To stop processing, a 
> VIDIOC_STREAMOFF ioctl call is made, which cleans all buffers besides 
> stopping the capture.

Indeed, this sounds the most applicable to hard RT.

> 
> This 

Re: [Xenomai-core] rt-video interface

2006-03-21 Thread Rodrigo Rosenfeld Rosas
On Monday, 20 March 2006 21:24, Jan Kiszka wrote:
>...
>Does your time allow to list the minimal generic services a RTDM video
>capturing driver has to provide in a similar fashion like the serial or
>the CAN profile? If it's mostly about copying existing Linux API specs,
>feel free to just reference them. But the differences should certainly
>fill up a RT-video (or so) profile, and that would be great!

If we're going to try to stay as close as possible to the V4L2 API draft, the 
minimal generic services are already a lot to implement.

Actually, I don't know if implementing a V4L2 variant would be a good idea... 
Maybe there are designs better suited to real-time applications. I need time 
to investigate it more. My advisor asked me to. It's becoming really hard to 
finish my master's thesis by June 15... :(

Basically, I would need a clearer picture of the most common use cases in real 
situations... For the moment, here is my design (from the user's point of view 
for one kind of application): I'm using images to estimate the speed of a robot 
and to identify objects. I need the timestamps of both images, and all 
processing must be deterministic. I process the images and do the 
calculations.

To avoid copying, I'm using the mmap facility. I process the image in the 
same memory region when possible, to avoid allocating more memory. With an 
NTSC camera we get fields at 60Hz, 30 odd and 30 even per second. That means 
a full frame (odd+even) takes about 33ms to complete. It is common, though, 
to use only half a frame in processing, not only because it is faster to 
acquire (17ms) but also because it is faster to process, and that is 
acceptable in most cases. So, instead of 640x480 frames we would process 
640x240 fields. If the aspect ratio matters, the user can do something like:

/* take every other column, so the 640x240 field keeps the 4:3
   proportion: 320x240 effective samples */
for (w1 = 0, w = 0; w1 < 640; w1 += 2, w++)
  for (h = 0; h < 240; h++)
    process_pixel(w, h);

If all the processing can be done in 17ms, we could process the odd field 
while acquiring the even one and vice versa. Otherwise, we could keep a buffer 
with the latest acquisitions so that we wouldn't need to wait until a frame is 
completed.

In summary, my control loop would be something like:

task_acquire
{
  new_image = acquire_image();
  speed = get_speed(new_image, old_image);
  old_image = new_image;
}

task_do_pid_control
{
  drive_motors(speed, desired_speed);
}

Well, that is a use case, and I can get this behaviour with the V4L2 API, 
although I don't know whether it is the best-suited API. Let me introduce the 
V4L2 API, and then (in other messages) we can discuss other approaches.

There are several interfaces available in V4L2: capture, overlay and output. 
I'll discuss only capture here, which I think is the most relevant for 
rt-applications.

There are four I/O modes: read/write, streaming I/O (memory mapping or user 
pointer) and asynchronous I/O.

The read/write mode is the simplest but also the least efficient, since it 
copies the buffer content to the user. It works as one would expect. The 
V4L2 API requires poll and select to be implemented for it, but we could 
adapt them to a simpler and more efficient mechanism.

The user-pointer approach makes no sense for PCI framegrabbers on x86, since 
these boards need a physically contiguous memory region for DMA. This 
method consists of the user allocating the memory in userspace and passing 
the pointers to the driver.

Asynchronous I/O is not defined yet, so there are really three, not four, 
I/O modes.

The third mode is the most useful for real-time applications and is what I'm 
currently using: streaming with memory mapping.
See http://www.linuxtv.org/downloads/video4linux/API/V4L2_API/spec/x3303.htm

In short, the user requests a number of buffers with the VIDIOC_REQBUFS 
ioctl, and the driver allocates or reserves them and returns the number of 
buffers actually allocated.

There is another ioctl (VIDIOC_QUERYBUF) for querying each buffer. Along with 
other information, the user gets a memory offset to be used in an mmap call. 
The buffers are then queued into an input queue with the VIDIOC_QBUF ioctl. 
When the VIDIOC_STREAMON ioctl is called, the board begins capturing into the 
queued buffers in FIFO order, and as acquisitions complete the buffers are 
moved to an output queue, also in FIFO order. The user dequeues a buffer from 
the output queue with the VIDIOC_DQBUF ioctl. When the user has finished with 
that buffer's contents, (s)he enqueues it again. To stop processing, a 
VIDIOC_STREAMOFF ioctl call is made, which cleans all buffers besides 
stopping the capture.

This method also requires poll and select to be implemented in V4L2. We should 
discuss how to deal with that if we stick with the V4L2-variant idea.

I would like to understand the other possible uses of real-time vision before 
I propose another approach, so that we can discuss what would be best for us.

Besides the API issue, we should also think about the API implementation. I think 
we should

Re: [Xenomai-core] rt-video interface

2006-03-20 Thread Rodrigo Rosenfeld Rosas
On Monday, 20 March 2006 21:24, Jan Kiszka wrote:
>...
>You may want to have a look at this thread regarding poll/select and RT:
>http://www.mail-archive.com/rtnet-users%40lists.sourceforge.net/msg00968.htm

I tried to. Not found. But I didn't give up so quickly: it was just missing 
the final 'l':
http://www.mail-archive.com/rtnet-users%40lists.sourceforge.net/msg00968.html

>Do video capturing applications tend to have to observe multiple
>channels asynchronously via a single thread? If so, my statement about
>how often poll/select is actually required in RT-applications may have
>to be reconsidered.

Actually, I don't see any good reason for using select/poll in rt 
applications. But, while trying to keep the API similar to V4L2, I would 
implement them as IOCTLs and think that is OK, since it was already done for 
MMAP/MUNMAP. I don't think it is worth writing rt-style poll/select 
functions...

What could be discussed here is whether those calls should be required when 
using streaming (most designs will use streaming). I don't think they should 
be required as they are in V4L2, but they could be implemented optionally, as 
IOCTL calls. I would need to investigate this topic more, though. 
I'll do it tomorrow... I'm the last man in the lab and they are calling me 
out so they can close the lab...

>...
>Does your time allow to list the minimal generic services a RTDM video
>capturing driver has to provide in a similar fashion like the serial or
>the CAN profile? If it's mostly about copying existing Linux API specs,
>feel free to just reference them. But the differences should certainly
>fill up a RT-video (or so) profile, and that would be great!

I'll think about it and will answer tomorrow.

Regards,

Rodrigo










Re: [Xenomai-core] rt-video interface

2006-03-20 Thread Jan Kiszka
Rodrigo Rosenfeld Rosas wrote:
> Hi Jan and others interested.
> 
> I've finally got my driver into a usable condition. It still lacks a lot of 
> functionality, but it suits my needs.
> 
> I would like to propose a real-time video interface for using with RTDM.
> 
> To make it simple to port Linux applications to Xenomai, I tried to stay 
> as close as possible to the Video for Linux 2 API. I didn't see any serious 
> problem in the specification regarding its use in real-time environments. So, 
> the changes I think would be necessary are:
> 
> o Change open/fopen to rtdm_dev_open
> o Implement MMAP/MUNMAP as an IOCTL (while it cannot be done in an rt-context 
> for the time being, nor should that be necessary)
> o Implement select and poll as IOCTLs too (I didn't implement them in my 
> driver because I didn't need them, but they would be necessary according to 
> the specs)

You may want to have a look at this thread regarding poll/select and RT:

http://www.mail-archive.com/rtnet-users%40lists.sourceforge.net/msg00968.htm

Do video capturing applications tend to have to observe multiple
channels asynchronously via a single thread? If so, my statement about
how often poll/select is actually required in RT-applications may have
to be reconsidered.

> o Change all timeval structs to uint64_t, or some typedef of it, to make it 
> easier to store the timestamps (we use rtdm_clock_read() instead of 
> gettimeofday())
> 
> I can't remember any other issue right now. I think these changes would be enough.
> 
> Any ideas?

Does your time allow to list the minimal generic services a RTDM video
capturing driver has to provide in a similar fashion like the serial or
the CAN profile? If it's mostly about copying existing Linux API specs,
feel free to just reference them. But the differences should certainly
fill up a RT-video (or so) profile, and that would be great!

Jan





[Xenomai-core] rt-video interface

2006-03-20 Thread Rodrigo Rosenfeld Rosas
Hi Jan and others interested.

I've finally got my driver into a usable condition. It still lacks a lot of 
functionality, but it suits my needs.

I would like to propose a real-time video interface for using with RTDM.

To make it simple to port Linux applications to Xenomai, I tried to stay 
as close as possible to the Video for Linux 2 API. I didn't see any serious 
problem in the specification regarding its use in real-time environments. So, 
the changes I think would be necessary are:

o Change open/fopen to rtdm_dev_open
o Implement MMAP/MUNMAP as an IOCTL (while it cannot be done in an rt-context 
for the time being, nor should that be necessary)
o Implement select and poll as IOCTLs too (I didn't implement them in my 
driver because I didn't need them, but they would be necessary according to 
the specs)
o Change all timeval structs to uint64_t, or some typedef of it, to make it 
easier to store the timestamps (we use rtdm_clock_read() instead of 
gettimeofday())

I can't remember any other issue right now. I think these changes would be enough.

Any ideas?

Rodrigo.



