On Monday 20 March 2006 21:24, Jan Kiszka wrote:
>...
>Does your time allow to list the minimal generic services a RTDM video
>capturing driver has to provide in a similar fashion like the serial or
>the CAN profile? If it's mostly about copying existing Linux API specs,
>feel free to just reference them. But the differences should certainly
>fill up a RT-video (or so) profile, and that would be great!

If we're going to try to stay as close as possible to the V4L2 API draft, the
minimal set of generic services is a lot to implement.

Actually, I don't know whether implementing a V4L2 variant would be a good idea...
Maybe there are designs better suited to real-time applications. I need more
time to investigate this; my advisor asked me to. It's becoming really hard to
finish my master's thesis by June 15... :(

Basically, I would need a clearer picture of the most common use cases in real
situations... For the moment, here is my design (from the user's point of view,
for one kind of application): I'm using images to estimate the speed of a robot
and to identify objects. I need the timestamps of both images, and all
processing must be deterministic. I process the images and do the
calculations.
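
To be concrete, the speed estimate is just the displacement over the time
between the two captures, which is why per-image timestamps from the driver
matter so much. A minimal sketch, assuming a hypothetical get_displacement()
for the vision part and an illustrative image struct (neither is a proposed
API):

#include <stdint.h>

struct image {
        uint8_t  *data;          /* pixel buffer, ideally mmap'ed */
        uint64_t timestamp_ns;   /* capture time stamped by the driver */
};

/* Hypothetical placeholder for the vision part (e.g. feature matching):
   returns the robot displacement in meters between the two images. */
extern double get_displacement(const struct image *prev,
                               const struct image *next);

/* Speed = displacement / time elapsed between the two captures. */
double get_speed(const struct image *next, const struct image *prev)
{
        double dt = (next->timestamp_ns - prev->timestamp_ns) / 1e9;

        return get_displacement(prev, next) / dt;
}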

To avoid copying, I'm using the mmap facility. I process the image in the
same memory region when possible, to avoid allocating more memory. When using
an NTSC camera, we get 60 fields per second: 30 odd and 30 even. This means
that a full frame (odd+even) takes about 33ms to complete. It is common,
though, to process only half a frame, not only because it is faster to
acquire (17ms) but also because it is faster to process, which is acceptable
in most cases. So instead of 640x480 frames we would process 640x240 fields.
If the proportion is important, the user can do something like:

/* A field is 640x240: full horizontal resolution, half the lines.
   Skipping every other column gives 320x240, which restores the
   original 4:3 proportion. w1 walks the source columns, w the
   subsampled ones. */
for (w1 = 0, w = 0; w1 < 640; w1 += 2, w++)
        for (h = 0; h < 240; h++)
                process_pixel(w, h);

If all the processing can be done within 17ms, we could process the odd field
while acquiring the even one and vice versa. Otherwise, we could keep a buffer
with the latest acquisitions so that we wouldn't need to wait until a frame is
completed.

In summary, my control loop would be something like:

task_acquire
{
        /* Blocks until the next image is ready. */
        new_image = acquire_image();

        /* Speed estimate from the displacement between the two most
           recent images and their timestamps. */
        speed = get_speed(new_image, old_image);
        old_image = new_image;
}

task_do_pid_control
{
        /* Periodic PID loop driving the motors towards the setpoint. */
        drive_motors(speed, desired_speed);
}

Well, that is one use case, and I can get this behaviour with the V4L2 API,
although I don't know whether it is the best-suited API. Let me introduce the
V4L2 API and then (in other messages) we can discuss other approaches.

There are several interfaces available in V4L2: capture, overlay and output.
I'll discuss only capture here, which I think is the most relevant one for
RT applications.

There are four I/O modes: read/write, streaming I/O (memory mapping or user
pointer) and asynchronous I/O.

The read/write mode is the simplest but also the least efficient, since it
copies the buffer contents to the user. It works as you would expect. The
V4L2 API requires poll and select to be implemented for it, but we could
adapt that to something simpler and more efficient.
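
Just to illustrate, grabbing one frame in this mode boils down to something
like the following (plain Linux calls; an RTDM variant would presumably go
through rt_dev_open()/rt_dev_read() instead, and the device name and frame
size here are only examples):

#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>

#define FRAME_SIZE (640 * 480 * 2)      /* e.g. 16 bits per pixel (YUYV) */

static uint8_t frame[FRAME_SIZE];

int grab_one_frame(const char *dev)
{
        int fd = open(dev, O_RDONLY);
        ssize_t n;

        if (fd < 0)
                return -1;

        /* Each read() delivers one frame, copying it into the caller's
           buffer - simple, but one extra copy per frame. */
        n = read(fd, frame, sizeof(frame));

        close(fd);
        return n < 0 ? -1 : 0;
}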

The user pointer approach makes no sense for PCI framegrabbers on x86, since
these boards need a physically contiguous memory region for DMA. This method
consists of the user allocating the memory in userspace and passing the
pointers to the driver.
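
For completeness, this is roughly how that variant is declared in V4L2: the
user requests V4L2_MEMORY_USERPTR buffers and later passes his own pointers
in each VIDIOC_QBUF (sketch only, error handling omitted):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Tell the driver that the application supplies the buffers itself. */
int setup_userptr(int fd)
{
        struct v4l2_requestbuffers req;

        memset(&req, 0, sizeof(req));
        req.count  = 4;
        req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_USERPTR;

        return ioctl(fd, VIDIOC_REQBUFS, &req);
}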

Asynchronous I/O is not defined yet, so there are really three, not four, I/O
modes.

The third mode is the most useful for real-time applications and is what I'm
currently using: streaming via memory mapping.
See http://www.linuxtv.org/downloads/video4linux/API/V4L2_API/spec/x3303.htm

In short, the user requests a number of buffers with the VIDIOC_REQBUFS
ioctl, and the driver allocates or reserves them and returns the number of
buffers actually allocated.

There is another ioctl (VIDIOC_QUERYBUF) for querying each buffer. Along with
other information, the user gets a memory offset to be used in an mmap call.
The buffers are then queued onto an input queue with the VIDIOC_QBUF ioctl.
When the VIDIOC_STREAMON ioctl is called, the board begins capturing into the
buffers in the FIFO order of the input queue, and as each acquisition
completes the buffer is moved to an output queue, also in FIFO order. The
user dequeues a buffer from the output queue with the VIDIOC_DQBUF ioctl and,
once finished with its contents, enqueues it again. To stop processing, a
VIDIOC_STREAMOFF ioctl call is made, which flushes all the buffers besides
stopping the capture.
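
To make the sequence above concrete, here is a condensed sketch of the whole
cycle using the standard V4L2 calls (error handling mostly omitted, 100
frames processed just as an example; an RTDM profile would presumably map
these onto its own ioctl handler):

#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

#define NBUFS 4

int stream(int fd)
{
        struct v4l2_requestbuffers req;
        struct v4l2_buffer b;
        struct { void *start; size_t length; } bufs[NBUFS];
        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        unsigned int i, frames;

        /* 1. Request NBUFS buffers; the driver may grant fewer. */
        memset(&req, 0, sizeof(req));
        req.count  = NBUFS;
        req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
                return -1;

        /* 2. Query each buffer, mmap it, and queue it on the input queue. */
        for (i = 0; i < req.count; i++) {
                memset(&b, 0, sizeof(b));
                b.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                b.memory = V4L2_MEMORY_MMAP;
                b.index  = i;
                ioctl(fd, VIDIOC_QUERYBUF, &b);
                bufs[i].length = b.length;
                bufs[i].start  = mmap(NULL, b.length, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, b.m.offset);
                ioctl(fd, VIDIOC_QBUF, &b);
        }

        /* 3. Start capturing: the board fills the queued buffers in FIFO
           order, with no copies involved. */
        ioctl(fd, VIDIOC_STREAMON, &type);

        for (frames = 0; frames < 100; frames++) {
                /* 4. Dequeue the oldest filled buffer (blocks until one is
                   ready), process it in place, then requeue it. */
                memset(&b, 0, sizeof(b));
                b.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                b.memory = V4L2_MEMORY_MMAP;
                ioctl(fd, VIDIOC_DQBUF, &b);
                /* ... process bufs[b.index].start here ... */
                ioctl(fd, VIDIOC_QBUF, &b);
        }

        /* 5. Stop capturing and flush both queues. */
        ioctl(fd, VIDIOC_STREAMOFF, &type);
        return 0;
}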

This method also requires poll and select to be implemented in V4L2. We should 
discuss how to deal with it if we stick with the V4L2 variant idea.

I would like to understand other possible uses of real-time vision before
proposing another approach, so that we can discuss what would be better for
us.

Besides the API issue, we should also think about the API implementation. I
think we should create a skeleton common to all drivers, to facilitate the
driver-building process.

Well, too much for a first message, I think... :)

Which vision applications do you have in mind?

Rodrigo.




