On Wed, 17 May 2000, Marcus Meissner wrote:

> On Wed, May 17, 2000 at 12:17:12AM +0100, Alan Cox wrote:
> > > with this totally device-specific data stream. No client would be able (or
> > > want) to understand device-dependent formats. V4L does not seem to have a
> > > good place to insert such conversion code into. V4L2 does. In the mean
> > > time all drivers produce RGB24 (or GRAY) output - not because they love it
> > > but because they must.
> > 
> > No. The API is intended to dump everything in user space. Doing conversions
> > in kernel space is bad for applications
> 
> Definitely. I am currently converting RGB888 back to YUV and rescaling 640x480
> down to CIF (or 320x240 up) so the video compressor codecs built into 'vic'
> can use it.
> 
> That the ov511 driver is converting it from YUV420 to RGB does not exactly
> increase performance.
> 
> > This is broken. They should be outputting YUV not RGB888 to user space. 
> 
> Yes.

Ok, let's assume I change a driver to output just its raw format
(maybe slightly tweaked to conform to one of the standard formats). Let's
see if this can be done in a logical manner, using V4L as it exists
now.

The IBM camera (model 2) produces packed YUV at sizes <= 176x144 and a
sort-of-RGB format at 352x288. The driver reports (and sets) its current
format via VIDIOCGPICT and VIDIOCSPICT. Only one format is in effect at
any given time, and V4L has no way to enumerate the supported formats.
The V4L client then supplies the desired format with every VIDIOCMCAPTURE
request; however, it can't know which format this driver prefers for the
given frame size (which is also supplied in the request).
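
To make this concrete, here is a minimal sketch of the capture path as a
client sees it (untested; the device path, frame size and palette are
arbitrary examples, not something any real driver promises). The point is
that the palette travels inside every VIDIOCMCAPTURE request, and the only
feedback a client gets about an unsupported choice is a failed ioctl:

/* Sketch of a V4L (one) capture client.  /dev/video0, 352x288 and
 * YUV420P are picked arbitrarily for illustration.                  */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    struct video_picture pict;
    struct video_mbuf mbuf;
    struct video_mmap vm;
    unsigned char *map;
    int frame = 0;

    if (fd < 0)
        return 1;

    /* VIDIOCGPICT reports the *current* palette only - one at a time,
     * with no way to ask what else the driver could produce.         */
    if (ioctl(fd, VIDIOCGPICT, &pict) < 0)
        return 1;
    printf("driver's current palette: %d\n", pict.palette);

    /* Map the capture buffers. */
    if (ioctl(fd, VIDIOCGMBUF, &mbuf) < 0)
        return 1;
    map = mmap(NULL, mbuf.size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED)
        return 1;

    /* The client asks for a size *and* a palette in the same request,
     * without knowing whether the driver prefers YUV or RGB there.   */
    vm.frame  = frame;
    vm.width  = 352;
    vm.height = 288;
    vm.format = VIDEO_PALETTE_YUV420P;   /* a guess, not a negotiation */
    if (ioctl(fd, VIDIOCMCAPTURE, &vm) < 0)
        perror("VIDIOCMCAPTURE");        /* the only feedback is failure */
    else
        ioctl(fd, VIDIOCSYNC, &frame);

    return 0;
}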

This means that the client must magically know that driver XYZ sends
format ABC under one set of circumstances and format DEF under another.
Only then can the client instruct the driver to send the most appropriate
format for the frame (or even re-negotiate picture settings via
VIDIOCGPICT and VIDIOCSPICT - though these ioctls don't deal with the
current frame size at all...)
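
For what it's worth, the only "format enumeration" a client can do today
is trial and error: try each palette it understands and see whether the
driver keeps it. A rough sketch (not taken from any particular client;
the palette list and helper name are made up, and fd is assumed to be an
open V4L device):

#include <sys/ioctl.h>
#include <linux/videodev.h>

static const int wanted[] = {
    VIDEO_PALETTE_YUV420P,
    VIDEO_PALETTE_YUV422,
    VIDEO_PALETTE_RGB24,
    VIDEO_PALETTE_GREY,
};

/* Returns the first palette from 'wanted' that the driver accepts, or
 * -1.  Note this says nothing about what happens at another frame
 * size - these ioctls carry no width/height at all.                  */
int probe_palette(int fd)
{
    struct video_picture pict;
    unsigned int i;

    if (ioctl(fd, VIDIOCGPICT, &pict) < 0)
        return -1;

    for (i = 0; i < sizeof(wanted) / sizeof(wanted[0]); i++) {
        pict.palette = wanted[i];
        if (ioctl(fd, VIDIOCSPICT, &pict) < 0)
            continue;                       /* driver rejected it */
        if (ioctl(fd, VIDIOCGPICT, &pict) == 0 &&
            pict.palette == wanted[i])
            return wanted[i];               /* it stuck */
    }
    return -1;
}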

Since most clients are not written with specific devices in mind, they
don't know that - and they will continue to request frames in the old
format (YUV) even if the camera has switched its mode (to RGB) to
accommodate a larger frame size. To change the desired frame size the
user simply resizes the V4L client window (as xawtv does) or clicks a
button (as camstream does). To complicate things even further, some
cameras have a snapshot feature which can be programmed to emit -one
frame- with a completely different format and size each time the button
is pressed.

So the question is: how do we avoid converting YUV into RGB or RGB into
YUV if the camera can produce either of them at will, and the client
doesn't really know which to expect when? The current solution, though
bad and slow, at least gives us a stable output format regardless of
whatever the camera has in its twisted mind. My knowledge of V4L is
probably insufficient to answer this question.

Additionally, how many clients are written "correctly", assuming such a
solution exists? How many clients are capable of handling any of several
supported V4L formats, and what if most of them can deal with only one
or two?

The change in the drivers will be very simple, but I have no idea what
impact it will have on the installed base of clients. I don't mind
deleting the YUV->RGB math, but I am worried that this will render USB
cameras mostly unusable. Opinions?
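
To illustrate what would move out of the drivers: every RGB-only client
would then need conversion code roughly like the following (a hedged,
untested sketch of planar YUV 4:2:0 to RGB24 using the usual integer
ITU-R 601 coefficients; byte order, clamping and rounding are a matter
of taste):

static inline unsigned char clamp(int v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : v);
}

/* src: planar Y, then U (Cb), then V (Cr) at quarter resolution each.
 * dst: width*height*3 bytes of R,G,B.                                 */
void yuv420p_to_rgb24(const unsigned char *src, unsigned char *dst,
                      int width, int height)
{
    const unsigned char *y = src;
    const unsigned char *u = src + width * height;
    const unsigned char *v = u + (width / 2) * (height / 2);
    int row, col;

    for (row = 0; row < height; row++) {
        for (col = 0; col < width; col++) {
            int Y = y[row * width + col] - 16;
            int U = u[(row / 2) * (width / 2) + col / 2] - 128;
            int V = v[(row / 2) * (width / 2) + col / 2] - 128;

            *dst++ = clamp((298 * Y + 409 * V + 128) >> 8);           /* R */
            *dst++ = clamp((298 * Y - 100 * U - 208 * V + 128) >> 8); /* G */
            *dst++ = clamp((298 * Y + 516 * U + 128) >> 8);           /* B */
        }
    }
}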

Thanks,
Dmitri

