> > enum v4l2_buf_type {
> >     V4L2_BUF_TYPE_VIDEO_CAPTURE  = 1,
> >     V4L2_BUF_TYPE_VIDEO_OUTPUT   = 2,
> >     V4L2_BUF_TYPE_VIDEO_OVERLAY  = 3,
> >     V4L2_BUF_TYPE_VBI_CAPTURE    = 4,
> >     V4L2_BUF_TYPE_VBI_OUTPUT     = 5,
> >     V4L2_BUF_TYPE_PRIVATE        = 0x80,
> > };
>  
>  In the old approach there was a separate VIDEO_CODECIN/OUT.  I kind of
>  understand why it was removed, but that makes some other parts a bit
>  harder.

Does someone actually use the codecin/out + effect?  It isn't a big
issue to re-add these things later, but I don't want to have something
in the API which isn't used ...

> > struct v4l2_fmtdesc
> > {
> >     __u32               index;             /* Format number      */
> >     enum v4l2_buf_type  type;              /* buffer type        */
> >     __u8                description[32];   /* Description string */
> >     __u32               pixelformat;       /* Format fourcc      */
> >     __u32               reserved[4];
> > };
>  
>  We see the type here, and the 'flag' field is removed too.  Now, if I
>  understand correctly, we need to define separate picture formats for
>  input and output.  That's fine, though I actually expected a flag
>  instead of a type field for this (i.e., flag the thing as input-only,
>  input|output, etc.).  Now let's get back to the compressed/codec
>  thing.  For gstreamer, we actually used this field to define the mime
>  type of the stream: if flag&compressed, mimetype =
>  sprintf("video/%4.4s", fourcc); else mimetype="video/raw", type=fourcc.
>  I think this was particularly useful.

The rationale for removing the compressed flag was that
  (1) applications don't try to use formats they don't know, and
  (2) if you recognize the pixelformat you know whether it is a
      compressed format or not.
Thus it is redundant information.

gstreamer accepts _every_ format a v4l2 device provides, even if it
doesn't know what this is?  How does it handle the frames then?

>  The other solution (which I like too, but you probably don't ;-) ) is to
>  add CODECIN/OUT as v4l2_buf_type, like it was in the old v4l2.

Codec in/out was meant for a device which accepts frames from the
application and returns them compressed to the application, as I
understood it.  That is something different from a grabber card doing
compression in real time.

> > #define V4L2_CAP_TUNER              0x00010000  /* Has a tuner */
>  
>  I'd like to see V4L2_CAP_AUDIO be added here, is that possible?

Sounds reasonable.  As with TUNER it would be somewhat redundant, but
likely useful as a quick check whether a device does audio too (usb
cams often don't).

> > #if 0
> > /* ### generic compression settings don't work, there is too much
> >  * ### codec-specific stuff.  Maybe reuse that for MPEG codec settings
> >  * ### later ... */
> > struct v4l2_compression
>  
>  Just rename it to v4l2_mpegcompression?

mpeg has some more options than v4l2_compression currently has.  I want
to leave this #ifdef'ed out until we have something reasonable there.

> > struct v4l2_window
> > {
> >     struct v4l2_rect        w;
> >     __u32                   chromakey;
> >     struct v4l2_clip        *clips;
> >     __u32                   clipcount;
> >     void                    *bitmap;
> > };
>  
>  This is not right, w can't have unsigned x/y parameters, they *need* to
>  be signed because the window can be offscreen:

Good point, changed top/left in v4l2_rect to __s32.

> > struct v4l2_outputparm
> > {
> >     __u32              capability;   /*  Supported modes */
> >     __u32              outputmode;   /*  Current mode */
> >     struct v4l2_fract  timeperframe; /*  Time per frame in seconds */
> >     __u32              extendedmode; /*  Driver-specific extensions */
> >     __u32              writebuffers; /*  # of buffers for write */
> >     __u32              reserved[4];
> > };
>  
>  I must have missed something, what's this for?

video output (V4L2_CAP_VIDEO_OUTPUT).  Was also in the old header file.

> > struct v4l2_crop {
> >     enum v4l2_buf_type      type;
> >     struct v4l2_rect        c;
> > };
>  
>  I'm seeing some problems for cropped capture.  For cropping, you
>  basically first need to set the capture size/format, since cropping
>  depends on the format being used.  But after that, if the application
>  sets cropping rectangles, the size changes.  Is that how it's supposed
>  to work (and is the application then supposed to re-request the size
>  via G_FMT)?

Yes, Michael posted a long text about this (and probably also puts this
into the new spec docbook).

> > struct v4l2_enumstd
> > {
> >     __u32                   index;
> >     struct v4l2_standard    std;
> >     __u32                   inputset;  /* set of inputs that */
> >                                        /* support this standard */
> >     __u32                   outputset; /* set of outputs that */
> >                                        /* support this standard */
> >     __u32                   reserved[2];
> > };
>  
>  inputset/outputset is just the same as inputs/outputs, or are they a
>  bitfield where (1<<x) means that input/output x supports this std?

They are a bitfield.  "inputs" was meant as bitfield too, and I renamed
it to make this clear.

  Gerd

-- 
You can't please everybody.  And usually if you _try_ to please
everybody, the end result is one big mess.
                                -- Linus Torvalds, 2002-04-20



_______________________________________________
Video4linux-list mailing list
[EMAIL PROTECTED]
https://listman.redhat.com/mailman/listinfo/video4linux-list