Re: [RFC] Resolution change support in video codecs in v4l2

2012-01-11 Thread 'Sakari Ailus'
Hi Kamil,

On Wed, Jan 04, 2012 at 11:19:08AM +0100, Kamil Debski wrote:
...
   This takes care of the delay related problems by requiring more buffers.
   You have an initial delay, then the frames are returned at a constant
   rate.
  
   Dequeuing of any frame will be delayed until it is no longer used - it
   doesn't matter whether it is a key (I) frame, a P frame or a B frame.
   Actually B frames shouldn't be used as reference. Usually a frame
   references only the 2-3 previous frames and maybe 1 ahead (for B-frames),
   and they don't need to be I-frames. Still, the interval between I-frames
   may be 16 or even many, many more.
  
  Considering it can be 16 or even more, I see even more reason to return
  frames when the hardware only reads them.
 
 It can be 31337 P-frames after an I-frame but it doesn't matter, as the codec
 will never ever need more than X frames for reference. Usually X is small,
 like 2-3. I have never seen a number as high as 16. After these X frames are
 processed and kept, frames can be dequeued with no additional delay.
 This is a CONSTANT delay.

It's constant, and you need up to that number more frames available for
decoding. There's no way around it.

 P-frames are just as good for reference as I-frames. There is no need to
 keep the I-frame for an indefinite time.
 
 In other words: interval between I-frames is NOT the number of buffers that
 have to be kept as reference.

Referring to what Wikipedia has to say about H.264:

Using previously-encoded pictures as references in a much more
flexible way than in past standards, allowing up to 16 reference
frames (or 32 reference fields, in the case of interlaced encoding)
to be used in some cases. This is in contrast to prior standards,
where the limit was typically one; or, in the case of conventional
B pictures, two. This particular feature usually allows modest
improvements in bit rate and quality in most scenes. But in certain
types of scenes, such as those with repetitive motion or
back-and-forth scene cuts or uncovered background areas, it allows a
significant reduction in bit rate while maintaining clarity.

URL:http://en.wikipedia.org/wiki/H.264/MPEG-4_AVC

We need to be prepared for the worst case, which, according to this, is 16.
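To put that worst case into numbers, here is a back-of-envelope estimate (my
own illustration, not from the thread; it assumes an NV12 decoded format and
ignores driver-specific padding and alignment, so it is a lower bound):

#include <stdio.h>

/* Rough memory cost of keeping the H.264 worst case of 16 reference
 * frames resident at 1080p, assuming NV12 (4:2:0, 12 bits per pixel)
 * and no driver padding. */
int main(void)
{
        const unsigned long width = 1920, height = 1088; /* macroblock-aligned 1080p */
        const unsigned long refs = 16;                   /* worst case per the standard */
        unsigned long frame_bytes = width * height * 3 / 2;

        printf("one decoded frame:   %.1f MiB\n", frame_bytes / (1024.0 * 1024.0));
        printf("16 reference frames: %.1f MiB\n", refs * frame_bytes / (1024.0 * 1024.0));
        return 0;
}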

Regards,

-- 
Sakari Ailus
e-mail: sakari.ai...@iki.fi jabber/XMPP/Gmail: sai...@retiisi.org.uk


RE: [RFC] Resolution change support in video codecs in v4l2

2012-01-04 Thread Kamil Debski
Hi Sakari,

 From: 'Sakari Ailus' [mailto:sakari.ai...@iki.fi]
 Sent: 01 January 2012 23:29
 Subject: Re: [RFC] Resolution change support in video codecs in v4l2
 
 Hi Kamil,
 
 Apologies for my late reply.
 
 On Mon, Dec 12, 2011 at 11:17:06AM +0100, Kamil Debski wrote:
   -Original Message-
   From: 'Sakari Ailus' [mailto:sakari.ai...@iki.fi]
   Sent: 09 December 2011 20:55
   To: Kamil Debski
   Cc: 'Mauro Carvalho Chehab'; linux-media@vger.kernel.org; 'Laurent
 Pinchart';
   'Sebastian Dröge'; Sylwester Nawrocki; Marek Szyprowski
   Subject: Re: [RFC] Resolution change support in video codecs in v4l2
  
   Hi Kamil,
  
   On Tue, Dec 06, 2011 at 04:03:33PM +0100, Kamil Debski wrote:
   ...
   The user space still wants to be able to show these buffers, so a new
   flag would likely be required --- V4L2_BUF_FLAG_READ_ONLY, for example.
  
   Huh? Assuming a capture device, when kernel makes a buffer available to
   userspace, kernel should not touch on it anymore (not even for read -
   although reading from it probably won't cause any issues, as video
   applications in general don't write into those buffers). The opposite is
   true for output devices: once userspace fills it, and queues, it should
   not touch that buffer again.
  
   This is part of the queue/dequeue logic. I can't see any need for an
   extra flag to explicitly say that.

  There is a reason to do so. An example of this is below. The
  memory-to-memory device has two queues, output and capture. A video
  decoder memory-to-memory device's output queue handles compressed video
  and the capture queue provides the application decoded frames.
 
  Certain frames in the stream are key frames, meaning that the decoding of
  the following non-key frames requires access to the key frame. The number
  of non-key frames can be relatively large, say 16, depending on the codec.
 
  If the user should wait for all the frames to be decoded before the key
  frame can be shown, then either the key frame is to be skipped or delayed.
  Both of the options are highly undesirable.
   
 I don't think that such a delay is worrisome. This is only initial delay.
 The hw will process these N buffers and after that it works exactly the
 same as it would without the delay in terms of processing time.
   
    Well, yes, but consider that the decoder also processes key frames when
    the decoding is in progress. The dequeueing of the key frames (and any
    further frames as long as the key frame is needed by the decoder) will
    be delayed until the key frame is no longer required.
   
    You need extra buffers to cope with such a situation, and in the worst
    case, or when the decoder is just as fast as you want to show the frames
    on the display, you need double the amount of buffers compared to what
    you'd really need for decoding. To make matters worse, this tends to
    happen at largest resolutions.
   
    I think we'd like to avoid this.
 
  I really, really don’t see why you say that we would need double the
  number of buffers?
 
  Let's suppose that the stream may reference 2 previous frames.
 
  Frame number: 123456789ABCDEF
  Returned frame: 123456789ABCDEF
  Buffers returned:   123123123123... (in case we have only 3 buffers)
 
  See? After we decode frame number 3 we can return frame number 3. Thus we
  need a minimum of 3 buffers. If we want to have 4 for simultaneous use by
  the application, we allocate 7.
 
  The current codec handling system has been built on the following
  assumptions:
  - the buffers should be dequeued in order
  - the buffers should only be dequeued when they are no longer in use
 
 What does in use mean to you? Both read and write, or just read?

In use means both read and write in this context.

 Assume frame 1 is required to decode frames 2 and 3.
 
 If we delay dequeueing of the first of the above three frames since the codec
 accesses it for reading, we will also delay dequeueing of any subsequent
 frames until the first frame is decoded. If this is repeated, and assuming
 the speed of the decoder is the same as playback of those frames, the player
 will require a minimum of six frames to cope with the uneven time interval
 the decoder will be able to give those frames to the player. Otherwise, only
 three frames would be enough.

Why six frames?

Why the uneven time interval the decoder will be able to give those frames to
the player?

If you look again here

  Frame number: 123456789ABCDEF
  Returned frame: 123456789ABCDEF
  Buffers returned:   123123123123... (in case we have only 3 buffers)

You can see that with 3 buffers you get a constant delay of 2 frames. In most
cases (and I am just dropping the cases when you feed the codec with
compressed slices and not whole frames) you queue one source stream frame and
you get one decoded frame. Simple as that.
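In V4L2 terms, the steady state described above is roughly the loop below (my
own sketch against the multi-planar API; decode_one and its parameters are
made up for illustration, and buffer setup, mmap and error handling are left
out):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Steady state of an m2m decoder: queue one buffer holding a complete
 * compressed frame on the OUTPUT queue, dequeue one decoded frame from
 * the CAPTURE queue. After the initial constant delay the two rates
 * match. Re-queueing of the dequeued CAPTURE buffer is omitted. */
static int decode_one(int fd, unsigned int out_index, unsigned int bytesused)
{
        struct v4l2_buffer buf;
        struct v4l2_plane planes[VIDEO_MAX_PLANES];

        memset(&buf, 0, sizeof(buf));
        memset(planes, 0, sizeof(planes));
        buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = out_index;
        buf.m.planes = planes;
        buf.length = 1;                    /* single-plane compressed format */
        planes[0].bytesused = bytesused;   /* size of the compressed frame */
        if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
                return -1;

        memset(&buf, 0, sizeof(buf));
        memset(planes, 0, sizeof(planes));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.m.planes = planes;
        buf.length = VIDEO_MAX_PLANES;
        if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
                return -1;

        return buf.index;  /* which CAPTURE buffer now holds a decoded frame */
}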

 
  This takes care of the delay related

Re: [RFC] Resolution change support in video codecs in v4l2

2012-01-01 Thread 'Sakari Ailus'
Hi Kamil,

Apologies for my late reply.

On Mon, Dec 12, 2011 at 11:17:06AM +0100, Kamil Debski wrote:
  -Original Message-
  From: 'Sakari Ailus' [mailto:sakari.ai...@iki.fi]
  Sent: 09 December 2011 20:55
  To: Kamil Debski
  Cc: 'Mauro Carvalho Chehab'; linux-media@vger.kernel.org; 'Laurent 
  Pinchart';
  'Sebastian Dröge'; Sylwester Nawrocki; Marek Szyprowski
  Subject: Re: [RFC] Resolution change support in video codecs in v4l2
  
  Hi Kamil,
  
  On Tue, Dec 06, 2011 at 04:03:33PM +0100, Kamil Debski wrote:
  ...
  The user space still wants to be able to show these buffers, so a new
  flag would likely be required --- V4L2_BUF_FLAG_READ_ONLY, for example.
 
  Huh? Assuming a capture device, when kernel makes a buffer available to
  userspace, kernel should not touch on it anymore (not even for read -
  although reading from it probably won't cause any issues, as video
  applications in general don't write into those buffers). The opposite is
  true for output devices: once userspace fills it, and queues, it should
  not touch that buffer again.
 
  This is part of the queue/dequeue logic. I can't see any need for an
  extra flag to explicitly say that.
   
There is a reason to do so. An example of this is below. The
memory-to-memory device has two queues, output and capture. A video decoder
memory-to-memory device's output queue handles compressed video and the
capture queue provides the application decoded frames.
   
Certain frames in the stream are key frames, meaning that the decoding of
the following non-key frames requires access to the key frame. The number
of non-key frames can be relatively large, say 16, depending on the codec.
   
If the user should wait for all the frames to be decoded before the key
frame can be shown, then either the key frame is to be skipped or delayed.
Both of the options are highly undesirable.
  
   I don't think that such a delay is worrisome. This is only initial delay.
   The hw will process these N buffers and after that it works exactly the
  same
   as it would without the delay in terms of processing time.
  
  Well, yes, but consider that the decoder also processes key frames when the
  decoding is in progress. The dequeueing of the key frames (and any further
  frames as long as the key frame is needed by the decoder) will be delayed
  until the key frame is no longer required.
  
  You need extra buffers to cope with such a situation, and in the worst case,
  or when the decoder is just as fast as you want to show the frames on the
  display, you need double the amount of buffers compared to what you'd really
  need for decoding. To make matters worse, this tends to happen at largest
  resolutions.
  
  I think we'd like to avoid this.
 
 I really, really, don’t see why you say that we would need double the number 
 of
 buffers?
 
 Let's suppose that the stream may reference 2 previous frames.
 
 Frame number: 123456789ABCDEF
 Returned frame: 123456789ABCDEF
 Buffers returned:   123123123123... (in case we have only 3 buffers)
 
 See? After we decode frame number 3 we can return frame number 3. Thus we need
 a minimum of 3 buffers. If we want to have 4 for simultaneous use by the
 application, we allocate 7.
 
 The current codec handling system has been built on the following assumptions:
 - the buffers should be dequeued in order
 - the buffers should only be dequeued when they are no longer in use

What does in use mean to you? Both read and write, or just read?

Assume frame 1 is required to decode frames 2 and 3.

If we delay dequeueing of the first of the above three frames since the codec
accesses it for reading, we will also delay dequeueing of any subsequent
frames until the first frame is decoded. If this is repeated, and assuming
the speed of the decoder is the same as playback of those frames, the player
will require a minimum of six frames to cope with the uneven time interval
the decoder will be able to give those frames to the player. Otherwise, only
three frames would be enough.

 This takes care of the delay related problems by requiring more buffers.
 You have an initial delay then the frames are returned with a constant rate.
 
 Dequeuing of any frame will be delayed until it is no longer used - it doesn't
 matter whether it is a key (I) frame, a P frame or a B frame. Actually B frames
 shouldn't be used as reference. Usually a frame references only the 2-3
 previous frames and maybe 1 ahead (for B-frames), and they don't need to be
 I-frames. Still, the interval between I-frames may be 16 or even many, many
 more.

Considering it can be 16 or even more, I see even more reason to return
frames when the hardware only reads them.

I'm not against making it configurable for the user; keeping the traditional
behaviour could be beneficial as well if the user wishes to further process
the frames in-place.

...

 Anyway I

RE: [RFC] Resolution change support in video codecs in v4l2

2011-12-12 Thread Kamil Debski
 -Original Message-
 From: 'Sakari Ailus' [mailto:sakari.ai...@iki.fi]
 Sent: 09 December 2011 20:55
 To: Kamil Debski
 Cc: 'Mauro Carvalho Chehab'; linux-media@vger.kernel.org; 'Laurent Pinchart';
 'Sebastian Dröge'; Sylwester Nawrocki; Marek Szyprowski
 Subject: Re: [RFC] Resolution change support in video codecs in v4l2
 
 Hi Kamil,
 
 On Tue, Dec 06, 2011 at 04:03:33PM +0100, Kamil Debski wrote:
 ...
The user space still wants to be able to show these buffers, so a new
flag would likely be required --- V4L2_BUF_FLAG_READ_ONLY, for example.
   
Huh? Assuming a capture device, when kernel makes a buffer available to
userspace, kernel should not touch on it anymore (not even for read -
although reading from it probably won't cause any issues, as video
applications in general don't write into those buffers). The opposite is
true for output devices: once userspace fills it, and queues, it should
not touch that buffer again.
   
This is part of the queue/dequeue logic. I can't see any need for an
extra flag to explicitly say that.
  
   There is a reason to do so. An example of this is below. The
   memory-to-memory device has two queues, output and capture. A video
   decoder memory-to-memory device's output queue handles compressed video
   and the capture queue provides the application decoded frames.
  
   Certain frames in the stream are key frames, meaning that the decoding of
   the following non-key frames requires access to the key frame. The number
   of non-key frames can be relatively large, say 16, depending on the codec.
  
   If the user should wait for all the frames to be decoded before the key
   frame can be shown, then either the key frame is to be skipped or delayed.
   Both of the options are highly undesirable.
 
  I don't think that such a delay is worrisome. This is only initial delay.
  The hw will process these N buffers and after that it works exactly the
 same
  as it would without the delay in terms of processing time.
 
 Well, yes, but consider that the decoder also processes key frames when the
 decoding is in progress. The dequeueing of the key frames (and any further
 frames as long as the key frame is needed by the decoder) will be delayed
 until the key frame is no longer required.
 
 You need extra buffers to cope with such a situation, and in the worst case,
 or when the decoder is just as fast as you want to show the frames on the
 display, you need double the amount of buffers compared to what you'd really
 need for decoding. To make matters worse, this tends to happen at largest
 resolutions.
 
 I think we'd like to avoid this.

I really, really, don’t see why you say that we would need double the number of
buffers?

Let's suppose that the stream may reference 2 previous frames.

Frame number: 123456789ABCDEF
Returned frame: 123456789ABCDEF
Buffers returned:   123123123123... (in case we have only 3 buffers)

See? After we decode frame number 3 we can return frame number 3. Thus we need
a minimum of 3 buffers. If we want to have 4 for simultaneous use by the
application, we allocate 7.

The current codec handling system has been built on the following assumptions:
- the buffers should be dequeued in order
- the buffers should only be dequeued when they are no longer in use

This takes care of the delay related problems by requiring more buffers.
You have an initial delay, then the frames are returned at a constant rate.
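As a concrete illustration of the "hardware needs K, allocate K + L" idea (a
sketch of mine, not from the thread; the driver is free to adjust the count
it actually allocates, so the returned value must be checked):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Ask for enough CAPTURE buffers to cover both the codec's reference
 * needs (K) and the buffers the application wants to hold on to (L).
 * The driver may adjust reqbuf.count, so always check the result. */
static int request_capture_buffers(int fd, unsigned int k, unsigned int l)
{
        struct v4l2_requestbuffers reqbuf;

        memset(&reqbuf, 0, sizeof(reqbuf));
        reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        reqbuf.memory = V4L2_MEMORY_MMAP;
        reqbuf.count = k + l;   /* e.g. 3 for references + 4 for the app = 7 */

        if (ioctl(fd, VIDIOC_REQBUFS, &reqbuf) < 0)
                return -1;

        return reqbuf.count;    /* what the driver actually gave us */
}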

Dequeuing of any frame will be delayed until it is no longer used - it doesn't
matter whether it is a key (I) frame, a P frame or a B frame. Actually B frames
shouldn't be used as reference. Usually a frame references only the 2-3
previous frames and maybe 1 ahead (for B-frames), and they don't need to be
I-frames. Still, the interval between I-frames may be 16 or even many, many
more.

In your other email you have mentioned acceleration. I can agree with you that
it makes the process faster than decoding the same compressed stream many times.
I have never seen any implementation that would process the same compressed
stream more than once. Thus I would not say it's purely for acceleration. This
is the way it is done - you keep older decompressed frames for reference.
Reprocessing the compressed stream would be too computationally demanding, I
suppose.

Anyway, I can definitely recommend the book H.264 and MPEG-4 Video Compression:
Video Coding for Next-generation Multimedia by Iain E. G. Richardson. It is a
good book about video coding and modern codecs with many things explained. It
would help you find your way around codecs and could answer many of your
questions.
http://www.amazon.com/H-264-MPEG-4-Video-Compression-Generation/dp/0470848375

Best wishes,
--
Kamil Debski
Linux Platform Group
Samsung Poland RD Center


Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-12 Thread Laurent Pinchart
Hi Kamil,

On Tuesday 06 December 2011 16:03:33 Kamil Debski wrote:
 On 06 December 2011 15:36 Sakari Ailus wrote:
  On Fri, Dec 02, 2011 at 02:50:17PM -0200, Mauro Carvalho Chehab wrote:
   On 02-12-2011 11:57, Sakari Ailus wrote:
Some codecs need to be able to access buffers which have already been
decoded to decode more buffers. Key frames, simply.
   
   Ok, but let's not add unneeded things at the API if you're not sure. If
   we have such need for a given hardware, then add it. Otherwise, keep it
   simple.
 
  This is not so much dependent on hardware but on the standards which the
  codecs implement.
  
The user space still wants to be able to show these buffers, so a new
flag would likely be required --- V4L2_BUF_FLAG_READ_ONLY, for
example.
   
   Huh? Assuming a capture device, when kernel makes a buffer available to
   userspace, kernel should not touch on it anymore (not even for read -
   although reading from it probably won't cause any issues, as video
   applications in general don't write into those buffers). The opposite is
   true for output devices: once userspace fills it, and queues, it should
   not touch that buffer again.
   
   This is part of the queue/dequeue logic. I can't see any need for an
   extra flag to explicitly say that.
  
  There is a reason to do so. An example of this is below. The
  memory-to-memory device has two queues, output and capture. A video
  decoder memory-to-memory device's output queue handles compressed video
  and the capture queue provides the application decoded frames.
  
  Certain frames in the stream are key frames, meaning that the decoding of
  the following non-key frames requires access to the key frame. The number
  of non-key frames can be relatively large, say 16, depending on the
  codec.
  
  If the user should wait for all the frames to be decoded before the key
  frame can be shown, then either the key frame is to be skipped or
  delayed. Both of the options are highly undesirable.
 
 I don't think that such a delay is worrisome. This is only initial delay.
 The hw will process these N buffers and after that it works exactly the
 same as it would without the delay in terms of processing time.

For offline video decoding (such as playing a movie for instance) that's 
probably not a big issue. For online video decoding (video conferencing) where 
you want to minimize latency it can be.

  Alternatively one could allocate the double number of buffers required.
  At 1080p and 16 buffers this could be roughly 66 MB. Additionally,
  starting the playback is delayed for the duration for the decoding of
  those frames. I think we should not force users to do so.
 
 I really don't think it is necessary to allocate twice as many buffers.
 Assuming that hw needs K buffers you may alloc N (= K + L) and the
 application may use all these L buffers at a time.

-- 
Regards,

Laurent Pinchart


RE: [RFC] Resolution change support in video codecs in v4l2

2011-12-12 Thread Kamil Debski
Hi,

 From: Laurent Pinchart [mailto:laurent.pinch...@ideasonboard.com]
 Sent: 12 December 2011 11:59
 
 Hi Kamil,
 
 On Tuesday 06 December 2011 16:03:33 Kamil Debski wrote:
  On 06 December 2011 15:36 Sakari Ailus wrote:
   On Fri, Dec 02, 2011 at 02:50:17PM -0200, Mauro Carvalho Chehab wrote:
On 02-12-2011 11:57, Sakari Ailus wrote:
 Some codecs need to be able to access buffers which have already
 been
 decoded to decode more buffers. Key frames, simply.
   
Ok, but let's not add unneeded things at the API if you're not sure.
 If
we have such need for a given hardware, then add it. Otherwise, keep
 it
simple.
  
   This is not so much dependent on hardware but on the standards which the
   codecs implement.
  
 The user space still wants to be able to show these buffers, so a
 new
 flag would likely be required --- V4L2_BUF_FLAG_READ_ONLY, for
 example.
   
Huh? Assuming a capture device, when kernel makes a buffer available
 to
userspace, kernel should not touch on it anymore (not even for read -
although reading from it probably won't cause any issues, as video
applications in general don't write into those buffers). The opposite
 is
true for output devices: once userspace fills it, and queues, it
 should
not touch that buffer again.
   
This is part of the queue/dequeue logic. I can't see any need for an
extra flag to explicitly say that.
  
   There is a reason to do so. An example of this is below. The
   memory-to-memory device has two queues, output and capture. A video
   decoder memory-to-memory device's output queue handles compressed video
   and the capture queue provides the application decoded frames.
  
   Certain frames in the stream are key frames, meaning that the decoding
 of
   the following non-key frames requires access to the key frame. The
 number
   of non-key frames can be relatively large, say 16, depending on the
   codec.
  
   If the user should wait for all the frames to be decoded before the key
   frame can be shown, then either the key frame is to be skipped or
   delayed. Both of the options are highly undesirable.
 
  I don't think that such a delay is worrisome. This is only initial delay.
  The hw will process these N buffers and after that it works exactly the
  same as it would without the delay in terms of processing time.
 
 For offline video decoding (such as playing a movie for instance) that's
 probably not a big issue. For online video decoding (video conferencing)
 where
 you want to minimize latency it can be.

In this use case it would be good to set up the encoder to use as few
reference frames as possible. The fewer reference frames are used, the shorter
the delay. The stream used for video conferencing should definitely be
different from the one used in DVD/Blu-ray.

Also, you can set the display delay to 0; then you will get the frames as soon
as possible, but it's up to the application to display them in the right order
and to make sure that they are not modified.
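For reference, a minimal sketch of turning the display delay off via V4L2
controls. The control IDs shown are the Samsung MFC ones from later mainline
kernels and serve only as an example; other decoders expose this differently,
if at all:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <linux/v4l2-controls.h>

/* Ask the decoder to hand out frames as soon as they are decoded
 * (display delay 0). The application then owns reordering and must
 * not modify buffers the codec may still use for reference. */
static int set_zero_display_delay(int fd)
{
        struct v4l2_control ctrl;

        memset(&ctrl, 0, sizeof(ctrl));
        ctrl.id = V4L2_CID_MPEG_MFC51_VIDEO_DECODER_H264_DISPLAY_DELAY_ENABLE;
        ctrl.value = 1;
        if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
                return -1;

        memset(&ctrl, 0, sizeof(ctrl));
        ctrl.id = V4L2_CID_MPEG_MFC51_VIDEO_DECODER_H264_DISPLAY_DELAY;
        ctrl.value = 0;
        return ioctl(fd, VIDIOC_S_CTRL, &ctrl);
}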
 
   Alternatively one could allocate the double number of buffers required.
   At 1080p and 16 buffers this could be roughly 66 MB. Additionally,
   starting the playback is delayed for the duration for the decoding of
   those frames. I think we should not force users to do so.
 
  I really don't think it is necessary to allocate twice as many buffers.
  Assuming that hw needs K buffers you may alloc N (= K + L) and the
  application may use all these L buffers at a time.
 

Best wishes,
--
Kamil Debski
Linux Platform Group
Samsung Poland RD Center



Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-12 Thread Laurent Pinchart
Hi Mauro,

On Tuesday 06 December 2011 16:40:11 Mauro Carvalho Chehab wrote:
 On 06-12-2011 13:19, Kamil Debski wrote:
  On 06 December 2011 15:42 Mauro Carvalho Chehab wrote:
  On 06-12-2011 12:28, 'Sakari Ailus' wrote:
  On Tue, Dec 06, 2011 at 01:00:59PM +0100, Laurent Pinchart wrote:
  ...
  
  2) new requirement is for a bigger buffer. DMA transfers need to
  be stopped before actually writing inside the buffer (otherwise,
  memory will be corrupted).
  
  In this case, all queued buffers should be marked with an error
  flag. So, both V4L2_BUF_FLAG_FORMATCHANGED and
  V4L2_BUF_FLAG_ERROR should raise. The new format should be
  available via G_FMT.
  
  I'd like to reword this as follows:
  
  1. In all cases, the application needs to be informed that the format
  has changed.
 
  V4L2_BUF_FLAG_FORMATCHANGED (or a similar flag) is all we need. G_FMT
  will report the new format.
  
  2. In all cases, the application must have the option of reallocating
  buffers if it wishes.
  
  In order to support this, the driver needs to wait until the
  application acknowledged the format change before it starts decoding
  the stream. Otherwise, if the codec started decoding the new stream
  to the existing buffers by itself, applications wouldn't have the
  option of freeing the existing buffers and allocating smaller ones.
  
  STREAMOFF/STREAMON is one way of acknowledging the format change. I'm
  not opposed to other ways of doing that, but I think we need an
  acknowledgment API to tell the driver to proceed.
  
  Forcing STREAMOFF/STREAMON has two major advantages:
  
  1) The application will have an ability to free and reallocate buffers
  if it wishes so, and
  
  2) It will get explicit information on the changed format. Alternative
  would require an additional API to query the format of buffers in cases
  the information isn't implicitly available.
  
  As already said, a simple flag may give this meaning. Alternatively (or
  complementarily), an event may be generated containing the new format.
  
  If we do not require STREAMOFF/STREAMON, the stream would have to be
  paused until the application chooses to continue it after dealing with
  its buffers and formats.
  
  No. STREAMOFF is always used to stop the stream. We can't make it mean
  otherwise.
  
  So, after calling it, the application should assume that frames will be
  lost while the DMA engine is not running.

For live capture devices, sure, but we're talking about memory-to-memory here.
If the hardware is stopped, compressed buffers on the OUTPUT queue won't be
decoded anymore, and userspace won't be able to queue more buffers to the
driver. There will be no frame loss on the kernel side.

  Do you mean all buffers or just those that are queued in hardware?
 
 Of course the ones queued.
 
  What has been processed stays processed, it should not matter to the
  buffers that have been processed.
 
 Sure.
 
  The compressed buffer that is queued in the driver and that caused the
  resolution change is on the OUTPUT queue.
 
 Not necessarily. If the buffer is smaller than the size needed for the
 resolution change, what is there is trash, as it could be a partially
 filled buffer or an empty buffer, depending on whether the driver detected
 the format change after or before it started filling it.

We're talking about video decoding. On the OUTPUT queue the driver receives
compressed buffers. At some point the compressed buffers contain information
that notifies the driver and/or hardware of a resolution change in the
compressed stream. At that point the buffers on the CAPTURE queue might not be
suitable anymore, but there's no issue with buffers on the OUTPUT queue. If
STREAMOFF is then called on the CAPTURE queue only, buffers already queued on
the OUTPUT queue will stay queued.
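In code, stopping only the CAPTURE side of an m2m decoder is a per-queue
operation (a sketch; the OUTPUT queue and its buffers are untouched):

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Stop only the CAPTURE queue of an m2m decoder, e.g. to reallocate
 * decoded-frame buffers after a resolution change. Buffers queued on
 * the OUTPUT (compressed) queue remain queued; streaming on that
 * queue is not affected. */
static int stop_capture_queue(int fd)
{
        int type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;

        return ioctl(fd, VIDIOC_STREAMOFF, &type);
}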

  STREMOFF is only done on the CAPTURE queue, so it stays queued and
  information is retained.
  
   From CAPTURE all processed buffers have already been dequeued, so yes
   the content of the buffers queued in hw is lost. But this is ok, because
   after the resolution change the previous frames are not used in
   prediction.
 
 No. According with the spec:
 
   The VIDIOC_STREAMON and VIDIOC_STREAMOFF ioctl start and stop the 
 capture
 or output process during streaming (memory mapping or user pointer) I/O.
 
   Specifically the capture hardware is disabled and no input buffers are
 filled (if there are any empty buffers in the incoming queue) until
 VIDIOC_STREAMON has been called. Accordingly the output hardware is
 disabled, no video signal is produced until VIDIOC_STREAMON has been
 called. The ioctl will succeed only when at least one output buffer is in
 the incoming queue.
 
   The VIDIOC_STREAMOFF ioctl, apart of aborting or finishing any DMA in
 progress, unlocks any user pointer buffers locked in physical memory, and
 it removes all buffers from the incoming and outgoing queues. That means
 all images captured but not dequeued yet will 

Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-10 Thread 'Sakari Ailus'
Hi Mauro,

On Tue, Dec 06, 2011 at 02:39:41PM -0200, Mauro Carvalho Chehab wrote:
...
 I think that still it should contain no useful data, just *_FORMAT_CHANGED | 
 *_ERROR
 flags set. Then the application could decide whether it keeps the current
 size/alignment/... or should it allocate new buffers. Then ACK the driver.
 
 This will cause frame losses on Capture devices. It probably doesn't make 
 sense to
 define resolution change support like this for output devices.
 
 Eventually, we may have an extra flag: *_PAUSE. If *_PAUSE is detected, a 
 VIDEO_DECODER_CMD
 is needed to continue.
 
 So, on M2M devices, the 3 flags are raised and the buffer is not filled.  
 This would cover
 Sakari's case.

This sounds good in my opinion. I've been concentrating on memory-to-memory
devices so far, but I now reckon the data to be processed might not arrive
from the system memory.

I agree we need different behaviour in the two cases: when the data arrives
from the system memory, no loss of decoded data should happen due to
reconfiguration of the device done by the user --- which sometimes is
mandatory.

Would pause, as you propose it, be set by the driver, or by the application
with the intent to indicate that the stream should be stopped whenever the
format changes, or both?
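Either way, from user space the flow being discussed would look roughly like
the sketch below. How the pause is signalled is exactly what is being debated
here, and the decoder-command ioctl is shown with the VIDIOC_DECODER_CMD names
it has in later kernels; treat all of it as an illustration only:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Handle the situation discussed here: the driver has signalled (by
 * whatever mechanism is finally chosen -- the proposed buffer flags or
 * an event) that the coded format changed and decoding is paused until
 * user space acknowledges it. */
static int ack_format_change(int fd)
{
        struct v4l2_format fmt;
        struct v4l2_decoder_cmd cmd;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0)  /* query the new format */
                return -1;

        /* ... decide here whether the current CAPTURE buffers still fit;
         * if not, STREAMOFF + REQBUFS + STREAMON as discussed elsewhere
         * in this thread ... */

        memset(&cmd, 0, sizeof(cmd));
        cmd.cmd = V4L2_DEC_CMD_START;           /* resume decoding */
        return ioctl(fd, VIDIOC_DECODER_CMD, &cmd);
}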

 The thing is that we have two queues in memory-to-memory devices.
 I think the above does apply to the CAPTURE queue:
 - no processing is done after STREAMOFF
 - buffers that have been queued are dequeued and their content is lost
 Am I wrong?
 
 This is what is there at the spec. I think we need to properly specify what
 happens for M2M devices.

I fully agree. Different device profiles have a role in this.

Cheers,

-- 
Sakari Ailus
e-mail: sakari.ai...@iki.fi jabber/XMPP/Gmail: sai...@retiisi.org.uk


Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-09 Thread 'Sakari Ailus'
Hi Kamil,

On Tue, Dec 06, 2011 at 04:03:33PM +0100, Kamil Debski wrote:
...
   The user space still wants to be able to show these buffers, so a new
   flag would likely be required --- V4L2_BUF_FLAG_READ_ONLY, for example.
  
   Huh? Assuming a capture device, when kernel makes a buffer available to
   userspace, kernel should not touch on it anymore (not even for read -
   although reading from it probably won't cause any issues, as video
   applications in general don't write into those buffers). The opposite is
   true for output devices: once userspace fills it, and queues, it should
   not touch that buffer again.
  
   This is part of the queue/dequeue logic. I can't see any need for an extra
   flag to explicitly say that.
  
  There is a reason to do so. An example of this is below. The
  memory-to-memory device has two queues, output and capture. A video decoder
  memory-to-memory device's output queue handles compressed video and the
  capture queue provides the application decoded frames.
  
  Certain frames in the stream are key frames, meaning that the decoding of
  the following non-key frames requires access to the key frame. The number of
  non-key frames can be relatively large, say 16, depending on the codec.
  
  If the user should wait for all the frames to be decoded before the key
  frame can be shown, then either the key frame is to be skipped or delayed.
  Both of the options are highly undesirable.
 
 I don't think that such a delay is worrisome. This is only initial delay.
 The hw will process these N buffers and after that it works exactly the same
 as it would without the delay in terms of processing time.

Well, yes, but consider that the decoder also processes key frames when the
decoding is in progress. The dequeueing of the key frames (and any further
frames as long as the key frame is needed by the decoder) will be delayed
until the key frame is no longer required.

You need extra buffers to cope with such a situation, and in the worst case,
or when the decoder is just as fast as you want to show the frames on the
display, you need double the amount of buffers compared to what you'd really
need for decoding. To make matters worse, this tends to happen at largest
resolutions.

I think we'd like to avoid this.

-- 
Sakari Ailus
e-mail: sakari.ai...@iki.fi jabber/XMPP/Gmail: sai...@retiisi.org.uk


Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-09 Thread 'Sakari Ailus'
On Wed, Dec 07, 2011 at 12:12:08PM +0100, Kamil Debski wrote:
 
  From: Sakari Ailus [mailto:sakari.ai...@iki.fi]
  Sent: 06 December 2011 23:42
  
 
 [...]
 
  
   That's a good point. It's more related to changes in stream properties ---
   the frame rate of the stream could change, too. That might be when you
   would like to have more buffers in the queue. I don't think this is
   critical either.
   
   This could also depend on the properties of the codec. Again, I'd wish a
   comment from someone who knows codecs well. Some codecs need to be able to
   access buffers which have already been decoded to decode more buffers.
   Key frames, simply.
   
   Ok, but let's not add unneeded things at the API if you're not sure. If
  we have
   such need for a given hardware, then add it. Otherwise, keep it simple.
   
   This is not so much dependent on hardware but on the standards which the
    codecs implement.
  
   Could you please elaborate it? On what scenario this is needed?
  
  This is a property of the stream, not a property of the decoder. To
  reconstruct each frame, a part of the stream is required and already decoded
   frames may be used to accelerate the decoding. What those parts are depends
  on the codec, not a particular implementation.
 
 They are not used to accelerate decoding. They are used to predict what
 should be displayed. If that frame is missing or modified it will cause
 corruption in consecutive frames.
 
 I want to make it clear - they are necessary, not optional to accelerate
 decoding speed.

I think we're talking about the same thing. They are being used to
accelerate it --- instead of reconstructing the previous frame from the
compressed stream, the codec just reuses it. In practice this is always done
and the implementations probably require it.

-- 
Sakari Ailus
e-mail: sakari.ai...@iki.fi jabber/XMPP/Gmail: sai...@retiisi.org.uk


RE: [RFC] Resolution change support in video codecs in v4l2

2011-12-07 Thread Kamil Debski

 From: Sakari Ailus [mailto:sakari.ai...@iki.fi]
 Sent: 06 December 2011 23:42
 

[...]

 
  That's a good point. It's more related to changes in stream properties ---
  the frame rate of the stream could change, too. That might be when you
  would like to have more buffers in the queue. I don't think this is
  critical either.
  
  This could also depend on the properties of the codec. Again, I'd wish a
  comment from someone who knows codecs well. Some codecs need to be able to
  access buffers which have already been decoded to decode more buffers.
  Key frames, simply.
  
  Ok, but let's not add unneeded things at the API if you're not sure. If
 we have
  such need for a given hardware, then add it. Otherwise, keep it simple.
  
  This is not so much dependent on hardware but on the standards which the
  codecs implement.
 
  Could you please elaborate it? On what scenario this is needed?
 
 This is a property of the stream, not a property of the decoder. To
 reconstruct each frame, a part of the stream is required and already decoded
 frames may be used to accelerate the decoding. What those parts are depends
 on the codec, not a particular implementation.

They are not used to accelerate decoding. They are used to predict what
should be displayed. If that frame is missing or modified it will cause
corruption in consecutive frames.

I want to make it clear - they are necessary, not optional to accelerate
decoding speed.
 
 Anyone with more knowledge of codecs than myself might be able to give a
 concrete example. Sebastian?
 

--
Kamil Debski
Linux Platform Group
Samsung Poland RD Center



RE: [RFC] Resolution change support in video codecs in v4l2

2011-12-07 Thread Kamil Debski
 From: Mauro Carvalho Chehab [mailto:mche...@redhat.com]
 Sent: 06 December 2011 17:40
 
 On 06-12-2011 14:11, Kamil Debski wrote:
  From: Mauro Carvalho Chehab [mailto:mche...@redhat.com]
  Sent: 06 December 2011 16:40
 
  On 06-12-2011 13:19, Kamil Debski wrote:
  From: Mauro Carvalho Chehab [mailto:mche...@redhat.com]
  Sent: 06 December 2011 15:42
 
  On 06-12-2011 12:28, 'Sakari Ailus' wrote:
  Hi all,
 
  On Tue, Dec 06, 2011 at 01:00:59PM +0100, Laurent Pinchart wrote:
  ...
  2) new requirement is for a bigger buffer. DMA transfers need to
  be
  stopped before actually writing inside the buffer (otherwise,
  memory
  will be corrupted).
 
  In this case, all queued buffers should be marked with an error
  flag.
  So, both V4L2_BUF_FLAG_FORMATCHANGED and V4L2_BUF_FLAG_ERROR
  should
  raise. The new format should be available via G_FMT.
 
  I'd like to reword this as follows:
 
  1. In all cases, the application needs to be informed that the format
  has
  changed.
 
  V4L2_BUF_FLAG_FORMATCHANGED (or a similar flag) is all we need. G_FMT
  will
  report the new format.
 
  2. In all cases, the application must have the option of reallocating
  buffers
  if it wishes.
 
  In order to support this, the driver needs to wait until the
  application
  acknowledged the format change before it starts decoding the stream.
  Otherwise, if the codec started decoding the new stream to the
 existing
  buffers by itself, applications wouldn't have the option of freeing
 the
  existing buffers and allocating smaller ones.
 
  STREAMOFF/STREAMON is one way of acknowledging the format change. I'm
  not
  opposed to other ways of doing that, but I think we need an
  acknowledgment API
  to tell the driver to proceed.
 
  Forcing STREAMOFF/STREAMON has two major advantages:
 
  1) The application will have an ability to free and reallocate buffers
  if
  it
  wishes so, and
 
  2) It will get explicit information on the changed format. Alternative
  would
  require an additional API to query the format of buffers in cases the
  information isn't implicitly available.
 
  As already said, a simple flag may give this meaning. Alternatively (or
  complementary,
  an event may be generated, containing the new format).
 
  If we do not require STREAMOFF/STREAMON, the stream would have to be
  paused until the application chooses to continue it after dealing with
  its buffers and formats.
 
  No. STREAMOFF is always used to stop the stream. We can't make it mean
  otherwise.
 
  So, after calling it, application should assume that frames will be
 lost,
  while
  the DMA engine doesn't start again.
 
  Do you mean all buffers or just those that are queued in hardware?
 
  Of course the ones queued.
 
  What has been processed stays processed, it should not matter to the
  buffers
  that have been processed.
 
  Sure.
 
  The compressed buffer that is queued in the driver and that caused the
  resolution
  change is on the OUTPUT queue.
 
  Not necessarily. If the buffer is smaller than the size needed for the
  resolution change, what is there is trash, as it could be a partially
  filled buffer or an empty buffer, depending on whether the driver detected
  the format change after or before it started filling it.
 
  I see the problem. If a bigger buffer is needed it's clear. A buffer with
  no sane data is returned and *_FORMAT_CHANGED | *_ERROR flags are set.
  If the resolution is changed but it fits the current conditions (size +
  number of buffers), then what should be the contents of the returned buffer?
 
 The one returned on G_FMT or at a specific event. Userspace application can
 change it later with S_FMT, if needed.
 
  I think that still it should contain no useful data, just *_FORMAT_CHANGED
 | *_ERROR
  flags set. Then the application could decide whether it keeps the current
  size/alignment/... or should it allocate new buffers. Then ACK the driver.
 
 This will cause frame losses on Capture devices. It probably doesn't make
 sense to
 define resolution change support like this for output devices.
 
 Eventually, we may have an extra flag: *_PAUSE. If *_PAUSE is detected, a
 VIDEO_DECODER_CMD
 is needed to continue.
 
 So, on M2M devices, the 3 flags are raised and the buffer is not filled.
 This would cover
 Sakari's case.
 
  For our (Samsung) hw this is not a problem, we could always use the
 existing buffers
  if it is possible (size). But Sakari had reported that he might need to
 adjust some
  alignment property. Also, having memory constraints, the application might
 choose
  to allocate smaller buffers.
 
 
 
  STREMOFF is only done on the CAPTURE queue, so it
  stays queued and information is retained.
 
From CAPTURE all processed buffers have already been dequeued, so yes
 the
  content of
  the buffers queued in hw is lost. But this is ok, because after the
  resolution change
  the previous frames are not used in prediction.
 
  No. According with the spec:
 
 The VIDIOC_STREAMON 

Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-06 Thread Laurent Pinchart
Hi Kamil,

On Friday 02 December 2011 18:32:33 Kamil Debski wrote:
 Hi,
 
 Thank you for your comments, Mauro!
 
 Laurent there is a question for you below, so it would be great
 if you could spare a minute and have a look.

Sure :-)

 On 02 December 2011 18:08 Mauro Carvalho Chehab wrote:
  On 02-12-2011 13:41, Kamil Debski wrote:
   On 02 December 2011 14:58 Sakari Ailus wrote:
   On Fri, Dec 02, 2011 at 10:35:40AM -0200, Mauro Carvalho Chehab wrote:
   On 02-12-2011 08:31, Kamil Debski wrote:
   Hi,
   
   Yesterday we had a chat about video codecs in V4L2 and how to change
   the interface to accommodate the needs of GStreamer (and possibly
   other media players and applications using video codecs).
   
   The problem that many hardware codecs need a fixed number of pre-
   allocated buffers should be resolved when gstreamer 0.11 will be
   released.
   
   The main issue that we are facing is the resolution change and how
   it should be handled by the driver and the application. The
   resolution change is particularly common in digital TV. It occurs
   when content on a single channel is changed. For example there is the
   transition from a TV series to a commercials block. Other stream
   parameters may also change. The minimum number of buffers required
   for decoding is of particular interest of us. It is connected with
   how many old buffers are used for picture prediction.
   
   When this occurs there are two possibilities: resolution can be
   increased or decreased. In the first case it is very likely that the
   current buffers are too small to fit the decoded frames. In the
   latter there is the choice to use the existing buffers or allocate
   new set of buffer with reduced size.
   Same applies to the number of buffers - it can be decreased or
   increased.
   
   On the OUTPUT queue there is not much to be done. A buffer that
   contains a frame with the new resolution will not be dequeued until
   it is fully processed.
  
   On the CAPTURE queue the application has to be notified about the
   resolution change.  The idea proposed during the chat is to introduce
   a new flag V4L2_BUF_FLAG_WRONGFORMAT.
   
   IMO, a bad name. I would call it as V4L2_BUF_FLAG_FORMATCHANGED.
   
   The alternative is to return a specific error code to the user --- the
   frame would not be decoded in either case. See below.

I agree with Mauro, WRONGFORMAT might not be the best name. That's a detail 
though, whatever the flag name should be, I think we agree that we need a 
flag.

   1) After all the frames with the old resolution are dequeued a
   buffer with the following flags V4L2_BUF_FLAG_ERROR |
   V4L2_BUF_FLAG_WRONGFORMAT is returned.
  
   2) To acknowledge the resolution change the application should
   STREAMOFF, check what has changed and then STREAMON.
   
   I don't think this is needed, if the buffer size is enough to support
   the new format.
   
   Sometimes not, but sometimes there are buffer line alignment
   requirements which must be communicated to the driver using S_FMT. If
   the frames are decoded using a driver-decided format, it might be
   impossible to actually use these decoded frames.
   That's why there's streamoff and streamon.
   
   Also, if memory use is our key consideration then the application might
   want to allocate smaller buffers.
  
  OK, but the API should support both cases.

Sure, both cases should be supported. There are two problems here:

- Buffer size might need to be changed. If the new format has a higher 
resolution than the old one, and the currently allocated buffers are too small 
to handle it, applications need to allocate new buffers. An alternative to 
that is to allocate big enough buffers up-front, which is a perfectly valid 
solution for some use cases, but not for all of them (think about memory 
constrained devices). We thus need to allow applications to allocate new 
buffers or use the existing ones, provided they are big enough. The same 
applies when the resolution is lowered, applications might want to keep using 
the already allocated buffers, or free them and allocate smaller buffers to 
lower memory pressure. Both cases need to be supported.

- Applications need to be notified that the format changed. As an example, 
let's assume we switch from a landscape resolution to a portrait resolution 
with the same size. Buffers don't need to be reallocated, but applications 
will get a pretty weird image if they interpret 600x800 images as 800x600 
images.

   Btw, a few drivers (bttv comes into my mind) properly handles format
   changes.
  
   This were there in order to support a bad behavior found on a few
   V4L1 applications, where the applications were first calling STREAMON
   and then setting the buffer.
  
   The buffers do not have a format, the video device queue has. If the
   format changes during streaming it is impossible to find that out using
   the current API.
   
   If I'm not mistaken, the old vlc V4L1 driver 

Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-06 Thread 'Sakari Ailus'
Hi all,

On Tue, Dec 06, 2011 at 01:00:59PM +0100, Laurent Pinchart wrote:
...
2) new requirement is for a bigger buffer. DMA transfers need to be
stopped before actually writing inside the buffer (otherwise, memory
will be corrupted).
   
In this case, all queued buffers should be marked with an error flag.
So, both V4L2_BUF_FLAG_FORMATCHANGED and V4L2_BUF_FLAG_ERROR should
raise. The new format should be available via G_FMT.
 
 I'd like to reword this as follows:
 
 1. In all cases, the application needs to be informed that the format has 
 changed.
 
 V4L2_BUF_FLAG_FORMATCHANGED (or a similar flag) is all we need. G_FMT will 
 report the new format.
 
 2. In all cases, the application must have the option of reallocating buffers 
 if it wishes.
 
 In order to support this, the driver needs to wait until the application 
 acknowledged the format change before it starts decoding the stream. 
 Otherwise, if the codec started decoding the new stream to the existing 
 buffers by itself, applications wouldn't have the option of freeing the 
 existing buffers and allocating smaller ones.
 
 STREAMOFF/STREAMON is one way of acknowledging the format change. I'm not 
 opposed to other ways of doing that, but I think we need an acknowledgment 
 API 
 to tell the driver to proceed.

Forcing STREAMOFF/STREAMON has two major advantages:

1) The application will have an ability to free and reallocate buffers if it
wishes so, and

2) It will get explicit information on the changed format. Alternative would
require an additional API to query the format of buffers in cases the
information isn't implicitly available.

If we do not require STREAMOFF/STREAMON, the stream would have to be paused
until the application chooses to continue it after dealing with its buffers
and formats.

I'd still return a specific error when the size changes since it's more
explicit that something is not right, rather than just a flag. But if I'm
alone in thinking so I won't insist.
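For completeness, the STREAMOFF/STREAMON acknowledgement path discussed above
would look roughly like this on the CAPTURE queue (a sketch of mine; whether
and how many buffers are requested again depends on what the application
decides after G_FMT, and mmap/queueing of the new buffers is omitted):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Acknowledge a format change by restarting the CAPTURE queue and
 * reallocating its buffers on the way. */
static int restart_capture(int fd, unsigned int count)
{
        int type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        struct v4l2_requestbuffers reqbuf;

        if (ioctl(fd, VIDIOC_STREAMOFF, &type) < 0)
                return -1;

        /* Release the old buffers, then allocate new ones matching the
         * new format reported by G_FMT. */
        memset(&reqbuf, 0, sizeof(reqbuf));
        reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        reqbuf.memory = V4L2_MEMORY_MMAP;
        reqbuf.count = 0;
        if (ioctl(fd, VIDIOC_REQBUFS, &reqbuf) < 0)
                return -1;

        reqbuf.count = count;
        if (ioctl(fd, VIDIOC_REQBUFS, &reqbuf) < 0)
                return -1;

        /* In a real application the new buffers are mmapped and queued
         * here before streaming is resumed. */
        return ioctl(fd, VIDIOC_STREAMON, &type);
}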

Regards,

-- 
Sakari Ailus
e-mail: sakari.ai...@iki.fi jabber/XMPP/Gmail: sai...@retiisi.org.uk


Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-06 Thread Sakari Ailus
Hi Mauro,

On Fri, Dec 02, 2011 at 02:50:17PM -0200, Mauro Carvalho Chehab wrote:
 On 02-12-2011 11:57, Sakari Ailus wrote:
 Hi Mauro,
 
 On Fri, Dec 02, 2011 at 10:35:40AM -0200, Mauro Carvalho Chehab wrote:
 On 02-12-2011 08:31, Kamil Debski wrote:
 Hi,
 
 Yesterday we had a chat about video codecs in V4L2 and how to change the
 interface to accommodate the needs of GStreamer (and possibly other media
 players and applications using video codecs).
 
 The problem that many hardware codecs need a fixed number of pre-allocated
 buffers should be resolved when gstreamer 0.11 will be released.
 
 The main issue that we are facing is the resolution change and how it 
 should be
 handled by the driver and the application. The resolution change is
 particularly common in digital TV. It occurs when content on a single 
 channel
 is changed. For example there is the transition from a TV series to a
 commercials block. Other stream parameters may also change. The minimum 
 number
 of buffers required for decoding is of particular interest of us. It is
 connected with how many old buffers are used for picture prediction.
 
 When this occurs there are two possibilities: resolution can be increased 
 or
 decreased. In the first case it is very likely that the current buffers 
 are too
 small to fit the decoded frames. In the latter there is the choice to use 
 the
 existing buffers or allocate new set of buffer with reduced size. Same 
 applies
 to the number of buffers - it can be decreased or increased.
 
 On the OUTPUT queue there is not much to be done. A buffer that contains a
 frame with the new resolution will not be dequeued until it is fully 
 processed.
 
 On the CAPTURE queue the application has to be notified about the 
 resolution
 change.  The idea proposed during the chat is to introduce a new flag
 V4L2_BUF_FLAG_WRONGFORMAT.
 
 IMO, a bad name. I would call it as V4L2_BUF_FLAG_FORMATCHANGED.
 
 The alternative is to return a specific error code to the user --- the frame
 would not be decoded in either case. See below.
 
 As I said, some drivers work with buffers with a bigger size than the format,
 and do allow setting a smaller format without the need of streamoff.
 
 
 1) After all the frames with the old resolution are dequeued a buffer with 
 the
 following flags V4L2_BUF_FLAG_ERROR | V4L2_BUF_FLAG_WRONGFORMAT is 
 returned.
 2) To acknowledge the resolution change the application should STREAMOFF, 
 check
 what has changed and then STREAMON.
 
 I don't think this is needed, if the buffer size is enough to support the 
 new
 format.
 
 Sometimes not, but sometimes there are buffer line alignment requirements
 which must be communicated to the driver using S_FMT. If the frames are
 decoded using a driver-decided format, it might be impossible to actually
 use these decoded frames.
 
 That's why there's streamoff and streamon.
 
 Don't get me wrong. What I'm saying is that there are valid cases where
 there's no need to streamoff/streamon. What I'm saying is that, when
 there's no need to do it, just don't raise the V4L2_BUF_FLAG_ERROR flag.
 The V4L2_BUF_FLAG_FORMATCHANGED still makes sense.

I try not to. :)

The issue is that it's the user space which knows how it is going to use the
buffers it dequeues from a device. It's not possible for the driver to know
that --- unless explicitly told by the user. This could be a new flag, if we
need differing behaviour as it seems here.

The user may queue the same memory buffers to another device, which I
consider to be a common use case in embedded systems. The line alignment
requirements of the two (or more) devices often are not the same.
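As an illustration of why that matters (my own sketch): when the same buffers
are shared between a decoder and, say, a display or processing device, the
application has to make sure both agree on the line stride before reusing the
memory. Only plane 0 is compared here for brevity:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Check whether the line stride the decoder produces matches what a
 * second device expects, before sharing the same buffers between them. */
static int strides_match(int decoder_fd, int consumer_fd)
{
        struct v4l2_format dec_fmt, con_fmt;

        memset(&dec_fmt, 0, sizeof(dec_fmt));
        dec_fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        if (ioctl(decoder_fd, VIDIOC_G_FMT, &dec_fmt) < 0)
                return -1;

        memset(&con_fmt, 0, sizeof(con_fmt));
        con_fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
        if (ioctl(consumer_fd, VIDIOC_G_FMT, &con_fmt) < 0)
                return -1;

        return dec_fmt.fmt.pix_mp.plane_fmt[0].bytesperline ==
               con_fmt.fmt.pix_mp.plane_fmt[0].bytesperline;
}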

 
 Btw, a few drivers (bttv comes into my mind) properly handles format 
 changes.
 This were there in order to support a bad behavior found on a few V4L1 
 applications,
 where the applications were first calling STREAMON and then setting the 
 buffer.
 
 The buffers do not have a format, the video device queue has. If the format
 changes during streaming it is impossible to find that out using the current
 API.
 
 Yes, but extending it to properly support it is the scope of this RFC. Yet,
 several drivers allow resizing the format (e. g. the resolution of the image)
 without
 streamon/streamoff, via an explicit call to S_FMT.
 
 So, whatever change at the API is done, it should keep supporting format 
 changes
 (in practice, resolution changes) without the need of re-initializing the DMA 
 engine,
 of course when such change won't break the capability for userspace to decode
 the new frames.

Stopping and starting the queue does not involve additional penalty: a
memory-to-memory device processes buffers one at a time, and most of the
hardware stops between the buffers in any case before being started by the
software again. The buffers are not affected either; they stay mapped and
pinned to memory.
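To underline that point with code (a sketch only): as long as REQBUFS is not
called with a count of 0, a stop/start cycle keeps the existing allocation;
the buffers merely have to be queued again.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Stop and restart the CAPTURE queue without touching the buffer
 * allocation: the buffers stay mmapped, they just need to be queued
 * again before streaming is resumed. */
static int restart_without_realloc(int fd, unsigned int num_buffers)
{
        int type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        struct v4l2_buffer buf;
        struct v4l2_plane planes[VIDEO_MAX_PLANES];
        unsigned int i;

        if (ioctl(fd, VIDIOC_STREAMOFF, &type) < 0)
                return -1;

        for (i = 0; i < num_buffers; i++) {
                memset(&buf, 0, sizeof(buf));
                memset(planes, 0, sizeof(planes));
                buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
                buf.memory = V4L2_MEMORY_MMAP;
                buf.index = i;
                buf.m.planes = planes;
                buf.length = VIDEO_MAX_PLANES;
                if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
                        return -1;
        }

        return ioctl(fd, VIDIOC_STREAMON, &type);
}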

 
 If I'm not mistaken, the old vlc V4L1 driver used to do that.
 
 What bttv used to do is to allocate a buffer big 

Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-06 Thread Mauro Carvalho Chehab

On 06-12-2011 12:28, 'Sakari Ailus' wrote:

Hi all,

On Tue, Dec 06, 2011 at 01:00:59PM +0100, Laurent Pinchart wrote:
...

2) new requirement is for a bigger buffer. DMA transfers need to be
stopped before actually writing inside the buffer (otherwise, memory
will be corrupted).

In this case, all queued buffers should be marked with an error flag.
So, both V4L2_BUF_FLAG_FORMATCHANGED and V4L2_BUF_FLAG_ERROR should
raise. The new format should be available via G_FMT.


I'd like to reword this as follows:

1. In all cases, the application needs to be informed that the format has
changed.

V4L2_BUF_FLAG_FORMATCHANGED (or a similar flag) is all we need. G_FMT will
report the new format.

2. In all cases, the application must have the option of reallocating buffers
if it wishes.

In order to support this, the driver needs to wait until the application
acknowledged the format change before it starts decoding the stream.
Otherwise, if the codec started decoding the new stream to the existing
buffers by itself, applications wouldn't have the option of freeing the
existing buffers and allocating smaller ones.

STREAMOFF/STREAMON is one way of acknowledging the format change. I'm not
opposed to other ways of doing that, but I think we need an acknowledgment API
to tell the driver to proceed.
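
To make the acknowledgment sequence concrete, here is a minimal sketch of what
it could look like from the application side (single-planar, MMAP, error
handling trimmed; the format-change flag itself is only a proposal in this
thread, so the caller is assumed to have already seen it on a dequeued
CAPTURE buffer):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Acknowledge a format change reported on the CAPTURE queue. */
static int ack_format_change(int fd)
{
    int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    struct v4l2_format fmt;

    /* 1. Stop the CAPTURE queue: this is the acknowledgment itself. */
    if (ioctl(fd, VIDIOC_STREAMOFF, &type))
        return -1;

    /* 2. Ask the driver for the new format. */
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_G_FMT, &fmt))
        return -1;

    /* 3. Optionally free/reallocate buffers here (REQBUFS/CREATE_BUFS). */

    /* 4. Restart the queue; decoding proceeds with the new format. */
    return ioctl(fd, VIDIOC_STREAMON, &type);
}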


Forcing STREAMOFF/STREAMON has two major advantages:

1) The application will have an ability to free and reallocate buffers if it
wishes so, and

2) It will get explicit information on the changed format. Alternative would
require an additional API to query the format of buffers in cases the
information isn't implicitly available.


As already said, a simple flag may give this meaning. Alternatively (or
complementarily), an event may be generated containing the new format.


If we do not require STREAMOFF/STREAMON, the stream would have to be paused
until the application chooses to continue it after dealing with its buffers
and formats.


No. STREAMOFF is always used to stop the stream. We can't make it mean 
otherwise.

So, after calling it, the application should assume that frames will be lost
while the DMA engine is not running.

For things like MPEG decoders, Hans proposed an ioctl that could be used to
pause and continue the decoding.


I'd still return a specific error when the size changes since it's more
explicit that something is not right, rather than just a flag. But if I'm
alone in thinking so I won't insist.

Regards,





RE: [RFC] Resolution change support in video codecs in v4l2

2011-12-06 Thread Kamil Debski
 From: Sakari Ailus [mailto:sakari.ai...@iki.fi]
 Sent: 06 December 2011 15:36
 
 Hi Mauro,
 
 On Fri, Dec 02, 2011 at 02:50:17PM -0200, Mauro Carvalho Chehab wrote:
  On 02-12-2011 11:57, Sakari Ailus wrote:
  Hi Mauro,
  
  On Fri, Dec 02, 2011 at 10:35:40AM -0200, Mauro Carvalho Chehab wrote:
  On 02-12-2011 08:31, Kamil Debski wrote:
  Hi,
  
  Yesterday we had a chat about video codecs in V4L2 and how to change
 the
  interface to accommodate the needs of GStreamer (and possibly other
 media
  players and applications using video codecs).
  
  The problem that many hardware codecs need a fixed number of pre-
 allocated
  buffers should be resolved when gstreamer 0.11 will be released.
  
  The main issue that we are facing is the resolution change and how it
 should be
  handled by the driver and the application. The resolution change is
  particularly common in digital TV. It occurs when content on a single
 channel
  is changed. For example there is the transition from a TV series to a
  commercials block. Other stream parameters may also change. The minimum
 number
  of buffers required for decoding is of particular interest of us. It is
  connected with how many old buffers are used for picture prediction.
  
  When this occurs there are two possibilities: resolution can be
 increased or
  decreased. In the first case it is very likely that the current buffers
 are too
  small to fit the decoded frames. In the latter there is the choice to
 use the
  existing buffers or allocate new set of buffer with reduced size. Same
 applies
  to the number of buffers - it can be decreased or increased.
  
  On the OUTPUT queue there is not much to be done. A buffer that
 contains a
  frame with the new resolution will not be dequeued until it is fully
 processed.
  
  On the CAPTURE queue the application has to be notified about the
 resolution
  change.  The idea proposed during the chat is to introduce a new flag
  V4L2_BUF_FLAG_WRONGFORMAT.
  
  IMO, a bad name. I would call it as V4L2_BUF_FLAG_FORMATCHANGED.
  
  The alternative is to return a specific error code to the user --- the
 frame
  would not be decoded in either case. See below.
 
  As I said, some drivers work with buffers with a bigger size than the
 format, and
  does allow setting a smaller format without the need of streamoff.
  
  
  1) After all the frames with the old resolution are dequeued a buffer
 with the
  following flags V4L2_BUF_FLAG_ERROR | V4L2_BUF_FLAG_WRONGFORMAT is
 returned.
  2) To acknowledge the resolution change the application should
 STREAMOFF, check
  what has changed and then STREAMON.
  
  I don't think this is needed, if the buffer size is enough to support
 the new
  format.
  
  Sometimes not, but sometimes there are buffer line alignment requirements
  which must be communicated to the driver using S_FMT. If the frames are
  decoded using a driver-decided format, it might be impossible to actually
  use these decoded frames.
  
  That's why there's streamoff and streamon.
 
  Don't get me wrong. What I'm saying is that there are valid cases where
  there's no need to streamoff/streamon. What I'm saying is that, when
  there's no need to do it, just don't raise the V4L2_BUF_FLAG_ERROR flag.
  The V4L2_BUF_FLAG_FORMATCHANGED still makes sense.
 
 I try not to. :)
 
 The issue is that it's the user space which knows how it is going to use the
 buffers it dequeues from a device. It's not possible for the driver know
 that --- unless explicitly told by the user. This could be a new flag, if we
 need differing behaviour as it seems here.
 
 The user may queue the same memory buffers to another device, which I
 consider to be a common use case in embedded systems. The line alignment
 requirements of the two (or more) devices often are not the same.
 
 
  Btw, a few drivers (bttv comes into my mind) properly handles format
 changes.
  This were there in order to support a bad behavior found on a few V4L1
 applications,
  where the applications were first calling STREAMON and then setting the
 buffer.
  
  The buffers do not have a format, the video device queue has. If the
 format
  changes during streaming it is impossible to find that out using the
 current
  API.
 
  Yes, but extending it to proper support it is the scope of this RFC. Yet,
 several
  drivers allow to resize the format (e. g. the resolution of the image)
 without
  streamon/streamoff, via an explicit call to S_FMT.
 
  So, whatever change at the API is done, it should keep supporting format
 changes
  (in practice, resolution changes) without the need of re-initializing the
 DMA engine,
  of course when such change won't break the capability for userspace to
 decode
  the new frames.
 
 Stopping and starting the queue does not involve additional penalty: a
 memory-to-memory device processes buffers one at a time, and most of the
 hardware stops between the buffers in any case before being started by the
 software again. The buffers are not affected either; they stay mapped and
 pinned to memory.

RE: [RFC] Resolution change support in video codecs in v4l2

2011-12-06 Thread Kamil Debski
 From: Mauro Carvalho Chehab [mailto:mche...@redhat.com]
 Sent: 06 December 2011 15:42
 
 On 06-12-2011 12:28, 'Sakari Ailus' wrote:
  Hi all,
 
  On Tue, Dec 06, 2011 at 01:00:59PM +0100, Laurent Pinchart wrote:
  ...
  2) new requirement is for a bigger buffer. DMA transfers need to be
  stopped before actually writing inside the buffer (otherwise, memory
  will be corrupted).
 
  In this case, all queued buffers should be marked with an error flag.
  So, both V4L2_BUF_FLAG_FORMATCHANGED and V4L2_BUF_FLAG_ERROR should
  raise. The new format should be available via G_FMT.
 
  I'd like to reword this as follows:
 
  1. In all cases, the application needs to be informed that the format has
  changed.
 
  V4L2_BUF_FLAG_FORMATCHANGED (or a similar flag) is all we need. G_FMT
 will
  report the new format.
 
  2. In all cases, the application must have the option of reallocating
 buffers
  if it wishes.
 
  In order to support this, the driver needs to wait until the application
  acknowledged the format change before it starts decoding the stream.
  Otherwise, if the codec started decoding the new stream to the existing
  buffers by itself, applications wouldn't have the option of freeing the
  existing buffers and allocating smaller ones.
 
  STREAMOFF/STREAMON is one way of acknowledging the format change. I'm not
  opposed to other ways of doing that, but I think we need an
 acknowledgment API
  to tell the driver to proceed.
 
  Forcing STREAMOFF/STREAMON has two major advantages:
 
  1) The application will have an ability to free and reallocate buffers if
 it
  wishes so, and
 
  2) It will get explicit information on the changed format. Alternative
 would
  require an additional API to query the format of buffers in cases the
  information isn't implicitly available.
 
 As already said, a simple flag may give this meaning. Alternatively (or
 complementary,
 an event may be generated, containing the new format).
 
  If we do not require STREAMOFF/STREAMON, the stream would have to be
 paused
  until the application chooses to continue it after dealing with its
 buffers
  and formats.
 
 No. STREAMOFF is always used to stop the stream. We can't make it mean
 otherwise.
 
 So, after calling it, application should assume that frames will be lost,
 while
 the DMA engine doesn't start again.

Do you mean all buffers or just those that are queued in hardware?
What has been processed stays processed, it should not matter to the buffers
that have been processed.

The compressed buffer that is queued in the driver and that caused the
resolution change is on the OUTPUT queue. STREAMOFF is only done on the CAPTURE
queue, so it stays queued and the information is retained.

From CAPTURE all processed buffers have already been dequeued, so yes, the
content of the buffers still queued in hw is lost. But this is ok, because after
the resolution change the previous frames are not used for prediction.

My initial idea was to acknowledge the resolution change by G_FMT. 
Later in our chat it had evolved into S_FMT. Then it mutated into
STREAMOFF/STREAMON on the CAPTURE queue.
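
A short sketch of that last variant, just to stress that only the CAPTURE side
is stopped and restarted; the OUTPUT (bitstream) queue, and the compressed
buffer that triggered the change, are left alone (multiplanar buffer type as
used by m2m decoders, error handling omitted):

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Acknowledge a resolution change on an m2m decoder: CAPTURE queue only. */
static void ack_on_capture_queue(int fd)
{
    int cap = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;

    ioctl(fd, VIDIOC_STREAMOFF, &cap); /* releases CAPTURE buffers only */
    /* ... G_FMT, optionally reallocate CAPTURE buffers ... */
    ioctl(fd, VIDIOC_STREAMON, &cap);

    /*
     * The OUTPUT queue is untouched: the compressed buffer carrying the
     * new-resolution headers stays queued in the driver.
     */
}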

 
 For things like MPEG decoders, Hans proposed an ioctl, that could use to
 pause
 and continue the decoding.

This still could be useful... But processing will also pause when hw runs out
of buffers. It will be resumed after the application consumes/produces new
buffers and enqueues them.

 
  I'd still return a specific error when the size changes since it's more
  explicit that something is not right, rather than just a flag. But if I'm
  alone in thinking so I won't insist.
 
  Regards,
 

Best wishes,
--
Kamil Debski
Linux Platform Group
Samsung Poland RD Center



Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-06 Thread Mauro Carvalho Chehab

On 06-12-2011 13:19, Kamil Debski wrote:

From: Mauro Carvalho Chehab [mailto:mche...@redhat.com]
Sent: 06 December 2011 15:42

On 06-12-2011 12:28, 'Sakari Ailus' wrote:

Hi all,

On Tue, Dec 06, 2011 at 01:00:59PM +0100, Laurent Pinchart wrote:
...

2) new requirement is for a bigger buffer. DMA transfers need to be
stopped before actually writing inside the buffer (otherwise, memory
will be corrupted).

In this case, all queued buffers should be marked with an error flag.
So, both V4L2_BUF_FLAG_FORMATCHANGED and V4L2_BUF_FLAG_ERROR should
raise. The new format should be available via G_FMT.


I'd like to reword this as follows:

1. In all cases, the application needs to be informed that the format has
changed.

V4L2_BUF_FLAG_FORMATCHANGED (or a similar flag) is all we need. G_FMT

will

report the new format.

2. In all cases, the application must have the option of reallocating

buffers

if it wishes.

In order to support this, the driver needs to wait until the application
acknowledged the format change before it starts decoding the stream.
Otherwise, if the codec started decoding the new stream to the existing
buffers by itself, applications wouldn't have the option of freeing the
existing buffers and allocating smaller ones.

STREAMOFF/STREAMON is one way of acknowledging the format change. I'm not
opposed to other ways of doing that, but I think we need an

acknowledgment API

to tell the driver to proceed.


Forcing STREAMOFF/STREAMON has two major advantages:

1) The application will have an ability to free and reallocate buffers if

it

wishes so, and

2) It will get explicit information on the changed format. Alternative

would

require an additional API to query the format of buffers in cases the
information isn't implicitly available.


As already said, a simple flag may give this meaning. Alternatively (or
complementary,
an event may be generated, containing the new format).


If we do not require STREAMOFF/STREAMON, the stream would have to be

paused

until the application chooses to continue it after dealing with its

buffers

and formats.


No. STREAMOFF is always used to stop the stream. We can't make it mean
otherwise.

So, after calling it, application should assume that frames will be lost,
while
the DMA engine doesn't start again.


Do you mean all buffers or just those that are queued in hardware?


Of course the ones queued.


What has been processed stays processed, it should not matter to the buffers
that have been processed.


Sure.


The compressed buffer that is queued in the driver and that caused the 
resolution
change is on the OUTPUT queue.


Not necessarily. If the buffer is smaller than the size needed after the
resolution change, what is there is trash: it could be a partially filled or an
empty buffer, depending on whether the driver detected the format change after
or before it started filling it.


STREAMOFF is only done on the CAPTURE queue, so it
stays queued and information is retained.

 From CAPTURE all processed buffers have already been dequeued, so yes the 
content of
the buffers queued in hw is lost. But this is ok, because after the resolution 
change
the previous frames are not used in prediction.


No. According to the spec:

The VIDIOC_STREAMON and VIDIOC_STREAMOFF ioctl start and stop the 
capture or
output process during streaming (memory mapping or user pointer) I/O.

Specifically the capture hardware is disabled and no input buffers are 
filled
(if there are any empty buffers in the incoming queue) until 
VIDIOC_STREAMON
has been called. Accordingly the output hardware is disabled, no video 
signal
is produced until VIDIOC_STREAMON has been called. The ioctl will 
succeed
only when at least one output buffer is in the incoming queue.

The VIDIOC_STREAMOFF ioctl, apart of aborting or finishing any DMA in 
progress,
unlocks any user pointer buffers locked in physical memory, and it 
removes all
buffers from the incoming and outgoing queues. That means all images 
captured
but not dequeued yet will be lost, likewise all images enqueued for 
output
but not transmitted yet. I/O returns to the same state as after calling
VIDIOC_REQBUFS and can be restarted accordingly.


My initial idea was to acknowledge the resolution change by G_FMT.
Later in our chat it had evolved into S_FMT. Then it mutated into
STREAMOFF/STREAMON on the CAPTURE queue.


Why should the application ack it? If the application doesn't agree, it can just
send a VIDIOC_STREAMOFF. If the application doesn't do that, and the buffer size
is enough to proceed, the hardware should just keep doing the DMA transfers and
assume that the application is OK with it.


For things like MPEG decoders, Hans proposed an ioctl, that could use to
pause
and continue the decoding.


This still could be useful... But processing will also pause when hw runs out
of buffers. It will be resumed after the application consumes/produces new
buffers and enqueues them.

RE: [RFC] Resolution change support in video codecs in v4l2

2011-12-06 Thread Kamil Debski
 From: Mauro Carvalho Chehab [mailto:mche...@redhat.com]
 Sent: 06 December 2011 16:40
 
 On 06-12-2011 13:19, Kamil Debski wrote:
  From: Mauro Carvalho Chehab [mailto:mche...@redhat.com]
  Sent: 06 December 2011 15:42
 
  On 06-12-2011 12:28, 'Sakari Ailus' wrote:
  Hi all,
 
  On Tue, Dec 06, 2011 at 01:00:59PM +0100, Laurent Pinchart wrote:
  ...
  2) new requirement is for a bigger buffer. DMA transfers need to
 be
  stopped before actually writing inside the buffer (otherwise,
 memory
  will be corrupted).
 
  In this case, all queued buffers should be marked with an error
 flag.
  So, both V4L2_BUF_FLAG_FORMATCHANGED and V4L2_BUF_FLAG_ERROR
 should
  raise. The new format should be available via G_FMT.
 
  I'd like to reword this as follows:
 
  1. In all cases, the application needs to be informed that the format
 has
  changed.
 
  V4L2_BUF_FLAG_FORMATCHANGED (or a similar flag) is all we need. G_FMT
  will
  report the new format.
 
  2. In all cases, the application must have the option of reallocating
  buffers
  if it wishes.
 
  In order to support this, the driver needs to wait until the
 application
  acknowledged the format change before it starts decoding the stream.
  Otherwise, if the codec started decoding the new stream to the existing
  buffers by itself, applications wouldn't have the option of freeing the
  existing buffers and allocating smaller ones.
 
  STREAMOFF/STREAMON is one way of acknowledging the format change. I'm
 not
  opposed to other ways of doing that, but I think we need an
  acknowledgment API
  to tell the driver to proceed.
 
  Forcing STREAMOFF/STREAMON has two major advantages:
 
  1) The application will have an ability to free and reallocate buffers
 if
  it
  wishes so, and
 
  2) It will get explicit information on the changed format. Alternative
  would
  require an additional API to query the format of buffers in cases the
  information isn't implicitly available.
 
  As already said, a simple flag may give this meaning. Alternatively (or
  complementary,
  an event may be generated, containing the new format).
 
  If we do not require STREAMOFF/STREAMON, the stream would have to be
  paused
  until the application chooses to continue it after dealing with its
  buffers
  and formats.
 
  No. STREAMOFF is always used to stop the stream. We can't make it mean
  otherwise.
 
  So, after calling it, application should assume that frames will be lost,
  while
  the DMA engine doesn't start again.
 
  Do you mean all buffers or just those that are queued in hardware?
 
 Of course the ones queued.
 
  What has been processed stays processed, it should not matter to the
 buffers
  that have been processed.
 
 Sure.
 
  The compressed buffer that is queued in the driver and that caused the
 resolution
  change is on the OUTPUT queue.
 
 Not necessarily. If the buffer is smaller than the size needed for the
 resolution
 change, what is there is trash, as it could be a partially filled buffer or
 an
 empty buffer, depending if the driver detected about the format change after
 or
 before start filling it.

I see the problem. If a bigger buffer is needed it's clear. A buffer with
no sane data is returned and *_FORMAT_CHANGED | *_ERROR flags are set.
If the resolution is changed but it fits the current conditions (size + number
of buffers) then what should be the contents of the returned buffer?

I think that it should still contain no useful data, just the *_FORMAT_CHANGED |
*_ERROR flags set. Then the application could decide whether to keep the current
size/alignment/... or to allocate new buffers, and then ACK the driver.
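
A sketch of the decision the application would make after dequeuing such a
marker buffer (illustrative only, single-planar; whether the existing buffers
are kept is exactly the choice being discussed here):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/*
 * After the marker buffer, decide whether the existing CAPTURE buffers of
 * 'cur_buf_size' bytes can be reused: 1 = reuse, 0 = reallocate, -1 = error.
 */
static int can_reuse_buffers(int fd, unsigned int cur_buf_size)
{
    struct v4l2_format fmt;

    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_G_FMT, &fmt))
        return -1;

    /* Big enough (and alignment acceptable): just ACK and stream on. */
    return fmt.fmt.pix.sizeimage <= cur_buf_size;
}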

For our (Samsung) hw this is not a problem; we could always use the existing
buffers if possible (size permitting). But Sakari reported that he might need to
adjust some alignment property. Also, under memory constraints, the application
might choose to allocate smaller buffers.


 
  STREAMOFF is only done on the CAPTURE queue, so it
  stays queued and information is retained.
 
   From CAPTURE all processed buffers have already been dequeued, so yes the
 content of
  the buffers queued in hw is lost. But this is ok, because after the
 resolution change
  the previous frames are not used in prediction.
 
 No. According to the spec:
 
   The VIDIOC_STREAMON and VIDIOC_STREAMOFF ioctl start and stop the
 capture or
   output process during streaming (memory mapping or user pointer) I/O.
 
   Specifically the capture hardware is disabled and no input buffers are
 filled
   (if there are any empty buffers in the incoming queue) until
 VIDIOC_STREAMON
   has been called. Accordingly the output hardware is disabled, no video
 signal
   is produced until VIDIOC_STREAMON has been called. The ioctl will
 succeed
   only when at least one output buffer is in the incoming queue.
 
   The VIDIOC_STREAMOFF ioctl, apart of aborting or finishing any DMA in
 progress,
   unlocks any user pointer buffers locked in 

Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-06 Thread Mauro Carvalho Chehab

On 06-12-2011 12:35, Sakari Ailus wrote:

Hi Mauro,

On Fri, Dec 02, 2011 at 02:50:17PM -0200, Mauro Carvalho Chehab wrote:

On 02-12-2011 11:57, Sakari Ailus wrote:

Hi Mauro,

On Fri, Dec 02, 2011 at 10:35:40AM -0200, Mauro Carvalho Chehab wrote:

On 02-12-2011 08:31, Kamil Debski wrote:

Hi,

Yesterday we had a chat about video codecs in V4L2 and how to change the
interface to accommodate the needs of GStreamer (and possibly other media
players and applications using video codecs).

The problem that many hardware codecs need a fixed number of pre-allocated
buffers should be resolved when gstreamer 0.11 will be released.

The main issue that we are facing is the resolution change and how it should be
handled by the driver and the application. The resolution change is
particularly common in digital TV. It occurs when content on a single channel
is changed. For example there is the transition from a TV series to a
commercials block. Other stream parameters may also change. The minimum number
of buffers required for decoding is of particular interest of us. It is
connected with how many old buffers are used for picture prediction.

When this occurs there are two possibilities: resolution can be increased or
decreased. In the first case it is very likely that the current buffers are too
small to fit the decoded frames. In the latter there is the choice to use the
existing buffers or allocate new set of buffer with reduced size. Same applies
to the number of buffers - it can be decreased or increased.

On the OUTPUT queue there is not much to be done. A buffer that contains a
frame with the new resolution will not be dequeued until it is fully processed.

On the CAPTURE queue the application has to be notified about the resolution
change.  The idea proposed during the chat is to introduce a new flag
V4L2_BUF_FLAG_WRONGFORMAT.


IMO, a bad name. I would call it as V4L2_BUF_FLAG_FORMATCHANGED.


The alternative is to return a specific error code to the user --- the frame
would not be decoded in either case. See below.


As I said, some drivers work with buffers with a bigger size than the format, 
and
does allow setting a smaller format without the need of streamoff.




1) After all the frames with the old resolution are dequeued a buffer with the
following flags V4L2_BUF_FLAG_ERROR | V4L2_BUF_FLAG_WRONGFORMAT is returned.
2) To acknowledge the resolution change the application should STREAMOFF, check
what has changed and then STREAMON.


I don't think this is needed, if the buffer size is enough to support the new
format.


Sometimes not, but sometimes there are buffer line alignment requirements
which must be communicated to the driver using S_FMT. If the frames are
decoded using a driver-decided format, it might be impossible to actually
use these decoded frames.

That's why there's streamoff and streamon.


Don't get me wrong. What I'm saying is that there are valid cases where
there's no need to streamoff/streamon. What I'm saying is that, when
there's no need to do it, just don't raise the V4L2_BUF_FLAG_ERROR flag.
The V4L2_BUF_FLAG_FORMATCHANGED still makes sense.


I try not to. :)

The issue is that it's the user space which knows how it is going to use the
buffers it dequeues from a device. It's not possible for the driver know
that --- unless explicitly told by the user. This could be a new flag, if we
need differing behaviour as it seems here.

The user may queue the same memory buffers to another device, which I
consider to be a common use case in embedded systems. The line alignment
requirements of the two (or more) devices often are not the same.


So what?

You're probably not referring to queuing the same buffer at the same time
to two separate devices. If they're being queued at different times, I can't
see any issue.




Btw, a few drivers (bttv comes into my mind) properly handles format changes.
This were there in order to support a bad behavior found on a few V4L1 
applications,
where the applications were first calling STREAMON and then setting the buffer.


The buffers do not have a format, the video device queue has. If the format
changes during streaming it is impossible to find that out using the current
API.


Yes, but extending it to properly support that is the scope of this RFC. Yet,
several drivers allow resizing the format (e. g. the resolution of the image)
without streamon/streamoff, via an explicit call to S_FMT.

So, whatever change at the API is done, it should keep supporting format changes
(in practice, resolution changes) without the need of re-initializing the DMA 
engine,
of course when such change won't break the capability for userspace to decode
the new frames.


Stopping and starting the queue does not involve additional penalty: a
memory-to-memory device processes buffers one at a time, and most of the
hardware stops between the buffers in any case before being started by the
software again. The buffers are not affected either; they stay mapped and
pinned to memory.

Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-06 Thread Mauro Carvalho Chehab

On 06-12-2011 14:11, Kamil Debski wrote:

From: Mauro Carvalho Chehab [mailto:mche...@redhat.com]
Sent: 06 December 2011 16:40

On 06-12-2011 13:19, Kamil Debski wrote:

From: Mauro Carvalho Chehab [mailto:mche...@redhat.com]
Sent: 06 December 2011 15:42

On 06-12-2011 12:28, 'Sakari Ailus' wrote:

Hi all,

On Tue, Dec 06, 2011 at 01:00:59PM +0100, Laurent Pinchart wrote:
...

2) new requirement is for a bigger buffer. DMA transfers need to

be

stopped before actually writing inside the buffer (otherwise,

memory

will be corrupted).

In this case, all queued buffers should be marked with an error

flag.

So, both V4L2_BUF_FLAG_FORMATCHANGED and V4L2_BUF_FLAG_ERROR

should

raise. The new format should be available via G_FMT.


I'd like to reword this as follows:

1. In all cases, the application needs to be informed that the format

has

changed.

V4L2_BUF_FLAG_FORMATCHANGED (or a similar flag) is all we need. G_FMT

will

report the new format.

2. In all cases, the application must have the option of reallocating

buffers

if it wishes.

In order to support this, the driver needs to wait until the

application

acknowledged the format change before it starts decoding the stream.
Otherwise, if the codec started decoding the new stream to the existing
buffers by itself, applications wouldn't have the option of freeing the
existing buffers and allocating smaller ones.

STREAMOFF/STREAMON is one way of acknowledging the format change. I'm

not

opposed to other ways of doing that, but I think we need an

acknowledgment API

to tell the driver to proceed.


Forcing STREAMOFF/STREAMON has two major advantages:

1) The application will have an ability to free and reallocate buffers

if

it

wishes so, and

2) It will get explicit information on the changed format. Alternative

would

require an additional API to query the format of buffers in cases the
information isn't implicitly available.


As already said, a simple flag may give this meaning. Alternatively (or
complementary,
an event may be generated, containing the new format).


If we do not require STREAMOFF/STREAMON, the stream would have to be

paused

until the application chooses to continue it after dealing with its

buffers

and formats.


No. STREAMOFF is always used to stop the stream. We can't make it mean
otherwise.

So, after calling it, application should assume that frames will be lost,
while
the DMA engine doesn't start again.


Do you mean all buffers or just those that are queued in hardware?


Of course the ones queued.


What has been processed stays processed, it should not matter to the

buffers

that have been processed.


Sure.


The compressed buffer that is queued in the driver and that caused the

resolution

change is on the OUTPUT queue.


Not necessarily. If the buffer is smaller than the size needed for the
resolution
change, what is there is trash, as it could be a partially filled buffer or
an
empty buffer, depending if the driver detected about the format change after
or
before start filling it.


I see the problem. If a bigger buffer is needed it's clear. A buffer with
no sane data is returned and *_FORMAT_CHANGED | *_ERROR flags are set.
If the resolution is changed but it fits the current conditions (size + number
of buffers) then what should be the contents of the returned buffer?


The one returned by G_FMT or by a specific event. The userspace application can
change it later with S_FMT, if needed.


I think that still it should contain no useful data, just *_FORMAT_CHANGED | 
*_ERROR
flags set. Then the application could decide whether it keeps the current
size/alignment/... or should it allocate new buffers. Then ACK the driver.


This will cause frame losses on Capture devices. It probably doesn't make sense 
to
define resolution change support like this for output devices.

Eventually, we may have an extra flag: *_PAUSE. If *_PAUSE is detected, a 
VIDEO_DECODER_CMD
is needed to continue.

So, on M2M devices, the 3 flags are raised and the buffer is not filled.  This 
would cover
Sakari's case.
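
If something along these lines were adopted, the application side might look
roughly like the sketch below. Both flag values are placeholders for what is
being proposed in this thread, and the resume command assumes Hans' proposed
decoder-command ioctl ends up with a VIDIOC_DECODER_CMD-style interface; none
of this is settled API.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Placeholder values for the flags proposed in this thread. */
#define V4L2_BUF_FLAG_FORMATCHANGED 0x10000
#define V4L2_BUF_FLAG_PAUSE         0x20000

static void handle_marker_buffer(int fd, const struct v4l2_buffer *buf)
{
    struct v4l2_decoder_cmd cmd;

    if (!(buf->flags & V4L2_BUF_FLAG_PAUSE))
        return;

    /* ... G_FMT, reallocate buffers if needed ... */

    /* Tell the paused decoder to continue. */
    memset(&cmd, 0, sizeof(cmd));
    cmd.cmd = V4L2_DEC_CMD_RESUME; /* assuming a resume-like command */
    ioctl(fd, VIDIOC_DECODER_CMD, &cmd);
}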


For our (Samsung) hw this is not a problem, we could always use the existing 
buffers
if it is possible (size). But Sakari had reported that he might need to adjust 
some
alignment property. Also, having memory constraints, the application might 
choose
to allocate smaller buffers.





STREAMOFF is only done on the CAPTURE queue, so it
stays queued and information is retained.

  From CAPTURE all processed buffers have already been dequeued, so yes the

content of

the buffers queued in hw is lost. But this is ok, because after the

resolution change

the previous frames are not used in prediction.


No. According to the spec:

The VIDIOC_STREAMON and VIDIOC_STREAMOFF ioctl start and stop the
capture or
output process during streaming (memory mapping or user pointer) I/O.

Specifically the capture hardware is disabled and no input buffers are
filled
(if there are any empty buffers in the 

Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-06 Thread Sakari Ailus
On Tue, Dec 06, 2011 at 02:20:32PM -0200, Mauro Carvalho Chehab wrote:
 1) After all the frames with the old resolution are dequeued a buffer 
 with the
 following flags V4L2_BUF_FLAG_ERROR | V4L2_BUF_FLAG_WRONGFORMAT is 
 returned.
 2) To acknowledge the resolution change the application should 
 STREAMOFF, check
 what has changed and then STREAMON.
 
 I don't think this is needed, if the buffer size is enough to support the 
 new
 format.
 
 Sometimes not, but sometimes there are buffer line alignment requirements
 which must be communicated to the driver using S_FMT. If the frames are
 decoded using a driver-decided format, it might be impossible to actually
 use these decoded frames.
 
 That's why there's streamoff and streamon.
 
 Don't get me wrong. What I'm saying is that there are valid cases where
 there's no need to streamoff/streamon. What I'm saying is that, when
 there's no need to do it, just don't raise the V4L2_BUF_FLAG_ERROR flag.
 The V4L2_BUF_FLAG_FORMATCHANGED still makes sense.
 
 I try not to. :)
 
 The issue is that it's the user space which knows how it is going to use the
 buffers it dequeues from a device. It's not possible for the driver know
 that --- unless explicitly told by the user. This could be a new flag, if we
 need differing behaviour as it seems here.
 
 The user may queue the same memory buffers to another device, which I
 consider to be a common use case in embedded systems. The line alignment
 requirements of the two (or more) devices often are not the same.
 
 So what?
 
 You're probably not referring to queuing the same buffer at the same time
 to two separate devices. If they're being queued at different times, I can't
 see any issue.

The buffers are used in separate devices at different times. If the decoder
has a line alignment requirement of 32, but the display output device has
128, chances are good that the new bytesperline won't be suitable to be
displayed.
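
To make the alignment problem concrete, a sketch of how an application could
check, before queuing the decoder's buffers on the second device, whether the
new bytesperline is acceptable there (single-planar, illustrative; the device
roles are just an example):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/*
 * 1 = the display device accepts the decoder's layout as-is,
 * 0 = it would adjust it (e.g. bump bytesperline), -1 = error.
 */
static int layout_is_displayable(int dec_fd, int disp_fd)
{
    struct v4l2_format dec, disp;

    memset(&dec, 0, sizeof(dec));
    dec.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(dec_fd, VIDIOC_G_FMT, &dec))
        return -1;

    disp = dec;
    disp.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
    if (ioctl(disp_fd, VIDIOC_TRY_FMT, &disp))
        return -1;

    /* e.g. decoder aligns lines to 32, display wants 128: TRY_FMT bumps it. */
    return disp.fmt.pix.bytesperline == dec.fmt.pix.bytesperline &&
           disp.fmt.pix.sizeimage <= dec.fmt.pix.sizeimage;
}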

 Btw, a few drivers (bttv comes into my mind) properly handles format 
 changes.
 This were there in order to support a bad behavior found on a few V4L1 
 applications,
 where the applications were first calling STREAMON and then setting the 
 buffer.
 
 The buffers do not have a format, the video device queue has. If the format
 changes during streaming it is impossible to find that out using the 
 current
 API.
 
 Yes, but extending it to proper support it is the scope of this RFC. Yet, 
 several
 drivers allow to resize the format (e. g. the resolution of the image) 
 without
 streamon/streamoff, via an explicit call to S_FMT.
 
 So, whatever change at the API is done, it should keep supporting format 
 changes
 (in practice, resolution changes) without the need of re-initializing the 
 DMA engine,
 of course when such change won't break the capability for userspace to 
 decode
 the new frames.
 
 Stopping and starting the queue does not involve additional penalty: a
 memory-to-memory device processes buffers one at a time, and most of the
 hardware stops between the buffers in any case before being started by the
 software again. The buffers are not affected either; they stay mapped and
 pinned to memory.
 
 This is true only on memory-to-memory devices (although the V4L2 spec is 
 incomplete
 with regards to this specific type of device).

That's true. Would this be a candidate for a new V4L2 profile?

 If I'm not mistaken, the old vlc V4L1 driver used to do that.
 
 What bttv used to do is to allocate a buffer big enough to support the 
 max resolution.
 So, any format changes (size increase or decrease) would fit into the 
 allocated
 buffers.
 
 Depending on how applications want to handle format changes, and how big 
 is the
 amount of memory on the system, a similar approach may be done with 
 CREATE_BUFFERS:
 just allocate enough space for the maximum supported resolution for that 
 stream,
 and let the resolution changes, as required.
 
 I'm not fully certain it is always possible to find out the largest stream
 resolution. I'd like an answer from someone knowing more about video codecs
 than I do.
 
 When the input or output is some hardware device, there is a maximum 
 resolution
 (sensor resolution, monitor resolution, pixel sampling rate, etc). A pure 
 DSP block
 doesn't have it, but, anyway, it should be up to the userspace application 
 to decide
 if it wants to over-allocate a buffer, in order to cover a scenario where 
 the
 resolution may change or not.
 
 In this case, allocating bigger buffers than necessary is a big
 disadvantage, and forcing the application to requeue the compressed data
 to the OUTPUT queue doesn't sound very appealing either. This could involve
 the application having to queue several buffers which already have been
 decoded earlier on. Also figuring out how many is a task for the decoder,
 not the application.
 
 Sorry, but it seems I missed what scenario you're considering... M2M, Output, 
 Capture? All?
 some specific pipeline?

In this case it's 

Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-05 Thread Mauro Carvalho Chehab

On 02-12-2011 22:08, 'Sakari Ailus' wrote:

Hi Mauro,

On Fri, Dec 02, 2011 at 03:07:44PM -0200, Mauro Carvalho Chehab wrote:

I'm not fully certain it is always possible to find out the largest stream
resolution. I'd like an answer from someone knowing more about video codecs
than I do.


That is one thing. Also, I don't think that allocating N buffers each of
1920x1080 size up front is a good idea. In embedded systems the memory can
be scarce (although recently this is changing and we see smart phones with
1 GB of ram). It is better to allow application to use the extra memory when
possible, if the memory is required by the hardware then it can be reclaimed.


It depends on how much memory you have on the device. APIs should be designed
to allow multiple use cases. I'm sure that a dedicated system (either embedded
or not) meant to do nothing but stream video will need enough memory to handle
the worst case. If there is a requirement for such a server not to stop
streaming when the resolution changes, the right thing to do is to allocate
N buffers of 1920x1080.

Also, as you've said, even smart phones now can have multiple cores, GBs of RAM
and, soon enough, likely 64-bit kernels.


Some devices may, but they tend to be high-end devices. Others are tighter
on memory, and even if there is plenty, one can seldom just go and waste it.
As you also said, we must take different use cases into account.


Let's not limit the API due to a current constraint that may not be true in the
near future.

What I'm saying is that it should be an option for the driver to require
STREAMOFF in order to change the buffer size, not a mandatory requirement.


Let's assume the user does not wish the streaming to be stopped on a format
change if the buffers are big enough for the new format. The user does get a
buffer telling the format has changed, and queries the new format using G_FMT.
In between the two IOCTLs time has passed and the format may have changed
again. How would we avoid that from happening, unless we stop the stream?


I see two situations:

1) The buffer is big enough for the new format. There's no need to stop 
streaming.

2) The buffer is not big enough. Whatever logic is used, the DMA engine should
not be able to write past the buffer (it can either stop before filling the
buffer, or the hardware may just partially fill the data). An ERROR flag should
be raised in this case, and a STREAMOFF will effectively happen. The STREAMOFF
is implicit in this case: even if the driver doesn't receive such an ioctl,
there's no sense in keeping the DMA engine working.


So, in case 1, the buffer is big enough for the new format. As a flag indicates
that the format has changed, userspace could get the new format either by some
event or by a G_FMT call.

Ok, with a G_FMT call there is a risk that the format changes again before a
newly filled buffer, but this is very unlikely (changing the format for just one
frame doesn't make much sense in a stream, except if such a frame is a high-res
snapshot). On non-snapshot streams, a resolution change may happen, for example,
when an HD sports game or a 3D movie gets interrupted by commercials broadcast
in a different format. In such a case, the new format will stay there for more
than a few buffers. So, using G_FMT in such cases would work.

An event-based implementation is for sure more robust. So, I think it is worth
implementing as well.


The underlying root cause for the problem is that the format is not bound to
buffers.


Yes. A format-change type of event could be used to do such binding.
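
A sketch of what that could look like, assuming such an event type gets
defined; the V4L2_EVENT_FMT_CHANGE identifier below is a placeholder for the
proposal, not an existing event:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

#define V4L2_EVENT_FMT_CHANGE (V4L2_EVENT_PRIVATE_START + 1) /* placeholder */

static int watch_format_changes(int fd)
{
    struct v4l2_event_subscription sub;
    struct v4l2_event ev;

    memset(&sub, 0, sizeof(sub));
    sub.type = V4L2_EVENT_FMT_CHANGE;
    if (ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub))
        return -1;

    /* Later, typically after poll() reports POLLPRI: */
    memset(&ev, 0, sizeof(ev));
    if (ioctl(fd, VIDIOC_DQEVENT, &ev))
        return -1;

    if (ev.type == V4L2_EVENT_FMT_CHANGE) {
        /* The new format is bound to the following buffers; G_FMT it. */
    }
    return 0;
}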


I also do not see it as a problem to require a stream stop and start. Changing
resolution during streaming is anyway something no current application has
likely prepared for, so we are not breaking anything. Quite the contrary,
actually: applications not knowing the flag would only be able to dequeue junk
after receiving it the first time.


This may be true for the current applications and use cases, but it doesn't seem
to fit for all.

Assuming a decoding block that is receiving an MPEG stream as input and
converting it to RGB, it is possible for such a block to warn the userspace
application that the format has changed before actually starting to fill the
buffer. If the buffer is big enough, there is no need to discard its content,
nor to stop streaming.


...


The user space still wants to be able to show these buffers, so a new flag
would likely be required --- V4L2_BUF_FLAG_READ_ONLY, for example.


Currently it is done in the following way. On the CAPTURE side you have a
total of N buffers. Out of them K are necessary for decoding (K = 1 + L).
L is the number of buffers necessary for reference lookup and the single
buffer is required as the destination for new frame. If less than K buffers
are queued then no processing is done. The buffers that have been dequeued
should be ok with the application changing 
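
To put illustrative numbers on the K = 1 + L accounting above: if the codec
needs L = 3 reference buffers, then K = 4, and with N = 6 CAPTURE buffers
allocated the application can hold at most N - K = 2 dequeued buffers at any
given time before decoding stalls for lack of a destination buffer.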

Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-02 Thread Mauro Carvalho Chehab

On 02-12-2011 08:31, Kamil Debski wrote:

Hi,

Yesterday we had a chat about video codecs in V4L2 and how to change the
interface to accommodate the needs of GStreamer (and possibly other media
players and applications using video codecs).

The problem that many hardware codecs need a fixed number of pre-allocated
buffers should be resolved when gstreamer 0.11 will be released.

The main issue that we are facing is the resolution change and how it should be
handled by the driver and the application. The resolution change is
particularly common in digital TV. It occurs when content on a single channel
is changed. For example there is the transition from a TV series to a
commercials block. Other stream parameters may also change. The minimum number
of buffers required for decoding is of particular interest of us. It is
connected with how many old buffers are used for picture prediction.

When this occurs there are two possibilities: resolution can be increased or
decreased. In the first case it is very likely that the current buffers are too
small to fit the decoded frames. In the latter there is the choice to use the
existing buffers or allocate new set of buffer with reduced size. Same applies
to the number of buffers - it can be decreased or increased.

On the OUTPUT queue there is not much to be done. A buffer that contains a
frame with the new resolution will not be dequeued until it is fully processed.

On the CAPTURE queue the application has to be notified about the resolution
change.  The idea proposed during the chat is to introduce a new flag
V4L2_BUF_FLAG_WRONGFORMAT.


IMO, a bad name. I would call it as V4L2_BUF_FLAG_FORMATCHANGED.



1) After all the frames with the old resolution are dequeued a buffer with the
following flags V4L2_BUF_FLAG_ERROR | V4L2_BUF_FLAG_WRONGFORMAT is returned.
2) To acknowledge the resolution change the application should STREAMOFF, check
what has changed and then STREAMON.


I don't think this is needed, if the buffer size is enough to support the new
format.

Btw, a few drivers (bttv comes to my mind) properly handle format changes.
This was there in order to support a bad behavior found in a few V4L1
applications, where the applications were first calling STREAMON and then
setting the buffer.

If I'm not mistaken, the old vlc V4L1 driver used to do that.

What bttv used to do is to allocate a buffer big enough to support the max 
resolution.
So, any format changes (size increase or decrease) would fit into the allocated
buffers.

Depending on how applications want to handle format changes, and how big is the
amount of memory on the system, a similar approach may be done with 
CREATE_BUFFERS:
just allocate enough space for the maximum supported resolution for that stream,
and let the resolution changes, as required.

I see two possible scenarios here:

1) new format size is smaller than the buffers. Just V4L2_BUF_FLAG_FORMATCHANGED
should be raised. No need to stop DMA transfers with STREAMOFF.

2) the new requirement is for a bigger buffer. DMA transfers need to be stopped
before actually writing inside the buffer (otherwise, memory will be corrupted).
In this case, all queued buffers should be marked with an error flag. So, both
V4L2_BUF_FLAG_FORMATCHANGED and V4L2_BUF_FLAG_ERROR should be raised. The new
format should be available via G_FMT.
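
A sketch of how a capture application could tell the two scenarios apart when
dequeuing; V4L2_BUF_FLAG_ERROR already exists, while the format-changed flag
below is only the value proposed in this thread:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

#define V4L2_BUF_FLAG_FORMATCHANGED 0x10000 /* proposed, not in videodev2.h */

static void dequeue_one(int fd)
{
    struct v4l2_buffer buf;

    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_DQBUF, &buf))
        return;

    if (buf.flags & V4L2_BUF_FLAG_FORMATCHANGED) {
        if (buf.flags & V4L2_BUF_FLAG_ERROR) {
            /* Scenario 2: payload is junk, DMA was stopped;
             * G_FMT, allocate bigger buffers, restart streaming. */
        } else {
            /* Scenario 1: the frame is valid in the new, smaller
             * format; just pick up the new format via G_FMT. */
        }
    }
    /* ... consume the frame and re-queue buf as usual ... */
}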


3) The application should check with G_FMT how did the format change and the
V4L2_CID_MIN_BUFFERS_FOR_CAPTURE control to check how many buffers are
required.
4) Now it is necessary to resume processing:
   A. If there is no need to change the size of buffers or their number the
application needs only to STREAMON.
   B. If it is necessary to allocate bigger buffers the application should use
CREATE_BUFS to allocate new buffers, the old should be left until the
application is finished and frees them with the DESTROY_BUFS call. S_FMT
should be used to adjust the new format (if necessary and possible in HW).


If the application has already stopped the DMA transfers with STREAMOFF, it can
also just request the buffers again with REQBUFS; e.g. vb2 should be smart
enough to accept both ways to allocate buffers.

Also, if the format changed, applications should use G_FMT  to get the new 
buffer
requirements. Using S_FMT here doesn't seem to be the right thing to do, as the
format may have changed again, while the DMA transfers were stopped by 
STREAMOFF.
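
For reference, a sketch of steps 3) and 4B) quoted above using the existing
V4L2_CID_MIN_BUFFERS_FOR_CAPTURE control and the new CREATE_BUFS ioctl
(single-planar, error paths trimmed; whether G_FMT or S_FMT is the right call
here is exactly the open question):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Query the new format and allocate enough buffers matching it. */
static int grow_capture_buffers(int fd)
{
    struct v4l2_control ctrl;
    struct v4l2_create_buffers cb;

    memset(&cb, 0, sizeof(cb));
    cb.format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_G_FMT, &cb.format))      /* step 3: new format */
        return -1;

    memset(&ctrl, 0, sizeof(ctrl));
    ctrl.id = V4L2_CID_MIN_BUFFERS_FOR_CAPTURE;   /* step 3: min buffers */
    if (ioctl(fd, VIDIOC_G_CTRL, &ctrl))
        return -1;

    cb.count = ctrl.value;                        /* step 4B: new buffers */
    cb.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_CREATE_BUFS, &cb))
        return -1;

    return cb.index; /* index of the first newly created buffer */
}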


   C. If only the number of buffers has changed then it is possible to add
buffers with CREATE_BUF or remove spare buffers with DESTROY_BUFS (not yet
implemented to my knowledge).


I don't see why a new format would require more buffers.

The minimal number of buffers is more related to latency issues and processing
speed at userspace than to any driver or format-dependent hardware constraints.

On the other hand, the maximum number of buffers might eventually have some
constraint, e. g. the hardware might support fewer buffers if the resolution
is too high.

I prefer to not add 

Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-02 Thread Sakari Ailus
Hi Mauro,

On Fri, Dec 02, 2011 at 10:35:40AM -0200, Mauro Carvalho Chehab wrote:
 On 02-12-2011 08:31, Kamil Debski wrote:
 Hi,
 
 Yesterday we had a chat about video codecs in V4L2 and how to change the
 interface to accommodate the needs of GStreamer (and possibly other media
 players and applications using video codecs).
 
 The problem that many hardware codecs need a fixed number of pre-allocated
 buffers should be resolved when gstreamer 0.11 will be released.
 
 The main issue that we are facing is the resolution change and how it should 
 be
 handled by the driver and the application. The resolution change is
 particularly common in digital TV. It occurs when content on a single channel
 is changed. For example there is the transition from a TV series to a
 commercials block. Other stream parameters may also change. The minimum 
 number
 of buffers required for decoding is of particular interest of us. It is
 connected with how many old buffers are used for picture prediction.
 
 When this occurs there are two possibilities: resolution can be increased or
 decreased. In the first case it is very likely that the current buffers are 
 too
 small to fit the decoded frames. In the latter there is the choice to use the
 existing buffers or allocate new set of buffer with reduced size. Same 
 applies
 to the number of buffers - it can be decreased or increased.
 
 On the OUTPUT queue there is not much to be done. A buffer that contains a
 frame with the new resolution will not be dequeued until it is fully 
 processed.
 
 On the CAPTURE queue the application has to be notified about the resolution
 change.  The idea proposed during the chat is to introduce a new flag
 V4L2_BUF_FLAG_WRONGFORMAT.
 
 IMO, a bad name. I would call it as V4L2_BUF_FLAG_FORMATCHANGED.

The alternative is to return a specific error code to the user --- the frame
would not be decoded in either case. See below.

 
 1) After all the frames with the old resolution are dequeued a buffer with 
 the
 following flags V4L2_BUF_FLAG_ERROR | V4L2_BUF_FLAG_WRONGFORMAT is returned.
 2) To acknowledge the resolution change the application should STREAMOFF, 
 check
 what has changed and then STREAMON.
 
 I don't think this is needed, if the buffer size is enough to support the new
 format.

Sometimes not, but sometimes there are buffer line alignment requirements
which must be communicated to the driver using S_FMT. If the frames are
decoded using a driver-decided format, it might be impossible to actually
use these decoded frames.

That's why there's streamoff and streamon.

 Btw, a few drivers (bttv comes into my mind) properly handles format changes.
 This were there in order to support a bad behavior found on a few V4L1 
 applications,
 where the applications were first calling STREAMON and then setting the 
 buffer.

The buffers do not have a format, the video device queue has. If the format
changes during streaming it is impossible to find that out using the current
API.

 If I'm not mistaken, the old vlc V4L1 driver used to do that.
 
 What bttv used to do is to allocate a buffer big enough to support the max 
 resolution.
 So, any format changes (size increase or decrease) would fit into the 
 allocated
 buffers.
 
 Depending on how applications want to handle format changes, and how big is 
 the
 amount of memory on the system, a similar approach may be done with 
 CREATE_BUFFERS:
 just allocate enough space for the maximum supported resolution for that 
 stream,
 and let the resolution changes, as required.

I'm not fully certain it is always possible to find out the largest stream
resolution. I'd like an answer from someone knowing more about video codecs
than I do.

 I see two possible scenarios here:
 
 1) new format size is smaller than the buffers. Just 
 V4L2_BUF_FLAG_FORMATCHANGED
 should be raised. No need to stop DMA transfers with STREAMOFF.
 
 2) new requirement is for a bigger buffer. DMA transfers need to be stopped 
 before
 actually writing inside the buffer (otherwise, memory will be corrupted). In 
 this
 case, all queued buffers should be marked with an error flag. So, both
 V4L2_BUF_FLAG_FORMATCHANGED and V4L2_BUF_FLAG_ERROR should raise. The new 
 format
 should be available via G_FMT.

In memory-to-memory devices, I assume that the processing stops immediately
when it's not possible to further process the frames. The capture queue
would be stopped.

 3) The application should check with G_FMT how did the format change and the
 V4L2_CID_MIN_BUFFERS_FOR_CAPTURE control to check how many buffers are
 required.
 4) Now it is necessary to resume processing:
A. If there is no need to change the size of buffers or their number the
 application needs only to STREAMON.
B. If it is necessary to allocate bigger buffers the application should 
  use
 CREATE_BUFS to allocate new buffers, the old should be left until the
 application is finished and frees them with the DESTROY_BUFS call. S_FMT
 should be used to adjust 

RE: [RFC] Resolution change support in video codecs in v4l2

2011-12-02 Thread Kamil Debski
Hi,

 From: Sakari Ailus [mailto:sakari.ai...@iki.fi]
 Sent: 02 December 2011 14:58
 
 Hi Mauro,
 
 On Fri, Dec 02, 2011 at 10:35:40AM -0200, Mauro Carvalho Chehab wrote:
  On 02-12-2011 08:31, Kamil Debski wrote:
  Hi,
  
  Yesterday we had a chat about video codecs in V4L2 and how to change the
  interface to accommodate the needs of GStreamer (and possibly other media
  players and applications using video codecs).
  
  The problem that many hardware codecs need a fixed number of pre-
 allocated
  buffers should be resolved when gstreamer 0.11 will be released.
  
  The main issue that we are facing is the resolution change and how it
 should be
  handled by the driver and the application. The resolution change is
  particularly common in digital TV. It occurs when content on a single
 channel
  is changed. For example there is the transition from a TV series to a
  commercials block. Other stream parameters may also change. The minimum
 number
  of buffers required for decoding is of particular interest of us. It is
  connected with how many old buffers are used for picture prediction.
  
  When this occurs there are two possibilities: resolution can be increased
 or
  decreased. In the first case it is very likely that the current buffers
 are too
  small to fit the decoded frames. In the latter there is the choice to use
 the
  existing buffers or allocate new set of buffer with reduced size. Same
 applies
  to the number of buffers - it can be decreased or increased.
  
  On the OUTPUT queue there is not much to be done. A buffer that contains
 a
  frame with the new resolution will not be dequeued until it is fully
 processed.
  
  On the CAPTURE queue the application has to be notified about the
 resolution
  change.  The idea proposed during the chat is to introduce a new flag
  V4L2_BUF_FLAG_WRONGFORMAT.
 
  IMO, a bad name. I would call it as V4L2_BUF_FLAG_FORMATCHANGED.
 
 The alternative is to return a specific error code to the user --- the frame
 would not be decoded in either case. See below.
 
  
  1) After all the frames with the old resolution are dequeued a buffer
 with the
  following flags V4L2_BUF_FLAG_ERROR | V4L2_BUF_FLAG_WRONGFORMAT is
 returned.
  2) To acknowledge the resolution change the application should STREAMOFF,
 check
  what has changed and then STREAMON.
 
  I don't think this is needed, if the buffer size is enough to support the
 new
  format.
 
 Sometimes not, but sometimes there are buffer line alignment requirements
 which must be communicated to the driver using S_FMT. If the frames are
 decoded using a driver-decided format, it might be impossible to actually
 use these decoded frames.
 That's why there's streamoff and streamon.

Also, if memory use is our key consideration then the application might want
to allocate smaller buffers.

 
  Btw, a few drivers (bttv comes into my mind) properly handles format
 changes.
  This were there in order to support a bad behavior found on a few V4L1
 applications,
  where the applications were first calling STREAMON and then setting the
 buffer.
 
 The buffers do not have a format, the video device queue has. If the format
 changes during streaming it is impossible to find that out using the current
 API.
 
  If I'm not mistaken, the old vlc V4L1 driver used to do that.
 
  What bttv used to do is to allocate a buffer big enough to support the max
 resolution.
  So, any format changes (size increase or decrease) would fit into the
 allocated
  buffers.
 
  Depending on how applications want to handle format changes, and how big
 is the
  amount of memory on the system, a similar approach may be done with
 CREATE_BUFFERS:
  just allocate enough space for the maximum supported resolution for that
 stream,
  and let the resolution changes, as required.
 
 I'm not fully certain it is always possible to find out the largest stream
 resolution. I'd like an answer from someone knowing more about video codecs
 than I do.

That is one thing. Also, I don't think that allocating N buffers each of 
1920x1080 size up front is a good idea. In embedded systems the memory can
be scarce (although recently this is changing and we see smart phones with
1 GB of ram). It is better to allow application to use the extra memory when
possible, if the memory is required by the hardware then it can be reclaimed.
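
To put a rough number on the up-front cost (illustrative only): one 1920x1080
NV12 frame padded to 1920x1088 takes about 1920 * 1088 * 1.5 = 3.1 MB, so
pre-allocating N = 16 such CAPTURE buffers costs around 50 MB of memory whether
or not the stream ever reaches full HD.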

 
  I see two possible scenarios here:
 
  1) new format size is smaller than the buffers. Just
 V4L2_BUF_FLAG_FORMATCHANGED
  should be raised. No need to stop DMA transfers with STREAMOFF.
 
  2) new requirement is for a bigger buffer. DMA transfers need to be
 stopped before
  actually writing inside the buffer (otherwise, memory will be corrupted).
 In this
  case, all queued buffers should be marked with an error flag. So, both
  V4L2_BUF_FLAG_FORMATCHANGED and V4L2_BUF_FLAG_ERROR should raise. The new
 format
  should be available via G_FMT.
 
 In memory-to-memory devices, I assume that the processing stops immediately
 when it's not 

Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-02 Thread Mauro Carvalho Chehab

On 02-12-2011 11:57, Sakari Ailus wrote:

Hi Mauro,

On Fri, Dec 02, 2011 at 10:35:40AM -0200, Mauro Carvalho Chehab wrote:

On 02-12-2011 08:31, Kamil Debski wrote:

Hi,

Yesterday we had a chat about video codecs in V4L2 and how to change the
interface to accommodate the needs of GStreamer (and possibly other media
players and applications using video codecs).

The problem that many hardware codecs need a fixed number of pre-allocated
buffers should be resolved when gstreamer 0.11 will be released.

The main issue that we are facing is the resolution change and how it should be
handled by the driver and the application. The resolution change is
particularly common in digital TV. It occurs when content on a single channel
is changed. For example there is the transition from a TV series to a
commercials block. Other stream parameters may also change. The minimum number
of buffers required for decoding is of particular interest to us. It is
connected with how many old buffers are used for picture prediction.

When this occurs there are two possibilities: resolution can be increased or
decreased. In the first case it is very likely that the current buffers are too
small to fit the decoded frames. In the latter there is the choice to use the
existing buffers or to allocate a new set of buffers with reduced size. The same applies
to the number of buffers - it can be decreased or increased.

On the OUTPUT queue there is not much to be done. A buffer that contains a
frame with the new resolution will not be dequeued until it is fully processed.

On the CAPTURE queue the application has to be notified about the resolution
change.  The idea proposed during the chat is to introduce a new flag
V4L2_BUF_FLAG_WRONGFORMAT.


IMO, a bad name. I would call it as V4L2_BUF_FLAG_FORMATCHANGED.


The alternative is to return a specific error code to the user --- the frame
would not be decoded in either case. See below.


As I said, some drivers work with buffers of a bigger size than the format, and do allow setting a smaller format without the need for streamoff.




1) After all the frames with the old resolution are dequeued a buffer with the
following flags V4L2_BUF_FLAG_ERROR | V4L2_BUF_FLAG_WRONGFORMAT is returned.
2) To acknowledge the resolution change the application should STREAMOFF, check
what has changed and then STREAMON.


I don't think this is needed, if the buffer size is enough to support the new
format.


Sometimes not, but sometimes there are buffer line alignment requirements
which must be communicated to the driver using S_FMT. If the frames are
decoded using a driver-decided format, it might be impossible to actually
use these decoded frames.

That's why there's streamoff and streamon.


Don't get me wrong. What I'm saying is that there are valid cases where there's no need to streamoff/streamon. When there's no need to do it, just don't raise the V4L2_BUF_FLAG_ERROR flag. The V4L2_BUF_FLAG_FORMATCHANGED flag still makes sense.
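
Expressed as a rough driver-side sketch, that rule could look like the following. Both flag names are the ones proposed in this thread, not existing defines, so the define below is only a placeholder, and the sizes stand for whatever the driver tracks internally:

#include <linux/videodev2.h>

#define V4L2_BUF_FLAG_FORMATCHANGED 0x01000000	/* proposed flag, placeholder value */

static __u32 flags_for_format_change(__u32 new_sizeimage, __u32 buffer_size)
{
	__u32 flags = V4L2_BUF_FLAG_FORMATCHANGED;

	/*
	 * If the decoded frame still fits the already-allocated buffer,
	 * streaming can continue and only the change is signalled.  If it
	 * does not fit, the buffer cannot hold a frame, so it is also
	 * returned with V4L2_BUF_FLAG_ERROR and the application is expected
	 * to STREAMOFF, call G_FMT and re-negotiate the buffers.
	 */
	if (new_sizeimage > buffer_size)
		flags |= V4L2_BUF_FLAG_ERROR;

	return flags;
}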


Btw, a few drivers (bttv comes to my mind) properly handle format changes.
This was there in order to support a bad behavior found in a few V4L1 applications, where the applications were first calling STREAMON and then setting the buffer.


The buffers do not have a format, the video device queue has. If the format
changes during streaming it is impossible to find that out using the current
API.


Yes, but extending it to properly support this is the scope of this RFC. Yet, several drivers allow resizing the format (e.g. the resolution of the image) without streamon/streamoff, via an explicit call to S_FMT.

So, whatever change is done to the API, it should keep supporting format changes (in practice, resolution changes) without the need of re-initializing the DMA engine, of course as long as such a change won't break the capability of userspace to decode the new frames.


If I'm not mistaken, the old vlc V4L1 driver used to do that.

What bttv used to do is to allocate a buffer big enough to support the max 
resolution.
So, any format changes (size increase or decrease) would fit into the allocated
buffers.

Depending on how applications want to handle format changes, and how big the amount of memory on the system is, a similar approach may be done with CREATE_BUFFERS: just allocate enough space for the maximum supported resolution for that stream, and let the resolution change as required.


I'm not fully certain it is always possible to find out the largest stream
resolution. I'd like an answer from someone knowing more about video codecs
than I do.


When the input or output is some hardware device, there is a maximum resolution
(sensor resolution, monitor resolution, pixel sampling rate, etc). A pure DSP 
block
doesn't have it, but, anyway, it should be up to the userspace application to 
decide
if it wants to over-allocate a buffer, in order to cover a scenario where the
resolution may change or not.


I see two possible scenarios here:

1) new format 

Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-02 Thread Mauro Carvalho Chehab

On 02-12-2011 13:41, Kamil Debski wrote:

Hi,


From: Sakari Ailus [mailto:sakari.ai...@iki.fi]
Sent: 02 December 2011 14:58

Hi Mauro,

On Fri, Dec 02, 2011 at 10:35:40AM -0200, Mauro Carvalho Chehab wrote:

On 02-12-2011 08:31, Kamil Debski wrote:

Hi,

Yesterday we had a chat about video codecs in V4L2 and how to change the
interface to accommodate the needs of GStreamer (and possibly other media
players and applications using video codecs).

The problem that many hardware codecs need a fixed number of pre-allocated buffers should be resolved when gstreamer 0.11 will be released.

The main issue that we are facing is the resolution change and how it should be handled by the driver and the application. The resolution change is particularly common in digital TV. It occurs when content on a single channel is changed. For example there is the transition from a TV series to a commercials block. Other stream parameters may also change. The minimum number of buffers required for decoding is of particular interest to us. It is connected with how many old buffers are used for picture prediction.

When this occurs there are two possibilities: resolution can be increased or decreased. In the first case it is very likely that the current buffers are too small to fit the decoded frames. In the latter there is the choice to use the existing buffers or to allocate a new set of buffers with reduced size. The same applies to the number of buffers - it can be decreased or increased.

On the OUTPUT queue there is not much to be done. A buffer that contains a frame with the new resolution will not be dequeued until it is fully processed.

On the CAPTURE queue the application has to be notified about the resolution change. The idea proposed during the chat is to introduce a new flag V4L2_BUF_FLAG_WRONGFORMAT.


IMO, a bad name. I would call it as V4L2_BUF_FLAG_FORMATCHANGED.


The alternative is to return a specific error code to the user --- the frame
would not be decoded in either case. See below.



1) After all the frames with the old resolution are dequeued a buffer with the following flags V4L2_BUF_FLAG_ERROR | V4L2_BUF_FLAG_WRONGFORMAT is returned.
2) To acknowledge the resolution change the application should STREAMOFF, check what has changed and then STREAMON.


I don't think this is needed, if the buffer size is enough to support the new format.


Sometimes not, but sometimes there are buffer line alignment requirements which must be communicated to the driver using S_FMT. If the frames are decoded using a driver-decided format, it might be impossible to actually use these decoded frames.
That's why there's streamoff and streamon.


Also, if memory use is our key consideration then the application might want
to allocate smaller buffers.


OK, but the API should support both cases.


Btw, a few drivers (bttv comes to my mind) properly handle format changes.
This was there in order to support a bad behavior found in a few V4L1 applications, where the applications were first calling STREAMON and then setting the buffer.

The buffers do not have a format, the video device queue has. If the format changes during streaming it is impossible to find that out using the current API.


If I'm not mistaken, the old vlc V4L1 driver used to do that.

What bttv used to do is to allocate a buffer big enough to support the max resolution.
So, any format changes (size increase or decrease) would fit into the allocated buffers.

Depending on how applications want to handle format changes, and how big the amount of memory on the system is, a similar approach may be done with CREATE_BUFFERS: just allocate enough space for the maximum supported resolution for that stream, and let the resolution change as required.


I'm not fully certain it is always possible to find out the largest stream
resolution. I'd like an answer from someone knowing more about video codecs
than I do.


That is one thing. Also, I don't think that allocating N buffers of 1920x1080 size each up front is a good idea. In embedded systems memory can be scarce (although recently this is changing and we see smart phones with 1 GB of RAM). It is better to allow the application to use the extra memory when possible; if the memory is required by the hardware, it can be reclaimed.


It depends on how much memory you have at the device. APIs should be designed to allow multiple use cases. I'm sure that a dedicated system (either embedded or not) meant only for streaming video will need to have enough memory to work with the worst case. If there is a requirement for such a server not to stop streaming when the resolution changes, the right thing to do is to allocate N buffers of 1920x1080.

Also, as you've said, even smart phones can now have multiple cores, GBs of RAM and, soon enough, likely 64-bit kernels.

Let's not limit the API due to a current constraint that may not be true in the near future.

RE: [RFC] Resolution change support in video codecs in v4l2

2011-12-02 Thread Kamil Debski
Hi,

Thank you for your comments, Mauro!

Laurent, there is a question for you below, so it would be great if you could spare a minute and have a look.

 From: Mauro Carvalho Chehab [mailto:mche...@redhat.com]
 Sent: 02 December 2011 18:08
 
 On 02-12-2011 13:41, Kamil Debski wrote:
  Hi,
 
  From: Sakari Ailus [mailto:sakari.ai...@iki.fi]
  Sent: 02 December 2011 14:58
 
  Hi Mauro,
 
  On Fri, Dec 02, 2011 at 10:35:40AM -0200, Mauro Carvalho Chehab wrote:
  On 02-12-2011 08:31, Kamil Debski wrote:
  Hi,
 
  Yesterday we had a chat about video codecs in V4L2 and how to change
 the
  interface to accommodate the needs of GStreamer (and possibly other
 media
  players and applications using video codecs).
 
  The problem that many hardware codecs need a fixed number of pre-
  allocated
  buffers should be resolved when gstreamer 0.11 will be released.
 
  The main issue that we are facing is the resolution change and how it
  should be
  handled by the driver and the application. The resolution change is
  particularly common in digital TV. It occurs when content on a single
  channel
  is changed. For example there is the transition from a TV series to a
  commercials block. Other stream parameters may also change. The minimum
  number
  of buffers required for decoding is of particular interest to us. It is
  connected with how many old buffers are used for picture prediction.
 
  When this occurs there are two possibilities: resolution can be
 increased
  or
  decreased. In the first case it is very likely that the current buffers
  are too
  small to fit the decoded frames. In the latter there is the choice to
 use
  the
  existing buffers or to allocate a new set of buffers with reduced size. The same
  applies
  to the number of buffers - it can be decreased or increased.
 
  On the OUTPUT queue there is not much to be done. A buffer that
 contains
  a
  frame with the new resolution will not be dequeued until it is fully
  processed.
 
  On the CAPTURE queue the application has to be notified about the
  resolution
  change.  The idea proposed during the chat is to introduce a new flag
  V4L2_BUF_FLAG_WRONGFORMAT.
 
  IMO, a bad name. I would call it as V4L2_BUF_FLAG_FORMATCHANGED.
 
  The alternative is to return a specific error code to the user --- the
 frame
  would not be decoded in either case. See below.
 
 
  1) After all the frames with the old resolution are dequeued a buffer
  with the
  following flags V4L2_BUF_FLAG_ERROR | V4L2_BUF_FLAG_WRONGFORMAT is
  returned.
  2) To acknowledge the resolution change the application should
 STREAMOFF,
  check
  what has changed and then STREAMON.
 
  I don't think this is needed, if the buffer size is enough to support
 the
  new
  format.
 
  Sometimes not, but sometimes there are buffer line alignment requirements
  which must be communicated to the driver using S_FMT. If the frames are
  decoded using a driver-decided format, it might be impossible to actually
  use these decoded frames.
  That's why there's streamoff and streamon.
 
  Also, if memory use is our key consideration then the application might
 want
  to allocate smaller buffers.
 
 OK, but the API should support both cases.
 
   Btw, a few drivers (bttv comes to my mind) properly handle format
   changes.
   This was there in order to support a bad behavior found in a few V4L1
   applications,
  where the applications were first calling STREAMON and then setting the
  buffer.
 
  The buffers do not have a format, the video device queue has. If the
 format
  changes during streaming it is impossible to find that out using the
 current
  API.
 
  If I'm not mistaken, the old vlc V4L1 driver used to do that.
 
  What bttv used to do is to allocate a buffer big enough to support the
 max
  resolution.
  So, any format changes (size increase or decrease) would fit into the
  allocated
  buffers.
 
  Depending on how applications want to handle format changes, and how big
  is the
  amount of memory on the system, a similar approach may be done with
  CREATE_BUFFERS:
  just allocate enough space for the maximum supported resolution for that
  stream,
  and let the resolution changes, as required.
 
  I'm not fully certain it is always possible to find out the largest
 stream
  resolution. I'd like an answer from someone knowing more about video
 codecs
  than I do.
 
  That is one thing. Also, I don't think that allocating N buffers each of
  1920x1080 size up front is a good idea. In embedded systems the memory can
  be scarce (although recently this is changing and we see smart phones with
   1 GB of RAM). It is better to allow the application to use the extra memory when
   possible; if the memory is required by the hardware, then it can be
   reclaimed.
 
  It depends on how much memory you have at the device. APIs should be designed
  to allow multiple use cases. I'm sure that a dedicated system (either embedded
  or not) meant only for streaming video will need to have enough memory to
  work with 

Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-02 Thread Mauro Carvalho Chehab

On 02-12-2011 15:32, Kamil Debski wrote:

Usually there is a minimum number of buffers that has to be kept for future
references. New frames reference previous frames (and sometimes the following
frames as well) to achieve better compression. If we haven't got enough buffers
decoding cannot be done.


OK, but changing the resolution won't affect the number of buffers needed for inter-frame interpolation.


Changing resolution alone will not affect it, but changing the resolution is very often associated with other changes in the stream. Imagine that there are two streams - one 720p that needs 4 buffers and a second, 1080p, that needs 6. When the change happens, both the resolution and the minimum number of buffers increase.


OK. In this case, if just 4 buffers were pre-allocated, driver will need to
stop streaming (or the application should start with 6 buffers, if it needs
to support such changes without stopping the stream).
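
As a concrete illustration of this 720p/1080p example, an application choosing to start with enough buffers could request them like this; the two extra buffers for its own display queue are an assumption, not something the thread specifies:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Request enough capture buffers up front to survive a switch from a
 * stream needing 4 buffers to one needing 6. */
static int request_capture_buffers(int fd)
{
	unsigned int codec_min_720p  = 4;
	unsigned int codec_min_1080p = 6;
	unsigned int app_extra       = 2;	/* assumed application headroom */
	struct v4l2_requestbuffers req;

	memset(&req, 0, sizeof(req));
	req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req.memory = V4L2_MEMORY_MMAP;
	req.count  = (codec_min_1080p > codec_min_720p ?
		      codec_min_1080p : codec_min_720p) + app_extra;

	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
		return -1;

	/* The driver may adjust req.count; the application must check it. */
	return req.count;
}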


As I said before, a dequeued buffer is assumed to be a buffer that the kernel won't use anymore. If the kernel still needs it, just don't dequeue it yet.
Anything different from that may cause memory corruption, cache coherency issues, etc.


I agree. This flag could be used as a hint when the display delay is enforced.
On the other hand - when the application requests an arbitrary display delay
then it should be aware that modifying those buffers is risky.


Looking at the issue from the other side, I can't see what the application would
do with such a hint.


The display delay is the number of buffers/frames that the codec processes before returning the first buffer. For example, if it is set to 0 then it returns the buffers as soon as possible, regardless of their order or of whether they are still used as reference.
This functionality can be used to create thumbnail images of movies in a gallery (by decoding a single frame from the beginning of the movie).
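
A hedged sketch of that thumbnail use case: there is no core V4L2 control for the display delay, so the control ID below is a placeholder for whatever driver-specific control exposes this knob, and decode_one_frame() is a hypothetical helper that feeds the bitstream and dequeues a single capture buffer:

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Placeholder for a driver-specific "display delay" control. */
#define CID_DECODER_DISPLAY_DELAY_EXAMPLE (V4L2_CID_PRIVATE_BASE + 0)

extern int decode_one_frame(int fd);	/* hypothetical helper */

static int grab_thumbnail(int fd)
{
	struct v4l2_control ctrl = {
		.id    = CID_DECODER_DISPLAY_DELAY_EXAMPLE,
		.value = 0,	/* return buffers as soon as they are decoded */
	};

	if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
		return -1;

	return decode_one_frame(fd);
}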


A 0-delay buffer like the one described above would probably mean that the userspace application won't touch it anyway, as it needs that buffer as soon as possible.

A single notice at the API spec should be enough to cover this case.


The minimal number of buffers is more related to latency issues and processing speed at userspace than to any driver or format-dependent hardware constraints.

On the other hand, the maximum number of buffers might eventually have some constraint, e.g. hardware might support fewer buffers if the resolution is too high.

I prefer not to add anything to the V4L2 API with regards to changes at the max/min number of buffers, except if we actually have any limitation at the supported hardware. In that case, it will likely need a separate flag, to indicate to userspace that buffer constraints have changed, and that audio buffers will also need to be re-negotiated, in order to preserve A/V sync.


I think that boils down to the properties of the codec and possibly also the stream.


If a timestamp or sequence number is used then I don't see why we should renegotiate audio buffers. Am I wrong? The number of audio and video buffers does not need to be correlated.


Currently, alsa doesn't put a timestamp on the buffers. OK, this is something that requires fixes there.


Oops, I did not know this. If so, then we should think about a method to synchronize audio and video.


Yes. Someone needs to come up with some patches for alsa. This seems to be
the right thing to do.
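
For illustration, the video side already carries what is needed: each dequeued v4l2_buffer has a timestamp that a player can compare against its audio clock. get_audio_clock_us() below is a hypothetical function returning the audio playback position:

#include <linux/videodev2.h>

extern long long get_audio_clock_us(void);	/* hypothetical audio clock */

/* Returns non-zero when the dequeued capture buffer is due for display. */
static int frame_is_due(const struct v4l2_buffer *buf)
{
	long long video_us = (long long)buf->timestamp.tv_sec * 1000000 +
			     buf->timestamp.tv_usec;

	return video_us <= get_audio_clock_us();
}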


5) After the application does STREAMON the processing should continue. Old buffers can still be used by the application (as CREATE_BUFS was used), but they should not be queued (an error shall be returned in this case). After the application is finished with the old buffers it should free them with DESTROY_BUFS.


If the buffers are bigger, there's no issue on not allowing queuing them. Enforcing it will likely break drivers and eventually applications.


I think this means buffers that are too small for the new format. They are no longer needed after they have been displayed --- remember there must also be no interruption in displaying the video.


Yes, I have meant the buffers that are too small.




Best wishes,


Regards,
Mauro.


Re: [RFC] Resolution change support in video codecs in v4l2

2011-12-02 Thread 'Sakari Ailus'
Hi Mauro,

On Fri, Dec 02, 2011 at 03:07:44PM -0200, Mauro Carvalho Chehab wrote:
 I'm not fully certain it is always possible to find out the largest stream
 resolution. I'd like an answer from someone knowing more about video codecs
 than I do.
 
 That is one thing. Also, I don't think that allocating N buffers each of
 1920x1080 size up front is a good idea. In embedded systems the memory can
 be scarce (although recently this is changing and we see smart phones with
 1 GB of RAM). It is better to allow the application to use the extra memory when
 possible; if the memory is required by the hardware, then it can be reclaimed.
 
 It depends on how much memory you have at the device. APIs should be designed
 to allow multiple use cases. I'm sure that a dedicated system (either embedded
 or not) meant only for streaming video will need to have enough memory to
 work with the worst case. If there is a requirement for such a server not to
 stop streaming when the resolution changes, the right thing to do is to allocate
 N buffers of 1920x1080.
 
 Also, as you've said, even smart phones can now have
 multiple cores, GBs of RAM and, soon enough, likely 64-bit kernels.

Some devices may, but they tend to be high-end devices. Others are tighter on memory, and even if there is plenty, one can seldom just go and waste it.
As you also said, we must take different use cases into account.

 Let's not limit the API due to a current constraint that may not be true in the
 near future.
 
 What I'm saying is that it should be an option for the driver to require
 STREAMOFF in order to change buffers size, and not a mandatory requirement.

Let's assume the user does not wish the streaming to be stopped at a format change if the buffers are big enough for the new format. The user does get a buffer telling that the format has changed, and queries the new format using G_FMT. In between the two IOCTLs time has passed and the format may have changed again. How would we avoid that from happening, unless we stop the stream?

The underlying root cause for the problem is that the format is not bound to
buffers.

I also do not see it as a problem to require stream stop and start. Changing resolution during streaming is anyway something any current application likely hasn't prepared for, so we are not breaking anything. Quite the contrary, actually: applications not knowing the flag would only be able to dequeue junk after receiving it the first time.

...

 The user space still wants to be able to show these buffers, so a new flag
 would likely be required --- V4L2_BUF_FLAG_READ_ONLY, for example.
 
 Currently it is done in the following way. On the CAPTURE side you have a
 total of N buffers. Out of them K are necessary for decoding (K = 1 + L).
 L is the number of buffers necessary for reference lookup and the single
 buffer is required as the destination for the new frame. If fewer than K
 buffers are queued then no processing is done. It should be OK for the
 application to change buffers that have been dequeued. However, if you
 request some arbitrary display delay you may get buffers that could still
 be used as reference. Thus I agree with Sakari that the
 V4L2_BUF_FLAG_READ_ONLY flag should be introduced.
 
 However I see one problem with such a flag. Let's assume that we dequeue a
 buffer. It is still needed as reference, thus it has the READ_ONLY flag
 set. Then we dequeue another buffer. Ditto for that buffer. But after we
 have dequeued the second buffer the first can be modified. How to handle 
 this?
 
 This flag could be used as a hint for the application saying that it is risky
 to modify those buffers.
 
 As I said before, a dequeued buffer is assumed to be a buffer that the kernel
 won't use anymore. If the kernel still needs it, just don't dequeue it yet.
 Anything different from that may cause memory corruption, cache coherency
 issues, etc.

If we don't dequeue, there will be a pause in the video being played on a TV. This is highly undesirable. The flag is simply telling the user that the buffer is still being used by the hardware, but only for read access.

Certain other interfaces support this kind of behaviour, which is specific
to codec devices.
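
A sketch of the dequeue handling being argued for here: the V4L2_BUF_FLAG_READ_ONLY define is only the proposal from this mail with a placeholder value, and display_frame() is a hypothetical helper. The point is that the application may display such a buffer but must not write into it:

#include <sys/ioctl.h>
#include <linux/videodev2.h>

#ifndef V4L2_BUF_FLAG_READ_ONLY
#define V4L2_BUF_FLAG_READ_ONLY 0x02000000	/* proposed flag, placeholder value */
#endif

extern void display_frame(const struct v4l2_buffer *buf);	/* hypothetical */

/* Dequeue a decoded frame; if the hardware still reads it as a reference
 * frame, show it but do not modify or reuse its memory yet. */
static int dequeue_and_show(int fd)
{
	struct v4l2_buffer buf = {
		.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE,
		.memory = V4L2_MEMORY_MMAP,
	};

	if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
		return -1;

	display_frame(&buf);

	if (!(buf.flags & V4L2_BUF_FLAG_READ_ONLY)) {
		/* Safe to touch the buffer or queue it back right away;
		 * read-only buffers are held and queued back later. */
		ioctl(fd, VIDIOC_QBUF, &buf);
	}

	return 0;
}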

-- 
Sakari Ailus
e-mail: sakari.ai...@iki.fi jabber/XMPP/Gmail: sai...@retiisi.org.uk