Re: [Libva] (H264) Reference Picture selection
On Wed, 2014-05-14 at 03:50 -0600, Sreerenj wrote: > On 14.05.2014 10:51, Xiang, Haihao wrote: > > On Mon, 2014-05-12 at 07:34 +0200, Gwenole Beauchesne wrote: > >> 2014-05-12 7:29 GMT+02:00 Zhao, Yakui : > >>> On Sun, 2014-05-11 at 22:41 -0600, Gwenole Beauchesne wrote: > Hi, > > 2014-05-12 3:34 GMT+02:00 Zhao Yakui : > > On Fri, 2014-05-09 at 03:06 -0600, Yuan, Feng wrote: > >> Hi, > >> > > I have a scheme where the decoder ack's received frames - With this > > information I could resync without an IDR by carefully selecting the > > reference frame(s). > Will you please describe your idea in detail? > > long term references would typically be employed as well, but it > > seems ref-pic-marking was removed may'13? > Why do you hope to use the long-term reference? As you know, the > current encoding is based on hardware-acceleration. When more the > reference frames can be selected, the driver will be more complex and > then the encoding speed is also affected. > >>> It's for implementing Cisco's clearpath technology > >>> (http://www.cisco.com/c/dam/en/us/td/docs/telepresence/endpoint/softwa > >>> re/clearpath/clearpath_whitepaper.pdf) > >>> Basically one would periodically mark a P frame as long term > >>> reference, in > >>> case of packet loss one can thus refer to this one instead of > >>> restarting the > >>> stream with an idr. > >>> > >>> > >>> I'm not too concerned about the encoding speed as long as it's above > >>> realtime (60fps 1920x1080) > >> Maybe it's more convenient If libva can have a switch for slice_header > >> made between driver and app. Then some specially cases(ref-reorder, > >> long-term-ref) may let app to manage slice-header. > > Thanks for your suggestion. If the slice-header info is also generated > > by the app and then be passed to the driver, it can be easy to do the > > operation of "re-order and long-term-ref" > > > > But this will depend on the policy how the slice header info is > > generated. (by app or the driver). 
Now this is exported by querying the > > capability of the driver. If we move this into the app, the > > infrastructure will be changed a lot. > Infrastructure of what? The application? No, it doesn't really change > anything but generating the extra packed slice header. > >>> Currently the app will query the capability of encoding and determine > >>> which kind of header should be passed by the upper application (for > >>> example: PPS/SPS). > >>> If the Slice_header info is exposed, the driver will assume that it is > >>> the responsibility of generating the slice header info. In such case it > >>> is difficult to mix the two working modes without a lot of changes. (One > >>> is that the driver generates the slice header info. The other is that the > >>> app generates the slice header info.) > >> Then the driver needs to be fixed to not assume anything. The API and > >> operation points look clear enough, don't they? > >> > >> The changes to the codec layer are minimal, and this was done already, > >> in multiple instances of codec layers, and with another driver. > > Where is your codec-layer code with packed slice header/packed raw > > data? I want to know the point where the packed slice header / packed raw > > data is inserted into the bitstream in the codec layer. > I have these patches for the codec layer (gstreamer-vaapi) but they are not yet > integrated into gstreamer-vaapi master. > Submitting all packed_headers in the following order just after the > submission of VAEncSequenceParameterBuffer to vaRenderPicture(): > "packed_sps, packed_pps, packed_slice". > And then submitting MiscParam, PictureParam and SliceParam. Is that not > enough? It is enough in the single-slice case. But it will still change where the slice header is generated. The current design is based on the following points: 1. The driver exposes the SPS/PPS/MISC headers, and the user-space app is responsible for generating the corresponding packed data. 2. The driver is responsible for generating the slice header.
If the generation of the slice header is moved from the driver to the user-space app, the driver needs a lot of changes and some working cases will be affected. Another issue is the complexity of handling slice-header generation in the multi-slice case. > > Anyway the Intel driver seems to be using fixed-size arrays for > referencing packed headers (packed_header_param[4] and > packed_header_data[4]), which need to be changed if there are > multiple slices per frame, since we have to submit packed headers for > each slice. Isn't it? > Yes. If more packed data is passed, the driver needs to be changed. > > > > >> > > Another problem is that it is possible that one frame has multiple > > slices. In such case the user should pass the multiple slice header > > info.
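The buffer-submission order described in this exchange can be sketched in C. The struct and enum below are a local mirror of libva's VAEncPackedHeaderParameterBuffer / VAEncPackedHeaderType (so the example stays self-contained without va.h); treat the names and values as illustrative, and use the real definitions from va.h in actual code, with the buffers created via vaCreateBuffer() and submitted via vaRenderPicture().

```c
#include <stddef.h>

/* Local mirror of libva's VAEncPackedHeaderType and
 * VAEncPackedHeaderParameterBuffer -- illustrative only. */
typedef enum {
    PackedHeaderSequence = 1,   /* packed SPS          */
    PackedHeaderPicture  = 2,   /* packed PPS          */
    PackedHeaderSlice    = 3    /* packed slice header */
} PackedHeaderType;

typedef struct {
    PackedHeaderType type;
    unsigned int bit_length;           /* length of the packed data in bits   */
    unsigned char has_emulation_bytes; /* nonzero if 0x03 escapes are present */
} PackedHeaderParam;

/* Wrap one pre-built NAL unit for submission as a packed header.
 * Submission order per the thread: VAEncSequenceParameterBuffer first,
 * then packed_sps, packed_pps, packed_slice, then the misc/picture/slice
 * parameter buffers, all within the same vaRenderPicture() sequence. */
PackedHeaderParam wrap_packed_header(PackedHeaderType type,
                                     const unsigned char *nal, size_t nal_len,
                                     int escaped)
{
    PackedHeaderParam p;
    p.type = type;
    p.bit_length = (unsigned int)(nal_len * 8);
    p.has_emulation_bytes = (unsigned char)(escaped != 0);
    (void)nal; /* real code would pass the bytes in the paired data buffer */
    return p;
}
```

In real libva usage each packed header is two buffers: a VAEncPackedHeaderParameterBufferType buffer carrying the metadata above, immediately followed by a VAEncPackedHeaderDataBufferType buffer carrying the NAL bytes.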
Re: [Libva] (H264) Reference Picture selection
On 14.05.2014 10:51, Xiang, Haihao wrote: On Mon, 2014-05-12 at 07:34 +0200, Gwenole Beauchesne wrote: 2014-05-12 7:29 GMT+02:00 Zhao, Yakui : On Sun, 2014-05-11 at 22:41 -0600, Gwenole Beauchesne wrote: Hi, 2014-05-12 3:34 GMT+02:00 Zhao Yakui : On Fri, 2014-05-09 at 03:06 -0600, Yuan, Feng wrote: Hi, I have a scheme where the decoder ack's received frames - With this information I could resync without an IDR by carefully selecting the reference frame(s). Will you please describe your idea in detail? long term references would typically be employed as well, but it seems ref-pic-marking was removed may'13? Why do you hope to use the long-term reference? As you know, the current encoding is based on hardware-acceleration. When more the reference frames can be selected, the driver will be more complex and then the encoding speed is also affected. It's for implementing Cisco's clearpath technology (http://www.cisco.com/c/dam/en/us/td/docs/telepresence/endpoint/softwa re/clearpath/clearpath_whitepaper.pdf) Basically one would periodically mark a P frame as long term reference, in case of packet loss one can thus refer to this one instead of restarting the stream with an idr. I'm not too concerned about the encoding speed as long as it's above realtime (60fps 1920x1080) Maybe it's more convenient If libva can have a switch for slice_header made between driver and app. Then some specially cases(ref-reorder, long-term-ref) may let app to manage slice-header. Thanks for your suggestion. If the slice-header info is also generated by the app and then be passed to the driver, it can be easy to do the operation of "re-order and long-term-ref" But this will depend on the policy how the slice header info is generated. (by app or the driver). Now this is exported by querying the capability of the driver. If we move this into the app, the infrastructure will be changed a lot. Infrastructure of what? The application? 
No, it doesn't really change anything but generating the extra packed slice header. Currently the app will query the capability of encoding and determine which kind of header should be passed by the upper application (for example: PPS/SPS). If the Slice_header info is exposed, the driver will assume that it is the responsibility of generating the slice header info. In such case it is difficult to mix the two working modes without a lot of changes. (One is that the driver generates the slice header info. The other is that the app generates the slice header info.) Then the driver needs to be fixed to not assume anything. The API and operation points look clear enough, don't they? The changes to the codec layer are minimal, and this was done already, in multiple instances of codec layers, and with another driver. Where is your codec-layer code with packed slice header/packed raw data? I want to know the point where the packed slice header / packed raw data is inserted into the bitstream in the codec layer. I have these patches for the codec layer (gstreamer-vaapi) but they are not yet integrated into gstreamer-vaapi master. Submitting all packed_headers in the following order just after the submission of VAEncSequenceParameterBuffer to vaRenderPicture(): "packed_sps, packed_pps, packed_slice". And then submitting MiscParam, PictureParam and SliceParam. Is that not enough? Anyway the Intel driver seems to be using fixed-size arrays for referencing packed headers (packed_header_param[4] and packed_header_data[4]), which need to be changed if there are multiple slices per frame, since we have to submit packed headers for each slice. Isn't it? Another problem is that it is possible that one frame has multiple slices. In such case the user should pass the multiple slice header info. This is not an issue. The API mandates that the packed headers are provided in order. Regards, Gwenole.
___ Libva mailing list Libva@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/libva -- Thanks Sree
Re: [Libva] (H264) Reference Picture selection
Hi, 2014-05-14 10:01 GMT+02:00 Xiang, Haihao : > On Wed, 2014-05-14 at 15:49 +0800, Zhao, Yakui wrote: >> On Sun, 2014-05-11 at 23:34 -0600, Gwenole Beauchesne wrote: >> > 2014-05-12 7:29 GMT+02:00 Zhao, Yakui : >> > > On Sun, 2014-05-11 at 22:41 -0600, Gwenole Beauchesne wrote: >> > >> Hi, >> > >> >> > >> 2014-05-12 3:34 GMT+02:00 Zhao Yakui : >> > >> > On Fri, 2014-05-09 at 03:06 -0600, Yuan, Feng wrote: >> > >> >> Hi, >> > >> >> >> > >> >> >>> I have a scheme where the decoder ack's received frames - With >> > >> >> >>> this >> > >> >> >>> information I could resync without an IDR by carefully selecting >> > >> >> >>> the >> > >> >> >>> reference frame(s). >> > >> >> > >> > >> >> >> Will you please describe your idea in detail? >> > >> >> > >> > >> >> >>> long term references would typically be employed as well, but it >> > >> >> >>> seems ref-pic-marking was removed may'13? >> > >> >> > >> > >> >> >> Why do you hope to use the long-term reference? As you know, the >> > >> >> >> current encoding is based on hardware-acceleration. When more the >> > >> >> >> reference frames can be selected, the driver will be more complex >> > >> >> >> and >> > >> >> >> then the encoding speed is also affected. >> > >> >> > >> > >> >> >It's for implementing Cisco's clearpath technology >> > >> >> >(http://www.cisco.com/c/dam/en/us/td/docs/telepresence/endpoint/softwa >> > >> >> >re/clearpath/clearpath_whitepaper.pdf) >> > >> >> >Basically one would periodically mark a P frame as long term >> > >> >> >reference, in >> > >> >> >case of packet loss one can thus refer to this one instead of >> > >> >> >restarting the >> > >> >> >stream with an idr. >> > >> >> > >> > >> >> > >> > >> >> >I'm not too concerned about the encoding speed as long as it's above >> > >> >> >realtime (60fps 1920x1080) >> > >> >> >> > >> >> Maybe it's more convenient If libva can have a switch for >> > >> >> slice_header made between driver and app. 
Then some specially >> > >> >> cases(ref-reorder, long-term-ref) may let app to manage slice-header. >> > >> > >> > >> > Thanks for your suggestion. If the slice-header info is also generated >> > >> > by the app and then be passed to the driver, it can be easy to do the >> > >> > operation of "re-order and long-term-ref" >> > >> > >> > >> > But this will depend on the policy how the slice header info is >> > >> > generated. (by app or the driver). Now this is exported by querying >> > >> > the >> > >> > capability of the driver. If we move this into the app, the >> > >> > infrastructure will be changed a lot. >> > >> >> > >> Infrastructure of what? The application? No, it doesn't really change >> > >> anything but generating the extra packed slice header. >> > > >> > > Currently the app will query the capability of encoding and determine >> > > which kind of header should be passed by the upper application(For >> > > example: PPS/SPS). >> > > If the Slice_header info is exposed, the driver will assume that it is >> > > the responsibility of generating the slice header info. In such case it >> > > is difficult to mix the two working modes without a lot of changes.(One >> > > is that the driver generates the slice header info. Another is that the >> > > app generates the slice header info). >> > >> > Then the driver needs to be fixed to not assume anything. The API and >> > operation points look clear enough, aren't they? >> >> Sorry that something is not described very clearly in my previous email >> because of one typo error. >> >>If the Slice_header info is exposed, app should be responsible of >> generating the slice header info and then passed it into the driver. >>If the slice_header info is not exposed, the driver will be >> responsible of generating the slice header info. >> >> And it is difficult to mix the above two working modes without a lot of >> changes. 
> > Another thing is for CBR: the QP might be changed per frame in the > driver and the app/middleware doesn't know the final QP. How does the app > get the right slice_qp_delta to build the right packed slice header? > Does the app always use a fixed slice_qp_delta? AFAIK, slice_qp_delta does not really matter. You can start off QP = 26 (defaults with pic_init_qp_minus26 = 0 + slice_qp_delta = 0), and the driver can still generate per-MB QP that fits. i.e. you have an additional delta at the macroblock layer IIRC. That should fit most purposes. Regards, Gwenole. >> > The changes to the codec layer are minimal, and this was done already, >> > in multiple instances of codec layers, and with another driver. >> > >> > >> > Another problem is that it is possible that one frame has multiple >> > slices. In such case the user should pass the multiple slice header >> > info. >> >> >> > >> This is not an issue. The API mandates that the packed headers are >> > >> provided in order. >> > > >> > > >> > > >> > >> >> > >> Regards, >> > >> Gwenole. >> > > >> > >
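Gwenole's point can be checked with the arithmetic from the H.264 spec: the slice QP is 26 + pic_init_qp_minus26 (PPS) + slice_qp_delta (slice header), and clause 7.4.5 then applies a per-macroblock mb_qp_delta modulo 52 (for 8-bit video, where QpBdOffsetY = 0). A minimal sketch:

```c
/* Slice-level QP: 26 + pic_init_qp_minus26 (PPS) + slice_qp_delta (slice
 * header). With both deltas 0 the slice starts at QP 26. */
int h264_slice_qp(int pic_init_qp_minus26, int slice_qp_delta)
{
    return 26 + pic_init_qp_minus26 + slice_qp_delta;
}

/* Macroblock-level QP update (H.264 clause 7.4.5, 8-bit case): the new QP
 * is (previous QP + mb_qp_delta + 52) % 52, so the driver's rate control
 * can reach any QP in 0..51 regardless of the slice_qp_delta written in
 * the app's packed slice header -- which is why a fixed slice_qp_delta
 * still works under CBR. */
int h264_mb_qp(int qp_prev, int mb_qp_delta)
{
    return (qp_prev + mb_qp_delta + 52) % 52;
}
```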
Re: [Libva] (H264) Reference Picture selection
On Wed, 2014-05-14 at 15:49 +0800, Zhao, Yakui wrote: > On Sun, 2014-05-11 at 23:34 -0600, Gwenole Beauchesne wrote: > > 2014-05-12 7:29 GMT+02:00 Zhao, Yakui : > > > On Sun, 2014-05-11 at 22:41 -0600, Gwenole Beauchesne wrote: > > >> Hi, > > >> > > >> 2014-05-12 3:34 GMT+02:00 Zhao Yakui : > > >> > On Fri, 2014-05-09 at 03:06 -0600, Yuan, Feng wrote: > > >> >> Hi, > > >> >> > > >> >> >>> I have a scheme where the decoder ack's received frames - With > > >> >> >>> this > > >> >> >>> information I could resync without an IDR by carefully selecting > > >> >> >>> the > > >> >> >>> reference frame(s). > > >> >> > > > >> >> >> Will you please describe your idea in detail? > > >> >> > > > >> >> >>> long term references would typically be employed as well, but it > > >> >> >>> seems ref-pic-marking was removed may'13? > > >> >> > > > >> >> >> Why do you hope to use the long-term reference? As you know, the > > >> >> >> current encoding is based on hardware-acceleration. When more the > > >> >> >> reference frames can be selected, the driver will be more complex > > >> >> >> and > > >> >> >> then the encoding speed is also affected. > > >> >> > > > >> >> >It's for implementing Cisco's clearpath technology > > >> >> >(http://www.cisco.com/c/dam/en/us/td/docs/telepresence/endpoint/softwa > > >> >> >re/clearpath/clearpath_whitepaper.pdf) > > >> >> >Basically one would periodically mark a P frame as long term > > >> >> >reference, in > > >> >> >case of packet loss one can thus refer to this one instead of > > >> >> >restarting the > > >> >> >stream with an idr. > > >> >> > > > >> >> > > > >> >> >I'm not too concerned about the encoding speed as long as it's above > > >> >> >realtime (60fps 1920x1080) > > >> >> > > >> >> Maybe it's more convenient If libva can have a switch for > > >> >> slice_header made between driver and app. Then some specially > > >> >> cases(ref-reorder, long-term-ref) may let app to manage slice-header. > > >> > > > >> > Thanks for your suggestion. 
If the slice-header info is also generated > > >> > by the app and then passed to the driver, it will be easy to do the > > >> > operation of "re-order and long-term-ref" > > >> > > > >> > But this will depend on the policy of how the slice header info is > > >> > generated (by the app or the driver). Now this is exported by querying the > > >> > capability of the driver. If we move this into the app, the > > >> > infrastructure will be changed a lot. > > >> > > >> Infrastructure of what? The application? No, it doesn't really change > > >> anything but generating the extra packed slice header. > > > > > > Currently the app will query the capability of encoding and determine > > > which kind of header should be passed by the upper application (for > > > example: PPS/SPS). > > > If the Slice_header info is exposed, the driver will assume that it is > > > the responsibility of generating the slice header info. In such case it > > > is difficult to mix the two working modes without a lot of changes. (One > > > is that the driver generates the slice header info. The other is that the > > > app generates the slice header info.) > > > > Then the driver needs to be fixed to not assume anything. The API and > > operation points look clear enough, don't they? > > Sorry, something was not described clearly in my previous email > because of a typo. > > If the Slice_header info is exposed, the app should be responsible for > generating the slice header info and then pass it to the driver. > If the slice_header info is not exposed, the driver will be > responsible for generating the slice header info. > > And it is difficult to mix the above two working modes without a lot of > changes. Another thing is for CBR: the QP might be changed per frame in the driver and the app/middleware doesn't know the final QP. How does the app get the right slice_qp_delta to build the right packed slice header? Does the app always use a fixed slice_qp_delta?
> > > > > The changes to the codec layer are minimal, and this was done already, > > in multiple instances of codec layers, and with another driver. > > > > >> > Another problem is that it is possible that one frame has multiple > > >> > slices. In such case the user should pass the multiple slice header > > >> > info. > > >> > > >> This is not an issue. The API mandates that the packed headers are > > >> provided in order. > > > > > > > > > > > >> > > >> Regards, > > >> Gwenole.
Re: [Libva] (H264) Reference Picture selection
On Mon, 2014-05-12 at 07:34 +0200, Gwenole Beauchesne wrote: > 2014-05-12 7:29 GMT+02:00 Zhao, Yakui : > > On Sun, 2014-05-11 at 22:41 -0600, Gwenole Beauchesne wrote: > >> Hi, > >> > >> 2014-05-12 3:34 GMT+02:00 Zhao Yakui : > >> > On Fri, 2014-05-09 at 03:06 -0600, Yuan, Feng wrote: > >> >> Hi, > >> >> > >> >> >>> I have a scheme where the decoder ack's received frames - With this > >> >> >>> information I could resync without an IDR by carefully selecting the > >> >> >>> reference frame(s). > >> >> > > >> >> >> Will you please describe your idea in detail? > >> >> > > >> >> >>> long term references would typically be employed as well, but it > >> >> >>> seems ref-pic-marking was removed may'13? > >> >> > > >> >> >> Why do you hope to use the long-term reference? As you know, the > >> >> >> current encoding is based on hardware-acceleration. When more the > >> >> >> reference frames can be selected, the driver will be more complex and > >> >> >> then the encoding speed is also affected. > >> >> > > >> >> >It's for implementing Cisco's clearpath technology > >> >> >(http://www.cisco.com/c/dam/en/us/td/docs/telepresence/endpoint/softwa > >> >> >re/clearpath/clearpath_whitepaper.pdf) > >> >> >Basically one would periodically mark a P frame as long term > >> >> >reference, in > >> >> >case of packet loss one can thus refer to this one instead of > >> >> >restarting the > >> >> >stream with an idr. > >> >> > > >> >> > > >> >> >I'm not too concerned about the encoding speed as long as it's above > >> >> >realtime (60fps 1920x1080) > >> >> > >> >> Maybe it's more convenient If libva can have a switch for slice_header > >> >> made between driver and app. Then some specially cases(ref-reorder, > >> >> long-term-ref) may let app to manage slice-header. > >> > > >> > Thanks for your suggestion. 
If the slice-header info is also generated > >> > by the app and then passed to the driver, it will be easy to do the > >> > operation of "re-order and long-term-ref" > >> > > >> > But this will depend on the policy of how the slice header info is > >> > generated (by the app or the driver). Now this is exported by querying the > >> > capability of the driver. If we move this into the app, the > >> > infrastructure will be changed a lot. > >> > >> Infrastructure of what? The application? No, it doesn't really change > >> anything but generating the extra packed slice header. > > > > Currently the app will query the capability of encoding and determine > > which kind of header should be passed by the upper application (for > > example: PPS/SPS). > > If the Slice_header info is exposed, the driver will assume that it is > > the responsibility of generating the slice header info. In such case it > > is difficult to mix the two working modes without a lot of changes. (One > > is that the driver generates the slice header info. The other is that the > > app generates the slice header info.) > > Then the driver needs to be fixed to not assume anything. The API and > operation points look clear enough, don't they? > > The changes to the codec layer are minimal, and this was done already, > in multiple instances of codec layers, and with another driver. Where is your codec-layer code with packed slice header/packed raw data? I want to know the point where the packed slice header / packed raw data is inserted into the bitstream in the codec layer. > > > >> > Another problem is that it is possible that one frame has multiple > >> > slices. In such case the user should pass the multiple slice header > >> > info. > >> > >> This is not an issue. The API mandates that the packed headers are > >> provided in order. > > > > > >> > >> Regards, > >> Gwenole.
Re: [Libva] (H264) Reference Picture selection
On Sun, 2014-05-11 at 23:34 -0600, Gwenole Beauchesne wrote: > 2014-05-12 7:29 GMT+02:00 Zhao, Yakui : > > On Sun, 2014-05-11 at 22:41 -0600, Gwenole Beauchesne wrote: > >> Hi, > >> > >> 2014-05-12 3:34 GMT+02:00 Zhao Yakui : > >> > On Fri, 2014-05-09 at 03:06 -0600, Yuan, Feng wrote: > >> >> Hi, > >> >> > >> >> >>> I have a scheme where the decoder ack's received frames - With this > >> >> >>> information I could resync without an IDR by carefully selecting the > >> >> >>> reference frame(s). > >> >> > > >> >> >> Will you please describe your idea in detail? > >> >> > > >> >> >>> long term references would typically be employed as well, but it > >> >> >>> seems ref-pic-marking was removed may'13? > >> >> > > >> >> >> Why do you hope to use the long-term reference? As you know, the > >> >> >> current encoding is based on hardware-acceleration. When more the > >> >> >> reference frames can be selected, the driver will be more complex and > >> >> >> then the encoding speed is also affected. > >> >> > > >> >> >It's for implementing Cisco's clearpath technology > >> >> >(http://www.cisco.com/c/dam/en/us/td/docs/telepresence/endpoint/softwa > >> >> >re/clearpath/clearpath_whitepaper.pdf) > >> >> >Basically one would periodically mark a P frame as long term > >> >> >reference, in > >> >> >case of packet loss one can thus refer to this one instead of > >> >> >restarting the > >> >> >stream with an idr. > >> >> > > >> >> > > >> >> >I'm not too concerned about the encoding speed as long as it's above > >> >> >realtime (60fps 1920x1080) > >> >> > >> >> Maybe it's more convenient If libva can have a switch for slice_header > >> >> made between driver and app. Then some specially cases(ref-reorder, > >> >> long-term-ref) may let app to manage slice-header. > >> > > >> > Thanks for your suggestion. 
If the slice-header info is also generated > >> > by the app and then passed to the driver, it will be easy to do the > >> > operation of "re-order and long-term-ref" > >> > > >> > But this will depend on the policy of how the slice header info is > >> > generated (by the app or the driver). Now this is exported by querying the > >> > capability of the driver. If we move this into the app, the > >> > infrastructure will be changed a lot. > >> > >> Infrastructure of what? The application? No, it doesn't really change > >> anything but generating the extra packed slice header. > > > > Currently the app will query the capability of encoding and determine > > which kind of header should be passed by the upper application (for > > example: PPS/SPS). > > If the Slice_header info is exposed, the driver will assume that it is > > the responsibility of generating the slice header info. In such case it > > is difficult to mix the two working modes without a lot of changes. (One > > is that the driver generates the slice header info. The other is that the > > app generates the slice header info.) > > Then the driver needs to be fixed to not assume anything. The API and > operation points look clear enough, don't they? Sorry, something was not described clearly in my previous email because of a typo. If the Slice_header info is exposed, the app should be responsible for generating the slice header info and then pass it to the driver. If the slice_header info is not exposed, the driver will be responsible for generating the slice header info. And it is difficult to mix the above two working modes without a lot of changes. > > The changes to the codec layer are minimal, and this was done already, > in multiple instances of codec layers, and with another driver. > > >> > Another problem is that it is possible that one frame has multiple > >> > slices. In such case the user should pass the multiple slice header > >> > info. > >> > >> This is not an issue.
The API mandates that the packed headers are > >> provided in order. > > > > > > > >> > >> Regards, > >> Gwenole.
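If the app does take over slice-header generation (the "exposed" mode discussed in this message), it has to serialize the ue(v)/se(v) Exp-Golomb syntax elements of the slice header (first_mb_in_slice, slice_type, the reordering and marking commands, slice_qp_delta, ...) itself. A minimal bit-writer sketch, with no buffer bounds checking and no emulation-prevention (0x03) escaping:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint8_t *buf;     /* caller-provided, zero-initialized buffer */
    size_t   bit_pos; /* next bit to write, from the start, MSB-first */
} BitWriter;

static void put_bit(BitWriter *w, int bit)
{
    size_t byte = w->bit_pos >> 3;
    int    off  = 7 - (int)(w->bit_pos & 7);
    if (bit)
        w->buf[byte] |= (uint8_t)(1u << off);
    w->bit_pos++;
}

static void put_bits(BitWriter *w, uint32_t val, int n)
{
    for (int i = n - 1; i >= 0; i--)
        put_bit(w, (int)((val >> i) & 1u));
}

/* ue(v), H.264 clause 9.1: leading zeros, then (value + 1) in binary. */
void put_ue(BitWriter *w, uint32_t val)
{
    uint32_t v = val + 1;
    int bits = 0;
    for (uint32_t t = v; t; t >>= 1)
        bits++;
    put_bits(w, 0, bits - 1); /* bits-1 leading zeros */
    put_bits(w, v, bits);     /* value + 1            */
}

/* se(v), clause 9.1.1: k > 0 maps to 2k-1, k <= 0 maps to -2k. */
void put_se(BitWriter *w, int32_t val)
{
    uint32_t code = (val > 0) ? (uint32_t)(2 * val - 1) : (uint32_t)(-2 * val);
    put_ue(w, code);
}
```

For example, writing first_mb_in_slice = 0 emits the single bit "1", and slice_type = 1 emits "010"; a real implementation would also insert 0x03 emulation-prevention bytes before handing the NAL to the packed-data buffer (or set has_emulation_bytes = 0 and let the driver do it, where supported).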
Re: [Libva] (H264) Reference Picture selection
2014-05-12 7:29 GMT+02:00 Zhao, Yakui : > On Sun, 2014-05-11 at 22:41 -0600, Gwenole Beauchesne wrote: >> Hi, >> >> 2014-05-12 3:34 GMT+02:00 Zhao Yakui : >> > On Fri, 2014-05-09 at 03:06 -0600, Yuan, Feng wrote: >> >> Hi, >> >> >> >> >>> I have a scheme where the decoder ack's received frames - With this >> >> >>> information I could resync without an IDR by carefully selecting the >> >> >>> reference frame(s). >> >> > >> >> >> Will you please describe your idea in detail? >> >> > >> >> >>> long term references would typically be employed as well, but it >> >> >>> seems ref-pic-marking was removed may'13? >> >> > >> >> >> Why do you hope to use the long-term reference? As you know, the >> >> >> current encoding is based on hardware-acceleration. When more the >> >> >> reference frames can be selected, the driver will be more complex and >> >> >> then the encoding speed is also affected. >> >> > >> >> >It's for implementing Cisco's clearpath technology >> >> >(http://www.cisco.com/c/dam/en/us/td/docs/telepresence/endpoint/softwa >> >> >re/clearpath/clearpath_whitepaper.pdf) >> >> >Basically one would periodically mark a P frame as long term reference, >> >> >in >> >> >case of packet loss one can thus refer to this one instead of restarting >> >> >the >> >> >stream with an idr. >> >> > >> >> > >> >> >I'm not too concerned about the encoding speed as long as it's above >> >> >realtime (60fps 1920x1080) >> >> >> >> Maybe it's more convenient If libva can have a switch for slice_header >> >> made between driver and app. Then some specially cases(ref-reorder, >> >> long-term-ref) may let app to manage slice-header. >> > >> > Thanks for your suggestion. If the slice-header info is also generated >> > by the app and then be passed to the driver, it can be easy to do the >> > operation of "re-order and long-term-ref" >> > >> > But this will depend on the policy how the slice header info is >> > generated. (by app or the driver). 
Now this is exported by querying the >> > capability of the driver. If we move this into the app, the >> > infrastructure will be changed a lot. >> >> Infrastructure of what? The application? No, it doesn't really change >> anything but generating the extra packed slice header. > > Currently the app will query the capability of encoding and determine > which kind of header should be passed by the upper application (for > example: PPS/SPS). > If the Slice_header info is exposed, the driver will assume that it is > the responsibility of generating the slice header info. In such case it > is difficult to mix the two working modes without a lot of changes. (One > is that the driver generates the slice header info. The other is that the > app generates the slice header info.) Then the driver needs to be fixed to not assume anything. The API and operation points look clear enough, don't they? The changes to the codec layer are minimal, and this was done already, in multiple instances of codec layers, and with another driver. >> > Another problem is that it is possible that one frame has multiple >> > slices. In such case the user should pass the multiple slice header >> > info. >> >> This is not an issue. The API mandates that the packed headers are >> provided in order. > > > >> >> Regards, >> Gwenole.
Re: [Libva] (H264) Reference Picture selection
On Sun, 2014-05-11 at 22:41 -0600, Gwenole Beauchesne wrote: > Hi, > > 2014-05-12 3:34 GMT+02:00 Zhao Yakui : > > On Fri, 2014-05-09 at 03:06 -0600, Yuan, Feng wrote: > >> Hi, > >> > >> >>> I have a scheme where the decoder ack's received frames - With this > >> >>> information I could resync without an IDR by carefully selecting the > >> >>> reference frame(s). > >> > > >> >> Will you please describe your idea in detail? > >> > > >> >>> long term references would typically be employed as well, but it > >> >>> seems ref-pic-marking was removed may'13? > >> > > >> >> Why do you hope to use the long-term reference? As you know, the > >> >> current encoding is based on hardware-acceleration. When more the > >> >> reference frames can be selected, the driver will be more complex and > >> >> then the encoding speed is also affected. > >> > > >> >It's for implementing Cisco's clearpath technology > >> >(http://www.cisco.com/c/dam/en/us/td/docs/telepresence/endpoint/softwa > >> >re/clearpath/clearpath_whitepaper.pdf) > >> >Basically one would periodically mark a P frame as long term reference, > >> >in > >> >case of packet loss one can thus refer to this one instead of restarting > >> >the > >> >stream with an idr. > >> > > >> > > >> >I'm not too concerned about the encoding speed as long as it's above > >> >realtime (60fps 1920x1080) > >> > >> Maybe it's more convenient If libva can have a switch for slice_header > >> made between driver and app. Then some specially cases(ref-reorder, > >> long-term-ref) may let app to manage slice-header. > > > > Thanks for your suggestion. If the slice-header info is also generated > > by the app and then be passed to the driver, it can be easy to do the > > operation of "re-order and long-term-ref" > > > > But this will depend on the policy how the slice header info is > > generated. (by app or the driver). Now this is exported by querying the > > capability of the driver. 
> > If we move this into the app, the infrastructure will be changed a lot.
>
> Infrastructure of what? The application? No, it doesn't really change
> anything but generating the extra packed slice header.

Currently the app queries the encoding capability to determine which
kind of header should be passed by the upper application (for example:
PPS/SPS). If the slice_header info is exposed, the driver will assume
that generating the slice header info is the app's responsibility. In
that case it is difficult to mix the two working modes without a lot of
changes. (One mode is that the driver generates the slice header info;
the other is that the app generates it.)

> > Another problem is that it is possible that one frame has multiple
> > slices. In such a case the user should pass multiple slice header
> > infos.
>
> This is not an issue. The API mandates that the packed headers are
> provided in order.
>
> Regards,
> Gwenole.
Re: [Libva] (H264) Reference Picture selection
Hi,

2014-05-12 3:34 GMT+02:00 Zhao Yakui :
> On Fri, 2014-05-09 at 03:06 -0600, Yuan, Feng wrote:
> > Hi,
> >
> > > > > I have a scheme where the decoder acks received frames - With
> > > > > this information I could resync without an IDR by carefully
> > > > > selecting the reference frame(s).
> > >
> > > > Will you please describe your idea in detail?
> > >
> > > > > long term references would typically be employed as well, but it
> > > > > seems ref-pic-marking was removed may'13?
> > >
> > > > Why do you hope to use the long-term reference? As you know, the
> > > > current encoding is based on hardware acceleration. When more
> > > > reference frames can be selected, the driver will be more complex
> > > > and then the encoding speed is also affected.
> > >
> > > It's for implementing Cisco's ClearPath technology
> > > (http://www.cisco.com/c/dam/en/us/td/docs/telepresence/endpoint/software/clearpath/clearpath_whitepaper.pdf)
> > > Basically one would periodically mark a P frame as a long-term
> > > reference; in case of packet loss one can thus refer to this one
> > > instead of restarting the stream with an IDR.
> > >
> > > I'm not too concerned about the encoding speed as long as it's above
> > > realtime (60fps 1920x1080)
> >
> > Maybe it's more convenient if libva can have a switch for slice_header
> > generation between driver and app. Then some special cases
> > (ref-reorder, long-term-ref) may let the app manage the slice header.
>
> Thanks for your suggestion. If the slice-header info is also generated
> by the app and then passed to the driver, it would be easy to do the
> "re-order and long-term-ref" operations.
>
> But this will depend on the policy of how the slice header info is
> generated (by the app or the driver). Now this is exported by querying
> the capability of the driver. If we move this into the app, the
> infrastructure will be changed a lot.

Infrastructure of what? The application? No, it doesn't really change
anything but generating the extra packed slice header.
> Another problem is that it is possible that one frame has multiple
> slices. In such a case the user should pass multiple slice header
> infos.

This is not an issue. The API mandates that the packed headers are
provided in order.

Regards,
Gwenole.
Re: [Libva] (H264) Reference Picture selection
Hi,

> > > > > I have a scheme where the decoder acks received frames - With
> > > > > this information I could resync without an IDR by carefully
> > > > > selecting the reference frame(s).
> > >
> > > > Will you please describe your idea in detail?
> > >
> > > > > long term references would typically be employed as well, but it
> > > > > seems ref-pic-marking was removed may'13?
> > >
> > > > Why do you hope to use the long-term reference? As you know, the
> > > > current encoding is based on hardware acceleration. When more
> > > > reference frames can be selected, the driver will be more complex
> > > > and then the encoding speed is also affected.
> > >
> > > It's for implementing Cisco's ClearPath technology
> > > (http://www.cisco.com/c/dam/en/us/td/docs/telepresence/endpoint/software/clearpath/clearpath_whitepaper.pdf)
> > > Basically one would periodically mark a P frame as a long-term
> > > reference; in case of packet loss one can thus refer to this one
> > > instead of restarting the stream with an IDR.
> > >
> > > I'm not too concerned about the encoding speed as long as it's above
> > > realtime (60fps 1920x1080)
> >
> > Maybe it's more convenient if libva can have a switch for slice_header
> > generation between driver and app. Then some special cases
> > (ref-reorder, long-term-ref) may let the app manage the slice header.
>
> Thanks for your suggestion. If the slice-header info is also generated
> by the app and then passed to the driver, it would be easy to do the
> "re-order and long-term-ref" operations.
>
> But this will depend on the policy of how the slice header info is
> generated (by the app or the driver). Now this is exported by querying
> the capability of the driver.
> If we move this into the app, the infrastructure will be changed a lot.

If the app queried it, the driver could tell it supports both. The
driver keeps the current infrastructure to generate the slice_header for
most use cases, and probably adds another path that generates the
slice_data and just copies the slice_header from the app (advanced
usage).
This should be similar to the decoder. But certainly it needs a lot of
effort, and the libva interface may need small changes too.

> Another problem is that it is possible that one frame has multiple
> slices. In such a case the user should pass multiple slice header
> infos.

Yes. Compared to the slice_data, the slice_header should be very small.

Thanks
Wind
Re: [Libva] (H264) Reference Picture selection
On Fri, 2014-05-09 at 03:06 -0600, Yuan, Feng wrote:
> Hi,
>
> > > > I have a scheme where the decoder acks received frames - With this
> > > > information I could resync without an IDR by carefully selecting
> > > > the reference frame(s).
> >
> > > Will you please describe your idea in detail?
> >
> > > > long term references would typically be employed as well, but it
> > > > seems ref-pic-marking was removed may'13?
> >
> > > Why do you hope to use the long-term reference? As you know, the
> > > current encoding is based on hardware acceleration. When more
> > > reference frames can be selected, the driver will be more complex
> > > and then the encoding speed is also affected.
> >
> > It's for implementing Cisco's ClearPath technology
> > (http://www.cisco.com/c/dam/en/us/td/docs/telepresence/endpoint/software/clearpath/clearpath_whitepaper.pdf)
> > Basically one would periodically mark a P frame as a long-term
> > reference; in case of packet loss one can thus refer to this one
> > instead of restarting the stream with an IDR.
> >
> > I'm not too concerned about the encoding speed as long as it's above
> > realtime (60fps 1920x1080)
>
> Maybe it's more convenient if libva can have a switch for slice_header
> generation between driver and app. Then some special cases
> (ref-reorder, long-term-ref) may let the app manage the slice header.

Thanks for your suggestion. If the slice-header info is also generated
by the app and then passed to the driver, it would be easy to do the
"re-order and long-term-ref" operations.

But this will depend on the policy of how the slice header info is
generated (by the app or the driver). Now this is exported by querying
the capability of the driver. If we move this into the app, the
infrastructure will be changed a lot.

Another problem is that it is possible that one frame has multiple
slices. In such a case the user should pass multiple slice header infos.

Thanks.
Yakui
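As a sketch of the capability query discussed above: the application reads the `VAConfigAttribEncPackedHeaders` attribute and, from the returned bit-mask, decides which headers it must pack itself. The `#define` values below mirror `<va/va.h>` and are redefined here only so the sketch compiles standalone; a real program includes that header and obtains the value with `vaGetConfigAttributes()`.

```c
#include <stdbool.h>

/* Values as defined in <va/va.h>; reproduced so the sketch is
 * self-contained. */
#define VA_ENC_PACKED_HEADER_SEQUENCE 0x00000001 /* app packs the SPS */
#define VA_ENC_PACKED_HEADER_PICTURE  0x00000002 /* app packs the PPS */
#define VA_ENC_PACKED_HEADER_SLICE    0x00000004 /* app packs slice headers */

/* Given the value of VAConfigAttribEncPackedHeaders returned by
 * vaGetConfigAttributes(), report whether the app (rather than the
 * driver) is expected to generate the slice headers. */
static bool app_packs_slice_header(unsigned int packed_headers)
{
    return (packed_headers & VA_ENC_PACKED_HEADER_SLICE) != 0;
}
```

This is exactly the switch Feng asks for: the driver advertises which headers it wants from the app, and the app only generates those.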
Re: [Libva] (H264) Reference Picture selection
Hi,

2014-05-09 11:06 GMT+02:00 Yuan, Feng :
> Hi,
>
> > > > I have a scheme where the decoder acks received frames - With this
> > > > information I could resync without an IDR by carefully selecting
> > > > the reference frame(s).
> >
> > > Will you please describe your idea in detail?
> >
> > > > long term references would typically be employed as well, but it
> > > > seems ref-pic-marking was removed may'13?
> >
> > > Why do you hope to use the long-term reference? As you know, the
> > > current encoding is based on hardware acceleration. When more
> > > reference frames can be selected, the driver will be more complex
> > > and then the encoding speed is also affected.
> >
> > It's for implementing Cisco's ClearPath technology
> > (http://www.cisco.com/c/dam/en/us/td/docs/telepresence/endpoint/software/clearpath/clearpath_whitepaper.pdf)
> > Basically one would periodically mark a P frame as a long-term
> > reference; in case of packet loss one can thus refer to this one
> > instead of restarting the stream with an IDR.
> >
> > I'm not too concerned about the encoding speed as long as it's above
> > realtime (60fps 1920x1080)
>
> Maybe it's more convenient if libva can have a switch for slice_header
> generation between driver and app. Then some special cases
> (ref-reorder, long-term-ref) may let the app manage the slice header.

That's what the packed headers flags are meant for. There is already an
API (libva) for providing packed slice headers to the driver. However,
the implementation in the driver is currently work-in-progress.

Regards,
Gwenole.
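For reference, the packed-header path mentioned here takes two buffers per header: a `VAEncPackedHeaderParameterBuffer` describing the header, followed by a `VAEncPackedHeaderDataBufferType` buffer holding the bits. A rough sketch, with the struct layout and enum value reproduced from `<va/va.h>` so it compiles standalone; a real application includes that header instead and submits both buffers with `vaCreateBuffer()`/`vaRenderPicture()`.

```c
#include <stdint.h>

/* Reproduced from <va/va.h> so the sketch is self-contained. */
#define VAEncPackedHeaderSlice 3       /* VAEncPackedHeaderType value */

typedef struct {
    uint32_t type;                     /* which header the data buffer holds */
    uint32_t bit_length;               /* length of the packed header in bits */
    uint8_t  has_emulation_bytes;      /* 1 if 0x03 escapes are already inserted */
} VAEncPackedHeaderParameterBuffer;

/* Describe an app-built slice header of header_bits bits.  The raw header
 * bytes go in a separate VAEncPackedHeaderDataBufferType buffer submitted
 * right after this one; with multiple slices per frame, each header/data
 * pair is simply submitted in slice order. */
static VAEncPackedHeaderParameterBuffer
describe_slice_header(uint32_t header_bits)
{
    VAEncPackedHeaderParameterBuffer param = {
        .type                = VAEncPackedHeaderSlice,
        .bit_length          = header_bits,
        .has_emulation_bytes = 1,
    };
    return param;
}
```

The in-order submission is what makes the multi-slice case unambiguous: the driver matches the n-th packed slice header with the n-th slice.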
Re: [Libva] (H264) Reference Picture selection
Hi,

> > > I have a scheme where the decoder acks received frames - With this
> > > information I could resync without an IDR by carefully selecting the
> > > reference frame(s).
>
> > Will you please describe your idea in detail?
>
> > > long term references would typically be employed as well, but it
> > > seems ref-pic-marking was removed may'13?
>
> > Why do you hope to use the long-term reference? As you know, the
> > current encoding is based on hardware acceleration. When more
> > reference frames can be selected, the driver will be more complex and
> > then the encoding speed is also affected.
>
> It's for implementing Cisco's ClearPath technology
> (http://www.cisco.com/c/dam/en/us/td/docs/telepresence/endpoint/software/clearpath/clearpath_whitepaper.pdf)
> Basically one would periodically mark a P frame as a long-term
> reference; in case of packet loss one can thus refer to this one
> instead of restarting the stream with an IDR.
>
> I'm not too concerned about the encoding speed as long as it's above
> realtime (60fps 1920x1080)

Maybe it's more convenient if libva can have a switch for slice_header
generation between driver and app. Then some special cases (ref-reorder,
long-term-ref) may let the app manage the slice header.
Re: [Libva] (H264) Reference Picture selection
Hi,

> > I have a scheme where the decoder acks received frames - With
> > this information I could resync without an IDR by carefully
> > selecting the reference frame(s).
>
> Will you please describe your idea in detail?
>
> > long term references would typically be employed as well, but it
> > seems ref-pic-marking was removed may'13?
>
> Why do you hope to use the long-term reference? As you know, the
> current encoding is based on hardware acceleration. When more
> reference frames can be selected, the driver will be more complex
> and then the encoding speed is also affected.

It's for implementing Cisco's ClearPath technology
(http://www.cisco.com/c/dam/en/us/td/docs/telepresence/endpoint/software/clearpath/clearpath_whitepaper.pdf)
Basically one would periodically mark a P frame as a long-term
reference; in case of packet loss one can thus refer to this one
instead of restarting the stream with an IDR.

I'm not too concerned about the encoding speed as long as it's above
realtime (60fps 1920x1080)
Re: [Libva] (H264) Reference Picture selection
On Wed, 2014-05-07 at 12:32 -0600, Torbjorn Tyridal wrote:
> Hi,
>
> Is it possible to select which reference frames to consider when
> encoding a frame with libva (Intel, Haswell)?

The upper user-space application passes the reference picture lists
(list0/list1) for H264 encoding, and the libva driver then uses that
info for the reference frames in encoding.

> I have a scheme where the decoder acks received frames - With this
> information I could resync without an IDR by carefully selecting the
> reference frame(s).

Will you please describe your idea in detail?

> long term references would typically be employed as well, but it seems
> ref-pic-marking was removed may'13?

Why do you hope to use the long-term reference? As you know, the
current encoding is based on hardware acceleration. When more
reference frames can be selected, the driver will be more complex and
then the encoding speed is also affected.

> --
> T.
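To illustrate the list0/list1 point: a recovery P frame can be made to predict from a previously marked long-term reference by putting only that picture in `RefPicList0` of the slice parameters. The structs below are trimmed stand-ins for `VAPictureH264` / `VAEncSliceParameterBufferH264` from `<va/va_enc_h264.h>`, reproduced (with flag values from `<va/va.h>`) only so the sketch compiles standalone; a real application uses the full types from those headers.

```c
#include <stdint.h>

/* Flag and ID values reproduced from <va/va.h>. */
#define VA_PICTURE_H264_INVALID             0x00000001
#define VA_PICTURE_H264_LONG_TERM_REFERENCE 0x00000010
#define VA_INVALID_SURFACE                  0xFFFFFFFF

typedef uint32_t VASurfaceID;

typedef struct {              /* trimmed stand-in for VAPictureH264 */
    VASurfaceID picture_id;
    uint32_t    frame_idx;
    uint32_t    flags;
} VAPictureH264;

typedef struct {              /* trimmed stand-in for VAEncSliceParameterBufferH264 */
    VAPictureH264 RefPicList0[32];
    uint8_t       num_ref_idx_l0_active_minus1;
} VAEncSliceParameterBufferH264;

/* Make a recovery P slice predict only from a long-term reference:
 * invalidate every list0 entry, then put the long-term picture first. */
static void use_long_term_ref(VAEncSliceParameterBufferH264 *slice,
                              VASurfaceID ltr_surface,
                              uint32_t long_term_frame_idx)
{
    for (int i = 0; i < 32; i++) {
        slice->RefPicList0[i].picture_id = VA_INVALID_SURFACE;
        slice->RefPicList0[i].frame_idx  = 0;
        slice->RefPicList0[i].flags      = VA_PICTURE_H264_INVALID;
    }
    slice->RefPicList0[0].picture_id = ltr_surface;
    slice->RefPicList0[0].frame_idx  = long_term_frame_idx;
    slice->RefPicList0[0].flags      = VA_PICTURE_H264_LONG_TERM_REFERENCE;
    slice->num_ref_idx_l0_active_minus1 = 0;  /* exactly one active reference */
}
```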
[Libva] (H264) Reference Picture selection
Hi,

Is it possible to select which reference frames to consider when
encoding a frame with libva (Intel, Haswell)?

I have a scheme where the decoder acks received frames - With this
information I could resync without an IDR by carefully selecting the
reference frame(s).

Long term references would typically be employed as well, but it seems
ref-pic-marking was removed may'13?

--
T.
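The ack-driven scheme described in this thread boils down to a small piece of encoder-side state; a minimal sketch, with all names illustrative (none of this is a libva API): the encoder remembers the newest frame the decoder has acknowledged and, when loss is reported, predicts the next P frame from that known-good frame instead of emitting an IDR.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    int32_t last_acked;    /* newest frame number the decoder has ack'd; -1 if none */
    bool    loss_pending;  /* decoder reported a lost packet */
} resync_state;

/* Pick the reference frame number for the next P frame: normally the
 * previous frame, but after a reported loss fall back to the newest
 * acknowledged frame.  A return of -1 means no safe reference exists
 * and an IDR is unavoidable. */
static int32_t pick_reference(resync_state *s, int32_t prev_frame)
{
    if (!s->loss_pending)
        return prev_frame;
    s->loss_pending = false;
    return s->last_acked;
}
```

In a libva encoder the returned frame would then be placed in the picture/slice reference lists; keeping it decodable on the receiver is what the long-term reference marking discussed above provides.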