On 15.09.2016 at 23:07, Julien Isorce wrote:
On 15 September 2016 at 16:02, Leo Liu <leo....@amd.com> wrote:
On 09/15/2016 10:43 AM, Andy Furniss wrote:
Since bf901a2 ("st/va: also honors interlaced preference when providing a
video format"), existing scripts and most use cases will need true.
Signed-off-by: Andy Furniss <adf.li...@gmail.com>
---
src/gallium/state_trackers/va/surface.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/gallium/state_trackers/va/surface.c b/src/gallium/state_trackers/va/surface.c
index 00df69d..e73e17e 100644
--- a/src/gallium/state_trackers/va/surface.c
+++ b/src/gallium/state_trackers/va/surface.c
@@ -43,7 +43,7 @@
 #include "va_private.h"
 
-DEBUG_GET_ONCE_BOOL_OPTION(nointerlace, "VAAPI_DISABLE_INTERLACE", FALSE);
+DEBUG_GET_ONCE_BOOL_OPTION(nointerlace, "VAAPI_DISABLE_INTERLACE", TRUE);
As mentioned, it'll still override the preferred interlaced format even
when this env var is not explicitly set. Not sure this will be okay for
other cases. @Julien?
Hi,
So the problem is that with radeon, PIPE_VIDEO_CAP_SUPPORTS_INTERLACED
always returns false for encoding but can return true for decoding
(depending on the chipset and the codec). Then when transcoding you need
everything to be non-interlaced to avoid an extra copy/conversion, or
even a clash. Is that correct?
Yes, correct. The decoder supports both interlaced and progressive
memory layouts, but the encoder can only handle progressive input.
The interlaced memory layout is needed for things like adaptive
deinterlacing as well as VDPAU OpenGL interop.
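Since the cap is queried per entrypoint, a state tracker can already
observe that asymmetry. A minimal sketch of the two queries (pscreen and
profile are assumed to be in scope; enum names as in pipe_defines.h):

/* Hedged sketch: PIPE_VIDEO_CAP_SUPPORTS_INTERLACED is queried
 * separately for decode and encode.  On radeon this can come back
 * true for decode but false for encode, which is exactly what makes
 * transcoding awkward. */
bool dec_interlaced =
   pscreen->get_video_param(pscreen, profile,
                            PIPE_VIDEO_ENTRYPOINT_BITSTREAM,
                            PIPE_VIDEO_CAP_SUPPORTS_INTERLACED);
bool enc_interlaced =
   pscreen->get_video_param(pscreen, profile,
                            PIPE_VIDEO_ENTRYPOINT_ENCODE,
                            PIPE_VIDEO_CAP_SUPPORTS_INTERLACED);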
Should debug_get_option_nointerlace() be moved to
radeon_video.c::rvid_get_video_param?
For the short term that sounds like a good idea to me (a sketch of what
that could look like follows below).
In the mid term we need to handle this case better, e.g. allocate the
video buffer with the layout the driver reports as preferred, and if that
doesn't match the use case (transcoding, deinterlacing or interop),
convert as needed.
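A hypothetical sketch of the short-term idea, i.e. moving the check into
rvid_get_video_param (not an actual patch; the real function handles many
more caps, and the chipset/codec checks quoted below would stay in place):

/* Sketch only: honor the environment override inside the radeon
 * driver, so st/va no longer has to second-guess the preference. */
DEBUG_GET_ONCE_BOOL_OPTION(nointerlace, "VAAPI_DISABLE_INTERLACE", FALSE);

int rvid_get_video_param(struct pipe_screen *screen,
                         enum pipe_video_profile profile,
                         enum pipe_video_entrypoint entrypoint,
                         enum pipe_video_cap param)
{
   switch (param) {
   case PIPE_VIDEO_CAP_SUPPORTS_INTERLACED:
      if (debug_get_option_nointerlace())
         return false; /* user explicitly asked for progressive */
      /* ... the existing chipset/codec checks go here ... */
      return true;
   /* ... all other caps unchanged ... */
   default:
      return 0;
   }
}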
Other question:
case PIPE_VIDEO_CAP_SUPPORTS_INTERLACED:
   if (rscreen->family < CHIP_PALM) {
      /* MPEG2 only with shaders and no support for
       * interlacing on R6xx style UVD */
      return codec != PIPE_VIDEO_FORMAT_MPEG12 &&
             rscreen->family > CHIP_RV770;
   } else {
      if (u_reduce_video_profile(profile) == PIPE_VIDEO_FORMAT_HEVC)
         return false; /* The firmware doesn't support interlaced HEVC. */
      return true;
   }
So if it instead always returned false, would it really work on hardware
for which the above code says true?
It would work, but the deinterlacing and interop use cases I noted above
would break.
Because my understanding is that for NVIDIA hardware this is not a
preference but rather a requirement, though I might be wrong.
Yes, as far as I know that is correct. AMD hardware can handle both for
most hardware/profile combinations; even the HEVC limitation only applies
to a certain firmware version, IIRC.
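That distinction suggests the override should never be applied blindly:
pipe_defines.h also has PIPE_VIDEO_CAP_SUPPORTS_PROGRESSIVE, so st/va
could skip forcing a progressive layout on hardware that requires
interlaced buffers. A rough sketch of such a guard (variable names are
illustrative only):

/* Sketch: only honor VAAPI_DISABLE_INTERLACE when the driver can
 * actually handle progressive buffers; otherwise keep the layout the
 * hardware requires. */
bool supports_progressive =
   pscreen->get_video_param(pscreen, profile,
                            PIPE_VIDEO_ENTRYPOINT_BITSTREAM,
                            PIPE_VIDEO_CAP_SUPPORTS_PROGRESSIVE);

if (debug_get_option_nointerlace() && supports_progressive)
   templat.interlaced = false;

A guard like that would also avoid the nouveau assert mentioned next.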
In any case, with current upstream code and VAAPI_DISABLE_INTERLACE=1
it hits "assert(templat->interlaced);" in
nouveau_vp3_video_buffer_create. If I remove the assert it crashes and
can even stall the driver (I just wanted to check).
Crap, feel free to revert the patch setting it to true by default.
Regards,
Christian.
Cheers
Julien
Regards,
Leo
_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev