Re: [e-users] Question on emotion using HW-accelerated rendering

2013-01-12 Thread The Rasterman
On Wed, 9 Jan 2013 19:19:53 +0530 Arvind R arvin...@gmail.com said:

 Hi all,
 
 My understanding is that emotion gets the video backend to render RGBA
 to the evas canvas that is then displayed by the ecore-evas backend.
 Correct?

actually it gets the video decoder (xine/gstreamer etc.) to decode to yuv...
not rgb. the video decoder decodes however it likes: it might use software, it
might use hardware. not really emotion's business. :)

the yuv data is then handed to the rendering engine. the software engine, of
course, converts to rgba, scales and so on on the cpu. gl will upload it to a
texture and use glsl shaders to do the conversion (the gpu now does the work
while it draws and scales textures), so... if you use gl, video is
accelerated. the only catch is the texture upload of the yuv data. it depends
on how the gl drivers implement this - with a cpu swizzle or a dedicated dma
engine.
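
for the curious, the per-pixel math of that conversion looks roughly like
this - a minimal, untested sketch of the bt.601 limited-range yuv-to-rgba
conversion (the real evas paths do the same thing with mmx/sse tables or a
glsl shader, this is just the scalar version for illustration):

/* scalar bt.601 limited-range yuv -> rgba, for illustration only */
#include <stdint.h>

static inline uint8_t clamp8(int v)
{
   return (v < 0) ? 0 : (v > 255) ? 255 : (uint8_t)v;
}

static void yuv_to_rgba(uint8_t y, uint8_t u, uint8_t v, uint8_t rgba[4])
{
   int c = (int)y - 16, d = (int)u - 128, e = (int)v - 128;

   rgba[0] = clamp8((298 * c + 409 * e + 128) >> 8);           /* r */
   rgba[1] = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8); /* g */
   rgba[2] = clamp8((298 * c + 516 * d + 128) >> 8);           /* b */
   rgba[3] = 255;                                              /* a */
}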

 If so, would it be possible, for instance, using the xine backend to
 render directly to the screen using whatever HW-acceleration is available to
 it, and have the evas-canvas as an 'underlay' to the video screen in
 order to trap events. This would mean modifying the emotion-xine
 module to be an interceptor in the xine pipeline instead of being a
 video_output driver.

there is already code to do this with gstreamer, but it's all broken at the
moment, and it's also a special case: it's only allowed if you run e17 AND
your video object is unobscured, etc. gl based video is far more flexible and
all hw accelerated as per the above.
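
in practice getting the gl path is just picking the gl engine when you create
the canvas. a minimal, untested sketch (assumes the opengl_x11 ecore-evas
engine and the emotion gstreamer module are built; "video.mp4" is a
placeholder):

#include <Ecore.h>
#include <Ecore_Evas.h>
#include <Emotion.h>

int main(void)
{
   ecore_evas_init();

   /* "opengl_x11" instead of "software_x11" selects the glsl yuv path */
   Ecore_Evas *ee = ecore_evas_new("opengl_x11", 0, 0, 640, 480, NULL);
   Evas *canvas = ecore_evas_get(ee);

   Evas_Object *vid = emotion_object_add(canvas);
   emotion_object_init(vid, "gstreamer");      /* decoder module */
   emotion_object_file_set(vid, "video.mp4");  /* placeholder file */
   evas_object_resize(vid, 640, 480);
   evas_object_show(vid);
   emotion_object_play_set(vid, EINA_TRUE);

   ecore_evas_show(ee);
   ecore_main_loop_begin();

   ecore_evas_shutdown();
   return 0;
}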

 Feasible?
 
 Arvind R.
 


-- 
- Codito, ergo sum - I code, therefore I am --
The Rasterman (Carsten Haitzler) ras...@rasterman.com




[e-users] Question on emotion using HW-accelerated rendering

2013-01-09 Thread Arvind R
Hi all,

My understanding is that emotion gets the video backend to render RGBA
to the evas canvas that is then displayed by the ecore-evas backend.
Correct?

If so, would it be possible, for instance, using the xine backend to
render directly to the screen using whatever HW-acceleration is available to
it, and have the evas-canvas as an 'underlay' to the video screen in
order to trap events. This would mean modifying the emotion-xine
module to be an interceptor in the xine pipeline instead of being a
video_output driver.

Feasible?

Arvind R.



Re: [e-users] Question on emotion using HW-accelerated rendering

2013-01-09 Thread Gustavo Sverzut Barbieri
On Wed, Jan 9, 2013 at 11:49 AM, Arvind R arvin...@gmail.com wrote:

 Hi all,

 My understanding is that emotion gets the video backend to render RGBA
 to the evas canvas that is then displayed by the ecore-evas backend.
 Correct?


Actually it outputs YUV as well, which is converted to RGB by the CPU
(MMX/SSE) or the GPU (OpenGL).


 If so, would it be possible, for instance, using the xine backend to
 render directly to the screen using whatever HW-acceleration is available to
 it, and have the evas-canvas as an 'underlay' to the video screen in
 order to trap events. This would mean modifying the emotion-xine
 module to be an interceptor in the xine pipeline instead of being a
 video_output driver.

 Feasible?


Yes, Cedric did this for GStreamer. There is support for it in
Evas_Object_Image via Evas_Video_Surface, which you can use to hook into and
change the underlying backend. At the Evas level it will punch an empty hole
in the image region, leaving it to the HW plane above or below to draw the
video.
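
A minimal, untested sketch of that hook (the callback signatures follow the
Evas_Video_Surface definition in the EFL 1.x Evas.h, so double-check them
against your tree; the _vid_* stubs are placeholders for your backend):

#include <Evas.h>

static void _vid_move(void *data, Evas_Object *obj,
                      const Evas_Video_Surface *surface,
                      Evas_Coord x, Evas_Coord y)
{
   /* reposition the HW overlay/plane at (x, y) */
}

static void _vid_resize(void *data, Evas_Object *obj,
                        const Evas_Video_Surface *surface,
                        Evas_Coord w, Evas_Coord h)
{
   /* resize the HW overlay/plane to w x h */
}

static void _vid_show(void *data, Evas_Object *obj,
                      const Evas_Video_Surface *surface)
{
   /* make the overlay visible */
}

static void _vid_hide(void *data, Evas_Object *obj,
                      const Evas_Video_Surface *surface)
{
   /* hide the overlay */
}

static void _vid_update_pixels(void *data, Evas_Object *obj,
                               const Evas_Video_Surface *surface)
{
   /* push the next decoded frame to the overlay */
}

static void video_surface_hook(Evas_Object *img, void *backend)
{
   Evas_Video_Surface surf;

   surf.version       = EVAS_VIDEO_SURFACE_VERSION;
   surf.move          = _vid_move;
   surf.resize        = _vid_resize;
   surf.show          = _vid_show;
   surf.hide          = _vid_hide;
   surf.update_pixels = _vid_update_pixels;
   surf.parent        = NULL;
   surf.data          = backend;

   /* Evas punches a hole in the image region and drives the
      callbacks so the HW plane above/below can draw the video */
   evas_object_image_video_surface_set(img, &surf);
}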


-- 
Gustavo Sverzut Barbieri
http://profusion.mobi embedded systems
--
MSN: barbi...@gmail.com
Skype: gsbarbieri
Mobile: +55 (19) 9225-2202