Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-12 Thread Alex Deucher
On Wed, Feb 9, 2011 at 7:51 PM, Andy Walls awa...@md.metrocast.net wrote:
 On Wed, 2011-02-09 at 02:12 -0500, Alex Deucher wrote:
 On Tue, Feb 8, 2011 at 5:47 PM, Andy Walls awa...@md.metrocast.net wrote:
  On Tue, 2011-02-08 at 10:28 -0500, Alex Deucher wrote:
  On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:
   Just two quick notes. I'll try to do a full review this weekend.
  
   On Tuesday, February 08, 2011 10:30:22 Tomasz Stanislawski wrote:
   ==
    Introduction
   ==
  
   The purpose of this RFC is to discuss the driver for a TV output 
   interface
   available in upcoming Samsung SoC. The HW is able to generate digital 
   and
   analog signals. Current version of the driver supports only digital 
   output.
  
   Internally the driver uses videobuf2 framework, and CMA memory 
   allocator.
   Not
   all of them are merged by now, but I decided to post the sources to 
   start
   discussion driver's design.
 
  
   Cisco (i.e. a few colleagues and myself) are working on this. We hope 
   to post
   an RFC by the end of this month. We also have a proposal for CEC 
   support in
   the pipeline.
 
  Any reason to not use the drm kms APIs for modesetting, display
  configuration, and hotplug support?  We already have the
  infrastructure in place for complex display configurations and
  generating events for hotplug interrupts.  It would seem to make more
  sense to me to fix any deficiencies in the KMS APIs than to spin a new
  API.  Things like CEC would be a natural fit since a lot of desktop
  GPUs support hdmi audio/3d/etc. and are already using kms.
 
  Alex
 
  I'll toss one out: lack of API documentation for driver or application
  developers to use.
 
 
  When I last looked at converting ivtvfb to use DRM, KMS, TTM, etc. (to
  possibly get rid of reliance on the ivtv X video driver
  http://dl.ivtvdriver.org/xf86-video-ivtv/ ), I found the documentation
  was really sparse.
 
  DRM had the most documentation under Documentation/DocBook/drm.tmpl, but
  the userland API wasn't fleshed out.  GEM was talked about a bit in
  there as well, IIRC.
 
  TTM documentation was essentially non-existant.
 
  I can't find any KMS documentation either.
 
  I recall having to read much of the drm code, and having to look at the
  radeon driver, just to tease out what the DRM ioctls needed to do.
 
  Am I missing a Documentation source for the APIs?
 

 Documentation is somewhat sparse compared to some other APIs.  Mostly
 inline kerneldoc comments in the core functions.  It would be nice to
 improve things.   The modesetting API is very similar to the xrandr
 API in the xserver.

 At the moment a device specific surface manager (Xorg ddx, or some
 other userspace lib) is required to use kms due to device specific
 requirements with respect to memory management and alignment for
 acceleration.  The kms modesetting ioctls are common across all kms
 drm drivers, but the memory management ioctls are device specific.
 GEM itself is an Intel-specific memory manager, although radeon uses
 similar ioctls.  TTM is used internally by radeon, nouveau, and svga
 for managing GPU-accessible memory pools.  Drivers are free to
 use whatever memory manager they want: an existing one shared with a
 v4l or platform driver, TTM, or something new.  There is no generic
 userspace kms driver/lib although Dave and others have done some work
 to support that, but it's really hard to make a generic interface
 flexible enough to handle all the strange acceleration requirements of
 GPUs.

 All of the above unfortunately says to me that the KMS API has a fairly
 tightly coupled set of userspace components, because userspace
 applications need to have details about the specific underlying hardware
 embedded in the application to effectively use the API.


At the moment, the only things that use the APIs are X-like things:
Xorg, but also wayland and graphical boot managers like plymouth.
However, embedded devices with graphics often have similar usage
models, so the APIs would work for them as well.  I'm sorry if I gave
the wrong impression; I was not implying you should use kms for video
capture, but rather that it should be considered for video output type
things.  Right now just about every embedded device out there uses
some device specific hack (either a hacked up kernel fb interface or
some proprietary ioctls) to support video output and framebuffers.
The hardware is not that different from desktop hardware.
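
For illustration only, here is a rough sketch of what the common
modesetting path looks like from userspace through libdrm.  It is not
code from any driver in this thread; /dev/dri/card0, the pre-created
fb_id framebuffer and the minimal error handling are assumptions:

/*
 * Illustrative sketch: pick the first connected connector and set its
 * first mode on the first CRTC.  Assumes a framebuffer object (fb_id)
 * was already created, e.g. with drmModeAddFB().
 */
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int set_first_connected_mode(uint32_t fb_id)
{
        int fd = open("/dev/dri/card0", O_RDWR);
        drmModeRes *res;
        int i, ret = -1;

        if (fd < 0)
                return -1;

        res = drmModeGetResources(fd);
        if (!res) {
                close(fd);
                return -1;
        }

        for (i = 0; i < res->count_connectors; i++) {
                drmModeConnector *conn =
                        drmModeGetConnector(fd, res->connectors[i]);

                if (conn && conn->connection == DRM_MODE_CONNECTED &&
                    conn->count_modes > 0) {
                        /* scan out fb_id on the first CRTC with this mode */
                        ret = drmModeSetCrtc(fd, res->crtcs[0], fb_id, 0, 0,
                                             &conn->connector_id, 1,
                                             &conn->modes[0]);
                        drmModeFreeConnector(conn);
                        break;
                }
                if (conn)
                        drmModeFreeConnector(conn);
        }

        drmModeFreeResources(res);
        close(fd);
        return ret;
}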

 If so, that's not really conducive to getting application developers to
 write applications to the API, since applications will get tied to
 specific sets of hardware.

 Lack of documentation on the API for userspace application writers to use
 exacerbates that issue, as there are no clearly stated guarantees on

        device node conventions
        ioctl's
                arguments and bounds on the arguments
                expected error return values
                

Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-09 Thread Hans Verkuil
On Tuesday, February 08, 2011 16:28:32 Alex Deucher wrote:
 On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:

snip

The driver supports an interrupt. It is used to detect plug/unplug 
events
  in
  kernel debugs.  The API for detection of such an events in V4L2 API is to 
be
  defined.
 
  Cisco (i.e. a few colleagues and myself) are working on this. We hope to 
post
  an RFC by the end of this month. We also have a proposal for CEC support 
in
  the pipeline.
 
 Any reason to not use the drm kms APIs for modesetting, display
 configuration, and hotplug support?  We already have the
 infrastructure in place for complex display configurations and
 generating events for hotplug interrupts.  It would seem to make more
 sense to me to fix any deficiencies in the KMS APIs than to spin a new
 API.  Things like CEC would be a natural fit since a lot of desktop
 GPUs support hdmi audio/3d/etc. and are already using kms.

There are various reasons for not going down that road. The most important one 
is that mixing APIs is actually a bad idea. I've done that once in the past 
and I've regretted it ever since. The problem with doing that is that it is 
pretty hard on applications, which have to mix two different styles of API, 
somehow know where to find the documentation for each, and know that both APIs 
can in fact be used on the same device.

Now, if there was a lot of code that could be shared, then that might be 
enough reason to go that way, but in practice there is very little overlap. 
Take CEC: all the V4L API will do is to pass the CEC packets from kernel to 
userspace and vice versa. There is no parsing at all. This is typically used 
by embedded apps that want to do their own CEC processing.

An exception might be a PCI(e) card with HDMI input/output that wants to 
handle CEC internally. At that point we might look at sharing CEC parsing 
code. A similar story is true for EDID handling.

One area that might be nice to look at would be to share drivers for HDMI 
receivers and transmitters. However, the infrastructure for such drivers is 
wildly different between how it is used for GPUs versus V4L, and has been for 
10 years or so. I also suspect that most GPUs have their own internal HDMI 
implementation, so code sharing will probably be quite limited.

So, no, there are no plans to share anything between the two (except perhaps 
EDID and CEC parsing should that become relevant).

Oh, and let me join Andy in saying that the drm/kms/whatever API documentation 
*really* needs a lot of work.

Regards,

Hans


Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-09 Thread Alex Deucher
On Wed, Feb 9, 2011 at 3:59 AM, Hans Verkuil hansv...@cisco.com wrote:
 On Tuesday, February 08, 2011 16:28:32 Alex Deucher wrote:
 On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:

 snip

    The driver supports an interrupt. It is used to detect plug/unplug
 events
  in
  kernel debugs.  The API for detection of such an events in V4L2 API is to
 be
  defined.
 
  Cisco (i.e. a few colleagues and myself) are working on this. We hope to
 post
  an RFC by the end of this month. We also have a proposal for CEC support
 in
  the pipeline.

 Any reason to not use the drm kms APIs for modesetting, display
 configuration, and hotplug support?  We already have the
 infrastructure in place for complex display configurations and
 generating events for hotplug interrupts.  It would seem to make more
 sense to me to fix any deficiencies in the KMS APIs than to spin a new
 API.  Things like CEC would be a natural fit since a lot of desktop
 GPUs support hdmi audio/3d/etc. and are already using kms.

 There are various reasons for not going down that road. The most important one
 is that mixing APIs is actually a bad idea. I've done that once in the past
 and I've regretted ever since. The problem with doing that is that it is
 pretty hard on applications who have to mix two different styles of API,
 somehow know where to find the documentation for each and know that both APIs
 can in fact be used on the same device.

 Now, if there was a lot of code that could be shared, then that might be
 enough reason to go that way, but in practice there is very little overlap.
 Take CEC: all the V4L API will do is to pass the CEC packets from kernel to
 userspace and vice versa. There is no parsing at all. This is typically used
 by embedded apps that want to do their own CEC processing.

 An exception might be a PCI(e) card with HDMI input/output that wants to
 handle CEC internally. At that point we might look at sharing CEC parsing
 code. A similar story is true for EDID handling.

 One area that might be nice to look at would be to share drivers for HDMI
 receivers and transmitters. However, the infrastructure for such drivers is
 wildly different between how it is used for GPUs versus V4L and has been for
 10 years or so. I also suspect that most GPUs have there own HDMI internal
 implementation so code sharing will probably be quite limited.


You don't need to worry about the rest of the 3D and acceleration
stuff to use the kms modesetting API.  For video output, you have a
timing generator, an encoder that translates a bitstream into
voltages, and a connector that you plug into a monitor.  Additionally
you may want to read an EDID or generate a hotplug event and use some
modeline handling helpers.  The kms api provides core modesetting code
and a set of modesetting driver callbacks for crtcs, encoders, and
connectors.  The hardware implementations will vary, but modesetting
is the same.  From drm_crtc_helper.h:

The driver provides the following callbacks for the crtc.  The crtc
loosely refers to the part of the display pipe that generates timing
and framebuffer scanout position.

struct drm_crtc_helper_funcs {
        /*
         * Control power levels on the CRTC.  If the mode passed in is
         * unsupported, the provider must use the next lowest power level.
         */
        void (*dpms)(struct drm_crtc *crtc, int mode);
        void (*prepare)(struct drm_crtc *crtc);
        void (*commit)(struct drm_crtc *crtc);

        /* Provider can fixup or change mode timings before modeset occurs */
        bool (*mode_fixup)(struct drm_crtc *crtc,
                           struct drm_display_mode *mode,
                           struct drm_display_mode *adjusted_mode);
        /* Actually set the mode */
        int (*mode_set)(struct drm_crtc *crtc, struct drm_display_mode *mode,
                        struct drm_display_mode *adjusted_mode, int x, int y,
                        struct drm_framebuffer *old_fb);

        /* Move the crtc on the current fb to the given position *optional* */
        int (*mode_set_base)(struct drm_crtc *crtc, int x, int y,
                             struct drm_framebuffer *old_fb);
        int (*mode_set_base_atomic)(struct drm_crtc *crtc,
                                    struct drm_framebuffer *fb, int x, int y,
                                    enum mode_set_atomic);

        /* reload the current crtc LUT */
        void (*load_lut)(struct drm_crtc *crtc);

        /* disable crtc when not in use - more explicit than dpms off */
        void (*disable)(struct drm_crtc *crtc);
};
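
To make the callback model concrete, here is a hedged sketch of how an
SoC display driver of that era might register a crtc with these helpers.
The samsung_crtc_* names and empty bodies are hypothetical placeholders,
not code from the posted S5PV310 driver:

#include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>

static void samsung_crtc_dpms(struct drm_crtc *crtc, int mode)
{
        /* program mixer/timing-generator power state here */
}

static bool samsung_crtc_mode_fixup(struct drm_crtc *crtc,
                                    struct drm_display_mode *mode,
                                    struct drm_display_mode *adjusted_mode)
{
        return true;    /* accept the requested mode unchanged */
}

static int samsung_crtc_mode_set(struct drm_crtc *crtc,
                                 struct drm_display_mode *mode,
                                 struct drm_display_mode *adjusted_mode,
                                 int x, int y, struct drm_framebuffer *old_fb)
{
        /* program the timing generator and scanout address here */
        return 0;
}

static void samsung_crtc_prepare(struct drm_crtc *crtc) { }
static void samsung_crtc_commit(struct drm_crtc *crtc) { }

static const struct drm_crtc_helper_funcs samsung_crtc_helper_funcs = {
        .dpms       = samsung_crtc_dpms,
        .mode_fixup = samsung_crtc_mode_fixup,
        .mode_set   = samsung_crtc_mode_set,
        .prepare    = samsung_crtc_prepare,
        .commit     = samsung_crtc_commit,
};

static const struct drm_crtc_funcs samsung_crtc_funcs = {
        .set_config = drm_crtc_helper_set_config,  /* generic helper */
        .destroy    = drm_crtc_cleanup,            /* simplified for the sketch */
};

/* called from the driver's KMS init path */
static void samsung_crtc_create(struct drm_device *dev, struct drm_crtc *crtc)
{
        drm_crtc_init(dev, crtc, &samsung_crtc_funcs);
        drm_crtc_helper_add(crtc, &samsung_crtc_helper_funcs);
}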

Encoders take the bitstream from the crtc and convert it into a set of
voltages understood by the monitor (e.g., TMDS or LVDS encoders).  The
callbacks follow a similar pattern to the crtc ones.

struct drm_encoder_helper_funcs {
void (*dpms)(struct drm_encoder *encoder, int mode);
void (*save)(struct drm_encoder *encoder);
void (*restore)(struct drm_encoder 

Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-09 Thread Corbin Simpson
On Wed, Feb 9, 2011 at 9:55 AM, Alex Deucher alexdeuc...@gmail.com wrote:
 On Wed, Feb 9, 2011 at 3:59 AM, Hans Verkuil hansv...@cisco.com wrote:
 On Tuesday, February 08, 2011 16:28:32 Alex Deucher wrote:
 On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:

 snip

    The driver supports an interrupt. It is used to detect plug/unplug
 events
  in
  kernel debugs.  The API for detection of such an events in V4L2 API is to
 be
  defined.
 
  Cisco (i.e. a few colleagues and myself) are working on this. We hope to
 post
  an RFC by the end of this month. We also have a proposal for CEC support
 in
  the pipeline.

 Any reason to not use the drm kms APIs for modesetting, display
 configuration, and hotplug support?  We already have the
 infrastructure in place for complex display configurations and
 generating events for hotplug interrupts.  It would seem to make more
 sense to me to fix any deficiencies in the KMS APIs than to spin a new
 API.  Things like CEC would be a natural fit since a lot of desktop
 GPUs support hdmi audio/3d/etc. and are already using kms.

 There are various reasons for not going down that road. The most important 
 one
 is that mixing APIs is actually a bad idea. I've done that once in the past
 and I've regretted ever since. The problem with doing that is that it is
 pretty hard on applications who have to mix two different styles of API,
 somehow know where to find the documentation for each and know that both APIs
 can in fact be used on the same device.

 Now, if there was a lot of code that could be shared, then that might be
 enough reason to go that way, but in practice there is very little overlap.
 Take CEC: all the V4L API will do is to pass the CEC packets from kernel to
 userspace and vice versa. There is no parsing at all. This is typically used
 by embedded apps that want to do their own CEC processing.

 An exception might be a PCI(e) card with HDMI input/output that wants to
 handle CEC internally. At that point we might look at sharing CEC parsing
 code. A similar story is true for EDID handling.

 One area that might be nice to look at would be to share drivers for HDMI
 receivers and transmitters. However, the infrastructure for such drivers is
 wildly different between how it is used for GPUs versus V4L and has been for
 10 years or so. I also suspect that most GPUs have there own HDMI internal
 implementation so code sharing will probably be quite limited.


 You don't need to worry about the rest of the 3D and acceleration
 stuff to use the kms modesetting API.  For video output, you have a
 timing generator, an encoder that translates a bitstream into
 voltages, and an connector that you plug into a monitor.  Additionally
 you may want to read an edid or generate a hotplug event and use some
 modeline handling helpers.  The kms api provides core modesetting code
 and a set of modesetting driver callbacks for crtcs, encoders, and
 connectors.  The hardware implementations will vary, but modesetting
 is the same.  From drm_crtc_helper.h:

 The driver provides the following callbacks for the crtc.  The crtc
 loosely refers to the part of the display pipe that generates timing
 and framebuffer scanout position.

 struct drm_crtc_helper_funcs {
        /*
         * Control power levels on the CRTC.  If the mode passed in is
         * unsupported, the provider must use the next lowest power
 level.
         */
        void (*dpms)(struct drm_crtc *crtc, int mode);
        void (*prepare)(struct drm_crtc *crtc);
        void (*commit)(struct drm_crtc *crtc);

        /* Provider can fixup or change mode timings before modeset occurs */
        bool (*mode_fixup)(struct drm_crtc *crtc,
                           struct drm_display_mode *mode,
                           struct drm_display_mode *adjusted_mode);
        /* Actually set the mode */
        int (*mode_set)(struct drm_crtc *crtc, struct drm_display_mode *mode,
                        struct drm_display_mode *adjusted_mode, int x, int y,
                        struct drm_framebuffer *old_fb);

        /* Move the crtc on the current fb to the given position *optional* */
        int (*mode_set_base)(struct drm_crtc *crtc, int x, int y,
                             struct drm_framebuffer *old_fb);
        int (*mode_set_base_atomic)(struct drm_crtc *crtc,
                                    struct drm_framebuffer *fb, int x, int y,
                                    enum mode_set_atomic);

        /* reload the current crtc LUT */
        void (*load_lut)(struct drm_crtc *crtc);

        /* disable crtc when not in use - more explicit than dpms off */
        void (*disable)(struct drm_crtc *crtc);
 };

 encoders take the bitstream from the crtc and convert it into a set of
 voltages understood by the monitor, e.g., TMDS or LVDS encoders.  The
 callbacks follow a similar pattern to crtcs.

 struct drm_encoder_helper_funcs {
        void (*dpms)(struct drm_encoder *encoder, int mode);

Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-09 Thread Matt Turner
On Wed, Feb 9, 2011 at 7:12 AM, Alex Deucher alexdeuc...@gmail.com wrote:
 On Tue, Feb 8, 2011 at 5:47 PM, Andy Walls awa...@md.metrocast.net wrote:
 On Tue, 2011-02-08 at 10:28 -0500, Alex Deucher wrote:
 On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:
  Just two quick notes. I'll try to do a full review this weekend.
 
  On Tuesday, February 08, 2011 10:30:22 Tomasz Stanislawski wrote:
  ==
   Introduction
  ==
 
  The purpose of this RFC is to discuss the driver for a TV output 
  interface
  available in upcoming Samsung SoC. The HW is able to generate digital and
  analog signals. Current version of the driver supports only digital 
  output.
 
  Internally the driver uses videobuf2 framework, and CMA memory allocator.
  Not
  all of them are merged by now, but I decided to post the sources to start
  discussion driver's design.

 
  Cisco (i.e. a few colleagues and myself) are working on this. We hope to 
  post
  an RFC by the end of this month. We also have a proposal for CEC support 
  in
  the pipeline.

 Any reason to not use the drm kms APIs for modesetting, display
 configuration, and hotplug support?  We already have the
 infrastructure in place for complex display configurations and
 generating events for hotplug interrupts.  It would seem to make more
 sense to me to fix any deficiencies in the KMS APIs than to spin a new
 API.  Things like CEC would be a natural fit since a lot of desktop
 GPUs support hdmi audio/3d/etc. and are already using kms.

 Alex

 I'll toss one out: lack of API documentation for driver or application
 developers to use.


 When I last looked at converting ivtvfb to use DRM, KMS, TTM, etc. (to
 possibly get rid of reliance on the ivtv X video driver
 http://dl.ivtvdriver.org/xf86-video-ivtv/ ), I found the documentation
 was really sparse.

 DRM had the most documentation under Documentation/DocBook/drm.tmpl, but
 the userland API wasn't fleshed out.  GEM was talked about a bit in
 there as well, IIRC.

 TTM documentation was essentially non-existant.

 I can't find any KMS documentation either.

 I recall having to read much of the drm code, and having to look at the
 radeon driver, just to tease out what the DRM ioctls needed to do.

 Am I missing a Documentation source for the APIs?

Yes,

My summer of code project's purpose was to create something of a
tutorial for writing a KMS driver. The code, split out into something
like 15 step-by-step patches, and accompanying documentation are
available from Google's website.

http://code.google.com/p/google-summer-of-code-2010-xorg/downloads/detail?name=Matt_Turner.tar.gz

My repository (doesn't include the documentation) is available here:
http://git.kernel.org/?p=linux/kernel/git/mattst88/glint.git;a=summary

There's a 'rebased' branch that contains API changes required for the
code to work with 2.6.37~.

I hope it's useful to you.

I can't imagine how the lack of documentation for a used and tested API
could be a serious reason to write your own. That makes absolutely no
sense to me, so I hope you'll decide to use KMS.

Matt


Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-09 Thread Hans Verkuil
On Wednesday, February 09, 2011 20:00:38 Matt Turner wrote:
 On Wed, Feb 9, 2011 at 7:12 AM, Alex Deucher alexdeuc...@gmail.com wrote:
  On Tue, Feb 8, 2011 at 5:47 PM, Andy Walls awa...@md.metrocast.net wrote:
  On Tue, 2011-02-08 at 10:28 -0500, Alex Deucher wrote:
  On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:
   Just two quick notes. I'll try to do a full review this weekend.
  
   On Tuesday, February 08, 2011 10:30:22 Tomasz Stanislawski wrote:
   ==
Introduction
   ==
  
   The purpose of this RFC is to discuss the driver for a TV output 
   interface
   available in upcoming Samsung SoC. The HW is able to generate digital 
   and
   analog signals. Current version of the driver supports only digital 
   output.
  
   Internally the driver uses videobuf2 framework, and CMA memory 
   allocator.
   Not
   all of them are merged by now, but I decided to post the sources to 
   start
   discussion driver's design.
 
  
   Cisco (i.e. a few colleagues and myself) are working on this. We hope 
   to post
   an RFC by the end of this month. We also have a proposal for CEC 
   support in
   the pipeline.
 
  Any reason to not use the drm kms APIs for modesetting, display
  configuration, and hotplug support?  We already have the
  infrastructure in place for complex display configurations and
  generating events for hotplug interrupts.  It would seem to make more
  sense to me to fix any deficiencies in the KMS APIs than to spin a new
  API.  Things like CEC would be a natural fit since a lot of desktop
  GPUs support hdmi audio/3d/etc. and are already using kms.
 
  Alex
 
  I'll toss one out: lack of API documentation for driver or application
  developers to use.
 
 
  When I last looked at converting ivtvfb to use DRM, KMS, TTM, etc. (to
  possibly get rid of reliance on the ivtv X video driver
  http://dl.ivtvdriver.org/xf86-video-ivtv/ ), I found the documentation
  was really sparse.
 
  DRM had the most documentation under Documentation/DocBook/drm.tmpl, but
  the userland API wasn't fleshed out.  GEM was talked about a bit in
  there as well, IIRC.
 
  TTM documentation was essentially non-existant.
 
  I can't find any KMS documentation either.
 
  I recall having to read much of the drm code, and having to look at the
  radeon driver, just to tease out what the DRM ioctls needed to do.
 
  Am I missing a Documentation source for the APIs?
 
 Yes,
 
 My summer of code project's purpose was to create something of a
 tutorial for writing a KMS driver. The code, split out into something
 like 15 step-by-step patches, and accompanying documentation are
 available from Google's website.
 
 http://code.google.com/p/google-summer-of-code-2010-xorg/downloads/detail?name=Matt_Turner.tar.gz

Nice!

What I still don't understand is whether and how this is controlled from userspace.

Is there some documentation of the userspace API somewhere?

 My repository (doesn't include the documentation) is available here:
 http://git.kernel.org/?p=linux/kernel/git/mattst88/glint.git;a=summary
 
 There's a 'rebased' branch that contains API changes required for the
 code to work with 2.6.37~.
 
 I hope it's useful to you.
 
 I can't image how the lack of documentation of an used and tested API
 could be a serious reason to write you own.

That never was the main reason. It doesn't help, though.

 That makes absolutely no
 sense to me, so I hope you'll decide to use KMS.

No, we won't. A GPU driver != a V4L driver. The primary purpose of a V4L2
display driver is to output discrete frames from memory to some device. This
may be an HDMI transmitter, an SDTV transmitter, a memory-to-memory codec, an
FPGA, whatever. In other words, there is not necessarily a monitor on the other
end. We have had V4L2 APIs to set up video formats for some time now. The original
ioctl was VIDIOC_G/S_STD to select PAL/NTSC/SECAM. The new ones are
VIDIOC_G/S_DV_PRESET, which sets up standard formats (1080p60, 720p60, etc.), and
DV_TIMINGS, which can be used for custom BT.656/1120 digital video timings.
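
To make those ioctls concrete, here is a minimal sketch of selecting a
preset on an output node with the DV preset API as it existed at the
time; /dev/video0 and the chosen preset are assumptions, and error
handling is kept to a perror():

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
        struct v4l2_dv_preset preset;
        int fd = open("/dev/video0", O_RDWR);

        if (fd < 0)
                return 1;

        memset(&preset, 0, sizeof(preset));
        preset.preset = V4L2_DV_720P60;    /* standard 1280x720@60 timing */

        if (ioctl(fd, VIDIOC_S_DV_PRESET, &preset) < 0)
                perror("VIDIOC_S_DV_PRESET");

        return 0;
}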

Trying to mix KMS into the V4L2 API is just a recipe for disaster. Just think
about what it would mean for DRM if DRM would use the V4L2 API for setting
video modes. That would be a disaster as well.

Regards,

Hans

-- 
Hans Verkuil - video4linux developer - sponsored by Cisco


Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-09 Thread Alex Deucher
On Wed, Feb 9, 2011 at 2:43 PM, Hans Verkuil hverk...@xs4all.nl wrote:
 On Wednesday, February 09, 2011 20:00:38 Matt Turner wrote:
 On Wed, Feb 9, 2011 at 7:12 AM, Alex Deucher alexdeuc...@gmail.com wrote:
  On Tue, Feb 8, 2011 at 5:47 PM, Andy Walls awa...@md.metrocast.net wrote:
  On Tue, 2011-02-08 at 10:28 -0500, Alex Deucher wrote:
  On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:
   Just two quick notes. I'll try to do a full review this weekend.
  
   On Tuesday, February 08, 2011 10:30:22 Tomasz Stanislawski wrote:
   ==
    Introduction
   ==
  
   The purpose of this RFC is to discuss the driver for a TV output 
   interface
   available in upcoming Samsung SoC. The HW is able to generate digital 
   and
   analog signals. Current version of the driver supports only digital 
   output.
  
   Internally the driver uses videobuf2 framework, and CMA memory 
   allocator.
   Not
   all of them are merged by now, but I decided to post the sources to 
   start
   discussion driver's design.
 
  
   Cisco (i.e. a few colleagues and myself) are working on this. We hope 
   to post
   an RFC by the end of this month. We also have a proposal for CEC 
   support in
   the pipeline.
 
  Any reason to not use the drm kms APIs for modesetting, display
  configuration, and hotplug support?  We already have the
  infrastructure in place for complex display configurations and
  generating events for hotplug interrupts.  It would seem to make more
  sense to me to fix any deficiencies in the KMS APIs than to spin a new
  API.  Things like CEC would be a natural fit since a lot of desktop
  GPUs support hdmi audio/3d/etc. and are already using kms.
 
  Alex
 
  I'll toss one out: lack of API documentation for driver or application
  developers to use.
 
 
  When I last looked at converting ivtvfb to use DRM, KMS, TTM, etc. (to
  possibly get rid of reliance on the ivtv X video driver
  http://dl.ivtvdriver.org/xf86-video-ivtv/ ), I found the documentation
  was really sparse.
 
  DRM had the most documentation under Documentation/DocBook/drm.tmpl, but
  the userland API wasn't fleshed out.  GEM was talked about a bit in
  there as well, IIRC.
 
  TTM documentation was essentially non-existant.
 
  I can't find any KMS documentation either.
 
  I recall having to read much of the drm code, and having to look at the
  radeon driver, just to tease out what the DRM ioctls needed to do.
 
  Am I missing a Documentation source for the APIs?

 Yes,

 My summer of code project's purpose was to create something of a
 tutorial for writing a KMS driver. The code, split out into something
 like 15 step-by-step patches, and accompanying documentation are
 available from Google's website.

 http://code.google.com/p/google-summer-of-code-2010-xorg/downloads/detail?name=Matt_Turner.tar.gz

 Nice!

 What I still don't understand is if and how this is controlled via userspace.

 Is there some documentation of the userspace API somewhere?

At the moment, it's only used by Xorg ddxes and the plymouth
bootsplash.  For details see:
http://cgit.freedesktop.org/plymouth/tree/src/plugins/renderers/drm
http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/drmmode_display.c


 My repository (doesn't include the documentation) is available here:
 http://git.kernel.org/?p=linux/kernel/git/mattst88/glint.git;a=summary

 There's a 'rebased' branch that contains API changes required for the
 code to work with 2.6.37~.

 I hope it's useful to you.

 I can't image how the lack of documentation of an used and tested API
 could be a serious reason to write you own.

 That never was the main reason. It doesn't help, though.

 That makes absolutely no
 sense to me, so I hope you'll decide to use KMS.

 No, we won't. A GPU driver != a V4L driver. The primary purpose of a V4L2
 display driver is to output discrete frames from memory to some device. This
 may be a HDMI transmitter, a SDTV transmitter, a memory-to-memory codec, an
 FPGA, whatever. In other words, there is not necessarily a monitor on the 
 other
 end. We have for some time now V4L2 APIs to set up video formats. The original
 ioctl was VIDIOC_G/S_STD to select PAL/NTSC/SECAM. The new ones are
 VIDIOC_G/S_DV_PRESETS which set up standard formats (1080p60, 720p60, etc) and
 DV_TIMINGS which can be used for custom bt.656/1120 digital video timings.

 Trying to mix KMS into the V4L2 API is just a recipe for disaster. Just think
 about what it would mean for DRM if DRM would use the V4L2 API for setting
 video modes. That would be a disaster as well.

I still think there's room for cooperation here.  There are GPUs out
there that have a capture interface and a full gamut of display output
options in addition to a 3D engine.  Besides conventional desktop
stuff, you could use this sort of setup to capture frames, run them
through shader-based filters/transforms and render them to memory to
be scanned out by display hardware, or dma'ed to 

Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-09 Thread Andy Walls
On Wed, 2011-02-09 at 02:12 -0500, Alex Deucher wrote:
 On Tue, Feb 8, 2011 at 5:47 PM, Andy Walls awa...@md.metrocast.net wrote:
  On Tue, 2011-02-08 at 10:28 -0500, Alex Deucher wrote:
  On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:
   Just two quick notes. I'll try to do a full review this weekend.
  
   On Tuesday, February 08, 2011 10:30:22 Tomasz Stanislawski wrote:
   ==
Introduction
   ==
  
   The purpose of this RFC is to discuss the driver for a TV output 
   interface
   available in upcoming Samsung SoC. The HW is able to generate digital 
   and
   analog signals. Current version of the driver supports only digital 
   output.
  
   Internally the driver uses videobuf2 framework, and CMA memory 
   allocator.
   Not
   all of them are merged by now, but I decided to post the sources to 
   start
   discussion driver's design.
 
  
   Cisco (i.e. a few colleagues and myself) are working on this. We hope to 
   post
   an RFC by the end of this month. We also have a proposal for CEC support 
   in
   the pipeline.
 
  Any reason to not use the drm kms APIs for modesetting, display
  configuration, and hotplug support?  We already have the
  infrastructure in place for complex display configurations and
  generating events for hotplug interrupts.  It would seem to make more
  sense to me to fix any deficiencies in the KMS APIs than to spin a new
  API.  Things like CEC would be a natural fit since a lot of desktop
  GPUs support hdmi audio/3d/etc. and are already using kms.
 
  Alex
 
  I'll toss one out: lack of API documentation for driver or application
  developers to use.
 
 
  When I last looked at converting ivtvfb to use DRM, KMS, TTM, etc. (to
  possibly get rid of reliance on the ivtv X video driver
  http://dl.ivtvdriver.org/xf86-video-ivtv/ ), I found the documentation
  was really sparse.
 
  DRM had the most documentation under Documentation/DocBook/drm.tmpl, but
  the userland API wasn't fleshed out.  GEM was talked about a bit in
  there as well, IIRC.
 
  TTM documentation was essentially non-existant.
 
  I can't find any KMS documentation either.
 
  I recall having to read much of the drm code, and having to look at the
  radeon driver, just to tease out what the DRM ioctls needed to do.
 
  Am I missing a Documentation source for the APIs?
 
 
 Documentation is somewhat sparse compared to some other APIs.  Mostly
 inline kerneldoc comments in the core functions.  It would be nice to
 improve things.   The modesetting API is very similar to the xrandr
 API in the xserver.
 
 At the moment a device specific surface manager (Xorg ddx, or some
 other userspace lib) is required to use kms due to device specific
 requirements with respect to memory management and alignment for
 acceleration.  The kms modesetting ioctls are common across all kms
 drm drivers, but the memory management ioctls are device specific.
 GEM itself is an Intel-specific memory manager, although radeon uses
 similar ioctls.  TTM is used internally by radeon, nouveau, and svga
 for managing memory gpu accessible memory pools.  Drivers are free to
 use whatever memory manager they want; an existing one shared with a
 v4l or platform driver, TTM, or something new.
   There is no generic
 userspace kms driver/lib although Dave and others have done some work
 to support that, but it's really hard to make a generic interface
 flexible enough to handle all the strange acceleration requirements of
 GPUs.

All of the above unfortunately says to me that the KMS API has a fairly
tightly coupled set of userspace components, because userspace
applications need to have details about the specific underlying hardware
embedded in the application to effectively use the API.

If so, that's not really conducive to getting application developers to
write applications to the API, since applications will get tied to
specific sets of hardware.

Lack of documentation on the API for userspace application writers to use
exacerbates that issue, as there are no clearly stated guarantees on:

        device node conventions
        ioctl's
                arguments and bounds on the arguments
                expected error return values
                behavior on error return and meaning of error return
                which are mandatory, which are optional
                how to perform atomic transactions
        Behavior of other fops:
                Are multiple opens allowed or not?
                Is llseek() meaningful?
                Is poll() meaningful?
                Does final close() deallocate all memory objects?
        sysfs nodes
                location
                expected names
                data formats
        how compliant or not any one driver is with the KMS API
        where drivers are permitted to behave differently and where they are not
        What include files do user space applications need to include?
        What device nodes should be opened?

[PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-08 Thread Tomasz Stanislawski
==
 Introduction
==

The purpose of this RFC is to discuss the driver for a TV output interface
available in upcoming Samsung SoC. The HW is able to generate digital and
analog signals. Current version of the driver supports only digital output.

Internally the driver uses the videobuf2 framework and the CMA memory allocator.  Not
all of them are merged yet, but I decided to post the sources to start a
discussion of the driver's design.

==
 Hardware description
==

The SoC contains a few HW sub-blocks:

1. Video Processor (VP). It is used for processing NV12 data.  An image
stored in RAM is accessed by DMA. Pixels are cropped and scaled. Additionally,
post-processing operations like brightness, sharpness and contrast adjustments
can be performed. The output, in YCbCr444 format, is sent to the Mixer.

2. Mixer (MXR). The piece of hardware responsible for mixing and blending
multiple data inputs before passing them to an output device.  The MXR is capable
of handling up to three image layers. One is the output of the VP.  The other two are
images in RGB format (multiple variants are supported).  The layers are scaled,
cropped and blended with a background color.  The blending factor and the layers'
priority are controlled by the MXR's registers. The output is passed either to HDMI
or TVOUT.

3. HDMI. The piece of HW responsible for generation of HDMI packets. It takes
pixel data from the mixer and transforms it into data frames. The output is sent
to the HDMIPHY interface.

4. HDMIPHY. Physical interface for HDMI. Its duty is to send HDMI packets to the
HDMI connector. Basically, it contains a PLL that produces the source clock for
the Mixer, VP and HDMI during streaming.

5. TVOUT. Generation of TV analog signal. (driver not implemented)

6. VideoDAC. Modulator for TVOUT signal. (driver not implemented)


The diagram below depicts the connections between all HW pieces.
                    +-----------+
NV12 data ---dma--->|   Video   |
                    | Processor |
                    +-----------+
                          |
                          V
                    +-----------+
RGB data  ---dma--->|           |
                    |   Mixer   |
RGB data  ---dma--->|           |
                    +-----------+
                          |
                          * dmux
                         / \
                  +-----*   *-----+
                  |               |
                  V               V
            +-----------+   +-----------+
            |    HDMI   |   |   TVOUT   |
            +-----------+   +-----------+
                  |               |
                  V               V
            +-----------+   +-----------+
            |  HDMIPHY  |   |  VideoDAC |
            +-----------+   +-----------+
                  |               |
                  V               V
                HDMI          Composite
              connector       connector


==
 Driver interface
==

The posted driver implements three V4L2 nodes. Every video node implements V4L2
output buffers. One of the nodes corresponds to the input of the Video Processor.
The other two nodes correspond to the RGB inputs of the Mixer. All nodes share the
same output. It is one of the Mixer's outputs: TVOUT or HDMI. Changing the output
of one layer using S_OUTPUT would change the outputs of all other video nodes. The
same thing happens if one tries to reconfigure the output, e.g. by calling
S_DV_PRESET. However, it is not possible to change or reconfigure the output while
streaming. To sum up, the features in the posted version of the driver are as
follows (a short usage sketch follows the list):

1. QUERYCAP
2. S_FMT, G_FMT - single and multiplanar API
  a) node named video0 supports formats NV12, NV12, NV12T (tiled version of
NV12), NV12MT (multiplane version of NV12T).
  b) nodes named graph0 and graph1 support formats RGB565, ARGB1555, ARGB,
ARGB.
3. Buffers with USERPTR and MMAP memory.
4. Streaming and buffer control. (STREAMON, STREAMOFF, REQBUF, QBUF, DQBUF)
5. OUTPUT enumeration.
6. DV preset control (SET, GET, ENUM). Currently modes 480P59_94, 720P59_94,
1080P30, 1080P59_94 and 1080P60 work.
7. Positioning the layer's window on the output display using S_CROP, G_CROP, CROPCAP.
8. Positioning and cropping data in the buffer using S_CROP, G_CROP, CROPCAP with
buffer type OVERLAY. *
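
For illustration only (not code from the posted driver), here is a
bare-bones sequence an application might use against one of these
nodes, exercising the S_FMT/REQBUF/QBUF/STREAMON calls above with a
single MMAP buffer; the device path, the 720p frame size and the
omitted error checking are assumptions:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

int main(void)
{
        struct v4l2_format fmt;
        struct v4l2_requestbuffers req;
        struct v4l2_buffer buf;
        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        void *mem;
        int fd = open("/dev/video0", O_RDWR);

        if (fd < 0)
                return 1;

        /* set a single-planar NV12 output format */
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        fmt.fmt.pix.width = 1280;
        fmt.fmt.pix.height = 720;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_NV12;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        /* request one MMAP buffer */
        memset(&req, 0, sizeof(req));
        req.count = 1;
        req.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        req.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_REQBUFS, &req);

        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = 0;
        ioctl(fd, VIDIOC_QUERYBUF, &buf);

        /* map the buffer, fill it with NV12 pixel data, then queue it */
        mem = mmap(NULL, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED,
                   fd, buf.m.offset);
        (void)mem;  /* ... write one frame into mem here ... */
        ioctl(fd, VIDIOC_QBUF, &buf);

        ioctl(fd, VIDIOC_STREAMON, &type);
        return 0;
}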

TODOs:
- add analog TVOUT driver
- add S_OUTPUT
- add S_STD ioctl
- add control of alpha blending / chroma keying via V4L2 controls
- add controls for luminance curve and sharpness in VP
- consider exporting all output functionalities to separate video node
- consider media controller framework
- better control over debugging
- fix dependency between all TV drivers

* The need for cropping in source buffers came from a problem with the MFC driver for
S5P. The MFC supports only widths divisible by 64. If the width of a decoded movie
is not aligned to 64 then the padding pixels are filled with zeros. This is an ugly
green color in the YCbCr colorspace. Filling it with zeros by a CPU is 

Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-08 Thread Hans Verkuil
Just two quick notes. I'll try to do a full review this weekend.

On Tuesday, February 08, 2011 10:30:22 Tomasz Stanislawski wrote:
 ==
  Introduction
 ==
 
 The purpose of this RFC is to discuss the driver for a TV output interface
 available in upcoming Samsung SoC. The HW is able to generate digital and
 analog signals. Current version of the driver supports only digital output.
 
 Internally the driver uses videobuf2 framework, and CMA memory allocator.  
Not
 all of them are merged by now, but I decided to post the sources to start
 discussion driver's design.
 
 ==
  Hardware description
 ==
 
 The SoC contains a few HW sub-blocks:
 
 1. Video Processor (VP). It is used for processing of NV12 data.  An image
 stored in RAM is accessed by DMA. Pixels are cropped, scaled. Additionally,
 post processing operations like brightness, sharpness and contrast 
adjustments
 could be performed. The output in YCbCr444 format is send to Mixer.
 
 2. Mixer (MXR). The piece of hardware responsible for mixing and blending
 multiple data inputs before passing it to an output device.  The MXR is 
capable
 of handling up to three image layers. One is the output of VP.  Other two 
are
 images in RGB format (multiple variants are supported).  The layers are 
scaled,
 cropped and blended with background color.  The blending factor, and layers'
 priority are controlled by MXR's registers. The output is passed either to 
HDMI
 or TVOUT.
 
 3. HDMI. The piece of HW responsible for generation of HDMI packets. It 
takes
 pixel data from mixer and transforms it into data frames. The output is send
 to HDMIPHY interface.
 
 4. HDMIPHY. Physical interface for HDMI. Its duties are sending HDMI packets 
to
 HDMI connector. Basically, it contains a PLL that produces source clock for
 Mixer, VP and HDMI during streaming.
 
 5. TVOUT. Generation of TV analog signal. (driver not implemented)
 
 6. VideoDAC. Modulator for TVOUT signal. (driver not implemented)
 
 
 The diagram below depicts connection between all HW pieces.
 +---+
 NV12 data ---dma---|   Video   |
 | Processor |
 +---+
   |
   V
 +---+
 RGB data  ---dma---|   |
 |   Mixer   |
 RGB data  ---dma---|   |
 +---+
   |
   * dmux
  /
   +-*   *--+
   ||
   VV
 +---++---+
 |HDMI   ||   TVOUT   |
 +---++---+
   ||
   VV
 +---++---+
 |  HDMIPHY  ||  VideoDAC |
 +---++---+
   ||
   VV
 HDMI   Composite
  connector connector
 
 
 ==
  Driver interface
 ==
 
 The posted driver implements three V4L2 nodes. Every video node implements 
V4L2
 output buffer. One of nodes corresponds to input of Video Processor. The 
other
 two nodes correspond to RGB inputs of Mixer. All nodes share the same 
output.
 It is one of the Mixer's outputs: TVOUT or HDMI. Changing output in one 
layer
 using S_OUTPUT would change outputs of all other video nodes. The same thing
 happens if one try to reconfigure output i.e. by calling S_DV_PRESET. 
However
 it not possible to change or reconfigure the output while streaming. To sum 
up,
 all features in posted version of driver goes as follows:
 
 1. QUERYCAP
 2. S_FMT, G_FMT - single and multiplanar API
   a) node named video0 supports formats NV12, NV12, NV12T (tiled version of
 NV12), NV12MT (multiplane version of NV12T).
   b) nodes named graph0 and graph1 support formats RGB565, ARGB1555, 
ARGB,
 ARGB.

graph0? Do you perhaps mean fb0? I haven't heard about nodes named 'graph' 
before.

 3. Buffer with USERPTR and MMAP memory.
 4. Streaming and buffer control. (STREAMON, STREAMOFF, REQBUF, QBUF, DQBUF)
 5. OUTPUT enumeration.
 6. DV preset control (SET, GET, ENUM). Currently modes 480P59_94, 720P59_94,
 1080P30, 1080P59_94 and 1080P60 work.
 7. Positioning layer's window on output display using S_CROP, G_GROP, 
CROPCAP.
 8. Positioning and cropping data in buffer using S_CROP, G_GROP, CROPCAP 
with
 buffer type OVERLAY. *
 
 TODOs:
 - add analog TVOUT driver
 - add S_OUTPUT
 - add S_STD ioctl
 - add control of alpha blending / chroma keying via V4L2 controls
 - add controls for luminance curve and sharpness in VP
 - consider exporting all output functionalities to separate video node
 - consider media controller framework
 - better control over 

Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-08 Thread Tomasz Stanislawski

Hans Verkuil wrote:

Just two quick notes. I'll try to do a full review this weekend.

On Tuesday, February 08, 2011 10:30:22 Tomasz Stanislawski wrote:

==
 Introduction
==

The purpose of this RFC is to discuss the driver for a TV output interface
available in upcoming Samsung SoC. The HW is able to generate digital and
analog signals. Current version of the driver supports only digital output.

Internally the driver uses videobuf2 framework, and CMA memory allocator.  

Not

all of them are merged by now, but I decided to post the sources to start
discussion driver's design.

==
 Hardware description
==

The SoC contains a few HW sub-blocks:

1. Video Processor (VP). It is used for processing of NV12 data.  An image
stored in RAM is accessed by DMA. Pixels are cropped, scaled. Additionally,
post processing operations like brightness, sharpness and contrast 

adjustments

could be performed. The output in YCbCr444 format is send to Mixer.

2. Mixer (MXR). The piece of hardware responsible for mixing and blending
multiple data inputs before passing it to an output device.  The MXR is 

capable
of handling up to three image layers. One is the output of VP.  Other two 

are
images in RGB format (multiple variants are supported).  The layers are 

scaled,

cropped and blended with background color.  The blending factor, and layers'
priority are controlled by MXR's registers. The output is passed either to 

HDMI

or TVOUT.

3. HDMI. The piece of HW responsible for generation of HDMI packets. It 

takes

pixel data from mixer and transforms it into data frames. The output is send
to HDMIPHY interface.

4. HDMIPHY. Physical interface for HDMI. Its duties are sending HDMI packets 

to

HDMI connector. Basically, it contains a PLL that produces source clock for
Mixer, VP and HDMI during streaming.

5. TVOUT. Generation of TV analog signal. (driver not implemented)

6. VideoDAC. Modulator for TVOUT signal. (driver not implemented)


The diagram below depicts connection between all HW pieces.
+---+
NV12 data ---dma---|   Video   |
| Processor |
+---+
  |
  V
+---+
RGB data  ---dma---|   |
|   Mixer   |
RGB data  ---dma---|   |
+---+
  |
  * dmux
 /
  +-*   *--+
  ||
  VV
+---++---+
|HDMI   ||   TVOUT   |
+---++---+
  ||
  VV
+---++---+
|  HDMIPHY  ||  VideoDAC |
+---++---+
  ||
  VV
HDMI   Composite
 connector connector


==
 Driver interface
==

The posted driver implements three V4L2 nodes. Every video node implements 

V4L2
output buffer. One of nodes corresponds to input of Video Processor. The 

other
two nodes correspond to RGB inputs of Mixer. All nodes share the same 

output.
It is one of the Mixer's outputs: TVOUT or HDMI. Changing output in one 

layer

using S_OUTPUT would change outputs of all other video nodes. The same thing
happens if one try to reconfigure output i.e. by calling S_DV_PRESET. 

However
it not possible to change or reconfigure the output while streaming. To sum 

up,

all features in posted version of driver goes as follows:

1. QUERYCAP
2. S_FMT, G_FMT - single and multiplanar API
  a) node named video0 supports formats NV12, NV12, NV12T (tiled version of
NV12), NV12MT (multiplane version of NV12T).
  b) nodes named graph0 and graph1 support formats RGB565, ARGB1555, 

ARGB,

ARGB.


graph0? Do you perhaps mean fb0? I haven't heard about nodes names 'graph' 
before.



Hello,
Of course all nodes are named using the /dev/video* pattern. By video0, 
graph0 and graph1 I mean struct video_device.name, the internal name of a 
mixer's layer.


Regards,
Tomasz Stanislawski





3. Buffer with USERPTR and MMAP memory.
4. Streaming and buffer control. (STREAMON, STREAMOFF, REQBUF, QBUF, DQBUF)
5. OUTPUT enumeration.
6. DV preset control (SET, GET, ENUM). Currently modes 480P59_94, 720P59_94,
1080P30, 1080P59_94 and 1080P60 work.
7. Positioning layer's window on output display using S_CROP, G_GROP, 

CROPCAP.
8. Positioning and cropping data in buffer using S_CROP, G_GROP, CROPCAP 

with

buffer type OVERLAY. *

TODOs:
- add analog TVOUT driver
- add S_OUTPUT
- add S_STD ioctl
- add control of alpha blending / chroma keying via V4L2 controls
- add controls for luminance curve and sharpness in VP
- 

Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-08 Thread Alex Deucher
On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:
 Just two quick notes. I'll try to do a full review this weekend.

 On Tuesday, February 08, 2011 10:30:22 Tomasz Stanislawski wrote:
 ==
  Introduction
 ==

 The purpose of this RFC is to discuss the driver for a TV output interface
 available in upcoming Samsung SoC. The HW is able to generate digital and
 analog signals. Current version of the driver supports only digital output.

 Internally the driver uses videobuf2 framework, and CMA memory allocator.
 Not
 all of them are merged by now, but I decided to post the sources to start
 discussion driver's design.

 ==
  Hardware description
 ==

 The SoC contains a few HW sub-blocks:

 1. Video Processor (VP). It is used for processing of NV12 data.  An image
 stored in RAM is accessed by DMA. Pixels are cropped, scaled. Additionally,
 post processing operations like brightness, sharpness and contrast
 adjustments
 could be performed. The output in YCbCr444 format is send to Mixer.

 2. Mixer (MXR). The piece of hardware responsible for mixing and blending
 multiple data inputs before passing it to an output device.  The MXR is
 capable
 of handling up to three image layers. One is the output of VP.  Other two
 are
 images in RGB format (multiple variants are supported).  The layers are
 scaled,
 cropped and blended with background color.  The blending factor, and layers'
 priority are controlled by MXR's registers. The output is passed either to
 HDMI
 or TVOUT.

 3. HDMI. The piece of HW responsible for generation of HDMI packets. It
 takes
 pixel data from mixer and transforms it into data frames. The output is send
 to HDMIPHY interface.

 4. HDMIPHY. Physical interface for HDMI. Its duties are sending HDMI packets
 to
 HDMI connector. Basically, it contains a PLL that produces source clock for
 Mixer, VP and HDMI during streaming.

 5. TVOUT. Generation of TV analog signal. (driver not implemented)

 6. VideoDAC. Modulator for TVOUT signal. (driver not implemented)


 The diagram below depicts connection between all HW pieces.
                     +---+
 NV12 data ---dma---|   Video   |
                     | Processor |
                     +---+
                           |
                           V
                     +---+
 RGB data  ---dma---|           |
                     |   Mixer   |
 RGB data  ---dma---|           |
                     +---+
                           |
                           * dmux
                          /
                   +-*   *--+
                   |                |
                   V                V
             +---+    +---+
             |    HDMI   |    |   TVOUT   |
             +---+    +---+
                   |                |
                   V                V
             +---+    +---+
             |  HDMIPHY  |    |  VideoDAC |
             +---+    +---+
                   |                |
                   V                V
                 HDMI           Composite
              connector         connector


 ==
  Driver interface
 ==

 The posted driver implements three V4L2 nodes. Every video node implements
 V4L2
 output buffer. One of nodes corresponds to input of Video Processor. The
 other
 two nodes correspond to RGB inputs of Mixer. All nodes share the same
 output.
 It is one of the Mixer's outputs: TVOUT or HDMI. Changing output in one
 layer
 using S_OUTPUT would change outputs of all other video nodes. The same thing
 happens if one try to reconfigure output i.e. by calling S_DV_PRESET.
 However
 it not possible to change or reconfigure the output while streaming. To sum
 up,
 all features in posted version of driver goes as follows:

 1. QUERYCAP
 2. S_FMT, G_FMT - single and multiplanar API
   a) node named video0 supports formats NV12, NV12, NV12T (tiled version of
 NV12), NV12MT (multiplane version of NV12T).
   b) nodes named graph0 and graph1 support formats RGB565, ARGB1555,
 ARGB,
 ARGB.

 graph0? Do you perhaps mean fb0? I haven't heard about nodes names 'graph'
 before.

 3. Buffer with USERPTR and MMAP memory.
 4. Streaming and buffer control. (STREAMON, STREAMOFF, REQBUF, QBUF, DQBUF)
 5. OUTPUT enumeration.
 6. DV preset control (SET, GET, ENUM). Currently modes 480P59_94, 720P59_94,
 1080P30, 1080P59_94 and 1080P60 work.
 7. Positioning layer's window on output display using S_CROP, G_GROP,
 CROPCAP.
 8. Positioning and cropping data in buffer using S_CROP, G_GROP, CROPCAP
 with
 buffer type OVERLAY. *

 TODOs:
 - add analog TVOUT driver
 - add S_OUTPUT
 - add S_STD ioctl
 - add control of alpha blending / chroma keying via V4L2 controls
 - add controls for luminance curve and sharpness in VP
 - consider exporting all output functionalities to separate video node
 - consider 

Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-08 Thread Andy Walls
On Tue, 2011-02-08 at 10:28 -0500, Alex Deucher wrote:
 On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:
  Just two quick notes. I'll try to do a full review this weekend.
 
  On Tuesday, February 08, 2011 10:30:22 Tomasz Stanislawski wrote:
  ==
   Introduction
  ==
 
  The purpose of this RFC is to discuss the driver for a TV output interface
  available in upcoming Samsung SoC. The HW is able to generate digital and
  analog signals. Current version of the driver supports only digital output.
 
  Internally the driver uses videobuf2 framework, and CMA memory allocator.
  Not
  all of them are merged by now, but I decided to post the sources to start
  discussion driver's design.

 
  Cisco (i.e. a few colleagues and myself) are working on this. We hope to 
  post
  an RFC by the end of this month. We also have a proposal for CEC support in
  the pipeline.
 
 Any reason to not use the drm kms APIs for modesetting, display
 configuration, and hotplug support?  We already have the
 infrastructure in place for complex display configurations and
 generating events for hotplug interrupts.  It would seem to make more
 sense to me to fix any deficiencies in the KMS APIs than to spin a new
 API.  Things like CEC would be a natural fit since a lot of desktop
 GPUs support hdmi audio/3d/etc. and are already using kms.
 
 Alex

I'll toss one out: lack of API documentation for driver or application
developers to use.


When I last looked at converting ivtvfb to use DRM, KMS, TTM, etc. (to
possibly get rid of reliance on the ivtv X video driver
http://dl.ivtvdriver.org/xf86-video-ivtv/ ), I found the documentation
was really sparse.

DRM had the most documentation under Documentation/DocBook/drm.tmpl, but
the userland API wasn't fleshed out.  GEM was talked about a bit in
there as well, IIRC.

TTM documentation was essentially non-existent.

I can't find any KMS documentation either.

I recall having to read much of the drm code, and having to look at the
radeon driver, just to tease out what the DRM ioctls needed to do.

Am I missing a Documentation source for the APIs?



For V4L2 and DVB, on the other hand, one can point to pretty verbose
documentation that application developers can use:

http://linuxtv.org/downloads/v4l-dvb-apis/



Regards,
Andy




Re: [PATCH/RFC 0/5] HDMI driver for Samsung S5PV310 platform

2011-02-08 Thread Alex Deucher
On Tue, Feb 8, 2011 at 5:47 PM, Andy Walls awa...@md.metrocast.net wrote:
 On Tue, 2011-02-08 at 10:28 -0500, Alex Deucher wrote:
 On Tue, Feb 8, 2011 at 4:47 AM, Hans Verkuil hansv...@cisco.com wrote:
  Just two quick notes. I'll try to do a full review this weekend.
 
  On Tuesday, February 08, 2011 10:30:22 Tomasz Stanislawski wrote:
  ==
   Introduction
  ==
 
  The purpose of this RFC is to discuss the driver for a TV output interface
  available in upcoming Samsung SoC. The HW is able to generate digital and
  analog signals. Current version of the driver supports only digital 
  output.
 
  Internally the driver uses videobuf2 framework, and CMA memory allocator.
  Not
  all of them are merged by now, but I decided to post the sources to start
  discussion driver's design.

 
  Cisco (i.e. a few colleagues and myself) are working on this. We hope to 
  post
  an RFC by the end of this month. We also have a proposal for CEC support in
  the pipeline.

 Any reason to not use the drm kms APIs for modesetting, display
 configuration, and hotplug support?  We already have the
 infrastructure in place for complex display configurations and
 generating events for hotplug interrupts.  It would seem to make more
 sense to me to fix any deficiencies in the KMS APIs than to spin a new
 API.  Things like CEC would be a natural fit since a lot of desktop
 GPUs support hdmi audio/3d/etc. and are already using kms.

 Alex

 I'll toss one out: lack of API documentation for driver or application
 developers to use.


 When I last looked at converting ivtvfb to use DRM, KMS, TTM, etc. (to
 possibly get rid of reliance on the ivtv X video driver
 http://dl.ivtvdriver.org/xf86-video-ivtv/ ), I found the documentation
 was really sparse.

 DRM had the most documentation under Documentation/DocBook/drm.tmpl, but
 the userland API wasn't fleshed out.  GEM was talked about a bit in
 there as well, IIRC.

 TTM documentation was essentially non-existant.

 I can't find any KMS documentation either.

 I recall having to read much of the drm code, and having to look at the
 radeon driver, just to tease out what the DRM ioctls needed to do.

 Am I missing a Documentation source for the APIs?


Documentation is somewhat sparse compared to some other APIs; it's mostly
inline kerneldoc comments in the core functions.  It would be nice to
improve things.  The modesetting API is very similar to the xrandr
API in the xserver.

At the moment a device specific surface manager (Xorg ddx, or some
other userspace lib) is required to use kms due to device specific
requirements with respect to memory management and alignment for
acceleration.  The kms modesetting ioctls are common across all kms
drm drivers, but the memory management ioctls are device specific.
GEM itself is an Intel-specific memory manager, although radeon uses
similar ioctls.  TTM is used internally by radeon, nouveau, and svga
for managing GPU-accessible memory pools.  Drivers are free to
use whatever memory manager they want: an existing one shared with a
v4l or platform driver, TTM, or something new.  There is no generic
userspace kms driver/lib, although Dave and others have done some work
to support that; it's really hard to make a generic interface
flexible enough to handle all the strange acceleration requirements of
GPUs.  kms does, however, provide a legacy kernel fb interface.

While the documentation is not great, the modesetting API is solid and
it would be nice to get more people involved and working on it (or at
least looking at it) rather than starting something equivalent from
scratch or implementing a device specific modesetting API.  If you
have any questions about it, please ask on dri-devel (CCed).

Alex



 For V4L2 and DVB on ther other hand, one can point to pretty verbose
 documentation that application developers can use:

        http://linuxtv.org/downloads/v4l-dvb-apis/



 Regards,
 Andy


