Re: [Intel-gfx] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)

2014-05-20 Thread Daniel Vetter
Also adding dri-devel and linux-media. Please don't forget these lists for
the next round.
-Daniel

On Tue, May 20, 2014 at 12:02:04PM +0200, Daniel Vetter wrote:
 Adding Greg just as an fyi since we've chatted briefly about the avsink
 bus. Comments below.
 -Daniel
 
 On Tue, May 20, 2014 at 02:52:19AM +, Lin, Mengdong wrote:
  This RFC is based on previous discussions about setting up a generic
  communication channel between the display and audio drivers, and on an
  internal design of the Intel MCG/VPG HDMI audio driver. It's still an
  initial draft and your advice would be appreciated to improve the design.
  
  The basic idea is to create a new avsink module and let both drm and alsa
  depend on it. This new module provides a framework and APIs for
  synchronization between the display and audio drivers.
  
  1. Display/Audio Client
  
  The avsink core provides APIs to create, register and look up a
  display/audio client.
  A specific display driver (e.g. i915) or audio driver (e.g. the HD-Audio
  driver) can create a client, add some resource objects (shared power wells,
  display outputs and audio inputs, register ops) to the client, and then
  register this client with the avsink core. The peer driver can look up a
  registered client by name, type, or both. If a client gives a valid peer
  client name on registration, the avsink core will bind the two clients as
  peers of each other. We expect a display client and an audio client to be
  peers of each other in a system.
  
  int avsink_new_client(const char *name,
                        int type,              /* client type: display or audio */
                        struct module *module,
                        void *context,
                        const char *peer_name,
                        struct avsink_client **client_ret);
  
  int avsink_free_client(struct avsink_client *client);
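  
  For illustration, a minimal sketch of how a display driver might use this
  proposed API from its init path. The AVSINK_CLIENT_DISPLAY constant, the
  client/peer names and the error handling are assumptions for the sketch,
  not part of the RFC:
  
	/* Hypothetical display-side usage (sketch only). */
	static struct avsink_client *i915_avsink;

	static int i915_avsink_init(void *i915_private)
	{
		int ret;

		ret = avsink_new_client("i915-display",
					AVSINK_CLIENT_DISPLAY,	/* assumed type constant */
					THIS_MODULE,
					i915_private,
					"hda-audio",		/* expected peer client name */
					&i915_avsink);
		if (ret)
			return ret;

		ret = avsink_register_client(i915_avsink);
		if (ret)
			avsink_free_client(i915_avsink);
		return ret;
	}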
 
 
 Hm, my idea was to create a new avsink bus and let vga drivers register
 devices on that thing and audio drivers register as drivers. There's a bit
 more work involved in creating a full-blown bus, but it has a lot of
 upsides:
 - Established infrastructure for matching drivers (i.e. audio drivers)
   against devices (i.e. avsinks exported by gfx drivers).
 - Module refcounting.
 - Power domain handling that is well integrated into runtime pm.
 - Allows integration into componentized device framework since we're
   dealing with a real struct device.
 - Better decoupling between gfx and audio side since registration is done
   at runtime.
 - We can attach driver-private data which the audio driver needs.
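 
 As a rough illustration of this alternative (all names below are invented for
 the sketch; no avsink bus exists yet), such a bus could be built directly on
 the driver core:
 
	/* Sketch only: a minimal avsink bus using the Linux driver core. */
	#include <linux/device.h>
	#include <linux/module.h>
	#include <linux/string.h>

	/* Match audio drivers against avsink devices by name (assumed policy). */
	static int avsink_match(struct device *dev, struct device_driver *drv)
	{
		return strcmp(dev_name(dev), drv->name) == 0;
	}

	static struct bus_type avsink_bus_type = {
		.name	= "avsink",
		.match	= avsink_match,
	};

	static int __init avsink_bus_init(void)
	{
		return bus_register(&avsink_bus_type);
	}

	/* A gfx driver would then device_register() one avsink device per
	 * audio-capable output, and an audio driver would driver_register()
	 * a driver on this bus, getting matching, module refcounting and
	 * runtime pm integration from the driver core. */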
 
  int avsink_register_client(struct avsink_client *client);
  int avsink_unregister_client(int client_handle);
  
  struct avsink_client *avsink_lookup_client(const char *name, int type);
  
  struct avsink_client {
   const char *name;  /* client name */
   int type;           /* client type */
   void *context;
   struct module *module;  /* top-level module for locking */
  
   struct avsink_client *peer;  /* peer client */
  
   /* shared power wells */
   struct avsink_power_well *power_well;
 
 We need to have a struct power_domain here so that we can do proper runtime
 pm. But like I've said above, I think we actually want a full-blown struct
 device.
 
   int num_power_wells;
  
   /* endpoints: display outputs or audio inputs */
   struct avsink_endpoint *endpoint;
   int num_endpoints;
  
   struct avsink_registers_ops *reg_ops; /* ops to access a client's registers */
   void *private_data;
   ...
  };
 
 I think you're indeed implementing a full blown bus here ;-)
 
 avsink-client = bus devices/children
 avsink-peer = driver for all this stuff
 avsink-power_well = runtime pm support for the avsink bus
 avsink-reg_ops = driver bind/unbind support
 
  On system boot, the avsink module is loaded before the display and audio
  driver modules, and the display and audio drivers may be loaded in parallel.
  * If a specific display driver (e.g. i915) supports avsink, it can create a
    display client, add power wells and display outputs to the client, and
    then register the display client with the avsink core. Then it may look up
    whether any audio client is registered, by name or type, and may find an
    audio client registered by some audio driver.
  
  * If an audio driver supports avsink, it usually should look up a registered
    display client by name or type first, because it may need the shared power
    well in the GPU and must check the display outputs' names to bind the
    audio inputs. If the display client is not registered yet, the audio
    driver can choose to wait (maybe in a work queue) or return -EAGAIN for a
    deferred probe. After the display client is found, the audio driver can
    register an audio client with the display client's name as the peer name,
    and the avsink core will bind the display and audio clients as peers.

Re: [Intel-gfx] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)

2014-05-20 Thread Thierry Reding
On Tue, May 20, 2014 at 12:04:38PM +0200, Daniel Vetter wrote:
 Also adding dri-devel and linux-media. Please don't forget these lists for
 the next round.
 -Daniel
 
 On Tue, May 20, 2014 at 12:02:04PM +0200, Daniel Vetter wrote:
  Adding Greg just as an fyi since we've chatted briefly about the avsink
  bus. Comments below.
  -Daniel
  
  On Tue, May 20, 2014 at 02:52:19AM +, Lin, Mengdong wrote:
   This RFC is based on previous discussions about setting up a generic
   communication channel between the display and audio drivers, and on an
   internal design of the Intel MCG/VPG HDMI audio driver. It's still an
   initial draft and your advice would be appreciated to improve the design.
   
   The basic idea is to create a new avsink module and let both drm and alsa
   depend on it. This new module provides a framework and APIs for
   synchronization between the display and audio drivers.
   
   1. Display/Audio Client
   
   The avsink core provides APIs to create, register and look up a
   display/audio client.
   A specific display driver (e.g. i915) or audio driver (e.g. the HD-Audio
   driver) can create a client, add some resource objects (shared power
   wells, display outputs and audio inputs, register ops) to the client, and
   then register this client with the avsink core. The peer driver can look
   up a registered client by name, type, or both. If a client gives a valid
   peer client name on registration, the avsink core will bind the two
   clients as peers of each other. We expect a display client and an audio
   client to be peers of each other in a system.
   
   int avsink_new_client(const char *name,
                         int type,              /* client type: display or audio */
                         struct module *module,
                         void *context,
                         const char *peer_name,
                         struct avsink_client **client_ret);
   
   int avsink_free_client(struct avsink_client *client);
  
  
  Hm, my idea was to create a new avsink bus and let vga drivers register
  devices on that thing and audio drivers register as drivers. There's a bit
  more work involved in creating a full-blown bus, but it has a lot of
  upsides:
  - Established infrastructure for matching drivers (i.e. audio drivers)
against devices (i.e. avsinks exported by gfx drivers).
  - Module refcounting.
  - Power domain handling that is well integrated into runtime pm.
  - Allows integration into componentized device framework since we're
dealing with a real struct device.
  - Better decoupling between gfx and audio side since registration is done
at runtime.
  - We can attach driver-private data which the audio driver needs.

I think this would be another case where the interface framework[0]
could potentially be used. It doesn't give you all of the above, but
there's no reason it couldn't be extended. Then again, adding too much
would end up duplicating more of the driver core, so if something really
heavy-weight is required here, then the interface framework is not the
best option.

[0]: https://lkml.org/lkml/2014/5/13/525

   On system boot, the avsink module is loaded before the display and audio
   driver modules, and the display and audio drivers may be loaded in
   parallel.
   * If a specific display driver (e.g. i915) supports avsink, it can create
     a display client, add power wells and display outputs to the client, and
     then register the display client with the avsink core. Then it may look
     up whether any audio client is registered, by name or type, and may find
     an audio client registered by some audio driver.
   
   * If an audio driver supports avsink, it usually should look up a
     registered display client by name or type first, because it may need the
     shared power well in the GPU and must check the display outputs' names
     to bind the audio inputs. If the display client is not registered yet,
     the audio driver can choose to wait (maybe in a work queue) or return
     -EAGAIN for a deferred probe. After the display client is found, the
     audio driver can register an audio client with

-EPROBE_DEFER is the correct error code for deferred probing.
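
For illustration, the audio-side lookup and deferred probe might then look
roughly like this. avsink_lookup_client(), avsink_new_client() and
avsink_register_client() are the RFC's proposed calls; the platform_device
probe shape, the names and the AVSINK_CLIENT_* constants are assumptions for
the sketch:

	/* Sketch: audio driver deferring its probe until the display client exists. */
	static int hda_avsink_probe(struct platform_device *pdev)
	{
		struct avsink_client *display, *audio;
		int ret;

		display = avsink_lookup_client("i915-display", AVSINK_CLIENT_DISPLAY);
		if (!display)
			return -EPROBE_DEFER;	/* retried once more drivers have probed */

		/* Register our own client, naming the display client as our peer
		 * so the avsink core binds the two together. */
		ret = avsink_new_client("hda-audio", AVSINK_CLIENT_AUDIO,
					THIS_MODULE, pdev, display->name, &audio);
		if (ret)
			return ret;

		return avsink_register_client(audio);
	}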

   6. Display register operations (optional)
   
   Some audio drivers need to access GPU audio registers. The register ops
   are provided by the peer display client.
   
   struct avsink_registers_ops {
   	int (*read_register)(uint32_t reg_addr, uint32_t *data, void *context);
   	int (*write_register)(uint32_t reg_addr, uint32_t data, void *context);
   	int (*read_modify_register)(uint32_t reg_addr, uint32_t data,
   				    uint32_t mask, void *context);
   };
   
   int avsink_define_reg_ops(struct avsink_client *client,
   			     struct avsink_registers_ops *ops);
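   
   For illustration, a display driver might fill in and publish these ops
   roughly as follows (the i915_* helpers and names are hypothetical; only
   the avsink_* calls and types come from this RFC):
   
	/* Sketch: display-side implementation of the register ops. */
	static int i915_avsink_read_reg(uint32_t reg_addr, uint32_t *data,
					void *context)
	{
		/* Assumed helper reading a GPU audio register for 'context'. */
		*data = i915_read_audio_reg(context, reg_addr);
		return 0;
	}

	static struct avsink_registers_ops i915_avsink_reg_ops = {
		.read_register		= i915_avsink_read_reg,
		/* .write_register and .read_modify_register filled in similarly */
	};

	/* Publish the ops so the peer audio client can use them. */
	avsink_define_reg_ops(i915_avsink, &i915_avsink_reg_ops);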
   
   And the avsink core provides APIs for the audio driver to access the
   display registers:
   
   int avsink_read_display_register(struct avsink_client *client ,