Re: [alsa-devel] [Patch V4 00/10] ASoC: QCOM: Add support for ipq806x SOC
On Wed, Feb 11, 2015 at 11:20:33PM -0800, Patrick Lai wrote:
> On 2/11/2015 6:53 PM, Mark Brown wrote:
> > On Wed, Feb 11, 2015 at 05:05:52PM -0800, Kenneth Westfield wrote:

> > > Replacing DSP-based drivers with LPASS-based drivers would be
> > > something that should be handled by Kconfig selections. For the
> > > DT, the DSP-related

> > No, it shouldn't be. We should have the ability to build a single
> > kernel image which will run on many systems, including both your
> > system with a DSP and other systems without.

> Is there an expectation that the DTB flashed onto the system would
> define nodes to bind with both the LPASS-based driver and the
> DSP-based driver? I hope not, as we want to keep the LPASS-based
> driver and the DSP-based driver mutually exclusive.

DTB-time selections are a separate thing to Kconfig changes like
Kenneth was proposing. They're more viable, though it'd be a lot better
to avoid needing them; designing out the possibility of doing something
is often a sure-fire way of finding a user.

> > The selection of DSP use sounds like something which isn't part of
> > the description of the hardware but rather a runtime policy decision
> > (at least in so far as non-DSP is ever an option).

> Putting aside IPQ8064, I would say it is actually more of a build-time
> policy decision for QC SoCs with a DSP. The XPU is programmed by
> TrustZone to allow certain LPASS registers to be accessed by the
> chosen processor. If the ADSP and the application processor both have
> to have access to the audio interfaces and DMA, the resource partition
> (i.e. how many DMA channels go to the ADSP and how many go to the
> application processor) is decided after analyzing the expected
> concurrency use cases. For the case of 8016, the MDSP would simply
> expect that it has access to the whole audio subsystem except the
> digital core of the CODEC.

But is it possible to configure the TrustZone firmware to leave things
open? That's the tricky case.

Actually, can we read the configuration TrustZone did? That might be
the best answer here: the DT can describe the silicon and then we can
check at runtime which bits of it we're actually allowed to talk to.
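If the runtime-check approach Mark suggests were adopted, the binding could stay purely descriptive. The sketch below only illustrates that idea; the node name, register address, and the commented-out property are hypothetical, not part of any existing binding:

```dts
/* Hypothetical sketch: the DT describes the whole LPAIF and does not
 * encode the ADSP/app-processor split. Which DMA channels the kernel
 * may actually use would be discovered at runtime, by reading back
 * the XPU configuration that TrustZone programmed. */
lpass: lpass@28100000 {
	compatible = "qcom,lpass-cpu";	/* driver name used in this series */
	reg = <0x28100000 0x10000>;	/* address illustrative */

	/* Deliberately absent: a property such as
	 *   qcom,adsp-dma-channels = <0 1 2 3>;
	 * which would bake Patrick's build-time partition into the DT. */
};
```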
Re: [alsa-devel] [Patch V4 00/10] ASoC: QCOM: Add support for ipq806x SOC
On Tue, Feb 10, 2015 at 06:26:34PM -0800, Mark Brown wrote:
> On Sun, Feb 08, 2015 at 10:45:11PM -0800, Kenneth Westfield wrote:
> > On Sat, Feb 07, 2015 at 06:32:29AM +0800, Mark Brown wrote:

> > > I'd really like to see some discussion as to how this is all
> > > supposed to be handled - how will these direct hardware access
> > > drivers and device trees work when someone does want to use the
> > > DSP (without causing problems), and how will we transition from
> > > one to the other. This is particularly pressing if there are use
> > > cases where people will want to switch between the two modes at
> > > runtime. What I'm trying to avoid here is being in a situation
> > > where we have existing stable DT bindings which we have to support
> > > but which conflict with the way that people want to use the
> > > systems.

> > The ipq806x SOC has no LPASS DSP. On SOCs with a DSP, these drivers
> > would not be enabled.

> OK, but I'm guessing that they're using the same IP that is in other
> SoCs which do have the DSP, so even if you don't care for this device
> it might still be an issue.

> > These drivers are prefixed with lpass to differentiate themselves
> > from other drivers that would interact with a DSP, rather than the
> > LPASS hardware directly.

> Right, it may be that all that's needed here is some indication as to
> how to describe a system which *does* have a DSP. Perhaps require that
> the devices be children of the DSP; that way, if people don't want to
> use the DSP but do want to access the hardware directly, they can load
> a dummy driver for the DSP that just passes things through.

Replacing DSP-based drivers with LPASS-based drivers would be something
that should be handled by Kconfig selections. For the DT, the
DSP-related nodes and the LPASS-related nodes shouldn't overlap. There
should be a DSP-based DT binding and a separate LPASS-based DT binding.
Tying one or the other to the sound node (but not both) should work.

--
Kenneth Westfield
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora
Forum, a Linux Foundation Collaborative Project

--
To unsubscribe from this list: send the line unsubscribe devicetree in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [alsa-devel] [Patch V4 00/10] ASoC: QCOM: Add support for ipq806x SOC
On Wed, Feb 11, 2015 at 05:05:52PM -0800, Kenneth Westfield wrote:
> Replacing DSP-based drivers with LPASS-based drivers would be
> something that should be handled by Kconfig selections. For the DT,
> the DSP-related

No, it shouldn't be. We should have the ability to build a single
kernel image which will run on many systems, including both your system
with a DSP and other systems without.

> nodes and the LPASS-related nodes shouldn't overlap. There should be
> a DSP-based DT binding and a separate LPASS-based DT binding. Tying
> one or the other to the sound node (but not both) should work.

The selection of DSP use sounds like something which isn't part of the
description of the hardware but rather a runtime policy decision (at
least in so far as non-DSP is ever an option).
Re: [alsa-devel] [Patch V4 00/10] ASoC: QCOM: Add support for ipq806x SOC
On 2/11/2015 6:53 PM, Mark Brown wrote:
> On Wed, Feb 11, 2015 at 05:05:52PM -0800, Kenneth Westfield wrote:
> > Replacing DSP-based drivers with LPASS-based drivers would be
> > something that should be handled by Kconfig selections. For the
> > DT, the DSP-related

> No, it shouldn't be. We should have the ability to build a single
> kernel image which will run on many systems, including both your
> system with a DSP and other systems without.

Is there an expectation that the DTB flashed onto the system would
define nodes to bind with both the LPASS-based driver and the DSP-based
driver? I hope not, as we want to keep the LPASS-based driver and the
DSP-based driver mutually exclusive.

> > nodes and the LPASS-related nodes shouldn't overlap. There should
> > be a DSP-based DT binding and a separate LPASS-based DT binding.
> > Tying one or the other to the sound node (but not both) should work.

> The selection of DSP use sounds like something which isn't part of
> the description of the hardware but rather a runtime policy decision
> (at least in so far as non-DSP is ever an option).

Putting aside IPQ8064, I would say it is actually more of a build-time
policy decision for QC SoCs with a DSP. The XPU is programmed by
TrustZone to allow certain LPASS registers to be accessed by the chosen
processor. If the ADSP and the application processor both have to have
access to the audio interfaces and DMA, the resource partition (i.e.
how many DMA channels go to the ADSP and how many go to the application
processor) is decided after analyzing the expected concurrency use
cases. For the case of 8016, the MDSP would simply expect that it has
access to the whole audio subsystem except the digital core of the
CODEC.

Thanks
Patrick

--
Patrick Lai
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora
Forum, a Linux Foundation Collaborative Project
Re: [Patch V4 00/10] ASoC: QCOM: Add support for ipq806x SOC
On Sun, Feb 08, 2015 at 10:45:11PM -0800, Kenneth Westfield wrote:
> On Sat, Feb 07, 2015 at 06:32:29AM +0800, Mark Brown wrote:
> > I'd really like to see some discussion as to how this is all
> > supposed to be handled - how will these direct hardware access
> > drivers and device trees work when someone does want to use the DSP
> > (without causing problems), and how will we transition from one to
> > the other. This is particularly pressing if there are use cases
> > where people will want to switch between the two modes at runtime.
> > What I'm trying to avoid here is being in a situation where we have
> > existing stable DT bindings which we have to support but which
> > conflict with the way that people want to use the systems.

> The ipq806x SOC has no LPASS DSP. On SOCs with a DSP, these drivers
> would not be enabled.

OK, but I'm guessing that they're using the same IP that is in other
SoCs which do have the DSP, so even if you don't care for this device
it might still be an issue.

> These drivers are prefixed with lpass to differentiate themselves
> from other drivers that would interact with a DSP, rather than the
> LPASS hardware directly.

Right, it may be that all that's needed here is some indication as to
how to describe a system which *does* have a DSP. Perhaps require that
the devices be children of the DSP; that way, if people don't want to
use the DSP but do want to access the hardware directly, they can load
a dummy driver for the DSP that just passes things through.
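Mark's children-of-the-DSP suggestion above could be pictured roughly as below. This is only a sketch of the structural idea; every node name, compatible string, and address here is hypothetical, not a proposed binding:

```dts
/* DSP-centric description: the audio interface hardware sits under
 * the DSP node. A system that wants direct hardware access would bind
 * a dummy pass-through driver to the adsp node instead of the real
 * firmware driver, leaving the child lpaif device reachable. */
adsp: adsp {
	compatible = "qcom,adsp";		/* hypothetical */

	lpaif: lpaif@28100000 {
		compatible = "qcom,lpass-lpaif";	/* hypothetical */
		reg = <0x28100000 0x10000>;	/* address illustrative */
	};
};
```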
Re: [Patch V4 00/10] ASoC: QCOM: Add support for ipq806x SOC
On Sat, Feb 07, 2015 at 06:32:29AM +0800, Mark Brown wrote:
> On Thu, Feb 05, 2015 at 12:53:36PM -0800, Kenneth Westfield wrote:
> > This patch series adds support for I2S audio playback on the
> > Qualcomm Technologies ipq806x SOC. The ipq806x SOC has audio-related
> > hardware blocks in its low-power audio subsystem (or LPASS). One of
> > the relevant blocks in the LPASS is its low-power audio interface
> > (or LPAIF). This contains an MI2S port, which is what these drivers
> > are configured to use. The LPAIF also contains a DMA engine that is
> > dedicated to moving audio samples into the transmit FIFO of the MI2S
> > port. In addition, there is also low-power memory (LPM) within the
> > audio subsystem, which is used for buffering the audio samples.

> This is implementing an AP-centric audio system design where the AP
> directly programs all the audio hardware. Given that pretty much all
> public Qualcomm systems use a DSP-centric model, where the AP
> interacts only with a DSP which deals with DMA and the physical
> interfaces, it seems reasonable to suppose that this system also has
> a DSP which at some future point people are likely to want to use.

> I'd really like to see some discussion as to how this is all supposed
> to be handled - how will these direct hardware access drivers and
> device trees work when someone does want to use the DSP (without
> causing problems), and how will we transition from one to the other.
> This is particularly pressing if there are use cases where people
> will want to switch between the two modes at runtime. What I'm trying
> to avoid here is being in a situation where we have existing stable
> DT bindings which we have to support but which conflict with the way
> that people want to use the systems.

The ipq806x SOC has no LPASS DSP. On SOCs with a DSP, these drivers
would not be enabled. These drivers are prefixed with lpass to
differentiate themselves from other drivers that would interact with a
DSP, rather than the LPASS hardware directly.

--
Kenneth Westfield
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora
Forum, a Linux Foundation Collaborative Project
Re: [Patch V4 00/10] ASoC: QCOM: Add support for ipq806x SOC
On Thu, Feb 05, 2015 at 12:53:36PM -0800, Kenneth Westfield wrote:
> This patch series adds support for I2S audio playback on the Qualcomm
> Technologies ipq806x SOC. The ipq806x SOC has audio-related hardware
> blocks in its low-power audio subsystem (or LPASS). One of the
> relevant blocks in the LPASS is its low-power audio interface (or
> LPAIF). This contains an MI2S port, which is what these drivers are
> configured to use. The LPAIF also contains a DMA engine that is
> dedicated to moving audio samples into the transmit FIFO of the MI2S
> port. In addition, there is also low-power memory (LPM) within the
> audio subsystem, which is used for buffering the audio samples.

This is implementing an AP-centric audio system design where the AP
directly programs all the audio hardware. Given that pretty much all
public Qualcomm systems use a DSP-centric model, where the AP interacts
only with a DSP which deals with DMA and the physical interfaces, it
seems reasonable to suppose that this system also has a DSP which at
some future point people are likely to want to use.

I'd really like to see some discussion as to how this is all supposed
to be handled - how will these direct hardware access drivers and
device trees work when someone does want to use the DSP (without
causing problems), and how will we transition from one to the other.
This is particularly pressing if there are use cases where people will
want to switch between the two modes at runtime. What I'm trying to
avoid here is being in a situation where we have existing stable DT
bindings which we have to support but which conflict with the way that
people want to use the systems.
[Patch V4 00/10] ASoC: QCOM: Add support for ipq806x SOC
From: Kenneth Westfield kwest...@codeaurora.org

This patch series adds support for I2S audio playback on the Qualcomm
Technologies ipq806x SOC. The ipq806x SOC has audio-related hardware
blocks in its low-power audio subsystem (or LPASS). One of the relevant
blocks in the LPASS is its low-power audio interface (or LPAIF). This
contains an MI2S port, which is what these drivers are configured to
use. The LPAIF also contains a DMA engine that is dedicated to moving
audio samples into the transmit FIFO of the MI2S port. In addition,
there is also low-power memory (LPM) within the audio subsystem, which
is used for buffering the audio samples.

The development board being used for testing contains the ipq806x SOC
and a Maxim max98357a DAC/amp. One bus from the MI2S port of the SOC is
connected to the DAC/amp for stereo playback. This bus is configured so
that the SOC is bus master and consists of DATA, LRCLK, and BCLK. The
DAC/amp does not need MCLK to operate. In addition, a single GPIO pin
from the SOC is connected to the same DAC/amp, which gives
enable/disable control over the DAC/amp.

The specific drivers added are:
* a codec DAI driver for controlling the DAC/amp
* a CPU DAI driver for controlling the MI2S port
* a platform driver for controlling the LPAIF DMA engine

These drivers, together, are tied into simple-audio-card to complete
the audio implementation. Corresponding additions to the device tree
for the ipq806x SOC and its documentation have also been made. Also, as
this is a new directory, the MAINTAINERS file has been updated as well.

The LPASS also contains clocks that need to be controlled. Those
drivers have been submitted as a separate patch series:
[PATCH v3 0/8] qcom audio clock control drivers
http://lkml.org/lkml/2015/1/19/656

=
Changes since V3
[Patch V3 00/10] ASoC: QCOM: Add support for ipq806x SOC
http://mailman.alsa-project.org/pipermail/alsa-devel/2014-December/085694.html

* Placed the content of the inline functions into the callbacks.
* Replaced use of the readl/writel register access functions with
  regmap access functions. A notable exception is the ISR, which uses
  ioread32/iowrite32.
* Rearranged the sequencing of the hardware block enables to fit within
  the ASoC framework callbacks, while remaining functional.
  REQ 1: The hardware requires the enable sequence to be:
  LPAIF-DMA[enable], then LPAIF-MI2S[enable], then DAC-GPIO[enable]
  REQ 2: The hardware requires the disable sequence to be:
  DAC-GPIO[disable], then LPAIF-MI2S[disable]
* Corrected the implementation of the pointer callback.
* Utilize the LPM to buffer audio samples, rather than memory external
  to the LPASS.
* Corrected the interrupt clearing in the ISR.
* Implemented a default system clock (defined by the simple-card DT
  node), and optional LPASS DT node modifiers that can alter the system
  clock in order to expand the range of available bit clock
  frequencies.
* Addressed all of the remaining issues raised by Mark Brown.
* General code cleanup.

=
Changes since V2
[Patch v2 00/11] ASoC: QCOM: Add support for ipq806x SOC
http://mailman.alsa-project.org/pipermail/alsa-devel/2014-December/085186.html

* Removed the PCM platform driver from the DTS platform and tied it to
  the CPU DAI driver.
* Changed the I2S pinctrl to use the generic naming convention and
  moved control to the CPU DAI driver. It should now be controlled by
  soc-core's pinctrl_pm_* functionality.
* Added stub DAPM support in the codec driver. As the DAC GPIO needs to
  be enabled last when starting playback, and disabled first when
  stopping playback, it seems as though the trigger function may be the
  place for this. Suggestions are welcome for a better place to put
  this.
* Removed the machine driver and tied the DAI drivers to
  simple-audio-card.
* Packaged the build files and Maxim codec files together in one
  change.
* Removed QCOM as vendor from the Maxim code and documentation.
* Separated the SOC and board definitions into the correct DTS files.
* Updated the device tree documentation to reflect the changes.
* General code cleanup.

=
Changes since V1
[PATCH 0/9] ASoC: QCOM: Add support for ipq806x SOC
http://mailman.alsa-project.org/pipermail/alsa-devel/2014-November/084322.html

* Removed the native LPAIF driver, and moved its functionality to the
  CPU DAI driver.
* Added a codec driver to manage the pins going to the external DAC
  (previously managed by the machine driver).
* Use devm_* and dev_* where possible.
* The ISR only handles the relevant DMA channel now.
* Updated the device tree documentation to reflect the changes.
* General code cleanup.

Kenneth Westfield (10):
  MAINTAINERS: Add QCOM audio ASoC maintainer
  ASoC: max98357a: Document MAX98357A bindings
  ASoC: qcom: Document LPASS CPU bindings
  ASoC: codec: Add MAX98357A codec driver
  ASoC: ipq806x: add LPASS header files
  ASoC: ipq806x: Add LPASS CPU DAI driver
  ASoC:
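For reference, the simple-audio-card wiring described in the cover letter might look roughly like the following. This is a sketch reconstructed from the prose description, not copied from the patches; the node labels, phandles, and GPIO number are illustrative:

```dts
/* Board-level sketch: the SOC's MI2S port (bus master; BCLK, LRCLK,
 * DATA; no MCLK) feeds a max98357a DAC/amp, with one SOC GPIO wired
 * to the amp's enable pin. */
sound {
	compatible = "simple-audio-card";
	simple-audio-card,format = "i2s";

	simple-audio-card,cpu {
		sound-dai = <&lpass_cpu>;	/* MI2S CPU DAI from this series */
	};

	simple-audio-card,codec {
		sound-dai = <&max98357a>;
	};
};

max98357a: dac {
	compatible = "maxim,max98357a";
	sdmode-gpios = <&qcom_pinmux 25 0>;	/* enable GPIO; pin number illustrative */
};
```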