[PATCH] dt: fix tegra SPI binding examples
Fix name of slink binding and address of sflash example to make it
self consistent.

Change-Id: Ia89c3017c958bdf670036caf516eabce6f893096
Signed-off-by: Allen Martin
---
 Documentation/devicetree/bindings/spi/nvidia,tegra20-sflash.txt | 2 +-
 Documentation/devicetree/bindings/spi/nvidia,tegra20-slink.txt  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/devicetree/bindings/spi/nvidia,tegra20-sflash.txt b/Documentation/devicetree/bindings/spi/nvidia,tegra20-sflash.txt
index 8cf24f6..7b53da5 100644
--- a/Documentation/devicetree/bindings/spi/nvidia,tegra20-sflash.txt
+++ b/Documentation/devicetree/bindings/spi/nvidia,tegra20-sflash.txt
@@ -13,7 +13,7 @@ Recommended properties:
 
 Example:
 
-spi@7000d600 {
+spi@7000c380 {
 	compatible = "nvidia,tegra20-sflash";
 	reg = <0x7000c380 0x80>;
 	interrupts = <0 39 0x04>;
diff --git a/Documentation/devicetree/bindings/spi/nvidia,tegra20-slink.txt b/Documentation/devicetree/bindings/spi/nvidia,tegra20-slink.txt
index f5b1ad1..eefe15e 100644
--- a/Documentation/devicetree/bindings/spi/nvidia,tegra20-slink.txt
+++ b/Documentation/devicetree/bindings/spi/nvidia,tegra20-slink.txt
@@ -13,7 +13,7 @@ Recommended properties:
 
 Example:
 
-slink@7000d600 {
+spi@7000d600 {
 	compatible = "nvidia,tegra20-slink";
 	reg = <0x7000d600 0x200>;
 	interrupts = <0 82 0x04>;
-- 
1.7.10.4

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
RE: NCQ support NVidia NForce4 (CK804) SATAII
> > Ask NVIDIA. They are the only company that gives me -zero-
> > information on their SATA controllers.
>
> I thought of that.. *sigh*

NVIDIA won't be documenting nForce4 SATA controllers, so Linux NCQ
support for nForce4 is unlikely. I'm hoping this will change with
future products.

> > As such, there are -zero- plans for NCQ on NVIDIA controllers at
> > this time.
>
> Could it be possible to make reverse engineering? I think they should
> work as the SATA-IO SATAII specification says.

The SATA-IO SATA-II specification says nothing about host controller
implementations. Intel documents a host controller implementation in
the AHCI specification, which is becoming an industry standard, but
nForce4 SATA is not AHCI.

-Allen
RE: NCQ support NVidia NForce4 (CK804) SATAII
> Erm, why they are not willing to support NCQ under Linux...I mean
> many people using NVIDIA based mainboards. And that against that what
> I thought NVidia stands for - Linux friendly but seems only that this
> statement fit on graficcards? Is there no "responsible" person that
> says...Hello, Linux is a growing market that we need to serve? With
> full driver/program support?

Likely the only way nForce4 NCQ support could be added under Linux
would be with a closed source binary driver, and no one really wants
that, especially for storage / boot volume. We decided it wasn't worth
the headache of a binary driver for this one feature. Future nForce
chipsets will have a redesigned SATA controller where we can be more
open about documenting it.

-Allen
RE: sata_nv + ADMA + Samsung disk problem
> > Dunno about the NVidia version.
>
> Theirs works rather differently - the GO bit is there, but there's
> another append register which is used to tell the controller that a
> new tag has been added to the CPB list.
>
> The only thing we currently use the GO bit for is to switch between
> ADMA and port register mode. Could be there's something we need to do
> there, though, who knows..

You shouldn't ever need to touch GO other than the ADMA / legacy mode
switch as you say. The NVIDIA ADMA hw is not based on the Pacific
Digital core.

---
This email message is for the sole use of the intended recipient(s) and
may contain confidential information. Any unauthorized review, use,
disclosure or distribution is prohibited. If you are not the intended
recipient, please contact the sender by reply email and destroy all
copies of the original message.
---
RE: sata_nv + ADMA + Samsung disk problem
> The question I had for NVIDIA regarding this that I never got
> answered was, is there any reason why we would need a delay when
> switching between NCQ and non-NCQ commands on ADMA, and if not, is
> there any known cause that could cause the controller to get into
> this seemingly locked-up state?

When switching from NCQ to non-NCQ or vice versa you must make sure
all outstanding commands are completed before issuing the new command.
The hardware doesn't do anything to prevent queued and non-queued
commands from going out on the wire at the same time, which will
certainly cause some drives to fail.

-Allen
RE: sata_nv + ADMA + Samsung disk problem
> The software definitely provides that guarantee for all NCQ-capable
> controllers.

Well if that's not it, it must be some problem entering ADMA legacy
mode. Here's what the Windows driver does:

	ADMACtrl.aGO = 0
	ADMACtrl.aEIEN = 0
	poll { until ADMAStatus.aLGCY = 1 || timeout }
RE: More info on port 80 symptoms on MCP51 machine.
> Alan Cox wrote:
> > On Wed, 12 Dec 2007 21:58:25 +0100
> > Rene Herman <[EMAIL PROTECTED]> wrote:
> >
> >> On 12-12-07 21:26, Rene Herman wrote:
> >>
> >>> On 12-12-07 21:07, David P. Reed wrote:
> >>>> Someone might have an in to nVidia to clarify this, since I
> >>>> don't. In any case, the udelay(2) approach seems to be a safe
> >>>> fix for this machine.
> >>
> >> By the way, _does_ anyone have a contact at nVidia who could
> >> clarify? Alan maybe? I'm quite curious what they did...
> >
> > I don't. Nvidia are not the most open bunch of people on the
> > planet. This doesn't appear to be a chipset bug anyway but a
> > firmware one (other systems with the same chipset work just fine).
> >
> > The laptop maker might therefore be a better starting point.
>
> One wonders if it does some SMM trick to capture port 0x80 writes and
> attempt to haul them off for debugging; it almost sounds like some
> kind of debugging code got let out into the field.
>
> -hpa

Nothing inside the chipset should be decoding port 80 writes. It's
possible this board has a port 80 decoder wired onto the board that's
misbehaving. I've seen other laptop boards with port 80 decoders wired
onto the board, even if the 7 segment display is only populated on
debug builds. We use PCI port 80 decoders internally for debugging
quite often, so if there were some chipset issue related to port 80 it
would have shown up a long time ago, and this is the first I've heard
of hangs related to port 80 writes.

-Allen
RE: [PATCH] Add quirk to set AHCI mode on ICH boards
> Alan Cox wrote:
> > On Thu, 8 Nov 2007 22:46:22 -0500
> > Jeff Garzik <[EMAIL PROTECTED]> wrote:
> >
> >> On Thu, Nov 08, 2007 at 10:29:37PM -0500, Mark Lord wrote:
> >>> And I might even privately patch my own kernels to map the AHCI
> >>> BAR in the cases where the BIOS didn't...
> >>
> >> The inability to do this in the general case is the main reason
> >> why AHCI was not unconditionally enabled, even in IDE mode, when
> >> it was originally added... :/
> >
> > We've done it all the time for various devices without problems (eg
> > S3 video cards). I'd like to see it go in - although perhaps
> > attached to a force_ahci boot param initially
>
> By forcing AHCI, your PATA devices will be inaccessible, in a common
> configuration. It also means shuffling users from one driver to
> another, which induces breakage.
>
> I was speaking wishfully. Real life intrudes, alas.

At least for NVIDIA controllers, loading the AHCI driver when the BIOS
is set to IDE mode is not recommended by NVIDIA. Any AHCI workarounds
in the BIOS are likely to be disabled when set to IDE mode. In
practice we don't expect to see a lot of users running an AHCI
controller in IDE mode unless they have explicitly disabled AHCI from
the BIOS or the system builder has some specific reason for shipping
IDE mode by default (like support for some legacy DOS or Win9x tools).

-Allen
RE: [PATCH] Add quirk to set AHCI mode on ICH boards
> Allen Martin wrote:
> > At least for NVIDIA controllers, loading the AHCI driver when the
> > BIOS is set to IDE mode is not recommended by NVIDIA. Any AHCI
> > workarounds in the BIOS are likely to be disabled when set to IDE
> > mode. In practice
>
> What workarounds, if any, are needed?
>
> We need those in the driver not BIOS anyway, in order to fully
> support suspend/resume and host controller reset during runtime
> operation.

What I'm worried about is SMI traps implemented in the SBIOS for AHCI
workarounds that may be disabled when in IDE mode.

> In Linux at least, we have a bunch of open sata_nv issues, so forcing
> users' interface into AHCI mode as a default future policy seems like
> the most stable choice on NVIDIA AHCI platforms.

I believe most of the issues with sata_nv have been due to lack of
documentation of ADMA and swNCQ. The NVIDIA AHCI controllers that
operate in IDE mode are straight up PATA emulation (ANSI/INCITS 370
interface), no hotplug, no NCQ. So I would expect there not to be a
lot of issues.

I'm with you that AHCI mode is superior and should be used whenever
possible, but it probably comes as no surprise that almost all the
hardware/BIOS testing is done with Windows, and operating the hardware
in a mode that Windows doesn't (enabling AHCI in class code 0101)
seems like asking for trouble.

-Allen
RE: [PATCH] Add quirk to set AHCI mode on ICH boards
> > What I'm worried about is SMI traps implemented in the SBIOS for
> > AHCI workarounds that may be disabled when in IDE mode.
>
> For Nvidia devices those would only be present if there were problems
> with the AHCI hardware right, which would mean you could simply tell
> us what workarounds to implement.

Errata for which there is an SBIOS workaround are generally only
released to BIOS vendors and under NDA. If Linux users were impacted
by such a bug we would most likely release a patch, but a much more
likely scenario is that it slips through the cracks because it's not a
configuration we would test in our QA. So a small minority of users
running AHCI in class code 0101 would get some very rare but serious
errors that would be impossible to debug.

> > I believe most of the issues with sata_nv have been due to lack of
> > documentation of ADMA and swNCQ. The NVIDIA AHCI controllers that
>
> I am glad Nvidia accept this point. It would be nice to see it fixed.

I don't have any say over that, but it's probably unlikely to be
fixed. Going forward we're only using open standards for storage.

-Allen
RE: data corruption with nvidia chipsets and IDE/SATA drives (k8 cpu errata needed?)
> I'd like to hear from Andi how he feels about this? It seems like a
> somewhat drastic solution in some ways given a lot of hardware
> doesn't seem to be affected (or maybe in those cases it's just really
> hard to hit, I don't know).
>
> > Well we can hope that Nvidia will find out more (though I'm not too
> > optimistic).
>
> Ideally someone from AMD needs to look into this, if some mainboards
> really never see this problem, then why is that? Is there errata that
> some BIOS/mainboard vendors are dealing with that others are not?

NVIDIA and AMD are investigating this issue; we don't know what the
problem is yet.
RE: [PATCH] ACPI driver support for pata
> Couldn't we do this generically inside libata core somehow, i.e. try
> to use ACPI to set the proper mode and fall back to the
> driver-specific mode setting code if that didn't work? I think if we
> could do that it would solve a number of problems (i.e. we could
> prevent it from doing this on SATA controllers which appear to be IDE
> based on the PCI ID, like the NVIDIA SATA controllers, since the _GTM
> and _STM methods seem to have undefined behavior on SATA).

_GTM and _STM don't have undefined behavior on SATA. They are there
for compatibility with the MS Windows ATA driver. Any SATA device that
reports PCI class code 0101 should implement them if that controller
is supposed to work on Windows. They are generally NOPs for SATA
controllers.
RE: Nvidia cable detection problems (was [PATCH] amd74xx: don't configure udma mode higher than BIOS did)
Unfortunately there's no standard way to do host side cable detect on
nForce systems without going through ACPI. It's done through a GPIO
pin. Board vendors are free to reallocate which GPIO pin is used for
this feature.

One possible solution is to leave the default DMA mode at whatever the
BIOS left it at. So if it's a UDMA5 drive but the BIOS left it at
UDMA2, it was because of cable detect.

The *real* solution is to use the BIOS ACPI _GTM _STM methods for
this. Then you can remove all chipset specific knowledge from the IDE
driver. This is what the MS driver does on Windows, so you know it's
received a lot of testing from NVIDIA and board vendors.

-Allen

> -Original Message-
> From: Bartlomiej Zolnierkiewicz [mailto:[EMAIL PROTECTED]
> Sent: Monday, February 05, 2007 7:09 AM
> To: Allen Martin
> Subject: Fwd: Nvidia cable detection problems (was [PATCH] amd74xx:
> don't configure udma mode higher than BIOS did)
>
> Hi Allen,
>
> Would it be possible to get some help on this issue?
>
> Thanks,
> Bart
>
> -- Forwarded message --
> From: Tejun Heo <[EMAIL PROTECTED]>
> Date: Feb 5, 2007 3:50 PM
> Subject: Re: Nvidia cable detection problems (was [PATCH] amd74xx:
> don't configure udma mode higher than BIOS did)
> To: Alan <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED], linux-ide@vger.kernel.org,
> linux-kernel@vger.kernel.org
>
> Alan wrote:
> [--snip--]
> >> CK804 IDE, at least mine, reports 80c in a lot of cases where it
> >> shouldn't. I dunno the reason but it also makes drives confused
> >> about cable type. Maybe it has the wrong capacitor attached or
> >> something. This is A8N-E from ASUS, probably one of the popular
> >> ones using nf4.
> >
> > I take it this was how you came to find every cable related bug
> > while trying to work out what was going on ?
>
> Yeap, pretty much. I thought fixing drive side cable detection would
> fix it, but hell no.
> >> When that happens, libata EH does its job and slows the interface
> >> to udma33 after quite a few error messages. On IDE, if this
> >> happens, the drive is put into PIO mode making the machine painful
> >> to use.
> >
> > No the IDE layer does DMA changedown fine, well apart from all the
> > error/timer races in the old IDE code.
>
> I dunno. It always ended up in PIO mode in my case. I can post the
> log if necessary.
>
> [--snip--]
> >> I agree with you that this is a hack and ugly as hell. I don't
> >> like it either, but it solves an existing problem which could have
> >> and possibly will hit many users. So, I think this problem should
> >> at least be verified. If it's just my BIOS/motherboard that's
> >> crazy, I have no problem forgetting about this.
> >
> > It certainly seems to be Nvidia specific, so perhaps Nvidia can
> > provide more details on the Nforce4 cable detection ? As with a lot
> > of Nvidia stuff there was much reverse engineering involved in the
> > original code base.
>
> Hmmm... Anyone happen to have a working nvidia contact?
>
> > And if its a specific board or couple of boards then we should
> > perhaps use DMI to match them specifically.
> >
> >> So, anyone with CK804 (a.k.a NF4) up for some testing?
> >
> > If it still goes I've got a rather iffy NF3 but not an NF4 handy.
>
> Yeah, please. If I connect a hdd at the end of a 40c cable, leaving
> the middle connector empty, the 80c bit is always one and the drive
> says it's 80c cable while the BIOS configured mode is correctly
> udma33. This is the same for SAMSUNG SP0802N, Maxtor 91301U3 and
> HITACHI_DK23BA-20.
>
> --
> tejun
RE: [PATCH -mm] sata_nv: fix ATAPI in ADMA mode
> > static irqreturn_t nv_adma_interrupt(int irq, void *dev_instance)
> > {
> > 	struct ata_host *host = dev_instance;
> > 	int i, handled = 0;
> > +	u32 notifier_clears[2];
> >
> > 	spin_lock(&host->lock);
> >
> > 	for (i = 0; i < host->n_ports; i++) {
> > 		struct ata_port *ap = host->ports[i];
> > +		notifier_clears[i] = 0;
>
> Promise us that n_ports will never exceed 2?

I promise it will never exceed 2, at least as far as NVIDIA ADMA
hardware is concerned.
[PATCH] shrinker: add atomic.h include
Add atomic.h to provide atomic_long_t used in struct shrinker.

Signed-off-by: Allen Martin
---
 include/linux/shrinker.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 4fcacd915d45..02ec84b22f4b 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -1,6 +1,8 @@
 #ifndef _LINUX_SHRINKER_H
 #define _LINUX_SHRINKER_H
 
+#include <linux/atomic.h>
+
 /*
  * This struct is used to pass information from page reclaim to the shrinkers.
  * We consolidate the values for easier extention later.
-- 
2.9.2
[PATCH] PREEMPT_RT: sched/rr, sched/fair: defer CFS scheduler put_prev_task()
Defer calling put_prev_task() on a CFS task_struct when there is a
pending RT task to run. Instead wait until the next
pick_next_task_fair() and do the work there.

The put_prev_task() call for a SCHED_OTHER task is currently a source
of non determinism in the latency of scheduling a SCHED_FIFO task.
This results in a priority inversion as the CFS scheduler is updating
load average and balancing the rq rbtree while the SCHED_FIFO task is
waiting to run.

Instrumented results on a quad core ARM A57. This is measured just
across the put_prev_task() in pick_next_task_rt().

before patch

cpu: 0 max_instr: 1114 min_instr: 9 avg_instr: 11 min_time: 32ns max_time: 2528ns avg_time: 64ns
cpu: 1 max_instr: 1122 min_instr: 9 avg_instr: 11 min_time: 32ns max_time: 3040ns avg_time: 64ns
cpu: 2 max_instr: 1118 min_instr: 9 avg_instr: 11 min_time: 32ns max_time: 3584ns avg_time: 64ns
cpu: 3 max_instr: 1122 min_instr: 9 avg_instr: 11 min_time: 32ns max_time: 3456ns avg_time: 64ns

after patch

cpu: 0 max_instr: 23 min_instr: 9 avg_instr: 11 min_time: 64ns max_time: 480ns avg_time: 64ns
cpu: 1 max_instr: 23 min_instr: 9 avg_instr: 11 min_time: 64ns max_time: 224ns avg_time: 64ns
cpu: 2 max_instr: 12 min_instr: 9 avg_instr: 10 min_time: 64ns max_time: 224ns avg_time: 64ns
cpu: 3 max_instr: 44 min_instr: 9 avg_instr: 11 min_time: 64ns max_time: 160ns avg_time: 64ns

Signed-off-by: Allen Martin
---
 include/linux/sched.h |  4 ++++
 kernel/sched/core.c   |  4 ++++
 kernel/sched/fair.c   | 41 +++++++++++++++++++++++++++++++++++++++++
 kernel/sched/rt.c     |  6 ++++++
 kernel/sched/sched.h  |  3 +++
 5 files changed, 58 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e7ae9273a809..fdaf4ae2383e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1126,6 +1126,10 @@ struct task_struct {
 	void				*security;
 #endif
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+	int				rt_preempt;
+#endif
+
 	/*
 	 * New fields for task_struct should be added above here, so that
 	 * they are included in the randomized portion of task_struct.
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3a9899fc26f7..5cd3e1d25238 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6016,6 +6016,10 @@ void __init sched_init(void)
 
 		INIT_LIST_HEAD(&rq->cfs_tasks);
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+		rq->cfs_deferred_task = NULL;
+#endif
+
 		rq_attach_root(rq, &def_root_domain);
 #ifdef CONFIG_NO_HZ_COMMON
 		rq->last_load_update_tick = jiffies;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a1a7ea8662c4..79cc22cb9698 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4858,6 +4858,13 @@ static inline void hrtick_update(struct rq *rq)
 }
 #endif
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+static int is_task_deferred(struct rq *rq, struct task_struct *prev)
+{
+	return rq->cfs_deferred_task == prev;
+}
+#endif
+
 /*
  * The enqueue_task method is called before nr_running is
  * increased. Here we update the fair scheduling stats and
@@ -4926,6 +4933,15 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	struct sched_entity *se = &p->se;
 	int task_sleep = flags & DEQUEUE_SLEEP;
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+	/* see if this was the deferred task */
+	if (is_task_deferred(rq, p)) {
+		/* reinsert it and then go through normal dequeue path */
+		rq->cfs_deferred_task = NULL;
+		put_prev_task(rq, p);
+	}
+#endif
+
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
 		dequeue_entity(cfs_rq, se, flags);
@@ -6186,6 +6202,14 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
 	struct task_struct *p;
 	int new_tasks;
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+	if (rq->cfs_deferred_task) {
+		p = rq->cfs_deferred_task;
+		rq->cfs_deferred_task = NULL;
+		put_prev_task(rq, p);
+	}
+#endif
+
 again:
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	if (!cfs_rq->nr_running)
@@ -6310,6 +6334,13 @@ static void put_prev_task_fair(struct rq *rq, struct task_struct *prev)
 	struct sched_entity *se = &prev->se;
 	struct cfs_rq *cfs_rq;
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+	if (unlikely(prev->rt_preempt && prev->on_rq)) {
+		rq->cfs_deferred_task = prev;
+		return;
+	}
+#endif
+
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
 		put_prev_entity(cfs_rq, se);
@@ -9105,6 +9136,16 @@ static void detach_task_cfs_rq(struct task_struct *p)
 {
 	struct sched_entity *se = &p->se;
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+#ifdef CONFIG_PREEMPT_RT_FULL
+	struct rq *rq = r
[PATCH] ASoC: bcm2835: Add enable/disable clock functions
Add functions to control enable/disable of BCLK output of bcm2835 I2S
controller so that BCLK output only starts when dma starts. This
resolves issues of audio pop with DACs such as max98357 on rpi. The
LRCLK output of bcm2835 only starts when the frame size has been
configured and there is data in the FIFO. The max98357 dac makes a
loud popping sound when BCLK is toggling but LRCLK is not.

Signed-off-by: Allen Martin
---
 sound/soc/bcm/bcm2835-i2s.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/sound/soc/bcm/bcm2835-i2s.c b/sound/soc/bcm/bcm2835-i2s.c
index e6a12e271b07..5c8649864c0d 100644
--- a/sound/soc/bcm/bcm2835-i2s.c
+++ b/sound/soc/bcm/bcm2835-i2s.c
@@ -122,9 +122,27 @@ struct bcm2835_i2s_dev {
 	struct regmap				*i2s_regmap;
 	struct clk				*clk;
 	bool					clk_prepared;
+	bool					clk_enabled;
 	int					clk_rate;
 };
 
+static void bcm2835_i2s_enable_clock(struct bcm2835_i2s_dev *dev)
+{
+	if (dev->clk_enabled)
+		return;
+
+	regmap_update_bits(dev->i2s_regmap, BCM2835_I2S_MODE_A_REG,
+			   BCM2835_I2S_CLKDIS, 0);
+	dev->clk_enabled = true;
+}
+
+static void bcm2835_i2s_disable_clock(struct bcm2835_i2s_dev *dev)
+{
+	if (dev->clk_enabled)
+		regmap_update_bits(dev->i2s_regmap, BCM2835_I2S_MODE_A_REG,
+				   BCM2835_I2S_CLKDIS, BCM2835_I2S_CLKDIS);
+
+	dev->clk_enabled = false;
+}
+
 static void bcm2835_i2s_start_clock(struct bcm2835_i2s_dev *dev)
 {
 	unsigned int master = dev->fmt & SND_SOC_DAIFMT_MASTER_MASK;
@@ -145,6 +163,7 @@ static void bcm2835_i2s_start_clock(struct bcm2835_i2s_dev *dev)
 
 static void bcm2835_i2s_stop_clock(struct bcm2835_i2s_dev *dev)
 {
+	bcm2835_i2s_disable_clock(dev);
 	if (dev->clk_prepared)
 		clk_disable_unprepare(dev->clk);
 	dev->clk_prepared = false;
@@ -158,6 +177,7 @@ static void bcm2835_i2s_clear_fifos(struct bcm2835_i2s_dev *dev,
 	uint32_t csreg;
 	uint32_t i2s_active_state;
 	bool clk_was_prepared;
+	bool clk_was_enabled;
 	uint32_t off;
 	uint32_t clr;
 
@@ -176,6 +196,11 @@ static void bcm2835_i2s_clear_fifos(struct bcm2835_i2s_dev *dev,
 	if (!clk_was_prepared)
 		bcm2835_i2s_start_clock(dev);
 
+	/* Enable clock if not enabled */
+	clk_was_enabled = dev->clk_enabled;
+	if (!clk_was_enabled)
+		bcm2835_i2s_enable_clock(dev);
+
 	/* Stop I2S module */
 	regmap_update_bits(dev->i2s_regmap, BCM2835_I2S_CS_A_REG, off, 0);
@@ -207,6 +232,10 @@ static void bcm2835_i2s_clear_fifos(struct bcm2835_i2s_dev *dev,
 	if (!timeout)
 		dev_err(dev->dev, "I2S SYNC error!\n");
 
+	/* Disable clock if it was not enabled before */
+	if (!clk_was_enabled)
+		bcm2835_i2s_disable_clock(dev);
+
 	/* Stop clock if it was not running before */
 	if (!clk_was_prepared)
 		bcm2835_i2s_stop_clock(dev);
@@ -414,6 +443,8 @@ static int bcm2835_i2s_hw_params(struct snd_pcm_substream *substream,
 	/* Clock should only be set up here if CPU is clock master */
 	if (bit_clock_master &&
 	    (!dev->clk_prepared || dev->clk_rate != bclk_rate)) {
+		if (dev->clk_enabled)
+			bcm2835_i2s_disable_clock(dev);
 		if (dev->clk_prepared)
 			bcm2835_i2s_stop_clock(dev);
@@ -534,6 +565,8 @@ static int bcm2835_i2s_hw_params(struct snd_pcm_substream *substream,
 		mode |= BCM2835_I2S_FTXP | BCM2835_I2S_FRXP;
 	}
 
+	if (!dev->clk_enabled)
+		mode |= BCM2835_I2S_CLKDIS;
 	mode |= BCM2835_I2S_FLEN(frame_length - 1);
 	mode |= BCM2835_I2S_FSLEN(framesync_length);
@@ -668,6 +701,7 @@ static int bcm2835_i2s_trigger(struct snd_pcm_substream *substream, int cmd,
 	case SNDRV_PCM_TRIGGER_RESUME:
 	case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
 		bcm2835_i2s_start_clock(dev);
+		bcm2835_i2s_enable_clock(dev);
 
 		if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
 			mask = BCM2835_I2S_RXON;
@@ -839,6 +873,7 @@ static int bcm2835_i2s_probe(struct platform_device *pdev)
 
 	/* get the clock */
 	dev->clk_prepared = false;
+	dev->clk_enabled = false;
 	dev->clk = devm_clk_get(&pdev->dev, NULL);
 	if (IS_ERR(dev->clk)) {
 		dev_err(&pdev->dev, "could not get clk: %ld\n",
-- 
2.20.1
Re: [PATCH] ASoC: bcm2835: Add enable/disable clock functions
On Wed, Oct 28, 2020 at 10:39:12AM +0100, Matthias Reichl wrote:
> On Wed, Oct 28, 2020 at 01:18:33AM -0700, Allen Martin wrote:
> > Hi, just checking if you had a chance to review this patch.
> >
> > On Sat, Oct 10, 2020 at 12:26 PM Allen Martin wrote:
> > > Add functions to control enable/disable of BCLK output of bcm2835
> > > I2S controller so that BCLK output only starts when dma starts.
> > > This resolves issues of audio pop with DACs such as max98357 on
> > > rpi. The LRCLK output of bcm2835 only starts when the frame size
> > > has been configured and there is data in the FIFO. The max98357
> > > dac makes a loud popping sound when BCLK is toggling but LRCLK is
> > > not.
>
> I'm afraid that changing the clocking in the way you proposed has a
> high potential of breaking existing setups which need bclk to be
> present after prepare(). And it complicates the already rather
> convoluted clock setup even more. So I don't think this patch should
> be applied.
>
> Since you mentioned max98357: have you properly connected and
> configured the sd-mode GPIO? This chip has very strict timing
> requirements and is known to "pop" without sd-mode wired up - see the
> datasheet and devicetree binding docs.

The board I'm testing on is this: https://www.adafruit.com/product/3346
which does not have SD_MODE wired to GPIO (schematic is here:
https://www.adafruit.com/product/3346). I agree this should ideally be
wired to GPIO as described in the max98357 datasheet to enable the
click and pop suppression feature, however there are still problems
with the clock initialization this patch addresses:

1) In bcm2835_i2s_hw_params() BCLK is enabled before the FIFO is
   cleared, causing residual data in the FIFO to be transmitted when
   BCLK is initialized.

2) Also in bcm2835_i2s_hw_params() BCLK is enabled before the frame
   size is configured in MODE_A or DMA is initialized. This causes the
   i2s controller to transmit a data frame many thousands of bits
   long, violating the 0.5*BCLK < t < 0.5*LRCLK requirement of the
   max98357 datasheet.
I think the driver should separate clock initialization from output
enable and only start transmitting once everything is initialized and
there is data to transmit.

Do you have more details about what setups require BCLK output after
prepare()? I have access to a PCM5101A DAC, but I have not tested it
yet.

-Allen