Re: [PATCH] cpumask: fix lg_lock/br_lock.
On 02/29/2012 04:42 PM, Srivatsa S. Bhat wrote:
On 02/29/2012 02:47 PM, Ingo Molnar wrote:
* Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com wrote:

 Hi Andrew,

 On 02/29/2012 02:57 AM, Andrew Morton wrote:
 On Tue, 28 Feb 2012 09:43:59 +0100 Ingo Molnar mi...@elte.hu wrote:

 This patch should also probably go upstream through the locking/lockdep tree? Mind sending it to us once you think it's ready?

 Oh goody, that means you own http://marc.info/?l=linux-kernel&m=131419353511653&w=2.

 That bug got fixed sometime around Dec 2011. See commit e30e2fdf ("VFS: Fix race between CPU hotplug and lglocks").

The lglocks code is still CPU-hotplug racy AFAICS, despite the ->cpu_lock complication:

Consider a taken global lock on a CPU:

	CPU#1
	...
	br_write_lock(vfsmount_lock);

this takes the lock of all online CPUs: say CPU#1 and CPU#2. Now CPU#3 comes online and takes the read lock:

CPU#3 cannot come online! :-) No new CPU can come online until the corresponding br_write_unlock() is completed. That is because br_write_lock() acquires name##_cpu_lock, and only br_write_unlock() will release it. The CPU_UP_PREPARE callback tries to acquire that very same spinlock, and hence will keep spinning until br_write_unlock() is run. So CPU#3, or any new CPU online for that matter, will not complete its onlining until br_write_unlock() is done.

It is of course debatable how good this design really is, but IMHO the lglocks code is not CPU-hotplug racy now. Here is the link to the original discussion during the development of that patch: thread.gmane.org/gmane.linux.file-systems/59750/

	CPU#3
	br_read_lock(vfsmount_lock);

This will succeed while the br_write_lock() is still active, because CPU#1 has only taken the locks of CPU#1 and CPU#2.

Crash!

The proper fix would be for CPU-online to serialize with all known lglocks, via the notifier callback, i.e. to do something like this:

	case CPU_UP_PREPARE:
		for_each_online_cpu(cpu) {
			spin_lock(&name##_cpu_lock);
			spin_unlock(&name##_cpu_lock);
		}
	...

I.e.
in essence do:

	case CPU_UP_PREPARE:
		name##_global_lock_online();
		name##_global_unlock_online();

Another detail I noticed, this bit:

	register_hotcpu_notifier(&name##_lg_cpu_notifier);		\
	get_online_cpus();						\
	for_each_online_cpu(i)						\
		cpu_set(i, name##_cpus);				\
	put_online_cpus();						\

could be something simpler and loop-less, like:

	get_online_cpus();
	cpumask_copy(name##_cpus, cpu_online_mask);
	register_hotcpu_notifier(&name##_lg_cpu_notifier);
	put_online_cpus();

While the cpumask_copy() is definitely better, we can't put the register_hotcpu_notifier() within get/put_online_cpus(), because it will lead to an ABBA deadlock with a newly initiated CPU hotplug operation, the two locks involved being the cpu_add_remove_lock and the cpu_hotplug lock.

IOW, at the moment there is no absolutely race-free way to do CPU hotplug callback registration.

Some time ago, while going through the asynchronous booting patch by Arjan [1], I had written up a patch to fix that race, because the race got transformed from purely theoretical to very real with the async boot patch, as shown by the powerpc boot failures [2]. But then I stopped short of posting that patch to the lists, because I started wondering how important that race would actually turn out to be, in case the async booting design takes a totally different approach altogether. [The other reason why I didn't post it is that it would require lots of changes in many parts where CPU hotplug registration is done, and that probably wouldn't be justified (I don't know..) if the race remained only theoretical, as it is now.]

[1]. http://thread.gmane.org/gmane.linux.kernel/1246209
[2]. https://lkml.org/lkml/2012/2/13/383

Ok, now that I mentioned my patch, let me as well show it some daylight. It is totally untested, incomplete and probably won't even compile..
(given that I had abandoned working on it some time ago, since I was not sure in what direction the async boot design was headed, which was the original motivation for me to try to fix this race). I really hate to post it in such a state, but at least let me get the idea out, now that the discussion is around it, if only to get some thoughts on whether it is even worth pursuing! (I'll post the patches as a reply to this mail.) By the way, it should solve the powerpc boot failure,
[PATCH 1/3] CPU hotplug: Fix issues with callback registration
Currently, there are several intertwined problems with CPU hotplug callback registration:

Code which needs to get notified of CPU hotplug events, and additionally wants to do something for each already-online CPU, would typically do something like:

	register_cpu_notifier(&foobar_cpu_notifier);

	/* A */

	get_online_cpus();
	for_each_online_cpu(cpu) {
		/* Do something */
	}
	put_online_cpus();

At the point marked as A, a CPU hotplug event could sneak in, leaving the code confused. Moving the registration to after put_online_cpus() won't help either, because we could then lose a CPU hotplug event between put_online_cpus() and the callback registration. Doing the registration inside the get/put_online_cpus() block is not going to help either, because it will lead to an ABBA deadlock with CPU hotplug, the two locks being the cpu_add_remove_lock and the cpu_hotplug lock.

It is also to be noted that, at times, we might want to do different setups or initializations depending on whether a CPU is coming online for the first time (as part of booting) or whether it is only being soft-onlined at a later point in time. To achieve this, doing something like the code shown above, with the "Do something" part being different from what the registered callback does, wouldn't work out, because of the race conditions mentioned above.

The solution to all this is to include history replay upon request within the CPU hotplug callback registration code, while also providing an option for a different callback to be invoked while replaying history.

Though the above mentioned race condition was mostly theoretical before, it gets all too real when things like asynchronous booting [1] come into the picture, as shown by the PowerPC boot failure in [2]. So this fix is also a step forward in getting cool things like asynchronous booting to work properly.

References:
[1].
https://lkml.org/lkml/2012/2/14/62
---
 include/linux/cpu.h |   15 +++
 kernel/cpu.c        |   49 ++---
 2 files changed, 61 insertions(+), 3 deletions(-)

diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 6e53b48..90a6d76 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -124,16 +124,25 @@ enum {
 #endif /* #else #if defined(CONFIG_HOTPLUG_CPU) || !defined(MODULE) */

 #ifdef CONFIG_HOTPLUG_CPU
 extern int register_cpu_notifier(struct notifier_block *nb);
+extern int register_allcpu_notifier(struct notifier_block *nb,
+			bool replay_history, int (*history_setup)(void));
 extern void unregister_cpu_notifier(struct notifier_block *nb);
 #else

 #ifndef MODULE
 extern int register_cpu_notifier(struct notifier_block *nb);
+extern int register_allcpu_notifier(struct notifier_block *nb,
+			bool replay_history, int (*history_setup)(void));
 #else
 static inline int register_cpu_notifier(struct notifier_block *nb)
 {
 	return 0;
 }
+
+static inline int register_allcpu_notifier(struct notifier_block *nb,
+			bool replay_history, int (*history_setup)(void))
+{
+	return 0;
+}
 #endif

 static inline void unregister_cpu_notifier(struct notifier_block *nb)
@@ -155,6 +164,12 @@ static inline int register_cpu_notifier(struct notifier_block *nb)
 	return 0;
 }

+static inline int register_allcpu_notifier(struct notifier_block *nb,
+			bool replay_history, int (*history_setup)(void))
+{
+	return 0;
+}
+
 static inline void unregister_cpu_notifier(struct notifier_block *nb)
 {
 }
diff --git a/kernel/cpu.c b/kernel/cpu.c
index d520d34..1564c1d 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -132,12 +132,56 @@ static void cpu_hotplug_done(void) {}

 /* Need to know about CPUs going up/down? */
 int __ref register_cpu_notifier(struct notifier_block *nb)
 {
-	int ret;
+	return register_allcpu_notifier(nb, false, NULL);
+}
+EXPORT_SYMBOL(register_cpu_notifier);
+
+int __ref register_allcpu_notifier(struct notifier_block *nb,
+		bool replay_history, int (*history_setup)(void))
+{
+	int cpu, ret = 0;
+
+	if (!replay_history && history_setup)
+		return -EINVAL;
+
 	cpu_maps_update_begin();
-	ret = raw_notifier_chain_register(&cpu_chain, nb);
+
+	/*
+	 * We don't race with CPU hotplug, because we just took the
+	 * cpu_add_remove_lock.
+	 */
+
+	if (!replay_history)
+		goto Register;
+
+	if (history_setup) {
+		/*
+		 * The caller has a special setup routine to rewrite
+		 * history as he desires. Just invoke it. Don't
+		 * proceed with callback registration if this setup is
+		 * unsuccessful.
+		 */
+		ret = history_setup();
+	} else {
+		/*
+		 * Fallback to the usual callback, if a special handler
+		 * for past CPU
[PATCH 2/3] CPU hotplug, arch/powerpc: Fix CPU hotplug callback registration
Restructure CPU hotplug setup and callback registration in topology_init so as to be race-free.
---
 arch/powerpc/kernel/sysfs.c |   44 +++
 arch/powerpc/mm/numa.c      |   11 ---
 2 files changed, 44 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
index 883e74c..5838b33 100644
--- a/arch/powerpc/kernel/sysfs.c
+++ b/arch/powerpc/kernel/sysfs.c
@@ -496,6 +496,38 @@ ssize_t arch_cpu_release(const char *buf, size_t count)

 #endif /* CONFIG_HOTPLUG_CPU */

+static void cpu_register_helper(struct cpu *c, int cpu)
+{
+	register_cpu(c, cpu);
+	device_create_file(&c->dev, &dev_attr_physical_id);
+}
+
+static int __cpuinit sysfs_cpu_notify_first_time(struct notifier_block *self,
+			unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (unsigned int)(long)hcpu;
+	struct cpu *c = &per_cpu(cpu_devices, cpu);
+
+	if (action == CPU_ONLINE) {
+		if (!c->hotpluggable) /* Avoid duplicate registrations */
+			cpu_register_helper(c, cpu);
+		register_cpu_online(cpu);
+	}
+	return NOTIFY_OK;
+}
+
+static int __cpuinit sysfs_cpu_notify_setup(void)
+{
+	int cpu;
+
+	/*
+	 * We don't race with CPU hotplug because we are called from
+	 * the CPU hotplug callback registration function.
+	 */
+	for_each_online_cpu(cpu)
+		sysfs_cpu_notify_first_time(NULL, CPU_ONLINE, (void *)(long)cpu);
+
+	return 0;
+}
+
 static int __cpuinit sysfs_cpu_notify(struct notifier_block *self,
 			unsigned long action, void *hcpu)
 {
@@ -637,7 +669,6 @@ static int __init topology_init(void)
 	int cpu;

 	register_nodes();
-	register_cpu_notifier(&sysfs_cpu_nb);

 	for_each_possible_cpu(cpu) {
 		struct cpu *c = &per_cpu(cpu_devices, cpu);
@@ -652,15 +683,12 @@ static int __init topology_init(void)
 		if (ppc_md.cpu_die)
 			c->hotpluggable = 1;

-		if (cpu_online(cpu) || c->hotpluggable) {
-			register_cpu(c, cpu);
+		if (c->hotpluggable)
+			cpu_register_helper(c, cpu);
+	}

-			device_create_file(&c->dev, &dev_attr_physical_id);
-		}
+	register_allcpu_notifier(&sysfs_cpu_nb, true, sysfs_cpu_notify_setup);

-		if (cpu_online(cpu))
-			register_cpu_online(cpu);
-	}
 #ifdef CONFIG_PPC64
 	sysfs_create_dscr_default();
 #endif /* CONFIG_PPC64 */
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 3feefc3..e326455 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -1014,6 +1014,13 @@ static void __init mark_reserved_regions_for_nid(int nid)
 	}
 }

+static int __cpuinit cpu_numa_callback_setup(void)
+{
+	cpu_numa_callback(&ppc64_numa_nb, CPU_UP_PREPARE,
+			  (void *)(unsigned long)boot_cpuid);
+	return 0;
+}
+
 void __init do_init_bootmem(void)
 {
@@ -1088,9 +1095,7 @@ void __init do_init_bootmem(void)
 	 */
 	setup_node_to_cpumask_map();

-	register_cpu_notifier(&ppc64_numa_nb);
-	cpu_numa_callback(&ppc64_numa_nb, CPU_UP_PREPARE,
-			  (void *)(unsigned long)boot_cpuid);
+	register_allcpu_notifier(&ppc64_numa_nb, true, cpu_numa_callback_setup);
 }

 void __init paging_init(void)
___ Linuxppc-dev mailing list Linuxppc-dev@lists.ozlabs.org https://lists.ozlabs.org/listinfo/linuxppc-dev
[PATCH 3/3] CPU hotplug, arch/sparc: Fix CPU hotplug callback registration
Restructure CPU hotplug setup and callback registration in topology_init so as to be race-free.
---
 arch/sparc/kernel/sysfs.c |    6 ++
 1 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/kernel/sysfs.c b/arch/sparc/kernel/sysfs.c
index 654e8aa..22cb881 100644
--- a/arch/sparc/kernel/sysfs.c
+++ b/arch/sparc/kernel/sysfs.c
@@ -300,16 +300,14 @@ static int __init topology_init(void)

 	check_mmu_stats();

-	register_cpu_notifier(&sysfs_cpu_nb);
-
 	for_each_possible_cpu(cpu) {
 		struct cpu *c = &per_cpu(cpu_devices, cpu);

 		register_cpu(c, cpu);
-		if (cpu_online(cpu))
-			register_cpu_online(cpu);
 	}

+	register_allcpu_notifier(&sysfs_cpu_nb, true, NULL);
+
 	return 0;
 }
[PATCH v2] powerpc: document the FSL MPIC message register binding
This binding documents how the message register blocks found in some FSL MPIC implementations shall be represented in a device tree.

Signed-off-by: Meador Inge meador_i...@mentor.com
Signed-off-by: Jia Hongtao b38...@freescale.com
---
Changes for v2:
 * Update compatible type from string to string-list.
 * Update interrupts description.
 * Update mpic-msgr-receive-mask description.

 .../devicetree/bindings/powerpc/fsl/mpic-msgr.txt |   64 ++++++++++++++++++
 1 files changed, 64 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/powerpc/fsl/mpic-msgr.txt

diff --git a/Documentation/devicetree/bindings/powerpc/fsl/mpic-msgr.txt b/Documentation/devicetree/bindings/powerpc/fsl/mpic-msgr.txt
new file mode 100644
index 000..d52ac48
--- /dev/null
+++ b/Documentation/devicetree/bindings/powerpc/fsl/mpic-msgr.txt
@@ -0,0 +1,64 @@
+* FSL MPIC Message Registers
+
+This binding specifies what properties must be available in the device tree
+representation of the message register blocks found in some FSL MPIC
+implementations.
+
+Required properties:
+
+- compatible: Specifies the compatibility list for the message register
+  block.  The type shall be <string-list> and the value shall be of the form
+  "fsl,mpic-v<version>-msgr", where <version> is the version number of
+  the MPIC containing the message registers.
+
+- reg: Specifies the base physical address(s) and size(s) of the
+  message register block's addressable register space.  The type shall be
+  <prop-encoded-array>.
+
+- interrupts: Specifies a list of interrupt-specifiers which are available
+  for receiving interrupts.  An interrupt-specifier consists of two cells:
+  the first cell is the interrupt number and the second cell is the
+  level-sense.  The type shall be <prop-encoded-array>.
+
+Optional properties:
+
+- mpic-msgr-receive-mask: Specifies what registers in the containing block
+  are allowed to receive interrupts.  The value is a bit mask where a set
+  bit at bit 'n' indicates that message register 'n' can receive interrupts.
+  Note that bit 'n' is numbered from the LSB for PPC hardware.  The type
+  shall be <u32>.  If not present, then all of the message registers in the
+  block are available.
+
+Aliases:
+
+An alias should be created for every message register block.  They are not
+required, though.  However, a particular implementation of this binding
+may require aliases to be present.  Aliases are of the form
+'mpic-msgr-block<n>', where <n> is an integer specifying the block's number.
+Numbers shall start at 0.
+
+Example:
+
+	aliases {
+		mpic-msgr-block0 = &mpic_msgr_block0;
+		mpic-msgr-block1 = &mpic_msgr_block1;
+	};
+
+	mpic_msgr_block0: mpic-msgr-block@41400 {
+		compatible = "fsl,mpic-v3.1-msgr";
+		reg = <0x41400 0x200>;
+		// Message registers 0 and 2 in this block can receive interrupts on
+		// sources 0xb0 and 0xb2, respectively.
+		interrupts = <0xb0 2 0xb2 2>;
+		mpic-msgr-receive-mask = <0x5>;
+	};
+
+	mpic_msgr_block1: mpic-msgr-block@42400 {
+		compatible = "fsl,mpic-v3.1-msgr";
+		reg = <0x42400 0x200>;
+		// Message registers 0 and 2 in this block can receive interrupts on
+		// sources 0xb4 and 0xb6, respectively.
+		interrupts = <0xb4 2 0xb6 2>;
+		mpic-msgr-receive-mask = <0x5>;
+	};
--
1.7.5.1
Re: [PATCH v2] bootmem/sparsemem: remove limit constraint in alloc_bootmem_section
On Wed, Feb 29, 2012 at 10:12:33AM -0800, Nishanth Aravamudan wrote:

<SNIP>

 Signed-off-by: Nishanth Aravamudan n...@us.ibm.com

Acked-by: Mel Gorman mgor...@suse.de

--
Mel Gorman
SUSE Labs
Re: [PATCH 1/2] atomic: Allow atomic_inc_not_zero to be overridden
On Thursday 01 March 2012 02:09:53 Anton Blanchard wrote:
 We want to implement a ppc64 specific version of atomic_inc_not_zero so wrap it in an ifdef to allow it to be overridden.

Acked-by: Mike Frysinger vap...@gentoo.org
-mike
Re: Sampling instruction pointer on PPC
[Added linuxppc-dev list.]

On 3/1/12 10:08 AM, Victor Jimenez wrote:
 I am trying to sample the instruction pointer over time on a Power7 system. I know that there are accurate mechanisms to do so in Intel processors (e.g., PEBS and Branch Trace Store). Is it possible to do something similar on Power7? Will the samples be accurate? I am worried that significant delays (skids) may appear.

 Thank you,
 Victor
Re: [PATCH] mpc836x: fix failed phy detection for ucc ethernet on MDS
On Feb 27, 2012, at 6:25 AM, Paul Gortmaker wrote:

 The mpc836x_mds platform has been broken since the commit 6fe3264945ee63292cdfb27b6e95bc52c603bb09 [...]

 ---
 [Andy: There may be other boards that could be having this problem;

	git grep -l "enet.*ucc" arch/powerpc/boot/dts/ | xargs grep -L tbi

 shows four possible candidates -- but I've only got the 8360MDS.]

Shoot. I will go look for those. I suspect I neglected all the UCC-having SOCs in my patch.
Re: [PATCH 1/2] atomic: Allow atomic_inc_not_zero to be overridden
On Thu, 1 Mar 2012 18:09:53 +1100 Anton Blanchard an...@samba.org wrote:

 We want to implement a ppc64 specific version of atomic_inc_not_zero so wrap it in an ifdef to allow it to be overridden.

 Signed-off-by: Anton Blanchard an...@samba.org
 ---
 Index: linux-build/include/linux/atomic.h
 ===
 --- linux-build.orig/include/linux/atomic.h	2012-02-11 14:59:23.284714257 +1100
 +++ linux-build/include/linux/atomic.h	2012-02-11 15:01:14.894764555 +1100
 @@ -24,7 +24,9 @@ static inline int atomic_add_unless(atom
   * Atomically increments @v by 1, so long as @v is non-zero.
   * Returns non-zero if @v was non-zero, and zero otherwise.
   */
 +#ifndef atomic_inc_not_zero
  #define atomic_inc_not_zero(v)		atomic_add_unless((v), 1, 0)
 +#endif

Please merge this via the ppc tree?

And let's ask the hexagon maintainers to take a look at the definition in arch/hexagon/include/asm/atomic.h. I assume that it can be removed, but that might cause problems with files which include asm/atomic.h directly. I have found two such files in non-arch code and have queued fixes. There are no such files in arch/hexagon code, so I think it's safe to zap the hexagon definition of atomic_inc_not_zero().

	+static __inline__ int atomic_inc_not_zero(atomic_t *v)

Curious: is there a technical reason why ppc uses __inline__ rather than inline?
Re: [PATCH v2] bootmem/sparsemem: remove limit constraint in alloc_bootmem_section
On 29.02.2012 [15:28:30 -0800], Andrew Morton wrote:
 On Wed, 29 Feb 2012 10:12:33 -0800 Nishanth Aravamudan n...@linux.vnet.ibm.com wrote:

 While testing AMS (Active Memory Sharing) / CMO (Cooperative Memory Overcommit) on powerpc, we tripped the following:

	kernel BUG at mm/bootmem.c:483!
	...

 This is

	BUG_ON(limit && goal + size > limit);

 and after some debugging, it seems that

	goal = 0x700
	limit = 0x800

 and sparse_early_usemaps_alloc_node() -> sparse_early_usemaps_alloc_pgdat_section() calls

	return alloc_bootmem_section(usemap_size() * count, section_nr);

 This is on a system with 8TB available via the AMS pool, and as a quirk of AMS in firmware, all of that memory shows up in node 0. So, we end up with an allocation that will fail the goal/limit constraints.

 In theory, we could fall back to alloc_bootmem_node() in sparse_early_usemaps_alloc_node(), but since we actually have HOTREMOVE defined, we'll BUG_ON() instead.

 A simple solution appears to be to unconditionally remove the limit condition in alloc_bootmem_section(), meaning allocations are allowed to cross section boundaries (necessary for systems of this size).

 Johannes Weiner pointed out that if alloc_bootmem_section() no longer guarantees section-locality, we need check_usemap_section_nr() to print possible cross-dependencies between node descriptors and the usemaps allocated through it. That makes the two loops in sparse_early_usemaps_alloc_node() identical, so re-factor the code a bit.

 The patch is a bit scary now, so I think we should merge it into 3.4-rc1 and then backport it into 3.3.1 if nothing blows up.

 Do you think it should be backported into 3.3.x? Earlier kernels?

Upon review, it would be good if we can get it pushed back to kernels 3.0.x, 3.1.x and 3.2.x.

Thanks,
Nish

--
Nishanth Aravamudan n...@us.ibm.com
IBM Linux Technology Center
Re: [PATCH 1/2] atomic: Allow atomic_inc_not_zero to be overridden
On Thu, Mar 01, 2012 at 03:02:56PM -0800, Andrew Morton wrote:
 Please merge this via the ppc tree?

 And let's ask the hexagon maintainers to take a look at the definition in arch/hexagon/include/asm/atomic.h. I assume that it can be removed, but that might cause problems with files which include asm/atomic.h directly. I have found two such files in non-arch code and have queued fixes. There are no such files in arch/hexagon code, so I think it's safe to zap the hexagon definition of atomic_inc_not_zero().

Just tested it; it's safe to zap the Hexagon definition of atomic_inc_not_zero()... I'm fine with this going in through some other tree (still getting mine set up).

Thanks,
Richard Kuo

--
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.
linux-next: boot failure for next-20120227 and later (pci tree related)
Hi Jesse,

Starting with next-20120227, one of my boot tests is failing like this:

Freeing unused kernel memory: 488k freed
modprobe used greatest stack depth: 10624 bytes left
dracut: dracut-004-32.el6
udev: starting version 147
udevd (1161): /proc/1161/oom_adj is deprecated, please use /proc/1161/oom_score_adj instead.
setfont used greatest stack depth: 10528 bytes left
dracut: Starting plymouth daemon
calling .wait_scan_init+0x0/0xc4 [scsi_wait_scan] @ 2689
initcall .wait_scan_init+0x0/0xc4 [scsi_wait_scan] returned 0 after 9 usecs
calling .wait_scan_init+0x0/0xc4 [scsi_wait_scan] @ 2701
initcall .wait_scan_init+0x0/0xc4 [scsi_wait_scan] returned 0 after 20 usecs
calling .wait_scan_init+0x0/0xc4 [scsi_wait_scan] @ 2713
initcall .wait_scan_init+0x0/0xc4 [scsi_wait_scan] returned 0 after 8 usecs
calling .wait_scan_init+0x0/0xc4 [scsi_wait_scan] @ 2725
initcall .wait_scan_init+0x0/0xc4 [scsi_wait_scan] returned 0 after 8 usecs
calling .wait_scan_init+0x0/0xc4 [scsi_wait_scan] @ 2737
initcall .wait_scan_init+0x0/0xc4 [scsi_wait_scan] returned 0 after 8 usecs
calling .wait_scan_init+0x0/0xc4 [scsi_wait_scan] @ 2749
initcall .wait_scan_init+0x0/0xc4 [scsi_wait_scan] returned 0 after 8 usecs
calling .wait_scan_init+0x0/0xc4 [scsi_wait_scan] @ 2761
initcall .wait_scan_init+0x0/0xc4 [scsi_wait_scan] returned 0 after 7 usecs
calling .wait_scan_init+0x0/0xc4 [scsi_wait_scan] @ 2773
initcall .wait_scan_init+0x0/0xc4 [scsi_wait_scan] returned 0 after 7 usecs
calling .wait_scan_init+0x0/0xc4 [scsi_wait_scan] @ 2785
initcall .wait_scan_init+0x0/0xc4 [scsi_wait_scan] returned 0 after 8 usecs
calling .wait_scan_init+0x0/0xc4 [scsi_wait_scan] @ 2797
initcall .wait_scan_init+0x0/0xc4 [scsi_wait_scan] returned 0 after 8 usecs
calling .wait_scan_init+0x0/0xc4 [scsi_wait_scan] @ 2809

and eventually our test system decides the machine is dead. This is a PowerPC 970 based blade system (several other PowerPC based systems do not fail). A normal boot only shows .wait_scan_init being called once. (I have initcall_debug=y on the command line.)

I bisected this down to:

commit 6c5705fec63d83eeb165fe61e34adc92ecc2ce75
Author: Bjorn Helgaas bhelg...@google.com
Date: Thu Feb 23 20:19:03 2012 -0700

    powerpc/PCI: get rid of device resource fixups

    Tell the PCI core about host bridge address translation so it can
    take care of bus-to-resource conversion for us.

    CC: Benjamin Herrenschmidt b...@kernel.crashing.org
    Signed-off-by: Bjorn Helgaas bhelg...@google.com

The only seemingly relevant differences in the boot logs (good to bad) are:

 pci :03:02.0: supports D1 D2
+PCI: Cannot allocate resource region 0 of PCI bridge 1, will remap
+PCI: Cannot allocate resource region 1 of PCI bridge 1, will remap
+PCI: Cannot allocate resource region 0 of PCI bridge 6, will remap
+PCI: Cannot allocate resource region 1 of PCI bridge 6, will remap
+PCI: Cannot allocate resource region 0 of PCI bridge 3, will remap
+PCI: Cannot allocate resource region 1 of PCI bridge 3, will remap
+PCI: Cannot allocate resource region 0 of device :01:01.0, will remap
+PCI: Cannot allocate resource region 2 of device :01:01.0, will remap
+PCI: Cannot allocate resource region 6 of device :01:01.0, will remap
+PCI: Cannot allocate resource region 0 of device :03:00.0, will remap
+PCI: Cannot allocate resource region 0 of device :03:00.1, will remap
+PCI: Cannot allocate resource region 0 of device :03:02.0, will remap
+PCI: Cannot allocate resource region 1 of device :03:02.0, will remap
+PCI: Cannot allocate resource region 2 of device :03:02.0, will remap
+PCI: Cannot allocate resource region 6 of device :03:02.0, will remap
+PCI: Cannot allocate resource region 0 of device :06:04.0, will remap
+PCI: Cannot allocate resource region 2 of device :06:04.0, will remap
+PCI: Cannot allocate resource region 0 of device :06:04.1, will remap
+PCI: Cannot allocate resource region 2 of device :06:04.1, will remap
 PCI: Probing PCI hardware done

. . .

 calling .radeonfb_init+0x0/0x248 @ 1
-radeonfb :03:02.0: Invalid ROM contents
-radeonfb (:03:02.0): Invalid ROM signature 7272 should be 0xaa55
-radeonfb: No ATY,RefCLK property !
-xtal calculation failed: 26550
-radeonfb: Used default PLL infos
-radeonfb: Reference=27.00 MHz (RefDiv=60) Memory=166.00 Mhz, System=166.00 MHz
-radeonfb: PLL min 12000 max 35000
-i2c i2c-1: unable to read EDID block.
-i2c i2c-1: unable to read EDID block.
-i2c i2c-1: unable to read EDID block.
-i2c i2c-3: unable to read EDID block.
-i2c i2c-3: unable to read EDID block.
-i2c i2c-3: unable to read EDID block.
-i2c i2c-2: unable to read EDID block.
-i2c i2c-2: unable to read EDID block.
-i2c i2c-2: unable to read EDID block.
-i2c i2c-3: unable to read EDID block.
-i2c i2c-3: unable to read EDID block.
-i2c i2c-3: unable to read EDID block.
-radeonfb: Monitor 1 type CRT found
-radeonfb: Monitor 2 type no found
[PATCH] powerpc/srio: Fix the compile errors when building with 64bit
For the file arch/powerpc/sysdev/fsl_rmu.c, there will be some compile errors while using the corenet64_smp_defconfig:

.../fsl_rmu.c:315: error: cast from pointer to integer of different size
.../fsl_rmu.c:320: error: cast to pointer from integer of different size
.../fsl_rmu.c:320: error: cast to pointer from integer of different size
.../fsl_rmu.c:320: error: cast to pointer from integer of different size
.../fsl_rmu.c:330: error: cast to pointer from integer of different size
.../fsl_rmu.c:332: error: cast to pointer from integer of different size
.../fsl_rmu.c:339: error: cast to pointer from integer of different size
.../fsl_rmu.c:340: error: cast to pointer from integer of different size
.../fsl_rmu.c:341: error: cast to pointer from integer of different size
.../fsl_rmu.c:348: error: cast to pointer from integer of different size
.../fsl_rmu.c:348: error: cast to pointer from integer of different size
.../fsl_rmu.c:348: error: cast to pointer from integer of different size
.../fsl_rmu.c:659: error: cast from pointer to integer of different size
.../fsl_rmu.c:659: error: format '%8.8x' expects type 'unsigned int', but argument 5 has type 'size_t'
.../fsl_rmu.c:985: error: cast from pointer to integer of different size
.../fsl_rmu.c:997: error: cast to pointer from integer of different size

Rewrote the corresponding code with support for 64-bit builds.
Signed-off-by: Liu Gang gang@freescale.com
Signed-off-by: Shaohui Xie shaohui@freescale.com
Signed-off-by: Paul Gortmaker paul.gortma...@windriver.com
---
 arch/powerpc/sysdev/fsl_rmu.c |   11 ++-
 1 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/sysdev/fsl_rmu.c b/arch/powerpc/sysdev/fsl_rmu.c
index 1548578..468011e 100644
--- a/arch/powerpc/sysdev/fsl_rmu.c
+++ b/arch/powerpc/sysdev/fsl_rmu.c
@@ -311,8 +311,8 @@ fsl_rio_dbell_handler(int irq, void *dev_instance)

 	/* XXX Need to check/dispatch until queue empty */
 	if (dsr & DOORBELL_DSR_DIQI) {
-		u32 dmsg =
-			(u32) fsl_dbell->dbell_ring.virt +
+		unsigned long dmsg =
+			(unsigned long) fsl_dbell->dbell_ring.virt +
 			(in_be32(&fsl_dbell->dbell_regs->dqdpar) & 0xfff);
 		struct rio_dbell *dbell;
 		int found = 0;
@@ -657,7 +657,8 @@ fsl_add_outb_message(struct rio_mport *mport, struct rio_dev *rdev, int mbox,
 	int ret = 0;

 	pr_debug("RIO: fsl_add_outb_message(): destid %4.4x mbox %d buffer " \
-		 "%8.8x len %8.8x\n", rdev->destid, mbox, (int)buffer, len);
+		 "%8.8lx len %8.8zx\n", rdev->destid, mbox,
+		 (unsigned long)buffer, len);

 	if ((len < 8) || (len > RIO_MAX_MSG_SIZE)) {
 		ret = -EINVAL;
 		goto out;
@@ -972,7 +973,7 @@ out:
 void *fsl_get_inb_message(struct rio_mport *mport, int mbox)
 {
 	struct fsl_rmu *rmu = GET_RMM_HANDLE(mport);
-	u32 phys_buf, virt_buf;
+	unsigned long phys_buf, virt_buf;
 	void *buf = NULL;
 	int buf_idx;

@@ -982,7 +983,7 @@ void *fsl_get_inb_message(struct rio_mport *mport, int mbox)
 	if (phys_buf == in_be32(&rmu->msg_regs->ifqepar))
 		goto out2;

-	virt_buf = (u32) rmu->msg_rx_ring.virt + (phys_buf
+	virt_buf = (unsigned long) rmu->msg_rx_ring.virt + (phys_buf
 		- rmu->msg_rx_ring.phys);
 	buf_idx = (phys_buf - rmu->msg_rx_ring.phys) / RIO_MAX_MSG_SIZE;
 	buf = rmu->msg_rx_ring.virt_buffer[buf_idx];
--
1.7.0.4