Re: [PATCH 2/2] offb: Add palette hack for qemu standard vga framebuffer

2011-12-15 Thread Benjamin Herrenschmidt
On Thu, 2011-12-15 at 08:49 +0100, Andreas Färber wrote:
 Am 15.12.2011 00:58, schrieb Benjamin Herrenschmidt:
  We rename the mach64 hack to simple since that's also applicable
  to anything using VGA-style DAC IO ports (set to 8-bit DAC) and we
  use it for qemu vga.
  
  Note that this is keyed on a device-tree compatible property that
  is currently only set by an upcoming version of SLOF when using the
  qemu pseries platform. This is on purpose as other qemu ppc platforms
  using OpenBIOS aren't properly setting the DAC to 8-bit at the time of
  the writing of this patch.
  
  We can fix OpenBIOS later to do that and add the required property, in
  which case it will be matched by this change.
 
 Just let me know what's needed for OpenBIOS.
 Is this just for -vga std as opposed to the default cirrus?

Yes. Cirrus isn't the default on mac99 and on pseries (tho I will
eventually add a SLOF driver for it as well).

For OpenBIOS I was thinking about just sending you a patch :-) But if
you have more time than I do, what is needed is:

 - Set the 8-bit DAC bit in the VBE enable register when initializing
the card (0x20 off the top of my mind, but double-check). Remove the >> 2
in your palette setting.

 - Implement color! so prom_init can set the initial palette (but that's
not strictly necessary).

 - I assume that the VGA device already has a device_type of display,
can be open()'ed from the client interface and will have the necessary
properties to be used by offb (width, height, linebytes, depth, and
address if it fits in 32-bit (if not, ignore it, offb will pick the largest
BAR)).

 - Stick qemu,std-vga into the compatible property of the vga PCI
device.

Cheers,
Ben.


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev

Re: [PATCH 2/2] offb: Add palette hack for qemu standard vga framebuffer

2011-12-15 Thread Andreas Färber
Am 15.12.2011 00:58, schrieb Benjamin Herrenschmidt:
 We rename the mach64 hack to simple since that's also applicable
 to anything using VGA-style DAC IO ports (set to 8-bit DAC) and we
 use it for qemu vga.
 
 Note that this is keyed on a device-tree compatible property that
 is currently only set by an upcoming version of SLOF when using the
 qemu pseries platform. This is on purpose as other qemu ppc platforms
 using OpenBIOS aren't properly setting the DAC to 8-bit at the time of
 the writing of this patch.
 
 We can fix OpenBIOS later to do that and add the required property, in
 which case it will be matched by this change.

Just let me know what's needed for OpenBIOS.
Is this just for -vga std as opposed to the default cirrus?

Cheers,
Andreas

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg

[PATCH v5 0/7] Kudmp support for PPC440x

2011-12-15 Thread Suzuki K. Poulose
The following series implements:

 * Generic framework for relocatable kernel on PPC32, based on processing 
   the dynamic relocation entries.
 * Relocatable kernel support for 44x
 * Kdump support for 44x. Doesn't support 47x yet, as the kexec 
   support is missing.

Changes from V4:

 (Suggested by : Segher Boessenkool seg...@kernel.crashing.org )
 * Added 'sync' between dcbst and icbi for the modified instruction in
   relocate().
 * Better comments on register usage in reloc_32.S
 * Better check for relocation types in relocs_check.pl.
 
Changes from V3:

 * Added a new config - NONSTATIC_KERNEL - to group different types of
   relocatable kernel. (Suggested by: Josh Boyer)
 * Added supported ppc relocation types in relocs_check.pl for verifying the
   relocations used in the kernel.

Changes from V2:

 * Renamed old style mapping based RELOCATABLE on BookE to DYNAMIC_MEMSTART.
   Suggested by: Scott Wood
 * Added support for DYNAMIC_MEMSTART on PPC440x
 * Reverted back to RELOCATABLE and RELOCATABLE_PPC32 from RELOCATABLE_PPC32_PIE
   for relocation based on processing dynamic reloc entries for PPC32.
 * Ensure the modified instructions are flushed and the i-cache invalidated at
   the end of relocate(). - Reported by : Josh Poimboeuf

Changes from V1:

 * Split the patch 'Enable CONFIG_RELOCATABLE for PPC44x' to move some
   of the generic bits to a new patch.
 * Renamed RELOCATABLE_PPC32 to RELOCATABLE_PPC32_PIE and provided options to
   retain the old style mapping. (Suggested by: Scott Wood)
 * Added support for avoiding the overlapping of uncompressed kernel
   with boot wrapper for PPC images.

The patches are based on -next tree for ppc.

I have tested these patches on Ebony, Sequoia and Virtex (QEMU emulated).
I haven't tested the RELOCATABLE bits on PPC_47x yet, as I don't have access
to one. However, RELOCATABLE should work fine there as we only depend on the
runtime address and the XLAT entry setup by the boot loader. It would be great
if somebody could test these patches on a 47x.

---

Suzuki K. Poulose (7):
  [boot] Change the load address for the wrapper to fit the kernel
  [44x] Enable CRASH_DUMP for 440x
  [44x] Enable CONFIG_RELOCATABLE for PPC44x
  [ppc] Define virtual-physical translations for RELOCATABLE
  [ppc] Process dynamic relocations for kernel
  [44x] Enable DYNAMIC_MEMSTART for 440x
  [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE


 arch/powerpc/Kconfig  |   45 -
 arch/powerpc/Makefile |6 -
 arch/powerpc/boot/wrapper |   20 ++
 arch/powerpc/configs/44x/iss476-smp_defconfig |3 
 arch/powerpc/include/asm/kdump.h  |4 
 arch/powerpc/include/asm/page.h   |   89 ++-
 arch/powerpc/kernel/Makefile  |2 
 arch/powerpc/kernel/crash_dump.c  |4 
 arch/powerpc/kernel/head_44x.S|  105 +
 arch/powerpc/kernel/head_fsl_booke.S  |2 
 arch/powerpc/kernel/machine_kexec.c   |2 
 arch/powerpc/kernel/prom_init.c   |2 
 arch/powerpc/kernel/reloc_32.S|  208 +
 arch/powerpc/kernel/vmlinux.lds.S |8 +
 arch/powerpc/mm/44x_mmu.c |2 
 arch/powerpc/mm/init_32.c |7 +
 arch/powerpc/relocs_check.pl  |   14 +-
 17 files changed, 495 insertions(+), 28 deletions(-)
 create mode 100644 arch/powerpc/kernel/reloc_32.S

--
Suzuki K. Poulose



[PATCH v5 1/7] [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE

2011-12-15 Thread Suzuki K. Poulose
The current implementation of CONFIG_RELOCATABLE in BookE is based
on mapping the page aligned kernel load address to KERNELBASE. This
approach however is not enough for platforms, where the TLB page size
is large (e.g., 256M on 44x). So we are renaming the RELOCATABLE used
currently in BookE to DYNAMIC_MEMSTART to reflect the actual method.

The CONFIG_RELOCATABLE for PPC32 (BookE), based on processing of the
dynamic relocations, will be introduced later in the patch series.

This change would allow the use of the old method of RELOCATABLE for
platforms which can afford to enforce the page alignment (platforms with
smaller TLB size).

Changes since v3:

* Introduced a new config, NONSTATIC_KERNEL, to denote a kernel which is
  either RELOCATABLE or DYNAMIC_MEMSTART. (Suggested by: Josh Boyer)

Suggested-by: Scott Wood scottw...@freescale.com
Tested-by: Scott Wood scottw...@freescale.com

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Scott Wood scottw...@freescale.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Josh Boyer jwbo...@gmail.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linux ppc dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig  |   60 +
 arch/powerpc/configs/44x/iss476-smp_defconfig |3 +
 arch/powerpc/include/asm/kdump.h  |4 +-
 arch/powerpc/include/asm/page.h   |4 +-
 arch/powerpc/kernel/crash_dump.c  |4 +-
 arch/powerpc/kernel/head_44x.S|4 +-
 arch/powerpc/kernel/head_fsl_booke.S  |2 -
 arch/powerpc/kernel/machine_kexec.c   |2 -
 arch/powerpc/kernel/prom_init.c   |2 -
 arch/powerpc/mm/44x_mmu.c |2 -
 10 files changed, 56 insertions(+), 31 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 7c93c7e..fac92ce 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -364,7 +364,8 @@ config KEXEC
 config CRASH_DUMP
	bool "Build a kdump crash kernel"
depends on PPC64 || 6xx || FSL_BOOKE
-   select RELOCATABLE if PPC64 || FSL_BOOKE
+   select RELOCATABLE if PPC64
+   select DYNAMIC_MEMSTART if FSL_BOOKE
help
  Build a kernel suitable for use as a kdump capture kernel.
  The same kernel binary can be used as production kernel and dump
@@ -773,6 +774,10 @@ source drivers/rapidio/Kconfig
 
 endmenu
 
+config NONSTATIC_KERNEL
+   bool
+   default n
+
  menu "Advanced setup"
depends on PPC32
 
@@ -822,23 +827,39 @@ config LOWMEM_CAM_NUM
	int "Number of CAMs to use to map low memory" if LOWMEM_CAM_NUM_BOOL
default 3
 
-config RELOCATABLE
-   bool "Build a relocatable kernel (EXPERIMENTAL)"
+config DYNAMIC_MEMSTART
+   bool "Enable page aligned dynamic load address for kernel (EXPERIMENTAL)"
	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
-   help
- This builds a kernel image that is capable of running at the
- location the kernel is loaded at (some alignment restrictions may
- exist).
-
- One use is for the kexec on panic case where the recovery kernel
- must live at a different physical address than the primary
- kernel.
-
- Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
- it has been loaded at and the compile time physical addresses
- CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
- setting can still be useful to bootwrappers that need to know the
- load location of the kernel (eg. u-boot/mkimage).
+   select NONSTATIC_KERNEL
+   help
+ This option enables the kernel to be loaded at any page aligned
+ physical address. The kernel creates a mapping from KERNELBASE to 
+ the address where the kernel is loaded. The page size here implies
+ the TLB page size of the mapping for kernel on the particular 
platform.
+ Please refer to the init code for finding the TLB page size.
+
+ DYNAMIC_MEMSTART is an easy way of implementing pseudo-RELOCATABLE
+ kernel image, where the only restriction is the page aligned kernel
+ load address. When this option is enabled, the compile time physical 
+ address CONFIG_PHYSICAL_START is ignored.
+
+# Mapping based RELOCATABLE is moved to DYNAMIC_MEMSTART
+# config RELOCATABLE
+#  bool "Build a relocatable kernel (EXPERIMENTAL)"
+#  depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
+#  help
+#This builds a kernel image that is capable of running at the
+#location the kernel is loaded at, without any alignment restrictions.
+#
+#One use is for the kexec on panic case where the recovery kernel
+#must live at a different physical address than the primary
+#kernel.
+#
+#Note: If CONFIG_RELOCATABLE=y, then the kernel runs from 

[PATCH v5 4/7] [ppc] Define virtual-physical translations for RELOCATABLE

2011-12-15 Thread Suzuki K. Poulose
We find the runtime address of _stext and relocate ourselves based
on the following calculation.

virtual_base = ALIGN(KERNELBASE,KERNEL_TLB_PIN_SIZE) +
MODULO(_stext.run,KERNEL_TLB_PIN_SIZE)

relocate() is called with the Effective Virtual Base Address (as
shown below)

| Phys. Addr| Virt. Addr |
Page||
Boundary|   ||
|   ||
|   ||
Kernel Load |___|_ __ _ _ _ _|<- Effective
Addr(_stext)|   |  ^ |Virt. Base Addr
|   |  | |
|   |  | |
|   |reloc_offset|
|   |  | |
|   |  | |
|   |__v_|<-(KERNELBASE)%TLB_SIZE
|   ||
|   ||
|   ||
Page|---||
Boundary|   ||


On BookE, we need __va() & __pa() early in the boot process to access
the device tree.

Currently this has been defined as :

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) -
PHYSICAL_START + KERNELBASE))
where:
 PHYSICAL_START is kernstart_addr - a variable updated at runtime.
 KERNELBASE is the compile time Virtual base address of kernel.

This won't work for us, as kernstart_addr is dynamic and will yield different
results for __va()/__pa() for the same mapping.

e.g.,

Let the kernel be loaded at 64MB and KERNELBASE be 0xc0000000 (same as
PAGE_OFFSET).

In this case, we would be mapping 0 to 0xc0000000, and kernstart_addr = 64M

Now __va(1MB) = (0x100000) - (0x4000000) + 0xc0000000
= 0xbc100000, which is wrong.

It should be: 0xc0000000 + 0x100000 = 0xc0100000

On platforms which support AMP, like PPC_47x (based on 44x), the kernel
could be loaded at highmem. Hence we cannot always depend on the compile
time constants for mapping.

Here are the possible solutions:

1) Update kernstart_addr (PHYSICAL_START) to match the physical address of the
compile time KERNELBASE value, instead of the actual physical address of _stext.

The disadvantage is that we may break other users of PHYSICAL_START. They
could be replaced with __pa(_stext).

2) Redefine __va() & __pa() with relocation offset


#ifdef  CONFIG_RELOCATABLE_PPC32
#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + 
(KERNELBASE + RELOC_OFFSET)))
#define __pa(x) ((unsigned long)(x) + PHYSICAL_START - (KERNELBASE + 
RELOC_OFFSET))
#endif

where, RELOC_OFFSET could be

  a) A variable, say relocation_offset (like kernstart_addr), updated
 at boot time. This impacts performance, as we have to load an additional
 variable from memory.

OR

  b) #define RELOC_OFFSET ((PHYSICAL_START & PPC_PIN_SIZE_OFFSET_MASK) - \
  (KERNELBASE & PPC_PIN_SIZE_OFFSET_MASK))

   This introduces more calculations for doing the translation.

3) Redefine __va() & __pa() with a new variable

i.e,

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))

where VIRT_PHYS_OFFSET :

#ifdef CONFIG_RELOCATABLE_PPC32
#define VIRT_PHYS_OFFSET virt_phys_offset
#else
#define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
#endif /* CONFIG_RELOCATABLE_PPC32 */

where virt_phys_offset is updated at runtime to:

Effective KERNELBASE - kernstart_addr.

Taking our example, above:

virt_phys_offset = effective_kernelstart_vaddr - kernstart_addr
 = 0xc0400000 - 0x400000
 = 0xc0000000
and

__va(0x100000) = 0xc0000000 + 0x100000 = 0xc0100000
 which is what we want.

I have implemented (3) in the following patch, which has the same cost of
operation as the existing one.

I have tested the patches on 440x platforms only. However this should
work fine for PPC_47x also, as we only depend on the runtime address
and the current TLB XLAT entry for the startup code, which is available
in r25. I don't have access to a 47x board yet. So, it would be great if
somebody could test this on 47x.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/include/asm/page.h |   85 ++-
 arch/powerpc/mm/init_32.c   |7 +++
 2 files changed, 89 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index f149967..f072e97 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -97,12 +97,26 @@ extern unsigned int HPAGE_SHIFT;
 
 extern phys_addr_t memstart_addr;
 extern phys_addr_t kernstart_addr;
+
+#ifdef CONFIG_RELOCATABLE_PPC32
+extern long long virt_phys_offset;
 #endif
+
+#endif /* 

[PATCH v5 2/7] [44x] Enable DYNAMIC_MEMSTART for 440x

2011-12-15 Thread Suzuki K. Poulose
DYNAMIC_MEMSTART (the old RELOCATABLE) was restricted only to the PPC_47x
variants of 44x. This patch enables DYNAMIC_MEMSTART for 440x based chipsets.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linux ppc dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig   |2 +-
 arch/powerpc/kernel/head_44x.S |   12 
 2 files changed, 13 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index fac92ce..5eafe95 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -829,7 +829,7 @@ config LOWMEM_CAM_NUM
 
 config DYNAMIC_MEMSTART
	bool "Enable page aligned dynamic load address for kernel (EXPERIMENTAL)"
-   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
+   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || 44x)
select NONSTATIC_KERNEL
help
  This option enables the kernel to be loaded at any page aligned
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index 3df7735..d57 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -802,12 +802,24 @@ skpinv:	addi	r4,r4,1			/* Increment */
 /*
  * Configure and load pinned entry into TLB slot 63.
  */
+#ifdef CONFIG_DYNAMIC_MEMSTART
+
+   /* Read the XLAT entry for our current mapping */
+   tlbre   r25,r23,PPC44x_TLB_XLAT
+
+   lis r3,KERNELBASE@h
+   ori r3,r3,KERNELBASE@l
+
+   /* Use our current RPN entry */
+   mr  r4,r25
+#else
 
lis r3,PAGE_OFFSET@h
ori r3,r3,PAGE_OFFSET@l
 
/* Kernel is at the base of RAM */
li r4, 0/* Load the kernel physical address */
+#endif
 
/* Load the kernel PID = 0 */
li  r0,0



[PATCH v5 3/7] [ppc] Process dynamic relocations for kernel

2011-12-15 Thread Suzuki K. Poulose
The following patch implements the dynamic relocation processing for the
PPC32 kernel. relocate() accepts the target virtual address and relocates
the kernel image to that address.

Currently the following relocation types are handled :

R_PPC_RELATIVE
R_PPC_ADDR16_LO
R_PPC_ADDR16_HI
R_PPC_ADDR16_HA

The last 3 relocations in the above list depend on the value of the symbol
whose index is encoded in the relocation entry. Hence we need the symbol
table for processing such relocations.

Note: The GNU ld for ppc32 produces buggy relocations for relocation types
that depend on symbols. The value of the symbols with STB_LOCAL scope
should be assumed to be zero. - Alan Modra

Changes since V4:

 ( Suggested by: Segher Boessenkool seg...@kernel.crashing.org: )
 * Added 'sync' between dcbst and icbi for the modified instruction in
   relocate().
 * Replaced msync with sync.
 * Better comments on register usage in relocate().
 * Better check for relocation types in relocs_check.pl

Changes since V3:
 * Updated relocation types for ppc in arch/powerpc/relocs_check.pl

Changes since v2:
  * Flush the d-cached instructions and invalidate the i-cache to reflect
    the processed instructions. (Reported by: Josh Poimboeuf)

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Signed-off-by: Josh Poimboeuf jpoim...@linux.vnet.ibm.com
Cc: Paul Mackerras pau...@samba.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Alan Modra amo...@au1.ibm.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig  |   41 ---
 arch/powerpc/Makefile |6 +
 arch/powerpc/kernel/Makefile  |2 
 arch/powerpc/kernel/reloc_32.S|  208 +
 arch/powerpc/kernel/vmlinux.lds.S |8 +
 arch/powerpc/relocs_check.pl  |   14 ++
 6 files changed, 256 insertions(+), 23 deletions(-)
 create mode 100644 arch/powerpc/kernel/reloc_32.S

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 5eafe95..33b1c8c 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -843,23 +843,30 @@ config DYNAMIC_MEMSTART
  load address. When this option is enabled, the compile time physical 
  address CONFIG_PHYSICAL_START is ignored.
 
-# Mapping based RELOCATABLE is moved to DYNAMIC_MEMSTART
-# config RELOCATABLE
-#  bool "Build a relocatable kernel (EXPERIMENTAL)"
-#  depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
-#  help
-#This builds a kernel image that is capable of running at the
-#location the kernel is loaded at, without any alignment restrictions.
-#
-#One use is for the kexec on panic case where the recovery kernel
-#must live at a different physical address than the primary
-#kernel.
-#
-#Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
-#it has been loaded at and the compile time physical addresses
-#CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
-#setting can still be useful to bootwrappers that need to know the
-#load location of the kernel (eg. u-boot/mkimage).
+ This option is overridden by CONFIG_RELOCATABLE
+
+config RELOCATABLE
+   bool "Build a relocatable kernel (EXPERIMENTAL)"
+   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM
+   select NONSTATIC_KERNEL
+   help
+ This builds a kernel image that is capable of running at the
+ location the kernel is loaded at, without any alignment restrictions.
+ This feature is a superset of DYNAMIC_MEMSTART and hence overrides it.
+
+ One use is for the kexec on panic case where the recovery kernel
+ must live at a different physical address than the primary
+ kernel.
+
+ Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
+ it has been loaded at and the compile time physical addresses
+ CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
+ setting can still be useful to bootwrappers that need to know the
+ load address of the kernel (eg. u-boot/mkimage).
+
+config RELOCATABLE_PPC32
+   def_bool y
	depends on PPC32 && RELOCATABLE
 
 config PAGE_OFFSET_BOOL
	bool "Set custom page offset address"
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index ffe4d88..b8b105c 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -63,9 +63,9 @@ override CC   += -m$(CONFIG_WORD_SIZE)
 override AR:= GNUTARGET=elf$(CONFIG_WORD_SIZE)-powerpc $(AR)
 endif
 
-LDFLAGS_vmlinux-yy := -Bstatic
-LDFLAGS_vmlinux-$(CONFIG_PPC64)$(CONFIG_RELOCATABLE) := -pie
-LDFLAGS_vmlinux:= $(LDFLAGS_vmlinux-yy)
+LDFLAGS_vmlinux-y := -Bstatic
+LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) := -pie
+LDFLAGS_vmlinux:= $(LDFLAGS_vmlinux-y)
 
 CFLAGS-$(CONFIG_PPC64) := -mminimal-toc -mtraceback=no -mcall-aixdesc
 

[PATCH v5 5/7] [44x] Enable CONFIG_RELOCATABLE for PPC44x

2011-12-15 Thread Suzuki K. Poulose
The following patch adds relocatable kernel support - based on processing
of dynamic relocations - for PPC44x kernel.

We find the runtime address of _stext and relocate ourselves based
on the following calculation.

virtual_base = ALIGN(KERNELBASE,256M) +
MODULO(_stext.run,256M)

relocate() is called with the Effective Virtual Base Address (as
shown below)

| Phys. Addr| Virt. Addr |
Page (256M) ||
Boundary|   ||
|   ||
|   ||
Kernel Load |___|_ __ _ _ _ _|<- Effective
Addr(_stext)|   |  ^ |Virt. Base Addr
|   |  | |
|   |  | |
|   |reloc_offset|
|   |  | |
|   |  | |
|   |__v_|<-(KERNELBASE)%256M
|   ||
|   ||
|   ||
Page(256M)  |---||
Boundary|   ||

The virt_phys_offset is updated accordingly, i.e.,

virt_phys_offset = effective kernel virt base - kernstart_addr

I have tested the patches on 440x platforms only. However this should
work fine for PPC_47x also, as we only depend on the runtime address
and the current TLB XLAT entry for the startup code, which is available
in r25. I don't have access to a 47x board yet. So, it would be great if
somebody could test this on 47x.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Tony Breeds t...@bakeyournoodle.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig   |2 -
 arch/powerpc/kernel/head_44x.S |   95 +++-
 2 files changed, 94 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 33b1c8c..8833df5 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -847,7 +847,7 @@ config DYNAMIC_MEMSTART
 
 config RELOCATABLE
	bool "Build a relocatable kernel (EXPERIMENTAL)"
-   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM
+   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && 44x
select NONSTATIC_KERNEL
help
  This builds a kernel image that is capable of running at the
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index d57..885d540 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -64,6 +64,35 @@ _ENTRY(_start);
mr  r31,r3  /* save device tree ptr */
li  r24,0   /* CPU number */
 
+#ifdef CONFIG_RELOCATABLE
+/*
+ * Relocate ourselves to the current runtime address.
+ * This is called only by the Boot CPU.
+ * relocate is called with our current runtime virtual
+ * address.
+ * r21 will be loaded with the physical runtime address of _stext
+ */
+   bl  0f  /* Get our runtime address */
+0: mflr    r21 /* Make it accessible */
+   addis   r21,r21,(_stext - 0b)@ha
+   addi    r21,r21,(_stext - 0b)@l /* Get our current runtime base */
+
+   /*
+* We have the runtime (virtual) address of our base.
+* We calculate our shift of offset from a 256M page.
+* We could map the 256M page we belong to at PAGE_OFFSET and
+* get going from there.
+*/
+   lis r4,KERNELBASE@h
+   ori r4,r4,KERNELBASE@l
+   rlwinm  r6,r21,0,4,31   /* r6 = PHYS_START % 256M */
+   rlwinm  r5,r4,0,4,31/* r5 = KERNELBASE % 256M */
subf    r3,r5,r6    /* r3 = r6 - r5 */
add r3,r4,r3    /* Required Virtual Address */
+
+   bl  relocate
+#endif
+
bl  init_cpu_state
 
/*
@@ -86,7 +115,64 @@ _ENTRY(_start);
 
bl  early_init
 
-#ifdef CONFIG_DYNAMIC_MEMSTART
+#ifdef CONFIG_RELOCATABLE
+   /*
+* Relocatable kernel support based on processing of dynamic
+* relocation entries.
+*
+* r25 will contain RPN/ERPN for the start address of memory
+* r21 will contain the current offset of _stext
+*/
+   lis r3,kernstart_addr@ha
+   la  r3,kernstart_addr@l(r3)
+
+   /*
+* Compute the kernstart_addr.
+* kernstart_addr => (r6,r8)
+* kernstart_addr & ~0xfffffff => (r6,r7)
+*/
+   rlwinm  r6,r25,0,28,31  /* ERPN. Bits 32-35 of Address */
+   rlwinm  r7,r25,0,0,3/* RPN - assuming 256 MB page size */
+   rlwinm  r8,r21,0,4,31   /* r8 = (_stext & 0xfffffff) */
+   or  r8,r7,r8/* Compute the lower 32bit of kernstart_addr */
+
+   /* 

[PATCH v5 6/7] [44x] Enable CRASH_DUMP for 440x

2011-12-15 Thread Suzuki K. Poulose
Now that we have a relocatable kernel, supporting CRASH_DUMP only requires
turning the switches on for UP machines.

We don't have kexec support on 47x yet. Enabling SMP support would be done
as part of enabling the PPC_47x support.


Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig |4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 8833df5..fe56229 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -363,8 +363,8 @@ config KEXEC
 
 config CRASH_DUMP
bool Build a kdump crash kernel
-   depends on PPC64 || 6xx || FSL_BOOKE
-   select RELOCATABLE if PPC64
+   depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP && !PPC_47x)
+   select RELOCATABLE if PPC64 || 44x
select DYNAMIC_MEMSTART if FSL_BOOKE
help
  Build a kernel suitable for use as a kdump capture kernel.



[PATCH v5 7/7] [boot] Change the load address for the wrapper to fit the kernel

2011-12-15 Thread Suzuki K. Poulose
The wrapper code which uncompresses the kernel in case of a 'ppc' boot
is by default loaded at 0x00400000 and the kernel will be uncompressed
to fit the location 0-0x00400000. But with dynamic relocations, the size
of the kernel may exceed 0x00400000 (4M). This would cause an overlap
of the uncompressed kernel and the boot wrapper, causing a failure in
boot.

The message looks like :


   zImage starting: loaded at 0x00400000 (sp: 0x0065ffb0)
   Allocating 0x5ce650 bytes for kernel ...
   Insufficient memory for kernel at address 0! (_start=00400000,
uncompressed size=00591a20)

This patch shifts the load address of the boot wrapper code to the next
higher MB, according to the size of the uncompressed vmlinux.

With the patch, we get the following message while building the image :

 WARN: Uncompressed kernel (size 0x5b0344) overlaps the address of the
wrapper (0x400000)
 WARN: Fixing the link_address of wrapper to (0x600000)


Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---

 arch/powerpc/boot/wrapper |   20 
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper
index 14cd4bc..c8d6aaf 100755
--- a/arch/powerpc/boot/wrapper
+++ b/arch/powerpc/boot/wrapper
@@ -257,6 +257,8 @@ vmz=$tmpdir/`basename \$kernel\`.$ext
if [ -z "$cacheit" -o ! -f "$vmz$gzip" -o "$vmz$gzip" -ot "$kernel" ]; then
 ${CROSS}objcopy $objflags $kernel $vmz.$$
 
+strip_size=$(stat -c %s $vmz.$$)
+
 if [ -n $gzip ]; then
 gzip -n -f -9 $vmz.$$
 fi
@@ -266,6 +268,24 @@ if [ -z "$cacheit" -o ! -f "$vmz$gzip" -o "$vmz$gzip" -ot "$kernel" ]; then
 else
vmz=$vmz.$$
 fi
+else
+# Calculate the vmlinux.strip size
+${CROSS}objcopy $objflags $kernel $vmz.$$
+strip_size=$(stat -c %s $vmz.$$)
+rm -f $vmz.$$
+fi
+
+# Round the size to next higher MB limit
round_size=$(((strip_size + 0xfffff) & 0xfff00000))
+
+round_size=0x$(printf %x $round_size)
+link_addr=$(printf %d $link_address)
+
+if [ $link_addr -lt $strip_size ]; then
+    echo "WARN: Uncompressed kernel (size 0x$(printf %x $strip_size))" \
+         "overlaps the address of the wrapper ($link_address)"
+    echo "WARN: Fixing the link_address of wrapper to ($round_size)"
+    link_address=$round_size
 fi
 
 vmz=$vmz$gzip



Re: linux-next bad Kconfig for drivers/hid

2011-12-15 Thread Jiri Kosina
On Mon, 12 Dec 2011, Tony Breeds wrote:

 On Mon, Dec 12, 2011 at 12:21:16AM +0100, Jiri Kosina wrote:
  On Thu, 8 Dec 2011, Jeremy Fitzhardinge wrote:
   
Hm.  How about making it depends on HID && POWER_SUPPLY?  I think that
would needlessly disable it if HID is also modular, but I'm not sure how
to fix that.  depends on HID && POWER_SUPPLY && HID == POWER_SUPPLY?
 
 That would work, but I think technically I think you could end up with
 HID=m and POWER_SUPPLY=m which would still allow HID_BATTERY_STRENGTH=y
 which is the same problem.
 
 I don't know what kind of .config contortions you'd need to do to get
 there.
  
  How about making it 'default POWER_SUPPLY' instead?
 
By itself that won't help, as POWER_SUPPLY=m satisfies it.
 
So it looks like we have Jeremy's:
  HID && POWER_SUPPLY && HID == POWER_SUPPLY

Tony,

have you actually tested that this one works in the configuration you have
seen it fail in?

I don't seem to be able to find any use of '==' in other Kconfig files 
(and never used it myself), so I'd like to have confirmation that it 
actually works and fixes the problem before I apply it :)

Thanks,

-- 
Jiri Kosina
SUSE Labs


[PATCH v2 3/3] ppc32/kprobe: don't emulate store when kprobe stwu r1

2011-12-15 Thread Tiejun Chen
We don't do the real store operation when kprobing 'stwu Rx,(y)R1',
since this may corrupt the exception frame. Instead, we now do this
operation safely in the exception return code, after migrating the
current exception frame below the kprobed function's stack.

So we only update gpr[1] here and set a thread flag to mark this.

Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
 arch/powerpc/lib/sstep.c |   25 +++--
 1 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index 9a52349..5145d10 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -566,7 +566,7 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned 
int instr)
unsigned long int ea;
unsigned int cr, mb, me, sh;
int err;
-   unsigned long old_ra;
+   unsigned long old_ra, val3;
long ival;
 
	opcode = instr >> 26;
@@ -1486,10 +1486,31 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
goto ldst_done;
 
 	case 36:	/* stw */
-	case 37:	/* stwu */
 		val = regs->gpr[rd];
 		err = write_mem(val, dform_ea(instr, regs), 4, regs);
 		goto ldst_done;
+	case 37:	/* stwu */
+		val = regs->gpr[rd];
+		val3 = dform_ea(instr, regs);
+		/*
+		 * For PPC32 we always use stwu to change the stack pointer
+		 * with r1. So this emulated store may corrupt the exception
+		 * frame; we have to provide the exception frame trampoline,
+		 * which is pushed below the kprobed function stack. So we only
+		 * update gpr[1] but don't emulate the real store operation.
+		 * We will do the real store safely in the exception return
+		 * code by checking this flag.
+		 */
+		if ((ra == 1) && !(regs->msr & MSR_PR)) {
+			/*
+			 * Check if the flag is already set, since that means
+			 * we'll lose the previous value.
+			 */
+			WARN_ON(test_thread_flag(TIF_EMULATE_STACK_STORE));
+			set_thread_flag(TIF_EMULATE_STACK_STORE);
+			err = 0;
+		} else
+			err = write_mem(val, val3, 4, regs);
+		goto ldst_done;
 
case 38:/* stb */
case 39:/* stbu */
-- 
1.5.6
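The decision the new 'case 37' hunk above makes can be sketched as a simplified C model (not the kernel code; MSR_PR's value and the register-file layout here are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MSR_PR 0x4000u          /* "problem state" (user mode) bit, illustrative */

struct pt_regs_model {
    uint32_t gpr[32];
    uint32_t msr;
};

static bool tif_emulate_stack_store;   /* stands in for TIF_EMULATE_STACK_STORE */
static uint32_t fake_mem;              /* stand-in store target for the normal case */

/* Emulate 'stwu rS,D(rA)'; returns true when the store was deferred
 * to the exception-return path instead of being performed. */
static bool emulate_stwu(struct pt_regs_model *regs, int rs, int ra, int32_t d)
{
    uint32_t ea = regs->gpr[ra] + (uint32_t)d;

    if (ra == 1 && !(regs->msr & MSR_PR)) {
        /* Kernel-mode stack update: don't touch memory yet; only
         * update r1 and flag the deferred store. */
        assert(!tif_emulate_stack_store);   /* mirrors the WARN_ON */
        tif_emulate_stack_store = true;
        regs->gpr[1] = ea;
        return true;
    }

    fake_mem = regs->gpr[rs];   /* ordinary case: perform the store now */
    regs->gpr[ra] = ea;         /* stwu also updates the base register */
    return false;
}
```

The real patch only sets TIF_EMULATE_STACK_STORE and updates gpr[1]; the entry_32.S code in patch 2/3 completes the store.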



[PATCH v2 1/3] powerpc/kprobe: introduce a new thread flag

2011-12-15 Thread Tiejun Chen
We need to add a new thread flag, TIF_EMULATE_STACK_STORE, for
emulating the stack store operation while exiting the exception.

Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
 arch/powerpc/include/asm/thread_info.h |3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index 836f231..f414fee 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -112,6 +112,8 @@ static inline struct thread_info *current_thread_info(void)
 #define TIF_FREEZE 14  /* Freezing for suspend */
 #define TIF_SYSCALL_TRACEPOINT 15  /* syscall tracepoint instrumentation */
 #define TIF_RUNLATCH   16  /* Is the runlatch enabled? */
+#define TIF_EMULATE_STACK_STORE	17	/* Is an instruction emulation
+						   for stack store? */
 
 /* as above, but as bit values */
 #define _TIF_SYSCALL_TRACE	(1<<TIF_SYSCALL_TRACE)
@@ -130,6 +132,7 @@ static inline struct thread_info *current_thread_info(void)
 #define _TIF_FREEZE		(1<<TIF_FREEZE)
 #define _TIF_SYSCALL_TRACEPOINT	(1<<TIF_SYSCALL_TRACEPOINT)
 #define _TIF_RUNLATCH		(1<<TIF_RUNLATCH)
+#define _TIF_EMULATE_STACK_STORE	(1<<TIF_EMULATE_STACK_STORE)
 #define _TIF_SYSCALL_T_OR_A(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
 _TIF_SECCOMP | _TIF_SYSCALL_TRACEPOINT)
 
-- 
1.5.6



[PATCH v2 0/3] ppc32/kprobe: Fix a bug for kprobe stwu r1

2011-12-15 Thread Tiejun Chen
Changes from v1:

* use memcpy directly, removing copy_exc_stack
* add !(regs->msr & MSR_PR) and
  WARN_ON(test_thread_flag(TIF_EMULATE_STACK_STORE))
  to make sure we're on the good path
* move this migration process inside 'restore'
* clear the TIF flag atomically

Tiejun Chen (3):
  powerpc/kprobe: introduce a new thread flag
  ppc32/kprobe: complete kprobe and migrate exception frame
  ppc32/kprobe: don't emulate store when kprobe stwu r1

 arch/powerpc/include/asm/thread_info.h |3 ++
 arch/powerpc/kernel/entry_32.S |   35 
 arch/powerpc/lib/sstep.c   |   25 +-
 3 files changed, 61 insertions(+), 2 deletions(-)

Tiejun


[PATCH v2 2/3] ppc32/kprobe: complete kprobe and migrate exception frame

2011-12-15 Thread Tiejun Chen
We can't emulate stwu, since that may corrupt the current exception stack,
so we have to do the real store operation in the exception return code.

First we allocate a trampoline exception frame below the kprobed
function's stack and copy the current exception frame to the trampoline.
Then we can do the real store operation to implement 'stwu', and reroute
r1 to the trampoline frame to complete the exception migration.
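The migration described above can be sketched in C against a simulated flat stack (frame size, offsets, and helper names here are illustrative; the authoritative sequence is the entry_32.S hunk below):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

enum { INT_FRAME_SIZE = 64 };        /* illustrative, not the real value */

static uint8_t stack[1024];          /* simulated kernel stack */

static uint32_t load32(size_t off)
{
    uint32_t v;
    memcpy(&v, &stack[off], 4);
    return v;
}

static void store32(size_t off, uint32_t v)
{
    memcpy(&stack[off], &v, 4);
}

/* 'cur' is the offset of the current exception frame; its first word
 * models the saved GPR1 slot, holding the new stack pointer the deferred
 * stwu produced. Returns the new r1 (offset of the trampoline frame). */
static size_t migrate_exception_frame(size_t cur)
{
    uint32_t kprobed_entry = (uint32_t)(cur + INT_FRAME_SIZE); /* old r1 */
    uint32_t new_sp = load32(cur);           /* saved GPR1 = stwu target */
    size_t dst = new_sp - INT_FRAME_SIZE;    /* trampoline frame below it */

    memmove(&stack[dst], &stack[cur], INT_FRAME_SIZE); /* copy the frame */
    store32(new_sp, kprobed_entry);  /* complete the real stwu: back chain */
    return dst;                      /* reroute r1 to the trampoline */
}
```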

Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
 arch/powerpc/kernel/entry_32.S |   35 +++
 1 files changed, 35 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 56212bc..0cdd27d 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -850,6 +850,41 @@ resume_kernel:
 
/* interrupts are hard-disabled at this point */
 restore:
+   lwz r3,_MSR(r1) /* Returning to user mode? */
+   andi.   r0,r3,MSR_PR
+   bne 1f  
+   /* check current_thread_info, _TIF_EMULATE_STACK_STORE */
+   rlwinm  r9,r1,0,0,(31-THREAD_SHIFT)
+   lwz r0,TI_FLAGS(r9)
+   andis.  r0,r0,_TIF_EMULATE_STACK_STORE@h
+   beq+1f  
+
+	addi	r9,r1,INT_FRAME_SIZE	/* Get the kprobed function entry */
+
+	lwz	r3,GPR1(r1)
+	subi	r3,r3,INT_FRAME_SIZE	/* dst: Allocate a trampoline exception frame */
+	mr	r4,r1			/* src:  current exception frame */
+	li	r5,INT_FRAME_SIZE	/* size: INT_FRAME_SIZE */
+	mr	r1,r3			/* Reroute the trampoline frame to r1 */
+	bl	memcpy			/* Copy from the original to the trampoline */
+
+   /* Do real store operation to complete stwu */
+   lwz r5,GPR1(r1)
+   stw r9,0(r5)
+
+   /* Clear _TIF_EMULATE_STACK_STORE flag */
+   rlwinm  r9,r1,0,0,(31-THREAD_SHIFT)
+   lis r11,_TIF_EMULATE_STACK_STORE@h
+	addi	r9,r9,TI_FLAGS
+0: lwarx   r8,0,r9
+   andcr8,r8,r11
+#ifdef CONFIG_IBM405_ERR77
+	dcbt	0,r9
+#endif
+   stwcx.  r8,0,r9
+   bne-0b
+1:
+
 #ifdef CONFIG_44x
 BEGIN_MMU_FTR_SECTION
b   1f
-- 
1.5.6



Re: [PATCH 3/4] ppc32/kprobe: complete kprobe and migrate exception frame

2011-12-15 Thread tiejun.chen
It looks like we have to go through 'restore' in the end, as I said
previously. I sent v2 based on all your comments.

 I assume it may not be necessary to reorganize ret_from_except for *ppc32*.
 
 It might be cleaner but I can do that myself later.
 

I have this version, but I'm not 100% sure it's what you expect :)

#define _TIF_WORK_MASK (_TIF_USER_WORK_MASK | _TIF_EMULATE_STACK_STORE)

==
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 56212bc..e52b586 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -791,41 +791,29 @@ ret_from_except:
SYNC/* Some chip revs have problems here... */
MTMSRD(r10) /* disable interrupts */

-   lwz r3,_MSR(r1) /* Returning to user mode? */
-   andi.   r0,r3,MSR_PR
-   beq resume_kernel
-
 user_exc_return:   /* r10 contains MSR_KERNEL here */
/* Check current_thread_info()-flags */
rlwinm  r9,r1,0,0,(31-THREAD_SHIFT)
lwz r9,TI_FLAGS(r9)
-   andi.   r0,r9,_TIF_USER_WORK_MASK
-   bne do_work
+   andi.   r0,r9,_TIF_WORK_MASK
+   beq restore

-restore_user:
-#if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
-   /* Check whether this process has its own DBCR0 value.  The internal
-  debug mode bit tells us that dbcr0 should be loaded. */
-   lwz r0,THREAD+THREAD_DBCR0(r2)
-   andis.  r10,r0,DBCR0_IDM@h
-   bnel-   load_dbcr0
-#endif
+   lwz r3,_MSR(r1) /* Returning to user mode? */
+   andi.   r0,r3,MSR_PR
+   bne do_user_work

 #ifdef CONFIG_PREEMPT
-   b   restore
-
 /* N.B. the only way to get here is from the beq following ret_from_except. */
-resume_kernel:
/* check current_thread_info-preempt_count */
rlwinm  r9,r1,0,0,(31-THREAD_SHIFT)
lwz r0,TI_PREEMPT(r9)
cmpwi   0,r0,0  /* if non-zero, just restore regs and return */
-   bne restore
+   bne 2f
lwz r0,TI_FLAGS(r9)
andi.   r0,r0,_TIF_NEED_RESCHED
-   beq+restore
+   beq+2f
andi.   r0,r3,MSR_EE/* interrupts off? */
-   beq restore /* don't schedule if so */
+   beq 2f  /* don't schedule if so */
 #ifdef CONFIG_TRACE_IRQFLAGS
/* Lockdep thinks irqs are enabled, we need to call
 * preempt_schedule_irq with IRQs off, so we inform lockdep
@@ -844,12 +832,54 @@ resume_kernel:
 */
bl  trace_hardirqs_on
 #endif
-#else
-resume_kernel:
+2:
 #endif /* CONFIG_PREEMPT */

+   /* check current_thread_info, _TIF_EMULATE_STACK_STORE */
+   rlwinm  r9,r1,0,0,(31-THREAD_SHIFT)
+   lwz r0,TI_FLAGS(r9)
+   andis.  r0,r0,_TIF_EMULATE_STACK_STORE@h
+   beq+restore
+
+	addi	r9,r1,INT_FRAME_SIZE	/* Get the kprobed function entry */
+
+	lwz	r3,GPR1(r1)
+	subi	r3,r3,INT_FRAME_SIZE	/* dst: Allocate a trampoline exception frame */
+	mr	r4,r1			/* src:  current exception frame */
+	li	r5,INT_FRAME_SIZE	/* size: INT_FRAME_SIZE */
+	mr	r1,r3			/* Reroute the trampoline frame to r1 */
+	bl	memcpy			/* Copy from the original to the trampoline */
+
+   /* Do real store operation to complete stwu */
+   lwz r5,GPR1(r1)
+   stw r9,0(r5)
+
+   /* Clear _TIF_EMULATE_STACK_STORE flag */
+   rlwinm  r9,r1,0,0,(31-THREAD_SHIFT)
+   lis r11,_TIF_EMULATE_STACK_STORE@h
+	addi	r9,r9,TI_FLAGS
+0: lwarx   r8,0,r9
+   andcr8,r8,r11
+#ifdef CONFIG_IBM405_ERR77
+	dcbt	0,r9
+#endif
+   stwcx.  r8,0,r9
+   bne-0b
+
/* interrupts are hard-disabled at this point */
 restore:
+#if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
+   lwz r3,_MSR(r1) /* Returning to user mode? */
+   andi.   r0,r3,MSR_PR
+   beq 1f
+   /* Check whether this process has its own DBCR0 value.  The internal
+  debug mode bit tells us that dbcr0 should be loaded. */
+   lwz r0,THREAD+THREAD_DBCR0(r2)
+   andis.  r10,r0,DBCR0_IDM@h
+   bnel-   load_dbcr0
+1:
+#endif
+
 #ifdef CONFIG_44x
 BEGIN_MMU_FTR_SECTION
b   1f
@@ -1159,7 +1189,7 @@ global_dbcr0:
.previous
 #endif /* !(CONFIG_4xx || CONFIG_BOOKE) */

-do_work:   /* r10 contains MSR_KERNEL here */
+do_user_work:  /* r10 contains MSR_KERNEL here */
andi.   r0,r9,_TIF_NEED_RESCHED
beq do_user_signal

@@ -1184,7 +1214,7 @@ recheck:
andi.   r0,r9,_TIF_NEED_RESCHED
bne-do_resched
andi.   r0,r9,_TIF_USER_WORK_MASK
-   beq restore_user
+   beq restore
 do_user_signal:/* r10 contains MSR_KERNEL here 

Re: Linux port availability for p5010 processor

2011-12-15 Thread Vineeth
I found p5020_ds.c in platforms/85xx.
Why is it part of the 85xx directory? The core of the P5020 is e5500,
whereas the core of 85xx parts is e500.

Do we have the processor initialization code (start.S, head.S) ported
and available in Linux?

On Wed, Dec 14, 2011 at 12:58 AM, Scott Wood scottw...@freescale.com wrote:

 On 12/12/2011 11:33 PM, Vineeth wrote:
  Do we have a linux port available for freescale P5010 processor (with
  single E5500 core) ?
  /(found arch/powerpc/platforms/pseries ; and a some details on
  kernel/cputable.c /)

 p5010 is basically a p5020 with one core and memory complex instead of
 two.  The same Linux code should work for both.

 pseries is something completely different.

  Is there any reference board which uses this processor ?

 p5020ds has a p5020.

 Linux support is in arch/powerpc/platforms/85xx/p5020_ds.c (and
 scattered in other places).

  any reference in DTS file also will be helpful.

 arch/powerpc/boot/dts/p5020ds.dts

 -Scott



[PATCH 0/5] Make use of hardware reference and change bits in HPT

2011-12-15 Thread Paul Mackerras
This series of patches builds on top of my previous series and
modifies the Book3S HV memory management code to use the hardware
reference and change bits in the guest hashed page table.  This makes
kvm_age_hva() more efficient, lets us implement the dirty page
tracking properly (which in turn means that things like VGA emulation
in qemu can work), and also means that we can supply hardware
reference and change information to the guest -- not that Linux guests
currently use that information, but possibly they will want it in
future, and there is an interface defined in PAPR for it.

Paul.


[PATCH 4/5] KVM: PPC: Book3s HV: Implement get_dirty_log using hardware changed bit

2011-12-15 Thread Paul Mackerras
This changes the implementation of kvm_vm_ioctl_get_dirty_log() for
Book3s HV guests to use the hardware C (changed) bits in the guest
hashed page table.  Since this makes the implementation quite different
from the Book3s PR case, this moves the existing implementation from
book3s.c to book3s_pr.c and creates a new implementation in book3s_hv.c.
That implementation calls kvmppc_hv_get_dirty_log() to do the actual
work by calling kvm_test_clear_dirty on each page.  It iterates over
the HPTEs, clearing the C bit if set, and returns 1 if any C bit was
set (including the saved C bit in the rmap entry).
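The per-page test-and-clear described above can be sketched as a simplified model (bit values and names here are illustrative, not the kernel code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define HPTE_R_C     0x080ull   /* hardware changed bit, value illustrative */
#define RMAP_SAVED_C 0x1ull     /* stands in for KVMPPC_RMAP_CHANGED */

/* Scan the HPTEs mapping a page, clear any set C bit, and report
 * whether any C bit (including the saved one in the rmap word) was set. */
static bool test_clear_dirty(uint64_t *rmap, uint64_t *hptes, int n)
{
    bool dirty = false;

    if (*rmap & RMAP_SAVED_C) {     /* C saved when an HPTE was removed */
        *rmap &= ~RMAP_SAVED_C;
        dirty = true;
    }
    for (int i = 0; i < n; i++) {
        if (hptes[i] & HPTE_R_C) {
            hptes[i] &= ~HPTE_R_C;  /* clear the hardware changed bit */
            dirty = true;
        }
    }
    return dirty;
}
```

The real kvm_test_clear_dirty() additionally has to lock the rmap chain and each HPTE, and temporarily mark the HPTE absent while clearing C, as the diff below shows.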

Signed-off-by: Paul Mackerras pau...@samba.org
---
 arch/powerpc/include/asm/kvm_book3s.h |2 +
 arch/powerpc/kvm/book3s.c |   39 --
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |   69 +
 arch/powerpc/kvm/book3s_hv.c  |   37 +
 arch/powerpc/kvm/book3s_pr.c  |   39 ++
 5 files changed, 147 insertions(+), 39 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 6ececb4..aa795cc 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -158,6 +158,8 @@ extern long kvmppc_virtmode_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
long pte_index, unsigned long pteh, unsigned long ptel);
 extern long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
long pte_index, unsigned long pteh, unsigned long ptel);
+extern long kvmppc_hv_get_dirty_log(struct kvm *kvm,
+   struct kvm_memory_slot *memslot);
 
 extern void kvmppc_entry_trampoline(void);
 extern void kvmppc_hv_entry_trampoline(void);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 6bf7e05..7d54f4e 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -477,45 +477,6 @@ int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
return 0;
 }
 
-/*
- * Get (and clear) the dirty memory log for a memory slot.
- */
-int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
-				struct kvm_dirty_log *log)
-{
-	struct kvm_memory_slot *memslot;
-	struct kvm_vcpu *vcpu;
-	ulong ga, ga_end;
-	int is_dirty = 0;
-	int r;
-	unsigned long n;
-
-	mutex_lock(&kvm->slots_lock);
-
-	r = kvm_get_dirty_log(kvm, log, &is_dirty);
-	if (r)
-		goto out;
-
-	/* If nothing is dirty, don't bother messing with page tables. */
-	if (is_dirty) {
-		memslot = id_to_memslot(kvm->memslots, log->slot);
-
-		ga = memslot->base_gfn << PAGE_SHIFT;
-		ga_end = ga + (memslot->npages << PAGE_SHIFT);
-
-		kvm_for_each_vcpu(n, vcpu, kvm)
-			kvmppc_mmu_pte_pflush(vcpu, ga, ga_end);
-
-		n = kvm_dirty_bitmap_bytes(memslot);
-		memset(memslot->dirty_bitmap, 0, n);
-	}
-
-	r = 0;
-out:
-	mutex_unlock(&kvm->slots_lock);
-	return r;
-}
-
 void kvmppc_decrementer_func(unsigned long data)
 {
struct kvm_vcpu *vcpu = (struct kvm_vcpu *)data;
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 926e2b9..783cd35 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -870,6 +870,75 @@ void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
kvm_handle_hva(kvm, hva, kvm_unmap_rmapp);
 }
 
+static int kvm_test_clear_dirty(struct kvm *kvm, unsigned long *rmapp)
+{
+	struct revmap_entry *rev = kvm->arch.revmap;
+	unsigned long head, i, j;
+	unsigned long *hptep;
+	int ret = 0;
+
+ retry:
+	lock_rmap(rmapp);
+	if (*rmapp & KVMPPC_RMAP_CHANGED) {
+		*rmapp &= ~KVMPPC_RMAP_CHANGED;
+		ret = 1;
+	}
+	if (!(*rmapp & KVMPPC_RMAP_PRESENT)) {
+		unlock_rmap(rmapp);
+		return ret;
+	}
+
+	i = head = *rmapp & KVMPPC_RMAP_INDEX;
+	do {
+		hptep = (unsigned long *) (kvm->arch.hpt_virt + (i << 4));
+		j = rev[i].forw;
+
+		if (!(hptep[1] & HPTE_R_C))
+			continue;
+
+		if (!try_lock_hpte(hptep, HPTE_V_HVLOCK)) {
+			/* unlock rmap before spinning on the HPTE lock */
+			unlock_rmap(rmapp);
+			while (hptep[0] & HPTE_V_HVLOCK)
+				cpu_relax();
+			goto retry;
+		}
+
+		/* Now check and modify the HPTE */
+		if ((hptep[0] & HPTE_V_VALID) && (hptep[1] & HPTE_R_C)) {
+			/* need to make it temporarily absent to clear C */
+			hptep[0] |= HPTE_V_ABSENT;
+			kvmppc_invalidate_hpte(kvm, hptep, i);
+   

[PATCH 2/5] KVM: PPC: Book3s HV: Maintain separate guest and host views of R and C bits

2011-12-15 Thread Paul Mackerras
This allows both the guest and the host to use the referenced (R) and
changed (C) bits in the guest hashed page table.  The guest has a view
of R and C that is maintained in the guest_rpte field of the revmap
entry for the HPTE, and the host has a view that is maintained in the
rmap entry for the associated gfn.

Both views are updated from the guest HPT.  If a bit (R or C) is zero
in either view, it will be initially set to zero in the HPTE (or HPTEs),
until set to 1 by hardware.  When an HPTE is removed for any reason,
the R and C bits from the HPTE are ORed into both views.  We have to
be careful to read the R and C bits from the HPTE after invalidating
it, but before unlocking it, in case of any late updates by the hardware.
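The update-both-views rule can be sketched as follows (the HPTE_R_R/HPTE_R_C values here are illustrative; KVMPPC_RMAP_RC_SHIFT matches the patch):

```c
#include <assert.h>
#include <stdint.h>

#define HPTE_R_R  0x100ull               /* referenced, illustrative value */
#define HPTE_R_C  0x080ull               /* changed, illustrative value */
#define KVMPPC_RMAP_RC_SHIFT 32

/* On HPTE removal, OR the hardware R/C bits into both the guest view
 * (guest_rpte) and the host view kept shifted up in the rmap word. */
static void save_rc_on_remove(uint64_t hpte_r, uint64_t *guest_rpte,
                              uint64_t *rmap)
{
    uint64_t rc = hpte_r & (HPTE_R_R | HPTE_R_C);

    *guest_rpte |= rc;                   /* guest view */
    *rmap |= rc << KVMPPC_RMAP_RC_SHIFT; /* host view */
}

/* When installing a new HPTE, a bit starts out set only if it is set
 * in BOTH views; otherwise it starts at zero until hardware sets it. */
static uint64_t initial_rc(uint64_t guest_rpte, uint64_t rmap)
{
    uint64_t host_rc = (rmap >> KVMPPC_RMAP_RC_SHIFT) & (HPTE_R_R | HPTE_R_C);

    return guest_rpte & host_rc;
}
```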

Signed-off-by: Paul Mackerras pau...@samba.org
---
 arch/powerpc/include/asm/kvm_host.h |5 ++-
 arch/powerpc/kvm/book3s_64_mmu_hv.c |   48 +-
 arch/powerpc/kvm/book3s_hv_rm_mmu.c |   45 +++--
 3 files changed, 59 insertions(+), 39 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 968f3aa..1cb6e52 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -200,8 +200,9 @@ struct revmap_entry {
  * index in the guest HPT of a HPTE that points to the page.
  */
 #define KVMPPC_RMAP_LOCK_BIT   63
-#define KVMPPC_RMAP_REF_BIT	33
-#define KVMPPC_RMAP_REFERENCED	(1ul << KVMPPC_RMAP_REF_BIT)
+#define KVMPPC_RMAP_RC_SHIFT	32
+#define KVMPPC_RMAP_REFERENCED	(HPTE_R_R << KVMPPC_RMAP_RC_SHIFT)
+#define KVMPPC_RMAP_CHANGED	(HPTE_R_C << KVMPPC_RMAP_RC_SHIFT)
 #define KVMPPC_RMAP_PRESENT	0x1ul
 #define KVMPPC_RMAP_INDEX	0xfffffffful
 
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 66d6452..aa51dde 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -505,6 +505,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned long is_io;
unsigned int writing, write_ok;
struct vm_area_struct *vma;
+   unsigned long rcbits;
 
/*
 * Real-mode code has already searched the HPT and found the
@@ -640,11 +641,17 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
goto out_unlock;
}
 
+	/* Only set R/C in real HPTE if set in both *rmap and guest_rpte */
+	rcbits = *rmap >> KVMPPC_RMAP_RC_SHIFT;
+	r = rcbits | ~(HPTE_R_R | HPTE_R_C);
+
 	if (hptep[0] & HPTE_V_VALID) {
 		/* HPTE was previously valid, so we need to invalidate it */
 		unlock_rmap(rmap);
 		hptep[0] |= HPTE_V_ABSENT;
 		kvmppc_invalidate_hpte(kvm, hptep, index);
+		/* don't lose previous R and C bits */
+		r |= hptep[1] & (HPTE_R_R | HPTE_R_C);
} else {
kvmppc_add_revmap_chain(kvm, rev, rmap, index, 0);
}
@@ -701,50 +708,55 @@ static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp,
 	struct revmap_entry *rev = kvm->arch.revmap;
 	unsigned long h, i, j;
 	unsigned long *hptep;
-	unsigned long ptel, psize;
+	unsigned long ptel, psize, rcbits;
 
 	for (;;) {
-		while (test_and_set_bit_lock(KVMPPC_RMAP_LOCK_BIT, rmapp))
-			cpu_relax();
+		lock_rmap(rmapp);
 		if (!(*rmapp & KVMPPC_RMAP_PRESENT)) {
-			__clear_bit_unlock(KVMPPC_RMAP_LOCK_BIT, rmapp);
+			unlock_rmap(rmapp);
 			break;
 		}
 
 		/*
 		 * To avoid an ABBA deadlock with the HPTE lock bit,
-		 * we have to unlock the rmap chain before locking the HPTE.
-		 * Thus we remove the first entry, unlock the rmap chain,
-		 * lock the HPTE and then check that it is for the
-		 * page we're unmapping before changing it to non-present.
+		 * we can't spin on the HPTE lock while holding the
+		 * rmap chain lock.
 		 */
 		i = *rmapp & KVMPPC_RMAP_INDEX;
+		hptep = (unsigned long *) (kvm->arch.hpt_virt + (i << 4));
+		if (!try_lock_hpte(hptep, HPTE_V_HVLOCK)) {
+			/* unlock rmap before spinning on the HPTE lock */
+			unlock_rmap(rmapp);
+			while (hptep[0] & HPTE_V_HVLOCK)
+				cpu_relax();
+			continue;
+		}
 		j = rev[i].forw;
 		if (j == i) {
 			/* chain is now empty */
-			j = 0;
+			*rmapp &= ~(KVMPPC_RMAP_PRESENT | KVMPPC_RMAP_INDEX);
 		} else {
 			/* remove i from chain */
 			h = rev[i].back;

[PATCH 1/5] KVM: PPC: Book3S HV: Keep HPTE locked when invalidating

2011-12-15 Thread Paul Mackerras
This reworks the implementations of the H_REMOVE and H_BULK_REMOVE
hcalls to make sure that we keep the HPTE locked and in the reverse-
mapping chain until we have finished invalidating it.  Previously
we would remove it from the chain and unlock it before invalidating
it, leaving a tiny window when the guest could access the page even
though we believe we have removed it from the guest (e.g.,
kvm_unmap_hva() has been called for the page and has found no HPTEs
in the chain).  In addition, we'll need this for future patches where
we will need to read the R and C bits in the HPTE after invalidating
it.

Doing this required restructuring kvmppc_h_bulk_remove() substantially.
Since we want to batch up the tlbies, we now need to keep several
HPTEs locked simultaneously.  In order to avoid possible deadlocks,
we don't spin on the HPTE bitlock for any except the first HPTE in
a batch.  If we can't acquire the HPTE bitlock for the second or
subsequent HPTE, we terminate the batch at that point, do the tlbies
that we have accumulated so far, unlock those HPTEs, and then start
a new batch to do the remaining invalidations.
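The batch-termination rule described above can be sketched with a toy lock model (all names and sizes here are illustrative, not the kernel code):

```c
#include <assert.h>
#include <stdbool.h>

#define BATCH_MAX 4

static bool hpte_locked[8];      /* simulated HPTE lock bits */

static bool try_lock_hpte_model(int i)
{
    if (hpte_locked[i])
        return false;
    hpte_locked[i] = true;
    return true;
}

static void lock_hpte_model(int i)   /* may spin in the real code */
{
    hpte_locked[i] = true;           /* pretend the wait succeeded */
}

/* Lock up to BATCH_MAX entries from req[]; spin only for the first one.
 * Returns how many were locked: the batch that will share one tlbie
 * sequence before being unlocked, after which a new batch starts. */
static int collect_batch(const int *req, int nreq, int *batch)
{
    int n = 0;

    for (int i = 0; i < nreq && n < BATCH_MAX; i++) {
        if (n == 0) {
            lock_hpte_model(req[i]);            /* first: allowed to spin */
        } else if (!try_lock_hpte_model(req[i])) {
            break;                              /* never spin on later ones */
        }
        batch[n++] = req[i];
    }
    return n;
}
```

Refusing to spin on the second and subsequent locks is what avoids the deadlock the commit message mentions.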

Signed-off-by: Paul Mackerras pau...@samba.org
---
 arch/powerpc/kvm/book3s_hv_rm_mmu.c |  212 --
 1 files changed, 125 insertions(+), 87 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 7c0fc99..823348d 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -140,6 +140,12 @@ static pte_t lookup_linux_pte(struct kvm_vcpu *vcpu, unsigned long hva,
return kvmppc_read_update_linux_pte(ptep, writing);
 }
 
+static inline void unlock_hpte(unsigned long *hpte, unsigned long hpte_v)
+{
+	asm volatile(PPC_RELEASE_BARRIER "" : : : "memory");
+	hpte[0] = hpte_v;
+}
+
 long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
long pte_index, unsigned long pteh, unsigned long ptel)
 {
@@ -356,6 +362,7 @@ long kvmppc_h_remove(struct kvm_vcpu *vcpu, unsigned long flags,
 	struct kvm *kvm = vcpu->kvm;
 	unsigned long *hpte;
 	unsigned long v, r, rb;
+	struct revmap_entry *rev;
 
 	if (pte_index >= HPT_NPTE)
 		return H_PARAMETER;
@@ -368,30 +375,32 @@ long kvmppc_h_remove(struct kvm_vcpu *vcpu, unsigned long flags,
 		hpte[0] &= ~HPTE_V_HVLOCK;
 		return H_NOT_FOUND;
 	}
-	if (atomic_read(&kvm->online_vcpus) == 1)
-		flags |= H_LOCAL;
-	vcpu->arch.gpr[4] = v = hpte[0] & ~HPTE_V_HVLOCK;
-	vcpu->arch.gpr[5] = r = hpte[1];
-	rb = compute_tlbie_rb(v, r, pte_index);
-	if (v & HPTE_V_VALID)
+
+	rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
+	v = hpte[0] & ~HPTE_V_HVLOCK;
+	if (v & HPTE_V_VALID) {
+		hpte[0] &= ~HPTE_V_VALID;
+		rb = compute_tlbie_rb(v, hpte[1], pte_index);
+		if (!(flags & H_LOCAL) && atomic_read(&kvm->online_vcpus) > 1) {
+			while (!try_lock_tlbie(&kvm->arch.tlbie_lock))
+				cpu_relax();
+			asm volatile("ptesync" : : : "memory");
+			asm volatile(PPC_TLBIE(%1,%0)"; eieio; tlbsync"
+				     : : "r" (rb), "r" (kvm->arch.lpid));
+			asm volatile("ptesync" : : : "memory");
+			kvm->arch.tlbie_lock = 0;
+		} else {
+			asm volatile("ptesync" : : : "memory");
+			asm volatile("tlbiel %0" : : "r" (rb));
+			asm volatile("ptesync" : : : "memory");
+		}
 		remove_revmap_chain(kvm, pte_index, v);
-	smp_wmb();
-	hpte[0] = 0;
-	if (!(v & HPTE_V_VALID))
-		return H_SUCCESS;
-	if (!(flags & H_LOCAL)) {
-		while (!try_lock_tlbie(&kvm->arch.tlbie_lock))
-			cpu_relax();
-		asm volatile("ptesync" : : : "memory");
-		asm volatile(PPC_TLBIE(%1,%0)"; eieio; tlbsync"
-			     : : "r" (rb), "r" (kvm->arch.lpid));
-		asm volatile("ptesync" : : : "memory");
-		kvm->arch.tlbie_lock = 0;
-	} else {
-		asm volatile("ptesync" : : : "memory");
-		asm volatile("tlbiel %0" : : "r" (rb));
-		asm volatile("ptesync" : : : "memory");
 	}
+	r = rev->guest_rpte;
+	unlock_hpte(hpte, 0);
+
+	vcpu->arch.gpr[4] = v;
+	vcpu->arch.gpr[5] = r;
return H_SUCCESS;
 }
 
@@ -399,82 +408,113 @@ long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu)
 {
struct kvm *kvm = vcpu-kvm;
 	unsigned long *args = &vcpu->arch.gpr[4];
-   unsigned long *hp, tlbrb[4];
-   long int i, found;
-   long int n_inval = 0;
-   unsigned long flags, req, pte_index;
+   unsigned long *hp, *hptes[4], tlbrb[4];
+   long int i, j, k, n, found, indexes[4];
+   unsigned long flags, req, pte_index, 

[PATCH 5/5] KVM: PPC: Book3s HV: Implement H_CLEAR_REF and H_CLEAR_MOD hcalls

2011-12-15 Thread Paul Mackerras
This adds implementations for the H_CLEAR_REF (test and clear reference
bit) and H_CLEAR_MOD (test and clear changed bit) hypercalls.  These
hypercalls are not used by Linux guests at this stage, and these
implementations are only compile tested.

Signed-off-by: Paul Mackerras pau...@samba.org
---
 arch/powerpc/kvm/book3s_hv_rm_mmu.c |   69 +++
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |4 +-
 2 files changed, 71 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 76864a8..718b5a7 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -624,6 +624,75 @@ long kvmppc_h_read(struct kvm_vcpu *vcpu, unsigned long flags,
return H_SUCCESS;
 }
 
+long kvmppc_h_clear_ref(struct kvm_vcpu *vcpu, unsigned long flags,
+			unsigned long pte_index)
+{
+	struct kvm *kvm = vcpu->kvm;
+	unsigned long *hpte, v, r, gr;
+	struct revmap_entry *rev;
+
+	if (pte_index >= HPT_NPTE)
+		return H_PARAMETER;
+
+	rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
+	hpte = (unsigned long *)(kvm->arch.hpt_virt + (pte_index << 4));
+	while (!try_lock_hpte(hpte, HPTE_V_HVLOCK))
+		cpu_relax();
+	v = hpte[0];
+	r = hpte[1];
+	gr = rev->guest_rpte;
+	rev->guest_rpte &= ~HPTE_R_R;
+	if (v & HPTE_V_VALID) {
+		gr |= r & (HPTE_R_R | HPTE_R_C);
+		if (r & HPTE_R_R)
+			kvmppc_clear_ref_hpte(kvm, hpte, pte_index);
+	}
+	unlock_hpte(hpte, v & ~HPTE_V_HVLOCK);
+
+	if (!(v & (HPTE_V_VALID | HPTE_V_ABSENT)))
+		return H_NOT_FOUND;
+
+	vcpu->arch.gpr[4] = gr;
+	return H_SUCCESS;
+}
+
+long kvmppc_h_clear_mod(struct kvm_vcpu *vcpu, unsigned long flags,
+			unsigned long pte_index)
+{
+	struct kvm *kvm = vcpu->kvm;
+	unsigned long *hpte, v, r, gr;
+	struct revmap_entry *rev;
+
+	if (pte_index >= HPT_NPTE)
+		return H_PARAMETER;
+
+	rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
+	hpte = (unsigned long *)(kvm->arch.hpt_virt + (pte_index << 4));
+	while (!try_lock_hpte(hpte, HPTE_V_HVLOCK))
+		cpu_relax();
+	v = hpte[0];
+	r = hpte[1];
+	gr = rev->guest_rpte;
+	rev->guest_rpte &= ~HPTE_R_C;
+	if (v & HPTE_V_VALID) {
+		gr |= r & (HPTE_R_R | HPTE_R_C);
+		if (r & HPTE_R_C) {
+			hpte[0] |= HPTE_V_ABSENT;
+			kvmppc_invalidate_hpte(kvm, hpte, pte_index);
+			hpte[1] &= ~HPTE_R_C;
+			eieio();
+			hpte[0] = v;
+		}
+	}
+	unlock_hpte(hpte, v & ~HPTE_V_HVLOCK);
+
+	if (!(v & (HPTE_V_VALID | HPTE_V_ABSENT)))
+		return H_NOT_FOUND;
+
+	vcpu->arch.gpr[4] = gr;
+	return H_SUCCESS;
+}
+
 void kvmppc_invalidate_hpte(struct kvm *kvm, unsigned long *hptep,
unsigned long pte_index)
 {
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index b70bf22..4c52d6d 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1276,8 +1276,8 @@ hcall_real_table:
.long   .kvmppc_h_remove - hcall_real_table
.long   .kvmppc_h_enter - hcall_real_table
.long   .kvmppc_h_read - hcall_real_table
-   .long   0   /* 0x10 - H_CLEAR_MOD */
-   .long   0   /* 0x14 - H_CLEAR_REF */
+   .long   .kvmppc_h_clear_mod - hcall_real_table
+   .long   .kvmppc_h_clear_ref - hcall_real_table
.long   .kvmppc_h_protect - hcall_real_table
.long   0   /* 0x1c - H_GET_TCE */
.long   .kvmppc_h_put_tce - hcall_real_table
-- 
1.7.7.3



[PATCH 3/5] KVM: PPC: Book3S HV: Use the hardware referenced bit for kvm_age_hva

2011-12-15 Thread Paul Mackerras
This uses the host view of the hardware R (referenced) bit to speed
up kvm_age_hva() and kvm_test_age_hva().  Instead of removing all
the relevant HPTEs in kvm_age_hva(), we now just reset their R bits
if set.  Also, kvm_test_age_hva() now scans the relevant HPTEs to
see if any of them have R set.

Signed-off-by: Paul Mackerras pau...@samba.org
---
 arch/powerpc/include/asm/kvm_book3s.h |2 +
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |   81 -
 arch/powerpc/kvm/book3s_hv_rm_mmu.c   |   19 
 3 files changed, 91 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index ea9539c..6ececb4 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -149,6 +149,8 @@ extern void kvmppc_add_revmap_chain(struct kvm *kvm, struct revmap_entry *rev,
unsigned long *rmap, long pte_index, int realmode);
 extern void kvmppc_invalidate_hpte(struct kvm *kvm, unsigned long *hptep,
unsigned long pte_index);
+void kvmppc_clear_ref_hpte(struct kvm *kvm, unsigned long *hptep,
+   unsigned long pte_index);
 extern void *kvmppc_pin_guest_page(struct kvm *kvm, unsigned long addr,
unsigned long *nb_ret);
 extern void kvmppc_unpin_guest_page(struct kvm *kvm, void *addr);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index aa51dde..926e2b9 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -772,16 +772,50 @@ int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
 static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
 unsigned long gfn)
 {
-	if (!kvm->arch.using_mmu_notifiers)
-		return 0;
-	if (!(*rmapp & KVMPPC_RMAP_REFERENCED))
-		return 0;
-	kvm_unmap_rmapp(kvm, rmapp, gfn);
-	while (test_and_set_bit_lock(KVMPPC_RMAP_LOCK_BIT, rmapp))
-		cpu_relax();
-	*rmapp &= ~KVMPPC_RMAP_REFERENCED;
-	__clear_bit_unlock(KVMPPC_RMAP_LOCK_BIT, rmapp);
-	return 1;
+	struct revmap_entry *rev = kvm->arch.revmap;
+	unsigned long head, i, j;
+	unsigned long *hptep;
+	int ret = 0;
+
+ retry:
+	lock_rmap(rmapp);
+	if (*rmapp & KVMPPC_RMAP_REFERENCED) {
+		*rmapp &= ~KVMPPC_RMAP_REFERENCED;
+		ret = 1;
+	}
+	if (!(*rmapp & KVMPPC_RMAP_PRESENT)) {
+		unlock_rmap(rmapp);
+		return ret;
+	}
+
+	i = head = *rmapp & KVMPPC_RMAP_INDEX;
+	do {
+		hptep = (unsigned long *) (kvm->arch.hpt_virt + (i << 4));
+		j = rev[i].forw;
+
+		/* If this HPTE isn't referenced, ignore it */
+		if (!(hptep[1] & HPTE_R_R))
+			continue;
+
+		if (!try_lock_hpte(hptep, HPTE_V_HVLOCK)) {
+			/* unlock rmap before spinning on the HPTE lock */
+			unlock_rmap(rmapp);
+			while (hptep[0] & HPTE_V_HVLOCK)
+				cpu_relax();
+			goto retry;
+		}
+
+		/* Now check and modify the HPTE */
+		if ((hptep[0] & HPTE_V_VALID) && (hptep[1] & HPTE_R_R)) {
+			kvmppc_clear_ref_hpte(kvm, hptep, i);
+			rev[i].guest_rpte |= HPTE_R_R;
+			ret = 1;
+		}
+		hptep[0] &= ~HPTE_V_HVLOCK;
+	} while ((i = j) != head);
+
+	unlock_rmap(rmapp);
+	return ret;
 }
 
 int kvm_age_hva(struct kvm *kvm, unsigned long hva)
@@ -794,7 +828,32 @@ int kvm_age_hva(struct kvm *kvm, unsigned long hva)
 static int kvm_test_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
  unsigned long gfn)
 {
-	return !!(*rmapp & KVMPPC_RMAP_REFERENCED);
+	struct revmap_entry *rev = kvm->arch.revmap;
+	unsigned long head, i, j;
+	unsigned long *hp;
+	int ret = 1;
+
+	if (*rmapp & KVMPPC_RMAP_REFERENCED)
+		return 1;
+
+	lock_rmap(rmapp);
+	if (*rmapp & KVMPPC_RMAP_REFERENCED)
+		goto out;
+
+	if (*rmapp & KVMPPC_RMAP_PRESENT) {
+		i = head = *rmapp & KVMPPC_RMAP_INDEX;
+		do {
+			hp = (unsigned long *)(kvm->arch.hpt_virt + (i << 4));
+			j = rev[i].forw;
+			if (hp[1] & HPTE_R_R)
+				goto out;
+		} while ((i = j) != head);
+	}
+	ret = 0;
+
+ out:
+	unlock_rmap(rmapp);
+	return ret;
 }
 
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c 
b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index bcf6f92..76864a8 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ 

Re: Linux port availability for p5010 processor

2011-12-15 Thread Tabi Timur-B04825
On Thu, Dec 15, 2011 at 5:45 AM, Vineeth vnee...@gmail.com wrote:

 why is it a part of 85xx directory ? the core of P5020 is E5500 where the
 core of 85xx is e500;

e5500 is very similar to e500, so it's all part of the same family of cores.

-- 
Timur Tabi
Linux kernel developer at Freescale
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: linux-next bad Kconfig for drivers/hid

2011-12-15 Thread Randy Dunlap
On 12/15/2011 02:08 AM, Jiri Kosina wrote:
 On Mon, 12 Dec 2011, Tony Breeds wrote:
 
 On Mon, Dec 12, 2011 at 12:21:16AM +0100, Jiri Kosina wrote:
 On Thu, 8 Dec 2011, Jeremy Fitzhardinge wrote:

 Hm.  How about making it depends on HID && POWER_SUPPLY?  I think that
 would needlessly disable it if HID is also modular, but I'm not sure how
 to fix that.  depends on HID && POWER_SUPPLY && HID == POWER_SUPPLY?

 That would work, but technically I think you could end up with
 HID=m and POWER_SUPPLY=m, which would still allow HID_BATTERY_STRENGTH=y,
 which is the same problem.

 I don't know what kind of .config contortions you'd need to do to get
 there.
  
 How about making it 'default POWER_SUPPLY' instead?

 By itself that won't help, as POWER_SUPPLY=m satisfies it.

 So it looks like we have Jeremy's:
  HID && POWER_SUPPLY && HID == POWER_SUPPLY
 
 Tony,
 
 have you actually tested this one to work in the configuration you have 
 been seeing it to fail?
 
 I don't seem to be able to find any use of '==' in other Kconfig files 
 (and never used it myself), so I'd like to have confirmation that it 
 actually works and fixes the problem before I apply it :)

Documentation/kbuild/kconfig-language.txt does not list ==:

expr ::= symbol                                (1)
         symbol '=' symbol                     (2)
         symbol '!=' symbol                    (3)
         '(' expr ')'                          (4)
         '!' expr                              (5)
         expr '&&' expr                        (6)
         expr '||' expr                        (7)
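For illustration, the dependency could be written with the supported `=` (tristate equality) operator instead of `==` — a hypothetical fragment, not the actual drivers/hid Kconfig:

```kconfig
config HID_BATTERY_STRENGTH
	bool "Battery level reporting for HID devices"
	depends on HID && POWER_SUPPLY && HID = POWER_SUPPLY
```

Since `symbol '=' symbol` is a primary expression in the grammar above, the last clause binds tighter than `&&` and compares the tristate values of HID and POWER_SUPPLY directly, ruling out the problematic HID=y with POWER_SUPPLY=m combination.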


-- 
~Randy
*** Remember to use Documentation/SubmitChecklist when testing your code ***


[PATCH] phylib: update mdiobus_alloc() to allocate extra private space

2011-12-15 Thread Timur Tabi
Augment mdiobus_alloc() to take a parameter indicating the number of extra
bytes to allocate for private data.  Almost all callers of mdiobus_alloc()
separately allocate a private data structure.  By allowing mdiobus_alloc()
to allocate extra memory, the two allocations can be merged into one.

This patch does not change any of the callers to actually take advantage
of this feature, however.  That change can be made by the individual
maintainers at their leisure.  For now, all callers ask for zero additional
bytes, which mimics the previous behavior.

Signed-off-by: Timur Tabi ti...@freescale.com
---
 arch/powerpc/platforms/pasemi/gpio_mdio.c |2 +-
 drivers/net/ethernet/adi/bfin_mac.c   |2 +-
 drivers/net/ethernet/aeroflex/greth.c |2 +-
 drivers/net/ethernet/amd/au1000_eth.c |2 +-
 drivers/net/ethernet/broadcom/bcm63xx_enet.c  |2 +-
 drivers/net/ethernet/broadcom/sb1250-mac.c|2 +-
 drivers/net/ethernet/broadcom/tg3.c   |2 +-
 drivers/net/ethernet/cadence/macb.c   |2 +-
 drivers/net/ethernet/dnet.c   |2 +-
 drivers/net/ethernet/ethoc.c  |2 +-
 drivers/net/ethernet/faraday/ftgmac100.c  |2 +-
 drivers/net/ethernet/freescale/fec.c  |2 +-
 drivers/net/ethernet/freescale/fec_mpc52xx_phy.c  |2 +-
 drivers/net/ethernet/freescale/fs_enet/mii-fec.c  |2 +-
 drivers/net/ethernet/freescale/fsl_pq_mdio.c  |2 +-
 drivers/net/ethernet/lantiq_etop.c|2 +-
 drivers/net/ethernet/marvell/mv643xx_eth.c|2 +-
 drivers/net/ethernet/marvell/pxa168_eth.c |2 +-
 drivers/net/ethernet/rdc/r6040.c  |2 +-
 drivers/net/ethernet/s6gmac.c |2 +-
 drivers/net/ethernet/smsc/smsc911x.c  |2 +-
 drivers/net/ethernet/smsc/smsc9420.c  |2 +-
 drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c |2 +-
 drivers/net/ethernet/ti/cpmac.c   |2 +-
 drivers/net/ethernet/ti/davinci_mdio.c|2 +-
 drivers/net/ethernet/toshiba/tc35815.c|2 +-
 drivers/net/ethernet/xilinx/ll_temac_mdio.c   |2 +-
 drivers/net/ethernet/xilinx/xilinx_emaclite.c |2 +-
 drivers/net/ethernet/xscale/ixp4xx_eth.c  |2 +-
 drivers/net/phy/fixed.c   |2 +-
 drivers/net/phy/mdio-bitbang.c|2 +-
 drivers/net/phy/mdio-octeon.c |2 +-
 drivers/net/phy/mdio_bus.c|   20 +---
 drivers/staging/et131x/et131x.c   |2 +-
 include/linux/phy.h   |2 +-
 net/dsa/dsa.c |2 +-
 36 files changed, 52 insertions(+), 38 deletions(-)

diff --git a/arch/powerpc/platforms/pasemi/gpio_mdio.c 
b/arch/powerpc/platforms/pasemi/gpio_mdio.c
index 9886296..754a57b 100644
--- a/arch/powerpc/platforms/pasemi/gpio_mdio.c
+++ b/arch/powerpc/platforms/pasemi/gpio_mdio.c
@@ -230,7 +230,7 @@ static int __devinit gpio_mdio_probe(struct platform_device 
*ofdev)
if (!priv)
goto out;
 
-   new_bus = mdiobus_alloc();
+   new_bus = mdiobus_alloc(0);
 
if (!new_bus)
goto out_free_priv;
diff --git a/drivers/net/ethernet/adi/bfin_mac.c 
b/drivers/net/ethernet/adi/bfin_mac.c
index b6d69c9..ea71758 100644
--- a/drivers/net/ethernet/adi/bfin_mac.c
+++ b/drivers/net/ethernet/adi/bfin_mac.c
@@ -1659,7 +1659,7 @@ static int __devinit bfin_mii_bus_probe(struct 
platform_device *pdev)
}
 
rc = -ENOMEM;
-   miibus = mdiobus_alloc();
+   miibus = mdiobus_alloc(0);
if (miibus == NULL)
goto out_err_alloc;
miibus->read = bfin_mdiobus_read;
diff --git a/drivers/net/ethernet/aeroflex/greth.c 
b/drivers/net/ethernet/aeroflex/greth.c
index c885aa9..c6bc550 100644
--- a/drivers/net/ethernet/aeroflex/greth.c
+++ b/drivers/net/ethernet/aeroflex/greth.c
@@ -1326,7 +1326,7 @@ static int greth_mdio_init(struct greth_private *greth)
int ret, phy;
unsigned long timeout;
 
-   greth->mdio = mdiobus_alloc();
+   greth->mdio = mdiobus_alloc(0);
if (!greth->mdio) {
return -ENOMEM;
}
diff --git a/drivers/net/ethernet/amd/au1000_eth.c 
b/drivers/net/ethernet/amd/au1000_eth.c
index cc9262b..5c30544 100644
--- a/drivers/net/ethernet/amd/au1000_eth.c
+++ b/drivers/net/ethernet/amd/au1000_eth.c
@@ -1159,7 +1159,7 @@ static int __devinit au1000_probe(struct platform_device 
*pdev)
goto err_mdiobus_alloc;
}
 
-   aup->mii_bus = mdiobus_alloc();
+   aup->mii_bus = mdiobus_alloc(0);
if (aup->mii_bus == NULL) {
dev_err(&pdev->dev, "failed to allocate mdiobus structure\n");
err = -ENOMEM;
diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c 

Re: [PATCH 1/3]arch:powerpc:sysdev:mpic.c Remove extra semicolon.

2011-12-15 Thread Justin P. Mattock
What's the status of these? Should I resend/rebase to current, etc.?


On 11/21/2011 08:43 AM, Justin P. Mattock wrote:

From: Justin P. Mattockjustinmatt...@gmail.com

The patch below removes an extra semicolon.

Signed-off-by: Justin P. Mattockjustinmatt...@gmail.com
CC: linuxppc-dev@lists.ozlabs.org
CC: Paul Mackerraspau...@samba.org
---
  arch/powerpc/sysdev/mpic.c |2 +-
  1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/sysdev/mpic.c b/arch/powerpc/sysdev/mpic.c
index 8c7e852..b3fa3d7 100644
--- a/arch/powerpc/sysdev/mpic.c
+++ b/arch/powerpc/sysdev/mpic.c
@@ -901,7 +901,7 @@ int mpic_set_irq_type(struct irq_data *d, unsigned int 
flow_type)
if (vold != vnew)
mpic_irq_write(src, MPIC_INFO(IRQ_VECTOR_PRI), vnew);

-   return IRQ_SET_MASK_OK_NOCOPY;;
+   return IRQ_SET_MASK_OK_NOCOPY;
  }

  void mpic_set_vector(unsigned int virq, unsigned int vector)




Re: [PATCH] phylib: update mdiobus_alloc() to allocate extra private space

2011-12-15 Thread Andy Fleming

On Dec 15, 2011, at 11:51 AM, Timur Tabi wrote:

 Augment mdiobus_alloc() to take a parameter indicating the number of extra
 bytes to allocate for private data.  Almost all callers of mdiobus_alloc()
 separately allocate a private data structure.  By allowing mdiobus_alloc()
 to allocate extra memory, the two allocations can be merged into one.
 
 This patch does not change any of the callers to actually take advantage
 of this feature, however.  That change can be made by the individual
 maintainers at their leisure.  For now, all callers ask for zero additional
 bytes, which mimics the previous behavior.


Why? Doesn't this just obfuscate things a little, while providing no immediate 
benefit?

Andy


Re: [PATCH] phylib: update mdiobus_alloc() to allocate extra private space

2011-12-15 Thread Timur Tabi
Andy Fleming wrote:
 Why? Doesn't this just obfuscate things a little, while providing no 
 immediate benefit?

I see code like this frequently:

bus = mdiobus_alloc();
if (bus == NULL)
return -ENOMEM;
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (priv == NULL) {
err = -ENOMEM;
goto out_free;
}
bus->priv = priv;

This can be replaced with:

bus = mdiobus_alloc(sizeof(*priv));
if (bus == NULL)
return -ENOMEM;

So the benefit is in simplifying memory management.  Now you have only one 
allocation to manage, instead of two.

fbdev does the same thing, which is where I got the idea from.  See 
framebuffer_alloc().

-- 
Timur Tabi
Linux kernel developer at Freescale



Re: [PATCH 3/3] mtd/nand : workaround for Freescale FCM to support large-page Nand chip

2011-12-15 Thread Scott Wood
On 12/14/2011 10:59 PM, Li Yang wrote:
 The limitation of the proposed bad block marker migration is that you
 need to make sure the migration is done and only done once.  If it is
 done more than once, the factory bad block marker is totally messed
 up.  It requires a complex mechanism to automatically guarantee the
 migration is only done once, and it still won't be 100% safe.
 
 I would suggest we use a much easier compromise: we form the BBT
 based on the factory bad block markers on first use of the flash, and
 after that the factory bad block markers are dropped.  We just rely on
 the BBT for information about bad blocks.  Although by doing so we
 can't regenerate the BBT again, as there is a mirror for the BBT I
 don't think we have too much risk.

I have corrupted the BBT too often during development (e.g. a bug makes
all accesses fail, so the upper layers decide to mark everything bad) to
be comfortable with this.

Elsewhere in the thread I suggested a way to let the marker be in either
the bbt or in a dedicated block, depending on whether it's a development
situation where the BBT needs to be erasable.

-Scott



Re: [PATCH 1/3]arch:powerpc:sysdev:mpic.c Remove extra semicolon.

2011-12-15 Thread Jiri Kosina
On Thu, 15 Dec 2011, Justin P. Mattock wrote:

 what would be the status of these? should I resend/rebase to the current
 etc?..

Check linux-next, it's already there.

 
 On 11/21/2011 08:43 AM, Justin P. Mattock wrote:
  From: Justin P. Mattockjustinmatt...@gmail.com
  
  The patch below removes an extra semicolon.
  
  Signed-off-by: Justin P. Mattockjustinmatt...@gmail.com
  CC: linuxppc-dev@lists.ozlabs.org
  CC: Paul Mackerraspau...@samba.org
  ---
arch/powerpc/sysdev/mpic.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
  
  diff --git a/arch/powerpc/sysdev/mpic.c b/arch/powerpc/sysdev/mpic.c
  index 8c7e852..b3fa3d7 100644
  --- a/arch/powerpc/sysdev/mpic.c
  +++ b/arch/powerpc/sysdev/mpic.c
  @@ -901,7 +901,7 @@ int mpic_set_irq_type(struct irq_data *d, unsigned int
  flow_type)
  if (vold != vnew)
  mpic_irq_write(src, MPIC_INFO(IRQ_VECTOR_PRI), vnew);
  
  -   return IRQ_SET_MASK_OK_NOCOPY;;
  +   return IRQ_SET_MASK_OK_NOCOPY;
}
  
void mpic_set_vector(unsigned int virq, unsigned int vector)
 

-- 
Jiri Kosina
SUSE Labs


Re: [PATCH 1/3]arch:powerpc:sysdev:mpic.c Remove extra semicolon.

2011-12-15 Thread Justin Mattock
I'll check through the commits. Thank you for applying.

On Dec 15, 2011 9:46 AM, Jiri Kosina jkos...@suse.cz wrote:

 On Thu, 15 Dec 2011, Justin P. Mattock wrote:

  what would be the status of these? should I resend/rebase to the current
  etc?..

 Check linux-next, it's already there.

 
  On 11/21/2011 08:43 AM, Justin P. Mattock wrote:
   From: Justin P. Mattockjustinmatt...@gmail.com
  
   The patch below removes an extra semicolon.
  
   Signed-off-by: Justin P. Mattockjustinmatt...@gmail.com
   CC: linuxppc-dev@lists.ozlabs.org
   CC: Paul Mackerraspau...@samba.org
   ---
 arch/powerpc/sysdev/mpic.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
  
   diff --git a/arch/powerpc/sysdev/mpic.c b/arch/powerpc/sysdev/mpic.c
   index 8c7e852..b3fa3d7 100644
   --- a/arch/powerpc/sysdev/mpic.c
   +++ b/arch/powerpc/sysdev/mpic.c
   @@ -901,7 +901,7 @@ int mpic_set_irq_type(struct irq_data *d, unsigned
 int
   flow_type)
   if (vold != vnew)
   mpic_irq_write(src, MPIC_INFO(IRQ_VECTOR_PRI), vnew);
  
   -   return IRQ_SET_MASK_OK_NOCOPY;;
   +   return IRQ_SET_MASK_OK_NOCOPY;
 }
  
 void mpic_set_vector(unsigned int virq, unsigned int vector)
 

 --
 Jiri Kosina
 SUSE Labs


Re: linux-next bad Kconfig for drivers/hid

2011-12-15 Thread Tony Breeds
On Thu, Dec 15, 2011 at 11:08:08AM +0100, Jiri Kosina wrote:
 
 Tony,
 
 have you actually tested this one to work in the configuration you have 
 been seeing it to fail?
 
 I don't seem to be able to find any use of '==' in other Kconfig files 
 (and never used it myself), so I'd like to have confirmation that it 
 actually works and fixes the problem before I apply it :)

Sorry I used '=' instead of '==' and that worked.  I figured the '=='
was just a typo.

Yours Tony



Re: Linux port availability for p5010 processor

2011-12-15 Thread tiejun.chen
Vineeth wrote:
 found p5020_ds.c in platforms/85xx;
 why is it a part of 85xx directory ? the core of P5020 is E5500 where the
 core of 85xx is e500;
 

e5500 is a 64-bit e500mc-based Power Architecture core.

 Do we have the processor initialization code (start.S, head.S) files, port
 available with linux ?
 

So it goes through head_64.S/entry_64.S with CONFIG_PPC_BOOK3E.

Tiejun


Re: [PATCH 3/3] mtd/nand : workaround for Freescale FCM to support large-page Nand chip

2011-12-15 Thread LiuShuo

On 2011-12-15 04:15, Scott Wood wrote:

On 12/14/2011 02:41 AM, LiuShuo wrote:

On 2011-12-13 10:46, LiuShuo wrote:

On 2011-12-13 05:30, Scott Wood wrote:

On 12/12/2011 03:19 PM, Artem Bityutskiy wrote:

On Mon, 2011-12-12 at 15:15 -0600, Scott Wood wrote:

NAND chips come from the factory with bad blocks marked at a certain
offset into each page.  This offset is normally in the OOB area, but
since we change the layout from "4k data, 128 byte oob" to "2k data,
64 byte oob, 2k data, 64 byte oob" the marker is no longer in the
oob.  On first use we need to migrate the markers so that they are
still in the oob.

Ah, I see, thanks. Are you planning to implement in-kernel migration or
use a user-space tool?

That's the kind of answer I was hoping to get from Shuo. :-)

OK, I try to do this. Wait for a couple of days.

-LiuShuo

I found it's too complex to do the migration in the Linux driver.

Maybe we can add a U-Boot command (e.g. 'nand bbmigrate') to do it; once is
enough.

Any reason not to do it automatically on the first U-Boot bad block
scan, if the flash isn't marked as already migrated?

Further discussion on the details of how to do it in U-Boot should move
to the U-Boot list.


And let the user ensure it has completed before Linux uses the NAND flash chip.

I don't want to trust the user here.  It's too easy to skip it, and
things will appear to work, but have subtle problems.


Even if we don't do the migration, a bad block can still end up marked
bad later through wear. So, do we really need to take so much time to
implement it? (The code looks too complex.)

It is not acceptable to ignore factory bad block markers just because
some methods of using the flash may eventually detect an error (possibly
after data is lost -- no guarantee that the badness is ECC-correctable)
and mark the block bad again.

If you don't feel up to the task, I can look at it, but won't have time
until January.

hi Scott,
It's really hard for me and I have much other work to do now. Thanks for
your help.


hi Artem,
Could this patch be applied now, with an independent patch for the bad
block information migration to follow later?

-LiuShuo


-Scott




Re: [PATCH v2] powerpc/setup_{32, 64}.c: remove unneeded boot_cpuid{, _phys}

2011-12-15 Thread Benjamin Herrenschmidt
On Mon, 2011-11-28 at 22:24 -0600, Matthew McClintock wrote:
 boot_cpuid and init_thread_info.cpu are redundant, just use the
 var that stays around longer and add a define to make boot_cpuid
 point at the correct value
 
 boot_cpuid_phys is not needed and can completely go away from the
 SMP case, we leave it there for the non-SMP case since the paca
 struct is not around to store this info
 
 This patch also has the effect of having the logical cpu number
 of the boot cpu be updated correctly independently of the ordering
 of the cpu nodes in the device tree.

So what about head_fsl_booke.S comparing boot_cpuid to -1 ? That seems
to be broken now in at least 2 ways, boot_cpuid doesn't exist anymore
and you don't initialize it to -1 either...

Cheers,
Ben.

 Signed-off-by: Matthew McClintock m...@freescale.com
 ---
 v2: Fix compile issue for pseries
 Remove '-1' initial value
 
  arch/powerpc/include/asm/smp.h |2 +-
  arch/powerpc/kernel/setup_32.c |5 +++--
  arch/powerpc/kernel/setup_64.c |1 -
  arch/powerpc/sysdev/xics/xics-common.c |1 +
  4 files changed, 5 insertions(+), 4 deletions(-)
 
 diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
 index adba970..f26c554 100644
 --- a/arch/powerpc/include/asm/smp.h
 +++ b/arch/powerpc/include/asm/smp.h
 @@ -29,7 +29,7 @@
  #endif
  #include <asm/percpu.h>
  
 -extern int boot_cpuid;
 +#define boot_cpuid   (init_thread_info.cpu)
  extern int spinning_secondaries;
  
  extern void cpu_die(void);
 diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
 index ac76108..8d4df4c 100644
 --- a/arch/powerpc/kernel/setup_32.c
 +++ b/arch/powerpc/kernel/setup_32.c
 @@ -46,10 +46,11 @@
  
  extern void bootx_init(unsigned long r4, unsigned long phys);
  
 -int boot_cpuid = -1;
 -EXPORT_SYMBOL_GPL(boot_cpuid);
 +/* we need a place to store phys cpu for non-SMP case */
 +#ifndef CONFIG_SMP
  int boot_cpuid_phys;
  EXPORT_SYMBOL_GPL(boot_cpuid_phys);
 +#endif
  
  int smp_hw_index[NR_CPUS];
  
 diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
 index fb9bb46..6d0f00f 100644
 --- a/arch/powerpc/kernel/setup_64.c
 +++ b/arch/powerpc/kernel/setup_64.c
 @@ -73,7 +73,6 @@
  #define DBG(fmt...)
  #endif
  
 -int boot_cpuid = 0;
  int __initdata spinning_secondaries;
  u64 ppc64_pft_size;
  
 diff --git a/arch/powerpc/sysdev/xics/xics-common.c 
 b/arch/powerpc/sysdev/xics/xics-common.c
 index d72eda6..8998b7a 100644
 --- a/arch/powerpc/sysdev/xics/xics-common.c
 +++ b/arch/powerpc/sysdev/xics/xics-common.c
 @@ -20,6 +20,7 @@
  #include <linux/of.h>
  #include <linux/slab.h>
  #include <linux/spinlock.h>
 +#include <linux/sched.h>
  
  #include <asm/prom.h>
  #include <asm/io.h>




Re: [PATCH v2] powerpc/setup_{32, 64}.c: remove unneeded boot_cpuid{, _phys}

2011-12-15 Thread McClintock Matthew-B29882
On Thu, Dec 15, 2011 at 9:12 PM, Benjamin Herrenschmidt
b...@kernel.crashing.org wrote:
 On Mon, 2011-11-28 at 22:24 -0600, Matthew McClintock wrote:
 boot_cpuid and init_thread_info.cpu are redundant, just use the
 var that stays around longer and add a define to make boot_cpuid
 point at the correct value

 boot_cpuid_phys is not needed and can completely go away from the
 SMP case, we leave it there for the non-SMP case since the paca
 struct is not around to store this info

 This patch also has the effect of having the logical cpu number
 of the boot cpu be updated correctly independently of the ordering
 of the cpu nodes in the device tree.

 So what about head_fsl_booke.S comparing boot_cpuid to -1 ? That seems
 to be broken now in at least 2 ways, boot_cpuid doesn't exist anymore
 and you don't initialize it to -1 either...

This is 4/5 which is also waiting for your review.

http://lists.ozlabs.org/pipermail/linuxppc-dev/2011-October/093474.html

-M


Re: [PATCH v2] powerpc/setup_{32, 64}.c: remove unneeded boot_cpuid{, _phys}

2011-12-15 Thread Benjamin Herrenschmidt
On Fri, 2011-12-16 at 03:29 +, McClintock Matthew-B29882 wrote:
 On Thu, Dec 15, 2011 at 9:12 PM, Benjamin Herrenschmidt
 b...@kernel.crashing.org wrote:
  On Mon, 2011-11-28 at 22:24 -0600, Matthew McClintock wrote:
  boot_cpuid and init_thread_info.cpu are redundant, just use the
  var that stays around longer and add a define to make boot_cpuid
  point at the correct value
 
  boot_cpuid_phys is not needed and can completely go away from the
  SMP case, we leave it there for the non-SMP case since the paca
  struct is not around to store this info
 
  This patch also has the effect of having the logical cpu number
  of the boot cpu be updated correctly independently of the ordering
  of the cpu nodes in the device tree.
 
  So what about head_fsl_booke.S comparing boot_cpuid to -1 ? That seems
  to be broken now in at least 2 ways, boot_cpuid doesn't exist anymore
  and you don't initialize it to -1 either...
 
 This is 4/5 which is also waiting for your review.
 
 http://lists.ozlabs.org/pipermail/linuxppc-dev/2011-October/093474.html

Ah missed that. This is FSL specific, I'd need Kumar and/or Scott's ack
for that one.

Cheers,
Ben.




Re: [PATCH v2] powerpc/setup_{32, 64}.c: remove unneeded boot_cpuid{, _phys}

2011-12-15 Thread McClintock Matthew-B29882
On Thu, Dec 15, 2011 at 9:35 PM, Benjamin Herrenschmidt
b...@kernel.crashing.org wrote:
 This is 4/5 which is also waiting for your review.

 http://lists.ozlabs.org/pipermail/linuxppc-dev/2011-October/093474.html

 Ah missed that. This is FSL specific, I'd need Kumar and/or Scott's ack
 for that one.

I believe Kumar was waiting on your review of 5/5. I'll let you guys discuss.

-M


Re: [PATCH v3 2/3] hvc_init(): Enforce one-time initialization.

2011-12-15 Thread Amit Shah
On (Tue) 06 Dec 2011 [09:05:38], Miche Baker-Harvey wrote:
 Amit,
 
 Ah, indeed.  I am not using MSI-X, so virtio_pci::vp_try_to_find_vqs()
 calls vp_request_intx() and sets up an interrupt callback.  From
 there, when an interrupt occurs, the stack looks something like this:
 
 virtio_pci::vp_interrupt()
   virtio_pci::vp_vring_interrupt()
 virtio_ring::vring_interrupt()
vq->vq.callback()  -- in this case, that's 
 virtio_console::control_intr()
 workqueue::schedule_work()
   workqueue::queue_work()
 queue_work_on(get_cpu())  -- queues the work on the current CPU.
 
 I'm not doing anything to keep multiple control messages from being
 sent concurrently to the guest, and we will take those interrupts on
 any CPU. I've confirmed that the two instances of
 handle_control_message() are occurring on different CPUs.

Hi Miche,

Here's a quick-and-dirty hack that should help.  I've not tested this,
and it's not yet signed-off-by.  Let me know if this helps.

From 16708fa247c0dd34aa55d78166d65e463f9be6d6 Mon Sep 17 00:00:00 2001
Message-Id: 
16708fa247c0dd34aa55d78166d65e463f9be6d6.1324015123.git.amit.s...@redhat.com
From: Amit Shah amit.s...@redhat.com
Date: Fri, 16 Dec 2011 11:27:04 +0530
Subject: [PATCH 1/1] virtio: console: Serialise control work

We currently allow multiple instances of the control work handler to run
in parallel.  This isn't expected to work; serialise access by disabling
interrupts on new packets from the Host and enable them when all the
existing ones are consumed.
---
 drivers/char/virtio_console.c |6 ++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
index 8e3c46d..72d396c 100644
--- a/drivers/char/virtio_console.c
+++ b/drivers/char/virtio_console.c
@@ -1466,6 +1466,7 @@ static void control_work_handler(struct work_struct *work)
portdev = container_of(work, struct ports_device, control_work);
vq = portdev->c_ivq;
 
+ start:
spin_lock(&portdev->cvq_lock);
while ((buf = virtqueue_get_buf(vq, &len))) {
spin_unlock(&portdev->cvq_lock);
@@ -1483,6 +1484,10 @@ static void control_work_handler(struct work_struct 
*work)
}
}
spin_unlock(&portdev->cvq_lock);
+   if (unlikely(!virtqueue_enable_cb(vq))) {
+   virtqueue_disable_cb(vq);
+   goto start;
+   }
 }
 
 static void out_intr(struct virtqueue *vq)
@@ -1533,6 +1538,7 @@ static void control_intr(struct virtqueue *vq)
 {
struct ports_device *portdev;
 
+   virtqueue_disable_cb(vq);
portdev = vq->vdev->priv;
schedule_work(&portdev->control_work);
 }
-- 
1.7.7.4



Amit