Re: svn commit: r269134 - head/sys/vm

2014-08-28 Thread Adrian Chadd
Hi Alan!

I just reverted back to the commit before this one and it fixed my MIPS32 boot.

Would you have some time to help me help you figure out why things broke? :)

Thanks!



-a


On 26 July 2014 11:10, Alan Cox a...@freebsd.org wrote:
 Author: alc
 Date: Sat Jul 26 18:10:18 2014
 New Revision: 269134
 URL: http://svnweb.freebsd.org/changeset/base/269134

 Log:
   When unwiring a region of an address space, do not assume that the
   underlying physical pages are mapped by the pmap.  If, for example, the
   application has performed an mprotect(..., PROT_NONE) on any part of the
   wired region, then those pages will no longer be mapped by the pmap.
   So, using the pmap to look up the wired pages in order to unwire them
   doesn't always work, and when it doesn't work, wired pages are leaked.

   To avoid the leak, introduce and use a new function vm_object_unwire()
   that locates the wired pages by traversing the object and its backing
   objects.

   At the same time, switch from using pmap_change_wiring() to the recently
   introduced function pmap_unwire() for unwiring the region's mappings.
   pmap_unwire() is faster, because it operates on a range of virtual
   addresses rather than a single virtual page at a time.  Moreover, by
   operating on a range, it is superpage friendly.  It doesn't waste time
   performing unnecessary demotions.

   Reported by:  markj
   Reviewed by:  kib
   Tested by:    pho, jmg (arm)
   Sponsored by: EMC / Isilon Storage Division

 Modified:
   head/sys/vm/vm_extern.h
   head/sys/vm/vm_fault.c
   head/sys/vm/vm_map.c
   head/sys/vm/vm_object.c
   head/sys/vm/vm_object.h

 Modified: head/sys/vm/vm_extern.h
 ==
 --- head/sys/vm/vm_extern.h Sat Jul 26 17:59:25 2014(r269133)
 +++ head/sys/vm/vm_extern.h Sat Jul 26 18:10:18 2014(r269134)
 @@ -81,7 +81,6 @@ int vm_fault_hold(vm_map_t map, vm_offse
  int fault_flags, vm_page_t *m_hold);
  int vm_fault_quick_hold_pages(vm_map_t map, vm_offset_t addr, vm_size_t len,
  vm_prot_t prot, vm_page_t *ma, int max_count);
 -void vm_fault_unwire(vm_map_t, vm_offset_t, vm_offset_t, boolean_t);
  int vm_fault_wire(vm_map_t, vm_offset_t, vm_offset_t, boolean_t);
  int vm_forkproc(struct thread *, struct proc *, struct thread *, struct 
 vmspace *, int);
  void vm_waitproc(struct proc *);

 Modified: head/sys/vm/vm_fault.c
 ==
 --- head/sys/vm/vm_fault.c  Sat Jul 26 17:59:25 2014(r269133)
 +++ head/sys/vm/vm_fault.c  Sat Jul 26 18:10:18 2014(r269134)
  @@ -106,6 +106,7 @@ __FBSDID("$FreeBSD$");
  #define PFFOR 4

  static int vm_fault_additional_pages(vm_page_t, int, int, vm_page_t *, int 
 *);
 +static void vm_fault_unwire(vm_map_t, vm_offset_t, vm_offset_t, boolean_t);

   #define VM_FAULT_READ_BEHIND	8
   #define VM_FAULT_READ_MAX	(1 + VM_FAULT_READ_AHEAD_MAX)
 @@ -1186,7 +1187,7 @@ vm_fault_wire(vm_map_t map, vm_offset_t
   *
   * Unwire a range of virtual addresses in a map.
   */
 -void
 +static void
  vm_fault_unwire(vm_map_t map, vm_offset_t start, vm_offset_t end,
  boolean_t fictitious)
  {

 Modified: head/sys/vm/vm_map.c
 ==
 --- head/sys/vm/vm_map.cSat Jul 26 17:59:25 2014(r269133)
 +++ head/sys/vm/vm_map.cSat Jul 26 18:10:18 2014(r269134)
 @@ -132,6 +132,7 @@ static void _vm_map_init(vm_map_t map, p
  vm_offset_t max);
  static void vm_map_entry_deallocate(vm_map_entry_t entry, boolean_t 
 system_map);
  static void vm_map_entry_dispose(vm_map_t map, vm_map_entry_t entry);
 +static void vm_map_entry_unwire(vm_map_t map, vm_map_entry_t entry);
  #ifdef INVARIANTS
  static void vm_map_zdtor(void *mem, int size, void *arg);
  static void vmspace_zdtor(void *mem, int size, void *arg);
 @@ -2393,16 +2394,10 @@ done:
 		    (entry->eflags & MAP_ENTRY_USER_WIRED))) {
 			if (user_unwire)
 				entry->eflags &= ~MAP_ENTRY_USER_WIRED;
 -			entry->wired_count--;
 -			if (entry->wired_count == 0) {
 -				/*
 -				 * Retain the map lock.
 -				 */
 -				vm_fault_unwire(map, entry->start, entry->end,
 -				    entry->object.vm_object != NULL &&
 -				    (entry->object.vm_object->flags &
 -				    OBJ_FICTITIOUS) != 0);
 -			}
 +			if (entry->wired_count == 1)
 +				vm_map_entry_unwire(map, entry);
 +			else
 +				entry->wired_count--;
 		}
 		KASSERT((entry->eflags & MAP_ENTRY_IN_TRANSITION) &&

Re: svn commit: r269134 - head/sys/vm

2014-08-28 Thread Alan Cox

On Aug 28, 2014, at 3:30 AM, Adrian Chadd wrote:

 Hi Alan!
 
 I just reverted back to the commit before this one and it fixed my MIPS32 
 boot.
 
 Would you have some time to help me help you figure out why things broke? :)
 


Have you tried booting a kernel based on r269134?  At the time, I tested a 
64-bit MIPS kernel on gxemul, and it ran ok.  Also, Hiren reports that a 32-bit 
kernel from about two weeks after r269134 works for him.



Re: svn commit: r269134 - head/sys/vm

2014-08-28 Thread hiren panchasara
On Thu, Aug 28, 2014 at 9:30 AM, Alan Cox a...@rice.edu wrote:

 On Aug 28, 2014, at 3:30 AM, Adrian Chadd wrote:

 Hi Alan!

 I just reverted back to the commit before this one and it fixed my MIPS32 
 boot.

 Would you have some time to help me help you figure out why things broke? :)



 Have you tried booting a kernel based on r269134?  At the time, I tested a 
 64-bit MIPS kernel on gxemul, and it ran ok.  Also, Hiren reports that a 
 32-bit kernel from about two weeks after r269134 works for him.


To be precise, I am on r269799 which works for me.

cheers,
Hiren


Re: svn commit: r269134 - head/sys/vm

2014-08-28 Thread Adrian Chadd
On 28 August 2014 09:52, hiren panchasara hi...@freebsd.org wrote:
 On Thu, Aug 28, 2014 at 9:30 AM, Alan Cox a...@rice.edu wrote:

 On Aug 28, 2014, at 3:30 AM, Adrian Chadd wrote:

 Hi Alan!

 I just reverted back to the commit before this one and it fixed my MIPS32 
 boot.

 Would you have some time to help me help you figure out why things broke? :)



 Have you tried booting a kernel based on r269134?  At the time, I tested a 
 64-bit MIPS kernel on gxemul, and it ran ok.  Also, Hiren reports that a 
 32-bit kernel from about two weeks after r269134 works for him.


 To be precise, I am on r269799 which works for me.

Ok. I'll jump forward to r269799 and then inch forward until I find
where it broke.

Thanks!


-a
___
svn-src-all@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/svn-src-all
To unsubscribe, send any mail to svn-src-all-unsubscr...@freebsd.org


Re: svn commit: r269134 - head/sys/vm

2014-08-28 Thread Adrian Chadd
Just tried it, it didn't work:

RedBoot> load kernel.RSPRO
Using default protocol (TFTP)
Entry point: 0x80050100, address range: 0x8005-0x805b11cc
RedBoot>
RedBoot> go
CPU platform: Atheros AR7161 rev 2
CPU Frequency=720 MHz
CPU DDR Frequency=360 MHz
CPU AHB Frequency=180 MHz
platform frequency: 720 MHz
CPU reference clock: 40 MHz
CPU MDIO clock: 40 MHz
arguments:
  a0 = 80050100
  a1 = 8ff0
  a2 = 0001
  a3 = 0007
Cmd line:
Environment:
envp is invalid
Cache info:
  picache_stride    = 4096
  picache_loopcount = 16
  pdcache_stride    = 4096
  pdcache_loopcount = 8
cpu0: MIPS Technologies processor v116.147
  MMU: Standard TLB, 16 entries
  L1 i-cache: 4 ways of 512 sets, 32 bytes per line
  L1 d-cache: 4 ways of 256 sets, 32 bytes per line
  Config1=0x9ee3519e<PerfCount,WatchRegs,MIPS16,EJTAG>
  Config3=0x20
KDB: debugger backends: ddb
KDB: current backend: ddb
Copyright (c) 1992-2014 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 11.0-CURRENT #3 r269799: Fri Aug 29 00:17:22 UTC 2014
adrian@adrian-hackbox:/usr/home/adrian/work/freebsd/embedded/head/obj/mips/mips.mips/usr/home/adrian/work/freebsd/embedded/head/src/sys/RSPRO mips
gcc version 4.2.1 20070831 patched [FreeBSD]
WARNING: WITNESS option enabled, expect reduced performance.
MEMGUARD DEBUGGING ALLOCATOR INITIALIZED:
MEMGUARD map base: 0xc080
MEMGUARD map size: 45696 KBytes
real memory  = 33554432 (32768K bytes)
avail memory = 22806528 (21MB)
random device not loaded; using insecure entropy
random: <Software, Yarrow> initialized
nexus0: <MIPS32 root nexus>
clock0: <Generic MIPS32 ticker> on nexus0
Timecounter "MIPS32" frequency 36000 Hz quality 800
Event timer "MIPS32" frequency 36000 Hz quality 800
argemdio0: <Atheros AR71xx built-in ethernet interface, MDIO controller> at mem 0x1900-0x19000fff on nexus0
mdio0: <MDIO> on argemdio0
mdioproxy0: <MII/MDIO proxy, MDIO side> on mdio0
arswitch0: <Atheros AR8316 Ethernet Switch> on mdio0
arswitch0: ar8316_hw_setup: MAC port == RGMII, port 4 = dedicated PHY
arswitch0: ar8316_hw_setup: port 4 RGMII workaround
miibus0: <MII bus> on arswitch0
ukphy0: <Generic IEEE 802.3u media interface> PHY 0 on miibus0
ukphy0:  none, 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT-FDX, 1000baseT-FDX-master, auto
miibus1: <MII bus> on arswitch0
ukphy1: <Generic IEEE 802.3u media interface> PHY 1 on miibus1
ukphy1:  none, 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT-FDX, 1000baseT-FDX-master, auto
miibus2: <MII bus> on arswitch0
ukphy2: <Generic IEEE 802.3u media interface> PHY 2 on miibus2
ukphy2:  none, 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT-FDX, 1000baseT-FDX-master, auto
miibus3: <MII bus> on arswitch0
ukphy3: <Generic IEEE 802.3u media interface> PHY 3 on miibus3
ukphy3:  none, 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT-FDX, 1000baseT-FDX-master, auto
etherswitch0: <Switch controller> on arswitch0
mdio1: <MDIO> on arswitch0
mdioproxy1: <MII/MDIO proxy, MDIO side> on mdio1
apb0 at irq 4 on nexus0
uart0: <16550 or compatible> at mem 0x18020003-0x1802001a irq 3 on apb0
uart0: console (115200,n,8,1)
gpio0: <Atheros AR71XX GPIO driver> at mem 0x1804-0x18040fff irq 2 on apb0
gpio0: [GIANT-LOCKED]
gpio0: function_set: 0x0
gpio0: function_clear: 0x0
gpio0: gpio pinmask=0xff
gpioc0: <GPIO controller> on gpio0
gpiobus0: <GPIO bus> on gpio0
gpioled0: <GPIO led> at pin(s) 2 on gpiobus0
ehci0: <AR71XX Integrated USB 2.0 controller> at mem 0x1b00-0x1bff irq 1 on nexus0
usbus0: set host controller mode
usbus0: EHCI version 1.0
usbus0: set host controller mode
usbus0 on ehci0
pcib0 at irq 0 on nexus0
pcib0: ar71xx_pci_attach: missing hint 'baseslot', default to AR71XX_PCI_BASE_SLOT
pci0: <PCI bus> on pcib0
ath0: <Atheros 9220> irq 0 at device 17.0 on pci0
[ath] enabling AN_TOP2_FIXUP
ath0: [HT] enabling HT modes
ath0: [HT] 1 stream STBC receive enabled
ath0: [HT] 1 stream STBC transmit enabled
ath0: [HT] 2 RX streams; 2 TX streams
ath0: AR9220 mac 128.2 RF5133 phy 13.0
ath0: 2GHz radio: 0x; 5GHz radio: 0x00c0
ath1: <Atheros 9220> irq 1 at device 18.0 on pci0
[ath] enabling AN_TOP2_FIXUP
ath1: [HT] enabling HT modes
ath1: [HT] 1 stream STBC receive enabled
ath1: [HT] 1 stream STBC transmit enabled
ath1: [HT] 2 RX streams; 2 TX streams
ath1: AR9220 mac 128.2 RF5133 phy 13.0
ath1: 2GHz radio: 0x; 5GHz radio: 0x00c0
arge0: <Atheros AR71xx built-in ethernet interface> at mem 0x1900-0x19000fff irq 2 on nexus0
arge0: arge_attach: overriding MII mode to 'RGMII'
miiproxy0: <MII/MDIO proxy, MII side> on arge0
miiproxy0: attached to target mdio1
arge0: finishing attachment, phymask 0010, proxy set
miibus4: <MII bus> on miiproxy0
ukphy4: <Generic IEEE 802.3u media interface> PHY 4 on 

Re: svn commit: r269134 - head/sys/vm

2014-08-28 Thread Adrian Chadd
Hm, ok, r269134 worked but r269799 didn't.

I'll keep digging, thanks!


-a


Re: svn commit: r269134 - head/sys/vm

2014-08-01 Thread Alan Cox
On 07/29/2014 05:38, Slawa Olhovchenkov wrote:
 On Sat, Jul 26, 2014 at 06:10:18PM +, Alan Cox wrote:

 Author: alc
 Date: Sat Jul 26 18:10:18 2014
 New Revision: 269134
 URL: http://svnweb.freebsd.org/changeset/base/269134

 Log:
   When unwiring a region of an address space, do not assume that the
   underlying physical pages are mapped by the pmap.  If, for example, the
   application has performed an mprotect(..., PROT_NONE) on any part of the
   wired region, then those pages will no longer be mapped by the pmap.
   So, using the pmap to lookup the wired pages in order to unwire them
   doesn't always work, and when it doesn't work wired pages are leaked.
   
   To avoid the leak, introduce and use a new function vm_object_unwire()
   that locates the wired pages by traversing the object and its backing
   objects.
  MFC planned?


At some point, yes.  However, I'm not sure that it will be MFCed in time
for 10.1.


   At the same time, switch from using pmap_change_wiring() to the recently
   introduced function pmap_unwire() for unwiring the region's mappings.
   pmap_unwire() is faster, because it operates a range of virtual addresses
   rather than a single virtual page at a time.  Moreover, by operating on
   a range, it is superpage friendly.  It doesn't waste time performing
   unnecessary demotions.
   
   Reported by:   markj
   Reviewed by:   kib
   Tested by: pho, jmg (arm)
   Sponsored by:  EMC / Isilon Storage Division




Re: svn commit: r269134 - head/sys/vm

2014-07-31 Thread Andreas Tobler

On 31.07.14 06:35, Alan Cox wrote:

On 07/30/2014 16:26, Andreas Tobler wrote:

On 30.07.14 23:17, Alan Cox wrote:

On 07/30/2014 15:15, Andreas Tobler wrote:

On 30.07.14 21:54, Alan Cox wrote:

On 07/30/2014 14:46, Alan Cox wrote:

On 07/30/2014 13:58, Andreas Tobler wrote:

Hi Alan,

On 26.07.14 20:10, Alan Cox wrote:

Author: alc
Date: Sat Jul 26 18:10:18 2014
New Revision: 269134
URL: http://svnweb.freebsd.org/changeset/base/269134

Log:
  When unwiring a region of an address space, do not assume that the
  underlying physical pages are mapped by the pmap.  If, for example, the
  application has performed an mprotect(..., PROT_NONE) on any part of the
  wired region, then those pages will no longer be mapped by the pmap.
  So, using the pmap to look up the wired pages in order to unwire them
  doesn't always work, and when it doesn't work, wired pages are leaked.

  To avoid the leak, introduce and use a new function vm_object_unwire()
  that locates the wired pages by traversing the object and its backing
  objects.

  At the same time, switch from using pmap_change_wiring() to the recently
  introduced function pmap_unwire() for unwiring the region's mappings.
  pmap_unwire() is faster, because it operates on a range of virtual
  addresses rather than a single virtual page at a time.  Moreover, by
  operating on a range, it is superpage friendly.  It doesn't waste time
  performing unnecessary demotions.

  Reported by:  markj
  Reviewed by:  kib
  Tested by:    pho, jmg (arm)
  Sponsored by: EMC / Isilon Storage Division

This commit brings my 32- and 64-bit PowerMacs into a panic.
Unfortunately, I'm not able to give you a backtrace in the form of a
textdump or a core dump.

The only thing I have is this picture:

http://people.freebsd.org/~andreast/r269134_panic.jpg

Exactly this revision gives a panic and breaks the textdump/coredump
facility.

How can I help debugging?


It appears to me that moea64_pvo_enter() had a pre-existing bug that got
tickled by this change.  Specifically, moea64_pvo_enter() doesn't set
the PVO_WIRED flag when an unwired mapping already exists.  It just
returns with the mapping still in an unwired state.  Consequently, when
pmap_unwire() finally runs, it doesn't find a wired mapping.

Try this:

Index: powerpc/aim/mmu_oea64.c
===
--- powerpc/aim/mmu_oea64.c (revision 269127)
+++ powerpc/aim/mmu_oea64.c (working copy)
@@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
 	if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
 		if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
 		    (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP))
-		    == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
+		    == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
+		    ((pvo->pvo_vaddr ^ flags) & PVO_WIRED)) {
 			if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) {
 				/* Re-insert if spilled */
 				i = MOEA64_PTE_INSERT(mmu, ptegidx,



The new conditional test needs to be inverted.  Try this instead:

Index: powerpc/aim/mmu_oea64.c
===
--- powerpc/aim/mmu_oea64.c (revision 269127)
+++ powerpc/aim/mmu_oea64.c (working copy)
@@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
 	if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
 		if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
 		    (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP))
-		    == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
+		    == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
+		    ((pvo->pvo_vaddr ^ flags) & PVO_WIRED) == 0) {
 			if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) {
 				/* Re-insert if spilled */
 				i = MOEA64_PTE_INSERT(mmu, ptegidx,




The panic stays, but the message is different:

panic: moea64_pvo_to_pte: pvo 0x10147ea0 has invalid pte 0xb341180 in
moea64_pteg_table but valid in pvo.



My attempted fix is doing something else wrong.  Do you have a stack
trace?


Thanks to the iPhone:

http://people.freebsd.org/~andreast/r269134-1_panic.jpg


Ok, this patch should fix both the original panic and the new one.  They
are two distinct problems.


Yep, thank you!

Additionally, I tried to adapt the 32-bit path and successfully booted;
is the patch below OK?


Again, thanks a lot!
Andreas

Index: powerpc/aim/mmu_oea.c
===
--- powerpc/aim/mmu_oea.c   (revision 269326)
+++ 

Re: svn commit: r269134 - head/sys/vm

2014-07-31 Thread Alan Cox
On 07/31/2014 04:00, Andreas Tobler wrote:
 On 31.07.14 06:35, Alan Cox wrote:
 On 07/30/2014 16:26, Andreas Tobler wrote:
 On 30.07.14 23:17, Alan Cox wrote:
 On 07/30/2014 15:15, Andreas Tobler wrote:
 On 30.07.14 21:54, Alan Cox wrote:
 On 07/30/2014 14:46, Alan Cox wrote:
 On 07/30/2014 13:58, Andreas Tobler wrote:
 Hi Alan,

 On 26.07.14 20:10, Alan Cox wrote:
 Author: alc
 Date: Sat Jul 26 18:10:18 2014
 New Revision: 269134
 URL: http://svnweb.freebsd.org/changeset/base/269134

 Log:
   When unwiring a region of an address space, do not assume that the
   underlying physical pages are mapped by the pmap.  If, for example, the
   application has performed an mprotect(..., PROT_NONE) on any part of the
   wired region, then those pages will no longer be mapped by the pmap.
   So, using the pmap to look up the wired pages in order to unwire them
   doesn't always work, and when it doesn't work, wired pages are leaked.

   To avoid the leak, introduce and use a new function vm_object_unwire()
   that locates the wired pages by traversing the object and its backing
   objects.

   At the same time, switch from using pmap_change_wiring() to the recently
   introduced function pmap_unwire() for unwiring the region's mappings.
   pmap_unwire() is faster, because it operates on a range of virtual
   addresses rather than a single virtual page at a time.  Moreover, by
   operating on a range, it is superpage friendly.  It doesn't waste time
   performing unnecessary demotions.

   Reported by:  markj
   Reviewed by:  kib
   Tested by:    pho, jmg (arm)
   Sponsored by: EMC / Isilon Storage Division
 This commit brings my 32- and 64-bit PowerMac's into panic.
 Unfortunately I'm not able to give you a backtrace in the form
 of a
 textdump nor of a core dump.

 The only thing I have is this picture:

 http://people.freebsd.org/~andreast/r269134_panic.jpg

 Exactly this revision gives a panic and breaks the
 textdump/coredump
 facility.

 How can I help debugging?

  It appears to me that moea64_pvo_enter() had a pre-existing bug that got
  tickled by this change.  Specifically, moea64_pvo_enter() doesn't set
  the PVO_WIRED flag when an unwired mapping already exists.  It just
  returns with the mapping still in an unwired state.  Consequently, when
  pmap_unwire() finally runs, it doesn't find a wired mapping.

 Try this:

 Index: powerpc/aim/mmu_oea64.c
 ===
 --- powerpc/aim/mmu_oea64.c (revision 269127)
 +++ powerpc/aim/mmu_oea64.c (working copy)
 @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
  	if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
  		if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
  		    (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP))
 -		    == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
 +		    == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
 +		    ((pvo->pvo_vaddr ^ flags) & PVO_WIRED)) {
  			if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) {
  				/* Re-insert if spilled */
  				i = MOEA64_PTE_INSERT(mmu, ptegidx,


 The new conditional test needs to be inverted.  Try this instead:

 Index: powerpc/aim/mmu_oea64.c
 ===
 --- powerpc/aim/mmu_oea64.c (revision 269127)
 +++ powerpc/aim/mmu_oea64.c (working copy)
 @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
  	if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
  		if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
  		    (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP))
 -		    == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
 +		    == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
 +		    ((pvo->pvo_vaddr ^ flags) & PVO_WIRED) == 0) {
  			if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) {
  				/* Re-insert if spilled */
  				i = MOEA64_PTE_INSERT(mmu, ptegidx,



 The panic stays, but the message is different:

 panic: moea64_pvo_to_pte: pvo 0x10147ea0 has invalid pte 0xb341180 in
 moea64_pteg_table but valid in pvo.


 My attempted fix is doing something else wrong.  Do you have a stack
 trace?

 Thanks to the iPhone:

 http://people.freebsd.org/~andreast/r269134-1_panic.jpg

 Ok, this patch should fix both the original panic and the new one.  They
 are two distinct problems.

 Yep, thank you!

 Additionally I tried to adapt the 32-bit path and successfully booted
 the below, ok?

 Again, thanks a lot!
 Andreas

 

Re: svn commit: r269134 - head/sys/vm

2014-07-31 Thread Andreas Tobler

On 31.07.14 20:34, Alan Cox wrote:


Here is a better fix for the problem in moea64_pvo_enter().  The
original fix destroys and recreates the PTE in order to wire it.  This
new fix simply updates the PTE.

In the case of moea_pvo_enter(), there is also no need to destroy and
recreate the PTE.


Awesome! All with no runtime tests?

Nothing to report besides that it works, on a PowerMac G5 (64-bit, 2/4 CPUs)
and a MacMini G4 (32-bit, 1 CPU).


Thank you again!
Andreas



Re: svn commit: r269134 - head/sys/vm

2014-07-30 Thread Andreas Tobler

Hi Alan,

On 26.07.14 20:10, Alan Cox wrote:

Author: alc
Date: Sat Jul 26 18:10:18 2014
New Revision: 269134
URL: http://svnweb.freebsd.org/changeset/base/269134

Log:
   When unwiring a region of an address space, do not assume that the
   underlying physical pages are mapped by the pmap.  If, for example, the
   application has performed an mprotect(..., PROT_NONE) on any part of the
   wired region, then those pages will no longer be mapped by the pmap.
   So, using the pmap to lookup the wired pages in order to unwire them
   doesn't always work, and when it doesn't work wired pages are leaked.

   To avoid the leak, introduce and use a new function vm_object_unwire()
   that locates the wired pages by traversing the object and its backing
   objects.

   At the same time, switch from using pmap_change_wiring() to the recently
   introduced function pmap_unwire() for unwiring the region's mappings.
   pmap_unwire() is faster, because it operates a range of virtual addresses
   rather than a single virtual page at a time.  Moreover, by operating on
   a range, it is superpage friendly.  It doesn't waste time performing
   unnecessary demotions.

   Reported by: markj
   Reviewed by: kib
   Tested by:   pho, jmg (arm)
   Sponsored by:EMC / Isilon Storage Division


This commit brings my 32- and 64-bit PowerMacs into a panic.
Unfortunately, I'm not able to give you a backtrace in the form of a
textdump or a core dump.


The only thing I have is this picture:

http://people.freebsd.org/~andreast/r269134_panic.jpg

Exactly this revision gives a panic and breaks the textdump/coredump 
facility.


How can I help debugging?

TIA,
Andreas


Modified:
   head/sys/vm/vm_extern.h
   head/sys/vm/vm_fault.c
   head/sys/vm/vm_map.c
   head/sys/vm/vm_object.c
   head/sys/vm/vm_object.h

Modified: head/sys/vm/vm_extern.h
==
--- head/sys/vm/vm_extern.h Sat Jul 26 17:59:25 2014(r269133)
+++ head/sys/vm/vm_extern.h Sat Jul 26 18:10:18 2014(r269134)
@@ -81,7 +81,6 @@ int vm_fault_hold(vm_map_t map, vm_offse
  int fault_flags, vm_page_t *m_hold);
  int vm_fault_quick_hold_pages(vm_map_t map, vm_offset_t addr, vm_size_t len,
  vm_prot_t prot, vm_page_t *ma, int max_count);
-void vm_fault_unwire(vm_map_t, vm_offset_t, vm_offset_t, boolean_t);
  int vm_fault_wire(vm_map_t, vm_offset_t, vm_offset_t, boolean_t);
  int vm_forkproc(struct thread *, struct proc *, struct thread *, struct 
vmspace *, int);
  void vm_waitproc(struct proc *);

Modified: head/sys/vm/vm_fault.c
==
--- head/sys/vm/vm_fault.c  Sat Jul 26 17:59:25 2014(r269133)
+++ head/sys/vm/vm_fault.c  Sat Jul 26 18:10:18 2014(r269134)
@@ -106,6 +106,7 @@ __FBSDID("$FreeBSD$");
  #define PFFOR 4

  static int vm_fault_additional_pages(vm_page_t, int, int, vm_page_t *, int *);
+static void vm_fault_unwire(vm_map_t, vm_offset_t, vm_offset_t, boolean_t);

  #define	VM_FAULT_READ_BEHIND	8
  #define	VM_FAULT_READ_MAX	(1 + VM_FAULT_READ_AHEAD_MAX)
@@ -1186,7 +1187,7 @@ vm_fault_wire(vm_map_t map, vm_offset_t
   *
   *Unwire a range of virtual addresses in a map.
   */
-void
+static void
  vm_fault_unwire(vm_map_t map, vm_offset_t start, vm_offset_t end,
  boolean_t fictitious)
  {

Modified: head/sys/vm/vm_map.c
==
--- head/sys/vm/vm_map.cSat Jul 26 17:59:25 2014(r269133)
+++ head/sys/vm/vm_map.cSat Jul 26 18:10:18 2014(r269134)
@@ -132,6 +132,7 @@ static void _vm_map_init(vm_map_t map, p
  vm_offset_t max);
  static void vm_map_entry_deallocate(vm_map_entry_t entry, boolean_t 
system_map);
  static void vm_map_entry_dispose(vm_map_t map, vm_map_entry_t entry);
+static void vm_map_entry_unwire(vm_map_t map, vm_map_entry_t entry);
  #ifdef INVARIANTS
  static void vm_map_zdtor(void *mem, int size, void *arg);
  static void vmspace_zdtor(void *mem, int size, void *arg);
@@ -2393,16 +2394,10 @@ done:
(entry->eflags & MAP_ENTRY_USER_WIRED))) {
if (user_unwire)
entry->eflags &= ~MAP_ENTRY_USER_WIRED;
-   entry->wired_count--;
-   if (entry->wired_count == 0) {
-   /*
-* Retain the map lock.
-*/
-   vm_fault_unwire(map, entry->start, entry->end,
-   entry->object.vm_object != NULL &&
-   (entry->object.vm_object->flags &
-   OBJ_FICTITIOUS) != 0);
-   }
+   if (entry->wired_count == 1)
+   vm_map_entry_unwire(map, entry);
+   else

Re: svn commit: r269134 - head/sys/vm

2014-07-30 Thread Alan Cox
On 07/30/2014 13:58, Andreas Tobler wrote:
 Hi Alan,

 On 26.07.14 20:10, Alan Cox wrote:
 Author: alc
 Date: Sat Jul 26 18:10:18 2014
 New Revision: 269134
 URL: http://svnweb.freebsd.org/changeset/base/269134

 Log:
When unwiring a region of an address space, do not assume that the
underlying physical pages are mapped by the pmap.  If, for
 example, the
application has performed an mprotect(..., PROT_NONE) on any part
 of the
wired region, then those pages will no longer be mapped by the pmap.
So, using the pmap to lookup the wired pages in order to unwire them
doesn't always work, and when it doesn't work wired pages are leaked.

To avoid the leak, introduce and use a new function
 vm_object_unwire()
that locates the wired pages by traversing the object and its backing
objects.

At the same time, switch from using pmap_change_wiring() to the
 recently
introduced function pmap_unwire() for unwiring the region's mappings.
pmap_unwire() is faster, because it operates a range of virtual
 addresses
rather than a single virtual page at a time.  Moreover, by
 operating on
a range, it is superpage friendly.  It doesn't waste time performing
unnecessary demotions.

Reported by:markj
Reviewed by:kib
Tested by:pho, jmg (arm)
Sponsored by:EMC / Isilon Storage Division

 This commit brings my 32- and 64-bit PowerMac's into panic.
 Unfortunately I'm not able to give you a backtrace in the form of a
 textdump nor of a core dump.

 The only thing I have is this picture:

 http://people.freebsd.org/~andreast/r269134_panic.jpg

 Exactly this revision gives a panic and breaks the textdump/coredump
 facility.

 How can I help debugging?



For now, that's all I need to know.




 Modified:
head/sys/vm/vm_extern.h
head/sys/vm/vm_fault.c
head/sys/vm/vm_map.c
head/sys/vm/vm_object.c
head/sys/vm/vm_object.h

 Modified: head/sys/vm/vm_extern.h
 ==

 --- head/sys/vm/vm_extern.hSat Jul 26 17:59:25 2014(r269133)
 +++ head/sys/vm/vm_extern.hSat Jul 26 18:10:18 2014(r269134)
 @@ -81,7 +81,6 @@ int vm_fault_hold(vm_map_t map, vm_offse
   int fault_flags, vm_page_t *m_hold);
   int vm_fault_quick_hold_pages(vm_map_t map, vm_offset_t addr,
 vm_size_t len,
   vm_prot_t prot, vm_page_t *ma, int max_count);
 -void vm_fault_unwire(vm_map_t, vm_offset_t, vm_offset_t, boolean_t);
   int vm_fault_wire(vm_map_t, vm_offset_t, vm_offset_t, boolean_t);
   int vm_forkproc(struct thread *, struct proc *, struct thread *,
 struct vmspace *, int);
   void vm_waitproc(struct proc *);

 Modified: head/sys/vm/vm_fault.c
 ==

 --- head/sys/vm/vm_fault.cSat Jul 26 17:59:25 2014(r269133)
 +++ head/sys/vm/vm_fault.cSat Jul 26 18:10:18 2014(r269134)
 @@ -106,6 +106,7 @@ __FBSDID("$FreeBSD$");
   #define PFFOR 4

   static int vm_fault_additional_pages(vm_page_t, int, int, vm_page_t
 *, int *);
 +static void vm_fault_unwire(vm_map_t, vm_offset_t, vm_offset_t,
 boolean_t);

   #define	VM_FAULT_READ_BEHIND	8
   #define	VM_FAULT_READ_MAX	(1 + VM_FAULT_READ_AHEAD_MAX)
 @@ -1186,7 +1187,7 @@ vm_fault_wire(vm_map_t map, vm_offset_t
*
*Unwire a range of virtual addresses in a map.
*/
 -void
 +static void
   vm_fault_unwire(vm_map_t map, vm_offset_t start, vm_offset_t end,
   boolean_t fictitious)
   {

 Modified: head/sys/vm/vm_map.c
 ==

 --- head/sys/vm/vm_map.cSat Jul 26 17:59:25 2014(r269133)
 +++ head/sys/vm/vm_map.cSat Jul 26 18:10:18 2014(r269134)
 @@ -132,6 +132,7 @@ static void _vm_map_init(vm_map_t map, p
   vm_offset_t max);
   static void vm_map_entry_deallocate(vm_map_entry_t entry, boolean_t
 system_map);
   static void vm_map_entry_dispose(vm_map_t map, vm_map_entry_t entry);
 +static void vm_map_entry_unwire(vm_map_t map, vm_map_entry_t entry);
   #ifdef INVARIANTS
   static void vm_map_zdtor(void *mem, int size, void *arg);
   static void vmspace_zdtor(void *mem, int size, void *arg);
 @@ -2393,16 +2394,10 @@ done:
   (entry->eflags & MAP_ENTRY_USER_WIRED))) {
   if (user_unwire)
   entry->eflags &= ~MAP_ENTRY_USER_WIRED;
 -entry->wired_count--;
 -if (entry->wired_count == 0) {
 -/*
 - * Retain the map lock.
 - */
 -vm_fault_unwire(map, entry->start, entry->end,
 -entry->object.vm_object != NULL &&
 -(entry->object.vm_object->flags &
 -OBJ_FICTITIOUS) != 0);
 -}
 +if (entry->wired_count == 1)
 +vm_map_entry_unwire(map, entry);
 +else
 +entry->wired_count--;
   }
   

Re: svn commit: r269134 - head/sys/vm

2014-07-30 Thread Alan Cox
On 07/30/2014 13:58, Andreas Tobler wrote:
 Hi Alan,

 On 26.07.14 20:10, Alan Cox wrote:
 Author: alc
 Date: Sat Jul 26 18:10:18 2014
 New Revision: 269134
 URL: http://svnweb.freebsd.org/changeset/base/269134

 Log:
When unwiring a region of an address space, do not assume that the
underlying physical pages are mapped by the pmap.  If, for
 example, the
application has performed an mprotect(..., PROT_NONE) on any part
 of the
wired region, then those pages will no longer be mapped by the pmap.
So, using the pmap to lookup the wired pages in order to unwire them
doesn't always work, and when it doesn't work wired pages are leaked.

To avoid the leak, introduce and use a new function
 vm_object_unwire()
that locates the wired pages by traversing the object and its backing
objects.

At the same time, switch from using pmap_change_wiring() to the
 recently
introduced function pmap_unwire() for unwiring the region's mappings.
pmap_unwire() is faster, because it operates a range of virtual
 addresses
rather than a single virtual page at a time.  Moreover, by
 operating on
a range, it is superpage friendly.  It doesn't waste time performing
unnecessary demotions.

Reported by:markj
Reviewed by:kib
Tested by:pho, jmg (arm)
Sponsored by:EMC / Isilon Storage Division

 This commit brings my 32- and 64-bit PowerMac's into panic.
 Unfortunately I'm not able to give you a backtrace in the form of a
 textdump nor of a core dump.

 The only thing I have is this picture:

 http://people.freebsd.org/~andreast/r269134_panic.jpg

 Exactly this revision gives a panic and breaks the textdump/coredump
 facility.

 How can I help debugging?


It appears to me that moea64_pvo_enter() had a pre-existing bug that got
tickled by this change.  Specifically, moea64_pvo_enter() doesn't set
the PVO_WIRED flag when an unwired mapping already exists.  It just
returns with the mapping still in an unwired state.  Consequently, when
pmap_unwire() finally runs, it doesn't find a wired mapping.

Try this:

Index: powerpc/aim/mmu_oea64.c
===
--- powerpc/aim/mmu_oea64.c (revision 269127)
+++ powerpc/aim/mmu_oea64.c (working copy)
@@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
(pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC |
LPTE_PP))
-   == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
+   == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
+   ((pvo->pvo_vaddr ^ flags) & PVO_WIRED)) {
if (!(pvo->pvo_pte.lpte.pte_hi &
LPTE_VALID)) {
/* Re-insert if spilled */
i = MOEA64_PTE_INSERT(mmu, ptegidx,



 Modified:
head/sys/vm/vm_extern.h
head/sys/vm/vm_fault.c
head/sys/vm/vm_map.c
head/sys/vm/vm_object.c
head/sys/vm/vm_object.h

 Modified: head/sys/vm/vm_extern.h
 ==

 --- head/sys/vm/vm_extern.hSat Jul 26 17:59:25 2014(r269133)
 +++ head/sys/vm/vm_extern.hSat Jul 26 18:10:18 2014(r269134)
 @@ -81,7 +81,6 @@ int vm_fault_hold(vm_map_t map, vm_offse
   int fault_flags, vm_page_t *m_hold);
   int vm_fault_quick_hold_pages(vm_map_t map, vm_offset_t addr,
 vm_size_t len,
   vm_prot_t prot, vm_page_t *ma, int max_count);
 -void vm_fault_unwire(vm_map_t, vm_offset_t, vm_offset_t, boolean_t);
   int vm_fault_wire(vm_map_t, vm_offset_t, vm_offset_t, boolean_t);
   int vm_forkproc(struct thread *, struct proc *, struct thread *,
 struct vmspace *, int);
   void vm_waitproc(struct proc *);

 Modified: head/sys/vm/vm_fault.c
 ==

 --- head/sys/vm/vm_fault.cSat Jul 26 17:59:25 2014(r269133)
 +++ head/sys/vm/vm_fault.cSat Jul 26 18:10:18 2014(r269134)
 @@ -106,6 +106,7 @@ __FBSDID("$FreeBSD$");
   #define PFFOR 4

   static int vm_fault_additional_pages(vm_page_t, int, int, vm_page_t
 *, int *);
 +static void vm_fault_unwire(vm_map_t, vm_offset_t, vm_offset_t,
 boolean_t);

   #define	VM_FAULT_READ_BEHIND	8
   #define	VM_FAULT_READ_MAX	(1 + VM_FAULT_READ_AHEAD_MAX)
 @@ -1186,7 +1187,7 @@ vm_fault_wire(vm_map_t map, vm_offset_t
*
*Unwire a range of virtual addresses in a map.
*/
 -void
 +static void
   vm_fault_unwire(vm_map_t map, vm_offset_t start, vm_offset_t end,
   boolean_t fictitious)
   {

 Modified: head/sys/vm/vm_map.c
 ==

 --- head/sys/vm/vm_map.cSat Jul 26 17:59:25 2014(r269133)
 +++ head/sys/vm/vm_map.cSat Jul 26 18:10:18 2014

Re: svn commit: r269134 - head/sys/vm

2014-07-30 Thread Alan Cox
On 07/30/2014 14:46, Alan Cox wrote:
 On 07/30/2014 13:58, Andreas Tobler wrote:
 Hi Alan,

 On 26.07.14 20:10, Alan Cox wrote:
 Author: alc
 Date: Sat Jul 26 18:10:18 2014
 New Revision: 269134
 URL: http://svnweb.freebsd.org/changeset/base/269134

 Log:
When unwiring a region of an address space, do not assume that the
underlying physical pages are mapped by the pmap.  If, for
 example, the
application has performed an mprotect(..., PROT_NONE) on any part
 of the
wired region, then those pages will no longer be mapped by the pmap.
So, using the pmap to lookup the wired pages in order to unwire them
doesn't always work, and when it doesn't work wired pages are leaked.

To avoid the leak, introduce and use a new function
 vm_object_unwire()
that locates the wired pages by traversing the object and its backing
objects.

At the same time, switch from using pmap_change_wiring() to the
 recently
introduced function pmap_unwire() for unwiring the region's mappings.
pmap_unwire() is faster, because it operates a range of virtual
 addresses
rather than a single virtual page at a time.  Moreover, by
 operating on
a range, it is superpage friendly.  It doesn't waste time performing
unnecessary demotions.

Reported by:markj
Reviewed by:kib
Tested by:pho, jmg (arm)
Sponsored by:EMC / Isilon Storage Division
 This commit brings my 32- and 64-bit PowerMac's into panic.
 Unfortunately I'm not able to give you a backtrace in the form of a
 textdump nor of a core dump.

 The only thing I have is this picture:

 http://people.freebsd.org/~andreast/r269134_panic.jpg

 Exactly this revision gives a panic and breaks the textdump/coredump
 facility.

 How can I help debugging?

 It appears to me that moea64_pvo_enter() had a pre-existing bug that got
 tickled by this change.  Specifically, moea64_pvo_enter() doesn't set
 the PVO_WIRED flag when an unwired mapping already exists.  It just
 returns with the mapping still in an unwired state.  Consequently, when
 pmap_unwire() finally runs, it doesn't find a wired mapping.

 Try this:

 Index: powerpc/aim/mmu_oea64.c
 ===
 --- powerpc/aim/mmu_oea64.c (revision 269127)
 +++ powerpc/aim/mmu_oea64.c (working copy)
 @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
 if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
 if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
 (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC |
 LPTE_PP))
 -   == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
 +   == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
 +   ((pvo->pvo_vaddr ^ flags) & PVO_WIRED)) {
 if (!(pvo->pvo_pte.lpte.pte_hi &
 LPTE_VALID)) {
 /* Re-insert if spilled */
 i = MOEA64_PTE_INSERT(mmu, ptegidx,


The new conditional test needs to be inverted.  Try this instead:

Index: powerpc/aim/mmu_oea64.c
===
--- powerpc/aim/mmu_oea64.c (revision 269127)
+++ powerpc/aim/mmu_oea64.c (working copy)
@@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
(pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC |
LPTE_PP))
-   == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
+   == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
+   ((pvo->pvo_vaddr ^ flags) & PVO_WIRED) == 0) {
if (!(pvo->pvo_pte.lpte.pte_hi &
LPTE_VALID)) {
/* Re-insert if spilled */
i = MOEA64_PTE_INSERT(mmu, ptegidx,

 Modified:
head/sys/vm/vm_extern.h
head/sys/vm/vm_fault.c
head/sys/vm/vm_map.c
head/sys/vm/vm_object.c
head/sys/vm/vm_object.h

 Modified: head/sys/vm/vm_extern.h
 ==

 --- head/sys/vm/vm_extern.hSat Jul 26 17:59:25 2014(r269133)
 +++ head/sys/vm/vm_extern.hSat Jul 26 18:10:18 2014(r269134)
 @@ -81,7 +81,6 @@ int vm_fault_hold(vm_map_t map, vm_offse
   int fault_flags, vm_page_t *m_hold);
   int vm_fault_quick_hold_pages(vm_map_t map, vm_offset_t addr,
 vm_size_t len,
   vm_prot_t prot, vm_page_t *ma, int max_count);
 -void vm_fault_unwire(vm_map_t, vm_offset_t, vm_offset_t, boolean_t);
   int vm_fault_wire(vm_map_t, vm_offset_t, vm_offset_t, boolean_t);
   int vm_forkproc(struct thread *, struct proc *, struct thread *,
 struct vmspace *, int);
   void vm_waitproc(struct proc *);

 Modified: head/sys/vm/vm_fault.c
 

Re: svn commit: r269134 - head/sys/vm

2014-07-30 Thread Andreas Tobler

On 30.07.14 21:54, Alan Cox wrote:

On 07/30/2014 14:46, Alan Cox wrote:

On 07/30/2014 13:58, Andreas Tobler wrote:

Hi Alan,

On 26.07.14 20:10, Alan Cox wrote:

Author: alc
Date: Sat Jul 26 18:10:18 2014
New Revision: 269134
URL: http://svnweb.freebsd.org/changeset/base/269134

Log:
When unwiring a region of an address space, do not assume that the
underlying physical pages are mapped by the pmap.  If, for
example, the
application has performed an mprotect(..., PROT_NONE) on any part
of the
wired region, then those pages will no longer be mapped by the pmap.
So, using the pmap to lookup the wired pages in order to unwire them
doesn't always work, and when it doesn't work wired pages are leaked.

To avoid the leak, introduce and use a new function
vm_object_unwire()
that locates the wired pages by traversing the object and its backing
objects.

At the same time, switch from using pmap_change_wiring() to the
recently
introduced function pmap_unwire() for unwiring the region's mappings.
pmap_unwire() is faster, because it operates a range of virtual
addresses
rather than a single virtual page at a time.  Moreover, by
operating on
a range, it is superpage friendly.  It doesn't waste time performing
unnecessary demotions.

Reported by:markj
Reviewed by:kib
Tested by:pho, jmg (arm)
Sponsored by:EMC / Isilon Storage Division

This commit brings my 32- and 64-bit PowerMac's into panic.
Unfortunately I'm not able to give you a backtrace in the form of a
textdump nor of a core dump.

The only thing I have is this picture:

http://people.freebsd.org/~andreast/r269134_panic.jpg

Exactly this revision gives a panic and breaks the textdump/coredump
facility.

How can I help debugging?


It appears to me that moea64_pvo_enter() had a pre-existing bug that got
tickled by this change.  Specifically, moea64_pvo_enter() doesn't set
the PVO_WIRED flag when an unwired mapping already exists.  It just
returns with the mapping still in an unwired state.  Consequently, when
pmap_unwire() finally runs, it doesn't find a wired mapping.

Try this:

Index: powerpc/aim/mmu_oea64.c
===
--- powerpc/aim/mmu_oea64.c (revision 269127)
+++ powerpc/aim/mmu_oea64.c (working copy)
@@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
 if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
 if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
 (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC |
LPTE_PP))
-   == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
+   == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
+   ((pvo->pvo_vaddr ^ flags) & PVO_WIRED)) {
 if (!(pvo->pvo_pte.lpte.pte_hi &
LPTE_VALID)) {
 /* Re-insert if spilled */
 i = MOEA64_PTE_INSERT(mmu, ptegidx,



The new conditional test needs to be inverted.  Try this instead:

Index: powerpc/aim/mmu_oea64.c
===
--- powerpc/aim/mmu_oea64.c (revision 269127)
+++ powerpc/aim/mmu_oea64.c (working copy)
@@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
 if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
 if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
 (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC |
LPTE_PP))
-   == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
+   == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
+   ((pvo->pvo_vaddr ^ flags) & PVO_WIRED) == 0) {
 if (!(pvo->pvo_pte.lpte.pte_hi &
LPTE_VALID)) {
 /* Re-insert if spilled */
 i = MOEA64_PTE_INSERT(mmu, ptegidx,




The panic stays, but the message is different:

panic: moea64_pvo_to_pte: pvo 0x10147ea0 has invalid pte 0xb341180 in 
moea64_pteg_table but valid in pvo.


Andreas



Re: svn commit: r269134 - head/sys/vm

2014-07-30 Thread Alan Cox
On 07/30/2014 15:15, Andreas Tobler wrote:
 On 30.07.14 21:54, Alan Cox wrote:
 On 07/30/2014 14:46, Alan Cox wrote:
 On 07/30/2014 13:58, Andreas Tobler wrote:
 Hi Alan,

 On 26.07.14 20:10, Alan Cox wrote:
 Author: alc
 Date: Sat Jul 26 18:10:18 2014
 New Revision: 269134
 URL: http://svnweb.freebsd.org/changeset/base/269134

 Log:
 When unwiring a region of an address space, do not assume that
 the
 underlying physical pages are mapped by the pmap.  If, for
 example, the
 application has performed an mprotect(..., PROT_NONE) on any part
 of the
 wired region, then those pages will no longer be mapped by the
 pmap.
 So, using the pmap to lookup the wired pages in order to
 unwire them
 doesn't always work, and when it doesn't work wired pages are
 leaked.

 To avoid the leak, introduce and use a new function
 vm_object_unwire()
 that locates the wired pages by traversing the object and its
 backing
 objects.

 At the same time, switch from using pmap_change_wiring() to the
 recently
 introduced function pmap_unwire() for unwiring the region's
 mappings.
 pmap_unwire() is faster, because it operates a range of virtual
 addresses
 rather than a single virtual page at a time.  Moreover, by
 operating on
 a range, it is superpage friendly.  It doesn't waste time
 performing
 unnecessary demotions.

 Reported by:markj
 Reviewed by:kib
 Tested by:pho, jmg (arm)
 Sponsored by:EMC / Isilon Storage Division
 This commit brings my 32- and 64-bit PowerMac's into panic.
 Unfortunately I'm not able to give you a backtrace in the form of a
 textdump nor of a core dump.

 The only thing I have is this picture:

 http://people.freebsd.org/~andreast/r269134_panic.jpg

 Exactly this revision gives a panic and breaks the textdump/coredump
 facility.

 How can I help debugging?

 It appears to me that moea64_pvo_enter() had a pre-existing bug that
 got
 tickled by this change.  Specifically, moea64_pvo_enter() doesn't set
 the PVO_WIRED flag when an unwired mapping already exists.  It just
 returns with the mapping still in an unwired state.  Consequently, when
 pmap_unwire() finally runs, it doesn't find a wired mapping.

 Try this:

 Index: powerpc/aim/mmu_oea64.c
 ===
 --- powerpc/aim/mmu_oea64.c (revision 269127)
 +++ powerpc/aim/mmu_oea64.c (working copy)
 @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
  if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
  if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN)
 == pa &&
  (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC |
 LPTE_PP))
 -   == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
 +   == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
 +   ((pvo->pvo_vaddr ^ flags) & PVO_WIRED)) {
  if (!(pvo->pvo_pte.lpte.pte_hi &
 LPTE_VALID)) {
  /* Re-insert if spilled */
  i = MOEA64_PTE_INSERT(mmu,
 ptegidx,


 The new conditional test needs to be inverted.  Try this instead:

 Index: powerpc/aim/mmu_oea64.c
 ===
 --- powerpc/aim/mmu_oea64.c (revision 269127)
 +++ powerpc/aim/mmu_oea64.c (working copy)
 @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
  if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
  if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN)
 == pa &&
  (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC |
 LPTE_PP))
 -   == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
 +   == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
 +   ((pvo->pvo_vaddr ^ flags) & PVO_WIRED) ==
 0) {
  if (!(pvo->pvo_pte.lpte.pte_hi &
 LPTE_VALID)) {
  /* Re-insert if spilled */
  i = MOEA64_PTE_INSERT(mmu,
 ptegidx,



 The panic stays, but the message is different:

 panic: moea64_pvo_to_pte: pvo 0x10147ea0 has invalid pte 0xb341180 in
 moea64_pteg_table but valid in pvo.


My attempted fix is doing something else wrong.  Do you have a stack trace?




Re: svn commit: r269134 - head/sys/vm

2014-07-30 Thread Andreas Tobler

On 30.07.14 23:17, Alan Cox wrote:

On 07/30/2014 15:15, Andreas Tobler wrote:

On 30.07.14 21:54, Alan Cox wrote:

On 07/30/2014 14:46, Alan Cox wrote:

On 07/30/2014 13:58, Andreas Tobler wrote:

Hi Alan,

On 26.07.14 20:10, Alan Cox wrote:

Author: alc
Date: Sat Jul 26 18:10:18 2014
New Revision: 269134
URL: http://svnweb.freebsd.org/changeset/base/269134

Log:
 When unwiring a region of an address space, do not assume that
the
 underlying physical pages are mapped by the pmap.  If, for
example, the
 application has performed an mprotect(..., PROT_NONE) on any part
of the
 wired region, then those pages will no longer be mapped by the
pmap.
 So, using the pmap to lookup the wired pages in order to
unwire them
 doesn't always work, and when it doesn't work wired pages are
leaked.

 To avoid the leak, introduce and use a new function
vm_object_unwire()
 that locates the wired pages by traversing the object and its
backing
 objects.

 At the same time, switch from using pmap_change_wiring() to the
recently
 introduced function pmap_unwire() for unwiring the region's
mappings.
 pmap_unwire() is faster, because it operates a range of virtual
addresses
 rather than a single virtual page at a time.  Moreover, by
operating on
 a range, it is superpage friendly.  It doesn't waste time
performing
 unnecessary demotions.

 Reported by:markj
 Reviewed by:kib
 Tested by:pho, jmg (arm)
 Sponsored by:EMC / Isilon Storage Division

This commit brings my 32- and 64-bit PowerMac's into panic.
Unfortunately I'm not able to give you a backtrace in the form of a
textdump nor of a core dump.

The only thing I have is this picture:

http://people.freebsd.org/~andreast/r269134_panic.jpg

Exactly this revision gives a panic and breaks the textdump/coredump
facility.

How can I help debugging?


It appears to me that moea64_pvo_enter() had a pre-existing bug that
got
tickled by this change.  Specifically, moea64_pvo_enter() doesn't set
the PVO_WIRED flag when an unwired mapping already exists.  It just
returns with the mapping still in an unwired state.  Consequently, when
pmap_unwire() finally runs, it doesn't find a wired mapping.

Try this:

Index: powerpc/aim/mmu_oea64.c
===
--- powerpc/aim/mmu_oea64.c (revision 269127)
+++ powerpc/aim/mmu_oea64.c (working copy)
@@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
  if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
  if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN)
== pa &&
  (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC |
LPTE_PP))
-   == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
+   == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
+   ((pvo->pvo_vaddr ^ flags) & PVO_WIRED)) {
  if (!(pvo->pvo_pte.lpte.pte_hi &
LPTE_VALID)) {
  /* Re-insert if spilled */
  i = MOEA64_PTE_INSERT(mmu,
ptegidx,



The new conditional test needs to be inverted.  Try this instead:

Index: powerpc/aim/mmu_oea64.c
===
--- powerpc/aim/mmu_oea64.c (revision 269127)
+++ powerpc/aim/mmu_oea64.c (working copy)
@@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
  if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
  if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN)
== pa &&
  (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC |
LPTE_PP))
-   == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
+   == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
+   ((pvo->pvo_vaddr ^ flags) & PVO_WIRED) ==
0) {
  if (!(pvo->pvo_pte.lpte.pte_hi &
LPTE_VALID)) {
  /* Re-insert if spilled */
  i = MOEA64_PTE_INSERT(mmu,
ptegidx,




The panic stays, but the message is different:

panic: moea64_pvo_to_pte: pvo 0x10147ea0 has invalid pte 0xb341180 in
moea64_pteg_table but valid in pvo.



My attempted fix is doing something else wrong.  Do you have a stack trace?


Thanks to the iPhone:

http://people.freebsd.org/~andreast/r269134-1_panic.jpg

Thanks!
Andreas




Re: svn commit: r269134 - head/sys/vm

2014-07-30 Thread Alan Cox
On 07/30/2014 16:26, Andreas Tobler wrote:
 On 30.07.14 23:17, Alan Cox wrote:
 On 07/30/2014 15:15, Andreas Tobler wrote:
 On 30.07.14 21:54, Alan Cox wrote:
 On 07/30/2014 14:46, Alan Cox wrote:
 On 07/30/2014 13:58, Andreas Tobler wrote:
 Hi Alan,

 On 26.07.14 20:10, Alan Cox wrote:
 Author: alc
 Date: Sat Jul 26 18:10:18 2014
 New Revision: 269134
 URL: http://svnweb.freebsd.org/changeset/base/269134

 Log:
  When unwiring a region of an address space, do not assume that
 the
  underlying physical pages are mapped by the pmap.  If, for
 example, the
  application has performed an mprotect(..., PROT_NONE) on
 any part
 of the
  wired region, then those pages will no longer be mapped by the
 pmap.
  So, using the pmap to lookup the wired pages in order to
 unwire them
  doesn't always work, and when it doesn't work wired pages are
 leaked.

  To avoid the leak, introduce and use a new function
 vm_object_unwire()
  that locates the wired pages by traversing the object and its
 backing
  objects.

  At the same time, switch from using pmap_change_wiring() to
 the
 recently
  introduced function pmap_unwire() for unwiring the region's
 mappings.
  pmap_unwire() is faster, because it operates a range of
 virtual
 addresses
  rather than a single virtual page at a time.  Moreover, by
 operating on
  a range, it is superpage friendly.  It doesn't waste time
 performing
  unnecessary demotions.

  Reported by:markj
  Reviewed by:kib
  Tested by:pho, jmg (arm)
  Sponsored by:EMC / Isilon Storage Division
 This commit brings my 32- and 64-bit PowerMac's into panic.
 Unfortunately I'm not able to give you a backtrace in the form of a
 textdump nor of a core dump.

 The only thing I have is this picture:

 http://people.freebsd.org/~andreast/r269134_panic.jpg

 Exactly this revision gives a panic and breaks the textdump/coredump
 facility.

 How can I help debugging?

 It appears to me that moea64_pvo_enter() had a pre-existing bug that
 got
 tickled by this change.  Specifically, moea64_pvo_enter() doesn't set
 the PVO_WIRED flag when an unwired mapping already exists.  It just
 returns with the mapping still in an unwired state.  Consequently,
 when
 pmap_unwire() finally runs, it doesn't find a wired mapping.

 Try this:

 Index: powerpc/aim/mmu_oea64.c
 ===
 --- powerpc/aim/mmu_oea64.c (revision 269127)
 +++ powerpc/aim/mmu_oea64.c (working copy)
 @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm,
 uma_zone_t
   if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
   if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN)
 == pa &&
   (pvo->pvo_pte.lpte.pte_lo &
 (LPTE_NOEXEC |
 LPTE_PP))
 -   == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
 +   == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
 +   ((pvo->pvo_vaddr ^ flags) & PVO_WIRED)) {
   if (!(pvo->pvo_pte.lpte.pte_hi &
 LPTE_VALID)) {
   /* Re-insert if spilled */
   i = MOEA64_PTE_INSERT(mmu,
 ptegidx,


 The new conditional test needs to be inverted.  Try this instead:

 Index: powerpc/aim/mmu_oea64.c
 ===
 --- powerpc/aim/mmu_oea64.c (revision 269127)
 +++ powerpc/aim/mmu_oea64.c (working copy)
 @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm,
 uma_zone_t
   if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
   if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN)
 == pa &&
   (pvo->pvo_pte.lpte.pte_lo &
 (LPTE_NOEXEC |
 LPTE_PP))
 -   == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
 +   == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
 +   ((pvo->pvo_vaddr ^ flags) & PVO_WIRED) ==
 0) {
   if (!(pvo->pvo_pte.lpte.pte_hi &
 LPTE_VALID)) {
   /* Re-insert if spilled */
   i = MOEA64_PTE_INSERT(mmu,
 ptegidx,



 The panic stays, but the message is different:

 panic: moea64_pvo_to_pte: pvo 0x10147ea0 has invalid pte 0xb341180 in
 moea64_pteg_table but valid in pvo.


 My attempted fix is doing something else wrong.  Do you have a stack
 trace?

 iPhone sei Dank:

 http://people.freebsd.org/~andreast/r269134-1_panic.jpg

Ok, this patch should fix both the original panic and the new one.  They
are two distinct problems.




Index: powerpc/aim/mmu_oea64.c
===
--- powerpc/aim/mmu_oea64.c (revision 269127)
+++ powerpc/aim/mmu_oea64.c (working copy)
@@ -1090,6 +1090,7 @@ moea64_unwire(mmu_t mmu, pmap_t pm, vm_offset_t sv
  

Re: svn commit: r269134 - head/sys/vm

2014-07-29 Thread Slawa Olhovchenkov
On Sat, Jul 26, 2014 at 06:10:18PM +0000, Alan Cox wrote:

 Author: alc
 Date: Sat Jul 26 18:10:18 2014
 New Revision: 269134
 URL: http://svnweb.freebsd.org/changeset/base/269134
 
 Log:
   When unwiring a region of an address space, do not assume that the
   underlying physical pages are mapped by the pmap.  If, for example, the
   application has performed an mprotect(..., PROT_NONE) on any part of the
   wired region, then those pages will no longer be mapped by the pmap.
   So, using the pmap to lookup the wired pages in order to unwire them
   doesn't always work, and when it doesn't work wired pages are leaked.
   
   To avoid the leak, introduce and use a new function vm_object_unwire()
   that locates the wired pages by traversing the object and its backing
   objects.

MFC planned?

   At the same time, switch from using pmap_change_wiring() to the recently
   introduced function pmap_unwire() for unwiring the region's mappings.
   pmap_unwire() is faster, because it operates a range of virtual addresses
   rather than a single virtual page at a time.  Moreover, by operating on
   a range, it is superpage friendly.  It doesn't waste time performing
   unnecessary demotions.
   
   Reported by:markj
   Reviewed by:kib
   Tested by:  pho, jmg (arm)
   Sponsored by:   EMC / Isilon Storage Division