Re: PV i386 patch
On Thu, 2011-12-29 at 14:52 -0800, Alan Cox wrote:
> On 12/29/2011 16:28, Sean Bruno wrote:
> > On Thu, 2011-12-29 at 12:22 -0800, Alan Cox wrote:
> > > Please try this patch.  It eliminates a race condition that might
> > > actually account for some of the crashes in FreeBSD >= 9 on Xen.
> > >
> > > Alan
> >
> > ref10-xen32.freebsd.org has this applied now.  Looks ok to me?
>
> I know that lmbench's bw_pipe program exercises the code path that I
> changed, if you want to give it a bit more testing.
>
> Alan

Ok, I ran a thing ... I have a lot of poking around to do in lmbench now.
Thanks for pointing me at this.

Cursory run with the xen-pmap.c patch applied:

[sbruno@ref10-xen32 /usr/local/lib/lmbench/bin/i386-freebsd10.0]$ ./bw_pipe
Pipe bandwidth: 892.52 MB/sec
[sbruno@ref10-xen32 /usr/local/lib/lmbench/bin/i386-freebsd10.0]$ ./bw_pipe -N 5
Pipe bandwidth: 982.46 MB/sec
[sbruno@ref10-xen32 /usr/local/lib/lmbench/bin/i386-freebsd10.0]$ ./bw_pipe -N 5 -P 2
Pipe bandwidth: 899.34 MB/sec

Cursory run of vanilla -current:

[sbruno@ref10-xen32 /usr/local/lib/lmbench/bin/i386-freebsd10.0]$ ./bw_pipe
Pipe bandwidth: 984.37 MB/sec
[sbruno@ref10-xen32 /usr/local/lib/lmbench/bin/i386-freebsd10.0]$ ./bw_pipe -N 5
Pipe bandwidth: 977.54 MB/sec
[sbruno@ref10-xen32 /usr/local/lib/lmbench/bin/i386-freebsd10.0]$ ./bw_pipe -N 5 -P 2
Pipe bandwidth: 887.26 MB/sec
[sbruno@ref10-xen32 /usr/local/lib/lmbench/bin/i386-freebsd10.0]$ uname -a
FreeBSD ref10-xen32.freebsd.org 10.0-CURRENT FreeBSD 10.0-CURRENT #1 r228971: Fri Dec 30 18:27:01 UTC 2011     sbr...@ref10-xen32.freebsd.org:/var/tmp/dumpster/scratch/sbruno-scratch/head/sys/XEN  i386

_______________________________________________
freebsd-xen@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-xen
To unsubscribe, send any mail to "freebsd-xen-unsubscr...@freebsd.org"
Re: PV i386 patch
Please try this patch.  It eliminates a race condition that might
actually account for some of the crashes in FreeBSD >= 9 on Xen.

Alan

Index: i386/xen/pmap.c
===================================================================
--- i386/xen/pmap.c	(revision 228935)
+++ i386/xen/pmap.c	(working copy)
@@ -1122,7 +1122,7 @@
 vm_page_t
 pmap_extract_and_hold(pmap_t pmap, vm_offset_t va, vm_prot_t prot)
 {
 	pd_entry_t pde;
-	pt_entry_t pte;
+	pt_entry_t pte, *ptep;
 	vm_page_t m;
 	vm_paddr_t pa;
 
@@ -1142,21 +1142,17 @@ retry:
 			vm_page_hold(m);
 		}
 	} else {
-		sched_pin();
-		pte = PT_GET(pmap_pte_quick(pmap, va));
-		if (*PMAP1)
-			PT_SET_MA(PADDR1, 0);
-		if ((pte & PG_V) &&
+		ptep = pmap_pte(pmap, va);
+		pte = PT_GET(ptep);
+		pmap_pte_release(ptep);
+		if (pte != 0 &&
 		    ((pte & PG_RW) || (prot & VM_PROT_WRITE) == 0)) {
 			if (vm_page_pa_tryrelock(pmap, pte & PG_FRAME,
-			    &pa)) {
-				sched_unpin();
+			    &pa))
 				goto retry;
-			}
 			m = PHYS_TO_VM_PAGE(pte & PG_FRAME);
 			vm_page_hold(m);
 		}
-		sched_unpin();
 	}
 	PA_UNLOCK_COND(pa);
@@ -2316,6 +2312,8 @@ pmap_remove(pmap_t pmap, vm_offset_t sva, vm_offse
 		 * Calculate index for next page table.
 		 */
 		pdnxt = (sva + NBPDR) & ~PDRMASK;
+		if (pdnxt < sva)
+			pdnxt = eva;
 		if (pmap->pm_stats.resident_count == 0)
 			break;
 
@@ -2471,6 +2469,8 @@ pmap_protect(pmap_t pmap, vm_offset_t sva, vm_offs
 		u_int pdirindex;
 
 		pdnxt = (sva + NBPDR) & ~PDRMASK;
+		if (pdnxt < sva)
+			pdnxt = eva;
 
 		pdirindex = sva >> PDRSHIFT;
 		ptpaddr = pmap->pm_pdir[pdirindex];
@@ -3172,6 +3172,8 @@ pmap_copy(pmap_t dst_pmap, pmap_t src_pmap, vm_off
 		    ("pmap_copy: invalid to pmap_copy page tables"));
 
 		pdnxt = (addr + NBPDR) & ~PDRMASK;
+		if (pdnxt < addr)
+			pdnxt = end_addr;
 		ptepindex = addr >> PDRSHIFT;
 
 		srcptepaddr = PT_GET(src_pmap->pm_pdir[ptepindex]);
Re: PV i386 patch
On Thu, 2011-12-29 at 12:22 -0800, Alan Cox wrote:
> Please try this patch.  It eliminates a race condition that might
> actually account for some of the crashes in FreeBSD >= 9 on Xen.
>
> Alan

ref10-xen32.freebsd.org has this applied now.  Looks ok to me?

Sean
Re: PV i386 patch
On Tue, 2011-12-27 at 22:14 -0800, Adrian Chadd wrote:
> On 27 December 2011 15:24, Sean Bruno <sean...@yahoo-inc.com> wrote:
> > Initial testing looks ok from here.  Single CPU PV DomU is up and
> > running as ref10-xen32.f.o if you want to poke around at all.  I'm
> > updating the HVM enabled ref10-xen64.f.o as well to check it out.
>
> Since I don't yet have my test environment going here, is anyone here
> running (developer-)accessible PVM hosts (32 bit) that I can get access
> to?  I can run a whole slew of thrashing tests on it to see if it
> breaks.
>
> Thanks,
> Adrian

Yes.  I've been keeping a linux dom0 running in the fbsd cluster.  Go
ahead and poke at ref10-xen32.f.o ... that's what it's there for.

Sean
Re: PV i386 patch
Dear developers,

I am happy to hear that PV i386 is evolving.  I am using releng_8_2 right
now, and would like to know if some of these patches will ever get merged
to that tree.

Thanks in advance,

Kojedzinszky Richard
Euronet Magyarorszag Informatikai Zrt.

On Wed, 28 Dec 2011, Sean Bruno wrote:
> Date: Wed, 28 Dec 2011 04:47:40 -0800
> From: Sean Bruno <sean...@yahoo-inc.com>
> To: Adrian Chadd <adr...@freebsd.org>
> Cc: "x...@freebsd.org" <x...@freebsd.org>, Alan Cox <a...@rice.edu>
> Subject: Re: PV i386 patch
>
> On Tue, 2011-12-27 at 22:14 -0800, Adrian Chadd wrote:
> > On 27 December 2011 15:24, Sean Bruno <sean...@yahoo-inc.com> wrote:
> > > Initial testing looks ok from here.  Single CPU PV DomU is up and
> > > running as ref10-xen32.f.o if you want to poke around at all.  I'm
> > > updating the HVM enabled ref10-xen64.f.o as well to check it out.
> >
> > Since I don't yet have my test environment going here, is anyone here
> > running (developer-)accessible PVM hosts (32 bit) that I can get
> > access to?  I can run a whole slew of thrashing tests on it to see if
> > it breaks.
> >
> > Thanks,
> > Adrian
>
> Yes.  I've been keeping a linux dom0 running in the fbsd cluster.  Go
> ahead and poke at ref10-xen32.f.o ... that's what it's there for.
>
> Sean
Re: PV i386 patch
On Tue, 2011-12-27 at 09:40 -0800, Alan Cox wrote:
> On 12/23/2011 16:25, Sean Bruno wrote:
> > On Wed, 2011-12-21 at 12:47 -0800, Alan Cox wrote:
> > > Can you please try the attached patch?  I'm trying to reduce the
> > > number of differences between the native and Xen pmap
> > > implementations.
> > >
> > > Alan
> >
> > Without really looking at the output, I note that this didn't apply
> > cleanly against -head ... can you regenerate it?
>
> My bad.  I gave you the wrong patch.  Try this instead.
>
> Alan

Initial testing looks ok from here.  Single CPU PV DomU is up and running
as ref10-xen32.f.o if you want to poke around at all.  I'm updating the
HVM enabled ref10-xen64.f.o as well to check it out.

Sean
Re: PV i386 patch
On 27 December 2011 15:24, Sean Bruno <sean...@yahoo-inc.com> wrote:
> Initial testing looks ok from here.  Single CPU PV DomU is up and
> running as ref10-xen32.f.o if you want to poke around at all.  I'm
> updating the HVM enabled ref10-xen64.f.o as well to check it out.

Since I don't yet have my test environment going here, is anyone here
running (developer-)accessible PVM hosts (32 bit) that I can get access
to?  I can run a whole slew of thrashing tests on it to see if it breaks.

Thanks,

Adrian
Re: PV i386 patch
On 21 December 2011 12:47, Alan Cox <a...@rice.edu> wrote:
> Can you please try the attached patch?  I'm trying to reduce the number
> of differences between the native and Xen pmap implementations.

Hi,

When I last tinkered with Xen, I noticed that it was _very_ easy to end
up with FS corruption just by doing a whole lot of parallel software
builds (in my case, squid).  This happened with single-CPU VMs too.

I wonder if this would be fixed by your work...

Adrian
Re: PV i386 patch
On Wed, 2011-12-21 at 12:47 -0800, Alan Cox wrote:
> Can you please try the attached patch?  I'm trying to reduce the number
> of differences between the native and Xen pmap implementations.
>
> Alan

I will test this today.  Should this apply against both 9 and -current?

Sean
Re: PV i386 patch
On Tue, 2011-12-20 at 10:49 -0800, Alan Cox wrote:
> On 12/20/2011 07:28, Sean Bruno wrote:
> > > The code that panics shouldn't even exist in the Xen pmap.  Try the
> > > attached patch.
> > >
> > > Alan
> >
> > Indeed, how on earth did we ever use this stuff? :-)
> >
> > Tested to 2G on ref9-xen32.f.o; should I go any higher?
>
> Sure.  Right now, I don't know of any reason that it should crash with
> more memory. :-)
>
> Do either of you know if there is a PR in gnats for this 768 MB
> limitation bug that I should mention in the commit log?
>
> Alan

No, I just checked and I didn't put a PR in ... lame.  I think I
discussed it with colin or kip, but I don't remember if we ever figured
out anything else.

Sean
Re: PV i386 patch
On 12/20/2011 13:57, Sean Bruno wrote:
> On Tue, 2011-12-20 at 10:49 -0800, Alan Cox wrote:
> > On 12/20/2011 07:28, Sean Bruno wrote:
> > > > The code that panics shouldn't even exist in the Xen pmap.  Try
> > > > the attached patch.
> > > >
> > > > Alan
> > >
> > > Indeed, how on earth did we ever use this stuff? :-)
> > >
> > > Tested to 2G on ref9-xen32.f.o; should I go any higher?
> >
> > Sure.  Right now, I don't know of any reason that it should crash
> > with more memory. :-)
> >
> > Do either of you know if there is a PR in gnats for this 768 MB
> > limitation bug that I should mention in the commit log?
> >
> > Alan
>
> No, I just checked and I didn't put a PR in ... lame.  I think I
> discussed it with colin or kip, but I don't remember if we ever figured
> out anything else.

Ok.  I'll commit the patch shortly.  (Note that what I'll commit will
remove another bit of unnecessary code.)

Alan
Re: PV i386 patch
On 12/20/11 10:49, Alan Cox wrote:
> Do either of you know if there is a PR in gnats for this 768 MB
> limitation bug that I should mention in the commit log?

The only one I'm aware of is kern/153789.

-- 
Colin Percival
Security Officer, FreeBSD | freebsd.org | The power to serve
Founder / author, Tarsnap | tarsnap.com | Online backups for the truly paranoid
Re: PV i386 patch
On Sat, 2011-12-17 at 18:01 -0800, Colin Percival wrote:
> On 12/17/11 16:56, Sean Bruno wrote:
> > This seems happy on our ref9 VMs.  I don't suppose this means I can
> > go above 768M of Ram now?
>
> Can't hurt to try... whatever the problem is with our code and large
> amounts of RAM, the fact that it's an insta-panic during paging setup
> suggests that it's something at a similar level of fail.

Nope, insta panic ... early though.  768M works, 1024M panics.

[root@xen1 sbruno]# /usr/sbin/xm create -c ref9-xen32
Using config file "/etc/xen/ref9-xen32".
Started domain ref9-xen32 (id=109)
WARNING: loader(8) metadata is missing!
GDB: no debug ports present
KDB: debugger backends: ddb
KDB: current backend: ddb
Copyright (c) 1992-2011 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
	The Regents of the University of California.  All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 9.0-PRERELEASE #0: Sat Dec 17 16:13:02 PST 2011
    sbr...@ref9-xen32.freebsd.org:/dumpster/scratch/sbruno-scratch/9/sys/i386/compile/XEN i386
WARNING: WITNESS option enabled, expect reduced performance.
panic: pmap_init: page table page is out of range
cpuid = 0
KDB: enter: panic
[ thread pid 0 tid 0 ]
Stopped at 0xc0181d7a:  movl    $0,0xc0478174
Re: PV i386 patch
On 12/17/2011 18:56, Sean Bruno wrote:
> On Fri, 2011-12-16 at 11:32 -0800, Alan Cox wrote:
> > Is anyone here actively working on fixing problems with SMP support
> > under PV i386?
> >
> > While doing some other maintenance on the vm_page_alloc() callers in
> > the source tree, I happened to take a look at
> > cpu_initialize_context() in mp_machdep.c.  This function is involved
> > in bringing up the 2nd, 3rd, etc. CPUs on an SMP system.  I spotted a
> > couple obvious errors.  First, the size parameter given to kmem_*()
> > functions is expected to be in terms of bytes and not pages.  Second,
> > I believe that PV i386 requires PAE to be used.  If so, there are out
> > of range accesses to the array m[].
> >
> > Index: i386/xen/mp_machdep.c
> > ===================================================================
> > --- i386/xen/mp_machdep.c	(revision 228561)
> > +++ i386/xen/mp_machdep.c	(working copy)
> > @@ -810,7 +810,7 @@ cpu_initialize_context(unsigned int cpu)
> >  {
> >  	/* vcpu_guest_context_t is too large to allocate on the stack.
> >  	 * Hence we allocate statically and protect it with a lock */
> > -	vm_page_t m[4];
> > +	vm_page_t m[NPGPTD + 2];
> >  	static vcpu_guest_context_t ctxt;
> >  	vm_offset_t boot_stack;
> >  	vm_offset_t newPTD;
> > @@ -831,8 +831,8 @@ cpu_initialize_context(unsigned int cpu)
> >  		pmap_zero_page(m[i]);
> >  	}
> > 
> > -	boot_stack = kmem_alloc_nofault(kernel_map, 1);
> > -	newPTD = kmem_alloc_nofault(kernel_map, NPGPTD);
> > +	boot_stack = kmem_alloc_nofault(kernel_map, PAGE_SIZE);
> > +	newPTD = kmem_alloc_nofault(kernel_map, NPGPTD * PAGE_SIZE);
> >  	ma[0] = VM_PAGE_TO_MACH(m[0])|PG_V;
> > 
> >  #ifdef PAE
> > @@ -854,7 +854,7 @@ cpu_initialize_context(unsigned int cpu)
> >  	    nkpt*sizeof(vm_paddr_t));
> > 
> >  	pmap_qremove(newPTD, 4);
> > -	kmem_free(kernel_map, newPTD, 4);
> > +	kmem_free(kernel_map, newPTD, 4 * PAGE_SIZE);
> >  	/*
> >  	 * map actual idle stack to boot_stack
> >  	 */
>
> This seems happy on our ref9 VMs.  I don't suppose this means I can go
> above 768M of Ram now?

It's not clear to me that this test actually exercised the function that
I changed, which is only executed when you spin up a 2nd, 3rd, etc.
processor, and it runs only on those processors.  Am I missing something?

Alan

> [root@xen1 sbruno]# /usr/sbin/xm create -c ref9-xen32
> Using config file "/etc/xen/ref9-xen32".
> Started domain ref9-xen32 (id=106)
> WARNING: loader(8) metadata is missing!
> GDB: no debug ports present
> KDB: debugger backends: ddb
> KDB: current backend: ddb
> Copyright (c) 1992-2011 The FreeBSD Project.
> Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
> 	The Regents of the University of California.  All rights reserved.
> FreeBSD is a registered trademark of The FreeBSD Foundation.
> FreeBSD 9.0-PRERELEASE #0: Sat Dec 17 16:13:02 PST 2011
>     sbr...@ref9-xen32.freebsd.org:/dumpster/scratch/sbruno-scratch/9/sys/i386/compile/XEN i386
> WARNING: WITNESS option enabled, expect reduced performance.
> Xen reported: 2493.756 MHz processor.
> Timecounter "ixen" frequency 1953125 Hz quality 0
> CPU: Intel(R) Xeon(R) CPU L5420 @ 2.50GHz (2493.76-MHz 686-class CPU)
>   Origin = "GenuineIntel"  Id = 0x10676  Family = 6  Model = 17  Stepping = 6
>   Features=0xbfe3fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
>   Features2=0xce3bd<SSE3,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,DCA,SSE4.1>
>   AMD Features=0x2010<NX,LM>
>   AMD Features2=0x1<LAHF>
> real memory  = 805306368 (768 MB)
> avail memory = 776830976 (740 MB)
> [XEN] IPI cpu=0 irq=128 vector=RESCHEDULE_VECTOR (0)
> [XEN] IPI cpu=0 irq=129 vector=CALL_FUNCTION_VECTOR (1)
> [XEN] xen_rtc_probe: probing Hypervisor RTC clock
> rtc0: <Xen Hypervisor Clock> on motherboard
> [XEN] xen_rtc_attach: attaching Hypervisor RTC clock
> xenstore0: <XenStore> on motherboard
> xc0: <Xen Console> on motherboard
> Event timer "ixen" quality 600
> Timecounters tick every 10.000 msec
> xenbusb_front0: <Xen Frontend Devices> on xenstore0
> [XEN] hypervisor wallclock nudged; nudging TOD.
> xn0: <Virtual Network Interface> at device/vif/0 on xenbusb_front0
> xn0: Ethernet address: 00:16:3e:00:00:03
> xenbusb_back0: <Xen Backend Devices> on xenstore0
> xctrl0: <Xen Control Device> on xenstore0
> xn0: backend features: feature-sg feature-gso-tcp4
> xbd0: 10240MB <Virtual Block Device> at device/vbd/768 on xenbusb_front0
> xbd0: attaching as ad0
> Timecounter "TSC" frequency 2493756000 Hz quality 800
> WARNING: WITNESS option enabled, expect reduced performance.
> Trying to mount root from ufs:/dev/ad0p2 []...
> rtc0: [XEN] xen_rtc_gettime
> rtc0: [XEN] xen_rtc_gettime: wallclock 1313550543 sec; 871707442 nsec
> rtc0: [XEN] xen_rtc_gettime: uptime 10619933 sec; 620343100 nsec
> rtc0: [XEN] xen_rtc_gettime: TOD 1324170477 sec; 492050542 nsec
> Setting hostuuid: 1c127834-ab5a-c2e4-7b24-5ea29d364d9d.
> Setting hostid: 0xdea9fbfd.
> Entropy harvesting: interrupts ethernet point_to_point kickstart.
> Starting file system checks:
Re: PV i386 patch
On Fri, 2011-12-16 at 11:32 -0800, Alan Cox wrote:
> Is anyone here actively working on fixing problems with SMP support
> under PV i386?
>
> While doing some other maintenance on the vm_page_alloc() callers in
> the source tree, I happened to take a look at cpu_initialize_context()
> in mp_machdep.c.  This function is involved in bringing up the 2nd,
> 3rd, etc. CPUs on an SMP system.  I spotted a couple obvious errors.
> First, the size parameter given to kmem_*() functions is expected to be
> in terms of bytes and not pages.  Second, I believe that PV i386
> requires PAE to be used.  If so, there are out of range accesses to the
> array m[].
>
> Index: i386/xen/mp_machdep.c
> ===================================================================
> --- i386/xen/mp_machdep.c	(revision 228561)
> +++ i386/xen/mp_machdep.c	(working copy)
> @@ -810,7 +810,7 @@ cpu_initialize_context(unsigned int cpu)
>  {
>  	/* vcpu_guest_context_t is too large to allocate on the stack.
>  	 * Hence we allocate statically and protect it with a lock */
> -	vm_page_t m[4];
> +	vm_page_t m[NPGPTD + 2];
>  	static vcpu_guest_context_t ctxt;
>  	vm_offset_t boot_stack;
>  	vm_offset_t newPTD;
> @@ -831,8 +831,8 @@ cpu_initialize_context(unsigned int cpu)
>  		pmap_zero_page(m[i]);
>  	}
> 
> -	boot_stack = kmem_alloc_nofault(kernel_map, 1);
> -	newPTD = kmem_alloc_nofault(kernel_map, NPGPTD);
> +	boot_stack = kmem_alloc_nofault(kernel_map, PAGE_SIZE);
> +	newPTD = kmem_alloc_nofault(kernel_map, NPGPTD * PAGE_SIZE);
>  	ma[0] = VM_PAGE_TO_MACH(m[0])|PG_V;
> 
>  #ifdef PAE
> @@ -854,7 +854,7 @@ cpu_initialize_context(unsigned int cpu)
>  	    nkpt*sizeof(vm_paddr_t));
> 
>  	pmap_qremove(newPTD, 4);
> -	kmem_free(kernel_map, newPTD, 4);
> +	kmem_free(kernel_map, newPTD, 4 * PAGE_SIZE);
>  	/*
>  	 * map actual idle stack to boot_stack
>  	 */

This seems happy on our ref9 VMs.  I don't suppose this means I can go
above 768M of Ram now?

[root@xen1 sbruno]# /usr/sbin/xm create -c ref9-xen32
Using config file "/etc/xen/ref9-xen32".
Started domain ref9-xen32 (id=106)
WARNING: loader(8) metadata is missing!
GDB: no debug ports present
KDB: debugger backends: ddb
KDB: current backend: ddb
Copyright (c) 1992-2011 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
	The Regents of the University of California.  All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 9.0-PRERELEASE #0: Sat Dec 17 16:13:02 PST 2011
    sbr...@ref9-xen32.freebsd.org:/dumpster/scratch/sbruno-scratch/9/sys/i386/compile/XEN i386
WARNING: WITNESS option enabled, expect reduced performance.
Xen reported: 2493.756 MHz processor.
Timecounter "ixen" frequency 1953125 Hz quality 0
CPU: Intel(R) Xeon(R) CPU L5420 @ 2.50GHz (2493.76-MHz 686-class CPU)
  Origin = "GenuineIntel"  Id = 0x10676  Family = 6  Model = 17  Stepping = 6
  Features=0xbfe3fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0xce3bd<SSE3,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,DCA,SSE4.1>
  AMD Features=0x2010<NX,LM>
  AMD Features2=0x1<LAHF>
real memory  = 805306368 (768 MB)
avail memory = 776830976 (740 MB)
[XEN] IPI cpu=0 irq=128 vector=RESCHEDULE_VECTOR (0)
[XEN] IPI cpu=0 irq=129 vector=CALL_FUNCTION_VECTOR (1)
[XEN] xen_rtc_probe: probing Hypervisor RTC clock
rtc0: <Xen Hypervisor Clock> on motherboard
[XEN] xen_rtc_attach: attaching Hypervisor RTC clock
xenstore0: <XenStore> on motherboard
xc0: <Xen Console> on motherboard
Event timer "ixen" quality 600
Timecounters tick every 10.000 msec
xenbusb_front0: <Xen Frontend Devices> on xenstore0
[XEN] hypervisor wallclock nudged; nudging TOD.
xn0: <Virtual Network Interface> at device/vif/0 on xenbusb_front0
xn0: Ethernet address: 00:16:3e:00:00:03
xenbusb_back0: <Xen Backend Devices> on xenstore0
xctrl0: <Xen Control Device> on xenstore0
xn0: backend features: feature-sg feature-gso-tcp4
xbd0: 10240MB <Virtual Block Device> at device/vbd/768 on xenbusb_front0
xbd0: attaching as ad0
Timecounter "TSC" frequency 2493756000 Hz quality 800
WARNING: WITNESS option enabled, expect reduced performance.
Trying to mount root from ufs:/dev/ad0p2 []...
rtc0: [XEN] xen_rtc_gettime
rtc0: [XEN] xen_rtc_gettime: wallclock 1313550543 sec; 871707442 nsec
rtc0: [XEN] xen_rtc_gettime: uptime 10619933 sec; 620343100 nsec
rtc0: [XEN] xen_rtc_gettime: TOD 1324170477 sec; 492050542 nsec
Setting hostuuid: 1c127834-ab5a-c2e4-7b24-5ea29d364d9d.
Setting hostid: 0xdea9fbfd.
Entropy harvesting: interrupts ethernet point_to_point kickstart.
Starting file system checks:
/dev/ad0p2: FILE SYSTEM CLEAN; SKIPPING CHECKS
/dev/ad0p2: clean, 1874771 free (883 frags, 234236 blocks, 0.0% fragmentation)
Mounting local file systems:.
Setting hostname: ref9-xen32.freebsd.org.
xn0: link state changed to DOWN
xn0: link state
Re: PV i386 patch
On 12/17/11 16:56, Sean Bruno wrote:
> This seems happy on our ref9 VMs.  I don't suppose this means I can go
> above 768M of Ram now?

Can't hurt to try... whatever the problem is with our code and large
amounts of RAM, the fact that it's an insta-panic during paging setup
suggests that it's something at a similar level of fail.

-- 
Colin Percival
Security Officer, FreeBSD | freebsd.org | The power to serve
Founder / author, Tarsnap | tarsnap.com | Online backups for the truly paranoid
Re: PV i386 patch
I'll test this out on the VMs in the fbsd cluster later.

Sean

On Fri, 2011-12-16 at 11:32 -0800, Alan Cox wrote:
> Is anyone here actively working on fixing problems with SMP support
> under PV i386?
>
> While doing some other maintenance on the vm_page_alloc() callers in
> the source tree, I happened to take a look at cpu_initialize_context()
> in mp_machdep.c.  This function is involved in bringing up the 2nd,
> 3rd, etc. CPUs on an SMP system.  I spotted a couple obvious errors.
> First, the size parameter given to kmem_*() functions is expected to be
> in terms of bytes and not pages.  Second, I believe that PV i386
> requires PAE to be used.  If so, there are out of range accesses to the
> array m[].
>
> Index: i386/xen/mp_machdep.c
> ===================================================================
> --- i386/xen/mp_machdep.c	(revision 228561)
> +++ i386/xen/mp_machdep.c	(working copy)
> @@ -810,7 +810,7 @@ cpu_initialize_context(unsigned int cpu)
>  {
>  	/* vcpu_guest_context_t is too large to allocate on the stack.
>  	 * Hence we allocate statically and protect it with a lock */
> -	vm_page_t m[4];
> +	vm_page_t m[NPGPTD + 2];
>  	static vcpu_guest_context_t ctxt;
>  	vm_offset_t boot_stack;
>  	vm_offset_t newPTD;
> @@ -831,8 +831,8 @@ cpu_initialize_context(unsigned int cpu)
>  		pmap_zero_page(m[i]);
>  	}
> 
> -	boot_stack = kmem_alloc_nofault(kernel_map, 1);
> -	newPTD = kmem_alloc_nofault(kernel_map, NPGPTD);
> +	boot_stack = kmem_alloc_nofault(kernel_map, PAGE_SIZE);
> +	newPTD = kmem_alloc_nofault(kernel_map, NPGPTD * PAGE_SIZE);
>  	ma[0] = VM_PAGE_TO_MACH(m[0])|PG_V;
> 
>  #ifdef PAE
> @@ -854,7 +854,7 @@ cpu_initialize_context(unsigned int cpu)
>  	    nkpt*sizeof(vm_paddr_t));
> 
>  	pmap_qremove(newPTD, 4);
> -	kmem_free(kernel_map, newPTD, 4);
> +	kmem_free(kernel_map, newPTD, 4 * PAGE_SIZE);
>  	/*
>  	 * map actual idle stack to boot_stack
>  	 */