On Sat, 16 Dec 2006 11:34, Jimi Xenidis wrote:
If you really want to explore mem/page copy for XenPPC then you have
to understand that since we run without an MMU, profiling code with
MMU on, _including_ RMA, is not helpful because the access is guarded ...
Please run your experiments
Using dcbz avoids first reading a cache line from memory before writing to the
line.
Timing results (starting with a clean cache, i.e. no write-backs for dirty lines):
JS20:
elapsed time: 0x9f5e
elapsed time using dcbz: 0x569e
elapsed time: 0x9fe9
elapsed time
So do you have a patch for copy_page()?
In Xen for PPC, the only copy_page() is in arch/powerpc/mm.c:
extern void copy_page(void *dp, void *sp)
{
    if (on_systemsim()) {
        systemsim_memcpy(dp, sp, PAGE_SIZE);
    } else {
        memcpy(dp, sp, PAGE_SIZE);
    }
}
1) Also copy_page is
3) Useful when PPC must do page copies in place of 'page flipping'.
So you're saying we should worry about it later?
For the future, copy_page using dcbz:
diff -r 7669fca80bfc xen/arch/powerpc/mm.c
--- a/xen/arch/powerpc/mm.c Mon Dec 04 11:46:53 2006 -0500
+++ b/xen/arch/powerpc/mm.c
session.
+ *
+ * Copyright (c) 2003, K A Fraser.
+ * Rewritten for PPC: Dan Poff [EMAIL PROTECTED], Yi Ge [EMAIL PROTECTED]
+ */
+
+#include <inttypes.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <xen/asm/htab.h>
+
+#include <xg_private.h>
+
+#define DECOR 0x8000
If this machine would like 192.x machines to access the 9.x network,
then ip_forwarding is necessary.
Right - I was thinking of DomU access to only 192.x.
For machines on 192.x to access 9.x, you need forwarding...
I think the problem is that the bridge favors the default route, so I
do not think swapping would work.
On my victim JS21, eth0 is the default, and xen scripts work as written.
However, in this case, DomU accesses the 9.2x network, rather than 192.x
On the other hand...
CSO is setup with 192.168.0.1 as router/gateway (on the
BTW: If you would like to have DomUs to have access to the outside
world then you also want to make sure you have ip forwarding turned
on:
# echo 1 > /proc/sys/net/ipv4/ip_forward
or, to change it permanently, set IP_FORWARD= from no to yes in /etc/sysconfig/sysctl
'ip forwarding' is not necessary - I
We have had a couple of network configuration mysteries (including setting
'network-bridge netdev=eth0') due to CSO usage of eth1 as the default/external
adapter, rather than eth0. Probably the xen scripts would have worked without
mods if eth0 and eth1 were swapped...
cso89:~ # netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask          Flags  MSS Window  irtt Iface
9.2.78.1        0.0.0.0         255.255.255.255  UH       0 0          0 eth0
9.2.78.0        0.0.0.0         255.255.255.0    U        0 0          0 eth0
I now think the console prints in previous mail are useless.
Example 2 runs while example 3 wedges, yet the prints are
roughly equivalent...
Also today there have been several runs similar to example 2.
I modified python code to skip the 'unpause' at the end of
domain restore. The drill: boot,
'xm restore' immediately following boot usually wedges the cpu.
However, xm save followed by xm restore works fine (even when
guest domain and htab are relocated to new memory areas).
^AAA shows: with .plpar_hcall_norets @ c003af78
and .HYPERVISOR_sched_op @
Hmm.. I capped my Dom0 to 192M and 64M and it worked fine. The only
reason that mempool_create() can fail is if an underlying kmalloc
failed, I don't think that we are trying to get so much memory.
Hey! did you update Xen as well? because the number of pages was way
too large before.
arch_gnttab_map: grant table at d80082016000
setup_grant_area: mempool_create(): failed
kernel BUG in setup_grant_area at
/home/poff/linux-ppc-2.6-work.hg/arch/powerpc/platforms/xen/gnttab.c:420!
cpu 0x0: Vector: 700 (Program Check) at [c0a3fa50]
pc: c0043fc0: .arch_gnttab_map+0x1bc
Dan, we assumed that xend built a domain shell before the restore
process, is this not the case?
The restore code does not call initDomain(), so allocMem() is never run:
If I change restore to include initDomain(), then good mfns show up...
Uncomfortable with this 'hack' - don't know where
It seems like an odd disconnect. I wonder if some of what we have in
initDomain should actually be in vm.construct()?!
Also, curious that createDevices() and createChannels() are included in
initDomain(), while vm.restore() calls them directly.
'restore' maps guest memory then reads the saved memory image from disk.
xc_get_pfn_list() provides the mfns of guest memory, used for mapping the
guest pages. When using these mfns, the system crashes. I realized these
mfns are too small.
Also when creating a new domain, the console prints message
Looking at xlate.c, the htab and its entries are accessed in the following way:
struct vcpu *v = get_current();
struct domain *d = v->domain;
struct domain_htab *htab = &d->arch.htab;
union pte volatile *pte;
pte = &htab->map[ptex];
htab->map is the HTAB; remember it is treated like
I don't know if I'm off base but have you added appropriate code to
linux? specifically arch/powerpc/platforms/xen/hcall.c ?
Yes, this was the problem, but I had been looking at xen/arch/powerpc/hcalls.c,
not realizing that you were pointing out a different file...
In fact hcall.c resides on
Looking at xlate.c, the htab and its entries are accessed in the following way:
struct vcpu *v = get_current();
struct domain *d = v->domain;
struct domain_htab *htab = &d->arch.htab;
union pte volatile *pte;
pte = &htab->map[ptex];
I've inserted this code into xen/arch/powerpc/domctl.c,
Similar panic whenever 'shutdown -h' on JS20 (using local ide for root fs):
...
/dev/hda2 umounted done
done
Shutting down MD Raid
2) How do you 'refresh' python?
Answer: restart xend
___
Xen-ppc-devel mailing list
Xen-ppc-devel@lists.xensource.com
http://lists.xensource.com/xen-ppc-devel
Have updated the LTC Wiki to include http://watgsa.ibm.com/projects/s/slof
as the local source for SLOF images. Could you please include a local source
for the 'update_flash' utility as well (i.e. copy it to //watgsa, for example)?
Can find this utility at klinux7:/home/reflash/slof/
SOL is broken on our older model bladecenter.
To be clear, is this a problem with _your_specific_ blade center or with
older models in general?
if you can use SOL to talk to your linux console without Xen and you
cannot _with_ Xen then we need to get to the bottom of that.
SOL is broken on the 'older
chosen from 1 choice
hub 2-0:1.0: USB hub found
hub 2-0:1.0: 3 ports detected
USB Universal Host Controller Interface driver v3.0
usbcore: registered new driver usbhid
/home/poff/linux-ppc-2.6-work.hg/drivers/usb/input/hid-core.c: v2.6:USB HID
core driver
pegasus: v0.6.13 (2005/11/13), Pegasus
If you want ssh, you need init scripts to run, so you're going to need
to drop the init=/bin/bash here.
Yes, that worked nicely -
Was confused since when booting Dom0 with an nfs root, sshd comes up
even though 'init=/sbin/quickinit noshell' ... looks like quickinit
provides some services.
The following code includes assembler versions of clear_page_cacheable() (by
Xenidis), copy_page(), and copy_page_cacheable(). The 'cacheable' versions use
'dcbz' for clearing cache lines; the target page is assumed to be cacheable.
This code has been debugged with a small application program on
This code walks the OF dev tree, finding end-of-memory and memory holes.
All memory beyond the hypervisor's RMA is added to the domheap. (Previously,
only memory up to the 1st hole was used.) Finally, parts of setup.c have been
swept into memory.c as cleanup.
diff -r 9c72449e4370 xen/arch/powerpc/setup.c
-iwithprefix include -Wall -Werror -pipe
-I/home/poff/xenppc-unstable-work7.hg/xen/include
-I/home/poff/xenppc-unstable-work7.hg/xen/include/asm-powerpc/mach-generic
-I/home/poff/xenppc-unstable-work7.hg/xen/include/asm-powerpc/mach-default
-Wpointer-arith -Wredundant-decls -Wpacked -msoft-float -O2 -O0 -g
+47,7 @@ obj-y += elf32.o
# These are extra warnings like for the arch/ppc directory but may not
# allow the rest of the tree to build.
PPC_C_WARNINGS += -Wundef -Wmissing-prototypes -Wmissing-declarations
+PPC_C_WARNINGS += -Wshadow
CFLAGS += $(PPC_C_WARNINGS)
make -f /home/poff/xenppc
We have xen running on an Intel blade with SuSE10; it may be helpful to view
its scripts and logs...