Isaku Yamahata wrote:
>> Dual compile could be a good approach. Another alternative would be
>> x86 pv_ops-style dynamic binary patching based on compile-time hints.
>> The latter method also uses macros for the different paths, but the
>> macro generates a special section containing the information needed
>> for a runtime patch based on the pv_ops hook (somewhat similar to
>> Yamahata's binary patch method, though that addresses the C-side
>> code).
>>
>> With the dynamic pv_ops patch, the native machine side will again see
>> the original single instruction plus some nops, so I guess the
>> performance loss will be very minor. Both approaches are very
>> similar, and the choice could be left to the Linux community's
>> preference in the future :)
>
> Actually, we already adopted the dual compilation approach for the
> gate page; see gate.S in XenLinux arch/ia64/kernel/gate.S and the
> Makefile. I'm guessing that the dual compiling approach is easier
> than the binary patching approach because some modifications of
> xenivt.S don't correspond to a single instruction. Yes, I agree that
> we can go either way according to upstream preference.
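To make the patch-site mechanism above concrete, here is a rough
user-space sketch in C. All of the names, byte values, and the
single-byte "native instruction" are made up for illustration; this is
not the actual Linux or Xen implementation. The idea is only that the
macro records each patchable call in a table (in the real kernel, a
special ELF section), and a boot-time patcher rewrites each site to the
native instruction padded with nops when no hypervisor is present:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NOP 0x90	/* stand-in for the architecture's nop encoding */

struct patch_site {
	uint8_t *addr;		/* start of the patchable code window */
	uint8_t  len;		/* window size reserved by the macro */
	uint8_t  native;	/* native replacement (one byte here) */
};

/* A fake code window, as the macro would have emitted it: a call
 * through a pv_ops hook (0xE8 + offset) plus nop padding. */
static uint8_t code_window[8] = { 0xE8, 0x11, 0x22, 0x33, 0x44,
				  NOP, NOP, NOP };

static struct patch_site sites[] = {
	{ code_window, sizeof(code_window), 0xFB /* e.g. a native "sti" */ },
};

static void apply_native_patches(void)
{
	for (size_t i = 0; i < sizeof(sites) / sizeof(sites[0]); i++) {
		/* Overwrite the whole window: native insn plus nops. */
		memset(sites[i].addr, NOP, sites[i].len);
		sites[i].addr[0] = sites[i].native;
	}
}

int main(void)
{
	apply_native_patches();
	for (size_t i = 0; i < sizeof(code_window); i++)
		printf("%02x ", code_window[i]);
	printf("\n");	/* prints: fb 90 90 90 90 90 90 90 */
	return 0;
}

On Xen, the patcher would instead leave the call to the pv_ops hook in
place (or patch in the hypercall directly), so only the virtualized
case pays the indirection.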
Yes, it is there already. When we implement pv_ops, I would assume we
define the APIs and ask future kernel patches to follow them too (not
conflict with those APIs). So we have to define clear clobber registers
in those macros, and then modify a lot of the original Linux IVT code
to provide those clobber registers effectively. Current XenLinux
provides one solution, but I see two issues:

1: The coding style is not as good as the original IVT code. For
example:

#ifdef CONFIG_XEN
	mov r24=r8	// save r8, clobbered by the hypercall macros
	mov r8=r18
	;;
(p10)	XEN_HYPER_ITC_I
	;;
(p11)	XEN_HYPER_ITC_D
	;;
	mov r8=r24	// restore r8
	;;
#else

This kind of save/restore of r8 in each replacement (macro) is not well
tuned. We would probably need a big IVT code change to avoid the
frequent save/restore in each macro, and that needs a lot of effort.
Of course, we could take a shortcut before going upstream.

2: We are not using function pointers, which is what pv_ops wants. But
this one can be avoided if we use a dual IVT. That is a very high-level
kind of pv_ops (the hypervisor provides the whole IVT table), not the
normal pv_ops approach (a low-level per-instruction API). But anyway, I
love the idea too, if the upstream guys like it as well.

>> Another problem I want to raise is about the pv_ops calling
>> convention. Unlike x86, where the stack is mostly available, IPF ASM
>> code such as the IVT entrance doesn't use the stack, so I think we
>> have to define static-registers-only pv_ops and stacked-registers
>> pv_ops, like PAL.
>
> With respect to the hypervisor ABI, we have already differentiated
> them: ia64-specific HYPERPRIVOPS use the static registers convention,
> and normal Xen hypercalls use the stacked registers convention.

Yes, hyperpriv is doing something similar, so I think people won't have
much resistance here.

>> For most ASM code (the IVT), it has to use static-registers-only
>> pv_ops. We need to carefully define the clobber registers used and
>> manually modify the Linux ivt.S. A dual IVT table or binary patching
>> is preferred for performance.
>>
>> Stacked-register pv_ops could follow the C convention, and they are
>> less performance critical, so there is probably no need for dynamic
>> patching.
>
> I'm guessing one important exception is masking/unmasking interrupts,
> i.e. ssm/rsm psr.i. Anyway, we will see during our merge effort.

If it is called from C, I wouldn't say it is critical, because it is a
slow path in the native OS too (a rough sketch is in the P.S. below).
But some time later, we can add more after the Linux community takes
it. This is the advantage of pv_ops, given how people argued about
ABI-level vs. API-level abstraction at the very beginning, when VMware
raised their VMI spec. The API approach allows ongoing improvement :)

Thx, eddie
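P.S. To illustrate the function-pointer style from issue 2, and why
C-convention slow paths such as psr.i masking don't need patching, here
is a rough user-space sketch. None of these names come from actual
kernel or Xen code, and the puts() calls merely stand in for the real
instructions and hyperprivops:

#include <stdio.h>

struct pv_irq_ops {
	void (*irq_disable)(void);	/* rsm psr.i on native ia64 */
	void (*irq_enable)(void);	/* ssm psr.i on native ia64 */
};

static void native_irq_disable(void) { puts("rsm psr.i"); }
static void native_irq_enable(void)  { puts("ssm psr.i"); }
static void xen_irq_disable(void)    { puts("hyperprivop: rsm psr.i"); }
static void xen_irq_enable(void)     { puts("hyperprivop: ssm psr.i"); }

static struct pv_irq_ops pv_irq_ops;	/* filled in once at boot */

int main(void)
{
	int running_on_xen = 1;		/* boot-time detection stands in */

	if (running_on_xen) {
		pv_irq_ops.irq_disable = xen_irq_disable;
		pv_irq_ops.irq_enable  = xen_irq_enable;
	} else {
		pv_irq_ops.irq_disable = native_irq_disable;
		pv_irq_ops.irq_enable  = native_irq_enable;
	}

	pv_irq_ops.irq_disable();	/* C callers go through the pointer */
	pv_irq_ops.irq_enable();	/* acceptable cost on a slow path */
	return 0;
}

The IVT assembly paths can't use this, because calling through a
pointer needs the full C calling convention (stacked registers), which
is exactly why the static-register variant, a dual IVT, or binary
patching is needed there.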