On 5/3/10, Igor Kovalenko <igor.v.kovale...@gmail.com> wrote:
> On Mon, May 3, 2010 at 11:54 PM, Blue Swirl <blauwir...@gmail.com> wrote:
> > On 5/3/10, Igor Kovalenko <igor.v.kovale...@gmail.com> wrote:
> >> On Mon, May 3, 2010 at 11:24 PM, Blue Swirl <blauwir...@gmail.com> wrote:
> >> > On 5/3/10, Igor Kovalenko <igor.v.kovale...@gmail.com> wrote:
> >> >> Hi!
> >> >>
> >> >> There is an issue with lazy conditional codes evaluation where
> >> >> we return from trap handler with mismatching conditionals.
> >> >>
> >> >> I seldom reproduce it here when dragging qemu window while
> >> >> machine is working through silo initialization. I use gentoo minimal cd
> >> >> install-sparc64-minimal-20100322.iso but I think anything with silo boot
> >> >> would experience the same. Once in a while it would report crc error,
> >> >> unable to open cd partition or it would fail to decompress image.
> >> >
> >> > I think I've also seen this.
> >> >
> >> >> Pattern that fails appears to require a sequence of compare insn
> >> >> possibly followed by a few instructions which do not touch conditionals,
> >> >> then conditional branch insn. If it happens that we trap while processing
> >> >> conditional branch insn so it is restarted after return from trap then
> >> >> seldom conditional codes are calculated incorrectly.
> >> >>
> >> >> I cannot point to exact cause but it appears that after trap return
> >> >> we may have CC_OP and CC_SRC* mismatch somewhere,
> >> >> since adding more cond evaluation flushes over the code helps.
> >> >>
> >> >> We already tried doing flush more frequently and it is still not
> >> >> complete, so the question is how to finally do this once and right :)
> >> >>
> >> >> Obviously I do not get the design of lazy evaluation right, but
> >> >> the following list appears to be good start. Plan is to prepare
> >> >> a change to qemu and find a way to test it.
> >> >>
> >> >> 1. Since SPARC* is a RISC CPU it seems to be not profitable to
> >> >> use DisasContext->cc_op to predict if flags should be not evaluated
> >> >> due to overriding insn. Instead we can drop cc_op from disassembler
> >> >> context and simplify code to only use cc_op from env.
> >> >
> >> > Not currently, but in the future we may use that to do even lazier
> >> > flags computation. For example the sequence 'cmp x, y; bne target'
> >> > could be much more optimal by changing the branch to do the
> >> > comparison. Here's an old unfinished patch to do some of this.
> >> >
> >> >> Another point is that we always write to env->cc_op when
> >> >> translating *cc insns
> >> >> This should solve any issue with dc->cc_op prediction going
> >> >> out of sync with env->cc_op and cpu_cc_src*
> >> >
> >> > I think this is what is happening now.
> >> >
> >> >> 2. We must flush lazy evaluation back to CC_OP_FLAGS in a few cases when
> >> >>    a. conditional code is required by insn (like addc, cond branch etc.)
> >> >>       - here we can optimize by evaluating specific bits (carry?)
> >> >>       - not sure if it works in case we have two cond consuming insns,
> >> >>         where first needs carry another needs the rest of flags
> >> >
> >> > Here's another patch to optimize C flag handling. It doesn't pass my
> >> > tests though.
> >> >
> >> >>    b. CCR is read by rdccr (helper_rdccr)
> >> >>       - have to compute all flags
> >> >>    c. trap occurs and we prepare trap level context (saving pstate)
> >> >>       - have to compute all flags
> >> >>    d. control goes out of tcg runtime (so gdbstub reads correct value from env)
> >> >>       - have to compute all flags
> >> >
> >> > Fully agree.
> >>
> >>
> >> Cool
> >>
> >> Still I'd propose to kill dc->cc_op, find a reliable way to test it
> >> and then add it back possibly with more optimizations.
> >> I'm lost in the code up to the point where I believe we need to
> >> save/restore cc_op and cpu_cc* while switching trap levels.
> >
> > I'd think this should do the trick:
> >
> > diff --git a/target-sparc/op_helper.c b/target-sparc/op_helper.c
> > index b27778b..94921cd 100644
> > --- a/target-sparc/op_helper.c
> > +++ b/target-sparc/op_helper.c
> > @@ -3506,6 +3506,8 @@ void do_interrupt(CPUState *env)
> >      }
> >      tsptr = cpu_tsptr(env);
> >
> > +    helper_compute_psr();
> > +
> >      tsptr->tstate = ((uint64_t)GET_CCR(env) << 32) |
> >          ((env->asi & 0xff) << 24) | ((env->pstate & 0xf3f) << 8) |
> >          GET_CWP64(env);
> >
> Thanks, this change seems to work here for silo issue.
>
> Another change would be to flush for gdbstub use of GET_CCR and for
> helper_rdccr.
> I tried to embed flush into GET_CCR but the code looks ugly since we
> need to proxy a call to helper_compute_psr from gdbstub passing
> available env pointer.
>
> Not really tested with your changes, but still what is the breakage you see?
Aurora 2.0 (http://distro.ibiblio.org/pub/linux/distributions/aurora/build-2.0/sparc/iso/)
breaks. This is what I get with git HEAD, having pressed enter key twice:

Welcome to Aurora SPARC Linux

+--------------+ CD Found +--------------+
|                                        |
|  To begin testing the CD media before  |
|  installation press OK.                |
|                                        |
|  Choose Skip to skip the media test    |
|  and start the installation.           |
|                                        |
|       +----+            +------+       |
|       | OK |            | Skip |       |
|       +----+            +------+       |
|                                        |
|                                        |
+----------------------------------------+

 <Tab>/<Alt-Tab> between elements  |  <Space> selects | <F12> next screen

This is what I get with the C flag patch applied:

Welcome to Aurora SPARC Linux

+--------------+ Error +---------------+
|                                      |
| failed to read keymap information:   |
| Success                              |
|                                      |
|                +----+                |
|                | OK |                |
|                +----+                |
|                                      |
|                                      |
+--------------------------------------+

 <Tab>/<Alt-Tab> between elements | <Space> selects | <F12> next screen
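
To make the flush rules quoted above (points 2a-2d) concrete, here is a minimal,
self-contained sketch of the lazy condition-code idea: *cc instructions only
record cc_op plus their operands, and a single flush routine materializes the
icc bits and resets cc_op to CC_OP_FLAGS before anything outside generated code
(trap entry, rdccr, a gdbstub register read) looks at CCR. The ToyEnv type, the
toy_* names, and the reduced set of cc_op values are illustrative assumptions
for this sketch, not QEMU's actual target-sparc code; the real
helper_compute_psr() is considerably more involved.

/*
 * Sketch only: a toy model of lazy condition codes, loosely following
 * the names used in target-sparc (cc_op, cc_src, CC_OP_FLAGS).  Builds
 * with any C99 compiler; none of this is the real QEMU code.
 */
#include <stdint.h>
#include <stdio.h>

enum {
    CC_OP_FLAGS,        /* icc below is already valid */
    CC_OP_ADD,          /* icc must be derived from cc_src + cc_src2 */
    CC_OP_SUB            /* icc must be derived from cc_src - cc_src2 */
};

typedef struct {
    int cc_op;
    uint32_t cc_src, cc_src2, cc_dst;
    uint32_t icc;       /* N=8, Z=4, V=2, C=1, as in the icc nibble of CCR */
} ToyEnv;

/* Flush lazy state back to CC_OP_FLAGS (cases 2b/2c/2d above). */
static void toy_compute_icc(ToyEnv *env)
{
    uint32_t s1 = env->cc_src, s2 = env->cc_src2, dst = env->cc_dst;
    uint32_t icc = 0;

    switch (env->cc_op) {
    case CC_OP_FLAGS:
        return;                                    /* nothing deferred */
    case CC_OP_ADD:
        if (dst == 0)                              icc |= 4;  /* Z */
        if ((int32_t)dst < 0)                      icc |= 8;  /* N */
        if (~(s1 ^ s2) & (s1 ^ dst) & 0x80000000u) icc |= 2;  /* V */
        if (dst < s1)                              icc |= 1;  /* C */
        break;
    case CC_OP_SUB:
        if (dst == 0)                              icc |= 4;  /* Z */
        if ((int32_t)dst < 0)                      icc |= 8;  /* N */
        if ((s1 ^ s2) & (s1 ^ dst) & 0x80000000u)  icc |= 2;  /* V */
        if (s1 < s2)                               icc |= 1;  /* C (borrow) */
        break;
    }
    env->icc = icc;
    env->cc_op = CC_OP_FLAGS;   /* later readers see concrete flags */
}

/* Trap entry: flush before the CCR snapshot goes into tstate (case 2c). */
static uint64_t toy_trap_save_tstate(ToyEnv *env)
{
    toy_compute_icc(env);
    return (uint64_t)env->icc << 32;   /* stand-in for the GET_CCR() use */
}

int main(void)
{
    /* subcc 1, 1: only Z should be set when the trap snapshots the flags */
    ToyEnv env = { .cc_op = CC_OP_SUB, .cc_src = 1, .cc_src2 = 1, .cc_dst = 0 };
    printf("tstate = %#llx\n", (unsigned long long)toy_trap_save_tstate(&env));
    return 0;
}

The only point this is meant to illustrate is the ordering from the
do_interrupt() hunk quoted above: the flush has to run before GET_CCR() builds
tstate, and the same flush is what a GET_CCR-style wrapper callable from
gdbstub would have to go through.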