> Now, CPUs are where I have only a vague idea of what would need to be
> simulated. I know there are up to three levels of cache plus main memory,
> which all have different access times. The CPU itself has a pipeline,
> branch prediction, and so on, which can invalidate the contents of the
> pipeline up to a given point (the branch).
>
> I think the most time-consuming operation that should be properly
> simulated is memory access. For this to work properly, all levels of
> cache must be emulated, too.
>
> How much do branch-prediction misses cost? How much do pipeline
> interlocks? I don't think those would be _that_ dramatic, since today's
> compilers are said to optimize quite well...
The most complex thing to simulate accurately in a modern CPU (including ARMs) is by far the data cache. In comparison, getting an accurate core pipeline simulation is *very* easy. There is a company that claims to be able to accurately simulate an ARM at 200 MHz (http://www.vastsystems.com). I bet they are using statistical cycle counting and so are probably very wrong :)

Laurent

_______________________________________________
Qemu-devel mailing list
Qemu-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/qemu-devel