On Tue, Mar 29, 2011 at 12:45 PM, Ishwar Bhati <[email protected]> wrote:
> Hi All,
>
> The task/thread scheduling when using multi-program/multi-threaded
> applications in MARSS is not very predictable. Sometimes one core just goes
> idle in kernel mode. In general, the percentage of kernel-mode instructions
> is higher than expected, and I suspect it is due to the preemptive
> scheduling in Linux based on the timeslice, which is typically 10-100ms.
>
> I am wondering: when we are in simulation mode, how does the kernel get
> the time? Is it based on the host clock, the qemu virtual clock, or the
> simulated clock? Depending on this, the context-switching burden will
> increase, and hence the kernel-mode instruction count gets higher.

The time returned to the simulated system is also slowed down based on the simulator's speed, so the kernel will not switch until it has simulated 10-100ms (or whatever timeslice is allocated to a task).

> I do not want to set the affinity of processes to specific cores, since we
> want to take the real OS/multi-programming scenario into consideration.

If at least one of the cores is executing user-space code/application, the usual scenario is that the kernel first assigns all the threads to the same core, and only when they load the CPU heavily does it reschedule the threads onto different cores. So you might be seeing such behavior from the kernel. Because simulation is slow, you might have to run it for a long time before the kernel reschedules.

- Avadh

> Does anybody know how these clocks are used in the kernel, or has anyone
> found a solution to this problem?
>
> Thanks
> Ishwar
>
> _______________________________________________
> http://www.marss86.org
> Marss86-Devel mailing list
> [email protected]
> https://www.cs.binghamton.edu/mailman/listinfo/marss86-devel
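One way to quantify the "kernel-mode instruction percentage is higher than expected" observation from inside the guest is to sample the per-core counters in /proc/stat and compute the system-mode share of busy time. A minimal sketch (the helper name and the sample line are illustrative; field order follows proc(5)):

```python
# Estimate the fraction of non-idle jiffies a core spent in kernel mode,
# from one "cpuN ..." line of /proc/stat.
# Field layout per proc(5): cpuN user nice system idle iowait irq softirq ...

def kernel_share(stat_line):
    """Return (core, system-mode fraction of non-idle jiffies)."""
    fields = stat_line.split()
    core = fields[0]
    user, nice, system, idle, iowait, irq, softirq = map(int, fields[1:8])
    busy = user + nice + system + irq + softirq
    kernel = system + irq + softirq
    return core, (kernel / busy) if busy else 0.0

# Illustrative sample line; on a real guest, read lines from open("/proc/stat").
sample = "cpu1 100 0 300 5000 20 10 10 0 0 0"
core, frac = kernel_share(sample)
print(core, round(frac, 3))  # cpu1 0.762
```

Sampling this twice (before and after the simulated workload) and differencing the counters gives the kernel-mode share over the interval, so you can see whether a core really sat idle in kernel mode.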
