There are bugs in the O3 model such that restoring directly from a
checkpoint into O3 doesn't work.  That's why the standard-switch model
exists.  I don't think it has anything to do with the memory size.
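For reference, the workaround looks roughly like this (a sketch; the flag names assume the stock fs.py/Options.py option set from this era of m5, and the build path and checkpoint number are placeholders to adjust for your tree):

```shell
# Restoring straight into the O3 ("detailed") model can crash:
#   ./build/ALPHA_FS/m5.opt configs/example/fs.py --detailed -r 1

# Restore with the atomic CPU instead, then let the script switch
# to the timing/detailed CPUs after restore:
./build/ALPHA_FS/m5.opt configs/example/fs.py --standard-switch -r 1
```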

Steve

On Wed, Oct 13, 2010 at 11:54 AM, Lide Duan <[email protected]> wrote:
> Hi,
>
> I noticed that the default memory size set in Benchmarks.py is 128MB; isn't
> that too small for reasonable simulations?
>
> Previously, when I was using ALPHA_SE, physmem was set to "2GB" and the
> simulation ran well. In FS mode, however, if 2GB is used, booting Linux
> (with the atomic CPU) becomes extremely slow; if 1GB or 512MB is used, I can
> boot the OS, start the program, and take a checkpoint successfully.
> However, restoring from the checkpoint directly with the detailed CPU
> (--detailed) gives me a segmentation fault. The interesting thing is: if I
> restore the checkpoint with the atomic CPU and then switch to the timing and
> detailed CPUs (--standard-switch), the simulation runs well. With the
> default value of 128MB, both --detailed and --standard-switch run. I am
> confused by this observation. Am I missing anything here? What is a
> reasonable memory size in FS mode (say, for PARSEC programs)?
>
> Thanks,
> Lide
>
> _______________________________________________
> m5-users mailing list
> [email protected]
> http://m5sim.org/cgi-bin/mailman/listinfo/m5-users
>