On Tue, Sep 16, 2014 at 08:57:31AM -0400, Andrew Jones wrote:
> 
> 
> ----- Original Message -----
> > Il 16/09/2014 14:43, Andrew Jones ha scritto:
> > > I don't think we need to worry about this case. AFAIU, enabling the
> > > caches for a particular cpu shouldn't require any synchronization.
> > > So we should be able to do
> > > 
> > >     enable caches
> > >     spin_lock
> > >     start other processors
> > >     spin_unlock
> > 
> > Ok, I'll test and apply your patch then.
> > 
> > Once you change the code to enable caches, please consider hanging on
> > spin_lock with caches disabled.
> 
> Unfortunately I can't do that without changing spin_lock into a wrapper
> function. Early setup code calls functions that use spin_locks, e.g.
> puts(), and we wouldn't want to move the cache enablement into early
> setup code, as that should be left for unit tests to turn on and off as
> they wish. Thus we either need to be able to change the spin_lock
> implementation dynamically, or just leave the test/return as is.
> 
My take on this whole thing is that we're doing something fundamentally
wrong.  I think what we should do is always enable the MMU for running
actual tests, bringing up multiple CPUs, etc.  We could have an
early_printf() that doesn't use the spinlock.  I think this would just
be a more stable setup.

Do we have clear ideas of which kinds of tests it would make sense to
run without the MMU turned on?  If we can be more concrete on this
subject, perhaps we could add a special path (or build) that doesn't
enable the MMU for running those test cases.

-Christoffer