Really?  Doesn't that imply that you're snooping on uncacheables in real life?

On Thu, Jul 15, 2010 at 4:38 PM, Ali Saidi <sa...@umich.edu> wrote:
>
> I think something like that could work, although I won't be advocating for
> the forwarding snoop thing, since it seems as though ARM can support LLSC
> on uncacheable accesses.
>
> Ali
>
>
> On Thu, 15 Jul 2010 16:27:56 -0700, Steve Reinhardt
> <ste...@gmail.com>
> wrote:
>> Alpha has to do the same thing on interrupts... the way this is
>> handled is that there's a per-thread lock flag in the CPU that gets
>> cleared on interrupts, and if that flag is not set then we fail the SC
>> without even sending it to the cache.  (At least that's my
>> recollection of how it works.)  Seems like ARM could/should do
>> something similar.
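For concreteness, a toy sketch of the per-thread flag scheme described above. All names here (ThreadContext, load_locked, etc.) are made up for illustration, not M5's actual API:

```python
class ThreadContext:
    def __init__(self):
        self.lock_flag = False   # set by LL, cleared on interrupts
        self.lock_addr = None    # address the LL reserved

def load_locked(tc, addr):
    """LL: record the reservation and set the per-thread flag."""
    tc.lock_flag = True
    tc.lock_addr = addr

def take_interrupt(tc):
    """Any interrupt/exception kills the reservation."""
    tc.lock_flag = False

def store_conditional(tc, addr):
    """SC: if the flag is clear, fail locally without touching the cache."""
    if not tc.lock_flag or tc.lock_addr != addr:
        return False             # fail fast; the request never leaves the CPU
    tc.lock_flag = False
    return True                  # here the request would go on to the cache
```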
>>
>> Another possible way to handle these (that would be more realistic and
>> perhaps simpler overall) would be to forward all invalidation snoops
>> to the CPUs (which we can easily do), then have each CPU track the
>> address it has locked (if any) and clear the lock if it sees an
>> invalidation on the block it cares about.  I think we didn't do it
>> that way because originally we did not send snoops to the CPU, but
>> with the current memory system it would be a minor change for M5
>> classic (not sure about Ruby).  It would also have the benefit of
>> eliminating the two separate LL/SC implementations in the caches and
>> in physmem.  It's also a non-trivial change that might not be worth
>> it, and might be harder than you think it should be in Ruby.
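A rough sketch of that snoop-forwarding alternative: each CPU tracks its own locked address and drops the reservation when an invalidation hits that block. Block size, class names, and the Bus model are assumptions for illustration only:

```python
BLOCK_SIZE = 64  # assumed cache block size

def block_addr(addr):
    return addr & ~(BLOCK_SIZE - 1)

class CPU:
    def __init__(self):
        self.lock_valid = False
        self.lock_addr = None

    def load_locked(self, addr):
        self.lock_valid = True
        self.lock_addr = block_addr(addr)

    def snoop_invalidate(self, addr):
        # Invalidation caused by another CPU's store: clear the lock
        # only if it targets the block we care about.
        if self.lock_valid and block_addr(addr) == self.lock_addr:
            self.lock_valid = False

    def store_conditional(self, addr):
        ok = self.lock_valid and block_addr(addr) == self.lock_addr
        self.lock_valid = False
        return ok

class Bus:
    """Forwards every invalidation snoop to all CPUs but the sender."""
    def __init__(self, cpus):
        self.cpus = cpus

    def invalidate(self, sender, addr):
        for cpu in self.cpus:
            if cpu is not sender:
                cpu.snoop_invalidate(addr)
```

With this, the caches and physmem need no LL/SC state at all; the CPU alone decides whether an SC succeeds.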
>>
>> Steve
>>
>> On Thu, Jul 15, 2010 at 4:13 PM, Ali Saidi <sa...@umich.edu> wrote:
>>>
>>> ARM has an instruction that clears the lock flag (CLREX). Implementing
>>> that in physical memory is easy enough; with the cache, on the other
>>> hand, it requires calling clearLoadLocks() on every block in the cache.
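A toy model of the cost difference Ali is pointing at: with a single per-thread flag (as in physmem) CLREX is one flag clear, while per-block lock state in the cache forces a walk over every block. The classes and the clear_load_locks name are illustrative, not M5's real interfaces:

```python
class CacheBlock:
    def __init__(self):
        self.load_locks = set()   # contexts holding a reservation here

    def clear_load_locks(self):
        self.load_locks.clear()

class Cache:
    def __init__(self, num_blocks):
        self.blocks = [CacheBlock() for _ in range(num_blocks)]

    def clrex(self):
        # O(num_blocks): has to visit every block's lock list.
        for blk in self.blocks:
            blk.clear_load_locks()

class PhysMemLL:
    def __init__(self):
        self.lock_flag = False

    def clrex(self):
        # O(1): a single per-thread flag.
        self.lock_flag = False
```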
>>>
>>> Ali
>>>
>>>
>>>
>>>
>>> On Thu, 15 Jul 2010 16:01:09 -0700, nathan binkert <n...@binkert.org>
>>> wrote:
>>>>> You're right that it could be done either way.  I think the rationale
>>>>> is that this way you don't need to search a list to see if your
>>>>> address is on it.  If the common case is that there are no locked
>>>>> blocks in the entire cache though, then that's not a big deal since the
>>>>> list will be empty anyway.  I can't think of any other reason.
>>>>
>>>> Why do you need a list of lock addresses?  The only reason I can think
>>>> of is because of multiple threads.  Is that what you're referring to?
>>>> I guess the other issue is that the lock address would have to be
>>>> checked on all stores in the system which could be a pain.  Another
>>>> reason is that you're already accessing the tag for coherence
>>>> operations, so you might as well put the lock info there.  You could
>>>> for example update MESI to have a "locked exclusive" or "locked
>>>> modified" state.
>>>>
>>>>
>>>>   Nate
>>>> _______________________________________________
>>>> m5-dev mailing list
>>>> m5-dev@m5sim.org
>>>> http://m5sim.org/mailman/listinfo/m5-dev