On Mon, Mar 17, 2008 at 3:37 AM, Robert Collins
<[EMAIL PROTECTED]> wrote:
> This reminds me, one of the things I was thinking heavily about a few
>  years ago was locality of reference in N-CPU situations. That is, making
>  sure we don't cause thrashing unnecessarily. For instance - given
>  chunking we can't really avoid seeing all the bytes for a MISS, so does
>  it matter whether we process all of the request on one CPU, or part on
>  one and part on another? Given NUMA it clearly does matter, but how many
>  folk run squid/want to run squid on a NUMA machine?

Well, all SMP AMD/HyperTransport boxes are NUMA under the hood, so
anyone wishing to have more than 32GB of RAM (the current limit on
stock Intel boxes) has to run on one.
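
Just to give an idea of the scale of the thing (a sketch, not anything
squid does today; numa_node_of_cpu() needs a fairly recent libnuma, and
the buffer size is only illustrative): keeping a worker's buffers on the
node it actually runs on is roughly this much code.

/* Sketch only: allocate a worker's buffers on its local NUMA node.
 * Not squid code; link with -lnuma on Linux. */
#define _GNU_SOURCE
#include <numa.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this box\n");
        return 1;
    }

    int cpu = sched_getcpu();          /* CPU this worker runs on */
    int node = numa_node_of_cpu(cpu);  /* its local memory node   */

    /* Put the worker's buffers on the local node, so a MISS is
     * chunked through memory attached to the CPU that parses it. */
    size_t sz = 16 * 1024 * 1024;      /* illustrative size */
    void *buf = numa_alloc_onnode(sz, node);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }
    printf("cpu %d, node %d, buffer at %p\n", cpu, node, buf);

    numa_free(buf, sz);
    return 0;
}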

>  Or, should we make acl lookups come back to the same cpu, but do all the
>  acl lookups on one cpu, trading potential locking (a non-read-blocking
>  cache can allow result lookups cheaply) for running the same acl code
>  over extended sets of acls. (Not quite SIMD, but think about the problem
>  from that angle for a bit).

Hm.. this is looking too far ahead IMO. I'd just try to parallelize that
part as much as possible, sharing only the {mem,disk} store.
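
Something along these lines, purely as a sketch of what I mean (one
process pinned per CPU, sharing nothing but one mmap()ed region standing
in for the shared store; names and sizes are made up):

/* Sketch only: N workers, each pinned to its own CPU, sharing only a
 * mmap()ed region. Not squid code. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);

    /* One shared, anonymous mapping that all workers inherit. */
    size_t store_sz = 4096;
    void *store = mmap(NULL, store_sz, PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (store == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    for (long cpu = 0; cpu < ncpus; ++cpu) {
        if (fork() == 0) {
            /* Pin this worker to one CPU; everything else stays private. */
            cpu_set_t mask;
            CPU_ZERO(&mask);
            CPU_SET(cpu, &mask);
            sched_setaffinity(0, sizeof(mask), &mask);

            printf("worker %d pinned to cpu %ld, shared store at %p\n",
                   (int)getpid(), cpu, store);
            _exit(0);
        }
    }

    while (wait(NULL) > 0)
        ;
    return 0;
}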

-- 
 /kinkie
