On 06/28/2010 12:27 PM, Alexander Graf wrote:
Am I looking at old code?


Apparently. Check book3s_mmu_*.c

I don't have that pattern.

(another difference is using struct hlist_head instead of list_head, which I recommend since it saves space)

Hrm. I thought about this quite a bit before too, but that makes invalidation more complicated, no? We always need to remember the previous entry in a list.

hlist_for_each_entry_safe() does that.
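Roughly like this (a sketch only: the bucket layout and the member name
list_pte are placeholders, not necessarily the patch's actual fields;
the iterator is the current list.h macro, which takes an extra
hlist_node cursor):

    #include <linux/list.h>
    #include <linux/slab.h>

    struct hpte_cache {
        struct hlist_node list_pte; /* chains into one hash bucket */
        /* ... pte payload ... */
    };

    static void flush_bucket(struct kvm_vcpu *vcpu,
                             struct hlist_head *bucket)
    {
        struct hpte_cache *pte;
        struct hlist_node *node, *tmp;

        /* 'tmp' caches the next node before the body runs, so the
         * current entry can be unhashed and freed in place; there is
         * no need to track the previous entry by hand. */
        hlist_for_each_entry_safe(pte, node, tmp, bucket, list_pte) {
            hlist_del(&pte->list_pte);
            kmem_cache_free(vcpu->arch.hpte_cache, pte);
        }
    }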


+int kvmppc_mmu_hpte_init(struct kvm_vcpu *vcpu)
+{
+    char kmem_name[128];
+
+    /* init hpte slab cache */
+    snprintf(kmem_name, 128, "kvm-spt-%p", vcpu);
+    vcpu->arch.hpte_cache = kmem_cache_create(kmem_name,
+        sizeof(struct hpte_cache), sizeof(struct hpte_cache), 0, NULL);


Why not one global cache?

You mean over all vcpus? Or over all VMs?

Totally global.  As in 'static struct kmem_cache *kvm_hpte_cache;'.

What would be the benefit?

Less and simpler code, better reporting through slabtop, less wastage of partially allocated slab pages.

Because this way they don't interfere. An operation on one vCPU doesn't affect another. There's also no locking necessary this way.


The slab writers have solved this for everyone, not just us. kmem_cache_alloc() will usually allocate from a per-cpu cache, so there is no cross-CPU interference and no locking. See ____cache_alloc().

If there's a problem in kmem_cache_alloc(), solve it there, don't introduce workarounds.

So you would still keep different hash arrays and everything, just allocate the objects from a global pool?

Yes.

I still fail to see how that benefits anyone.

See above.
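
For concreteness, the global variant would look roughly like this (a
sketch only; the sysinit/sysexit hook names are illustrative, not from
the patch):

    static struct kmem_cache *kvm_hpte_cache;

    /* created once at module load instead of once per vcpu */
    int kvmppc_mmu_hpte_sysinit(void)
    {
        kvm_hpte_cache = kmem_cache_create("kvm-spt",
                sizeof(struct hpte_cache), sizeof(struct hpte_cache),
                0, NULL);
        return kvm_hpte_cache ? 0 : -ENOMEM;
    }

    void kvmppc_mmu_hpte_sysexit(void)
    {
        kmem_cache_destroy(kvm_hpte_cache);
    }

    /* Each vcpu keeps its own hash arrays; only the object pool is
     * shared. kmem_cache_alloc() normally satisfies this from a
     * per-cpu array, so vcpus running on different cpus don't
     * contend on a lock. */
    static struct hpte_cache *alloc_hpte(void)
    {
        return kmem_cache_zalloc(kvm_hpte_cache, GFP_KERNEL);
    }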

--
error compiling committee.c: too many arguments to function
