On Wed, 6 Feb 2008, Andrew Morton wrote:
> > @@ -1357,17 +1366,22 @@ static struct page *get_partial(struct k
> > static void unfreeze_slab(struct kmem_cache *s, struct page *page, int tail)
> > {
> > 	struct kmem_cache_node *n = get_node(s, page_to_nid(page));
> > +	struct kmem_cache_
On Mon, 4 Feb 2008 22:20:04 -0800 (PST) Christoph Lameter <[EMAIL PROTECTED]>
wrote:
> The statistics provided here allow the monitoring of allocator behavior
> at the cost of some (minimal) loss of performance. Counters are placed in
> SLUB's per cpu data structure that is already written to by other code.
> The per cpu structure may be extended by the statistics to be more than
> one cacheline.
On Tue, 5 Feb 2008, Pekka Enberg wrote:
> > We could do that. Any idea how to display that kind of information in a
> > meaningful way? Parameter conventions for slabinfo?
>
> We could just print out one total summary and one summary for each CPU (and
> maybe show % of total allocations/frees).
On Tue, 5 Feb 2008, Eric Dumazet wrote:
> > Well we could do the same as for numa stats. Output the global count and
> > then add
> >
> > c=count
> >
>
> Yes, or the reverse, to avoid two loops and possible sum errors (the sum
> of the c=count values differing from the global count)
The numa output uses on
On Tue, 5 Feb 2008, Pekka J Enberg wrote:
> Heh, sure, but it's not exported to userspace which is required for
> slabinfo to display the statistics.
Well we could do the same as for numa stats. Output the global count and
then add
c=count
?
On Tue, 5 Feb 2008, Eric Dumazet wrote:
> > Looks good, but I am wondering if we want to make the statistics per-CPU so
> > that we can see the kmalloc/kfree ping-pong of, for example, hackbench
> > better?
>
> AFAIK Christoph's patch already has percpu statistics :)
Heh, sure, but it's not exported to userspace, which is required for
slabinfo to display the statistics.
Hi Christoph,
On Mon, 4 Feb 2008, Christoph Lameter wrote:
> The statistics provided here allow the monitoring of allocator behavior
> at the cost of some (minimal) loss of performance. Counters are placed in
> SLUB's per cpu data structure that is already written to by other code.
Looks good, but I am wondering if we want to make the statistics per-CPU so
that we can see the kmalloc/kfree ping-pong of, for example, hackbench
better?