Bruce Evans wrote:
Try profiling it on another type of CPU, to get different performance
counters but hopefully not very different stalls. If the other CPU doesn't
stall at all, put another black mark against P4 and delete your copies of
it :-).
I have tried to profile the same system with th
Julian Elischer <[EMAIL PROTECTED]> writes:
> Dag-Erling Smørgrav <[EMAIL PROTECTED]> writes:
> > Julian Elischer <[EMAIL PROTECTED]> writes:
> > > you mean FILO or LIFO right?
> > Uh, no. You want to reuse the last-freed object, as it is most
> > likely to still be in cache.
> exactly. FILO or LIFO
On Mon, 4 Feb 2008, Alexander Motin wrote:
Kris Kennaway wrote:
You can look at the raw output from pmcstat, which is a collection of
instruction pointers that you can feed to e.g. addr2line to find out
exactly where in those functions the events are occurring. This will often
help to track down the precise causes.
Kris Kennaway wrote:
You can look at the raw output from pmcstat, which is a collection of
instruction pointers that you can feed to e.g. addr2line to find out
exactly where in those functions the events are occurring. This will
often help to track down the precise causes.
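The workflow Kris describes can be sketched as a short command sequence. This is a hedged illustration, not taken from the thread: the event name, log paths, and the sample address are examples only.

```shell
# Sample the kernel for 30 seconds on a P4 resource-stall event,
# writing raw samples to a log file (event name and paths are examples).
pmcstat -S p4-resource-stall -O /tmp/samples.out sleep 30

# Post-process the raw sample log into a human-readable callgraph.
pmcstat -R /tmp/samples.out -G /tmp/callgraph.txt

# Resolve one raw instruction pointer from the log to a function name
# and source line; 0xc06f3a21 is a made-up example address.
addr2line -f -e /boot/kernel/kernel 0xc06f3a21
```

The kernel needs debug symbols for addr2line to map addresses back to source lines.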
Thanks to the hint
Dag-Erling Smørgrav wrote:
Julian Elischer <[EMAIL PROTECTED]> writes:
Robert Watson <[EMAIL PROTECTED]> writes:
be a good time to try to revalidate that. Basically, the goal would
be to make the pcpu cache FIFO as much as possible as that maximizes
the chances that the newly allocated object already has lines in the cache.
Julian Elischer <[EMAIL PROTECTED]> writes:
> Robert Watson <[EMAIL PROTECTED]> writes:
> > be a good time to try to revalidate that. Basically, the goal would
> > be to make the pcpu cache FIFO as much as possible as that maximizes
> > the chances that the newly allocated object already has lines in the cache.
Robert Watson wrote:
be a good time to try to revalidate that. Basically, the goal would be
to make the pcpu cache FIFO as much as possible as that maximizes the
you mean FILO or LIFO right?
chances that the newly allocated object already has lines in the cache.
It's a fairly trivial tweak
On Sat, Feb 02, 2008 at 11:31:31AM +0200, Alexander Motin wrote:
>To check UMA dependency I have made a trivial one-element cache which in my
>test case allows avoiding two of four allocations per packet.
You should be able to implement this lockless using atomic(9). I haven't
verified it, but
Robert Watson wrote:
Basically, the goal would be
to make the pcpu cache FIFO as much as possible as that maximizes the
chances that the newly allocated object already has lines in the cache.
Why FIFO? I think LIFO (stack) should be better for this goal as the
last freed object has more chances to still be in the cache.
On Sat, 2 Feb 2008, 23:05, Alexander Motin wrote:
> Robert Watson wrote:
>> Hence my request for drilling down a bit on profiling -- the question
>> I'm asking is whether profiling shows things running or taking time that
>> shouldn't be.
>
> I have not yet understood why it happens, but hwpmc
On Sun, 3 Feb 2008, Alexander Motin wrote:
Robert Watson wrote:
Basically, the goal would be to make the pcpu cache FIFO as much as
possible as that maximizes the chances that the newly allocated object
already has lines in the cache.
Why FIFO? I think LIFO (stack) should be better for this
On Sat, 2 Feb 2008, Kris Kennaway wrote:
Alexander Motin wrote:
Robert Watson wrote:
Hence my request for drilling down a bit on profiling -- the question I'm
asking is whether profiling shows things running or taking time that
shouldn't be.
I have not yet understood why it happens, but
Alexander Motin wrote:
Robert Watson wrote:
Hence my request for drilling down a bit on profiling -- the question
I'm asking is whether profiling shows things running or taking time
that shouldn't be.
I have not yet understood why it happens, but hwpmc shows a huge
amount of "p4-resource-stall"s in UMA functions
Robert Watson wrote:
Hence my request for drilling down a bit on profiling -- the question
I'm asking is whether profiling shows things running or taking time that
shouldn't be.
I have not yet understood why it happens, but hwpmc shows a huge
amount of "p4-resource-stall"s in UMA functions
On Sat, Feb 02, 2008 at 09:56:42PM +0200, Alexander Motin wrote:
>Peter Jeremy writes:
>> On Sat, Feb 02, 2008 at 11:31:31AM +0200, Alexander Motin wrote:
>>> To check UMA dependency I have made a trivial one-element cache which in
>>> my test case allows avoiding two of four allocations per packet
Peter Jeremy writes:
On Sat, Feb 02, 2008 at 11:31:31AM +0200, Alexander Motin wrote:
To check UMA dependency I have made a trivial one-element cache which in my
test case allows avoiding two of four allocations per packet.
You should be able to implement this lockless using atomic(9). I haven't
> Thanks, I have already found this. The only problem was that by
> default it counts cycles only when both logical cores are active while
> one of my cores was halted.
Did you try the 'active' event modifier: "p4-global-power-events,active=any"?
> Sampling on this, the profiler showed results clos
> I have tried it for measuring the number of instructions. But I doubt
> that instructions are a correct counter for performance measurement, as
> different instructions may have very different execution times depending
> on many reasons, like cache misses and current memory traffic. I have
> tried
Joseph Koshy wrote:
You cannot sample with the TSC since the TSC does not interrupt the CPU.
For CPU cycles you would probably want to use "p4-global-power-events";
see pmc(3).
Thanks, I have already found this. The only problem was that by
default it counts cycles only when both logical cores are active while
one of my cores was halted.
Robert Watson wrote:
I guess the question is: where are the cycles going? Are we suffering
excessive cache misses in managing the slabs? Are you effectively
"cycling through" objects rather than using a smaller set that fits
better in the cache?
In my test setup only several objects from zo
On Sat, 2 Feb 2008, Alexander Motin wrote:
Robert Watson wrote:
I guess the question is: where are the cycles going? Are we suffering
excessive cache misses in managing the slabs? Are you effectively "cycling
through" objects rather than using a smaller set that fits better in the
cache?
On Fri, 1 Feb 2008, Alexander Motin wrote:
Robert Watson wrote:
It would be very helpful if you could try doing some analysis with hwpmc --
"high resolution profiling" is of increasingly limited utility with modern
You mean "of increasingly greater utility with modern CPUs". Low resolution
k
Hi.
Robert Watson wrote:
It would be very helpful if you could try doing some analysis with hwpmc
-- "high resolution profiling" is of increasingly limited utility with
modern CPUs, where even a high frequency timer won't run very often.
It's also quite subject to cycle events that align with
On Fri, 1 Feb 2008, Alexander Motin wrote:
That was actually my second question. As there are only 512 items by default
and they are small in size, I can easily preallocate them all on boot. But is
it a good way? Why can't UMA do just the same when I have created a zone with
a specified element size
Alexander Motin wrote:
Kris Kennaway writes:
Alexander Motin wrote:
Alexander Motin writes:
While profiling netgraph operation on a UP HEAD router I have found
that a huge amount of time is spent on memory allocation/deallocation:
I have forgotten to mention that it was mostly a GENERIC kernel just built
Julian Elischer writes:
Alexander Motin wrote:
Hi.
While profiling netgraph operation on a UP HEAD router I have found that
a huge amount of time is spent on memory allocation/deallocation:
0.14 0.05 132119/545292 ip_forward [12]
0.14 0.05 133127/545292 fxp_add_rfabuf [18]
Kris Kennaway writes:
Alexander Motin wrote:
Alexander Motin writes:
While profiling netgraph operation on a UP HEAD router I have found
that a huge amount of time is spent on memory allocation/deallocation:
I have forgotten to mention that it was mostly a GENERIC kernel just built
without INVARIANTS, WITNESS and SMP but with 'profile 2'.
Alexander Motin wrote:
Julian Elischer writes:
Alexander Motin wrote:
Hi.
While profiling netgraph operation on a UP HEAD router I have found
that a huge amount of time is spent on memory allocation/deallocation:
0.14 0.05 132119/545292 ip_forward [12]
0.14 0.05 133127/545292 fxp_add_rfabuf [18]
Alexander Motin writes:
While profiling netgraph operation on a UP HEAD router I have found that
a huge amount of time is spent on memory allocation/deallocation:
I have forgotten to mention that it was mostly a GENERIC kernel just built
without INVARIANTS, WITNESS and SMP but with 'profile 2'.
Hi.
While profiling netgraph operation on a UP HEAD router I have found that
a huge amount of time is spent on memory allocation/deallocation:
0.14 0.05 132119/545292 ip_forward [12]
0.14 0.05 133127/545292 fxp_add_rfabuf [18]
0.27 0.10 266236/545292 n
Alexander Motin wrote:
Hi.
While profiling netgraph operation on a UP HEAD router I have found that
a huge amount of time is spent on memory allocation/deallocation:
0.14 0.05 132119/545292 ip_forward [12]
0.14 0.05 133127/545292 fxp_add_rfabuf [18]
0.27 0.
Alexander Motin wrote:
Alexander Motin writes:
While profiling netgraph operation on a UP HEAD router I have found that
a huge amount of time is spent on memory allocation/deallocation:
I have forgotten to mention that it was mostly a GENERIC kernel just built
without INVARIANTS, WITNESS and SMP but with 'profile 2'.
Alexander Motin wrote:
Hi.
While profiling netgraph operation on a UP HEAD router I have found that
a huge amount of time is spent on memory allocation/deallocation:
0.14 0.05 132119/545292 ip_forward [12]
0.14 0.05 133127/545292 fxp_add_rfabuf [18]
0.27 0.
On Mon, 25 Apr 2005, Robert Watson wrote:
On Mon, 25 Apr 2005, Robert Watson wrote:
I now have updated versions of these patches, which correct some
inconsistencies in approach (universal use of curcpu now, for example),
remove some debugging code, etc. I've received relatively little
performance feedback on them, and would appreciate it if I could get some.
On Mon, 25 Apr 2005, Robert Watson wrote:
I now have updated versions of these patches, which correct some
inconsistencies in approach (universal use of curcpu now, for example),
remove some debugging code, etc. I've received relatively little
performance feedback on them, and would appreciate it if I could get some.
I now have updated versions of these patches, which correct some
inconsistencies in approach (universal use of curcpu now, for example),
remove some debugging code, etc. I've received relatively little
performance feedback on them, and would appreciate it if I could get some.
:-) Especially a
On Sun, 17 Apr 2005, Robert Watson wrote:
I'd like to confirm that for the first two patches, for interesting
workloads, performance generally improves, and that stability doesn't
degrade. For the third patch, I'd like to quantify the cost of the
changes for interesting workloads, and likewise