On Sun, Feb 25, 2007 at 11:41:54AM +0100, Pavel Machek ([EMAIL PROTECTED])
wrote:
> Hi!
>
> > I've done so, with some interesting results. Source on
> > http://ds9a.nl/tmp/recvtimings.c - be careful to adjust the '3000' divider
> > to your CPU frequency if you care about absolute numbers!
> >
> > These are two groups, each consisting of 10 consecutive nonblocking UDP
> > recvfroms
Arjan van de Ven wrote:
> also.. running "vmstat 3" and looking at the "cs" column is interesting;
> it shouldn't be above 50 or so in idle (well not above 10 but our
> userland stinks too much for that)
I average 6 or so with my normal configuration.
Chuck "kill the daemons" Ebbert
On Wed, Feb 21, 2007 at 02:06:34PM +0300, Evgeniy Polyakov wrote:
> Here is data for 50-byte reads on an essentially idle machine
> (Core Duo, 2.4 GHz):
>
> delta for syscall: 3326961404-3326969261: 7857 cycles = 3.273750 us
Can you oprofile it too?
-Andi
Here is data for 50-byte reads on an essentially idle machine
(Core Duo, 2.4 GHz):
delta for syscall: 3326961404-3326969261: 7857 cycles = 3.273750 us
delta for syscall: 3326975687-3326980979: 5292 cycles = 2.205000 us
delta for syscall: 3327199967-3327205583: 5616 cycles = 2.34 us
delta for
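For readers who want to reproduce numbers like these, here is a minimal sketch
of the measurement technique (my reconstruction, not the recvtimings.c linked
earlier in the thread): read the TSC around each nonblocking recvfrom() and
divide the cycle delta by the CPU clock in MHz -- the '3000' divider mentioned
above, or roughly 2400 for this Core Duo -- to get microseconds. The port
number, buffer size, and iteration count below are arbitrary choices.

/* Hedged sketch, not the original recvtimings.c: time consecutive
 * nonblocking recvfrom() calls with the TSC.  CPU_MHZ is an assumption;
 * set it to your clock in MHz (e.g. 3000 or 2400) for real microseconds. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define CPU_MHZ 2400.0          /* cycles per microsecond */

static inline uint64_t rdtsc(void)
{
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
        char buf[64];
        struct sockaddr_in sin;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(5300);     /* arbitrary local test port */
        bind(fd, (struct sockaddr *)&sin, sizeof(sin));
        fcntl(fd, F_SETFL, O_NONBLOCK);

        for (int i = 0; i < 10; i++) {
                uint64_t t1 = rdtsc();
                recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL); /* EAGAIN when idle */
                uint64_t t2 = rdtsc();
                printf("delta for syscall: %llu-%llu: %llu cycles = %f us\n",
                       (unsigned long long)t1, (unsigned long long)t2,
                       (unsigned long long)(t2 - t1), (t2 - t1) / CPU_MHZ);
        }
        close(fd);
        return 0;
}

Built with something like gcc -O2 -std=gnu99, the first call in a burst
typically shows the cache-cold spike discussed in this thread.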
On 2/21/07, bert hubert <[EMAIL PROTECTED]> wrote:
> I'm trying to figure out which processes have the most impact; I had already
> killed anything non-essential. But that still leaves 140 pids.
> Bert
That sounds like way too many pids. I run a script to shut down processes
when I do testing as
> I'm trying to figure out which processes have the most impact; I had already
> killed anything non-essential. But that still leaves 140 pids.
Btw, if you have systemtap on your system you can see who is doing evil with
http://www.fenrus.org/cstop.stp
also.. running "vmstat 3" and looking at the "cs" column is interesting;
it shouldn't be above 50 or so in idle (well not above 10 but our
userland stinks too much for that)
I measure a huge slope, however. Starting at 1 usec for back-to-back system
calls, the cost rises to 2 usec when each call is preceded by counting to 20
million, and hits 4 usec after counting to 110 million.
The graph, with semi-scientific error bars, is at
http://ds9a.nl/tmp/recvfrom-usec-vs-wait.png
The code to gene
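For completeness, a hedged reconstruction of that interleaving experiment
(this is not bert's actual code; I assume the counting is a plain busy loop,
the timing is done with the TSC as before, and the sweep values below merely
bracket the counts mentioned above). The point is that counting to N between
calls lets the rest of the system run and disturb the caches, so the per-call
cycle count should climb with N:

/* Sketch of the "interleave syscalls with counting" benchmark: busy-count
 * to various values between timed nonblocking recvfrom() calls and report
 * the average cost per call.  Reconstruction under stated assumptions. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static inline uint64_t rdtsc(void)
{
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
        long waits[] = { 0, 1000000, 20000000, 110000000 };
        char buf[64];
        struct sockaddr_in sin;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(5301);             /* arbitrary local test port */
        bind(fd, (struct sockaddr *)&sin, sizeof(sin));
        fcntl(fd, F_SETFL, O_NONBLOCK);

        for (unsigned w = 0; w < sizeof(waits) / sizeof(waits[0]); w++) {
                uint64_t total = 0;
                int calls = 20;

                for (int i = 0; i < calls; i++) {
                        for (volatile long c = 0; c < waits[w]; c++)
                                ;               /* the interleaved count */
                        uint64_t t1 = rdtsc();
                        recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
                        total += rdtsc() - t1;
                }
                printf("count to %9ld between calls: %6.0f cycles/recvfrom\n",
                       waits[w], (double)total / calls);
        }
        close(fd);
        return 0;
}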
On Tue, Feb 20, 2007 at 02:02:00PM -0800, Rick Jones wrote:
> The slope appears to be flattening-out the farther out to the right it
> goes. Perhaps that is the length of time it takes to take all the
> requisite cache misses.
The rate of flattening out appears to correlate with the number of
On Tue, 20 Feb 2007 21:45:05 +0100
bert hubert <[EMAIL PROTECTED]> wrote:
> On Tue, Feb 20, 2007 at 02:40:40PM -0500, Benjamin LaHaise wrote:
>
> > Make sure your system is idle. Userspace bloat means that *lots* of idle
> > activity occurs in between timer ticks on recent distributions -- all
On Tue, Feb 20, 2007 at 02:40:40PM -0500, Benjamin LaHaise wrote:
> Make sure your system is idle. Userspace bloat means that *lots* of idle
> activity occurs in between timer ticks on recent distributions -- all those
You hit the nail on the head. I had previously measured with X shut down,
b
On Tue, Feb 20, 2007 at 08:33:20PM +0100, bert hubert wrote:
> I'm investigating this further for other system calls. It might be that my
> measurements are off, but it appears even a slight delay between calls
> incurs a large penalty.
Make sure your system is idle. Userspace bloat means that *lots* of idle
activity occurs in between timer ticks on recent distributions -- all those
On Tue, Feb 20, 2007 at 09:48:59PM +0300, Evgeniy Polyakov wrote:
> Likely the first overhead is related to cache population or gamma-ray
> radiation. If it happens only once (it does in my test), then everything
> is OK I think. Bert, how frequently do you get that long recvfrom()?
I have plotted the avera
On Tue, Feb 20, 2007 at 01:42:42PM -0500, Josef Sipek ([EMAIL PROTECTED]) wrote:
> A better thing would be to use getuid (it turns into just a return with a
> memory dereference). I ran it on my 3.06GHz P4 (HT, but only UP kernel),
> PREEMPT, HZ=1000...
>
> 3.290196 0.470588 0.402614 0.396078 0.3
On Tue, Feb 20, 2007 at 07:41:25PM +0300, Evgeniy Polyakov wrote:
> On Tue, Feb 20, 2007 at 05:27:14PM +0100, bert hubert ([EMAIL PROTECTED])
> wrote:
> > I've done so, with some interesting results. Source on
> > http://ds9a.nl/tmp/recvtimings.c - be careful to adjust the '3000' divider
> > to your CPU frequency if you care about absolute numbers!
On Tue, Feb 20, 2007 at 08:11:20PM +0300, Evgeniy Polyakov ([EMAIL PROTECTED])
wrote:
> I would try it today - but it is a bit late in Moscow already - and
> there are some things to complete yet. So, tomorrow I will create a patch
> and run it, but I seriously doubt that there is _that_ high per-
On Tue, Feb 20, 2007 at 06:02:32PM +0100, bert hubert ([EMAIL PROTECTED]) wrote:
> On Tue, Feb 20, 2007 at 07:41:25PM +0300, Evgeniy Polyakov wrote:
>
> > It can be a recvfrom-only problem - syscall overhead on my p4 (core duo,
> > debian testing) is about 300 usec - to test I ran read('/dev/zero', &data,
> > 0) in a loop.
On Tue, Feb 20, 2007 at 07:41:25PM +0300, Evgeniy Polyakov wrote:
> It can be a recvfrom-only problem - syscall overhead on my p4 (core duo,
> debian testing) is about 300 usec - to test I ran read('/dev/zero', &data,
> 0) in a loop.
nsec I assume?
The usec numbers for read(fd, &c, 0) where fd is /d
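A small sketch of the zero-byte-read probe being discussed (a reconstruction,
not Evgeniy's or bert's exact loop): read(fd, &c, 0) on /dev/zero returns
immediately without copying any data, so what you measure is essentially bare
syscall entry and exit. The iteration count is arbitrary, and getuid() can be
substituted for an even smaller baseline, as suggested elsewhere in the
thread.

/* Hedged sketch: average cost of a zero-byte read() of /dev/zero, i.e.
 * roughly the bare syscall entry/exit time.  Link with -lrt on older glibc. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define ITERS 1000000

int main(void)
{
        char c;
        int fd = open("/dev/zero", O_RDONLY);
        struct timespec a, b;

        clock_gettime(CLOCK_MONOTONIC, &a);
        for (int i = 0; i < ITERS; i++)
                read(fd, &c, 0);        /* count 0: returns at once, no data copied */
        clock_gettime(CLOCK_MONOTONIC, &b);

        double ns = (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
        printf("%.1f ns per read(fd, &c, 0)\n", ns / ITERS);
        /* swap read() for getuid() to time an even more minimal syscall */

        close(fd);
        return 0;
}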
On Tuesday 20 February 2007 17:27, bert hubert wrote:
> On Tue, Feb 20, 2007 at 11:50:13AM +0100, Andi Kleen wrote:
> > P4s are pretty slow at taking locks (or rather doing atomic operations)
> > and there are several of them in this path. You could try it with a UP
> > kernel. Actually hotunplugging the other virtual CPU should be sufficient
> > with recent kernels.
On Tue, Feb 20, 2007 at 05:27:14PM +0100, bert hubert ([EMAIL PROTECTED]) wrote:
> I've done so, with some interesting results. Source on
> http://ds9a.nl/tmp/recvtimings.c - be careful to adjust the '3000' divider
> to your CPU frequency if you care about absolute numbers!
>
> These are two groups, each consisting of 10 consecutive nonblocking UDP
> recvfroms
On Tue, Feb 20, 2007 at 11:50:13AM +0100, Andi Kleen wrote:
> P4s are pretty slow at taking locks (or rather doing atomic operations)
> and there are several of them in this path. You could try it with a UP
> kernel. Actually hotunplugging the other virtual CPU should be sufficient
> with recent kernels.