be open source. It's a simulated web client and web server, running
inside the kernel. It's good for load-testing and performance-testing
many kinds of network devices. With two 1-GHz PIII boxes (one acting
as the client and the other acting as the server) it can generate
around 5
In message [EMAIL PROTECTED], John Polstra writes:
Mike Smith [EMAIL PROTECTED] wrote:
It's not necessarily caused by interrupt latency. Here's the assumption
that's being made.
[...]
Thanks for the superb explanation! I appreciate it.
My apologies for never getting the timecounter
In message: [EMAIL PROTECTED]
Poul-Henning Kamp [EMAIL PROTECTED] writes:
: But the i8254 is a piece of shit in this context, and due to
: circumstances (apm being enabled) most machines end up using the
: i8254 by default.
:
: My (and I believe Bruce's) diagnosis so far is that most
In message [EMAIL PROTECTED], M. Warner Losh writes:
In message: [EMAIL PROTECTED]
Poul-Henning Kamp [EMAIL PROTECTED] writes:
: But the i8254 is a piece of shit in this context, and due to
: circumstances (apm being enabled) most machines end up using the
: i8254 by default.
:
: My
In article [EMAIL PROTECTED],
John Baldwin [EMAIL PROTECTED] wrote:
like, "If X is never locked out for longer than Y, this problem
cannot happen." I'm looking for definitions of X and Y. X might be
hardclock() or softclock() or non-interrupt kernel processing. Y
would be some measure
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], John Polstra writes:
That's the global variable named timecounter, right? I did notice
one potential problem: that variable is not declared volatile. So
in this part ...
This may be a
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], John Polstra writes:
In article [EMAIL PROTECTED],
John Baldwin [EMAIL PROTECTED] wrote:
like, "If X is never locked out for longer than Y, this problem
cannot happen." I'm looking
In message [EMAIL PROTECTED], John Polstra writes:
Agreed. But in the cases I'm worrying about right now, the
timecounter is the TSC.
Now, *that* is very interesting, how reproducible is it ?
Can you try to MFC rev 1.111 and see if that changes anything ?
--
Poul-Henning Kamp | UNIX
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], John Polstra writes:
Agreed. But in the cases I'm worrying about right now, the
timecounter is the TSC.
Now, *that* is very interesting, how reproducible is it ?
I can reproduce it
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], John Polstra writes:
Can you try to MFC rev 1.111 and see if that changes anything ?
That produced some interesting results. I am still testing under
very heavy network interrupt load.
In article [EMAIL PROTECTED],
John Polstra [EMAIL PROTECTED] wrote:
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], John Polstra writes:
Can you try to MFC rev 1.111 and see if that changes anything ?
That produced some
In message [EMAIL PROTECTED], John Polstra writes:
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], John Polstra writes:
Can you try to MFC rev 1.111 and see if that changes anything ?
That produced some interesting results. I am still
In message [EMAIL PROTECTED], John Polstra writes:
In article [EMAIL PROTECTED],
John Polstra [EMAIL PROTECTED] wrote:
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], John Polstra writes:
Can you try to MFC rev 1.111 and see if
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], John Polstra writes:
In article [EMAIL PROTECTED],
John Polstra [EMAIL PROTECTED] wrote:
Another interesting thing is that the jumps are always 7.7x seconds
back -- usually 7.79
Can you try to MFC rev 1.111 and see if that changes anything ?
That produced some interesting results. I am still testing under
very heavy network interrupt load. With the change from 1.111, I
still get the microuptime messages about as often. But look how
much larger the reported
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
Sanity-check: this is NOT a multi-CPU system, right ?
Right. These are all single-CPU systems with non-SMP -stable
kernels.
John
--
John Polstra
John D. Polstra Co., Inc.  Seattle,
In message [EMAIL PROTECTED], John Polstra writes:
In article [EMAIL PROTECTED],
This may be a problem, I have yet to see GCC make different code for
that but I should probably have committed the volatile anyway.
It should be committed, but it is not causing the problem in this
case. I
In message [EMAIL PROTECTED], Nate Williams writes:
Can you try to MFC rev 1.111 and see if that changes anything ?
That produced some interesting results. I am still testing under
very heavy network interrupt load. With the change from 1.111, I
still get the microuptime messages about
In message [EMAIL PROTECTED], John Polstra writes:
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], John Polstra writes:
In article [EMAIL PROTECTED],
John Polstra [EMAIL PROTECTED] wrote:
Another interesting thing is that the jumps
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], John Polstra writes:
Yes, I think you're onto something now. It's a 550 MHz machine, so
the TSC increments every 1.82 nsec. And 1.82 nsec * 2^32 is 7.81
seconds. :-)
In that case I'm
In message [EMAIL PROTECTED], Nate Williams writes:
How are issues (1) and (3) above different?
ps. I'm just trying to understand, and am *NOT* trying to start a
flame-war. :) :) :)
If the starvation happens to hardclock() or rather tc_windup() the effect
will be cumulative and show up in
In message [EMAIL PROTECTED], John Polstra writes:
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], John Polstra writes:
Yes, I think you're onto something now. It's a 550 MHz machine, so
the TSC increments every 1.82 nsec. And 1.82
How are issues (1) and (3) above different?
ps. I'm just trying to understand, and am *NOT* trying to start a
flame-war. :) :) :)
If the starvation happens to hardclock() or rather tc_windup() the effect
will be cumulative and show up in permanent jumps in the output of date
for
Can you try to MFC rev 1.111 and see if that changes anything ?
That produced some interesting results. I am still testing under
very heavy network interrupt load. With the change from 1.111, I
still get the microuptime messages about as often. But look how
much larger the
OK, adding the splhigh() around the body of microuptime seems to have
solved the problem. After 45 minutes of running the same test as
before, I haven't gotten a single message. If I get one later, I'll
let you know.
I'm testing that now. But for how long would microuptime have to
be
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], John Polstra writes:
I don't follow that. As I read the code, the current timecounter
is only advanced every second -- not every 1/HZ seconds. Why should
more of them be needed when HZ is
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], John Polstra writes:
Could you try this combination:
NTIMECOUNTER = HZ (or even 5 * HZ)
tco_method = 0
no splhigh protection for microuptime() ?
After 25 minutes
In message [EMAIL PROTECTED], John Polstra writes:
After 25 minutes testing that with NTIMECOUNTER=5, I haven't
gotten any microuptime messages. So it appears that my problem was
just that the current timecounter wrapped all the way around the ring
while microuptime was interrupted, due to
In article [EMAIL PROTECTED],
Poul-Henning Kamp [EMAIL PROTECTED] wrote:
Well, either way I will commit the volatile and this NTIMECOUNTER to
-current now, it's certainly better than what is there now.
Great, thanks.
Thanks for the help, I owe you one at BSDcon!
I'll look forward to it!
Btw, regarding the volatile thing:
If I do
extern volatile struct timecounter *timecounter;
microtime()
{
	struct timecounter *tc;

	tc = timecounter;
The compiler complains about losing the volatile qualifier.
How do I tell it that it is
In article [EMAIL PROTECTED],
Bakul Shah [EMAIL PROTECTED] wrote:
[I see that jdp has answered your question but] cdecl is your friend!
$ cdecl
Type `help' or `?' for help
cdecl explain volatile struct timecounter *timecounter
declare timecounter as pointer to volatile struct timecounter
On Tue, Feb 05, 2002 at 02:42:38PM -0800, Bakul Shah wrote:
PS: Chances are most people don't have cdecl any more. You
can get it like this:
You can also get it like this:
cd /usr/ports/devel/cdecl ; make install
which I just went and did. Pretty helpful utility :)
--K
PS: Chances are most people don't have cdecl any more. You
can get it like this:
cd /usr/ports/devel/cdecl ; make install
:)
-Anthony.
Is C a great language, or what? ;-)
Nah, just mediocre even when it comes to obfuscation!
Have you played with unlambda?!
The way I always remember it is that you read the declaration
inside-out: starting with the variable name and then heading toward
the outside while obeying the
John Polstra wrote:
After 25 minutes testing that with NTIMECOUNTER=5, I haven't
gotten any microuptime messages. So it appears that my problem was
just that the current timecounter wrapped all the way around the ring
while microuptime was interrupted, due to the high HZ value and the
In message: [EMAIL PROTECTED]
John Polstra [EMAIL PROTECTED] writes:
: I'm testing that now. But for how long would microuptime have to
: be interrupted to make this happen? Surely not 7.81 seconds! On
: this same machine I have a curses application running which is
: updating the
I'm trying to understand the timecounter code, and in particular the
reason for the "microuptime went backwards" messages which I see on
just about every machine I have, whether running -stable or -current.
This problem is usually attributed to too much interrupt latency. My
question is, how much
On Mon, Feb 04, 2002 at 01:21:25PM -0800, John Polstra wrote:
I'm trying to understand the timecounter code, and in particular the
reason for the "microuptime went backwards" messages which I see on
just about every machine I have, whether running -stable or -current.
I see them everywhere with
In article [EMAIL PROTECTED],
Dominic Marks [EMAIL PROTECTED] wrote:
On Mon, Feb 04, 2002 at 01:21:25PM -0800, John Polstra wrote:
I'm trying to understand the timecounter code, and in particular the
reason for the "microuptime went backwards" messages which I see on
just about every
In article [EMAIL PROTECTED],
Dominic Marks [EMAIL PROTECTED] wrote:
On Mon, Feb 04, 2002 at 01:21:25PM -0800, John Polstra wrote:
I'm trying to understand the timecounter code, and in particular the
reason for the "microuptime went backwards" messages which I see on
just about every
In article [EMAIL PROTECTED],
Mike Smith [EMAIL PROTECTED] wrote:
It's not necessarily caused by interrupt latency. Here's the assumption
that's being made.
[...]
Thanks for the superb explanation! I appreciate it.
There is a ring of timecounter structures, of some size. In testing,
On 04-Feb-02 John Polstra wrote:
In article [EMAIL PROTECTED],
Dominic Marks [EMAIL PROTECTED] wrote:
On Mon, Feb 04, 2002 at 01:21:25PM -0800, John Polstra wrote:
I'm trying to understand the timecounter code, and in particular the
reason for the "microuptime went backwards" messages