On Tue, 30 Mar 1999 [EMAIL PROTECTED] wrote:

> On Tue, Mar 30, 1999 at 09:30:45PM -0600, Robert M. Hyatt wrote:
> > > There is no conceivable reason to want that. Just pretend your OS is
> > > slightly less efficient, and 5% turns into 0%.
> > 
> > have you ever spent time 'tweaking' a program to make it run faster?
> > If so, you'd know why "I" want this.  Because some changes are going 
> > after a 1-2% improvement.  Other compute-bound things that get slipped
> > in and out steal cpu time, zap the cache, and possibly even cause paging
> > that wouldn't happen if they were 'quiet'.
> 
> But the problem is that you want a "quiet" machine that is also running a 
> compute bound program that you ran. There is a simple userspace solution.
> You have a benchmark setup program that stops your background unwanted 
> processes, forks, and waits, while the child executes your benchmark.
> When the child exits, the parent restarts the background process. 
> This works, it is a direct solution to your problem, and it does not
> introduce any additional costs for anyone else.

partially correct.  But suppose this machine is used by others at
times, and _they_ want to do this.  Now I need a setuid program to
let them suspend _my_ 'background' process while they do their
testing, and another setuid program to suspend _their_ background
task when I want to test.

yes it can be done.  But this 'idea' is already quite popular in the
realm of 'cycle stealing'...   where complex programs are written to
detect any sort of foreground activity and stop 'background' processes
when it happens.

I don't see a serious reason why this can't be done, unless it is just
an issue of 'enough don't want it'.  But I'd bet the RCS guys and lots of
others would use it if it were available.

I'm hardly 'demanding' anything here, so don't take this the wrong way.
But it would be useful in some specific circumstances.  Yes, there are
user-space workarounds.  However, in this case I am looking for a simple,
fool-proof solution to something that happens regularly on a couple of
machines I control...
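For what it's worth, the user-space wrapper can at least be made hard to get wrong.  A minimal sketch in C (the `quiet_run` name and the single-background-pid assumption are mine, not an existing tool): suspend the background job, run the benchmark as a child, and always resume on the way out, so a failed benchmark can't leave the job stopped and blow 24 hours of run-time:

```c
#include <assert.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical wrapper: stop a background job, run the benchmark
 * with the CPU to itself, then resume the job no matter how the
 * benchmark ends -- so you can't "forget the kill -CONT". */
int quiet_run(pid_t bg, char *const argv[])
{
    kill(bg, SIGSTOP);                 /* quiet the machine */

    pid_t child = fork();
    if (child == 0) {
        execvp(argv[0], argv);         /* run the benchmark */
        _exit(127);                    /* exec failed */
    }

    int status = 0;
    waitpid(child, &status, 0);        /* wait for the benchmark */

    kill(bg, SIGCONT);                 /* always resumed, even on failure */
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}
```

A real version would take the background PID and benchmark command from the command line; a setuid variant would still be needed for the multi-user case described above.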




> 
> 
> > > > anything like that.  I want to say "give it nothing" unless there is
> > > > nothing else running.  Anything else would be 'gravy' but that is,
> > > > and has been, my basic request.  Because I would like to run things
> > > > in the background that might run for days, yet I want to benchmark
> > > > a _real_ 100% compute-bound job (no, no I/O, no system calls, no
> > > > kernel interaction of _any_ kind, just pure computation) and I want
> > > 
> > > But the OS and its daemons still run and they use up some measurable
> > > percentage of the CPU. So your 100% compute bound job does not get 100%
> > > of processor time. Sorry. Use DOS.
> > 
> > I can get 99.9%.  And sorry, dos won't do.  And maybe I don't have a bunch
> > of daemons running, either?
> 
> I don't understand why it is ok to configure your kernel to not run daemons
> but it is too hard to remove user programs that cause similar performance
> hits.
> 

my kernels do run daemons, but no httpd stuff, no www stuff, etc., so there
isn't a lot of junk running.  These machines are number-crunchers that run
some long-running applications and are also used for performance
analysis/tweaking...

getting 99.9% of a machine is OK.  But getting 95% (or less, since there is
nothing that says two or more 'nice 20' jobs can't run together) makes
this more 'sticky'...




> > > > to get valid performance data.  Not losing 5-8% of available cpu
> > > > resources.  Yes I could "kill -STOP" the running process, which is
> > > > what I do now.  But then I can also forget to "kill -CONT" the thing
> > > > and blow 24 hours of potential run-time.
> > > 
> > > So you want to introduce potential deadlocks into the OS  for this critical
> > > problem? Write a script. 
> > 
> > 
> > There doesn't have to be potential deadlocks.  The problem can be fixed.
> 
> How? I don't see a fix that does not have serious effects on everything else.
> 

A lot depends on how the kernel does things.  One solution: if a
'zero-priority' process does a system call, ramp its priority up by at
least 1, so that it will run.  It probably ought to be ramped way up,
since this problem is already ugly in the present code (i.e., a
high-priority task waits on a low-priority task to release something,
yet that low-priority task can't get scheduled to run, etc.)
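A toy user-space model of that rule (the task struct and the priority numbers are made up for illustration, not actual kernel code): on syscall entry, a zero-priority task gets its effective priority ramped way up so it cannot be starved while holding a kernel resource, and on exit it drops back down:

```c
#include <assert.h>

/* Hypothetical task model -- not the Linux scheduler's real data. */
struct task {
    int base_prio;      /* 0 = "run only when nothing else can"  */
    int eff_prio;       /* what the scheduler actually compares  */
    int in_syscall;     /* currently inside the kernel?          */
};

/* On syscall entry, ramp a zero-priority task up so it can run
 * long enough to release whatever it grabs. */
void syscall_enter(struct task *t)
{
    t->in_syscall = 1;
    if (t->base_prio == 0)
        t->eff_prio = 100;          /* "ramped way up" */
}

/* On syscall exit, drop back to the idle priority. */
void syscall_exit(struct task *t)
{
    t->in_syscall = 0;
    t->eff_prio = t->base_prio;
}
```

This is essentially a crude form of priority boosting: the task is never special while it is doing pure computation, only while it might be holding something a high-priority task wants.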

If there is no 'spinning', then as high-priority things get blocked
because that 'hardly ever runs' task has something locked but can't
run, eventually nothing else will be left to run, and that task finally
gets back in to clear the log-jam.

Yes, there are possibly places where a deadlock might occur, but if so,
it would seem to me that these deadlocks could happen already, because
the lowest-priority job on the system can (at present) grab a resource
just as we are discussing.  And it will eventually run to release it.

note that my suggested 'nice 20' doesn't say "only run when the machine
is not running other processes"; it says "only run when all other
processes are <blocked>", which is not quite the same thing.
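In the same toy model, that distinction could be sketched as a pick-next loop (again hypothetical, not kernel code): an idle-class task is chosen the moment every other task is blocked, not only when the run queue is otherwise empty:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical run-queue entry -- illustration only. */
struct proc {
    int idle_class;   /* 1 = the proposed "give it nothing" class */
    int blocked;      /* waiting on I/O, a lock, etc.             */
};

/* Any runnable normal task wins; an idle-class task is picked only
 * when every normal task is blocked.  Blocked != nonexistent, which
 * is the whole point of the proposal. */
struct proc *pick_next(struct proc *rq, int n)
{
    struct proc *idle_pick = NULL;
    for (int i = 0; i < n; i++) {
        if (rq[i].blocked)
            continue;
        if (!rq[i].idle_class)
            return &rq[i];
        if (idle_pick == NULL)
            idle_pick = &rq[i];
    }
    return idle_pick;   /* may be NULL: everyone is blocked */
}
```

So a normal job doing heavy I/O still keeps the idle-class job mostly off the CPU, but the idle-class job soaks up the cycles the normal job can't use while blocked.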



> > Whether enough want it or not is another issue.  But it _has_ been done in
> > the past.  And the solution to the 'deadlock' issue is well-known.
> 
> What solution?
> 
> > If you don't agree with the idea, doesn't bother me.  But don't offer
> > false reasons why it can't be done.  Maybe it _shouldn't_ be done, but
> > that is a different issue.
> 
> Almost everything can be done, if you ignore costs.
> 
> 



true.  But in this case we are talking about fractions of pennies, not
hundreds of dollars, metaphorically speaking...



-
Linux SMP list: FIRST see FAQ at http://www.irisa.fr/prive/mentre/smp-faq/
To Unsubscribe: send "unsubscribe linux-smp" to [EMAIL PROTECTED]
