On Friday, February 22, 2013 2:12:54 pm Ian Lepore wrote:
I'm curious why the concept of scheduling niceness applies only to an
entire process, and it's not possible to have nice threads within a
process. Is there any fundamental reason why it couldn't be supported
with some extra bookkeeping to track niceness per thread?
-- Ian
The problem, IMHO, is that none of this is in any way:
* documented;
* modellable by a user;
* explorable by a user (e.g. by an easy version of schedgraph to explore things in a useful way).
Arnaud raises a valid point - he's given a synthetic benchmark whose
numbers are unpredictable. He's asking why.
Hi,
2012/4/9 Alexander Motin m...@freebsd.org:
[...]
I have a strong feeling that while this test may be interesting for profiling, its results in the first place depend not on how fast the scheduler is, but on the pipe capacity and other such things. Can somebody hint me what
except pipe
On Tue, 10 Apr 2012 12:58:00 -0400
Arnaud Lacombe lacom...@gmail.com wrote:
Let me disagree with your conclusion. If OS A does a task in X seconds, and OS B does the same task in Y seconds, and Y > X, then OS B is just not performing well enough.
Others have pointed out one problem with this
[...] threads everything is stationary as it should. With 9 threads I see regular and random load move between all 8 CPUs. Measurements on a 5-minute run show a deviation of only about 5 seconds. It is the same deviation as I see caused by only scheduling 16 threads on 8 cores without any balancing needed at all. So I believe this code works as it should.
Here is the patch: http://people.freebsd.org/~mav/sched.htt40.patch
I plan this to be a final patch of this series (more to come
On Fri, 2 Mar 2012 19:24:42 -0800, Adrian Chadd adr...@freebsd.org wrote:
He's reporting that your ULE work hasn't improved his (very)
degenerate case.
That's not true!
Thanks!
Adrian
___
freebsd-hackers@freebsd.org mailing list
Right. Is this written up in a PR somewhere explaining the problem in as much depth as you just have?
And thanks for this, it's great to see some further explanation of the
current issues the scheduler faces.
Adrian
On 2 March 2012 23:40, Alexander Motin m...@freebsd.org wrote:
Hi.
On 03/03/12 10:59, Adrian Chadd wrote:
Right. Is this written up in a PR somewhere explaining the problem in as much depth as you just have?
I have no idea. I am new to this area and haven't looked at PRs yet.
And thanks for this, it's great to see some further explanation of the current
On 03.03.2012 17:26, Ivan Klymenko wrote:
I have FreeBSD 10.0-CURRENT #0 r232253M
The patch in r232454 broke my DRM
My system patched http://people.freebsd.org/~kib/drm/all.13.5.patch
After build kernel with only r232454 patch Xorg log contains:
...
[ 504.865] [drm] failed to load kernel module
Hi George,
Have you thought about providing schedgraph traces with your
particular workload?
I'm sure that'll help out the scheduler hackers quite a bit.
Thanks,
Adrian
On 03/02/12 18:06, Adrian Chadd wrote:
[...]
I posted a couple back in December but I haven't created any more
recently:
Hi,
CC'ing mav@, who started this thread.
mav@, can you please take a look at George's traces and see if there's
anything obviously silly going on?
He's reporting that your ULE work hasn't improved his (very) degenerate case.
Thanks!
Adrian
On 2 March 2012 16:14, George Mitchell
Hi.
On 03/03/12 05:24, Adrian Chadd wrote:
[...]
As I can see, my patch has nothing to do with the problem. My
2012/2/27 George Mitchell george+free...@m5p.com:
I finally got around to trying this on a 9.0-STABLE GENERIC kernel, in
the forlorn hope that it would fix SCHED_ULE's poor performance for
interactive processes with a full load on interactive processes. It
doesn't help.
on 27/02/2012 13:28 Olivier Smedts said the following:
Can you try with hald, or directly with the mouse device, without using moused? Others reported they had better interactivity without sysmouse/moused. Really better (no mouse lag or freeze when under high load).
I wonder if re-nice-ing
On 02/26/12 19:32, George Mitchell wrote:
[...] SCHED_ULE's poor performance for interactive processes with a full load on interactive processes. It
("on interactive processes" should read "of compute-bound processes".)
doesn't help. -- George Mitchell
On 02/15/12 21:54, Jeff Roberson wrote:
On Wed, 15 Feb 2012, Alexander Motin wrote:
As before, I've tested this on a Core i7-870 with 4 physical and 8 logical cores and an Atom D525 with 2 physical and 4 logical cores. On the Core i7 I've got a speedup of up to 10-15% in super-smack MySQL and PostgreSQL
network performance improved the same as for the first patch. That CPU is quite difficult to handle: with its mix of effective SMT and lack of L3 cache, different scheduling approaches give different results in different situations.
Specific performance numbers can be found here:
http://people.freebsd.org/~mav/bench.ods
[...]
Every point there includes at least 5 samples and, except the pbzip2 test that is quite
On 02/15/12 21:54, Jeff Roberson wrote:
On Wed, 15 Feb 2012, Alexander Motin wrote:
On 02/14/12 00:38, Alexander Motin wrote:
I see not much point in committing them sequentially, as they are quite orthogonal. I need to make one decision. I am going on a small vacation next week. It will give
On 02/11/12 16:21, Alexander Motin wrote:
I've heavily rewritten the patch already, so at least some of the ideas are already addressed. :) At this moment I am mostly satisfied with the results, and after final tests today I'll probably publish a new version.
It took more time, but finally I think
On Sat, Feb 11, 2012 at 04:21:25PM +0200, Alexander Motin wrote:
At this moment I am using different penalty coefficients for SMT and shared caches (for unrelated processes sharing is not good). No problem to add more types there. A separate flag for shared FPU could be used to have
on 11/02/2012 15:35 Andriy Gapon said the following:
It seems that on modern CPUs the caches are either inclusive or behave as if inclusive. As a result, if two cores share a cache at any level, then it should be relatively cheap to move a thread from one core to the other.
in lock contention with itself that necessary.
I.e. it's usable only in very borderline cases.
algorithm would be a scheduling infrastructure similar to GEOM. That way it would be much easier to implement new algorithms (maybe in XML).
I don't think XML would be applicable beyond fine-tuning
to the first level of CPU topology (for HTT systems it is one physical core). If it sees no good candidate, it just looks for the CPU with minimal load, ignoring thread priority. I suppose that may lead to priority violation: scheduling a thread to a CPU where a higher-priority thread is running, where it may wait for a very long time, while there is some other CPU with a minimal-priority thread. My patch does more searches, which allows it to handle priorities better.
But why would unrar have a higher priority
Hi.
I've analyzed scheduler behavior and think I've found the problem with HTT.
SCHED_ULE knows about HTT and when doing load balancing once a second,
it does right things. Unluckily, if some other thread gets in the way,
process can be easily pushed out to another CPU, where it will stay for
On Thu, 21 Jan 2010, Bernard van Gastel wrote:
In a real-world application such a proposed queue would work almost always, but I'm primarily trying to exclude all starvation situations (speed is less relevant). And although such a worker can execute its work and be scheduled fairly, the addition
Bernard van Gastel wrote:
But the descheduling of threads if the mutex is not available is done by the library. And especially the order of rescheduling of the threads (that's what I'm interested in). Or am I missing something in the sys/kern/sched files (btw I don't have the umtx file).
Hi everyone,
I'm curious about the exact scheduling policy of POSIX threads in relation to mutexes and conditions. If there are two threads (a and b), both with the following code:
while (1) {
    pthread_mutex_lock(mutex);
    ...
    pthread_mutex_unlock(mutex);
}
What
Bernard van Gastel bvgas...@bitpowder.com writes:
What is the scheduling policy of the different thread libraries?
Threads are scheduled by the kernel, not by the library. Look at
sys/kern/sched_umtx.c and sys/kern/sched_{4bsd,ule}.c.
DES
--
Dag-Erling Smørgrav - d...@des.no
Hi,
Thank you very much again Ulf.
I found this http://en.wikipedia.org/wiki/Native_POSIX_Thread_Library and it describes the 1:1 correspondence of Linux threads. So you were right, and thank you very much again.
Regards,
Mehmet
On Thu, Jan 8, 2009 at 4:59 PM, Ulf Lilleengen
Hi all,
After a bit of googling I got confused.
My questions are simple and they are as follows:
1-) Are pthreads (or threads in general) of one process scheduled to
different cores on multi-core systems running Linux or BSD?
2-) What if there are multiple processes which have multiple
Hi,
Thank you very much for your response Ulf. It is a very clear answer. Thanks
again.
By the way, any information for the Linux case?
Regards,
Mehmet
On Thu, Jan 8, 2009 at 10:08 AM, Ulf Lilleengen ulf.lilleen...@gmail.comwrote:
On Thu, Jan 08, 2009 at 04:23:08AM -0500, Mehmet Ali Aksoy
On Thu, Jan 08, 2009 at 09:16:26 -0500, Mehmet Ali Aksoy TÜYSÜZ wrote:
[...]
I think this applies to Linux as well, since it's NPTL (Native Posix Threading
Hi all, I have a patch for FreeBSD 4.x, not developed by me, but it would be very good to evaluate and/or update the patch for versions 5.x, 6.x, or CURRENT. It uses sysctl OIDs to limit RAM and CPU use.
Regards, and sorry for my bad English,
Roberto Lima.
jail_seperation.v7.patch
the guy doing the project, and I've been
spending the last two weeks coming up to speed on scheduling and the
like.
What I'd like from freebsd-hackers is the following:
- are there any good references on scheduling that you know of
which I should read? I've already got Design & Implementation of FreeBSD and the Petrou / Milford / Gibson
On Sat, Jun 10, 2006 at 11:51:33PM -0600, Chris Jones wrote:
- what're your thoughts on making the existing scheduler jail-
aware as opposed to writing a sort of 'meta-scheduler' that would
schedule between jails, and then delegate to a scheduler per jail
(which could be very similar,
On 11-Jun-06, at 6:50 AM, Pieter de Goeje wrote:
For my CS study I picked up Operating System Concepts by Silberschatz, Galvin and Gagne. It has a fairly detailed description of the inner workings of a scheduler and the various algorithms involved, but no actual implementation.
Yep, we
On Sun, 2006-Jun-11 14:50:30 +0200, Pieter de Goeje wrote:
I suppose by limiting the jail CPU usage you mean that jails contending over
CPU each get their assigned share. But when the system is idle one jail can
get all the CPU it wants.
IBM MVS had an interesting alternative approach, which I
I personally prefer the notion of layering the normal scheduler on top
of a simple fair-share scheduler. This would not add any overhead for
the non-jailed case. Complicating the process scheduler poses
maintenance, scalability, and general performance problems.
-Kip
On 6/11/06, Peter
Adam Migus wrote:
So if you gimme webspace can I promise you code and
output shortly after? If you want input into design I can
give you the code now with the understanding that it is
WIP.
Sure. If you can wait a week, I'll be able to sort you out. Right now, the
server is in need of some
Mike,
I don't have the test, but I've built a generic performance
testing framework for FreeBSD over the past couple of months
that would make running such a test trivial. I'd post a link
but the page has no permanent home yet. When it gets one I can
follow it up with a link.
I'd be happy
It's very WIP right now and will remain so for another couple
of weeks. I'd planned to show more people a 'working' version
when a) i got a home for the page and b) the numbers its
producing have reasonable variance.
I'd prefer defering a public release until those goals are
reached. You've
For now, the
On Fri, 28 Feb 2003, Paul Robinson wrote:
Well, I'm just a hanger-on without a commit bit, so I'll work on making it
production ready in the next few weeks, post up a patch and if somebody
wants to commit it, great. At the moment it's all based on 4.3-RELEASE and
isn't really production
David Schultz wrote:
The original anticipatory scheduler implementation was done for
FreeBSD 4.3. See
http://www.cs.rice.edu/~ssiyer/r/antsched/
Yeah, I managed to grab that 45 seconds after sending my original post. I've
also contacted Sitaram Iyer directly to see how he feels about
FWIW,
Although the original anticipatory scheduler prototype
was made for FreeBSD, it cannot be used in the base
system, unless reimplemented, due to the license. I
wonder if the Linux guys redid it or simply didn't
notice.
The option of configuring it for runtime is welcome, I
think.