On 22/4/18 9:43 pm, Rick Macklem wrote:
Konstantin Belousov wrote:
On Sat, Apr 21, 2018 at 11:30:55PM +, Rick Macklem wrote:
Konstantin Belousov wrote:
On Sat, Apr 21, 2018 at 07:21:58PM +, Rick Macklem wrote:
I decided to start a new thread on current related to SCHED_ULE, since I
On 22/4/18 10:36 pm, Rodney W. Grimes wrote:
Konstantin Belousov wrote:
On Sat, Apr 21, 2018 at 11:30:55PM +, Rick Macklem wrote:
Konstantin Belousov wrote:
On Sat, Apr 21, 2018 at 07:21:58PM +, Rick Macklem wrote:
I decided to start a new thread on current related to SCHED_ULE
> Konstantin Belousov wrote:
> >On Sat, Apr 21, 2018 at 11:30:55PM +, Rick Macklem wrote:
> >> Konstantin Belousov wrote:
> >> >On Sat, Apr 21, 2018 at 07:21:58PM +, Rick Macklem wrote:
> >> >> I decided to start a new thread on curre
Konstantin Belousov wrote:
>On Sat, Apr 21, 2018 at 11:30:55PM +, Rick Macklem wrote:
>> Konstantin Belousov wrote:
>> >On Sat, Apr 21, 2018 at 07:21:58PM +, Rick Macklem wrote:
>> >> I decided to start a new thread on current related to SCHED_ULE, since I
On Sat, Apr 21, 2018 at 11:30:55PM +, Rick Macklem wrote:
> Konstantin Belousov wrote:
> >On Sat, Apr 21, 2018 at 07:21:58PM +, Rick Macklem wrote:
> >> I decided to start a new thread on current related to SCHED_ULE, since I
> >> see
> >> more than j
> On Sat, Apr 21, 2018 at 07:21:58PM +, Rick Macklem wrote:
> > I decided to start a new thread on current related to SCHED_ULE, since I see
> > more than just performance degradation and on a recent current kernel.
> > (I cc'd a couple of the people discussi
Konstantin Belousov wrote:
>On Sat, Apr 21, 2018 at 07:21:58PM +, Rick Macklem wrote:
>> I decided to start a new thread on current related to SCHED_ULE, since I see
>> more than just performance degradation and on a recent current kernel.
>> (I cc'd a coupl
On Sat, Apr 21, 2018 at 07:21:58PM +, Rick Macklem wrote:
> I decided to start a new thread on current related to SCHED_ULE, since I see
> more than just performance degradation and on a recent current kernel.
> (I cc'd a couple of the people discussing performance problems
I decided to start a new thread on current related to SCHED_ULE, since I see
more than just performance degradation and on a recent current kernel.
(I cc'd a couple of the people discussing performance problems in freebsd-stable
recently under a subject line of "Re: kern.sched.quant
Colin Percival wrote:
>On 05/28/17 13:16, Rick Macklem wrote:
>> cperciva@ is running a highly parallelized buildworld and he sees
>> slightly better elapsed times and much lower system CPU for SCHED_ULE.
>>
>> As such, I suspect it is the single threaded
On 05/28/17 13:16, Rick Macklem wrote:
> cperciva@ is running a highly parallelized buildworld and he sees
> slightly better elapsed times and much lower system CPU for SCHED_ULE.
>
> As such, I suspect it is the single threaded, processes mostly sleeping
> waiting
> fo
On 28/05/2017 01:20, Rick Macklem wrote:
> - with the "obvious change" mentioned in r312426's commit message, using
>(flags & SW_TYPE_MASK) == SWT_RELINQUISH instead of (flags & SWT_RELINQUISH)
> 121 minutes
Rick,
can I see how exactly
cases are the same single-threaded kernel build, same hardware, etc. The
only changes are recent vs 1yr old head kernel and what is noted.)
- 1yr old kernel, SMP, SCHED_ULE 94 minutes
- 1yr old kernel, no SMP, SCHED_ULE 111 minutes
- recent kernel, SMP, SCHED_4BSD
On 28/05/2017 01:20, Rick Macklem wrote:
> After poking at this some more, it appears that r312426 is the main cause of
> this degradation.
Rick,
thank you for the investigation!
A quick question before a longer reply: what network driver do you use in your
test setup? Is it ixl, by any chance?
--
the last post, I got rid of most of the degradation by disabling
>SMP.
>
>- same kernel build running recent kernel with SCHED_4BSD 104 minutes
>
After poking at this some more, it appears that r312426 is the main cause of
this degradation.
Doing SMP enabled test runs using SCHED_ULE
On Fri, May 26, 2017 at 09:57:16PM +, Rick Macklem wrote:
> I have now found I can get rid of almost all of the degradation by building
> the
> recent kernel with
> options SCHED_4BSD
> instead of
> options SCHED_ULE
>
> The 1yr old kernel was built with SCHED_ULE, so
the degradation by disabling
SMP.
- same kernel build running recent kernel with SCHED_4BSD 104 minutes
I have now found I can get rid of almost all of the degradation by building the
recent kernel with
options SCHED_4BSD
instead of
options SCHED_ULE
The 1yr old kernel was built with SCHED_ULE
> # top -P
>>>>> CPU 0: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
>>>>> CPU 1: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
>>>>> CPU 2: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
Hello, Lev.
You wrote on 12 January 2012, 15:00:20:
>> But what mav says makes sense.
> It is it -- stack size. Setting KSTACK_PAGES=6 fixes situation.
Oops, no. After another 5 minutes ng_queue again consumes 100% CPU :(
--
// Black Lion AKA Lev Serebryakov
___
Hello, Andriy.
You wrote on 12 January 2012, 14:29:57:
> But what mav says makes sense.
That's it -- stack size. Setting KSTACK_PAGES=6 fixes the situation.
Feature request: warn the user when ng_queue is used due to stack
limitations :) I know from mav that sometimes it is unavoidable (with
protocols
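For reference, the change Lev describes is a kernel configuration option; a sketch of the relevant fragment (6 pages is the value tried in this thread, with 3 the stated i386 default -- not a general recommendation):

```
# Custom kernel configuration fragment: grow each thread's kernel
# stack from the 3-page i386 default to 6 pages.
options KSTACK_PAGES=6
```

The kernel must be rebuilt and reinstalled for the option to take effect.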
peer counts in the torrent client.
> But what mav says makes sense.
I'm rebuilding the system with ULE and KSTACK_PAGES=6 (3 is the default on i386)
now.
> Also I remember seeing some very old reports about some strange issues with
> SCHED_ULE and dummynet.
> Some links that I found:
> h
calls.
>
> Really, here is diff between "md5" of all files of one and other
> images:
Well, I mostly meant things like uptime, load level and pattern, etc.
But what mav says makes sense.
Also I remember seeing some very old reports about some strange issues with
SCHED_ULE
Hello, Andriy.
You wrote on 12 January 2012, 13:54:41:
>> Switching to 4BSD helps. 4BSD works as usual: all CPU time is
>> interrupts and network thread, system is responsive under heaviest load,
>> normal operations of DNS, DHCP and hostapd.
> How reproducible is this result?
100%
> In other
on 12/01/2012 11:31 Lev Serebryakov said the following:
> Switching to 4BSD helps. 4BSD works as usual: all CPU time is
> interrupts and network thread, system is responsive under heaviest load,
> normal operations of DNS, DHCP and hostapd.
How reproducible is this result?
In other words, have
Hello, Freebsd-current.
I have a router which connects to an upstream ISP with mpd5 from ports
using PPPoE.
I've used SCHED_ULE for a long time without any problems. Under heavy
network load (the router is not the fastest one -- a 500 MHz Geode CPU) the
main consumer of CPU was "intr{swi1
On Mon, 19 Dec 2011 23:22:40 +0200
Andriy Gapon wrote:
> on 19/12/2011 17:50 Nathan Whitehorn said the following:
> > The thing I've seen is that ULE is substantially more enthusiastic about
> > migrating processes between cores than 4BSD.
>
> Hmm, this seems to be contrary to my theoretical exp
On Mon Dec 19 11, Nathan Whitehorn wrote:
> On 12/18/11 04:34, Adrian Chadd wrote:
> >The trouble is that there's lots of anecdotal evidence, but no one's
> >really gone digging deep into _their_ example of why it's broken. The
> >developers who know this stuff don't see anything wrong. That hints t
on 19/12/2011 17:50 Nathan Whitehorn said the following:
> The thing I've seen is that ULE is substantially more enthusiastic about
> migrating processes between cores than 4BSD.
Hmm, this seems to be contrary to my theoretical expectations. I thought that
with 4BSD all threads that were not in o
On 12/18/11 04:34, Adrian Chadd wrote:
The trouble is that there's lots of anecdotal evidence, but no one's
really gone digging deep into _their_ example of why it's broken. The
developers who know this stuff don't see anything wrong. That hints to
me it may be something a little more creepy - as
The trouble is that there's lots of anecdotal evidence, but no one's
really gone digging deep into _their_ example of why it's broken. The
developers who know this stuff don't see anything wrong. That hints to
me it may be something a little more creepy - as an example, the
interplay between netisr/
On Sun, 18 Dec 2011 02:37:52 +, Bruce Cran wrote:
> On 13/12/2011 09:00, Andrey Chernov wrote:
> > I observe ULE interactivity slowness even on single core machine (Pentium
> > 4) in very visible places, like 'ps ax' output gets stuck in the middle for
> > ~1 second. When I switch back to SCHED_4
On Thu, Dec 15, 2011 at 05:26:27PM +0100, Attilio Rao wrote:
> 2011/12/13 Jeremy Chadwick :
> > On Mon, Dec 12, 2011 at 02:47:57PM +0100, O. Hartmann wrote:
> >> > Not fully right, boinc defaults to run on idprio 31 so this isn't an
> >> > issue. And yes, ther
more than 1 of them and 1 will use at
most 1/2 of a multi-CPU system.
But no one has replied to my letter saying whether my patch helps or not
in the case of Core2Duo...
There is a suspicion that the problems stem from the sections of
code associated with SMP...
Maybe I'm wrong about something, but I want to
Hi,
What Attilio and others need are KTR traces of the most stripped-down
example of an interactivity-busting workload you can find.
Eg: if you're doing 32 concurrent buildworlds and trying to test
interactivity - fine, but that's going to result in a lot of KTR
stuff.
If you can reproduce it using a
> several seconds. Sometimes even the "Password:" prompt can take a couple of
> seconds to appear after typing my username.
>
I reported ages ago several problems using SCHED_ULE on FreeBSD 8/9 when
doing heavy I/O, either disk or network bound (at that time I realised the
problem on server
On 18/12/2011 10:34, Adrian Chadd wrote:
I applaud reppie for trying to make it as easy as possible for people
to use KTR to provide scheduler traces for him to go digging with, so
please, if you have these issues and you can absolutely reproduce
them, please follow his instructions and work with
> seem to be speed (at least not a huge difference), but interactivity.
>
> one of the tests i performed was the following
>
> ttyv0: untar a *huge* (+10G) archive
> ttyv1: after ~ 30 seconds of untarring do 'ls -la $directory', where directory
>contains a lo
The difference between the two does *not*
seem to be speed (at least not a huge difference), but interactivity.
one of the tests i performed was the following
ttyv0: untar a *huge* (+10G) archive
ttyv1: after ~ 30 seconds of untarring do 'ls -la $directory', where directory
contains a lot
On Sun, Dec 18, 2011 at 05:51:47PM +1100, Ian Smith wrote:
> On Sun, 18 Dec 2011 02:37:52 +, Bruce Cran wrote:
> > On 13/12/2011 09:00, Andrey Chernov wrote:
> > > I observe ULE interactivity slowness even on single core machine (Pentium
> > > 4) in very visible places, like 'ps ax' output s
On 13/12/2011 09:00, Andrey Chernov wrote:
I observe ULE interactivity slowness even on single core machine
(Pentium 4) in very visible places, like 'ps ax' output gets stuck in the
middle for ~1 second. When I switch back to SCHED_4BSD, all slowness is
gone.
I'm also seeing problems with ULE on a
On Thu, Dec 15, 2011 at 9:58 PM, Mike Tancsa wrote:
> On 12/15/2011 11:56 AM, Attilio Rao wrote:
>> So, as very first thing, can you try the following:
>> - Same codebase, etc. etc.
>> - Make the test 4 times, discard the first and ministat for the other 3
>> - Reboot
>> - Change the steal_thresh
2011/12/15 Mike Tancsa :
> On 12/15/2011 11:56 AM, Attilio Rao wrote:
>> So, as very first thing, can you try the following:
>> - Same codebase, etc. etc.
>> - Make the test 4 times, discard the first and ministat for the other 3
>> - Reboot
>> - Change the steal_thresh value
>> - Make the test 4 t
On 12/15/2011 11:56 AM, Attilio Rao wrote:
> So, as very first thing, can you try the following:
> - Same codebase, etc. etc.
> - Make the test 4 times, discard the first and ministat for the other 3
> - Reboot
> - Change the steal_thresh value
> - Make the test 4 times, discard the first and minis
>> > Not fully right, boinc defaults to run on idprio 31 so this
> >> >> > isn't an issue. And yes, there are cases where SCHED_ULE
> >> >> > shows much better performance than SCHED_4BSD. [...]
> >> >>
> >> >> Do we h
isn't an
>> >> > issue. And yes, there are cases where SCHED_ULE shows much better
>> >> > performance than SCHED_4BSD. [...]
>> >>
>> >> Do we have any proof at hand for such cases where SCHED_ULE performs
>> >> much better than SC
my opinion.
Therefore I like "artificial" benchmarks: have a set of programs that
can be compiled, and take the time, if compilation time is important.
Well, your one-shot test would show that there is indeed a marginal
advantage for SCHED_ULE, if the number of cores is big enou
2011/12/15 Mike Tancsa :
> On 12/15/2011 11:42 AM, Attilio Rao wrote:
>>
>> I'm thinking now to a better test-case for this: can you try that on a
>> tmpfs volume?
>
> There is enough RAM in the box so that it should not touch the disk, and
> I was sending the output to /dev/null, so it was not wri
On 12/15/2011 11:42 AM, Attilio Rao wrote:
>
> I'm thinking now to a better test-case for this: can you try that on a
> tmpfs volume?
There is enough RAM in the box so that it should not touch the disk, and
I was sending the output to /dev/null, so it was not writing to the disk.
>
> Also what
2011/12/15 Mike Tancsa :
> On 12/15/2011 11:26 AM, Attilio Rao wrote:
>>
>> Hi Mike,
>> was that just the same codebase with the switch SCHED_4BSD/SCHED_ULE?
>
> Hi Attilio,
> It was the same codebase.
>
>
>> Could you retry the bench checking
On 12/15/2011 11:26 AM, Attilio Rao wrote:
>
> Hi Mike,
> was that just the same codebase with the switch SCHED_4BSD/SCHED_ULE?
Hi Attilio,
It was the same codebase.
> Could you retry the bench checking CPU usage and possible thread
> migration around for both cases?
I
2011/12/13 Jeremy Chadwick :
> On Mon, Dec 12, 2011 at 02:47:57PM +0100, O. Hartmann wrote:
>> > Not fully right, boinc defaults to run on idprio 31 so this isn't an
>> > issue. And yes, there are cases where SCHED_ULE shows much better
>> > performance than SCHE
Student's t, pooled s = 0.425627)
>
> a value of 1 is *slightly* faster.
Hi Mike,
was that just the same codebase with the switch SCHED_4BSD/SCHED_ULE?
Also, the results here should be in the 3% interval for the avg case,
which is not yet at the 'alarm level' but could still be an
indicat
middle by ~1 second. When I switch back to SCHED_4BSD, all
> > > slowness is gone.
> >
> > Are you able to provide KTR traces of the scheduler results?
> > Something that can be fed to schedgraph?
>
> Sorry, this machine is not mine anymore. I try SCHED_ULE on C
the scheduler results? Something
> that can be fed to schedgraph?
Sorry, this machine is not mine anymore. I tried SCHED_ULE on a Core 2 Duo
instead and didn't notice this effect, but it is overall pretty fast
compared to that Pentium 4.
--
http://ache.vniz.net/
On 12/13/2011 7:01 PM, m...@freebsd.org wrote:
>
> Has anyone experiencing problems tried to set sysctl
> kern.sched.steal_thresh=1 ?
>
> I don't remember what our specific problem at $WORK was, perhaps it
> was just interrupt threads not getting serviced fast enough, but we've
> hard-coded this
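For anyone wanting to try this, the knob is a plain sysctl; a sketch (the value 1 comes from the suggestion above -- my reading is that it makes an idle CPU steal work more aggressively, but verify against sched_ule.c before relying on it):

```
# Show the current ULE steal threshold, then lower it:
sysctl kern.sched.steal_thresh
sysctl kern.sched.steal_thresh=1
# To make it persistent, add "kern.sched.steal_thresh=1" to /etc/sysctl.conf.
```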
On Tue, 13 Dec 2011 16:01:56 -0800,
m...@freebsd.org wrote:
> On Tue, Dec 13, 2011 at 3:39 PM, Ivan Klymenko wrote:
> > On Wed, 14 Dec 2011 00:04:42 +0100,
> > Jilles Tjoelker wrote:
> >
> >> On Tue, Dec 13, 2011 at 10:40:48AM +0200, Ivan Klymenko wrote:
> >> > If the algorithm ULE does not contain
On Tue, Dec 13, 2011 at 3:39 PM, Ivan Klymenko wrote:
> On Wed, 14 Dec 2011 00:04:42 +0100,
> Jilles Tjoelker wrote:
>
>> On Tue, Dec 13, 2011 at 10:40:48AM +0200, Ivan Klymenko wrote:
>> > If the algorithm ULE does not contain problems - it means the
>> > problem has Core2Duo, or in a piece of cod
On Tue, 13 Dec 2011 23:02:15 +,
Marcus Reid wrote:
> On Mon, Dec 12, 2011 at 04:29:14PM -0800, Doug Barton wrote:
> > On 12/12/2011 05:47, O. Hartmann wrote:
> > > Do we have any proof at hand for such cases where SCHED_ULE
> > > performs much better than SCHED_
On Wed, 14 Dec 2011 00:04:42 +0100,
Jilles Tjoelker wrote:
> On Tue, Dec 13, 2011 at 10:40:48AM +0200, Ivan Klymenko wrote:
> > If the ULE algorithm does not contain problems, it means the
> > problem is with the Core2Duo, or in a piece of code that uses the ULE
> > scheduler. I already wrote in a mailing
On Mon, Dec 12, 2011 at 04:29:14PM -0800, Doug Barton wrote:
> On 12/12/2011 05:47, O. Hartmann wrote:
> > Do we have any proof at hand for such cases where SCHED_ULE performs
> > much better than SCHED_4BSD?
>
> I complained about poor interactive performance of ULE in a
On Tue, Dec 13, 2011 at 10:40:48AM +0200, Ivan Klymenko wrote:
> If the ULE algorithm does not contain problems, it means the problem
> is with the Core2Duo, or in a piece of code that uses the ULE scheduler.
> I already wrote in a mailing list that specifically in my case (Core2Duo)
> partially helps the
On 12/13/2011 13:31, Malin Randstrom wrote:
> stop sending me spam mail ... you never stop despite me having unsubscribed
> several times. stop this!
If you had actually unsubscribed, the mail would have stopped. :)
You can see the instructions you need to follow below.
On 12/13/2011 10:54 AM, Steve Kargl wrote:
>
> I have given the WHY in previous discussions of ULE, based
> on what you call legacy benchmarks. I have not seen any
> commit to sched_ule.c that would lead me to believe that
> the performance issues with ULE and cpu-bound numerical
> codes have bee
> >>> issue. And yes, there are cases where SCHED_ULE shows much better
> >>> performance than SCHED_4BSD. [...]
> >>
> >> Do we have any proof at hand for such cases where SCHED_ULE performs
> >> much better than SCHED_4BSD? Whenever the subject comes up, it is
On 12/12/11 16:51, Steve Kargl wrote:
> On Mon, Dec 12, 2011 at 02:47:57PM +0100, O. Hartmann wrote:
>>
>>> Not fully right, boinc defaults to run on idprio 31 so this isn't an
>>> issue. And yes, there are cases where SCHED_ULE shows much better
>>> perfor
On Tue, Dec 13, 2011 at 12:13:42PM +0100, O. Hartmann wrote:
> On 12/12/11 16:13, Vincent Hoffman wrote:
> >
> > On 12/12/2011 13:47, O. Hartmann wrote:
> >
> >>> Not fully right, boinc defaults to run on idprio 31 so this isn't an
> >>> issue. A
On 12/12/11 16:13, Vincent Hoffman wrote:
>
> On 12/12/2011 13:47, O. Hartmann wrote:
>
>>> Not fully right, boinc defaults to run on idprio 31 so this isn't an
>>> issue. And yes, there are cases where SCHED_ULE shows much better
>>> performance than SCH
On Mon, Dec 12, 2011 at 02:47:57PM +0100, O. Hartmann wrote:
> > Not fully right, boinc defaults to run on idprio 31 so this isn't an
> > issue. And yes, there are cases where SCHED_ULE shows much better
> > performance than SCHED_4BSD. [...]
>
> Do we have any proof
On 13 December 2011 01:00, Andrey Chernov wrote:
>> If the ULE algorithm does not contain problems, it means the problem
>> is with the Core2Duo, or in a piece of code that uses the ULE scheduler.
>
> I observe ULE interactivity slowness even on single core machine (Pentium
> 4) in very visible places,
On Tue, Dec 13, 2011 at 10:40:48AM +0200, Ivan Klymenko wrote:
> > On 12/12/2011 05:47, O. Hartmann wrote:
> > > Do we have any proof at hand for such cases where SCHED_ULE performs
> > > much better than SCHED_4BSD?
> >
> > I complained about poor interac
> On 12/12/2011 05:47, O. Hartmann wrote:
> > Do we have any proof at hand for such cases where SCHED_ULE performs
> > much better than SCHED_4BSD?
>
> I complained about poor interactive performance of ULE in a desktop
> environment for years. I had numerous people try t
On 12/12/2011 05:47, O. Hartmann wrote:
> Do we have any proof at hand for such cases where SCHED_ULE performs
> much better than SCHED_4BSD?
I complained about poor interactive performance of ULE in a desktop
environment for years. I had numerous people try to help, including
Jeff, with v
On 12/12/2011 23:48, O. Hartmann wrote:
Is the tuning of kern.sched.preempt_thresh, and a proper method of
estimating its correct value for the intended workload, documented in the
manpages, maybe tuning(7)? I find it hard to crawl through a lot of pros
and cons on mailing lists when evaluating a co
On 12/12/11 18:06, Steve Kargl wrote:
> On Mon, Dec 12, 2011 at 04:18:35PM +, Bruce Cran wrote:
>> On 12/12/2011 15:51, Steve Kargl wrote:
>>> This comes up every 9 months or so, and must be approaching FAQ
>>> status. In a HPC environment, I recommend 4BSD. Depending on the
>>> workload, ULE
On Mon, Dec 12, 2011 at 01:03:30PM -0600, Scott Lambert wrote:
> On Mon, Dec 12, 2011 at 09:06:04AM -0800, Steve Kargl wrote:
> > Tuning kern.sched.preempt_thresh did not seem to help for
> > my workload. My code is a classic master-slave OpenMPI
> > application where the master runs on one node a
On Mon, Dec 12, 2011 at 09:06:04AM -0800, Steve Kargl wrote:
> Tuning kern.sched.preempt_thresh did not seem to help for
> my workload. My code is a classic master-slave OpenMPI
> application where the master runs on one node and all
> cpu-bound slaves are sent to a second node. If I send
> send
On Monday, December 12, 2011 12:06:04 pm Steve Kargl wrote:
> On Mon, Dec 12, 2011 at 04:18:35PM +, Bruce Cran wrote:
> > On 12/12/2011 15:51, Steve Kargl wrote:
> > >This comes up every 9 months or so, and must be approaching FAQ
> > >status. In a HPC environment, I recommend 4BSD. Depending
On Mon, Dec 12, 2011 at 04:18:35PM +, Bruce Cran wrote:
> On 12/12/2011 15:51, Steve Kargl wrote:
> >This comes up every 9 months or so, and must be approaching FAQ
> >status. In a HPC environment, I recommend 4BSD. Depending on the
> >workload, ULE can cause a severe increase in turnaround
Hash: SHA1
> >>
> >> On 12/12/2011 13:47, O. Hartmann wrote:
> >> >
> >> >> Not fully right, boinc defaults to run on idprio 31 so this isn't an
> >> >> issue. And yes, there are cases where SCHED_ULE shows much better
> >> >>
> Sent: Mon Dec 12 16:32:21 MEZ 2011
> To: Vincent Hoffman
> CC: "O. Hartmann" , Current FreeBSD
> , freebsd-sta...@freebsd.org,
> freebsd-performa...@freebsd.org
> Subject: Re: SCHED_ULE should not be the default
>
>
> On Mon, 12 Dec 2011 15:13:00 +
> Vincent
On Monday 12 December 2011 14:47:57 O. Hartmann wrote:
> > Not fully right, boinc defaults to run on idprio 31 so this isn't an
> > issue. And yes, there are cases where SCHED_ULE shows much better
> > performance than SCHED_4BSD. [...]
>
> Do we have any proof
On Mon, 12 Dec 2011 16:18:35 +,
Bruce Cran wrote:
> On 12/12/2011 15:51, Steve Kargl wrote:
> > This comes up every 9 months or so, and must be approaching FAQ
> > status. In a HPC environment, I recommend 4BSD. Depending on the
> > workload, ULE can cause a severe increase in turnaround tim
On 12/12/2011 15:51, Steve Kargl wrote:
This comes up every 9 months or so, and must be approaching FAQ
status. In a HPC environment, I recommend 4BSD. Depending on the
workload, ULE can cause a severe increase in turnaround time when
doing already long computations. If you have an MPI applica
g, Current FreeBSD
, freebsd-sta...@freebsd.org
Subject: Re: SCHED_ULE should not be the default
On Mon, Dec 12, 2011 at 02:47:57PM +0100, O. Hartmann wrote:
>
> > Not fully right, boinc defaults to run on idprio 31 so this isn't an
> > issue. And yes, there are cases whe
Did you use -jX to build the world?
From: Gary Jennejohn
Sent: Mon Dec 12 16:32:21 MEZ 2011
To: Vincent Hoffman
CC: "O. Hartmann" , Current FreeBSD
, freebsd-sta...@freebsd.org,
freebsd-performa...@freebsd.org
Subject: Re: SCHED_
Not fully right, boinc defaults to run on idprio 31 so this isn't an
>> >> issue. And yes, there are cases where SCHED_ULE shows much better
>> >> performance than SCHED_4BSD. [...]
>> >
>> > Do we have any proof at hand for such cases where SCHED_ULE per
On Mon, Dec 12, 2011 at 02:47:57PM +0100, O. Hartmann wrote:
>
> > Not fully right, boinc defaults to run on idprio 31 so this isn't an
> > issue. And yes, there are cases where SCHED_ULE shows much better
> > performance than SCHED_4BSD. [...]
>
> Do we have
On Mon, 12 Dec 2011 15:13:00 +
Vincent Hoffman wrote:
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On 12/12/2011 13:47, O. Hartmann wrote:
> >
> >> Not fully right, boinc defaults to run on idprio 31 so this isn't an
> >> issue. An
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On 12/12/2011 13:47, O. Hartmann wrote:
>
>> Not fully right, boinc defaults to run on idprio 31 so this isn't an
>> issue. And yes, there are cases where SCHED_ULE shows much better
>> performance than SCHED_4BSD. [...]
> Not fully right, boinc defaults to run on idprio 31 so this isn't an
> issue. And yes, there are cases where SCHED_ULE shows much better
> performance than SCHED_4BSD. [...]
Do we have any proof at hand for such cases where SCHED_ULE performs
much better than SCHED_4BSD? Whenev
on 03/11/2011 22:17 Jeff Roberson said the following:
> On Thu, 15 Sep 2011, Andriy Gapon wrote:
>
>>
>> This is more of a "just for the record" email.
>> I think I've already stated the following observations, but I suspect that
>> they
>> drowned in the noise of a thread in which I mentioned th
On Thu, 15 Sep 2011, Andriy Gapon wrote:
This is more of a "just for the record" email.
I think I've already stated the following observations, but I suspect that they
drowned in the noise of a thread in which I mentioned them.
1. Incorrect topology is built for single-package SMP systems.
Tha
This is more of a "just for the record" email.
I think I've already stated the following observations, but I suspect that they
drowned in the noise of a thread in which I mentioned them.
1. Incorrect topology is built for single-package SMP systems.
That topology has two levels ("shared nothing"
On Wednesday, September 01, 2010 12:54:13 pm m...@freebsd.org wrote:
> On Wed, Sep 1, 2010 at 6:49 AM, John Baldwin wrote:
> > On Tuesday, August 31, 2010 2:53:12 pm m...@freebsd.org wrote:
> >> On Tue, Aug 31, 2010 at 10:16 AM, wrote:
> >> > I recorded the stack any time ts->ts_cpu was set and
m...@freebsd.org wrote:
[snip]
I will test this patch out; thanks for the help!
Two questions:
1) How does a thread get moved between CPUs when it's not running? I
see that we change the runqueue for non-running threads that are on a
runqueue. Does the code always check for THREAD_CAN_SCHED w
On Wed, Sep 1, 2010 at 6:49 AM, John Baldwin wrote:
> On Tuesday, August 31, 2010 2:53:12 pm m...@freebsd.org wrote:
>> On Tue, Aug 31, 2010 at 10:16 AM, wrote:
>> > I recorded the stack any time ts->ts_cpu was set and when a thread was
>> > migrated by sched_switch() I printed out the recorded
On Tuesday, August 31, 2010 2:53:12 pm m...@freebsd.org wrote:
> On Tue, Aug 31, 2010 at 10:16 AM, wrote:
> > I recorded the stack any time ts->ts_cpu was set and when a thread was
> > migrated by sched_switch() I printed out the recorded info. Here's
> > what I found:
> >
> >
> > XXX bug 67957:
On Tue, Aug 31, 2010 at 10:16 AM, wrote:
> I recorded the stack any time ts->ts_cpu was set and when a thread was
> migrated by sched_switch() I printed out the recorded info. Here's
> what I found:
>
>
> XXX bug 67957: moving 0xff003ff9b800 from 3 to 1
> [1]: pin 0 state 4 move 3 -> 1 done
I recorded the stack any time ts->ts_cpu was set and when a thread was
migrated by sched_switch() I printed out the recorded info. Here's
what I found:
XXX bug 67957: moving 0xff003ff9b800 from 3 to 1
[1]: pin 0 state 4 move 3 -> 1 done by 0xff000cc44000:
#0 0x802b36b4 at bug6795