So you can try the following...

$ curl get.pharo.org/50+vm | bash
$ ./pharo Pharo.image eval "Delay delaySchedulerClass: DelayExperimentalSemaphoreScheduler.
    CommandLineTestRunner runClasses: { GLMTreeMorphicTest. GLMWatcherMorphicTest. GLMPagerMorphTest } named: 'x'.
    CommandLineTestRunner runPackages: #('Kernel-Tests')"

btw, these are named "experimental" only because they were integrated
a week before the Pharo 4 release.  For me they had been stable for
months before that; only some conflicts with the CI system delayed
their integration.  I believe they are ready for wider community
testing, by promoting one of the "experimental" delay schedulers to be
the default.  Probably leave its name unchanged for now, since there is
some refactoring I want to do to clean up its hierarchy that will be
simpler if the name stays as it is.

cheers -ben

On Fri, Nov 20, 2015 at 11:43 PM, Ben Coman <b...@openinworld.com> wrote:
> Could you try the various DelayExperimentalXXXSchedulers?  Under
> System > Settings > System > Delay Scheduler.
>
> I believe there's a good chance one of them may also fix the problem.
> I had planned to push one of these as the default in Pharo 5 but
> hadn't decided which, got distracted and stretched a bit thin IRL. I
> believe these experimental schedulers are better than the current
> mutex based scheduler, since passing Delays from your normal-priority
> process to the timing-priority delay scheduling process is a
> signalling paradigm more than a shared memory paradigm.  I came to
> this conclusion reading [1], which says...
>
> "The correct use of a semaphore is for signalling from one task to
> another. A mutex is meant to be taken and released, always in that
> order, by [the same task using] the shared resource it protects. By
> contrast, tasks that use semaphores either signal or wait—not both.
> ... Another important distinction between a mutex and a semaphore is
> that the proper use of a mutex to protect a shared resource can have a
> dangerous unintended side effect ... any two tasks that operate at
> different priorities and coordinate via a mutex, create the
> opportunity for priority inversion ... Fortunately ... semaphores ...
> do not cause priority inversion when used for signalling."
>
> A mutex makes   DelayXXXScheduler>>#schedule:   susceptible to
> priority inversion since it is used by many processes at different
> priority levels.  Using a signalling-semaphore might avoid the
> problem.
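
[Editor's note: the signal-versus-wait split quoted above can be sketched in
Python. This is a toy illustration, not Pharo code; `threading.Semaphore`
stands in for Pharo's Semaphore, and all names are invented. The client task
only signals, the timing task only waits, and neither blocks while holding a
resource the other needs.]

```python
import threading

# One task signals, the other waits -- neither does both, so no task can
# be blocked while holding something another task needs.
work_ready = threading.Semaphore(0)
results = []

def timing_task():
    work_ready.acquire()        # wait: blocks until someone signals
    results.append("timer ran")

def client_task():
    work_ready.release()        # signal: never blocks the caller

consumer = threading.Thread(target=timing_task)
consumer.start()
client_task()
consumer.join()
print(results)                  # ['timer ran']
```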
>
> The experimental delay schedulers were developed to address the
> problem reported in the "super fast delay" [2], but there may also
> have been a fix for that through another means.
>
> [1] http://www.barrgroup.com/Embedded-Systems/How-To/RTOS-Mutex-Semaphore
> [2] http://forum.world.st/Super-fast-delay-td4787257.html
>
> cheers -ben
>
> On Fri, Nov 20, 2015 at 8:03 PM, Thierry Goubier
> <thierry.goub...@gmail.com> wrote:
>>
>>
>> 2015-11-20 12:46 GMT+01:00 Martin Dias <tinchod...@gmail.com>:
>>>
>>> Hi!
>>>
>>> With Guille and Pablo, this morning we found the problem and
>>> apparently fixed it. There is a process at priority 10 that takes the
>>> delay lock (the accessProtect semaphore) and goes to sleep. Then
>>> #benchFor: launches another process at priority 79 that tries to take
>>> the delay lock but, since it is held by the other process, it also goes
>>> to sleep. But the UI process runs at priority 40, so it always has
>>> priority over the process that holds the lock. Consequence: the
>>> priority-10 process holds the delay lock and will never be awakened
>>> until the UI process gets suspended.
>>
>>
>> I believe this is a text book example of priority inversion.
>>
>> Kudo's for finding it!
>>
>> Thierry
>>
>>>
>>>
>>> More in detail, the following method assumes that once the delay
>>> process finishes handling the delay to be scheduled, control will
>>> return to the same process. But that is not true if there is another
>>> process with higher priority.
>>>
>>> schedule: aDelay
>>>     | scheduled |
>>>     scheduled := false.
>>>     aDelay schedulerBeingWaitedOn
>>>         ifTrue: [ ^ self error: 'This Delay has already been scheduled.' ].
>>>     accessProtect critical: [
>>>         scheduledDelay := aDelay.
>>>         timingSemaphore signal. "#handleTimerEvent: sets scheduledDelay := nil"
>>>         scheduled := scheduledDelay == nil ].
>>>     ^ scheduled
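
[Editor's note: a rough Python model of that hand-off, to make the hidden
assumption visible. All names are hypothetical and Python threads have no
priorities, so the "timer process runs first" assumption is replaced by an
explicit wait; the Pharo code effectively tests the slot immediately, which
is exactly what breaks when a middle-priority process keeps the CPU.]

```python
import threading, time

class TimerHandoff:
    """Toy model of the schedule:/handleTimerEvent: hand-off (hypothetical names)."""

    def __init__(self):
        self.access_protect = threading.Lock()          # accessProtect
        self.timing_semaphore = threading.Semaphore(0)  # timingSemaphore
        self.scheduled_delay = None                     # scheduledDelay
        threading.Thread(target=self._timer_loop, daemon=True).start()

    def _timer_loop(self):
        while True:
            self.timing_semaphore.acquire()   # wait for a schedule request
            time.sleep(0.01)                  # pretend to register the delay
            self.scheduled_delay = None       # handleTimerEvent: clears the slot

    def schedule(self, delay):
        with self.access_protect:
            self.scheduled_delay = delay
            self.timing_semaphore.release()
            # The Pharo code tests scheduledDelay right here, relying on the
            # higher-priority timer process having already preempted it.
            # Without priorities we must wait explicitly instead.
            deadline = time.monotonic() + 1.0
            while self.scheduled_delay is not None and time.monotonic() < deadline:
                time.sleep(0.001)
            return self.scheduled_delay is None

print(TimerHandoff().schedule("a delay"))
```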
>>>
>>> Our solution is to run the critical section at a higher priority to
>>> ensure that control always comes back to the same process to release
>>> the delay lock.
>>>
>>> schedule: aDelay
>>>     | scheduled |
>>>     scheduled := false.
>>>     aDelay schedulerBeingWaitedOn
>>>         ifTrue: [ ^ self error: 'This Delay has already been scheduled.' ].
>>>     [ accessProtect critical: [
>>>         scheduledDelay := aDelay.
>>>         timingSemaphore signal. "#handleTimerEvent: sets scheduledDelay := nil"
>>>         scheduled := scheduledDelay == nil ].
>>>     ^ scheduled ] valueAt: Processor timingPriority - 1
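
[Editor's note: the starvation, and why the fix works, can be reproduced
with a toy strict-priority scheduler in Python. Only the priorities 10/40/79
come from the report above; the tick counts and everything else are invented
for illustration. `boost_low` models running the critical section just below
timingPriority.]

```python
def simulate(ticks=30, boost_low=False):
    """Each tick, the highest-priority runnable task runs (strict priorities)."""
    lock_owner = "low"                  # priority-10 process already holds accessProtect
    low_work_left = 3                   # ticks 'low' needs before it can release
    low_prio = 78 if boost_low else 10  # the fix: critical section near timingPriority
    trace = []
    for _ in range(ticks):
        runnable = {"ui": 40, "low": low_prio}  # 'ui' never blocks
        if lock_owner is None:
            runnable["high"] = 79               # unblocked once the lock is free
        task = max(runnable, key=runnable.get)  # strict priority scheduling
        trace.append(task)
        if task == "low":
            low_work_left -= 1
            if low_work_left == 0:
                lock_owner = None               # release the delay lock
        elif task == "high":
            break                               # 'high' finally gets the lock
    return trace

print(simulate())                # 'ui' runs every tick; 'low' is starved,
                                 # so 'high' never appears: priority inversion
print(simulate(boost_low=True))  # 'low' finishes first, then 'high' runs
```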
>>>
>>> Also, to avoid interference with the benchmarking process, we reduce
>>> the priority of the process forked in #benchFor:
>>>
>>> benchFor: duration
>>>     "Run me for duration and return a BenchmarkResult"
>>>     "[ 100 factorial ] benchFor: 2 seconds"
>>>     | count run started |
>>>     count := 0.
>>>     run := true.
>>>     [ duration wait. run := false ] forkAt: Processor timingPriority - 2.
>>>     started := Time millisecondClockValue.
>>>     [ run ] whileTrue: [ self value. count := count + 1 ].
>>>     ^ BenchmarkResult new
>>>         iterations: count;
>>>         elapsedTime: (Time millisecondsSince: started) milliSeconds;
>>>         yourself
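
[Editor's note: the same pattern as #benchFor: can be sketched in Python;
the function name is hypothetical, and `threading.Timer` plays the role of
the forked timing process that flips the `run` flag when the duration expires.]

```python
import threading, time

def bench_for(block, duration_s):
    """Run `block` repeatedly for roughly `duration_s` seconds;
    return (iterations, elapsed_seconds)."""
    run = True
    def stop():
        nonlocal run
        run = False
    timer = threading.Timer(duration_s, stop)  # the "forked" stopper process
    timer.start()
    count = 0
    started = time.monotonic()
    while run:                                 # the benchmark loop
        block()
        count += 1
    return count, time.monotonic() - started

iterations, elapsed = bench_for(lambda: sum(range(100)), 0.2)
```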
>>>
>>>
>>> We will create an entry in fogbugz and submit the change.
>>>
>>> Regards,
>>> Martin, Guille and Pablo
>>>
>>>
>>>
>>> On Wed, Nov 18, 2015 at 8:33 PM, Max Leske <maxle...@gmail.com> wrote:
>>>>
>>>> I went ahead and implemented a simple fix for the styler. Not sure how
>>>> well it performs but in my simple experiments it seems to work fine. Please
>>>> test it and let me know what you think.
>>>>
>>>>
>>>> https://pharo.fogbugz.com/f/cases/17050/SHTextStyler-styleInBackgroundProcess-spawns-too-many-processes
>>>>
>>>> Cheers,
>>>> Max
>>>>
>>>>
>>>> > On 18 Nov 2015, at 19:02, Andreas Wacknitz <a.wackn...@gmx.de> wrote:
>>>> >
>>>> >
>>>> >
>>>> > Am 18.11.15 um 11:48 schrieb Andrei Chis:
>>>> >> Can you try the script below in the latest Pharo image and let me know
>>>> >> how long it takes to execute on your machine:
>>>> >>
>>>> >> | duration benchmarkResult |
>>>> >> 100 timesRepeat: [
>>>> >>     RubScrolledTextMorph new
>>>> >>         model: (RubScrolledTextModel new setInitialText: '1+1'; yourself);
>>>> >>         beForSmalltalkScripting;
>>>> >>         yourself ].
>>>> >>
>>>> >> duration := 100 milliSeconds.
>>>> >> benchmarkResult := [ 100 factorial ] benchFor: duration.
>>>> >>
>>>> >> In my case it takes around 1 minute. The inner loop finishes
>>>> >> immediately and then most of the time is spent executing the benchmark.
>>>> >>
>>>> >> At the end I get the following result:
>>>> >> "a BenchmarkResult(2,060,386 iterations in 1 minute 7 seconds 498
>>>> >> milliseconds. 30,525 per second)"
>>>> >>
>>>> >> So the benchmark is executed for over one minute before the delay
>>>> >> expires.
>>>> >>
>>>> >>
>>>> >>
>>>> > "a BenchmarkResult(1,263,573 iterations in 37 seconds 689 milliseconds.
>>>> > 33,526 per second)"
>>>> > (HP Z420 with Xeon E5-1620 under OpenIndiana).
>>>> >
>>>> > Regards
>>>> > Andreas
>>>>
>>>>
>>>
>>
