I found a single test method I could use to narrow down the problem (be aware 
that this might not work for you; I have other (fresh) images where the issue 
never occurs). The method I used is 
EpRedoAndUndoVisitorTest>>testClassModification.

I was able to mitigate the hang by making a small modification to the method 
EpUndoVisitor>>visitClassModification:. Simply add

2 seconds asDelay wait. Smalltalk garbageCollect.

or

Smalltalk garbageCollect. 2 seconds asDelay wait.

after the compilation phase. My guess is that announcements build up (a simple 
printout showed 1 announcement instance before and 74 instances after the 
#evaluate:) and trigger activity in the system that influences process 
scheduling. For instance, many of those announcements are probably held weakly 
and will need to be finalized at some point.
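
To make that concrete, the modified method looked roughly like this — the body 
is a sketch with the original code elided, only the two added lines matter:

visitClassModification: aChange
	"... original code up to and including the compilation phase ..."

	Smalltalk garbageCollect.
	2 seconds asDelay wait.

	"... rest of the original method ..."

The instance printout was just something along the lines of (SystemAnnouncement 
here is only a stand-in for whichever announcement class you want to count):

	Transcript show: SystemAnnouncement allInstances size printString; cr.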

I don’t have enough experience with the ClassBuilder to go on hunting but this 
should hopefully give you a good head start.


For testing I used the following command line: 

./pharo found_case.image eval "CommandLineTestRunner runPackages: #('Epicea' 
'Kernel-Tests')"

Install Epicea with

./pharo Pharo.image get Epicea 6.5


To isolate the test case I simply moved test classes from Epicea to a 
differently named package and, once I had found the class, I started excluding 
methods (e.g. by putting "self fail." at the beginning).
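
For example (the test name here is only illustrative):

testSomeScenario
	self fail. "temporarily excluded while bisecting"
	"... original test code ..."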


HTH,
Max




> On 12 Nov 2015, at 18:57, Martin Dias <tinchod...@gmail.com> wrote:
> 
> Yes! with Max Leske we have been isolating the problem and we realized that a 
> fresh image was enough:
> 
> 1)
> curl get.pharo.org/50+vm | bash
> 
> 2)
> ./pharo Pharo.image eval "CommandLineTestRunner runClasses:  
> {GLMTreeMorphicTest. GLMWatcherMorphicTest. GLMPagerMorphTest} named: 'x'. 
> CommandLineTestRunner runPackages: #('Kernel-Tests')"
> 
> Martin
> 
> On Thu, Nov 12, 2015 at 6:42 PM, Sven Van Caekenberghe <s...@stfx.eu 
> <mailto:s...@stfx.eu>> wrote:
> Then it must be that waiting on the delay never ends, which is probably a 
> problem all by itself.
> 
> > On 12 Nov 2015, at 18:39, Marcus Denker <marcus.den...@inria.fr> wrote:
> >
> > The fun thing is that I now get the same problem when integrating the GT 
> > Tools update.
> >
> > After doing the update for issue 16957, the test run hangs with
> >
> > starting testcase: BlockClosureTest>>testBenchFor ...
> >
> >
> >> On 11 Nov 2015, at 22:42, Martin Dias <tinchod...@gmail.com> wrote:
> >>
> >> Hello,
> >>
> >> I'm trying to understand why this method hangs in certain conditions*. 
> >> What happens is that the #whileTrue: loop in the following method never exits.
> >>
> >> benchFor: duration
> >>      "Run me for duration and return a BenchmarkResult"
> >>
> >>      "[ 100 factorial ] benchFor: 2 seconds"
> >>
> >>      | count run started |
> >>      count := 0.
> >>      run := true.
> >>      [ duration wait. run := false ] forkAt: Processor timingPriority - 1.
> >>      started := Time millisecondClockValue.
> >>      [ run ] whileTrue: [ self value. count := count + 1 ].
> >>      ^ BenchmarkResult new
> >>              iterations: count;
> >>              elapsedTime: (Time millisecondsSince: started) milliSeconds;
> >>              yourself
> >>
> >> Strangely, I verified with several #logCr that the forked block starts, 
> >> but the #wait never ends.
> >> ---> Maybe you see something I don't in the code. Any clue?
> >>
> >> I will continue trying to isolate the problem, but maybe you have some 
> >> idea.
> >>
> >> * the conditions to reproduce are:
> >> 1) curl get.pharo.org/50+vm | bash
> >> 2) ./pharo Pharo.image get Epicea 6.5
> >> 3) ./pharo Pharo.image test '^(?!Metacello)[A-L].*'
> >>
> >> Notes:
> >> - It doesn't hang when running from UI
> >> - it doesn't hang when running less tests, like:
> >>   pharo Pharo.image test '^(?!Metacello)[E-L].*'
> >> - it *does* hang even when removing all Epicea tests
> >> - Epicea doesn't extend or override "kernel" behavior...
> >>
> >> Regards,
> >> Martin
> >
> 
> 
> 
