Re: [Pharo-dev] [Pharo-users] [ANN] Pharo TechTalk 21 Nov: Discord Demo

2017-11-21 Thread Sean P. DeNigris
Juraj Kubelka wrote
> the TechTalk record is available at the same link:
> https://www.youtube.com/watch?v=33kXsOiP6wA 

Thanks! The quality seems to top out at 480p, which is pretty blurry. It
makes the code nearly unreadable. Any way to bump the resolution on the next
one?



-
Cheers,
Sean
--
Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html



Re: [Pharo-dev] OpalCompiler evaluate speed

2017-11-21 Thread Ben Coman
On 22 November 2017 at 05:49, Nicolas Cellier <
nicolas.cellier.aka.n...@gmail.com> wrote:

>
>
> 2017-11-21 14:19 GMT+01:00 Nicolas Cellier:
>
>> I have an ArbitraryPrecisionFloatTests doing an exhaustive test for
>> printing and reevaluating all positive half-precision floats.
>>
>> That's a 2^15 (approximately 32k) iteration loop which evaluates snippets like
>>
>> (ArbitraryPrecisionFloat readFrom: '1.123' readStream numBits: 10)
>>
>> The test was naively written with Compiler evaluate: and was using the
>> legacy Compiler.
>>
>> If I rewrite it as self class compiler evaluate: the test times out.
>> Let's see what increase is necessary:
>>
>> [ ArbitraryPrecisionFloatTest new testPrintAndEvaluate  ] timeToRun.
>> -> 3s with legacy Compiler
>> -> 14s with OpalCompiler
>>
>> It's not unexpected that intermediate representation (IR) reification has
>> a cost, but here the 4.5x is a bit too much...
>> This test did account for 1/4 of total test duration already (3s out of
>> 12s).
>> With Opal, the total test duration doubles... (14s out of 23s)
>>
>> So let's analyze the hot spot with:
>>
>> MessageTally  spyOn: [ ArbitraryPrecisionFloatTest new
>> testPrintAndEvaluate  ].
>>
>> (I didn't use AndreasSystemProfiler because its output seems a bit garbled;
>> no matter, since the primitives do not account for that much, a MessageTally
>> will do the job)
>>
>> I first see a hot spot which does not seem that necessary:
>>
>>   ||24.6% {3447ms} RBMethodNode(RBProgramNode)>>formattedCode
>>
>> From the comments I understand that AST-based stuff requires a pattern
>> (DoIt) and an explicit return (^), but this expensive formatting seems too
>> much for just evaluating. I think that we should change that.
>>
>> Then comes:
>>
>>   ||20.7% {2902ms} RBMethodNode>>generate:
>>
>> which is split in two halves, AST->IR and IR->bytecode
>>
>>   ||  |9.3% {1299ms} RBMethodNode>>generateIR
>>
>>   ||  |  |11.4% {1596ms} IRMethod>>generate:
>>
>> But then I see this cost a 2nd time, which also leaves room for progress:
>>
>>   ||10.9% {1529ms} RBMethodNode>>generateIR
>>
>>   ||  |12.9% {1814ms} IRMethod>>generate:
>>
>> The first is in RBMethodNode>>generateWithSource, the second in
>> OpalCompiler>>compile
>>
>> Last comes the parse time (sourceCode -> AST)
>>
>>   |  13.2% {1858ms} OpalCompiler>>parse
>>
>> Along with semantic analysis
>>
>>   |  6.0% {837ms} OpalCompiler>>doSemanticAnalysis
>>
>> ---
>>
>> For comparison, the legacy Compiler decomposes into:
>>
>>   ||61.5% {2223ms} Parser>>parse:class:category:noPattern:context:notifying:ifFail:
>>
>> which more or less covers parse time + semantic analysis time.
>> That means that Opal does fair work for this stage.
>>
>> Then, the direct AST->byteCode phase is:
>>
>>  |  16.9% {609ms} MethodNode>>generate
>>
>> IR costs almost 5x on this phase, but we know it's the price to pay for
>> the additional features that it potentially offers. If only we did it
>> once...
>>
>> And that's all for the legacy one...
>>
>> --
>>
>> This little exercise shows that a 2x acceleration of OpalCompiler
>> evaluate seems achievable:
>> - simplify the uselessly expensive formatted code
>> - generate bytecodes once, not twice
>>
>> Then it will be a bit more than 2x slower than legacy, which is a better
>> trade for the yet-to-come superior features potentially brought by Opal.
>>
>> It would be interesting to carry out the same analysis on method compilation
>>
>
> Digging further here is what I find:
>
> compile sends generate: and answers a CompiledMethod.
> translate sends compile but throws the CompiledMethod away, and just answers
> the AST.
> Most senders of translate will also send generate: (thus we generate: twice
> quite often, losing a 2x factor in compilation).
>
> A 2x gain is a huge gain when installing big code bases, especially if the
> custom is to throw the image away and reconstruct it.
> No matter if a bot does the job, it does it for twice as many watts, and in
> the end, we're waiting for the bot.
>
> However, before changing anything, further clarification is required:
> translate does one more thing: it catches ReparseAfterSourceEditing and
> retries compilation (once).
> So my question: are there some cases when generate: will cause
> ReparseAfterSourceEditing?
>

I don't know the full answer about other cases, but I can provide the
background why ReparseAfterSourceEditing was added.

IIRC, a few years ago, with the move to an AST-based system, there was a
problem with syntax highlighting: the AST referenced its original source,
which caused highlighting offsets when it referred to source modified in the
editor. Trying to work backwards from the modified source to update the
source locations of all AST elements proved an intractable problem.
The workaround I 

Re: [Pharo-dev] [Pharo-users] [ANN] Pharo TechTalk 21 Nov: Discord Demo

2017-11-21 Thread Dimitris Chloupis
Well done :)

Now you can make a discord client inside the Pharo image if you want.
On Tue, 21 Nov 2017 at 21:03, Juraj Kubelka wrote:

> Hi,
>
> the TechTalk record is available at the same link:
> https://www.youtube.com/watch?v=33kXsOiP6wA
> and includes an outline to simplify navigation.
>
> Cheers,
> Juraj
>
>
> TechTalk Outline:
> - 01:58 The beginning of the talk
> - 04:30 Webhook
>   - 04:33 How to Create Webhook
>   - 05:39 Webhook Examples
>   - 10:33 Webhook Use Case: Script of the Day from Nautilus Code Browser
>   - 18:39 Webhook Use Case: Server Problem Notification
> - 22:45 Bot App (chatbot)
>   - 24:47 How to Create a Bot App
>   - 28:28 Bot App Examples
>   - 33:17 Bot Use Case: Source Code Expertise
> - 41:40 Standard User Client
>   - 42:57 User Client Example
>   - 45:06 User Client Use Case: Asking Directly from Pharo Playground
>   - 47:06 User Client Use Case: Receiving Questions and Answering in Pharo
>   - 50:50 Final Thoughts About Discord Integration in Inspector and
> Debugger
> - 52:44 Discussion
>
> On Nov 21, 2017, at 12:54, Juraj Kubelka  wrote:
>
> Hi!
>
> The TechTalk starts in about 10 minutes. Join us on Discord, the techtalk
> channel.
>
> Cheers,
> Juraj
>
>
> On Nov 21, 2017, at 10:11, Marcus Denker  wrote:
>
> The link to the live stream is this:
>
>  https://www.youtube.com/watch?v=33kXsOiP6wA
>
> It starts in a bit less than 3 hours.
>
> Marcus
>
> On 18 Nov 2017, at 09:13, Marcus Denker  wrote:
>
> Pharo TechTalk: Discord Demo
> When?  21 Nov 2017 5:00 PM - 7:00 PM (UTC+01:00)
>
> Topic: "Discord communication Demo", how to script Discord from Pharo.
>
> https://association.pharo.org/event-2642665
>
>
>
>
>


Re: [Pharo-dev] OpalCompiler evaluate speed

2017-11-21 Thread Nicolas Cellier
2017-11-21 14:19 GMT+01:00 Nicolas Cellier <
nicolas.cellier.aka.n...@gmail.com>:

> I have an ArbitraryPrecisionFloatTests doing an exhaustive test for
> printing and reevaluating all positive half-precision floats.
>
> That's a 2^15 (approximately 32k) iteration loop which evaluates snippets like
>
> (ArbitraryPrecisionFloat readFrom: '1.123' readStream numBits: 10)
>
> The test was naively written with Compiler evaluate: and was using the
> legacy Compiler.
>
> If I rewrite it as self class compiler evaluate: the test times out.
> Let's see what increase is necessary:
>
> [ ArbitraryPrecisionFloatTest new testPrintAndEvaluate  ] timeToRun.
> -> 3s with legacy Compiler
> -> 14s with OpalCompiler
>
> It's not unexpected that intermediate representation (IR) reification has
> a cost, but here the 4.5x is a bit too much...
> This test did account for 1/4 of total test duration already (3s out of
> 12s).
> With Opal, the total test duration doubles... (14s out of 23s)
>
> So let's analyze the hot spot with:
>
> MessageTally  spyOn: [ ArbitraryPrecisionFloatTest new
> testPrintAndEvaluate  ].
>
> (I didn't use AndreasSystemProfiler because its output seems a bit garbled;
> no matter, since the primitives do not account for that much, a MessageTally
> will do the job)
>
> I first see a hot spot which does not seem that necessary:
>
>   ||24.6% {3447ms} RBMethodNode(RBProgramNode)>>formattedCode
>
> From the comments I understand that AST-based stuff requires a pattern
> (DoIt) and an explicit return (^), but this expensive formatting seems too
> much for just evaluating. I think that we should change that.
>
> Then comes:
>
>   ||20.7% {2902ms} RBMethodNode>>generate:
>
> which is split in two halves, AST->IR and IR->bytecode
>
>   ||  |9.3% {1299ms} RBMethodNode>>generateIR
>
>   ||  |  |11.4% {1596ms} IRMethod>>generate:
>
> But then I see this cost a 2nd time, which also leaves room for progress:
>
>   ||10.9% {1529ms} RBMethodNode>>generateIR
>
>   ||  |12.9% {1814ms} IRMethod>>generate:
>
> The first is in RBMethodNode>>generateWithSource, the second in
> OpalCompiler>>compile
>
> Last comes the parse time (sourceCode -> AST)
>
>   |  13.2% {1858ms} OpalCompiler>>parse
>
> Along with semantic analysis
>
>   |  6.0% {837ms} OpalCompiler>>doSemanticAnalysis
>
> ---
>
> For comparison, the legacy Compiler decomposes into:
>
>   ||61.5% {2223ms} Parser>>parse:class:category:noPattern:context:notifying:ifFail:
>
> which more or less covers parse time + semantic analysis time.
> That means that Opal does fair work for this stage.
>
> Then, the direct AST->byteCode phase is:
>
>  |  16.9% {609ms} MethodNode>>generate
>
> IR costs almost 5x on this phase, but we know it's the price to pay for
> the additional features that it potentially offers. If only we did it
> once...
>
> And that's all for the legacy one...
>
> --
>
> This little exercise shows that a 2x acceleration of OpalCompiler evaluate
> seems achievable:
> - simplify the uselessly expensive formatted code
> - generate bytecodes once, not twice
>
> Then it will be a bit more than 2x slower than legacy, which is a better
> trade for the yet-to-come superior features potentially brought by Opal.
>
> It would be interesting to carry out the same analysis on method compilation
>

Digging further here is what I find:

compile sends generate: and answers a CompiledMethod.
translate sends compile but throws the CompiledMethod away, and just answers
the AST.
Most senders of translate will also send generate: (thus we generate: twice
quite often, losing a 2x factor in compilation).

A 2x gain is a huge gain when installing big code bases, especially if the
custom is to throw the image away and reconstruct it.
No matter if a bot does the job, it does it for twice as many watts, and in
the end, we're waiting for the bot.

However, before changing anything, further clarification is required:
translate does one more thing: it catches ReparseAfterSourceEditing and
retries compilation (once).
So my question: are there some cases when generate: will cause
ReparseAfterSourceEditing?
That could happen in the generation phase if some bytecode limit is exceeded
and an interactive handler corrects the code...
I did not see any such condition, but the code base is huge...
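The translate-retries-once behaviour described above can be paraphrased as follows. This is an illustrative sketch only, NOT the actual OpalCompiler source; the method body is simplified to show the control flow under discussion:

```smalltalk
"Illustrative paraphrase (not the real implementation) of the behaviour
described above: translate compiles (generating bytecode that is then
thrown away), retries once if the source is edited during compilation,
and answers only the AST."
OpalCompiler >> translate
	[ self compile ]	"generate: happens here; result discarded"
		on: ReparseAfterSourceEditing
		do: [ :notification | self compile ].	"retry compilation once"
	^ self ast
```

If this paraphrase is accurate, keeping the CompiledMethod produced here instead of discarding it would let senders of translate that later call generate: skip a second bytecode generation, which is exactly the 2x factor discussed above.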


Re: [Pharo-dev] Pharo 6.1 64 bits segmentation fault loading code on Linux

2017-11-21 Thread Thierry Goubier

Le 21/11/2017 à 21:00, Gabriel Cotelli a écrit :
I can't find a latest 6.1 vm, so I tried with the Pharo 7 latest VM (wget
-O- get.pharo.org/64/vmTLatest70 | bash) and the 6.1 image, but it crashes
with a segmentation fault also.


I also have a segfault with vmTLatest70.

At the moment, I use cog_linux64x64_pharo.cog.spur_201711061254, and 
this one doesn't segfault.


Otherwise, you can use the snap version of Pharo. This one doesn't
segfault. You may have to install snap on your Linux Mint, and note that
the snap has an effect when you're calling external commands from Pharo.


Thierry



On Tue, Nov 21, 2017 at 12:03 PM, Thierry Goubier wrote:


Hi Gabriel,

I had such segfaults with the stable 6.1/64bits on Linux. I solved
them by ensuring the use of a more recent version than the stable one.

Thierry


2017-11-21 15:56 GMT+01:00 Gabriel Cotelli:

I've created the following issue:


https://pharo.fogbugz.com/f/cases/20737/Segmentation-fault-trying-to-load-code-into-a-64-bits-Pharo-6-1-Linux


I can reproduce it easily so if you need some more info let me know.

Regards,
Gabriel








[Pharo-dev] Pharo Filter usability due to same background as List (Iceberg)

2017-11-21 Thread Torsten Bergmann
Hi,

I do not know if this is only an issue of Iceberg or of the themes in general:

Due to the topic of today's techtalk I added the "DiscordSt" repository to
Iceberg, leading to the attached screenshot.

I wondered why there were only 7 packages, while I was pretty sure that there
were more packages on the GitHub page of the repo.

So it LOOKED LIKE THE LIST ENDED/ONLY HAD A FEW ITEMS, because the filter is
displayed right at the bottom of the list.

You can call me stupid - but I initially thought of an Iceberg sync issue
where some packages were not downloaded/cloned - but then, after several
seconds,
I realized that the list just ends because of the filter area and there are
more items - one just has to scroll.

This is OK, but I interpreted the white area of the filter as meaning there
were no further list items,
instead of recognizing the filter itself.

Maybe I'm the only one, but: as the (package) list background is white and
the filter background is white too, they are hard to distinguish. Also, the
placeholder "Filter"
is centered instead of right-aligned ...

So, depending on the window size, it is displayed at a wider distance from
the list item text,
and one does not notice that the white area is a filter.

I guess a small, thin line to separate the filter widget from the list part
of the widget would solve the issue. Would that be possible? If so, is this
only an issue of Iceberg or of the Pharo theme in general?

Also I wonder: the "Filter ..." placeholder label is centered in the middle.
If one clicks into the filter, the "Filter..." placeholder surprisingly
jumps to the left.

I guess we have to catch up more on the usability of our tools.

Thanks
T.





Re: [Pharo-dev] Pharo 6.1 64 bits segmentation fault loading code on Linux

2017-11-21 Thread Gabriel Cotelli
I can't find a latest 6.1 vm, so I tried with the Pharo 7 latest VM (wget
-O- get.pharo.org/64/vmTLatest70 | bash) and the 6.1 image, but it crashes
with a segmentation fault also.

On Tue, Nov 21, 2017 at 12:03 PM, Thierry Goubier  wrote:

> Hi Gabriel,
>
> I had such segfaults with the stable 6.1/64bits on Linux. I solved them by
> ensuring the use of a more recent version than the stable one.
>
> Thierry
>
>
> 2017-11-21 15:56 GMT+01:00 Gabriel Cotelli :
>
>> I've created the following issue:
>>
>> https://pharo.fogbugz.com/f/cases/20737/Segmentation-fault-
>> trying-to-load-code-into-a-64-bits-Pharo-6-1-Linux
>> I can reproduce it easily so if you need some more info let me know.
>>
>> Regards,
>> Gabriel
>>
>
>


Re: [Pharo-dev] [Pharo-users] [ANN] Pharo TechTalk 21 Nov: Discord Demo

2017-11-21 Thread Juraj Kubelka
Hi,

the TechTalk record is available at the same link:
https://www.youtube.com/watch?v=33kXsOiP6wA
and includes an outline to simplify navigation.

Cheers,
Juraj


TechTalk Outline:
- 01:58 The beginning of the talk
- 04:30 Webhook
  - 04:33 How to Create Webhook
  - 05:39 Webhook Examples
  - 10:33 Webhook Use Case: Script of the Day from Nautilus Code Browser
  - 18:39 Webhook Use Case: Server Problem Notification
- 22:45 Bot App (chatbot)
  - 24:47 How to Create a Bot App
  - 28:28 Bot App Examples
  - 33:17 Bot Use Case: Source Code Expertise
- 41:40 Standard User Client
  - 42:57 User Client Example
  - 45:06 User Client Use Case: Asking Directly from Pharo Playground
  - 47:06 User Client Use Case: Receiving Questions and Answering in Pharo
  - 50:50 Final Thoughts About Discord Integration in Inspector and Debugger
- 52:44 Discussion

> On Nov 21, 2017, at 12:54, Juraj Kubelka  wrote:
> 
> Hi!
> 
> The TechTalk starts in about 10 minutes. Join us on Discord, the techtalk 
> channel.
> 
> Cheers,
> Juraj
> 
> 
>> On Nov 21, 2017, at 10:11, Marcus Denker wrote:
>> 
>> The link to the live stream is this:
>> 
>>   https://www.youtube.com/watch?v=33kXsOiP6wA 
>> 
>> 
>> It starts in a bit less than 3 hours.
>> 
>>  Marcus
>> 
>>> On 18 Nov 2017, at 09:13, Marcus Denker wrote:
>>> 
>>> Pharo TechTalk: Discord Demo
>>> When?  21 Nov 2017 5:00 PM - 7:00 PM (UTC+01:00)
>>> 
>>> Topic: "Discord communication Demo", how to script Discord from Pharo.
>>> 
>>> https://association.pharo.org/event-2642665 
>>> 
>> 
> 



Re: [Pharo-dev] [Pharo-users] [ANN] Pharo TechTalk 21 Nov: Discord Demo

2017-11-21 Thread Juraj Kubelka
Hi!

The TechTalk starts in about 10 minutes. Join us on Discord, the techtalk 
channel.

Cheers,
Juraj


> On Nov 21, 2017, at 10:11, Marcus Denker  wrote:
> 
> The link to the live stream is this:
> 
>https://www.youtube.com/watch?v=33kXsOiP6wA 
> 
> 
> It starts in a bit less than 3 hours.
> 
>   Marcus
> 
>> On 18 Nov 2017, at 09:13, Marcus Denker wrote:
>> 
>> Pharo TechTalk: Discord Demo
>> When?  21 Nov 2017 5:00 PM - 7:00 PM (UTC+01:00)
>> 
>> Topic: "Discord communication Demo", how to script Discord from Pharo.
>> 
>>  https://association.pharo.org/event-2642665 
>> 
> 



Re: [Pharo-dev] Pharo 6.1 64 bits segmentation fault loading code on Linux

2017-11-21 Thread Thierry Goubier
Hi Gabriel,

I had such segfaults with the stable 6.1/64bits on Linux. I solved them by
ensuring the use of a more recent version than the stable one.

Thierry


2017-11-21 15:56 GMT+01:00 Gabriel Cotelli :

> I've created the following issue:
>
> https://pharo.fogbugz.com/f/cases/20737/Segmentation-
> fault-trying-to-load-code-into-a-64-bits-Pharo-6-1-Linux
> I can reproduce it easily so if you need some more info let me know.
>
> Regards,
> Gabriel
>


[Pharo-dev] Pharo 6.1 64 bits segmentation fault loading code on Linux

2017-11-21 Thread Gabriel Cotelli
I've created the following issue:

https://pharo.fogbugz.com/f/cases/20737/Segmentation-fault-trying-to-load-code-into-a-64-bits-Pharo-6-1-Linux
I can reproduce it easily so if you need some more info let me know.

Regards,
Gabriel


[Pharo-dev] OpalCompiler evaluate speed

2017-11-21 Thread Nicolas Cellier
I have an ArbitraryPrecisionFloatTests doing an exhaustive test for
printing and reevaluating all positive half-precision floats.

That's a 2^15 (approximately 32k) iteration loop which evaluates snippets like

(ArbitraryPrecisionFloat readFrom: '1.123' readStream numBits: 10)

The test was naively written with Compiler evaluate: and was using the
legacy Compiler.

If I rewrite it as self class compiler evaluate: the test times out.
Let's see what increase is necessary:

[ ArbitraryPrecisionFloatTest new testPrintAndEvaluate  ] timeToRun.
-> 3s with legacy Compiler
-> 14s with OpalCompiler

It's not unexpected that intermediate representation (IR) reification has a
cost, but here the 4.5x is a bit too much...
This test did account for 1/4 of total test duration already (3s out of
12s).
With Opal, the total test duration doubles... (14s out of 23s)

So let's analyze the hot spot with:

MessageTally  spyOn: [ ArbitraryPrecisionFloatTest new
testPrintAndEvaluate  ].

(I didn't use AndreasSystemProfiler because its output seems a bit garbled;
no matter, since the primitives do not account for that much, a MessageTally
will do the job)

I first see a hot spot which does not seem that necessary:

  ||24.6% {3447ms} RBMethodNode(RBProgramNode)>>formattedCode

From the comments I understand that AST-based stuff requires a pattern
(DoIt) and an explicit return (^), but this expensive formatting seems too
much for just evaluating. I think that we should change that.

Then comes:

  ||20.7% {2902ms} RBMethodNode>>generate:

which is split in two halves, AST->IR and IR->bytecode

  ||  |9.3% {1299ms} RBMethodNode>>generateIR

  ||  |  |11.4% {1596ms} IRMethod>>generate:

But then I see this cost a 2nd time, which also leaves room for progress:

  ||10.9% {1529ms} RBMethodNode>>generateIR

  ||  |12.9% {1814ms} IRMethod>>generate:

The first is in RBMethodNode>>generateWithSource, the second in
OpalCompiler>>compile

Last comes the parse time (sourceCode -> AST)

  |  13.2% {1858ms} OpalCompiler>>parse

Along with semantic analysis

  |  6.0% {837ms} OpalCompiler>>doSemanticAnalysis

---

For comparison, the legacy Compiler decomposes into:

  ||61.5% {2223ms}
Parser>>parse:class:category:noPattern:context:notifying:ifFail:

which more or less covers parse time + semantic analysis time.
That means that Opal does fair work for this stage.

Then, the direct AST->byteCode phase is:

 |  16.9% {609ms} MethodNode>>generate

IR costs almost 5x on this phase, but we know it's the price to pay for
the additional features that it potentially offers. If only we did it
once...

And that's all for the legacy one...

--

This little exercise shows that a 2x acceleration of OpalCompiler evaluate
seems achievable:
- simplify the uselessly expensive formatted code
- generate bytecodes once, not twice

Then it will be a bit more than 2x slower than legacy, which is a better
trade for the yet-to-come superior features potentially brought by Opal.

It would be interesting to carry out the same analysis on method compilation


Re: [Pharo-dev] [ANN] Pharo TechTalk 21 Nov: Discord Demo

2017-11-21 Thread Marcus Denker
The link to the live stream is this:

 https://www.youtube.com/watch?v=33kXsOiP6wA 


It starts in a bit less than 3 hours.

Marcus

> On 18 Nov 2017, at 09:13, Marcus Denker  wrote:
> 
> Pharo TechTalk: Discord Demo
> When?  21 Nov 2017 5:00 PM - 7:00 PM (UTC+01:00)
> 
> Topic: "Discord communication Demo", how to script Discord from Pharo.
> 
>   https://association.pharo.org/event-2642665



[Pharo-dev] [Pharo 7.0-dev] Build #315: Fix the sign of FloatNegativeZero

2017-11-21 Thread ci-pharo-ci-jenkins2
There is a new Pharo build available!

The status of the build #315 was: SUCCESS.

The Pull Request #515 was integrated: "Fix the sign of FloatNegativeZero"
Pull request url: https://github.com/pharo-project/pharo/pull/515

Issue Url: https://pharo.fogbugz.com/f/cases/19629
Build Url: 
https://ci.inria.fr/pharo-ci-jenkins2/job/Test%20pending%20pull%20request%20and%20branch%20Pipeline/job/development/315/


[Pharo-dev] [Pharo 7.0-dev] Build #314: Fix the sign of FloatNegativeZero

2017-11-21 Thread ci-pharo-ci-jenkins2
There is a new Pharo build available!

The status of the build #314 was: FAILURE.

The Pull Request #515 was integrated: "Fix the sign of FloatNegativeZero"
Pull request url: https://github.com/pharo-project/pharo/pull/515

Issue Url: https://pharo.fogbugz.com/f/cases/19629
Build Url: 
https://ci.inria.fr/pharo-ci-jenkins2/job/Test%20pending%20pull%20request%20and%20branch%20Pipeline/job/development/314/


[Pharo-dev] [Pharo 7.0-dev] Build #313: 20384-Converted-rules-to-Renraku-architecture-2

2017-11-21 Thread ci-pharo-ci-jenkins2
There is a new Pharo build available!

The status of the build #313 was: SUCCESS.

The Pull Request #445 was integrated: 
"20384-Converted-rules-to-Renraku-architecture-2"
Pull request url: https://github.com/pharo-project/pharo/pull/445

Issue Url: https://pharo.fogbugz.com/f/cases/20384
Build Url: 
https://ci.inria.fr/pharo-ci-jenkins2/job/Test%20pending%20pull%20request%20and%20branch%20Pipeline/job/development/313/