Re: [Pharo-users] Performance Testing Tools

2017-07-19 Thread Luke Gorrie
Hi Evan,

I am also really interested in this topic and have been doing a bunch of
work on automating statistical benchmarks. I don't have a background in
statistics or formal QA but I am learning as I go along :).

The tools I'm building are outside Smalltalk. Our full performance test
suite takes about a week of machine time to run because it tests ~15,000 QEMU
VMs with different software versions / configurations / workloads. There is
a CI server that runs all those tests, getting pretty fast turnarounds by
distributing across a cluster of servers and reusing results from
unmodified software branches, and spits out a CSV with one row per test
result (giving the benchmark score and the parameters of the test.)

Then what to do with that ~15,000 line CSV file? Just now I run Rmarkdown
to make a report on the distribution of results and then manually inspect
that to check for interesting differences. At the moment I lump all of the
different configurations together and treat them as one population.
Here is an example report:
https://hydra.snabb.co/build/1604171/download/2/report.html
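As an illustration of the kind of per-configuration summary involved, here is a small Python sketch. The column names "benchmark", "config" and "score" are invented for the example; the real Snabb CSV schema is not shown in this thread.

```python
import csv
import io
import statistics
from collections import defaultdict

# Hypothetical CSV with one row per test result (invented schema).
SAMPLE = """benchmark,config,score
iperf,a,9.1
iperf,a,9.3
iperf,b,5.0
"""

def summarize(reader):
    """Group benchmark scores by (benchmark, config) and report basic stats."""
    groups = defaultdict(list)
    for row in reader:
        groups[(row["benchmark"], row["config"])].append(float(row["score"]))
    return {
        key: {
            "n": len(scores),
            "median": statistics.median(scores),
            "stdev": statistics.stdev(scores) if len(scores) > 1 else 0.0,
        }
        for key, scores in groups.items()
    }

report = summarize(csv.DictReader(io.StringIO(SAMPLE)))
print(report)
```

With a real 15,000-row file you would replace the `StringIO` with an open file handle and eyeball (or diff) the resulting per-configuration stats.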

It's a bit primitive but it is getting the job done for release
engineering. I'm reasonably confident that new software releases don't
break or slow down in obscure configurations. We are building network
equipment and performance regressions are generally not acceptable.

I'm looking into more clever ways to automatically interpret the results,
e.g. fumbling around at
https://stats.stackexchange.com/questions/288416/non-parametric-test-if-two-samples-are-drawn-from-the-same-distribution
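One simple non-parametric approach along the lines discussed in that thread is a two-sample permutation test on the difference of means. A pure-Python sketch (the benchmark numbers are made up):

```python
import random
import statistics

def permutation_test(a, b, trials=10000, seed=42):
    """Two-sample permutation test on the difference of means.

    Returns an approximate p-value for the null hypothesis that the two
    benchmark samples come from the same distribution."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        left, right = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(left) - statistics.mean(right)) >= observed:
            hits += 1
    return hits / trials

# Invented scores: an "old" and a "new" software version.
old = [10.1, 10.3, 9.9, 10.2, 10.0]
new = [9.1, 9.0, 9.2, 8.9, 9.1]
print(permutation_test(old, new))  # small p-value suggests a real difference
```

This makes no distributional assumptions, which matters for benchmark data that is often multi-modal or long-tailed.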

Could relate to your ambitions somehow?


On 19 July 2017 at 02:00, Evan Donahue  wrote:

> Hi,
>
> I've been doing a lot of performance testing lately, and I've found myself
> wanting to upgrade my methods from ad hoc use of bench and message tally.
> Is there any kind of framework for like, statistically comparing
> improvements in performance benchmarks across different versions of code,
> or anything that generally helps manage the test-tweak-test loop? Just
> curious what's out there before I go writing something. Too many useful
> little libraries to keep track of!
>
> Evan
>


Re: [Pharo-users] could not find module vm-display-X11

2017-07-19 Thread Hilaire

Hi,

It does not work on a newly installed system. It is related to the
installed system; others have reported the same issue to me with the same system.



On 19/07/2017 at 00:01, Alistair Grant wrote:

Is this something that was working and has suddenly stopped, or a new
install that doesn't work?

Assuming that it is installed correctly, can you provide the output of:

cd /to/where/the/pharo/exe/is
ldd pharo

ldd pharo
linux-gate.so.1 =>  (0xf000)
libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xf7705000)
libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0xf770)
libpthread.so.0 => /lib/i386-linux-gnu/libpthread.so.0 (0xf76e2000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xf752c000)
/lib/ld-linux.so.2 (0x5655a000)

ldd vm-display-X11

ldd vm-display-X11
linux-gate.so.1 =>  (0xf773a000)
libpthread.so.0 => /lib/i386-linux-gnu/libpthread.so.0 (0xf76e4000)
libGL.so.1 => not found
libX11.so.6 => /usr/lib/i386-linux-gnu/libX11.so.6 (0xf7598000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xf73e2000)
/lib/ld-linux.so.2 (0x56649000)
libxcb.so.1 => /usr/lib/i386-linux-gnu/libxcb.so.1 (0xf73bc000)
libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0xf73b7000)
libXau.so.6 => /usr/lib/i386-linux-gnu/libXau.so.6 (0xf73b3000)
libXdmcp.so.6 => /usr/lib/i386-linux-gnu/libXdmcp.so.6 (0xf73ab000)




Cheers,
Alistair
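For what it's worth, ldd output like the above can be scanned mechanically for missing dependencies. A small Python sketch (purely illustrative, not part of any Pharo tooling):

```python
def missing_libs(ldd_output):
    """Return the names of shared libraries that ldd reports as 'not found'."""
    missing = []
    for line in ldd_output.splitlines():
        if "not found" in line:
            # ldd prints lines like "libGL.so.1 => not found"
            missing.append(line.split("=>")[0].strip())
    return missing

sample = """\
linux-gate.so.1 =>  (0xf773a000)
libGL.so.1 => not found
libX11.so.6 => /usr/lib/i386-linux-gnu/libX11.so.6 (0xf7598000)
"""
print(missing_libs(sample))
```

Running this over `ldd vm-display-X11` output immediately surfaces the `libGL.so.1` problem discussed below in the thread.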


--
Dr. Geo
http://drgeo.eu



Re: [Pharo-users] could not find module vm-display-X11

2017-07-19 Thread Nicolai Hess
Looks like libGL is the issue


On 19.07.2017 at 9:47 AM, "Hilaire" wrote:



Re: [Pharo-users] get output of a forked process on windows

2017-07-19 Thread Christophe Demarey
Hi Thierry,

> Le 18 juil. 2017 à 15:40, Thierry Goubier  a écrit 
> :
> 
> Hi Christophe,
> 
> You have to use ProcessWrapper.
> 
> Metacello new
>   configuration: 'ProcessWrapper';
>   repository: 'http://smalltalkhub.com/mc/Pharo/MetaRepoForPharo40/main 
> ';
>   load

Thanks, it is working with the given example from 
http://smalltalkhub.com/#!/~hernan/ProcessWrapper/.

Best regards,
Christophe



Re: [Pharo-users] could not find module vm-display-X11

2017-07-19 Thread Alistair Grant
Hi Hilaire,

On 19 July 2017 at 09:46, Hilaire  wrote:

I'm in a bit of a rush, but as Nicolai says, it looks like libGL.so.1
is the problem.  A Google search should help, but maybe try:

sudo apt-get install libgl1-mesa-glx:i386

Cheers,
Alistair



Re: [Pharo-users] could not find module vm-display-X11

2017-07-19 Thread Hilaire

Hi Alistair,

Indeed, installing this dependency makes Dr. Geo work again, and the 
"could not find module vm-display-X11" error is gone.


Thanks guys

Hilaire


On 19/07/2017 at 10:22, Alistair Grant wrote:


I'm in a bit of a rush, but as Nicolai says, it looks like LibGL.so.1
is the problem.  A google search should help, but maybe try:

sudo apt-get install libgl1-mesa-glx:i386

Cheers,
Alistair




--
Dr. Geo
http://drgeo.eu





[Pharo-users] [Pharo News] Free Ephemeric Cloud for Association Members

2017-07-19 Thread Marcus Denker
Free Ephemeric Cloud for Association Members
http://pharo.org/news/FreeEphemeric
online: now.



Re: [Pharo-users] Performance Testing Tools

2017-07-19 Thread Mariano Martinez Peck
The ones I remember are Smark [1] and CalipeL [2]

Cheers,

[1] http://www.smalltalkhub.com/#!/~StefanMarr/SMark
[2] https://bitbucket.org/janvrany/jv-calipel



-- 
Mariano
http://marianopeck.wordpress.com


[Pharo-users] Problem to access the "book compilation farm"

2017-07-19 Thread Matteo via Pharo-users
Dears,

I cannot access the site:
https://ci.inria.fr/pharo-contribution/view/Books/

Is there some problem with the server?


Further, I saw several "strange" messages on the main page of the
Pharo forum, http://forum.world.st/Pharo-f1294836.html.

Should these messages be removed?


Thanks,

Matteo.




Re: [Pharo-users] Problem to access the "book compilation farm"

2017-07-19 Thread Cyril Ferlicot
On Wed, Jul 19, 2017 at 2:24 PM, Matteo via Pharo-users
 wrote:
>
>
> Dears,
>
> I cannot accesses the site:
> https://ci.inria.fr/pharo-contribution/view/Books/
>
> Is there some problem with the server?
>
>

Hi,

The CI is down for an update. It should come back tomorrow or later this week.

> Further, I saw serval "strange" messages on the main page of the
> Pharo forum, http://forum.world.st/Pharo-f1294836.html.
>
> Should these messages be removed?
>
>
> Thanks,
>
> Matteo.
>
>
>

-- 
Cyril Ferlicot
https://ferlicot.fr

http://www.synectique.eu
2 rue Jacques Prévert 01,
59650 Villeneuve d'ascq France



Re: [Pharo-users] Creating the smallest server runtime footprint

2017-07-19 Thread Esteban A. Maringolo
I don't know how "mainstream" solutions perform on AWS Lambda or EC2,
but this seems really fast to me. 50 ms is great; assuming it bills in
100 ms increments, you still have room to perform your computation.

Thank you for pursuing this path, it could open a new territory for
using Pharo at big scale.

Esteban A. Maringolo


2017-07-17 8:32 GMT-03:00 Tim Mackinnon :
> Well I’ve been shooting in the dark a bit - but I also left out the sound
> and display .so’s (i.e. -x excludes the following, and the null .so's are added back):
>
> zip -r --symlinks ../deploy/$LAMBDA_NAME.zip * -x pharo-local/\* \*.sources
> \*.changes \*.st \*.log
> */libgit2.* */libSDL2* */B3DAccelerator* */JPEGRead* */vm-sound*
> */vm-display* tmp/\* */__MACOSX\*
> - zip -uR ../deploy/$LAMBDA_NAME.zip *-null.so
>
>
> And everything seems to run clean. (Would be useful to get some feedback
> from those in the know - does just leaving out .so’s incur a penalty if
> you don’t recompile the VM? Presumably something would get written to std
> error or pharodebug if it was an issue).
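One way to decide which .so's are candidates for exclusion is simply to rank them by size. A hypothetical Python sketch (not part of PharoLambda; `vm_dir` is whatever directory your VM lives in):

```python
import os

def shared_object_sizes(vm_dir):
    """Walk a VM directory and list shared objects largest-first,
    as a starting point for deciding what to exclude from the zip."""
    sizes = []
    for root, _, files in os.walk(vm_dir):
        for name in files:
            if ".so" in name:  # matches libfoo.so and libfoo.so.1 alike
                path = os.path.join(root, name)
                sizes.append((os.path.getsize(path), path))
    return sorted(sizes, reverse=True)

for size, path in shared_object_sizes("."):
    print(f"{size:>10}  {path}")
```

The big entries (e.g. libcrypto, as mentioned further down) are the ones worth investigating first.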
>
> In fact my run times on EC2 are pretty impressive:
>
> PharoLambdaMin]$ time ./pharo Pharo.image exec "Lambda processJSON: '{}'"
> {"outputSpeech":{"text":"Good Morning, it's eleven
> twenty-six","type":"PlainText"},"shouldEndSession":true,"card":{"content":"Operation
> Successful","title":"Pharo Lambda","type":"Simple"}}
>
> real 0m0.039s
> user 0m0.028s
> sys 0m0.000s
> [ec2-user@ip-172-31-44-73 PharoLambdaMin]$ time ./pharo Pharo.image exec
> "Lambda processJSON: '{}'"
> {"outputSpeech":{"text":"Good Morning, it's eleven
> twenty-six","type":"PlainText"},"shouldEndSession":true,"card":{"content":"Operation
> Successful","title":"Pharo Lambda","type":"Simple"}}
>
> real 0m0.039s
> user 0m0.020s
> sys 0m0.008s
>
>
> Not bad eh?
>
> Tim
>
> On 17 Jul 2017, at 07:00, Tim Mackinnon  wrote:
>
> Thanks again Pavel - I'll try the 6.0 step 4 or possibly step 5 with sunit
> (as many libraries don't separate out their tests).
>
> I've also tried leaving out libgit and libsdl2 .so's on my server build  and
> that seems fine too - making me wonder what others I can safely leave out?
> Sound is a candidate (but small fry in size but do you need the null
> variant?).
>
> Libcrypto is big - but I wonder if https routines would use that (and it
> sounds server processing'y so maybe best left).
>
> I was hoping to find a list explaining them somewhere - but it remains
> rather mysterious.
>
> However, at this point, I think I may have hit the sweet spot in size where
> AWS seems to load efficiently below a zip of 10mb?
>
> Tim
>
> Sent from my iPhone
>
> On 15 Jul 2017, at 09:35, Pavel Krivanek  wrote:
>
> If you want to stay with Pharo 6 image, you can try the bootstrapped version
> of the minimal image:
> https://ci.inria.fr/pharo/view/6.0-SysConf/job/Pharo-6.0-Step-04-01-ConfigurationOfMinimalPharo/
>
> -- Pavel
>
> 2017-07-15 10:33 GMT+02:00 Pavel Krivanek :
>>
>> Try the Pharo 7 metacello image (=Pharo 7 minimal image that the CI is
>> already converting to 64bit). There should be no problem with STON because
>> the whole of Pharo is loaded into it using Metacello and FileTree. The Pharo 6
>> minimal image is built differently (by shrinking) and is not as well tested.
>>
>> For the conversion of 32-bit image to 64-bit image you need a VMMaker
>> image:
>>
>> https://ci.inria.fr/pharo/job/Spur-Git-Tracker/lastSuccessfulBuild/artifact/vmmaker-image.zip
>> and then evaluate:
>> ./pharo generator.image eval "[Spur32to64BitBootstrap new bootstrapImage:
>> 'conversion.image'] on: AssertionFailure do: [ :fail | fail resumeUnchecked:
>> nil ]"
>>
>> -- Pavel
>>
>>
>>
>> 2017-07-15 10:19 GMT+02:00 Tim Mackinnon :
>>>
>>> Hi Pavel - thanks for getting me to the point where I could even have a
>>> minimal image. As I’m on the edge of my Pharo knowledge here, I’ll try and
>>> run with this as best I can.
>>>
> I’d been using the 6.0 image you suggested to me - but maybe I could use
> a 7.0 image with Pharo 6 for a while (until the VM diverges) right?
>>>
>>> The bit I haven’t quite understood however, is how the 64bit image is
>>> created - as your reference is to a 32bit version? Is the 64bit one
>>> converted from 32 in a later stage? (For AWS Lambda I need 64bit) - am I
>>> right in thinking the pipeline stage after this one is the one you sent me -
> and the travis.yml file shows me what it does? But I can’t see a travis.yml
>>> in the conversion stage so I’m not sure how it does that. (Question - how do
>>> I see what the pipelines do to answer my own questions?)
>>>
>>> I was hoping that there was a basic image that got me up to metacello
>>> baseline level to load git file tree packages/baselines  in my own repo as
>>> well baselines on the internet. The one you sent me is fairly close to that
> (it’s just missing STON in the image and seems to have an issue with
> resolving undeclared classes that get loaded in - should I file a FogBugz
> report on that?)
>>>
>>> The follow-on from a metacello imag

Re: [Pharo-users] Creating the smallest server runtime footprint

2017-07-19 Thread Sven Van Caekenberghe

> On 19 Jul 2017, at 14:55, Esteban A. Maringolo  wrote:
> 
> I don't know how "mainstream" solutions perform on AWS Lambda or EC2,
> but this seems really fast to me. 50 ms is great, assuming it bills by
> every 100ms, you still have room to perform your computation.

Yes, it seems incredibly fast. I'll have to try this myself to check, but I 
have no time now.

> Thank you for pursuing this path, it could open a new territory for
> using Pharo at big scale.
> 
> Esteban A. Maringolo
> 
> 

Re: [Pharo-users] Creating the smallest server runtime footprint

2017-07-19 Thread Stephane Ducasse
Hi Tim,

If you see libraries that do not separate out their tests, you should report
it to their authors, or to us if the mistake is ours.
You can also load the SUnit package.

Stef

On Mon, Jul 17, 2017 at 8:00 AM, Tim Mackinnon  wrote:
> Thanks again Pavel - I'll try the 6.0 step 4 or possibly step 5 with sunit
> (as many libraries don't separate out their tests).
>
> I've also tried leaving out libgit and libsdl2 .so's on my server build  and
> that seems fine too - making me wonder what others I can safely leave out?
> Sound is a candidate (but small fry in size but do you need the null
> variant?).
>
> Libcrypto is big - but I wonder if https routines would use that (and it
> sounds server processing'y so maybe best left).
>
> I was hoping to find a list explaining them somewhere - but it remains
> rather mysterious.
>
> However, at this point, I think I may have hit the sweet spot in size where
> AWS seems to load efficiently below a zip of 10mb?
>
> Tim



[Pharo-users] Can anyone answer this?

2017-07-19 Thread horrido
Miles Fidelman (at Quora) and I were having an argument about the suitability
of Smalltalk (Pharo) for large maintainable software projects. The problem
is, I've never used Smalltalk in a commercial setting, esp. with respect to
large projects. Without that experience, I am wholly unqualified to answer
his response, which follows...

*I just spent a little time looking at a couple of big projects that use(d)
Smalltalk - JWARS & the Seaside web server. And I discovered that both have
basically avoided the “live coding environment” aspects of Smalltalk.

JWARS incorporates a lot of access & configuration controls that limit who
can change which parts of the system.

Seaside seems to follow standard development processes - with a version
control system, and formal releases.

Which kind of reinforces what I see as issues with Smalltalk from a system
building & maintenance point of view:

1) When everything is a work in progress, it’s impossible to manage a
project, maintain deployed code (“what version do you have, what did you
modify? clearly that’s where the bug is”), update things (interfaces change,
updates overwrite local mods), etc.

2) The typical deployment model is to deploy a completely new virtual
machine & environment. For some things (e.g., servers), that works - and
seems to be the way of the world with containerization - but for other
things (e.g., desktop applications), deploying an entire new environment for
every patch is just a bit much.

3) Gross violation of “principle of least privilege.” Live code,
particularly multi-user code, that can be modified by its users - now that
is a surefire recipe for disaster.*



--
View this message in context: 
http://forum.world.st/Can-anyone-answer-this-tp4955861.html
Sent from the Pharo Smalltalk Users mailing list archive at Nabble.com.



Re: [Pharo-users] Creating the smallest server runtime footprint

2017-07-19 Thread Tim Mackinnon
Hi - I neglected to mention “the catch” with Lambda, alongside my results. So 
on a tiny EC2 instance you get those kinds of results (this is where I measured 
the numbers of 50ms) - however on Lambda you aren’t entirely clear what 
hardware its running on - and there are 2 aspects to consider - a cold start 
(where you are allocated a new Lambda instance, and so it has to bring in your 
deployed package) and then there appears to be a cached start - where it seems 
that one of your old Lambda environments  can be reused. On top of both of 
these states - there is an extra cost of spawning out to Pharo as its not 
supported natively.

I mention this in the Readme on the gitlab page (it’s possibly a bit subtle) - 
but I was pointed to the Sparta GoLang project (who aren’t supported natively 
either) where they have measured that the cost of spawning out to GoLang (and 
it looks fairly similar for Pharo) is 700ms. Essentially this spawning is the 
cost of loading up a NodeJS environment (presumably some Docker like image they 
have already prepared - although they don’t reveal how this is done), 
“requiring” the ‘child_process’ node module to get an exec method, and then 
shelling out to your code. (In my repo this is the PharoLambda.js file.)

Empirically I am seeing results from 500ms to 1200ms which are in line with 
Sparta (possibly better? I haven’t loaded up a Go environment to understand 
what they need to package up to deploy an app that can be exec’d and how that 
compares to our 10mb'ish footprint).

If I look at a basic NodeJS hello world app - I see .5ms to 290ms responses - 
(the min billing unit is 100ms). I got the impression from a recent serverless 
meet-up that sub-500ms is what people aim for. Which means we are at least in the 
running.

I don’t know how sensitive the ‘overhead’ load time is to the package size 
you deploy (I saw a big increase when I got my package below 10mb) or whether 
it truly is the NodeJS tax. I would love to get hold of the AWS team and 
suggest they provide another fixed solution that efficiently exec’s in C, a 
named executable with configurable parameters and the “event” parameter 
serialised in JSON (on the surface it seems overkill to use NodeJS for just 
that simple operation).

All this said the free tier gives you "1M free requests per month and 400,000 
GB-seconds of compute time per month” - so assuming we can do interesting 
things in under a second (which I’ve shown), then you can process 400,000 of 
them a month for free (which isn’t bad really).
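That free-tier arithmetic can be sketched as follows (Python; the 1M-requests and 400,000 GB-seconds figures are the ones quoted above, and the 100 ms billing rounding is ignored, so the estimate is optimistic):

```python
def free_tier_invocations(memory_gb, duration_s,
                          free_gb_seconds=400_000, free_requests=1_000_000):
    """Optimistic estimate of how many invocations fit in the Lambda free tier.

    Compute cost is billed in GB-seconds (allocated memory x duration);
    whichever of the compute or request quota is exhausted first wins.
    Billing granularity (100 ms rounding in 2017) is ignored here."""
    by_compute = int(free_gb_seconds / (memory_gb * duration_s))
    return min(by_compute, free_requests)

# ~1 GB-second per call (1 GB memory, 1 s duration) -> 400,000 free calls
print(free_tier_invocations(memory_gb=1.0, duration_s=1.0))
```

At smaller memory allocations the 1M-request cap becomes the binding limit instead of the GB-seconds.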

Tim

> On 19 Jul 2017, at 13:59, Sven Van Caekenberghe  wrote:
> 
> 
>> On 19 Jul 2017, at 14:55, Esteban A. Maringolo  wrote:
>> 
>> I don't know how "mainstream" solutions perform on AWS Lambda or EC2,
>> but this seems really fast to me. 50 ms is great, assuming it bills by
>> every 100ms, you still have room to perform your computation.
> 
> Yes, it seems incredibly fast. I'll have to try this myself to check, but I 
> have no time now.
> 
>> Thank you for pursuing this path, it could open a new territory for
>> using Pharo at big scale.
>> 
>> Esteban A. Maringolo
>> 
>> 
>> 2017-07-17 8:32 GMT-03:00 Tim Mackinnon :
>>> Well I’ve been shooting in the dark a bit - but I also left out the sound
>>> and display so’s (e.g. -x to exclude the following, and add back the null so's):
>>> 
>>> zip -r --symlinks ../deploy/$LAMBDA_NAME.zip * -x pharo-local/\* \
>>>   \*.sources \*.changes \*.st \*.log */libgit2.* */libSDL2* \
>>>   */B3DAccelerator* */JPEGRead* */vm-sound* */vm-display* \
>>>   tmp/\* */__MACOSX\*
>>> zip -uR ../deploy/$LAMBDA_NAME.zip *-null.so
>>> 
>>> 
>>> And everything seems to run clean. (Would be useful to get some feedback
>>> from those in the know - does just leaving out so’s incur a penalty if
>>> you don’t recompile the VM? Presumably something would get written to std
>>> error or PharoDebug.log if it was an issue).
>>> 
>>> In fact my run times on EC2 are pretty impressive:
>>> 
>>> PharoLambdaMin]$ time ./pharo Pharo.image exec "Lambda processJSON: '{}'"
>>> {"outputSpeech":{"text":"Good Morning, it's eleven
>>> twenty-six","type":"PlainText"},"shouldEndSession":true,"card":{"content":"Operation
>>> Successful","title":"Pharo Lambda","type":"Simple"}}
>>> 
>>> real 0m0.039s
>>> user 0m0.028s
>>> sys 0m0.000s
>>> [ec2-user@ip-172-31-44-73 PharoLambdaMin]$ time ./pharo Pharo.image exec
>>> "Lambda processJSON: '{}'"
>>> {"outputSpeech":{"text":"Good Morning, it's eleven
>>> twenty-six","type":"PlainText"},"shouldEndSession":true,"card":{"content":"Operation
>>> Successful","title":"Pharo Lambda","type":"Simple"}}
>>> 
>>> real 0m0.039s
>>> user 0m0.020s
>>> sys 0m0.008s
>>> 
>>> 
>>> Not bad eh?
>>> 
>>> Tim
>>> 
>>> On 17 Jul 2017, at 07:00, Tim Mackinnon  wrote:
>>> 
>>> Thanks again Pavel - I'll try the 6.0 step 4 or possibly step 5 with sunit
>>> (as many libraries don't separate out their tests).
>>> 
>>> I've also tried leaving out libgit and libsdl2 .so's on my server build  and
>>> that se

[Pharo-users] Why does debugger browse open on Object?

2017-07-19 Thread Tim Mackinnon
Hi - I've always meant to ask this question as it often catches me out.

When you get a typical debugger on doesNotUnderstand: I find that typically 
I've muddled up a method name - so I want to quickly browse the receiver of my 
mistake and understand what methods it actually has that I can use.

It seems that the browse button is exactly that, and the debugger helpfully 
shows MyClass(Object)>>doesNotUnderstand:

But browse actually opens on Object>>doesNotUnderstand: which, while 
technically correct, is not really that helpful.

Why isn't there an easy way to open a browser on the real receiver of the 
message? Am I missing an obvious button, or is it really use Spotter and retype 
the name of the class?

Interestingly - Create does do the useful thing, creating the method where you want it.

It seems so obvious, and yet most Smalltalks seem to adopt this same approach?

I guess I can add it - but surprised I have to really.

Tim

Sent from my iPhone



Re: [Pharo-users] Creating the smallest server runtime footprint

2017-07-19 Thread Stephane Ducasse
I'm really curious to see how these numbers will change with Sista,
because Pharo will be able to start hot from a JIT point of view. You
will be able to run your app, save it, and ship it hot with the JIT
optimisations already there.

On Wed, Jul 19, 2017 at 6:20 PM, Tim Mackinnon  wrote:
> Hi - I neglected to mentioned “the catch” with Lambda, next to my results. So 
> on a tiny EC2 instance you get those kinds of results (this is where I 
> measured the numbers of 50ms) - however on Lambda you aren’t entirely clear 
> what hardware its running on - and there are 2 aspects to consider - a cold 
> start (where you are allocated a new Lambda instance, and so it has to bring 
> in your deployed package) and then there appears to be a cached start - where 
> it seems that one of your old Lambda environments  can be reused. On top of 
> both of these states - there is an extra cost of spawning out to Pharo as its 
> not supported natively.
>
> I mention this in the Readme on the gitlab page (it’s possibly a bit subtle) 
> - but I was pointed to the Sparta GoLang project (who aren’t supported 
> natively either) where they have measured that that cost of spawning out to 
> GoLang (and it looks fairly similar for Pharo) is 700ms. Essentially this 
> spawning is the cost of loading up a NodeJS environment (presumably some 
> Docker like image they have already prepared - although they don’t reveal how 
> this is done), “requiring” the ‘child_process’ node module to get an exec 
> method, and then your code to shell out. (In my repo - this is the 
> PharoLambda.js file).
>
> Empirically I am seeing results from 500ms to 1200ms which are in line with 
> Sparta (possibly better? I haven’t loaded up a Go environment to understand 
> what they need to package up to deploy an app that can be exec’d and how that 
> compares to our 10mb'ish footprint).
>
> If I look at a basic NodeJS hello world app - I see .5ms to 290ms responses - 
> (the min billing unit is 100ms). I got the impression for a recent serverless 
> meet-up that sub 500 is what people aim for. Which means we are at least in 
> the running.
>
> I don’t know how sensitive the ‘overhead’ load time is due to the package 
> size you deploy (I saw a big increase when I got my package below 10mb) or 
> whether it truly is the NodeJS tax. I would love to get hold of the AWS team 
> and suggest they provide another fixed solution that efficiently exec’s in C, 
> a named executable with configurable parameters and the “event” parameter 
> serialised in JSON (on the surface it seems overkill to use NodeJS for just 
> that simple operation).
>
> All this said the free tier gives you "1M free requests per month and 400,000 
> GB-seconds of compute time per month” - so assuming we can do interesting 
> things in under a second (which I’ve shown), then you can process 400,000 of 
> them a month for free (which isn’t bad really).
>
> Tim
>
>> On 19 Jul 2017, at 13:59, Sven Van Caekenberghe  wrote:
>>
>>
>>> On 19 Jul 2017, at 14:55, Esteban A. Maringolo  wrote:
>>>
>>> I don't know how "mainstream" solutions perform on AWS Lambda or EC2,
>>> but this seems really fast to me. 50 ms is great, assuming it bills by
>>> every 100ms, you still have room to perform your computation.
>>
>> Yes, it seems incredibly fast. I'll have to try this myself to check, but I 
>> have no time now.
>>
>>> Thank you for pursuing this path, it could open a new territory for
>>> using Pharo at big scale.
>>>
>>> Esteban A. Maringolo
>>>
>>>
>>> 2017-07-17 8:32 GMT-03:00 Tim Mackinnon :
 Well I’ve been shooting in the dark a bit - but I also left out the sound
 and display so’s (e.g. -x execlude the following and add back the null so's

 zip -r --symlinks ../deploy/$LAMBDA_NAME.zip * -x pharo-local/\* \*.sources
 \*.changes \*.st \*.log
   */libgit2.* */libSDL2* */B3DAccelerator* */JPEGRead* */vm-sound*
 */vm-display* tmp/\* */__MACOSX\*
 - zip -uR ../deploy/$LAMBDA_NAME.zip *-null.so


 And everything seems to run clean. (Would be useful to get some feedback
 from those in the know - does just leaving out so’s incurred a penalty if
 you don’t recompile the VM? Presumably something would get written to std
 error or pharodebug if it was an issue).

 In fact my run times on EC2 are pretty impressive:

 PharoLambdaMin]$ time ./pharo Pharo.image exec "Lambda processJSON: '{}'"
 {"outputSpeech":{"text":"Good Morning, it's eleven
 twenty-six","type":"PlainText"},"shouldEndSession":true,"card":{"content":"Operation
 Successful","title":"Pharo Lambda","type":"Simple"}}

 real 0m0.039s
 user 0m0.028s
 sys 0m0.000s
 [ec2-user@ip-172-31-44-73 PharoLambdaMin]$ time ./pharo Pharo.image exec
 "Lambda processJSON: '{}'"
 {"outputSpeech":{"text":"Good Morning, it's eleven
 twenty-six","type":"PlainText"},"shouldEndSession":true,"card":{"content":"Operation
 Successful","title":"Pharo Lambda","typ

Re: [Pharo-users] Why does debugger browse open on Object?

2017-07-19 Thread Esteban A. Maringolo
Hi Tim, all

2017-07-19 14:34 GMT-03:00 Tim Mackinnon :
> Hi - I've always meant to ask this question as it often catches me out.
>
> When you get a typical debugger on doesNotUnderstand: I find that typically 
> I've muddled up a method name - so I want to quickly browse the receiver of 
> my mistake and understand what methods it actually has that I can use.
>
> It seems that the browse button is exactly that, and the debugger helpfully 
> shows MyClass(Object) doesNotUnderstand
>
> But browse actually opens on Object >>doesNotUnderstand: which while 
> technically correct is not really that helpful.

Yesterday I was going to ask about this behavior, which has been
nagging me for some time. I feel better reading that it isn't only me
who's sensitive to this friction.

I don't understand why it is like this, but I assume it is because of
a common behavior of browsing the receiver object in the stack frame
without considering particular cases like Object>>doesNotUnderstand:

I'll be happy if you find a way to implement it, as a System Option or
via a shortcut modifier.

Regards!

Esteban A. Maringolo



Re: [Pharo-users] Can anyone answer this?

2017-07-19 Thread jWarrior
Well .

I worked on JWARS for 13 years until the Navy killed it at the end of 2010,
and I have never heard of Miles Fidelman. JWARS was run almost exclusively
in SCIFs (secure facilities), and most of the users did not have access to
the source code. So I do not know where Miles gets his information.

JWARS had extensive version control. All new versions of the main config
maps were tagged with extensive information about what was included. 

I agree with what Richard says below, "I suspect Miles doesn't really
understand what a “live coding environment” really means.", although I would
phrase it less gently.



--
View this message in context: 
http://forum.world.st/Can-anyone-answer-this-tp4955861p4955916.html
Sent from the Pharo Smalltalk Users mailing list archive at Nabble.com.



Re: [Pharo-users] Can anyone answer this?

2017-07-19 Thread Stephane Ducasse
I do not see why live coding programming languages could not manage
their source code in a version control system.
To me, people opposing live coding to released/versioned software are
just **PLAIN** idiots.
Period.
You can have a live reflective system and still want a fully
reproducible build system.
This is what we do with Pharo. We manage everything with a version
control system, and still Pharo is fully dynamic. And no, not
everybody can commit and change Pharo.

About large and complex systems: I heard that an insurance company has
a 30-million-line Smalltalk application, and this system is live and
also versioned! Thankfully.
So do not lose your energy with idiots.

Stef




On Wed, Jul 19, 2017 at 8:18 PM, jWarrior  wrote:
> Well .
>
> I worked on JWARS for 13 years until the Navy killed it at the end of 2010,
> and I have never heard of Miles Fidelman. JWARS was run almost exclusively
> in SCIFs (secure facilities), and most of the users did not have access to
> the source code. So I do not know where Miles gets his information.
>
> JWARS had extensive version control. All new versions of the main config
> maps were tagged with extensive information about what was included.
>
> I agree with what Richard says below, "I suspect Miles doesn't really
> understand what a “live coding environment” really means.", although I would
> phrase it less gently.
>
>
>
> --
> View this message in context: 
> http://forum.world.st/Can-anyone-answer-this-tp4955861p4955916.html
> Sent from the Pharo Smalltalk Users mailing list archive at Nabble.com.
>



Re: [Pharo-users] Why does debugger browse open on Object?

2017-07-19 Thread Esteban Lorenzano
Hi,

> On 19 Jul 2017, at 19:56, Esteban A. Maringolo  wrote:
> 
> Hi Tim, all
> 
> 2017-07-19 14:34 GMT-03:00 Tim Mackinnon :
>> Hi - I've always meant to ask this question as it often catches me out.
>> 
>> When you get a typical debugger on doesNotUnderstand: I find that typically 
>> I've muddled up a method name - so I want to quickly browse the receiver of 
>> my mistake and understand what methods it actually has that I can use.
>> 
>> It seems that the browse button is exactly that, and the debugger helpfully 
>> shows MyClass(Object) doesNotUnderstand
>> 
>> But browse actually opens on Object >>doesNotUnderstand: which while 
>> technically correct is not really that helpful.
> 
> Yesterday I was going to ask about this behavior, which has been
> nagging me for some time. I feel better reading it isn't only me who's
> sensitive to this friction.
> 
> I don't understand why it is like this, but I assume it is because of
> a common behavior of browsing the receiver object in the stack frame
> without considering particular cases like Object>>doesNotUnderstand:
> 
> I'll be happy if you find a way to implement it, as a System Option or
> via a shortcut modifier.

In fact, some years ago Camillo Bruni implemented a solution for this (it was 
opening the debugger in the place where the DNU originated) but I think it 
was deactivated because people were not so happy. 
Maybe now is the time to retry ;)

Esteban

> 
> Regards!
> 
> Esteban A. Maringolo
> 




Re: [Pharo-users] Can anyone answer this?

2017-07-19 Thread jWarrior
JWARS has 1.2 million lines of code. The longest method did Object to
Relational database mapping.



--
View this message in context: 
http://forum.world.st/Can-anyone-answer-this-tp4955861p4955922.html
Sent from the Pharo Smalltalk Users mailing list archive at Nabble.com.



Re: [Pharo-users] Can anyone answer this?

2017-07-19 Thread Bob Hartwig
On Wed, Jul 19, 2017 at 9:47 AM, horrido  wrote:

>
> 1) When everything is a work in progress, it’s impossible to manage a
> project, maintain deployed code (“what version do you have, what did you
> modify? clearly that’s where the bug is”), update things (interfaces
> change,
> updates overwrite local mods), etc.
>
>

I've been working on a big production Smalltalk application (IBM Smalltalk
/ VisualAge) for a long time, and the approach we take is to use the
development environment for R&D, and with select beta customers, and then
periodically package a runtime when we want to release to the customer
base.  It gives the best of both worlds: a live development environment
with the myriad debugging and productivity advantages that that brings, and
a more conventional deployment model for the customers.



> 2) The typical deployment model is to deploy a completely new virtual
> machine & environment. For some things (e.g., servers), that works - and
> seems to be the way of the world with containerization - but for other
> things (e.g., desktop applications), deploying an entire new environment
> for
> every patch is just a bit much.
>
>
Don't know why you'd need to deploy a completely new VM and environment
with each release.  When we give a customer an update, we give them a new
packaged runtime image, which is a small fraction of the size of the VM and
its supporting files.  When we had a Smalltalk "fat client" (now our UI is
RESTful web services and XHR-heavy web app), deploying the runtime image
for each release worked well.


Re: [Pharo-users] Why does debugger browse open on Object?

2017-07-19 Thread Tim Mackinnon
It turns out it's trivial - just create a subclass of DebugAction (the 
existing Create action has all the pieces you need).

Not sure how you get an icon on the button (create doesn't either).

Maybe I can propose it for 7.

Possibly call it "browse receiver"? (A bit long though)

Tim

Sent from my iPhone

> On 19 Jul 2017, at 19:55, Esteban Lorenzano  wrote:
> 
> Hi,
> 
>> On 19 Jul 2017, at 19:56, Esteban A. Maringolo  wrote:
>> 
>> Hi Tim, all
>> 
>> 2017-07-19 14:34 GMT-03:00 Tim Mackinnon :
>>> Hi - I've always meant to ask this question as it often catches me out.
>>> 
>>> When you get a typical debugger on doesNotUnderstand: I find that typically 
>>> I've muddled up a method name - so I want to quickly browse the receiver of 
>>> my mistake and understand what methods it actually has that I can use.
>>> 
>>> It seems that the browse button is exactly that, and the debugger helpfully 
>>> shows MyClass(Object) doesNotUnderstand
>>> 
>>> But browse actually opens on Object >>doesNotUnderstand: which while 
>>> technically correct is not really that helpful.
>> 
>> Yesterday I was going to ask about this behavior, which has been
>> nagging me for some time. I feel better reading it isn't only me who's
>> sensitive to this friction.
>> 
>> I don't understand why it is like this, but I assume it is because of
>> a common behavior of browsing the receiver object in the stack frame
>> without considering particular cases like Object>>doesNotUnderstand:
>> 
>> I'll be happy if you find a way to implement it, as a System Option or
>> via a shortcut modifier.
> 
> In fact, some years ago Camillo Bruni implemented a solution for this (it was 
> opening the debugger in the place where the DNU originated) but I think 
> it was deactivated because people were not so happy. 
> Maybe now is time to retry ;)
> 
> Esteban
> 
>> 
>> Regards!
>> 
>> Esteban A. Maringolo
>> 
> 
>