[Pharo-dev] Re: Look for software for virtual outing

2021-01-26 Thread Eliot Miranda
Hi Koh,

On Tue, Jan 26, 2021 at 1:22 PM askoh  wrote:

> Hi:
>
> My team at work has been asked to organize a "Virtual Outing" for
> socializing or camaraderie. I am thinking of using Smalltalk virtual reality
> software for such an outing.
>
> Would any of Qwaq, Terf, Croquet be viable VR platforms for a group of less
> than 50 people?
>

Absolutely.  We're running trials with as many as 85 participants right
now. So any number from 1 to 85 is definitely possible, and we haven't hit
the limit on the number of participants yet.

However, Terf is built by 3D ICC above Squeak.  I work full time for 3D
ICC.  We don't use Pharo.  Please get in touch with Ron Teitelbaum to
discuss usage.  I've cc'ed him (Hi Ron).

> Thanks,
> Aik-Siong Koh
>
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html
>

_,,,^..^,,,_
best, Eliot


[Pharo-dev] Re: Pharo 8 Image blows up after restart

2020-12-08 Thread Eliot Miranda
Hi Sabine,

> On Dec 8, 2020, at 10:49 AM, Sabine Manaa  wrote:
> 
> Hi,
> 
> I have the following problem:
> 
> sometimes, after restarting my Pharo 8 image on mac, it is not responsive
> and it blows up the memory.
> 
> I tried with  commandline handler  
> eval "self halt"
> and with 
> Smalltalk addToStartUpList: 
> 
> but both are too late - the image starts and blows up and is not responsive. 
> 
> I can not interrupt it and it grows to several GB of memory (e.g. 10 GB after
> 20 seconds!) and I have to kill it.
> 
> Anyone having an idea what I can do to find the reason for this? 

Run the system under a low-level debugger (lldb, gdb).
Put a breakpoint on the function 
sqAllocateMemorySegmentOfSizeAboveAllocatedSizeInto, which is called when the 
heap must grow to allocate new objects.
When the system hits the breakpoint for the second or third time call the 
function 
printCallStack()
and be patient; it will print a lot of stack.
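
A minimal sketch of such a session on macOS (the VM path, image name and lldb
prompts are illustrative; only the breakpoint function and printCallStack()
come from the advice above):

$ lldb ./Pharo.app/Contents/MacOS/Pharo
(lldb) breakpoint set -n sqAllocateMemorySegmentOfSizeAboveAllocatedSizeInto
(lldb) run Pharo.image
... when the breakpoint is hit for the second or third time ...
(lldb) call (void)printCallStack()
(lldb) continue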

There is also a command-line argument to limit the maximum size of the heap. So 
an alternative might be to restrict the heap to a few megabytes above the image 
size and see if you get a crash.dmp file when the VM exits because the system 
has run out of memory.

HTH

> 
> Sabine
> 
> 
> 
> 
> 
> 
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html


Re: [Pharo-dev] is there a way to know when a GC is happening?

2020-09-09 Thread Eliot Miranda
On Wed, Sep 9, 2020 at 9:49 PM Esteban Maringolo 
wrote:

> On Wed, Sep 9, 2020 at 11:58 PM Eliot Miranda 
> wrote:
>
> > In VisualWorks, for example, a WeakArray is primed with an Object
> instance, and this gets collected every scavenge.
> > So the WeakArray is notified.  From this VW builds a notification system.
>
> I guess this is how VisualWorks changes its cursor from a regular
> pointer to the, sometimes dreadful, "GC" icon during garbage
> collection.
> Isn't it?
>

I think so :-)
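
A minimal polling sketch of the idea (not the actual VW mechanism, which is
notification-based): prime a WeakArray with an otherwise unreferenced object;
the next scavenge nils the slot, which a background process can observe.

| marker |
marker := WeakArray new: 1.
marker at: 1 put: Object new.   "only the weak array references this object"
[ [ (marker at: 1) isNil ]
      whileFalse: [ (Delay forMilliseconds: 10) wait ].
  Transcript show: 'a scavenge reclaimed the marker object'; cr ]
      forkAt: Processor userBackgroundPriority.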


>
> Regards!
>
> Esteban A. Maringolo
>

and to you, Esteban!

_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Project of Interest => Jekyll + Dynamic processing integration + Git(hubs)Pages => pharo in the middle

2020-05-26 Thread Eliot Miranda


> On May 26, 2020, at 12:54 PM, Tim Mackinnon  wrote:
> 
> 
> 
> Hi - a bit late to reply on this one, but I did try Jekyll years ago; it was 
> ok but over time frustrating to use and difficult to make the pipeline 
> understandable ...
> 
> I looked at Hugo and a few others but ended up going with Metalsmith (a JS 
> static generator). I liked its pluggable pipeline model, but cursed the 
> state of JS tools (a few years ago).
> 
> I’ve been meaning for ages to reimplement it in Smalltalk with a nice OO 
> composite pipeline model and an easy way to debug and visualise what is going 
> on when getting your template right.

<3. I hope you can build a team to do just this.  This could be a solid 
business if done right, with open source and a sensible revenue model.  Good 
luck


> 
> Combine this with the new headless image and it should easily plug into 
> netlify .
> 
> Tim
> 
>>> On 23 May 2020, at 21:41, Cédrick Béler  wrote:
>>> 
>> 
>> Hi Esteban,
>> 
>>> This comes really on time for me. I decided to rewrite two small sites I 
>>> have using Jekyll, and as I read all their tutorials I thought even of having 
>>> a Jekyllst variation, that uses the Jekyll directories and other 
>>> conventions, but uses Smalltalk as its engine. Of course this is far 
>>> fetched given my real availability these days, which is lower than usual.
>> 
>> Cool anyway if that’s something that interests you too. What do you think of 
>> https://gohugo.io ?
>> 
>> Themes are pretty cool https://themes.gohugo.io 
>> 
>>> 
>>> However I'd like to be part of conversations around this, and eventually 
>>> contribute to it, because I already started playing with Jekyll (and Gatsby 
>>> as well).
>> 
>> Perfect :)
>> 
>> This is not urgent but I need to put 2 websites online for September (simple 
>> ones). For now, I’m trying around. Summer will be perfect for me to work on 
>> such project.
>> 
>> Cheers,
>> 
>> Cédrick
>> 
>>> 
>>> Regards,
>>> 
>>> 
>>> Esteban A. Maringolo
>>> 
>>> 
 On Sat, May 23, 2020 at 10:15 AM Cédrick Béler  wrote:
 Hi there, 
 
 This post is just to talk about one side project I’m exploring and have been 
 interested in for a long time. I think it may interest other people here.
 
 I’d like to have a powerful (static-site-based) web site so hosting is really 
 cheap (even free) and hassle free. I’ve had my own server for years; it is 
 cheap and simple but, of course, it needs some maintenance (Linux updates, 
 nginx scripts, …) even if the tools are the simplest I’ve found.
 
 Recently, thanks to student projects ;-), I found some time to learn what I 
 find is a wonderful solution. This solution is to use GitHub (DSCM), GitHub 
 Pages and Jekyll (a Ruby static site generator that is natively 
 integrated) all together.
 https://jekyllrb.com 
 
 The beauty is that you can edit the site straight on GitHub. We get the 
 power of a version control system and hosting for free… 
 It literally is a CMS, and the cheapest and most reliable one that I know of 
 (Grav might be another option).
 
 Of course, there are some « dynamic » content possibilities too (otherwise 
 GitHub Pages is enough):
 - blog posts are natively generated through new files according to a naming 
 convention;
 - there are plugins too (but you have to watch for compatibility on 
 GitHub).
 
 Dealing with forms and comments is possible via:
 - solutions that are hosted on a third party, like Disqus or 
 Formspree, … (that’s a NO GO for me);
 - web service integration that you can host (note that Formspree is on 
 GitHub too: https://github.com/formspree/formspree).
 
 This last point is where I’d like Pharo (Zinc, Iceberg) to be integrated. 
 Again, we could imagine a web service system based on Zinc. I could manage 
 form submissions that way and everything else I’d like, but it may end up 
 complex. Do I need a database? Do I need to store information and 
 therefore manage the underlying architecture? If it crashes, I want only the 
 endpoint to be unavailable while the whole site keeps working.
 
 An in-between, elegant solution is to use git for what it’s good at 
 (versioning collaboratively through PRs, and also reliable hosting on 
 classic platforms). 
 
 The idea is to use the PR mechanisms to submit stuff like blog comments 
 (note that you get a free moderation system). 
 This is actually not limited to comments but covers all kinds of possible 
 interaction…
 
 This way is (to me) better in terms of infrastructure management. Such a 
 service also needs to be available (and maintained), but this is very 
 minimalist machinery (handling POST requests only - no real content 
 management, as that is deferred to GitHub). And again, it is fail-safe 
 (the last version of the generated pages keeps being served).
 
 Staticman (https://staticman.net) is a nice node applicatio

Re: [Pharo-dev] Squeak and Pharo speed differences

2020-05-18 Thread Eliot Miranda
Hi Shaping,

> On May 18, 2020, at 6:52 PM, Shaping  wrote:
> 
> 
> 1.  Double-click text selection in both Squeak and Pharo shows a 75-100 ms 
> latency (eye-balled, estimated) between end of double click (button up on 
> second click) and time of highlighting of selected text.  It could be as low 
> as 60 ms, but I doubt it, and that’s still too long.  I can’t track the 
> latency in VW 8.3.2.  It’s too short, probably 30 ms or less, and is under my 
> noise floor.  Notepad latencies are even lower.  The difference between VW 
> and Notepad is not enough to complain about.  Neither is noticeable in 
> passing.  The difference between VW and Pharo/Squeak latencies is a little 
> painful/distracting.  It’s very much in your face, and you are keenly aware 
> that you are waiting for something to happen before you can resume your 
> thoughts about the code.
>  
> 2.  Stepping in the Pharo debugger is slow (Squeak is fine).  The latencies 
> between the step-click event and selection of the next evaluable is a solid 
> 100 ms (again estimated).  Feels more like 150-175 ms much of the time.  This 
> is actually hard to work with. 
>  
> Neither of these unequivocally demonstrates VM performance.
>  
> I know.  This comment is not about the VM.  VM performance is another issue.  
> This comment is only about general usability as a function of the latencies, 
> whatever the cause. 
>  
>  Both are more likely to derive from overall GUI architecture.
>  
> Yup.
>  
> In particular, VW’s display architecture is a direct stimulus-response i/o 
> model where input results in a transformation producing immediate rendering, 
> whereas Morphic is an animation architecture where input results in a new 
> state but no rendering.  The Morphic GUI is rendered separately on every 
> “step” of the system.
>  
> Okay.
>  
>  Hence graphics output necessarily lags input on Morphic. So these speed 
> differences have nothing to do with vm performance and everything to do with 
> GUI architecture.
>  
> Both Squeak and Pharo show the same delay for text selection latency.   The 
> architecture difference is not likely causing that. 

Given that both Pharo and Squeak use Morphic and hence both have the same 
render-on-step architecture, isn’t the fact that they show the same performance 
issue evidence that points to precisely this being the cause?

>  How do we index or look up the word rectangle to render?   I’m thinking that is 
> more likely the cause.  Is a map created at method compile time and updated  
> after text is moved during edits?

My understanding is that damage rectangles are retrieved, combined to produce a 
smaller (non-overlapping?) set, and that the entire morph tree is asked to 
render within these damage rectangles.  You can read the code for yourself.

>  Where is VisualWorks significantly faster than either Squeak or Pharo?  
>  
> VW 8.3.2 faster:
>  
> 1. Text selection. 
>  
> 2. Repeat-key rate in VW is smoother (not perfect; I see a few pauses).  
> Pharo’s repeat-key rate is the same or a little slower, there are more 
> pauses, and distribution of those pause-times is slightly wider for Pharo 9, 
> as if event flow isn’t as smooth as it could be (because text/cursor 
> rendering is a not efficient?).  This is a minor issue, not a practical 
> problem.  I did the test in a workspace in both cases.
>  
>  
> Pharo 9 same or faster:
>  
> Everything else in the GUI, like window openings/closings, menu 
> openings/closings work at nearly the same speed, or Pharo 9 is faster.
>  
> Opening a system browser in VW 8.3.2 and Pharo 9 takes about the same time.  
> If you scrutinize, you can see that Pharo system browser open times are often 
> about 2/3 to 4/5 of the VW times.  This action is never faster in VW. 
>  
> Popup menus in Pharo 9 are noticeably faster than those in VW 8.3.2.   
> Instant--delightful.
>  
>  
> Specifically which VisualWorks VM or lower level facilities are much faster 
> than the Cog VM?  Do you have benchmarks?
>  
> No, I don’t, but I find the subject interesting, and would like to pursue it. 
>  I’m trying to get some pressing work done in VW (as I contemplate jumping 
> ship to Pharo/Squeak).  It’s not a good time for excursions, but here I am 
> playing with Squeak/Pharo, anyway.  I want to dig deeper at some future date.
>  
> Do you have a specific procedure you like to use when benchmarking the VW VM? 
>  
> Any VM.  Express the benchmark as a block.  If the benchmark is not trying to 
> measure JIT and/or GC overhead then before the block is run make sure to put 
> the vm in some initialized state wrt JITting and/or GC, eg by voiding the JIT 
> code cache,
>  
> How is the JIT code cache cleared?

Dialect dependent.  In Squeak/Pharo/Cuis IIRC Smalltalk voidCogVMState.  Can’t 
remember how it’s done in VW.

> and/or forcing a scavenge or a global GC.  Then run the block twice, 
> reporting its second iteration, to ensure all code is JITted.
>  
> Okay, so the above proced

Re: [Pharo-dev] Squeak and Pharo speed differences

2020-05-18 Thread Eliot Miranda
Hi Shaping,

> On May 16, 2020, at 4:33 AM, Shaping  wrote:
> 
> 
> Hi Eliot.
>  
>  
> Generally, comparing VisualWorks to either Squeak or Pharo or both, what are 
> the most pressing speed problems?
>  
> 1.  Double-click text selection in both Squeak and Pharo shows a 75-100 ms 
> latency (eye-balled, estimated) between end of double click (button up on 
> second click) and time of highlighting of selected text.  It could be as low 
> as 60 ms, but I doubt it, and that’s still too long.  I can’t track the 
> latency in VW 8.3.2.  It’s too short, probably 30 ms or less, and is under my 
> noise floor.  Notepad latencies are even lower.  The difference between VW 
> and Notepad is not enough to complain about.  Neither is noticeable in 
> passing.  The difference between VW and Pharo/Squeak latencies is a little 
> painful/distracting.  It’s very much in your face, and you are keenly aware 
> that you are waiting for something to happen before you can resume your 
> thoughts about the code.
>  
> 2.  Stepping in the Pharo debugger is slow (Squeak is fine).  The latencies 
> between the step-click event and selection of the next evaluable is a solid 
> 100 ms (again estimated).  Feels more like 150-175 ms much of the time.  This 
> is actually hard to work with. 

Neither of these unequivocally demonstrates VM performance.  Both are more 
likely to derive from overall GUI architecture.  In particular, VW’s display 
architecture is a direct stimulus-response i/o model where input results in a 
transformation producing immediate rendering, whereas Morphic is an animation 
architecture where input results in a new state but no rendering.  The Morphic 
GUI is rendered separately on every “step” of the system.  Hence graphics 
output necessarily lags input on Morphic. So these speed differences have 
nothing to do with vm performance and everything to do with GUI architecture.

>  
>  Where is VisualWorks significantly faster than either Squeak or Pharo?  
>  
> VW 8.3.2 faster:
>  
> 1. Text selection. 
>  
> 2. Repeat-key rate in VW is smoother (not perfect; I see a few pauses).  
> Pharo’s repeat-key rate is the same or a little slower, there are more pauses, 
> and distribution of those pause-times is slightly wider for Pharo 9, as if 
> event flow isn’t as smooth as it could be (because text/cursor rendering is 
> not efficient?).  This is a minor issue, not a practical problem.  I did the 
> test in a workspace in both cases.
>  
>  
> Pharo 9 same or faster:
>  
> Everything else in the GUI, like window openings/closings, menu 
> openings/closings work at nearly the same speed, or Pharo 9 is faster.
>  
> Opening a system browser in VW 8.3.2 and Pharo 9 takes about the same time.  
> If you scrutinize, you can see that Pharo system browser open times are often 
> about 2/3 to 4/5 of the VW times.  This action is never faster in VW. 
>  
> Popup menus in Pharo 9 are noticeably faster than those in VW 8.3.2.   
> Instant--delightful.
>  
>  
> Specifically which VisualWorks VM or lower level facilities are much faster 
> than the Cog VM?  Do you have benchmarks?
>  
> No, I don’t, but I find the subject interesting, and would like to pursue it.  
> I’m trying to get some pressing work done in VW (as I contemplate jumping 
> ship to Pharo/Squeak).  It’s not a good time for excursions, but here I am 
> playing with Squeak/Pharo, anyway.  I want to dig deeper at some future date.
>  
> Do you have a specific procedure you like to use when benchmarking the VW VM? 

Any VM.  Express the benchmark as a block.  If the benchmark is not trying to 
measure JIT and/or GC overhead then before the block is run make sure to put 
the vm in some initialized state wrt JITting and/or GC, eg by voiding the JIT 
code cache, and/or forcing a scavenge or a global GC.  Then run the block 
twice, reporting its second iteration, to ensure all code is JITted.

If attempting to measure JIT and/or GC overhead then do the same wrt getting 
the vm to some baseline consistent initial state and then ensure, through the 
relevant introspection primitives, that after the benchmark has run the events 
desired to be benchmarked have actually taken place.

If a microbenchmark then ensure that eg loop, block invocation, arithmetic, 
overheads are either minimised wrt the code being benchmarked or subtracted 
from the code being benchmarked.

i.e. make sure the benchmark is repeatable (benchmark from an initialized 
state).  Make sure the benchmark measures what is intended to be benchmarked 
and not some overhead.
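
A minimal sketch of that procedure in Squeak/Pharo (Smalltalk voidCogVMState is
the cache-voiding message mentioned earlier in this thread; the block itself is
just a placeholder benchmark):

| benchmark |
benchmark := [ 1 to: 1000000 do: [ :i | i sqrt ] ].
Smalltalk voidCogVMState.            "start from an empty JIT code cache"
Smalltalk garbageCollect.            "start from a freshly collected heap"
benchmark value.                     "first run pays the JIT compilation cost"
Time millisecondsToRun: benchmark    "report the second run"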

>  
>  
> Shaping

Cheers, Eliot,
_,,,^..^,,,_ (phone)

Re: [Pharo-dev] Squeak and Pharo speed differences

2020-05-15 Thread Eliot Miranda
Hi Ben,

> On May 15, 2020, at 10:33 AM, Ben Coman  wrote:
> 
> 
>> On Fri, 15 May 2020 at 14:09, Shaping  wrote:
> 
>> Why can’t the OSVM be a single, unforked, maxed-out VM with all the best and 
>> fastest features working in Squeak and Pharo?   Why did the split happen? 
>> 
> 
> In very general terms, the fork was due to there being Group A wanting to go 
> in one direction 
> and Group B wanting to go in a different direction. i.e
> B says "We want to do X".
> A says "We don't want to do X." 
> B says "We really want to do X."
> A says "Definitely no."
> B says "We really want to do X and actually we're doing it over here."
> 
> In essence, Squeak considered backward compatibility of prime importance, 
> including the code of some applications that had become entangled in the main 
> code base.  Pharo wanted to "clean the code" by disentangling and stripping 
> those parts.  They also wanted to move to a reproducible-build system where 
> each change "bootstrapped" a nightly image from an empty file, whereas Squeak 
> continues to use a "continuous evolution" model. And there are more reasons I 
> am probably not aware of.
> Here is the Pharo Vision document circa 2012 which inspired me  
> https://hal.inria.fr/hal-01879346/document
> 
> 
>> It looks like a bad use of energy in a community that is small and needs to 
>> use its human resources efficiently.
>> 
>  
> Trying to go one way and dealing with continual pushback and conflict around 
> that is also bad energy.
> 
> 
>>  I want to help, but need to port first from VW, and I’m trying to choose 
>> Squeak or Pharo.  Both have speed problems.  Squeak has fewer, but Pharo 
>> could be much faster with broad use of Spec2.
>> 
>>  
>> 
>> Would reintegrating Squeak and Pharo development make more sense?
>> 
> 
> I think that is not likely.  Both continue to have different goals.  And a 
> significant area where they are likely to continue to diverge is the 
> graphics.  Squeak is likely(?) to stay with Morphic for a long while, whereas 
> Pharo intends to dump Morphic.
> This is one of the reasons that Spec was created - to be an independence layer. 
> IIUC in Pharo 9 Spec is already working on top of a GTK3 backend.
> 
> wrt the VM, Pharo wants to remove all native windowing from the VM, so that 
> window opening 
> is controlled from the image via FFI rather than the VM.  This conflicts with 
> Squeak's backward compatibility goals.

The VM comprises an execution engine, a memory manager (which share an object 
representation), and an assorted collection of plugins and platform support.  
The execution engine and memory manager are the core support for Smalltalk 
language execution and are shared 100% between Squeak and Pharo.  And I have 
rearchitected this core, adding a JIT and a much improved object representation 
and memory manager.  Pharo has made *no change* to this core.

The assorted collection of plugins and platform support are a kit of parts 
which can be assembled in a variety of configurations, just as a Smalltalk 
image can be configured in radically different ways to develop and deploy 
different applications.

It is therefore not true that there is a conflict in backward compatibility.  
The core VM is only backward compatible at a source level.  Backward 
compatibility in the platform is no more than a configuration in the kit of 
parts.  And the existence of the minheadless minimal core platform support 
alongside the transitional head Gul platform proves that there need be no 
conflict.

The Pharo community makes great claims about how different its VM is when in 
fact the new work that has given us much improved performance and scalability 
is shared 100% between the two.

>> This change would effectively create more devs willing to work on any 
>> problem.  This change would also prevent fracturing of feature-sets across 
>> the two Smalltalks from happening in the first place.
>> 
> 
> I personally had the inspiration that Squeak might be based off the Pharo 
> Headless Bootstrap,
> but in the end I didn't find the time to push this further.
>  
> 
>>  Squeak and Pharo GUI styles are different.  So be it.  Can’t the GUI 
>> frameworks and conventions be separated in the same image, and configured as 
>> desired in GUI sections of Settings?
>> 
> 
> Pharo currently can use both Morphic and GTK3 for its GUI backend.
> Possibly the GTK3 backend would provide some speed benefit (??)
> 
> cheers -ben
> 


Re: [Pharo-dev] Squeak and Pharo speed differences

2020-05-15 Thread Eliot Miranda
Hi Shaping,

_,,,^..^,,,_ (phone)

> On May 15, 2020, at 2:18 AM, Shaping  wrote:
> 
> 
> Arithmetic changes proposed in Squeak have no relationship to the VM.
>  
> The question below is about both the VM and a common basic class-set.  
> Math-related classes/methods are assumed to be part of that common class-set. 
>  Why is that not so?
>  
> Shaping
>  
> On Fri, May 15, 2020 at 2:09 PM Shaping  wrote:
> There is an issue about incorporating Squeak arithmetic changes in Pharo:
> https://github.com/pharo-project/pharo/issues/3322
>  
> I start to understand what could be done and could not find time to do the 
> changes.
> You are welcome if you want to help.
>  
>  
> Arithmetic speed is important if most of one’s work is math and modeling.
>  
> I want to help, but need to port first from VW, and I’m trying to choose 
> Squeak or Pharo.  Both have speed problems.  Squeak has fewer, but Pharo 
> could be much faster with broad use of Spec2.

Generally, comparing VisualWorks to either Squeak or Pharo or both, what are 
the most pressing speed problems?  Where is VisualWorks significantly faster 
than either Squeak or Pharo?  Specifically which VisualWorks VM or lower level 
facilities are much faster than the Cog VM?  Do you have benchmarks?

>  Would reintegrating Squeak and Pharo development make more sense?
>  
> This change would effectively create more devs willing to work on any 
> problem.  This change would also prevent fracturing of feature-sets across 
> the two Smalltalks from happening in the first place.
>  
> Why can’t the OSVM be a single, unforked, maxed-out VM with all the best and 
> fastest features working in Squeak and Pharo?   Why did the split happen?  It 
> looks like a bad use of energy in a community that is small and needs to use 
> its human resources efficiently.
>  
> Squeak and Pharo GUI styles are different.  So be it.  Can’t the GUI 
> frameworks and conventions be separated in the same image, and configured as 
> desired in GUI sections of Settings?
>  
>  
> Shaping 
>  
>  
> On Fri, May 15, 2020 at 12:48 PM Shaping  wrote:
> Hi all.
>  
>  
> Squeak 5.3: 
>   Time millisecondsToRun: [ 10 factorial  ] 6250
>  
> Pharo 8:
>   Time millisecondsToRun: [ 10 factorial  ] 7736
>  
> Why the difference?
>  
> Squeak 5.3 release notes describe arithmetic improvements.  Nice.  I crunch 
> very big numbers, and these improvements therefore have value.  Why would 
> they not be included in OSVM (forked or not) and the basic class-set for both 
> Squeak and Pharo?
>  
> Playing with Squeak 5.3, I’ve noticed that the GUI is snappier.  Browser 
> ergonomics are better too (for me at least), but that can be fixed/tuned in 
> either environment to suit the developer.  (Still that’s some work I prefer not 
> to do.)  Pharo GUIs are now generally slower, except for the Launcher, which 
> is delightfully quick because it is written in Spec2.  I presume that all 
> Pharo GUIs will eventually (ETA?) be written in Spec2 and that Pharo will 
> then be quick in all its GUIs.  The obvious question is:  Will Squeak be 
> improving GUI look/behavior and speed with Spec2?  If not, can I load Spec2 
> into Squeak so that I can do new GUI work there?
>  
> Both Squeak and Pharo have slow text selection.  Pick any word in any pane, 
> and double click it to select it.  When I do this, I sense a 75 to 100 ms 
> latency between the end of the double click and the selection highlight 
> appearing on the word.   I thought I’d entered a wormhole.  So I did the same 
> experiment in VW 8.3.2, VS Code, and Notepad, and all three showed 
> undetectable latencies.   This matters to me.  I’m trying to port from VW to 
> Pharo or Squeak (for a really long time now), and can’t push myself past the 
> text-selection delay problem.  Can text-selection speed be improved to the 
> level of VW’s?   Can someone sketch the algo used and/or point me to the 
> right class/methods. 
>  
> The Squeak debugging experience step-to-step is much quicker.  The latencies 
> in Pharo after button- release are very long.  I estimate 100 to 150 ms.   
> That’s too long for me to work productively.  I lose my mental thread with 
> many of those delays, and have to restart the thought.  It’s a serious 
> problem, caused mostly by acclimation to no detectable latency for many years 
> (Dolphin and VW have quick GUIs).  Is speeding up the Pharo debugger with 
> Spec2 a priority?  I can’t think of a better GUI-related priority for Pharo.
>  
>  
> Not speed-related:
>  
> -  How can I load additional fonts into Squeak?  Pharo does this with the 
> font dialog’s Update button.
>  
> - Where in the Squeak and Pharo images can I change mouse-selection behavior 
> to be leading-edge?  Some of the Squeak panes have this; others don’t.  I 
> want leading-edge action in all panes, and wish the feature were in 
> Preferences/Settings.
>  
>  
>  
> Shaping
>  
>  
>  
>  
>  
>  
>  
>  
>  
> 
> 
> --
> Serge Stinckwich
> https://twit

Re: [Pharo-dev] FullBlockClosure - How to create one...

2020-05-01 Thread Eliot Miranda



> On May 1, 2020, at 11:21 AM, Sean P. DeNigris  wrote:
> 
> Eliot Miranda-2 wrote
>> (FullBlockClosure receiver: nil outerContext: nil method: blockMethod
>> copiedValues: nil) value: 1 value: 2
> 
> Cool! We're getting closer, but in Pharo8 FullBlockClosure DNU
> #receiver:outerContext:method:copiedValues:
> 
> I tried to adapt it to Pharo's API, but the following crashes the VM:
> blockMethod := [:a :b| a < b] method.
> (FullBlockClosure outerContext: nil startpc: nil numArgs: 2 copiedValues:
> nil)
>compiledBlock: blockMethod; value: 1 value: 2


Works fine in Squeak trunk.  

> 
> 
> -
> Cheers,
> Sean
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html
> 



Re: [Pharo-dev] FullBlockClosure - How to create one...

2020-05-01 Thread Eliot Miranda
Hi Sean,


> On May 1, 2020, at 11:21 AM, Sean P. DeNigris  wrote:
> 
> Eliot Miranda-2 wrote
>> (FullBlockClosure receiver: nil outerContext: nil method: blockMethod
>> copiedValues: nil) value: 1 value: 2
> 
> Cool! We're getting closer, but in Pharo8 FullBlockClosure DNU
> #receiver:outerContext:method:copiedValues:
> 
> I tried to adapt it to Pharo's API, but the following crashes the VM:
> blockMethod := [:a :b| a < b] method.
> (FullBlockClosure outerContext: nil startpc: nil numArgs: 2 copiedValues:
> nil)
>compiledBlock: blockMethod; value: 1 value: 2

Does Pharo support the Sista bytecodeset yet?  If not, then no full blocks.
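
A quick, hedged way to check in any image: with full block closures in use (as
with the Sista bytecode set), a literal block compiles to a FullBlockClosure;
otherwise it is a plain BlockClosure.

[ :a :b | a < b ] class   "FullBlockClosure if full blocks are available"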

> 
> 
> 
> -
> Cheers,
> Sean
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html
> 



Re: [Pharo-dev] FullBlockClosure - How to create one...

2020-05-01 Thread Eliot Miranda
you might want to adapt this change set into a package, to rename
BlockClosure's startpc inst var to startpcOrMethod to clear up confusion.



On Fri, May 1, 2020 at 9:49 AM Eliot Miranda 
wrote:

> Hi Max,
>
> On Thu, Apr 30, 2020 at 11:51 PM Max Leske  wrote:
>
>> Hi Sean,
>>
>> You *need* an outer context. See Context>>cleanCopy, which Fuel uses to
>> serialize blocks.
>>
>
> One does *not* need an outer context.  An outer context of nil should be
> fine, provided that the block never attempts an up-arrow return.
>
>
> | blockMethod |
> blockMethod := [:a :b| a < b] method.
> (FullBlockClosure receiver: nil outerContext: nil method: blockMethod
> copiedValues: nil) value: 1 value: 2
>
> So one needs to use FullBlockClosure
> class>>receiver:outerContext:method:copiedValues: and one can supply nil
> for the outerContext provided that the block does not do an up-arrow return.
>
>
>> Cheers,
>> Max
>>
>> On 1 May 2020, at 3:23, Sean P. DeNigris wrote:
>>
>> What am I not understanding about FullBlockClosure?
>>
>> I have a clean block that I'd like to turn into a FullBlockClosure so
>> that I
>> can serialize it without dragging (unneeded methods) into my object graph.
>> However, documentation and in-image example usages seem severely limited.
>> Here was one experiment that ended with a primitive failure. It seems
>> like a
>> receiver is needed and it can't be a dummy value (see commented "receiver:
>> 1"). But what would the receiver be in the absence of an outer context?!
>>
>> aBlockClosure := [ :a :b | 1 + a + b ].
>> fbc := (FullBlockClosure
>> outerContext: nil
>> startpc: aBlockClosure startpc
>> numArgs: aBlockClosure argumentCount
>> copiedValues: Array new) "receiver: 1; yourself".
>>
>> fbc value: 2 value: 3. "PrimitiveFailed: primitive #value:value: in
>> FullBlockClosure failed"
>>
>>
>>
>> -
>> Cheers,
>> Sean
>> --
>> Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html
>>
>>
>
> --
> _,,,^..^,,,_
> best, Eliot
>


-- 
_,,,^..^,,,_
best, Eliot


startpcOrMethod.cs
Description: Binary data


Re: [Pharo-dev] FullBlockClosure - How to create one...

2020-05-01 Thread Eliot Miranda
Hi Max,

On Thu, Apr 30, 2020 at 11:51 PM Max Leske  wrote:

> Hi Sean,
>
> You *need* an outer context. See Context>>cleanCopy, which Fuel uses to
> serialize blocks.
>

One does *not* need an outer context.  An outer context of nil should be
fine, provided that the block never attempts an up-arrow return.


| blockMethod |
blockMethod := [:a :b| a < b] method.
(FullBlockClosure receiver: nil outerContext: nil method: blockMethod
copiedValues: nil) value: 1 value: 2

So one needs to use FullBlockClosure
class>>receiver:outerContext:method:copiedValues: and one can supply nil
for the outerContext provided that the block does not do an up-arrow return.


> Cheers,
> Max
>
> On 1 May 2020, at 3:23, Sean P. DeNigris wrote:
>
> What am I not understanding about FullBlockClosure?
>
> I have a clean block that I'd like to turn into a FullBlockClosure so that
> I
> can serialize it without dragging (unneeded methods) into my object graph.
> However, documentation and in-image example usages seem severely limited.
> Here was one experiment that ended with a primitive failure. It seems like
> a
> receiver is needed and it can't be a dummy value (see commented "receiver:
> 1"). But what would the receiver be in the absence of an outer context?!
>
> aBlockClosure := [ :a :b | 1 + a + b ].
> fbc := (FullBlockClosure
> outerContext: nil
> startpc: aBlockClosure startpc
> numArgs: aBlockClosure argumentCount
> copiedValues: Array new) "receiver: 1; yourself".
>
> fbc value: 2 value: 3. "PrimitiveFailed: primitive #value:value: in
> FullBlockClosure failed"
>
>
>
> -
> Cheers,
> Sean
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html
>
>

-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] FullBlockClosure - How to create one...

2020-04-30 Thread Eliot Miranda
Hi Sean,

> On Apr 30, 2020, at 7:36 PM, Sean P. DeNigris  wrote:
> 
> What am I not understanding about FullBlockClosure?
> 
> I have a clean block that I'd like to turn into a FullBlockClosure so that I
> can serialize it without dragging (unneeded methods) into my object graph.
> However, documentation and in-image example usages seem severely limited.
> Here was one experiment that ended with a primitive failure. It seems like a
> receiver is needed and it can't be a dummy value (see commented "receiver:
> 1"). But what would the receiver be in the absence of an outer context?!
> 
> aBlockClosure := [ :a :b  | 1 + a + b ].
> fbc := (FullBlockClosure
>outerContext: nil
>startpc: aBlockClosure startpc
>numArgs: aBlockClosure argumentCount
>copiedValues: Array new) "receiver: 1; yourself".

FullBlockClosure doesn’t have a startpc.  Because it has its own method its 
startpc is implicit, just like a normal method.

I’m on my phone now, so I don’t know what the creation method is off hand.  But 
if you look at how Context implements the FullBlockClosure value primitives in 
doPrimitive:... you should find it.

> 
> fbc value: 2 value: 3. "PrimitiveFailed: primitive #value:value: in
> FullBlockClosure failed"
> 
> 
> 
> -
> Cheers,
> Sean
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html
> 



Re: [Pharo-dev] [Vm-dev] [OpenSmalltalk/opensmalltalk-vm] Desired size of Eden parameter is not respected (#476)

2020-02-20 Thread Eliot Miranda
Hi Cyril,

> On Feb 20, 2020, at 2:03 AM, CyrilFerlicot  wrote:
> 
> I want to customize the GC parameters for images that will probably grow to 
> be very large.
> 
> One of the most important performance-wise is the size of the Eden. IIUR, the 
> size cannot be changed at runtime so there are two VM parameters. The 
> parameter 44 gives the current size of the Eden while parameter 45 should be 
> used to set the desired size of the Eden.
> 
> What I did was that I executed this:
> 
> $VM Pharo.image --no-default-preferences eval --save "Smalltalk vm 
> parameterAt: 45 put: 67108864. Smalltalk vm parameterAt: 25 put: 33554432. 
> Smalltalk vm parameterAt: 24 put: 67108864. Smalltalk vm parameterAt: 55 put: 
> 0.7. 'GC tunned'"
> $VM Pharo.image
> In the image that opens I then get this:
> 
> Smalltalk vm parameterAt: 44. "49546912"
> Smalltalk vm parameterAt: 45. "67108864"
> There is quite a large margin between the two.
> 
Alas this discrepancy is historical and semantic.  In the old V3 memory manager 
there was only old space and eden, and one could set the size of eden 
dynamically because it was simply “the region at the end of old space”, marked 
by a base/limit pair of pointers (IIRC; I’m on my phone so I might have details 
wrong).

In Spur eden is actually one of three regions of new space, the other two being 
the two survivor spaces, past survivor space and future survivor space (past 
space and future space for short).  Past and future space are the same size and 
swap after each scavenge, which empties eden and past space, copying survivors 
into future space, tenuring overflow to old space and then making future space 
the new past space.  The ratio of each survivor space to eden is 1 to 5, i.e. 
eden is 5/7 of new space and the survivor spaces are 2/7 of new space 
(currently this ratio cannot be changed).

What we really want to do is set the size of new space but the parameter 
historically referred to v3’s eden.  So Spur interprets parameter 45 as setting 
the desired size of (all of) new space but accurately reports the size of eden 
for parameter 44.  (49546912 / 67108864) ≈ (5/7).

So the vm parameter methods should state that vm parameter 45 in v3 sets the 
desired size of eden and in Spur sets the desired size of new space, while in 
both, parameter 44 reports the size of eden.  And yes, this is a bit of a mess. 
 IIRC there is another parameter that reports the size of new space, but it may 
report the occupancy of new space.
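
A small sketch of the arithmetic above, using the same Smalltalk vm
parameterAt:put: expressions as the report (the 64 MB figure is illustrative
only): to aim for an eden of about 64 MB under Spur, request a new space of 7/5
that size, since the survivor spaces take the other 2/7.

Smalltalk vm parameterAt: 45 put: (64 * 1024 * 1024 * 7 // 5).
"after saving and restarting, eden should be roughly 5/7 of the request"
(Smalltalk vm parameterAt: 44) / (Smalltalk vm parameterAt: 45)   "≈ 5/7"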

HTH

_,,,^..^,,,_ (phone)

> I got the VM via: https://files.pharo.org/get-files/pharo64-mac-stable.zip
> 
> Image
> -
> /Users/cyrilferlicot/Downloads/Pharo/Pharo.image
> Pharo8.0.0
> Build information: 
> Pharo-8.0.0+build.1128.sha.9f6475d88dda7d83acdeeda794df35d304cf620d (64 Bit)
> Unnamed
> 
> Virtual Machine
> ---
> /Users/cyrilferlicot/Downloads/Pharo/pharo-mac-vm/Pharo.app/Contents/MacOS/Pharo
> CoInterpreter VMMaker.oscog-eem.2509 uuid: 
> 91e81f64-95de-4914-a960-8f842be3a194 Feb  7 2019
> StackToRegisterMappingCogit VMMaker.oscog-eem.2509 uuid: 
> 91e81f64-95de-4914-a960-8f842be3a194 Feb  7 2019
> VM: 201902062351 https://github.com/OpenSmalltalk/opensmalltalk-vm.git Date: 
> Wed Feb 6 15:51:18 2019 CommitHash: a838346 Plugins: 201902062351 
> https://github.com/OpenSmalltalk/opensmalltalk-vm.git
> 
> Mac OS X built on Feb  7 2019 00:01:47 UTC Compiler: 4.2.1 Compatible Apple 
> LLVM 7.3.0 (clang-703.0.31)
> VMMaker versionString VM: 201902062351 
> https://github.com/OpenSmalltalk/opensmalltalk-vm.git Date: Wed Feb 6 
> 15:51:18 2019 CommitHash: a838346 Plugins: 201902062351 
> https://github.com/OpenSmalltalk/opensmalltalk-vm.git
> CoInterpreter VMMaker.oscog-eem.2509 uuid: 
> 91e81f64-95de-4914-a960-8f842be3a194 Feb  7 2019
> StackToRegisterMappingCogit VMMaker.oscog-eem.2509 uuid: 
> 91e81f64-95de-4914-a960-8f842be3a194 Feb  7 2019
> 
> Virtual Machine Commandline Options
> ---
> none
> 
> Virtual Machine Parameters
> --
> #1  1297188640  end (v3)/size(Spur) of old-space (0-based, read-only)
> #2  8941328     end (v3)/size(Spur) of young/new-space (read-only)
> #3  1365909504  end (v3)/size(Spur) of heap (read-only)
> #4  nil nil (was allocationCount (read-only))
> #5  nil nil (was allocations between GCs (read-write)
> #6  0   survivor count tenuring threshold (read-write)
> #7  4   full GCs since startup (read-only)
> #8  9937    total milliseconds in full GCs since startup (read-only)
> #9  71  incremental GCs (SqueakV3) or scavenges (Spur) since startup 
> (read-only)
> #10   77  total milliseconds in incremental GCs (SqueakV3) or scavenges 
> (Spur) since startup (read-only)
> #11   0   tenures of surving objects since startup (read-only)
> #12   0   12-20 were specific to ikp's JITTER VM, now 12-19 are open for 
> use
> #13   0   12-20 were specific to ikp's JITTER VM, now 12-19 are open for 
> use
> #14   0 

Re: [Pharo-dev] [Ann] Concurrent Programming in Pharo is available

2020-02-10 Thread Eliot Miranda
Hi Stef,

  here's my review/feedback

On Sun, Feb 9, 2020 at 5:05 AM Stéphane Ducasse 
> wrote:
> >
> > On http://books.pharo.org/booklet-ConcurrentProgramming/
>


In fig 1.1 there needs to be a back arrow from Executing to Runnable which
is labelled "Preempted by a higher priority process" or similar.

In the label for fig 1.2 the word "pending" should be replaced with
"runnable".

Page 8, section "Process Priorities"
"At any time only one process is executed." should read "At any time only
one process is executing."
The line starting "Next table lists..." in the last paragraph on page 8
should read "The following table lists...".
The section might want to state something like "the current
implementation of the Pharo scheduler has process priorities from 1 to
100.  Only some of these are named, but the programmer is free to use any
priority within that range that they see fit."  It would be good to include
examples which do things like "forkAt: Processor userPriority + 1", etc.
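
One minimal example of that kind (a sketch; the priority choice and the
transcript message are illustrative):

[ Transcript show: 'high-priority housekeeping'; cr ]
    forkAt: Processor userPriority + 1.
"this forked process preempts processes running at userPriority"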

In the label for Fig 2.1 "ressources" should read "resources".  But
throughout that section I would word it that "processes need to share
resources", but "the process is waiting to acquire the resource".  So use
the plural for the general case and the singular for the specific case.
Sentences such as this: "P0 has finished to use the resources." should read
"P0 has finished using the resource."

In Section 2.1 Conclusion
"Semaphores are the lower synchronisation mechanisms." should read
"Semaphores are the lowest level synchronisation mechanisms."

Chapter 3, "Scheduler's Principles" should state that
"The Smalltalk scheduler is a real-time, cooperative, preemptive across
priorities, non-preemptive within priorities, scheduler."

The title to Chapter 4.5 "ShareQueue: a nice semaphore example" should read
"Share*d*Queue: a nice semaphore example"

Most of the paragraphs on page 49 in section 4.5 are still in French and
need translating.


> S.
>
> 
> Stéphane Ducasse
> http://stephane.ducasse.free.fr / http://www.pharo.org
> 03 59 35 87 52
> Assistant: Julie Jonas
> FAX 03 59 57 78 50
> TEL 03 59 35 86 16
> S. Ducasse - Inria
> 40, avenue Halley,
> Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
> Villeneuve d'Ascq 59650
> France
>
>

-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Bug 18632 : Virtual Machine parameters need to be documented

2020-02-07 Thread Eliot Miranda
Hi Benoit,

On Fri, Feb 7, 2020 at 8:20 AM Nicolas Cellier <
nicolas.cellier.aka.n...@gmail.com> wrote:

> Hi Benoit,
> for some reason (?), your email was marked as SPAM in gmail...
> There is a short description in primitiveVMParameter, which is found in
> generated code, for example:
>
> https://raw.githubusercontent.com/OpenSmalltalk/opensmalltalk-vm/Cog/src/vm/gcc3x-cointerp.c
> or VMMaker source code (you can also find VMMaker source code in pharo vm
> github repo, no time to dig).
>
> Le jeu. 6 févr. 2020 à 12:45, Benoit St-Jean via Pharo-dev <
> pharo-dev@lists.pharo.org> a écrit :
>
>> I was looking for some documentation on VM parameters (are they read,
>> read-write or write only as well as the expected argument type) and came
>> across that issue on the old FogBugz database.
>>
>
Are you talking just about vmParameterAt:[put:]/vm parameterAt:[put:] or
are you also talking about command line arguments (pharo-vm --help / squeak
-help)?
If the former, there are much more current comments in trunk Squeak
SmalltalkImage>>vmParameterAt:[put:] than in Pharo; you could usefully copy
that across. The comments in the two methods accurately specify which
parameters are read/write, and which persist in the image header.

But I agree, more verbose documentation would be very helpful; the current
documentation is very terse, and while it describes what the parameters are
and whether they are writable, it really doesn't describe what the systems
that they operate on actually do.  I'm happy to collaborate with you in
developing better documentation.  If you will do the writing I will happily
consult.  Does that work?


>>
>> https://pharo.fogbugz.com/f/cases/18632/Virtual-Machine-parameters-need-to-be-documented
>>
>> I tried looking it up on GitHub but couldn't find it.  Am I missing
>> something or I should open an issue?  I'm working on something related
>> to various memory settings and not having any info on a lot of these
>> parameters doesn't help.  While I could meticulously read tons of C code
>> of the VM and pinpoint exactly what I need to know for each and everyone
>> of these undocumented parameters, I'd gladly do it myself if any of the
>> VM guys (Clément, Pablo, Alexandre or Eliot) have notes/documents/links
>> that could help me do it.
>>
>> tia
>>
>> --
>> -
>> Benoît St-Jean
>> Yahoo! Messenger: bstjean
>> Twitter: @BenLeChialeux
>> Pinterest: benoitstjean
>> Instagram: Chef_Benito
>> IRC: lamneth
>> GitHub: bstjean
>> Blogue: endormitoire.wordpress.com
>> "A standpoint is an intellectual horizon of radius zero".  (A. Einstein)
>>
>>
>>

-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] printing Symbol

2020-01-19 Thread Eliot Miranda


> On Jan 19, 2020, at 1:50 PM, Stéphane Ducasse  
> wrote:
> 
> 
> The idea that is that I would like to be able to 
> 
> text -> tokens -> text
> 
> For text -> tokens 
> 
>   (RBScanner on: 'self classVariables: { #A . #B }' readStream)
>   contents collect: #value
> 
> 
> I wrote a little method that takes the result of the RBScanner and recreate 
> the text
> But I cannot get this method to work. 
> I’m puzzled because the symbols are eaten.
> 
> 
> expressionStringFrom: aLine
>   "self new 
>       expressionStringFrom: #('self' 'classVariables:' ${ #A $. #B $}) 
>           >>> 'self classVariables: { A . B }'"
>   ^ String streamContents: [ :s |
>       aLine 
>           do: [ :each | s << each ]
>           separatedBy: [ s space ] ]
> 
> I tried with print:, printOn:, but I failed. 
> 
> Any idea?

With Symbols one needs to use storeOn:
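
A small illustrative sketch of the difference (assuming Symbol>>storeOn: writes
the leading #, which is what the advice above relies on): << on a Symbol writes
only its characters, so the # is lost, while storeOn: keeps it.

String streamContents: [ :s | s << #A ].          "'A'  - the symbol is 'eaten'"
String streamContents: [ :s | #A storeOn: s ].    "'#A' - round-trippable"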

> 
> S. 
> 
> 
> 
> 
> 
> Stéphane Ducasse
> http://stephane.ducasse.free.fr / http://www.pharo.org 
> 03 59 35 87 52
> Assistant: Julie Jonas 
> FAX 03 59 57 78 50
> TEL 03 59 35 86 16
> S. Ducasse - Inria
> 40, avenue Halley, 
> Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
> Villeneuve d'Ascq 59650
> France
> 


Re: [Pharo-dev] about signal

2020-01-12 Thread Eliot Miranda
Hi Alastair,

>> On Jan 12, 2020, at 12:34 AM, Alistair Grant  wrote:
> On Thu, 9 Jan 2020 at 13:01, ducasse  wrote:
>> 
>> Hi
>> 
>> I wanted to explain
>> 
>> | semaphore p1 p2 |
>> semaphore := Semaphore new.
>> p1 := [ semaphore wait.
>>'p1' crTrace ] fork.
>> 
>> p2 := [semaphore signal.
>> 'p2' crTrace ] fork.
>> 
>> displays p2 and p1.
>> but I would like explain clearly but it depends on the semantics of signal.
> 
> The way this is phrased seems to imply that 'p2' will always be
> displayed before 'p1', however in Pharo this is not guaranteed (when
> the processes are at the same priority, as they are this example).
> 
> As Eliot implied in another reply, Pharo has #processPreemptionYields
> set to true, which means that any time a higher priority process
> preempts, the current process will be moved to the back of the queue.
> 
> So in the case above, after p2 signals the semaphore, if a timer was
> delivered or keystroke pressed, p2 would be suspended and moved to the
> back of the queue.  When the timer / keystroke / etc. had finished
> processing p1 would be at the front of the queue and would complete
> first.
> 
> Since time and input events are (for practical purposes) unpredictable
> it means that the execution order of processes at a given priority is
> also unpredictable.
> 
> While this isn't likely to happen in the example above, I have seen it
> regularly with TaskIt and multiple entries being run concurrently.
> 
> I agree with Eliot that changing #processPreemptionYields to true by
> default would be an improvement in Pharo.  It would make it easier to
> predict what is happening in a complex environment

You mean to write that 

“I agree with Eliot that changing #processPreemptionYields to false by
default would be an improvement in Pharo.  It would make it easier to
predict what is happening in a complex environment.”

Preemption by a higher priority process should not cause a yield.

> Running the following variant, and then typing in to another window,
> demonstrates the behaviour:
> 
> | semaphore p1 p2 |
> semaphore := Semaphore new.
> [ 100 timesRepeat: [
> p1 := [ | z |
> semaphore wait.
> z := SmallInteger maxVal.
>   1000 timesRepeat: [ z := z + 1 ].
> 'p1' crTrace ] fork.
> 
> p2 := [ | z | 1 second wait.
> semaphore signal.
> z := SmallInteger maxVal.
>   1000 timesRepeat: [ z := z + 1 ].
>   'p2' crTrace ] fork.
> 1 second wait.
> ] ] fork.
> 
> 
> The tail of transcript:
> 
> 'p2'
> 'p1'
> 'p1'
> 'p1'
> 'p1'
> 'p2'
> 'p2'
> 'p2'
> 'p1'
> 'p1'
> 'p2'
> 'p1'
> 'p2'
> 'p2'
> 'p1'
> 'p1'
> 'p2'
> 'p1'
> 
> 
> 
> Cheers,
> Alistair

Cheers, Alistair!
_,,,^..^,,,_ (phone)

Re: [Pharo-dev] about signal

2020-01-10 Thread Eliot Miranda
On Fri, Jan 10, 2020 at 2:01 PM Eliot Miranda 
wrote:

> Hi Steph,
>
>
> On Jan 10, 2020, at 12:42 PM, ducasse  wrote:
>
> Yes this is why in my chapter on Exception I show the VM code while some
> people told me that it was not interesting.
> And this is why in the current chapter on semaphore I have a section on
> the implementation.
> Now it does not mean that the we cannot have a higher view too :).
>
>
> Indeed.  Note that now we have two improvements supported by the VM over
> the blue book scheduler & synchronization primitives.
>
>
Oops!  I forgot to mention the other improvement.  That is the ability of
the scheduler to add a process to the front of a particular run queue when
a process is preempted, not to the back of its run queue as is specified
(erroneously) in the original Smalltalk-80 specification.  Why is this
erroneous?

Smalltalk has a real-time preemptive-across-priorities,
cooperative-within-priorities scheduling model.  No process at the same
priority as the active process can preempt the active process.  Instead it
must wait until the active process yields (which moves a process to the
back of its run queue, allowing all other runnable processes at its
priority a chance to run until it will run again), is suspended (on a
semaphore or mutex), or explicitly suspends (via the suspend primitive).
So when the original scheduler puts a process at the end of its run queue
when a higher priority process preempts it that introduces an implicit
yield, which violates the contract, a contract that can be used to
implement cheapjack-free mutual exclusion between processes of the same
priority.

So the improvement, selected by a vm flag, is to cause preemption to add a
process to the front of its run queue, maintaining the order and preserving
the contract.
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] about signal

2020-01-10 Thread Eliot Miranda
 methods in
the system focus on the semaphore or native critical section.  What (I
think) programmers want is to understand how the process behaves, not
understand how the semaphore or native critical section works.  So
documenting things from a process perspective is more useful.

P.P.S.  If you compare the performance of the constructed Mutex against the
native Mutex please report the results.

P.P.P.S. We had to step carefully to replace the old Mutex with the new
one.  I can't remember the details, but we handled it with Monticello
load scripts, and we can find the details if you need them.


On 10 Jan 2020, at 18:29, Nicolas Cellier <
nicolas.cellier.aka.n...@gmail.com> wrote:


For example, whether a Semaphore would queue waiting process by order of
registration (thru a linked list for example) or by order of priority (thru
a Heap for example), would completely change its behavior.
So isn't that kind of implementation detail SUPER important, especially
when hidden in VM?

Also, describing HOW it works is very often used as a mean to explain (and
make understand) a higher level feature.
IMO, understanding a feature from one implementation is as useful as
understanding a feature by examples of usage.
Even when implementation is plain Smalltalk, it's already an added value to
give main guidelines to help reading code (see this particular class or
message for understanding the core...), so when it's in VM...

Le ven. 10 janv. 2020 à 12:59, Danil Osipchuk  a
écrit :

> I didn't claim expertise on the subject (although I use semaphores
> extensively), nor its simplicity, nor that the implementation description
> should be the only guide on its usage (hence 'to add..., how it works'
> wording)
> Said that, to me it is the case, when a clear description of what is going
> on aids a lot. Instead of trying to define some rules and scenarios
> abstractly, it helps a user to reason about the system behavior (isn't that
> why Stef was willing to look into the VM code?).
>
> To me both scenarios of Stef could be explained by the fact that in the first
> case the 'signal' process is not getting preempted by the 'wait' process of the
> same priority, while in the second the preemption happens upon return from the
> primitive (hopefully my memory serves me well and my understanding is correct).
>
> A tangent note on comments in general -- I've noticed more than once, that
> people tend to produce far  clearer descriptions in exchanges like this --
> when discussing matter with others.
> When a person is in documentation/comment writing mode he/she sort of
> tenses up into a formal state and often produces something not very helpful.
> Current class comment of Semaphore is a perfect example, if I were not
> familiar with the concept from other sources, I would not be able to make
> any sense of it. So, I would suggest to use opportunities like this to
> improve comments/docs when a bit of knowledge shows up in a discussion.
>
>
>
> regards,
>  Danil
>
> пт, 10 янв. 2020 г. в 13:09, Sven Van Caekenberghe :
>
>> Actually, it is just a, albeit concise, description of how Semaphores are
>> implemented.
>>
>> It does not help much in understanding them, in learning how they
>> can/should be used, for what purposes and how code behaves.
>>
>> Understanding of Process, priorities and Scheduling are also needed for a
>> more complete understanding.
>>
>> This is not a simple subject.
>>
>> Read https://en.wikipedia.org/wiki/Semaphore_(programming) and see how
>> well you understand the subject.
>>
>> In short, it does not answer Stef's concrete question(s).
>>
>> > On 10 Jan 2020, at 06:30, Danil Osipchuk 
>> wrote:
>> >
>> > Maybe to add this into the class comment, this is the most concise and
>> clear description of how it works i've ever seen
>> >
>> > пт, 10 янв. 2020 г., 8:13 Eliot Miranda :
>> >
>> >
>> > On Thu, Jan 9, 2020 at 5:03 AM ducasse 
>> wrote:
>> > Hi
>> >
>> > I wanted to explain
>> >
>> > | semaphore p1 p2 |
>> > semaphore := Semaphore new.
>> > p1 := [ semaphore wait.
>> > 'p1' crTrace ] fork.
>> >
>> > p2 := [semaphore signal.
>> >  'p2' crTrace ] fork.
>> >
>> > displays p2 and p1.
>> > but I would like explain clearly but it depends on the semantics of
>> signal.
>> >
>> >
>> > - ==p1== is scheduled and its execution starts to wait on the
>> semaphore, so it is removed from the run queue of the scheduler and added
>> to the waiting list of the semaphore.
>> 

Re: [Pharo-dev] about signal

2020-01-09 Thread Eliot Miranda
On Thu, Jan 9, 2020 at 5:03 AM ducasse  wrote:

> Hi
>
> I wanted to explain
>
> | semaphore p1 p2 |
> semaphore := Semaphore new.
> p1 := [ semaphore wait.
> 'p1' crTrace ] fork.
>
> p2 := [semaphore signal.
>  'p2' crTrace ] fork.
>
> displays p2 and p1.
> but I would like explain clearly but it depends on the semantics of
> signal.
>
>
> - ==p1== is scheduled and its execution starts to wait on the semaphore,
> so it is removed from the run queue of the scheduler and added to the
> waiting list of the semaphore.
> - ==p2== is scheduled and it signals the semaphore. The semaphore takes
> the first waiting process (==p1==) and reschedule it by adding it to the
> end of the suspended lists.
>

Since the Smalltalk scheduler does not preempt processes within the same
priority, neither p1 nor p2 will start to run until something else happens after
the execution of p1 := [...] fork. p2 := [...] fork. So, for example, if there
is a Processor yield then p1 can start to run.

So you need to add code to your example to be able to determine what will
happen.  The easiest thing would be to delay long enough that both can run.
 1 millisecond is more than enough.
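
For example, a minimal sketch of the amended example (the Delay expression is
the added code; everything else is from the original snippet):

| semaphore p1 p2 |
semaphore := Semaphore new.
p1 := [ semaphore wait. 'p1' crTrace ] fork.
p2 := [ semaphore signal. 'p2' crTrace ] fork.
(Delay forMilliseconds: 1) wait.
"by now p2 has signalled and terminated and p1 has been released,
 so the transcript shows p2 then p1"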


> Now this sentence "The semaphore takes the first waiting process (==p1==)
> and reschedule it by adding it to the end of the suspended lists.” is super
> naive. Is the semaphore signalling scheduled? or not?
>

I would say these three things, something like this:

"A semaphore is a queue (implemented as a linked list) and an excess
signals count, which is a non-negative integer.  On instance creation a new
semaphore is empty and has a zero excess signals count.  A semaphore
created for mutual exclusion is empty and has an excess signals count of
one."

"When a process waits on a semaphore, if the semaphore's excess signals
count is non-zero, then the excess signal count is decremented, and the
process proceeds.  But if the semaphore has a zero excess signals count
then the process is unscheduled and added to the end of the semaphore,
after any other processes that are queued on the semaphore."

"When a semaphore is signaled, if it is not empty, the first process is
removed from it and added to the runnable processes in the scheduler. If
the semaphore is empty, its excess signals count is incremented."

Given these three statements it is easy to see how they work, how to use
them for mutual exclusion, etc.
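
For instance, here is the mutual exclusion case built from nothing but the three
statements above (in the image this is packaged up as Semaphore class>>forMutualExclusion
and Semaphore>>critical:; #first and #second are just markers):

    | mutex log |
    mutex := Semaphore new.
    mutex signal.               "empty, excess signals count = 1: the resource is free"
    log := OrderedCollection new.
    [ mutex wait.               "count drops back to 0; this process proceeds"
      log add: #first.
      mutex signal ] fork.      "wakes a queued process, or the count becomes 1 again"
    [ mutex wait.               "if the first still holds it, this process queues here"
      log add: #second.
      mutex signal ] fork.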


>
> signal
> "Primitive. Send a signal through the receiver. If one or more
> processes
> have been suspended trying to receive a signal, allow the first
> one to
> proceed. If no process is waiting, remember the excess signal.
> Essential.
> See Object documentation whatIsAPrimitive."
>
> 
> self primitiveFailed
>
> "self isEmpty
> ifTrue: [excessSignals := excessSignals+1]
> ifFalse: [Processor resume: self removeFirstLink]"
>
>
> I wanted to know what is really happening when a semaphore is signalled.
> Now resume: does not exist on Processor.
>
> I will look in the VM code.
>
>
> S
>
>
>
>
>
> S.
>
>

-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Debugging GCC code generation

2019-12-11 Thread Eliot Miranda
On Wed, Dec 11, 2019 at 12:03 PM Nicolas Cellier <
nicolas.cellier.aka.n...@gmail.com> wrote:

> Yes,
> But we have to replace naturally crappy code (splitting a long into 2 ints) that
> was once legal with even more crappy code (memcpy), so all in all, it's a
> crappy art.
>

:-)  Indeed.  Personally I liked it when C was a portable assembler.
That's what it's fit for and that's what it should be good at.  Trying to
pretend it's Pascal is, um, pretentious?


>
> Le mer. 11 déc. 2019 à 19:30, teso...@gmail.com  a
> écrit :
>
>> Hi Nicolas,
>>   thanks for the comment, you are right that the problem is the bad
>> original code. But in my opinion the compiler just is not reporting the
>> situation correctly; generating a warning or not optimizing the code
>> looks more like the expected behavior. Because, as I have said, using a
>> constant as the index in the last statement generates a meaningful warning
>> and the non-optimized version of the function.
>>
>> And again as you said, the only thing to learn about all this is that
>> we should not write crappy code.
>>
>> On Wed, Dec 11, 2019 at 7:11 PM Nicolas Cellier
>>  wrote:
>> >
>> > Of course, when I say "your" code, it's the code you have shown, and
>> probably "our" (VMMaker) code ;)
>> >
>> > Le mer. 11 déc. 2019 à 19:05, Nicolas Cellier <
>> nicolas.cellier.aka.n...@gmail.com> a écrit :
>> >>
>> >> Hi Pablo (again),
>> >> no, not a bug.
>> >>
>> >> The problem is in the source code. The compiler has the right to
>> presume that your code is exempt of UB, because you cannot depend on UB
>> (obviously).
>> >> So it can eliminate all code which corresponds to UB.
>> >>
>> >> The compiler has the right to assume that a pointer to an int cannot
>> point to a long (UB).
>> >> So modifying a long cannot have any sort of impact on the content of
>> the int pointer.
>> >> So the compiler can decouple both path return int content and assign
>> long.
>> >> But assigning the long has no effect, so the code can be suppressed
>> altogether.
>> >>
>> >> Le mer. 11 déc. 2019 à 18:54, teso...@gmail.com  a
>> écrit :
>> >>>
>> >>> Hi Aliaksei,
>> >>>   to me it looks like a bug in GCC's optimization. Basically, it is
>> >>> assuming that the x variable is used but never read or its value is
>> >>> never used. Also it assumes the same of the i variable, as we are only
>> >>> accessing indirectly the memory where it is located (the code is even
>> >>> assuming that the variable exists, but it can be optimized out as in
>> >>> this scenario). Even though the original C code is valid C code, we
>> >>> are not helping the compiler by writing code like that. So I have
>> >>> rewritten the code in a way that does not use indirect memory access
>> >>> to the stack space.
>> >>>
>> >>> One more thing that makes me think it is a bug: if you use an int
>> >>> constant as the index and not a parameter, the error does not occur
>> >>> (the code is not badly optimized) and there is a warning about the
>> >>> not-so-great access to the stack.
>> >>>
>> >>> On Wed, Dec 11, 2019 at 6:01 PM Aliaksei Syrel 
>> wrote:
>> >>> >
>> >>> > Hi Pablo,
>> >>> >
>> >>> > Wow! Thank you for the detective story :)
>> >>> >
>> >>> > Do I understand correctly that the original code causes undefined
>> behavior and therefore can be changed (or even removed) by the compiler?
>> >>> > (because it returns something that is referencing memory on the
>> stack)
>> >>> >
>> >>> > Please keep posting similar things in future! It is very educative
>> :)
>> >>> >
>> >>> > Cheers,
>> >>> > Alex
>> >>> >
>> >>> >
>> >>> > On Wed, 11 Dec 2019 at 17:35, teso...@gmail.com 
>> wrote:
>> >>> >>
>> >>> >> Hi,
>> >>> >> this mail is related to Pharo because it is knowledge I found
>> >>> >> debugging the build of the VM, but the rest is to document it and
>> >>> >> perhaps someone will found it interesting (also I couldn't find it
>> >>> >> easily using Google). Sorry for the long mail!
>> >>> >>
>> >>> >> The problem
>> >>> >> ==
>> >>> >>
>> >>> >> The following code does not produce good code in 8.3 when using
>> optimizations:
>> >>> >>
>> >>> >> long __attribute__ ((noinline)) myFunc(long i, int index){
>> >>> >>long v;
>> >>> >>long x = i >> 3;
>> >>> >>
>> >>> >>v = x;
>> >>> >>return ((int*)(&v))[index];
>> >>> >> }
>> >>> >>
>> >>> >> #include <stdio.h>
>> >>> >>
>> >>> >> int main(){
>> >>> >>
>> >>> >> long i;
>> >>> >> int x;
>> >>> >>
>> >>> >> scanf("%ld", &i);
>> >>> >> scanf("%d", &x);
>> >>> >>
>> >>> >> printf("%ld",myFunc(i,x));
>> >>> >> }
>> >>> >>
>> >>> >> Basically, with -02, it generates the following code:
>> >>> >>
>> >>> >> myFunc:
>> >>> >>  movslq %esi, %rsi
>> >>> >>  movslq -8(%rsp,%rsi,4), %rax
>> >>> >>  ret
>> >>> >>
>> >>> >> And with -01 it generates the following code:
>> >>> >>
>> >>> >> myFunc:
>> >>> >>  sarq $3, %rdi
>> >>> >>  movq %rdi, -8(%rsp)
>> >>> >>  movslq %esi, %rsi
>> >>> >>  movslq -8(%rsp,%rsi,4), %rax
>> >>> >>  ret
>

[Pharo-dev] Doing the equivalent of "fast forward" with Iceberg

2019-10-17 Thread Eliot Miranda
Hi All,

I'm involved in a team project using Pharo 7.1.  I have some
uncommitted changes in a package that others have committed to.  I want to
pull their latest commits without overwriting mine and because my changes
are incomplete I don't yet want to commit, and hence don't want to create a
branch.

Can I just pull in the Pull tool without overwriting my changes?

Does this do the equivalent of git's "fast forward" if there are no
conflicts?

If there are conflicts, what happens?
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Seg Fault Pharo 7.0.3

2019-10-06 Thread Eliot Miranda
Hi Sean, Hi All,

this may be because of the issue described here: 
http://forum.world.st/Difficult-to-debug-VM-crash-with-full-blocks-and-Sista-V1-tt5103810.html

This issue is characterized by the system crashing soon after start up when 
some significant i/o is done, typically either to files or sockets.  It affects 
macOS only and may indeed affect only 64-bits.  We have strong evidence that it 
is caused by the dynamic linker being invoked in the signal handler for SIGIO 
when the signal is delivered while the VM is executing JITted code.  The 
symptom that causes the crash is corruption of a particular jitted method’s 
machine code, eg Delay class>>#startEventLoop, and we believe that the 
corruption is caused by the linker when it misinterprets a jitted Smalltalk 
stack frame as an ABI-compliant stack frame and attempts to scan code to link 
it.

Our diagnosis is speculative; this is extremely hard to reproduce.  Typically 
in repeating a crashing run SIGIO may no longer be delivered at the same point 
because any remote server has now woken up and delivers results sooner, etc.  
However, Nicolas Cellier and I are both confident that we have correctly 
identified the bug.

The fix is simple; SIGIO should be delivered on a dedicated signal stack (see 
sigaltstack(2)).  I committed a fix yesterday evening and we should see within 
a week or so if these crashes have disappeared.

I encourage the Pharo vm maintainers to build and release vms that include 
https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/c24970eb2859a474065c6f69060c0324aef2b211
 asap.


Cheers,
Eliot
_,,,^..^,,,_ (phone)

> On Oct 3, 2019, at 1:24 PM, Sean P. DeNigris  wrote:
> 
> Segmentation fault Thu Oct  3 15:52:33 2019
> 
> 
> VM: 201901051900 https://github.com/OpenSmalltalk/opensmalltalk-vm.git
> Date: Sat Jan 5 20:00:11 2019 CommitHash: 7a3c6b6
> Plugins: 201901051900 https://github.com/OpenSmalltalk/opensmalltalk-vm.git
> 
> C stack backtrace & registers:
>rax 0x00012438 rbx 0x7ffeebd00050 rcx 0x00468260 rdx
> 0x00dd6800
>rdi 0x000124cee5a0 rsi 0x000124cee5a0 rbp 0x7ffeebcffe50 rsp
> 0x7ffeebcffe50
>r8  0x7fff3f2cefe5 r9  0x0b00 r10 0x6000 r11
> 0xfcd8d5a0
>r12 0x0002 r13 0x3580 r14 0x7ffeebd00064 r15
> 0x2800
>rip 0x7fff630f7d09
> 0   libsystem_platform.dylib0x7fff630f7d09
> _platform_memmove$VARIANT$Haswell + 41
> 1   Pharo   0x000103f52642 reportStackState
> + 952
> 2   Pharo   0x000103f52987 sigsegv + 174
> 3   libsystem_platform.dylib0x7fff630fab3d _sigtramp + 29
> 4   ??? 0x05890a00 0x0 +
> 6085968660992
> 5   libGLImage.dylib0x7fff3f2ce29e
> glgProcessPixelsWithProcessor + 2149
> 6   AMDRadeonX5000GLDriver  0x00010db16db1 glrATIStoreLevels
> + 1600
> 7   AMDRadeonX5000GLDriver  0x00010db52c83
> glrAMD_GFX9_LoadSysTextureStandard + 45
> 8   AMDRadeonX5000GLDriver  0x00010db519bb glrUpdateTexture
> + 1346
> 9   libGPUSupportMercury.dylib  0x7fff5181279d
> gpusLoadCurrentTextures + 591
> 10  AMDRadeonX5000GLDriver  0x00010db5a099 gldUpdateDispatch
> + 397
> 11  GLEngine0x7fff3ff72078
> gleDoDrawDispatchCore + 629
> 12  GLEngine0x7fff3ff16369
> glDrawArraysInstanced_STD_Exec + 264
> 13  GLEngine0x7fff3ff1625a
> glDrawArrays_UnpackThread + 40
> 14  GLEngine0x7fff3ff6dce1 gleCmdProcessor +
> 77
> 15  libdispatch.dylib   0x7fff62ec2dcf
> _dispatch_client_callout + 8
> 16  libdispatch.dylib   0x7fff62ecea2c
> _dispatch_lane_barrier_sync_invoke_and_complete + 60
> 17  GLEngine0x7fff3fec4b85
> glFlush_ExecThread + 15
> 18  Pharo   0x000103f4cc62
> -[sqSqueakOSXOpenGLView drawRect:flush:] + 314
> 19  Pharo   0x000103f4cb22 -
> ...
> 
> Smalltalk stack dump:
>0x7ffeebd14238 M DelaySemaphoreScheduler>unscheduleAtTimingPriority
> 0x10fab3ad0: a(n) DelaySemaphoreScheduler
>0x7ffeebd14270 M [] in
> DelaySemaphoreScheduler(DelayBasicScheduler)>runBackendLoopAtTimingPriority
> 0x10fab3ad0: a(n) DelaySemaphoreScheduler
>   0x1125923f8 s BlockClosure>ensure:
>   0x111e88d30 s
> DelaySemaphoreScheduler(DelayBasicScheduler)>runBackendLoopAtTimingPriority
>   0x112590a50 s [] in
> DelaySemaphoreScheduler(DelayBasicScheduler)>startTimerEventLoopPriority:
>   0x111e88e08 s [] in BlockClosure>newProcess
> 
> Most recent primitives
> @
> actualScreenSize
> millisecondClockValue
> tempAt:
> 
> 
> 
> -
> Cheers,
> Sean
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Devel

Re: [Pharo-dev] Pharo source code formatting guide

2019-07-06 Thread Eliot Miranda
On Thu, Jul 4, 2019 at 11:58 PM Norbert Hartl  wrote:

> Sometimes I wonder when I change a piece of code in pharo if there is an
> official formatting guide line. Is the formatter in calypso the incarnation
> of it or how code is supposed to be formatted in the offical image?
>
> I just see tons of occurrences where caret immediately follows a token and
> such which I don’t like at all.
>

There are many styles, many opinions.  Kent Beck presents a well argued set
of rules that work for visual thinkers like me; the main thing I like is
rectangular blocks.

Kent Beck, Smalltalk Best Practice Patterns, p. 126 onwards

As Kent says

"The priorities of these patterns are:
1. To make the gross structure of the method apparent at a glance. Complex
messages and blocks, in particular, should jump out at the reader.

2. To preserve vertical space. There is a huge difference between reading a
method that fits into the text pane of a browser and reading one that
forces you to scroll. Keeping methods compact vertically lets you have
smaller browsers and still be able to read methods without scrolling. This
reduces window management overhead and leaves more screen space for other
programming tools."
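
To illustrate (a toy example of my own, not one from the book), those priorities
give methods shaped roughly like this, with each keyword of the complex message
on its own line and the multi-statement block reading as an indented rectangle:

    firstVowelIn: aString
        ^aString
            detect: [:char | 'aeiou' includes: char asLowercase]
            ifNone:
                ['no vowel found' crTrace.
                 nil]

The gross structure - one keyword message whose second argument is a two-statement
block - is visible at a glance, and the method still fits in a small browser pane.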

_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Any experts on using path in app bundles willing to help me?

2019-04-18 Thread Eliot Miranda
On Thu, Apr 18, 2019 at 10:08 AM Eliot Miranda 
wrote:

> Hi All,
>
> I have a plugin dependent on several support libraries that may be
> shared with other plugins.  So I want the support libraries in a common
> place (TheVm.app/Contents/Frameworks) while the plugins themselves are
> bundles in TheVM.app/Contents/Resources.  In allowing one to install plugin
> support libraries in TheVM.app/Contents/Frameworks I have to use the
> linker's -rpath feature to specify where dlopen may find the search
> libraries.  The question i don't see an answer to in Apple's documentation
> is whether one uses -rpath when linking the plugin, or when linking the VM
> into which the plugin will be loaded, or both.  Anybody ever done this and
> know definitively what to do?
>

Never mind.  I think I've found what I need:
https://medium.com/@donblas/fun-with-rpath-otool-and-install-name-tool-e3e41ae86172
https://wincent.com/wiki/@executable_path,_@load_path_and_@rpath

_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Questions about DebugSession>>isContextPostMortem:

2019-03-21 Thread Eliot Miranda
Hi Thomas, Hi Max,

On Thu, Mar 21, 2019 at 3:35 AM Max Leske  wrote:

> On 21 Mar 2019, at 11:06, Thomas Dupriez wrote:
>
> Hi Thomas,
>
> > Hello,
> >
> > While looking at the DebugSession>>isContextPostMortem: method (code
> > below), I got three questions:
> >
> > 1) There is a check for whether the suspendedContext (top context) of
> > the process is nil. Does it even make sense for a process not to have
> > any top context?
>
> Yes, only suspended processes have a suspendedContext. Also, the process
> might have been terminated already.
>
> >
> > 2) It seems that all the last 3 lines are doing is to check whether
> > selectedContext is in the context chain of the process. Could they be
> > rewritten into this simpler one-liner?   `^ (suspendedContext
> > hasContext: selectedContext) not`
>
> Yes, I think that would work.
>
> >
> > 3) Overall, this method says that a context C is "post mortem" if the
> > process controlled by the DebugSession has a top context and C is not
> > in its context chain. That's the practical definition. Could someone
> > shed some light on the high-level definition of "post mortem"? Because
> > "post mortem" is like "after death", but the death of what? A context
> > that merely belongs to another context chain would be considered "post
> > mortem" by the practical definition, but there's no death in this
> > case...
> >
> >
>
> You can create a copy of a process' stack. That stack will behave like a
> process in the debugger but it can't run, as the VM doesn't have a
> process to which the context are attached, hence it's considered
> post-mortem.
>
> >
> > DebugSession>>isContextPostMortem: selectedContext
> >     "return whether we're inspecting a frozen exception without a process attached"
> >     | suspendedContext |
> >     suspendedContext := interruptedProcess suspendedContext.
> >     suspendedContext ifNil: [ ^ false ].
> >     (suspendedContext == selectedContext) ifTrue: [ ^ false ].
> >     ^ (suspendedContext findContextSuchThat: [:c | c sender == selectedContext]) isNil
>

In addition a context is post-mortem if its pc is nil, which at least in
Squeak is answered by the Context>>isDead method.  So I might look at
writing that method as

DebugSession>>isContextPostMortem: selectedContext
    "Answer if we're inspecting a frozen exception without a process attached."
    ^selectedContext isDead
        or: [(interruptedProcess suspendedContext hasContext: selectedContext) not]
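
For reference, hasContext: amounts to walking the sender chain from the suspended
context looking for the argument; a rough sketch (not the actual implementation):

    hasContext: aContext
        "Answer whether aContext is the receiver or one of its senders."
        | ctxt |
        ctxt := self.
        [ctxt isNil] whileFalse:
            [ctxt == aContext ifTrue: [^true].
             ctxt := ctxt sender].
        ^false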


> >
> > Does someone know the answer to some (or all) of these questions?
> >
> > Thomas
>
>
> Hope that helps.
>
> Max
>
>

-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] managing modification of class initialization methods

2019-03-04 Thread Eliot Miranda
On Mon, Mar 4, 2019 at 8:26 AM Sven Van Caekenberghe  wrote:

> (1) the basic concepts are clear (and have been for a long time):
>
> - when a class initialize method is loaded, it is executed afterwards, if
> and only if the source code changed
>
> - there are startUp[:] and shutDown[:] and SessionManager to handle images
> coming up, saving and going down
>
> With these tools you can build any behaviour you want.
>
> I am not sure we need something else, except maybe more education
>
>
> (2) the problem is much larger than just the class initialize method,
> since that can call other methods (like #initializeConstants, etc, ..) -
> even if the class initialize method did not change, a method further down
> might have and could require re-initialization.
>
> For this reason I sometimes put a timestamp in a comment of the class
> initialize method to force re-initialization since I known that I added a
> new constant somewhere (much) further down.
>
>
> Complex new features might do more harm than good
>

+1000.  KISS.
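
For concreteness, a minimal sketch of the second mechanism listed above (class-side
startUp:/shutDown: hooks plus registration; MyClass, Cache and #flushHandles are
made up, and the registration selector is Pharo's SessionManager API - check that
class if in doubt):

    MyClass class >> startUp: resuming
        "Called when the image comes up; on a real resume drop anything that
         must be rebuilt for the new session."
        resuming ifTrue: [Cache := nil]

    MyClass class >> shutDown: quitting
        "Called before the image is saved or quits."
        quitting ifTrue: [self flushHandles]

    "Register once, typically from the class-side #initialize:"
    SessionManager default registerUserClassNamed: #MyClass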


>
> > On 4 Mar 2019, at 17:13, Ben Coman  wrote:
> >
> >
> >
> > On Mon, 4 Mar 2019 at 20:08, Norbert Hartl  wrote:
> >
> >
> > > Am 04.03.2019 um 03:46 schrieb Ben Coman :
> > >
> > > In relation to developing sample solutions for an Exercism exercise,
> the following observation was made about class initialization...
> > >
> > > > class is initialized on load - and not when you modify it - so this
> can be very confusing for users
> > >
> > > My first thought was to wonder if Quality Assistant could track
> whether a class initialize method had been run after it was modified,
> > > and display alerts.
> > >
> > > Alternatively, I wonder if a reasonable pattern would be to couple
> class-side lazy initialization
> > > with a pragma to reset a variable when the method is saved...
> > >
> > > MyClass class >> referenceData
> > > 
> > > ^ ReferenceData := ReferenceData ifNil: [ 'reference data' ]
> > >
> >
> > Isn’t the usual way to do that to register the class in the shutdown
> list and implement #shutdown: ?
> >
> > Took me a good minute to work out why I didn't understand you.
> > Sorry, I meant 
> >
> > So when 'reference data' is updated and the modified method is saved,
> > the variable gets lazy initialized *again* with the new definition.
> >
> > hope that is more clear,
> > cheers -ben
>
>
>

-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Weird FFI method redefinition in Pharo 6

2019-03-04 Thread Eliot Miranda
Hi Esteban,

On Mon, Mar 4, 2019 at 9:09 AM Esteban Lorenzano 
wrote:

> On 4 Mar 2019, at 16:02, Eliot Miranda  wrote:
>
> Hi Esteban,
>
> On Mon, Mar 4, 2019 at 4:31 AM Esteban Lorenzano 
> wrote:
>
>> Hi,
>>
>> Mmm… FFIMethodRegistry is resetting the method value to its old value
>> because you modified the method but you did not execute it (you simulated
>> it, I guess), and FFICalloutAPI updates the registry on method execution.
>> Then on image shutdown the methods are reset to their original value.
>> This is usually not an issue, but running with the simulator can cause
>> that problem.
>>
>
> It happens with or without the simulator.  If I redefine the method and
> save, expecting that on startup the new version will be executed, and
> startup with the normal VM, the method is reset.  Surely this is a bug and
> the method should be removed from the FFI registry when it is redefined
> with no FFI usage in it.  Right?
>
>
> Then indeed, that may be a bug… still, I need to check the best way to fix
> it (we need to introduce a check when compiling to reset the ffi method… I
> guess with a compiler plugin).
>

That seems right to me.  Only methods containing FFI calls should be in the
registry.  If one redefines such a method to remove its FFI call then it
should be removed from the registry.  The alternative would be for the
reset code to check that it is resetting a method containing an FFI call,
and if there is none, not reset it.  This might be simpler in the end.
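
Something along those lines could look roughly like this (the selector names on
the registry side are made up; the only part meant literally is the test itself -
a method that performs an FFI callout sends #ffiCall: or #ffiCall:module:, and
sent selectors show up among a CompiledMethod's literals):

    FFIMethodRegistry >> resetIfStillFFI: aCompiledMethod
        "Sketch: only restore the compiled-on-first-use form if the current
         definition still contains an FFI callout; a plain redefinition is left alone."
        | stillFFI |
        stillFFI := aCompiledMethod literals anySatisfy:
            [:lit | lit == #ffiCall: or: [lit == #ffiCall:module:]].
        stillFFI ifTrue: [self resetMethod: aCompiledMethod]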


>
> Esteban
>
>
>
>>
>> Esteban
>>
>> On 3 Mar 2019, at 23:03, Eliot Miranda  wrote:
>>
>> Hi All,
>>
>> I'm trying to debug remaining limitations in Image Segment support
>> in Spur using a case provided by Max Leske.  This is a pair of Pharo 6
>> images, one of which saves an image segment the other which loads it.  Both
>> use OSEnvironment>>getEnv: to access environment variables.
>>
>> The base definition of OSEnvironment>>getEnv: is
>>
>> !OSEnvironment methodsFor: 'accessing' stamp: 'auto 5/3/2016 10:31'!
>> getEnv: arg1
>> "This method calls the Standard C Library getenv() function. The
>> name of the argument (arg1) should fit decompiled version."
>>  ^ self ffiCall: #( String getenv (String arg1) ) module: LibC! !
>>
>> but as I'm using the VM Simulator to debug the image segment issues I
>> need to avoid the FFI, which isn't yet simulateable.  So I redefine
>> OSEnvironment>>getEnv: as follows, and then save and exit.
>>
>> !OSEnvironment methodsFor: 'accessing' stamp: 'EliotMiranda 02/27/2019
>> 17:09'!
>> getEnv: aByteStringOrByteArray
>> "This method calls the Standard C Library getenv() function. The
>> name of the argument (arg1) should fit decompiled version."
>> 
>> ec == #'bad argument' ifTrue:
>> [aByteStringOrByteArray isString ifFalse:
>> [^self getEnv: aByteStringOrByteArray asString]].
>> self primitiveFail! !
>>
>> But, and here's the weird bit, when I start up the image, the new
>> definition has been discarded and replaced by the original.  WTF?!?!  Why
>> is this happening?  How can I disable this?
>>
>> The only way that I've found I am able to save with a new version is by
>> doing a Save As... to a new name.  This is fine, but I find the current
>> behavior extremely unhelpful.  Is it a bug?  If it is intended, whats the
>> rationale?
>> _,,,^..^,,,_
>> best, Eliot
>>
>>
>>
>
> --
> _,,,^..^,,,_
> best, Eliot
>
>
>

-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Weird FFI method redefinition in Pharo 6

2019-03-04 Thread Eliot Miranda
Hi Esteban,

On Mon, Mar 4, 2019 at 4:31 AM Esteban Lorenzano 
wrote:

> Hi,
>
> Mmm… FFIMethodRegistry is resetting the method value to its old value
> because you modified the method but you did not execute it (you simulated
> it, I guess), and FFICalloutAPI updates the registry on method execution.
> Then on image shutdown the methods are reset to their original value.
> This is usually not an issue, but running with the simulator can cause
> that problem.
>

It happens with or without the simulator.  If I redefine the method and
save, expecting that on startup the new version will be executed, and
startup with the normal VM, the method is reset.  Surely this is a bug and
the method should be removed from the FFI registry when it is redefined
with no FFI usage in it.  Right?


>
> Esteban
>
> On 3 Mar 2019, at 23:03, Eliot Miranda  wrote:
>
> Hi All,
>
> I'm trying to debug remaining limitations in Image Segment support in
> Spur using a case provided by Max Leske.  This is a pair of Pharo 6 images,
> one of which saves an image segment the other which loads it.  Both use
> OSEnvironment>>getEnv: to access environment variables.
>
> The base definition of OSEnvironment>>getEnv: is
>
> !OSEnvironment methodsFor: 'accessing' stamp: 'auto 5/3/2016 10:31'!
> getEnv: arg1
> "This method calls the Standard C Library getenv() function. The
> name of the argument (arg1) should fit decompiled version."
>  ^ self ffiCall: #( String getenv (String arg1) ) module: LibC! !
>
> but as I'm using the VM Simulator to debug the image segment issues I need
> to avoid the FFI, which isn't yet simulateable.  So I redefine
> OSEnvironment>>getEnv: as follows, and then save and exit.
>
> !OSEnvironment methodsFor: 'accessing' stamp: 'EliotMiranda 02/27/2019
> 17:09'!
> getEnv: aByteStringOrByteArray
> "This method calls the Standard C Library getenv() function. The
> name of the argument (arg1) should fit decompiled version."
> 
> ec == #'bad argument' ifTrue:
> [aByteStringOrByteArray isString ifFalse:
> [^self getEnv: aByteStringOrByteArray asString]].
> self primitiveFail! !
>
> But, and here's the weird bit, when I start up the image, the new
> definition has been discarded and replaced by the original.  WTF?!?!  Why
> is this happening?  How can I disable this?
>
> The only way that I've found I am able to save with a new version is by
> doing a Save As... to a new name.  This is fine, but I find the current
> behavior extremely unhelpful.  Is it a bug?  If it is intended, whats the
> rationale?
> _,,,^..^,,,_
> best, Eliot
>
>
>

-- 
_,,,^..^,,,_
best, Eliot


[Pharo-dev] Weird FFI method redefinition in Pharo 6

2019-03-03 Thread Eliot Miranda
Hi All,

I'm trying to debug remaining limitations in Image Segment support in
Spur using a case provided by Max Leske.  This is a pair of Pharo 6 images,
one of which saves an image segment the other which loads it.  Both use
OSEnvironment>>getEnv: to access environment variables.

The base definition of OSEnvironment>>getEnv: is

!OSEnvironment methodsFor: 'accessing' stamp: 'auto 5/3/2016 10:31'!
getEnv: arg1
"This method calls the Standard C Library getenv() function. The
name of the argument (arg1) should fit decompiled version."
 ^ self ffiCall: #( String getenv (String arg1) ) module: LibC! !

but as I'm using the VM Simulator to debug the image segment issues I need
to avoid the FFI, which isn't yet simulateable.  So I redefine
OSEnvironment>>getEnv: as follows, and then save and exit.

!OSEnvironment methodsFor: 'accessing' stamp: 'EliotMiranda 02/27/2019
17:09'!
getEnv: aByteStringOrByteArray
"This method calls the Standard C Library getenv() function. The
name of the argument (arg1) should fit decompiled version."

ec == #'bad argument' ifTrue:
[aByteStringOrByteArray isString ifFalse:
[^self getEnv: aByteStringOrByteArray asString]].
self primitiveFail! !

But, and here's the weird bit, when I start up the image, the new
definition has been discarded and replaced by the original.  WTF?!?!  Why
is this happening?  How can I disable this?

The only way that I've found I am able to save with a new version is by
doing a Save As... to a new name.  This is fine, but I find the current
behavior extremely unhelpful.  Is it a bug?  If it is intended, whats the
rationale?
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] The HiDPI Issue

2019-02-18 Thread Eliot Miranda
Hi Eric,

On Fri, Feb 15, 2019 at 8:53 AM Eric Gade  wrote:

> Hello,
>
> I know that others have posted about this before but I wanted to get the
> current status.
>
> I've recently had to buy a new laptop that came with a HiDPI display.
> Generally (especially on Linux systems) this makes Pharo unusable. Though
> there are font size increase and scaling options in the Pharo system
> settings, these do not work as a solution -- buttons are still tiny, there
> is inconsistent scaling behavior across morphic, etc. The overall problem
> can be described as: in Pharo, one pixel equals one "point," and so the
> interface is incredibly small on these HiDPI screens (3k etc).
>
> These HiDPI screens are becoming more common, both as laptop and as
> external displays. Their main advantage is that they can render text very
> crisply. In the HN post announcing the release of P7, there were one or two
> complaints about this issue. It does make it hard to demonstrate to others
> (as I do often) the power of developing in Pharo.
>
> Here are some questions I have about this issue:
> 1) What is the current state of affairs in dealing with this issue, if any?
> 2) Would this require VM changes (I assume it would)? If so, what might
> those entail?
> 3) If this does require VM changes, I assume the Squeak people would want
> in on it?
> 4) Is the current plan to wait for Bloc to resolve these issues and/or
> would switching to Bloc resolve these issues at all anyway?
> 5) Related -- where can one start to learn about current VM architecture
> and development practices?
>

In the CONTRIBUTING.md and README.md and the HowToBuild files in each
build.??? directory in the repository (
https://github.com/OpenSmalltalk/opensmalltalk-vm.git).  On the
opensmalltalk-vm mailing list (
http://lists.squeakfoundation.org/mailman/listinfo/vm-dev).  In several
blog posts and papers on the VM.


>
> That said I'm not here to just bellyache. While I don't have any VM
> experience, I'm willing to jump in and try to work on it if someone can
> point me in the right direction. Or perhaps this is too specialized a
> task...
>

No it is not :-).  It is a learning task, but VM development is performed
by humans for humans ;-)


>
> Thanks
>
> --
> Eric
>


-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] [squeak-dev] Squeak and Tonel

2019-02-17 Thread Eliot Miranda
On Sun, Feb 17, 2019 at 1:00 PM ducasse  wrote:

> Hi eliot
>
> What is common in your thread is that we always look like the super
> emotional, or bad guys.
> You never ask yourselves why Esteban left the VM mailing-list.
>

What is common is that I discuss technical matters (difficulties in using
Tonel/git, architectural issues with developing the VM) and organizational
issues (Esteban's unhelpful response regarding an option in Tonel) and you
immediately personalize, do not bring up a single technical issue, and then
make grand pronouncements like "this is the last email I'll send you".
Laughable and sad.


>
> And I do not see why you cannot fork Tonel to produce your own version.
> Git is about distributed projects. People do that all the time. You can
> take Tonel and hack it to death
> without having your hacks pushed back into Pharo and this is perfectly ok
> for us.
> So Esteban has the right to say no and you have the right to hack your own
> version.
>
> If you want to have some support for porting Scorch/Sista to github and
> loading you should ask
> but asking it more nicely.
>
> About the simulation of the UI, if you do not talk to us we cannot pay
> attention.
> I do not see why we cannot keep a package with the image level UI for the
> simulator.
> Now again you are talking about collaboration but you do not talk to us
> and you do not
> listen to us so do not expect that people are willing to spend their free
> time to help.
>
>
> I had deep concerns that the pursuit of git integration would end up
> splitting the Pharo and Squeak communities and indeed this is now in
> progress.  I am utterly unmotivated by the lack of cooperation, the sheer
> arrogance and bullying of those that say "you will move to git/tonel or
> else”,
>
>
> This is fun to ask Pharo not to grow up to use modern technology to manage
> source code under the
> premisses that Squeak cannot or will not.
> This is so funny how you state it. I will not comment more than that.
> People around are adults and they will be able to judge by themselves.
>
> and considering leaving VMMaker altogether.
>
>
> If you leave VMMaker, let us know because we will port it to Github and
> make it work in Pharo.
> Esteban did it several times in the past. At least this will have the
> benefit to clarify the situation.
>
> I think that this is good that you tell us that you do not want to
> cooperate with us.
> This will have at least the impact to kick our ass and pay attention to
> us.
>
> The only things that are keeping me interested are Ron Teitelbaum's Terf
> and me pursuing a PhD on register allocation in the context of Sista/Scorch
> with Robert Hirshfeld's group at HPI.
>
> Here's the kind of crap people like Ducasse throw at me:
>
> "Eliot
>
> At the end of the day I will probably ask the two phds that should work on 
> language
> design to use truffle or pypy
> because let us face it we cannot work with the Pharo VM. Else we will
> simply have to fork it (because we do not want to have
> to jump over cuis, newspeak, squeak code constraints and I do not what)
> and it will be another drama is in the pico world
> of the “open” smalltalk VM. "
>
> I am so over this crap.
>
>
> This is not a crap. I can restate what I said. Because of responsibility
> of a research team and creating a future for students. You know I’m not
> alone, I have quite some responsibility towards PhD students
> of my group and yes I cannot make them fail by construction (or produce
> unadequate results)
> just by imposing them to use a system with far too many constraints.
>
> I will not ask them to work on the openSmalltalk vm directly because this
> is not their responsibilities
> to have to jump over newspeak ifdef and others.  A job of a PhD is to be
> able to brainstorm and create new ideas. Look at Stefan Marr (He is working
> on SOM or Truffle).
>
> Now you can think that I’m an asshole, arrogant, or I do not know pick
> what you want.
> ***I do not care***. I do not have Diva syndrome I have responsibilities
> towards people.
>
> I was discussing with some truffle experts and they told me that this is
> can be complex. I believe it.
> I would like to avoid pypy for obvious reasons. So what we will probably
> do for their PhD is to see if we can use a light version of opensmalltalk.
> I do not want to ask them to jump over many things that are totally useless
> for them all the time.
>
>
> PS: personnally I do not get why VMMaker would be the only project on
> earth that cannot be managed using git and Pharo.
> But for us the future is Git and we will continue to build on this
> infrastructure.
>
>
> Stef
>
>
> _,,,^..^,,,_
> best, Eliot
>
>
>

-- 
_,,,^..^,,,_
best, Eliot


[Pharo-dev] [Off Topic] Michel Bauwens on the history and immediate future of work

2019-01-25 Thread Eliot Miranda
Hi All,

I find this talk profound and key to making functional open source 
communities as we try and survive.

I’ve skipped an introduction in Portuguese and some slide control issues.
https://youtu.be/vDjazcMm-eE?t=4m40s

_,,,^..^,,,_ (phone)

Re: [Pharo-dev] DebugSession>>activePC:

2019-01-20 Thread Eliot Miranda
Hi Marcus,

On Fri, Jan 18, 2019 at 5:42 AM Marcus Denker 
wrote:

>
> > On 18 Jan 2019, at 14:26, ducasse  wrote:
> >
> > I simply love the dynamic rewriting this is just too cool. We should
> systematically use it.
> > I will continue to use it in any deprecation.
> >
>
> On my TODO is to make it stand-alone and provide is as a “compatibility
> transform”, too.
>
> So we can add it to methods that we want to keep for compatibility, but
> they will nevertheless transform the code automatically.
> (this then might be disabled in production to not transform)
>
> > Now I have a simple question (You can explain it to me over lunch one of
> these days).
> >
> > I do not get why RBAST would not be a good representation for the
> compiler?
> > I would like to know what is the difference.
> >
> I think it is a good one. I have not yet seen a reason why not. But
> remember, Roel left Squeak because his visitor pattern for the compiler was
> rejected as a dumb idea… so there are definitely different views on core
> questions.
>
> E.g. the RB AST is annotated and the whole things for sure uses a bit more
> memory than the compiler designed for a machine from 1978.
>
> > You mean that before going from BC to AST was difficult?
>
> You need to do the mapping somehow, the compiler needs to remember the BC
> offset in the code generation phase and the AST (somehow) needs to store
> that information (either in every node or some table).
>
> > How opal performs it? It does not use the source of the method to
> recreate the AST but he can do it from the BC?
> >
>
> It uses the IR (which I still am not 100% sure about, it came from the old
> “ClosureCompiler” Design and it turned out to be quite useful, for example
> for the mapping: every IR node retains the offset of the BC it creates,
> then the IR Nodes
> retain the AST node that created them.
>
> -> so we just do a query: “IRMethod, give me the IRInstruction that
> created BC offset X. then “IR, which AST node did create you? then the AST
> Node: what is your highlight interval in the source?
>
> The devil is in the detail as one IR can produce multiple byte code
> offsets (and byte codes) and one byte code might be created by two IR
> nodes, but it does seem to work with some tricks.
> Which I want to remove by improving the mapping and even the IR more…
> there is even the question: do we need the IR? could we not do it simpler?
>
> The IR was quite nice back when we tried to do things with byte code
> manipulation (Bytesurgeon), now it feels a bit of an overkill. But it
> simplifies e.g. the bc mapping.
>

I find Bytesurgeon functionality, specifically a bytecode dis/assembler
very useful, but wouldn't use it for the back end of the bytecode
compiler.  It adds overhead that has no benefit.  But I use my version,
MethodMassage, for lots of things:

- transporting compiled methods from one dialect to another, e.g. to do
in-image JIT compilation of a method from Pharo in Squeak.
- generating JIT test cases
- generating methods that can't be generated from Smalltalk source, e.g. an
accessor for a JavaScript implementation above Smalltalk where inst var 0
is the prototype slot and nil is unbound, and so in a loop one wants to
fetch the Nth inst var from a temporary initialized to self, and if the
value is non-nil return it, otherwise setting the temporary to
the prototype slot, hence walking up the prototype chain until an
initialized inst var is found.

I based mine around the messages that InstructionStream sends to the client
in the interpretFooInstructionFor: methods; a client that catches
doesNotUnderstand: then forms the basis of the disassembler.  Simple and
light-weight.
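
By way of illustration, a minimal sketch of such a client (the class name
InstructionLogger and its 'instructions' inst var are made up; MethodMassage
itself is richer).  InstructionStream decodes each bytecode into a message from
the instruction-client protocol (#pushReceiver, #pushConstant:,
#send:super:numArgs:, ...), so a doesNotUnderstand: handler can simply record
whatever arrives:

    InstructionLogger >> doesNotUnderstand: aMessage
        "Record the decoded instruction instead of acting on it."
        instructions add: aMessage selector -> aMessage arguments

    InstructionLogger >> instructionsFor: aCompiledMethod
        | scanner |
        instructions := OrderedCollection new.
        scanner := InstructionStream on: aCompiledMethod.
        [scanner atEnd] whileFalse: [scanner interpretNextInstructionFor: self].
        ^instructions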

Marcus

_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] DebugSession>>activePC:

2019-01-20 Thread Eliot Miranda
Hi Marcus,

On Fri, Jan 18, 2019 at 5:15 AM Marcus Denker via Pharo-dev <
pharo-dev@lists.pharo.org> wrote:

>
> > On 11 Jan 2019, at 20:28, Eliot Miranda  wrote:
> >
> > Hi Thomas,
> >
> >  forgive me, my first response was too terse.  Having thought about it
> in the shower it becomes clear :-)
> >
> >> On Jan 11, 2019, at 6:49 AM, Thomas Dupriez <
> tdupr...@ens-paris-saclay.fr> wrote:
> >>
> >> Hi,
> >>
> >> Yes, my question was just of the form: "Hey there's this method in
> DebugSession. What is it doing? What's the intention behind it? Does
> someone know?". There was no hidden agenda behind it.
> >>
> >> @Eliot
> >>
> >> After taking another look at this method, there's something I don't
> understand:
> >>
> >> activePC: aContext
> >> ^ (self isLatestContext: aContext)
> >>    ifTrue: [ interruptedContext pc ]
> >>    ifFalse: [ self previousPC: aContext ]
> >>
> >> isLatestContext: checks whether its argument is the suspended context
> (the context at the top of the stack of the interrupted process). And if
> that's true, activePC: returns the pc of **interruptedContext**, not of the
> suspended context. These two contexts are different when the debugger opens
> on an exception, so this method is potentially returning a pc for another
> context than its argument...
> >>
> >> Another question I have to improve the comment for this method is:
> what's the high-level meaning of this concept of "activePC". You gave the
> formal definition, but what's the point of defining this so to speak? What
> makes this concept interesting enough to warrant defining it and giving it
> a name?
> >
> > There are two “modes” where a pc is mapped to a source range.  One is
> > when stepping a context in the debugger (the context is on top and is
> > actively executing bytecodes).  Here the debugger stops immediately before
> > a send or assignment or return, so that for sends we can do into or over,
> > or for assignments or returns check stack top to see what will be assigned
> > or returned.  In this mode we want the pc of the send, assign or return to
> > map to the source range for the send, or the expression being assigned or
> > returned.  Since this is the “common case”, and since this is the only
> > choice that makes sense for assignments and returns, the bytecode
> > compiler constructs its pc to source range map in terms of the pc of the
> > first byte of the send, assign or return bytecode.
> >
> > The second “mode” is when selecting a context below the top context.
> > The pc for any context below the top context will be the return pc for a
> > send, because the send has already happened.  The compiler could choose to
> > map this pc to the send, but it would not match what works for the common
> > case. Another choice would appear to be to have two map entries, one for the
> > send and one for the return pc, both mapping to the source range.  But this
> > wouldn’t work because the result of a send might be assigned or returned
> > and so there is a potential conflict.  Instead the reasonable solution is
> > to select the previous pc for contexts below the top context, which will
> > be the pc for the start of the send bytecode.
> >
>
>
> I checked with Thomas
>
> -> for source mapping, we use the API of the method map. The map does the
> “get the mapping for the instruction before”, it just needs to be told that
> we ask the range for an active context:
>
> #rangeForPC:contextIsActiveContext:
>
> it is called
>
> ^aContext debuggerMap
> rangeForPC: aContext pc
> contextIsActiveContext: (self isLatestContext: aContext) ]
>
> So the logic was moved from the debugger to the Map. (I think this is even
> your design?), and thus the logic inside the debugger is not needed
> anymore.
>

"Design" is giving my code a little too much respect.  I was desperately
trying to get something to work to be able to deploy Cog with the new
closure model.  I happily admit that DebuggerMethodMap in Squeak is ugly
code.  It had to be extended recently to handle full blocks. But it would
be great to rewrite it.

I dream of a harmonisation of Squeak/Pharo/Cuis execution classes such that
we have the same Context, CompiledCode, CompiledBlock, CompiledMethod,
debuggerMap and BytecodeEncoder (which is effectively the back end of the
compiler that generates bytecode, and the interface to the debugger when
bytecode is analysed or executed in the debugger), which would make my life
easier maintaining the VM and execution classes, especially as we introduce
Sista.  

Re: [Pharo-dev] Better management of encoding of environment variables

2019-01-18 Thread Eliot Miranda
Hi Guille,

> On Jan 18, 2019, at 6:04 AM, Guillermo Polito  
> wrote:
> 
>> On Fri, Jan 18, 2019 at 2:46 PM Ben Coman  wrote:
>> 
>>> On Fri, 18 Jan 2019 at 21:39, Sven Van Caekenberghe  wrote:
>>> 
>>> > On 18 Jan 2019, at 14:23, Guillermo Polito  
>>> > wrote:
>>> > 
>>> > 
>>> > I think that will just overcomplicate things. Right now, all Strings in 
>>> > Pharo are unicode strings.
>> 
>> Cool. I didn't realise that.  But to be pedantic, which unicode encoding? 
>> Should I presume from Sven's "UTF-8 encoding step" comment below 
>> and the WideString class comment  "This class represents the array of 32 bit 
>> wide characters"
>> that the WideString encoding is UTF-32?  So should its comment be updated to 
>> advise that?
> 
> None :D
> 
> That's the funny thing, they are not encoded.
> 
> Actually, you should see Strings as collections of Characters, and Characters 
> defined in terms of their abstract code points.
> ByteStrings are an optimized (just more compact) version that stores 
> codepoints that fit in a byte.

And Spur supports 16-bit strings too, which would be versions that store code 
points that fit in doublebytes.
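
A quick workspace sketch of the point (expected printouts in the comments;
#utf8Encoded is the Zinc convenience on String):

    'abc' class.          "ByteString - every code point fits in one byte"
    'été' class.          "ByteString - $é is code point 233, still < 256"
    'Привет' class.       "WideString - code points above 255"
    $é asInteger.         "233, the abstract Unicode code point"
    'été' utf8Encoded.    "a ByteArray - encoding happens only at the boundary"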

>> cheers -ben
>> 
>>> Characters are represented with their corresponding unicode codepoint.
>>> > If all characters in a string have codepoints < 256 then they are just 
>>> > stored in a bytestring. Otherwise they are WideStrings.
>>> > 
>>> > I think assuming a single representation for strings, and then encode 
>>> > when interacting with external apps/APIs is MUCH simpler.
>>> 
>>> Absolutely !
>>> 
>>> (and yes I know that for outgoing FFI calls that might mean a UTF-8 
>>> encoding step, so be it).
> 
> 
> -- 
>
> Guille Polito
> Research Engineer
> Centre de Recherche en Informatique, Signal et Automatique de Lille
> CRIStAL - UMR 9189
> French National Center for Scientific Research - http://www.cnrs.fr
> 
> Web: http://guillep.github.io
> Phone: +33 06 52 70 66 13


Re: [Pharo-dev] Better management of encoding of environment variables

2019-01-18 Thread Eliot Miranda


> On Jan 18, 2019, at 2:04 AM, Guillermo Polito  
> wrote:
[snip]
> 
> Well, personally I would like that getenv/setenv and getcwd setcwd support 
> are not in a plugin but as a basic service provided by the vm.

+1000

> Cheers,
> Guille



Re: [Pharo-dev] Purpose of VM [was: Re: Better management of encoding of environment variables]

2019-01-17 Thread Eliot Miranda
On Thu, Jan 17, 2019 at 8:02 AM Sven Van Caekenberghe  wrote:

>
> > On 17 Jan 2019, at 02:00, Martin McClure  wrote:
> >
> > On 1/16/19 1:24 AM, Nicolas Cellier wrote:
> >> IMO, windows VM (and plugins) should do the UCS2 -> UTF8 conversion
> because the purpose of a VM is to provide an OS independant façade.
> >
> > I have not looked at this particular problem in detail, so I have no
> opinion on whether the VM is the right place for this particular
> functionality.
> >
> > However, I feel that in general trying to put everything that might be
> OS-specific into the VM is not the best design. To me, the purpose of a
> Smalltalk VM is to present an object-oriented abstraction of the underlying
> machine.
> >
> > Thinking that way leads me to believe that the following are examples of
> things that are good for a VM to do:
> >
> > * Memory is garbage-collected objects, not bytes.
> >
> > * Instructions are bytecodes, not underlying machine instructions.
> >
> > This works well to hide the differences between machine instruction
> sets, memory access, and other low-level things. However, no Smalltalk
> implementation that I know of has been able to use the VM to iron out all
> differences between different OSes.
> >
> > I do believe that it is a good idea to have cleanly-designed layers of
> the system, and that there should be an OS-independent layer and an
> OS-dependent layer with clean separation. But I think it might be better to
> put most of the OS-dependent layer in the image rather than in the VM. For
> one thing, the image is easier to change if there is a bug, or a lacking
> feature, or you're trying to support a new OS.
> >
> > And if it's in the image you get to do the programming in Smalltalk
> rather than C or Slang, which is more fun for most of us. And, let's face
> it, fun is an important metric in an open-source project -- things that are
> fun are much more likely to get done.
>
> +100
>

The VM *is* developed in Smalltalk
https://www.researchgate.net/publication/328509577_Two_Decades_of_Smalltalk_VM_Development_Live_VM_Development_through_Simulation_Tools


> > Regards,
> >
> > -Martin
> >
> >
>
>
>

-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Some clarifications (was DebugSession>>activePC:)

2019-01-16 Thread Eliot Miranda
nue to work from releases to release.  From Pharo 6 to
Pharo 7 that did not happen.

Pharo will change because Pharo is agile and because many things should be
> improved.
>

and because it has a high-quality VM beneath it that has improved
performance exponentially in the move from interpreter, stack interpreter,
cog v1 and spur, and should continue to do so through Sista.  We are all
working hard to make things better.

We pay real attention about backwards compatibility. Much more than people
> think.
> Because we have many external projects and libraries that we support.
> Now if Pharo is used to build the VM then we will have to find a way to
> pay attention.
> And we will reinvest in building a process that checks this.
>

Good; thank you.


>
> Our problem is that integration time cannot take hours and right now
> validating Pharo is
> a bit too long. We will work on it.
>
> We strongly advocate to invest in tests. Tests are a good mechanisms to
> support evolution.
> If you have tests we can automatically rewrite deprecated methods and we
> will use this more often.
>
> We finally start to have:
> - a better compiler and not an ancient one which reported error as
> Morph.
> - a better architecture, more modular
> - a real bootstrap (we should improve it and build tools to
> support it - we are working on it)
> - strong libraries (File, Stream, HTTP) more documented and tested
> - better tools (Iceberg and Calypso are definitively steps in the
> right direction)
> And we will continue to improve.
> We will iterate on all these to make Pharo even better.
> We are writing more tests to support those changes.
>

Glad to hear it.  And I'm very much aware of this work and support it
wholeheartedly.  And in the VM we are doing similar things.


> I repeat it: I think that we are doing a pretty good job about the quality
> that we deliver.
>

I agree.


> Stef
>

Thank you.
Eliot


>
>
> > On 11 Jan 2019, at 18:54, Eliot Miranda  wrote:
> >
> > Hi Stef,
> >
> >> On Jan 10, 2019, at 7:59 AM, ducasse  wrote:
> >>
> >> Eliot I would like also two points to this.
> >>
> >> - First we asked thomas to write tests about the debugger model and you
> see if they would be tests about methods we could understand
> >> that they are used and control what they do. So we should all thank
> thomas for his energy in this not so easy task.
> >>
> >> - Second it would be nice if you could refrain to be systematically
> negative about what we are doing. I think that our development process
> >> is much better than many others :) It is not perfect because this does
> not exist.
> >> I think that we are doing a great job make Smalltalk cool. And yes it
> may happen that one untested, undocumented method
> >> get lost. I think that we are doing pretty good given the resources we
> have.
> >
> > Even more serious an issue for the Pharo community than a development
> > process which fails to support the needs of users is a defensive attitude
> that does not want to discuss serious issues maturely. I bring up the
> stability and backward-portability issue because it is *important*; it has
> affected Clément’s ability to deliver Sista and my and feenk’s efforts to
> support VM development on Pharo.  If your response to my trying to discuss
> seriously and objectively a problem that needs discussion is always to say
> “please don’t be negative” I have even less confidence that Pharo can be a
> realistic platform for my work and the work of others.
> >
> >
> >> Stef
> >>
> >>> On 10 Jan 2019, at 15:11, Eliot Miranda 
> wrote:
> >>>
> >>> Hi Thomas,
> >>>
> >>>> On Jan 10, 2019, at 2:24 AM, Thomas Dupriez via Pharo-dev <
> pharo-dev@lists.pharo.org> wrote:
> >>>>
> >>>> 
> >>>
> >>> in a stack of contexts the active pc is different for the top
> context.  For other than the top context, a context’s pc will be pointing
> after the send that created the context above it, so to find the pc of the
> send one finds the previous pc.  For the top context its pc is the active
> pc.
> >>>
> >>> Typically the debugger is invoked in two different modes, interruption
> or exception. When interrupted, a process is stopped at the next suspension
> point (method entry or backward branch) and the top context in the process
> is the context to be displayed in the debugger.  When an exception occurs
> the exception search machinery will find the signaling context, the context
> that raised the exception, which w

Re: [Pharo-dev] external#senders (was Re: DebugSession>>activePC:)

2019-01-11 Thread Eliot Miranda
Sven,

> On Jan 11, 2019, at 11:40 AM, Sven Van Caekenberghe  wrote:
> 
> 
> 
>> On 11 Jan 2019, at 19:32, Eliot Miranda  wrote:
>> 
>> Sven,
>> 
>>> On Jan 11, 2019, at 10:03 AM, Sven Van Caekenberghe  wrote:
>>> 
>>> Eliot, 
>>> 
>>> I can assure you that multiple core Pharo people had the same reaction, 
>>> don't turn this (again) in a play on one person's emotions (apart from the 
>>> fact that those are present in all living creatures).
>> 
>> First you assume a motive I don’t have.  I am not trying to provoke anyone.  
> 
> Clearly you are, given the reactions. 
> 
> Like Doru said, you did not just answer the question, your last two 
> paragraphs contained lots of provocation.

You’re entitled to your opinion.  But since the intent to provoke or not would 
be in my head, yours is a projection, not fact.

> 
>> Second, I think emotions are the results of mammalian brains, perhaps bird 
>> and fish brains, and certainly not present in amoeba.
> 
> First an IS reference, now this: yes, you are a man of ratio and reason only,
> devoid of human emotions like the rest of us. Good for you.
> 
> Hundreds of libraries and frameworks were moved between Pharo 6.x and 7, with 
> minimal changes.
> 
> We are an active, living community where many, many people contribute, are 
> allowed to make mistakes, to question old code and old rules, to learn, to 
> make things better.
> 
>>> Sven
>>> 
>>>> On 11 Jan 2019, at 18:57, Eliot Miranda  wrote:
>>>> 
>>>> Stef,
>>>> 
>>>>> On Jan 10, 2019, at 11:24 PM, ducasse  wrote:
>>>>> 
>>>>> Ben
>>>>> 
>>>>> Since you asked I reply. 
>>>>> For calypso we try and sometimes fail and retry. But we do not rant. 
>>>>> 
>>>>> Now the solution is also to have tests and this is what we are doing. 
>>>>> We want more tests and we are working on having more tests.
>>>>> 
>>>>> The solution is also to have ***positive* communication. 
>>>>> 
>>>>> There is no point to piss on our process because
>>>>>   - we were the first to push package usage back in Squeak 3.9
>>>>>   - increase enormously the number of tests
>>>>>   - have CI to run the tests and use PR. 
>>>>>   and it is working!
>>>>> 
>>>>> So before bashing us I would expect a bit of respect that it is due to 
>>>>> our track record. 
>>>> 
>>>> Again you fail to respond to an attempt to discuss real issues and instead 
>>>> take it as a personal attack and respond emotionally.  Ben is /not/
>>>> bashing your process in an attempt to damage Pharo.  As an academic 
>>>> researcher you should be able to respond objectively to criticism.  This 
>>>> frequent emotionality doesn’t help you or the community.
>>>> 
>>>>> 
>>>>> Finally it takes 1 min to enter a bug entry and now you cannot even 
>>>>> complain that you have to log 
>>>>> because it is on github. (BTW nobdoy is asking the amount of time it 
>>>>> takes US to migrate and go over the bug entry -
>>>>> again I ask for respect for the people doing this tedious, boring but 
>>>>> important job). 
>>>>> 
>>>>> When VMMaker will be available in Pharo we will be able to automate 
>>>>> things not before. 
>>>>> Please remember also that Igor paid by us spent a lot of time making sure 
>>>>> that 
>>>>> everybody and in particular our jenkins server could automatically build 
>>>>> vm.
>>>>> 
>>>>> So we believe in agility, communication and automation. 
>>>>> 
>>>>> Stef
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>>> On 11 Jan 2019, at 05:54, Ben Coman  wrote:
>>>>>> 
>>>>>> On Thu, 10 Jan 2019 at 23:51, ducasse via Pharo-dev 
>>>>>>  wrote:
>>>>>> Thomas can you integrate such comments in the debugger class comment
>>>>>> 
>>>>>> @Eliot thanks for the explanation. 
>>>>>> About the method removed, could you please react less negatively? It 
>>>>>> would be nice. 
>>>>>> I cannot believe that you the guy that know the VM would get stopped to 
&

Re: [Pharo-dev] DebugSession>>activePC:

2019-01-11 Thread Eliot Miranda
Hi Thomas,

  forgive me, my first response was too terse.  Having thought about it in the 
shower it becomes clear :-)

> On Jan 11, 2019, at 6:49 AM, Thomas Dupriez  
> wrote:
> 
> Hi,
> 
> Yes, my question was just of the form: "Hey there's this method in 
> DebugSession. What is it doing? What's the intention behind it? Does someone 
> know?". There was no hidden agenda behind it.
> 
> @Eliot
> 
> After taking another look at this method, there's something I don't 
> understand:
> 
> activePC: aContext
> ^ (self isLatestContext: aContext)
> ifTrue: [ interruptedContext pc ]
> ifFalse: [ self previousPC: aContext ]
> 
> isLatestContext: checks whether its argument is the suspended context (the 
> context at the top of the stack of the interrupted process). And if that's 
> true, activePC: returns the pc of **interruptedContext**, not of the 
> suspended context. These two contexts are different when the debugger opens 
> on an exception, so this method is potentially returning a pc for another 
> context than its argument...
> 
> Another question I have to improve the comment for this method is: what's the 
> high-level meaning of this concept of "activePC". You gave the formal 
> definition, but what's the point of defining this so to speak? What makes 
> this concept interesting enough to warrant defining it and giving it a name?

There are two “modes” where a pc is mapped to a source range.  One is when 
stepping a context in the debugger (the context is on top and is actively 
executing bytecodes).  Here the debugger stops immediately before a send or 
assignment or return, so that for sends we can step into or over, or for 
assignments or returns check stack top to see what will be assigned or 
returned.  In this mode we want the pc of the send, assign or return to map to 
the source range for the send, or the expression being assigned or returned.  
Since this is the “common case”, and since this is the only choice that makes 
sense for assignments and returns, the bytecode compiler constructs its pc 
to source range map in terms of the pc of the first byte of the send, assign or 
return bytecode.

The second “mode” is when selecting a context below the top context.  The pc 
for any context below the top context will be the return pc for a send, because 
the send has already happened.  The compiler could choose to map this pc to the 
send, but it would not match what works for the common case. Another choice 
would appear to be to have two map entries, one for the send and one for the 
return pc, both mapping to the source range.  But this wouldn’t work because 
the result of a send might be assigned or returned and so there is a potential 
conflict.  Instead the reasonable solution is to select the previous pc for 
contexts below the top context, which will be the pc for the start of the 
send bytecode.
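
To make that concrete, here is a rough sketch (illustrative only, not the actual
Pharo code; #previousPC: and #sourceNodeForPC: stand in for whatever helpers the
debugger and compiler really provide) of how a debugger can use the active pc to
find what to highlight:

sourceNodeToHighlightFor: aContext
	"Use the context's own pc only when it is the top, actively executing context;
	 for a context below the top, step back to the pc of the send bytecode first."
	| activePC |
	activePC := (self isLatestContext: aContext)
		ifTrue: [ aContext pc ]
		ifFalse: [ self previousPC: aContext ].
	^ aContext method sourceNodeForPC: activePC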

HTH

> 
> Cheers,
> Thomas
> 
>> On 11/01/2019 13:53, Tudor Girba wrote:
>> Hi,
>> 
>> @Eliot: Thanks for the clarifying answer.
>> 
>> I believe you might have jumped to conclusion about the intention of the 
>> question. Thomas asked a legitimate question. Without users of a method it 
>> is hard to understand its use. It does not necessarily imply that the 
>> intention is to remove it, but it does show that someone wants to understand.
>> 
>> As far as I know, Thomas actually wants to write a test to cover that usage. 
>> I am sure that you appreciate and encourage that :).
>> 
>> @Thomas: Thanks for this effort!
>> 
>> Cheers,
>> Doru
>> 
>> 
>>> On Jan 10, 2019, at 3:11 PM, Eliot Miranda  wrote:
>>> 
>>> Hi Thomas,
>>> 
>>>> On Jan 10, 2019, at 2:24 AM, Thomas Dupriez via Pharo-dev 
>>>>  wrote:
>>>> 
>>>> 
>>> in a stack of contexts the active pc is different for the top context.  For 
>>> other than the top context, a context’s pc will be pointing after the send 
>>> that created the context above it, so to find the pc of the send one finds 
>>> the previous pc.  For the top context its pc is the active pc.
>>> 
>>> Typically the debugger is invoked in two different modes, interruption or 
>>> exception. When interrupted, a process is stopped at the next suspension 
>>> point (method entry or backward branch) and the top context in the process 
>>> is the context to be displayed in the debugger.  When an exception occurs 
>>> the exception search machinery will find the signaling context, the context 
>>> that raised the exception, which will be below the search machinery and the 
>>> debugger invocation 

Re: [Pharo-dev] external#senders (was Re: DebugSession>>activePC:)

2019-01-11 Thread Eliot Miranda
Sven,

> On Jan 11, 2019, at 10:03 AM, Sven Van Caekenberghe  wrote:
> 
> Eliot, 
> 
> I can assure you that multiple core Pharo people had the same reaction, don't 
> turn this (again) in a play on one person's emotions (apart from the fact 
> that those are present in all living creatures).

First you assume a motive I don’t have.  I am not trying to provoke anyone.  
Second, I think emotions are the results of mammalian brains, perhaps bird and 
fish brains, and certainly not present in amoeba.

> 
> Sven
> 
>> On 11 Jan 2019, at 18:57, Eliot Miranda  wrote:
>> 
>> Stef,
>> 
>>> On Jan 10, 2019, at 11:24 PM, ducasse  wrote:
>>> 
>>> Ben
>>> 
>>> Since you asked I reply. 
>>> For calypso we try and sometimes fail and retry. But we do not rant. 
>>> 
>>> Now the solution is also to have tests and this is what we are doing. 
>>> We want more tests and we are working on having more tests.
>>> 
>>> The solution is also to have ***positive* communication. 
>>> 
>>> There is no point to piss on our process because
>>>- we were the first to push package usage back in Squeak 3.9
>>>- increase enormously the number of tests
>>>- have CI to run the tests and use PR. 
>>>and it is working!
>>> 
>>> So before bashing us I would expect a bit of respect that it is due to our 
>>> track record. 
>> 
>> Again you fail to respond to an attempt to discuss real issues and instead 
>> take it as a personal attack and respond emotionally.  Ben is /not/ 
>> bashing your process in an attempt to damage Pharo.  As an academic 
>> researcher you should be able to respond objectively to criticism.  This 
>> frequent emotionality doesn’t help you or the community.
>> 
>>> 
>>> Finally it takes 1 min to enter a bug entry and now you cannot even 
>>> complain that you have to log 
>>> because it is on github. (BTW nobody is asking the amount of time it takes 
>>> US to migrate and go over the bug entry -
>>> again I ask for respect for the people doing this tedious, boring but 
>>> important job). 
>>> 
>>> When VMMaker will be available in Pharo we will be able to automate things 
>>> not before. 
>>> Please remember also that Igor paid by us spent a lot of time making sure 
>>> that 
>>> everybody and in particular our jenkins server could automatically build vm.
>>> 
>>> So we believe in agility, communication and automation. 
>>> 
>>> Stef
>>> 
>>> 
>>> 
>>> 
>>>> On 11 Jan 2019, at 05:54, Ben Coman  wrote:
>>>> 
>>>> On Thu, 10 Jan 2019 at 23:51, ducasse via Pharo-dev 
>>>>  wrote:
>>>> Thomas can you integrate such comments in the debugger class comment
>>>> 
>>>> @Eliot thanks for the explanation. 
>>>> About the method removed, could you please react less negatively? It would 
>>>> be nice. 
>>>> I cannot believe that you the guy that know the VM would get stopped to 
>>>> open a bug entry telling us that isOptimizedBlock
>>>> has been removed and it should not. How much time opening a bug entry can 
>>>> take? Under 1 min I guess. 
>>>> 
>>>> I'd guess it takes more than 1 minute overall - a few minutes to shift 
>>>> context to open an old Pharo image 
>>>> and a few more open the original method to copy it to Pharo and repeat 
>>>> that for the next ten missing methods,
>>>> and then having fixed it for yourself, rather than just log a job for 
>>>> someone else, having fixed your own 
>>>> you now repair your pharo repo with Iceberg and submit a commit, and you're 
>>>> now off-task by half an hour.  
>>>> Not a great deal of time if that was what you scheduled to work on, but 
>>>> frustrating when dragged off task.
>>>> 
>>>> The thing is, when someone is frustrated, without sharing there is no 
>>>> chance to resolve anything, 
>>>> so the frustration doubles and builds up, and unconsciously creeps in 
>>>> relationships and/or leads to a breakdown. 
>>>> Putting it out in the world relieves that pressure and provides the 
>>>> possibility that someone might 
>>>> find a middle path.  As always, it is not what is said but how it is said, 
>>>> and personally that seemed okay to me.
>>>> 
>>>>

Re: [Pharo-dev] DebugSession>>activePC:

2019-01-11 Thread Eliot Miranda
Hi Thomas,


> On Jan 11, 2019, at 6:49 AM, Thomas Dupriez  
> wrote:
> 
> Hi,
> 
> Yes, my question was just of the form: "Hey there's this method in 
> DebugSession. What is it doing? What's the intention behind it? Does someone 
> know?". There was no hidden agenda behind it.
> 
> @Eliot
> 
> After taking another look at this method, there's something I don't 
> understand:
> 
> activePC: aContext
> ^ (self isLatestContext: aContext)
> ifTrue: [ interruptedContext pc ]
> ifFalse: [ self previousPC: aContext ]
> 
> isLatestContext: checks whether its argument is the suspended context (the 
> context at the top of the stack of the interrupted process). And if that's 
> true, activePC: returns the pc of **interruptedContext**, not of the 
> suspended context. These two contexts are different when the debugger opens 
> on an exception, so this method is potentially returning a pc for another 
> context than its argument...

Ugh, I had missed that.  Thanks for pointing that out.  It does look like a 
bug.  The Squeak code is very different (much less elegant code written by me 
in DebuggerMethodMap) but that code does use only one context.

So I expect the code should read
activePC: aContext
^ (self isLatestContext: aContext)
ifTrue: [ aContext pc ]
ifFalse: [ self previousPC: aContext ]

> 
> Another question I have to improve the comment for this method is: what's the 
> high-level meaning of this concept of "activePC". You gave the formal 
> definition, but what's the point of defining this so to speak? What makes 
> this concept interesting enough to warrant defining it and giving it a name?

Because the active pc is used to derive display feedback in the debugger.  In 
particular it is used to derive source ranges for contexts.

> 
> Cheers,
> Thomas
> 
>> On 11/01/2019 13:53, Tudor Girba wrote:
>> Hi,
>> 
>> @Eliot: Thanks for the clarifying answer.
>> 
>> I believe you might have jumped to conclusion about the intention of the 
>> question. Thomas asked a legitimate question. Without users of a method it 
>> is hard to understand its use. It does not necessarily imply that the 
>> intention is to remove it, but it does show that someone wants to understand.
>> 
>> As far as I know, Thomas actually wants to write a test to cover that usage. 
>> I am sure that you appreciate and encourage that :).
>> 
>> @Thomas: Thanks for this effort!
>> 
>> Cheers,
>> Doru
>> 
>> 
>>> On Jan 10, 2019, at 3:11 PM, Eliot Miranda  wrote:
>>> 
>>> Hi Thomas,
>>> 
>>>> On Jan 10, 2019, at 2:24 AM, Thomas Dupriez via Pharo-dev 
>>>>  wrote:
>>>> 
>>>> 
>>> in a stack of contexts the active pc is different for the top context.  For 
>>> other than the top context, a context’s pc will be pointing after the send 
>>> that created the context above it, so to find the pc of the send one finds 
>>> the previous pc.  For the top context its pc is the active pc.
>>> 
>>> Typically the debugger is invoked in two different modes, interruption or 
>>> exception. When interrupted, a process is stopped at the next suspension 
>>> point (method entry or backward branch) and the top context in the process 
>>> is the context to be displayed in the debugger.  When an exception occurs 
>>> the exception search machinery will find the signaling context, the context 
>>> that raised the exception, which will be below the search machinery and the 
>>> debugger invocation above that. The active pc of the signaling context will 
>>> be the pc for the send of #signal et al.
>>> 
>>> So the distinction is important and the utility method is probably useful.
>>> 
>>> Do you want to remove the method simply because there are no senders in the 
>>> image?
>>> 
>>> If so, this is indicative of a serious problem with the Pharo development 
>>> process.  In the summer I ported VMMaker.oscog to Pharo 6.  Now as feenk 
>>> try and build a VMMaker.oscog image on Pharo 7, the system is broken, in 
>>> part because of deprecations and in part because useful methods 
>>> (isOptimisedBlock (isOptimizedBlock?) in the Opal compiler) have been 
>>> removed.
>>> 
>>> Just because a method is not in the image does not imply it is not in use.  
>>> It simply means that it is not in use in the base image.  As the system 
>>> gets modularised this issue will only increase.  There are lots of 
>>> collection methods that exist as a library that are not used in the base 
>>> image and removing them would clearly damage the library for users.  This 
>>> is the case for lots of so-called system code.  There are users out there, 
>>> like those of us in the vm team, who rely on such system code, and it is 
>>> extremely unsettling and frustrating to have that system code change all 
>>> the time.  If Pharo is to be a useful platform to the vm team it has to be 
>>> more stable.
>> --
>> www.feenk.com
>> 
>> “The smaller and more pervasive the hardware becomes, the more physical the 
>> software gets."
>> 
>> 
> 


Re: [Pharo-dev] external#senders (was Re: DebugSession>>activePC:)

2019-01-11 Thread Eliot Miranda
Esteban,


> On Jan 10, 2019, at 11:45 PM, Esteban Lorenzano  wrote:
> 
>> On 11 Jan 2019, at 08:24, ducasse  wrote:
>> 
>> Ben
>> 
>> Since you asked I reply. 
>> For calypso we try and sometimes fail and retry. But we do not rant. 
>> 
>> Now the solution is also to have tests and this is what we are doing. 
>> We want more tests and we are working on having more tests.
>> 
>> The solution is also to have ***positive* communication. 
>> 
>> There is no point to piss on our process because
>>  - we were the first to push package usage back in Squeak 3.9
>>  - increase enormously the number of tests
>>  - have CI to run the tests and use PR. 
>>  and it is working!
>> 
>> So before bashing us I would expect a bit of respect that it is due to our 
>> track record. 
>> 
>> Finally it takes 1 min to enter a bug entry and now you cannot even complain 
>> that you have to log 
>> because it is on github. (BTW nobody is asking the amount of time it takes 
>> US to migrate and go over the bug entry -
>> again I ask for respect for the people doing this tedious, boring but 
>> important job). 
>> 
>> When VMMaker will be available in Pharo we will be able to automate things 
>> not before. 
>> Please remember also that Igor paid by us spent a lot of time making sure 
>> that 
>> everybody and in particular our jenkins server could automatically build vm.
>> 
>> So we believe in agility, communication and automation. 
>> 
>> Stef
>> 
>> 
>> 
>> 
>>> On 11 Jan 2019, at 05:54, Ben Coman  wrote:
>>> 
 On Thu, 10 Jan 2019 at 23:51, ducasse via Pharo-dev 
  wrote:
>>> 
 Thomas can you integrate such comments in the debugger class comment
 
 @Eliot thanks for the explanation. 
 About the method removed, could you please react less negatively? It would 
 be nice. 
 I cannot believe that you the guy that know the VM would get stopped to 
 open a bug entry telling us that isOptimizedBlock
 has been removed and it should not. How much time opening a bug entry can 
 take? Under 1 min I guess. 
>>> 
>>> I'd guess it takes more than 1 minute overall - a few minutes to shift 
>>> context to open an old Pharo image 
>>> and a few more open the original method to copy it to Pharo and repeat that 
>>> for the next ten missing methods,
>>> and then having fixed it for yourself, rather than just log a job for 
>>> someone else, having fixed your own 
>>> you now repair your pharo repo with Iceberg and submit a commit, and you're 
>>> now off-task by half an hour.  
>>> Not a great deal of time if that was what you scheduled to work on, but 
>>> frustrating when dragged off task.
>>> 
>>> The thing is, when someone is frustrated, without sharing there is no 
>>> chance to resolve anything, 
>>> so the frustration doubles and builds up, and unconsciously creeps in 
>>> relationships and/or leads to a breakdown. 
>>> Putting it out in the world relieves that pressure and provides the 
>>> possibility that someone might 
>>> find a middle path.  As always, it is not what is said but how it is said, 
>>> and personally that seemed okay to me.
>>> 
>>> >> Just because a method is not in the image does not imply it is not in 
>>> >> use.  It simply means that it is not in use in the base image.  As the 
>>> >> system gets modularised this issue will only increase.   
>>> 
>>> On the flip side, if the rule was "don't touch unused methods", that would 
>>> block a lot of action
>>> around cleaning, minimisation and modularisation that are important.  Even 
>>> though those things 
>>> aren't directly the shiny new tools that make Pharo great, it's their 
>>> philosophy that underpins
>>> a lot of the visible Pharo improvements which has facilitated Pharo's 
>>> growth.  
>>> That "vision" is why I'm here.
>>> 
>>> The pivot point here is the concept of "unused" and perhaps where we can do 
>>> better.
>>> Currently developers have no information available to base their decision 
>>> on.
>>> Requiring developers to query the mail list about every cleaning, 
>>> minimisation and modularisation action 
>>> would have a freezing effect.  
>>> 
>>> For stuff that is image its much easier for developers since:
>>> * its "visible" right in front of them
>>> * they can make decisions and take action around it
>>> * tests can be run
>>> 
>>> So the question is how we can get those things for important modules 
>>> outside the image?
>>> For me, VM is not like any third party app but is very much a *part* of 
>>> Pharo
>>> since its something which helps Pharo itself advance.  So lets treat it as 
>>> such, similar 
>>> to how we treat other modules like Calypso or Iceberg which happen 
>>> distributed in-Image.
>>> Can we consider the last step of the CI (after packing the CI product)
>>> could load a static version of VMMaker?  Not that any issues would fail the 
>>> commit, but just report 
>>> to bring "visibility" to the table ?
> 
> You know? since we are sharing frustrations, I

Re: [Pharo-dev] DebugSession>>activePC:

2019-01-11 Thread Eliot Miranda
Hi Doru,


> On Jan 11, 2019, at 4:53 AM, Tudor Girba  wrote:
> 
> Hi,
> 
> @Eliot: Thanks for the clarifying answer.
> 
> I believe you might have jumped to conclusion about the intention of the 
> question. Thomas asked a legitimate question. Without users of a method it is 
> hard to understand its use. It does not necessarily imply that the intention 
> is to remove it, but it does show that someone wants to understand.

Indeed.  I am responding because of the recent experience we had, that you are 
intimately aware of, of moving the somewhat functional Pharo 6 VMMaker port 
forward to Pharo 7, which is frustrating because enough things changed that it 
was broken.  And that is far from an isolated experience.

I want very, very much for Pharo to succeed.  It is the most important user of 
the opensmalltalk-vm by far.  If Pharo fails, opensmalltalk-vm will very likely 
become entirely irrelevant and uninteresting.  So my career and financial 
security are wedded to Pharo’s success.  At the same time I do not feel 
positive about Pharo, as I have said, in its stability and in the community’s 
difficulty in discussing problems (primarily the stability and development 
model issues).  I am therefore very much interested in solving these problems.  
So if I jump to conclusions it is because I am concerned and want to change how 
I feel about Pharo as a viable platform for my work, and that means being able 
to talk about difficult issues and not be shushed.  I want there to be 
constructive discussion, not defensiveness or blithe positivity.  Progress 
depends on truth and ingenuity, not positive thinking.

> 
> As far as I know, Thomas actually wants to write a test to cover that usage. 
> I am sure that you appreciate and encourage that :).

Indeed I do!

> 
> @Thomas: Thanks for this effort!
> 
> Cheers,
> Doru
> 
> 
>> On Jan 10, 2019, at 3:11 PM, Eliot Miranda  wrote:
>> 
>> Hi Thomas,
>> 
>>> On Jan 10, 2019, at 2:24 AM, Thomas Dupriez via Pharo-dev 
>>>  wrote:
>>> 
>>> 
>> 
>> in a stack of contexts the active pc is different for the top context.  For 
>> other than the top context, a context’s pc will be pointing after the send 
>> that created the context above it, so to find the pc of the send one finds 
>> the previous pc.  For the top context its pc is the active pc.
>> 
>> Typically the debugger is invoked in two different modes, interruption or 
>> exception. When interrupted, a process is stopped at the next suspension 
>> point (method entry or backward branch) and the top context in the process 
>> is the context to be displayed in the debugger.  When an exception occurs 
>> the exception search machinery will find the signaling context, the context 
>> that raised the exception, which will be below the search machinery and the 
>> debugger invocation above that. The active pc of the signaling context will 
>> be the pc for the send of #signal et al.
>> 
>> So the distinction is important and the utility method is probably useful.
>> 
>> Do you want to remove the method simply because there are no senders in the 
>> image?
>> 
>> If so, this is indicative of a serious problem with the Pharo development 
>> process.  In the summer I ported VMMaker.oscog to Pharo 6.  Now as feenk try 
>> and build a VMMaker.oscog image on Pharo 7, the system is broken, in part 
>> because of deprecations and in part because useful methods 
>> (isOptimisedBlock (isOptimizedBlock?) in the Opal compiler) have been 
>> removed.
>> 
>> Just because a method is not in the image does not imply it is not in use.  
>> It simply means that it is not in use in the base image.  As the system gets 
>> modularised this issue will only increase.  There are lots of collection 
>> methods that exist as a library that are not used in the base image and 
>> removing them would clearly damage the library for users.  This is the case 
>> for lots of so-called system code.  There are users out there, like those of 
>> us in the vm team, who rely on such system code, and it is extremely 
>> unsettling and frustrating to have that system code change all the time.  If 
>> Pharo is to be a useful platform to the vm team it has to be more stable.
> 
> --
> www.feenk.com
> 
> “The smaller and more pervasive the hardware becomes, the more physical the 
> software gets."
> 
> 



Re: [Pharo-dev] DebugSession>>activePC:

2019-01-11 Thread Eliot Miranda
Craig,

thank you. +1000

> On Jan 11, 2019, at 12:58 AM, Craig Latta  wrote:
> 
> 
> Hi all--
> 
> Eliot writes:
> 
>> Do you want to remove the method simply because there are no senders
>> in the image?
>> 
>> If so, this is indicative of a serious problem with the Pharo
>> development process.  In the summer I ported VMMaker.oscog to Pharo 6.
>> Now as feenk try and build a VMMaker.oscog image on Pharo 7, the
>> system is broken, in part because of deprecations and in part because
>> useful methods (isOptimisedBlock (isOptimizedBlock?) in the Opal
>> compiler) have been removed.
>> 
>> Just because a method is not in the image does not imply it is not in
>> use.  It simply means that it is not in use in the base image.  As the
>> system gets modularised this issue will only increase.  There are lots
>> of collection methods that exist as a library that are not used in the
>> base image and removing them would clearly damage the library for
>> users.  This is the case for lots of so-called system code.  There are
>> users out there, like those of us in the vm team, who rely on such
>> system code, and it is extremely unsettling and frustrating to have
>> that system code change all the time.  If Pharo is to be a useful
>> platform to the vm team it has to be more stable.
> 
> Esteban responds:
> 
>> ...we are told that we remove things without caring.
> 
> I don't see where Eliot said anyone didn't care.
> 
> Stef responds:
> 
>> About the method removed, could you please react less negatively? It
>> would be nice.
>> 
>> ...
>> 
>> How much time opening a bug entry can take? Under 1 min I guess. So
>> why if Marcus removed it inadvertently would you want to make him feel
>> bad?
> 
> Eliot said the system has to be more stable. It doesn't seem like a
> negative reaction, or an attempt to make anyone feel bad. As Ben pointed
> out, the major cost of reporting regressions isn't the time spent
> interacting with the bug-tracking system, it's being switched away from
> what you were doing. Using the automated regression-testing system seems
> like a good way of catching this particular issue (even though it's a
> step away from having full live traceability all the time, before
> committing changes).
> 
>> For calypso we try and sometimes fail and retry. But we do not rant...
>> The solution is also to have ***positive*
>> communication... There is no point to piss on our process... So before
>> bashing us I would expect a bit of respect that it is due to our track
>> record... it would be nice if you could refrain to be systematically
>> negative about what we are doing.
> 
> I don't think Eliot is being systematically negative, or that he
> was ranting, pissing, or bashing. I think introducing those accusatory
> words into the conversation detracts from positive communication.
> 
>> I think that we are doing a great job make Smalltalk cool.
> 
> I do, too! (And thanks for using that word. ;)
> 
> 
> thanks,
> 
> -C
> 
> --
> Craig Latta
> Black Page Digital
> Amsterdam :: San Francisco
> cr...@blackpagedigital.com
> +31   6 2757 7177 (SMS ok)
> + 1 415  287 3547 (no SMS)
> 



Re: [Pharo-dev] external#senders (was Re: DebugSession>>activePC:)

2019-01-11 Thread Eliot Miranda


> On Jan 10, 2019, at 11:24 PM, ducasse  wrote:
> 
> Ben
> 
> Since you asked I reply. 
> For calypso we try and sometimes fail and retry. But we do not rant. 
> 
> Now the solution is also to have tests and this is what we are doing. 
> We want more tests and we are working on having more tests.
> 
> The solution is also to have ***positive* communication. 

What do we understand by positive communication?  Is it IS-style patting on the 
back for average performance so we don’t hurt people’s feelings or is it 
communication that advances a community’s work product?  For me it is the 
latter.

I would never dream of responding to technical criticism of the VM with a 
response that says “please refrain from criticizing us”.  I try and respond 
honestly with an objective assessment of the technical, logistical and human 
issues.  In fact I welcome criticism; how on earth will the VM improve in 
directions other than the narrow ones those working on it set without criticism 
from other stake holders?

> 
> There is no point to piss on our process because
>   - we were the first to push package usage back in Squeak 3.9
>   - increase enormously the number of tests
>   - have CI to run the tests and use PR. 
>   and it is working!
> 
> So before bashing us I would expect a bit of respect that it is due to our 
> track record. 
> 
> Finally it takes 1 min to enter a bug entry and now you cannot even complain 
> that you have to log 
> because it is on github. (BTW nobody is asking the amount of time it takes US 
> to migrate and go over the bug entry -
> again I ask for respect for the people doing this tedious, boring but 
> important job). 
> 
> When VMMaker will be available in Pharo we will be able to automate things 
> not before. 
> Please remember also that Igor paid by us spent a lot of time making sure 
> that 
> everybody and in particular our jenkins server could automatically build vm.
> 
> So we believe in agility, communication and automation. 
> 
> Stef
> 
> 
> 
> 
>> On 11 Jan 2019, at 05:54, Ben Coman  wrote:
>> 
>>> On Thu, 10 Jan 2019 at 23:51, ducasse via Pharo-dev 
>>>  wrote:
>> 
>>> Thomas can you integrate such comments in the debugger class comment
>>> 
>>> @Eliot thanks for the explanation. 
>>> About the method removed, could you please react less negatively? It would 
>>> be nice. 
>>> I cannot believe that you the guy that know the VM would get stopped to 
>>> open a bug entry telling us that isOptimizedBlock
>>> has been removed and it should not. How much time opening a bug entry can 
>>> take? Under 1 min I guess. 
>> 
>> I'd guess it takes more than 1 minute overall - a few minutes to shift 
>> context to open an old Pharo image 
>> and a few more open the original method to copy it to Pharo and repeat that 
>> for the next ten missing methods,
>> and then having fixed it for yourself, rather than just log a job for 
>> someone else, having fixed your own 
>> you now repair your pharo repo with Iceberg and submit a commit, and you're 
>> now off-task by half an hour.  
>> Not a great deal of time if that was what you scheduled to work on, but 
>> frustrating when dragged off task.
>> 
>> The thing is, when someone is frustrated, without sharing there is no 
>> chance to resolve anything, 
>> so the frustration doubles and builds up, and unconsciously creeps in 
>> relationships and/or leads to a breakdown. 
>> Putting it out in the world relieves that pressure and provides the 
>> possibility that someone might 
>> find a middle path.  As always, it is not what is said but how it is said, 
>> and personally that seemed okay to me.
>> 
>> >> Just because a method is not in the image does not imply it is not in 
>> >> use.  It simply means that it is not in use in the base image.  As the 
>> >> system gets modularised this issue will only increase.   
>> 
>> On the flip side, if the rule was "don't touch unused methods", that would 
>> block a lot of action
>> around cleaning, minimisation and modularisation that are important.  Even 
>> though those things 
>> aren't directly the shiny new tools that make Pharo great, it's their 
>> philosophy that underpins
>> a lot of the visible Pharo improvements which has facilitated Pharo's 
>> growth.  
>> That "vision" is why I'm here.
>> 
>> The pivot point here is the concept of "unused" and perhaps where we can do 
>> better.
>> Currently developers have no information available to base their decision on.
>> Requiring developers to query the mail list about every cleaning, 
>> minimisation and modularisation action 
>> would have a freezing effect.  
>> 
>> For stuff that is image its much easier for developers since:
>> * its "visible" right in front of them
>> * they can make decisions and take action around it
>> * tests can be run
>> 
>> So the question is how we can get those things for important modules outside 
>> the image?
>> For me, VM is not like any third party app but is very much a *part* of Ph

Re: [Pharo-dev] external#senders (was Re: DebugSession>>activePC:)

2019-01-11 Thread Eliot Miranda
Stef,

> On Jan 10, 2019, at 11:24 PM, ducasse  wrote:
> 
> Ben
> 
> Since you asked I reply. 
> For calypso we try and sometimes fail and retry. But we do not rant. 
> 
> Now the solution is also to have tests and this is what we are doing. 
> We want more tests and we are working on having more tests.
> 
> The solution is also to have ***positive* communication. 
> 
> There is no point to piss on our process because
>   - we were the first to push package usage back in Squeak 3.9
>   - increase enormously the number of tests
>   - have CI to run the tests and use PR. 
>   and it is working!
> 
> So before bashing us I would expect a bit of respect that it is due to our 
> track record. 

Again you fail to respond to an attempt to discuss real issues and instead take 
it as a personal attack and respond emotionally.  Ben is /not/ bashing your 
process in an attempt to damage Pharo.  As an academic researcher you should be 
able to respond objectively to criticism.  This frequent emotionality doesn’t 
help you or the community.

> 
> Finally it takes 1 min to enter a bug entry and now you cannot even complain 
> that you have to log 
> because it is on github. (BTW nobody is asking the amount of time it takes US 
> to migrate and go over the bug entry -
> again I ask for respect for the people doing this tedious, boring but 
> important job). 
> 
> When VMMaker will be available in Pharo we will be able to automate things 
> not before. 
> Please remember also that Igor paid by us spent a lot of time making sure 
> that 
> everybody and in particular our jenkins server could automatically build vm.
> 
> So we believe in agility, communication and automation. 
> 
> Stef
> 
> 
> 
> 
>> On 11 Jan 2019, at 05:54, Ben Coman  wrote:
>> 
>>> On Thu, 10 Jan 2019 at 23:51, ducasse via Pharo-dev 
>>>  wrote:
>> 
>>> Thomas can you integrate such comments in the debugger class comment
>>> 
>>> @Eliot thanks for the explanation. 
>>> About the method removed, could you please react less negatively? It would 
>>> be nice. 
>>> I cannot believe that you the guy that know the VM would get stopped to 
>>> open a bug entry telling us that isOptimizedBlock
>>> has been removed and it should not. How much time opening a bug entry can 
>>> take? Under 1 min I guess. 
>> 
>> I'd guess it takes more than 1 minute overall - a few minutes to shift 
>> context to open an old Pharo image 
>> and a few more open the original method to copy it to Pharo and repeat that 
>> for the next ten missing methods,
>> and then having fixed it for yourself, rather than just log a job for 
>> someone else, having fixed your own 
>> you now repair your pharo repo with Iceberg and submit a commit, and you're 
>> now off-task by half an hour.  
>> Not a great deal of time if that was what you scheduled to work on, but 
>> frustrating when dragged off task.
>> 
>> The thing is, when someone is frustrated, without sharing there is no 
>> chance to resolve anything, 
>> so the frustration doubles and builds up, and unconsciously creeps in 
>> relationships and/or leads to a breakdown. 
>> Putting it out in the world relieves that pressure and provides the 
>> possibility that someone might 
>> find a middle path.  As always, it is not what is said but how it is said, 
>> and personally that seemed okay to me.
>> 
>> >> Just because a method is not in the image does not imply it is not in 
>> >> use.  It simply means that it is not in use in the base image.  As the 
>> >> system gets modularised this issue will only increase.   
>> 
>> On the flip side, if the rule was "don't touch unused methods", that would 
>> block a lot of action
>> around cleaning, minimisation and modularisation that are important.  Even 
>> though those things 
>> aren't directly the shiny new tools that make Pharo great, it's their 
>> philosophy that underpins
>> a lot of the visible Pharo improvements which has facilitated Pharo's 
>> growth.  
>> That "vision" is why I'm here.
>> 
>> The pivot point here is the concept of "unused" and perhaps where we can do 
>> better.
>> Currently developers have no information available to base their decision on.
>> Requiring developers to query the mail list about every cleaning, 
>> minimisation and modularisation action 
>> would have a freezing effect.  
>> 
>> For stuff that is image its much easier for developers since:
>> * its "visible" right in front of them
>> * they can make decisions and take action around it
>> * tests can be run
>> 
>> So the question is how we can get those things for important modules outside 
>> the image?
>> For me, VM is not like any third party app but is very much a *part* of Pharo
>> since its something which helps Pharo itself advance.  So lets treat it as 
>> such, similar 
>> to how we treat other modules like Calypso or Iceberg which happen 
>> distributed in-Image.
>> Can we consider the last step of the CI (after packing the CI product)
>> could load a static versi

Re: [Pharo-dev] DebugSession>>activePC:

2019-01-10 Thread Eliot Miranda
Hi Thomas,

> On Jan 10, 2019, at 2:24 AM, Thomas Dupriez via Pharo-dev 
>  wrote:
> 
> 

in a stack of contexts the active pc is different for the top context.  For 
other than the top context, a context’s pc will be pointing after the send that 
created the context above it, so to find the pc of the send one finds the 
previous pc.  For the top context its pc is the active pc.

Typically the debugger is invoked in two different modes, interruption or 
exception. When interrupted, a process is stopped at the next suspension point 
(method entry or backward branch) and the top context in the process is the 
context to be displayed in the debugger.  When an exception occurs the 
exception search machinery will find the signaling context, the context that 
raised the exception, which will be below the search machinery and the debugger 
invocation above that. The active pc of the signaling context will be the pc 
for the send of #signal et al.

So the distinction is important and the utility method is probably useful.

Do you want to remove the method simply because there are no senders in the 
image?

If so, this is indicative of a serious problem with the Pharo development 
process.  In the summer I ported VMMaker.oscog to Pharo 6.  Now as feenk try 
and build a VMMaker.oscog image on Pharo 7, the system is broken, in part 
because of deprecations and in part because useful methods (isOptimisedBlock 
(isOptimizedBlock?) in the Opal compiler) have been removed.

Just because a method is not in the image does not imply it is not in use.  It 
simply means that it is not in use in the base image.  As the system gets 
modularised this issue will only increase.  There are lots of collection 
methods that exist as a library that are not used in the base image and 
removing them would clearly damage the library for users.  This is the case for 
lots of so-called system code.  There are users out there, like those of us in 
the vm team, who rely on such system code, and it is extremely unsettling and 
frustrating to have that system code change all the time.  If Pharo is to be a 
useful platform to the vm team it has to be more stable.


Re: [Pharo-dev] [Issue 19852] Cached settings and moving images

2019-01-08 Thread Eliot Miranda
Hi Guille,

> On Jan 8, 2019, at 3:19 AM, Guillermo Polito  
> wrote:
> 
> Hi all,
> 
> I was checking issue https://pharo.fogbugz.com/f/cases/19852 which is about 
> the problems that arise when we move an image of location, and moreover when 
> we move it between different platforms (windows to linux or mac for example).
> 
> The problem are dangling file references pointing to invalid locations.
> 
> We have found 4 of them:
> 
> SystemResolver localdirectory is cached in a class variable
> GTPlaybook caches the cache and stash directories in class variables
> OMSessionStore retrieves at some point during startup an invalid file 
> reference from the (wrongly cached) local directory 
> IceLibgitRepository Sharedrepositorieslocation is also cached
> 
> All these are actually caused by settings, whose behavior is not defined when 
> an image is moved.
> There are several strange issues like, if we store settings, the local 
> directory is stored, and then all images will (wrongly) use the same 
> pharo-local directory of the first image. This is particularly annoying when 
> using the launcher for example :).
> 
> Now, we can leave this as it is right now and just move it to pharo8.
> A quick fix for Pharo7 would be removing those settings and avoiding caching, 
> but that is probably very disruptive too...
> 
> Opinions?

While a “proper” fix might involve adding symbolic names that can be 
cross-platform (including using environment variables, etc) and that would 
indeed mean waiting for Pharo 8, surely something simple can be done in the 
meantime.  Why not have those classes maintain a “current platform” class var 
and a “current directory” class var, and on start up flush the file names if either the current 
platform or the current directory has changed?  One would of course have to 
test the platform before the directory.  Or even simpler, if the image name and 
directory are saved as strings (eg in Smalltalk as PreviousImageAndDirectory := 
{ ... }
and there’s an accessor such as imageNameAndOrDirectoryHasChanged) then the 
file names can easily be changed on start up.  Getting the startup order right 
might be a little tricky but surely not that hard in this case.
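
For illustration only, something along these lines might do (every class name and
selector here should be treated as illustrative for the sketch rather than as the
exact Pharo API):

FileLocationCache class >> startUp: resuming
	"Flush cached file references when the image has moved or changed platform.
	 PreviousImageAndDirectory and flushCachedFileReferences are hypothetical names."
	| current |
	current := { Smalltalk os platformName. Smalltalk imagePath }.
	current = PreviousImageAndDirectory ifFalse:
		[ PreviousImageAndDirectory := current.
		  self flushCachedFileReferences ]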

On a related note I think time boxed releases are a bad idea.  A system is 
ready when it is ready, based on proper acceptance criteria, such as tests, 
user experience reports, etc.  when it is clearly broken it is clearly broken.  
Releasing a system that is clearly broken helps nobody.  (I say that as the US 
prepares to enter the 2020 election cycle...)

> 
> Guille


Re: [Pharo-dev] Pharo image don't restart

2018-12-15 Thread Eliot Miranda
Hi Dario,

if you look at the end of the PharoDebug.log you will see:

Most recent primitives
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
...
..
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:
doesNotUnderstand:


So you have a recursive doesNotUnderstand: error and the system is running
out of memory.  The questions are where and why?  You can try and debug
this further by running the VM with --trace=259, e.g.

pharo-vm/pharo --trace=259 my image.image

This will produce lots of output, eventually ending in an endless stream of
doesNotUnderstand:'s.  So capture the first few megabytes of output (see
e.g. head(1) ($ man head))

FYI
"traceFlags is a set of flags.
1 => print trace (if something below is selected)
2 => trace sends
4 => trace block activations
8 => trace interpreter primitives
16 => trace events (context switches, GCs, etc)
32 => trace stack overflow
64 => send breakpoint on implicit receiver (Newspeak VM only)
128 => check stack depth on send (simulation only)
256 => trace linked sends "
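(For reference, 259 = 1 + 2 + 256, i.e. print trace + trace sends + trace linked sends.)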

Alternatively you could use a debugger such as gdb and I can tell you how
to put a breakpoint on doesNotUnderstand:


On Fri, Dec 14, 2018 at 3:42 AM Trussardi Dario Romano <
dario.trussa...@tiscali.it> wrote:

> Some consideration:
>
>
> At 12 December:
>
> I work all day with the image.
>
> at: 19:xx  I save the image without any problem.
>
> But after saved the image the system begin
> unstable.
>
>
>
>
>
>
>
>
>
>
> When the mouse go on the windows summary bar ( at the bottom of the Pharo
> window ) the image crash. Some time i can launch - relaunch the same image
> from the PharoLauncher ( the pharo run but was unstable ).
>
> at: 19:45 After reload the unstable image ( as mentioned above ) i do the save
> command ( saving it on itself ) and the system crash. After this crash ( when i
> do the image save command ), i have not been able to launch the image. I do
> not know the status of this image ( I call this corrupt image ).
>
> At 13 December:
>
> A) From pharoLauncher i launch the corrupt image some time,
> the shell report the issue,
>
>
>
>
>
>
>
>
> but don't create the crash.dmp file.
>
> B) After copy the pharoLauncher corrupt image into Pharo7.0-64DTRDevErr entry i do this:
>
> /opt/pharolauncher/pharo-vm$ ./pharo --eden 15207744
> /home/party/Pharo/images/Pharo7.0-64DTRDevErr/Pharo7.0-64DTRDevErr.image
>
> the shell report:
>
> *pthread_setschedparam failed: Operation not permitted*
>
> *This VM uses a separate heartbeat thread to update its internal clock*
>
> *and handle events.  For best operation, this thread should run at a*
>
> *higher priority, however the VM was unable to change the priority.  The*
>
> *effect is that heavily loaded systems may experience some latency*
>
> *issues.  If this occurs, please create the appropriate configuration*
>
> *file in /etc/security/limits.d/ as shown below:*
>
>
> *cat <
> **  hard rtprio  2*
>
> **  soft rtprio  2*
>
> *END*
>
>
> *and report to the pharo mailing list whether this improves behaviour.*
>
>
> *You will need to log out and log back in for the limits to take effect.*
>
> *For more information please see*
>
> *...r3732#linux*
>
> *Errore di segmentazione (core dump creato)*
>
>
>
>
>
>
> * C) i have a PharoDebug.log but has the size of 16MB. How can I send it
> to you? This is the status of the problematic. Thanks, Dario *
>


-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Pharo image don't restart

2018-12-13 Thread Eliot Miranda
Hi All,

> On Dec 13, 2018, at 5:40 AM, Christophe Demarey  
> wrote:
> 
> Hi,
> 
> The problem seems to come from: "no room in eden for 
> allocateSmallNewSpaceSlots:format:classIndex:"

Then there should be a crash.dmp file that includes a stack trace.  I’m 
surprised the VM exits by this path; I would expect the system to retry the 
allocation with an old space allocation that should work.

So, Dario, please keep a copy of your image and changes.  And please email me 
the crash.dmp file.  Create a clean one by deleting it first and then running 
the system until it fails and exits.  Multiple exits append to the crash.dmp 
file so it can end up being quite large.

> Eden is a part of the heap where new objects are allocated.
> Basically, it appears that Pharo cannot create objects any more.
> You can get information about gc and eden space here: 
> https://clementbera.wordpress.com/2017/03/12/tuning-the-pharo-garbage-collector/
> 
> Could you try to run your image in headless mode to see if it starts?
> ex: 
> 
> I would try to start your image with a bigger size for the eden.
> Clément recommends to do: Smalltalk vm parameterAt: 45 put: (Smalltalk vm 
> parameterAt: 44) * 4.
> but it means you need an image that is started …
> 
> You can also set it as an argument to the VM: --eden [mk]
> 
> You could try with —eden 15207744 for example:
> ~/Documents/Pharo/vms/70-x64/Pharo.app/Contents/MacOS/Pharo --eden 15207744 
> ~/Documents/Pharo/images/Pharo\ 7.0\ -\ 64bit\ \(development\ 
> version\)/Pharo\ 7.0\ -\ 64bit\ \(development\ version\).image 
> 
> Regards,
> Christophe
> 
> 
>> Le 13 déc. 2018 à 09:25, Trussardi Dario Romano  
>> a écrit :
>> 
>> Ciao,
>> 
>>  i have a Ubuntu system
>> where i defined a Pharo 7.0 - 64bit image managed with 
>> PharoLauncher.
>> 
>>  I work with it for a month and all work fine.
>> 
>>  Now after a save the image the system begin unstabled.
>> 
>>  When the mouse go on the windows summary bar ( at the bottom of the 
>> Pharo window ) the image go down.
>> 
>>  The first time i can launch the same image from the PharoLauncher,
>> 
>>  but after a new  image  save ( the image go down ) i can't 
>> relaunch the image.
>> 
>>  When i do the launch the pharoLauch shell report:
>> 
>> 
>> 
>>  I have some important work in the image ( and i don't have a backup )
>> 
>>  Some consideration?
>> 
>>  Thanks,
>>  Dario
>> 
> 


Re: [Pharo-dev] Application entrypoints

2018-12-13 Thread Eliot Miranda
Hi Ben,

On Wed, Dec 12, 2018 at 6:41 PM Ben Coman  wrote:

> A question was asked on discord... "I know how to start the lights out
> example,
> and feed my objects test data with the testing framework, but how does one
> start
> something like ChineseCheckers? How does one find the entry point?
> Is there a convention on naming a starting place?"
>
> I remember having similar thoughts when starting in Pharo.
>
> One convention I have seen is that amongst all the classes presumably
> prefixed "CC"
> one class would stand out being named for the application without the
> prefix.
> e.g. class "ChineseCheckers".  That is only a narrow chance for a
> namespace conflict,
> but the risk still remains.
>
> I suggested another path would have a package tag "Application"
> (i.e. "ChineseCheckers-Application") that contains a single class
> which has an #open method on the class-side.
> The tag "Application" sorts high up on the package-tags and is
> self-descriptive.
> But I've not seen that used before, so while I think it's a good idea, it's
> not really a convention.
> Conventions are only useful if they are broadly understood.
>
> So I'm wondering what other things people do to draw attention to their
> application entry points.
>

I use a combination of class-side categories (instance creation, examples,
api, etc) and class comments.  e.g. in the class comment
for StackInterpreterSimulator in VMMaker you'll find:

| vm |
vm := StackInterpreterSimulator newWithOptions: #().
vm openOn: '/Users/eliot/Squeak/Squeak4.4/trunk44.image'.
vm setBreakSelector: #&.
vm openAsMorph; run

But the above is hard to find.  I buy Doru's examples approach.  Usually
interesting objects are used in some kind of context and there may be a
flow that the above illustrates, instantiation => initialization => use.
All this is possible with examples, and examples have a pragma to identify
them to tools.  So I would push the examples direction. I guess the kernel
of this approach is to create a method that creates and uses some object,
and that is labelled with a specific pragma identifying it as an example.
It can then serve as both a test and an example to users.
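
For instance, something along these lines (a sketch only; the class is Ben's
hypothetical example and the exact pragma the tools look for is an assumption on
my part):

ChineseCheckers class >> exampleOpen
	"Discoverable entry point: tools can list methods carrying the example pragma,
	 and running it doubles as a smoke test."
	<example>
	^ self new open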


> cheers -ben
>

_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Fwd: Announcing Repl.it Multiplayer

2018-12-06 Thread Eliot Miranda
Hi Santiago,

On Thu, Dec 6, 2018 at 1:52 PM Santiago Bragagnolo <
santiagobragagn...@gmail.com> wrote:

> Would this be interesting to have in pharo??
>

There is already previous relevant work.  Look up Kansas for Self
http://wiki.squeak.org/squeak/1357 and Nebraska for Squeak
http://wiki.squeak.org/squeak/1356.  Focussing on the Multiplayer like UI
would be a major regression.  Note that we already have lots of relevant
infrastructure, such as a VNC server that allows desktops to be shared.
Building a shared programming environment for Pharo doesn't need to start
from such limited models as the Multiplayer.


>
> What do you think?
>
> -- Forwarded message -
> From: Amjad from Repl.it 
> Date: jue., 6 de dic. de 2018 21:55
> Subject: Announcing Repl.it Multiplayer
> To: 
>
>
> The official release of Repl.it Multiplayer, the collaborative coding
> experience.
> [image: Repl.it Logo]
> 
>
> Hey Santiago,
>
> Professional programmers all know that software development is a
> fundamentally social experience. But coding remains a single-player
> experience by default — today, we're changing this!
>
> As part of our mission to make computing more accessible, we believe
> connecting coders, learners, and teachers together in real time, in the
> development environment, is a big piece of the puzzle. That's why we're
> proud to announce *Multiplayer*.
>
> Multiplayer lets you code with friends in the same editor, execute
> programs in the same interpreter, interact with the same terminal, chat in
> the IDE, edit files and share the same system resources, and ship
> applications from the same interface! We've redesigned every part of our
> infrastructure to work in multiplayer mode -- from the filesystem to the
> interpreter.
> [image: Repl.it Multiplayer]
> 
>
> Read more about it here
> ,
> or, better yet, hop in
> ,
> invite your friends and start coding!
>
> Amjad from Repl.it
>
> 767 Bryant St, #210, San Francisco, CA 94107
>
> Unsubscribe
> 

Re: [Pharo-dev] Minheadless trial

2018-12-05 Thread Eliot Miranda
Hi Esteban,

On Wed, Dec 5, 2018 at 5:53 AM Esteban Lorenzano 
wrote:

> Hi Eliot,
>
> On 5 Dec 2018, at 14:46, Eliot Miranda  wrote:
>
> Hi Esteban,
>
> On Aug 7, 2018, at 4:36 AM, Esteban Lorenzano  wrote:
>
> I’m slowly working on that VM because we want it to be the default for
> Pharo 8.
> In our vision, it should be a responsibility of the image to start or not
> a graphical UI, so we are preparing (we have been preparing to it for
> years, actually) to achieve this behaviour.
> To make this work, we need all platforms covered (and another huge
> quantity of changes here and there).
> Anyway, I didn’t merge because I wanted to have win64 covered, not just
> what we have now, and since no-one was using that VM I didn’t feel pression
> to do it :)
>
>
> How does that answer Norbert’s question?  By doing the work in your own
> fork you risk forking.  Do you want to fork?  If not, why not do the work
> in opensmalltalk-vm?
>
>
> This is old thing (there is a pull request now, since like 3 weeks).
>

Ah, OK.  I should have checked the dates more carefully.


> I worked on my fork because that’s how you do it with git: you fork, you
> work, and you do a Pull Request when ready.
>

I hope you forked opensmalltalk-vm not pharo-vm/opensmalltalk-vm, that's
all.


> I was explaining why the PR was not still done: I wanted to have covered
> the three platforms before doing it.
>
> I guess the terminology is confusing you?
>

I get forking in a single repository.  I also get forking across
repositories.  These are two different things.  I had misunderstood where
you were forking.  I apologize.


>
> Cheers!
> Esteban
>
>
>
> Cheers,
> Esteban
>
>
> On 7 Aug 2018, at 08:50, Norbert Hartl  wrote:
>
> What keeps you from doing a pull request to opensmalltalk-vm ?
>
> Am 07.08.2018 um 07:47 schrieb Esteban Lorenzano :
>
> Hi Ben,
>
> Sorry for coming here so late, I didn’t see this thread before.
> I already have a working minheadless branch that was adapted to Eliot’s
> process.
> It was working for Pharo in Linux and Mac (Windows was ongoing but not
> finished, that’s why is not pushed).
>
> You can find this branch here:
>
> https://github.com/estebanlm/opensmalltalk-vm/tree/add-minheadless-vm
>
> Interesting part is that I didn’t tackled any of the issues you are
> working on, so the work is easily mergeable :)
>
> Cheers,
> Esteban
>
> Ps: with some changes in image, I’m using exclusively this VM since a
> couple of months and it works fine.
>
>
> On 7 Aug 2018, at 07:22, Ben Coman  wrote:
>
> On 7 August 2018 at 05:12, Eliot Miranda  wrote:
>
>>
>> Hi Ben,
>>
> Feel free to make this edit and commit
>>
>
> I'm pushing changes here...
>
> https://github.com/bencoman/opensmalltalk-vm/tree/MinimalisticHeadless-x64-msvc2017
>
> and the diff can be tracked here...
>
> https://github.com/bencoman/opensmalltalk-vm/compare/MinimalisticHeadless...bencoman:MinimalisticHeadless-x64-msvc2017
>
>
> 
> On 6 August 2018 at 13:22, Ben Coman  wrote:
>
>> On 6 August 2018 at 11:50, Ben Coman  wrote:
>>
>>
>> https://github.com/ronsaldo/opensmalltalk-vm/blob/be7b1c03/platforms/minheadless/windows/sqPlatformSpecific-Win32.c#L80
>>  typedef HRESULT WINAPI (*SetProcessDpiAwarenessFunctionPointer) (int
>> awareness);
>> C2059 sqPlatformSpecific-Win32.c:80 syntax error: '('
>> E0651 a calling convention may not be followed by a nested declarator.
>>
>> The following change reduces build errors to 1...
>>   typedef HRESULT (*SetProcessDpiAwarenessFunctionPointer) (int
>> awareness);
>>
>> but I'm not sure of the implications.
>
>
> I found the correct solution to this...
> "The trick is placing the [call declaration] inside the parentheses"
>
> https://stackoverflow.com/questions/4830355/function-pointer-and-calling-convention
>
> i.e. the following compiles cleanly
> typedef HRESULT (WINAPI *SetProcessDpiAwarenessFunctionPointer) (int
> awareness);
>
>
> -
> Now running the VM (without parameters) I get...
>Debug Assertion Failed!
>Program: ...\x64-Debug\dist\pharo.exe
>File: minkernel\crts\ucrt\src\appcrt\tran\amd64\ieee.c
>Line: 106
>Expression: (mask&~(_MCW_DN | _MCW_EM | _MCW_RC))==0
>
> at the call to _controlfp(FPU_DEFAULT, _MCW_EM | _MCW_RC | _MCW_PC |
> _MCW_IC);
>
> https://github.com/ronsaldo/opensmalltalk-vm/blob/be7b1c03/platforms/minheadless/windows/sqPlatformSpecific-Win32.c#L118
>
>
> According to https://msdn.microsoft.com/en-us/library/e9b52ceh.aspx
> x64 does not support _MCW_PC or _MCW_IC
> but I'm clueless about the implications of these FPU flags.
> Could our math guys please advise?
>
> Eliminating those two flags allows a VM run successfully without loading
> an Image.
> i.e. it successfully passes...
>osvm_initialize();
>osvm_parseCommandLineArguments(argc, argv);
>osvm_initializeVM();
>
> Next is to try loading an Image.
>
> cheers -ben
>
>
>
>
>

-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Minheadless trial

2018-12-05 Thread Eliot Miranda
Hi Esteban,

> On Aug 7, 2018, at 4:36 AM, Esteban Lorenzano  wrote:
> 
> I’m slowly working on that VM because we want it to be the default for Pharo 
> 8. 
> In our vision, it should be a responsibility of the image to start or not a 
> graphical UI, so we are preparing (we have been preparing to it for years, 
> actually) to achieve this behaviour. 
> To make this work, we need all platforms covered (and another huge quantity 
> of changes here and there). 
> Anyway, I didn’t merge because I wanted to have win64 covered, not just what 
> we have now, and since no-one was using that VM I didn’t feel pression to do 
> it :)

How does that answer Norbert’s question?  By doing the work in your own fork 
you risk forking.  Do you want to fork?  If not, why not do the work in 
opensmalltalk-vm?

> 
> Cheers, 
> Esteban
> 
> 
>> On 7 Aug 2018, at 08:50, Norbert Hartl  wrote:
>> 
>> What keeps you from doing a pull request to opensmalltalk-vm ?
>> 
>>> Am 07.08.2018 um 07:47 schrieb Esteban Lorenzano :
>>> 
>>> Hi Ben,
>>> 
>>> Sorry for coming here so late, I didn’t see this thread before. 
>>> I already have a working minheadless branch that was adapted to Eliot’s 
>>> process. 
>>> It was working for Pharo in Linux and Mac (Windows was ongoing but not 
>>> finished, that’s why is not pushed).
>>> 
>>> You can find this branch here: 
>>> 
>>> https://github.com/estebanlm/opensmalltalk-vm/tree/add-minheadless-vm
>>> 
>>> Interesting part is that I didn’t tackled any of the issues you are working 
>>> on, so the work is easily mergeable :) 
>>> 
>>> Cheers, 
>>> Esteban
>>> 
>>> Ps: with some changes in image, I’ve been using exclusively this VM for a 
>>> couple of months and it works fine.
>>> 
>>> 
>>>> On 7 Aug 2018, at 07:22, Ben Coman  wrote:
>>>> 
>>>>> On 7 August 2018 at 05:12, Eliot Miranda  wrote:
>>>>>  
>>>>> Hi Ben, 
>>>>> Feel free to make this edit and commit
>>>> 
>>>> I'm pushing changes here...
>>>> https://github.com/bencoman/opensmalltalk-vm/tree/MinimalisticHeadless-x64-msvc2017
>>>> 
>>>> and the diff can be tracked here...
>>>> https://github.com/bencoman/opensmalltalk-vm/compare/MinimalisticHeadless...bencoman:MinimalisticHeadless-x64-msvc2017
>>>> 
>>>> 
>>>> 
>>>>> On 6 August 2018 at 13:22, Ben Coman  wrote:
>>>>> On 6 August 2018 at 11:50, Ben Coman  wrote:
>>>>> 
>>>>> https://github.com/ronsaldo/opensmalltalk-vm/blob/be7b1c03/platforms/minheadless/windows/sqPlatformSpecific-Win32.c#L80
>>>>>  typedef HRESULT WINAPI (*SetProcessDpiAwarenessFunctionPointer) (int 
>>>>> awareness);
>>>>> C2059 sqPlatformSpecific-Win32.c:80 syntax error: '('
>>>>> E0651 a calling convention may not be followed by a nested declarator.
>>>>> 
>>>>> The following change reduces build errors to 1...
>>>>>   typedef HRESULT (*SetProcessDpiAwarenessFunctionPointer) (int 
>>>>> awareness);
>>>>> 
>>>>> but I'm not sure of the implications.
>>>> 
>>>> I found the correct solution to this...
>>>> "The trick is placing the [call declaration] inside the parentheses"
>>>> https://stackoverflow.com/questions/4830355/function-pointer-and-calling-convention
>>>> 
>>>> i.e. the following compiles cleanly
>>>> typedef HRESULT (WINAPI *SetProcessDpiAwarenessFunctionPointer) (int 
>>>> awareness); 
>>>> 
>>>> 
>>>> -
>>>> Now running the VM (without parameters) I get...
>>>>Debug Assertion Failed!
>>>>Program: ...\x64-Debug\dist\pharo.exe
>>>>File: minkernel\crts\ucrt\src\appcrt\tran\amd64\ieee.c 
>>>>Line: 106
>>>>Expression: (mask&~(_MCW_DN | _MCW_EM | _MCW_RC))==0
>>>> 
>>>> at the call to _controlfp(FPU_DEFAULT, _MCW_EM | _MCW_RC | _MCW_PC | 
>>>> _MCW_IC);
>>>> https://github.com/ronsaldo/opensmalltalk-vm/blob/be7b1c03/platforms/minheadless/windows/sqPlatformSpecific-Win32.c#L118
>>>> 
>>>> 
>>>> According to https://msdn.microsoft.com/en-us/library/e9b52ceh.aspx
>>>> x64 does not support _MCW_PC or _MCW_IC
>>>> but I'm clueless about the implications of these FPU flags.
>>>> Could our math guys please advise?
>>>> 
>>>> Eliminating those two flags allows a VM to run successfully without loading 
>>>> an Image.
>>>> i.e. it successfully passes...
>>>>osvm_initialize();
>>>>osvm_parseCommandLineArguments(argc, argv);
>>>>osvm_initializeVM();
>>>> 
>>>> Next is to try loading an Image.
>>>> 
>>>> cheers -ben
>>> 
> 


Re: [Pharo-dev] InterruptedContext vs suspendedContext

2018-12-01 Thread Eliot Miranda
Hi Andrei, Hi Thomas,

Andrei, you are right; they are different and the difference is important.  
As you say, suspendedContext is the top (“hot”) context in a process’s context 
chain, but interruptedContext is the context which sent the signal message that 
eventually raised the exception that invoked the debugger.  suspendedContext is 
therefore where execution is in the process, but interruptedContext is where 
the debugger should show the stack.

Thomas, because the exception system is implemented in Smalltalk the handling 
of the initial signal (eg in Object>>#halt), all the way to opening a debugger, 
is itself Smalltalk code, and exists as activations from suspendedContext to 
interruptedContext.  The debugger, with help from the exception system, 
carefully hides this processing from the programmer.  If it did not we would 
have to wade through many activations before we found where the exception 
occurred.

When a process is interrupted by control period things are different.  Here, 
another process handles opening the debugger and indeed suspendedContext and 
interruptedContext are the same.

So the difference between suspendedContext and interruptedContext is vital to 
the debugger.  Without it we would  see the inner machinery of the exception 
system when errors or exceptions are raised.
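To make the relationship concrete, here is a tiny sketch (illustrative only; it
assumes a DebugSession in a variable named session and that the accessors named
above exist):

    | hot shown |
    hot := session interruptedProcess suspendedContext.  "top of the process's stack"
    shown := session interruptedContext.                  "where the signal was sent"
    "Walking sender-wards from the hot context reaches the interrupted one; the
     activations skipped over are the exception-system machinery the debugger hides."
    [hot == shown] whileFalse: [hot := hot sender].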

HTH

_,,,^..^,,,_ (phone)

> On Nov 30, 2018, at 7:42 AM, Andrei Chis  wrote:
> 
> Hi,
> 
> From what I remember they are not always redundant, but I'm not 100% sure 
> they are both needed.
> 
> `suspendedContext` from Process is always the top context of a process. After 
> execution actions like Step Into, Step Over, Step Through it will be the same 
> as interruptedContext in the debugger.
> 
> They will be different when opening the debugger as a result of an exception.
> Exception>>#debug triggers the workflow for opening a debugger. This uses 
> `self signalerContext` as the context that is being debugged. This is the 
> context where the exception was raised and this will be put in the 
> interruptedContext. As this point the execution of the current process is not 
> over and it will continue up to 
> `MorphicUIManager>>#debugProcess:context:label:fullView:notification: ` where 
> the current process is suspended. At that point the two will be different, as 
> suspendedContext will be the context of the method 
> MorphicUIManager>>#debugProcess:context:label:fullView:notification: and the 
> interruptedContext the context that triggered the exception.
> 
> But it might be that they are not both needed. One  possible option might be 
> to force the process to step to the context that raised the exception when 
> the debugger is created. For example in DebugSession>>process:context:.
> Apart from when opening the debugger I do not know if there is another 
> situation where those two can diverge.
> 
> Cheers,
> Andrei
> 
>> On Fri, Nov 30, 2018 at 11:54 AM Thomas Dupriez 
>>  wrote:
>> Hello,
>> 
>> 
>> Instances of DebugSession have an "interruptedContext" and an 
>> "interruptedProcess" instance variable.
>> 
>> Instances of Process have a "suspendedContext" instance variable.
>> 
>> Does someone know if there is a relation between the interruptedContext of a 
>> DebugSession and the suspendedContext of its interruptedProcess? At first 
>> glance it seems like these two variables are redundant and store the same 
>> Context.
>> 
>> Thomas Dupriez


Re: [Pharo-dev] [squeak-dev] 32 vs 64 bits and large integer hash

2018-11-24 Thread Eliot Miranda
On Sat, Nov 24, 2018 at 7:40 AM Luciano Notarfrancesco 
wrote:

> On Fri, Nov 23, 2018 at 12:36 AM Chris Muller  wrote:
>
>> > >> I'm a bit puzzled. I thought  (small)Integers being their own hash
>> is a good thing?
>>
>> I was wondering exactly the same thing!
>>
>> > > I would call it simple but not necessarily good.
>> > > The problem with it is that consecutive numbers generate long chains
>> in HashedCollections:
>> > >
>> > > a := (1 to: 1000) asArray.
>> > > s := Set withAll: a.
>> > > [ 1 to: 100 do: [ :each | s includes: each ] ] timeToRun.
>> > > "==> 7014"
>> > >
>> > > The solution in Squeak is to use PluggableSet instead of Set, because
>> it applies #hashMultiply on the hash value:
>> > >
>> > > ps := PluggableSet integerSet.
>> > > ps addAll: a.
>> > > [ 1 to: 100 do: [ :each | ps includes: each ] ] timeToRun.
>> > > "==> 95"
>> > >
>> > > IIRC in Pharo SmallInteger's hash is based on #hashMultiply to avoid
>> the long chains. That was probably the main reason for the push to make
>> #hashMultiply a numbered primitive.
>> > >
>> >
>> > Interesting!
>>
>> Indeed!  When making a #hash method, one always focuses on hash
>> distribution and finding the elements, but its easy to forget about
>> performance of NOT finding an element.
>>
>>
> Yes! I had this problem blow in my face a couple of years ago, and Juan
> agreed to include the change in Cuis to make SmallIntegers NOT their own
> hash.
>

This seems to be a basic error.  The idea of a hash function is to produce
a well-distributed set of integers for some set of values.  Since the
SmallIntegers are themselves perfectly distributed (each unique
SmallInteger is a unique value), it is impossible to produce a better
distributed hash function than  the integers themselves. For some
application it may indeed be possible to produce a better distributed set
of hashes for the integers; for example an application which considers only
powers of two could use the log base 2 to produce a smaller and better
distributed set of values modulo N than the integers themselves.  But in
general the integers are definitionally well-distributed.  In fact, unless
one has a perfect hash function one is in danger of producing a less
well-distributed set of values from a hash function than the SmallIntegers
themselves.

This argument doesn't apply as integers grow beyond the SmallInteger range,
but not because we want better distribution of values than the large integers
themselves; rather because we want to avoid large integer arithmetic.


> I think it is a good idea in general when programming a hash method in
> Smalltalk to make it somewhat random.
>

As I've indicated, I think this is impossible in general.  It is only in
specific applications, using specific subsets of the SmallIntegers, for
which a better hash function could be derived, but this would be specific
to that application.  For purposes such as these we have
PluggableDictionary et al which can exploit an application-specific hash.
But in general the SmallIntegers are ideal values for their own hashes.
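To illustrate that last point with a sketch (assuming PluggableSet's hashBlock:
hook), a collection known to hold nothing but powers of two could plug in the
log-base-2 idea from above:

    | powers |
    powers := PluggableSet new.
    powers hashBlock: [:each | each highBit hashMultiply].  "log base 2, scrambled"
    powers addAll: ((0 to: 20) collect: [:i | 2 raisedTo: i]).
    powers includes: 1024  "true, found via the application-specific hash"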


> It will never be perfect for every use case, and it doesn't need to be
> cryptographically secure, but there shouldn't be simple use cases (e.g.,
> consecutive integers) that produce hashes that are not uniformly
> distributed. This can be achieved with minimal performance impact (and
> potentially big performance gains in hashed collections) quite simply by
> sending some hashMultiply message in your hash method.
>
> Regards,
> Luciano
>

_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] [squeak-dev] Message>>#= & Message>>#hash

2018-11-19 Thread Eliot Miranda
Hi David,

On Mon, Nov 19, 2018 at 9:52 AM David T. Lewis  wrote:

> On Mon, Nov 19, 2018 at 09:32:17AM -0800, Eliot Miranda wrote:
> > Hi All,
> >
> > In VisualWorks Message implements #= & #hash naturally; two messages
> > whose selectors and arguments are #= are also equal.  But in Cuis, Squeak
> > and Pharo Message inherits #= and #hash from Object, i.e. uses identity
> > comparison.  This is, to say the least, annoying.  Any objections to
> > implementing comparing in Message to match VisualWorks?
> >
>
> That sounds like an obviously good thing to do :-)
>
> Is the lookupClass instance variable relevant for comparisons? I am
> guessing not, since we already have #analogousCodeTo: for that type of
> comparison.
>

For me it is relevant.  Two messages with different lookupClasses, e.g. one
with nil and one with a specific class, represent different messages, one a
normal send one a super send.  So my changes in waiting include lookupClass
in both hash and =.  I don't think it makes much difference, but the
incompatibility with VisualWorks, while regrettable, feels correct to me.
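For concreteness, the pending change amounts to something like the following
(a sketch, not the exact code to be committed):

    Message>>= aMessage
        ^self class == aMessage class
          and: [selector == aMessage selector
          and: [lookupClass == aMessage lookupClass
          and: [args = aMessage arguments]]]

    Message>>hash
        ^(selector hash bitXor: lookupClass hash) bitXor: args hash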

> Dave
>

_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Pharo 7 speed compared to VW

2018-11-19 Thread Eliot Miranda
Hi Shaping,

On Mon, Nov 19, 2018 at 5:04 AM Shaping  wrote:

> (Second try in 6 days.  Does anyone have recent performance data for Pharo
> 7 relative to VW 7.10 or later?)
>

I don't have access to recent VW versions beyond 7.7.  If you have access
perhaps you could run the computer language shootout benchmarks (
https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/smalltalk.html)
and report the results.


>
>
> *From:* Pharo-dev [mailto:pharo-dev-boun...@lists.pharo.org] *On Behalf
> Of *Shaping
> *Sent:* Tuesday, November 13, 2018 00:33
> *To:* 'Pharo Development List' 
> *Subject:* [Pharo-dev] Pharo 7 speed compared to VW (was: The STON
> Specification, Cog speed, and namespaces/modules)
>
>
>
> Does anyone know whether  Pharo 7 is as fast as VW 7.10 or later?
>
>
>
> Do we have recent comparative benchmarks?
>
>
>
> The comparison should ignore potential GPU-based improvement in my algo;
> that will happen later.  The test should involve some math, file streaming,
> and the parsing that entails—an ordinary mix of macrobenchmarks.  The
> comparison should be based on both Pharo 7 and VW each running a single
> Smalltalk Process (one time-slice of one OS thread in one OS process).   I
> need Pharo 7 speed to be comparable or better to justify the port.
>
>
>
> Pharo is definitely looking and working better.  I’ve spent more time with
> it in the last few weeks than during the previous decade.  Thanks to everyone for
> the effort and improvements.
>
>
>
>
>
> Shaping
>
>
>
> *From:* Shaping [mailto:shap...@uurda.org ]
> *Sent:* Wednesday, November 7, 2018 00:41
> *To:* 'Pharo Development List' 
> *Subject:* RE: [Pharo-dev] [ANN] The STON Specification, Cog speed, and
> namespaces/modules
>
>
>
> Hi Eliot.
>
>
>
> Pharo (& Squeak & Cuis) Float subclass BoxedFloat64 maps exactly to
> VW's Double. In 64-bit SmallFloat64 maps exactly to SmallDouble. But I
> wonder whether there is any issue here.  STON would use the print strings
> for (PSC) Float / (VW) Double, and so deseerialization on Pharo would
> automatically produce the right class.  Going in the other direction might
> need some help.  APF needs support in PSC before one can port, but are
> representable as suitably-sized word arrays.
>
>
>
> There is no support for __float128 anywhere in the VM (e.g. not even in
> the FFI) on PSC as yet.
>
>
>
> I see Pharo’s WordArray.  I’ll work on an APF for Pharo, as time permits.
> I’m using APFs in VW in the 300-bit range, and want to reduce the needed
> precision to 64 bits, to save space and time on large (5 million+) scalar
> time-series, both on the heap and during BOSSing (25 m save-time now).
> The problem is not  so much an issue for the JulianDayNumber
> (JDN)-precision, which is adequate in this app at 14 to 15 digits (even
> though my JDN class subclasses APF, for now).  Other calculations need the
> more extreme precision.  I think I can make 128-bit floats work, and would
> really like to see a small, fast, boxed 128-bit float implementation in
> Pharo or VW.   The APFs are big and slow.  Where in the queue of planned
> improvements to Pharo does such a task lie?  I suspect it’s not a very
> popular item.
>
>
>
> Broadening the issue somewhat, I’m trying to find as many good reasons as
> possible to justify the work needed to port all my VW stuff to Pharo.
>
>
>
> I’ve seen the references to Cog’s speed and coming speed-up.  Are there
> recent (say, in the last year) benchmarks comparing VW and Pharo?   Any
> details here would be very much appreciated.
>
>
>
> Having no namespaces in Pharo is, I think, the biggest impediment.   I
> prefer not to prefix class names, but there may be fewer name-collisions
> than I suppose--maybe none.   Still, I need to know how VW and Pharo
> classes map in order to place overrides and extensions correctly.  Besides
> the mentioned float-class mappings is there a reference on this?
>
>
>
> Object allSubclasses
>
>
>
> in Pharo 7 64-bit, produces 14946 classes.  Pharo is a little bigger than
> it used to be.
>
>
>
> I suppose I don’t need to check all unloaded packages because all classes
> in each of those will have the same unique prefix. Is that correct?  Or, I
> could just load every package I can find, before I check names.  But that
> little experiment has never gone well in the past.
>
>
>
> Is the Pharo-with-namespaces issue dead or merely suspended, awaiting a
> more fitting idea than what VW currently offers?
>
>
>
>
>
> Shaping
>
>
>
>
>
> On Tue, Nov 6, 2018 at 12:56 AM Shaping  wrote:
>
> STON is able to serialise Pharo’s Floats, what do you mean by double ?
>
>
>
> Floating-point numbers in IEEE 64-bit format, 14 or 15 significant digits,
> with a range between -10^307 and 10^307.
>
>
>
> Additionally, I recently asked Sven if it would be possible to store
> ScaledDecimals (I think it implements what you call
> ArbitraryPrecisionFloats) without loss of precision.
>
>
>
> I’m referring to VW’s Double (and SmallDouble in 64-bit engines/image

Re: [Pharo-dev] Anyone else seen crashes like these ?

2018-11-12 Thread Eliot Miranda
Hi Ben,
On Mon, Nov 12, 2018 at 8:51 AM Ben Coman  wrote:

>
>
> On Mon, 12 Nov 2018 at 21:37, Sven Van Caekenberghe  wrote:
>
>> Hi,
>>
>> I run Pharo 7 64-bit on a macOS laptop, where the images are kept running
>> across sleep/wake cycles.
>>
>
> Just to confirm, you mean your laptop sleep/wake cycle.
> On my Windows 10 laptop I do the same quite often with no problems.
>
>
>> For many weeks, it often happens that an image crashes before/after such
>> a sleep/wakeup (not all the time, just regularly).
>>
>> Here is a crash dump from today (fresh image/vm from WE, nothing special
>> loaded).
>>
>> Related to scheduling ? Event handling ?
>
>
> In trying to understand the last few moments, is the recent primitives
> list strongly ordered in time?...
>   Most recent primitives
>   signal
>   nowTick
>   primSignal:atUTCMicroseconds:
>   wait
>   millisecondClockValue
>   @
>   actualScreenSize
>

Yes, it's a log of the most recent 256 named primitives.  Note that they're
irrelevant in this case.  The crash is always rooted in vmIOProcessEvents.
i.e. the VM is responding to some input event, and it calls pumpRunLoop to
do (Objective-C) [NSRunLoop mainRunLoop] runMode:NSDefaultRunLoopMode
beforeDate:[NSDate distantPast], which in Smalltalk would be written
NSRunLoop mainRunLoop runMode: NSDefaultRunLoopMode beforeDate: NSDate
distantPast.  And somewhere within this a display update occurs which
crashes, presumably because we're using stale data that should have been
invalidated oil sleep and refreshed on wake.


>
>
> Starting at #actualScreenSize, instrumenting the code by adding...
>  Transcript crShow: thisContext sender printString.
> indicates the following call chain...
>   DisplayScreen-class>>actualScreenSize
>   MorphicUIManager>>checkForNewDisplaySize
>   DisplayScreen class>>checkForNewScreenSize
>   WorldState>>doOneCycleNowFor:
>   WorldState>>doOneCycleFor:
>   WorldMorph>>doOneCycle
>   WorldMorph class>>doOneCycle
>   MorphicUIManager>>spawnNewProcess
>
> In detail from a static analysis of the code...
>> WorldState>>doOneCycleNowFor:
>   > DisplayScreen class>>checkForNewScreenSize
>   > MorphicUIManager>>checkForNewDisplaySize
>   > DisplayScreen-class>>actualScreenSize
> PRIMITIVE
>   < MorphicUIManager>>checkForNewDisplaySize
>< DisplayScreen class>>checkForNewScreenSize
> < WorldState>>doOneCycleNowFor
>  < WorldState>>doOneCycleFor:
>   < WorldMorph>>doOneCycle
>< WorldMorph class>>doOneCycle
> < MorphicUIManager>>spawnNewProcess
>> WorldMorph class>>doOneCycle
>   > WorldMorph>>doOneCycle
>  > WorldState>>doOneCycleNowFor:
> > WorldState>>interCyclePause:
>> Time>>millisecondClockValue   PRIMITIVE
> < WorldState>>interCyclePause:
>> Delay>>schedule
>   > DelaySemaphoreScheduler>>schedule:
>   > Semaphore>>wait
> PRIMITIVE "readyToSchedule variable"
>   > Semaphore>>signal
>  PRIMITIVE "timingSemaphore variable"   NOT RECORDED
>
> maybe context change causes this to be recorded later in "Most
> recent primitives"
>  >
> DelayMicrosecondTicker>>waitForUserSignalled:orExpired:  waking up from
> Semaphore>>wait
> the primSignal:atUTCMicroseconds:
> PRIMITIVE  immediately before the #wait seems to have been logged now in
> "Most recent primitives"
>  >
> DelaySemaphoreScheduler>>scheduleAtTimingPriority
> > Semaphore>>signal
>  PRIMITIVE  "readyToSchedule variable"  seems to have not been recorded in
> "Most recent primitives"
> >
> DelayBasicScheduler>>scheduleAtTimingPriority
>  <
> DelaySemaphoreScheduler>>scheduleAtTimingPriority
>   <
> DelayBasicScheduler>>runBackendLoopAtTimingPriority
>  > DelayMicrosecondTicker>>nowTick
>  PRIMITIVE
>  > Delay>>timingPrioritySignalExpired
> > Semaphore>>signal
> PRIMITIVE "delaySemaphore variable"  CRASH
>
>  Smalltalk stack dump:
> 0x7ffee21d7138 M Delay>timingPrioritySignalExpired 0x113e3b138: a(n)
> Delay
> 0x7ffee21d7170 M [] in
> DelaySemaphoreScheduler(DelayBasicScheduler)>runBackendLoopAtTimingPriority
> 0x11453b8c0: a(n) DelaySemaphoreScheduler
>0x1169f51c8 s BlockClosure>ensure:
>0x1169f5828 s
> DelaySemaphoreScheduler(DelayBasicScheduler)>runBackendLoopAtTimingPriority
>0x1169f5c38 s [] in
> DelaySemaphoreScheduler(DelayBasicScheduler)>startTimerEventLoopPriority:
>0x1169fae10 s [] in BlockClosure>newProcess
>
> The only reason I can think that last semaphore call would be a problem is
> if variable delaySemaphore was not a Semaphore,
> but th

Re: [Pharo-dev] Infinite error recursion while saving Pharo70 alpha

2018-11-07 Thread Eliot Miranda
On Wed, Nov 7, 2018 at 6:28 AM Eliot Miranda 
wrote:
[snip]

> Forgive me, but may I beg you to post the text of stack back traces in
> future not screenshots?  The screenshots have two major disadvantages; a)
> they're huge and b) one cannot search email messages for the text they
> contain.  You can use e.g. shell level tools to copy the text, eg from the
> terminal or from the console.
>

i.e.:

Your mail to 'Pharo-dev' with the subject

Re: [Pharo-dev] Infinite error recursion while saving Pharo70 alpha

Is being held until the list moderator can review it for approval.

The reason it is being held:

Message body is too big: 1522650 bytes with a limit of 1000 KB

[snip]

_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] [ANN] The STON Specification

2018-11-06 Thread Eliot Miranda
Hi Shaping,

Pharo (& Squeak & Cuis) Float subclass BoxedFloat64 maps exactly to
VW's Double. In 64-bit SmallFloat64 maps exactly to SmallDouble.  But I
wonder whether there is any issue here.  STON would use the print strings
for (PSC) Float / (VW) Double, and so deserialization on Pharo would
automatically produce the right class.  Going in the other direction might
need some help.  APFs need support in PSC before one can port, but they are
representable as suitably-sized word arrays. There is no support for __float128
anywhere in the VM (e.g. not even in the FFI) on PSC as yet.
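For what it's worth, a quick sanity check of the Float side in a playground,
assuming a current STON load, would be something along these lines:

    (STON fromString: (STON toString: 0.1)) = 0.1.       "expect true"
    (STON fromString: (STON toString: 1.5e300)) class.   "expect BoxedFloat64 in a 64-bit image"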

On Tue, Nov 6, 2018 at 12:56 AM Shaping  wrote:

> STON is able to serialise Pharo’s Floats, what do you mean by double ?
>
>
>
> Floating-point numbers in IEEE 64-bit format, 14 or 15 significant digits,
> with a range between -10^307 and 10^307.
>
>
>
> Additionally, I recently asked Sven if it would be possible to store
> ScaledDecimals (I think it implements what you call
> ArbitraryPrecisionFloats) without loss of precision.
>
>
>
> I’m referring to VW’s Double (and SmallDouble in 64-bit engines/images).
> APFs are binary representations that can be arbitrarily large, using
> Integers (LargePositiveIntegers for example) to model the bits of the
> mantissa.
>
>
>
> Before, because STON extends JSON, it was storing all kind of numbers
> either as float or integer.
>
>
>
> Now, thanks to Sven, STON stores ScaledDecimals correctly (without loss of
> precision through serialisation as float, what was done before).
>
>
>
> But I do not know if this change is integrated in recent images yet.
>
>
>
> …..
>
>
>
> Number subclass: #Float
>
> instanceVariableNames: ''
>
> classVariableNames: 'E Epsilon Halfpi Infinity Ln10 Ln2
> MaxVal MaxValLn MinValLogBase2 NaN NegativeInfinity NegativeZero Pi
> RadiansPerDegree Sqrt2 ThreePi Twopi'
>
> poolDictionaries: ''
>
> category: 'Kernel-Numbers'
>
>
>
>
>
> My instances represent IEEE-754 floating-point double-precision numbers.
> They have about 16 digits of accuracy and their range is between plus and
> minus 10^307. Some valid examples are:
>
>
>
> 8.0 13.3 0.3 2.5e6 1.27e-30 1.27e-31 -12.987654e12
>
> ….
>
>
>
> I see that Pharo’s Float is VW’s Double.   So then I just need to be able
> to serialize APF.
>
>
>
> ….
>
> FFIFloatType subclass: #FFIFloat128
>
> instanceVariableNames: ''
>
> classVariableNames: ''
>
> poolDictionaries: ''
>
> category: 'UnifiedFFI-Types'
>
>
>
>
>
> I'm a 128bits (cuadruple precision) float.
>
> It is usually not used, but some compiler modes support it (__float128 in
> gcc)
>
>
>
> THIS IS NOT YET SUPPORTED
>
> ….
>
>
>
> The class above is also from the Pharo 7 image.  This is the largest of
> the c-type FFIFloats.  Any Float classes of this size and larger for the
> Smalltalk heap on 64-bit Pharo?
>
>
>
>
>
> Cheers,
>
>
>
> Shaping
>
>
>
>
>
> Le 6 nov. 2018 à 06:58, Shaping  a écrit :
>
>
>
> (Having domain problems recently.  Please excuse this posting if you have
> seen it twice.  I've not seen it appear yet on the list.)
>
>
> Can STON be extended to handle Doubles and ArbitraryPrecisionFloats?
>
> Shaping
>
> -Original Message-
> From: Pharo-dev [mailto:pharo-dev-boun...@lists.pharo.org
> ] On Behalf Of
> David T. Lewis
> Sent: Wednesday, October 31, 2018 14:58
> To: Pharo Development List 
> Subject: Re: [Pharo-dev] [ANN] The STON Specification
>
> This is very clear and well written.
>
> Dave
>
>
> Hi,
>
> Since there can never be enough documentation I finally took some time
> to write a more formal description of STON as a data format.
>
>  https://github.com/svenvc/ston/blob/master/ston-spec.md
>
> The idea is to let this stabilise a bit and to then update the two
> other documents describing STON, where necessary:
>
>  https://github.com/svenvc/ston/blob/master/ston-paper.md
>
> https://ci.inria.fr/pharo-contribution/job/EnterprisePharoBook/lastSuc
> cessfulBuild/artifact/book-result/STON/STON.html
>
> Also, the latest changes in STON have to make their way to the Pharo
> image as well.
>
>  https://github.com/svenvc/ston
>
> All feedback is welcome.
>
> Sven
>
>
> --
> Sven Van Caekenberghe
> Proudly supporting Pharo
> http://pharo.org
> http://association.pharo.org
> http://consortium.pharo.org
>
>
>
>
>
>
>
>
>
>


-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Xcode projects for VM on iOS or Mac?

2018-10-28 Thread Eliot Miranda
Hi Todd,

On Fri, Oct 26, 2018 at 10:00 AM Todd Blanchard via Pharo-dev <
pharo-dev@lists.pharo.org> wrote:

> Does anyone have Xcode projects for building the VM on iOS or Mac?  Can
> they share them or give me some tips for setting one up?
>

First I'd like to know your use case.  I presume it's for debugging, but
it may be more for browsing.  Can you say?  (Personally I find raw lldb
adequate for debugging).

Second, my reasons for not using Xcode projects for building the Mac VMs
are that
- Xcode projects, being a serialization of an object graph, are difficult
to edit, and they offer no parameterisation, so whereas there is one small
set of makefiles (8 in all) for all of the 32-bit and 64-bit builds on Mac,
there had to be a separate Xcode project for each build.  When adding Spur
this was simply unsupportable
- clearly Xcode is not necessary for building, browsing, or debugging;
people are able to accomplish all three tasks using other tools

However, I do appreciate that Xcode is a more pleasant and higher-level GUI
interface than the shell, make, lldb and one's favorite editor.  What I
would support is a tool that created an Xcode project from Makefiles; such
tools used to exist.  I'd love to see such a tool.  What I will fight
against until my dying breath is any attempt to replace the Makefile based
build system with Xcode.  I hope the reasons above justify why.

_,,,^..^,,,_
best, Eliot


[Pharo-dev] Effect Handlers

2018-10-26 Thread Eliot Miranda
Hi All,

I found this write up instructive and well-written:
https://www.quora.com/What-are-some-features-youd-want-a-new-programming-language-to-borrow/answer/Quildreen-Motta

_,,,^..^,,,_ (phone)

Re: [Pharo-dev] [ANN] Migrated Artefact to GitHub

2018-10-01 Thread Eliot Miranda
Hi Guille,

this issue is /really/ important to me.  People are helping me migrate
VMMaker to GitHub and it is /treally/ important to the project that
authorship history is maintained, because finding out who to ask when code
is affected is essential.  VMMaker is large, very complex and has had many
contributors.  Wiping authorship is unacceptable to me.  I'm glad that
Peter's tool is being used in the migration.  Hence my responses below..


On Sat, Sep 29, 2018 at 4:34 AM Guillermo Polito 
wrote:

>
> On Sat, Sep 29, 2018 at 12:41 PM Peter Uhnak  wrote:
>
>> Hi Stef,
>>
>> I understand that everyone is short on time, but I consider not
>> preserving the history problematic for two reasons
>>
>> * it is appropriating someone else's work as one's own -- this seems
>> borderline illegal, or at the very least in violation of MIT license
>>
>
> Not really. If the contributors are specified in the copyright, is it
> stealing?
>
> Now, my point is manners. I got in 4/5 emails:
>  - you're stupid because you did not use Peter's tool
>  - you're inefficient because you took one hour to do it
>  - you're stealing
>
> The first two points show first one big problem that I've seen many times
> with people and software: missing context.
> People think most of the time that "I would have done it better". But
> usually they don't take into account
>  - the time constraints (I had one hour, and one hour I had)
>  - the knowledge (where is the Artefact Repo?, why is it failing for
> Milton?, Do I know enough about metacello/streams/artefact to do it well?)
>  - the working environment (does the person that did it have all the tools
> to work properly? does he have a healthy working environment? For example,
> I've worked on several big companies where you can find really bad
> environments...)
>  - technical and not technical problems that are sometimes independent of
> the problem itself (take into account that for example, fighting against a
> metacello baseline is completely orthogonal to your tool or even iceberg,
> good internet connection)
>
> All these details are important also at the end, and they should be put
> into the balance too when we make a judgement.
>
>
>> * it is sending bad signals to potential contributors that we can scrub
>> them anytime we want
>>
>> as you yourself have said:
>> > @People try to avoid to piss on good will of others.
>> Yet this is what it feels like to some when traces of their contributions
>> are voided.
>>
>
> And this is my third point. This "stealing" idea is mainly a matter of
> manners.
> I really hope nobody here really thinks I wanted to take credit for
> Olivier, Guillaume or any of the other contributors.
> Still I preserved pointers to the original authors and their original
> website in google sites.
>

That's simply not good enough.  The minimum acceptable solution is that
within Pharo, within a browser, one can find out who authored what method,
class comment, and preferably class declaration (we don't have this yet).
Going outside to find out who authored is an unacceptable regression.

But in the case somebody did think that, I removed the repository to remove
> any doubt.
> So again, I apologize if somebody felt offended, but I also prefer to not
> be called a thief.
>
> Now, when I do stuff I'm not thinking about "oh yes, I'm getting famous",
> that would be pretty sad for me :/.
> I do stuff because I just think it's useful.
> I DON'T CARE personally about artefact, and I don't want to take credit
> for it.
> I don't even care about the fu***ng 2 commits I did to port it to Pharo 7,
> I can tell anybody what I did so she/he can re-doit.
> Because I don't use Artefact. Now, Somebody wants the "credit"? I could
> have even amended a commit and put anybody else as author.
>

But don't you see that voiding authorship a) gives the impression of
stealing, and clearly opens you up to the accusation of stealing, no matter
what your actual intent is?  And do you see that vitally important
information is being lost?  If this application had, as VMMaker does,
hundreds of contributors then tracking down who last modified what, which
is really important information, is made much harder.

My problem here is people assuming stealing by default, instead of
> assuming, for example, mistake.
> Imagine an alternative scenario:
>  - X: "Hey Guille, could you add in the copyright X, and Y and Z? They
> also contributed to the project, you should take them into account..."
>  - Guille: "Ah sure, sorry, this was not my intention, I'm so stupid, I
> forgot about Z. Commit push, done".
>
> If instead of bashing on people, we wanted to discuss on how to actually
> FIX the thing, here are my 2 cents:
>  - From a copyright perspective it should have been enough to check the
> licence file and name the contributors there
>  - The history could have been retrieved in a separate branch and then
> merged (and look, we had the best of the two worlds!)
>  - both of the things could have bee

Re: [Pharo-dev] iceberg PrimitiveFailed allocateExecutablePage - PharoDebug.log

2018-09-24 Thread Eliot Miranda
Hi Petr,

along with Esteban’s request for the error code from allocateExecutablePage 
can you also see whether use of iceberg is successful the second time you 
launch Pharo?  So start up your virtualbox, try and interact with iceberg, quit 
if it fails, relaunch and try again?

Also in your steps what do you do to prepare?  eg do you boot in virtualbox, 
return from sleep, or...?

FYI, allocateExecutablePage uses valloc (IIRC) to get a page from the OS and 
then uses mprotect to add executable permission to the page before answering 
the page’s address as the result of the primitive.  The callback machinery then 
uses the page to provide the executable glue code used in implementing 
callbacks.  The address of a code sequence in the page is what is actually 
handed out to C code as a function pointer.  When external code calls this 
function pointer the code in the sequence invokes a callback into the vm before 
returning back to C.  Consequently it is key that allocateExecutablePage works 
correctly.  If it doesn’t then no callbacks.

_,,,^..^,,,_ (phone)

> On Sep 24, 2018, at 11:11 AM, Petr Fischer via Pharo-dev 
>  wrote:
> 
> 


Re: [Pharo-dev] Platform file encoding for FFI

2018-09-19 Thread Eliot Miranda
Hi Henry,
On Tue, Sep 18, 2018 at 1:43 AM Henrik Sperre Johansen <
henrik.s.johan...@veloxit.no> wrote:

> Guillermo Polito wrote
> > On Mon, Sep 17, 2018 at 6:52 PM Alistair Grant <
>
> > akgrant0710@
>
> > >
> > wrote:
> >
> >> Hi Esteban, Guille and Everyone,
> >>
> >> I haven't looked at using FFI much, however it is easy to imagine that
> >> different file encoding rules on different platforms will make writing
> >> FFI calls more difficult,
> >
> >
> > Well not really (from my point of view :))
> > From the point of view of the FFI call an encoded string is just a bunch
> > of
> > bytes. FFI does not do any interpretation of them.
>
> It *would* be pretty handy for adding some auto-conversion into the
> marshaller based on parameter encoding options though... (other than
> filename, could be done in smalltalk using existing encoders)
>
> self
> ffiCall: #(bool saveContentsToFile(String fileName, String contents))
> options: #(+stringEncodings( fileName return , platformAPI contents)
>
> (And yes, I've probably badly mangled the options syntax)
>

Why not go for some generic escape sequence that can inject Smalltalk code
into the marshaling?  Right now e.g.

primExport: aName value: aValue

^ self ffiCall: #(void moz_preferences_set_bool (short* aName, bool aValue))

is compiled as

primExport: arg1 value: arg2
| tmp1 tmp2 |
''
invokeWithArguments:
{(tmp2 := arg1 packToArity: 1).
arg2}

where '' is the ExternalFunction object
(it could usefully print itself ass a literal and then decompilation would
be meaningful; there is already code in the Squeak FFI repository).

Let's say one added {}'s as characters that can't ever appear in C
parameter lists (of course, and alas, []'s can because of arrays).  Then you
could perhaps write

primExport: aName value: aValue

^ self ffiCall: #(void moz_preferences_set_bool ( { short* aName }
asUTF8String, bool aValue))

and have that generate a send of asUTF8String to arg1 or tmp2.  One could
surround the whole thing to apply a coercion to the return value, but
there's no need because one can write e.g.

primExport: aName value: aValue

^(self ffiCall: #(void moz_preferences_set_bool ( { short* aName }
asUTF8String, bool aValue))) fromUTF8String

So then there would be a generic mechanism for injecting Smalltalk code
into the marshaling and one could develop the string encoding support
independently from the FFI.  The options syntax however requires parsing
support, more documentation, and constant extension to support new
facilities, etc.

Is much less verbose than having to manually convert Strings to the proper
> platform Unicode encodings before calling.
> Depends a bit on whether the primitive argument is
> Byte/Widestrings(latin1/utf32), or if it accepts only utf8 bytes and one
> has
> to convert first anyways.
>
> It's not like this isn't a pain point, there are plenty of currently used
> API's that are broken if you try to use non-ascii.
>
> Cheers,
> Henry
>
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html
>
>

-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] [rmod] Float should not implement #to:, #to:by:, etc...

2018-09-18 Thread Eliot Miranda


> On Sep 18, 2018, at 2:52 AM, Guillaume Larcheveque 
>  wrote:
> 
> Maybe #to:by: should convert its parameters in Fraction to avoid Floats 
> problems (not sure, just an idea)

There is no need to convert.  One can simply write

0 to: 1 by: 1/10

The issue with 0 to: 1 by: 0.1 is a problem with floating point arithmetic, not 
with intervals, and one does not cure disease by putting band aids on symptoms. 
 Instead we should teach the pitfalls of floating point arithmetic 
representations so that people are not astonished by 1/10.0*10.  Avoid 
simplifying language. Teach literacy.
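For example, the Fraction form of the same interval keeps every element exact:

    (0 to: 1 by: 1/10) asArray
        "11 exact elements: 0, 1/10, 1/5, 3/10, 2/5, 1/2, 3/5, 7/10, 4/5, 9/10, 1 - no drift"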

> 
> 2018-09-18 11:25 GMT+02:00 Esteban Lorenzano :
>> 
>> 
>>> On 18 Sep 2018, at 11:13, Guillermo Polito  
>>> wrote:
>>> 
>>> 
>>> 
>>> On Tue, Sep 18, 2018 at 11:06 AM Julien  wrote:
 Hello,
 
 I realised that it is possible to create an interval of floats.
 
 I think this is bad because, since intervals are computed by successively 
 adding a number, it might result in precision errors.
 
 (0.0 to: 1.0 by: 0.1) asArray >>> #(0.0 0.1 0.2 0.30000000000000004 0.4 
 0.5 0.6000000000000001 0.7000000000000001 0.8 0.9 1.0)
 
 The correct (precise) way to do it would be to use ScaledDecimal:
 
 (0.0s1 to: 1.0s1 by: 0.1s1) asArray >>> #(0.0s1 0.1s1 0.2s1 0.3s1 0.4s1 
 0.5s1 0.6s1 0.7s1 0.8s1 0.9s1 1.0s1)
 
 I opened an issue about it: 
 https://pharo.fogbugz.com/f/cases/22467/Float-should-not-implement-to-to-by-etc
 
 And I’d like to discuss this with you.
 
 What do you think?
>>> 
>>> Well, I think it's a matter of balance :)
>>> 
>>> #to:by: is defined in Number. So we could, for example, cancel it in Float.
>>> However, people would still be able to do
>>> 
>>> 1 to: 1.0 by: 0.1
>>> 
>>> Which would still show problems.
>> 
>> Nevertheless, I have seen this a lot of times. 
>> 
>> 0.0 to: 1.0 by: 0.1
>> 
>> Is a common use case.
>> 
>>> 
>>> And moreover, we could try to do
>>> 
>>> 1 to: 7 by: (Margin fromNumber: 1)
>>> 
>>> And even worse
>>> 
>>> 1 to: Object new by: (Margin fromNumber: 1)
>>> 
>>> I think adding type-validations all over the place is not a good solution, 
>>> and is kind of opposite to our philosophy...
>>> 
>>> So we should
>>>  - document the good usages
>>>  - document the bad ones
>>>  - and live with the fact that we have a relaxed type system that will fail 
>>> at runtime :)
>> 
>> yup. 
>> But not cancel.
>> 
>> Esteban
>> 
>>> 
>>> Guille
>> 
> 
> 
> 
> -- 
> Guillaume Larcheveque
> 


[Pharo-dev] In Pharo6 what has happened to package scripts?

2018-08-27 Thread Eliot Miranda
Hi All,

   I'm trying to submit a SLICE for Pharo6 that contains changes related to
FFI error handling.  This SLICE needs a package script to recreate the
specialObjectsArray.  But I see no accessor for editing package scripts in
the Pharo6 Monticello Browser.  Have package scripts become unsupported?
If so, what is the alternative?  If not, how does one edit them?
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] About the infinite debugger

2018-06-29 Thread Eliot Miranda
Hi Guille,

> On Jun 29, 2018, at 7:48 AM, Guillermo Polito  
> wrote:
> 
> Hi all,
> 
> during today's sprint we have been working with lots of people on the 
> infinite debugger problem (https://pharo.fogbugz.com/f/cases/22085/). We have 
> checked the emails sent in the latest month. Then, together with Quentin, 
> Pablo, Pavel, Yoan we have been discussing and testing hypothesis all day. We 
> have been also comparing the debuggers code between pharo 3/4 (where the bug 
> was is present) and pharo 7, but this was not necessarily straight forward as 
> the code is not the same and there is no easy diff...

This is frustrating.  I can’t see the issue cuz I can’t login to fogbugz.  
Having to login to read an issue is a major flaw.  I can see it makes sense for 
submitting, but for merely browsing it should be unacceptable.  That said...

The pragma  actually sets the primitive number in the method, 
so it is not merely a pragma; it alters bits in the method that the VM uses to 
search for handler contexts.  So why one would do that for evaluateSignal: 
makes no sense to me. The primitive should be set only in on:do: or something 
very similar (for example one could imagine adding on:or:do: instead of using , 
to construct an ExceptionSet).  So I think removing it from evaluateSignal: is 
definitely the right thing to do.
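For reference, the only place that marker belongs is the handler entry point
itself; BlockClosure>>#on:do: looks roughly like this (quoted from memory, so
treat it as a sketch rather than the exact current source):

    on: exception do: handlerAction
        "Evaluate the receiver in the scope of an exception handler."
        | handlerActive |
        <primitive: 199>  "a marker primitive: it always fails, and it is what
                           the VM's handler-context search looks for"
        handlerActive := true.
        ^self value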

As far as tests for findNextHandlerFrom:, this is tested implicitly by any 
nested exception test so I expect you have several tests affected.  Clément 
points to a test that fails when not including  in 
evaluateSignal: so more investigation is necessary.  Difficult to do while bugs 
are hidden in fogbugz.  When are they going to migrate the github where they 
belong?

> 
> ND, we have found that the problem may come from a wrong pragma marker. 
> Just removing that pragma gives us back the same behaviour as in  Pharo 3/4. 
> :D
> 
> https://github.com/pharo-project/pharo/pull/1621
> 
> I know that the exception handling/debugging has been modified several times 
> in the latest years (some refactorings, hiding contexts...), we unfortunately 
> don't have tests for it, so I'd like some more pair of eyes on it. Ben, 
> Martin could you take a look?
> 
> Thanks all for the fish,
> Guille


Re: [Pharo-dev] Fuel Error in Latest Launcher

2018-06-21 Thread Eliot Miranda
Hi Sean,


> On Jun 21, 2018, at 5:48 AM, Sean P. DeNigris  wrote:
> 
> Max Leske wrote
>> if you discovered a compiler 
>> bug it would help to post the information.
> 
> I'm not sure if it's a bug, but here is the info I know:
> 1. Serialized a block in #60540 32-bit
> 2. Materialized it in same version 64-bit  and it was "broken", i.e.:
>a. It can't be serialized again as indicated above
>b. Strangely, its print string in the GT Inspector is the entire source
> code of #outerContext instead of the source code of just the block

Looks like the bug is that Fuel is not adjusting the pcs in Contexts and 
BlockClosures when loading something saved in a different word size.  
CompiledCode (CompiledMethod & CompiledBlock) is a hybrid object, the first 
part being object references (the method’s header followed by its literals), 
the second part being bytes (the bytecodes for the method and any additional 
info encoded in trailing bytes).  So in 32 bits a pc is (4*numLiterals+1) less 
than it is in 64 bits, and Fuel and other code must adjust things accordingly.  
See CompiledCode>>#initialPC
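A rough sketch of the adjustment a serializer has to make when the writing and
reading images differ in word size (adjustPC:of:writtenWithWordSize: and
sourceWordSize are assumed names for illustration, not an existing Fuel API):

    adjustPC: aPC of: aCompiledMethod writtenWithWordSize: sourceWordSize
        "The literal frame occupies (numLiterals + 1) words (header plus
         literals), so the byte offset of the first bytecode - and hence every
         pc - shifts by that many words times the difference in word size."
        ^aPC + ((aCompiledMethod numLiterals + 1)
                    * (Smalltalk wordSize - sourceWordSize))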

> 
> aBlock:
> - The following instVars are the same in both images: startpc = 56, numArgs
> = 1
> - outerContext also seems the same, with the only obvious exception that the
> #method bytecodes are not all the same
>  - sender = nil
>  - pc = 45
>  - method = (SmallBaselineLoadScript>>#descriptionConflictBlock)
>  - closureOrNil = nil
>  - receiver = aSmallBaselineLoadScript
> 
> Let me know if you would like any more info…
> 
> 
> 
> 
> -
> Cheers,
> Sean
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html
> 



Re: [Pharo-dev] Fuel Error in Latest Launcher

2018-06-20 Thread Eliot Miranda
Hi Sean,

could you post some more information?  What's the pc?  What's the
CompiledMethod's output for #symbolic?  Where in
interpretNextV3PlusClosureInstructionFor: is it stuck?  What's the computed
index?

On Wed, Jun 20, 2018 at 6:07 AM, Sean P. DeNigris 
wrote:

> While Fuel was trying to serialize a sort block ([ :a :b | a name < b name
> ]), I got CompiledMethod>>errorSubscriptBounds:. One strange thing about
> the
> block is that the outerContext is `SortedCollection>>DoIt`, meaning I
> probably had to initialize it by hand due to a previous Fuel problem.
>
> The odd thing about the error is that this object graph was serializing and
> materializing just fine before I upgraded Pharo Launcher.
>
> Call chain:
> CompiledMethod(Object)>>errorSubscriptBounds:
> CompiledMethod(Object)>>at:
> InstructionStream>>interpretNextV3PlusClosureInstructionFor:
> OpalEncoderForV3PlusClosures class>>interpretNextInstructionFor:in:
> InstructionStream>>interpretNextInstructionFor:
> [ (InstructionStream new method: self pc: pc)
> interpretNextInstructionFor: nil ] in
> CompiledMethod(CompiledCode)>>abstractBytecodeMessageAt: in Block: [
> (InstructionStream new method: self pc: pc)...
> BlockClosure>>on:do:
> CompiledMethod(CompiledCode)>>abstractBytecodeMessageAt:
> BlockClosure>>blockCreationBytecodeMessage
> BlockClosure>>endPC
> BlockClosure>>abstractBytecodeMessagesDo:
> BlockClosure>>isClean
> BlockClosure>>shouldBeSubstitutedByCleanCopy
> BlockClosure>>fuelAccept:
> FLLightGeneralMapper>>mapAndTrace:
>
>
>
> -
> Cheers,
> Sean
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html
>
>


-- 
_,,,^..^,,,_
best, Eliot


[Pharo-dev] Is it possible to add an existing github repository from within the image or does one have to clone on the command line?

2018-06-19 Thread Eliot Miranda
SLSIA.  I want to clone gitfiletree://github.com/ThierryGoubier/filetree.git
but would love to do it from the image and not have to visit the command
line.  Is this possible yet?
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Debugger Button Positioning

2018-06-16 Thread Eliot Miranda
Hi Doru,

On Sat, Jun 16, 2018 at 2:25 AM, Tudor Girba  wrote:

> Hi,
>
> This is indeed an issue we should have solved. The concatenation of pragma
> selectors is unnecessary in this case, but we did not clean it up and it
> remained like this.
>

If the issue is that one wants generosity at the reference (in the action
methods that provide what should be pragma names) and specificity at the
referent (the methods containing the pragmas) then one can add a second
pragma and use an "and" function to insist on the pragma plus another
qualifier.  For example,

StepOverDebugAction class>>gtStackDebuggingActionFor: aDebugger

^ (self forDebugger: aDebugger)
icon: GLMUIThemeExtraIcons glamorousOver

could become

StepOverDebugAction class>>gtStackDebuggingActionFor: aDebugger


^ (self forDebugger: aDebugger)
icon: GLMUIThemeExtraIcons glamorousOver

but I suspect that's not necessary here because all these methods exist in
a class hierarchy that can provide the context and specificity required.

But in this case it seems to me that there's a deeper
weakness/opportunity.  The meat of each class is the executeAction method,
for example

StepOverDebugAction >>executeAction

self session stepOver: self currentContext

why not add the pragma there and dispense with the
gtStackDebuggingActionFor: methods altogether?  For example, you could
include a pragma identifying the method as an action, and a pragma to be
used by a builder to add the action, or at least specify the parameters:

StepOverDebugAction >>executeAction

 "This could be performed by a
builder class specific to building glamorous debuggers, or glamorous tools,
or..."

self session stepOver: self currentContext

This second pragma could have as many keywords and arguments as one wants,
and serves to reduce the additional machinery one needs to "plug" something
in.

In general the pattern is to annotate some action or entry point method
with pragmas that specify how the method fits in to some larger context, so
for example,

- an action method on a menu, where a single pragma can specify the label,
icon, hot key, and menu name, using a message understood by a menu
decorator that would perform the pragma to add it to the menu (and then if
there is a hook on the class side that is looking for method additions and
removals, menus in live tools can update as soon as a method containing
such a pragma is added or removed)

- an entry point for some exported interface, such as a COM interface,
where the pragma specifies the types of the incoming parameters, and hence
as the method is installed into some built interface, the relevant
marshaling machinery is built, rather than being specified off to the side

Used in this way, information about how a component fits in can be
localized in the key method(s) providing the component, instead of being
spread over several methods, reducing load on the programmer in
comprehending and adding similar actions.
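For instance (an invented pragma vocabulary, purely to illustrate the shape of
the idea):

    saveImage
        <menuItem: 'Save' icon: #smallSave shortcut: $s menu: #World>
        "A menu-building decorator performs the pragma above to register this
         method as the action; nothing else is needed to plug it in."
        Smalltalk snapshot: true andQuit: false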


> Cheers,
> Doru
>
>
>
> > On Jun 16, 2018, at 8:58 AM, Eliot Miranda 
> wrote:
> >
> > Hi Tim,
> >
> > On Fri, Jun 15, 2018 at 2:21 PM, Tim Mackinnon  wrote:
> >> The whole point about pragmas is that they're supposed to be messages,
> and hence senders (and if possible, implementors) work.  But with the
> mangling that occurs here, a simple senders of stackDebuggingActions brings
> nothing, and one is left manually searching the system before one finds
> references to gtStackDebuggingAction.  Obscure and hence horrible.
> >
> > Hi Eliot - I’ve been caught out by that belief too - but if you select a
> pragma and do cmd-b, cmd-n (or menu code-search, senders of it) - it works
> how I think you are expecting and shows you all the senders of that pragma.
> I use this all of the time to remember how to implement GT-Inspector tabs
> , as I was always impressed with the one in Date and use it as a way to
> find lots of good examples.
> >
> > It could be, that you tried “implementers of” - and that one always
> confuses me - partly because they aren’t a real thing (and I guess I agree
> with you on a distaste for them - it feels like we could have done better
> somehow?).
> >
> > These aren't the issues here.  The issue here is the classic one of name
> mangling, a.k.a. constructing selectors from fragments, so that the
> reference (the elements of the arrays in codeDebuggingPragmas and
> stackDebuggingActionPragmas, which are  #codeDebuggingAction
> #stackDebuggingAction) don't match the pragmas in the methods, which are
>  & .  Why the mismatch?  I
> would expect the references to use #gtCodeDebuggingAction &
> #gtStackDebuggingAction, and finding the implementations would be trivial.
> Instead one has to hunt.  This is bad.
> >
> >
> > Tim

Re: [Pharo-dev] Debugger Button Positioning

2018-06-15 Thread Eliot Miranda
Hi Tim,

On Fri, Jun 15, 2018 at 2:21 PM, Tim Mackinnon  wrote:

> The whole point about pragmas is that they're supposed to be messages, and
> hence senders (and if possible, implementors) work.  But with the mangling
> that occurs here, a simple senders of stackDebuggingActions brings nothing,
> and one is left manually searching the system before one finds references
> to gtStackDebuggingAction.  Obscure and hence horrible.
>
>
> Hi Eliot - I’ve been caught out by that belief too - but if you select a
> pragma and do cmd-b, cmd-n (or menu code-search, senders of it) - it works
> how I think you are expecting and shows you all the senders of that pragma.
> I use this all of the time to remember how to implement GT-Inspector tabs
> , as I was always impressed with the one in Date and use it as a way to
> find lots of good examples.
>
> It could be, that you tried “implementers of” - and that one always
> confuses me - partly because they aren’t a real thing (and I guess I agree
> with you on a distaste for them - it feels like we could have done better
> somehow?).
>

These aren't the issues here.  The issue here is the classic one of name
mangling, a.k.a. constructing selectors from fragments, so that the
reference (the elements of the arrays in codeDebuggingPragmas and
stackDebuggingActionPragmas, which are  #codeDebuggingAction
#stackDebuggingAction) don't match the pragmas in the methods, which
are  & .  Why the mismatch?
I would expect the references to use #gtCodeDebuggingAction &
#gtStackDebuggingAction, and finding the implementations would be trivial.
Instead one has to hunt.  This is bad.
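To make the point concrete, a sketch of the unmangled style, where the symbol
used for lookup is exactly the symbol that appears in the method:

    gtStackDebuggingActionFor: aDebugger
        <gtStackDebuggingAction>
        ^(self forDebugger: aDebugger)
            icon: GLMUIThemeExtraIcons glamorousOver

    "and the collecting side, which a senders search then finds directly:"
    Pragma allNamed: #gtStackDebuggingAction in: StepOverDebugAction class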


> Tim
>
>
> On 15 Jun 2018, at 00:23, Eliot Miranda  wrote:
>
> Hi Henrik,
>
> On Thu, Jun 14, 2018 at 12:47 PM, Henrik-Nergaard 
> wrote:
>
>> Hi,
>>
>> Moving the icons down to the middle row is as simple as changing:
>>
>> ***
>> codeActionsPragmas
>>   ^ #( stackDebuggingActions codeDebuggingActions )
>> ***
>> ***
>> stackDebuggingActionsPragmas
>>   ^ #()
>> ***
>>
>> in GTGenericStackDebugger.
>>
>> Best regards,
>> Henrik
>>
>
> Thanks.  I have to say that the renaming from #stackDebuggingActions to
> gtStackDebuggingAction, as in
>
> StepIntoDebugAction class>>gtStackDebuggingActionFor: aDebugger
> 
>
> is a cruel joke.  The whole point about pragmas is that they're supposed
> to be messages, and hence senders (and if possible, implementors) work.
> But with the mangling that occurs here, a simple senders of
> stackDebuggingActions brings nothing, and one is left manually searching
> the system before one finds references to gtStackDebuggingAction.  Obscure
> and hence horrible.
>
> _,,,^..^,,,_
> best, Eliot "I designed pragmas to be useful and natural, not painful and
> obscure" Miranda
>
>
>


-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Debugger Button Positioning

2018-06-14 Thread Eliot Miranda
Hi Henrik,

On Thu, Jun 14, 2018 at 12:47 PM, Henrik-Nergaard 
wrote:

> Hi,
>
> Moving the icons down to the middle row is as simple as changing:
>
> ***
> codeActionsPragmas
>   ^ #( stackDebuggingActions codeDebuggingActions )
> ***
> ***
> stackDebuggingActionsPragmas
>   ^ #()
> ***
>
> in GTGenericStackDebugger.
>
> Best regards,
> Henrik
>

Thanks.  I have to say that the renaming from #stackDebuggingActions to
gtStackDebuggingAction, as in

StepIntoDebugAction class>>gtStackDebuggingActionFor: aDebugger


is a cruel joke.  The whole point about pragmas is that they're supposed to
be messages, and hence senders (and if possible, implementors) work.  But
with the mangling that occurs here, a simple senders of
stackDebuggingActions brings nothing, and one is left manually searching
the system before one finds references to gtStackDebuggingAction.  Obscure
and hence horrible.

_,,,^..^,,,_
best, Eliot "I designed pragmas to be useful and natural, not painful and
obscure" Miranda


[Pharo-dev] Debugger Button Positioning

2018-06-14 Thread Eliot Miranda
Hi All,

 I've been using Pharo intensively for the first time, Pharo 7. Forgive
me for starting with a complaint, but I don't have time to state all the
things that are great about it; you already know ;-)

One thing I find painful is the positioning of the debugger
into/over/through buttons.  Because these are above the context list, if
you read the code like I do, one has to mouse further to reach them.  I
find my focus is on the highlighted method, and my cursor is typically
within it (I'm doing implementors, or senders, or just looking at the
code).  Further, there's lots of space between the Source tab and the
"Where Is?" and "Browse" buttons.  Doesn't it make more sense to put the
into/over/through buttons between the Source tab and the "Where Is?"
button?  If not, doesn't it make sense to put a copy of the buttons there
where they're in much easier reach?

_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Curious context change latency experiment results (wrt DelayScheduler)

2018-05-28 Thread Eliot Miranda
Hi Ben,




_,,,^..^,,,_ (phone)
> On May 26, 2018, at 10:46 PM, Ben Coman  wrote:
> 
> 
> 
>> On 6 May 2018 at 04:20, Eliot Miranda  wrote:
>> Hi Ben,
>> 
>> 
>>> On May 5, 2018, at 7:25 AM, Ben Coman  wrote:
>>> 
>>> 
>>> 
>>>> On 5 May 2018 at 22:10, Ben Coman  wrote:
>>>> One of the key parts of Delay scheduling is setting the resumption time.
>>>> There are two places this could be done.
>>>> a. In Delay>>#schedule, running (for example) at userSchedulingPriority=40
>>>> b. In DelayMicrosecondScheduler>>#handleTImerEvent:, running at 
>>>> timingPriority=80
>>>> 
>>>> When we were using the millisecond clock for delay scheduling,
>>>> it made sense to use (b.) since the clock might(?) roll over 
>>>> between when resumption time was set, and when #handleTImerEvent: expires 
>>>> delays.
>>>> 
>>>> This should not be a problem now that we are using the microsecond clock, 
>>>> so I wondered what the latency penalty might be between (a.) and (b.).
>>>> I ran a little experiment that indicates the cost/latency of switching 
>>>> threads,
>>>> and was curious if anyone can comment on the validity of the experiment 
>>>> and interpretation of results.
>>>> 
>>>> I made the following three changes...
>>>> 
>>>> DelayScheduler subclass: #DelayMicrosecondScheduler
>>>>instanceVariableNames: 'latencyStart countSinceLastZero'
>>>>classVariableNames: 'LatencyCounts'
>>>>poolDictionaries: ''
>>>>category: 'Kernel-Processes'
>>>> 
>>>> 
>>>> DelayMicrosecondScheduler>>#schedule: aDelay
>>>>latencyStart:= Time primUTCMicrosecondsClock. "This is position (a.)"
>>>>aDelay schedulerBeingWaitedOn ifTrue: [^self error: 'This Delay has 
>>>> already been scheduled.'].
>>>>accessProtect critical: [
>>>>scheduledDelay := aDelay.
>>>>timingSemaphore signal. "#handleTimerEvent: sets 
>>>> scheduledDelay:=nil"
>>>>].
>>>> 
>>>> 
>>>> DelayMicrosecondScheduler>>#handleTimerEvent: microsecondNowTick
>>>>| microsecondNextTick |
>>>>"Process any schedule requests" "This is position (b.)"
>>>>scheduledDelay ifNotNil: [
>>>>|latency|
>>>>latency := Time primUTCMicrosecondsClock - latencyStart.
>>>>LatencyCounts ifNil: [  LatencyCounts := Bag new ].
>>>>LatencyCounts add: latency.
>>>>latency = 0 
>>>>ifTrue: [ countSinceLastZero := 1 + (countSinceLastZero 
>>>> ifNil: [0])]
>>>>ifFalse: [Transcript 
>>>>crShow: 'zero latency count ' , 
>>>> countSinceLastZero printString ;
>>>>show: ' before latency ', latency 
>>>> printString.
>>>>countSinceLastZero := 0].
>>>>"Schedule the given delay."
>>>>scheduledDelay scheduler: self resumptionTime: 
>>>> microsecondNowTick + (1000 * scheduledDelay millisecondDelayDuration).
>>>>self scheduleDelay: scheduledDelay.
>>>>scheduledDelay := nil ].
>>>> 
>>>> rest of method unchanged
>>>> 
>>>> 
>>>> Then opened the Transcript and in Playground evaluated...
>>>> Delay delaySchedulerClass: DelayMicrosecondScheduler.
>>>> 
>>>> 
>>>> The Transcript results are shown below with some comments inserted.
>>>> 
>>>> Now I guess the latency is affected by garbage collection. 
>>>> But one thing I was curious about is why the latency's were quantised in 
>>>> 1000s.  
>>>> 
>>>> Another interesting thing is that vast majority of the latency's were zero,
>>>> which was a nice surprise, but can it be true?  Or is it a consequence 
>>>> of the quantitisation rounding down?
>>>> 
>>>> It seems that the idle-ness of the image affected how often a non-zero 
>>>> latency occurred.
>>>> After I le

Re: [Pharo-dev] Crash on GC garbageCollect

2018-05-25 Thread Eliot Miranda
Hi Andreas,

On Fri, May 25, 2018 at 1:24 AM, Andreas Brodbeck  wrote:

> Hi all
>
> I have a 6.0 image which crashes on garbage collect. If I run "Smalltalk
> garbageCollect" it crashes. The crash log is attached. But I don't really
> understand that crash log. I run it with a 6.0 VM, tried it also with
> newest 6.1 VM and 7.0 VM, but always the same error.
>
> I read here in the list about crash problems with GC, memory leaks and
> similar, but I really don't see what I could do.
>

Note that if you build an Assert VM you will be able to manually patch the
image in lldb so that you can rescue it.  It looks like this:

$ *lldb PharoAssert.app/Contents/MacOS/Pharo*

(lldb) target create "/Users/eliot/oscogvm/build.macos64x64/pharo.cog.spur/
PharoAssert.app/Contents/MacOS/Pharo"
Current executable set to '/Users/eliot/oscogvm/build.
macos64x64/pharo.cog.spur/PharoAssert.app/Contents/MacOS/Pharo' (x86_64).
(lldb) settings set -- target.run-args  "clap_broken.d9e5daa.image"
(lldb) *b warning*
Breakpoint 1: 3 locations.
(lldb) *run --leakcheck 31 clap_broken.d9e5daa.image*
Process 31569 launched: '/Users/eliot/oscogvm/build.
macos64x64/pharo.cog.spur/PharoAssert.app/Contents/MacOS/Pharo' (x86_64)
object leak in*0x10f919658* @ 0 =0x122216538
object leak in*0x10fbb3448* @ 0 =0x122216760
object leak in*0x10fbb3480* @ 0 =0x1222166a8
object leak in*0x10ff384f0* @ 0 =0x122d480b0
object leak in*0x10ff38518* @ 0 =0x122d480b0
object leak in*0x10ff385d0* @ 0 =0x122d480b0
Process 31569 stopped
* thread #1: tid = 0x5b6d56, 0x00011a83 Pharo`warning(s="
checkHeapIntegrityclassIndicesShouldBeValid(0, 1) 57196") + 19 at
gcc3x-cointerp.c:44, queue = 'com.apple.main-thread', stop reason =
breakpoint 1.1
frame #0: 0x00011a83 Pharo`warning(s="
checkHeapIntegrityclassIndicesShouldBeValid(0, 1) 57196") + 19 at
gcc3x-cointerp.c:44
   41   sqInt warnpid, erroronwarn;
   42   void
   43   warning(char *s) { /* Print an error message but don't necessarily
exit. */
-> 44   if (erroronwarn) error(s);
   45   if (warnpid)
   46   printf("\n%s pid %ld\n", s, (long)warnpid);
   47   else
(lldb) *call freeObject(0,0x10f919658)*
(sqInt) $0 = 4478138592
(lldb) *call **freeObject**(0,0x10fbb3448)*
(sqInt) $1 = 4478138592
(lldb) *call **freeObject**(0,0x10fbb3480)*
(sqInt) $2 = 4478138592
(lldb) *call **freeObject**(0,0x10ff384f0)*
(sqInt) $3 = 4478138592
(lldb) *call **freeObject**(0,0x10ff38518)*
(sqInt) $4 = 4478138592
(lldb) *call **freeObject**(0,0x10ff385d0)*
(sqInt) $5 = 4478138592
(lldb) *expr checkForLeaks = 0*
(sqInt) $0 = 0
(lldb) *continue*


and then save the image.


>
> Any pointers in which direction I could investigate?
>
> Thanks and cheers,
> Andreas
>
> --
> Andreas Brodbeck
> www.mindclue.ch
>
>
>


-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Pharo 7 Launch Failure - Freetype Problem?

2018-05-24 Thread Eliot Miranda
Hi Sean,

On Sun, May 20, 2018 at 7:53 PM, Sean P. DeNigris 
wrote:

> Guillermo Polito wrote
> > I've just downloaded the image in [3] and was able to open it with some
> > old
> > vm and with a freshly downloaded stable vm
>
> Weird. If I open it via CLI with either a fresh stable VM or the one for
> Pharo 7 that launcher dl-ed, it works fine, but if I drop the image file
> onto the same VM (which is what I did in my OP), it doesn't open. Anyway,
> no
> big deal if I can open it somehow - just strange!
>

So this could be a current directory problem.  When one launches from the
CLI the VM is in the current directory that the shell is in.  When one
drops an image onto a VM one is in whatever directory the GUI launches
desktop apps in.  These differ and hence that may cause problems.
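
For what it's worth, a quick way to check this from inside the image (a
diagnostic sketch using the standard Pharo API):

   FileSystem workingDirectory pathString.   "the directory the VM was launched in"
   Smalltalk imagePath.                      "where the running image actually lives"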


> -
> Cheers,
> Sean
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html
>

_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Curious context change latency experiment results (wrt DelayScheduler)

2018-05-05 Thread Eliot Miranda
Hi Ben,


> On May 5, 2018, at 7:25 AM, Ben Coman  wrote:
> 
> 
> 
>> On 5 May 2018 at 22:10, Ben Coman  wrote:
>> One of the key parts of Delay scheduling is setting the resumption time.
>> There are two places this could be done.
>> a. In Delay>>#schedule, running (for example) at userSchedulingPriority=40
>> b. In DelayMicrosecondScheduler>>#handleTImerEvent:, running at 
>> timingPriority=80
>> 
>> When we were using the millisecond clock for delay scheduling,
>> it made sense to use (b.) since the clock might(?) roll over 
>> between when resumption time was set, and when #handleTImerEvent: expires 
>> delays.
>> 
>> This should not be a problem now that we are using the microsecond clock, 
>> so I wondered what the latency penalty might be between (a.) and (b.).
>> I ran a little experiment that indicates the cost/latency of switching 
>> threads,
>> and was curious if anyone can comment on the validity of the experiment 
>> and interpretation of results.
>> 
>> I made the following three changes...
>> 
>> DelayScheduler subclass: #DelayMicrosecondScheduler
>>  instanceVariableNames: 'latencyStart countSinceLastZero'
>>  classVariableNames: 'LatencyCounts'
>>  poolDictionaries: ''
>>  category: 'Kernel-Processes'
>> 
>> 
>> DelayMicrosecondScheduler>>#schedule: aDelay
>>  latencyStart:= Time primUTCMicrosecondsClock. "This is position (a.)"
>>  aDelay schedulerBeingWaitedOn ifTrue: [^self error: 'This Delay has 
>> already been scheduled.'].
>>  accessProtect critical: [
>>  scheduledDelay := aDelay.
>>  timingSemaphore signal. "#handleTimerEvent: sets 
>> scheduledDelay:=nil"
>>  ].
>> 
>> 
>> DelayMicrosecondScheduler>>#handleTimerEvent: microsecondNowTick
>>  | microsecondNextTick |
>>  "Process any schedule requests" "This is position (b.)"
>>  scheduledDelay ifNotNil: [
>>  |latency|
>>  latency := Time primUTCMicrosecondsClock - latencyStart.
>>  LatencyCounts ifNil: [  LatencyCounts := Bag new ].
>>  LatencyCounts add: latency.
>>  latency = 0 
>>  ifTrue: [ countSinceLastZero := 1 + (countSinceLastZero 
>> ifNil: [0])]
>>  ifFalse: [Transcript 
>>  crShow: 'zero latency count ' , 
>> countSinceLastZero printString ;
>>  show: ' before latency ', latency 
>> printString.
>>  countSinceLastZero := 0].
>>  "Schedule the given delay."
>>  scheduledDelay scheduler: self resumptionTime: 
>> microsecondNowTick + (1000 * scheduledDelay millisecondDelayDuration).
>>  self scheduleDelay: scheduledDelay.
>>  scheduledDelay := nil ].
>> 
>> rest of method unchanged
>> 
>> 
>> Then opened the Transcript and in Playground evaluated...
>> Delay delaySchedulerClass: DelayMicrosecondScheduler.
>> 
>> 
>> The Transcript results are shown below with some comments inserted.
>> 
>> Now I guess the latency is affected by garbage collection. 
>> But one thing I was curious about is why the latency's were quantised in 
>> 1000s.  
>> 
>> Another interesting thing is that vast majority of the latency's were zero,
>> which was a nice surprise, but can it be true?  Or is it a consequence 
>> of the quantitisation rounding down?
>> 
>> It seems that the idle-ness of the image affected how often a non-zero 
>> latency occurred.
>> After I left the house for a while, the count of zero latency was very high, 
>> but a few still occurred.  It would make sense there was less GC while idle. 
>>  What is a good snippet of code to stress GC. I presume the latency might 
>> increase.
>> 
>> 
>> zero latency count 2273 before latency 1000
>> zero latency count 943 before latency 1000
>> zero latency count 3666 before latency 1000
>> zero latency count 1643 before latency 1000
>> zero latency count 27 before latency 1000
>> "Left house for 20 minutes"  
>> zero latency count 12022 before latency 1000
>> zero latency count 15195 before latency 1000
>> zero latency count 41998 before latency 1000
>> "Returned from outing"
>> zero latency count 128 before latency 1000
>> zero latency count 116 before latency 1000
>> zero latency count 555 before latency 1000
>> zero latency count 2377 before latency 1000
>> zero latency count 5423 before latency 1000
>> zero latency count 3178 before latency 1000
>> zero latency count 47 before latency 1000
>> zero latency count 2276 before latency 1000
>> "Left house to go shopping"
>> zero latency count 6708 before latency 3000
>> zero latency count 4896 before latency 1000
>> zero latency count 433 before latency 1000
>> zero latency count 7106 before latency 1000
>> zero latency count 2195 before latency 1000
>> zero latency count 12397 before latency 1000
>> zero latency count 4815 before latency 2000
>> zero latency count 3480 before latency 1000
>> zero latency co

Re: [Pharo-dev] [squeak-dev] How to yellow click with two buttons without changing mouse settings?

2018-04-23 Thread Eliot Miranda
Hi Marcel,

On Sun, Apr 22, 2018 at 11:24 PM, Marcel Taeumel 
wrote:

> Hi Jakob,
>
> do: "HandMorph showEvents: true" then do CTRL+RED/LEFT. You will see that
> you get, in fact, a yellow click. I think that there is code that checks
> for "control pressed" first to show the different menu. So, we do emulate
> that yellow-click in the VM, but we keep the "control pressed"-flag. So,
> the event state is different, even though you get "event
> yellowButtonPressed = true" both times.
>
> To sum up some prior thoughts on this topic:
> 1) The VM should not provide this single-button-mouse behavior but the
> image should.
> 2) The event filters to simulate those yellow/blue clicks are already
> possible in the image. We do it with mouse-wheel vs. CTRL+up/down already.
> 3) A good transformation for CTRL+RED in such a single-button-mouse mode
> would be YELLOW and not CTRL+YELLOW.
>
> Anyway, this is not an easy fix or decision to make. :-)
>

What would be relatively straight-forward is to add an image property flag
alongside the ones in the image flag word such as "floats in platform
order", "processPreemptionYields", etc in vm parameter 48.  This flag would
tell the VM not to do any event modifications (or at least no mouse button
swapping).  And then we could at least experiment with allowing the image
to perform event modification, making this a preference, etc, before some
fine day in the future, transitioning away from VM support for event
modification in a major release.  Thoughts?
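
For concreteness, a sketch of how such a flag might be toggled from the image
side, following the existing parameter-48 flag accessors (the bit position
below is purely hypothetical, and I believe Pharo spells the accessor
Smalltalk vm parameterAt:put: rather than Smalltalk vmParameterAt:put:):

   | flags imageHandlesEventsBit |
   imageHandlesEventsBit := 1 << 9.   "hypothetical, not an allocated bit"
   flags := Smalltalk vmParameterAt: 48.
   Smalltalk vmParameterAt: 48 put: (flags bitOr: imageHandlesEventsBit)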


> Best,
> Marcel
>
> Am 21.04.2018 14:35:43 schrieb Jakob Reschke :
> Hello,
>
> I downloaded a VM (64 bit Cog Spur Windows) and a fresh trunk image
> separately and have not changed any mouse button settings yet (swap
> mouse buttons preference is on and neither 1 button mouse nor 3 button
> mouse are selected in the VM preferences menu). My notebook does not
> have a middle "mouse" button and the right button currently does a
> blue click (opens the halo).
>
> In this default setting, is there any way to do a yellow click?
>
> I get the morph menu with ctrl+left click and in list morphs and on
> the world I can invoke the menu with the Esc key, but I did not find a
> modifier+button combination that would work universally to emulate a
> yellow click. Have I missed it?
>
> After disabling swap mouse buttons, right click becomes a yellow
> click. But when the preference is on, both right click and alt+left
> click produce a blue click. So with a two button mouse it seems that
> nothing is actually swapped, but one button is unavailable
> altogether...
>
> A simple solution would be to turn the preference off by default.
>
> Kind regards,
> Jakob
>
>
>
>
>


-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Hierarchy (roots) of package

2018-04-19 Thread Eliot Miranda
Hi Clément,


> On Apr 18, 2018, at 11:45 PM, Clément Bera  wrote:
> 
> I would do that:
> 
> Implementation:
> rootsInsidePackage := [ :packageName |
>   | myPackage |
>   myPackage := RPackageOrganizer default packageNamed: packageName.
>   myPackage definedClasses select: [ :each | each superclass package ~~ 
> myPackage ] ].

Don't forget nil superclasses (for proxy classes) so

each superclass isNil
or: [ each superclass package ~~ myPackage ] ]
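
i.e. the whole block with the guard folded in (a sketch, keeping your names):

   rootsInsidePackage := [ :packageName |
      | myPackage |
      myPackage := RPackageOrganizer default packageNamed: packageName.
      myPackage definedClasses select: [ :each |
         each superclass isNil
            or: [ each superclass package ~~ myPackage ] ] ].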

> 
> Example use-case:
> rootsInsidePackage value: 'OpalCompiler-Core'
> 
> Is that what you expected ?
> 
>> On Thu, Apr 19, 2018 at 7:19 AM, Stephane Ducasse  
>> wrote:
>> Hi
>> 
>> Given a package I would like to know the classes that are roots
>> of hierarchy inside the package.
>> 
>> Do we have something like that?
>> 
>> Stef
>> 
> 
> 
> 
> -- 
> Clément Béra
> https://clementbera.github.io/
> https://clementbera.wordpress.com/


Re: [Pharo-dev] [Pharo-Launcher] call for tests on Windows

2018-04-16 Thread Eliot Miranda
Hi Phil,

On Mon, Apr 16, 2018 at 12:54 PM, p...@highoctane.be 
 wrote:

> I have downloaded the https://github.com/pharo-
> project/pharo-vm/blob/master/opensmalltalk-vm/build.
> win32x86/pharo.cog.spur.lowcode/Pharo.exe.manifest and put it in the
> folder that contains the PharoLauncher Pharo.exe and then the pointers are
> correct again (they have their masking done correctly.
>

Good!


> So, there is a packaging issue somewhere for Pharo Windows VMs as
> downloaded by the Laucher.
>

Indeed, we must include the manifest files on Windows.


>
> Phil
>
>
On Mon, Apr 16, 2018 at 12:45 PM, p...@highoctane.be 
wrote:

> Eliot,
>
> Thx for the pointer.
>
> No, there is no such file in the vm folder that is launching the Pharo
> Launcher nor is there one in the vms folders that are downloaded to run the
> various images started by the launcher.
>

> This manifest file is new to me. Where is it to be found and what does it
> do for Pharo?
>

It must lie alongside the .exe.  It's an XML file that specifies various
runtime environment settings for the executable.  I don't know all that it
can do, I just know that it does say whether an application is highDpi
aware or not, and your bug report seems to point directly to this.  Here's
the manifest for Pharo:

Pharo.exe.manifest

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <description>Pharo Smalltalk Virtual Machine</description>
  <application>
    <windowsSettings xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">
      <dpiAware>false</dpiAware>
    </windowsSettings>
  </application>
</assembly>

And that reminds me that they need to get updated to refer to
opensmalltalk.org!!


> Phil
>
> On Mon, Apr 16, 2018 at 5:29 PM, Eliot Miranda 
> wrote:
>
>> Hi Phil,
>>
>> On Apr 16, 2018, at 8:16 AM, philippe.b...@highoctane.be <
>> philippe.b...@gmail.com> wrote:
>>
>> I guess that it is what I installed and it works fine on my Win 10 system.
>>
>> Nice icon BTW.
>>
>> Now there is an issue with the mouse pointer in the launcher and all
>> pther images (so maybe a VM problem).
>>
>> The cursor is all black and tiny and has no white surroundings. In a dark
>> theme the cursor is close to invisible.
>>
>> Also the cursor is not obeying the magnification setting of Windows. It
>> worked before.
>>
>>
>> I guess that the cursor masks are  not applied properly + other new stuff.
>>
>>
>> Is the manifest (Pharo.exe.manifest ?) being installed?
>>
>>
>> That is ruining the whole Pharo experience for me (and I guess newcomers).
>>
>>
>> Phil
>>
>> On Mon, Apr 16, 2018, 16:14 Christophe Demarey <
>> christophe.dema...@inria.fr> wrote:
>>
>>> Hi,
>>>
>>> Regarding the various problems Pharo Launchers had on Windows, we worked
>>> on a new installer based on Advanced Installer [1] to avoid UAC (User
>>> Account Control) and write permissions problems. This new installer now
>>> installs Pharo Launcher in the user’s local app data folder (where for
>>> sure, he has write permissions). Pharo Launcher also now have its own icon
>>> and use its own name instead of Pharo. Also, the uninstaller now works as
>>> expected. Last but not least, the installer is now signed to avoid warning
>>> of Windows Defender.
>>> Thanks to Ben Coman who did the first version of the packaging using
>>> Advanced Installer (was NSIS).
>>> This version should also improve the launch of Pharo images on Windows.
>>>
>>> So, please could you install and test this new version: and report any
>>> problem with it? http://files.pharo.org/pharo-launcher/win-alpha/
>>> We do not have windows users around so it’s hard to know if it works
>>> outside our tests boxes.
>>>
>>> Thanks,
>>> Christophe
>>>
>>> [1] https://www.advancedinstaller.com/
>>>
>>
>


-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] [Pharo-Launcher] call for tests on Windows

2018-04-16 Thread Eliot Miranda
Hi Phil,

> On Apr 16, 2018, at 8:16 AM, philippe.b...@highoctane.be 
>  wrote:
> 
> I guess that it is what I installed and it works fine on my Win 10 system.
> 
> Nice icon BTW.
> 
> Now there is an issue with the mouse pointer in the launcher and all pther 
> images (so maybe a VM problem).
> 
> The cursor is all black and tiny and has no white surroundings. In a dark 
> theme the cursor is close to invisible.
> 
> Also the cursor is not obeying the magnification setting of Windows. It 
> worked before.
> 
> 
> I guess that the cursor masks are  not applied properly + other new stuff.

Is the manifest (Pharo.exe.manifest ?) being installed?

> 
> That is ruining the whole Pharo experience for me (and I guess newcomers).
> 
> 
> Phil
> 
>> On Mon, Apr 16, 2018, 16:14 Christophe Demarey  
>> wrote:
>> Hi,
>> 
>> Regarding the various problems Pharo Launchers had on Windows, we worked on 
>> a new installer based on Advanced Installer [1] to avoid UAC (User Account 
>> Control) and write permissions problems. This new installer now installs 
>> Pharo Launcher in the user’s local app data folder (where for sure, he has 
>> write permissions). Pharo Launcher also now have its own icon and use its 
>> own name instead of Pharo. Also, the uninstaller now works as expected. Last 
>> but not least, the installer is now signed to avoid warning of Windows 
>> Defender.
>> Thanks to Ben Coman who did the first version of the packaging using 
>> Advanced Installer (was NSIS).
>> This version should also improve the launch of Pharo images on Windows.
>> 
>> So, please could you install and test this new version: and report any 
>> problem with it? http://files.pharo.org/pharo-launcher/win-alpha/
>> We do not have windows users around so it’s hard to know if it works outside 
>> our tests boxes.
>> 
>> Thanks,
>> Christophe
>> 
>> [1] https://www.advancedinstaller.com/


Re: [Pharo-dev] Critiques - Temporaries read before written.

2018-04-15 Thread Eliot Miranda
Hi Martin,

> On Apr 15, 2018, at 11:40 AM, Martin McClure  wrote:
> 
>> On 04/15/2018 07:42 AM, Ben Coman wrote:
>> The greater prominence of Critiques in Calypso 
>> encourages me to try to clean them out.
>> 
>> I bumped into a false positive "Temporaries read before written."
>> that I've condensed to the following example.
>> 
>> test
>> |x|
>> [ x := nil ] whileNil: [ x ifNil: [ x := 1] ]
>> 
>> Now before I log an Issue, is it feasible to be able to recognise this?
>> Perhaps only for common looping idioms?
> 
> In this example, the first runtime reference to x is to send it #ifNil:. So 
> technically, x *is* being read before being written, at least if you count 
> sending it a message as "reading" (which seems a reasonable interpretation to 
> me).

How so?  The first run-time reference to x could be in either of the two
non-inlined blocks.  The assignment to x in [ x := nil ] could precede the read
if (as of course it does, but we can't guarantee) BlockClosure>>whileNil: 
chooses to evaluate it before [ x ifNil: [ x := 1] ].   The issue is that the 
code is ambiguous; it depends on the implementation of whileNil: and hence 
should be reported as "may" rather than "shall".

>> 
>> Anyway, the workaround is simple enough...
>> test
>> |x|
>> x := nil. "silence critiques"
>> [ x := nil ] whileNil: [ x ifNil: [ x := 1] ]
> Probably not a terrible idea to be explicit about initializing to nil, 
> thereby revealing the developer's intent that this variable be nil rather 
> than relying on the default initialization to nil.

I've never agreed with this.  It is in the language spec that all pointer 
variables, including temporaries, are initialized with nil, and repeating an 
initialization shows ignorance, and makes me suspect the rest of the code.  
Smalltalk intentionally ensures that all variables are initialised and for good 
reason.  It's similar to, but worse than, people adding an explicit ifFalse: [nil]
to an ifTrue: when the default return value for a false ifTrue: is indeed nil. 
We should strive for literacy, not bastardise things for the ignorant since 
repeating a mistake is a route to establishing the mistake as common parlance 
hence losing elegance and concision.

> So I'd lean towards leaving the critique as-is. If a developer knows what 
> they did was safe, they can either ignore the critique, exclude the critique, 
> or put in the explicit initialization to nil. I think I prefer explicit 
> initialization.

The critique is wrong.  It can only say "may be read before written" and a 
human being in the same situation would soon observe that the receiver block is 
always evaluated before the argument block and so not warn at all.  Ben is
right; this is a false positive.  However, saying "may" is at least in the
spirit of the language given the possibility of reimplementing whileNil:.

> 
> Regards,
> -Martin


Re: [Pharo-dev] Why can't we use * in protocol for package extension?

2018-04-12 Thread Eliot Miranda
Hi Stephane,

On Thu, Apr 12, 2018 at 11:46 AM, Stephane Ducasse 
wrote:

> Eliot
>
> We do not want to go the road of overrides. We want to keep our
> engineer task forces.
>

There are overrides anyway.  In general overrides are unavoidable in some
circumstances.  The issue is not whether they exist, it's whether they work
reliably.  Right now they don't; they rely on changes file technology that
is extremely fragile.  Sean and Denis' move to reimplement extensions is an
opportunity to implement overrides correctly.


>
> Stef
>
> On Thu, Apr 12, 2018 at 7:31 PM, Eliot Miranda 
> wrote:
> > Hi Sean,
> >
> > On Thu, Apr 12, 2018 at 9:49 AM, Sean P. DeNigris  >
> > wrote:
> >>
> >> Stephane Ducasse-3 wrote
> >> > You see. You pushed this idea and at then end we will have to handle
> the
> >> > mess.
> >> > I do not see why we cannot simply support *.
> >>
> >> I'm surprised by this resistance. The *Xyz was always an ugly hack, part
> >> of
> >> Squeak's overloading the same mechanism for both system categorization
> and
> >> packaging, and exposing and limiting protocols as "just dumb strings",
> all
> >> of which IMHO makes the system much less understandable (no real
> "private"
> >> tagging, extension methods can't show up in proper protocol, etc). We're
> >> not
> >> in a feature freeze, so what is the problem with tackling part of this
> >> mess
> >> now? Sure, maybe the UI support can be improved, but let's focus on some
> >> concrete suggestions.
> >>
> >> Denis and I just happened to be talking about this larger issue the
> other
> >> day. Here are a few snippets I dug up during that conversations of some
> of
> >> my many posts about this over the years…
> >>
> >> > we have overloaded system categories to package code for SCM. System
> >> > categories should be tags (preferably multiple allowed)
> >> > which offer a logical view of the system. Packages, the POV we show
> now,
> >> > are orthogonal and much less useful for users.
> >> (edited)
> >> and another:
> >> > I feel more and more that the standard "Package" pane is only useful
> >> > for... packaging, and when one takes off the dependency management hat
> >> > and
> >> > puts the user hat on (i.e. most of the time), what you really want
> there
> >> > is a logical view of the system. So I see three use cases:
> >> - Logical view of the system - I guess this was the original intention
> of
> >> Categories, but has been hijacked by Monticello
> >> - By project - which, as you just showed, we have now, yay!
> >> - By package - the least useful, but primary (up til now), view
> >> (edited)
> >> and regarding Nautilus' tree package pane (when it first arrived):
> >> I noticed that right now, separate packages within the same project are
> >> not
> >> collapsed. E.g. if I have MyProject-Core and MyProject-Platform, they
> will
> >> be siblings in the tree, instead of both under MyProject. It seems like
> it
> >> would be more useful to have
> >> - MyProject
> >>   - Core
> >>   - Platform
> >> in the tree
> >
> >
> > If you and Denis are "going radical" and going to do the right thing then
> > please also give thought to overrides and unloading.  Allowing a package
> to
> > override a set of methods on load is a useful facility, fraught with
> > difficulties (being able to browse the overridden versions being the main
> > one).  Having things organized so that the overridden versions are saved,
> > don't get lost when source is rewritten, etc, etc (made much easier by
> > keeping source in methods), but most importantly, get restored in the
> right
> > order when packages are unloaded.  I believe it's as simple as
> associating
> > the methods that are overridden with the packages to which they belong,
> and
> > maintaining a load order (so that if PkgA B & C implement C>>foo, and are
> > loaded in the order A, B, C, then we can compute easily that unloading C
> > restores PkgB's C>>foo, and that unloading B does not affect C>>foo).
> >
> >>
> >>
> >> > it seems that the tree is primarily about chunking information into
> >> > manageable pieces.
> >>
> >> A primary difficulty here is that packages are often divide

Re: [Pharo-dev] Why can't we use * in protocol for package extension?

2018-04-12 Thread Eliot Miranda
Hi Sean,

On Thu, Apr 12, 2018 at 9:49 AM, Sean P. DeNigris 
wrote:

> Stephane Ducasse-3 wrote
> > You see. You pushed this idea and at then end we will have to handle the
> > mess.
> > I do not see why we cannot simply support *.
>
> I'm surprised by this resistance. The *Xyz was always an ugly hack, part of
> Squeak's overloading the same mechanism for both system categorization and
> packaging, and exposing and limiting protocols as "just dumb strings", all
> of which IMHO makes the system much less understandable (no real "private"
> tagging, extension methods can't show up in proper protocol, etc). We're
> not
> in a feature freeze, so what is the problem with tackling part of this mess
> now? Sure, maybe the UI support can be improved, but let's focus on some
> concrete suggestions.
>
> Denis and I just happened to be talking about this larger issue the other
> day. Here are a few snippets I dug up during that conversations of some of
> my many posts about this over the years…
>
> > we have overloaded system categories to package code for SCM. System
> > categories should be tags (preferably multiple allowed)
> > which offer a logical view of the system. Packages, the POV we show now,
> > are orthogonal and much less useful for users.
> (edited)
> and another:
> > I feel more and more that the standard "Package" pane is only useful
> > for... packaging, and when one takes off the dependency management hat
> and
> > puts the user hat on (i.e. most of the time), what you really want there
> > is a logical view of the system. So I see three use cases:
> - Logical view of the system - I guess this was the original intention of
> Categories, but has been hijacked by Monticello
> - By project - which, as you just showed, we have now, yay!
> - By package - the least useful, but primary (up til now), view
> (edited)
> and regarding Nautilus' tree package pane (when it first arrived):
> I noticed that right now, separate packages within the same project are not
> collapsed. E.g. if I have MyProject-Core and MyProject-Platform, they will
> be siblings in the tree, instead of both under MyProject. It seems like it
> would be more useful to have
> - MyProject
>   - Core
>   - Platform
> in the tree
>

If you and Denis are "going radical" and going to do the right thing then
please also give thought to overrides and unloading.  Allowing a package to
override a set of methods on load is a useful facility, fraught with
difficulties (being able to browse the overridden versions being the main
one).  Having things organized so that the overridden versions are saved,
don't get lost when source is rewritten, etc, etc (made much easier by
keeping source in methods), but most importantly, get restored in the right
order when packages are unloaded.  I believe it's as simple as associating
the methods that are overridden with the packages to which they belong, and
maintaining a load order (so that if PkgA B & C implement C>>foo, and are
loaded in the order A, B, C, then we can compute easily that unloading C
restores PkgB's C>>foo, and that unloading B does not affect C>>foo).
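
A minimal sketch of that bookkeeping (the names and the #overrideStackFor:
helper are made up for illustration): per (class, selector) keep the
overriding methods in load order; unloading a package removes its entry and
reinstalls the one beneath only if the unloaded package provided the live
method.

   recordOverride: aCompiledMethod of: aClass selector: aSelector by: aPackage
      "stacks are oldest-first; the bottom entry is the original method"
      (self overrideStackFor: aClass -> aSelector)
         addLast: aPackage -> aCompiledMethod

   unload: aPackage from: aClass selector: aSelector
      | stack wasOnTop |
      stack := self overrideStackFor: aClass -> aSelector.
      wasOnTop := stack last key == aPackage.
      stack removeAllSuchThat: [ :assoc | assoc key == aPackage ].
      "only reinstall if the unloaded package provided the live method"
      wasOnTop ifTrue: [ aClass methodDict at: aSelector put: stack last value ]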


>
> > it seems that the tree is primarily about chunking information into
> > manageable pieces.
>
> A primary difficulty here is that packages are often divided for reasons
> that have nothing to do with the domain model, e.g. the ubiquitous
> MyPackage-Platform, which is an artifact of Metacello that is not all that
> relevant to a user wanting to understand the system.
>
> From the naive user perspective, if I'm exploring from the top level of the
> system, I want to see things like:
> - CodeImport
> - Collections
> - Compiler
>
> From this perspective, the 14 entries for Collections, multiplied by a few
> dozen top-level categories make the list unwieldy and only marginally less
> daunting than the flattened list we used to have (see
> http://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two )
>
>
>
> -
> Cheers,
> Sean
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html
>
>


-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Changed #atEnd primitive - #atEnd vs #next returning nil

2018-04-11 Thread Eliot Miranda
Hi Sven,

On Wed, Apr 11, 2018 at 1:25 PM, Sven Van Caekenberghe  wrote:

>
>
> > On 11 Apr 2018, at 21:44, Stephane Ducasse 
> wrote:
> >
> > I did not know about the NeoConsole. Nice because I wanted to build a
> > little REPL for my minilanguage implementation.
>
> You are of course welcome to look at it.
> But it is Pharo specific.
> I use it to be able to hook/look into running headless server images.
> For this it is super handy.
>

Cool usage!  Could you tell me whether you type Smalltalk expressions into
this to examine your running server?  The answer will be used in a related
discussion on a mailing list not too distant from this one ;-)


>
> > Stef
> >
> > On Wed, Apr 11, 2018 at 8:47 PM, Sven Van Caekenberghe 
> wrote:
> >> Alistair,
> >>
> >>> On 11 Apr 2018, at 19:42, Sven Van Caekenberghe  wrote:
> >>>
> >>> I will send you some code later on.
> >>
> >> Today I arranged for my NeoConsole code (that normally works over a
> network connection) to work over stdio. Although I am not yet happy with
> every aspect of the implementation, it does work (using unaltered Zn
> streams and code). The foll
> >>
> >> $ cat /etc/issue
> >> Ubuntu 16.04.4 LTS
> >> $ mkdir pharo7
> >> $ cd pharo7/
> >> $ curl get.pharo.org/70+vm | bash
> >>  % Total% Received % Xferd  Average Speed   TimeTime Time
> Current
> >> Dload  Upload   Total   SpentLeft
> Speed
> >> 100  3036  100  30360 0  36799  0 --:--:-- --:--:--
> --:--:-- 37024
> >> Downloading the latest 70 Image:
> >>http://files.pharo.org/get-files/70/pharo.zip
> >> Pharo.image
> >> Downloading the latest pharoVM:
> >>http://files.pharo.org/get-files/70/pharo-linux-stable.zip
> >> pharo-vm/pharo
> >> Creating starter scripts pharo and pharo-ui
> >> On a 64-bit system? You must enable and install the 32-bit libraries
> >>   Please see http://pharo.org/gnu-linux-installation for detailed
> instructions
> >> $ ./pharo Pharo.image config http://mc.stfx.eu/Neo
> ConfigurationOfNeoConsole --install=bleedingEdge
> >> 'Installing ConfigurationOfNeoConsole bleedingEdge'
> >>
> >> Loading 1-baseline of ConfigurationOfNeoConsole...
> >> Fetched -> Neo-Console-Core-SvenVanCaekenberghe.24 ---
> http://mc.stfx.eu/Neo --- http://mc.stfx.eu/Neo
> >> Loaded -> Neo-Console-Core-SvenVanCaekenberghe.24 ---
> http://mc.stfx.eu/Neo --- cache
> >> ...finished 1-baseline
> >> $ ./pharo Pharo.image eval NeoConsoleStdio run
> >> Neo Console Pharo-7.0+alpha.build.760.sha.
> d2734dcabda799803c307365bcd120f92211d34a (32 Bit)
> >> pharo> 1+2
> >>
> >> 3
> >> pharo> 42 factorial
> >>
> >> 14050061177528798985431426062445115699363840
> >> pharo> Stdio stdin
> >>
> >> StdioStream: #stdin
> >> pharo> ==
> >> self: StdioStream: #stdin
> >> class: StdioStream
> >> file: a File
> >> handle: #[148 213 25 107 160 197 105 247 0 0 0 0 0 0 0 0 0 1 255 1]
> >> forWrite: false
> >> peekBuffer: nil
> >> pharo> show StdioStream>>#atEnd
> >> StdioStream>>#atEnd
> >> atEnd
> >>
> >>^ file atEnd
> >> pharo> get process.list
> >> Morphic UI Process
> >> Delay Scheduling Process
> >> Low Space Watcher
> >> Input Event Fetcher Process
> >> Idle Process
> >> WeakArray Finalization Process
> >> CommandLine handler process
> >> pharo> quit
> >> Bye!
> >> a NeoConsoleStdio
> >>
> >> I know there are many approaches to a REPL, I don't claim mine is best,
> it is just the one that I have been using for years.
> >>
> >> In the above, I do not depend on EOF - just to be clear. The point
> being that there is no immediate fundamental problem.
> >>
> >> But there is something wrong with what is returned by Stdio stdin
> >>
> >> Sven
> >>
> >>
> >>
> >
>
>
>


-- 
_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] DateAndTime Offset Bug Proposal

2018-04-10 Thread Eliot Miranda
Hi Sven,

On Tue, Apr 10, 2018 at 7:33 AM, Sven Van Caekenberghe  wrote:

>
>
> > On 10 Apr 2018, at 16:13, Stephane Ducasse 
> wrote:
> >
> > What is a field based date and  time?
>
> This is more of an implementation choice but it has probably influence on
> the API as well.
>
> Field based date/time/datetime objects are like the naive implementation
> where each element that we talk about as humans has its own instance
> variable. You would store hours, minutes, seconds, nanosecond separately,
> as well as year, month, day.
>
> This is not as efficient as the current implementations, where time is
> typically stored as seconds and nanoseconds, and dates as a single julian
> day number.
>
> Still, these are implementation choices. For example, the split between
> seconds and nanoseconds is a bit artificial as well (it is so that both
> remain SmallIntegers in 32-bit), while in 64-bit this decision could be
> revised.
>

Note that the VM provides times as either 64-bit LargePositiveInteger
microseconds from 1901 (32-bits) or 61-bit positive SmallInteger
microseconds from 1901 (64-bits), in either UTC or the current time zone.

This gives us the following fit-for-purpose limits:

DateAndTime fromSeconds: 1 << 64 - 1 // 1000000          586455-01-18T08:01:49-07:00
DateAndTime fromSeconds: SmallInteger maxVal // 1000000   38435-08-17T21:30:06-07:00

Choosing 64-bit nanoseconds doesn't work nearly as well; we run out of bits
in 586 years.

I don't buy the performance issue of keeping things as SmallIntegers.
There is good 64-bit positive integer support in all the 32-bit VMs (e.g.
in the interpreter, inlined #+. #- et al all work on a 64-bit integer
range).  If Pharo does decide to change the internal representation then I
suggest that it uses the same representation as the VM.
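
For instance (a sketch; as in the examples above, fromSeconds: is taken to
count from the 1901 Smalltalk epoch that the VM clock uses):

   "the VM's microsecond clock maps directly onto DateAndTime"
   DateAndTime fromSeconds: Time primUTCMicrosecondsClock // 1000000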


> > On Tue, Apr 10, 2018 at 1:32 PM, Esteban A. Maringolo
> >  wrote:
> >>
> >> What is missing in the current Pharo image is a field based
> >> Date/DateTime instead of an offset+duration one as it currently is.
> >>
> >> Why not use Chronos instead? AFAIR Chronos provides that.
> >>
> >> An alternative would be to implement a "Calendar" (as in
> >> Java.util.Calendar [1]), that can exist in parallel with the existing
> >> Date class.
> >>
> >> Regards,
> >>
> >> [1] https://developer.android.com/reference/java/util/Calendar.html
> >>
> >> On 10/04/2018 03:30, Stephane Ducasse wrote:
> >>> Hi Paul
> >>>
> >>> I agree and instead of patching the current system I would start using
> >>> TDD to design
> >>> a new Date package.
> >>>
> >>> stef
> >>>
> >>> On Mon, Apr 9, 2018 at 8:42 PM, Paul DeBruicker 
> wrote:
>  I  think #= is a bad selector for Date and should be avoided when
> determining
>  whether something happens on a date, or whether two dates are the
> same.   We
>  all know March 24th in London covers a different 24 hours than March
> 24th in
>  Hawaii but Date>>#= does not.
> 
> 
> 
>  I think whats needed are more descriptive selectors like
> 
> 
>  Date>>#isSameOnDateAs: aDateOrDateAndTime
>  Date>>#overlapsWithDate: aDate
>  DateAndTime>>#occursOnDate: aDate
>  DateAndTime>>#sameHMSButDifferentUTCIn: aTimeZoneKey
>  DateAndTime>>#sameUTCButDifferentHMSIn: aTimeZoneKey
> 
>  and change Date>>#= to #shouldNotImplement.
> 
> 
>  FWIW I also don't like #offset: as before you send it you know the
> timezone
>  and after you may let that knowledge be forgotten. Real offsets can
> change
>  as laws change.
> 
> 
> 
>  I think people are aware of this but if you have need for comparing
> dates &
>  times then you must use a library that accesses the regularly updated
> Olson
>  timezone database on your system and classes that respect time
> zones.  Time
>  zones are political, and legal definitions of offsets can change hours
>  before the DST transition dates & times.
> 
> 
>  I don't think it matters which default timezone you pick for the
> image if
>  you're not going to take them into account when doing comparisons.
> 
> 
>  Unfortunately there isn't a way to avoid this complexity until DST
> goes
>  away.
> 
> 
>  There's certainly flaws to how we currently do it and I think
>  TimeZoneDatabase and Chronos make good attempts to fix it.  I haven't
> looked
>  at Chalten but would guess its good too.
> 
> 
> 
> 
> 
> 
> 
> 
>  Sean P. DeNigris wrote
> > I was bitten by this very annoying bug again. As most of us probably
> know
> > due
> > to the steady stream of confused ML posts in the past, the bug in
> summary
> > is
> > that we have an incomplete timezone implementation that doesn't
> properly
> > take into account historical DST changes. This flares up without
> warning
> > especially when DST toggles. I created a wiki page to document the
> > situation: https://github.com/seandenigris/pharo/

Re: [Pharo-dev] Did magritte change? Magritte-Morph-SeasnDeNigris.95 is not found

2018-04-08 Thread Eliot Miranda
Hi Sean,


> On Apr 8, 2018, at 6:04 AM, Sean P. DeNigris  wrote:
> 
> Peter Uhnák wrote
>> Is this the canonical repo? I'm still depending on the version in
>> SmalltalkHub.
> 
> I would say it's a judgement call because the original author is no longer
> active in the community. I originally was pushing all my fixes to the StHub
> repo, but I started to break the Squeak CI builds, so I forked and made many
> significant enhancements since. I always depend on my fork and use it in
> most of my projects, both personal and production. I would say that there is
> no downside as long as one is using it in Pharo exclusively. I would guess
> it wouldn't be super complicated to fix for Squeak and sync everything, but
> so far no one has voiced a need.

If it is possible and not too onerous then the courtesy would be appreciated.

> -
> Cheers,
> Sean
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Developers-f1294837.html
> 



Re: [Pharo-dev] Changed #atEnd primitive - #atEnd vs #next returning nil

2018-04-04 Thread Eliot Miranda
On Wed, Apr 4, 2018 at 8:37 AM, Sven Van Caekenberghe  wrote:

>
>
> > On 4 Apr 2018, at 17:32, K K Subbu  wrote:
> >
> > On Wednesday 04 April 2018 04:06 PM, Nicolas Cellier wrote:
> >>> IIRC, someone said it is implemented as 'remaining size being zero'
> >>> and some virtual unix files like /dev/random are zero sized.
> >> Currently, for files other than sdio (stdout, stderr, stdin) it is
> >> effectively defined as:
> >> atEnd := stream position >= stream size
> > I see a confusion between Stream and its underlying collection. Stream
> is an iterator and just does next, nextPut, peek, reset etc. But methods
> like size or atEnd depend on its collection and there is no guarantee that
> this collection has a known and finite size.
> >
> > Essentially, a collection's size may be known and finite, unknown but
> finite size or infinite. This has nothing do with file descriptor being
> std{in,out,err}. If std* is a regular file or special file like /dev/mem,
> /dev/null, it's size is known at open time. With console streams or sysfs
> files, size is unknown until EOT (^D) or NUL is received. Lastly, special
> files like /dev/zero, /dev/random or /proc/cpuinfo don't have a finite size
> but report it as zero (!).
> >
> > [ stream atEnd ] whileFalse: [ stream next. .. ]
> >
> > will only terminate if its collection size is finite. It won't terminate
> for infinite collections.
> >
> > Regards .. Subbu
>
> Good summary, I agree.
>
> Still, what are the semantics of #next - does the caller always have to
> check for nil ? Do we think this is ugly (as the return value is outside
> the domain) ?


The problem is not that the value is outside the domain.  The problem is
that it may be within the domain.  Answering an element within the domain
when there are no more elements is insane.  Not even C does that.
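
A tiny illustration with the standard stream protocol:

   | s elems |
   s := ReadStream on: #(1 nil 3).
   elems := OrderedCollection new.
   [ s atEnd ] whileFalse: [ elems add: s next ].
   "elems now holds 1, nil and 3; a loop that stopped when #next answered nil
    would have terminated at the embedded nil and silently lost the 3"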


> Do we then still need #atEnd ?
>

For backward compatibility, yes, even if deprecated.  There is so much
existing code that uses it, it'll be useful to have.


> Sven
>

_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] [Vm-dev] Image crashing on startup, apparently during GC

2018-04-01 Thread Eliot Miranda
Hi Alistair,

_,,,^..^,,,_ (phone)

> On Mar 31, 2018, at 1:42 PM, Alistair Grant  wrote:
> 
> Hi Pablo & Eliot,
> 
>> On 31 March 2018 at 20:49, Eliot Miranda  wrote:
>> Hi Pablo,
>> 
>> On Sat, Mar 31, 2018 at 10:19 AM, teso...@gmail.com 
>> wrote:
>>> 
>>> Hi,
>>> I am taking the VM from the latest VM in
>>> http://files.pharo.org/get-files/70/ (the one downloaded by the get pharo
>>> scripts, I believe is
>>> http://files.pharo.org/get-files/70/pharo-mac-latest.zip)
>>> The output of version in the VM is:
>>> 
>>> 5.0 5.0.201803151936 Mac OS X built on Mar 15 2018 23:30:17 UTC Compiler:
>>> 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31) [Production Spur VM]
>>> 
>>> CoInterpreter VMMaker.oscog-eem.2347 uuid:
>>> 062614a7-e3da-4b30-997a-9568911b9ff5 Mar 15 2018
>>> 
>>> StackToRegisterMappingCogit VMMaker.oscog-eem.2347 uuid:
>>> 062614a7-e3da-4b30-997a-9568911b9ff5 Mar 15 2018
>>> 
>>> VM: 201803151936 https://github.com/OpenSmalltalk/opensmalltalk-vm.git $
>>> Date: Thu Mar 15 20:36:43 2018 +0100 $
>>> 
>>> Plugins: 201803151936
>>> https://github.com/OpenSmalltalk/opensmalltalk-vm.git $
>>> 
>>> 
>>> I don't know if this information helps you to know the specific commit,
>>> but please feel free to tell me how I can get the exact commit from the VM.
>>> Or where to get other VMs to check the error.
>> 
>> 
>> The best one can do is either
>> - running the VM executable from the command line using --version
>> - via the System Reporter
>> 
>> Alas git doesn't help here.  Unlike many other scc systems git doesn't
>> provide a metalanguage to embed the current commit id into source.  The best
>> we have is the time stamp and as we can see the granularity isn't good
>> enough when things are changing quickly.
> 
> git doesn't provide a substitution mechanism like sccs, but the script
> we have that embeds the date can just as easily embed the hash.  In
> .git_filters/RevDateURL.smudge there's a line that retrieves the
> commit date from git:
> 
> $date = `git log --format=%ad -1`;
> 
> to get the (short) hash we can simply add:
> 
> $shorthash = `git log --format=%h -1`;
> 
> The string substitution can then proceed as for the date.
> 
> I think it would be worthwhile having both the date and hash in the
> --version info.
> 
> I'm happy to add this in and update the --version output if there's
> general agreement.

Yes please!!! The conventional alternative is to invoke git log from the
makefiles and pass in the commit hash as a compiler command-line define.  But this
is messy and slows down compilation (unless there is a special rule for just 
one file, and that's fragile).  I much prefer having the commit somewhere in 
source.

If you do go ahead with this also consider modifying the makefiles to ensure 
that updateSCCSVersions has been run at least once before the bulk of the build 
is done.

> 
> 
> 
>> As Alistair says, the issue is fixed in the VMMaker.oscog package commit
>> VMMaker.oscog-eem.2359, which is
>> 
>> commit 1f0a7da9d4e8dcf4cdfac07014decdadac6937bb
>> Author: Eliot Miranda 
>> Date:   Thu Mar 15 18:09:12 2018 -0700
> 
> 
> Which unfortunately is 1 commit after the version you have.
> 
> There appears to be a separate problem that MacOS VMs aren't being
> uploaded to files.pharo.org, so while running the VM through the Pharo
> automated test suite and bootstrap process is a great idea, right now
> we need to wait for an updated VM for MacOS. :-(.
> 
> 
> Cheers,
> Alistair
> 
> 
> 
>>CogVM source as per VMMaker.oscog-eem.2359
>> 
>>Cogits:
>>Fix regression introduced in VMMaker.oscog-eem.2333 or thereabouts when
>> improving comoilation breakpoint.  maybeSelectorOfMethod can answer nil so a
>> guard is needed.
>> 
>> I'm sorry but the crash.dmp doesn't appear to include the VMMaker.oscog
>> commit.  I thought it did.  I'll fix this.
>> 
>>> 
>>> Cheers,
>>> Pablo
>>> 
>>> On Sat, Mar 31, 2018 at 6:53 PM, Alistair Grant 
>>> wrote:
>>>> 
>>>> Hi Pablo,
>>>> 
>>>>> On 31 March 2018 at 18:36, teso...@gmail.com  wrote:
>>>>> Hi Everyone,
>>>>>  I have created the PR in Pharo, so the CI runs the bootstrap with the
>>>>> latest VM (March 15th).
>>>>> Running the process fails during execution of the tests 

Re: [Pharo-dev] [Vm-dev] Image crashing on startup, apparently during GC

2018-03-31 Thread Eliot Miranda
Hi Pablo,

On Sat, Mar 31, 2018 at 10:19 AM, teso...@gmail.com 
wrote:

> Hi,
> I am taking the VM from the latest VM in http://files.pharo.org/get-
> files/70/ (the one downloaded by the get pharo scripts, I believe is
> http://files.pharo.org/get-files/70/pharo-mac-latest.zip)
> The output of version in the VM is:
>
> 5.0 5.0.201803151936 Mac OS X built on Mar 15 2018 23:30:17 UTC Compiler:
> 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31) [Production Spur VM]
>
> CoInterpreter VMMaker.oscog-eem.2347 uuid: 
> 062614a7-e3da-4b30-997a-9568911b9ff5
> Mar 15 2018
>
> StackToRegisterMappingCogit VMMaker.oscog-eem.2347 uuid:
> 062614a7-e3da-4b30-997a-9568911b9ff5 Mar 15 2018
>
> VM: 201803151936 https://github.com/OpenSmalltalk/opensmalltalk-vm.git $
> Date: Thu Mar 15 20:36:43 2018 +0100 $
>
> Plugins: 201803151936 https://github.com/OpenSmalltalk/opensmalltalk-
> vm.git $
>
> I don't know if this information helps you to know the specific commit,
> but please feel free to tell me how I can get the exact commit from the VM.
> Or where to get other VMs to check the error.
>

The best one can do is either
- running the VM executable from the command line using --version
- via the System Reporter

Alas git doesn't help here.  Unlike many other scc systems git doesn't
provide a metalanguage to embed the current commit id into source.  The
best we have is the time stamp and as we can see the granularity isn't good
enough when things are changing quickly.

As Alistair says, the issue is fixed in the VMMaker.oscog package commit
VMMaker.oscog-eem.2359, which is

commit 1f0a7da9d4e8dcf4cdfac07014decdadac6937bb
Author: Eliot Miranda 
Date:   Thu Mar 15 18:09:12 2018 -0700

CogVM source as per VMMaker.oscog-eem.2359

Cogits:
Fix regression introduced in VMMaker.oscog-eem.2333 or thereabouts when
improving comoilation breakpoint.  maybeSelectorOfMethod can answer nil so
a guard is needed.

I'm sorry but the crash.dmp doesn't appear to include the VMMaker.oscog
commit.  I thought it did.  I'll fix this.


> Cheers,
> Pablo
>
> On Sat, Mar 31, 2018 at 6:53 PM, Alistair Grant 
> wrote:
>
>> Hi Pablo,
>>
>> On 31 March 2018 at 18:36, teso...@gmail.com  wrote:
>> > Hi Everyone,
>> >   I have created the PR in Pharo, so the CI runs the bootstrap with the
>> > latest VM (March 15th).
>> > Running the process fails during execution of the tests in 32bits OSX.
>> > It crashes the VM with a segmentation fault.
>> > I could reproduce the crash, running the tests from the command line,
>> and
>> > also running OCBytecodeGeneratorTest test.
>>
>>
>> There were several VMs built on / around the 15th.  Would you mind
>> letting me know the commit hash as Eliot fixed this particular problem
>> about then.
>>
>> I tested 43a2f5c.
>>
>> Thanks,
>> Alistair
>>
>>
>>
>> >
>> > I am attaching the crash.dmp with both executions (from the commandLine
>> and
>> > headful), both are in the same point.
>> >
>> > Cheers,
>> > Pablo
>> >
>> > On Sat, Mar 31, 2018 at 3:52 PM, Stephane Ducasse <
>> stepharo.s...@gmail.com>
>> > wrote:
>> >>
>> >> > I will try to promote then the one of 15 march. We’ll see next week.
>> >> > but then, this is part of my observation: We cannot know which VMs
>> are
>> >> > stable, and that’s because the *process* to make them stable is very
>> >> > “human
>> >> > dependent”: We consider a version stable when it builds on CI and
>> Eliot
>> >> > says
>> >> > is stable. But since Eliot does not use Pharo (not a critic, a
>> reality),
>> >> > that may be not true for Pharo. And that’s actually what happens,
>> Pharo
>> >> > crashes.
>> >>
>> >> Hi esteban
>> >>
>> >> What would be a way to fix the process and make your work easier?
>> >>
>> >> If you do not know what can be a release candidate then who can?
>> >> We should really improve this situation.
>> >>
>> >> Stef
>> >>
>> >>
>> >> > I tried to avoid a bit this problem with our fork and nightly builds
>> >> > that
>> >> > runs the pharo tests (to knew about problems as early as possible).
>> But
>> >> > to
>> >> > be honest I didn’t have the time (and the will) to work on it
>> recently,
>> >> > then
>> >> > pharo fork is in practice stalled. I will

Re: [Pharo-dev] Why is FileSystem's Path class private ?

2018-03-30 Thread Eliot Miranda
Hi Alistair,

> On Mar 30, 2018, at 7:57 AM, Alistair Grant  wrote:
> 
> Hi Damien,
> 
>> On 30 March 2018 at 14:59, Damien Pollet  wrote:
>> The class Path from FileSystem is documented as private… but it should be
>> public, shouldn't it?
>> 
>> First hint is that it's named with a public name: (Path not FileSystemPath).
>> 
>> Then, consider the following use-case : if you have a file name in some
>> config file (.ini, .toml…) then that's really a Path, not a resolved
>> FileReference.
> 
> The file name is implicitly attached to a file system, so it is
> resolved (using Pharo terminology).  If you pass the path to a
> program, e.g. an editor, it has enough information to be able to open
> the file.
> 
> In Unix, everything is mounted on the root file system, so there's no
> need to specify a file system.  Pharo's model allows for multiple file
> systems, e.g. the disk, and multiple memory and zip file systems.
> 
> Path's by themselves don't provide much functionality - you can't
> actually access the file or directory you think is represented by the
> path; you have to know which file system the path is attached to.
> 
> It's the FileReference that associates the path and the file system,
> and provides all the public interface.

But isn't the notion of a default file system (the root of the file system on 
the current machine, or the current image directory) so clear and natural that 
there should be a public API that, by default, resolves paths against that root?
 It seems to me that the benefit of the convenience here is large.
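
Concretely, something like this (a sketch against the existing Pharo
FileSystem API):

   | path |
   path := Path * 'config' / 'settings.ini'.        "a bare Path; no file system attached"
   (FileSystem disk referenceTo: path) exists.       "explicitly bound to the default disk file system"
   'config/settings.ini' asFileReference exists.     "the String shorthand that binds to the same default"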

> 
> HTH,
> Alistair

Eliot,
_,,,^..^,,,_ (phone)

> 
> 
>> I'm not talking about RelativePath and AbsolutePath, those can be hidden
>> away since Path provides the factory and DSL methods. IMHO they should still
>> be renamed to a more private name.
>> 
>> --
>> Damien Pollet
>> type less, do more [ | ] http://people.untyped.org/damien.pollet
> 



Re: [Pharo-dev] Some segfault crashes and freezes when trying to re-run saved pharo 7 images

2018-03-28 Thread Eliot Miranda
Hi Holger,

On Mon, Mar 26, 2018 at 8:32 AM, Holger Freyther  wrote:

>
>
> > On 26. Mar 2018, at 00:00, Eliot Miranda 
> wrote:
> >
> > Hi Holger,
>
> Hey!
>
>
> > Is the intent of CCallOut with: aBlock to collect and defer
> deallocations until the block completes?  I think it's nice but complex and
> wonder how general it is.  But it seems like it would impact a lot of code
> and require a lot of effort changing existing code bases.  My handle scheme
> is only intended to fix the issue of images crashing on startup. The
> problems with crashing on startup being a) one loses one's work and b) the
> issue is hard to debug.  That, for me, motivates something like the simple
> fix I p[roposed.  I'm not standing in the way of something more beautiful,
> but I do believe that one shouldn't make the perfect the enemy of the good.
>
> By all means, let's have a smart pointer! It will be beneficial for
> freetype and many other places. Nobody likes crashing images (for a mistake
> made in a previous run).
>
>
> My intent with the CCallOut is inhibit image saving until the code is
> outside a sequence of (interruptible, hence the backward jump int the
> example) C calls working with one or more pieces of manually managed
> memory. But maybe it is best to call it by what it does instead of finding
> a name of where it is used.
>
> I can't come up with a better example right now. But I think a
> Smartpointer wouldn't be of much help with a sequence of strtok calls.
>
> word := CStringAPI strtok: cStringPtr safeBytes separator: '\r'.
> [word isNil] whileFalse: [
> word = 'Foo'...
> word = 'Bla'...
> word := CStringAPI strtok: nil separator: '\r'
>  - Image Save happening right here and word is not nil -
> ].
>
> I hope this is more clear.
>

Right.  So the VM could have a flag that, when set, causes the snapshot
primitive to fail, and that flag would be cleared only when all FFI calls
had unwound.  Would that be enough?  What would you have the image do if
snapshot fails because FFI calls are in progress?
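
On the image side I imagine something like the sketch below sitting behind a
CCallOut with: (purely illustrative; CallOutDepth and snapshotSafe are
invented names, not an existing API, and concurrency is ignored for brevity):

    CCallOut class >> with: aBlock
        "Count nested call-out sequences in the (invented) class variable
         CallOutDepth so that the snapshot code, or the VM flag above, can
         tell whether any manually managed C memory is still in use."
        CallOutDepth := (CallOutDepth ifNil: [0]) + 1.
        ^ aBlock ensure: [CallOutDepth := CallOutDepth - 1]

    CCallOut class >> snapshotSafe
        "Answer whether a snapshot taken now would avoid capturing dangling
         C pointers from an interrupted call-out sequence."
        ^ (CallOutDepth ifNil: [0]) = 0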


> holger
>

_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Some segfault crashes and freezes when trying to re-run saved pharo 7 images

2018-03-25 Thread Eliot Miranda
Hi Holger,

On Sat, Mar 24, 2018 at 9:09 PM, Holger Freyther  wrote:

>
>
> > On 24. Mar 2018, at 19:22, Eliot Miranda 
> wrote:
> >
>
> Dear Eliot,
>
>
> >> d.) Re-write FreeType with Alien and just use the Plugin to
> conveniently link/load to freetype..
> >
> > I wonder if there is sense in trying to come up with a general memory
> handle object that includes a session identifier, so that attempts to free
> stale memory always fail.
>
> I think it would help with the FT2Handle (to invalidate it on a new
> session) but it will not solve other image resume/FFI issues. Let's assume
> we have multiple calls into a C library (get a handle, call 1st method,
> call 2nd method, have a loop with backwards jumps). Something like:
>
>
>| memory |
>memory := GetSomeMemorySomewhere.
>
>1 to: 5 do: [
>  self doSomeCStuffWithMemory: memory pointerAndCheckStillValid.
>  ... more C stuff
>].
>
> We could be interrupted at any point and when the execution is resumed the
> handles might be invalid. In Python terminology something like a context
> manager could help:
>
>
> # Image saving is delayed/inhibited until after the callout chain
> CCallOut with: [
>1 to: 5 do: [
>  self doSomeCStuffWithMemory: memory pointerAndCheckStillValid.
>  ... more C stuff
>].
> ]
>
> What do you think?
>

Is the intent of CCallOut with: aBlock to collect and defer deallocations
until the block completes?  I think it's nice but complex and wonder how
general it is.  But it seems like it would impact a lot of code and require
a lot of effort changing existing code bases.  My handle scheme is only
intended to fix the issue of images crashing on startup. The problems with
crashing on startup being a) one loses one's work and b) the issue is hard
to debug.  That, for me, motivates something like the simple fix I
proposed.  I'm not standing in the way of something more beautiful, but I
do believe that one shouldn't make the perfect the enemy of the good.

_,,,^..^,,,_
best, Eliot


Re: [Pharo-dev] Some segfault crashes and freezes when trying to re-run saved pharo 7 images

2018-03-24 Thread Eliot Miranda
Hi Holger,


> On Mar 24, 2018, at 10:03 AM, Holger Freyther  wrote:
> 
> 
> 
>> On 24. Mar 2018, at 16:42, Holger Freyther  wrote:
>> 
>> 
>> 
>> 1.) FT2Handle class>>#startUp: isn't called. Which means 
>> FreeTypeFace>>#beNull has not been called yet!
>> 
> 
> 
> 
>> I think implementing:
>> 
>>FreeTypeExternalMemory class >> #bytes: aByteArray
>>^(aByteArray copy)
>>pin;
>>yourself
>> 
>> could solve most of it? (We can argue about the copy...)
> 
> It doesn't because most of FT2Plugin expects the "handle" it invented. But if 
> we look at it...
> 
> FreeTypeFace>>validate
> SmalltalkImage>>session
> SessionManager class>>default
> SessionManager>>currentSession
> SmalltalkImage>>session
> FreeTypeFace>>create
> FreeTypeFace(FT2Face)>>newFaceFromExternalMemory:index:
> FreeTypeExternalMemory>>validate
> FreeTypeExternalMemory(FT2Handle)>>isValid
> 
> FreeTypeFace>>#validate already does:
> 
>(session == Smalltalk session
>...
> 
> 
> And this is why FreeTypeFace>>#create is being called and that will call 
> validate on the FreeTypeExternalMemory instance...
> 
> 
> FreeTypeExternalMemory>>#validate just checks if the handle isValid, but
> remember point 1.) from my previous mail: we have not _yet_ cleared the FT2Handle
> subinstances, so the memory is all good.
> 
> 
> So back to the ideas.
> 
> a.) Make sure FT2Handle class>>#startUp runs a lot earlier
> b.) Find out which UI thing uses FreeType earlier (but FreeType can be used
> in non-GUI apps, e.g. to draw text to an image...)
> c.) Keep the session in FreeTypeExternalMemory as well and use it in
> >>#validate (not relying on the startUp order)
> d.) Re-write FreeType with Alien and just use the Plugin to conveniently
> link/load to freetype.

I wonder if there is sense in trying to come up with a general memory handle 
object that includes a session identifier, so that attempts to free stale
memory always fail.
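Roughly this (a sketch only; ExternalSessionHandle, its instance variables
handle and session, and the selectors below are made-up names, with
Smalltalk session being the only existing API used):

    ExternalSessionHandle class >> on: anExternalAddress
        "Record the session in which the C memory was allocated."
        ^ self new setHandle: anExternalAddress session: Smalltalk session

    ExternalSessionHandle >> setHandle: anExternalAddress session: aSession
        handle := anExternalAddress.
        session := aSession

    ExternalSessionHandle >> free
        "If the memory was allocated in a previous VM session its address is
         meaningless now; drop the handle instead of corrupting the C heap."
        session == Smalltalk session ifFalse: [^ handle := nil].
        self primFree.
        handle := nil

FT2Handle and FreeTypeExternalMemory could then hold one of these instead of
a raw address, and stale handles would simply become null on resume.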

> holger



  1   2   3   4   5   6   7   8   >