[Pharo-users] Comparison of blocks

2023-04-12 Thread Steffen Märcker

Hi!

In VisualWorks, blocks can be compared with each other. In Pharo, the
comparison just checks for identity. Is this on purpose? For reference,
this is how BlockClosure>>= is implemented in VW:


= aBlockClosure
^aBlockClosure class = self class and:
 [method = aBlockClosure method and:
  [outerContext = aBlockClosure outerContext and:
   [copiedValues = aBlockClosure copiedValues]]]
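
For example (assuming #copy is not overridden, so it answers a distinct block
object with the same method, outer context and copied values):

| b |
b := [:x | x + 1].
b = b copy.
    "VW: true, by the structural comparison above"
    "Pharo: false, since #= falls back to identity and the copy is a different object"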

Kind regards,
Steffen


[Pharo-users] Re: [vwnc] Block evaluation with n+1 arguments

2023-04-12 Thread Steffen Märcker

Dear Richard,


thanks for elaborating on your ideas. As I am still figuring out what works
best, I'll give them a try, especially the approach of letting the source deal
with passing the arguments to the block, though this requires more changes.



The problem is NOT, as some commentators apparently think, that
you are using a block.


Indeed. A block provided by the user is actually the natural way in my
case.


The problem is that having thought of
one way to wire things up -- not an unreasonable way, in fact --
you concentrated on making *that* way faster instead of looking
for other ways to do it.


You're right. I first wanted to see how far I can get with this "direct"
approach before trying other techniques. :-D


All the best!
Steffen


[Pharo-users] Re: Porting from VW to Pharo

2023-04-12 Thread Christian Haider
Hi Steffen,

 

thanks for trying and asking!

 

I was loading the needed code into an 8.3, 64-bit virgin image and realized that 
loading is not that straightforward and is described too briefly.

First, you need a non-default setting for the store prerequisites. I added this 
to the store access page: https://wiki.pdftalk.de/doku.php?id=storeaccess . It 
is critical to load the prereqs from store and not from parcels!

 

The first thing to load is the bundle {Smalltalk Transform Project}.

To see examples, you need to load the subject of the transformation: PDFtalk.

You need to load the top bundle {PDFtalk Project}, which includes the test 
classes you are missing in your image.

Finally, load the [Pharo Fileout PDFtalk] package.

I improved the landing page 
https://wiki.pdftalk.de/doku.php?id=smalltalktransform a bit to make this 
clearer.

 

I just tried and this loads without errors or warnings.

 

(Actually, it is good that the load did not work for you, because I introduced a 
mistake in January which causes an 8.3 image to crash when you open a browser. 
Sorry for that.)

 

Now you should be all set for generating a fileout of PDFtalk for Pharo (in the 
current unfinished state).

 

Thanks for spotting the problems with the documentation. I will go over it 
tomorrow.

I am quick to rename and make structural changes when things are not 
working as I want… But the docs should be correct, of course.

 

 

About the project structure.

Currently, everything belonging to a project needs to be transformed in one go. 
This is not a big deal, because all code transformations are described at the 
package level and can easily be recombined as the bundle structure changes.

 

The last piece of the transformation puzzle is to make the transformations 
modular, so that the renamings of prerequisite packages can be used without the 
need to transform the prerequisites as well. I hope to get to that soon…

In the meantime, I would start with your Core project to get a feel for the 
mechanics. I am sure the rest will fall nicely into place.

 

About how to structure your code in Pharo with Git: I don't know much about 
that. Actually, I would also be interested in some guidelines, to bake them into 
the transformations…

 

If you are seriously interested, we could have an online session to hack around 
with it…

 

Cheers,

Christian

 

 

From: Steffen Märcker
Sent: Tuesday, 11 April 2023 17:52
To: Any question about pharo is welcome
Subject: [Pharo-users] Re: Porting from VW to Pharo

 

Dear Christian and Richard,

 

thanks for your answers. I'll try to go through the process step by step and 
come back with questions to the list if that's okay.

 

First, after loading the "Pharo Fileout PDFTalk", VW (8.3, 64 Bit) shows two 
unloadable definitions:

- PostscriptInterpreterTests>>_ph_testOperatorNotFound
- ColorValueTest>>_ph_testBridgedNamedColors

Neither class is loaded.

 

 

 

Second, it appears that some of the selectors mentioned on 
https://wiki.pdftalk.de/doku.php?id=smalltalktransformdocumentation have been 
renamed, e.g., PackageChanges>>unusedClasses

 

More generally, regarding project structure: what is the best approach to port a 
project that consists of multiple loosely coupled packages (not in a bundle), 
some of which are optional? For example:

- Package Project Core

- Package Project Core Tests (requires Core)

- Package Project Extension A (requires Core)
- Package Project Extension A Tests (requires Extension A)

- Package Project Examples (requires Core and Extension A)

 

And how should I structure this on the Pharo side and in an Iceberg repository? 
One Git repository per package, or all in the same one? Is there a guide to this 
or a specific MOOC lesson?

 

Kind regards,

Steffen

 

 


Christian Haider wrote on Thursday, 6 April 2023 18:16:00 (+02:00):

Yes, PDFtalk is the only example, because it was created to port that library. 
Any other uses are welcome.

 

The project has been dormant for a year now because of other obligations, but I 
hope to resume soon.

 

The documentation is, as Richard notes, in a suboptimal state. I think that the 
information is still accurate.

Any help with this would be welcome, for example by asking questions or by 
pointing out concrete issues.

 

Christian

 

From: Richard Sargent <richard.sarg...@gemtalksystems.com>
Sent: Thursday, 6 April 2023 17:55
To: Any question about pharo is welcome <pharo-users@lists.pharo.org>
Subject: [Pharo-users] Re: Porting from VW to Pharo

 

The best(?) place to start is perhaps 
https://wiki.pdftalk.de/doku.php?id=smalltalktransform.

The only examples are various ports of PDFtalk (from VisualWorks) to Pharo, 
Squeak, GemStone, and VAST.

 

PDFtalk is quite complex and the porting rules are correspondingly complex. The 
Transform documentation does leave something to be desired.

 

On Thu, Apr 6, 2023 at 8

[Pharo-users] Collection>>reduce naming

2023-04-12 Thread Steffen Märcker

Hi!

I wonder whether there was a specific reason to name this method #reduce:?
I would have expected #fold:, as this is the more common term for what it
does. In fact, even the comment reads "Fold the result of the receiver
into aBlock", whereas #reduce: is the common term for what we do with
#inject:into: .
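
For example (if I read the implementations correctly), both of the following
answer 10; the only difference is where the seed value comes from:

#(1 2 3 4) reduce: [:a :b | a + b].          "seeded with the first element"
#(1 2 3 4) inject: 0 into: [:a :b | a + b].  "seeded with the explicit value 0"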

I am asking not to annoy anyone but out of curiosity. I figured this out
only through some weird behaviour after porting some code that (re)defines
#reduce .

Ciao!
Steffen


[Pharo-users] Re: [vwnc] Block evaluation with n+1 arguments

2023-04-12 Thread Richard O'Keefe
You say

- The source object returns multiple values as a tuple (for good reasons).
- The block processes these values but needs another argument (at the
first place).


On the first point, we have to take your word for it.
It's not clear why you could not pass an n+1-element array to
the source method and have it fill in elements after the first
rather than having it allocate a new array.  If you are concerned
about object allocation in a tight loop this would be a good
place to start.

argArray := Array new: block argumentCount.
...
   source compute: i into: argArray.
   argArray at: 1 put: i.
   block valueWithArguments: argArray.

If for some reason it is utterly impossible to modify
'source compute: i' in this way, we can *still* use the
technique of allocating an array once for the whole loop.

argArray := Array new: block argumentCount.
...
    argArray at: 1 put: i;
        replaceFrom: 2 to: argArray size with: (source compute: i).
    block valueWithArguments: argArray.

This seems like the smallest possible change to your code.
You still have the overhead of copying from one array to another
-- which is why I prefer modifying #compute: -- but you do not
have the overhead of allocating an array per iteration.
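
For concreteness, here is a self-contained toy version of this reuse idiom;
the three-argument block and the {i * 2. i * 3} literal merely stand in for
your real block and for the result of 'source compute: i':

| block argArray |
block := [:i :a :b | i + a + b].
argArray := Array new: block argumentCount.
1 to: 1000 do: [:i |
    argArray
        at: 1 put: i;
        replaceFrom: 2 to: argArray size with: {i * 2. i * 3}.
    block valueWithArguments: argArray].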

On the second point, right there you have the assumption that is
limiting your vision.

You are viewing the problem as "pass an extra first argument to
the block" when you *should* frame it as "ensure that the block
knows the value of i SOMEHOW".  Presumably these blocks are
generated by code written by you.

So let's start with
Someclass
  methods for: 'generating blocks'
blockFor: aSituation
  ^[:x0 :x1 ... :xn | ]

block := Someclass blockFor: theSourceSituation.
(1 to: 1000) do: [:i | | args |
  args := source compute: i.
  block valueWithArguments: {i} , args]

So now we change it to

Someclass
  methods for: 'generating blocks'
blockFor: aSituation sharing: stateObject
  ^[:x1 ... :xn | |x0|
  x0 := stateObject contents.
  ...]

ref := Ref with: 0.
block := Someclass blockFor: theSourceSituation sharing: ref.
1 to: 1000 do: [:i | |args|
  ref contents: i.
  args := source compute: i.
  block valueWithArguments: args].

Ref is an actual class in my library modelled on the Pop-2 and SML
types of the same name.  It's not important.  What *is* important is
that information can be supplied to a block through a shared object
as well as through a parameter.
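
If you do not have such a class at hand, a minimal Ref-like holder is easy to
sketch (an Association or a one-element Array would serve the same purpose):

Object subclass: #Ref
    instanceVariableNames: 'contents'
    classVariableNames: ''
    package: 'Scratch'

Ref class >> with: anObject
    "Answer a new holder initialised with anObject."
    ^self new contents: anObject; yourself

Ref >> contents
    "Answer the currently held value."
    ^contents

Ref >> contents: anObject
    "Replace the held value."
    contents := anObject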

I am a little bit twitchy about the 'coincidence' of the size of
#compute:'s result and the argument count of the block.  Why not
pass the block to #compute: so that there never is any array in
the first place?

compute: index
   ... ^{e1. ... en} ...

=>
compute: index thenDo: aBlock
   ... ^aBlock value: e1 ... value: en ...

ref := Ref with: 0.
block := Someclass blockFor: theSourceSituation sharing: ref.
1 to: 1000 do: [:i |
  ref contents: i.
  source compute: i thenDo: block].

Now there are even fewer arrays being allocated and no use of
#valueWithArguments: in any guise.

The problem is NOT, as some commentators apparently think, that
you are using a block.  The problem is that having thought of
one way to wire things up -- not an unreasonable way, in fact --
you concentrated on making *that* way faster instead of looking
for other ways to do it.

We have several idioms here:
  Reuse Object (convert an allocation per iteration to an allocation
  per loop by reinitialising an object instead of allocating a new one)
  Communicate Through Shared Microstate (communicate information between
  a method and a block or object through a 'microstate' object created
  by the method and passed when the block or object is created)
  Multiple Values by Callback (instead of 'returning' multiple values in
  a data structure, pass a block to receive those values as parameters).



On Wed, 12 Apr 2023 at 04:44, Steffen Märcker  wrote:

> Hi!
>
> First, thanks for your engaging answers Richard, Stephane and the others!
>
> The objective is to avoid unnecessary object creation in a tight loop that
> interfaces between a value source and a block that processes the values.
> - The source object returns multiple values as a tuple (for good reasons).
> - The block processes these values but needs another argument (at the
> first place).
> We do not know the number of values at compile time but know that they
> match the arity of the block. Something like this (though more involved in
> practice):
>
> (1 to: 1000) do: [:i | | args |
> args := source compute: i.
> block valueWithArguments: {i} , args ]
>
> Since prepending the tuple with the first argument and then sending
> #valueWithArguments: creates an intermediate Array, I wonder whether we can
> avoid (some of) that overhead in the loop without changing this structure.
> Note, "{i}, args" is only for illustration and creates an additional third
> array as Steve

[Pharo-users] Fwd: [Esug-list] [IWST 2023]Call for Papers

2023-04-12 Thread stephane ducasse


> Begin forwarded message:
> 
> From: Gordana Rakic via Esug-list 
> Subject: [Esug-list] [IWST 2023]Call for Papers
> Date: 11 April 2023 at 19:40:35 CEST
> To: esug-l...@lists.esug.org
> Reply-To: Gordana Rakic 
> 
> Call For Papers
> IWST 2023: International Workshop on Smalltalk Technologies 
> Lyon, France; August 29th-31st, 2023
> Goals and scope
> The goal of the workshop is to create a forum around contributions and 
> experiences in building or using technologies related to Smalltalk. While the 
> maturity of the presented ideas and results is not crucial, it is expected that 
> their presentation triggers discussion and an exchange of ideas. The topic of 
> your paper can be any aspect of Smalltalk, theoretical as well as practical. 
> Authors are invited to submit research articles or industrial papers.
> 
> Important Dates
> Submission deadline: May 14th, 2023
> 
> Notification deadline: June 11th, 2023
> 
> Re-submission deadline: July 1st, 2023
> 
> Workshop: August 29th-31st, 2023
> 
> Topics
> We welcome contributions on all aspects, theoretical as well as practical, of 
> Smalltalk-related topics such as:
> 
> Aspect-oriented programming,
> 
> Design patterns,
> 
> Experience reports,
> 
> Frameworks,
> 
> Implementation, new dialects or languages implemented in Smalltalk,
> 
> Interaction with other languages,
> 
> Meta-programming and Meta-modeling,
> 
> Tools
> 
> Submissions, reviews, and selection
> We are looking for papers of two kinds:
> 
> Short position papers (5 to 10 pages) describing fresh ideas and early 
> results.
> 
> Long research papers (more than 10 pages) with a deeper description of 
> experiments and research results.
> 
> Both submissions and final papers must be prepared using the CEUR ART 
> 1-column style.
> 
> All submissions must be sent via the EasyChair submission page.
> 
> Reviewing
> 
> Submissions will be reviewed by at least 3 reviewers. Selected papers will be 
> invited to be presented at the workshop in Lyon and published in the CEUR-WS 
> Proceedings.
> 
> As the workshop format encourages bringing fresh ideas and early results to be 
> presented and discussed, and aims to give young community members a chance to 
> learn and grow, submissions with discussion potential may be conditionally 
> accepted. In this case, authors are expected to strictly follow the 
> recommendations of the reviewers and resubmit a new version for a second, fast 
> review by the chairs, in collaboration with the assigned reviewers, who will 
> make the final decision.
> 
> Best Paper Award
> 
> To encourage the submission of high-quality papers, the IWST organizing 
> committee is very proud to announce a Best Paper Award for this edition of 
> IWST.
> 
> We thank our financial contributors, who make it possible to award prizes for 
> the three best papers (estimated): 1000 USD for first place, 600 USD for second 
> place and 400 USD for third place.
> 
> The ranking will be decided by the program committee during the review 
> process. The awards will be given during the ESUG conference social event.
> 
> The Best Paper Award will take place only with a minimum of six submissions. 
> Note also that, to be eligible, a paper must be presented at the workshop by 
> one of the authors and that the presenting author must be registered for the 
> ESUG conference.
> 
> Program chairs
> Stephane Ducasse, Inria Lille, France (chair),
> Gordana Rakic, University of Novi Sad, Serbia (chair)
> Program committee
> 
> Nour Agouf, Inria Lille, France,
> Vincent Blondeau, Lifeware, Switzerland,
> Cedrick Beler, Ecole Nationale d Ingenieurs de Tarbes, Hautes-Pyrenees, 
> France,
> Nicolas Cardozo, Universidad de los Andes, Bogota, Colombia,
> Celine Deknop, Universite catholique de Louvain (UCL), Belgium,
> Michele Lanza, Software Institute, Universita della Svizzera italiana, 
> Lugano, Switzerland,
> Eric Lepors, Thales DMS, France,
> Dave Mason, Ryerson University, Toronto, Canada,
> Kim Mens, Universite catholique de Louvain (UCL), Belgium,
> Ana-Maria Oprescu, University of Amsterdam, Netherlands,
> Jean Privat, University of Quebec in Montreal, Canada,
> Pooja Rani, University of Bern, Switzerland,
> Larisa Safina, Inria Lille, France,
> Joao Saraiva, University of Minho, Portugal,
> Benoît Verhaeghe, Berger-Levrault, Lyon, France,
> Oleksandr Zaytsev, Cirad, UMR SENS, France
> ___
> Esug-list mailing list -- esug-l...@lists.esug.org
> To unsubscribe send an email to esug-list-le...@lists.esug.org