Re: [Pharo-dev] Reflecting on data (literal) object syntax

2017-07-01 Thread Eliot Miranda
Hi Norbert,


> On Jul 1, 2017, at 7:36 AM, Norbert Hartl  wrote:
> 
> 
>>> On 30.06.2017 at 21:14, Stephane Ducasse wrote:
>>> 
>>> But what is DataFrame?
>> 
>> the new collection done by alesnedr from Lviv. It is really nice but
>> does not solve the problem of the compact syntax.
>> 
>>>> 
>>>> STON fromString: 'Point[10,20]'
>>>> 
>>> Same goes for JSON.
>>> 
>>>> We were brainstorming with Marcus and we could have a first nice extension:
>>>> 
>>>> { 'x' -> 10 .'y' -> 20 } asObjectOf: #Point.
>>> 10@20
>>>> 
>>>> Now, in addition, I think that there is value in having an object
>>>> literal syntax.
>>>> 
>>>> I pasted the old mail of Igor on object literals because I like the
>>>> idea, since it does not add any change to the parser.
>>>> Do you remember what the problems raised by this solution were (besides
>>>> the fact that it had too many # and the order was, as in STON, not
>>>> explicit)?
>>>> 
>>>> I would love to have another pass on the idea of Igor.
>>> 
>>> What I don't like about it is that the object literal exposes the internal 
>>> implementation of the object. Everything is based on indexes, so it could 
>>> suffer from the same problem as Fuel: when you don't have the exact same code, 
>>> the deserialization fails.
>> 
>> Indeed this is why
>> { 'x' -> 10 .'y' -> 20 } asObjectOf: #Point.
>> could be more robust.
>> We could extend the object literal syntax to use associations for
>> non-collections.
>> 
> I think it is more robust and more explicit. I do not know what the semantics 
> of detecting something like #Point as a class name would be. Is it then 
> forbidden to use symbols with uppercase letters? I think something like
> 
> { #x -> 10 . #y -> 20 } asObjectOf: #Point
> 
> handles the format with implicit knowledge of the type, while the explicit 
> version would be
> 
> { #Point -> { #x -> 10 . #y -> 20 } } asObject
> 
> And then nested objects are as easy as
> 
> { #PointCollection -> {
>     #points -> { { #Point -> { #x -> 10 . #y -> 20 } }.
>                  { #Point -> { #x -> 5 . #y -> 8 } } } } } asObject

The -> messages are just noise and add additional processing for nothing.  This 
is just as effective:

{ #Point. { #x. 10 . #y. 20}} asObject

{ #PointCollection. { #points. { { #Point. { #x. 10 . #y. 20 } }.
                                 { #Point. { #x. 5 . #y. 8 } } } } } asObject

So an object is a pair of a class name and an array of slot specifier pairs, 
and a slot specifier is a pair of a slot name and a value.  And of course that 
means that many object specs can be literal, which is useful for storing in 
pragmas etc:

#(PointCollection
    (points ((Point (x 10 y 20))
             (Point (x 5 y 8))))) asObject
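
To make the round trip concrete, a rough workspace sketch of consuming such a 
flat spec with plain reflection (nothing official; nested specs would need a 
recursive walk, and error handling is omitted):

| spec obj names |
spec := #(Point (x 10 y 20)).
"look the class up by name, then fill its named instance variables pairwise"
obj := (Smalltalk globals at: spec first) basicNew.
names := obj class allInstVarNames.
spec second pairsDo: [ :slot :value |
    obj instVarAt: (names indexOf: slot asString) put: value ].
obj  "10@20"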

> 
> would give a PointCollection of two point objects. My future wish would be 
> that there is an object literal parser that takes all of the information from 
> the format. And then an object literal parser that is aware of slot 
> information, meaning that the type information can be gathered from the 
> object class instead of having to write it in the format. In the 
> PointCollection the slot for points would have the type information #Point 
> attached. The format then becomes
> 
> { #points -> {
>     { #x -> 10 . #y -> 20 }.
>     { #x -> 5 . #y -> 8 } } }
> 
> which would then be the equivalent of something like this JSON:
> 
> { "points" : [
>  { "x" : 10, "y" : 20 },
>  { "x" : 5, "y" : 8 } ] }
> 
> What I don't know is how to solve the difference between a dictionary and a 
> collection of associations.

That's incidental to the format and internal to the parser.  If the parser 
chooses to build a dictionary as it parses, so be it.  The point is that the 
output is as you specify: a tree of objects.
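
For instance, the same flat pair list can be poured into either shape, and only 
the consumer has to care which one it wants (a throwaway sketch):

| pairs dict assocs |
pairs := #(x 10 y 20).
dict := Dictionary new.
pairs pairsDo: [ :key :value | dict at: key put: value ].
assocs := OrderedCollection new.
pairs pairsDo: [ :key :value | assocs add: key -> value ].
{ dict. assocs }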

The thing to think about is how to introduce labels so that sub objects can be 
shared in the graph, the naïve deepCopy vs deepCopyUsing: issue.
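
For example (a sketch of the failure mode, not of a proposed notation): a shared 
sub-object is written twice and comes back as two distinct objects:

| p shared rebuilt |
p := 3 @ 4.
shared := Array with: p with: p.
shared first == shared second.    "true: one point, referenced twice"
"a naive spec would be #(Array (Point (x 3 y 4)) (Point (x 3 y 4)))"
rebuilt := Array with: 3 @ 4 with: 3 @ 4.
rebuilt first == rebuilt second.    "false after the naive round trip"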

> 
> Norbert
> 
> 
> 
>>> As a dictionary is both an array of associations and a key-value store, it 
>>> works perfectly there. But for other objects I have doubts. Especially since 
>>> in a lot of contexts you need to have a mapping of internal state to 
>>> external representation. It can be applied afterwards, but I'm not sure that 
>>> can work all the time.
>> 
>> Yes, after that we should focus on the frequent cases. And maybe having a
>> literal syntax for dictionaries would be good enough.
>> 
>> I will do another version of Igor's proposal with associations to see
>> how it feels.
>> 
>>> 
>>> my 2 cents,
>>> 
>>> Norbert
>>> 
>>>> 
>>>> Stef
>>>> 
>>>> 
>>>> 
>>>> 
>>>> -- Forwarded message --
>>>> From: Igor Stasenko 
>>>> Date: Fri, Oct 19, 2012 at 1:09 PM
>>>> Subject: [Pharo-project] Yet another Notation format: Object literals
>>>> To: Pharo Development 
>>>> 
>>>> 
>>>> Hi,
>>>> as I promised before, here is the simple Smalltalk-based literal format.
>>>> It is based on Smalltalk syntax, and so, unlike 

Re: [Pharo-dev] Baseline question

2017-07-01 Thread Peter Uhnak
On Sat, Jul 01, 2017 at 02:35:07PM +0200, Stephane Ducasse wrote:
> Hi
> 
> I'm trying to define a baseline for our project and I defined it as
> 
> baseline: spec
>     <baseline>
>     spec for: #common do: [ spec
>         baseline: 'SmaCC' with: [ spec
>             repository: 'github://ThierryGoubier/SmaCC';
>             loads: 'SmaCC-GLR-Runtime' ];
>         package: 'SmaCC-Solidity'
>         "we could say that SmaccSolidity depends on Smacc-Runtime but I do not
>         know how to say it"
>     ]
> 
> Now I thought that I could try to load it using the following
> 
> Metacello new
> baseline: 'SmaccSolidity';
> repository: 'github://RMODINRIA-Blockchain/SmaCC-Solidity';
> load.
> 
> 
> But I get an error telling me that the baseline constructor does not
> understand load ;(

It loaded ok for me in a fresh image (Pharo 6), maybe a local problem?

As for dependency on a package, here's an older answer by Thierry: 
http://forum.world.st/How-to-depend-on-another-Github-repo-from-the-baseline-tp4812546p4812551.html
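
The gist, as a sketch (assuming you keep the baseline: 'SmaCC' declaration from 
your baseline): inside the #common block the package just requires the project 
name you declared, which also gives Metacello the load order:

spec
    package: 'SmaCC-Solidity'
    with: [ spec requires: #('SmaCC') ]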

Peter



Re: [Pharo-dev] Reflecting on data (literal) object syntax

2017-07-01 Thread Norbert Hartl

> On 30.06.2017 at 21:14, Stephane Ducasse wrote:
> 
>> But what is DataFrame?
> 
> the new collection done by alesnedr from Lviv. It is really nice but
> does not solve the problem of the compact syntax.
> 
>>> 
>>> STON fromString: 'Point[10,20]'
>>> 
>> Same goes for JSON.
>> 
>>> We were brainstorming with Marcus and we could have a first nice extension:
>>> 
>>> { 'x' -> 10 .'y' -> 20 } asObjectOf: #Point.
>> 10@20
>>> 
>>> Now, in addition, I think that there is value in having an object
>>> literal syntax.
>>> 
>>> I pasted the old mail of Igor on object literals because I like the
>>> idea, since it does not add any change to the parser.
>>> Do you remember what the problems raised by this solution were (besides
>>> the fact that it had too many # and the order was, as in STON, not
>>> explicit)?
>>> 
>>> I would love to have another pass on the idea of Igor.
>> 
>> What I don't like about it is that the object literal exposes the internal 
>> implementation of the object. Everything is based on indexes, so it could 
>> suffer from the same problem as Fuel: when you don't have the exact same code, 
>> the deserialization fails.
> 
> Indeed this is why
> { 'x' -> 10 .'y' -> 20 } asObjectOf: #Point.
> could be more robust.
> We could extend the object literal syntax to use associations for
> non-collections.
> 
I think it is more robust and more explicit. I do not know what the semantics 
of detecting something like #Point as a class name would be. Is it then 
forbidden to use symbols with uppercase letters? I think something like

{ #x -> 10 . #y -> 20 } asObjectOf: #Point

handles the format with implicit knowledge of the type, while the explicit 
version would be

{ #Point -> { #x -> 10 . #y -> 20 } } asObject

And then nested objects are as easy as

{ #PointCollection -> {
    #points -> { { #Point -> { #x -> 10 . #y -> 20 } }.
                 { #Point -> { #x -> 5 . #y -> 8 } } } } } asObject

would give a PointCollection of two point objects. My future wish would be that 
there is an object literal parser that takes all of the information from the 
format. And then an object literal parser that is aware of slot information, 
meaning that the type information can be gathered from the object class instead 
of having to write it in the format. In the PointCollection the slot for points 
would have the type information #Point attached. The format then becomes

{ #points -> {
    { #x -> 10 . #y -> 20 }.
    { #x -> 5 . #y -> 8 } } }

which would then be the equivalent of something like this JSON:

{ "points" : [
  { "x" : 10, "y" : 20 },
  { "x" : 5, "y" : 8 } ] }

What I don't know is how to solve the difference between a dictionary and a 
collection of associations.

Norbert

 

>> As a dictionary is both an array of associations and a key-value store, it 
>> works perfectly there. But for other objects I have doubts. Especially since 
>> in a lot of contexts you need to have a mapping of internal state to external 
>> representation. It can be applied afterwards, but I'm not sure that can work 
>> all the time.
> 
> Yes, after that we should focus on the frequent cases. And maybe having a
> literal syntax for dictionaries would be good enough.
> 
> I will do another version of Igor's proposal with associations to see
> how it feels.
> 
>> 
>> my 2 cents,
>> 
>> Norbert
>> 
>>> 
>>> Stef
>>> 
>>> 
>>> 
>>> 
>>> -- Forwarded message --
>>> From: Igor Stasenko 
>>> Date: Fri, Oct 19, 2012 at 1:09 PM
>>> Subject: [Pharo-project] Yet another Notation format: Object literals
>>> To: Pharo Development 
>>> 
>>> 
>>> Hi,
>>> as I promised before, here is the simple Smalltalk-based literal format.
>>> It is based on Smalltalk syntax, and so, unlike JSON, it doesn't need to
>>> have a separate parser (the normal Smalltalk parser is used for that).
>>> 
>>> The idea is quite simple:
>>> you can tell any object to represent itself as an 'object literal' ,
>>> for example:
>>> 
>>> (1@3) asObjectLiteral
>>> -->  #(#Point 1 3)
>>> 
>>> { 1@2.  3@4. true. false . nil  } asObjectLiteral
>>> 
>>> -> #(#Array #(#Point 1 2) #(#Point 3 4) true false nil)
>>> 
>>> (Dictionary newFrom: { 1->#(1 2 3) . 'foo' -> 'bar' }) asObjectLiteral
>>> ->
>>> #(#Dictionary 1 #(#Array 1 2 3) 'foo' 'bar')
>>> 
>>> Next thing, you can 'pretty-print' it (kinda):
>>> 
>>> #(#Dictionary 1 #(#Array 1 2 3) 'foo' 'bar') printObjectLiteral
>>> 
>>> '#(#Dictionary
>>>   1
>>>   (#Array 1 2 3)
>>>   ''foo'' ''bar'')'
>>> 
>>> 
>>> and sure thing, you can do reverse conversion:
>>> 
>>> '#(#Dictionary
>>>   1
>>>   (#Array 1 2 3)
>>>   ''foo'' ''bar'')'  parseAsObjectLiteral
>>> 
>>> a Dictionary('foo'->'bar' 1->#(1 2 3) )
>>> 
>>> Initially, I thought that it could be generic (by implementing a default
>>> Object>>#asObjectLiteral),
>>> but then after discussing it with others, we decided to leave
>>> 
>>> Object>>#asObjectLiteral to be a 

Re: [Pharo-dev] git, packages, and package dependencies (also migration from sthub to git)

2017-07-01 Thread Luke Gorrie
On 1 July 2017 at 13:28, Peter Uhnak  wrote:

> after many-o-hours spent on my git-migration tool (
> https://github.com/peteruhnak/git-migration ), I've concluded that the
> migration cannot be properly done for packages.
>

Let us hope this effort can be salvaged!


> In mcz/monticello, every package has an independent history and can change
> independently of each other.
> In git, this history is merged into a single hierarchy, therefore:
>

Git does not strictly require you to put all of your commits into a single
hierarchy. You can also have multiple root commits in the same repository
i.e. multiple parallel hierarchies of commits. Then you can combine these
together with merge commits if/when you want to have them in the same tree
(afaik.)

See 'git checkout --orphan' for introducing a new root commit into the repo.

So thinking aloud...

Perhaps you could create a dedicated branch (A, B, ...) for each package,
with its own root commit? This way each package would have a distinct
history on its private branch and not be prone to collisions. If you want
to combine multiple packages into one tree then you could create a new
branch (A+B) that merges the required per-package branches (which remain
isolated and pristine.)

Just an idea -- sorry if I have misunderstood your use case, and let me
know if this idea requires more explanation.

Cheers,
-Luke


Re: [Pharo-dev] Baseline question

2017-07-01 Thread Julien Delplanque

Hello,

To do my baselines I do the following:

1. I create a subclass of BaselineOf.

2. I create an instance method #baseline:

baseline: spec
    <baseline>
    spec
        for: #common
        do: [
            self
                defineDependencies: spec;
                definePackages: spec ]

3. I create the #defineDependencies: instance method (for example in 
SFDiff):


defineDependencies: spec
    ^ spec
        project: 'ChangeModel' with: [
            spec
                className: 'ConfigurationOfFamixDiff';
                version: #development;
                repository: 'http://smalltalkhub.com/mc/Moose/FamixDiff/main';
                loads: 'ChangeModel' ];
        project: 'FamixDiff' with: [
            spec
                className: 'ConfigurationOfFamixDiff';
                version: #development;
                repository: 'http://smalltalkhub.com/mc/Moose/FamixDiff/main';
                loads: 'Core' ];
        yourself

4. I create the #definePackages: instance method (still for SFDiff):

definePackages: spec
    ^ spec
        package: 'SimilarityFlooding' with: [ spec requires: #('ChangeModel') ];
        package: 'SimilarityFlooding-Tests' with: [ spec requires: #('SimilarityFlooding') ];
        package: 'SimilarityFlooding-Diff' with: [ spec requires: #('SimilarityFlooding') ];
        package: 'SimilarityFlooding-Diff-Tests' with: [ spec requires: #('SimilarityFlooding-Diff') ];
        package: 'SimilarityFlooding-Evaluator' with: [ spec requires: #('SimilarityFlooding' 'FamixDiff') ];
        package: 'SimilarityFlooding-DiffOrion' with: [ spec requires: #('SimilarityFlooding-Diff') ];
        yourself

Separating the dependencies and the package definitions allows me to modify 
them easily.


I hope it helps.

Julien

PS: to make a baseline use another baseline as a dependency, I put this kind 
of source code in #defineDependencies:


spec baseline: 'DependenceName' with: [
spec repository: 'github://user/repositoryname/eventualdirectory' ].
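
And a package that needs it then simply requires that name (a sketch; 
'MyPackage' is a placeholder):

spec package: 'MyPackage' with: [ spec requires: #('DependenceName') ].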


On 01/07/17 14:35, Stephane Ducasse wrote:

Hi

I'm trying to define a baseline for our project and I defined it as

baseline: spec
    <baseline>
    spec for: #common do: [ spec
        baseline: 'SmaCC' with: [ spec
            repository: 'github://ThierryGoubier/SmaCC';
            loads: 'SmaCC-GLR-Runtime' ];
        package: 'SmaCC-Solidity'
        "we could say that SmaccSolidity depends on Smacc-Runtime but I do not
        know how to say it"
    ]

Now I thought that I could try to load it using the following

Metacello new
 baseline: 'SmaccSolidity';
 repository: 'github://RMODINRIA-Blockchain/SmaCC-Solidity';
 load.


But I get an error telling me that the baseline constructor does not
understand load ;(

I read the blog of peter
https://www.peteruhnak.com/blog/2016/07/25/how-to-use-git-and-github-with-pharo/

and I'm doing exactly the same so I wonder.

Stef






Re: [Pharo-dev] git, packages, and package dependencies (also migration from sthub to git)

2017-07-01 Thread Peter Uhnak
On Sat, Jul 01, 2017 at 02:22:53PM +0200, Damien Pollet wrote:
> On 1 July 2017 at 13:28, Peter Uhnak  wrote:
> 
> > In mcz/monticello, every package has an independent history and can change
> > independently of each other.
> > In git, this history is merged into a single hierarchy, therefore:
> > * a specific version of package A mandates specific versions of
> > every other package in the project
> > * it is not possible to do a 1:1 migration from mcz to git and
> > keep this flexibility
> >
> 
> Did you ever use that feature (to achieve something precise, not just
> because it's here)?

Well the implications are more problematic. Imagine you have pkg A, with 
commits A1,A2,A3.
Now you are in A3 and you create a new package (B) and commit it (B1, which has 
A3 as a parent).
Then you move back in history to A2, or you create A4 (as a child of A2) -- 
package B no longer exists.

So you cannot go back in history for any package without affecting the rest of 
the packages. I think this is a cognitive shift for people switching. And 
practically speaking, if you want to go back for just a single package you have 
to cherry-pick, which is error-prone.


(none of this is a problem for me, as I've been using git in Pharo for a long 
time; I'm just shining a light on the impending issues for people now switching)

> 
> > The only solution I see is to either separate every package to a separate
> > git project to keep the flexibility or use git subtree/git module... which
> > are both complications...
> >
> 
> Yes, it's a pain… might be useful in some cases but I think most of the
> time versioning each package of a project independently is just cognitive
> noise.
> 
> As a git commit specifies the code in the entire project, even across
> > packages, then I wonder what is the point of expressing package
> > dependencies _within_ the project (whether in BaselineOf or Cargo).
> >
> 
> Load order?

Ah, right.

> 
> 
> -- 
> Damien Pollet
> type less, do more [ | ] http://people.untyped.org/damien.pollet



Re: [Pharo-dev] git, packages, and package dependencies (also migration from sthub to git)

2017-07-01 Thread stephan

On 01-07-17 14:22, Damien Pollet wrote:

Yes, it's a pain… might be useful in some cases but I think most of the 
time versioning each package of a project independently is just 
cognitive noise.


No, it is essential complexity in most cases. As soon as independent 
teams/people make changes you need to split projects up. How to do so to 
get the least friction is left as an exercise to the reader (see Parnas)


Stephan




Re: [Pharo-dev] [FT improvements] [GSoC] [guidance needed] move some of the data source responsibilities to the cells

2017-07-01 Thread Stephane Ducasse
Esteban is moving this week-end from another part of France to Lille,
so I think that he will get back to life on Wednesday.

On Fri, Jun 30, 2017 at 12:14 PM, Elhamer  wrote:
> Since there is no response, I am guessing that this design is okay and I
> will go for it :D.
>
> Best,
> Elhamer.
>
>
>
> --
> View this message in context: 
> http://forum.world.st/FT-improvements-GSoC-guidance-needed-move-some-of-the-data-source-responsibilities-to-the-cells-tp4952783p4953058.html
> Sent from the Pharo Smalltalk Developers mailing list archive at Nabble.com.
>



[Pharo-dev] Baseline question

2017-07-01 Thread Stephane Ducasse
Hi

I'm trying to define a baseline for our project and I defined it as

baseline: spec
    <baseline>
    spec for: #common do: [ spec
        baseline: 'SmaCC' with: [ spec
            repository: 'github://ThierryGoubier/SmaCC';
            loads: 'SmaCC-GLR-Runtime' ];
        package: 'SmaCC-Solidity'
        "we could say that SmaccSolidity depends on Smacc-Runtime but I do not
        know how to say it"
    ]

Now I thought that I could try to load it using the following

Metacello new
    baseline: 'SmaccSolidity';
    repository: 'github://RMODINRIA-Blockchain/SmaCC-Solidity';
    load.


But I get an error telling me that the baseline constructor does not
understand load ;(

I read the blog of peter
https://www.peteruhnak.com/blog/2016/07/25/how-to-use-git-and-github-with-pharo/

and I'm doing exactly the same so I wonder.

Stef



Re: [Pharo-dev] git, packages, and package dependencies (also migration from sthub to git)

2017-07-01 Thread Damien Pollet
On 1 July 2017 at 13:28, Peter Uhnak  wrote:

> In mcz/monticello, every package has an independent history and can change
> independently of each other.
> In git, this history is merged into a single hierarchy, therefore:
> * a specific version of package A mandates specific versions of
> every other package in the project
> * it is not possible to do a 1:1 migration from mcz to git and
> keep this flexibility
>

Did you ever use that feature (to achieve something precise, not just
because it's here)?

The only solution I see is to either separate every package to a separate
> git project to keep the flexibility or use git subtree/git module... which
> are both complications...
>

Yes, it's a pain… might be useful in some cases but I think most of the
time versioning each package of a project independently is just cognitive
noise.

As a git commit specifies the code in the entire project, even across
> packages, then I wonder what is the point of expressing package
> dependencies _within_ the project (whether in BaselineOf or Cargo).
>

Load order?


-- 
Damien Pollet
type less, do more [ | ] http://people.untyped.org/damien.pollet


[Pharo-dev] git, packages, and package dependencies (also migration from sthub to git)

2017-07-01 Thread Peter Uhnak
Hi,

after many-o-hours spent on my git-migration tool ( 
https://github.com/peteruhnak/git-migration ), I've concluded that the 
migration cannot be properly done for packages.

In mcz/monticello, every package has an independent history and can change 
independently of each other.
In git, this history is merged into a single hierarchy, therefore:
* a specific version of package A mandates specific versions of every 
other package in the project
* it is not possible to do a 1:1 migration from mcz to git and keep 
this flexibility

Both ways (mcz and git) have advantages and disadvantages of their own; however, 
as far as I could tell, they are incompatible, so mcz flexibility and some 
information (history) will be lost during migration.

The only solution I see is to either separate every package into a separate git 
project to keep the flexibility, or use git subtree/git submodule... which are 
both complications...

As a git commit specifies the code in the entire project, even across packages, 
then I wonder what is the point of expressing package dependencies _within_ the 
project (whether in BaselineOf or Cargo).

Peter



Re: [Pharo-dev] [Pharo-users] Custom Glamour browser for Dr. Geo scripting

2017-07-01 Thread Nicolai Hess
2017-06-30 9:55 GMT+02:00 Hilaire :

> I extended the browser definition with:
>
> browser transmit
> from: #scripts;
> from: #categories;
> to: #methods;
> when: [ :a :b |  a isMeta not ];
> andShow: [:a | self methodsIn: a  ].
> browser transmit
> from: #scripts;
> from: #categories;
> to: #methods;
> when: [ :a | a isMeta ];
> andShow: [:a | self classMethodsIn: a  ].
>
>
> However it does not work, as the #when: message always receives a class, so
> the wrong methods are listed in the method pane.
>
> I enclosed a fileout of the browser. It works independently of DrGeo.
> When one selects class methods, the instance methods are still displayed
> in the right-most pane.
>
> Any tips?
>

I don't have a solution. I just want to make clearer what the problem is.
See the attached screenshot.
The question is: is it possible to make the population of the "methods" list
dependent on the focused "Instance Methods"/"Class Methods" pane?

I tried to wire a "#focus" port, but I don't think the "categories" pane
exports any port that could be used to distinguish between the selected tabs.

>
> Thanks
>
> Hilaire
>
>
> On 29/06/2017 at 14:59, Hilaire wrote:
> > but something is missing to get the listed methods right depending on whether 
> > the category is instance or class side. I don't know how to do it.
>
> --
> Dr. Geo
> http://drgeo.eu
>
>


Re: [Pharo-dev] Epicea feedback

2017-07-01 Thread Stephane Ducasse
Thanks Martin for your excellent support.
What would be nice is to have some menus on the session items,
because often I try to click on them to see what I can do with them.

Stef

On Tue, Jun 27, 2017 at 4:17 PM, Martin Dias  wrote:
> Hello Jan, thanks for your feedback. I will have a look asap. Feel free to
> report it in fogbugz.
>
> Martín
>
> On Tue, Jun 27, 2017 at 8:26 AM, Jan Blizničenko 
> wrote:
>>
>> Hello
>>
>> I'd like to thank you for creating Epicea. It is way more user-friendly
>> and dependable than the previous change-handling tool. Browsing, applying and
>> reverting changes is way easier.
>>
>> Just one note:
>> It is not exactly clear at first look how to properly reapply changes
>> after an image has been closed without saving. If I select all the changes I
>> want to apply without using any filter, those changes are reapplied in the
>> wrong order (if I changed a method two times, first the new version is
>> applied and then the old one). As far as I understood, I have to use a
>> filter to show only the latest changes, but I think it could be clearer in
>> the UI that I need to do so (or something completely different?) for
>> applying changes.
>>
>> Thank you
>> Jan
>>
>>
>>
>> --
>> View this message in context:
>> http://forum.world.st/Epicea-feedback-tp4952661.html
>> Sent from the Pharo Smalltalk Developers mailing list archive at
>> Nabble.com.
>>
>