Re: [Pharo-users] Right repo for TaskIt and features?

2018-04-24 Thread Holger Freyther


> On 24. Apr 2018, at 23:31, Santiago Bragagnolo wrote:
> 
> 

> Yes. But with more work than the workers can handle the queue will grow. 
> Which means the (median/max) latency of the system will monotonically 
> increase.. to the point of the entire system failing (tasks handled after the 
> external deadlines expired, effectively no work being done).
> 
> 
> Normally the worker pool adjust to the minimal needed workers (there is a 
> watch dog checking how much idle processes are there, or more workers are 
> needed, and ensuring to spawn or stop process regarding to the state). 
> So, the number poolMaxSize is just a maximal limit. This limit should be set 
> for ensuring that the tasks that are running concurrently are not incurring 
> into too much resource consumption or into too much overhead leading to kind 
> of trashing. 
> I am not really friend of setting only a number for such a complex 
> problematic, but so far is the only approach I found that it does not lead to 
> a complex design. If you have better ideas to discuss on this subject, i am 
> completely open. (the same to deal with priorities by general system 
> understanding rather than absolute numbers) 


I think we might not be talking about the same thing. Any system might end up being 
driven close to or above its limits. One question is whether it can recover from that. 
Let me try to give you a basic example (and if one changes from 'dev' to a 
proper worker pool, one just needs to adjust the timings to show the same problem).

The code schedules a block that takes about one second per invocation to 
execute, yet the completion time is monotonically increasing.


| completions |
completions := OrderedCollection new.
1 to: 1000 do: [ :each |
	| start |
	start := DateAndTime now.
	[ (Delay forSeconds: 1) wait.
	  completions add: DateAndTime now - start ] schedule.
	(Delay forMilliseconds: 200) wait ].
completions


Now why is this a problem? It is a problem because once the system is in 
overload it will never recover (unless tasks are stopped). The question 
is what a framework can do to degrade gracefully. I am leaving 
this here for now.
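One direction (just a sketch — TaskIt has no such mechanism, and everything below is invented here for illustration) would be to guard scheduling with a counting semaphore, so that at most N tasks are queued or running and producers block instead of growing the queue:

| slots bounded |
"at most 4 tasks in flight; a producer sleeps in #wait when the pool is saturated"
slots := Semaphore new.
4 timesRepeat: [ slots signal ].
bounded := [ :aBlock |
	slots wait.
	[ [ aBlock value ] ensure: [ slots signal ] ] schedule ].
bounded value: [ (Delay forSeconds: 1) wait ].

With such a wrapper the 200 ms producer loop above would slow down to the workers' actual throughput instead of accumulating unbounded latency.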

holger







Re: [Pharo-users] Right repo for TaskIt and features?

2018-04-24 Thread Holger Freyther


> On 25. Apr 2018, at 08:42, Andrew Glynn  wrote:
> 
> Generally to avoid this I've used the Synapse micro service bus.  It also 
> allows the creation of an unlimited number of queues, allowing higher 
> priority tasks to "jump the queue".  ' Backpressure' is precisely what 
> message buses avoid in distributed computing.

Can you elaborate and point to which Synapse you mean? If you use a 
transport protocol like TCP (in contrast to QUIC or SCTP) there will be 
head-of-line blocking; how do you jump the queue on a single TCP connection?




Re: [Pharo-users] Right repo for TaskIt and features?

2018-04-24 Thread Andrew Glynn
Btw I think you meant "thrashing", not "trashing".
Trashing is what my team leads do when they read my code. 😉
Andrew

Re: [Pharo-users] Right repo for TaskIt and features?

2018-04-24 Thread Andrew Glynn
Generally, to avoid this I've used the Synapse microservice bus. It
also allows the creation of an unlimited number of queues, allowing
higher-priority tasks to "jump the queue". "Backpressure" is
precisely what message buses avoid in distributed computing.
One of my never-have-time-for projects is to port Synapse to Pharo.
SST has a 'start a slave on another node and route to it' methodology,
but it's hella complex, especially in terms of distributed garbage
collection etc. For real-time systems SST is great; it's not really
necessary to get into that kind of complexity otherwise.
Andrew

Re: [Pharo-users] Projects using Magritte meta models

2018-04-24 Thread Sean P. DeNigris
Rafael Luque wrote
> gitlab://… ZnUrl>>enforceKnownScheme

I think you have to do `Iceberg enableMetacelloIntegration: true.` first to
get gitlab:// URLs to work…
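Putting that together with the Metacello expression from Sean's earlier mail (the github:// URL is the one quoted later in this digest; whether the gitlab:// scheme then resolves for Rafael's repository is untested here):

"Enable Iceberg's Metacello integration so scheme-based repository URLs
(github://, gitlab://) are understood instead of raising ZnUnknownScheme."
Iceberg enableMetacelloIntegration: true.
Metacello new
	baseline: 'SmallWorld';
	repository: 'github://seandenigris/SmallWorld:master/repository';
	onConflict: [ :ex | ex allow ];
	load.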



-
Cheers,
Sean
--
Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html



Re: [Pharo-users] Right repo for TaskIt and features?

2018-04-24 Thread Andrew Glynn
What about using VertStix for remote execution?
Andrew

Re: [Pharo-users] Right repo for TaskIt and features?

2018-04-24 Thread Santiago Bragagnolo
On Tue, 24 Apr 2018 at 16:18 Holger Freyther  wrote:

>
>
> > On 24. Apr 2018, at 20:16, Santiago Bragagnolo <
> santiagobragagn...@gmail.com> wrote:
> >
> > Hi Holger!
> > I respond in bold
>
> hehe. And in the reply I am back to non rich text. Let me see if I quote
> it correctly.
>

*hahahaha, non rich? How come? I will keep bolding, hoping that if you
need to check the content you will look at it in a rich text client :D *


>
> >
> >
> >
> > On Tue, 24 Apr 2018 at 12:00 Holger Freyther  wrote:
> > Hey!
> >
>
>
> > I wondered if somebody thought of remote task execution?
> >
> > *If you mean something else, I would need more information :).
>
>
>
>
> > When you do [ action ] schedule / [ action ] future, both created tasks
> are scheduled into the default runner. The default runner is a working pool
> with a default 'poolSizeMax' on 4, meaning, limit 4 processes working over
> the tasks. (this is a dynamic configuration, you can change it by
> > TKTConfiguration runner poolMaxSize: 20. )
>
> Yes. But with more work than the workers can handle the queue will grow.
> Which means the (median/max) latency of the system will monotonically
> increase.. to the point of the entire system failing (tasks handled after
> the external deadlines expired, effectively no work being done).



*Normally the worker pool adjusts itself to the minimal number of workers
needed (there is a watchdog checking how many processes are idle, or whether
more workers are needed, and it spawns or stops processes accordingly). *
*So the number poolMaxSize is just an upper limit. This limit should be
set to ensure that the tasks running concurrently do not incur too much
resource consumption, or too much overhead leading to a kind of trashing. *
*I am not really a friend of setting just a number for such a complex
problem, but so far it is the only approach I have found that does not lead
to a complex design. If you have better ideas to discuss on this subject, I
am completely open. (The same goes for dealing with priorities through a
general understanding of the system rather than absolute numbers.) *



>
> For network connected systems I like to think in terms of "back pressure"
> (not read more from the socket than the image can handle, eventually
> leading to the TCP window shrinking) and one way of doing it is to have
> bounded queues (and/or sleep when scheduling work).
>
> I can see multiple parts of a solution (and they have different benefits
> and issues):
>
> * Be able to attach a deadline to a task (e.g. see context.Context in go)
> * Be able to have a "blocking until queue is less than X elements"
> schedule  (but that is difficult as one task might be scheduled during the
> >>#value of another task).
>
*I wouldn't mind having a second type of queue with this behaviour, with
a configuration mechanism for choosing one queue or the other, each with its
specific management encapsulated.*

*Personally, in my domains of usage (crawling, querying and
sensor/actuator work) I wouldn't use it. But I suppose you have a
better domain for this case. It would be good to discuss it to get a
better understanding of the need. *


>
>
> > Are there ideas how to add remote task scheduling? Maybe use Seamless
> for it?
> > Since you speak about seamless here, i suppose two different images,
> doesn't matter where.
> > It's not a bad idea to go seamless, but i did not go through the first
> restriction of remote executions (if the remote image can or not execute
> the task and if both images share the same semantic for the execution),
> then i did not yet checked on which communication platform to use for it
>
> Right it would need to be homogeneous images (and care taken that the
> external interface remains similar enough).
>

*I would like to understand a bit better what you are trying to do. *
*I have the hunch that you are looking for a multiple-images solution, for
load balancing between images. TaskIT is meant to plan tasks into processes
according to the needs of the local image. You seem to need something for
planning the tasks of a whole system, beyond one image, maybe taking a
process/network topology into account.*

*If it is more that side, we should discuss what extensions we can make to
TaskIt to make it suitable for this case, but surely I would be more
inclined to build a higher-level framework, or even middleware, that uses
TaskIt than to add all those complexities to TaskIt itself. The good news is
that I may need something similar, so I will be able to help there. *



>
> > Have workers connect to the scheduler? Other ideas?
> > what do you mean by connection to the scheduler? The workers we use do
> not know their pools, if that is what you mean.
>
> Let's assume scheduling a task is simple (read something from a socket)
> but the computation is expensive (database look-up, math, etc). Hopefully
> one will reach the point where one image can schedule more tasks than a
> single worker image can handle. At that point it could be neat to 

Re: [Pharo-users] get.pharo.org broken?

2018-04-24 Thread Paul DeBruicker
Ahh. Thanks Sven.  

I was just copying/pasting from the block on https://get.pharo.org/64
and it didn't work.  

Adding the trailing slash like you suggested fixes it.  







Sven Van Caekenberghe-2 wrote
> curl -L https://get.pharo.org/64
> 
> or https://get.pharo.org/64/
> 
>> On 24 Apr 2018, at 16:27, PAUL DEBRUICKER wrote:
>> 
>> Hi - 
>> 
>> 
>> curl https://get.pharo.org/64 | bash
>> 
>> 
>> gives an error on MacOS X:
>> 
>> 
>> 
>> paul@a:~/pharo/maf$ curl https://get.pharo.org/64 | bash
>>  % Total% Received % Xferd  Average Speed   TimeTime Time 
>> Current
>> Dload  Upload   Total   SpentLeft 
>> Speed
>> 100   237  100   2370 0315  0 --:--:-- --:--:-- --:--:--  
>> 315
>> bash: line 1: syntax error near unexpected token `newline'
>> bash: line 1: `> 2.0//EN">'
>> 
>> 
>> 
>> am I doing something wrong or is it broken for other people too?
>> 
>> Thanks
>> 
>> Paul





--
Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html



Re: [Pharo-users] Projects using Magritte meta models

2018-04-24 Thread Rafael Luque
Stef,

I've sent a pull request via GitHub with my reviews:
https://github.com/SquareBracketAssociates/Booklet-Magritte/pull/2



2018-04-21 8:49 GMT+00:00 Stephane Ducasse :

> Cool I will have a look when I go back to Magritte
> Rafael if you see mistake in the booklet please report them to me.
> I will do a pass in a couple of weeks I hope
>
> On Sat, Apr 21, 2018 at 4:07 AM, Sean P. DeNigris 
> wrote:
> > Rafael Luque wrote
> >> I wonder if there are other relevant projects I could study to discover
> >> other possible
> >> uses cases of Magritte.
> >
> > I use it in nearly all my personal projects, almost always via Morphic,
> not
> > Seaside. Here is a public one you can have a look at:
> > https://github.com/seandenigris/Small-World
> >
> > Load in Pharo 6.1 via:
> > Metacello new
> > baseline: 'SmallWorld';
> > repository: 'github://seandenigris/
> SmallWorld:master/repository';
> > onConflict: [ :ex | ex allow ];
> > load.
> >
> > Browse senders of magritteDescription for classes prefixed with "Small".
> >
> >
> >
> > -
> > Cheers,
> > Sean
> > --
> > Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html
> >
>
>


Re: [Pharo-users] get.pharo.org broken?

2018-04-24 Thread Sven Van Caekenberghe
curl -L https://get.pharo.org/64

or https://get.pharo.org/64/

> On 24 Apr 2018, at 16:27, PAUL DEBRUICKER  wrote:
> 
> Hi - 
> 
> 
> curl https://get.pharo.org/64 | bash
> 
> 
> gives an error on MacOS X:
> 
> 
> 
> paul@a:~/pharo/maf$ curl https://get.pharo.org/64 | bash
>  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
> Dload  Upload   Total   SpentLeft  Speed
> 100   237  100   2370 0315  0 --:--:-- --:--:-- --:--:--   315
> bash: line 1: syntax error near unexpected token `newline'
> bash: line 1: `'
> 
> 
> 
> am I doing something wrong or is it broken for other people too?
> 
> Thanks
> 
> Paul




[Pharo-users] get.pharo.org broken?

2018-04-24 Thread PAUL DEBRUICKER
Hi - 


curl https://get.pharo.org/64 | bash


gives an error on MacOS X:



paul@a:~/pharo/maf$ curl https://get.pharo.org/64 | bash
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100   237  100   2370 0315  0 --:--:-- --:--:-- --:--:--   315
bash: line 1: syntax error near unexpected token `newline'
bash: line 1: `'



am I doing something wrong or is it broken for other people too?

Thanks

Paul


Re: [Pharo-users] Right repo for TaskIt and features?

2018-04-24 Thread Holger Freyther


> On 24. Apr 2018, at 20:16, Santiago Bragagnolo wrote:
> 
> Hi Holger! 
> I respond in bold

hehe. And in the reply I am back to non rich text. Let me see if I quote it 
correctly.


> 
> 
> 
> On Tue, 24 Apr 2018 at 12:00 Holger Freyther  wrote:
> Hey!
> 


> I wondered if somebody thought of remote task execution?
> 
> *If you mean something else, I would need more information :).




> When you do [ action ] schedule / [ action ] future, both created tasks are 
> scheduled into the default runner. The default runner is a working pool with 
> a default 'poolSizeMax' on 4, meaning, limit 4 processes working over the 
> tasks. (this is a dynamic configuration, you can change it by 
> TKTConfiguration runner poolMaxSize: 20. )

Yes. But with more work than the workers can handle, the queue will grow. Which 
means the (median/max) latency of the system will monotonically increase, to 
the point of the entire system failing (tasks handled after the external 
deadlines have expired, effectively no work being done).

For network-connected systems I like to think in terms of "back pressure" (not 
reading more from the socket than the image can handle, eventually leading to the 
TCP window shrinking), and one way of doing that is to have bounded queues (and/or 
to sleep when scheduling work).

I can see multiple parts of a solution (and they have different benefits and 
issues):

* Be able to attach a deadline to a task (e.g. see context.Context in Go)
* Be able to have a "block until the queue has fewer than X elements" schedule  
(but that is difficult, as one task might be scheduled during the >>#value of 
another task).
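The deadline idea could look roughly like this (a sketch only: checking the deadline inside the task block is the simplest emulation — a real implementation would live in the runner, and TaskIt has no such protocol today):

| deadline |
deadline := DateAndTime now + 5 seconds.
[ DateAndTime now > deadline
	ifTrue: [ Error signal: 'deadline passed before a worker picked the task up' ]
	ifFalse: [ "the real work" (Delay forSeconds: 1) wait ] ] schedule.

A worker that only reaches the task after the deadline then fails it immediately instead of doing work nobody is waiting for anymore.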



> Are there ideas how to add remote task scheduling? Maybe use Seamless for it?
> Since you speak about seamless here, i suppose two different images, doesn't 
> matter where. 
> It's not a bad idea to go seamless, but i did not go through the first 
> restriction of remote executions (if the remote image can or not execute the 
> task and if both images share the same semantic for the execution), then i 
> did not yet checked on which communication platform to use for it 

Right it would need to be homogeneous images (and care taken that the external 
interface remains similar enough).


> Have workers connect to the scheduler? Other ideas?
> what do you mean by connection to the scheduler? The workers we use do not 
> know their pools, if that is what you mean. 

Let's assume scheduling a task is simple (read something from a socket) but the 
computation is expensive (database look-up, math, etc.). At some point one image 
will be able to schedule more tasks than a single worker image can handle. At 
that point it could be neat to scale by just starting another image. By 
inverting the launch order (workers connect to the scheduler), scaling becomes 
easier.

holger





Re: [Pharo-users] Projects using Magritte meta models

2018-04-24 Thread Rafael Luque
Hi Sean,

Thank you for your answer.

I'm looking forward to reading this project's code. However, when I try to
load it following your instructions I get a ZnUnknownScheme error while it is
loading files from gitlab://SeanDeNigris/gitlab-smalltalk-ci:master/src:

ZnUrl>>enforceKnownScheme
ZnRequestLine>>uri:
ZnRequest>>url:
ZnClient>>url:
[ client := self httpClient.
client
	ifFail: [ :exception |
		(exception className beginsWith: 'Zn')
			ifTrue: [ MCRepositoryError
				signal: 'Could not access ' , self location , ': ' , exception printString ]
			ifFalse: [ exception pass ] ];
	url: self locationWithTrailingSlash;
	queryAt: 'C' put: 'M;O=D';
	"legacy that some servers maybe expect"
	get.


Thank you!


2018-04-21 2:07 GMT+00:00 Sean P. DeNigris :

> Rafael Luque wrote
> > I wonder if there are other relevant projects I could study to discover
> > other possible
> > uses cases of Magritte.
>
> I use it in nearly all my personal projects, almost always via Morphic, not
> Seaside. Here is a public one you can have a look at:
> https://github.com/seandenigris/Small-World
>
> Load in Pharo 6.1 via:
> Metacello new
> baseline: 'SmallWorld';
> repository: 'github://seandenigris/SmallWorld:master/repository';
> onConflict: [ :ex | ex allow ];
> load.
>
> Browse senders of magritteDescription for classes prefixed with "Small".
>
>
>
> -
> Cheers,
> Sean
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html
>
>


Re: [Pharo-users] UFFI and autoRelease

2018-04-24 Thread Esteban Lorenzano
hi,

there is nothing like that, and I do not recommend messing with the registry in 
general.
but… you can always extend the classes for your use case; the relevant code is 
around FFIExternalResourceManager.

cheers,
Esteban


> On 24 Apr 2018, at 14:48, Serge Stinckwich  wrote:
> 
> 
> I'm using autoRelease on some FFIOpaqueObject instances.
> 
> I need to test some behavior when I explicitly delete one of these objects. 
> How can I remove these objects from the finalization list, so that they are 
> not freed twice?
> 
> Something like ignoreFinalization ?
> 
> -- 
> Serge Stinckwich
> UMI UMMISCO 209 (SU/IRD/UY1)
> "Programs must be written for people to read, and only incidentally for 
> machines to execute."
> http://www.doesnotunderstand.org/ 


Re: [Pharo-users] SortedCollection>>reverse answers an inconsistent object in Pharo 6

2018-04-24 Thread Richard O'Keefe
Let me offer a simple example.
#(a a) isSortedBy: [:x :y | (x <= y) not]
is false, while
#(a a) isSortedBy: [:x :y | y <= x]
is true.

On 24 April 2018 at 02:43, Erik Stel  wrote:

> Richard,
>
> Can you explain to me what you mean by "sortBlock is supposed to act like
> #<="? Isn't it up to the developer to decide what the logic should be? I
> simply used #<= because #> might not have been implemented for all relevant
> classes, but would otherwise have chosen #> as a means to get the
> 'reversal' of #<=.
>
> Your solution does not produce the behaviour I would expect. Elements
> which compare as equal, but still have different values, do not get
> added in the same position. In the examples below I use Associations,
> since they compare on the key only (for #<= comparison).
>
> Consider a regular SortedCollection with default sortBlock (set
> explicitly):
> | aCollection |
> aCollection := SortedCollection sortBlock: [ :a :b | a <= b ].
> {#k->'Some'. #k->'Value'. #k->'Or'. #k->'Other'} do: [ :each | aCollection
> add: each ].
> aCollection.
>  "a SortedCollection(#k->'Some' #k->'Value' #k->'Or' #k->'Other')"
>
> When I create a SortedCollection with your code's sortBlock reversed I get:
> | aCollection |
> aCollection := SortedCollection sortBlock: [ :a :b | b <= a ].
> {#k->'Some'. #k->'Value'. #k->'Or'. #k->'Other'} do: [ :each | aCollection
> add: each ].
> aCollection.
>  "a SortedCollection(#k->'Some' #k->'Value' #k->'Or' #k->'Other')"
>
> The order of the result is the same in both situations. What I would expect
> is the result you get from the sortBlock I suggested:
> | aCollection |
> aCollection := SortedCollection sortBlock: [ :a :b | (a <= b) not ].
> {#k->'Some'. #k->'Value'. #k->'Or'. #k->'Other'} do: [ :each | aCollection
> add: each ].
> aCollection.
>  "a SortedCollection(#k->'Other' #k->'Or' #k->'Value' #k->'Some')"
>
> This last result is the reversal of the original result.
>
> (side note)
> If #addAll: were used above instead of the repeated #add:, things might
> be different again, since the #addAll: implementation will on some
> occasions (when the ratio between the receiver's size and the number of
> added elements crosses a threshold) add the elements at the end and then
> perform a #reSort. With values that compare as equal, the order is then
> decided by the order in which they were added. So the result of #addAll:
> depends on the collection sizes, which might not be what a user would
> expect ;-).
>
>
>
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html
>
>


[Pharo-users] UFFI and autoRelease

2018-04-24 Thread Serge Stinckwich
I'm using autoRelease on some FFIOpaqueObject instances.

I need to test some behavior when I explicitly delete one of these objects.
How can I remove these objects from the finalization list, so that they are
not freed twice?

Something like ignoreFinalization ?

-- 
Serge Stinckwich
UMI UMMISCO 209 (SU/IRD/UY1)
"Programs must be written for people to read, and only incidentally for
machines to execute."
http://www.doesnotunderstand.org/


Re: [Pharo-users] SortedCollection>>reverse answers an inconsistent object in Pharo 6

2018-04-24 Thread Richard O'Keefe
"sortBlock is supposed to act like #<=" means
"for every triple of elements x y z that might be
 in the collection,
  b(x,x) is true
  b(x,y) is Boolean
  b(x,y) | b(y,x)
  b(x,y) & b(y,z) implies b(x,z)."
The first condition distinguishes it from #< .
In particular, if you want to sort a sequence
of *identical* elements, the sortBlock must
satisfy b(x,x).

There are other predicates around that act like
<= in the relevant sense: >= for example.
But #beginsWith: and #endsWith: don't satisfy
dichotomy.
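Richard's conditions can be checked mechanically. Here is a sketch (not from the thread; #allSatisfy: is standard Pharo collection protocol, and the sample values are illustrative):

```smalltalk
"Check that a candidate sortBlock b acts like #<= on a sample:
reflexive and total (transitivity can be checked the same way)."
| b sample reflexive total |
b := [ :x :y | x <= y ].
sample := #(1 2 2 3).
"b(x,x) — exactly the condition [ :x :y | (x <= y) not ] violates"
reflexive := sample allSatisfy: [ :x | b value: x value: x ].
"b(x,y) | b(y,x) — dichotomy"
total := sample allSatisfy: [ :x |
	sample allSatisfy: [ :y |
		(b value: x value: y) | (b value: y value: x) ] ].
reflexive & total
```

With b replaced by [ :x :y | (x <= y) not ], the reflexivity check fails on the equal elements, which is Richard's #(a a) example above.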


On 24 April 2018 at 02:43, Erik Stel  wrote:

> Richard,
>
> Can you explain to me what you mean by "sortBlock is supposed to act like
> #<="? Isn't it up to the developer to decide what the logic should be? I
> simply used #<= because #> might not have been implemented for all relevant
> classes, but would otherwise have chosen #> as a means to get the
> 'reversal' of #<=.
>
> Your solution does not produce the behaviour I would expect. Elements
> which compare as equal, but still have different values, do not get
> added in the same position. In the examples below I use Associations,
> since they compare on the key only (for #<= comparison).
>
> Consider a regular SortedCollection with default sortBlock (set
> explicitly):
> | aCollection |
> aCollection := SortedCollection sortBlock: [ :a :b | a <= b ].
> {#k->'Some'. #k->'Value'. #k->'Or'. #k->'Other'} do: [ :each | aCollection
> add: each ].
> aCollection.
>  "a SortedCollection(#k->'Some' #k->'Value' #k->'Or' #k->'Other')"
>
> When I create a SortedCollection with your code's sortBlock reversed I get:
> | aCollection |
> aCollection := SortedCollection sortBlock: [ :a :b | b <= a ].
> {#k->'Some'. #k->'Value'. #k->'Or'. #k->'Other'} do: [ :each | aCollection
> add: each ].
> aCollection.
>  "a SortedCollection(#k->'Some' #k->'Value' #k->'Or' #k->'Other')"
>
> The order of the result is the same in both situations. What I would expect
> is the result you get from the sortBlock I suggested:
> | aCollection |
> aCollection := SortedCollection sortBlock: [ :a :b | (a <= b) not ].
> {#k->'Some'. #k->'Value'. #k->'Or'. #k->'Other'} do: [ :each | aCollection
> add: each ].
> aCollection.
>  "a SortedCollection(#k->'Other' #k->'Or' #k->'Value' #k->'Some')"
>
> This last result is the reversal of the original result.
>
> (side note)
> If #addAll: were used above instead of the repeated #add:, things might
> be different again, since the #addAll: implementation will on some
> occasions (when the ratio between the receiver's size and the number of
> added elements crosses a threshold) add the elements at the end and then
> perform a #reSort. With values that compare as equal, the order is then
> decided by the order in which they were added. So the result of #addAll:
> depends on the collection sizes, which might not be what a user would
> expect ;-).
>
>
>
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html
>
>


Re: [Pharo-users] Right repo for TaskIt and features?

2018-04-24 Thread Santiago Bragagnolo
Hi Holger!
I respond in *bold*



On Tue, 24 Apr 2018 at 12:00 Holger Freyther  wrote:

> Hey!
>
> I am looking into using TaskIt for a new development and wondered about some
> features. What is the right upstream repository?

*The main repo so far is https://github.com/sbragagnolo/taskit*



> What are the goals to get the builds green?

*I don't really have a strategy yet; I did not have time to check on the
builds. I can put my hands on it for a while and come up with a plan for that.*



> I wondered if somebody thought of remote task execution?
>

* If by remote calls you mean through REST APIs or things like that, not
yet, but it is easy to use as:*
[ service call ] schedule.
*or*
result := [ service call ] future

* If what you mean is to execute a program on the underlying operating
system, we already have a way to do it on Linux (not tested on Mac, but it
may work there):*
result := [ :spec |
	spec
		command: 'bash';
		option: '-c';
		argument: command ] asOSTask future.

*The result delivered by the future is the stdout of the process; the text
of the exception is its stderr. It is based on OSSubProcess and does not
work properly with long stdout.*

* If what you mean is the execution of code deployed in other images,
earlier versions of TaskIt had this feature, based on OSProcess, but we are
not really ready to do it for real. We need to be able to determine whether
an image is suitable to run a command.*

*If you mean something else, I would need more information :).*


> What I am missing is handling for overload. E.g. before queuing too many
> tasks I would prefer an exception to be raised (or the task
> blocking/slowing down). Signalling an exception is probably more reasonable
> as one task could queue another task (while >>#value is being executed...).
> What are the plans here? I can mitigate by always using futures and using
> >>#waitForCompletion:..
>
*For raising exceptions on too many tasks you only need to set a custom
kind of queue. I don't think this is the way we want to go, since there is
not much one can do in the case of 'too many tasks scheduled'. It is easy
to try again later from a do-it, but from an application point of view it
is too much.*

*For dealing with the sharing of resources, we have implemented the worker
pools abstractions. *

*When you do [ action ] schedule / [ action ] future, both created tasks
are scheduled into the default runner. The default runner is a worker pool
with a default 'poolMaxSize' of 4, meaning at most 4 processes working on
the tasks. (This is a dynamic configuration; you can change it with
TKTConfiguration runner poolMaxSize: 20.)*
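The selectors described above combine as follows; this is a sketch based on Santiago's description (the #onSuccessDo: callback is an assumption, not shown in this thread):

```smalltalk
"Enlarge the default worker pool, then schedule tasks two ways."
| future |
TKTConfiguration runner poolMaxSize: 20.
"Fire-and-forget: runs on the default runner."
[ Transcript showln: 'background work' ] schedule.
"Through a future, to get the result back later."
future := [ 21 * 2 ] future.
future onSuccessDo: [ :result | Transcript showln: result printString ]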



>
> Are there ideas how to add remote task scheduling? Maybe use Seamless for
> it?

*Since you speak about Seamless here, I suppose you mean two different
images, no matter where. It's not a bad idea to go with Seamless, but I
have not yet worked through the first restriction of remote execution
(whether the remote image can execute the task, and whether both images
share the same semantics for the execution), so I have not yet checked
which communication platform to use for it.*


> Have workers connect to the scheduler? Other ideas?

*What do you mean by connecting to the scheduler? The workers we use do not
know their pools, if that is what you mean.*


> Who would have time to review an approach and the code?

*You can send it to me. Disclaimer: I am going on vacation for a while
starting tomorrow.*

*Nice to know your interests! *
*cheers.*
*Santiago*


>
> cheers
> holger
>


Re: [Pharo-users] Proper way to file in code

2018-04-24 Thread Sven Van Caekenberghe


> On 24 Apr 2018, at 11:52, Guillermo Polito  wrote:
> 
> Hi,
> 
> I think the most appropriate API to use is
> 
> CodeImporter evaluateFileNamed: '/path/to/my/file.st'.
> 
> Check CodeImporter class side for more options (streams, strings...).
> 
> CodeImporter has been there for 3 or 4 years already, I think. The idea is
> that filing in is not a file responsibility.

Yeah, but then you should change it to no longer use the deprecated FileStream 
;-) 

See CodeImporter class>>#fileNamed:
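For reference, the FileSystem-based way to get a file's contents (FileReference and #readStreamDo: are standard Pharo); the resulting string can then be handed to one of the string-based class-side constructors Guille mentions:

```smalltalk
"Read a source file through FileSystem instead of the deprecated
FileStream."
| source |
source := '/path/to/my/file.st' asFileReference
	readStreamDo: [ :stream | stream upToEnd ].
source
```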

> Guille
> 
> On Mon, Apr 23, 2018 at 10:35 PM, Hilaire  wrote:
> That's a plan!
> 
> I realized from my code I was already using a mix of the new and the old 
> world. It will be nice to get rid of the ancient one to reduce the confusion 
> when manipulating files.
> 
> Thanks
> 
> Le 22/04/2018 à 20:12, Sven Van Caekenberghe a écrit :
> It is not hard at all, just start from FileSystem (i.e. FileReference, 
> FileLocator, etc, ..) and open your streams from there and you are good. I 
> guess the FileSystem from the Deep into Pharo book is a good start.
> 
>  From a user perspective, the changes are not that big. If something is not 
> clear, you can always ask.
> 
> -- 
> Dr. Geo
> http://drgeo.eu
> 
> 
> 
> 
> 
> 
> -- 
>
> Guille Polito
> Research Engineer
> 
> Centre de Recherche en Informatique, Signal et Automatique de Lille
> CRIStAL - UMR 9189
> French National Center for Scientific Research - http://www.cnrs.fr
> 
> Web: http://guillep.github.io
> Phone: +33 06 52 70 66 13




[Pharo-users] Right repo for TaskIt and features?

2018-04-24 Thread Holger Freyther
Hey!

I am looking into using TaskIt for a new development and wondered about some 
features. What is the right upstream repository? What are the goals to get 
the builds green? Has somebody thought about remote task execution?

What I am missing is handling for overload. E.g. before queuing too many tasks 
I would prefer an exception to be raised (or the task blocking/slowing down). 
Signalling an exception is probably more reasonable, as one task could queue 
another task (while >>#value is being executed...). What are the plans here? I 
can mitigate by always using futures together with >>#waitForCompletion:.
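That mitigation can be sketched as follows. #waitForCompletion: is the selector named above; the Duration argument and the error-handling shape are assumptions:

```smalltalk
"Bound the wait on each future so callers notice saturation instead of
letting the queue grow without limit. #doExpensiveWork is a placeholder."
| future |
future := [ self doExpensiveWork ] future.
[ future waitForCompletion: 2 seconds ]
	on: Error
	do: [ :ex | Transcript showln: 'overloaded: ' , ex messageText ]
```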

Are there ideas how to add remote task scheduling? Maybe use Seamless for it? 
Have workers connect to the scheduler? Other ideas? Who would have time to 
review an approach and the code?

cheers
holger


Re: [Pharo-users] Proper way to file in code

2018-04-24 Thread Guillermo Polito
Hi,

I think the most appropriate API to use is

CodeImporter evaluateFileNamed: '/path/to/my/file.st'.

Check CodeImporter class side for more options (streams, strings...).

CodeImporter has been there for 3 or 4 years already, I think. The idea is
that filing in is not a file responsibility.

Guille

On Mon, Apr 23, 2018 at 10:35 PM, Hilaire  wrote:

> That's a plan!
>
> I realized from my code I was already using a mix of the new and the old
> world. It will be nice to get rid of the ancient one to reduce the
> confusion when manipulating files.
>
> Thanks
>
> Le 22/04/2018 à 20:12, Sven Van Caekenberghe a écrit :
>
>> It is not hard at all, just start from FileSystem (i.e. FileReference,
>> FileLocator, etc, ..) and open your streams from there and you are good. I
>> guess the FileSystem from the Deep into Pharo book is a good start.
>>
>>  From a user perspective, the changes are not that big. If something is
>> not clear, you can always ask.
>>
>
> --
> Dr. Geo
> http://drgeo.eu
>
>
>
>


-- 



Guille Polito

Research Engineer

Centre de Recherche en Informatique, Signal et Automatique de Lille

CRIStAL - UMR 9189

French National Center for Scientific Research - http://www.cnrs.fr


Web: http://guillep.github.io

Phone: +33 06 52 70 66 13