Yes, the lockfile method is only for the case where you cannot get startup
times down otherwise and need to resort to starting the service only once,
but do not want to have a "forever running daemon".


On 02/02/2017 10:09 AM, pet...@riseup.net wrote:
> On 2017-02-01 22:44, Jim Mack wrote:
>> Forgive me if I sounded like I was pushing.  I am working daily in OS
>> development where tools like grunt are commonly accepted, and developers
>> accept choices that maximize their power/ease over many other concerns.
>> I never heard the slippery slope argument used against node.js, and it
>> would surely lose!
>>
>> On Wed, Feb 1, 2017 at 2:20 PM, Timothy Hobbs <timo...@hobbs.cz> wrote:
>>
>>> Just so you understand the gif, what happens is this: a client starts up.
>>> It grabs a lock on the counter file. It reads the counter file. If the
>>> counter is zero, it launches the service. If it is greater than zero,
>>> then it connects to the service. After launching or connecting to the
>>> service, it increments the counter and releases the lock on the lock
>>> file. When a client closes, it sends a command to the service, which
>>> then grabs a lock on the lock file, decrements the counter, and if the
>>> counter is now zero, it shuts down; otherwise it merely releases the
>>> lock. There are no race conditions with this method. If two clients
>>> start up at the same time, they will have to wait in line for the lock
>>> on the lock file. If a client shuts down at the same time as another
>>> client starts up, this too means that the service and the client will
>>> have to wait their turns for the lock, and therefore, no races...
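
A minimal sketch of that reference-counted lock-file scheme, in Python just
for illustration. The counter-file path and the launch/connect/stop callbacks
are hypothetical, and the decrement is collapsed into the client here (in the
gif the service itself performs it):

    import fcntl
    import os

    COUNTER_FILE = "/tmp/my-service.counter"    # hypothetical path

    def _with_counter(update):
        """Take the lock, pass the current count to update(), write back the result."""
        fd = os.open(COUNTER_FILE, os.O_RDWR | os.O_CREAT, 0o644)
        with os.fdopen(fd, "r+") as f:
            fcntl.flock(f, fcntl.LOCK_EX)        # everyone waits their turn here
            count = int(f.read() or "0")
            new_count = update(count)
            f.seek(0)
            f.truncate()
            f.write(str(new_count))
            f.flush()
            fcntl.flock(f, fcntl.LOCK_UN)

    def client_startup(launch_service, connect_to_service):
        def update(count):
            if count == 0:
                launch_service()                 # first client starts the service
            else:
                connect_to_service()             # later clients just attach
            return count + 1
        _with_counter(update)

    def client_shutdown(stop_service):
        def update(count):
            if count == 1:
                stop_service()                   # last client out stops the service
            return count - 1
        _with_counter(update)

Because every read-modify-write of the counter happens while holding the
exclusive lock, simultaneous startups and shutdowns simply queue up, which is
the "no races" property described above.
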
>>>
>>>
>>> On 02/01/2017 11:16 PM, Timothy Hobbs wrote:
>>>> You can use a dbus on-demand service or your own locking mechanism if
>>>> you, like me, don't like dbus. Here is a gif which describes the process
>>>> for starting and stopping a race-free on-demand service using standard
>>>> lock files: http://timothy.hobbs.cz/subuser-client-service/lock-file.gif
>>>> You can modify this method so that the service remains running for a
>>>> certain number of seconds after the client counter has reached zero, so
>>>> that in a sequential shell script you wouldn't be launching and stopping
>>>> the service over and over again.
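
That "remains running for a certain number of seconds" variant might look
roughly like this; LINGER_SECONDS and the read_counter()/shutdown() helpers
are assumptions, with read_counter() taking the same lock as in the counter
sketch above:

    import threading

    LINGER_SECONDS = 5.0    # made-up grace period

    def schedule_shutdown(read_counter, shutdown):
        def check():
            if read_counter() == 0:    # still idle after the grace period?
                shutdown()             # then it is finally safe to exit
        threading.Timer(LINGER_SECONDS, check).start()

A sequential shell script that calls the client a dozen times in a row would
then reuse one warm service instead of starting and stopping it on every call.
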
>>>>
>>>> However, what I was referring to, with shared executable memory, has
>>>> nothing to do with background daemons. It is a feature that is built
>>>> into most modern OS kernels. Many kernels map pages that are marked as
>>>> executable read-only, and share those pages between processes. This
>>>> greatly reduces startup times, and also improves security (marking them
>>>> read-only, that is). It has the disadvantage that self-modifying code is
>>>> impossible. Factor, being a weird system that modifies itself, cannot
>>>> take advantage of this mechanism at all. So we'll have to do something
>>>> more advanced, like use criu, which I linked to previously.
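
On Linux you can see that sharing at work by looking at a process's memory
map; this is just an illustrative snippet, and the /proc layout it reads is
Linux-specific:

    # Executable mappings show up as read-only + executable ("r-xp"), which
    # is what lets the kernel share one copy of a binary's code between all
    # processes running it.
    def print_executable_mappings(pid="self"):
        with open(f"/proc/{pid}/maps") as maps:
            for line in maps:
                fields = line.split()
                perms = fields[1]
                if "x" in perms:                        # executable mapping
                    path = fields[5] if len(fields) > 5 else "[anonymous]"
                    print(perms, path)                  # typically: r-xp /usr/bin/...

    print_executable_mappings()
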
>>>>
>>>> On 02/01/2017 11:10 PM, pet...@riseup.net wrote:
>>>>> On 2017-02-01 19:40, Jim Mack wrote:
>>>>>> So why not create a separate small process that passes on its
>>>>>> parameters to a running factor if it can find it, or starts a new one
>>>>>> if it can't?
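
Such a forwarding process could be tiny; here is a hypothetical sketch (the
socket path and server command are made up, and this simple version ignores
the startup race that the lock-file scheme above takes care of):

    import socket
    import subprocess
    import sys
    import time

    SOCKET_PATH = "/tmp/factor-server.sock"      # hypothetical
    SERVER_CMD = ["factor", "-run=my-server"]    # hypothetical entry point

    def forward(args):
        for attempt in range(50):                # retry while the server boots
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(SOCKET_PATH)
                    s.sendall("\0".join(args).encode())
                    return s.recv(65536).decode()    # whatever the server replies
            except (FileNotFoundError, ConnectionRefusedError):
                if attempt == 0:
                    subprocess.Popen(SERVER_CMD)     # nothing listening: start it
                time.sleep(0.1)
        raise RuntimeError("could not reach the running factor")

    if __name__ == "__main__":
        print(forward(sys.argv[1:]), end="")
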
>>>>>>
>>>>> That's like running a server and sending requests to it. I have several
>>>>> issues with that:
>>>>>
>>>>> 1 - I need one instance to have *all* my libraries, present and future,
>>>>> pre-loaded. But more importantly:
>>>>> 2 - a typical shell script can call a dozen external executables. Some
>>>>> will be in C, some in bash, some in python, some in perl, etc. If every
>>>>> language needed a huge server to run, where would that leave us?
>>>>>
>>>>>> On Wed, Feb 1, 2017 at 7:51 AM, Timothy Hobbs <timo...@hobbs.cz>
>>> wrote:
>>>>>>> Have you tried loading the
>>>>>>> factor interpreter in the background and seeing if factor launches
>>>>>>> quicker while another factor process is running?
>>>>> I did what I think is fair - started it once so everything necessary
>>>>> gets cached in RAM and discarded that run. As noted above, I don't think
>>>>> running a server for each possible language is a good solution.
>>>>>
>>>>>
>>>>> Feel free to contradict me, gentlemen, I'm open to discussion, but I do
>>>>> have my own opinion of what is acceptable and transferable to other PCs
>>>>> / colleagues. I'm not looking for some local hack to speed things up but
>>>>> a general solution that doesn't put any more burden on the end users
>>>>> than is necessary.
>>>>>
> No need to apologize, as far as I'm concerned we're just discussing :)
> Now I hope I didn't sound too rude. I'm just trying to get on the same
> page with everyone. The provided solutions are along the lines of "if the
> initialization is taking longer than you'd like, here's a way to
> initialize only once". My focus is elsewhere, though: why is the
> initialization taking so long? John is saying there's room for
> improvement, room to do less.
>

