On 13 April 2013 00:16, Igor Stasenko <siguc...@gmail.com> wrote:
> On 12 April 2013 23:47, Norbert Hartl <norb...@hartl.name> wrote:
>>
>> Am 12.04.2013 um 23:23 schrieb Igor Stasenko <siguc...@gmail.com>:
>>
>>> On 12 April 2013 23:14, p...@highoctane.be <p...@highoctane.be> wrote:
>>>> +1. No clue either. But discovered the terminate process shortcut to kill
>>>> them faster...
>>>>
>>>> Coping over solving ...
>>>>
>>>
>>> One of the "solutions" I proposed is to rewrite the code and get rid
>>> of the "nano-second" "synchronization" of date and time with the system
>>> clock, because:
>>> a) there are no real users of it (to my knowledge)
>>> b) you cannot have precision higher than that of the system primitive
>>> we're using,
>>> which is in the millisecond range..
>>>
>> Where do you see a nanosecond synchronization? It is still a millisecond clock 
>> as far as I can see. Only the instVar is called nanos.
>
> Ah, sorry.. I was mistaken by some orders of magnitude. ;)
>
> The offending code starts from here:
>
> initializeOffsets
>         | durationSinceEpoch secondsSinceMidnight nowSecs |
>         LastTick := 0.
>         nowSecs := self clock secondsWhenClockTicks.
>         LastMilliSeconds := self millisecondClockValue.
>         durationSinceEpoch := Duration
>                 days: SqueakEpoch
>                 hours: 0
>                 minutes: 0
>                 seconds: nowSecs.
>         DaysSinceEpoch := durationSinceEpoch days.
>         secondsSinceMidnight := (durationSinceEpoch -
>                 (Duration
>                         days: DaysSinceEpoch
>                         hours: 0
>                         minutes: 0
>                         seconds: 0)) asSeconds.
>         MilliSecondOffset := secondsSinceMidnight * 1000 - LastMilliSeconds
>
> (notice the 1000 multiplier, which converts seconds to "millisecond" precision)
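
(For reference, a minimal sketch of how such an offset is typically consumed
later; the selector here is hypothetical, not the actual image code:)

```smalltalk
"Hypothetical sketch, not actual image code: reconstruct wall-clock
 milliseconds since midnight from the free-running millisecond clock
 plus the offset captured in initializeOffsets above."
DateAndTime class >> milliSecondsSinceMidnight
	^ self millisecondClockValue + MilliSecondOffset
```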
>
> But that's fine.. now look at
>
> secondsWhenClockTicks
>
>         "waits for the moment when a new second begins"
>
>         | lastSecond |
>
>         lastSecond := self primSecondsClock.
>         [ lastSecond = self primSecondsClock ]
>                 whileTrue: [ (Delay forMilliseconds: 1) wait ].
>
>         ^ lastSecond + 1
>
> that is complete nonsense. Sorry.
>
> This code relies on the resolution of primSecondsClock, which is..... (drum
> roll..... )
> 1 second..
>
> Then it is combined with millisecondClockValue, as you saw above, to get
> system time with millisecond precision..
>
> I am no genius in math or physics.. but even I understand that if
> your measurement has error X,
> you cannot get precision better than X, even if you combine it with
> another measurement of higher precision.
>

yeah.. and if you need millisecond precision so badly, I could do just
as well with:

 MilliSecondOffset := secondsSinceMidnight * 1000 -/+ (1000 atRandom).

.. you know.. much less code, after all :)
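
(To make the error bound concrete, an illustrative sketch, assuming the
Squeak-style Time primitives used above:)

```smalltalk
"Illustrative only: the seconds clock ticks once per second, so any
 millisecond-level offset derived from it is uncertain by up to ~1000 ms,
 minus whatever the busy-wait in secondsWhenClockTicks recovers."
| secs millis offset |
secs := Time primSecondsClock.        "resolution: 1 second"
millis := Time millisecondClockValue. "resolution: 1 millisecond"
offset := secs * 1000 - millis.
"secs could have been read anywhere within its current second, so
 offset carries up to a full second of error despite its ms units."
```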

> (But I may be wrong about that.. if so, please explain why)
>
>
>>
>>> c) I find it completely pointless to do magic tricks, trying to be
>>> smart and squeeze out more
>>> precision than the underlying system can offer.
>>>
>>> For that, I would use a not-yet-existing primitive, let's say:
>>>
>>> <primitive: 'NanoSecondSystemTimeFrom1Jan1900' module: ''>
>>>
>>> and since this primitive fails, because it doesn't exist, the fallback
>>> will use the old primitive which is
>>> currently in the VM..
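
(A sketch of that fallback shape, using the hypothetical primitive name above;
the selector, fallback body, and scaling are all assumptions:)

```smalltalk
"Sketch only: the named primitive does not exist yet. When it fails,
 the fallback code below runs, using the existing 1-second-resolution
 primitive. Selector and scaling are assumptions for illustration."
DateAndTime class >> nanoSecondsSinceEpoch
	<primitive: 'NanoSecondSystemTimeFrom1Jan1900' module: ''>
	"Primitive absent in this VM: fall back to seconds, scaled to ns."
	^ Time primSecondsClock * 1000000000
```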
>>>
>>> because (repeating again) doing black magic and trickery in the image
>>> buys us nothing and only serves as a source of bugs.
>>
>> Can you explain where the black magic happens?
>
> Sure.
> See
> initializeOffsets
> secondsWhenClockTicks
> and all users of the LastMilliSeconds class variable..
>
> btw, did I mention that if we get rid of that code, DateAndTime
> will no longer need a startup action?
> I am just amazed at the shitload of code doing this stuff.. which
> has nothing to do with correct/precise measurement of system date and
> time.
>
> Sorry if it offends you or anyone else. As I said before, my attacks
> are always against bad code,
> never against people. I am just saying how I feel when I see it.
>
>> I integrated the Cuis changeset back then because I wanted something more 
>> fine grained than seconds. Do you think this is already black magic? I think 
>> we can make smaller slices today :) Forcing the whole of DateAndTime into 
>> this precision is probably not necessary, and I understand why Sven did his 
>> own timestamp. The same goes for timezones. Maybe we need more levels of 
>> features in the hierarchy. If the system (or any software) does not depend 
>> on sub-second precision, it would be good to have such a coarse grained 
>> type at hand. But I find it important to at least have the possibility of 
>> milliseconds.
>
> What is funniest is that TimeStamp simply erases the nanoseconds back to ZERO..
> But of course it does it in such a peculiar way that you don't really
> understand what it is doing:
>
> TimeStamp class>>current
>
>         | ts ticks |
>         ts := super now.
>
>         ticks := ts ticks.
>         ticks at: 3 put: 0.
>         ts ticks: ticks offset: ts offset.
>
>         ^ ts
>
> instead of something like:
>
> TimeStamp class>>current
>
>   ^super now clearNanoseconds
>
> That code, btw, also offends me a lot, especially the "ticks at: 3 put: 0."
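
(A clearNanoseconds like that doesn't exist in the image as far as I know;
a sketch of what it could look like, assuming the {days. seconds. nanos}
ticks layout implied by the "at: 3" above:)

```smalltalk
"Hypothetical helper: zero the nanosecond tick and answer the receiver.
 Assumes ticks answers a {days. seconds. nanos} array, as implied by
 'ticks at: 3 put: 0' in TimeStamp class>>current."
DateAndTime >> clearNanoseconds
	| t |
	t := self ticks.
	t at: 3 put: 0.
	self ticks: t offset: self offset
```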
>
>>
>> My wild guess would be the startup initialization of DateAndTime. It takes 
>> quite a while to do.
>
> yes, you could call a ~2^31 millisecond delay wait time "a while"..
> sure, it is still nothing in terms of the known universe's existence time,
> which is around 13 billion years  :)
>
>> So the code forks off the initialization in order not to slow down startup.
>
> hahaha... kind of ;)
>
> the "code" in the image uses timestamps everywhere (like showing the new
> startup time in the Transcript)... and
> god knows where else in the system.
> The point is that every time you say something like "Date now" or
> "DateAndTime now",
> it will block until this fork has finished its work.
>
>> Without knowing exactly, my gut tells me this is not a good idea. It might 
>> be that it produces late jumps of time in startup which make timeouts 
>> inactive. Or it skips the check for negative delays and introduces some 
>> negative wait which will be quite big, time-wise. Were we at "wild guessing" 
>> or "very wild guessing" again?
>>
> yes.. but still, we're not guessing what was before the Big Bang.. I hope :)
>
>> Norbert
>>
>>
>
>
>
> --
> Best regards,
> Igor Stasenko.



-- 
Best regards,
Igor Stasenko.
