Since it seems you have a good overview of this domain, I will give more 
details:
We are working in signal processing, and especially in image processing. The 
goal here is only the adaptive optics: we just want to stabilize the image, 
not produce the final image.
The consequence is that we will not store anything on the hard drive: we 
read an image, process it, and discard it. We stay in RAM the whole time.
The processing is done with our own algorithms, so for now there is no need 
for any external library (and I don't see any reason for that to change).

First I would like to apologize: just after posting my answer I went to 
Wikipedia to look up the difference between soft and hard real time. 
I should have done that before, so that you wouldn't have had to spend more 
time explaining.

In the end I still don't know whether I need hard or soft real time: the 
timing is set by the camera speed, and the processing must be done 
between the acquisition of two images.
We don't want to miss an image or delay the processing, but I still need to 
clarify the consequences of a delay or of a missed image.
For now let's just say that we can miss some images, so we want soft real 
time.

I'm building a benchmark that should match the real system in terms of 
complexity; these are my first remarks:

When you say that even one allocation is unacceptable, it's shockingly 
true: in my case I had 2 allocations caused by
    A += 1 (where A is an array)
and in 7 seconds I had accumulated 600k allocations. 
Moral of the story: in a closed loop you cannot accept any allocation, so 
you have to write all loops explicitly (see the sketch below).
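
For reference, a minimal sketch of the two versions (add_one/add_one! are 
just illustration names; A + 1 is the 2016-era spelling, nowadays it would 
be written A .+ 1):

# Allocating version: A + 1 builds a brand-new array on every call.
add_one(A) = A + 1

# Non-allocating version: an explicit loop that mutates A in place.
function add_one!(A)
    @inbounds for i in eachindex(A)
        A[i] += 1
    end
    return A
end

function compare(A)
    @time add_one(A)    # reports an allocation for the new array
    @time add_one!(A)   # reports zero allocations
    return nothing
end

A = zeros(512, 512)
compare(A)    # first call also times compilation
compare(A)    # second call shows the steady state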

I have two problems now:

1/ Many times, the first run, which includes compilation, was the fastest, 
and every subsequent run was slower by a factor of 2.
2/ If I relaunch the main function (which lives in a module) many times, 
some runs are very different (slower) from the previous ones.

About 1/: although I find it strange, I don't really care.
2/ is far more problematic: once the code is compiled I want it to behave 
the same regardless of how many times it is launched.
I have some ideas why, but no certainty. What bothers me most is that 
all the runs in such a benchmark are slower; it's not a temporary slowdown, 
the whole current benchmark is slower.
If I launch it again, it is back to the best performance.
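
For completeness, here is roughly the harness I use for timing (a sketch; 
process! and buf stand in for our processing kernel and its preallocated 
buffer, and gc()/median() are the 2016-era spellings):

function run_benchmark(process!, buf, n)
    process!(buf)             # warm-up run, so compilation is not timed
    gc()                      # collect leftover garbage before timing
    times = zeros(n)
    for i in 1:n
        times[i] = @elapsed process!(buf)
    end
    return minimum(times), median(times)
end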

Thank you for the links; they are very interesting and I will keep them in 
mind.

Note: I disabled hyperthreading and overclocking, so it should not be the 
CPU doing funky things.

On Friday, June 3, 2016 at 18:28:58 UTC+2, Páll Haraldsson wrote:
>
> On Thursday, June 2, 2016 at 7:55:03 AM UTC, John leger wrote:
>>
>> Páll: don't worry about the project failing because of YOUUUUUU ;) in any 
>> case we wanted to try Julia and see if we could get help/tips from the 
>> community.
>>
>
> Still, feel free to ask me anytime. I just do not want to give bad 
> professional advice or oversell Julia.
>  
>
>> About the nogc: I wonder if activating it would also prevent the core of 
>> Julia from being garbage collected? If so, for a long run it's a bad idea 
>> to disable it for too long.
>>
>
> Not really,* see below.
>  
>
>> For now the only options are C/C++ and Julia, sorry, no D or Lisp 
>> :) Why would you not recommend C for this kind of task?
>> And I said 1000 images/sec, but the camera may be able to go up to 10,000 
>> images/sec, so I think we can define it as hard real time.
>>
>
> Not really. There's a hard and fast definition of hard real-time (and 
> real-time in general): it's not about speed, it's about timely actions. That 
> said, 10,000 images/sec is a lot.. about 9 GiB of uncompressed data per 
> second, assuming gray-scale, byte-per-pixel, megapixel resolution. You would 
> fill up one of the 2 TB SSDs I've seen advertised [I don't know about 
> radiation-hardening those, I guess anything is possible; do you know anything 
> about the potential hardware to be used?] in three and a half minutes (see 
> the arithmetic below).
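>
> For the record, the back-of-the-envelope arithmetic (assuming 10^6 bytes 
> per frame, i.e. one-megapixel gray-scale):
>
> julia> 10000 * 10^6          # frames/s times bytes/frame, in bytes/s
> 10000000000
>
> julia> 2 * 2^40 / ans        # seconds to fill a 2 TiB SSD at that rate
> 219.9023255552
>
> i.e. roughly three and a half minutes.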
>
> How fast are the down-links on these satellites? Would you get all the 
> [processed] data down to earth? If you cannot, do you pick and choose the 
> framerate and/or which period of time to "download"? Since I'm sure you 
> want lossless compression, it seems http://flif.info/ might be of 
> interest to you. [FLIF should really be wrapped as a Julia library.. 
> There's also a native executable that could do for invoking as a separate 
> process, though maybe not suitable for you/real-time.] FLIF was GPL 
> licensed; that shouldn't be a problem for government work, and should be 
> even more of a non-issue now [for anybody].
>
>
> You can see from here:
>
> https://github.com/JuliaLang/julia/pull/12915#issuecomment-137114298
>
> that soft real-time was proposed for the NEWS section, and even that 
> proposal was shot down. That may have been overly cautious for the 
> incremental GC; I've seen audio (which is more latency-sensitive than 
> video, at the usual frame rates..) talked about as working in some 
> thread, and software-defined radio discussed as a Julia project.
>
>
> * "About the nogc", if you meant the function to disable the GC, then it 
> doesn't block allocations (but my proposal did), only postpones 
> deallocations. There is no @nogc macro; my proposal for @nogc to block 
> allocations, was only a proposal, and rethinking it, not really too 
> helpful. It was a fail-fast debugging proposal, but as @time does show 
> allocations (or not when there are none), not just GC activity, it should 
> do, for debugging. I did a test:
>
> [Note, this has to be in a function, not in the global scope:]
>
> julia> function test()
>          @time a=[1 2 3]
>          @time a[1] = 2
>        end
> test (generic function with 1 method)
>
> julia> test()
>   0.000001 seconds (1 allocation: 96 bytes)
>   0.000000 seconds
>
> You want to see something like the latter result, not the former, not even 
> with "1 allocation". It seems innocent enough, as there is no GC activity 
> (then there would be more text), but that is just an accident. When garbage 
> accumulates, even one allocation can trigger a GC and lots of 
> deallocations, and take an unbounded amount of time in naive GC 
> implementations. Incremental means it's not that bad, but still 
> theoretically unbounded time, I think.
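>
> A contrived sketch of that accumulation: each iteration allocates one small 
> array, most iterations are cheap, and eventually one of them pays for a 
> whole collection (leak is just an illustration name):
>
> function leak(n)
>     local a
>     for i in 1:n
>         a = [i]           # one small allocation per iteration
>     end
>     return a
> end
>
> @time leak(10^7)   # the GC time shown is the deferred cost of those allocations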
>
> I've seen periodically disabling the GC recommended, such as in games with 
> Lua, after each drawn frame ("vblank"). That scenario is superficially 
> similar to yours. I'm however skeptical of that approach as a general 
> idea, if you do not minimize allocations. Note that in games, the heavy 
> lifting is done by game engines, almost exclusively written in C++. As they 
> do not use GC (while a GC IS optional in C++ and C), Lua handles only the 
> game logic, with probably much less memory allocated, so it works ok there, 
> postponing deallocations while taking [potentially] MORE cumulative time 
> later, at the convenient time.
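>
> In Julia, that per-frame pattern would look roughly like this (a sketch 
> only; process_frame! is a stand-in, and gc_enable/gc are the spellings 
> shown in the help text quoted further down):
>
> function camera_loop!(process_frame!, buffer, nframes)
>     gc_enable(false)                # no collections during deadline-critical work
>     try
>         for frame in 1:nframes
>             process_frame!(buffer)  # must itself allocate little or nothing
>             gc_enable(true)         # the "vblank": allow cleanup between frames
>             gc()
>             gc_enable(false)
>         end
>     finally
>         gc_enable(true)             # never leave the collector disabled
>     end
> end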
>
> Why do I say more? Running out of RAM because of garbage isn't the only 
> issue. NOT deallocating early prevents reusing memory that is currently in 
> the cache, and reusing it would have helped for cache purposes.
>
> By recommending FLIF, I've actually recommended using C++ indirectly, and 
> reusing C++ (or C) code isn't bad, all things equal. It's just that for new 
> code I recommend against C for many reasons, such as safety, and against 
> C++ as it's a complex language, too easy to "blow your leg off" with, to 
> quote its designer.. and in both cases there are better languages, with 
> some rare exceptions that do not apply here (except one reason, reusing 
> existing code, which MAY apply here).
>
> I believe lossy compression such as JPEG (and even MPEG etc., at least on 
> average) has a consistent performance profile, but you wouldn't want to 
> use lossy. In general, lossless compression cannot guarantee any 
> compression, while in practice you would always get some. That makes me 
> wonder if any kind of lossless is compatible with [hard] real-time.. It's 
> probably hard to know the (once in a million) worst-case (or just bad) time 
> complexity..
>
> If it is sufficient for you to be ok with missing some frames 
> infrequently, the problem is no longer hard real-time; I understand 
> that is called soft real-time. You should still be able to know if 
> some frame was missed, such as by a timestamp. I haven't thought through 
> whether missing some frame, and not knowing, would screw up some black 
> hole video analysis for Hawking. I'm only an amateur physicist; I've still 
> not gotten QM to work with general relativity, so I'm not sure about the 
> Hawking radiation theory and what a missed frame could do.
>
> Why D seems a better language than C and C++ is in part that you can avoid 
> the GC (and it is still a better language), but also that you can use the 
> GC! Having @nogc ensures at compile time that no future maintenance of 
> your code will add some accidental allocation and then a GC [pause]. It 
> isn't really that you can't avoid GC in Julia; it's the possibility that 
> you add some, say, logging, and forget to disable it..
>
>
> https://en.wikipedia.org/wiki/Ariane_5
>
> The Ariane 5 rocket blew up in part because of failed maintenance of 
> software, despite the "safe" language Ada: requirements changed and the 
> software should have been changed, but wasn't.
>
>
> Linus Torvalds, on his own Linux kernel (this may be outdated; there is a 
> real-time kernel available now, it's not the default, just read the fine 
> print there):
>
>
> http://yarchive.net/comp/linux/rtlinux.html
>
> "Can we make the whole kernel truly hard-RT? Sure, possible in theory. In
> practice? No way, José. It's just not mainline enough."
>
> Note what he says about CPUs with caches (all modern CPUs.. even some 
> microcontrollers, though those without caches wouldn't be fast enough 
> anyway..). Silicon Graphics had real-time I/O capabilities in their 
> filesystem:
>
> https://en.wikipedia.org/wiki/XFS
> "A feature unique to XFS is the pre-allocation of I/O bandwidth at a 
> pre-determined rate, which is suitable for many real-time applications; 
> however, this feature was supported only on IRIX, and only with specialized 
> hardware."
>
> This isn't, I guess, too much of a problem, as [XFS was for spinning disks 
> and] you just do not do any concurrent I/O. SSDs could have some issues; do 
> not trust them blindly.. Similarly, with the Linux kernel (or any kernel), 
> you should NOT run many other processes. Real-time operating systems exist 
> to solve that problem. You can't get down to one process, but it might be 
> close enough.
>
>
> While googling for XFS I found [might be interesting]: 
> http://moss.csc.ncsu.edu/~mueller/rt/rt05/readings/g7/
>
>
> Mostly unread [in addition, see below; IBM's Metronome GC allows 
> hard real-time without having to avoid the GC], but at least interesting 
> (note that real-time Java dates back to 1998, though not quite to when Java 
> was first public; I recall real-time use being disallowed in the license, 
> for nuclear reactors if I recall correctly..):
>
>
> http://www.oracle.com/technetwork/articles/java/nilsen-realtime-pt1-2264405.html
>
> * "Learn why Java SE is a good choice for implementing real-time systems, 
> especially those that are large, complex, and dynamic.*
>
> Published August 2014
> [..]
> The presented methods and techniques have been proven in many successfully 
> deployed Java SE applications, including a variety of telecommunications 
> infrastructure devices; automation of manufacturing processes, ocean-based 
> oil drilling rigs, and fossil fuel power plants; multiple radar systems; 
> and the modernization of the US Navy's Aegis Warship Weapons Control System 
> with enhanced ballistic missile defense capabilities.
>
> *Note*: The full source code for the sample application described in this 
> article is available here 
> <https://java.net/projects/otn/downloads/download/Projects/DeployingRealTimeSoftware.zip>
> .
> [..]
> Java SE Versus Other Languages 
>
> The use of Java SE APIs in the implementation of real-time systems is most 
> appropriate for soft real-time development. Using Java SE for hard 
> real-time development is also possible, but generally requires the use of 
> more specialized techniques such as the use of NoHeapRealtimeThread 
> abstractions, as described in the Real-Time Specification for Java (JSR 1), 
> or the use of the somewhat simpler ManagedSchedulable abstractions of the 
> Safety Critical Java Technology specification (JSR 302).
>
> [..]
>
> Projects that can be implemented entirely by one or two developers in a 
> year's time are more likely to be implemented in a less powerful language 
> such as C or C++
>
> [..]
> About the Author 
>
> As Chief Technology Officer over Java at Atego Systems—a mission- and 
> safety-critical solutions provider—Dr. Kelvin Nilsen oversees the design 
> and implementation of the Perc Ultra virtual machine and other Atego 
> embedded and real-time oriented products. Prior to joining Atego, Dr. 
> Nilsen served on the faculty of Iowa State University where he performed 
> seminal research on real-time Java that led to the Perc family of virtual 
> machine products."
>
>
>
>> Thank you for all these ideas!
>>
>>
>> On 01/06/2016 23:59, Páll Haraldsson wrote:
>>
>> On Wednesday, June 1, 2016 at 9:40:54 AM UTC, John leger wrote: 
>>>
>>> So for now the best is to build a toy that is equivalent in processing 
>>> time to the original and see for myself what I'm able to get.
>>> We have many ideas and many theories about the behavior of the GC, so the 
>>> best is to try.
>>>
>>> Páll -> Thanks for the links
>>>
>>
>> No problem.
>>
>> While I did say it would be cool to now of Julia in space, I would hate 
>> for the project to fail because of Julia (because of my advice).
>>
>> I endorse Julia for all kinds of uses; hard real-time (and building 
>> operating systems) is where I have doubts.
>>
>> A. I thought a little more about making a macro @nogc to mark functions, 
>> and it's probably not possible. You could, I guess, for one function, as 
>> the macro has access to its AST. But what you really want to disallow is 
>> that function calling functions that are not similarly marked. I do not 
>> know about metadata on functions and whether a nogc bit could be put in, 
>> but even then, in theory couldn't that function be changed at runtime..?
>>
>> What you would want is for this nogc property to be statically checked, 
>> as I guess D does, but Julia isn't separately compiled by default. Note 
>> there is Julia2C, and see
>>
>> http://juliacomputing.com/blog/2016/02/09/static-julia.html
>>
>> for gory details on compiling Julia.
>>
>> I haven't looked, but I guess Julia2C does not generate malloc and free, 
>> only some malloc substitute in the libjulia runtime. That substitute will 
>> allocate and run the GC when needed. These are the calls you want to avoid 
>> in your code, and you could maybe grep for them.. There is a Lint.jl tool, 
>> but as memory allocation isn't an error it would not flag it; maybe it 
>> could be an option..
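>>
>> There is also the --track-allocation option, which annotates each source 
>> line with the bytes it allocated (the exact .mem file naming can vary by 
>> version):
>>
>> $ julia --track-allocation=user script.jl
>> $ cat script.jl.mem   # per-line allocation counts next to the source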
>>
>> B. One idea I just had (in the shower..): if @nogc, or just 
>> "gc_disable" (note it is deprecated*), disallowed allocations (throwing an 
>> exception if one were attempted), instead of just postponing them, it 
>> would be much easier to test whether your code allocates or calls code 
>> that does. Still, you would have to check all code paths..
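>>
>> In the meantime, @allocated gives a crude check of the same kind; a sketch 
>> (assert_nogc and f! are just illustration names):
>>
>> function assert_nogc(f!, buf)
>>     f!(buf)                  # warm up, so compilation allocations are excluded
>>     n = @allocated f!(buf)
>>     n == 0 || error("allocated $n bytes in supposedly allocation-free code")
>>     return nothing
>> end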
>>
>> C. Ada, or the SPARK subset, might be the go-to language for hard 
>> real-time. Rust also seems good, just not as tried. D could also be an 
>> option, with @nogc. And then there are C and especially C++, which I try 
>> to avoid recommending.
>>
>> D. Do tell if you only need soft real-time; it makes the matter so much 
>> simpler.. and not just the programming language choice..
>>
>> *
>> help?> gc_enable
>> search: gc_enable
>>
>>   gc_enable(on::Bool)
>>
>>   Control whether garbage collection is enabled using a boolean argument 
>> (true for enabled, false for disabled). Returns previous GC state. Disabling
>>   garbage collection should be used only with extreme caution, as it can 
>> cause memory use to grow without bound.
>>
>>
>>  
>>
>>>
>>> On Tuesday, May 31, 2016 at 18:44:17 UTC+2, Páll Haraldsson wrote: 
>>>>
>>>> On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote: 
>>>>>
>>>>> If you are prepared to make your code not perform any heap 
>>>>> allocations, I don't see a reason why there should be any issue. When I 
>>>>> once worked on a very first multi-threading version of Julia, I wrote 
>>>>> exactly such functions that won't trigger GC, since the latter was not 
>>>>> thread-safe. This can be hard work, but I would assume it's at least 
>>>>> not more work than implementing the application in C/C++ (assuming that 
>>>>> you have some Julia experience).
>>>>>
>>>>
>>>> I would really like to know why the work is hard: is it getting rid of 
>>>> the allocations, or being sure there are no more hidden in your code? I 
>>>> would also like to know whether you can do the same as in the D language:
>>>>
>>>> http://wiki.dlang.org/Memory_Management 
>>>>
>>>> "The most reliable way to guarantee latency is to preallocate all data 
>>>> that will be needed by the time critical portion. If no calls to allocate 
>>>> memory are done, the GC will not run and so will not cause the maximum 
>>>> latency to be exceeded.
>>>>
>>>> It is possible to create a real-time thread by detaching it from the 
>>>> runtime, marking the thread function @nogc, and ensuring the real-time 
>>>> thread does not hold any GC roots. GC objects can still be used in the 
>>>> real-time thread, but they must be referenced from other threads to 
>>>> prevent them from being collected."
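>>>>
>>>> The Julia analogue of that preallocation advice is plain enough; a 
>>>> sketch (sizes and names are only for illustration):
>>>>
>>>> # Preallocate every buffer the time-critical loop will need, up front:
>>>> const img  = zeros(UInt8, 1024, 1024)    # camera frame buffer
>>>> const work = zeros(Float64, 1024, 1024)  # scratch buffer
>>>>
>>>> function critical_step!(work, img)
>>>>     @inbounds for i in eachindex(img)
>>>>         work[i] = img[i] * 0.5           # stand-in for the real processing
>>>>     end
>>>>     return work                          # nothing allocated inside the loop
>>>> end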
>>>>
>>>> That is, would it be possible to make a macro @nogc and mark functions 
>>>> in a similar way? I'm not aware that such a macro is available to 
>>>> disallow allocations. There is a macro, @time, but it is not sufficient: 
>>>> it shows GC activity, but knowing there was none could have been an 
>>>> accident; if you run your code again and memory fills up, you see a 
>>>> different result.
>>>>
>>>> As with D, the GC in Julia is optional. The above @nogc is really the 
>>>> only thing different that I can think of that is better with D's 
>>>> optional memory management. But I'm no expert on D, and I may not have 
>>>> looked too closely:
>>>>
>>>> https://dlang.org/spec/garbage.html
>>>>
>>>>
>>>>> Tobi
>>>>>
>>>>> On Monday, May 30, 2016 at 12:00:13 PM UTC+2, John leger wrote: 
>>>>>>
>>>>>> Hi everyone,
>>>>>>
>>>>>> I am working in astronomy and we are thinking of using Julia for a 
>>>>>> real-time, high-performance adaptive optics system on a solar telescope.
>>>>>>
>>>>>> This is how the system is supposed to work: 
>>>>>>    1) the image is read from the camera
>>>>>>    2) some corrections are applied
>>>>>>    3) the atmospheric turbulence is numerically estimated in order to 
>>>>>> calculate the command to be sent to the deformable mirror
>>>>>>
>>>>>> The overall process should be executed in less than 1 ms so that it 
>>>>>> can be integrated into the chain (closed loop).
>>>>>>
>>>>>> Do you think it is possible to do all the computation in Julia, or 
>>>>>> would it be better to code some parts in C/C++? What I fear the most 
>>>>>> is the GC, but in our case we can pre-allocate everything, so once we 
>>>>>> launch the system there will not be any memory allocated during the 
>>>>>> experiment, and it will run for days.
>>>>>>
>>>>>> So, what do you think? Considering the current state of Julia, will I 
>>>>>> be able to get the performance I need? Will the garbage collector be a 
>>>>>> hindrance?
>>>>>>
>>>>>> Thank you.
>>>>>>
>>>>>
>>
