Re: [julia-users] Re: Using Julia for real time astronomy

2016-06-09 Thread John leger


Le mercredi 8 juin 2016 17:33:18 UTC+2, Páll Haraldsson a écrit :
>
> On Monday, June 6, 2016 at 9:41:29 AM UTC, John leger wrote:
>>
>> Since it seems you have a good overview in this domain I will give more 
>> details:
>> We are working in signal processing and especially in image processing. 
>> The goal here is just the adaptive optic: we just want to stabilize the 
>> image and not get the final image.
>> The consequence is that we will not store anything on the hard drive: we 
>> read an image, process it and destroy it. We stay in RAM all the time.
>> The processing is done by using/coding our algorithms. So for now, no 
>> need of any external library (for now, but I don't see any reason for that 
>> now)
>>
>
> I completely misread/missed 3) about the "deformable mirror"; I see now 
> it's a down-to-earth project - literally.. :)
>
> Still, glad to help, even if it doesn't get Julia into space. :)
>
>
>
>> First I would like to apologize: just after posting my answer I went to 
>> wikipedia to search the difference between soft and real time. 
>> I should have done it before so that you don't have to spend more time to 
>> explain.
>>
>> In the end I still don't know if I am hard real time or soft real time: 
>> the timing is given by the camera speed and the processing should be done 
>> between the acquisition of two images.
>>
>
>
> From: 
> https://en.wikipedia.org/wiki/Real-time_computing#Criteria_for_real-time_computing
>
>- *Hard* – missing a deadline is a total system failure.
>- *Firm* – infrequent deadline misses are tolerable, but may degrade 
>the system's quality of service. The usefulness of a result is zero after 
>its deadline.
>- *Soft* – the usefulness of a result degrades after its deadline, 
>thereby degrading the system's quality of service.
>
> [Note also, real-time also applies to doing stuff too early, not only to 
> not doing stuff too late.. In some cases, say in games, that is not a [big] 
> problem, getting a frame ready earlier isn't a big concern.]
>
>
>
That's why in the previous mail I said that for now we will consider the 
system as soft real-time. But even if we can tolerate some missed 
deadlines, we don't want them to happen too often. So soft is not bad, but 
something like 95% hard real-time (firm) sounds better in our case.
 

> Are you sure "the processing should be done between the acquisition of two 
> images" is a strict requirement? I assume the "atmospheric turbulence" does 
> not change extremely quickly, so you could have some latency, with your 
> calculation applying for some time/at least a few/many frames after; then 
> your project seems not hard real-time at all. Maybe soft or firm, a 
> category I had forgotten..
>
>
>
The system is a closed loop without threads. The loop does all the steps 
described before, one after another, and restarts, so taking the flow of 
the camera as the reference timer is a good idea.
Your assumption is not correct: the turbulence is the main reason why we 
need the 1 kHz so badly, and add to that the fact that we are working with 
the sun in the visible spectrum (we want to observe fast things in hard 
conditions).

 

> At least your timescale is much longer than the camera speed to capture 
> each frame in a video?
>
>
> You also said "1000 images/sec but the camera may be able to go up to 10 
> 000 images/sec". I'm aware of very high-speed photography, such as 
> capturing a picture of a bullet from a gun, or seeing light literally 
> spreading across a room. Still do you need many frames per second for 
> (capturing video, that seems not your job) or for correction? Did you mix 
> up camera speed for exposure time? Ordinary cameras go up to 1/1000 s 
> shutter speed, but might only take video at up to 30, 60 or say 120 fps.
>
>
>
This will be the kind of camera we will be using:
http://www.mikrotron.de/en/products/machine-vision-cameras/coaxpressr.html 
<- 4CXP
If you look at the datasheet and consider that we will work at a 
resolution of ~400x400, 1000 fps is an easy thing to do. 
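As a rough sanity check of that claim (a sketch; the 8-bit grayscale pixel 
format is my assumption, the camera may well use more bits per pixel), the 
raw data rate at that resolution and frame rate is:

```julia
# Rough data rate at ~400x400 resolution and 1000 fps,
# assuming 1 byte per pixel (8-bit grayscale).
width, height, fps = 400, 400, 1000
bytes_per_frame = width * height          # 160_000 bytes per frame
bytes_per_sec   = bytes_per_frame * fps   # 160_000_000 bytes/s, i.e. 160 MB/s
```

That is well within what a CoaXPress link can carry, which is consistent 
with the "easy thing to do" remark.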
 

>
> >I like the definition of 95% hard real time; it suits my needs. Thanks 
> for this good paper.
>
> The term/title, sounds like firm real-time..
>
>  
>
>> We don't want to miss an image or delay the processing, I still need to 
>> clarify the consequences of a delay or if we miss an image.
>> For now let's just say that we can miss some images so we want soft real 
>> time.
>>
>
> You could store with each frame a) how long ago the mirror was 
> corrected, based on b) a measurement from how long ago. Also, can't you 
> [easily] see from a picture if the mirror is maladjusted? Does it then 
> look blurred, with high-frequency content missing?
>
> How many "mirrors" are adjusted, or points in the mirror[s]?
>

We will use this DM, the 97-15, so 97 actuators.
http://www.alpao.com/Products/Deformable_mirrors.htm
All of the values I gave to you were given to me by the people currently

[julia-users] Re: Using Julia for real time astronomy

2016-06-08 Thread Islam Badreldin
Hi Páll,


> The @nogc macro was made a long time ago, I now see:
>
>
> https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/Suspending$20Garbage$20Collection$20for$20Performance...good$20idea$20or$20bad$20idea$3F/julia-users/6_XvoLBzN60/nkB30SwmdHQJ
>

This is a very informative thread. Thank you for pointing it out!
 

>
> I'm not saying disabling the GC is preferred, just that a macro to do it 
> had already been made.
>
> Karpinski has his own exception variant a little down the thread with "you 
> really want to put a try-catch around it". I just changed that variant so 
> it can be called recursively (and disabled try-catch as it was broken):
>
> macro nogc(ex)
>     quote
>         #try
>             local pref = gc_enable(false)
>             local val = $(esc(ex))
>         #finally
>             gc_enable(pref)
>         #end
>         val
>     end
> end
>
>
>
  -Islam 


[julia-users] Re: Using Julia for real time astronomy

2016-06-08 Thread Páll Haraldsson
On Tuesday, May 31, 2016 at 4:44:17 PM UTC, Páll Haraldsson wrote:
>
> On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote:
>>
>> If you are prepared to make your code to not perform any heap 
>> allocations, I don't see a reason why there should be any issue. When I 
>> once worked on a very first multi-threading version of Julia I wrote 
>> exactly such functions that won't trigger gc since the later was not thread 
>> safe. This can be hard work but I would assume that its at least not more 
>> work than implementing the application in C/C++ (assuming that you have 
>> some Julia experience)
>>
>
> I would really like to know why the work is hard, is it getting rid of the 
> allocations, or being sure there are no more hidden in your code? I would 
> also like to know then if you can do the same as in D language:
>
> http://wiki.dlang.org/Memory_Management
>
 

> that is would it be possible to make a macro @nogc and mark functions in a 
> similar way?
>

The @nogc macro was made a long time ago, I now see:

https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/Suspending$20Garbage$20Collection$20for$20Performance...good$20idea$20or$20bad$20idea$3F/julia-users/6_XvoLBzN60/nkB30SwmdHQJ

I'm not saying disabling the GC is preferred, just that a macro to do it 
had already been made.

Karpinski has his own exception variant a little down the thread with "you 
really want to put a try-catch around it". I just changed that variant so 
it can be called recursively (and disabled try-catch as it was broken):

macro nogc(ex)
    quote
        #try
            local pref = gc_enable(false)
            local val = $(esc(ex))
        #finally
            gc_enable(pref)
        #end
        val
    end
end
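For illustration, a hypothetical use of such a macro around a 
latency-sensitive section (note that gc_enable was renamed GC.enable in 
Julia 1.0, hence the compatibility shim below, and that disabling the GC 
only defers collection; allocations still accumulate):

```julia
# Compatibility shim: gc_enable was renamed GC.enable in Julia 1.0.
isdefined(Base, :gc_enable) || (gc_enable(on::Bool) = GC.enable(on))

macro nogc(ex)
    quote
        local pref = gc_enable(false)   # remember the previous GC state
        local val = $(esc(ex))          # run the wrapped expression
        gc_enable(pref)                 # restore the previous GC state
        val
    end
end

# Hypothetical use around a latency-sensitive computation:
total = @nogc sum(1:1000)
```

Because the macro restores the previous state rather than unconditionally 
re-enabling the GC, nested @nogc blocks behave correctly.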




Re: [julia-users] Re: Using Julia for real time astronomy

2016-06-08 Thread Páll Haraldsson
On Monday, June 6, 2016 at 9:41:29 AM UTC, John leger wrote:
>
> Since it seems you have a good overview in this domain I will give more 
> details:
> We are working in signal processing and especially in image processing. 
> The goal here is just the adaptive optic: we just want to stabilize the 
> image and not get the final image.
> The consequence is that we will not store anything on the hard drive: we 
> read an image, process it and destroy it. We stay in RAM all the time.
> The processing is done by using/coding our algorithms. So for now, no need 
> of any external library (for now, but I don't see any reason for that now)
>

I completely misread/missed 3) about the "deformable mirror"; I see now 
it's a down-to-earth project - literally.. :)

Still, glad to help, even if it doesn't get Julia into space. :)



> First I would like to apologize: just after posting my answer I went to 
> wikipedia to search the difference between soft and real time. 
> I should have done it before so that you don't have to spend more time to 
> explain.
>
> In the end I still don't know if I am hard real time or soft real time: 
> the timing is given by the camera speed and the processing should be done 
> between the acquisition of two images.
>


From: 
https://en.wikipedia.org/wiki/Real-time_computing#Criteria_for_real-time_computing

   - *Hard* – missing a deadline is a total system failure.
   - *Firm* – infrequent deadline misses are tolerable, but may degrade the 
   system's quality of service. The usefulness of a result is zero after its 
   deadline.
   - *Soft* – the usefulness of a result degrades after its deadline, 
   thereby degrading the system's quality of service.

[Note also, real-time also applies to doing stuff too early, not only to 
not doing stuff too late.. In some cases, say in games, that is not a [big] 
problem, getting a frame ready earlier isn't a big concern.]


Are you sure "the processing should be done between the acquisition of two 
images" is a strict requirement? I assume the "atmospheric turbulence" does 
not change extremely quickly, so you could have some latency, with your 
calculation applying for some time/at least a few/many frames after; then 
your project seems not hard real-time at all. Maybe soft or firm, a 
category I had forgotten..


At least, is your timescale much longer than the time the camera takes to 
capture each frame of a video?


You also said "1000 images/sec but the camera may be able to go up to 10 
000 images/sec". I'm aware of very high-speed photography, such as 
capturing a picture of a bullet from a gun, or seeing light literally 
spreading across a room. Still, do you need that many frames per second 
for capturing video (that seems not to be your job) or for correction? Did 
you mix up camera speed with exposure time? Ordinary cameras go up to 
1/1000 s shutter speed, but might only take video at up to 30, 60 or say 
120 fps.



> I like the definition of 95% hard real time; it suits my needs. Thanks for 
> this good paper.

The term/title sounds like firm real-time..

 

> We don't want to miss an image or delay the processing, I still need to 
> clarify the consequences of a delay or if we miss an image.
> For now let's just say that we can miss some images so we want soft real 
> time.
>

You could store with each frame a) how long ago the mirror was corrected, 
based on b) a measurement from how long ago. Also, can't you [easily] see 
from a picture if the mirror is maladjusted? Does it then look blurred, 
with high-frequency content missing?

How many "mirrors" are adjusted, or points in the mirror[s]?


> I'm making a benchmark that should match the system in term of complexity, 
> these are my first remarks:
>
> When you say that one allocation is unacceptable, I say it's shockingly 
> true: In my case I had 2 allocations done by:
> A +=1 where A is an array
> and in 7 seconds I had 600k allocations. 
> Morality :In closed loop you cannot accept any alloc and so you have to 
> explicit all loops.
>

I think you mean two (or even one) allocations are bad because they are in 
a loop, and that loop runs for each adjustment.

I meant even just one allocation (per adjustment, or frame if you will) can 
be a problem. Well, not strictly, but say there have been many in the past; 
then it's only the last one that is the problem.
 

>
> I have two problems now:
>
> 1/ Many times, the first run, which includes the compilation, was the 
> fastest, and then any other run was slower by a factor of 2.
> 2/ If I relaunch the main function (which is in a module) many times, some 
> runs are very different (slower) from the previous ones.
>
> About 1/, although I find it strange, I don't really care.
> 2/ is far more problematic: once the code is compiled I want it to act the 
> same whatever the number of launches.
> I have some ideas why, but no certitude. What bothers me the most is that 
> all the runs in the benchmark will be slower; it's not a temporary slowd

Re: [julia-users] Re: Using Julia for real time astronomy

2016-06-07 Thread Islam Badreldin
Hi John,

Please see below ..

On Tuesday, June 7, 2016 at 5:26:32 AM UTC-4, John leger wrote:
>
> Hi Islam,
>
> I like the definition of 95% hard real time; it suits my needs. Thanks for 
> this good paper.
>
> Le lundi 6 juin 2016 18:45:35 UTC+2, Islam Badreldin a écrit :
>>
>> Hi John,
>>
>> I am currently pursuing similar effort. I got a GPIO pin on the 
>> BeagleBone Black embedded board toggling in hard real-time and verified the 
>> jitter with an oscilloscope. For that, I used a vanilla Linux 4.4.11 kernel 
>> with the PREEMPT_RT patch applied. I also released an initial version of a 
>> Julia package that wraps the clock_nanosleep() and clock_gettime() 
>> functions from the POSIX real-time extensions. Please see this other thread:
>> https://groups.google.com/forum/#!topic/julia-users/0Vr2rCRwJY4
>>
>> I tested that package both on Intel-based laptop and on the BeagleBone 
>> Black. I am giving some of the relevant details below..
>>
>> On Monday, June 6, 2016 at 5:41:29 AM UTC-4, John leger wrote:
>>>
>>> Since it seems you have a good overview in this domain I will give more 
>>> details:
>>> We are working in signal processing and especially in image processing. 
>>> The goal here is just the adaptive optic: we just want to stabilize the 
>>> image and not get the final image.
>>> The consequence is that we will not store anything on the hard drive: we 
>>> read an image, process it and destroy it. We stay in RAM all the time.
>>> The processing is done by using/coding our algorithms. So for now, no 
>>> need of any external library (for now, but I don't see any reason for that 
>>> now)
>>>
>>> First I would like to apologize: just after posting my answer I went to 
>>> wikipedia to search the difference between soft and real time. 
>>> I should have done it before so that you don't have to spend more time 
>>> to explain.
>>>
>>> In the end I still don't know if I am hard real time or soft real time: 
>>> the timing is given by the camera speed and the processing should be done 
>>> between the acquisition of two images.
>>> We don't want to miss an image or delay the processing, I still need to 
>>> clarify the consequences of a delay or if we miss an image.
>>> For now let's just say that we can miss some images so we want soft real 
>>> time.
>>>
>>
>> The real-time performance you are after could be 95% hard real-time. See 
>> e.g. here: https://www.osadl.org/fileadmin/dam/rtlws/12/Brown.pdf
>>  
>>
>>>
>>> I'm making a benchmark that should match the system in term of 
>>> complexity, these are my first remarks:
>>>
>>> When you say that one allocation is unacceptable, I say it's shockingly 
>>> true: In my case I had 2 allocations done by:
>>> A +=1 where A is an array
>>> and in 7 seconds I had 600k allocations. 
>>> Morality :In closed loop you cannot accept any alloc and so you have to 
>>> explicit all loops.
>>>
>>
>> Yes, try to completely avoid memory allocations while developing your own 
>> algorithms in Julia. Pre-allocations and in-place operations are your 
>> friends! The example script available on the POSIXClock package is one way 
>> to do this (
>> https://github.com/ibadr/POSIXClock.jl/blob/master/examples/rt_histogram.jl).
>>  
>> The real-time section of the code is marked by a ccall to mlockall() in 
>> order to cause immediate failure upon memory allocations in the real-time 
>> section. You can also use the --track-allocation option to hunt down 
>> memory allocations while developing your algorithm. See e.g. 
>> http://docs.julialang.org/en/release-0.4/manual/profile/#man-track-allocation
>>  
>>
>
> I discovered --track-allocation not so long ago and it is a good tool. 
> For now I think I will rely on tracking allocation manually. I am a little 
> afraid of using mlockall(): In soft or real time crashing (failure) is not 
> a good option for me...
> Since you are talking about --track-allocation I have a question:
>
>
> - function deflat(v::globalVar)
> 0     @simd for i in 1:v.len_sub
> 0         @inbounds v.sub_imagef[i] = v.flat[i]*v.image[i]
> -     end
> - 
> 0     @simd for i in 1:v.len_ref
> 0         @inbounds v.ref_imagef[i] = v.flat[i]*v.image[i]
> -     end
> 0     return
> - end
> - 
> - # get min max
> - # apply norm_coef
> - # MORE TO DO HERE
> - function normalization(v::globalVar)
> 0     min::Float32 = Float32(4095)
> 0     max::Float32 = Float32(0)
> 0     tmp::Float32 = Float32(0)
> 0     norm_fact::Float32 = Float32(0)
> 0     norm_coef::Float32 = Float32(0)
> -     # find min max
> 0     @simd for i in 1:v.nb_mat
> 0         # Doing something with no allocs
> 0     end
> 0 end
> 0 
>   1226415 # SAD[70] 16x16 of Ref_Image on Sub

Re: [julia-users] Re: Using Julia for real time astronomy

2016-06-07 Thread John leger
Hi Islam,

I like the definition of 95% hard real time; it suits my needs. Thanks for 
this good paper.

Le lundi 6 juin 2016 18:45:35 UTC+2, Islam Badreldin a écrit :
>
> Hi John,
>
> I am currently pursuing similar effort. I got a GPIO pin on the BeagleBone 
> Black embedded board toggling in hard real-time and verified the jitter 
> with an oscilloscope. For that, I used a vanilla Linux 4.4.11 kernel with 
> the PREEMPT_RT patch applied. I also released an initial version of a Julia 
> package that wraps the clock_nanosleep() and clock_gettime() functions from 
> the POSIX real-time extensions. Please see this other thread:
> https://groups.google.com/forum/#!topic/julia-users/0Vr2rCRwJY4
>
> I tested that package both on Intel-based laptop and on the BeagleBone 
> Black. I am giving some of the relevant details below..
>
> On Monday, June 6, 2016 at 5:41:29 AM UTC-4, John leger wrote:
>>
>> Since it seems you have a good overview in this domain I will give more 
>> details:
>> We are working in signal processing and especially in image processing. 
>> The goal here is just the adaptive optic: we just want to stabilize the 
>> image and not get the final image.
>> The consequence is that we will not store anything on the hard drive: we 
>> read an image, process it and destroy it. We stay in RAM all the time.
>> The processing is done by using/coding our algorithms. So for now, no 
>> need of any external library (for now, but I don't see any reason for that 
>> now)
>>
>> First I would like to apologize: just after posting my answer I went to 
>> wikipedia to search the difference between soft and real time. 
>> I should have done it before so that you don't have to spend more time to 
>> explain.
>>
>> In the end I still don't know if I am hard real time or soft real time: 
>> the timing is given by the camera speed and the processing should be done 
>> between the acquisition of two images.
>> We don't want to miss an image or delay the processing, I still need to 
>> clarify the consequences of a delay or if we miss an image.
>> For now let's just say that we can miss some images so we want soft real 
>> time.
>>
>
> The real-time performance you are after could be 95% hard real-time. See 
> e.g. here: https://www.osadl.org/fileadmin/dam/rtlws/12/Brown.pdf
>  
>
>>
>> I'm making a benchmark that should match the system in term of 
>> complexity, these are my first remarks:
>>
>> When you say that one allocation is unacceptable, I say it's shockingly 
>> true: In my case I had 2 allocations done by:
>> A +=1 where A is an array
>> and in 7 seconds I had 600k allocations. 
>> Morality :In closed loop you cannot accept any alloc and so you have to 
>> explicit all loops.
>>
>
> Yes, try to completely avoid memory allocations while developing your own 
> algorithms in Julia. Pre-allocations and in-place operations are your 
> friends! The example script available on the POSIXClock package is one way 
> to do this (
> https://github.com/ibadr/POSIXClock.jl/blob/master/examples/rt_histogram.jl). 
> The real-time section of the code is marked by a ccall to mlockall() in 
> order to cause immediate failure upon memory allocations in the real-time 
> section. You can also use the --track-allocation option to hunt down 
> memory allocations while developing your algorithm. See e.g. 
> http://docs.julialang.org/en/release-0.4/manual/profile/#man-track-allocation
>  
>

I discovered --track-allocation not so long ago, and it is a good tool. For 
now I think I will rely on tracking allocations manually. I am a little 
afraid of using mlockall(): in soft or hard real time, crashing (failure) 
is not a good option for me...
Since you are talking about --track-allocation I have a question:


- function deflat(v::globalVar)
0     @simd for i in 1:v.len_sub
0         @inbounds v.sub_imagef[i] = v.flat[i]*v.image[i]
-     end
- 
0     @simd for i in 1:v.len_ref
0         @inbounds v.ref_imagef[i] = v.flat[i]*v.image[i]
-     end
0     return
- end
- 
- # get min max
- # apply norm_coef
- # MORE TO DO HERE
- function normalization(v::globalVar)
0     min::Float32 = Float32(4095)
0     max::Float32 = Float32(0)
0     tmp::Float32 = Float32(0)
0     norm_fact::Float32 = Float32(0)
0     norm_coef::Float32 = Float32(0)
-     # find min max
0     @simd for i in 1:v.nb_mat
0         # Doing something with no allocs
0     end
0 end
0 
  1226415 # SAD[70] 16x16 of Ref_Image on Sub_Image[60]
- function correlation_SAD(v::globalVar)
0 
- end
- 

In the .mem output file I have this information: at the end of 
normalization I have no alloc, and in front of the SAD comment and before 
the em

Re: [julia-users] Re: Using Julia for real time astronomy

2016-06-06 Thread Islam Badreldin
Hi John,

I am currently pursuing similar effort. I got a GPIO pin on the BeagleBone 
Black embedded board toggling in hard real-time and verified the jitter 
with an oscilloscope. For that, I used a vanilla Linux 4.4.11 kernel with 
the PREEMPT_RT patch applied. I also released an initial version of a Julia 
package that wraps the clock_nanosleep() and clock_gettime() functions from 
the POSIX real-time extensions. Please see this other thread:
https://groups.google.com/forum/#!topic/julia-users/0Vr2rCRwJY4

I tested that package both on an Intel-based laptop and on the BeagleBone 
Black. I am giving some of the relevant details below..

On Monday, June 6, 2016 at 5:41:29 AM UTC-4, John leger wrote:
>
> Since it seems you have a good overview in this domain I will give more 
> details:
> We are working in signal processing and especially in image processing. 
> The goal here is just the adaptive optic: we just want to stabilize the 
> image and not get the final image.
> The consequence is that we will not store anything on the hard drive: we 
> read an image, process it and destroy it. We stay in RAM all the time.
> The processing is done by using/coding our algorithms. So for now, no need 
> of any external library (for now, but I don't see any reason for that now)
>
> First I would like to apologize: just after posting my answer I went to 
> wikipedia to search the difference between soft and real time. 
> I should have done it before so that you don't have to spend more time to 
> explain.
>
> In the end I still don't know if I am hard real time or soft real time: 
> the timing is given by the camera speed and the processing should be done 
> between the acquisition of two images.
> We don't want to miss an image or delay the processing, I still need to 
> clarify the consequences of a delay or if we miss an image.
> For now let's just say that we can miss some images so we want soft real 
> time.
>

The real-time performance you are after could be 95% hard real-time. See 
e.g. here: https://www.osadl.org/fileadmin/dam/rtlws/12/Brown.pdf
 

>
> I'm making a benchmark that should match the system in term of complexity, 
> these are my first remarks:
>
> When you say that one allocation is unacceptable, I say it's shockingly 
> true: In my case I had 2 allocations done by:
> A +=1 where A is an array
> and in 7 seconds I had 600k allocations. 
> Morality :In closed loop you cannot accept any alloc and so you have to 
> explicit all loops.
>

Yes, try to completely avoid memory allocations while developing your own 
algorithms in Julia. Pre-allocation and in-place operations are your 
friends! The example script available in the POSIXClock package is one way 
to do this 
(https://github.com/ibadr/POSIXClock.jl/blob/master/examples/rt_histogram.jl). 
The real-time section of the code is marked by a ccall to mlockall() in 
order to cause immediate failure upon memory allocations in the real-time 
section. You can also use the --track-allocation option to hunt down memory 
allocations while developing your algorithm. See e.g. 
http://docs.julialang.org/en/release-0.4/manual/profile/#man-track-allocation
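For reference, a minimal invocation of that option (the script name here is 
hypothetical):

```shell
# Run the program with per-line memory-allocation tracking for user code.
julia --track-allocation=user benchmark.jl

# Afterwards each source file gets a companion .mem file
# (e.g. benchmark.jl.mem) with the bytes allocated per line in the left
# margin; "-" marks lines where no allocation data was recorded.
```

Using `=user` rather than `=all` keeps Base's own allocations out of the 
report, which makes hunting your own allocations much easier.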
 

>
> I have two problems now:
>
> 1/ Many times, the first run, which includes the compilation, was the 
> fastest, and then any other run was slower by a factor of 2.
> 2/ If I relaunch the main function (which is in a module) many times, some 
> runs are very different (slower) from the previous ones.
>
> About 1/, although I find it strange, I don't really care.
> 2/ is far more problematic: once the code is compiled I want it to act the 
> same whatever the number of launches.
> I have some ideas why, but no certitude. What bothers me the most is that 
> all the runs in the benchmark will be slower; it's not a temporary 
> slowdown, it's the whole current benchmark that will be slower.
> If I launch again it will be back to the best performance.
>
> Thank you for the links they are very interesting and I keep that in mind.
>
> Note: I disabled hyperthreading and overclock, so it should not be the CPU 
> doing funky things.
>
>
>
Regarding these two issues, I encountered similar ones. Are you running on 
an Intel-based computer? I had to do many tweaks to get acceptable 
real-time performance with Intel processors. Many factors could be at 
play. As you said, you have to make sure hyper-threading is disabled and 
not to overclock the processor. Also, monitor the kernel dmesg log for any 
errors or warnings regarding RT throttling or local_softirq_pending.

Additionally, I had to use the following options in the Linux command line 
(pass them from the bootloader):

intel_idle.max_cstate=0 processor.max_cstate=0 idle=poll

Together with removing the intel_powerclamp kernel module (sudo rmmod 
intel_powerclamp). Caution: be extremely careful with such a configuration, 
as it disables many power-saving features in the processor and can 
potentially overheat it. Keep an eye on the kernel dmesg log and try to 
monitor

Re: [julia-users] Re: Using Julia for real time astronomy

2016-06-06 Thread John leger
Since it seems you have a good overview of this domain, I will give more 
details:
We are working in signal processing, and especially in image processing. 
The goal here is just the adaptive optics: we only want to stabilize the 
image, not produce the final image.
The consequence is that we will not store anything on the hard drive: we 
read an image, process it, and destroy it. We stay in RAM all the time.
The processing is done by using/coding our own algorithms, so for now there 
is no need for any external library (and I don't see any reason for that to 
change).

First I would like to apologize: just after posting my answer I went to 
Wikipedia to look up the difference between soft and hard real time. 
I should have done it before, so that you wouldn't have to spend more time 
explaining.

In the end I still don't know if I am hard real-time or soft real-time: 
the timing is given by the camera speed, and the processing should be done 
between the acquisition of two images.
We don't want to miss an image or delay the processing; I still need to 
clarify the consequences of a delay or of a missed image.
For now let's just say that we can miss some images, so we want soft real 
time.

I'm making a benchmark that should match the system in terms of 
complexity; these are my first remarks:

When you say that one allocation is unacceptable, I say it's shockingly 
true: in my case I had 2 allocations done by:
A += 1, where A is an array,
and in 7 seconds I had 600k allocations. 
Moral: in a closed loop you cannot accept any allocation, so you have to 
write all loops explicitly.
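To illustrate the point (a minimal sketch; the array and its size are made 
up): in Julia, `A += 1` is shorthand for `A = A + 1`, which builds a 
brand-new array on every evaluation, whereas an explicit loop updates the 
array in place and allocates nothing.

```julia
# Allocates: rebinds A to a freshly allocated array on every call.
function bump_alloc(A)
    A += 1          # sugar for A = A + 1; the caller's array is untouched
    return A
end

# Allocation-free: explicit in-place loop over the existing array.
function bump_inplace!(A)
    @inbounds @simd for i in eachindex(A)
        A[i] += 1
    end
    return A
end

A = zeros(Float32, 400, 400)
B = bump_alloc(A)       # B is a new array; A itself is still all zeros
bump_inplace!(A)        # A is now updated in place
```

Run in a tight closed loop, the first version allocates one array per 
iteration, which matches the 600k allocations observed above.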

I have two problems now:

1/ Many times, the first run, which includes the compilation, was the 
fastest, and then any other run was slower by a factor of 2.
2/ If I relaunch the main function (which is in a module) many times, some 
runs are very different (slower) from the previous ones.

About 1/, although I find it strange, I don't really care.
2/ is far more problematic: once the code is compiled I want it to act the 
same whatever the number of launches.
I have some ideas why, but no certitude. What bothers me the most is that 
all the runs in the benchmark will be slower; it's not a temporary 
slowdown, it's the whole current benchmark that will be slower.
If I launch again it will be back to the best performance.
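On 1/, the usual pattern is to separate compilation from measurement: call 
the function once so it gets JIT-compiled, then time the later calls. A 
minimal sketch (`work` is a made-up stand-in for the benchmark's main 
function):

```julia
# Hypothetical workload standing in for the benchmark's main loop.
function work(n)
    s = 0.0
    for i in 1:n
        s += i * 0.5
    end
    return s
end

work(1)                       # first call: includes JIT compilation
t = @elapsed work(1_000_000)  # later calls: steady-state timing only
```

This explains why the first run can differ, but not why subsequent runs 
would be consistently slower, which indeed points at something else.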

Thank you for the links; they are very interesting and I will keep them in 
mind.

Note: I disabled hyperthreading and overclocking, so it should not be the 
CPU doing funky things.

Le vendredi 3 juin 2016 18:28:58 UTC+2, Páll Haraldsson a écrit :
>
> On Thursday, June 2, 2016 at 7:55:03 AM UTC, John leger wrote:
>>
>> Páll: don't worry about the project failing because of YOUU ;) in any 
>> case we wanted to try Julia and see if we could get help/tips from the 
>> community.
>>
>
> Still, feel free to ask me anytime. I just do not want to give bad 
> professional advice or oversell Julia.
>  
>
>> About the nogc I wonder if activating it will also prevent the core of 
>> Julia to be garbage collected ? If yes for long run it's a bad idea to 
>> disable it too long.
>>
>
> Not really,* see below.
>  
>
>> For now the only options are C/C++ and Julia, sorry no D or Lisp 
>> :) Why would you not recommend C for this kind of task?
>> And I said 1000 images/sec but the camera may be able to go up to 10 000 
>> images/sec so I think we can define it as hard real time.
>>
>
> Not really. There's a hard and fast definition of hard real-time (and 
> real-time in general): it's not about speed, it's about timely actions. 
> That said, 10 000 images/sec is a lot.. 9 GB of uncompressed data per 
> second, assuming gray-scale, byte-per-pixel, megapixel resolution. You 
> would fill up the 2 TB SSD I've seen advertised [I don't know about 
> radiation-hardening those, I guess anything is possible; do you know 
> anything about the potential hardware used?] in three and a half minutes.
>
> How fast are the down-links on these satellites? Would you get all the 
> [processed] data down to earth? If you can not, do you pick and choose 
> framerate and/or which period of time to "download"? Since I'm sure you 
> want lossless compression, it seems http://flif.info/ might be of 
> interest to you. [FLIF should really be wrapped as a Julia library.. 
> There's also a native executable, that could do, while maybe not suitable 
> for you/real-time, for invoking a separate process.] FLIF was GPL licensed, 
> that shouldn't be a problem for government work, and should be even more 
> non-issue now [for anybody].
>
>
> You can see from here:
>
> https://github.com/JuliaLang/julia/pull/12915#issuecomment-137114298
>
> that soft real-time was proposed for the NEWS section and even that 
> proposal was shot down. That may have been overly cautious for the 
> incremental GC and I've seen audio (that is more latency sensitive than 
> video - at the usual frame rates..) being talked about working in some 
> thread, and software-defined-radio being discussed as a Julia project.
>
>
> * "

Re: [julia-users] Re: Using Julia for real time astronomy

2016-06-03 Thread Páll Haraldsson
On Thursday, June 2, 2016 at 7:55:03 AM UTC, John leger wrote:
>
> Páll: don't worry about the project failing because of YOUU ;) in any 
> case we wanted to try Julia and see if we could get help/tips from the 
> community.
>

Still, feel free to ask me anytime. I just do not want to give bad 
professional advice or oversell Julia.
 

> About the nogc, I wonder if activating it will also prevent the core of 
> Julia from being garbage collected? If yes, for a long run it's a bad idea 
> to disable it for too long.
>

Not really,* see below.
 

> For now the only options are C/C++ and Julia, sorry no D or Lisp 
> :) Why would you not recommend C for this kind of task?
> And I said 1000 images/sec but the camera may be able to go up to 10 000 
> images/sec so I think we can define it as hard real time.
>

Not really. There's a hard and fast definition of hard real-time (and 
real-time in general); it's not about speed, it's about timely actions. That 
said, 10 000 images/sec is a lot.. 9 GB of uncompressed data per second, 
assuming gray-scale byte-per-pixel megapixel resolution. You would fill up 
the 2 TB SSD I've seen advertised [I don't know about radiation-hardening 
those, I guess anything is possible; do you know anything about the potential 
hardware used?] in three and a half minutes.

How fast are the down-links on these satellites? Would you get all the 
[processed] data down to earth? If you can not, do you pick and choose 
framerate and/or which period of time to "download"? Since I'm sure you 
want lossless compression, it seems http://flif.info/ might be of interest 
to you. [FLIF should really be wrapped as a Julia library.. There's also a 
native executable, that could do, while maybe not suitable for 
you/real-time, for invoking a separate process.] FLIF was GPL licensed, 
that shouldn't be a problem for government work, and should be even more 
non-issue now [for anybody].
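Invoking the FLIF encoder as a separate process, as suggested above, is straightforward from Julia. A minimal sketch, with the caveat that the exact binary name and flags are assumptions (the encoder command is passed in rather than hard-coded):

```julia
# Hedged sketch: compress a saved frame by running an external encoder
# (e.g. the FLIF command-line tool mentioned above) as a child process.
# `encoder` is whatever command is installed; its name is an assumption.
function compress_frame(encoder::Cmd, src::AbstractString, dst::AbstractString)
    run(`$encoder $src $dst`)   # blocks until the child process exits
    return dst
end

# e.g. compress_frame(`flif`, "frame0001.png", "frame0001.flif")
```

Since `run` throws on a non-zero exit status, a failed encode surfaces as an exception rather than a silently missing file.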


You can see from here:

https://github.com/JuliaLang/julia/pull/12915#issuecomment-137114298

that soft real-time was proposed for the NEWS section and even that 
proposal was shot down. That may have been overly cautious for the 
incremental GC and I've seen audio (that is more latency sensitive than 
video - at the usual frame rates..) being talked about working in some 
thread, and software-defined-radio being discussed as a Julia project.


* "About the nogc", if you meant the function to disable the GC, then it 
doesn't block allocations (but my proposal did), it only postpones 
deallocations. There is no @nogc macro; my proposal for @nogc to block 
allocations was only a proposal, and rethinking it, not really too 
helpful. It was a fail-fast debugging proposal, but as @time does show 
allocations (or none, when there are none), not just GC activity, it should 
do for debugging. I did a test:

[Note, this has to be in a function, not in the global scope:]

julia> function test()
           @time a = [1 2 3]
           @time a[1] = 2
       end
test (generic function with 1 method)

julia> test()
  0.01 seconds (1 allocation: 96 bytes)
  0.00 seconds

You want to see something like the latter result, not the former, not even 
with "1 allocation". It seems innocent enough, as there is no GC activity 
(then there would be more text), but that is just an accident. When garbage 
accumulates, even one allocation can trigger a GC and lots of 
deallocations, and that can take an unbounded amount of time in naive GC 
implementations. Incremental means it's not that bad, but still 
theoretically unbounded time, I think.

I've seen periodically disabling the GC recommended, such as in games with 
Lua, after each drawn frame ("vblank"). That scenario is superficially 
similar to yours. I'm however skeptical of that approach as a general 
idea, if you do not minimize allocations. Note that in games, the heavy 
lifting is done by game engines, almost exclusively written in C++. As they do 
not use GC (while a GC IS optional in C++ and C), Lua will handle game logic 
with probably much less memory allocated, so it works ok there, postponing 
deallocations while taking [potentially] MORE cumulative time later, at a 
convenient time.
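The per-frame pattern described above can be sketched in a few lines. This uses the modern `GC.enable`/`GC.gc` names (the 2016-era spelling in this thread was `gc_enable`/`gc`), and assumes `process!` itself avoids allocating:

```julia
# Sketch of the "disable GC during the frame, collect between frames"
# pattern discussed above. GC.enable/GC.gc are the current names for the
# gc_enable/gc API quoted in this thread.
function frame_loop!(process!, frames)
    for frame in frames
        GC.enable(false)        # no collections while processing
        try
            process!(frame)     # should itself avoid allocating
        finally
            GC.enable(true)     # always re-enable, even on error...
        end
        GC.gc(false)            # ...and do a quick incremental sweep
    end
end
```

The `try`/`finally` is the important part: if `process!` throws with the GC left disabled, memory use grows without bound, exactly as the docstring warns.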

Why do I say more? Running out of RAM because of garbage isn't the only 
issue. NOT deallocating early prevents reusing memory that is currently in 
the cache, and reusing it would have helped for cache purposes.

By recommending FLIF, I've actually recommended using C++ indirectly, and 
reusing C++ (or C) code isn't bad, all things being equal. It's just that for 
new code I recommend against C for many reasons, such as safety, and against 
C++ as it's a complex language, too easy to "blow your leg off" with, to 
quote its designer.. and in both cases there are better languages, with some 
rare exceptions that do not apply here (except one reason, reusing existing 
code, that MAY apply here).

I believe lossy compression such as JPEG (even MPEG etc., at least on 
average) has a consistent performance profile. B

Re: [julia-users] Re: Using Julia for real time astronomy

2016-06-02 Thread Cedric St-Jean
John: Common Lisp and Julia have a lot in common. I didn't mean to suggest
writing your software in Lisp, I meant that if ITA was able to run a hugely
popular website involving a complicated optimization problem without
triggering the GC, then you can do the same in Julia. Like others have
suggested, you just preallocate everything (global const arrays), and make
sure that every code path is run once (to force compilation) before the
system goes online. @time will tell you if you've been successful at
eliminating everything. You might run into issues with libraries allocating
during their calls, and it might be easier all things considered in C, but
it's certainly doable with enough effort in Julia. I might be up for
helping out, if you're interested.
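The preallocate-and-warm-up approach described above can be sketched briefly. Buffer sizes and the `correct!` kernel are illustrative assumptions, not the project's actual code:

```julia
# Minimal sketch of the approach above: preallocate const global buffers,
# run each code path once to force compilation, then verify with @time.
const IMG = zeros(Float64, 512, 512)   # preallocated input buffer
const CMD = zeros(Float64, 512, 512)   # preallocated output buffer

function correct!(out::Matrix{Float64}, img::Matrix{Float64})
    @inbounds for i in eachindex(img)
        out[i] = img[i] - 0.5          # stand-in for the real correction
    end
    return out
end

correct!(CMD, IMG)        # run once before going "online" (compiles it)
@time correct!(CMD, IMG)  # should report no allocations
```

Declaring the buffers `const` matters: it lets the compiler infer their types in the hot path, which is what makes the zero-allocation steady state achievable.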

On Thu, Jun 2, 2016 at 3:54 AM, Leger Jonathan 
wrote:

> Páll: don't worry about the project failing because of YOUU ;) in any
> case we wanted to try Julia and see if we could get help/tips from the
> community.
> About the nogc, I wonder if activating it will also prevent the core of
> Julia from being garbage collected? If yes, for a long run it's a bad idea
> to disable it for too long.
>
> For now the only options are C/C++ and Julia, sorry no D or Lisp
> :) Why would you not recommend C for this kind of task?
> And I said 1000 images/sec but the camera may be able to go up to 10 000
> images/sec so I think we can define it as hard real time.
>
> Thank you for all these ideas !
>
>
>
> On 01/06/2016 23:59, Páll Haraldsson wrote:
>
> On Wednesday, June 1, 2016 at 9:40:54 AM UTC, John leger wrote:
>>
>> So for now the best is to build a toy that is equivalent in processing
>> time to the original and see by myself what I'm able to get.
>> We have many ideas, many theories due to the nature of the GC so the best
>> is to try.
>>
>> Páll -> Thanks for the links
>>
>
> No problem.
>
> While I did say it would be cool to know of Julia in space, I would hate
> for the project to fail because of Julia (because of my advice).
>
> I endorse Julia for all kinds of uses, hard real-time (and building
> operating systems) are where I have doubts.
>
> A. I thought a little more about making a macro @nogc to mark functions,
> and it's probably not possible. You could I guess for one function, as the
> macro has access to the AST of it. But what you really want to disallow, is
> that function calling functions that are not similarly marked. I do not
> know about metadata on functions and if a nogc-bit could be put in, but
> even then, in theory couldn't that function be changed at runtime..?
>
> What you would want is that this nogc property is statically checked as I
> guess D does, but Julia isn't separately compiled by default. Note there is
> Julia2C, and see
>
> http://juliacomputing.com/blog/2016/02/09/static-julia.html
>
> for gory details on compiling Julia.
>
> I haven't looked, I guess Julia2C does not generate malloc and free, only
> some malloc substitute in libjulia runtime. That substitute will allocate
> and run the GC when needed. These are the calls you want to avoid in your
> code and could maybe grep for.. There is a Lint.jl tool, but as memory
> allocation isn't an error it would not flag it, maybe it could be an
> option..
>
> B. One idea I just had (in the shower..), if @nogc is used or just on
> "gc_disable" (note it is deprecated*), it would disallow allocations (with
> an exception if tried), not just postpone them, it would be much easier to
> test if your code uses allocations or calls code that would. Still, you
> would have to check all code-paths..
>
> C. Ada, or the Spark-subset, might be the go-to language for hard
> real-time. Rust seems also good, just not as tried. D could also be an
> option with @nogc. And then there is C and especially C++ that I try do
> avoid recommending.
>
> D. Do tell if you only need soft real-time, it makes the matter so much
> simpler.. not just programming language choice..
>
> *
> help?> gc_enable
> search: gc_enable
>
>   gc_enable(on::Bool)
>
>   Control whether garbage collection is enabled using a boolean argument
> (true for enabled, false for disabled). Returns previous GC state. Disabling
>   garbage collection should be used only with extreme caution, as it can
> cause memory use to grow without bound.
>
>
>
>
>>
>> On Tuesday, May 31, 2016 at 18:44:17 UTC+2, Páll Haraldsson wrote:
>>>
>>> On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote:

 If you are prepared to make your code not perform any heap
 allocations, I don't see a reason why there should be any issue. When I
 once worked on a very first multi-threading version of Julia I wrote
 exactly such functions that won't trigger gc since the latter was not thread
 safe. This can be hard work but I would assume that it's at least not more
 work than implementing the application in C/C++ (assuming that you have
 some Julia experience)

>>>
>>> I would really like to know why the work is hard, is i

Re: [julia-users] Re: Using Julia for real time astronomy

2016-06-02 Thread Leger Jonathan
Páll: don't worry about the project failing because of YOUU ;) in 
any case we wanted to try Julia and see if we could get help/tips from 
the community.
About the nogc, I wonder if activating it will also prevent the core of 
Julia from being garbage collected? If yes, for a long run it's a bad idea 
to disable it for too long.


For now the only options are C/C++ and Julia, sorry no D or Lisp 
:) Why would you not recommend C for this kind of task?
And I said 1000 images/sec but the camera may be able to go up to 10 000 
images/sec so I think we can define it as hard real time.


Thank you for all these ideas !


On 01/06/2016 23:59, Páll Haraldsson wrote:

On Wednesday, June 1, 2016 at 9:40:54 AM UTC, John leger wrote:

So for now the best is to build a toy that is equivalent in
processing time to the original and see by myself what I'm able to
get.
We have many ideas, many theories due to the nature of the GC so
the best is to try.

Páll -> Thanks for the links


No problem.

While I did say it would be cool to know of Julia in space, I would 
hate for the project to fail because of Julia (because of my advice).


I endorse Julia for all kinds of uses, hard real-time (and building 
operating systems) are where I have doubts.


A. I thought a little more about making a macro @nogc to mark 
functions, and it's probably not possible. You could I guess for one 
function, as the macro has access to the AST of it. But what you 
really want to disallow, is that function calling functions that are 
not similarly marked. I do not know about metadata on functions and if 
a nogc-bit could be put in, but even then, in theory couldn't that 
function be changed at runtime..?


What you would want is that this nogc property is statically checked 
as I guess D does, but Julia isn't separately compiled by default. 
Note there is Julia2C, and see


http://juliacomputing.com/blog/2016/02/09/static-julia.html

for gory details on compiling Julia.

I haven't looked, I guess Julia2C does not generate malloc and free, 
only some malloc substitute in libjulia runtime. That substitute will 
allocate and run the GC when needed. These are the calls you want to 
avoid in your code and could maybe grep for.. There is a Lint.jl tool, 
but as memory allocation isn't an error it would not flag it, maybe it 
could be an option..


B. One idea I just had (in the shower..), if @nogc is used or just on 
"gc_disable" (note it is deprecated*), it would disallow allocations 
(with an exception if tried), not just postpone them, it would be much 
easier to test if your code uses allocations or calls code that would. 
Still, you would have to check all code-paths..


C. Ada, or the Spark-subset, might be the go-to language for hard 
real-time. Rust seems also good, just not as tried. D could also be an 
option with @nogc. And then there is C and especially C++ that I try 
do avoid recommending.


D. Do tell if you only need soft real-time, it makes the matter so 
much simpler.. not just programming language choice..


*
help?> gc_enable
search: gc_enable

  gc_enable(on::Bool)

  Control whether garbage collection is enabled using a boolean 
argument (true for enabled, false for disabled). Returns previous GC 
state. Disabling
  garbage collection should be used only with extreme caution, as it 
can cause memory use to grow without bound.




On Tuesday, May 31, 2016 at 18:44:17 UTC+2, Páll Haraldsson wrote:

On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote:

If you are prepared to make your code not perform any
heap allocations, I don't see a reason why there should be
any issue. When I once worked on a very first
multi-threading version of Julia I wrote exactly such
functions that won't trigger gc since the latter was not
thread safe. This can be hard work but I would assume that
it's at least not more work than implementing the
application in C/C++ (assuming that you have some Julia
experience)


I would really like to know why the work is hard, is it
getting rid of the allocations, or being sure there are no
more hidden in your code? I would also like to know then if
you can do the same as in D language:

http://wiki.dlang.org/Memory_Management


The most reliable way to guarantee latency is to preallocate
all data that will be needed by the time critical portion. If
no calls to allocate memory are done, the GC will not run and
so will not cause the maximum latency to be exceeded.

It is possible to create a real-time thread by detaching it
from the runtime, marking the thread function @nogc, and
ensuring the real-time thread does not hold any GC roots. GC
objects can still be used in the real-time thread, but they
must be re

[julia-users] Re: Using Julia for real time astronomy

2016-06-01 Thread Cedric St-Jean
Apparently, ITA Software (Orbitz) was written nearly entirely in Lisp, with 
0 heap allocation during runtime to have performance guarantees. It's 
pretty inspiring, in an I-crossed-the-Himalayas-barefoot kind of way.

On Wednesday, June 1, 2016 at 5:59:15 PM UTC-4, Páll Haraldsson wrote:
>
> On Wednesday, June 1, 2016 at 9:40:54 AM UTC, John leger wrote:
>>
>> So for now the best is to build a toy that is equivalent in processing 
>> time to the original and see by myself what I'm able to get.
>> We have many ideas, many theories due to the nature of the GC so the best 
>> is to try.
>>
>> Páll -> Thanks for the links
>>
>
> No problem.
>
> While I did say it would be cool to know of Julia in space, I would hate 
> for the project to fail because of Julia (because of my advice).
>
> I endorse Julia for all kinds of uses, hard real-time (and building 
> operating systems) are where I have doubts.
>
> A. I thought a little more about making a macro @nogc to mark functions, 
> and it's probably not possible. You could I guess for one function, as the 
> macro has access to the AST of it. But what you really want to disallow, is 
> that function calling functions that are not similarly marked. I do not 
> know about metadata on functions and if a nogc-bit could be put in, but 
> even then, in theory couldn't that function be changed at runtime..?
>
> What you would want is that this nogc property is statically checked as I 
> guess D does, but Julia isn't separately compiled by default. Note there is 
> Julia2C, and see
>
> http://juliacomputing.com/blog/2016/02/09/static-julia.html
>
> for gory details on compiling Julia.
>
> I haven't looked, I guess Julia2C does not generate malloc and free, only 
> some malloc substitute in libjulia runtime. That substitute will allocate 
> and run the GC when needed. These are the calls you want to avoid in your 
> code and could maybe grep for.. There is a Lint.jl tool, but as memory 
> allocation isn't an error it would not flag it, maybe it could be an 
> option..
>
> B. One idea I just had (in the shower..), if @nogc is used or just on 
> "gc_disable" (note it is deprecated*), it would disallow allocations (with 
> an exception if tried), not just postpone them, it would be much easier to 
> test if your code uses allocations or calls code that would. Still, you 
> would have to check all code-paths..
>
> C. Ada, or the Spark-subset, might be the go-to language for hard 
> real-time. Rust seems also good, just not as tried. D could also be an 
> option with @nogc. And then there is C and especially C++ that I try do 
> avoid recommending.
>
> D. Do tell if you only need soft real-time, it makes the matter so much 
> simpler.. not just programming language choice..
>
> *
> help?> gc_enable
> search: gc_enable
>
>   gc_enable(on::Bool)
>
>   Control whether garbage collection is enabled using a boolean argument 
> (true for enabled, false for disabled). Returns previous GC state. Disabling
>   garbage collection should be used only with extreme caution, as it can 
> cause memory use to grow without bound.
>
>
>  
>
>>
>> On Tuesday, May 31, 2016 at 18:44:17 UTC+2, Páll Haraldsson wrote:
>>>
>>> On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote:

 If you are prepared to make your code not perform any heap 
 allocations, I don't see a reason why there should be any issue. When I 
 once worked on a very first multi-threading version of Julia I wrote 
 exactly such functions that won't trigger gc since the latter was not 
 thread safe. This can be hard work but I would assume that it's at least 
 not more work than implementing the application in C/C++ (assuming that 
 you have some Julia experience)

>>>
>>> I would really like to know why the work is hard, is it getting rid of 
>>> the allocations, or being sure there are no more hidden in your code? I 
>>> would also like to know then if you can do the same as in D language:
>>>
>>> http://wiki.dlang.org/Memory_Management 
>>>
>>> The most reliable way to guarantee latency is to preallocate all data 
>>> that will be needed by the time critical portion. If no calls to allocate 
>>> memory are done, the GC will not run and so will not cause the maximum 
>>> latency to be exceeded.
>>>
>>> It is possible to create a real-time thread by detaching it from the 
>>> runtime, marking the thread function @nogc, and ensuring the real-time 
>>> thread does not hold any GC roots. GC objects can still be used in the 
>>> real-time thread, but they must be referenced from other threads to prevent 
>>> them from being collected."
>>>
>>> that is would it be possible to make a macro @nogc and mark functions in 
>>> a similar way? I'm not aware that such a macro is available, to disallow. 
>>> There is a macro, e.g. @time, that is not sufficient, that shows GC 
>>> activity, but knowing there was none could have been an accident; if you 
>>> run your co

[julia-users] Re: Using Julia for real time astronomy

2016-06-01 Thread Páll Haraldsson
On Wednesday, June 1, 2016 at 9:40:54 AM UTC, John leger wrote:
>
> So for now the best is to build a toy that is equivalent in processing 
> time to the original and see by myself what I'm able to get.
> We have many ideas, many theories due to the nature of the GC so the best 
> is to try.
>
> Páll -> Thanks for the links
>

No problem.

While I did say it would be cool to know of Julia in space, I would hate for 
the project to fail because of Julia (because of my advice).

I endorse Julia for all kinds of uses, hard real-time (and building 
operating systems) are where I have doubts.

A. I thought a little more about making a macro @nogc to mark functions, 
and it's probably not possible. You could I guess for one function, as the 
macro has access to the AST of it. But what you really want to disallow, is 
that function calling functions that are not similarly marked. I do not 
know about metadata on functions and if a nogc-bit could be put in, but 
even then, in theory couldn't that function be changed at runtime..?

What you would want is that this nogc property is statically checked as I 
guess D does, but Julia isn't separately compiled by default. Note there is 
Julia2C, and see

http://juliacomputing.com/blog/2016/02/09/static-julia.html

for gory details on compiling Julia.

I haven't looked, I guess Julia2C does not generate malloc and free, only 
some malloc substitute in libjulia runtime. That substitute will allocate 
and run the GC when needed. These are the calls you want to avoid in your 
code and could maybe grep for.. There is a Lint.jl tool, but as memory 
allocation isn't an error it would not flag it, maybe it could be an 
option..

B. One idea I just had (in the shower..), if @nogc is used or just on 
"gc_disable" (note it is deprecated*), it would disallow allocations (with 
an exception if tried), not just postpone them, it would be much easier to 
test if your code uses allocations or calls code that would. Still, you 
would have to check all code-paths..

C. Ada, or the Spark-subset, might be the go-to language for hard 
real-time. Rust seems also good, just not as tried. D could also be an 
option with @nogc. And then there is C and especially C++ that I try do 
avoid recommending.

D. Do tell if you only need soft real-time, it makes the matter so much 
simpler.. not just programming language choice..

*
help?> gc_enable
search: gc_enable

  gc_enable(on::Bool)

  Control whether garbage collection is enabled using a boolean argument
  (true for enabled, false for disabled). Returns previous GC state.
  Disabling garbage collection should be used only with extreme caution,
  as it can cause memory use to grow without bound.
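A tiny demonstration of the returns-previous-state behavior in that docstring (written with the modern `GC.enable` name for the since-deprecated `gc_enable`):

```julia
# gc_enable / GC.enable returns the *previous* GC state, so it can be
# saved and restored around a critical section.
prev = GC.enable(false)    # turn GC off; prev records the old state
try
    # ... time-critical, allocation-free work would go here ...
finally
    GC.enable(prev)        # restore whatever state we found
end
```

Restoring the saved state (rather than unconditionally enabling) makes such sections safely nestable.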


 

>
> On Tuesday, May 31, 2016 at 18:44:17 UTC+2, Páll Haraldsson wrote:
>>
>> On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote:
>>>
>>> If you are prepared to make your code not perform any heap 
>>> allocations, I don't see a reason why there should be any issue. When I 
>>> once worked on a very first multi-threading version of Julia I wrote 
>>> exactly such functions that won't trigger gc since the latter was not 
>>> thread safe. This can be hard work but I would assume that it's at least 
>>> not more work than implementing the application in C/C++ (assuming that 
>>> you have some Julia experience)
>>>
>>
>> I would really like to know why the work is hard, is it getting rid of 
>> the allocations, or being sure there are no more hidden in your code? I 
>> would also like to know then if you can do the same as in D language:
>>
>> http://wiki.dlang.org/Memory_Management 
>>
>> The most reliable way to guarantee latency is to preallocate all data 
>> that will be needed by the time critical portion. If no calls to allocate 
>> memory are done, the GC will not run and so will not cause the maximum 
>> latency to be exceeded.
>>
>> It is possible to create a real-time thread by detaching it from the 
>> runtime, marking the thread function @nogc, and ensuring the real-time 
>> thread does not hold any GC roots. GC objects can still be used in the 
>> real-time thread, but they must be referenced from other threads to prevent 
>> them from being collected."
>>
>> that is would it be possible to make a macro @nogc and mark functions in 
>> a similar way? I'm not aware that such a macro is available, to disallow. 
>> There is a macro, e.g. @time, that is not sufficient, that shows GC 
>> actitivy, but knowing there was none could have been an accident; if you 
>> run your code again and memory fills up you see different result.
>>
>> As with D, the GC in Julia is optional. The above @nogc, is really the 
>> only thing different, that I can think of that is better with their 
>> optional memory management. But I'm no expert on D, and I may not have 
>> looked too closely:
>>
>> https://dlang.org/spec/garbage.html
>>
>>
>>> Tobi
>>>
 On Monday, May 30, 2016 at 12:00:13 UTC+2, John leger wrote:

 Hi everyone,

 I am working in ast

[julia-users] Re: Using Julia for real time astronomy

2016-06-01 Thread John leger
So for now the best is to build a toy that is equivalent in processing time 
to the original and see by myself what I'm able to get.
We have many ideas, many theories due to the nature of the GC so the best 
is to try.

Páll -> Thanks for the links 

On Tuesday, May 31, 2016 at 18:44:17 UTC+2, Páll Haraldsson wrote:
>
> On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote:
>>
>> If you are prepared to make your code not perform any heap 
>> allocations, I don't see a reason why there should be any issue. When I 
>> once worked on a very first multi-threading version of Julia I wrote 
>> exactly such functions that won't trigger gc since the latter was not thread 
>> safe. This can be hard work but I would assume that it's at least not more 
>> work than implementing the application in C/C++ (assuming that you have 
>> some Julia experience)
>>
>
> I would really like to know why the work is hard, is it getting rid of the 
> allocations, or being sure there are no more hidden in your code? I would 
> also like to know then if you can do the same as in D language:
>
> http://wiki.dlang.org/Memory_Management 
>
> The most reliable way to guarantee latency is to preallocate all data that 
> will be needed by the time critical portion. If no calls to allocate memory 
> are done, the GC will not run and so will not cause the maximum latency to 
> be exceeded.
>
> It is possible to create a real-time thread by detaching it from the 
> runtime, marking the thread function @nogc, and ensuring the real-time 
> thread does not hold any GC roots. GC objects can still be used in the 
> real-time thread, but they must be referenced from other threads to prevent 
> them from being collected."
>
> that is would it be possible to make a macro @nogc and mark functions in a 
> similar way? I'm not aware that such a macro is available, to disallow. 
> There is a macro, e.g. @time, that is not sufficient, that shows GC 
> activity, but knowing there was none could have been an accident; if you 
> run your code again and memory fills up, you see a different result.
>
> As with D, the GC in Julia is optional. The above @nogc, is really the 
> only thing different, that I can think of that is better with their 
> optional memory management. But I'm no expert on D, and I may not have 
> looked too closely:
>
> https://dlang.org/spec/garbage.html
>
>
>> Tobi
>>
>> On Monday, May 30, 2016 at 12:00:13 UTC+2, John leger wrote:
>>>
>>> Hi everyone,
>>>
>>> I am working in astronomy and we are thinking of using Julia for a real 
>>> time, high performance adaptive optics system on a solar telescope.
>>>
>>> This is how the system is supposed to work: 
>>>1) the image is read from the camera
>>>2) some corrections are applied
>>>3) the atmospheric turbulence is numerically estimated in order to 
>>> calculate the command to be sent to the deformable mirror
>>>
>>> The overall process should be executed in less than 1ms so that it can 
>>> be integrated to the chain (closed loop).
>>>
>>> Do you think it is possible to do all the computation in Julia or would 
>>> it be better to code some part in C/C++. What I fear the most is the GC but 
>>> in our case we can pre-allocate everything, so once we launch the system 
>>> there will not be any memory allocated during the experiment and it will 
>>> run for days.
>>>
>>> So, what do you think? Considering the current state of Julia will I be 
>>> able to get the performance I need. Will the garbage collector be a 
>>> hindrance?
>>>
>>> Thank you.
>>>
>>

[julia-users] Re: Using Julia for real time astronomy

2016-05-31 Thread Páll Haraldsson
On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote:
>
> If you are prepared to make your code not perform any heap allocations, 
> I don't see a reason why there should be any issue. When I once worked on a 
> very first multi-threading version of Julia I wrote exactly such functions 
> that won't trigger gc since the latter was not thread safe. This can be hard 
> work but I would assume that it's at least not more work than implementing 
> the application in C/C++ (assuming that you have some Julia experience)
>

I would really like to know why the work is hard, is it getting rid of the 
allocations, or being sure there are no more hidden in your code? I would 
also like to know then if you can do the same as in D language:

http://wiki.dlang.org/Memory_Management 

The most reliable way to guarantee latency is to preallocate all data that 
will be needed by the time critical portion. If no calls to allocate memory 
are done, the GC will not run and so will not cause the maximum latency to 
be exceeded.

It is possible to create a real-time thread by detaching it from the 
runtime, marking the thread function @nogc, and ensuring the real-time 
thread does not hold any GC roots. GC objects can still be used in the 
real-time thread, but they must be referenced from other threads to prevent 
them from being collected."

that is, would it be possible to make a macro @nogc and mark functions in a 
similar way? I'm not aware that such a macro is available to disallow 
allocations. There is a macro, @time, that is not sufficient: it shows GC 
activity, but knowing there was none could have been an accident; if you 
run your code again and memory fills up, you see a different result.
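The @nogc idea floated here can at least be approximated at runtime (not statically, as noted) with Base's @allocated. A rough, hedged sketch of such a fail-fast check:

```julia
# Rough runtime approximation of the hypothetical @nogc above: evaluate
# an expression and throw if it heap-allocated. Not a static guarantee,
# just a fail-fast debugging check built on Base's @allocated.
macro nogc(ex)
    quote
        local bytes = @allocated $(esc(ex))
        bytes == 0 || error("@nogc violated: ", bytes, " bytes allocated")
    end
end

# Illustrative use: setindex! on an Array allocates nothing, so this passes.
function demo!(x)
    @nogc x[1] = 2.0
    return x
end
```

An allocating expression, e.g. `@nogc zeros(5)` inside a compiled function, would throw instead. It remains a runtime check on the paths you actually exercise, which is exactly the weakness the paragraph above points out.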

As with D, the GC in Julia is optional. The above @nogc, is really the only 
thing different, that I can think of that is better with their optional 
memory management. But I'm no expert on D, and I may not have looked too 
closely:

https://dlang.org/spec/garbage.html


> Tobi
>
> On Monday, May 30, 2016 at 12:00:13 UTC+2, John leger wrote:
>>
>> Hi everyone,
>>
>> I am working in astronomy and we are thinking of using Julia for a real 
>> time, high performance adaptive optics system on a solar telescope.
>>
>> This is how the system is supposed to work: 
>>1) the image is read from the camera
>>2) some corrections are applied
>>3) the atmospheric turbulence is numerically estimated in order to 
>> calculate the command to be sent to the deformable mirror
>>
>> The overall process should be executed in less than 1ms so that it can be 
>> integrated to the chain (closed loop).
>>
>> Do you think it is possible to do all the computation in Julia or would 
>> it be better to code some part in C/C++. What I fear the most is the GC but 
>> in our case we can pre-allocate everything, so once we launch the system 
>> there will not be any memory allocated during the experiment and it will 
>> run for days.
>>
>> So, what do you think? Considering the current state of Julia, will I be 
>> able to get the performance I need? Will the garbage collector be a 
>> hindrance?
>>
>> Thank you.
>>
>

[julia-users] Re: Using Julia for real time astronomy

2016-05-31 Thread Páll Haraldsson
On Monday, May 30, 2016 at 12:10:39 PM UTC, Uwe Fechner wrote:
>
> I think that would be difficult.
>
> As soon as you use any packages for image conversion or estimation you 
> have to assume that they use dynamic memory allocation.
>
> The garbage collector of Julia is fast, but not suitable for hard 
> real-time requirements. Implementing a garbage collector for hard real-time
> applications is possible, but a lot of work and will probably not happen 
> in the near future.
>
> There was an issue on this topic that was closed as "won't fix":
> https://github.com/JuliaLang/julia/issues/8543 
> 
>

Well, the "won't fix" label was later taken off the issue.

Yes, the issue is still closed, but it's unclear to me what has changed 
in the GC since then, and when. I know incremental GC was implemented at 
some point. No hard-real-time GC is available.

It would be cool to know of Julia in space so I gave this some thought..

I recall that MicroPython claimed hard-real-time GC (also available for 
Java with Metronome), that is, predictable pause times. I remember 
wondering how they could claim that (and if I recall correctly, they 
didn't change the GC). MicroPython is meant for microcontrollers (at the 
time only one), which have a known amount of memory. I can't locate the 
information I read at the time, but I think they were talking about the 
megabyte range. In the worst case, then, you have to scan a fixed amount of 
memory, and the speed of the CPU is also known. Unlike with MicroPython, 
you will have an operating system (not real-time by default, though Linux 
can be configured as such, but caches are a problem..). Maybe if you can 
limit the RAM, or just how much Julia will try to allocate, it helps in 
the same way.

Anyway, you may not strictly need hard real-time. I think, as always in 
non-real-time (non-concurrent) GC variants, garbage collection only happens 
when you try to allocate memory and the heap is full. If you preallocate 
all memory and make sure no more is allocated, I can't see the GC being a 
problem (you can also disable it for some period of time).
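A sketch of that preallocate-then-disable pattern. Note the GC-toggle API is version-sensitive: around Julia 0.4 it was `gc_enable(::Bool)` (later renamed `GC.enable`). The frame loop and "correction" step are illustrative placeholders:

```julia
# All buffers are created before the critical loop, so the loop itself
# should not allocate.
buf = zeros(Float32, 512, 512)    # preallocated working buffer
out = similar(buf)

gc_enable(false)                  # no collections inside the critical section
try
    for frame in 1:1000
        # acquire_frame!(buf)  -- hypothetical camera read into `buf`
        @inbounds for i in eachindex(buf)
            out[i] = buf[i] - 0.5f0   # placeholder correction step
        end
    end
finally
    gc_enable(true)               # re-enable; any pending garbage is collected later
end
```

Disabling the GC doesn't stop allocation, it only defers collection, so this is safe only if the loop body genuinely allocates nothing (otherwise memory grows unbounded over days of running).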

Libc.malloc and Libc.free are also available in Julia..
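A minimal sketch of managing a buffer manually with Libc.malloc/Libc.free, so the GC never owns it; the size and element type here are illustrative:

```julia
n = 1024
ptr = Libc.malloc(n * sizeof(Float64))   # raw C-heap allocation, invisible to the GC
ptr == C_NULL && error("malloc failed")
p = convert(Ptr{Float64}, ptr)

for i in 1:n
    unsafe_store!(p, 0.0, i)             # initialize manually; no bounds checks
end
x = unsafe_load(p, 1)                    # read back the first element

Libc.free(ptr)                           # must free manually; the GC will not
```

The price is C-style discipline: nothing checks bounds or lifetimes, and a forgotten `free` is a leak.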


[Possibly it helps to split your task into more than one process, with 
only one of them real-time? If you can have shared memory between the two 
processes, would that help? Be careful with that.. I'm not sure it's a 
good idea, or at least I would need to explain it better..]



https://github.com/micropython/micropython/wiki/FAQ
"Regarding RAM usage, MicroPython can start up with 2KB of heap. Adding 
stack and required static memory, a 4KB microcontroller could start a 
MicroController, but hardly could go further than interpreting simple 
expressions. Thus, 8KB is minimal amount to run simple scripts."

https://forum.micropython.org/viewtopic.php?t=1778
"today I painfully learned, that uPy's automatic garbage collection can 
really mess up your 500Hz feedback control loop, since it takes forever 
(>1ms  :o :shock: :cry: )."


http://entitycrisis.blogspot.is/2007/12/is-hard-real-time-python-possible.html

http://stackoverflow.com/questions/1402933/python-on-an-real-time-operation-system-rtos


>
> Uwe
>
> On Monday, May 30, 2016 at 12:00:13 PM UTC+2, John leger wrote:
>>
>> Hi everyone,
>>
>> I am working in astronomy and we are thinking of using Julia for a real 
>> time, high performance adaptive optics system on a solar telescope.
>>
>> This is how the system is supposed to work: 
>>1) the image is read from the camera
>>2) some corrections are applied
>>3) the atmospheric turbulence is numerically estimated in order to 
>> calculate the command to be sent to the deformable mirror
>>
>> The overall process should be executed in less than 1ms so that it can be 
>> integrated to the chain (closed loop).
>>
>> Do you think it is possible to do all the computation in Julia or would 
>> it be better to code some part in C/C++. What I fear the most is the GC but 
>> in our case we can pre-allocate everything, so once we launch the system 
>> there will not be any memory allocated during the experiment and it will 
>> run for days.
>>
>> So, what do you think? Considering the current state of Julia, will I be 
>> able to get the performance I need? Will the garbage collector be a 
>> hindrance?
>>
>> Thank you.
>>
>

[julia-users] Re: Using Julia for real time astronomy

2016-05-30 Thread 'Tobias Knopp' via julia-users
If you are prepared to make your code perform no heap allocations, I don't 
see a reason why there should be any issue. When I once worked on a very 
first multi-threading version of Julia, I wrote exactly such functions 
that won't trigger the GC, since the latter was not thread safe. This can 
be hard work, but I would assume it's at least not more work than 
implementing the application in C/C++ (assuming that you have some Julia 
experience).
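As a sketch of what such allocation-free functions look like (names are illustrative), contrast an allocating broadcast with an in-place loop:

```julia
correct(img, dark) = img .- dark        # allocates a fresh array on every call

function correct!(out, img, dark)       # writes into `out`; no allocation
    @inbounds for i in eachindex(img)
        out[i] = img[i] - dark[i]
    end
    return out
end

img  = rand(Float32, 128, 128)
dark = rand(Float32, 128, 128)
out  = similar(img)

correct!(out, img, dark)                # warm up (compilation allocates)
n = @allocated correct!(out, img, dark)
println("bytes per in-place call: ", n)   # should be 0 after warm-up
```

The convention of a `!` suffix for mutating functions makes it easy to audit which calls in the hot loop could possibly allocate.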

Tobi

Am Montag, 30. Mai 2016 12:00:13 UTC+2 schrieb John leger:
>
> Hi everyone,
>
> I am working in astronomy and we are thinking of using Julia for a real 
> time, high performance adaptive optics system on a solar telescope.
>
> This is how the system is supposed to work: 
>1) the image is read from the camera
>2) some corrections are applied
>3) the atmospheric turbulence is numerically estimated in order to 
> calculate the command to be sent to the deformable mirror
>
> The overall process should be executed in less than 1ms so that it can be 
> integrated to the chain (closed loop).
>
> Do you think it is possible to do all the computation in Julia or would it 
> be better to code some part in C/C++. What I fear the most is the GC but in 
> our case we can pre-allocate everything, so once we launch the system there 
> will not be any memory allocated during the experiment and it will run for 
> days.
>
> So, what do you think? Considering the current state of Julia, will I be 
> able to get the performance I need? Will the garbage collector be a 
> hindrance?
>
> Thank you.
>


Re: [julia-users] Re: Using Julia for real time astronomy

2016-05-30 Thread Tamas Papp
You could test whether the GC is fast enough by implementing the
computational core (using simulated data or something similar), then
just running it. Then if you find it is not acceptable, you haven't
wasted time on writing the code for interfacing with the equipment.
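That test can be as simple as running a simulated core many times and recording the worst-case per-frame time, GC pauses included. A sketch, with `core!` standing in for the real turbulence-estimation step:

```julia
# Hypothetical stand-in for the real per-frame computation.
function core!(out, img)
    @inbounds for i in eachindex(img)
        out[i] = sqrt(abs(img[i]))
    end
end

# Run many frames and keep the worst observed latency; any GC pause that
# lands inside a frame shows up in the maximum.
function worst_case(out, img; frames=10000)
    worst = 0.0
    for _ in 1:frames
        t = @elapsed core!(out, img)
        worst = max(worst, t)
    end
    return worst
end

img = rand(Float32, 256, 256)
out = similar(img)
core!(out, img)   # warm up / compile before measuring

println("worst frame time: ", worst_case(out, img) * 1e3, " ms")
```

Comparing the printed maximum (not the mean) against the 1 ms budget is what matters for a closed loop: one slow frame is one missed deadline.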

Also, you could think about the "cost" of an occasional longer GC run,
and what the acceptable failure rate is. For example, is it a great
concern if you have suboptimal quality or even total loss of every
1000th frame? Of course one would like to have all the data, but
equipment can be down for all sorts of reasons, and maybe the GC
hiccups will not be your primary concern.

Best,

Tamas

On Mon, May 30 2016, Leger Jonathan wrote:

> Thanks for the answer.
>
> I don't intend to use any packages, only my own arrays, so I can confirm 
> that I will not have dynamic memory allocation (let's hope I'm right 
> ;) ).
> But even in this case Julia itself may do allocations, so my question 
> would rather be: if there is nearly nothing to do, is the GC fast?
> I have already read many topics about the GC and yes, even if there were 
> very good improvements, is it enough for my case?
>
> In the worst case, Julia will be used only for testing and will just 
> call the main loop written in C++.
>
> Le 30/05/2016 14:10, Uwe Fechner a écrit :
>> I think that would be difficult.
>>
>> As soon as you use any packages for image conversion or estimation you 
>> have to assume that they use dynamic memory allocation.
>>
>> The garbage collector of Julia is fast, but not suitable for hard 
>> real-time requirements. Implementing a garbage collector for hard 
>> real-time
>> applications is possible, but a lot of work and will probably not 
>> happen in the near future.
>>
>> There was an issue on this topic that was closed as "won't fix":
>> https://github.com/JuliaLang/julia/issues/8543
>>
>> Uwe
>>
>> On Monday, May 30, 2016 at 12:00:13 PM UTC+2, John leger wrote:
>>
>> Hi everyone,
>>
>> I am working in astronomy and we are thinking of using Julia for a
>> real time, high performance adaptive optics system on a solar
>> telescope.
>>
>> This is how the system is supposed to work:
>>1) the image is read from the camera
>>2) some corrections are applied
>>3) the atmospheric turbulence is numerically estimated in order
>> to calculate the command to be sent to the deformable mirror
>>
>> The overall process should be executed in less than 1ms so that it
>> can be integrated to the chain (closed loop).
>>
>> Do you think it is possible to do all the computation in Julia or
>> would it be better to code some part in C/C++. What I fear the
>> most is the GC but in our case we can pre-allocate everything, so
>> once we launch the system there will not be any memory allocated
>> during the experiment and it will run for days.
>>
>> So, what do you think? Considering the current state of Julia, will
>> I be able to get the performance I need? Will the garbage
>> collector be a hindrance?
>>
>> Thank you.
>>



Re: [julia-users] Re: Using Julia for real time astronomy

2016-05-30 Thread Leger Jonathan

Thanks for the answer.

I don't intend to use any packages, only my own arrays, so I can confirm 
that I will not have dynamic memory allocation (let's hope I'm right 
;) ).
But even in this case Julia itself may do allocations, so my question 
would rather be: if there is nearly nothing to do, is the GC fast?
I have already read many topics about the GC and yes, even if there were 
very good improvements, is it enough for my case?


In the worst case, Julia will be used only for testing and will just call 
the main loop written in C++.


Le 30/05/2016 14:10, Uwe Fechner a écrit :

I think that would be difficult.

As soon as you use any packages for image conversion or estimation you 
have to assume that they use dynamic memory allocation.


The garbage collector of Julia is fast, but not suitable for hard 
real-time requirements. Implementing a garbage collector for hard 
real-time
applications is possible, but a lot of work and will probably not 
happen in the near future.


There was an issue on this topic that was closed as "won't fix":
https://github.com/JuliaLang/julia/issues/8543

Uwe

On Monday, May 30, 2016 at 12:00:13 PM UTC+2, John leger wrote:

Hi everyone,

I am working in astronomy and we are thinking of using Julia for a
real time, high performance adaptive optics system on a solar
telescope.

This is how the system is supposed to work:
   1) the image is read from the camera
   2) some corrections are applied
   3) the atmospheric turbulence is numerically estimated in order
to calculate the command to be sent to the deformable mirror

The overall process should be executed in less than 1ms so that it
can be integrated to the chain (closed loop).

Do you think it is possible to do all the computation in Julia or
would it be better to code some part in C/C++. What I fear the
most is the GC but in our case we can pre-allocate everything, so
once we launch the system there will not be any memory allocated
during the experiment and it will run for days.

So, what do you think? Considering the current state of Julia, will
I be able to get the performance I need? Will the garbage
collector be a hindrance?

Thank you.





[julia-users] Re: Using Julia for real time astronomy

2016-05-30 Thread Uwe Fechner
I think that would be difficult.

As soon as you use any packages for image conversion or estimation you have 
to assume that they use dynamic memory allocation.

The garbage collector of Julia is fast, but not suitable for hard real-time 
requirements. Implementing a garbage collector for hard real-time
applications is possible, but a lot of work and will probably not happen in 
the near future.

Their was an issue on this topic, that was closed as "won't fix":
https://github.com/JuliaLang/julia/issues/8543

Uwe

On Monday, May 30, 2016 at 12:00:13 PM UTC+2, John leger wrote:
>
> Hi everyone,
>
> I am working in astronomy and we are thinking of using Julia for a real 
> time, high performance adaptive optics system on a solar telescope.
>
> This is how the system is supposed to work: 
>1) the image is read from the camera
>2) some corrections are applied
>3) the atmospheric turbulence is numerically estimated in order to 
> calculate the command to be sent to the deformable mirror
>
> The overall process should be executed in less than 1ms so that it can be 
> integrated to the chain (closed loop).
>
> Do you think it is possible to do all the computation in Julia or would it 
> be better to code some part in C/C++. What I fear the most is the GC but in 
> our case we can pre-allocate everything, so once we launch the system there 
> will not be any memory allocated during the experiment and it will run for 
> days.
>
> So, what do you think? Considering the current state of Julia, will I be 
> able to get the performance I need? Will the garbage collector be a 
> hindrance?
>
> Thank you.
>