On Wednesday, June 8, 2016 at 5:33:18 PM UTC+2, Páll Haraldsson wrote:
>
> On Monday, June 6, 2016 at 9:41:29 AM UTC, John leger wrote:
>>
>> Since it seems you have a good overview in this domain I will give more 
>> details:
>> We are working in signal processing and especially in image processing. 
>> The goal here is just the adaptive optic: we just want to stabilize the 
>> image and not get the final image.
>> The consequence is that we will not store anything on the hard drive: we 
>> read an image, process it and destroy it. We stay in RAM all the time.
>> The processing is done by using/coding our algorithms. So for now, no 
>> need of any external library (for now, but I don't see any reason for that 
>> now)
>>
>
> I completely misread/missed reading 3) about the "deformable mirror"; 
> I see now it's a down-to-earth project - literally.. :)
>
> Still, glad to help, even if it doesn't get Julia into space. :)
>
>
>
>> First I would like to apologize: just after posting my answer I went to 
>> wikipedia to search the difference between soft and real time. 
>> I should have done it before so that you don't have to spend more time to 
>> explain.
>>
>> In the end I still don't know if I am hard real time or soft real time: 
>> the timing is given by the camera speed and the processing should be done 
>> between the acquisition of two images.
>>
>
>
> From: 
> https://en.wikipedia.org/wiki/Real-time_computing#Criteria_for_real-time_computing
>
>    - *Hard* – missing a deadline is a total system failure.
>    - *Firm* – infrequent deadline misses are tolerable, but may degrade 
>    the system's quality of service. The usefulness of a result is zero after 
>    its deadline.
>    - *Soft* – the usefulness of a result degrades after its deadline, 
>    thereby degrading the system's quality of service.
>
> [Note also, real-time also applies to doing stuff too early, not only to 
> not doing stuff too late.. In some cases, say in games, that is not a [big] 
> problem, getting a frame ready earlier isn't a big concern.]
>
>
>
That's why in my previous mail I said that for now we will consider the 
system as soft real-time. But even if we can tolerate some deadline 
misses, we don't want them to happen too often. So soft is not wrong, but 
something like 95% hard real-time (firm) sounds better in our case.
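One way to make that "95% hard (firm)" criterion concrete, as a hedged sketch (the 1 ms period and the 5% tolerance are assumptions taken from the numbers in this thread):

```julia
# Hypothetical deadline accounting for a firm real-time loop: each
# iteration should finish within the camera period (1 kHz -> 1 ms,
# assumed); occasional misses are tolerated as long as the overall
# miss rate stays below 5%.
const FRAME_PERIOD_S = 1e-3   # assumed 1000 fps camera
const MAX_MISS_RATE  = 0.05   # "95% hard" tolerance

function run_with_deadlines(process!, state, nframes)
    misses = 0
    for _ in 1:nframes
        t = @elapsed process!(state)          # time one correction step
        t > FRAME_PERIOD_S && (misses += 1)   # count deadline misses
    end
    miss_rate = misses / nframes
    return miss_rate <= MAX_MISS_RATE, miss_rate
end
```

Whether a miss should drop the frame or just be logged is exactly the soft/firm question being discussed here.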
 

> Are you sure "the processing should be done between the acquisition of two 
> images" is a strict requirement? I assume the "atmospheric turbulence" 
> does not change extremely quickly, so you could have some latency, with 
> your calculation applying for some time/at least a few/many frames after, 
> and then your project seems not hard real-time at all. Maybe soft or firm, 
> a category I had forgotten..
>
>
>
The system is a single-threaded closed loop: it performs all the steps 
described before, one after the other, then restarts, so taking the 
camera's frame stream as the reference timer is a good idea.
Your assumption about the turbulence is not correct: its speed is the main 
reason we need the 1 kHz so badly, and on top of that we are observing the 
sun in the visible spectrum (we want to observe fast things in hard 
conditions).
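A minimal skeleton of that single-threaded loop, with `acquire!`, `measure!` and `command!` as hypothetical placeholders for the real camera, wavefront-sensor and mirror drivers:

```julia
# The blocking acquire! call is the reference timer: the loop runs
# exactly once per frame the camera delivers, with no threads involved.
function control_loop!(img, slopes, cmds, acquire!, measure!, command!; nframes)
    for _ in 1:nframes
        acquire!(img)           # blocks until the next frame arrives
        measure!(slopes, img)   # reduce the image to wavefront slopes
        command!(cmds, slopes)  # drive the deformable mirror, then loop
    end
end
```

All three buffers are preallocated and passed in, so the loop body itself need not allocate.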

 

> At least your timescale is much longer than the camera speed to capture 
> each frame in a video?
>
>
> You also said "1000 images/sec but the camera may be able to go up to 10 
> 000 images/sec". I'm aware of very high-speed photography, such as 
> capturing a picture of a bullet from a gun, or seeing light literally 
> spreading across a room. Still do you need many frames per second for 
> (capturing video, that seems not your job) or for correction? Did you mix 
> up camera speed for exposure time? Ordinary cameras go up to 1/1000 s 
> shutter speed, but might only take video at up to 30, 60 or say 120 fps.
>
>
>
This is the kind of camera we will be using (the 4CXP model):
http://www.mikrotron.de/en/products/machine-vision-cameras/coaxpressr.html
If you look at the datasheet and consider that we will work at a 
resolution of ~400x400, 1000 fps is easy to reach.
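Back-of-the-envelope data rate for that setting (assuming 8-bit monochrome pixels, which may not match the camera's actual bit depth):

```julia
# ~400x400 px at 1000 fps, 1 byte per pixel (assumed)
width, height, fps, bytes_per_px = 400, 400, 1000, 1
rate_MB_s = width * height * fps * bytes_per_px / 1e6
# -> 160.0 MB/s: modest for a CoaXPress link, but every frame
#    must still be processed in under 1 ms.
```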
 

>
> >I like the definition of 95% hard real time; it suits my needs. Thanks 
> for this good paper.
>
> The term/title, sounds like firm real-time..
>
>  
>
>> We don't want to miss an image or delay the processing; I still need to 
>> clarify the consequences of a delay or of a missed image.
>> For now let's just say that we can miss some images, so we want soft real 
>> time.
>>
>
> You could store with each frame a) how long since the mirror was 
> corrected, based on b) a measurement from how long ago. Also, can't you 
> [easily] see from a picture whether the mirror is maladjusted? Does it 
> then look blurred, with high-frequency content missing?
>
> How many "mirrors" are adjusted, or points in the mirror[s]?
>

We will use this DM, the 97-15, so 97 actuators:
http://www.alpao.com/Products/Deformable_mirrors.htm
All of the values I gave you came from the people currently working on 
the telescope, so even if I don't know yet whether we are soft, firm or 
hard (and I hope we will be able to find out), this is what is needed for 
the AO to work and the output image to be usable.
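In many AO systems the actuator commands come from a precomputed reconstructor matrix applied to the measured slopes. A hedged sketch (only the 97 actuators come from the DM above; the 200-slope count and the random matrix are placeholders), using `mul!` so the hot loop allocates nothing:

```julia
using LinearAlgebra

const NACT = 97              # actuators on the ALPAO 97-15
nslopes = 200                # assumed number of slope measurements
R    = randn(NACT, nslopes)  # stand-in for the calibrated reconstructor
cmds = zeros(NACT)           # preallocated command vector

# mul! writes R * slopes into cmds in place: no allocation per frame.
compute_commands!(cmds, R, slopes) = mul!(cmds, R, slopes)
```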
 

>
>
>> I'm making a benchmark that should match the system in terms of 
>> complexity; these are my first remarks:
>>
>> When you say that one allocation is unacceptable, I say it's shockingly 
>> true: in my case I had 2 allocations done by
>>     A += 1, where A is an array,
>> and in 7 seconds I had 600k allocations. 
>> The moral: in a closed loop you cannot accept any allocation, so you 
>> have to write all loops explicitly.
>>
>
> I think you mean two (or even one) allocations are bad because they are 
> in a loop, and that loop runs for each adjustment.
>
> I meant even just one allocation (per adjustment, or frame if you will) 
> can be a problem. Well, not strictly that one; say there have been many 
> in the past, then it's only the last one that is the problem.
>

Yes, one alloc in a closed loop is deadly.
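For the record, the fix for the `A += 1` case is to update in place; a small sketch (in current Julia the elementwise form needs the broadcast dot anyway):

```julia
# A += 1 rebinds A to a freshly allocated A + 1 on every execution.
# An explicit loop (or the broadcast A .+= 1) reuses the buffer instead.
function bump!(A)
    @inbounds for i in eachindex(A)
        A[i] += 1
    end
    return A
end

A = zeros(400, 400)
bump!(A)                        # warm-up run (compilation may allocate)
allocs = @allocated bump!(A)    # expected 0 bytes in steady state
```

`@allocated` is a handy way to catch these regressions in a test suite before they reach the closed loop.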
 

>  
>
>>
>> I have two problems now:
>>
>> 1/ Many times, the first run, which includes the compilation, was the 
>> fastest, and every other run was slower by a factor of 2.
>> 2/ If I relaunch the main function (which lives in a module) many times, 
>> some runs are very different (slower) from the previous ones.
>>
>> About 1/: although I find it strange, I don't really care.
>> 2/ is far more problematic: once the code is compiled I want it to 
>> behave the same no matter how many times I launch it.
>> I have some ideas why, but nothing certain. What bothers me the most is 
>> that all the runs in the benchmark will be slower; it's not a temporary 
>> slowdown, the whole current benchmark run will be slower.
>> If I launch it again, it is back to the best performance.
>>
>> Thank you for the links they are very interesting and I keep that in mind.
>>
>> Note: I disabled hyperthreading and overclock, so it should not be the 
>> CPU doing funky things.
>>
>
> At least keep possible thermal throttling in mind; the other guy, Islam, 
> had something on it. I had my mind set on the coldness or hotness of 
> space.. and radiation-hardening.
>  
>
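On the run-to-run variance in 1/ and 2/: a common remedy, sketched here by hand (the BenchmarkTools.jl package does this properly), is to discard the compilation run and report the minimum over many repeats, which filters out GC pauses and OS jitter:

```julia
# Minimum-of-N timing: the first call pays compilation; the minimum over
# the remaining repeats is the most reproducible figure across launches.
function best_time(f; reps = 100)
    f()   # warm-up / compilation run, not measured
    return minimum(@elapsed(f()) for _ in 1:reps)
end
```

For example `best_time(() -> process_one_frame())`, where `process_one_frame` stands for whatever the benchmark kernel actually is.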

If you have any questions, just ask.
Maybe another time for Julia in space ^^  
 

> -- 
> Palli.
>
>
