At this stage I was just trying to formulate the problem quickly (from the 
implementation point of view) to test how the low-level interface to the 
Gurobi solver works. I honestly didn't try to make it fast for the solver by 
leaving the states in (instead of eliminating them) and keeping the 
block-banded matrices in the cost function, but I don't think that is the 
cause of such a large amount of allocated memory.

The problem data are generated at the beginning of each simulation, together 
with the matrices that have to be passed to the solver (this happens only 6 
times in my script). At each stage of the simulation I update only the 
vectors Theta and bineq.
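
For concreteness, the update pattern is roughly the following (a minimal 
sketch with toy data, not my actual script; it assumes the Gurobi.jl 
low-level helpers gurobi_model, set_dblattrarray!, update_model!, optimize 
and get_solution, whose exact names may differ between versions):

    # Sketch: build the QP once, then refresh only the vector data per stage.
    using Gurobi

    env = Gurobi.Env()

    # Toy QP standing in for one MPC stage:  min 0.5*u'H*u + f'u  s.t. A*u <= b
    H = [2.0 0.0; 0.0 2.0]
    f = [-1.0, -1.0]
    A = [1.0 1.0]
    b = [1.0]

    model = gurobi_model(env; name = "mpc_qp", H = H, f = f, A = A, b = b)

    for k = 1:6
        bk = [1.0 + 0.1k]                  # new inequality RHS for this stage
        set_dblattrarray!(model, "RHS", 1, length(bk), bk)
        update_model!(model)
        optimize(model)
        u = get_solution(model)            # optimal input for this stage
    end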

I removed the initial sparse matrix definitions, so that dense matrices are 
no longer described in a sparse way, and I still get a very large memory 
allocation. I know the optimization problem formulation is not ideal, but I 
would expect Julia not to keep so much memory allocated instead of releasing 
those variables once the simulations end.
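
One way to check whether anything is really still referenced after a run is 
something like the following (a sketch; runSimulation here is a hypothetical 
stand-in for my simulation function, which returns only the two performance 
indices):

    function run_once()
        J1, J2 = runSimulation()   # hypothetical: returns only the two indices
        return J1, J2              # nothing else escapes this function
    end

    results = run_once()
    gc()      # everything allocated inside run_once() should now be collectible
    whos()    # lists the variables still alive in Main together with their sizes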

I also tried Julia 0.4.0-dev+3752 and I run into the same problem. 



On Wednesday, March 11, 2015 at 10:51:18 PM UTC, Tony Kelman wrote:
>
> The majority of the memory allocation is almost definitely coming from the 
> problem setup here. You're using a dense block-triangular formulation of 
> MPC, eliminating states and only solving for inputs with inequality 
> constraints. Since you're converting your problem data to sparse initially, 
> you're doing a lot of extra allocation, integer arithmetic, and consuming 
> more memory to represent a large dense matrix in sparse format. Reformulate 
> your problem to include both states and inputs as unknowns, and enforce the 
> dynamics as equality constraints. This will result in a block-banded 
> problem structure and maintain sparsity much better. The matrices within 
> the blocks are not sparse here since you're doing an exact discretization 
> with expm, but a banded problem will scale much better to longer horizons 
> than a triangular one.
>
> You also should be able to reuse the problem data, with the exception of 
> bounds or maybe vector coefficients, between different MPC iterations.
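
A minimal sketch of the banded reformulation described above, in plain 
Julia 0.3-style sparse code with toy matrices (the ordering of the decision 
vector is just one possible choice):

    # Dynamics x_{k+1} = A*x_k + B*u_k kept as equality constraints, with
    # decision vector z = [x_1; u_0; x_2; u_1; ...; x_N; u_{N-1}].
    N  = 10                              # horizon
    nx, nu = 2, 1
    A  = [1.0 0.1; 0.0 1.0]              # toy dynamics
    B  = reshape([0.005, 0.1], nx, nu)
    Q  = speye(nx); R = speye(nu)        # toy stage costs
    x0 = [1.0, 0.0]

    # Block-diagonal cost 0.5*z'*H*z
    H = kron(speye(N), blkdiag(Q, R))

    # Block-banded equality constraints Aeq*z = beq enforcing the dynamics
    Aeq = kron(speye(N), sparse([eye(nx) -B])) +
          kron(spdiagm(ones(N - 1), -1), sparse([-A zeros(nx, nu)]))
    beq = [A*x0; zeros((N - 1)*nx)]

    # Aeq and H have O(N) nonzero blocks, so memory and solve time scale
    # roughly linearly with the horizon instead of quadratically as in the
    # dense block-triangular formulation.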
>
>
>
> On Wednesday, March 11, 2015 at 12:14:03 PM UTC-7, Bartolomeo Stellato 
> wrote:
>>
>> Thank you for the quick replies and for the suggestions!
>>
>> I checked which lines allocate the most with --track-allocation=user, and 
>> the amount of memory I posted is from the OSX Activity Monitor. 
>> Even if not all of it is necessarily in use, if it grows too much the 
>> operating system is forced to kill Julia.  
>>
>> I slightly edited the code in order to *simulate the closed loop 6 times* 
>> (for different values of the parameters N and lambdau). I attach the files. 
>> The *allocated memory* reported by the OSX Activity Monitor is now *2 GB*. 
>> If I run the code twice with a clear_malloc_data() in between to save the 
>> --track-allocation=user information, I get something around 3.77 GB!
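
Concretely, the two runs look roughly like this (a sketch; the driver file is 
just a hypothetical wrapper around the attached simulation.jl):

    # Run as:  julia --track-allocation=user driver.jl
    include("simulation.jl")   # first run: pays JIT compilation as well
    clear_malloc_data()        # discard the allocation counts accumulated so far
    include("simulation.jl")   # second run: this is what the *.mem files report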
>>
>> Are there perhaps problems in my code that make the allocated memory grow? 
>> I can't understand why simply running the same function 6 times increases 
>> the memory so much. Unfortunately I need to do this hundreds of times, and 
>> that is impossible this way.
>>
>> Do you think that using the push! function, together with reducing the 
>> vector computations, could significantly reduce this large amount of 
>> allocated memory?
>>
>>
>> Bartolomeo 
>>
>>
>> On Wednesday, March 11, 2015 at 5:07:23 PM UTC, Tim Holy wrote:
>>>
>>> --track-allocation doesn't report the _net_ memory allocated, it reports 
>>> the _gross_ memory allocation. In other words, allocate/free adds to the 
>>> tally, even if all memory is eventually freed. 
>>>
>>> If you're still concerned about memory allocation and its likely impact on 
>>> performance: there are some things you can do. From glancing at your code 
>>> very briefly, a couple of comments: 
>>> - My crystal ball tells me you will soon come to adore the push! 
>>> function :-) 
>>> - If you wish (and it's your choice), you can reduce allocations by doing 
>>> more operations with scalars. For example, in computeReferenceCurrents, 
>>> instead of computing tpu and iref arrays outside the loop, consider 
>>> performing the equivalent operations on scalar values inside the loop. 
>>>
>>> Best, 
>>> --Tim 
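
A minimal sketch of those two suggestions (all the names below are made-up 
placeholders, not the actual computeReferenceCurrents):

    # 1) Grow result vectors with push! instead of building temporaries.
    # 2) Do the per-sample arithmetic on scalars inside the loop.
    function reference_currents(t, Iamp, f0)
        iref = Float64[]                  # result grows with push!
        for k = 1:length(t)
            tpu = t[k] * f0               # scalar per-unit time, no temp array
            push!(iref, Iamp * sin(2pi * tpu))
        end
        return iref
    end

    iref = reference_currents(0.0:1e-4:0.01, 10.0, 50.0)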
>>>
>>>
>>> On Wednesday, March 11, 2015 07:41:19 AM Bartolomeo Stellato wrote: 
>>> > Hi all, 
>>> > 
>>> > I recently started using Julia for my *closed-loop MPC simulations*. I 
>>> > found it very interesting that I was able to do almost everything I was 
>>> > doing in MATLAB with Julia. Unfortunately, when I started working on 
>>> > more complex simulations I noticed a *memory allocation problem*. 
>>> > 
>>> > I am using OSX Yosemite and Julia 0.3.6. I attached an MWE that can be 
>>> > executed with include("simulation.jl"). 
>>> > 
>>> > The code executes a single simulation of the closed-loop system with an 
>>> > *MPC controller* solving an optimization problem at each time step via 
>>> > the *Gurobi interface*. At the end of the simulation I am interested in 
>>> > only *two performance indices* (floating-point numbers). 
>>> > The simulation, however, takes more than 600 MB of memory and, even 
>>> > though most of the declared variables are local to different functions, 
>>> > I can't get rid of them afterwards with the garbage collector: gc() 
>>> > 
>>> > I analyzed the memory allocation with julia --track-allocation=user and 
>>> > I included the generated .mem files. Probably my code is not optimized, 
>>> > but I can't understand *why all that memory doesn't get deallocated 
>>> > after the simulation*. 
>>> > 
>>> > Is there anyone who could give me an explanation or a suggestion to 
>>> > solve this problem? I need to perform several of these simulations and 
>>> > it is impossible for me to allocate more than 600 MB for each one. 
>>> > 
>>> > 
>>> > Thank you! 
>>> > 
>>> > Bartolomeo 
>>>
>>>
