>
> My apologies for not responding sooner.  I have been clearing a stack of
> referee reports off my desk.


No problems. It's the end of semester here in Australia, so I've been buried
under piles of marking anyway :-)

Thus, I would be very interested in learning more about your work in coding
> the MCS in Julia.  Please, keep me updated.
>

The source is here: https://github.com/colintbowers/ForecastEval.jl

It is under heavy development at the moment, and I'm not up to the MCS yet
(I'm currently working on the Reality Check and the SPA test). But I'll add
another message to this thread once I think it is ready to be cloned.

The fixed coefficient structural VAR version of the Gibbs sampler is
> running in Julia, but it is slow


If the source for this is available online, I'll try to take a look at some
point (although it's unlikely to be within the next few weeks). I am by no
means an expert at writing fast Julia, but I like to think I'm starting to get
a handle on it now (I've been using it full-time for close to a year).

Also, I'd be interested in hearing how Mauro's and my recommendations go in
speeding up your code, once you get a chance to look at it again. I realised
after my last post that you can also avoid the memory allocation from the
slice variable in my version by using `sub`; there's a rough sketch of what I
mean below. On v0.3.x, though, this is not guaranteed to improve performance
for various reasons. But I think a recent thread here indicated that v0.4 may
be getting close to an official release.
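
Here is the sketch. It is untested, and it just reuses the argument names from
the version of bb_update I posted earlier in the thread (bbj, bbcovj, capZ,
nvar, Om_e, yyy); the only change is that the capZ slice becomes a view created
with `sub` rather than a copy. As above, on v0.3.x the generic SubArray code
paths may eat up any gains, so treat it as something to benchmark rather than a
guaranteed improvement:

function bb_update(bbj, bbcovj, capZ, nvar, Om_e, yyy)
    for j = 1:size(yyy, 2)
        rows = (j-1)*nvar+1 : j*nvar
        currentInverse = inv(Om_e[rows, :])               # inv still allocates a new matrix each pass
        currentZSlice = sub(capZ, rows, 1:size(capZ, 2))  # a view into capZ; no copy is made
        bbcovj += currentZSlice' * currentInverse * currentZSlice
        bbj += currentZSlice' * currentInverse * yyy[:, j]
    end
    return (bbj, bbcovj)
end

You could wrap the Om_e slice in `sub` as well, but since inv has to make its
own copy of the input anyway, I doubt that one matters much.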

Cheers,

Colin



On 25 June 2015 at 03:07, <jamesmna...@gmail.com> wrote:

> Dear Colin:
>
> Thanks for your comments.
>
> My apologies for not responding sooner.  I have been clearing a stack of
> referee reports off my desk.
>
> Yes, I am fortunate to be a coauthor of Peter Hansen and Asger Lunde.  But
> at the moment, I have no plans to code the MCS into Julia.  Peter and Asger
> work mostly in Ox.  Thus, I would be very interested in learning more about
> your work in coding the MCS in Julia.  Please, keep me updated.
>
> My question was generated by my attempt to port MatLab code of Canova and
> Pérez Forero (2015, "Estimating Overidentified, Non-Recursive, Time Varying
> Coefficients Structural VARs," Quantitative Economics, forthcoming) into
> Julia.  Canova and Pérez Forero propose a Metropolis within Gibbs sampler
> to estimate TVP-VARs with Leeper-Sims-Zha non-recursive identifications.
>
> The fixed coefficient structural VAR version of the Gibbs sampler is
> running in Julia, but it is slow.  My thinking was to start with the fixed
> coefficient model as a way to learn Julia.
>
> Stay in touch.
>
> Best wishes.
>
> Jim
>
>
>
> On Wednesday, June 17, 2015 at 9:26:15 PM UTC-4, colint...@gmail.com
> wrote:
>>
>> Hi Jim,
>>
>> A couple of points:
>>
>> 1) Maybe I'm missing something, but you appear to be calculating the same
>> inverse twice on every iteration of your loop. That is,
>> inv(Om_e[(j-1)*nvar+1:j*nvar,:]) gets called twice.
>>
>> 2) As Mauro points out, memory allocation is currently triggered when
>> slicing into 2d arrays on v0.3.x. You can read more about this at the
>> following StackOverflow question:
>> http://stackoverflow.com/questions/28271308/avoid-memory-allocation-when-indexing-an-array-in-julia.
>> Since you are indexing with ranges, my understanding is that in v0.4 you
>> should be able to avoid the allocation. In the meantime, you could try
>> performing the slice once and assign it to a new variable on each
>> iteration, and then use that variable in your matrix calls.
>>
>> 3) I'll strongly second Mauro's suggestion that you pass in to your
>> function everything that is not explicitly defined as a global constant.
>> This should provide a significant performance improvement. For more reading
>> on this, check out the first item in the "Performance Tips" section of the
>> official docs (
>> http://julia.readthedocs.org/en/latest/manual/performance-tips/)
>>
>> So taking all these things together, my version of your function would
>> look something like this:
>>
>> function bb_update(bbj, bbcovj, capZ, nvar, Om_e, yyy)
>>      for j = 1:size(yyy, 2)
>>         currentInverse = inv(Om_e[(j-1)*nvar+1:j*nvar,:])
>>         currentZSlice = capZ[(j-1)*nvar+1:j*nvar,:]
>>         bbcovj += currentZSlice' * currentInverse * currentZSlice
>>         bbj += currentZSlice' * currentInverse * yyy[:,j]
>>      end
>>      return (bbj, bbcovj)
>> end
>>
>> A final question: are you planning on implementing the Model Confidence
>> Set in Julia at any time soon (I'm making the possibly incorrect assumption
>> that you're the same James Nason from Hansen, Lunde, Nason (2011))? Bit of
>> a coincidence, but I was hoping to implement the Model Confidence Set in
>> Julia sometime in the next few weeks as part of a forecast evaluation
>> package. If you're interested, I can point you to the github source once it
>> is done.
>>
>> Cheers,
>>
>> Colin
>>
>>
>> On Wednesday, 17 June 2015 14:11:49 UTC+10, james...@gmail.com wrote:
>>>
>>> Hi All:
>>>
>>> I am a novice using Julia.  As a way to learn Julia, my project is to
>>> convert MatLab code that estimates Bayesian vector autoregressions.  The
>>> estimator uses Gibbs sampling.
>>>
>>> The Julia code is running, but is slower than MatLab.  Thus, I am doing
>>> something very inefficiently in Julia.
>>>
>>> The ProfileView package and the @time macro indicate the problem is
>>> located in the loop of the function bb_update(bbj, bbcovj), which appears
>>> below.
>>>
>>> bb_update(.,.) represents a step in the Gibbs sampling procedure.  This
>>> part of the Gibbs sampler updates the column vector of coefficients to be
>>> estimated, bbj, and its covariance matrix, bbcovj, which is positive
>>> semi-definite (psd).
>>> bb_update(bbj, bbcovj) returns bbcovj and bbj.
>>>
>>> bbj is (nvar*nvar*plags + nvar) x 1 and bbcovj is (nvar*nvar*plags +
>>> nvar) x (nvar*nvar*plags + nvar).
>>>
>>> The (global) constants are nvar = 6, number of variables (Int64) and
>>> plags = 6, number of lags in the vector autoregression (Int64), which are
>>> defined previously in the programs.
>>>
>>> The loop in bb_update(.,.) involves actual data, capZ and yyy, and a
>>> different covariance matrix, Om_e, that for the purposes of the loops is
>>> fixed.  capZ and Om_e are defined outside bb_update(.,.).  bbj is estimated
>>> conditional on Om_e.
>>>
>>> capZ = a data matrix (Array{Float64,2}), which is (nvar x obs) x
>>> (nvar*nvar*plags + nvar), where the (global) constant obs = 223, number of
>>> observations (Int64).
>>>
>>> yyy is nvar x obs (Array{Float64,2}).
>>>
>>> The covariance matrix Om_e is (nvar x obs) x nvar (Array{Float64,2}).
>>>
>>> Prior to calling bb_update(.,.), the program sets bbj = zeros(
>>> nvar*nvar*plags + nvar, 1) and bbcovj = zeros( nvar*nvar*plags + nvar,
>>> nvar*nvar*plags + nvar), respectively.  These are input into bb_update(.,.).
>>>
>>> The loop for j = 1:obs picks off
>>>
>>> 1) a nvar x (nvar*nvar*plags + nvar) block of capZ to form a quadratic
>>> around a nvar x nvar block of Om_e to construct bbcovj
>>>
>>> 2) the transpose of the same block of capZ, the same block of Om_e, and
>>> a nvar x 1 column of yyy are multiplied to compute bbj.
>>>
>>> The @time macro suggests 80 to 90% of the run time of the entire
>>> Gibbs sampling procedure is tied up in the loop of bb_update(.,.) and that
>>> 55% of the run time of bb_update(.,.) is devoted to gc() activities.
>>>
>>> I am running version 0.3.8 of Julia and the OS is Xubuntu64, v14.04.
>>>
>>> Your help/advice is much appreciated.
>>>
>>> Sincerely,
>>>
>>> Jim Nason
>>>
>>> ======================================================
>>>
>>> function bb_update(bbj, bbcovj)
>>>
>>>      for j = 1:obs
>>>          bbcovj += capZ[(j-1)*nvar+1:j*nvar,:]' * inv(Om_e[(j-1)*nvar+1:j*nvar,:]) * capZ[(j-1)*nvar+1:j*nvar,:] ;
>>>          bbj += capZ[(j-1)*nvar+1:j*nvar,:]' * inv(Om_e[(j-1)*nvar+1:j*nvar,:]) * yyy[:,j] ;
>>>      end
>>>
>>>      return (bbj, bbcovj)
>>> end
>>>
>>
