Thank you again for taking the time to respond to my inquiry.

Shruti

On Fri, Oct 23, 2015 at 3:06 AM, Jose Luis Marin <mari...@gridquant.com>
wrote:

> Hi Shruti,
>
> In that case I was referring to our C implementation of HELM, but yes, we
> do the factorization just once and save the triangular factors, in order to
> use them repeatedly.
>
> You can actually see how this is also done in MATPOWER, in the Fast
> Decoupled methods (fdpf.m). These methods factorize the matrices just once
> and keep them constant throughout the iterations.  The Matlab function
> you're looking for is lu(), see Lines 80--104 and 129--135 in
> http://www.pserc.cornell.edu//matpower/docs/ref/matpower5.1/fdpf.html
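The factor-once pattern described here can be sketched outside MATLAB as well. Below is a minimal SciPy illustration (not MATPOWER code); `splu` plays the role that `lu()` plays in fdpf.m, holding the triangular factors so that each subsequent solve is only a forward and backward substitution:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# A small sparse system standing in for the constant matrix
# (HELM series matrix, or the fast-decoupled B' / B'' matrices).
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))

# Factorize once; 'lu' holds the L and U triangular factors.
lu = splu(A)

# Reuse the factors for many right-hand sides (e.g., successive
# series terms): each solve is just forward/backward substitution.
b1 = np.array([1.0, 2.0, 3.0])
b2 = np.array([0.5, -1.0, 4.0])
x1 = lu.solve(b1)
x2 = lu.solve(b2)
```

The point is that the O(n^3)-flavored factorization cost is paid once, while each reuse costs only the two triangular solves.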
>
>
> Regards,
>
> --
> Jose L. Marin
> Gridquant España SL
> Grupo AIA
>
>
> On Fri, Oct 23, 2015 at 5:22 AM, Shruti Rao <sra...@asu.edu> wrote:
>
>> Greetings Dr. Marin,
>>
>> In the variant of HELM, do you use backslash as well? I ask because one
>> of the advantages of HELM is that the matrix does not change for a given
>> static problem (setting aside bus-type switching, tap changing, etc.). As
>> far as I know, and I may be wrong, backslash performs the matrix
>> factorization every time we use it. But if the matrix does not change,
>> wouldn't we save a lot of computation time by doing the factorization
>> only once and then performing only forward and backward substitutions to
>> find the higher-order series terms?
>>
>>
>> Regards,
>> Shruti
>>
>> On Thu, Oct 22, 2015 at 1:01 AM, Jose Luis Marin <mari...@gridquant.com>
>> wrote:
>>
>>> Hello Shri,
>>>
>>> Most of the testing we did was on CHOLMOD, as the matrices in our
>>> problem (from a variant of our HELM algorithm) were symmetric. We found
>>> that the simplicial option was always much faster than the supernodal
>>> one, since the "supernodes" were always too small.  We never got around
>>> to testing KLU properly, sorry.  What is the PETSc linear solver that
>>> you're comparing to? Is it direct or iterative?  What is the storage
>>> format for sparse matrices in PETSc?  One wild guess at why you may be
>>> seeing this large difference is that there could be overhead in
>>> converting back and forth to the compressed-sparse-column format used by
>>> KLU.  Another easy thing to test is to deactivate the BTF step in KLU
>>> and see if that helps with power system matrices.  If I had the time, I
>>> would also like to benchmark the LU solver from CXSparse, which seems to
>>> be the simplest, barebones implementation of a simplicial sparse solver.
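For anyone unfamiliar with the compressed-sparse-column layout mentioned above, SciPy's `csc_matrix` exposes the same three flat arrays that KLU's C interface consumes (conventionally named Ap/Ai/Ax in SuiteSparse); converting to or from a row-oriented format means rebuilding all three, which is the kind of overhead being guessed at here:

```python
import numpy as np
from scipy.sparse import csc_matrix

A = csc_matrix(np.array([[1.0, 0.0, 2.0],
                         [0.0, 3.0, 0.0],
                         [4.0, 0.0, 5.0]]))

# CSC stores the matrix column by column as three flat arrays:
print(A.indptr)   # column pointers:             [0 2 3 5]
print(A.indices)  # row index of each nonzero:   [0 2 1 0 2]
print(A.data)     # the nonzero values:          [1. 4. 3. 2. 5.]
```

Column j's nonzeros live in positions `indptr[j]:indptr[j+1]` of `indices`/`data`, so a row-major (CSR-style) source must be transposed/reshuffled into this layout before handing the matrix to a CSC-based solver.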
>>>
>>> --
>>> Jose L. Marin
>>> Gridquant España SL
>>> Grupo AIA
>>>
>>>
>>>
>>> On Wed, Oct 21, 2015 at 3:19 AM, Abhyankar, Shrirang G. <abhy...@anl.gov> wrote:
>>>
>>>> I concur with Jose’s observations on KLU, but I don’t think MATLAB will
>>>> make KLU its default solver. It would be great to have a KLU linear
>>>> solver option for mplinsolve()
>>>> <http://www.pserc.cornell.edu//matpower/docs/ref/matpower5.1/mplinsolve.html>
>>>>
>>>>
>>>> I recently added an interface to KLU in PETSc
>>>> <http://www.mcs.anl.gov/petsc/> and compared PETSc’s and KLU’s linear
>>>> solvers on a bunch of MATPOWER test cases. I was surprised to note that
>>>> PETSc’s linear solver was actually faster than KLU (on all the test
>>>> cases) by a factor of 1.5 to 3. I experimented with the solver and
>>>> reordering schemes available in KLU, but did not find any option that
>>>> beats PETSc. I’d appreciate pointers from anyone who has used KLU on
>>>> the options they’ve used for their application. My power flow example
>>>> code is available with the PETSc distribution
>>>> https://bitbucket.org/petsc/petsc.git in the directory
>>>> src/snes/examples/tutorials/network/pflow for anyone who wants to test it.
>>>>
>>>> I am not surprised that PARDISO was found to be slower than ‘\'. Most
>>>> of the power system test cases in MATPOWER are so small that a
>>>> multi-threaded linear solver may not yield any appreciable speedup; the
>>>> thread launch and synchronization costs would most likely dominate. In
>>>> addition, one needs to be cognizant of issues such as thread affinity
>>>> and first touch when dealing with threads, which makes performance
>>>> optimization harder.
>>>>
>>>> Shri
>>>>
>>>> From: Jose Luis Marin <mari...@gridquant.com>
>>>> Reply-To: MATPOWER discussion forum <matpowe...@list.cornell.edu>
>>>> Date: Monday, October 19, 2015 at 12:15 PM
>>>>
>>>> To: MATPOWER discussion forum <matpowe...@list.cornell.edu>
>>>> Subject: Re: Question about sparsity-based implementation in MATPower
>>>>
>>>>
>>>> I'd like to add that Matlab keeps incorporating the latest sparse
>>>> direct solvers coming from Tim Davis and his group from TAMU / U. of
>>>> Florida (SuiteSparse
>>>> <http://faculty.cse.tamu.edu/davis/suitesparse.html>) into their new
>>>> versions.  I believe that if the Jacobian is symmetric, current versions of
>>>> MATLAB will use CHOLMOD, while if it's not, they will use UMFPACK.
>>>>
>>>> This is great because these are solid, state-of-the-art direct solvers;
>>>> however, as far as I know, there is still no way in Matlab to tune
>>>> spparms in order to deactivate their multifrontal / supernodal variants
>>>> and just use the "simplicial" variants instead.  In some testing we did
>>>> a while ago on the C version of SuiteSparse, the multifrontal and
>>>> supernodal approaches performed worse on the kind of matrices that one
>>>> typically obtains in power networks.  That made sense, because those
>>>> techniques essentially try to find denser blocks in order to use the
>>>> BLAS, and power system matrices are just too sparse for that approach
>>>> to pay off.  I hope Matlab implements the KLU solver as an option some
>>>> day, because my hunch is that KLU is the fastest solver for power
>>>> system problems (it is used in Xyce, a SPICE-like simulator).
>>>>
>>>>
>>>> --
>>>> Jose L. Marin
>>>> Gridquant España SL
>>>> Grupo AIA
>>>>
>>>>
>>>> On Mon, Oct 19, 2015 at 3:04 PM, Ray Zimmerman <r...@cornell.edu>
>>>> wrote:
>>>>
>>>>> I would also mention, for those who are interested, that version 5.1
>>>>> of MATPOWER includes a wrapper function mplinsolve()
>>>>> <http://www.pserc.cornell.edu//matpower/docs/ref/matpower5.1/mplinsolve.html>
>>>>> that allows you to choose between different linear solvers for
>>>>> computing the Newton update step in the MIPS interior-point OPF
>>>>> algorithm. Currently this includes only Matlab’s built-in \ operator
>>>>> or the optional PARDISO.
>>>>>
>>>>> If I remember correctly, for the Newton-Raphson power flow, I stuck
>>>>> with using Matlab’s \ operator directly rather than mplinsolve()
>>>>> <http://www.pserc.cornell.edu//matpower/docs/ref/matpower5.1/mplinsolve.html>,
>>>>> because even for the largest systems I tried, there was little or no
>>>>> advantage to PARDISO, and the extra overhead was noticeable on small
>>>>> systems.
>>>>>
>>>>>    Ray
>>>>>
>>>>>
>>>>> On Oct 19, 2015, at 1:05 AM, Shruti Rao <sra...@asu.edu> wrote:
>>>>>
>>>>> Thank you Dr. Abhyankar for the guidance. I appreciate your time and
>>>>> effort.
>>>>>
>>>>> Shruti
>>>>>
>>>>> On Sun, Oct 18, 2015 at 10:02 PM, Abhyankar, Shrirang G. <
>>>>> abhy...@anl.gov> wrote:
>>>>>
>>>>>> Shruti,
>>>>>>   MATPOWER does use the “\” operator for the linear solves. However,
>>>>>> note that internally MATLAB performs some form of matrix reordering
>>>>>> to reduce fill-in in the factored matrix. For instance, UMFPACK uses
>>>>>> an approximate minimum degree reordering scheme by default.
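The effect of fill-reducing reordering is easy to see on an "arrow" matrix, a worst case for the natural elimination order. Here is an illustrative SciPy sketch (its `splu` exposes COLAMD rather than MATLAB's exact AMD variant, but the fill-in behavior it demonstrates is the same):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Arrow matrix: dense first row/column, diagonal elsewhere.
# Eliminating node 0 first makes the whole Schur complement dense.
n = 50
A = np.eye(n) * 10.0
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = 10.0
A = csc_matrix(A)

# Natural ordering: massive fill-in in the factors.
lu_nat = splu(A, permc_spec='NATURAL')
# Fill-reducing column ordering (COLAMD): factors stay sparse.
lu_amd = splu(A, permc_spec='COLAMD')

fill_nat = lu_nat.L.nnz + lu_nat.U.nnz  # roughly O(n^2) nonzeros
fill_amd = lu_amd.L.nnz + lu_amd.U.nnz  # roughly O(n) nonzeros
```

With the fill-reducing ordering the dense row/column is eliminated last, so the factors keep nearly the sparsity of the original matrix.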
>>>>>>
>>>>>> Shri
>>>>>>
>>>>>> From: Shruti Rao <sra...@asu.edu>
>>>>>> Reply-To: MATPOWER discussion forum <matpowe...@list.cornell.edu>
>>>>>> Date: Sunday, October 18, 2015 at 8:31 PM
>>>>>> To: MATPOWER discussion forum <matpowe...@list.cornell.edu>
>>>>>> Subject: Re: Question about sparsity-based implementation in MATPower
>>>>>>
>>>>>> Thank you, Dr. Abhyankar.
>>>>>>
>>>>>> My main aim was to confirm that MATPOWER uses the built-in "\" to
>>>>>> solve the matrix equations, and not Tinney or some other form of
>>>>>> reordering followed by LU factorization and forward/backward
>>>>>> substitutions. From your response I assume it is true that MATPOWER
>>>>>> uses "\", right?
>>>>>>
>>>>>> Thank you for your response.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Oct 18, 2015 at 6:27 PM, Abhyankar, Shrirang G. <
>>>>>> abhy...@anl.gov> wrote:
>>>>>>
>>>>>>> Hi Shruti,
>>>>>>>   The direct linear solver used by MATLAB depends on the symmetry of
>>>>>>> the Jacobian matrix. For MATPOWER test cases that have symmetric 
>>>>>>> Jacobians
>>>>>>> (due to inactive taps), a Cholesky factorization is used (LL^T = A). For
>>>>>>> cases that lead to a non-symmetric Jacobian, MATLAB uses UMFPACK to
>>>>>>> perform the linear solve.
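The distinction can be illustrated with small dense NumPy/SciPy analogues (toy matrices here, not actual power-flow Jacobians): a symmetric positive-definite matrix admits the cheaper one-factor Cholesky decomposition, while a non-symmetric matrix needs a general LU factorization with pivoting:

```python
import numpy as np
from scipy.linalg import cholesky, lu

# Symmetric positive-definite case: Cholesky, A = L L^T
# (a single triangular factor, roughly half the work of LU).
A_sym = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
L = cholesky(A_sym, lower=True)  # L @ L.T reconstructs A_sym

# Non-symmetric case: general LU with row pivoting, A = P L U.
A_ns = np.array([[0.0, 2.0],
                 [3.0, 1.0]])
P, Lf, U = lu(A_ns)              # P @ Lf @ U reconstructs A_ns
```

This is why a solver (or MATLAB's "\") first checks symmetry: when Cholesky applies, it is both faster and guarantees the factorization succeeds only for a positive-definite matrix, which doubles as a useful diagnostic.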
>>>>>>>
>>>>>>> Shri
>>>>>>>
>>>>>>> From: Shruti Rao <sra...@asu.edu>
>>>>>>> Reply-To: MATPOWER discussion forum <matpowe...@list.cornell.edu>
>>>>>>> Date: Sunday, October 18, 2015 at 5:37 PM
>>>>>>> To: MATPOWER discussion forum <matpowe...@list.cornell.edu>
>>>>>>> Subject: Question about sparsity-based implementation in MATPower
>>>>>>>
>>>>>>> Greetings MATPOWER community,
>>>>>>>
>>>>>>> I had a question about the way sparsity-based techniques are used
>>>>>>> in the Newton-Raphson solver of the power flow algorithm in MATPOWER.
>>>>>>>
>>>>>>> I stepped through the code, and from my understanding, the sparsity
>>>>>>> of the Jacobian matrix is exploited by creating it as a MATLAB
>>>>>>> "sparse" matrix, in which only the nonzeros are stored along with
>>>>>>> their matrix positions, and then invoking the MATLAB operator "\"
>>>>>>> when calculating dx = -(J \ F), where J is the Jacobian and F is the
>>>>>>> vector of mismatches.
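That update step can be sketched with SciPy's sparse machinery (a tiny made-up 2x2 "Jacobian" for illustration, not MATPOWER's actual one; `spsolve` stands in for MATLAB's "\"):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

# Sparse Jacobian J and mismatch vector F (toy values).
J = csc_matrix(np.array([[10.0, -2.0],
                         [-2.0,  8.0]]))
F = np.array([0.5, -0.3])

# Newton update dx = -(J \ F): one sparse direct solve per iteration.
dx = spsolve(J, -F)
```

Only the nonzeros of J are stored and factored, which is what makes each Newton iteration tractable for large networks.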
>>>>>>>
>>>>>>> By default, MATLAB's "\" exploits the sparsity of the matrix by
>>>>>>> using an LU solver. The kind of solver "\" actually uses depends on
>>>>>>> the matrix structure: whether it is diagonal, tridiagonal, banded,
>>>>>>> and so on (a flowchart from the MathWorks website is attached to
>>>>>>> this email). Based on the typical structure of the Jacobian, I
>>>>>>> assume an LU solver is most likely to be chosen.
>>>>>>>
>>>>>>> Is my understanding correct, or am I missing something? Thank you
>>>>>>> for your time and effort.
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Best Regards,
>>>>>>> Shruti Dwarkanath Rao
>>>>>>>
>>>>>>> Graduate Research Assistant
>>>>>>> School of Electrical, Computer and Energy Engineering
>>>>>>> Arizona State University
>>>>>>> Tempe, AZ, 85281
>>>>>>> 650 996 0116
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>>
>>
>
>


