Right, ARPACK only enforces the projection for the generalized problem. I 
think your example relies on roundoff error in the orthogonalization step 
to populate the null space.

With regard to semi-definite B (or C in the OP), the shifted version of the 
algorithm (mode 3 in ARPACK) will work for this case, but it does require 
ncv <= rank(B). Here one needs to solve (A - sigma * B) x = y instead of 
B x = y, so the only requirement is that the shifted matrix be nonsingular. 
I got this to work by suppressing the isposdef() check in arnoldi.jl; 
passing sigma=0 selects the right mode.  The disadvantage is that when B is 
*indefinite* the algorithm produces nonsense without complaint. Do you know 
of a simple test for semi-definiteness?
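
To make that concrete, here is roughly the call pattern I mean. This is 
only a sketch: the matrices are placeholders I made up, and it assumes the 
isposdef() check in arnoldi.jl has already been suppressed, since stock 
eigs refuses a singular B.

n = 10
A = diagm(collect(1.0:n))            # nonsingular, so A - sigma*B is solvable at sigma = 0
B = diagm([ones(n - 2); zeros(2)])   # positive SEMI-definite, rank(B) = n - 2

# mode 3 works with inv(A - sigma*B)*B, so only the shifted matrix needs to
# be nonsingular; ncv must stay at or below rank(B)
vals, vecs = eigs(A, B, nev=2, ncv=8, sigma=0.0)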

Just for the record, the logic in arnoldi.jl is not tricky.  If one has a 
really large problem where the factorizations are impractical, one could 
readily adapt the arnoldi.jl code to invoke iterative linear solvers.
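
For instance, the step where arnoldi.jl applies the factorization of 
A - sigma*B could instead hand the shifted operator to a matrix-free Krylov 
solver. Purely a sketch; the function and argument names below are mine, 
not anything from Base or arnoldi.jl.

# `itsolve` stands for whichever iterative solver one prefers (GMRES, MINRES, ...)
function shifted_solve(A, B, sigma, x, itsolve)
    op = v -> A*v - sigma*(B*v)    # action of A - sigma*B, no factorization formed
    return itsolve(op, x)          # approximate solution of (A - sigma*B) y = x
end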

On Saturday, August 6, 2016 at 6:07:58 PM UTC-4, Andreas Noack wrote:
>
> Ralph, ARPACK seems to be okay with the singularity (of A) in the 
> non-generalized case. In the problem here B=I, so it's only a generalized 
> problem by algorithm, not by math, and 
>
> julia> eigs(A)[1]
> 6-element Array{Float64,1}:
>  -100.0
>   100.0
>     4.44392e-15
>    -3.0281e-18
>     3.5842e-33
>     1.69212e-33
>
> so here ARPACK is doing something different. Maybe expanding with vectors 
> from the null space, which I guess is fine as long as the αs and βs are 
> zeroed and you reorthogonalize.
>
> On Saturday, August 6, 2016 at 5:02:42 PM UTC-4, Ralph Smith wrote:
>>
>> The 0.5 behavior is the same as Fortran, and is technically correct. 
>>  ARPACK cannot create more (orthogonal) Krylov vectors than the rank of the 
>> matrix A,
>> so your example needs the additional keyword ncv=2. This doesn't seem to 
>> be adequately documented in arpack-ng either. The 0.4 behavior looks 
>> unreliable.
>>
>> On Saturday, August 6, 2016 at 2:03:21 AM UTC-4, Madeleine Udell wrote:
>>>
>>> Actually: I'm not sure we should chalk the error up to ARPACK. Julia 0.4 
>>> is able to produce a (correct, I think) answer to
>>>
>>> eigs(A, C)
>>>
>>> but 0.5 gives an ARPACK error. I don't suppose ARPACK changed between 
>>> Julia versions...!?
>>>
>>> On Sat, Aug 6, 2016 at 1:54 AM, Madeleine Udell <madelei...@gmail.com> 
>>> wrote:
>>>
>>>> Andreas, thanks for the investigation. I'll use 0.5 for now, and hope 
>>>> the real problems I encounter are within the capabilities of ARPACK.
>>>>
>>>> It's embarrassing to be bested by Matlab...
>>>>
>>>> On Fri, Aug 5, 2016 at 9:23 PM, Andreas Noack <andreasno...@gmail.com> 
>>>> wrote:
>>>>
>>>>> I've looked a bit into this. I believe there is a bug in the Julia 
>>>>> wrappers on 0.4. The good news is that the bug appears to be fixed on 
>>>>> 0.5. 
>>>>> The bad news is that ARPACK seems to have a hard time with the problem. I 
>>>>> get
>>>>>
>>>>> julia> eigs(A,C,nev = 1, which = :LR)[1]
>>>>> ERROR: Base.LinAlg.ARPACKException("unspecified ARPACK error: -9999")
>>>>>  in aupd_wrapper(::Type{T}, ::Base.LinAlg.#matvecA!#67{Array{Float64,2}}, ::Base.LinAlg.##64#71{Array{Float64,2}}, ::Base.LinAlg.##65#72, ::Int64, ::Bool, ::Bool, ::String, ::Int64, ::Int64, ::String, ::Float64, ::Int64, ::Int64, ::Array{Float64,1}) at ./linalg/arpack.jl:53
>>>>>  in #_eigs#60(::Int64, ::Int64, ::Symbol, ::Float64, ::Int64, ::Void, ::Array{Float64,1}, ::Bool, ::Base.LinAlg.#_eigs, ::Array{Float64,2}, ::Array{Float64,2}) at ./linalg/arnoldi.jl:271
>>>>>  in (::Base.LinAlg.#kw##_eigs)(::Array{Any,1}, ::Base.LinAlg.#_eigs, ::Array{Float64,2}, ::Array{Float64,2}) at ./<missing>:0
>>>>>  in #eigs#54(::Array{Any,1}, ::Function, ::Array{Float64,2}, ::Array{Float64,2}) at ./linalg/arnoldi.jl:80
>>>>>  in (::Base.LinAlg.#kw##eigs)(::Array{Any,1}, ::Base.LinAlg.#eigs, ::Array{Float64,2}, ::Array{Float64,2}) at ./<missing>:0
>>>>>
>>>>> and since SciPy ends up with the same result, I conclude that the issue 
>>>>> is in ARPACK. Matlab must be doing something else, because it is able 
>>>>> to handle this problem.
>>>>>
>>>>> Given that 0.5 is almost released, I won't spend more time on the 
>>>>> issue on 0.4. Though, if anybody is able to figure out what is going 
>>>>> on, please let us know.
>>>>>
>>>>> On Friday, August 5, 2016 at 8:47:26 AM UTC-4, Madeleine Udell wrote:
>>>>>>
>>>>>> Setting `which=:LR, nev=1` does not return the generalized eigenvalue 
>>>>>> with the largest real part, and does not give a warning or error:
>>>>>>
>>>>>> n = 10
>>>>>> C = eye(n)
>>>>>> A = zeros(n,n)
>>>>>> A[1] = 100
>>>>>> A[end] = -100
>>>>>> @assert eigs(A, C, nev=1, which=:LR)[1][1] == maximum(eigs(A, C)[1])
>>>>>>
>>>>>> Am I expected to set nev greater than the number of eigenvalues I 
>>>>>> truly desire, based on my intuition as a numerical analyst? Or has eigs 
>>>>>> broken its implicit guarantee?
>>>>>>
>>>>>>
>>>>
>>>>
>>>> -- 
>>>> Madeleine Udell
>>>> Assistant Professor, Operations Research and Information Engineering
>>>> Cornell University
>>>> https://people.orie.cornell.edu/mru8/
>>>> (415) 729-4115
>>>>
>>>
>>>
>>>
>>> -- 
>>> Madeleine Udell
>>> Assistant Professor, Operations Research and Information Engineering
>>> Cornell University
>>> https://people.orie.cornell.edu/mru8/
>>> (415) 729-4115
>>>
>>
