[petsc-users] Problem in loading Matrix Market format

2019-02-11 Thread Eda Oktay via petsc-users
Hello,

I am trying to load a matrix in Matrix Market format. I found an example for
reading a matrix from an ASCII file (ex78), which can be tested using a .dat
file. Since the .dat and .mtx files are similar in structure (especially
afiro_A.dat, which is similar to amesos2_test_mat0.mtx since they both have 3
columns and the columns represent the same properties), I tried to run ex78
using amesos2_test_mat0.mtx instead of afiro_A.dat. However, I got the error
"Badly formatted input file". Here is the full error message:

[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: Badly formatted input file

[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018
[0]PETSC ERROR: ./ex78 on a arch-linux2-c-debug named 7330.wls.metu.edu.tr
by edaoktay Tue Feb 12 10:47:58 2019
[0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++
--with-fc=gfortran --with-cxx-dialect=C++11 --download-openblas
--download-metis --download-parmetis --download-superlu_dist
--download-slepc --download-mpich
[0]PETSC ERROR: #1 main() line 73 in
/home/edaoktay/petsc-3.10.3/src/mat/examples/tests/ex78.c
[0]PETSC ERROR: PETSc Option Table entries:
[0]PETSC ERROR: -Ain
/home/edaoktay/petsc-3.10.3/share/petsc/datafiles/matrices/amesos2_test_mat0.mtx
[0]PETSC ERROR: End of Error Message ---send entire
error message to petsc-ma...@mcs.anl.gov--
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=1
:
system msg for write_line failure : Bad file descriptor

I know there is also an example (ex72) for the Matrix Market format, but
according to its description it only handles symmetric, lower-triangular
input, so I decided to use ex78.
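
In case it clarifies what I am after, here is a minimal sketch (my own, not
one of the PETSc examples) of reading a general, real, coordinate-format .mtx
file into a SeqAIJ matrix. It assumes the usual layout of a banner line,
optional '%' comment lines, a size line, and then 1-based "row column value"
entries, and takes the file name through the same -Ain option that ex78 uses:

#include <petscmat.h>

int main(int argc,char **argv)
{
  Mat            A;
  FILE           *fd;
  char           buf[PETSC_MAX_PATH_LEN],filename[PETSC_MAX_PATH_LEN];
  int            m,n,nz,i,row,col;
  double         val;
  PetscBool      flg;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
  ierr = PetscOptionsGetString(NULL,NULL,"-Ain",filename,sizeof(filename),&flg);CHKERRQ(ierr);
  if (!flg) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_USER,"Must provide -Ain <file.mtx>");
  ierr = PetscFOpen(PETSC_COMM_SELF,filename,"r",&fd);CHKERRQ(ierr);

  /* Skip the %%MatrixMarket banner and any '%' comment lines */
  do {
    if (!fgets(buf,sizeof(buf),fd)) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_FILE_UNEXPECTED,"Badly formatted input file");
  } while (buf[0] == '%');

  /* First non-comment line: rows, columns, number of nonzeros */
  if (sscanf(buf,"%d %d %d",&m,&n,&nz) != 3) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_FILE_UNEXPECTED,"Badly formatted size line");

  /* Rough average preallocation; allow extra mallocs since rows are uneven */
  ierr = MatCreateSeqAIJ(PETSC_COMM_SELF,m,n,nz/m+1,NULL,&A);CHKERRQ(ierr);
  ierr = MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE);CHKERRQ(ierr);

  /* Entries are 1-based: row column value */
  for (i=0; i<nz; i++) {
    if (fscanf(fd,"%d %d %le",&row,&col,&val) != 3) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_FILE_UNEXPECTED,"Badly formatted entry");
    ierr = MatSetValue(A,row-1,col-1,(PetscScalar)val,INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = PetscFClose(PETSC_COMM_SELF,fd);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* View here; it could instead be written out with PetscViewerBinaryOpen/MatView */
  ierr = MatView(A,PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}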

Best regards,

Eda


Re: [petsc-users] Preconditioning systems of equations with complex numbers

2019-02-11 Thread Jed Brown via petsc-users
Justin Chang via petsc-users  writes:

> So I used -mat_view draw -draw_pause -1 on my medium sized matrix and got
> this output:
>
> [image: 1MPI.png]
>
> So it seems there are lots of off-diagonal terms, and that a decomposition
> of the problem via MatLoad would give a terribly unbalanced problem.
>
> Given the initial A and b Mat/Vec, I experimented with MatPartitioning and
> inserted the following lines into my code:
>
> Mat   Apart;
> Vec   bpart;
> MatPartitioning   part;
> IS   is,isrows;
> ierr = MatPartitioningCreate(PETSC_COMM_WORLD, &part);CHKERRQ(ierr);
> ierr = MatPartitioningSetAdjacency(part, A);CHKERRQ(ierr);
> ierr = MatPartitioningSetFromOptions(part);CHKERRQ(ierr);
> ierr = MatPartitioningApply(part, &is);CHKERRQ(ierr);
> ierr = ISBuildTwoSided(is,NULL,&isrows);CHKERRQ(ierr);
> ierr = MatCreateSubMatrix(A, isrows,isrows, MAT_INITIAL_MATRIX,
> &Apart);CHKERRQ(ierr);
> ierr = MatSetOptionsPrefix(Apart, "part_");CHKERRQ(ierr);
> ierr = MatSetFromOptions(Apart);CHKERRQ(ierr);
> ierr = VecGetSubVector(b,isrows,&bpart);CHKERRQ(ierr);
>
> /* Set Apart and bpart in the KSPSolve */
> ...
>
> And here are the mat_draw figures from 2 and 4 MPI processes respectively:
>
> [image: 2MPI.png][image: 4MPI.png]
>
> Is this "right"? It just feels like I'm duplicating the nnz structure
> among all the MPI processes. And it didn't really improve the performance
> of ASM.

ASM might not be an appropriate preconditioner (or it might need a
special sort of overlap for stability of the local problems).  The edge
cuts look relatively small, so this doesn't look to me like a power-law or
social-network problem, the kind that does not admit vertex partitions with
low edge cut.
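
If you want to experiment with that, here is a minimal sketch of increasing
the ASM overlap programmatically; the overlap value is only an illustration,
and the equivalent command-line options are -pc_type asm -pc_asm_overlap 2:

#include <petscksp.h>

/* Sketch: ASM with a wider overlap; subdomain solver options such as
   -sub_pc_type lu are still picked up from the options database. */
static PetscErrorCode SetupASM(KSP ksp,PetscInt overlap)
{
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCASM);CHKERRQ(ierr);
  ierr = PCASMSetOverlap(pc,overlap);CHKERRQ(ierr); /* try 1, 2, ... */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Whether a wider overlap helps depends on the problem, so treat this as
something to test rather than a recommendation.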

We really have to understand the spectrum to comment further on fast
solvers.


Re: [petsc-users] Preconditioning systems of equations with complex numbers

2019-02-11 Thread Smith, Barry F. via petsc-users

   Given the nonzero structure of the matrix I'd be surprised if either 
GAMG/boomerAMG or ASM was particularly efficacious. I don't have any 
recommendations for iterative methods.


   Barry


> On Feb 11, 2019, at 8:41 AM, Abhyankar, Shrirang G via petsc-users 
>  wrote:
> 
>  
>  
> From: Justin Chang  
> Sent: Friday, February 8, 2019 4:57 PM
> To: Abhyankar, Shrirang G 
> Cc: Mark Adams ; PETSc users list 
> Subject: Re: [petsc-users] Preconditioning systems of equations with complex 
> numbers
>  
> So I used -mat_view draw -draw_pause -1 on my medium sized matrix and got 
> this output:
>  
> 
>  
> So it seems there are lots of off-diagonal terms, and that a decomposition of
> the problem via MatLoad would give a terribly unbalanced problem.
>  
> Given the initial A and b Mat/Vec, I experimented with MatPartitioning and
> inserted the following lines into my code:
>  
> Mat   Apart;
> Vec   bpart;
> MatPartitioning   part;
> IS   is,isrows;
> ierr = MatPartitioningCreate(PETSC_COMM_WORLD, &part);CHKERRQ(ierr);
> ierr = MatPartitioningSetAdjacency(part, A);CHKERRQ(ierr);
> ierr = MatPartitioningSetFromOptions(part);CHKERRQ(ierr);
> ierr = MatPartitioningApply(part, &is);CHKERRQ(ierr);
> ierr = ISBuildTwoSided(is,NULL,&isrows);CHKERRQ(ierr);
> ierr = MatCreateSubMatrix(A, isrows,isrows, MAT_INITIAL_MATRIX, 
> &Apart);CHKERRQ(ierr);
> ierr = MatSetOptionsPrefix(Apart, "part_");CHKERRQ(ierr);
> ierr = MatSetFromOptions(Apart);CHKERRQ(ierr);
> ierr = VecGetSubVector(b,isrows,&bpart);CHKERRQ(ierr);
>  
> /* Set Apart and bpart in the KSPSolve */
> ...
>  
> And here are the mat_draw figures from 2 and 4 MPI processes respectively:
>  
> 
>  
> Is this "right"? It just feels like I'm duplicating the nnz structure
> among all the MPI processes. And it didn't really improve the performance of
> ASM.
>  
> This looks kind of right. The big fat diagonal blocks represent separate
> meshed networks, and the sparse off-diagonal blocks show very little
> connectivity between the networks. This is typical of power grids, where
> the connectivity between different areas is sparse. However, the blocks being
> identical is puzzling. I would expect the blocks to be of different sizes.
> Was this network created by duplicating smaller networks?
> I was hoping ASM would be faster on the partitioned matrix rather than the 
> original one. What options are you using for sub_pc? -sub_pc_type lu 
> -sub_pc_factor_mat_ordering_type amd?
>  
> Also, this nnz structure appears to depend on the distribution system we cook 
> up in OpenDSS, for example the largest matrix we have thus far looks like 
> this:
> 
> Yes, the matrix structure depends on the network structure. This to me looks 
> like a star-shaped network where you have a tightly interconnected network 
> (bottom dense block) and smallish radial distribution networks emanating from 
> the nodes. That’s just one guess. It could be some other type as well.
>  
>  
> Is there a better alternative to the partitioning I implemented above? Or did
> I do something catastrophically wrong?
>  
> Thanks,
> Justin
>  
> Side note: Yes, I tried KLU, but it needed several KSP iterations to obtain
> the solution. Here are the ksp monitor/view outputs:
>  
> 0 KSP preconditioned resid norm 1.705434112839e+06 true resid norm 
> 2.242813827253e+12 ||r(i)||/||b|| 1.e+00
>   1 KSP preconditioned resid norm 1.122965789284e+05 true resid norm 
> 3.057749589444e+11 ||r(i)||/||b|| 1.363354172463e-01
>   2 KSP preconditioned resid norm 1.962518730076e+04 true resid norm 
> 2.510054932552e+10 ||r(i)||/||b|| 1.119154386357e-02
>   3 KSP preconditioned resid norm 3.094963519133e+03 true resid norm 
> 6.489763653495e+09 ||r(i)||/||b|| 2.893581078660e-03
>   4 KSP preconditioned resid norm 1.755871992454e+03 true resid norm 
> 2.315676037474e+09 ||r(i)||/||b|| 1.032486963178e-03
>   5 KSP preconditioned resid norm 1.348939340771e+03 true resid norm 
> 1.864929933344e+09 ||r(i)||/||b|| 8.315134812722e-04
>   6 KSP preconditioned resid norm 5.532203694243e+02 true resid norm 
> 9.985525631209e+08 ||r(i)||/||b|| 4.452231170449e-04
>   7 KSP preconditioned resid norm 3.636087020506e+02 true resid norm 
> 5.712899201028e+08 ||r(i)||/||b|| 2.547201703329e-04
>   8 KSP preconditioned resid norm 2.926812321412e+02 true resid norm 
> 3.627282296417e+08 ||r(i)||/||b|| 1.617290856843e-04
>   9 KSP preconditioned resid norm 1.629184033135e+02 true resid norm 
> 1.048838851435e+08 ||r(i)||/||b|| 4.676441881580e-05
>  10 KSP preconditioned resid norm 8.297821067807e+01 true resid norm 
> 3.423640694920e+07 ||r(i)||/||b|| 1.526493484800e-05
>  11 KSP preconditioned resid norm 2.997246200648e+01 true resid norm 
> 9.880250538293e+06 ||r(i)||/||b|| 4.405292324417e-06
>  12 KSP preconditioned resid norm 2.156940809471e+01 true resid norm 
> 3.521518932572e+06 ||r(i)||/||b|| 1.570134306192e-06
>  13 KSP preconditioned resid norm 1.2118233084
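
For reference, a sketch of setting the subdomain options Shrirang asks about
above (-sub_pc_type lu -sub_pc_factor_mat_ordering_type amd) from code rather
than the command line; the AMD ordering assumes SuiteSparse is available in
the build, and this is only an illustration of those options:

#include <petscksp.h>

/* Sketch: ASM with exact LU subdomain solves and AMD ordering. Going through
   the options database keeps the "sub_" prefix handling identical to the
   command-line form. */
static PetscErrorCode UseASMWithLUSubsolves(KSP ksp)
{
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCASM);CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL,"-sub_pc_type","lu");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL,"-sub_pc_factor_mat_ordering_type","amd");CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}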

Re: [petsc-users] About the value of the PETSC_SMALL

2019-02-11 Thread Jed Brown via petsc-users
You're probably looking for PETSC_MACHINE_EPSILON.

ztdepyahoo via petsc-users  writes:

> Dear sir:
> I output the value of PETSC_SMALL; it is 1E-10. But I think it should be
> much smaller than this for a double-precision floating-point number.
> Regards
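
A minimal sketch that prints both values side by side (in a default
double-precision build PETSC_SMALL is a fixed tolerance of 1e-10, while
PETSC_MACHINE_EPSILON is the actual machine epsilon, about 2.2e-16):

#include <petscsys.h>

/* Print PETSC_SMALL (a fixed tolerance) next to PETSC_MACHINE_EPSILON
   (the machine epsilon for the configured scalar type). */
int main(int argc,char **argv)
{
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
  ierr = PetscPrintf(PETSC_COMM_WORLD,"PETSC_SMALL           = %g\n",(double)PETSC_SMALL);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"PETSC_MACHINE_EPSILON = %g\n",(double)PETSC_MACHINE_EPSILON);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}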


[petsc-users] About the value of the PETSC_SMALL

2019-02-11 Thread ztdepyahoo via petsc-users
Dear sir:
I output the value of PETSC_SMALL; it is 1E-10. But I think it should be much
smaller than this for a double-precision floating-point number.
Regards