Re: [petsc-users] Parallelization of the code

2019-02-12 Thread Smith, Barry F. via petsc-users



> On Feb 12, 2019, at 8:29 PM, Maahi Talukder  wrote:
> 
> Thank you for your suggestions. I will go through them.
> 
> But can't I do it any other way? Like using MatMPIAIJSetPreallocation? Because 
> I have already calculated the elements of the matrix, and all I have to do is 
> put the values efficiently into the right positions of the matrix.

  Yes, but where are you computing the matrix entries? All on the first 
process? On all the processes? Or on some random process? To perform 
efficiently, the code needs to generate the vast majority of the matrix entries 
ON the process where they will eventually be stored. This means the code needs 
to be aware of the (good) data decomposition that makes this possible. The DMDA 
object is one easy-to-use way to manage this decomposition; of course you can 
manage the decomposition yourself, but it still needs to be managed no matter 
what. Actually creating the matrix entries is not the hard part; the hard part 
is setting up the (good) decomposition and writing the code that uses that 
decomposition.
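
  For illustration only, here is a rough sketch of what ownership-aware
assembly can look like in Fortran. It is not tailored to your stencil: Mp, u,
v, and ierr are taken from the code you posted, while nz, ncols, cols, vals,
Istart, Iend, row, and ione are placeholders you would replace with your own
index arithmetic.

      PetscInt       Istart,Iend,row,ncols,ione,nz
      PetscInt       cols(9)
      PetscScalar    vals(9)

      call MatCreate(PETSC_COMM_WORLD,Mp,ierr)
      call MatSetSizes(Mp,PETSC_DECIDE,PETSC_DECIDE,u*v,u*v,ierr)
      call MatSetFromOptions(Mp,ierr)
      ! preallocate at most 9 nonzeros per row (nine-point stencil);
      ! exact per-row counts would be even faster
      nz   = 9
      ione = 1
      call MatMPIAIJSetPreallocation(Mp,nz,PETSC_NULL_INTEGER,nz,PETSC_NULL_INTEGER,ierr)
      call MatSeqAIJSetPreallocation(Mp,nz,PETSC_NULL_INTEGER,ierr)

      ! each process asks for the rows it owns and sets only those rows
      call MatGetOwnershipRange(Mp,Istart,Iend,ierr)
      do row = Istart,Iend-1
         ! ... fill ncols, cols(:), vals(:) for this global row from the
         !     stencil coefficients computed on this process ...
         call MatSetValues(Mp,ione,row,ncols,cols,vals,INSERT_VALUES,ierr)
      end do

      call MatAssemblyBegin(Mp,MAT_FINAL_ASSEMBLY,ierr)
      call MatAssemblyEnd(Mp,MAT_FINAL_ASSEMBLY,ierr)

  Note that this only pays off if the loop body can compute each owned row's
entries locally, which is exactly the decomposition question above.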

  Barry

> 
> 
> Regards,
> Maahi Talukder
> 
> On Tue, Feb 12, 2019 at 9:03 PM Smith, Barry F.  wrote:
> 
>   Sounds like you have a single structured grid so you should use 
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMDA/DMDACreate2d.html
>  and start with something like src/ksp/ksp/examples/tutorials/ex29.c and 
> ex46.c 
> 
>   The DMDA manages dividing up the grid among processes and simplifies 
> generating the matrix in parallel.
> 
>   Good luck,
> 
>Barry
> 
> 
> > On Feb 12, 2019, at 7:43 PM, Maahi Talukder  wrote:
> > 
> > I am using finite differences to discretize my system and I am working with 
> > a nine-point stencil. So I am populating Q() with all the values. The code to 
> > do that is the following - 
> > 
> > Do i = 2,ymax-1
> > 
> > Do j = 2,xmax-1
> > 
> > g11(i-1,j-1) = 0.25*(x(i,j+1)-x(i,j-1))*(x(i,j+1)-x(i,j-1)) + &
> >                0.25*(y(i,j+1)-y(i,j-1))*(y(i,j+1)-y(i,j-1))
> > g12(i-1,j-1) = 0.25*(x(i,j+1)-x(i,j-1))*(x(i+1,j)-x(i-1,j)) + &
> >                0.25*(y(i,j+1)-y(i,j-1))*(y(i+1,j)-y(i-1,j))
> > g22(i-1,j-1) = 0.25*(x(i+1,j)-x(i-1,j))*(x(i+1,j)-x(i-1,j)) + &
> >                0.25*(y(i+1,j)-y(i-1,j))*(y(i+1,j)-y(i-1,j))
> > 
> > M(j-1,j-1+xmax*(i-2))= (-1)*(g12(i-1,j-1)/2)
> > M(j-1,j+xmax*(i-2))= g11(i-1,j-1)
> > M(j-1,j+1+xmax*(i-2)) = g12(i-1,j-1)/2
> > M(j-1,j-1+xmax*(i-2)+xmax)= g22(i-1,j-1)
> > M(j-1,j+xmax*(i-2)+xmax)= (-2)*(g22(i-1,j-1)+g11(i-1,j-1))
> > M(j-1,j+1+xmax*(i-2)+xmax)= g22(i-1,j-1)
> > M(j-1,j-1+xmax*(i-2)+2*xmax) = g12(i-1,j-1)/2
> > M(j-1,j+xmax*(i-2)+2*xmax) = g11(i-1,j-1)
> > M(j-1,j+1+xmax*(i-2)+2*xmax) = (-1)*(g12(i-1,j-1)/2)   
> > 
> > end Do
> > 
> > N(1+(xmax-2)*(i-2):(xmax-2)*(i-1),1:xmax*ymax)= M(1:xmax-2,1:xmax*ymax)
> > M = 0
> > 
> > end Do
> > 
> > E(1:(xmax-2)*(ymax-2),1:xmax*ymax-2*xmax) = &
> >     N(1:(xmax-2)*(ymax-2),xmax+1:xmax*ymax-xmax)
> > 
> > Do i = 1,ymax-2
> > 
> > Q(1:(xmax-2)*(ymax-2),1+(xmax-2)*(i-1):(xmax-2)*i) = &
> >     E(1:(xmax-2)*(ymax-2),2+xmax*(i-1):(xmax-1)+xmax*(i-1))
> > 
> > end Do
> > 
> > So how do I go about decomposing this Q() matrix across processes ? What 
> > PETSc function do I use for that?
> > 
> > 
> > Regards,
> > Maahi
> > 
> > On Tue, Feb 12, 2019 at 8:25 PM Smith, Barry F.  wrote:
> > 
> >   With MPI parallelism you need to do a decomposition of your data across 
> > the processes. Thus, for example, each process will generate a subset of 
> > the matrix entries. In addition, for large problems (which is what 
> > parallelism is for) you cannot use "dense" data structures, like your Q(), 
> > to store sparse matrices. For large problems with true sparse matrices, 
> > Q() will not fit on your machine, and it consists almost completely of 
> > zeros that make no sense to keep in memory.
> > 
> >How are your Q() entries being generated? This will determine the 
> > decomposition of the data you need to make. For example, with the finite 
> > element method each process would be assigned a subset of the elements 
> > (generally for a subregion of the entire domain).
> > 
> >Barry
> > 
> > 
> > 
> > > On Feb 12, 2019, at 5:42 PM, Maahi Talukder via petsc-users 
> > >  wrote:
> > > 
> > > 
> > > Dear All,
> > > 
> > > 
> > > I am trying to solve a linear system using KSP solvers. I have managed to 
> > > solve the system with a sequential code. The part of my sequential code 
> > > that deals with creating the matrix and setting values is as follows - 
> > > 
> > > call MatCreate(PETSC_COMM_WORLD,Mp,ierr)
> > > call MatSetSizes(Mp,PETSC_DECIDE,PETSC_DECIDE,u*v,u*v,ierr)
> > > call MatSetFromOptions(Mp,ierr)
> > > call MatSetUp(Mp,ierr)
> > > 
> > > Do p = 1,29008
> > > Do r = 1,29008
> > > if(Q(p,r)/=0.0) then
> > > val(1) = Q(p,r)
> > > col(1) = r-1
> > > call MatSetValues(Mp,ione,p-1,ione,col,val,INSERT_VALUES,ierr)
> > > endif
> > > end Do
> > > end Do
> > > 
> > > call MatAssemblyBegin(Mp,MAT_FINAL_ASSEMBLY,ierr)
> > > call MatAssemblyEnd(Mp,MAT_FINAL_ASSEMBLY,ierr)

Re: [petsc-users] Parallelization of the code

2019-02-12 Thread Smith, Barry F. via petsc-users


  Sounds like you have a single structured grid so you should use 
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMDA/DMDACreate2d.html
 and start with something like src/ksp/ksp/examples/tutorials/ex29.c and ex46.c 

  The DMDA manages dividing up the grid among processes and simplifies 
generating the matrix in parallel.
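
  For example, a bare-bones sketch of that route in Fortran might look like the
following (the grid sizes mx, my and the stencil fill-in are placeholders;
ex29.c and ex46.c show the complete MatSetValuesStencil-based assembly):

      DM       da
      Mat      A
      Vec      b
      PetscInt mx,my,one,xs,ys,zs,xm,ym,zm

      mx  = 64
      my  = 64
      one = 1
      ! let the DMDA split the mx x my structured grid over the processes
      call DMDACreate2d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,   &
                        DMDA_STENCIL_BOX,mx,my,PETSC_DECIDE,PETSC_DECIDE,     &
                        one,one,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,da,ierr)
      call DMSetFromOptions(da,ierr)
      call DMSetUp(da,ierr)

      ! matrix and vector with the layout (and preallocation) chosen by the DMDA
      call DMCreateMatrix(da,A,ierr)
      call DMCreateGlobalVector(da,b,ierr)

      ! each process touches only its own patch of the grid
      call DMDAGetCorners(da,xs,ys,zs,xm,ym,zm,ierr)
      ! loop j = ys,ys+ym-1 and i = xs,xs+xm-1 and set the nine-point rows
      ! with MatSetValuesStencil, as in ex29.c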

  Good luck,

   Barry


> On Feb 12, 2019, at 7:43 PM, Maahi Talukder  wrote:
> 
> I am using finite differences to discretize my system and I am working with 
> a nine-point stencil. So I am populating Q() with all the values. The code to do 
> that is the following - 
> 
> Do i = 2,ymax-1
> 
> Do j = 2,xmax-1
> 
> g11(i-1,j-1) = 0.25*(x(i,j+1)-x(i,j-1))*(x(i,j+1)-x(i,j-1)) + &
>                0.25*(y(i,j+1)-y(i,j-1))*(y(i,j+1)-y(i,j-1))
> g12(i-1,j-1) = 0.25*(x(i,j+1)-x(i,j-1))*(x(i+1,j)-x(i-1,j)) + &
>                0.25*(y(i,j+1)-y(i,j-1))*(y(i+1,j)-y(i-1,j))
> g22(i-1,j-1) = 0.25*(x(i+1,j)-x(i-1,j))*(x(i+1,j)-x(i-1,j)) + &
>                0.25*(y(i+1,j)-y(i-1,j))*(y(i+1,j)-y(i-1,j))
> 
> M(j-1,j-1+xmax*(i-2))= (-1)*(g12(i-1,j-1)/2)
> M(j-1,j+xmax*(i-2))= g11(i-1,j-1)
> M(j-1,j+1+xmax*(i-2)) = g12(i-1,j-1)/2
> M(j-1,j-1+xmax*(i-2)+xmax)= g22(i-1,j-1)
> M(j-1,j+xmax*(i-2)+xmax)= (-2)*(g22(i-1,j-1)+g11(i-1,j-1))
> M(j-1,j+1+xmax*(i-2)+xmax)= g22(i-1,j-1)
> M(j-1,j-1+xmax*(i-2)+2*xmax) = g12(i-1,j-1)/2
> M(j-1,j+xmax*(i-2)+2*xmax) = g11(i-1,j-1)
> M(j-1,j+1+xmax*(i-2)+2*xmax) = (-1)*(g12(i-1,j-1)/2)   
> 
> end Do
> 
> N(1+(xmax-2)*(i-2):(xmax-2)*(i-1),1:xmax*ymax)= M(1:xmax-2,1:xmax*ymax)
> M = 0
> 
> end Do
> 
> E(1:(xmax-2)*(ymax-2),1:xmax*ymax-2*xmax) = &
>     N(1:(xmax-2)*(ymax-2),xmax+1:xmax*ymax-xmax)
> 
> Do i = 1,ymax-2
> 
> Q(1:(xmax-2)*(ymax-2),1+(xmax-2)*(i-1):(xmax-2)*i) = &
>     E(1:(xmax-2)*(ymax-2),2+xmax*(i-1):(xmax-1)+xmax*(i-1))
> 
> end Do
> 
> So how do I go about decomposing this Q() matrix across processes ? What 
> PETSc function do I use for that?
> 
> 
> Regards,
> Maahi
> 
> On Tue, Feb 12, 2019 at 8:25 PM Smith, Barry F.  wrote:
> 
>   With MPI parallelism you need to do a decomposition of your data across the 
> processes. Thus, for example, each process will generate a subset of the 
> matrix entries. In addition, for large problems (which is what parallelism is 
> for) you cannot use "dense" data structures, like your Q(), to store sparse 
> matrices. For large problems with true sparse matrices, Q() will not fit 
> on your machine, and it consists almost completely of zeros that make no 
> sense to keep in memory.
> 
>How are your Q() entries being generated? This will determine the 
> decomposition of the data you need to make. For example, with the finite 
> element method each process would be assigned a subset of the elements 
> (generally for a subregion of the entire domain).
> 
>Barry
> 
> 
> 
> > On Feb 12, 2019, at 5:42 PM, Maahi Talukder via petsc-users 
> >  wrote:
> > 
> > 
> > Dear All,
> > 
> > 
> > I am trying to solve a linear system using KSP solvers. I have managed to 
> > solve the system with a sequential code. The part of my sequential code 
> > that deals with creating the matrix and setting values is as follows - 
> > 
> > call MatCreate(PETSC_COMM_WORLD,Mp,ierr)
> > call MatSetSizes(Mp,PETSC_DECIDE,PETSC_DECIDE,u*v,u*v,ierr)
> > call MatSetFromOptions(Mp,ierr)
> > call MatSetUp(Mp,ierr)
> > 
> > Do p = 1,29008
> > Do r = 1,29008
> > if(Q(p,r)/=0.0) then
> > val(1) = Q(p,r)
> > col(1) = r-1
> > call MatSetValues(Mp,ione,p-1,ione,col,val,INSERT_VALUES,ierr)
> > endif
> > end Do
> > end Do
> > 
> > call MatAssemblyBegin(Mp,MAT_FINAL_ASSEMBLY,ierr)
> > call MatAssemblyEnd(Mp,MAT_FINAL_ASSEMBLY,ierr)
> > 
> > And the part of my sequential code that creates the vector is -
> > 
> > call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,u*v,Bx,ierr)
> > call VecSetFromOptions(Bx,ierr)
> > call VecDuplicate(Bx,Xp,ierr)
> > call VecSet(Bx,zero,ierr) 
> > 
> > Do p = 1,29008
> > if(Fx(p,1)/=0.0) then
> > val(1) = Fx(p,1)
> > call VecSetValues(Bx,ione,p-1,val,INSERT_VALUES,ierr)
> > endif
> > end Do
> > 
> > call VecAssemblyBegin(Bx,ierr)
> > call VecAssemblyEnd(Bx,ierr)
> > 
> > So when I run the code on a single processor, it runs fine. But when I tried 
> > to run it on more than one processor, it failed. Now, from what I understood 
> > from going through the manual, if I use MatCreate to create my matrix, then 
> > depending on the number of processors that I put in after 'mpiexec -n ...', 
> > it acts either as a sequential code or a parallel code, and I don't need to 
> > do anything extra to make it work in parallel, as PETSc does that internally. 
> > 
> > So would you please let me know what modifications I need to make to my 
> > existing sequential code to make it work in parallel, like using 
> > MatGetOwnershipRange? 
> > 
> > Regards,
> > Maahi Talukder
> > MSc Student
> > Clarkson University
> > 
> > 
> > 
> 



Re: [petsc-users] Parallelization of the code

2019-02-12 Thread Smith, Barry F. via petsc-users


  With MPI parallelism you need to do a decomposition of your data across the 
processes. Thus, for example, each process will generate a subset of the matrix 
entries. In addition, for large problems (which is what parallelism is for) you 
cannot use "dense" data structures, like your Q(), to store sparse matrices. 
For large problems with true sparse matrices, Q() will not fit on your machine, 
and it consists almost completely of zeros that make no sense to keep in 
memory.

   How are your Q() entries being generated? This will determine the 
decomposition of the data you need to make. For example, with the finite 
element method each process would be assigned a subset of the elements 
(generally for a subregion of the entire domain).
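
   To make the ownership idea concrete, here is a sketch (only a sketch, reusing
the variable names Bx, Fx, val, ione, and ierr from the vector loop you posted,
and assuming the full Fx array is still available on every rank, which itself
does not scale) of setting only the locally owned right-hand-side entries:

      PetscInt pstart,pend,p

      call VecGetOwnershipRange(Bx,pstart,pend,ierr)
      do p = pstart,pend-1
         ! only the entries owned by this rank are touched here
         val(1) = Fx(p+1,1)
         call VecSetValues(Bx,ione,p,val,INSERT_VALUES,ierr)
      end do
      call VecAssemblyBegin(Bx,ierr)
      call VecAssemblyEnd(Bx,ierr)

   The same VecGetOwnershipRange / MatGetOwnershipRange pattern is what would
replace your global 1..29008 loops once the data generation itself is
decomposed.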

   Barry



> On Feb 12, 2019, at 5:42 PM, Maahi Talukder via petsc-users 
>  wrote:
> 
> 
> Dear All,
> 
> 
> I am trying to solve a linear system using KSP solvers. I have managed to 
> solve the system with a sequential code. The part of my sequential code that 
> deals with creating the matrix and setting values is as follows - 
> 
> call MatCreate(PETSC_COMM_WORLD,Mp,ierr)
> call MatSetSizes(Mp,PETSC_DECIDE,PETSC_DECIDE,u*v,u*v,ierr)
> call MatSetFromOptions(Mp,ierr)
> call MatSetUp(Mp,ierr)
> 
> Do p = 1,29008
> Do r = 1,29008
> if(Q(p,r)/=0.0) then
> val(1) = Q(p,r)
> col(1) = r-1
> call MatSetValues(Mp,ione,p-1,ione,col,val,INSERT_VALUES,ierr)
> endif
> end Do
> end Do
> 
> call MatAssemblyBegin(Mp,MAT_FINAL_ASSEMBLY,ierr)
> call MatAssemblyEnd(Mp,MAT_FINAL_ASSEMBLY,ierr)
> 
> And the part of my sequential code that creates the vector is -
> 
> call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,u*v,Bx,ierr)
> call VecSetFromOptions(Bx,ierr)
> call VecDuplicate(Bx,Xp,ierr)
> call VecSet(Bx,zero,ierr) 
> 
> Do p = 1,29008
> if(Fx(p,1)/=0.0) then
> val(1) = Fx(p,1)
> call VecSetValues(Bx,ione,p-1,val,INSERT_VALUES,ierr)
> endif
> end Do
> 
> call VecAssemblyBegin(Bx,ierr)
> call VecAssemblyEnd(Bx,ierr)
> 
> So when I run the code on a single processor, it runs fine. But when I tried to 
> run it on more than one processor, it failed. Now, from what I understood 
> from going through the manual, if I use MatCreate to create my matrix, then 
> depending on the number of processors that I put in after 'mpiexec -n ...', it 
> acts either as a sequential code or a parallel code, and I don't need to do 
> anything extra to make it work in parallel, as PETSc does that internally. 
> 
> So would you please let me know what modifications I need to make to my 
> existing sequential code to make it work in parallel, like using 
> MatGetOwnershipRange? 
> 
> Regards,
> Maahi Talukder
> MSc Student
> Clarkson University
> 
> 
> 



[petsc-users] Parallelization of the code

2019-02-12 Thread Maahi Talukder via petsc-users
Dear All,


I am trying to solve a linear system using KSP solvers. I have managed to
solve the system with a sequential code. The part of my sequential code
that deals with creating the matrix and setting values is as follows -

call MatCreate(PETSC_COMM_WORLD,Mp,ierr)
call MatSetSizes(Mp,PETSC_DECIDE,PETSC_DECIDE,u*v,u*v,ierr)
call MatSetFromOptions(Mp,ierr)
call MatSetUp(Mp,ierr)

Do p = 1,29008
Do r = 1,29008
if(Q(p,r)/=0.0) then
val(1) = Q(p,r)
col(1) = r-1
call MatSetValues(Mp,ione,p-1,ione,col,val,INSERT_VALUES,ierr)
endif
end Do
end Do

call MatAssemblyBegin(Mp,MAT_FINAL_ASSEMBLY,ierr)
call MatAssemblyEnd(Mp,MAT_FINAL_ASSEMBLY,ierr)

And the part of my sequential code that creates the vector is -

call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,u*v,Bx,ierr)
call VecSetFromOptions(Bx,ierr)
call VecDuplicate(Bx,Xp,ierr)
call VecSet(Bx,zero,ierr)

Do p = 1,29008
if(Fx(p,1)/=0.0) then
val(1) = Fx(p,1)
call VecSetValues(Bx,ione,p-1,val,INSERT_VALUES,ierr)
endif
end Do

call VecAssemblyBegin(Bx,ierr)
call VecAssemblyEnd(Bx,ierr)

So when I run the code on a single processor, it runs fine. But when I tried
to run it on more than one processor, it failed. Now, from what I understood
from going through the manual, if I use MatCreate to create my matrix, then
depending on the number of processors that I put in after 'mpiexec -n ...', it
acts either as a sequential code or a parallel code, and I don't need to do
anything extra to make it work in parallel, as PETSc does that internally.

So would you please let me know what modifications I need to make to my
existing sequential code to make it work in parallel, like using
MatGetOwnershipRange?

Regards,
Maahi Talukder
MSc Student
Clarkson University


Re: [petsc-users] Problem in loading Matrix Market format

2019-02-12 Thread Jed Brown via petsc-users
We should make the (two line) functionality a command-line feature of
PetscBinaryIO.py.  Then a user could do

  python -m PetscBinaryIO matrix.mm matrix.petsc

Matthew Knepley via petsc-users  writes:

> It definitely should not be there under 'datafiles'. We should put it in an
> example, as Junchao graciously agreed to do.
>
>   Thanks,
>
> Matt
>
> On Tue, Feb 12, 2019 at 11:15 AM Zhang, Hong  wrote:
>
>> We have /home/petsc/datafiles/matrices/MtxMarket/mm2petsc.c
>> Hong
>>
>> On Tue, Feb 12, 2019 at 9:52 AM Zhang, Junchao via petsc-users <
>> petsc-users@mcs.anl.gov> wrote:
>>
>>> Sure.
>>> --Junchao Zhang
>>>
>>>
>>> On Tue, Feb 12, 2019 at 9:47 AM Matthew Knepley 
>>> wrote:
>>>
 Hi Junchao,

 Could you fix the MM example in PETSc to have this full support? That
 way we will always have it.

  Thanks,

 Matt

 On Tue, Feb 12, 2019 at 10:27 AM Zhang, Junchao via petsc-users <
 petsc-users@mcs.anl.gov> wrote:

> Eda,
>   I have a code that can read in Matrix Market and write out PETSc
> binary files.  Usage:  mpirun -n 1 ./mm2petsc -fin <matrix-market-file> -fout
> <petsc-binary-file>.  You can have a try.
> --Junchao Zhang
>
>
> On Tue, Feb 12, 2019 at 1:50 AM Eda Oktay via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Hello,
>>
>> I am trying to load a matrix in Matrix Market format. I found an example
>> for Mat (ex78) which can be tested using a .dat file. Since .dat and .mtx
>> files are similar in structure (especially afiro_A.dat, which is similar to
>> amesos2_test_mat0.mtx since they both have 3 columns and the columns
>> represent the same properties), I tried to run ex78 using
>> amesos2_test_mat0.mtx instead of afiro_A.dat. However, I got the error
>> "Badly formatted input file". Here is the full error message:
>>
>> [0]PETSC ERROR: - Error Message
>> --
>> [0]PETSC ERROR: Badly formatted input file
>>
>> [0]PETSC ERROR: See
>> http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble
>> shooting.
>> [0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018
>> [0]PETSC ERROR: ./ex78 on a arch-linux2-c-debug named
>> 7330.wls.metu.edu.tr by edaoktay Tue Feb 12 10:47:58 2019
>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++
>> --with-fc=gfortran --with-cxx-dialect=C++11 --download-openblas
>> --download-metis --download-parmetis --download-superlu_dist
>> --download-slepc --download-mpich
>> [0]PETSC ERROR: #1 main() line 73 in
>> /home/edaoktay/petsc-3.10.3/src/mat/examples/tests/ex78.c
>> [0]PETSC ERROR: PETSc Option Table entries:
>> [0]PETSC ERROR: -Ain
>> /home/edaoktay/petsc-3.10.3/share/petsc/datafiles/matrices/amesos2_test_mat0.mtx
>> [0]PETSC ERROR: End of Error Message ---send
>> entire error message to petsc-ma...@mcs.anl.gov--
>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>> [unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=1
>> :
>> system msg for write_line failure : Bad file descriptor
>>
>> I know there is also an example (ex72) for the Matrix Market format, but its
>> description says it only handles symmetric, lower-triangular input, so I
>> decided to use ex78.
>>
>> Best regards,
>>
>> Eda
>>
>

 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener

 https://www.cse.buffalo.edu/~knepley/
 

>>>
>
> -- 
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Problem in loading Matrix Market format

2019-02-12 Thread Zhang, Hong via petsc-users
We have /home/petsc/datafiles/matrices/MtxMarket/mm2petsc.c
Hong

On Tue, Feb 12, 2019 at 9:52 AM Zhang, Junchao via petsc-users 
<petsc-users@mcs.anl.gov> wrote:
Sure.
--Junchao Zhang


On Tue, Feb 12, 2019 at 9:47 AM Matthew Knepley 
<knep...@gmail.com> wrote:
Hi Junchao,

Could you fix the MM example in PETSc to have this full support? That way we 
will always have it.

 Thanks,

Matt

On Tue, Feb 12, 2019 at 10:27 AM Zhang, Junchao via petsc-users 
<petsc-users@mcs.anl.gov> wrote:
Eda,
  I have a code that can read in Matrix Market and write out PETSc binary 
files.  Usage:  mpirun -n 1 ./mm2petsc -fin <matrix-market-file> -fout 
<petsc-binary-file>.  You can have a try.
--Junchao Zhang


On Tue, Feb 12, 2019 at 1:50 AM Eda Oktay via petsc-users 
<petsc-users@mcs.anl.gov> wrote:
Hello,

I am trying to load a matrix in Matrix Market format. I found an example for Mat 
(ex78) which can be tested using a .dat file. Since .dat and .mtx files are 
similar in structure (especially afiro_A.dat, which is similar to 
amesos2_test_mat0.mtx since they both have 3 columns and the columns represent 
the same properties), I tried to run ex78 using amesos2_test_mat0.mtx 
instead of afiro_A.dat. However, I got the error "Badly formatted input file". 
Here is the full error message:

[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: Badly formatted input file

[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018
[0]PETSC ERROR: ./ex78 on a arch-linux2-c-debug named 
7330.wls.metu.edu.tr by edaoktay Tue Feb 12 
10:47:58 2019
[0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ 
--with-fc=gfortran --with-cxx-dialect=C++11 --download-openblas 
--download-metis --download-parmetis --download-superlu_dist --download-slepc 
--download-mpich
[0]PETSC ERROR: #1 main() line 73 in 
/home/edaoktay/petsc-3.10.3/src/mat/examples/tests/ex78.c
[0]PETSC ERROR: PETSc Option Table entries:
[0]PETSC ERROR: -Ain 
/home/edaoktay/petsc-3.10.3/share/petsc/datafiles/matrices/amesos2_test_mat0.mtx
[0]PETSC ERROR: End of Error Message ---send entire error 
message to petsc-ma...@mcs.anl.gov--
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=1
:
system msg for write_line failure : Bad file descriptor

I know there is also an example (ex72) for the Matrix Market format, but its 
description says it only handles symmetric, lower-triangular input, so I decided 
to use ex78.

Best regards,

Eda


--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


Re: [petsc-users] Problem in loading Matrix Market format

2019-02-12 Thread Zhang, Junchao via petsc-users
Sure.
--Junchao Zhang


On Tue, Feb 12, 2019 at 9:47 AM Matthew Knepley 
<knep...@gmail.com> wrote:
Hi Junchao,

Could you fix the MM example in PETSc to have this full support? That way we 
will always have it.

 Thanks,

Matt

On Tue, Feb 12, 2019 at 10:27 AM Zhang, Junchao via petsc-users 
<petsc-users@mcs.anl.gov> wrote:
Eda,
  I have a code that can read in Matrix Market and write out PETSc binary 
files.  Usage:  mpirun -n 1 ./mm2petsc -fin <matrix-market-file> -fout 
<petsc-binary-file>.  You can have a try.
--Junchao Zhang


On Tue, Feb 12, 2019 at 1:50 AM Eda Oktay via petsc-users 
<petsc-users@mcs.anl.gov> wrote:
Hello,

I am trying to load a matrix in Matrix Market format. I found an example for Mat 
(ex78) which can be tested using a .dat file. Since .dat and .mtx files are 
similar in structure (especially afiro_A.dat, which is similar to 
amesos2_test_mat0.mtx since they both have 3 columns and the columns represent 
the same properties), I tried to run ex78 using amesos2_test_mat0.mtx 
instead of afiro_A.dat. However, I got the error "Badly formatted input file". 
Here is the full error message:

[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: Badly formatted input file

[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018
[0]PETSC ERROR: ./ex78 on a arch-linux2-c-debug named 
7330.wls.metu.edu.tr by edaoktay Tue Feb 12 
10:47:58 2019
[0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ 
--with-fc=gfortran --with-cxx-dialect=C++11 --download-openblas 
--download-metis --download-parmetis --download-superlu_dist --download-slepc 
--download-mpich
[0]PETSC ERROR: #1 main() line 73 in 
/home/edaoktay/petsc-3.10.3/src/mat/examples/tests/ex78.c
[0]PETSC ERROR: PETSc Option Table entries:
[0]PETSC ERROR: -Ain 
/home/edaoktay/petsc-3.10.3/share/petsc/datafiles/matrices/amesos2_test_mat0.mtx
[0]PETSC ERROR: End of Error Message ---send entire error 
message to petsc-ma...@mcs.anl.gov--
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=1
:
system msg for write_line failure : Bad file descriptor

I know there is also an example (ex72) for the Matrix Market format, but its 
description says it only handles symmetric, lower-triangular input, so I decided 
to use ex78.

Best regards,

Eda


--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


Re: [petsc-users] Problem in loading Matrix Market format

2019-02-12 Thread Zhang, Junchao via petsc-users
Eda,
  I have a code that can read in Matrix Market and write out PETSc binary 
files.  Usage:  mpirun -n 1 ./mm2petsc -fin <matrix-market-file> -fout 
<petsc-binary-file>.  You can have a try.
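
If it helps to see the idea, here is a stripped-down serial sketch of such a
converter. This is not the attached code: it assumes a real, "general"
coordinate .mtx file with 1-based indices, and the file names 'matrix.mtx' and
'matrix.petsc' are placeholders.

      program mm2petsc_sketch
#include <petsc/finclude/petsc.h>
      use petsc
      implicit none
      Mat            A
      PetscViewer    viewer
      PetscErrorCode ierr
      PetscInt       m,n,nnz,k,i,j
      PetscScalar    v
      character(len=256) line

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)

      ! run on a single process: read the Matrix Market file as text
      open(10,file='matrix.mtx',status='old')
      read(10,'(A)') line                 ! %%MatrixMarket banner
      do
         read(10,'(A)') line              ! skip remaining % comment lines
         if (line(1:1) /= '%') exit
      end do
      read(line,*) m,n,nnz                ! size line: rows cols nonzeros

      call MatCreate(PETSC_COMM_WORLD,A,ierr)
      call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,m,n,ierr)
      call MatSetFromOptions(A,ierr)
      call MatSetUp(A,ierr)

      do k = 1,nnz                        ! one "i j value" triplet per line
         read(10,*) i,j,v
         call MatSetValue(A,i-1,j-1,v,INSERT_VALUES,ierr)  ! 1-based -> 0-based
      end do
      close(10)
      call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)
      call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)

      ! dump the assembled matrix in PETSc binary format
      call PetscViewerBinaryOpen(PETSC_COMM_WORLD,'matrix.petsc',             &
                                 FILE_MODE_WRITE,viewer,ierr)
      call MatView(A,viewer,ierr)
      call PetscViewerDestroy(viewer,ierr)
      call MatDestroy(A,ierr)
      call PetscFinalize(ierr)
      end program mm2petsc_sketch

The binary file written by MatView can then be read back in parallel with
MatLoad.
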
--Junchao Zhang


On Tue, Feb 12, 2019 at 1:50 AM Eda Oktay via petsc-users 
<petsc-users@mcs.anl.gov> wrote:
Hello,

I am trying to load a matrix in Matrix Market format. I found an example for Mat 
(ex78) which can be tested using a .dat file. Since .dat and .mtx files are 
similar in structure (especially afiro_A.dat, which is similar to 
amesos2_test_mat0.mtx since they both have 3 columns and the columns represent 
the same properties), I tried to run ex78 using amesos2_test_mat0.mtx 
instead of afiro_A.dat. However, I got the error "Badly formatted input file". 
Here is the full error message:

[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: Badly formatted input file

[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018
[0]PETSC ERROR: ./ex78 on a arch-linux2-c-debug named 
7330.wls.metu.edu.tr by edaoktay Tue Feb 12 
10:47:58 2019
[0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ 
--with-fc=gfortran --with-cxx-dialect=C++11 --download-openblas 
--download-metis --download-parmetis --download-superlu_dist --download-slepc 
--download-mpich
[0]PETSC ERROR: #1 main() line 73 in 
/home/edaoktay/petsc-3.10.3/src/mat/examples/tests/ex78.c
[0]PETSC ERROR: PETSc Option Table entries:
[0]PETSC ERROR: -Ain 
/home/edaoktay/petsc-3.10.3/share/petsc/datafiles/matrices/amesos2_test_mat0.mtx
[0]PETSC ERROR: End of Error Message ---send entire error 
message to petsc-ma...@mcs.anl.gov--
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=1
:
system msg for write_line failure : Bad file descriptor

I know there is also an example (ex72) for the Matrix Market format, but its 
description says it only handles symmetric, lower-triangular input, so I decided 
to use ex78.

Best regards,

Eda


[Attachment: matrixmarket2petsc.tgz]


Re: [petsc-users] Problem in loading Matrix Market format

2019-02-12 Thread Jose E. Roman via petsc-users
It is better to convert the matrices to PETSc binary format first. One easy way 
is to read them into Matlab with mmread.m and write with PETSc's 
PetscBinaryWrite.m. This can be done similarly in python.
Jose


> On 12 Feb 2019, at 8:50, Eda Oktay via petsc-users 
>  wrote:
> 
> Hello,
> 
> I am trying to load a matrix in Matrix Market format. I found an example for Mat 
> (ex78) which can be tested using a .dat file. Since .dat and .mtx files are 
> similar in structure (especially afiro_A.dat, which is similar to 
> amesos2_test_mat0.mtx since they both have 3 columns and the columns 
> represent the same properties), I tried to run ex78 using 
> amesos2_test_mat0.mtx instead of afiro_A.dat. However, I got the error "Badly 
> formatted input file". Here is the full error message:
> 
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: Badly formatted input file
> 
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018 
> [0]PETSC ERROR: ./ex78 on a arch-linux2-c-debug named 7330.wls.metu.edu.tr by 
> edaoktay Tue Feb 12 10:47:58 2019
> [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ 
> --with-fc=gfortran --with-cxx-dialect=C++11 --download-openblas 
> --download-metis --download-parmetis --download-superlu_dist --download-slepc 
> --download-mpich
> [0]PETSC ERROR: #1 main() line 73 in 
> /home/edaoktay/petsc-3.10.3/src/mat/examples/tests/ex78.c
> [0]PETSC ERROR: PETSc Option Table entries:
> [0]PETSC ERROR: -Ain 
> /home/edaoktay/petsc-3.10.3/share/petsc/datafiles/matrices/amesos2_test_mat0.mtx
> [0]PETSC ERROR: End of Error Message ---send entire error 
> message to petsc-ma...@mcs.anl.gov--
> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
> [unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=1
> :
> system msg for write_line failure : Bad file descriptor
> 
> I know there is also an example (ex72) for the Matrix Market format, but its 
> description says it only handles symmetric, lower-triangular input, so I 
> decided to use ex78.
> 
> Best regards,
> 
> Eda