[deal.II] Re: Integrating a deal.II-specific function in a NOX-Interface for MPI-threads

2019-04-25 Thread 'Maxi Miller' via deal.II User Group
After running some tests, I ended up reducing the problem to the transfer
to and from the Epetra_Vectors. I have to write an interface to the model
(according to the instructions), as shown in the code above. There my
interface creates Epetra_Vectors with a size of
locally_relevant_dofs.size(), alongside TrilinosWrappers::MPI::Vectors.
Transfer from the Epetra_Vectors to the MPI::Vectors works fine, but the way
back does not work (the MPI::Vectors are larger than the Epetra_Vectors).
Are there any hints on how I could still fit the data from the MPI::Vector
into the Epetra_Vector? Or should I rather ask this on the Trilinos mailing
list?
Thanks!

/* -
 *
 * Copyright (C) 1999 - 2018 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE.md at
 * the top level directory of deal.II.
 *
 * -

 *
 * Author: Wolfgang Bangerth, University of Heidelberg, 1999
 */


// @sect3{Include files}

//Nox include files
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 

#include 
#include 
#include 

#include 
#include 
#include 

#include 
#include 

#include 
#include 
#include 

#include 

#include 
#include 

#include 
#include 

// NOX Objects
#include "NOX.H"
#include "NOX_Epetra.H"

// Trilinos Objects
#ifdef HAVE_MPI
#include "Epetra_MpiComm.h"
#else
#include "Epetra_SerialComm.h"
#endif
#include "Epetra_Vector.h"
#include "Epetra_Operator.h"
#include "Epetra_RowMatrix.h"
#include "NOX_Epetra_Interface_Required.H" // base class
#include "NOX_Epetra_Interface_Jacobian.H" // base class
#include "NOX_Epetra_Interface_Preconditioner.H" // base class
#include "Epetra_CrsMatrix.h"
#include "Epetra_Map.h"
#include "Epetra_LinearProblem.h"
#include "AztecOO.h"
#include "Teuchos_StandardCatchMacros.hpp"

// User's application specific files
//#include "problem_interface.h" // Interface file to NOX
//#include "finite_element_problem.h"

// Required for reading and writing parameter lists from xml format
// Configure Trilinos with --enable-teuchos-extended
#ifdef HAVE_TEUCHOS_EXTENDED
#include "Teuchos_XMLParameterListHelpers.hpp"
#include 
#endif

// This is C++ ...
#include 
#include 

using namespace dealii;

template <int dim>
class BoundaryValues : public dealii::Function<dim>
{
public:
BoundaryValues(const size_t n_components)
: dealii::Function<dim>(1), n_components(n_components)
{}

virtual double value (const dealii::Point<dim>   &p,
  const unsigned int  component = 0) const;

virtual void vector_value(const dealii::Point<dim> &p, dealii::Vector<double> &value) const;
private:
const size_t n_components;
};


template <int dim>
double BoundaryValues<dim>::value (const dealii::Point<dim> &p,
  const unsigned int) const
{
const double x = p[0];
const double y = p[1];
return sin(M_PI * x) * sin(M_PI * y);
}

template <int dim>
void BoundaryValues<dim>::vector_value(const dealii::Point<dim> &p, dealii::Vector<double> &value) const
{
for(size_t i = 0; i < value.size(); ++i)
value[i] = BoundaryValues<dim>::value(p, i);
}

template <int dim>
class Solution : public Function<dim>
{
public:
Solution() : Function<dim>(1)
{

}

virtual double value(const Point<dim> &p, const unsigned int component) const override;
virtual Tensor<1, dim> gradient(const Point<dim> &p, const unsigned int component) const override;

private:
};

template <int dim>
double Solution<dim>::value(const Point<dim> &p, const unsigned int) const
{
const double x = p[0];
const double y = p[1];
return sin(M_PI * x) * sin(M_PI * y);
}

template <int dim>
Tensor<1, dim> Solution<dim>::gradient(const Point<dim> &p, const unsigned int) const
{
Tensor<1, dim> return_value;
AssertThrow(dim == 2, ExcNotImplemented());

const double x = p[0];
const double y = p[1];
return_value[0] = M_PI * cos(M_PI * x) * sin(M_PI * y);
return_value[1] = M_PI * cos(M_PI * y) * sin(M_PI * x);
return return_value;
}

template 
class ProblemIn

Re: [deal.II] Re: Integrating a deal.II-specific function in a NOX-Interface for MPI-threads

2019-04-25 Thread Wolfgang Bangerth
On 4/25/19 10:13 AM, 'Maxi Miller' via deal.II User Group wrote:
> After running some tests, I ended up reducing the problem to the transfer to 
> and from the Epetra-vectors. I have to write an interface to the model 
> (according to the instructions), and as shown in the code above. There I have 
> Epetra-Vectors created by my interface, with a size of 
> locally_relevant_dofs.size(), and TrilinosWrappers::MPI::Vectors. Transfer 
> from the Epetra-Vectors to the MPI::Vectors works fine, but the way back does 
> not work (The MPI::Vectors are larger than the Epetra_Vectors). Are there any 
> hints in how I still could fit the data from the MPI::Vector into the 
> Epetra_Vector? Or should I rather ask this on the Trilinos mailing list?

Good question. I think it would probably be very useful to have a small testcase 
others could look at. The program you have attached is still 1,300 lines, 
which is far too much for anyone to understand. But since you have an idea of 
what the problem is, do you think you could come up with a small testcase that 
illustrates exactly the issue you have? It doesn't have to do anything useful 
at all -- in your case, I think all that's necessary is to create a vector, 
convert it as you describe above, and then output the sizes to show that they 
are not the same. Run this on two processors, and you should have something 
that could be done in 50 or 100 lines, and I think that might be small enough 
for someone who doesn't know your code to understand what is necessary.
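
A minimal sketch of such a testcase could look as follows; the index sets are
built by hand instead of coming from a DoFHandler, and all names and sizes
are made up for illustration:

// Hypothetical testcase: build the index sets by hand, create a ghosted
// TrilinosWrappers::MPI::Vector and an Epetra_Vector whose map has
// locally_relevant_dofs.size() entries, and print the sizes.
// Run with "mpirun -np 2 ./testcase".
#include <deal.II/base/mpi.h>
#include <deal.II/base/index_set.h>
#include <deal.II/lac/trilinos_vector.h>

#include <Epetra_MpiComm.h>
#include <Epetra_Map.h>
#include <Epetra_Vector.h>

#include <iostream>

int main(int argc, char *argv[])
{
  dealii::Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
  const MPI_Comm communicator = MPI_COMM_WORLD;
  const unsigned int n_procs =
    dealii::Utilities::MPI::n_mpi_processes(communicator);
  const unsigned int rank =
    dealii::Utilities::MPI::this_mpi_process(communicator);

  // Ten unknowns per process; each process additionally sees one ghost entry.
  const unsigned int n_global = 10 * n_procs;
  dealii::IndexSet locally_owned_dofs(n_global);
  locally_owned_dofs.add_range(10 * rank, 10 * (rank + 1));

  dealii::IndexSet locally_relevant_dofs = locally_owned_dofs;
  if (rank + 1 < n_procs)
    locally_relevant_dofs.add_index(10 * (rank + 1));

  // Ghosted deal.II vector: stores the owned entries plus the ghost entry.
  dealii::TrilinosWrappers::MPI::Vector dealii_vector;
  dealii_vector.reinit(locally_owned_dofs, locally_relevant_dofs, communicator);

  // Epetra_Vector built on a map with locally_relevant_dofs.size() entries,
  // as in the interface described above.
  Epetra_MpiComm epetra_comm(communicator);
  Epetra_Map     map(static_cast<int>(locally_relevant_dofs.size()), 0, epetra_comm);
  Epetra_Vector  epetra_vector(map);

  std::cout << "rank " << rank
            << ": locally owned " << locally_owned_dofs.n_elements()
            << ", locally relevant " << locally_relevant_dofs.n_elements()
            << ", deal.II vector global size " << dealii_vector.size()
            << ", Epetra_Vector local length " << epetra_vector.MyLength()
            << ", global length " << epetra_vector.GlobalLength()
            << std::endl;
}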

I've built these sorts of testcases in the past either from scratch, or by
repeatedly removing things from a program: first (i) everything that runs
after the point where the problem happens, and then (ii) everything else
that is not necessary.

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Integrating a deal.II-specific function in a NOX-Interface for MPI-threads

2019-04-26 Thread 'Maxi Miller' via deal.II User Group
I tried to reduce it as much as possible while keeping it able to run; the
result is attached. It still has ~600 lines, but I am not sure whether I
should remove things like the output function. Furthermore, the NOX logic
takes quite a lot of lines, and I would like to keep the program as usable
as possible while removing unnecessary lines. Is that example usable?


//Nox include files
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 

#include 
#include 
#include 

#include 
#include 
#include 

#include 
#include 

#include 
#include 
#include 

#include 

#include 
#include 

#include 
#include 

// NOX Objects
#include "NOX.H"
#include "NOX_Epetra.H"

// Trilinos Objects
#ifdef HAVE_MPI
#include "Epetra_MpiComm.h"
#else
#include "Epetra_SerialComm.h"
#endif
#include "Epetra_Vector.h"
#include "Epetra_Operator.h"
#include "Epetra_RowMatrix.h"
#include "NOX_Epetra_Interface_Required.H" // base class
#include "NOX_Epetra_Interface_Jacobian.H" // base class
#include "NOX_Epetra_Interface_Preconditioner.H" // base class
#include "Epetra_CrsMatrix.h"
#include "Epetra_Map.h"
#include "Epetra_LinearProblem.h"
#include "AztecOO.h"
#include "Teuchos_StandardCatchMacros.hpp"

// User's application specific files
//#include "problem_interface.h" // Interface file to NOX
//#include "finite_element_problem.h"

// Required for reading and writing parameter lists from xml format
// Configure Trilinos with --enable-teuchos-extended
#ifdef HAVE_TEUCHOS_EXTENDED
#include "Teuchos_XMLParameterListHelpers.hpp"
#include 
#endif

// This is C++ ...
#include 
#include 

using namespace dealii;

template <int dim>
class BoundaryValues : public dealii::Function<dim>
{
public:
BoundaryValues(const size_t n_components)
: dealii::Function<dim>(1), n_components(n_components)
{}

virtual double value (const dealii::Point<dim>   &p,
  const unsigned int  component = 0) const;

virtual void vector_value(const dealii::Point<dim> &p, dealii::Vector<double> &value) const;
private:
const size_t n_components;
};


template <int dim>
double BoundaryValues<dim>::value (const dealii::Point<dim> &p,
   const unsigned int) const
{
const double x = p[0];
con

Re: [deal.II] Re: Integrating a deal.II-specific function in a NOX-Interface for MPI-threads

2019-04-26 Thread Wolfgang Bangerth
On 4/26/19 1:59 AM, 'Maxi Miller' via deal.II User Group wrote:
> I tried to reduce it as much as possible while still being able to run, 
> the result is attached. It still has ~600 lines, but I am not sure if I 
> should remove things like the output-function? Furthermore the NOX-logic 
> takes quite a lot of lines, and I would like to keep the program as 
> usable as possible while removing unneccessary lines. Is that example 
> usable?

Remove everything you can. If you get an error at one point, no output 
will be created -- so remove the output function. If the error happens 
before the linear system is solved, remove the code that computes the 
entries of the matrix. Then remove the matrix object itself. If you are 
currently solving a linear system before the error happens, try what 
happens if you just comment out the solution step -- and if the error 
still happens (because it probably doesn't depend on the actual values 
of the vector), then remove the solution step and the assembly of the 
matrix and the matrix. If you need a DoFHandler to set up the vector, 
output the index sets and sizes you use for the processors involved and 
build these index sets by hand instead -- then remove the DoFHandler and 
the finite element and everything else related to this. Continue this 
style of work until you really do have a small testcase :-)

Cheers
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Integrating a deal.II-specific function in a NOX-Interface for MPI-threads

2019-04-26 Thread 'Maxi Miller' via deal.II User Group
I think the current version is the smallest version I can get. It works
with a single MPI process, but not with two or more.
One observation I made is that if I copy not only the locally owned values
but also the locally relevant values from the deal.II vector to the
Epetra_Vector, the solver still does not converge, but the result is
correct. When changing line 159 from locally_owned_elements to
locally_relevant_elements, I get an access error, though, so there must be
a different way to achieve that.
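
For reference, one possible way to keep the Epetra side consistent with the
deal.II partitioning is to build the Epetra_Map directly from the IndexSet of
locally owned DoFs. This is only a sketch; it assumes locally_owned_dofs and
mpi_communicator as in the attached program, plus a (possibly ghosted)
deal.II solution vector here called dealii_vector:

// Build the Epetra_Map from the locally owned IndexSet, so that the
// Epetra_Vector has exactly the owned (non-ghosted) layout on every process.
Epetra_Map owned_map =
    locally_owned_dofs.make_trilinos_map(mpi_communicator,
                                         /*overlapping=*/false);
Epetra_Vector epetra_x(owned_map);

// Copy the owned entries of the deal.II vector into it; both sides store
// the owned entries in ascending global order.
unsigned int local_index = 0;
for (const auto global_index : locally_owned_dofs)
  epetra_x[local_index++] = dealii_vector[global_index];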


//Nox include files
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 

#include 
#include 
#include 

#include 
#include 
#include 

#include 
#include 

#include 
#include 
#include 

#include 

#include 
#include 

#include 
#include 

// NOX Objects
#include "NOX.H"
#include "NOX_Epetra.H"

// Trilinos Objects
#ifdef HAVE_MPI
#include "Epetra_MpiComm.h"
#else
#include "Epetra_SerialComm.h"
#endif
#include "Epetra_Vector.h"
#include "Epetra_Operator.h"
#include "Epetra_RowMatrix.h"
#include "NOX_Epetra_Interface_Required.H" // base class
#include "NOX_Epetra_Interface_Jacobian.H" // base class
#include "NOX_Epetra_Interface_Preconditioner.H" // base class
#include "Epetra_CrsMatrix.h"
#include "Epetra_Map.h"
#include "Epetra_LinearProblem.h"
#include "AztecOO.h"
#include "Teuchos_StandardCatchMacros.hpp"

// User's application specific files
//#include "problem_interface.h" // Interface file to NOX
//#include "finite_element_problem.h"

// Required for reading and writing parameter lists from xml format
// Configure Trilinos with --enable-teuchos-extended
#ifdef HAVE_TEUCHOS_EXTENDED
#include "Teuchos_XMLParameterListHelpers.hpp"
#include 
#endif

// This is C++ ...
#include 
#include 

using namespace dealii;

template <int dim>
class BoundaryValues : public dealii::Function<dim>
{
public:
BoundaryValues(const size_t n_components)
: dealii::Function<dim>(1), n_components(n_components)
{}

virtual double value (const dealii::Point<dim>   &p,
  const unsigned int  component = 0) const;

virtual void vector_value(const dealii::Point<dim> &p, dealii::Vector<double> &value) const;
private:
const size_t n_components;
};


template <int dim>
double BoundaryValues<dim>::value (const dealii::Point<dim> &p,
   const unsigned int) const
{
const double x = p[0];
const double y = p[1];
return sin(M_PI * x) * sin(M_PI * y);
}

template <int dim>
void BoundaryValues<dim>::vector_value(const dealii::Point<dim> &p, dealii::Vector<double> &value) const
{
for(size_t i = 0; i < value.size(); ++i)
value[i] = BoundaryValues<dim>::value(p, i);
}

template 
class Problem

Re: [deal.II] Re: Integrating a deal.II-specific function in a NOX-Interface for MPI-threads

2019-04-26 Thread Wolfgang Bangerth
On 4/26/19 2:28 PM, 'Maxi Miller' via deal.II User Group wrote:
> I think the current version is the smallest version I can get. It will work 
> for a single thread, but not for two or more threads.

I tried to run this on my machine, but I don't have NOX as part of my Trilinos 
installation. That might have to wait for next week.

But I think there is still room for making the program shorter. The point is 
that the parts of the program that run before the error happens do not 
actually have to make sense. For example, could you replace your own boundary 
values class by ZeroFunction? Instead of assembling something real, could you 
just use the identity matrix on each cell?
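
For the identity-matrix idea, a sketch of what the cell loop could be reduced
to, using the usual local assembly variables (cell_matrix, dofs_per_cell):

// Replace the real bilinear form by the identity on each cell.
cell_matrix = 0;
for (unsigned int i = 0; i < dofs_per_cell; ++i)
  cell_matrix(i, i) = 1.0;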

Other things you don't need: Timers, convergence tables, etc. Which of the 
parameters you set in solve_for_NOX() are necessary? (Necessary to reproduce 
the bug; I know they're necessary for your application, but that's irrelevant 
here.) Could you replace building the residual by just putting random stuff in 
the vector?

Similarly, the computeJacobian and following functions really just check 
conditions, but when you run the program to illustrate the bug you're chasing, 
do these checks trigger? If not, remove them, then remove the calls to these 
functions (because they don't do anything any more), then remove the whole 
function.

I think there's still room to make this program smaller by at least a factor 
of 2 or 3 or 4 :-)

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Integrating a deal.II-specific function in a NOX-Interface for MPI-threads

2019-04-29 Thread 'Maxi Miller' via deal.II User Group
The functions computePreconditioner and computeJacobian are necessary, else 
I will get compilation problems. Thus I think the current version is the 
best MWE I can get. Running with one thread gives a "Test passed", using 
more threads will result in "Convergence failed".
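
For reference, a skeleton of why these functions are needed (hypothetical,
not the attached code): each of the three NOX::Epetra interface base classes
declares one method as pure virtual, so a derived class must override all of
them even if the bodies do next to nothing:

class MinimalInterface : public NOX::Epetra::Interface::Required,
                         public NOX::Epetra::Interface::Jacobian,
                         public NOX::Epetra::Interface::Preconditioner
{
public:
  // Residual evaluation required by NOX::Epetra::Interface::Required.
  bool computeF(const Epetra_Vector &x, Epetra_Vector &F,
                NOX::Epetra::Interface::Required::FillType) override
  {
    F = x; // placeholder residual, just so something well-defined happens
    return true;
  }

  // Required by NOX::Epetra::Interface::Jacobian.
  bool computeJacobian(const Epetra_Vector &, Epetra_Operator &) override
  {
    return true; // the Jacobian operator is assumed to be set up elsewhere
  }

  // Required by NOX::Epetra::Interface::Preconditioner.
  bool computePreconditioner(const Epetra_Vector &, Epetra_Operator &,
                             Teuchos::ParameterList *) override
  {
    return true;
  }
};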




Re: [deal.II] Re: Integrating a deal.II-specific function in a NOX-Interface for MPI-threads

2019-04-29 Thread 'Maxi Miller' via deal.II User Group
Now with file...


//Nox include files
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 

#include 
#include 
#include 

#include 
#include 
#include 

#include 
#include 

#include 
#include 
#include 

#include 

#include 
#include 

#include 
#include 

// NOX Objects
#include "NOX.H"
#include "NOX_Epetra.H"

// Trilinos Objects
#ifdef HAVE_MPI
#include "Epetra_MpiComm.h"
#else
#include "Epetra_SerialComm.h"
#endif
#include "Epetra_Vector.h"
#include "Epetra_Operator.h"
#include "Epetra_RowMatrix.h"
#include "NOX_Epetra_Interface_Required.H" // base class
#include "NOX_Epetra_Interface_Jacobian.H" // base class
#include "NOX_Epetra_Interface_Preconditioner.H" // base class
#include "Epetra_CrsMatrix.h"
#include "Epetra_Map.h"
#include "Epetra_LinearProblem.h"
#include "AztecOO.h"
#include "Teuchos_StandardCatchMacros.hpp"

// User's application specific files
//#include "problem_interface.h" // Interface file to NOX
//#include "finite_element_problem.h"

// Required for reading and writing parameter lists from xml format
// Configure Trilinos with --enable-teuchos-extended
#ifdef HAVE_TEUCHOS_EXTENDED
#include "Teuchos_XMLParameterListHelpers.hpp"
#include 
#endif

// This is C++ ...
#include 
#include 

using namespace dealii;

template 
class ProblemInterface : public NOX::Epetra::Interface::Required,
public NOX::Epetra::Interface::Preconditioner,
public NOX::Epetra::Interface::Jacobian
{
public:
ProblemInterface(std::function residual_function,
 const MPI_Comm &mpi_communicator,
 const IndexSet &locally_owned_dofs,
 const IndexSet &locally_relevant_dofs) :
residual_function(residual_function),
mpi_communicator(mpi_communicator),
locally_owned_dofs(locally_owned_dofs),
locally_relevant_dofs(locally_relevant_dofs)
{
};

~ProblemInterface(){

};

bool computeF(const Epetra_Vector &x, Epetra_Vector &FVec, NOX::Epetra::Interface::Required::FillType)
{
f_vec.reinit(locally_owned_dofs, locally_relevant_dofs, mpi_communicator);
residu

Re: [deal.II] Re: Integrating a deal.II-specific function in a NOX-Interface for MPI-threads

2019-04-29 Thread 'Maxi Miller' via deal.II User Group
I could get it to work; the code now works equally well for one or several
MPI processes. Still, the next open question is how to apply non-zero
Dirichlet boundary conditions (as in the file, for cos(pi*x)*cos(pi*y)).
When applying them as I do in the file, I get the warning that the solver
has not converged. When using Neumann conditions, the solver converges and
I get the correct output. What would be the solution for that?
Thanks!


//Nox include files
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 

#include 
#include 
#include 

#include 
#include 
#include 

#include 
#include 

#include 
#include 
#include 

#include 

#include 
#include 

#include 
#include 

// NOX Objects
#include "NOX.H"
#include "NOX_Epetra.H"

// Trilinos Objects
#ifdef HAVE_MPI
#include "Epetra_MpiComm.h"
#else
#include "Epetra_SerialComm.h"
#endif
#include "Epetra_Vector.h"
#include "Epetra_Operator.h"
#include "Epetra_RowMatrix.h"
#include "NOX_Epetra_Interface_Required.H" // base class
#include "NOX_Epetra_Interface_Jacobian.H" // base class
#include "NOX_Epetra_Interface_Preconditioner.H" // base class
#include "Epetra_CrsMatrix.h"
#include "Epetra_Map.h"
#include "Epetra_LinearProblem.h"
#include "AztecOO.h"
#include "Teuchos_StandardCatchMacros.hpp"

// User's application specific files
//#include "problem_interface.h" // Interface file to NOX
//#include "finite_element_problem.h"

// Required for reading and writing parameter lists from xml format
// Configure Trilinos with --enable-teuchos-extended
#ifdef HAVE_TEUCHOS_EXTENDED
#include "Teuchos_XMLParameterListHelpers.hpp"
#include 
#endif

// This is C++ ...
#include 
#include 

using namespace dealii;

template <int dim>
class BoundaryValues : public dealii::Function<dim>
{
public:
BoundaryValues(const size_t n_components)
: dealii::Function<dim>(1), n_components(n_components)
{}

virtual double value (const dealii::Point<dim>   &p,
  const unsigned int  component = 0) cons

Re: [deal.II] Re: Integrating a deal.II-specific function in a NOX-Interface for MPI-threads

2019-04-29 Thread Wolfgang Bangerth
On 4/29/19 7:35 AM, 'Maxi Miller' via deal.II User Group wrote:
> Could get it to work, now the code works equally fine for one or several 
> MPI threads.

What did you have to do? I was going to reproduce your problem today by 
installing a Trilinos version that has NOX enabled. Is this moot now, 
i.e., was it a bug in your code or did you just work around the issue in 
some way that doesn't expose it?


> Still, the next open question is how to apply non-zero 
> Dirichlet boundary conditions (as in the file, for cos(pi*x)*cos(pi*y). 
> When applying them as I do in the file, I get the warning that the 
> solver is not converged. When using Neumann-Conditions, the solver 
> converges and I get the correct output. What would be the solution for that?

The question to ask is who is right. When you get the message that the 
solver is not converged, is this correct? That is, does it converge if 
you allow more iterations? What if you use a case where you know the 
exact solution -- is the computed solution actively wrong?

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Integrating a deal.II-specific function in a NOX-Interface for MPI-threads

2019-04-29 Thread 'Maxi Miller' via deal.II User Group


On Monday, April 29, 2019 at 10:36:55 PM UTC+2, Wolfgang Bangerth wrote:
>
> What did you have to do? I was going to reproduce your problem today by
> installing a Trilinos version that has NOX enabled. Is this moot now,
> i.e., was it a bug in your code or did you just work around the issue in
> some way that doesn't expose it?

NOX expects vectors containing only the local elements, but no ghost
elements. Thus I had to initialize all vectors going in or out of any
NOX-related function using locally_owned_dofs, and copy accordingly if
external vectors contained ghost elements.
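
A sketch of the direction that originally failed -- copying from such an
owned-only Epetra_Vector back into a ghosted deal.II vector -- might look
like this (names as in the attached program; this is an illustration, not
the exact code I used):

// 'x' is the owned-only Epetra_Vector handed over by NOX; the IndexSets and
// the communicator are the ones used to set up the DoFHandler.
TrilinosWrappers::MPI::Vector owned_copy(locally_owned_dofs, mpi_communicator);

unsigned int local_index = 0;
for (const auto global_index : locally_owned_dofs)
  owned_copy[global_index] = x[local_index++];
owned_copy.compress(VectorOperation::insert);

// Assigning a fully distributed vector to a ghosted one imports the ghost
// values, so the result can afterwards be read on all locally relevant
// entries.
TrilinosWrappers::MPI::Vector ghosted_solution(locally_owned_dofs,
                                               locally_relevant_dofs,
                                               mpi_communicator);
ghosted_solution = owned_copy;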

> The question to ask is who is right. When you get the message that the
> solver is not converged, is this correct? That is, does it converge if
> you allow more iterations? What if you use a case where you know the
> exact solution -- is the computed solution actively wrong?

The solver does not converge, and the output looks as if it is using
Dirichlet conditions with u = 0, independently of the "real" boundary
conditions.



Re: [deal.II] Re: Integrating a deal.II-specific function in a NOX-Interface for MPI-threads

2019-04-29 Thread Wolfgang Bangerth


Maxi,

> What did you have to do? I was going to reproduce your problem today by
> installing a Trilinos version that has NOX enabled. Is this moot now,
> i.e., was it a bug in your code or did you just work around the issue in
> some way that doesn't expose it?
> 
> NOX expects vectors containing only the local elements, but no ghost 
> elements. 
> Thus I had to initialize all vectors going in or out from any NOX-related 
> function using locally_owned_dofs, and copy accordingly if external vectors 
> contained ghost elements.

I see. So this is a NOX requirement, not something that we could have done 
anything about on the deal.II side?


> The solver does not converge, and the output looks as if it is using 
> Dirichlet-conditions with u = 0, independently of the "real" boundary 
> conditions.

I don't know NOX, but is it using an update that it adds to the solution in 
each step? If so, you need to have the correct boundary conditions in the 
initial guess already. What happens if you only allow NOX to do zero or one 
iterations?
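
One common deal.II pattern for getting the non-zero Dirichlet values into
the initial guess is sketched below; this is hypothetical, dof_handler,
locally_owned_dofs, and the BoundaryValues class are assumed to be the ones
from the attached program, and initial_guess stands for the non-ghosted
vector handed to NOX:

// Interpolate the Dirichlet values on boundary id 0 and write them into the
// owned entries of the initial guess before it is handed to NOX.
std::map<types::global_dof_index, double> boundary_values;
VectorTools::interpolate_boundary_values(dof_handler,
                                         0, // boundary id
                                         BoundaryValues<2>(1),
                                         boundary_values);
for (const auto &entry : boundary_values)
  if (locally_owned_dofs.is_element(entry.first))
    initial_guess[entry.first] = entry.second;
initial_guess.compress(VectorOperation::insert);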

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Integrating a deal.II-specific function in a NOX-Interface for MPI-threads

2019-05-01 Thread 'Maxi Miller' via deal.II User Group


On Tuesday, April 30, 2019 at 5:59:05 AM UTC+2, Wolfgang Bangerth wrote:
>
> I see. So this is a NOX requirement, not something that we could have done
> anything about on the deal.II side?

No, as far as I understand it. NOX needs those vectors to be set up in that
particular way, else it will not work.

> I don't know NOX, but is it using an update that it adds to the solution in
> each step? If so, you need to have the correct boundary conditions in the
> initial guess already. What happens if you only allow NOX to do zero or one
> iterations?

Zero iterations is not allowed, and the first iteration already goes way off
(expected highest value: 1, calculated value: 1e6), regardless of whether I
set the boundary conditions on the solution vector I feed to NOX or during
the assembly of the right-hand side.



Re: [deal.II] Re: Integrating a deal.II-specific function in a NOX-Interface for MPI-threads

2019-05-01 Thread Wolfgang Bangerth
On 5/1/19 5:58 AM, 'Maxi Miller' via deal.II User Group wrote:
> Zero iterations is not allowed, and the first iteration already goes way
> off (expected highest value: 1, calculated value: 1e6), regardless of
> whether I set the boundary conditions on the solution vector I feed to
> NOX or during the assembly of the right-hand side.

I don't know NOX at all, so I'm afraid there is little else I can do to 
help you with this. As always, make the problem small and easy (e.g., 
try to find a constant solution where the nonlinear solver only has to 
find *which* constant value you are looking for).

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/
