Re: [petsc-dev] MatNest and FieldSplit

2019-03-25 Thread Pierre Jolivet via petsc-dev
Thanks, this makes (slightly) more sense to me now.
For some reason my application is still not acting properly, but I must be 
screwing up the nested FieldSplit somewhere else…

Thank you,
Pierre

> On 24 Mar 2019, at 11:42 PM, Dave May via petsc-dev wrote:
> 
> Matt is right.
> 
> When you define the operator S, you basically invalidate the operator N (in 
> the sense that they are no longer consistent). Hence, when you use KSP nest to 
> solve your problem, your A matrix looks like 
>   A = diag[1, 2, 4, 0.8]
> but the B matrix you have defined looks like
>   B = diag[1, 2, 4, 0.1]
> 
> The only way to obtain the correct answer with your code is thus to use the 
> option
> -ksp_type preonly
> 
> Thanks
> Dave
> 
> 
> 
> On Sun, 24 Mar 2019 at 22:09, Mark Adams via petsc-dev wrote:
> I think he is saying that this line seems to have no effect (and the comment 
> is hence wrong):
> 
> KSPSetOperators(subksp[nsplits - 1], S, S);
> // J2 = [[4, 0] ; [0, 0.1]]
> 
> J2 is a 2x2 but this block has been changed into two single-equation fields. 
> Is this KSPSetOperators supposed to copy this 1x1 S matrix into the (1,1) 
> block of "J2", or do some sort of correct mixing internally, to get what 
> he wants?
> 
> BTW, this line does not seem necessary to me so maybe I'm missing something.
> 
> KSPSetOperators(sub, J2, J2);
> 
> 
> On Sun, Mar 24, 2019 at 4:33 PM Matthew Knepley via petsc-dev wrote:
> On Sun, Mar 24, 2019 at 10:21 AM Pierre Jolivet wrote:
> It’s a 4x4 matrix.
> The first 2x2 diagonal matrix is a field.
> The second 2x2 diagonal matrix is another field.
> In the second field, the first diagonal coefficient is a subfield.
> In the second field, the second diagonal coefficient is another subfield.
> I’m changing the operators from the second subfield (last diagonal 
> coefficient of the matrix).
> When I solve a system with the complete matrix (2 fields), I get a different 
> “partial solution” than when I solve the “partial system” on just the second 
> field (whose two subfields include the one whose operators I modified).
> 
> I may understand what you are doing.
> FieldSplit calls MatGetSubMatrix(), which can copy values, depending on the 
> implementation, so changing values in the original matrix may or may not 
> change them in the PC.
>  
>Matt
> 
> I don’t know if this makes more or less sense… sorry :\
> Thanks,
> Pierre
> 
>> On 24 Mar 2019, at 8:42 PM, Matthew Knepley wrote:
>> 
>> On Sat, Mar 23, 2019 at 9:12 PM Pierre Jolivet via petsc-dev wrote:
>> I’m trying to figure out why both solutions are not consistent in the 
>> following example.
>> Is what I’m doing complete nonsense?
>> 
>> The code does not make clear what you are asking. I can see it's a nested 
>> FieldSplit.
>> 
>>   Thanks,
>> 
>>  Matt
>>  
>> Thanks in advance for your help,
>> Pierre
>> 
>> 
>> 
>> -- 
>> What most experimenters take for granted before they begin their experiments 
>> is infinitely more interesting than any results to which their experiments 
>> lead.
>> -- Norbert Wiener
>> 
>> https://www.cse.buffalo.edu/~knepley/ 
> 
> 
> 
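Dave's diagnosis can be checked numerically outside PETSc. Below is a NumPy sketch (not PETSc code) of the 4x4 diagonal example, taking A's last diagonal entry as 0.8 and B's as 0.1 as in the thread: a single application of the preconditioner matrix (which is what -ksp_type preonly does) returns B^{-1} b, while an iterative solve converged on the true operator A returns A^{-1} b, and the two differ exactly in the modified last entry.

```python
import numpy as np

A = np.diag([1.0, 2.0, 4.0, 0.8])  # operator actually applied by MatNest
B = np.diag([1.0, 2.0, 4.0, 0.1])  # preconditioner built from the stale values
b = np.ones(4)

# -ksp_type preonly: a single application of B^{-1}
preonly = np.linalg.solve(B, b)     # last entry is 1/0.1 = 10.0

# what a converged Krylov solve with operator A would return
converged = np.linalg.solve(A, b)   # last entry is 1/0.8 = 1.25

print(preonly)
print(converged)
```

With preonly, the "answer" is whatever B^{-1} produces, so the inconsistency between A and B never shows; any actual iteration on A exposes it.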



Re: [petsc-dev] Is there a good reason that BuildSystem's cuda.py requires GNU compilers?

2019-03-25 Thread Mills, Richard Tran via petsc-dev
Folks,

I've spent a while looking at the BuildSystem code, and I think this is going 
to take me more time than I have available right now to figure it out on my 
own. Someone more familiar with BuildSystem needs to give me some hints -- 
soon, if possible, as I really think that building with non-GCC compilers and 
CUDA should be supported in the upcoming release.

What I want to do is to add a test inside cuda.py that checks to see if 
something like

  nvcc --compiler-options= hello.c

will return successfully.

What I wasn't sure about was how to get at the values for a bunch of the above 
variables within the cuda.py code. After deciding I couldn't really follow 
everything that is happening in the code by just looking at it, I used the 
'pdb' Python debugger to stick a breakpoint in the configureLibrary() method in 
cuda.py so I could poke around.

---- Aside: Looking at contents of configure objects ----
I had hoped I could look at everything that is stashed in the different objects 
by doing things like

(Pdb) p dir(self.compilers)

But this doesn't actually list everything in there. There is no 'CUDAC' 
attribute listed, for instance, but it is there for me to print:

(Pdb) p self.compilers.CUDAC
'nvcc'

Is there a good way for me to actually see all the attributes in something like 
the self.compilers object? Sorry, my Python skills are very rusty -- haven't 
written much Python in about a decade.
---- End aside ----
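One likely explanation for the dir() behavior, sketched with a toy class below (this is not BuildSystem code, just plain Python): dir() only lists attributes actually stored on the instance or its class, so anything an object resolves lazily through __getattr__ stays invisible to dir() even though it prints fine. In that case `p vars(self.compilers)` (equivalently `p self.compilers.__dict__`) in pdb shows exactly what is stored on the instance.

```python
class Compilers:
    """Toy stand-in for an object that resolves some attributes lazily
    via __getattr__ instead of storing them on the instance."""
    def __init__(self):
        self.CC = 'gcc'            # stored on the instance: dir() sees it

    def __getattr__(self, name):
        # Only called when normal lookup fails; dir() knows nothing about it.
        if name == 'CUDAC':
            return 'nvcc'
        raise AttributeError(name)

c = Compilers()
print(c.CUDAC)              # 'nvcc' -- reachable through __getattr__
print('CUDAC' in dir(c))    # False  -- dir() misses lazily resolved names
print('CC' in dir(c))       # True   -- stored attributes are listed
print(vars(c))              # {'CC': 'gcc'} -- what is actually on the instance
```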

It appears that what I need to construct my command line is then available in

self.compilers.CUDAC -- The invocation for the CUDA compiler
self.compilers.CXXFLAGS -- The flags passed to the C++ compiler (our "host" compiler)
self.compilers.CUDAFLAGS -- The flags like "-ccbin pgc++" being passed to nvcc 
or whatever CUDAC is

I could use these to construct a command that I then pass to the command shell, 
and maybe I should just do this, but this doesn't seem to follow the 
BuildSystem paradigm. It seems like I should be able to run this test by doing 
something like

self.pushLanguage('CUDA')
self.checkCompile(cuda_test)

which is, in fact, invoked in checkCUDAVersion(). But the command put together 
by checkCompile() does not include "--compiler-options=". Should I be modifying 
the code somewhere so that this argument goes into the compiler invocation 
constructed in self.checkCompile? If so, where should I be doing this?
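For what it's worth, here is a runnable Python sketch of the shape such a check could take. The ConfigureSketch class below is a hypothetical stand-in, not BuildSystem's real configure object: the method names pushLanguage/popLanguage/checkCompile come from the email above, but their bodies here only simulate the compiler outcome so the control flow (push the CUDA language, try a trivial program, pop, report) can be exercised.

```python
class ConfigureSketch:
    """Toy stand-in for a BuildSystem configure object; only the control flow
    mirrors the real thing, the compile result is simulated."""
    def __init__(self, compile_ok):
        self._compile_ok = compile_ok   # simulated compiler outcome
        self._languages = []

    def pushLanguage(self, lang):
        self._languages.append(lang)

    def popLanguage(self):
        self._languages.pop()

    def checkCompile(self, includes, body):
        # The real checkCompile would invoke CUDAC with CUDAFLAGS plus the
        # host-compiler flags; here we just return the simulated result.
        return self._compile_ok

    def checkCUDAHostFlags(self):
        """Return True if a trivial program compiles with the host flags,
        so configure can fail early instead of breaking at 'make' time."""
        self.pushLanguage('CUDA')
        try:
            return self.checkCompile('', 'int main(void) { return 0; }')
        finally:
            self.popLanguage()

print(ConfigureSketch(compile_ok=True).checkCUDAHostFlags())   # True
print(ConfigureSketch(compile_ok=False).checkCUDAHostFlags())  # False
```

The open question from the email remains where the "--compiler-options=" argument gets injected so that checkCompile's constructed command line actually exercises the host flags.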

--Richard



On 3/22/19 10:24 PM, Mills, Richard Tran wrote:


On 3/22/19 3:28 PM, Mills, Richard Tran wrote:
On 3/22/19 12:13 PM, Balay, Satish wrote:

Is there currently an existing check like this somewhere? Or will things just 
fail when running 'make' right now?



Most likely no. It's probably best to attempt the error case and
figure out how to add a check.

I gave things a try and verified that there is no check for this anywhere in 
configure -- things just fail at 'make' time. I think that all we need is a 
test that will try to compile any simple, valid C program using "nvcc 
--compiler-options= ". If the 
test fails, it should report something like "Compiler flags do not work with the 
CUDA compiler; perhaps you need to use -ccbin in CUDAFLAGS to specify the 
intended host compiler".

I'm not sure where this test should go. Does it make sense for this to go in 
cuda.py with the other checks like checkNVCCDoubleAlign()? If so, how do I get 
at the values of  and ? I'm 
not sure what modules I need to import from BuildSystem...
OK, answering part of my own question here: Re-familiarizing myself with how 
the configure packages work, and then looking through the makefiles, I see that 
the argument to "--compiler-options" is filled in by the makefile variables

${PCC_FLAGS} ${CFLAGS} ${CCPPFLAGS}

and it appears that this partly maps to self.compilers.CFLAGS in BuildSystem. 
But so far I've not managed to employ the right combination of find and grep to 
figure out where PCC_FLAGS and CCPPFLAGS come from.

--Richard


Satish

On Fri, 22 Mar 2019, Mills, Richard Tran via petsc-dev wrote:



On 3/18/19 7:29 PM, Balay, Satish wrote:

On Tue, 19 Mar 2019, Mills, Richard Tran via petsc-dev wrote:



Colleagues,

It took me a while to get PETSc to build at all with anything on Summit other 
than the GNU compilers, but, once this was accomplished, editing out the 
isGNU() test and then passing something like

'--with-cuda=1',
'--with-cudac=nvcc -ccbin pgc++',



Does the following also work?

--with-cuda=1 --with-cudac=nvcc CUDAFLAGS='-ccbin pgc++'

Yes, using CUDAFLAGS as above also works, and that does seem to be a better way 
to do things.

After experimenting with a lot of different builds on Summit, and doing more 
reading about how CUDA compilation works on different platforms, I'm now 
thinking that perhaps configure.py should *avoid* doing anything clever to try 
to figure out what the value of "-ccbin" should be. For one, this is not anything 
that NVIDIA's toolchain does for the user in the first place: If you want to 
use n