Re: [deal.II] Installation: Candi on cluster (Platform)

2017-10-05 Thread Kartik Jujare
That is exactly the problem. The script tries to install zlib, bzip2 and 
boost even though they are switched off in the configuration file.

Regards,
Kartik Jujare
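For context, disabling packages in candi is usually done in candi.cfg, which is a file of shell-style variable assignments. The excerpt below is a hypothetical sketch (package names and ordering may differ in your candi version); commenting a line out removes that package from the build list:

```shell
PACKAGES="load:dealii-prepare"
#PACKAGES="${PACKAGES} once:zlib"      # disabled: use the system zlib
#PACKAGES="${PACKAGES} once:bzip2"     # disabled: use the system bzip2
#PACKAGES="${PACKAGES} once:boost"     # disabled: use deal.II's bundled boost
PACKAGES="${PACKAGES} once:p4est"
PACKAGES="${PACKAGES} dealii"
```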

On Wednesday, October 4, 2017 at 11:30:25 PM UTC+2, Uwe Köcher wrote:
>
> Ah okay, now I understand. Since the package boost (from candi) has failed 
> here, deal.II should have used its own bundled version I think.
>
> If the tests are fine, then it should work.
>
> You could switch off the package boost in the candi.cfg if you need 
> another run.
>
> regards
>   Uwe
>
> On Wednesday, 4 October 2017 22:05:23 UTC+2, Kartik Jujare wrote:
>>
>> Yes. Thanks. All tests pass.
>>
>> regards.
>>
>> On Wednesday, October 4, 2017 at 7:50:54 PM UTC+2, Timo Heister wrote:
>>>
>>> > My question: Does it matter if I use centos7.platform instead 
>>> > of the linux_cluster.platform? 
>>>
>>> If it works for you, then all is well of course. You should check the 
>>> deal.II installation by doing something like: 
>>>
>>> cd tmp/build/deal*/ 
>>> make test 
>>>
>>> -- 
>>> Timo Heister 
>>> http://www.math.clemson.edu/~heister/ 
>>>
>>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [deal.II] Installation: Candi on cluster (Platform)

2017-10-04 Thread Kartik Jujare
Yes. Thanks. All tests pass.

regards.

On Wednesday, October 4, 2017 at 7:50:54 PM UTC+2, Timo Heister wrote:
>
> > My question: Does it matter if I use centos7.platform instead 
> > of the linux_cluster.platform? 
>
> If it works for you, then all is well of course. You should check the 
> deal.II installation by doing something like: 
>
> cd tmp/build/deal*/ 
> make test 
>
> -- 
> Timo Heister 
> http://www.math.clemson.edu/~heister/ 
>



Re: [deal.II] Installation: Candi on cluster (Platform)

2017-10-04 Thread Kartik Jujare
The way I checked if they were installed was by using the "whereis" 
command. That command returned the following paths.

$ whereis zlib
zlib: /usr/include/zlib.h /usr/share/man/man3/zlib.3.gz

$ whereis bzip2
bzip2: /usr/bin/bzip2 /usr/share/man/man1/bzip2.1.gz

Also, boost is available as a module. 

lib/boost/1.63.0/intel
lib/boost/1.63.0/gcc

I tried running the candi script with and without the boost module loaded; 
it ends up with the same error.

zlib and bzip2 are also installed by the candi script.


Regards,
Kartik Jujare


On Wednesday, October 4, 2017 at 8:04:56 PM UTC+2, Uwe Köcher wrote:
>
> Usually it does not matter which platform file you specify.
>
> But here a boost error occurs (a name clash for v1.63).
> I think something is wrong in your configuration.
>
> Do you have zlib and bzip2 installed via your system (with devel packages) 
> or via candi?
>
> On Wednesday, 4 October 2017 19:50:54 UTC+2, Timo Heister wrote:
>>
>> > My question: Does it matter if I use centos7.platform instead 
>> > of the linux_cluster.platform? 
>>
>> If it works for you, then all is well of course. You should check the 
>> deal.II installation by doing something like: 
>>
>> cd tmp/build/deal*/ 
>> make test 
>>
>> -- 
>> Timo Heister 
>> http://www.math.clemson.edu/~heister/ 
>>
>



[deal.II] Installation: Candi on cluster (Platform)

2017-10-04 Thread Kartik Jujare

Hello all,

I have installed the candi suite on my university cluster: 
https://www.tu-braunschweig.de/it/dienste/21/phoenix . However, I had to 
force the platform to centos7.platform, since with linux_cluster.platform 
I received the error quoted at the end of this message. My question: does 
it matter if I use centos7.platform instead of linux_cluster.platform? 
With centos7.platform, the installation started directly from parmetis 
instead of from the dependencies.



Thanks and regards,
Kartik Jujare


TERMINAL OUTPUT:
==

***
candi tries now to download, configure, build and install:

Project:  deal.II-toolchain
Platform: ./deal.II-toolchain/platforms/supported/linux_cluster.platform

Fetching zlib 1.2.8
Trying to download 
https://www.ces.clemson.edu/dealii/mirror/zlib-1.2.8.tar.gz

curl: (1) Protocol https not supported or disabled in libcurl
Trying to download http://zlib.net/zlib-1.2.8.tar.gz

curl: (6) Couldn't resolve host 'zlib.net'
Trying to download 
https://www.ces.clemson.edu/dealii/mirror/zlib-1.2.8.tar.gz
--2017-10-03 01:51:48--  
https://www.ces.clemson.edu/dealii/mirror/zlib-1.2.8.tar.gz
Resolving www.ces.clemson.edu (www.ces.clemson.edu)... 130.127.200.19
Connecting to www.ces.clemson.edu 
(www.ces.clemson.edu)|130.127.200.19|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://cecas.clemson.edu/dealii/mirror/zlib-1.2.8.tar.gz 
[following]
--2017-10-03 01:51:48--  
https://cecas.clemson.edu/dealii/mirror/zlib-1.2.8.tar.gz
Resolving cecas.clemson.edu (cecas.clemson.edu)... 130.127.200.74
Connecting to cecas.clemson.edu (cecas.clemson.edu)|130.127.200.74|:443... 
connected.
HTTP request sent, awaiting response... 200 OK
Length: 571091 (558K) [application/x-gzip]
Saving to: ‘zlib-1.2.8.tar.gz’

100%[==>] 571.091  944KB/s   in 0,6s

2017-10-03 01:51:50 (944 KB/s) - ‘zlib-1.2.8.tar.gz’ saved [571091/571091]

Verifying zlib-1.2.8.tar.gz
zlib-1.2.8.tar.gz: OK
Unpacking zlib-1.2.8.tar.gz
Building zlib 1.2.8
Checking for shared library support...
Building shared library libz.so.1.2.8 with mpicc.
Checking for off64_t... Yes.
Checking for fseeko... Yes.
Checking for strerror... Yes.
Checking for unistd.h... Yes.
Checking for stdarg.h... Yes.
Checking whether to use vs[n]printf() or s[n]printf()... using 
vs[n]printf().
Checking for vsnprintf() in stdio.h... Yes.
Checking for return value of vsnprintf()... Yes.
Checking for attribute(visibility) support... Yes.
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o adler32.o adler32.c
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o crc32.o crc32.c
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o deflate.o deflate.c
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o infback.o infback.c
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o inffast.o inffast.c
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o inflate.o inflate.c
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o inftrees.o 
inftrees.c
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o trees.o trees.c
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o zutil.o zutil.c
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o compress.o 
compress.c
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o uncompr.o uncompr.c
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o gzclose.o gzclose.c
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o gzlib.o gzlib.c
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o gzread.o gzread.c
mpicc -O3  -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN   -c -o gzwrite.o gzwrite.c
ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o 
inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o 
gzwrite.o 
mpicc -O3  -fPIC -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN -DPIC -c -o 
objs/adler32.o adler32.c
mpicc -O3  -fPIC -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN -DPIC -c -o 
objs/crc32.o crc32.c
mpicc -O3  -fPIC -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN -DPIC -c -o 
objs/deflate.o deflate.c
mpicc -O3  -fPIC -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN -DPIC -c -o 
objs/infback.o infback.c
mpicc -O3  -fPIC -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN -DPIC -c -o 
objs/inffast.o inffast.c
mpicc -O3  -fPIC -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN -DPIC -c -o 
objs/inflate.o inflate.c
mpicc -O3  -fPIC -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN -DPIC -c -o 
objs/inftrees.o inftrees.c
mpicc -O3  -fPIC -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN -DPIC -c -o 
objs/trees.o trees.c
mpicc -O3  -fPIC -D_LARGEFILE64_SOURCE=1 -DHAVE_HIDDEN -DPIC -c -o 
objs/zutil.o zutil.c
mpicc -O3  -f

[deal.II] Re: openmp flag in CMake?

2017-07-16 Thread Kartik Jujare
Dear Jean,

Thank you for pointing that out. I somehow missed the 3.7 section while 
browsing through it. 

Regards,
Kartik Jujare
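For the record, the flag placement Jean-Paul describes would look roughly like this in a user project's CMakeLists.txt. This is a sketch; the project name `myproject` and source file `step.cc` are placeholders, and whether `-fopenmp` alone suffices depends on your compiler:

```cmake
CMAKE_MINIMUM_REQUIRED(VERSION 2.8.8)
FIND_PACKAGE(deal.II REQUIRED HINTS ${DEAL_II_DIR})
DEAL_II_INITIALIZE_CACHED_VARIABLES()

PROJECT(myproject)

# Extend the deal.II flags *after* DEAL_II_INITIALIZE_CACHED_VARIABLES()
# but *before* DEAL_II_INVOKE_AUTOPILOT() / DEAL_II_SETUP_TARGET().
SET(DEAL_II_CXX_FLAGS "${DEAL_II_CXX_FLAGS} -fopenmp")
SET(DEAL_II_LINKER_FLAGS "${DEAL_II_LINKER_FLAGS} -fopenmp")

ADD_EXECUTABLE(myproject step.cc)
DEAL_II_SETUP_TARGET(myproject)
```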

On Sunday, July 16, 2017 at 10:12:56 PM UTC+2, Jean-Paul Pelteret wrote:
>
> Dear Kartik,
>
> There's a section in both the main documentation 
> (https://www.dealii.org/developer/users/cmake.html#compiler) and the 
> user-project documentation 
> (https://www.dealii.org/developer/users/cmakelists.html#cmakeadvanced.setup_target) 
> on how to configure CMake with extra compile and linker flags. Take a 
> look there; perhaps the information contained therein will help clear 
> things up. I think that you need to extend `DEAL_II_CXX_FLAGS` or 
> `DEAL_II_LINKER_FLAGS` with this extra flag after 
> DEAL_II_INITIALIZE_CACHED_VARIABLES but before calling 
> DEAL_II_INVOKE_AUTOPILOT or DEAL_II_SETUP_TARGET.
>
> I hope that this helps.
> Regards,
> Jean-Paul
>
> On Sunday, July 16, 2017 at 7:28:27 PM UTC+2, Kartik Jujare wrote:
>>
>> Hello,
>>
>> I would be grateful if anyone could correct me or direct me to the 
>> right solution.
>>
>> As a test, I have the following for loop in a step file:
>>
>> #pragma omp parallel
>> {
>> #pragma omp for // private(k) reduction(+:integral)
>>   for (int k = 0; k < 4; k++)
>>     {
>>       sleep(1);
>>     }
>> }
>>
>>
>> Now the timings of this for loop and of a normal one (without OpenMP) 
>> are the same. I think the problem lies in the flag that I am passing 
>> in the CMake file, but I can't figure out what is wrong.
>>
>> I have tried setting the following flags:
>>
>> SET (MPI_CXX_COMPILE_FLAGS "-fopenmp")  
>> //OR
>> SET (DEAL_II_CXX_FLAGS "-fopenmp")
>>
>> I suppose I don't understand CMake very well. What would be an 
>> appropriate place for the above line in CMakeLists.txt?
>>
>> Thank you and regards,
>> Kartik Jujare
>>
>



[deal.II] Re: openmp flag in CMake?

2017-07-16 Thread Kartik Jujare
Dear Maxi,

Thank you so much for reminding me of that !! Works fantastic!

Regards,
Kartik Jujare


On Sunday, July 16, 2017 at 9:06:20 PM UTC+2, Maxi Miller wrote:
>
> Can you try CMake-GUI? At least there you can find all occurrences of 
> OpenMP and enable it.
>
> Am Sonntag, 16. Juli 2017 19:28:27 UTC+2 schrieb Kartik Jujare:
>>
>> Hello,
>>
>> I would be grateful if anyone could correct me or direct me to the 
>> right solution.
>>
>> As a test, I have the following for loop in a step file:
>>
>> #pragma omp parallel
>> {
>> #pragma omp for // private(k) reduction(+:integral)
>>   for (int k = 0; k < 4; k++)
>>     {
>>       sleep(1);
>>     }
>> }
>>
>>
>> Now the timings of this for loop and of a normal one (without OpenMP) 
>> are the same. I think the problem lies in the flag that I am passing 
>> in the CMake file, but I can't figure out what is wrong.
>>
>> I have tried setting the following flags:
>>
>> SET (MPI_CXX_COMPILE_FLAGS "-fopenmp")  
>> //OR
>> SET (DEAL_II_CXX_FLAGS "-fopenmp")
>>
>> I suppose I don't understand CMake very well. What would be an 
>> appropriate place for the above line in CMakeLists.txt?
>>
>> Thank you and regards,
>> Kartik Jujare
>>
>



[deal.II] openmp flag in CMake?

2017-07-16 Thread Kartik Jujare
Hello,

I would be grateful if anyone could correct me or direct me to the right 
solution.

As a test, I have the following for loop in a step file:

#pragma omp parallel
{
#pragma omp for // private(k) reduction(+:integral)
  for (int k = 0; k < 4; k++)
    {
      sleep(1);
    }
}


Now the timings of this for loop and of a normal one (without OpenMP) are 
the same. I think the problem lies in the flag that I am passing in the 
CMake file, but I can't figure out what is wrong.

I have tried setting the following flags:

SET (MPI_CXX_COMPILE_FLAGS "-fopenmp")  
//OR
SET (DEAL_II_CXX_FLAGS "-fopenmp")

I suppose I don't understand CMake very well. What would be an 
appropriate place for the above line in CMakeLists.txt?

Thank you and regards,
Kartik Jujare



Re: [deal.II] step-40. petsc with openmp

2017-07-12 Thread Kartik Jujare
Thank you for the answer.

Regards,
Kartik

On Tuesday, July 11, 2017 at 5:53:48 PM UTC+2, Wolfgang Bangerth wrote:
>
> On 07/11/2017 08:33 AM, Kartik Jujare wrote: 
> > 
> > Does it still hold true that the PETSc wrappers are not thread-safe? 
>
> Yes. But it's not the wrappers that are the problem, it's that PETSc 
> itself is 
> not thread safe. 
>
> Best 
>   W. 
>
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>



Re: [deal.II] step-40. petsc with openmp

2017-07-11 Thread Kartik Jujare
Hi Timo,

Does it still hold true that the PETSc wrappers are not thread-safe?

Regards,
Kartik Jujare


On Friday, March 14, 2014 at 3:24:18 PM UTC+1, Timo Heister wrote:
>
> > I am trying to use openmp in the element assemble procedure of step-40. 
> Has 
> > anyone done this before? Any advice is appreciated. 
>
> Our implementation of PETScWrappers is not thread-safe, so you can not 
> write into matrices/vectors concurrently. That means using TBB (what 
> we use inside deal.II) or OpenMP won't help you much, because it would 
> require locking. 
>
> Our Trilinos wrappers are thread-safe, though. 
>
> Even if you change the implementation of PETScWrappers to allow this 
> (it wouldn't be too difficult, ask me if you want to know more), you 
> still have the problem that anything in the linear solver (matrix-vector 
> products, preconditioners, ...) is likely not running multi-threaded. 
>
> > My purpose is to reduce the assembly time by making use of multiple 
> > threads on each core. Is this feasible? 
>
> You know you can run one MPI task per core, right? 
>
> -- 
> Timo Heister 
> http://www.math.clemson.edu/~heister/ 
>



[deal.II] Re: slepc installation error - How does one find the CHECKSUM for slepc to put in /candi/deal.II-toolchain/packages/slepc.package ?

2017-06-28 Thread Kartik Jujare
Thank you Uwe.

regards,
Kartik Jujare

On Wednesday, June 28, 2017 at 9:10:23 AM UTC+2, Uwe Köcher wrote:
>
> the download location for slepc is fixed now; it should work with the 
> current version of candi
>



[deal.II] Re: slepc installation error - How does one find the CHECKSUM for slepc to put in /candi/deal.II-toolchain/packages/slepc.package ?

2017-06-28 Thread Kartik Jujare
Dear Jean,

Thank you for your answer. This was new for me. However, a slight 
correction to the syntax, which I found on one of the forums: on Linux the 
correct command is "md5sum slepc-3.7.4.tar.gz". Thanks a ton.

Regards,
Kartik Jujare
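As an aside, extracting just the digest that goes into a candi package file can be done like this. The file name here is a stand-in created on the spot for illustration; substitute the tarball you actually downloaded (e.g. slepc-3.7.4.tar.gz):

```shell
# Create a stand-in file (replace with your real download).
printf 'example archive contents' > example.tar.gz

# md5sum prints "<digest>  <filename>"; cut keeps only the
# 32-character hex digest.
md5sum example.tar.gz | cut -d' ' -f1
```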

On Wednesday, June 28, 2017 at 8:24:41 AM UTC+2, Jean-Paul Pelteret wrote:
>
> Hi Kartik,
>
> It looks like candi uses md5 
> <https://github.com/dealii/candi/blob/master/candi.sh#L225> to verify 
> downloads. What you could do is manually download the file off of their 
> website and run "md5 slepc-3.7.4.tar.gz" to compute the checksum yourself.
>
> I hope that this helps!
>
> Regards,
> Jean-Paul
>
> On Tuesday, June 27, 2017 at 7:07:53 PM UTC+2, Kartik Jujare wrote:
>>
>> Hello everyone,
>>
>> I am facing this error while installing. 
>>
>> ===
>> Packages: 
>> load:dealii-prepare 
>> once:parmetis 
>> once:superlu_dist 
>> once:hdf5 
>> once:p4est 
>> once:trilinos 
>> once:petsc 
>> once:slepc 
>> dealii 
>>
>> ---
>>  
>>
>> Building stable releases of deal.II-toolchain packages. 
>>
>> Compiler Variables: 
>> ---
>>  
>>
>> CC variable not set, but default mpicc found. 
>> CC  = /usr/bin/mpicc 
>> CXX variable not set, but default mpicxx found. 
>> CXX = /usr/bin/mpicxx 
>> FC variable not set, but default mpif90 found. 
>> FC  = /usr/bin/mpif90 
>> FF variable not set, but default mpif77 found. 
>> FF  = /usr/bin/mpif77 
>>
>> ---
>>  
>>
>> Once ready, hit enter to continue! 
>>
>>
>> ***
>>  
>>
>> candi tries now to download, configure, build and install: 
>>
>> Project:  deal.II-toolchain 
>> Platform: deal.II-toolchain/platforms/supported/ubuntu14.platform 
>>
>> Loading dealii-prepare 
>> Skipping parmetis 
>> Skipping superlu_dist 
>> Skipping hdf5 
>> Skipping p4est 
>> trilinos: configuration with ParMETIS 
>> trilinos: configuration with SuperLU_dist 
>> Skipping trilinos 
>> PETSc: configuration with ParMETIS 
>> Skipping petsc 
>> Fetching slepc 3.7.3 
>> Trying to download 
>> https://www.ces.clemson.edu/dealii/mirror/slepc-3.7.3.tar.gz 
>>  % Total% Received % Xferd  Average Speed   TimeTime Time 
>>  Current 
>> Dload  Upload   Total   SpentLeft 
>>  Speed 
>> 100   351  100   3510 0352  0 --:--:-- --:--:-- --:--:-- 
>>   352 
>>  0 00 00 0  0  0 --:--:--  0:00:01 --:--:-- 
>> 0 
>> curl: (22) The requested URL returned error: 404 Not Found 
>> Trying to download 
>> http://slepc.upv.es/download/download.php?filename=slepc-3.7.3.tar.gz 
>>  % Total% Received % Xferd  Average Speed   TimeTime Time 
>>  Current 
>> Dload  Upload   Total   SpentLeft 
>>  Speed 
>>  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
>> 0 
>> curl: (22) The requested URL returned error: 404 Not Found 
>> Trying to download 
>> https://www.ces.clemson.edu/dealii/mirror/slepc-3.7.3.tar.gz 
>> --2017-06-27 17:01:34--  
>> https://www.ces.clemson.edu/dealii/mirror/slepc-3.7.3.tar.gz 
>> Resolving www.ces.clemson.edu (www.ces.clemson.edu)... 130.127.200.19 
>> Connecting to www.ces.clemson.edu 
>> (www.ces.clemson.edu)|130.127.200.19|:443... 
>> connected. 
>> HTTP request sent, awaiting response... 301 Moved Permanently 
>> Location: https://cecas.clemson.edu/dealii/mirror/slepc-3.7.3.tar.gz 
>> [following] 
>> --2017-06-27 17:01:34--  
>> https://cecas.clemson.edu/dealii/mirror/slepc-3.7.3.tar.gz 
>> Resolving cecas.clemson.edu (cecas.clemson.edu)... 130.127.200.74 
>> Connecting to cecas.clemson.edu (cecas.clemson.edu)|130.127.200.74|:443... 
>> connected. 
>> HTTP request sent, awaiting response... 404 Not Found 
>> 2017-06-27 17:01:35 ERROR 404: Not Found. 
>>
>> Trying to download 
>> http://slepc.upv.es/download/download.php?filename=slepc-3.7.3.tar.gz 
>> --2017-06-27 17:01:35--  
>> http://slepc.upv.es/download/download.php?filename=slepc-3.

[deal.II] slepc installation error - How does one find the CHECKSUM for slepc to put in /candi/deal.II-toolchain/packages/slepc.package ?

2017-06-27 Thread Kartik Jujare
Hello everyone,

I am facing this error while installing. 

===
Packages: 
load:dealii-prepare 
once:parmetis 
once:superlu_dist 
once:hdf5 
once:p4est 
once:trilinos 
once:petsc 
once:slepc 
dealii 

--- 

Building stable releases of deal.II-toolchain packages. 

Compiler Variables: 
--- 

CC variable not set, but default mpicc found. 
CC  = /usr/bin/mpicc 
CXX variable not set, but default mpicxx found. 
CXX = /usr/bin/mpicxx 
FC variable not set, but default mpif90 found. 
FC  = /usr/bin/mpif90 
FF variable not set, but default mpif77 found. 
FF  = /usr/bin/mpif77 

--- 

Once ready, hit enter to continue! 


*** 

candi tries now to download, configure, build and install: 

Project:  deal.II-toolchain 
Platform: deal.II-toolchain/platforms/supported/ubuntu14.platform 

Loading dealii-prepare 
Skipping parmetis 
Skipping superlu_dist 
Skipping hdf5 
Skipping p4est 
trilinos: configuration with ParMETIS 
trilinos: configuration with SuperLU_dist 
Skipping trilinos 
PETSc: configuration with ParMETIS 
Skipping petsc 
Fetching slepc 3.7.3 
Trying to download 
https://www.ces.clemson.edu/dealii/mirror/slepc-3.7.3.tar.gz 
 % Total% Received % Xferd  Average Speed   TimeTime Time 
 Current 
Dload  Upload   Total   SpentLeft 
 Speed 
100   351  100   3510 0352  0 --:--:-- --:--:-- --:--:-- 
  352 
 0 00 00 0  0  0 --:--:--  0:00:01 --:--:-- 
0 
curl: (22) The requested URL returned error: 404 Not Found 
Trying to download 
http://slepc.upv.es/download/download.php?filename=slepc-3.7.3.tar.gz 
 % Total% Received % Xferd  Average Speed   TimeTime Time 
 Current 
Dload  Upload   Total   SpentLeft 
 Speed 
 0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
0 
curl: (22) The requested URL returned error: 404 Not Found 
Trying to download 
https://www.ces.clemson.edu/dealii/mirror/slepc-3.7.3.tar.gz 
--2017-06-27 17:01:34-- 
 https://www.ces.clemson.edu/dealii/mirror/slepc-3.7.3.tar.gz 
Resolving www.ces.clemson.edu (www.ces.clemson.edu)... 130.127.200.19 
Connecting to www.ces.clemson.edu 
(www.ces.clemson.edu)|130.127.200.19|:443... connected. 
HTTP request sent, awaiting response... 301 Moved Permanently 
Location: https://cecas.clemson.edu/dealii/mirror/slepc-3.7.3.tar.gz 
[following] 
--2017-06-27 17:01:34-- 
 https://cecas.clemson.edu/dealii/mirror/slepc-3.7.3.tar.gz 
Resolving cecas.clemson.edu (cecas.clemson.edu)... 130.127.200.74 
Connecting to cecas.clemson.edu (cecas.clemson.edu)|130.127.200.74|:443... 
connected. 
HTTP request sent, awaiting response... 404 Not Found 
2017-06-27 17:01:35 ERROR 404: Not Found. 

Trying to download 
http://slepc.upv.es/download/download.php?filename=slepc-3.7.3.tar.gz 
--2017-06-27 17:01:35-- 
 http://slepc.upv.es/download/download.php?filename=slepc-3.7.3.tar.gz 
Resolving slepc.upv.es (slepc.upv.es)... 158.42.185.71 
Connecting to slepc.upv.es (slepc.upv.es)|158.42.185.71|:80... connected. 
HTTP request sent, awaiting response... 404 Not Found 
2017-06-27 17:01:35 ERROR 404: Not Found. 

Failure with exit status: 2 
Exit message: Error verifying checksum for slepc-3.7.3.tar.gz 
Make sure that you are connected to the internet. 



It seems the package cannot be found on the site because SLEPc has been 
updated to 3.7.4. So I changed the version number in the configuration 
file /candi/deal.II-toolchain/packages/slepc.package. But I can't find 
the checksum value anywhere on the internet. Can someone please help me?

Regards,
Kartik Jujare.



Re: [deal.II] Re: Create_point_source_vector Magnitude

2017-05-15 Thread Kartik Jujare
Thank you Bruno. My mistake.

Regards,
Kartik Jujare

On Friday, May 12, 2017 at 3:42:54 PM UTC+2, Bruno Turcksin wrote:
>
> 2017-05-12 9:32 GMT-04:00 Kartik Jujare <kartik...@gmail.com >: 
>
> > The graph shows the steady state solution given a point source at 
> (0.6,0.6). 
> Why would you expect the _solution_ to be one? You set the _source_ to 
> one at (0.6, 0.6). What you are essentially saying is that the 
> divergence should be one at this point, not the solution. 
>
> Best, 
>
> Bruno 
>
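In equation form, the distinction drawn in this thread can be sketched as follows (an illustrative reconstruction, assuming the usual Poisson setting):

```latex
% Poisson problem with a unit point source at x_0 = (0.6, 0.6):
-\Delta u(\mathbf{x}) = \delta(\mathbf{x}-\mathbf{x}_0)
  \quad \text{in } \Omega .
% Integrating the source over any neighbourhood B of x_0 gives one:
\int_B \delta(\mathbf{x}-\mathbf{x}_0)\,\mathrm{d}\mathbf{x} = 1 .
% The solution u is the Green's function of the domain evaluated at x_0;
% it has no reason to take the value one at the source point.
```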



Re: [deal.II] Re: Create_point_source_vector Magnitude

2017-05-12 Thread Kartik Jujare
The graph shows the steady state solution given a point source at (0.6,0.6).

On Friday, May 12, 2017 at 3:21:02 PM UTC+2, Bruno Turcksin wrote:
>
> 2017-05-12 9:06 GMT-04:00 Kartik Jujare <kartik...@gmail.com >: 
>
> > Yes, as mentioned in the cited thread with the dirac delta function, my 
> > \phi(x,y) is just a constant. So my question is, shouldn't the value on 
> the 
> > graph attached show a value of 1 without multiplying this constant \phi 
> > value? 
> What I am looking at? Is this the right hand side or the solution 
> given a point source? 
>
> Best, 
>
> Bruno 
>



Re: [deal.II] Re: Superlu_dist usage in dealii

2017-04-18 Thread Kartik Jujare
Thank you Denis for the clarification. 

I will contribute in the near future, but not this time. :)

Regards,
Kartik Jujare 

On Tuesday, April 18, 2017 at 9:45:36 PM UTC+2, Denis Davydov wrote:
>
>
> On 18 Apr 2017, at 21:39, Kartik Jujare <kartik...@gmail.com > 
> wrote:
>
> Thank you Uwe and Denis for your additions.
>
> However, Denis, I have a supplementary question: how does one make use 
> of these arguments inside the program, specifically inside the solve 
> function?
>
>
> currently you can’t, it is **command-line only**. 
> One would need to add a wrapper for the direct solver, similar to 
> SolverDirectMUMPS:
>
> https://www.dealii.org/developer/doxygen/deal.II/classPETScWrappers_1_1SparseDirectMUMPS.html
>
> you can give it a try and contribute this feature to the library.
>
> Regards,
> Denis.
>



[deal.II] Re: Superlu_dist usage in dealii

2017-04-18 Thread Kartik Jujare
Dear Jean,

Thank you for your answer. I will try this.

Regards,
Kartik 

On Tuesday, April 18, 2017 at 7:21:04 AM UTC+2, Jean-Paul Pelteret wrote:
>
> Dear Kartik,
>
> It is possible to use the SuperLU direct solver through the 
> TrilinosWrappers. Obviously this requires that Trilinos is configured 
> with SuperLU enabled. Step-33 
> (https://www.dealii.org/8.5.0/doxygen/deal.II/step_33.html#ConservationLawsolve) 
> demonstrates how to set up and use the direct solver itself, and you 
> would simply need to select the desired solver type 
> (https://dealii.org/8.5.0/doxygen/deal.II/structTrilinosWrappers_1_1SolverDirect_1_1AdditionalData.html#a22a275c6fa139cd8bf86cea54ff8f045):
>
> SolverControl solver_control (1, 0);
> TrilinosWrappers::SolverDirect::AdditionalData data;
> data.solver_type = "Amesos_Superlu";
> TrilinosWrappers::SolverDirect direct (solver_control, data);
>
> direct.solve (system_matrix, newton_update, right_hand_side);
>
> I hope that this helps.
>
> Regards,
> Jean-Paul
>
> On Monday, April 17, 2017 at 1:08:29 PM UTC+2, Kartik Jujare wrote:
>>
>> Hello,
>>
>> Can anyone please provide an implementation example on how to proceed 
>> using superlu_dist in a dealii solve function? 
>>
>> Thanks and regards,
>> Kartik 
>>
>



[deal.II] Re: Direct Solvers in Parallel

2017-04-15 Thread Kartik Jujare
Hi Uwe,

Needed a clarification on this thread. I am using the following 
installation:

All installed using Candi.
dealii - 8.4.2
petsc 3.6.4
other packages

How should I proceed to install MUMPS in combination with the PETSc 
installed by candi? Is there a way to do it without having to delete the 
existing installation?

Happy Easter!
Thanks,
Kartik 

 

On Tuesday, October 25, 2016 at 10:56:14 AM UTC+2, Uwe Köcher wrote:
>
> Dear Hamed,
>
> I think you can only use the TrilinosWrappers::SolverDirect classes for 
> TrilinosWrappers::MPI::Vector's and of course TrilinosWrappers::Vector's.
>
> Unfortunately, I've no experience with PETScWrappers::SparseDirectMUMPS, 
> and I don't think we have included that in candi so far.
> (You need to install mumps against your mpi compiler and point to that 
> during the petsc installation)
>
> Best regards
>   Uwe
>
>
> On Tuesday, October 25, 2016 at 3:17:39 AM UTC+2, Hamed Babaei wrote:
>>
>> Hi friends,
>>
>> I am parallelizing a code similar to step-44 in which it is possible to 
>> use either an iterative solver, SolverCG, or a direct solver, 
>> SparseDirectUMFPACK. I have used the latter in the non-parallel code and 
>> it works great.
>> Using iterative solvers like SolverCG, I have problems with convergence, so 
>> I want to check a direct solver that works in parallel. My problem is that 
>> my code doesn't recognize PETScWrappers::SparseDirectMUMPS nor 
>> TrilinosWrappers::SolverDirect.
>> I have installed Dealii and all of its dependent libraries (Petsc, 
>> Trilinos, P4est ...) via Candi (https://github.com/koecher/candi). I was 
>> wondering which direct solver I should use that works the same as 
>> SparseDirectUMFPACK, and how to make deal.II recognize them.
>>
>> Thanks,
>> Hamed
>>
>



Re: [deal.II] Parallelization of step-2: DynamicMatrix to SparsityMatrix

2017-02-14 Thread Kartik Jujare
Thank you Timo for your reply.

On Sunday, February 12, 2017 at 3:54:19 PM UTC+1, Timo Heister wrote:
>
> Yes, it looks like you cannot copy a DSP with an IndexSet into a 
> SparsityPattern. We could make that work but it also is not a very 
> useful operation. Can you try replacing 
>   DynamicSparsityPattern dsp (locally_relevant_dofs); 
> with 
>   DynamicSparsityPattern dsp; 
> ? This is not efficient for large computations because the DSP now 
> stores all rows, but you might be able to copy it to a SparsityPattern 
> now. 
>
> Or, you could look at the implementation of print_svg(). It should be 
> easy to write an implementation for DSP based on it. 
>
>
>
>
> On Sat, Feb 11, 2017 at 2:09 AM, Kartik Jujare <kartik...@gmail.com 
> > wrote: 
> > Hello everyone, 
> > 
> > This question is regarding DynamicSparsityPattern and SparsityPattern. 
> > 
> > I am trying, as a small exercise, to parallelize the step files and observe 
> > the output. In the step-2 file, I am not able to use the copy_from() 
> > function when I run it in parallel. Could anyone please suggest a 
> > workaround to view the svg file for each processor or point out any 
> > mistake that I might have made? 
> > 
> > Following is the distribute_dofs function. 
> > 
> > * The gnuplots get printed out properly but it is for the Dynamic 
> Matrix: 
> > 
> > 
> > void distribute_dofs (parallel::distributed::Triangulation<2> &triangulation, 
> >   DoFHandler<2> &dof_handler, MPI_Comm 
> > mpi_communicator) 
> > { 
> >  static const FE_Q<2> finite_element(1); 
> > 
> > 
> >  IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs(); 
> >  IndexSet locally_relevant_dofs; 
> > 
> > 
> > ConstraintMatrix  constraints; 
> > 
> > dof_handler.distribute_dofs (finite_element); 
> > 
> > 
> > locally_owned_dofs = dof_handler.locally_owned_dofs (); 
> > DoFTools::extract_locally_relevant_dofs (dof_handler, 
> >  locally_relevant_dofs); 
> > 
> > 
> > /*constraints.clear (); 
> > constraints.reinit (locally_relevant_dofs); 
> > constraints.close ();*/ 
> > 
> > 
> > DynamicSparsityPattern dsp (locally_relevant_dofs); 
> > 
> > 
> > DoFTools::make_sparsity_pattern (dof_handler, dsp, 
> >  constraints, false); 
> > SparsityTools::distribute_sparsity_pattern (dsp, 
> > 
> > dof_handler.n_locally_owned_dofs_per_processor(), 
> > mpi_communicator, 
> > locally_relevant_dofs); 
> > 
> > 
> > std::ofstream out ("sparsity_pattern_1_" + Utilities::int_to_string 
> >  (triangulation.locally_owned_subdomain(), 4)); 
> > 
> > 
> > dsp.print_gnuplot(out); 
> > 
> > 
> > /* SparsityPattern sparsity_pattern; 
> >  sparsity_pattern.copy_from (dsp); 
> > 
> > 
> >  std::ofstream out ("sparsity_pattern_1_" + Utilities::int_to_string 
> >  (triangulation.locally_owned_subdomain(), 4) + ".svg"); 
> >  sparsity_pattern.print_svg (out);*/ 
> > 
> > 
> > } 
> > 
> > 
> > The following is the error obtained after I uncomment the sparsity 
> pattern 
> > block: 
> > 
> > An error occurred in line <341> of file 
> > 
> 
>  
>
> > in function 
> > bool 
> > 
> dealii::DynamicSparsityPattern::exists(dealii::DynamicSparsityPattern::size_type,
>  
>
> > dealii::DynamicSparsityPattern::size_type) const 
> > The violated condition was: 
> > rowset.size()==0 || rowset.is_element(i) 
> > The name and call sequence of the exception was: 
> > ExcInternalError() 
> > Additional Information: 
> > This exception -- which is used in many places in the library -- usually 
> > indicates that some condition which the author of the code thought must be 
> > satisfied at a certain point in an algorithm, is not fulfilled. An example 
> > would be that the first part of an algorithm sorts elements of an array in 
> > ascending order, and a second part of the algorithm later encounters an 
> > element that is not larger than the previous one. 
> > 
> > There is usually not very much you can do if you encounter such an exception 
> > since it indicates an error in de

[deal.II] Parallelization of step-2: DynamicMatrix to SparsityMatrix

2017-02-10 Thread Kartik Jujare
Hello everyone,

This question is regarding DynamicSparsityPattern and SparsityPattern.

I am trying, as a small exercise, to parallelize the step files and observe 
the output. In the step-2 file, I am not able to use the copy_from() function 
when I run it in parallel. Could anyone please suggest a workaround to 
view the svg file for each processor, or point out any mistake that I might 

Following is the distribute_dofs function.

* The gnuplot output gets printed properly, but it is for the DynamicSparsityPattern:


void distribute_dofs (parallel::distributed::Triangulation<2> &triangulation,
  DoFHandler<2> &dof_handler,
  MPI_Comm mpi_communicator)
{
 static const FE_Q<2> finite_element(1);


 IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
 IndexSet locally_relevant_dofs;


ConstraintMatrix  constraints;
   
dof_handler.distribute_dofs (finite_element);


locally_owned_dofs = dof_handler.locally_owned_dofs ();
DoFTools::extract_locally_relevant_dofs (dof_handler,
 locally_relevant_dofs);


/*constraints.clear ();
constraints.reinit (locally_relevant_dofs);
constraints.close ();*/


DynamicSparsityPattern dsp (locally_relevant_dofs);


DoFTools::make_sparsity_pattern (dof_handler, dsp,
 constraints, false);
SparsityTools::distribute_sparsity_pattern (dsp,
dof_handler.n_locally_owned_dofs_per_processor(),
mpi_communicator,
locally_relevant_dofs);


std::ofstream out ("sparsity_pattern_1_" + Utilities::int_to_string
 (triangulation.locally_owned_subdomain(), 4));


dsp.print_gnuplot(out);


/* SparsityPattern sparsity_pattern;
 sparsity_pattern.copy_from (dsp);


 std::ofstream out ("sparsity_pattern_1_" + Utilities::int_to_string
 (triangulation.locally_owned_subdomain(), 4) + ".svg");
 sparsity_pattern.print_svg (out);*/


}


The following is the error obtained after I uncomment the sparsity pattern 
block:

An error occurred in line <341> of file  in function 
bool dealii::DynamicSparsityPattern::exists(dealii::DynamicSparsityPattern::size_type, 
dealii::DynamicSparsityPattern::size_type) const 
The violated condition was:  
rowset.size()==0 || rowset.is_element(i) 
The name and call sequence of the exception was: 
ExcInternalError() 
Additional Information:  
This exception -- which is used in many places in the library -- usually 
indicates that some condition which the author of the code thought must be 
satisfied at a certain point in an algorithm, is not fulfilled. An example 
would be that the first part of an algorithm sorts elements of an array in 
ascending order, and a second part of the algorithm later encounters an 
element that is not larger than the previous one. 
 
There is usually not very much you can do if you encounter such an 
exception since it indicates an error in deal.II, not in your own program. 
Try to come up with the smallest possible program that still demonstrates 
the error and contact the deal.II mailing lists with it to obtain help. 
 
Stacktrace: 
--- 
#0  /home/dulcet/deal.ii-candi/deal.II-v8.4.2/lib/libdeal_II.g.so.8.4.2: 
dealii::DynamicSparsityPattern::exists(unsigned int, unsigned int) const 
#1  /home/dulcet/deal.ii-candi/deal.II-v8.4.2/lib/libdeal_II.g.so.8.4.2: 
void 
dealii::SparsityPattern::copy_from(dealii::DynamicSparsityPattern
 
const&) 
#2  ./step-2: 
distribute_dofs(dealii::parallel::distributed::Triangulation<2, 2>&, 
dealii::DoFHandler<2, 2>&, ompi_communicator_t*) 
#3  ./step-2: main


Thanks in advance.

Regards,
Kartik Jujare
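One way around the copy_from() assertion (suggested elsewhere in this archive by Timo Heister) is to build the DynamicSparsityPattern without the IndexSet, so that it stores all rows and can be copied into a SparsityPattern. A sketch only (untested, assumes the same 2D deal.II setup as the code above; it trades memory for copyability, since every process now stores all rows):

```cpp
// Sketch: an unrestricted DSP, square with one row/column per DoF, so
// SparsityPattern::copy_from() can read every row without tripping the
// rowset assertion seen above.
DynamicSparsityPattern dsp (dof_handler.n_dofs(), dof_handler.n_dofs());
DoFTools::make_sparsity_pattern (dof_handler, dsp, constraints, false);

SparsityPattern sparsity_pattern;
sparsity_pattern.copy_from (dsp);   // no restricted rowset, so this succeeds

std::ofstream out ("sparsity_pattern_1_"
                   + Utilities::int_to_string (triangulation.locally_owned_subdomain(), 4)
                   + ".svg");
sparsity_pattern.print_svg (out);
```

Note that this defeats the memory savings of the locally-relevant construction, so it is only suitable for small exercises like visualizing step-2's pattern per processor.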



Re: [deal.II] Running Step-40 with eclipse PTP

2017-01-09 Thread Kartik Jujare
Thank you Wolfgang and Seyed for your replies. I tested it on another PC 
where dealii was installed using the candi script. The eclipse parallel run 
was successful for step 40. So I uninstalled my installations of p4est, 
petsc, and dealii, and reinstalled using candi without changing any 
configurations in eclipse.

The only roadblock I faced initially and the procedure followed is this:

a) to run the candi script on 16.04, I had to force the platform as follows:
./candi.sh 
--platform=./deal.II-toolchain/platforms/supported/ubuntu15.platform -j2 

b) Using the candi script to install on 16.04 will ask for prerequisites to 
be installed, of which the following two were not found:
libblas3gf liblapack3gf
 
These two I was able to install by searching for 
"lapack" and "blas" in synaptic and installing everything I thought 
relevant.

c) Then hit enter and continue as usual.
The eclipse parallel run was successful.

I think some mistake was made while installing the software packages 
individually.
Also, please excuse me for marking my own answer as the best answer. But I 
guess this might help someone.

Regards,
Kartik Jujare



On Wednesday, January 4, 2017 at 3:24:42 PM UTC+1, Wolfgang Bangerth wrote:
>
> On 12/30/2016 10:05 AM, Kartik Jujare wrote: 
> > 
> > I am trying to run the step 40 through eclipse PTP but have been 
> unsuccessful 
> > in running it on more than one processor. 
> > 
> > The following are the steps I tried: 
> > a) Step - 40 compiles and runs on two processors when run from the 
> terminal. 
> > b) The template MPI Pi C++ project runs on two processors after creating 
> the 
> > run configuration file for 2 ranks. 
> > c) The size is shown as two in the MPI template output twice, while the 
> size 
> > is shown as one in the step - 40 file twice - Available processors on 
> local - 2 
> > * 
> > * 
> > *//step-40* 
> > *int main()* 
> > *{* 
> > *.* 
> > *//mpi_initFinalize();* 
> > *.* 
> > *int size = MPI::COMM_WORLD.Get_size();* 
> > *std::cout << size << std::endl; 
> > * 
> > * 
> > * 
> > *.* 
> > *.* 
> > *.* 
> > *}* 
> > 
> > Output: 
> > 
> > *1* 
> > *1* 
> > 
> > 
> > Could someone please point out to me what the right procedure is to 
> import 
> > step 40 to eclipse PTP is? Would be glad to provide more information. 
>
> Kartik -- I'm not sure anyone has tried to use the PTP. Do I understand 
> you 
> correctly that 
> * step-40 runs correctly (shows size=2, twice) when run on the command 
> line 
> * step-40 shows size=1, but twice, when run from inside the PTP 
> ? Is this the exact same executable? 
>
> If this is correct, that would suggest that PTP is indeed starting the 
> executable twice, but that they do not communicate. This can happen if the 
> 'mpirun' command you use on the command line corresponds to the MPI 
> implementation you used to build deal.II, but the 'mpirun' command PTP 
> uses 
> corresponds to a *different* MPI implementation on your system (presumably 
> one 
> that comes with PTP). That will not work. But I don't know enough about 
> PTP to 
> know how to resolve this issue. 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>
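A small, deal.II-independent check of the point Wolfgang makes above: compile the following with the same mpicxx that was used to build deal.II, then run it under the mpirun that Eclipse PTP invokes (a sketch; tool names and paths depend on your setup). If it prints "rank 0 of 1" twice under `mpirun -np 2`, the launcher and the MPI library the program is linked against do not match:

```cpp
#include <mpi.h>
#include <cstdio>

// Minimal MPI sanity check: each rank reports its rank and the size of
// MPI_COMM_WORLD. With a matching mpirun, "-np 2" prints "rank 0 of 2"
// and "rank 1 of 2" (in some order); a mismatched launcher starts two
// independent processes that each report "rank 0 of 1".
int main (int argc, char **argv)
{
  MPI_Init (&argc, &argv);

  int rank = 0, size = 0;
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);
  MPI_Comm_size (MPI_COMM_WORLD, &size);
  std::printf ("rank %d of %d\n", rank, size);

  MPI_Finalize ();
  return 0;
}
```

This also avoids the deprecated C++ MPI bindings (`MPI::COMM_WORLD.Get_size()`) used in the snippet above, which were removed from the MPI standard.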



[deal.II] Re: Running Step-40 with eclipse PTP

2016-12-30 Thread Kartik Jujare
Correction to the code above: mpi_initFinalize is not commented in the code.

On Friday, December 30, 2016 at 6:05:03 PM UTC+1, Kartik Jujare wrote:
>
> Hello,
>
> I am trying to run the step 40 through eclipse PTP but have been 
> unsuccessful in running it on more than one processor.
>
> The following are the steps I tried:
> a) Step - 40 compiles and runs on two processors when run from the 
> terminal.
> b) The template MPI Pi C++ project runs on two processors after creating 
> the run configuration file for 2 ranks.
> c) The size is shown as two in the MPI template output twice, while the 
> size is shown as one in the step - 40 file twice - Available processors on 
> local - 2
>
> *//step-40*
> *int main()*
> *{*
> *.*
> *//mpi_initFinalize();*
> *.*
> *int size = MPI::COMM_WORLD.Get_size();*
>
> *std::cout << size << std::endl;*
>
> *.*
> *.*
> *.*
> *}*
>
> Output:
>
> *1*
> *1*
>
>
> Could someone please point out to me what the right procedure is to import 
> step 40 to eclipse PTP is? Would be glad to provide more information.
>
> Regards,
> Kartik Jujare
>



Re: [deal.II] Unsuccessful integration of p4est with dealii

2016-12-13 Thread Kartik Jujare
Thank you for your help.

On Tuesday, December 13, 2016 at 8:39:40 AM UTC+1, Uwe Köcher wrote:
>
> Dear Kartik,
>
> well, candi has users around the world - if it reports a successful 
> installation, then everything should be fine.
> But anyhow, you can go to each installation folder, e.g. deal.II, and try 
> to run their specific test suite, if available.
>
> Kind regards
>   Uwe
>
> On Monday, December 12, 2016 at 4:14:52 AM UTC+1, Kartik Jujare wrote:
>>
>> Dear Uwe, 
>>
>> It seems that the candi script was able to install everything without a 
>> problem. Is there any way I can verify that the installation was a success? Could 
>> you please give me directions on how this could work?
>>
>> Regards,
>> Kartik
>>
>> On Saturday, December 10, 2016 at 5:29:49 PM UTC+1, Uwe Köcher wrote:
>>>
>>> Dear Kartik,
>>>
>>> I cannot help you out in your specific problem, but have you tried out 
>>> candi:
>>>   https://github.com/dealii/candi
>>> to download, compile and install deal.II with MPI + p4est (and other 
>>> tools if
>>> you like)?
>>>
>>> Kind regards
>>>   Uwe
>>>
>>> On Tuesday, December 6, 2016 at 6:14:04 PM UTC+1, Kartik Jujare wrote:
>>>>
>>>> Hi Timo,
>>>>
>>>>  
>>>> > ubuntu@dulcet:~/software/dealii-8.4.1/build/tests/quick_tests$ mpirun 
>>>> -np 4 
>>>> > ./p4est.debug 
>>>> > application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0 
>>>>
>>>> >>Does this also happen with 1 rank?
>>>>
>>>> with $mpirun -np 1 ./p4est-debug 
>>>>
>>>> it runs without error. It does not show any output. The control returns 
>>>> to the terminal. 
>>>>
>>>>
>>>>
>>>>
>>>> On Tuesday, December 6, 2016 at 4:25:13 PM UTC+1, Timo Heister wrote:
>>>>>
>>>>> > installation folder.  But I guess getting this output does not make 
>>>>> a lot of 
>>>>> > sense. 
>>>>> > 
>>>>> > ubuntu@dulcet:~/software/dealii-8.4.1/build/tests/quick_tests$ 
>>>>> mpirun -np 4 
>>>>> > ./p4est.debug 
>>>>> > application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0 
>>>>>
>>>>> Does this also happen with 1 rank? 
>>>>>
>>>>> >>> try running in a debugger 
>>>>> > Will have to try this and I will come back with what I get. but I 
>>>>> hope you 
>>>>> > mean using eclipse for it. Or are you referring to gdb and xterm. I 
>>>>> am still 
>>>>> > a novice at debugging parallel codes. Would you be able to point to 
>>>>> > resources in this regard? 
>>>>>
>>>>> see 
>>>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_dealii_dealii_wiki_Frequently-2DAsked-2DQuestions-23how-2Ddo-2Di-2Ddebug-2Dmpi-2Dprograms=CwIBaQ=Ngd-ta5yRYsqeUsEDgxhcqsYYY1Xs5ogLxWPA_2Wlc4=4k7iKXbjGC8LfYxVJJXiaYVu6FRWmEjX38S7JmlS9Vw=YN0JHrQA35iJTgejwPRe1gm5nGqklqgPm0wTTWjc0a8=zu4qxhp7eX0DAdKfncd0oL1f3zARknQWkFdAYvxnEFA=
>>>>>  
>>>>>
>>>>>
>>>>> -- 
>>>>> Timo Heister 
>>>>> http://www.math.clemson.edu/~heister/ 
>>>>>
>>>>



Re: [deal.II] Unsuccessful integration of p4est with dealii

2016-12-11 Thread Kartik Jujare
Dear Uwe, 

It seems that the candi script was able to install everything without a 
problem. Is there any way I can verify that the installation was a success? Could 
you please give me directions on how this could work?

Regards,
Kartik

On Saturday, December 10, 2016 at 5:29:49 PM UTC+1, Uwe Köcher wrote:
>
> Dear Kartik,
>
> I cannot help you out in your specific problem, but have you tried out 
> candi:
>   https://github.com/dealii/candi
> to download, compile and install deal.II with MPI + p4est (and other tools 
> if
> you like)?
>
> Kind regards
>   Uwe
>
> On Tuesday, December 6, 2016 at 6:14:04 PM UTC+1, Kartik Jujare wrote:
>>
>> Hi Timo,
>>
>>  
>> > ubuntu@dulcet:~/software/dealii-8.4.1/build/tests/quick_tests$ mpirun 
>> -np 4 
>> > ./p4est.debug 
>> > application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0 
>>
>> >>Does this also happen with 1 rank?
>>
>> with $mpirun -np 1 ./p4est-debug 
>>
>> it runs without error. It does not show any output. The control returns 
>> to the terminal. 
>>
>>
>>
>>
>> On Tuesday, December 6, 2016 at 4:25:13 PM UTC+1, Timo Heister wrote:
>>>
>>> > installation folder.  But I guess getting this output does not make a 
>>> lot of 
>>> > sense. 
>>> > 
>>> > ubuntu@dulcet:~/software/dealii-8.4.1/build/tests/quick_tests$ mpirun 
>>> -np 4 
>>> > ./p4est.debug 
>>> > application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0 
>>>
>>> Does this also happen with 1 rank? 
>>>
>>> >>> try running in a debugger 
>>> > Will have to try this and I will come back with what I get. but I hope 
>>> you 
>>> > mean using eclipse for it. Or are you referring to gdb and xterm. I am 
>>> still 
>>> > a novice at debugging parallel codes. Would you be able to point to 
>>> > resources in this regard? 
>>>
>>> see 
>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_dealii_dealii_wiki_Frequently-2DAsked-2DQuestions-23how-2Ddo-2Di-2Ddebug-2Dmpi-2Dprograms=CwIBaQ=Ngd-ta5yRYsqeUsEDgxhcqsYYY1Xs5ogLxWPA_2Wlc4=4k7iKXbjGC8LfYxVJJXiaYVu6FRWmEjX38S7JmlS9Vw=YN0JHrQA35iJTgejwPRe1gm5nGqklqgPm0wTTWjc0a8=zu4qxhp7eX0DAdKfncd0oL1f3zARknQWkFdAYvxnEFA=
>>>  
>>>
>>>
>>> -- 
>>> Timo Heister 
>>> http://www.math.clemson.edu/~heister/ 
>>>
>>



Re: [deal.II] Unsuccessful integration of p4est with dealii

2016-12-04 Thread Kartik Jujare
Timo,

Yes. My mistake. I was looking at the source folder instead of the
installation folder.  But I guess getting this output does not make a lot
of sense.

ubuntu@dulcet:~/software/dealii-8.4.1/build/tests/quick_tests$ mpirun -np 4
./p4est.debug
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0

>> try running in a debugger
I will have to try this and come back with what I get, but I hope you
mean using eclipse for it. Or are you referring to gdb and xterm? I am
still a novice at debugging parallel codes. Would you be able to point to
resources in this regard?

Regards,
Kartik Jujare



On Sun, Dec 4, 2016 at 9:24 PM, Timo Heister <heis...@clemson.edu> wrote:

> Kartik,
>
> > It returned this when I tried to compile it this way. Is this the right
> way?
>
> No, don't compile it manually. You will be missing a large number of
> preprocessor defines, include paths, etc.. If you run "make test", the
> binaries should still exist. Just try to run them as is.
>
> On Sun, Dec 4, 2016 at 3:01 PM, Kartik Jujare <kartik.juj...@gmail.com>
> wrote:
> > This is not the case. Still unable to figure out.
> >
> > On Sun, Dec 4, 2016 at 8:36 PM, Kartik Jujare <kartik.juj...@gmail.com>
> > wrote:
> >>
> >> Hi Timo,
> >>
> >> I slightly remember that I had included -DDEAL_II_WITH_THREADS=OFF .
> This
> >> could be a cause of the problem right? If it could be should I
> reconfigure
> >> dealii with this option on?
> >>
> >> Regards,
> >> Kartik Jujare.
> >>
> >> On Sun, Dec 4, 2016 at 6:54 PM, Timo Heister <heis...@clemson.edu>
> wrote:
> >>>
> >>> Hey Kartik,
> >>>
> >>> I haven't seen an error like this before. A few ideas:
> >>> 1. Can you check that p4est is using the same mpi libraries? You can
> >>> for example compare the output of "ldd libp4est.so | grep mpi" with
> >>> the one from the deal.II libs.
> >>> 2. you can go into the tests/quicktests/ directory inside your build
> >>> directory and run the p4est test manually using mpirun. Maybe vary the
> >>> number of MPI ranks and/or try running in a debugger to see where
> >>> things are breaking.
> >>>
> >>> On Sun, Dec 4, 2016 at 6:29 AM, Kartik Jujare <kartik.juj...@gmail.com
> >
> >>> wrote:
> >>> > Hello,
> >>> >
> >>> > I was wondering if anyone could help pinpoint any anomaly with my
> p4est
> >>> > and
> >>> > dealii integration.
> >>> >
> >>> > I was able to successfully install Petsc and run its own tests using
> my
> >>> > already installed mpich.
> >>> > Next, I installed p4est which also seemed to install without
> problems.
> >>> > Then dealii which also did not give me any problems.
> >>> >
> >>> > I could not find any topics relating to this error. I'd be grateful
> if
> >>> > anyone could help solve this issue. Following is more information
> about
> >>> > my
> >>> > installations.
> >>> >
> >>> > The problem came when I ran the following command after dealii
> >>> > installation
> >>> > $ make test
> >>> >
> >>> > The attachments:
> >>> > detailed.log -- dealii
> >>> > config.log -- p4est build directory
> >>> >
> >>> > Here is the output from the terminal:
> >>> >
> >>> >
> >>> > 
> 
> 
> >>> > ubuntu@dulcet:~/software/dealii-8.4.1/build$ make test
> >>> > Scanning dependencies of target test
> >>> > [100%] Running quicktests...
> >>> > Test project /home/ubuntu/software/dealii-
> 8.4.1/build/tests/quick_tests
> >>> > Start 1: step.debug
> >>> > 1/7 Test #1: step.debug ...   Passed   20.28 sec
> >>> > Start 2: step.release
> >>> > 2/7 Test #2: step.release .   Passed   17.81 sec
> >>> > Start 3: affinity.debug
> >>> > 3/7 Test #3: affinity.debug ...   Passed   12.33 sec
> >>> > Start 4: mpi.debug
> >>> > 4/7 Test #4: mpi.debug    Passed   12.67 sec
> >>> > Start 5: tbb.d

Re: [deal.II] Unsuccessful integration of p4est with dealii

2016-12-04 Thread Kartik Jujare
Hi Timo,

I vaguely remember that I had included -DDEAL_II_WITH_THREADS=OFF. Could
this be a cause of the problem? If so, should I reconfigure dealii with
this option on?

Regards,
Kartik Jujare.

On Sun, Dec 4, 2016 at 6:54 PM, Timo Heister <heis...@clemson.edu> wrote:

> Hey Kartik,
>
> I haven't seen an error like this before. A few ideas:
> 1. Can you check that p4est is using the same mpi libraries? You can
> for example compare the output of "ldd libp4est.so | grep mpi" with
> the one from the deal.II libs.
> 2. you can go into the tests/quicktests/ directory inside your build
> directory and run the p4est test manually using mpirun. Maybe vary the
> number of MPI ranks and/or try running in a debugger to see where
> things are breaking.
>
> On Sun, Dec 4, 2016 at 6:29 AM, Kartik Jujare <kartik.juj...@gmail.com>
> wrote:
> > Hello,
> >
> > I was wondering if anyone could help pinpoint any anomaly with my p4est
> and
> > dealii integration.
> >
> > I was able to successfully install Petsc and run its own tests using my
> > already installed mpich.
> > Next, I installed p4est which also seemed to install without problems.
> > Then dealii which also did not give me any problems.
> >
> > I could not find any topics relating to this error. I'd be grateful if
> > anyone could help solve this issue. Following is more information about
> my
> > installations.
> >
> > The problem came when I ran the following command after dealii
> installation
> > $ make test
> >
> > The attachments:
> > detailed.log -- dealii
> > config.log -- p4est build directory
> >
> > Here is the output from the terminal:
> >
> > 
> 
> 
> > ubuntu@dulcet:~/software/dealii-8.4.1/build$ make test
> > Scanning dependencies of target test
> > [100%] Running quicktests...
> > Test project /home/ubuntu/software/dealii-8.4.1/build/tests/quick_tests
> > Start 1: step.debug
> > 1/7 Test #1: step.debug ...   Passed   20.28 sec
> > Start 2: step.release
> > 2/7 Test #2: step.release .   Passed   17.81 sec
> > Start 3: affinity.debug
> > 3/7 Test #3: affinity.debug ...   Passed   12.33 sec
> > Start 4: mpi.debug
> > 4/7 Test #4: mpi.debug    Passed   12.67 sec
> > Start 5: tbb.debug
> > 5/7 Test #5: tbb.debug    Passed   11.19 sec
> > Start 6: p4est.debug
> > 6/7 Test #6: p4est.debug ..***Failed   17.08 sec
> > Test p4est.debug: RUN
> > ===   OUTPUT BEGIN
> > ===
> > Scanning dependencies of target kill-p4est.debug-OK
> > [  0%] Built target kill-p4est.debug-OK
> > [  0%] Built target expand_instantiations_exe
> > [  0%] Built target obj_opencascade.inst
> > [  1%] Built target obj_opencascade.debug
> > [  6%] Built target obj_boost_serialization.debug
> > [  6%] Built target obj_boost_system.debug
> > [ 13%] Built target obj_tbb.debug
> > [ 15%] Built target obj_muparser.debug
> > [ 20%] Built target obj_numerics.inst
> > [ 28%] Built target obj_numerics.debug
> > [ 37%] Built target obj_fe.inst
> > [ 47%] Built target obj_fe.debug
> > [ 49%] Built target obj_dofs.inst
> > [ 52%] Built target obj_dofs.debug
> > [ 55%] Built target obj_lac.inst
> > [ 66%] Built target obj_lac.debug
> > [ 67%] Built target obj_base.inst
> > [ 79%] Built target obj_base.debug
> > [ 83%] Built target obj_grid.inst
> > [ 86%] Built target obj_grid.debug
> > [ 88%] Built target obj_hp.inst
> > [ 89%] Built target obj_hp.debug
> > [ 91%] Built target obj_multigrid.inst
> > [ 93%] Built target obj_multigrid.debug
> > [ 94%] Built target obj_distributed.inst
> > [ 96%] Built target obj_distributed.debug
> > [ 96%] Built target obj_algorithms.inst
> > [ 98%] Built target obj_algorithms.debug
> > [ 98%] Built target obj_integrators.debug
> > [100%] Built target obj_matrix_free.inst
> > [100%] Built target obj_matrix_free.debug
> > [100%] Built target obj_meshworker.inst
> > [100%] Built target obj_meshworker.debug
> > [100%] Built target deal_II.g
> > Scanning dependencies of target p4est.debug
> > [100%] Building CXX object
> > tests/quick_tests/CMakeFiles/p4est.debug.dir/p4est.cc.o
> > Linking CXX executable p4est.debug
> > [

Re: [deal.II] Unsuccessful integration of p4est with dealii

2016-12-04 Thread Kartik Jujare
Hi Timo, 

Thank you for your reply.

I ran the commands as you instructed:
1)
ubuntu@dulcet:~/programfiles/p4est/FAST/lib$ ldd libp4est.so | grep mpi
libmpi.so.12 => /home/ubuntu/programfiles/mpich/lib/libmpi.so.12 
(0x7f281d956000)

ubuntu@dulcet:~/programfiles/dealii/lib$ ldd libdeal_II.so | grep mpi
libmpi.so.12 => /home/ubuntu/programfiles/mpich/lib/libmpi.so.12 
(0x7f2dbaea8000)
libmpifort.so.12 => /home/ubuntu/programfiles/mpich/lib/libmpifort.so.12 
(0x7f2db9e6e000)

2) It returned this when I tried to compile it this way. Is this the right 
way? I am running this on a remote virtual machine. I am trying to see how 
to resolve this compilation error.   

ubuntu@dulcet:~/software/dealii-8.4.1/tests/quick_tests$ mpicxx p4est.cc -o 
p4est -I/home/ubuntu/programfiles/dealii/include/

In file included from 
/home/ubuntu/programfiles/dealii/include/deal.II/base/logstream.h:23:0,
 from p4est.cc:21:
/home/ubuntu/programfiles/dealii/include/deal.II/base/thread_local_storage.h:23:46: 
fatal error: tbb/enumerable_thread_specific.h: No such file or directory
 #  include <tbb/enumerable_thread_specific.h>
            ^
compilation terminated.

Thanks.

On Sunday, December 4, 2016 at 6:54:53 PM UTC+1, Timo Heister wrote:
>
> Hey Kartik, 
>
> I haven't seen an error like this before. A few ideas: 
> 1. Can you check that p4est is using the same mpi libraries? You can 
> for example compare the output of "ldd libp4est.so | grep mpi" with 
> the one from the deal.II libs. 
> 2. you can go into the tests/quicktests/ directory inside your build 
> directory and run the p4est test manually using mpirun. Maybe vary the 
> number of MPI ranks and/or try running in a debugger to see where 
> things are breaking. 
>
> On Sun, Dec 4, 2016 at 6:29 AM, Kartik Jujare <kartik...@gmail.com 
> > wrote: 
> > [original message trimmed; it appears in full below]

[deal.II] Unsuccessful integration of p4est with dealii

2016-12-04 Thread Kartik Jujare
Hello,

I was wondering if anyone could help pinpoint any anomaly with my p4est and 
dealii integration. 

I was able to install PETSc successfully and run its own tests using my 
already-installed MPICH.
Next, I installed p4est, which also seemed to install without problems.
Then deal.II, which did not give me any problems either.

I could not find any topics relating to this error. I'd be grateful if 
anyone could help solve this issue. Following is more information about my 
installations.

The problem came when I ran the following command after dealii installation
$ make test

The attachments:
detailed.log -- dealii
config.log -- p4est build directory

Here is the output from the terminal:


ubuntu@dulcet:~/software/dealii-8.4.1/build$ make test
Scanning dependencies of target test
[100%] Running quicktests...
Test project /home/ubuntu/software/dealii-8.4.1/build/tests/quick_tests
Start 1: step.debug
1/7 Test #1: step.debug ...   Passed   20.28 sec
Start 2: step.release
2/7 Test #2: step.release .   Passed   17.81 sec
Start 3: affinity.debug
3/7 Test #3: affinity.debug ...   Passed   12.33 sec
Start 4: mpi.debug
4/7 Test #4: mpi.debug    Passed   12.67 sec
Start 5: tbb.debug
5/7 Test #5: tbb.debug    Passed   11.19 sec
Start 6: p4est.debug
6/7 Test #6: p4est.debug ..***Failed   17.08 sec
Test p4est.debug: RUN
===   OUTPUT BEGIN   ===
Scanning dependencies of target kill-p4est.debug-OK
[  0%] Built target kill-p4est.debug-OK
[  0%] Built target expand_instantiations_exe
[  0%] Built target obj_opencascade.inst
[  1%] Built target obj_opencascade.debug
[  6%] Built target obj_boost_serialization.debug
[  6%] Built target obj_boost_system.debug
[ 13%] Built target obj_tbb.debug
[ 15%] Built target obj_muparser.debug
[ 20%] Built target obj_numerics.inst
[ 28%] Built target obj_numerics.debug
[ 37%] Built target obj_fe.inst
[ 47%] Built target obj_fe.debug
[ 49%] Built target obj_dofs.inst
[ 52%] Built target obj_dofs.debug
[ 55%] Built target obj_lac.inst
[ 66%] Built target obj_lac.debug
[ 67%] Built target obj_base.inst
[ 79%] Built target obj_base.debug
[ 83%] Built target obj_grid.inst
[ 86%] Built target obj_grid.debug
[ 88%] Built target obj_hp.inst
[ 89%] Built target obj_hp.debug
[ 91%] Built target obj_multigrid.inst
[ 93%] Built target obj_multigrid.debug
[ 94%] Built target obj_distributed.inst
[ 96%] Built target obj_distributed.debug
[ 96%] Built target obj_algorithms.inst
[ 98%] Built target obj_algorithms.debug
[ 98%] Built target obj_integrators.debug
[100%] Built target obj_matrix_free.inst
[100%] Built target obj_matrix_free.debug
[100%] Built target obj_meshworker.inst
[100%] Built target obj_meshworker.debug
[100%] Built target deal_II.g
Scanning dependencies of target p4est.debug
[100%] Building CXX object 
tests/quick_tests/CMakeFiles/p4est.debug.dir/p4est.cc.o
Linking CXX executable p4est.debug
[100%] Built target p4est.debug
Scanning dependencies of target p4est.debug.run
p4est.debug: RUN failed. Output:
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
make[7]: *** [tests/quick_tests/CMakeFiles/p4est.debug.run] Error 1
make[6]: *** [tests/quick_tests/CMakeFiles/p4est.debug.run.dir/all] Error 2
make[5]: *** [tests/quick_tests/CMakeFiles/p4est.debug.run.dir/rule] Error 2
make[4]: *** [p4est.debug.run] Error 2


p4est.debug: **RUN failed***

===   OUTPUT END   ===
Expected stage PASSED - aborting
CMake Error at 
/home/ubuntu/software/dealii-8.4.1/cmake/scripts/run_test.cmake:140 
(MESSAGE):
  *** abort



Start 7: step-petsc.debug
Errors while running CTest
7/7 Test #7: step-petsc.debug .   Passed   16.50 sec

86% tests passed, 1 tests failed out of 7

Total Test time (real) = 107.88 sec

The following tests FAILED:
 6 - p4est.debug (Failed)


*** WARNING ***

Some of the tests failed!

Please scroll up or check the file tests/quick_tests/quicktests.log for the
error messages. If you are unable to fix the problems, see the FAQ or write
to the mailing list linked at http://www.dealii.org


The p4est test can fail if you are running an OpenMPI version before 1.5.
This is a known problem and the only work around is to update to a more
recent version or use a different MPI library like MPICH.
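[Editor's note] Given that warning, it is worth confirming which MPI implementation and version the build actually picked up. A minimal check, assuming `mpiexec` is on the PATH (both MPICH and Open MPI support `--version`):

```shell
# Print the identity of the MPI runtime found on the PATH.
if command -v mpiexec >/dev/null 2>&1; then
    mpiexec --version | head -n 2
else
    echo "no mpiexec found in PATH"
fi
```

MPICH installations additionally ship a `mpichversion` tool that prints the exact release; for Open MPI, `ompi_info` gives the same information.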

Built target test



Thank you for your help.

Regards,
Kartik Jujare
