Re: [deal.II] Does deal.II support importing a grid from the OpenFOAM solver

2023-07-15 Thread vachan potluri
I don't know about a direct technique, but you can first use foamToVTK to convert the OpenFOAM mesh to VTK and then import the VTK file in deal.II. Vachan On Sat, 15 Jul, 2023, 16:47 ztdep...@gmail.com, wrote: > I want to couple the mesh adaptivity of deal.II with the OpenFOAM solver. > Could you please give me
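A minimal sketch of the deal.II side of that workflow, assuming deal.II is installed and "mesh.vtk" (an illustrative file name) was produced by foamToVTK:

    #include <deal.II/grid/grid_in.h>
    #include <deal.II/grid/tria.h>

    #include <fstream>

    int main()
    {
      dealii::Triangulation<3> triangulation;
      dealii::GridIn<3>        grid_in;
      grid_in.attach_triangulation(triangulation);

      std::ifstream input("mesh.vtk"); // converted OpenFOAM mesh
      grid_in.read_vtk(input);         // import the VTK mesh into deal.II
    }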

Re: [deal.II] Calculate cell center distance from a boundary

2022-02-17 Thread vachan potluri
> > Hello, > Here is the PR https://github.com/dealii/dealii/pull/13394 that adds the > new wrappers for ArborX > Best, > Bruno Thank you very much! Didn't expect it to come so fast :) !

Re: [deal.II] Calculate cell center distance from a boundary

2022-02-10 Thread vachan potluri
Dear Dr. Wolfgang, Thank you very much for the kind reply. > This is a very difficult operation to do even in sequential computations > unless you have an analytical description of the boundary. That's because > in > principle you would have to compare the current position with all points > (or

Re: [deal.II] Small suggestion to improve GridIn::read_unv()

2021-10-04 Thread vachan potluri
> > Yes, this makes sense. A patch would be welcome! Please have a look at pr12787. It really is a complete nightmare and my preference would be if that > file format was banned from existence by the QAnon high council. My second > choice would be if

Re: [deal.II] Use a coarse grid solution as initial condition for a finer grid

2021-08-03 Thread vachan potluri
> >> I have a few questions. >> >> 1. What is the "buffer" argument in evaluate_and_process()? >> 2. The documentation for this function says get_point_ptrs() must be >> used to "process" the output in case the point-cell map is

Re: [deal.II] Use a coarse grid solution as initial condition for a finer grid

2021-08-03 Thread vachan potluri
...point-cell map is not one-one. I will surely encounter such cases. How can I use the data returned by get_point_ptrs() and how exactly should I "process" the output? I couldn't find this used in any examples. Any clarifications would be greatly helpful. On Tue, 3 Aug 2021 at 04:47, Wolfga

Re: [deal.II] Unexpected data output with cell data vector

2021-07-09 Thread vachan potluri
> > You want to use cell->active_cell_index() as the index into the vector. The > vector should have >triangulation.n_active_cells() > as its size. This corresponds to the *local* number of active cells, > including > ghost and artificial cells (for which vector entries are then just >
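A minimal sketch of that indexing scheme (assuming deal.II; the stored value is just a placeholder):

    #include <deal.II/grid/tria.h>
    #include <deal.II/lac/vector.h>

    template <int dim>
    dealii::Vector<double>
    make_cell_data(const dealii::Triangulation<dim> &triangulation)
    {
      // one entry per locally stored active cell; for a distributed
      // triangulation this includes ghost and artificial cells
      dealii::Vector<double> cell_data(triangulation.n_active_cells());
      for (const auto &cell : triangulation.active_cell_iterators())
        cell_data[cell->active_cell_index()] = 1.0; // placeholder value
      return cell_data;
    }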

Re: [deal.II] Use a coarse grid solution as initial condition for a finer grid

2021-07-05 Thread vachan potluri
Dr. Wolfgang, Thank you for the reply. So is your fine mesh a refinement of the coarse one? If not, you may want to > look at FEFieldFunction. Yes, it is. But the "refinement" is done by the meshing software, outside dealii. Is there any simplification possible in such a case? Otherwise, I
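For the case where the fine mesh is not a refinement of the coarse one, a minimal sketch of the FEFieldFunction route mentioned in the reply (serial vectors; all names are illustrative):

    #include <deal.II/dofs/dof_handler.h>
    #include <deal.II/lac/vector.h>
    #include <deal.II/numerics/fe_field_function.h>
    #include <deal.II/numerics/vector_tools.h>

    template <int dim>
    void transfer(const dealii::DoFHandler<dim> &coarse_dof_handler,
                  const dealii::Vector<double>  &coarse_solution,
                  const dealii::DoFHandler<dim> &fine_dof_handler,
                  dealii::Vector<double>        &fine_solution)
    {
      // wrap the coarse-mesh solution as a function of space ...
      dealii::Functions::FEFieldFunction<dim> coarse_field(coarse_dof_handler,
                                                           coarse_solution);
      // ... and interpolate it onto the fine DoFHandler
      dealii::VectorTools::interpolate(fine_dof_handler,
                                       coarse_field,
                                       fine_solution);
    }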

Re: [deal.II] dealii 9.3.0 make install fails at "Generating mpi.inst" with Invalid instantiation list: missing 'for'

2021-06-23 Thread vachan potluri
I had noticed that this make error would occur if the file being expanded doesn't start with the prefix 'for'. The following snippet is from line 453 onwards of dealii/cmake/scripts/expand_instantiations.cc:

    if (!has_prefix(whole_file, "for"))
      {
        std::cerr << "Invalid instantiation list: missing 'for'" <<
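For reference, a valid instantiation list begins with such a 'for' block. A rough sketch of the .inst.in format (the class name is purely illustrative, not a real file from the library):

    for (deal_II_dimension : DIMENSIONS)
      {
        template class SomeClass<deal_II_dimension>;
      }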

Re: [deal.II] A data structure for distributed storage of some cell "average"

2021-06-14 Thread vachan potluri
...vector into a Vector of size triang.n_active_cells() and adding this vector instead to DataOut works. Thanks again! On Tue, 15 Jun 2021 at 04:24, Wolfgang Bangerth wrote: > On 6/11/21 12:09 AM, vachan potluri wrote: > > > > I am having an issue in using DataOut for such vector in a parallel

Re: [deal.II] A data structure for distributed storage of some cell "average"

2021-06-11 Thread vachan potluri
Hello, I am having an issue in using DataOut for such vector in a parallel process. I am attaching a MWE which captures my problem. I am encountering a segmentation fault (signal 11).

Re: [deal.II] A data structure for distributed storage of some cell "average"

2021-06-08 Thread vachan potluri
Thank you :). On Tue, 8 Jun, 2021, 19:24 Wolfgang Bangerth, wrote: > On 6/8/21 4:18 AM, vachanpo...@gmail.com wrote: > > If I want to add such a vector to DataOut, will the regular > > DataOut::add_data_vector() work? Or is something else required to be > done? > > Yes,
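A minimal sketch of attaching such a cell-data vector to DataOut, following the answer above (assuming deal.II; the output file name is illustrative):

    #include <deal.II/dofs/dof_handler.h>
    #include <deal.II/lac/vector.h>
    #include <deal.II/numerics/data_out.h>

    #include <fstream>

    template <int dim>
    void write_cell_data(const dealii::DoFHandler<dim> &dof_handler,
                         const dealii::Vector<double>  &cell_data)
    {
      dealii::DataOut<dim> data_out;
      data_out.attach_dof_handler(dof_handler);
      // the vector has one entry per active cell, so mark it as cell data
      data_out.add_data_vector(cell_data,
                               "cell_data",
                               dealii::DataOut<dim>::type_cell_data);
      data_out.build_patches();

      std::ofstream output("cell_data.vtu");
      data_out.write_vtu(output);
    }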

Re: [deal.II] Compiling deal.II with GCC version 9.3.0 results in missing C++11 features error

2021-06-07 Thread vachan potluri
Alex, I think this is a problem related to the cluster's OS. On Cray XC50, I had to explicitly set the link type to dynamic before installation, because by default Cray does a static link. I had to set

    export XTPE_LINK_TYPE=dynamic
    export CRAYPE_LINK_TYPE=dynamic

before the installation. You

Re: [deal.II] Compiling deal.II with GCC version 9.3.0 results in missing C++11 features error

2021-06-03 Thread vachan potluri
Hi Alex, I previously ran into a lot of issues when I tried to install dealii on our institute's cluster. The OS was different though and I had problems with PETSc. I don't know if this helps but this is the relevant section in dealii-9.2.0/cmake/modules/FindTRILINOS.cmake file which searches

Re: [deal.II] Ordering of polynomials in FE_DGQLegendre<3>

2021-05-25 Thread vachan potluri
Wolfgang, > It doesn't have to be. For example, for FE_Q, we also build on > TensorProductPolynomials but the ordering is not lexicographic. So it would > still be of interest to document the order of shape functions if you end up > finding out what it is! Noted. So I have verified this with

[deal.II] Re: Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-14 Thread vachan potluri
Here is a summary of the installation process on Cray XC50. I have configured deal.II with MPI, LAPACK, SCALAPACK, PETSc and p4est. Our system didn't have p4est so I started with installing it. All cray libraries are in /opt/cray/pe/lib64/ on our system. *Installing p4est* 1. Download source

[deal.II] Re: Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-13 Thread vachan potluri
It is working! The mistake I made was to open an interactive job and run the executables through bash. When I instead submitted a job and executed them using aprun (Cray's equivalent of mpirun), they ran successfully. I tested step-1, step-18 and my own code too. The

[deal.II] Re: Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-12 Thread vachan potluri
I have found a few reports of glibc version 2.28 causing such behaviour (e.g. see here). It might be possible that /lib64/ld-linux-x86-64.so.2 on our system "links" to this version of glibc. But it actually is a static library: $ ldd -v ld-linux-x86-64.so.2

[deal.II] Re: Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-12 Thread vachan potluri
This is the full backtrace with gdb.

    (gdb) bt
    #0  __static_initialization_and_destruction_0 (__initialize_p=1, __priority=65535)
        at /home/ComptGasDynLab/vachanpotluri/source/dealii-9.1.1/source/numerics/time_dependent.cc:1196
    #1  0x7fffec1aa6f8 in _GLOBAL__sub_I_time_dependent.cc(void)

[deal.II] Re: Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-11 Thread vachan potluri
Step-1 aborts with Illegal Instruction (core dumped). The error msg gdb prints is the following. Program received signal SIGILL, Illegal instruction. __static_initialization_and_destruction_0 (__initialize_p=1, __priority=65535) at

[deal.II] Re: Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-10 Thread vachan potluri
Ok. After installing newer cmake version and making _lapack_libraries OPTIONAL, LAPACK configuration has gone fine. For PETSc, I did something dirty. I figured that FindPETSC.cmake searches for libraries in a file named petscvariables. I made my own copy of petscvariables file and modified the

[deal.II] Re: Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-09 Thread vachan potluri
> > If only an old version is the problem, I would just go ahead and download > and compile a recent version myself. I never had any issues with that and > should be quite simple. I did this. Indeed the cmake output now prints A library with LAPACK API found. However, the lapack configuration

[deal.II] Re: deal.II installation on cray XC50 giving MPI_VERSION=0.0

2020-02-07 Thread vachan potluri
Dear Prof. Bangerth, Can you attach it to a reply? It would be interesting to see why the > version > detection didn't work. (Although I see that cmake complains that it can't > find > the file, so that is probably the issue. I don't know why it can't find > the > file...) I really

[deal.II] Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-07 Thread vachan potluri
Hello, I am trying to install deal.II on a cray XC50 machine. I had posted a question related to MPI here https://groups.google.com/forum/#!topic/dealii/EJm6ePrI81w. 1. Configuring with MPI was "successful" with the following cmake invocation. cmake -DCMAKE_INSTALL_PREFIX=~/bin/dealii-9.1.1/

[deal.II] Re: deal.II installation on cray XC50 giving MPI_VERSION=0.0

2020-02-06 Thread vachan potluri
> > If you know to which standard the MPI installation is conforming, you > could try to set it via > cmake -DMPI_VERSION=... > yourself. The MPI version is 3.1. But will this be of use? After all, the include paths, linker flags and library variables will still be blank. But separately,

[deal.II] deal.II installation on cray XC50 giving MPI_VERSION=0.0

2020-02-05 Thread vachan potluri
Hello, I am trying to install deal.II on a cray XC50 supercomputer. cmake -DCMAKE_INSTALL_PREFIX=~/bin/dealii-9.1.1 \ -DPREFIX_PATH=/opt/cray/pe \ -DCMAKE_CXX_COMPILER=/opt/cray/pe/craype/2.5.13/bin/CC \ -DWITH_MPI=ON \ -DWITH_PETSC=OFF -DPETSC_DIR=$PETSC_DIR

[deal.II] Re: Is a call to compress() required after scale()?

2019-11-24 Thread vachan potluri
I was able to reproduce this behaviour with the following code (also attached); the CMakeLists file is also attached. The code hangs after printing 'Scaled variable 0'. Let me mention that I have used a different algorithm to obtain locally relevant dofs, rather than directly using the

[deal.II] Is a call to compress() required after scale()?

2019-11-24 Thread vachan potluri
Hello, I am facing a weird problem. At a point in code, I have PETScWrappers::VectorBase::scale() called for few distributed vectors. Subsequently, I have assignment operator on ghosted versions of these vectors for parallel communication. When I launch the code with 2 or 4 processes, it
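A minimal sketch of the pattern described above (deal.II with PETSc; whether the compress() call is actually needed after scale() is exactly the question being asked, so it appears here only as the workaround under test; all names are illustrative):

    #include <deal.II/base/index_set.h>
    #include <deal.II/lac/petsc_vector.h> // petsc_parallel_vector.h in older releases

    void scale_then_update_ghosts(const dealii::IndexSet &locally_owned_dofs,
                                  const dealii::IndexSet &locally_relevant_dofs,
                                  const MPI_Comm          mpi_communicator)
    {
      dealii::PETScWrappers::MPI::Vector variable, factors, ghosted;
      variable.reinit(locally_owned_dofs, mpi_communicator);
      factors.reinit(locally_owned_dofs, mpi_communicator);
      ghosted.reinit(locally_owned_dofs, locally_relevant_dofs, mpi_communicator);

      variable = 1.0;
      factors  = 2.0;

      variable.scale(factors); // element-wise multiplication, purely local
      variable.compress(dealii::VectorOperation::insert); // the workaround in question
      ghosted = variable;      // assignment does the parallel (ghost) update
    }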

[deal.II] Strategy for efficiently calculating face flux in time evolution problems

2019-11-21 Thread vachan potluri
Hello all, This question is a general one, may not be specific just to deal.II. I am writing a code to solve the compressible Navier-Stokes equations. Every time step requires calculation of numerical flux on every face. There can be three cases in a distributed triangulation. I am not

[deal.II] Re: Query regarding DoFTools::dof_indices_with_subdomain_association()

2019-10-09 Thread vachan potluri
Daniel, DoFTools::dof_indices_with_subdomain_association returns the degrees of > freedom of all the cells that have the given subdomain id. For > parallel::distributed::Triangulation objects the subdomain id is the > MPI rank and this is the only valid input. > In this case, the function
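A minimal sketch of calling it for the locally owned subdomain, following Daniel's explanation (assuming deal.II with MPI; names are illustrative):

    #include <deal.II/base/index_set.h>
    #include <deal.II/base/mpi.h>
    #include <deal.II/dofs/dof_handler.h>
    #include <deal.II/dofs/dof_tools.h>

    template <int dim>
    dealii::IndexSet
    dofs_of_my_subdomain(const dealii::DoFHandler<dim> &dof_handler,
                         const MPI_Comm                 mpi_communicator)
    {
      // for parallel::distributed::Triangulation the subdomain id is the MPI rank
      const dealii::types::subdomain_id my_rank =
        dealii::Utilities::MPI::this_mpi_process(mpi_communicator);
      return dealii::DoFTools::dof_indices_with_subdomain_association(dof_handler,
                                                                      my_rank);
    }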

[deal.II] Query regarding DoFTools::dof_indices_with_subdomain_association()

2019-10-09 Thread vachan potluri
Hello all, I am writing an MPI parallel DG code for linear advection equation, just for understanding the parallel programming paradigm of deal.II. To evaluate numerical flux at a face connecting two subdomains, I only require the solution (from a different mpi process) at dofs which lie on

[deal.II] Re: Installation error, unable to configure with p4est

2019-10-05 Thread vachan potluri
> > Yes, you can likely ignore the error. If you really want to run this > quicktest, you can change > make_quicktest("p4est" ${_mybuild} 10) > to > make_quicktest("p4est" ${_mybuild} 4) > in tests/quick_tests/CMakeLists.txt. This works. Thanks.

[deal.II] Re: Installation error, unable to configure with p4est

2019-10-04 Thread vachan potluri
Okay, I found the error. Some time back, I had changed include/deal.II/base/config.h.in to include/deal.II/base/config.h (removed the .in). I don't remember exactly, but the reason I did this was because some error popped up while compiling one of the initial tutorials. This was my mistake.

[deal.II] Re: Installation error, unable to configure with p4est

2019-10-04 Thread vachan potluri
Sorry for incomplete information, cmake exits with the following message.

    ###
    #
    #  deal.II configuration:
    #        CMAKE_BUILD_TYPE:       DebugRelease
    #        BUILD_SHARED_LIBS:      ON
    #        CMAKE_INSTALL_PREFIX:   /home/vachan/bin/dealii
    #        CMAKE_SOURCE_DIR:

[deal.II] Re: Calculation of local flux matrix in DG: looping over a cell's faces

2019-09-24 Thread vachan potluri
Found the face ordering here https://www.dealii.org/current/doxygen/deal.II/structGeometryInfo.html

[deal.II] Calculation of local flux matrix in DG: looping over a cell's faces

2019-09-23 Thread vachan potluri
Hello all, I want to calculate the local flux matrix of a cell (in 2D). The algorithm I thought of is the following:

    Within loop over cells:
        start loop over faces:
            re-initialize fe_face_values for current face
            calculate the 1D mass matrix
            add
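A minimal sketch of that loop structure (assuming deal.II; the quadrature choice is illustrative, and the actual face-matrix computation is left as a comment):

    #include <deal.II/base/geometry_info.h>
    #include <deal.II/base/quadrature_lib.h>
    #include <deal.II/dofs/dof_handler.h>
    #include <deal.II/fe/fe_values.h>

    template <int dim>
    void loop_over_faces(const dealii::DoFHandler<dim> &dof_handler)
    {
      const dealii::QGauss<dim - 1> face_quadrature(2);
      dealii::FEFaceValues<dim>     fe_face_values(dof_handler.get_fe(),
                                                   face_quadrature,
                                                   dealii::update_values |
                                                     dealii::update_JxW_values);

      for (const auto &cell : dof_handler.active_cell_iterators())
        for (unsigned int f = 0;
             f < dealii::GeometryInfo<dim>::faces_per_cell;
             ++f)
          {
            fe_face_values.reinit(cell, f); // re-initialize for the current face
            // ... compute the face ("1D mass") contribution and add it to the
            //     local flux matrix here
          }
    }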

[deal.II] Re: DG explicit time integration for linear advection equation with MeshWorker (suggestions)

2019-09-18 Thread vachan potluri
step-33 does compute interior face integrals twice. One way I handle this is to attach a cell user index Thanks Praveen, this is similar to owner/neighbour concept of OpenFOAM :). Doug, Many thanks for the detailed explanations and your code :).
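A minimal sketch of the cell-user-index idea, visiting each interior face exactly once (assuming deal.II and a serial Triangulation without hanging nodes; names are illustrative):

    #include <deal.II/base/geometry_info.h>
    #include <deal.II/grid/tria.h>

    template <int dim>
    void visit_interior_faces_once(dealii::Triangulation<dim> &triangulation)
    {
      // tag every active cell with a unique index ("owner/neighbour" style)
      unsigned int index = 0;
      for (const auto &cell : triangulation.active_cell_iterators())
        cell->set_user_index(index++);

      for (const auto &cell : triangulation.active_cell_iterators())
        for (unsigned int f = 0;
             f < dealii::GeometryInfo<dim>::faces_per_cell;
             ++f)
          if (!cell->face(f)->at_boundary() &&
              cell->user_index() < cell->neighbor(f)->user_index())
            {
              // compute the flux for this interior face exactly once here
            }
    }

In a distributed triangulation the ghost-cell cases mentioned elsewhere in this thread need extra care; the sketch above ignores them.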

[deal.II] Re: DG explicit time integration for linear advection equation with MeshWorker (suggestions)

2019-09-17 Thread vachan potluri
Doug and Praveen, Thanks for your answers. I had a look at step-33. As far as I understand, although the looping through cells is not through MeshWorker, the assembly is still global! So, for a non-cartesian mesh, I think you are suggesting using such a loop over all cells to calculate local

[deal.II] DG explicit time integration for linear advection equation with MeshWorker (suggestions)

2019-09-17 Thread vachan potluri
Hello all, I am a beginner in dealii. I want to solve a linear, transient advection equation explicitly in two dimensions using DG. The resulting discrete equation will have a mass matrix as the system matrix and a sum of terms which depend on previous solution (multiplied by mass,

[deal.II] Re: Member description of MeshWorker::DoFInfo.matrix

2019-09-16 Thread vachan potluri
Sorry for the late reply. Thank you Prof. Bangerth. On Wednesday, September 4, 2019 at 9:19:42 PM UTC+5:30, vachan potluri wrote: > > Hello, > > I am reading step-12 of the tutorial. The following lines are from local > integrator for interior face (dinfo is an alias to Mesh

[deal.II] Member description of MeshWorker::DoFInfo.matrix

2019-09-04 Thread vachan potluri
Hello, I am reading step-12 of the tutorial. The following lines are from the local integrator for an interior face (dinfo is an alias to MeshWorker::DoFInfo). FullMatrix<double> &u1_v1_matrix = dinfo1.matrix(0, false).matrix; FullMatrix