Re: [deal.II] Getting RHS values at nodes with DBC

2018-08-01 Thread RAJAT ARORA
_owned_dofs.is_element(d) && constraints.is_constrained(d) && !hanging_node_constraints.is_constrained(d)) const double value = unconstrained_rhs_vector(d); Best, Jean-Paul. On 01 Aug 2018, at 05:33, RAJAT ARORA wrote:
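For readers of the archive, the pattern quoted above can be spelled out as a minimal sketch; locally_owned_dofs, constraints, hanging_node_constraints, and unconstrained_rhs_vector are the names used in the thread and are assumed to already exist:

for (dealii::types::global_dof_index d = 0; d < dof_handler.n_dofs(); ++d)
  if (locally_owned_dofs.is_element(d) &&
      constraints.is_constrained(d) &&
      !hanging_node_constraints.is_constrained(d))
    {
      // A Dirichlet-constrained DoF owned by this process: read the value the
      // unconstrained right-hand side carries there.
      const double value = unconstrained_rhs_vector(d);
      // ... use value, e.g. accumulate it into a reaction force ...
    }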

Re: [deal.II] Shape function derivative wrt to a different cell

2018-03-15 Thread RAJAT ARORA
wrote: On 03/07/2018 12:19 PM, RAJAT ARORA wrote: I want to do the following thing: I have a scalar quantity available only at the Gauss points of a cell. (cell = 2d rectangular element) I want to get the derivative of this quantity at the center of

[deal.II] Shape function derivative wrt to a different cell

2018-03-07 Thread RAJAT ARORA
Hello all, I am using deal.ii to solve a 2d solid mechanics problem. I want to do the following thing: I have a scalar quantity available only at the Gauss points of a cell (a 2d rectangular element). I want to get the derivative of this quantity at the center of the cell. One way to do
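A minimal sketch of one way to do this, assuming the Gauss-point data has first been projected or interpolated onto a finite element field stored in a vector called projected_field (dof_handler and fe are the usual objects): build an FEValues object on a one-point quadrature rule placed at the reference-cell center and ask it for the gradient there.

const Quadrature<2> center_quadrature(Point<2>(0.5, 0.5)); // reference cell center
FEValues<2> fe_values(fe, center_quadrature, update_gradients);
std::vector<Tensor<1, 2>> center_gradient(1);

for (const auto &cell : dof_handler.active_cell_iterators())
  {
    fe_values.reinit(cell);
    fe_values.get_function_gradients(projected_field, center_gradient);
    // center_gradient[0] is the gradient of the projected field at the
    // center of this cell.
  }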

Re: [deal.II] Deal.ii installation problem:- Step 40 runtime error

2018-01-23 Thread RAJAT ARORA
On Monday, January 22, 2018 at 11:09:41 PM UTC-5, Wolfgang Bangerth wrote: On 01/22/2018 08:48 AM, RAJAT ARORA wrote: Running with PETSc on 2 MPI rank(s)... Cycle 0: Number of active cells: 1024 Number of degrees of

[deal.II] Deal.ii installation problem:- Step 40 runtime error

2018-01-22 Thread RAJAT ARORA
Hello all, I recently installed deal.ii using candi, but after everything successfully finishes, I am not able to run step-40 on more than 1 processor. For 1 processor, it runs fine. To install everything, I used candi and loaded the following modules. 1) psc_path/1.1 2) slurm/17.02.5

[deal.II] Documenting code for Code gallery

2017-12-19 Thread RAJAT ARORA
Hello all, I have written a code for a small-strain elasto-plastic solid mechanics problem in 2d using deal.ii. I wanted to contribute it to the code gallery and have gone through the instructions given here: http://www.dealii.org/code-gallery.html#instructions I wanted to know how I can document

[deal.II] Re: Moving Mesh when using Fe_Q(p) p>1

2017-12-19 Thread RAJAT ARORA
node and then calculate the Jacobian and other information? Thanks. On Monday, December 11, 2017 at 8:39:20 AM UTC-5, Bruno Turcksin wrote: Hi, On Sunday, December 10, 2017 at 9:55:49 PM UTC-5, RAJAT ARORA wrote: My question is: Is the mesh moveme

[deal.II] Moving Mesh when using Fe_Q(p) p>1

2017-12-10 Thread RAJAT ARORA
Hello all, I am using the following function to move a p::d::triangulation. template <int dim> void FEM<dim>::move_mesh (const TrilinosWrappers::MPI::Vector &) const { std::vector<bool> vertex_touched(triangulation.n_vertices(), false); for (typename DoFHandler<dim>::active_cell_iterator cell =

[deal.II] Re: Moving vertices of parallel triangulation breaks periodic face pair match

2017-12-04 Thread RAJAT ARORA
Hello Sambit, Can you try doing this? Also move the vertices of the ghost cells and avoid calling dftPtr->triangulation.communicate_locally_moved_vertices(locally_owned_vertices); When I tried to use triangulation.communicate_locally_moved_vertices(locally_owned_vertices) last time,
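A hedged sketch of that suggestion, i.e. shifting the vertices of locally owned and ghost cells yourself instead of relying on communicate_locally_moved_vertices(); the helper displacement_of_vertex() is hypothetical and stands for however the new vertex positions are computed:

std::vector<bool> vertex_touched(triangulation.n_vertices(), false);
for (const auto &cell : triangulation.active_cell_iterators())
  if (cell->is_locally_owned() || cell->is_ghost())
    for (unsigned int v = 0; v < GeometryInfo<3>::vertices_per_cell; ++v)
      if (!vertex_touched[cell->vertex_index(v)])
        {
          vertex_touched[cell->vertex_index(v)] = true;
          // shift both owned and ghost vertices so no communication is needed
          cell->vertex(v) += displacement_of_vertex(cell->vertex_index(v));
        }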

[deal.II] New Assert class (Discussion)

2017-12-03 Thread RAJAT ARORA
Hello all, This is more of a discussion than a question. I was wondering if there is a need for a new class which does something like Assert (let's call it myAssert for now). Its function is as defined below. While developing an application code, once I am confident that I am not making a
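A minimal sketch of the idea (not an existing deal.II facility): an application-level assertion macro that forwards to AssertThrow while the code is still being debugged and compiles to nothing once a build flag declares the application trusted.

#include <deal.II/base/exceptions.h>

#ifdef MY_APP_ENABLE_CHECKS
#  define MyAssert(cond, exc) AssertThrow(cond, exc)
#else
#  define MyAssert(cond, exc) \
    do                        \
      {                       \
      }                       \
    while (false)
#endif

// Example use:
// MyAssert(cell->is_locally_owned(), dealii::ExcInternalError());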

[deal.II] compile master without examples using Candi

2017-12-01 Thread RAJAT ARORA
Hello, From one of the posts, I came to know that DEAL_II_COMPILE_EXAMPLES=ON by default for master. I am installing the deal.ii master branch using candi. If I do the build myself, I can pass that flag as OFF during the "cmake ." step, but can anyone please tell me how I can switch it off if I am

[deal.II] Re: Questions related to FE_Nothing

2017-11-19 Thread RAJAT ARORA
triangulation.save() or tria.load() for serialization and deserialization. How can I use checkpoint/restart with a parallel::shared::Triangulation? Thanks. On Friday, November 17, 2017 at 6:38:25 PM UTC-5, RAJAT ARORA wrote: Hello all, I am using deal.ii to solve a system of Hamilton Jacobi

[deal.II] Questions related to FE_Nothing

2017-11-17 Thread RAJAT ARORA
Hello all, I am using deal.ii to solve a system of Hamilton-Jacobi equations. I need to use the FE_Nothing element in a part of my domain. I read a couple of posts in this forum regarding this, but I have some questions. I would appreciate it if anyone could answer these. Since I need to use MPI, and I
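For context, the usual way to restrict a field to only part of the domain is an hp::FECollection containing the real element and FE_Nothing, as in step-46; a minimal serial sketch (the predicate cell_is_active() is hypothetical):

#include <deal.II/fe/fe_nothing.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/hp/dof_handler.h>
#include <deal.II/hp/fe_collection.h>

hp::FECollection<3> fe_collection;
fe_collection.push_back(FE_Q<3>(1));      // used where the equations live
fe_collection.push_back(FE_Nothing<3>()); // used where no DoFs are wanted

hp::DoFHandler<3> dof_handler(triangulation);
for (const auto &cell : dof_handler.active_cell_iterators())
  cell->set_active_fe_index(cell_is_active(cell) ? 0 : 1);
dof_handler.distribute_dofs(fe_collection);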

Re: [deal.II] adaptive mesh refinement doubts

2017-10-01 Thread RAJAT ARORA
Hello Professor, Thanks a lot for your earlier answers. I implemented all the details and it works like a charm. I still have a few questions that I want to ask; along with the questions, I also suggest possible solutions. Sorry for being too verbose. Also, I am not sure if this should be a separate post

[deal.II] adaptive mesh refinement doubts

2017-09-27 Thread RAJAT ARORA
Hello everyone, I am using deal.ii to solve a 3D solid mechanics problem. My code uses parallel::distributed::Triangulation and the PETSc wrappers. Until now, the code does not support AMR; however, I want to implement it in the code. For the last couple of days, I have been looking at the example codes
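For reference, an outline of one refinement cycle with parallel::distributed::Triangulation and the PETSc wrappers, loosely following step-40; this is only a sketch and the vector names (ghosted_solution, etc.) are placeholders:

parallel::distributed::SolutionTransfer<3, PETScWrappers::MPI::Vector>
  solution_transfer(dof_handler);

// ... set refine/coarsen flags, e.g. from a KellyErrorEstimator ...
triangulation.prepare_coarsening_and_refinement();
solution_transfer.prepare_for_coarsening_and_refinement(ghosted_solution); // ghosted input
triangulation.execute_coarsening_and_refinement();

dof_handler.distribute_dofs(fe);
// ... rebuild index sets, constraints, sparsity pattern, and matrices ...

PETScWrappers::MPI::Vector interpolated_solution(locally_owned_dofs,
                                                 mpi_communicator);
solution_transfer.interpolate(interpolated_solution);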

[deal.II] Line Integral of function

2017-08-10 Thread RAJAT ARORA
Hello everyone, I am using deal.ii to solve a 3D solid mechanics problem. My code uses PETSc and p4est. The geometry is a cubical box [0,1] X [0,1] X [0,1] with a uniform rectilinear mesh in the domain. I have obtained the scalar solution vector u in the domain. I need to find the line
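A minimal sketch of one way to compute such a line integral, assuming the segment runs from point a to point b and that ghosted_solution is a ghosted solution vector: evaluate the field with Functions::FEFieldFunction at the points of a 1d quadrature rule mapped onto the segment and sum up. (In a parallel run each point must lie in a cell the evaluating process knows about, otherwise FEFieldFunction throws ExcPointNotAvailableHere.)

Functions::FEFieldFunction<3, DoFHandler<3>, PETScWrappers::MPI::Vector>
  solution_function(dof_handler, ghosted_solution);

const Point<3> a(0.0, 0.5, 0.5), b(1.0, 0.5, 0.5); // example end points
const QGauss<1> line_quadrature(4);

double line_integral = 0.0;
for (unsigned int q = 0; q < line_quadrature.size(); ++q)
  {
    const double s = line_quadrature.point(q)[0];   // in [0,1]
    const Point<3> x = a + s * (b - a);              // point on the segment
    line_integral += solution_function.value(x) * line_quadrature.weight(q);
  }
line_integral *= (b - a).norm(); // account for the length of the segment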

Re: [deal.II] fixing one component of solution to the same value

2017-04-20 Thread RAJAT ARORA
Hello Daniel, Thank you so much for pointing this out. I have been stuck here for a long time and could not figure it out. I understood that the sparsity pattern class was somehow failing to know that space needs to be allocated for coupling with *the_dof even though it was not locally
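One way to make such a single special DoF visible on every process, so that constraints and the sparsity pattern can couple to it, is to add it to the locally relevant index set by hand; a hedged sketch (the_dof stands for the global index in question):

IndexSet locally_relevant_dofs;
DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
locally_relevant_dofs.add_index(the_dof); // make the special DoF relevant everywhere

ConstraintMatrix constraints;
constraints.reinit(locally_relevant_dofs);
// ... add the constraints coupling other DoFs to the_dof, then close() ...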

Re: [deal.II] fixing one component of solution to the same value

2017-04-20 Thread RAJAT ARORA
Hello Daniel, Thanks for the reply. I am actually doing what you suggested. Please look at the updated code that I posted earlier, which works with deal.ii 8.5. These lines are already there in the code. Surprisingly, such an error still shows up. I am not sure what is still causing

Re: [deal.II] fixing one component of solution to the same value

2017-04-17 Thread RAJAT ARORA
Hello Professor, I am now using deal.ii 8.5 and I am attaching the updated code. The error message still remains the same. On Monday, April 17, 2017 at 4:43:35 PM UTC-4, RAJAT ARORA wrote: Hello Professor, I have created a test case to show this weird behaviour (b

Re: [deal.II] Deal.ii installation Error

2017-04-17 Thread RAJAT ARORA
n MPI and compiler you can try? We have had many problems with Intel compilers in the past, so I would try gcc with MPICH or OpenMPI unless you are sure of what you are doing. On Sat, Apr 15, 2017 at 5:19 PM, RAJAT ARORA <rajat.a...@gmail.com> wrote:

Re: [deal.II] fixing one component of solution to the same value

2017-04-12 Thread RAJAT ARORA
the same element. DoF 6 is for the x component and DoF 2249 is for the z component. Can you please guide me as to what is wrong and how I should proceed? Thanks. On Thursday, March 23, 2017 at 2:20:58 PM UTC-4, RAJAT ARORA wrote: Thanks Professor. On Wednesday, March 15, 2017

Re: [deal.II] Fe_values->shape_grad() wrt to reference mesh

2017-04-01 Thread RAJAT ARORA
Hello Professor, The error was resolved when I used the template parameter "PETScWrappers::Vector". Thanks a lot for the help. I understand that this was not the most elegant way. To clarify, I had put the declaration in the header file of the class so that I can be sure that the vector is not

Re: [deal.II] Fe_values->shape_grad() wrt to reference mesh

2017-04-01 Thread RAJAT ARORA
Type’ was not declared in this scope. On Friday, March 31, 2017 at 11:15:54 PM UTC-4, RAJAT ARORA wrote: Hello, Since I am moving the mesh physically, the current coordinates are of the current configuration. I am trying to make a new fe_values object with mapping

Re: [deal.II] Fe_values->shape_grad() wrt to reference mesh

2017-03-31 Thread RAJAT ARORA
Hello, Since I am moving the mesh physically, the current coordinates are those of the current configuration. I am trying to make a new FEValues object with a mapping, as shown below, but I am getting an error: An error occurred in line <114> of file in function virtual

Re: [deal.II] fixing one component of solution to the same value

2017-03-23 Thread RAJAT ARORA
Thanks Professor. On Wednesday, March 15, 2017 at 3:35:42 PM UTC-4, Wolfgang Bangerth wrote: I am using deal.ii to solve a 3D solid mechanics problem which uses p::d::triangulation. I am solving equilibrium equations to solve for the displacement in the domain. The

[deal.II] fixing one component of solution to the same value

2017-03-14 Thread RAJAT ARORA
Hello all, I am using deal.ii to solve a 3D solid mechanics problem which uses p::d::triangulation. I am solving equilibrium equations to solve for the displacement in the domain. The domain of the problem is a cylinder with the z-axis aligned along the axis of the cylinder. To avoid the

[deal.II] Re: Using constraints for already assembled RHS

2017-03-13 Thread RAJAT ARORA
system_rhs); } On Monday, September 12, 2016 at 1:35:52 PM UTC-4, RAJAT ARORA wrote: Hi Daniel, Thanks for the reply. This indeed is one way of doing it. I was wondering if there is any other way in which this can be done. Than

[deal.II] Question regarding P:D:Tria

2017-03-04 Thread RAJAT ARORA
Hello all, I am using deal.ii along with PetSc and P4est to solve a 3D solid mechanics problem. I am using the GridGenerator::hyper_rectangle function to create a rectangular grid with 1000 X 500 X 1 (0.5 million) elements. My question is: - since the coarsest mesh contains 0.5 million

[deal.II] Re: How to output a single scalar in a parallel code

2017-02-23 Thread RAJAT ARORA
Hello, Please look at the solution by Timo. In your case, you can do something like this: if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0) { std::ofstream myfile; myfile.open ("resultant_stress.txt"); myfile << resultant_stress << std::endl; myfile.close(); }

[deal.II] Re: How to output a single scalar in a parallel code

2017-02-22 Thread RAJAT ARORA
Hello, I am solving a similar problem. What I do is sum the contributions on each processor and then use Utilities::MPI::sum() to get the overall force, which I then write to a file on the master process. For the contribution of a single cell, you have to integrate stress X normal X
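A short sketch of that approach; local_force stands for whatever the local integration over the boundary faces gave on this process:

const double total_force = Utilities::MPI::sum(local_force, mpi_communicator);

if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
  {
    std::ofstream out("resultant_force.txt", std::ios::app);
    out << total_force << std::endl;
  }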

[deal.II] Query using hdf5 with deal.ii

2017-02-14 Thread RAJAT ARORA
that contains the data from all the Gauss points spread over multiple processors? Or can this data be included while doing triangulation.save(), which saves the triangulation and the data associated with it using the parallel SolutionTransfer class? Thanks for the help. -Rajat Arora PhD. Stud

Re: [deal.II] Problem with MUMPS Solver

2017-02-14 Thread RAJAT ARORA
Hello Timo, Thanks for your detailed reply giving many possible reasons. There are no constraints on the equation. It is a first order time dependent problem, so there are just initial conditions that are taken care of separately. The code is for 3D brick elements with 9 dofs per node. I just

[deal.II] Problem with MUMPS Solver

2017-02-12 Thread RAJAT ARORA
Hello all, I am using deal.ii to solve a 3D solid mechanics problem along with PETSc and p4est. The problem has 9 DoFs per node. I run my code on Stampede. This error may or may not be directly related to an issue with deal.ii (maybe the sparsity pattern?) or its usage. But, given a diverse

Re: [deal.II] Doubt with set_boundary_id()

2016-12-09 Thread RAJAT ARORA
Hello Professor, The problem was because of the mesh. I regenerated the mesh, and this time no such issue arises. I think the options used to export and generate the mesh earlier were wrong. Thanks for your help again. On Thursday, December 8, 2016 at 11:26:49 PM UTC-5, RAJAT ARORA

Re: [deal.II] Doubt with set_boundary_id()

2016-12-08 Thread RAJAT ARORA
VectorTools::compute_no_normal_flux_constraints<3, dealii::DoFHandler, 3>(dealii::DoFHandler<3, 3> const&, unsigned int, std::set, std::allocator > const&, dealii::ConstraintMatrix&, dealii::Mapping<3, 3> const&) #2 ./fdm: BugFinder<3>::BugFinder(int) #3 ./fd

Re: [deal.II] Help with triangulation::face_iterator

2016-11-27 Thread RAJAT ARORA
Thank you Professor. That worked, but I am not sure why a reference didn't work while a const reference did. On Tuesday, November 22, 2016 at 7:31:47 PM UTC-5, Wolfgang Bangerth wrote: Rajat, I don't know the exact cause of the problem, but... Can we pass

[deal.II] Re: Move vertices in P::D::Triangulation

2016-11-22 Thread RAJAT ARORA
Thanks Daniel, I implemented this and it solved the issue that I was having. Just a small question: instead of using Triangulation::communicate_locally_moved_vertices, there is no harm in moving all locally owned cells and ghost cells oneself, right? On Thursday, November 17, 2016 at

[deal.II] Move vertices in P::D::Triangulation

2016-11-16 Thread RAJAT ARORA
Hello all, I am using parallel::distributed::Triangulation, p4est, and PETSc to solve a 3D solid mechanics problem using deal.ii. I am moving my mesh after every time step using the code written below, and I have 2 questions regarding this. template <int dim> void PlasticityContactProblem<dim>::move_mesh
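Since the function is cut off above, here is a hedged reconstruction of the usual step-18 style vertex move for orientation; the member names are assumptions, and displacement is assumed to be a vector whose ghost entries are readable on this process:

template <int dim>
void PlasticityContactProblem<dim>::move_mesh(
  const TrilinosWrappers::MPI::Vector &displacement) const
{
  std::vector<bool> vertex_touched(triangulation.n_vertices(), false);
  for (const auto &cell : dof_handler.active_cell_iterators())
    for (unsigned int v = 0; v < GeometryInfo<dim>::vertices_per_cell; ++v)
      if (!vertex_touched[cell->vertex_index(v)])
        {
          vertex_touched[cell->vertex_index(v)] = true;
          Tensor<1, dim> shift;
          for (unsigned int d = 0; d < dim; ++d)
            shift[d] = displacement(cell->vertex_dof_index(v, d));
          cell->vertex(v) += shift; // move this vertex by the nodal displacement
        }
}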

Re: [deal.II] Multiplie mpi instances of code

2016-10-28 Thread RAJAT ARORA
I am surprised that it is not working. I can't recall what has changed; I don't remember installing any new libraries, and it was working until Monday. Also, I have installed PETSc with the --download-mpich flag and do not have any other MPI installation. And, more importantly, it was working until Monday.

[deal.II] Multiplie mpi instances of code

2016-10-28 Thread RAJAT ARORA
Hello all, I am working on a 3D solid mechanics problem using deal.ii. To run the code on n processes, I used to run the command mpirun -np <n> ./<program>. But now, when I run it using this command, it runs n programs with n mpi processes each. It is like I have executed the mpirun -np <n> ./<program> command n times

[deal.II] Doubt with data output in parallel

2016-09-13 Thread RAJAT ARORA
Hello all, I have a small question related to data output in parallel. My program uses parallel::distributed::Triangulation. If I want to output a data array associated with each cell, which one of the following is the correct way of doing it, and why? (This cell data type vector is to be viewed
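For reference, one common pattern for writing a per-cell array with a distributed triangulation — a sketch assuming cell_data is a dealii::Vector<double> with one entry per active cell:

DataOut<3> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(cell_data, "cell_data",
                         DataOut<3>::type_cell_data); // interpret as cell data
data_out.build_patches();
data_out.write_vtu_in_parallel("cell_data.vtu", mpi_communicator);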

Re: [deal.II] Applying Multi Point Constraints

2016-09-07 Thread RAJAT ARORA
Respected Prof. Wolfgang, I really appreciate your taking the time to reply. Thank you very much. Yes, by middle element I mean the central layer of the mesh. And yes, you are right; it turns out this is the problem. I was creating a hyper_rectangle using the GridGenerator class. Then 4 and 5

[deal.II] Applying Multi Point Constraints

2016-09-07 Thread RAJAT ARORA
Hello all, I am trying to apply multi-point constraints to a 3D solid mechanics problem. My code uses p4est and PETSc. Let's say I have a mesh with 10 X 10 X 3 elements. I want to apply constraints such that all the dofs on the +z surface of the middle element are constrained to be equal to

[deal.II] Doubt regarding constraints

2016-09-05 Thread RAJAT ARORA
Hello all, I have a question regarding the use of constraints in parallel programming. Suppose I want to constrain dof1, owned by processor 1, to be equal to dof2, owned by processor 2. I can do that by calling ConstraintMatrix constraints; constraints.reinit(locally_relevant_dofs);
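For completeness, tying one DoF to another then looks like the sketch below; both indices must be contained in locally_relevant_dofs on every process that adds the constraint:

ConstraintMatrix constraints;
constraints.reinit(locally_relevant_dofs);
constraints.add_line(dof1);             // dof1 becomes the constrained DoF ...
constraints.add_entry(dof1, dof2, 1.0); // ... equal to 1.0 * dof2
constraints.close();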

[deal.II] Re: PETScWrappers::SparseDirectMUMPS solver error

2016-07-13 Thread RAJAT ARORA
, Denis Davydov wrote: Hi Rajat, On Tuesday, July 12, 2016 at 8:37:11 AM UTC+2, RAJAT ARORA wrote: Hello Jean, Thanks for the reply. I understand your concern about the lack of proper information, but this is all I have.

[deal.II] Re: PETScWrappers::SparseDirectMUMPS solver error

2016-07-12 Thread RAJAT ARORA
're using quite an old version of deal.II, and the root of the problem may have already been fixed in a later version. Regards, Jean-Paul. On Tuesday, July 12, 2016 at 3:38:54 AM UTC+2, RAJAT ARORA wrote: Hello all, I am re

[deal.II] PETScWrappers::SparseDirectMUMPS solver error

2016-07-11 Thread RAJAT ARORA
Hello all, I have recently been encountering an issue with the MUMPS solver. The error is coming from the dealii::PETScWrappers::SparseDirectMUMPS class in these lines. dealii::PETScWrappers::SparseDirectMUMPS solver(solver_control, mpi_communicator); solver.solve (system_matrix,

[deal.II] PETScWrappers::SparseDirectMUMPS

2016-07-11 Thread RAJAT ARORA
Hello all, I have recently been encountering an issue with the MUMPS solver. The error is coming from the dealii::PETScWrappers::SparseDirectMUMPS class in these lines. dealii::PETScWrappers::SparseDirectMUMPS solver(solver_control, mpi_communicator); solver.solve (system_matrix,

[deal.II] Triangulation.save() not working as expected

2016-07-08 Thread RAJAT ARORA
Hello all, I am observing some strange behavior with the parallel::distributed::Triangulation::save(const char *) function. I make a triangulation, move its vertices, and then save it to a file. After saving the triangulation, I load it in a different function. However, this loaded

[deal.II] Re: Error durind checkpoint / restart using parallel distributed solution transfer

2016-07-07 Thread RAJAT ARORA
you found the issue, and this will be helpful to know. Perhaps we can mention/reinforce it in the documentation. Regards, J-P. On Sunday, July 3, 2016 at 9:22:06 PM UTC+2, RAJAT ARORA wrote: Hello, This is because of the ve

[deal.II] Re: Error durind checkpoint / restart using parallel distributed solution transfer

2016-07-03 Thread RAJAT ARORA
Hello, This is because of the vector locally_owned_x. As per the documentation, the vector should be ghosted during serialization. If I make that change, the code works fine. On Sunday, July 3, 2016 at 2:52:57 PM UTC-4, RAJAT ARORA wrote: Hello all, I am t
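A short sketch of the fix described above, i.e. handing a ghosted vector to the solution transfer before saving (the vector and object names are placeholders):

PETScWrappers::MPI::Vector ghosted_x(locally_owned_dofs,
                                     locally_relevant_dofs,
                                     mpi_communicator);
ghosted_x = locally_owned_x; // import the ghost values

parallel::distributed::SolutionTransfer<3, PETScWrappers::MPI::Vector>
  solution_transfer(dof_handler);
solution_transfer.prepare_serialization(ghosted_x);
triangulation.save("checkpoint.mesh");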

[deal.II] Error durind checkpoint / restart using parallel distributed solution transfer

2016-07-03 Thread RAJAT ARORA
Hello all, I am trying to use the checkpoint / restart code with the help of the example given in the documentation here I am getting a long error while saving a parallel vector and parallel

[deal.II] Restoring Gauss Point History during checkpoint/restart

2016-07-03 Thread RAJAT ARORA
and read it during a checkpoint/restart. I am using a parallel distributed triangulation. The quadrature point history is a std::vector<> of a struct PointHistory, just like in step-18 <https://dealii.org/8.4.1/doxygen/deal.II/step_18.html>. Is there any way to do this? Regards, Rajat Arora

Re: [deal.II] About MPI vectors, ghosted vectors

2016-07-03 Thread RAJAT ARORA
Hi Ehsan, On Wednesday, June 29, 2016 at 3:28:40 PM UTC-4, Ehsan Esfahani wrote: Dear users, I'm asking my question here because I think it's somehow related to this post, and I don't think it would be necessary to open a new topic. Could you please tell me what the differences are
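As a quick illustration of the difference discussed in that thread, using the PETSc wrappers: a purely distributed vector is writable but stores only locally owned entries, while a ghosted vector additionally stores read-only copies of the locally relevant (ghost) entries.

PETScWrappers::MPI::Vector distributed_v(locally_owned_dofs, mpi_communicator);
PETScWrappers::MPI::Vector ghosted_v(locally_owned_dofs,
                                     locally_relevant_dofs,
                                     mpi_communicator);

// assemble or solve into distributed_v, then make ghost entries available:
ghosted_v = distributed_v;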

[deal.II] Re: Doubt in FEvalues

2016-06-01 Thread RAJAT ARORA
f here <https://github.com/dealii/dealii/issues/2661>. So, in summary, you'll just have to wait for a bit until the problem is fixed, or use at least one level of global refinement :-) J-P. On Wednesday, June 1, 2016 at 2:04:19 AM UTC+2, RAJAT ARORA wrote:

[deal.II] Re: Doubt in FEvalues

2016-05-31 Thread RAJAT ARORA
lated: 1 However, once you perform some refinement then this error would be picked up in debug mode (are you running your tests in debug mode?). Does that help? If you solve the issue, then I'd be interested to know what the problem was. J-P. On Tues