You are proposing
> return _var_first_local_df[2*var]
and
> return _var_first_local_df[2*var+1]
??
That is not consistent with the way the bounds are stored. As I mentioned,
consider the three-variable system with unknowns (u,v,w). For simplicity
say there are 150 dofs total, split 50-50-50
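To make the layout concrete, here is a tiny standalone illustration (not the actual DofMap code; the numbers are just the hypothetical 50-50-50 split):

#include <vector>
#include <cassert>

int main()
{
  // [u_begin, v_begin, w_begin, end] == [0, 50, 100, 150]
  std::vector<unsigned int> var_first_local_df;
  var_first_local_df.push_back(0);    // first u dof
  var_first_local_df.push_back(50);   // first v dof
  var_first_local_df.push_back(100);  // first w dof
  var_first_local_df.push_back(150);  // one past the last dof

  // The i-th local dof of variable 'var' is var_first_local_df[var] + i,
  // not 2*var or 2*var+1.
  const unsigned int var = 1, i = 7;
  assert (var_first_local_df[var] + i == 57);

  // The number of local dofs of 'var' is the difference of consecutive entries.
  assert (var_first_local_df[var+1] - var_first_local_df[var] == 50);

  return 0;
}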
> BTW, there is a bug in dof_map.C, which I checked out from svn,
> around line 915,
>
>>> if (var == n_vars-1)
> _var_first_local_df.push_back(next_free_dof);
>
> _var_first_local_df (with n_vars+1 entries) records the variable offsets, like
> [u_begin, v_begin, w_begin, p_begin, p_end],
> but without "if (var == n
> Not really a problem, but does
>
> MeshBase::n_subdomains()
>
> affect the mesh partition later on or anything?
> Would it cause any problems
> if the corresponding variables of
>
> ExodusII::get_num_elem_blk() & MeshBase::n_subdomains()
>
> were set equal?
That should not cause any problem.
> Working w/ the latest tag libMesh-0.6.3
>
> Shouldn't
>
> ExodusII::get_num_elem_blk()
>
> &
>
> MeshBase::n_subdomains()
>
> return the same value?
> or do they represent different things?
Exodus blocks are used solely to group elements of like type, not like
materials (which is how I
> You could also use Exodus... I modified the Exodus writer to correctly
> write out the boundary ids which you should then be able to see in
> Paraview (I can definitely see them with Ensight...).
>
>> I've written a little tool to twiddle boundary_info ids based on point
>> location and face norm
>> Fine with me as long as mesh.partition(n) still does what it always has.
>
> Two changes for simplicity's sake on the ParallelMesh behavior:
>
> ParMETIS instead of METIS is now the default even when the
> ParallelMesh is still serialized.
>
> The ParmetisPartitioner object doesn't get destro
>> What happens if you change the loop to
>>
>> MeshBase::const_element_iterator el =
>> mesh.active_local_elements_begin();
>> const MeshBase::const_element_iterator end_el =
>> mesh.active_local_elements_end();
>>
>>
>> Note that in the code you sent you are assembling t
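For reference, a minimal sketch of the suggested loop (assuming a MeshBase named mesh is in scope; the body is a placeholder for your element assembly):

MeshBase::const_element_iterator       el     = mesh.active_local_elements_begin();
const MeshBase::const_element_iterator end_el = mesh.active_local_elements_end();

for ( ; el != end_el; ++el)
  {
    const Elem* elem = *el;

    // ... assemble and insert the contribution of 'elem' here ...
  }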
> On the website, the section "Linking With Your Application" explains
> how to build an executable with :
>
> c++ -o foo foo.C `libmesh-config --cxxflags --include --ldflags`
>
> This is fine on my installation.
>
> My question, surely trivial, is : How to build a libmesh static
> library to ca
>> I don't seem to see the attachment... But I have to ask:
>>
>> If the assembly function was taken from example 3 (2 does no assembly(?)),
>> are you constraining the hanging degrees of freedom?
>
> That shouldn't be the problem, if he's just doing uniform refinement.
Roy's right, that's not
I don't seem to see the attachment... But I have to ask:
If the assembly function was taken from example 3 (2 does no assembly(?)),
are you constraining the hanging degrees of freedom? That is, is there a
line like
// We have now built the element matrix and RHS vector in terms
//
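For reference (the example's comment is truncated above), the line in question is the hanging-node constraint call that the AMR examples place right before inserting the element contribution into the global system:

dof_map.constrain_element_matrix_and_vector (Ke, Fe, dof_indices);

system.matrix->add_matrix (Ke, dof_indices);
system.rhs->add_vector    (Fe, dof_indices);

Without the constrain call, uniform-refinement results will still be fine (no hanging nodes), but adaptive refinement can leave the solution discontinuous at the hanging nodes.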
Take a look at 'src/apps/mesh_tool.cc' -
The '-b' command-line option should do exactly what you want.
You can either try using meshtool or extract the relevant code for yourself.
-Ben
On 11/19/08 9:17 AM, "Vijay S. Mahadevan" <[EMAIL PROTECTED]> wrote:
> Hi guys,
>
> I need to extract a 1-
(I'm copying this to the -devel list, feel free to take -users off any reply
to keep its traffic down)
> Okay, so dxyz seems to be appropriate for storing the metric - that
> problem is solved then. Now, one last thing: All DOFs are at the
> vertices (and we have only one DOF per vertex). Ie the g
>> Compiling C++ (in optimized mode) src/mesh/nemesis_io.C...
>> src/mesh/nemesis_io.C: In member function 'virtual void
>> Nemesis_IO::read(const std::string&)':
>> src/mesh/nemesis_io.C:254: error: reference to 'uint' is ambiguous
>> /usr/include/sys/types.h:153: error: candidates are: typedef un
>> If this makes sense, I think I have a nice place to start working on this.
>> And as long as coarsen and refine are pseudo-inverse operations,
>
> Hmm... they still may not be. If you do a System::reinit() after
> coarsen/refine, the Mesh will also be repartitioned, and I don't know
> that our
>>> For example when I have an unstructured grid, and when I coarsen
>>> uniformly twice and refine uniformly twice, would I get the exact
>>> same mesh?!
One issue is we cannot coarsen below the initial, level-0 mesh.
So if you start with a mesh, uniformly refine twice, and then uniformly
coarse
>> Of course, you would also have to propagate the mesh_data (Or again,
>> am I the only one using this?) which stores per-element material
>> information, or create a view of it also.
>
> You may be the only one using this. ;-) But yes, we'd want to handle
> its restriction as well.
If your primar
>> I was just thinking about the dof indexing too...
>>
>> What about adding a system for each mg level?
>
> Oh, you don't mean adding a System, you mean adding a system index to
> each DofObject. That would certainly make space for the indexing, and
> it would probably make the indexing proces
> But you don't necessarily have to put the exact _right_ thing either.
> There is some grey area here... and what works for one system of equations
> won't work for another. In general, if you put something resembling the
> true jacobian in there... it will greatly help your linear solves.
>> Declare a 'geometry system' which uses some C1 fe basis (clough-tocher
>> does come to mind...).
>
> Clough-Tocher may not be ideal. Since they're not h-hierarchic, you
> can't refine without (very slightly) changing the result. That's not
> a problem for my applications but it may be for thi
>> Recently, subdivision surfaces were suggested as an alternative way to
>> construct C1 (and higher) conforming surface meshes for finite element
>> simulations.
>
> Interesting. I've heard of subdivision elements being used for
> surface mesh refinement, but in a context where the subdivisio
> A clarification regarding what you said:
>
>> The DofMap will add all face neighbor degrees of freedom to an element's
>> coupled dofs if
>>
>> (1) all the variables in the system are discontinuous, or
>> (2) the command line option '--implicit_neighbor_dofs' is specified.
>
> In my system, al
> I'm not sure if this is a stupid question but I'm not entirely sure as
> to how you would find the nonzeros per row in a DG discretization.
> This is not obvious to me since in, say, an advection system, the flux
> at the boundaries couples the dofs in one cell to the next. And hence,
> the dof_map
> I reinit a fe object with only the same elem kind. Hence, that aspect
> of reallocating memory space due to changing elem types does not seem
> to be a problem.
What I was specifically asking is where code like
AutoPtr<FEBase> fe (FEBase::build(dim, fe_type));
sits.
The issue is if it is inside your
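If it does sit inside your element loop (or inside an assembly routine that is called very often), the usual pattern is to build the FE object once and only reinit() it per element. A rough sketch, assuming the pre-1.0 header names:

#include "fe_base.h"
#include "quadrature_gauss.h"
#include "mesh_base.h"
#include "elem.h"

void assemble_sketch (const MeshBase& mesh,
                      const unsigned int dim,
                      const FEType& fe_type)
{
  // Build the FE object and quadrature rule once, outside the loop.
  AutoPtr<FEBase> fe (FEBase::build(dim, fe_type));
  QGauss qrule (dim, fe_type.default_quadrature_order());
  fe->attach_quadrature_rule (&qrule);

  MeshBase::const_element_iterator       el     = mesh.active_local_elements_begin();
  const MeshBase::const_element_iterator end_el = mesh.active_local_elements_end();

  for ( ; el != end_el; ++el)
    {
      fe->reinit (*el);   // cheap per-element update, no re-allocation
      // ... quadrature-point work here ...
    }
}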
> In reality, none of us are that diligent, and if we're going to
> be lazy anyway we ought to make laziness more convenient. ;-
Agree 100%.
-Ben
>> Is there any reason why the AutoPtr class does not have a copy
>> constructor? I would like to create an stl vector of AutoPtr
>>
>> vector< AutoPtr > > local_solution_history;
>>
>> to store the entire solution history.
>
> Unfortunately you can never ever have a container of AutoPtrs. Yo
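One common workaround (not from the original reply, and the stored type is my guess) is to keep plain pointers and manage the lifetime yourself, e.g. by cloning each solution vector:

// AutoPtr, like std::auto_ptr, transfers ownership on copy, so it cannot
// satisfy the STL container requirements.  Store raw pointers instead:
std::vector<NumericVector<Number>*> local_solution_history;

// push a deep copy of the current solution
local_solution_history.push_back (system.solution->clone().release());

// ... and delete them when you are done:
for (std::size_t i = 0; i != local_solution_history.size(); ++i)
  delete local_solution_history[i];
local_solution_history.clear();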
>> Running with 1 CPU/node will hopefully perform better
>> since you are not sharing a gigE connection between processors.
>
> I'm not very familiar with the hardware part, but the lspci output
> looks like I have *two* GigE devices, so they are not shared between
> the two processors, right?
T
Sure. I'll take a look at it this afternoon.
On 9/9/08 9:46 AM, "Roy Stogner" <[EMAIL PROTECTED]> wrote:
>
>
> On Tue, 9 Sep 2008, Benjamin Kirk wrote:
>
>> That is my thinking. The System::project_vector() code does some all-to-all
>> comm
> Compute node and head node give exactly the same output. So does this
> mean I have a very slow interconnect, and is this the reason for the
> bad scalability?
That is my thinking. The System::project_vector() code does some all-to-all
communication, and this seems to be scaling quite badly as
> There are about 120 nodes with 2 CPUs each. Please find attached the
> content of /proc/cpuinfo of one of these nodes (should be typical for
> all of them). When I run with n CPUs, I usually mean that I run on
> n/2 nodes using both CPUs each (although there is also the possibility
> to use one
>> On linux, lspci will tell you something about the hardware connected
>> to the PCI bus. This may list the interconnect device(s).
>
> lspci seems not to be installed on that machine, although it is linux.
Try /sbin/lspci - there is a good chance /sbin is not in your path.
-Ben
>> So the project_vector() performance went from 168-179 sec before the patch to
>> 134-148 sec after the patch... but the total time used only went down by
>> about 3 seconds, not 30, because apparently "All" started using up the
>> remainder?
>
> Very strange, really. The application was defini
> I noticed something very reminiscent of this just two days ago. In my case
> I run a transient solution to steady-state and then stop the simulation.
>
> I then re-read this result, refine the mesh, project the solution, and
> re-converge on the refined mesh.
>
> I can't quantify it at the mom
Tim,
How many variables and vectors are in your system?
-Ben
On 9/5/08 9:42 AM, "Tim Kroeger" <[EMAIL PROTECTED]> wrote:
> Dear Roy,
>
> On Thu, 4 Sep 2008, Roy Stogner wrote:
>
>>> I see, you are also calling serial vectors "global vectors" now.
>>
>> Just one subset of serial vectors: tho
I have just caught up to speed with the mailing list after a few
distractions.
I noticed something very reminiscent of this just two days ago. In my case
I run a transient solution to steady-state and then stop the simulation.
I then re-read this result, refine the mesh, project the solution,
> Is there a way to tell Libmesh that certain variables are neither
> time-evolving nor constraints, so that they are not accounted for in the
> factorization? So I would start with a FEMsystem with all of the tensor
> components, eigenvalues, and eigenvector components in the same system.
Since
> Anyone have an example showing how to properly use a CouplingMatrix?
No current example, but I can describe it pretty easily...
> I have several variables in my system, but I only want to allocate the
> diagonal blocks (ie, solve the system completely decoupled). It appears that
> the CouplingMa
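A rough sketch of the setup (not taken from an actual example, and the way the matrix is handed to the DofMap is my assumption based on its public _dof_coupling member):

// One row/column per variable; a nonzero entry (i,j) means variable i's
// dofs couple to variable j's dofs on the same element.
CouplingMatrix cm (system.n_vars());

for (unsigned int v=0; v<system.n_vars(); v++)
  cm(v,v) = 1;                              // diagonal blocks only

system.get_dof_map()._dof_coupling = &cm;   // attach before the system is initialized

The CouplingMatrix must outlive the DofMap's sparsity computation, so keep it somewhere persistent rather than on the stack of a short-lived function.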
The attachment was stripped, you can grab it here.
http://www.cfdlab.ae.utexas.edu/~benkirk/ex9.supg_patch
On 8/15/08 11:14 AM, "Benjamin Kirk" <[EMAIL PROTECTED]> wrote:
> There was a previous question as to how one might perform upwind
> stabilization in libMesh.
There was a previous question as to how one might perform upwind
stabilization in libMesh. One such approach, as demonstrated in the
attached patch to ex9, is the streamline-upwind Petrov-Galerkin (SUPG)
method. In this approach the test function is biased in the upwind
direction, thereby "upwind
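For anyone who has not seen SUPG before, the biased ("upwinded") test function has the standard textbook form (this is not quoted from the patch itself):

\tilde{v} = v + \tau_e \, (\mathbf{a} \cdot \nabla v)

where \mathbf{a} is the convection velocity and \tau_e is an element-wise stabilization parameter; the extra term weights residual contributions more heavily on the upwind side of each element.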
> Maybe something went wrong in my local install?
> w/ the system gcc there is no problem.
Certainly it seems something is off...
In any case, the other approach I would call more of a hack, but it will
work...
Edit your Make.common & Make.common.in, and change line 30
From
libmesh_LDFLAGS
> prophecy$ $GCC_HOME/bin/g++ -o ex9 ex9.C ...
>
> gives me the same runtime error.
>
> thanks,
> df
So there are a few ways to get dynamic libraries to map properly, which
seems to be what is not happening correctly here. One way, as you are doing
now, is to use the linker to add the search pa
> You say there's no swapping but if it's spending system time, and it's
> the later insertions that are being slow AND it gets worse with 3D
> (which will probably have more DOFs)... it REALLY sounds like you're running
> out of memory
PETSc needs to be told how many nonzeros will be
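For reference, this is roughly what the preallocation looks like at the raw PETSc level (libMesh's PetscMatrix normally does this for you from the DofMap's sparsity pattern; the sizes and nonzero counts below are made-up placeholders):

#include <petscmat.h>

void preallocate_sketch ()
{
  // (assumes PetscInitialize() has already been called)
  const PetscInt m_local = 100, M_global = 400;     // hypothetical sizes
  PetscInt d_nnz[100], o_nnz[100];
  for (PetscInt i=0; i<m_local; ++i)
    { d_nnz[i] = 9; o_nnz[i] = 3; }                 // per-row nonzero estimates

  Mat A;
  MatCreate (PETSC_COMM_WORLD, &A);
  MatSetSizes (A, m_local, m_local, M_global, M_global);
  MatSetType (A, MATMPIAIJ);
  MatMPIAIJSetPreallocation (A, 0, d_nnz, 0, o_nnz);

  // insert values with MatSetValues(), then MatAssemblyBegin()/End();
  // if the estimates are too small, every extra malloc is very expensive.
}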
> The hard (or at least tedious) part may be fixing our I/O classes to
> write out and read in solutions with per-subdomain variables. I'm not
> familiar with the nitty-gritty details of our output formats, but I
> wouldn't be surprised if they didn't all support such a thing.
I'm thinking for m
> So my question is, can I use LEGENDRE basis at all for non-infinite
> elements ? Would it require a lot of work to add this support ? Or is
> there a deeper reason why this was intentionally left out of the
> implementation and was written only for infinite elements alone ?
The enum LEGENDRE rig
> Yeah - it's kind of odd that PETSc defaults to _not_ compiling with
> shared support whereas libMesh defaults _to_ compiling shared
>
> On OSX shared support is basically broken... so I just always compile
> with --disable-shared
FWIW, libmesh can now build shared libraries on OSX 10.5.
> So how are other people doing boundary conditions with tri's and
> tets? With Dirichlet you can just use a penalty to swamp everything
> out. But with Neumann?
Neumann is not an issue because the BC is in terms of the boundary integral,
so you only consider it with elements whose sides interse
Many, many ages ago we did Lagrange BCs ('cus there weren't any other types
of elements yet!) strongly. The trick is to do it at the element
matrix/vector level before inserting into the global matrix.
There is actually a member function in DenseMatrix to do this:
Ke.condense(i,j,val,Fe);
is wh
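A sketch of that element-level approach for a Lagrange discretization (one dof per node, so local dof i corresponds to local node i; node_is_dirichlet() and boundary_value() are hypothetical helpers, and Ke, Fe, dof_indices are the usual per-element objects):

// Before inserting into the global system, condense out each local dof
// that sits on a Dirichlet node, forcing it to the boundary value.
for (unsigned int i=0; i<elem->n_nodes(); i++)
  if (node_is_dirichlet (elem->point(i)))
    Ke.condense (i, i, boundary_value (elem->point(i)), Fe);

system.matrix->add_matrix (Ke, dof_indices);
system.rhs->add_vector    (Fe, dof_indices);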
> Making the "bins" larger would just make the problem less likely to
> trigger. Using our old keys (based on pointer values) would fail on
> a ParallelMesh. Searching all neighboring bin keys would make
> LocationMap::find() take 3^d times longer, but that's the best fix I
> can think of. Anyon
> I am using Triangle separately to build a specific mesh and I am writing a
> filter to convert it to xda format. However, there is a variable "Sum of
> Element Weights" which I don't know how to calculate knowing the number of
> elements, their nodes and coordinates.
> Can anyone give me a hint?
>
> The tool I work on here at work can compare any two arbitrary
> solutions to each other... even with completely non-nested grids. The
> user gets to decide what kind of crime they want to commit though.
> Often (if the meshes are really dissimilar) we'll use an overkill
> rendezvous mesh to tra
>> ./configure --with-cc=mpicc --with-cxx=mpicxx --with-f77=mpif77 ...
>>
>> (or whatever mpi compiler wrappers LAM uses) and give it a shot?
>>
>> The MPI compiler wrappers should find the libraries directly. The
>> -ULAM_WANT_MPI2CPP may or may not still be necessary.
>
> I still don't quite
> After the mesh is partitioned, if the numbers of nodes on the
> processors are not equal, is that OK for libMesh? Thanks a lot.
Specifically, libMesh partitions the elements and then the node partitioning
is basically defined in terms of the element partitioning. In the best case
scenario the n
#if LAM_WANT_MPI2CPP & !LAM_BUILDING
#include
#endif
So I tried adding
libmesh_CXXFLAGS += -ULAM_WANT_MPI2CPP
to my Make.common file, although I am not sure whether this is correct
and where LAM_WANT_MPI2CPP actually has been defined. I hel
>> Ok, I have found the problem...
>> ComputeHilbertKeys is calculating the same node_key for different nodes.
>> This is a mess because the mesh has no tears, even for higher h
>> refinement levels.
>>
>> Any idea? I don't know how LibHilbert works.
>
> I'm afraid this is up to Ben; it's a libra
> #if defined(c_plusplus) || defined(__cplusplus)
> // ... some other stuff here ...
> /*
> * Conditional MPI 2 C++ bindings support
> * Careful not to include it while we're building LAM (because it won't
> * exist yet)
> */
> #if LAM_WANT_MPI2CPP & !LAM_BUILDING
> #include
> #endif
> #e
> On Mon, 7 Apr 2008, Tim Kroeger wrote:
>
>> although on my desktop computer everything now seems to work fine, the
>> installation on a cluster fails. This time, I get
>>
>> /tmp/ccD7kaXW.o: In function `main':
>> amr.cc:(.text+0x1860): undefined reference to `lam_mpi_comm_world'
>> /tmp/ccD7
> I'm not sure if anyone else already responded to you, but no, I don't
> believe we currently write subdomain IDs to xda/r files. One could
> either append the subdomain ID to the end of each element connectivity
> list, or make a separate section of subdomain IDs somewhere after the
> boundary c
> Hi all,
>
> On both an Ubuntu and an ICES Sysnet (Fedora-ish) system, I've
> 1. Checked out the latest libmesh code stream using anonymous SVN access.
> 2. Run 'configure --enable-everything' and build the library successfully
> 3. Run 'make run_examples'
>
> Examples 1-9 run without any
> You must be adding the constraint rows after the matrix has been
> preallocated... Regardless of whether PeriodicBoundary calls would
> fix your problem, we ought to make sure that users are able and
> encouraged to do efficient preallocation for arbitrary user
> constraints, too.
The DOF conne
> Hi, Libmesh Users
> I find that assembling the stiffness matrix is very slow the first time, but
> it is very quick the second time and afterwards for the same system.
> It needs 2 hours to assemble the stiffness matrix of the 3D Stokes system
> with 15,000 dofs. How to speed up the process of ass
> I performed a simulation using a mesh with infinite elements and all is
> fine. Then I want to do some post-processing (getting the solution on
> several single points in the computational domain) but all of a sudden I
> get an error from the compute_map() function saying "negative Jacobian".
>
> does it mean I can release the allocated memory for
> old_dof_objects after the solution projection?
I think maybe, but I'll have to test it to know for sure.
-Ben
> Dear all,
> I'm reading the dofmap code. Can you explain to me when we
> need old_dof_object and/or old_dof_indices, so that I
> can proceed reading?
>
Sure. When you refine the mesh you obviously introduce a new finite element
approximation subspace, which means a new # of DOFs and associated DO
> Hi,
>
> I'm back again with the PETSc nonlinear solver in libMesh. I'm trying to make
> it run under a Red Hat WS 4.
> First I installed one of the last different versions of Petsc (2.3.3, 2.3.2
> and 2.3.1) in the Libmesh directory and it worked. After that, I changed the
> "Make.common" in L
Please feel free to expound on your libMesh/PETSc installation woes &
solutions on the wiki. There is some info there that may help:
http://libmesh.sourceforge.net/wiki/index.php/Installation
-Ben
On 2/29/08 10:35 AM, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> Thanks very much. I will hav
Are you specifically avoiding using a fortran compiler for some reason? I
notice you are downloading the C blas and that the configure script could
not find a fortran compiler.
I'm not sure that this *won't* work -- I've just never tried that.
-Ben
On 2/29/08 8:08 AM, "Roy Stogner" <[EMAIL P
>>> The node numbering is not the same as that in the input unv file in
>>> libmesh-0.6.2.
>>> Is this a bug? If not, how can I make the node numbering in libmesh-0.6.2
>>> the same as in the input file?
>> In general, we have never guaranteed that would be the case. If it
>> happened to be in one v
> I am curious as to what kind of multiphysics problems you have solved
> with Libmesh before and what kind of approach you took for those. I
> gather you used a single mesh for both the physics but were you able
> to preserve the accuracy of the coupled solution in space and time ?
> And did you
>>> I'm not an expert on Libmesh, but I recall that PETSc allows one to
>>> set the initial guess. If PETSc can be accessed through the libmesh
>>> interface then it should be possible quite easily.
>>
>> Indeed it is -- this is in fact the default behavior in libMesh. Whatever
>> is in the Syst
>> I want to solve a PDE involving a parameter to be incremented
>> successively. After obtaining a solution I want to use it as a guess for
>> the system to be solved next. How can I implement this?
> I'm not an expert on Libmesh, but I recall that PETSc allows one to
> set the initia
> Multi-physics problems usually have physics with different length
> scales and different time scales. It is necessary to use appropriate
> meshes depending on the physics to resolve the evolution of solution
> and using a single mesh (union of all physics meshes) will lead to a
> very high DoF
>> For simplification, consider 2 physics on the same domain: Consider
>> the 3-D heat conduction and a neutron diffusion model (both are
>> nonlinear diffusion-reaction equations) which are described over the
>> same 3D domain. Now, can I get away with using a single
>> EquationSystems object with
> The penalty of using operator-splitting is that you end up with a
> discrete system that has only conditional stability in time
> integration since the coupling is explicit. If you do iterate between
> the different operators at each time step, such an issue can be
> avoided but at the increased
> Assume that a multi-physics problem or a problem with single system on
> a staggered grid (velocity, pressure on different meshes) needs to be
> solved. Since the Mesh is always associated with EquationSystems and
> the association of a LinearImplicitSystem or NonlinearImplicitSystem
> to the
Thanks a lot for that! It just so happens that I've been messing around
with the DofMap::compute_sparsity() to make it multithreaded -- I'll see
that your fix makes it in there. Also, now that Roy added a convenient way
to get the continuity for a finite element family we should probably set
impl
> Now, I am wondering how the algebraic method constrains hanging nodes. Does
> it set the values at hanging nodes to zero? I output the matrix
> assembled in libmesh using PetSC function, I find lots of zero values in
> matrix. Because the matrix in Petsc is stored using sparse compressed stra
> This could be useful to solve Euler equation with a DG method that
> employs a Riemann solver to compute the numerical fluxes
> (discontinuous at element faces-edges) without dealing with Godunov
> fluxes to compute the jacobian... Am I wrong?
Absolutely. And a common trick from the finite volu
> Let me rewrite the expression you wrote as
>
> J*v = (F(u+epsilon*v)-F(u)) / epsilon
>
> Where epsilon is a small perturbation and F(u) is the nonlinear residual
> function and J is the jacobian matrix of the nonlinear system. The above
> formula computes the action of a Jacobian on a given vec
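A small standalone sketch of that finite-difference product, just to make the formula concrete (plain C++, not the PETSc/libMesh machinery; F is any square residual function):

#include <cstddef>
#include <vector>

typedef std::vector<double> (*ResidualFunc)(const std::vector<double>&);

// Approximate J*v ~= (F(u + eps*v) - F(u)) / eps for a square system.
std::vector<double> jac_vec_product (ResidualFunc F,
                                     const std::vector<double>& u,
                                     const std::vector<double>& v,
                                     const double eps = 1.e-7)
{
  std::vector<double> u_pert (u.size());
  for (std::size_t i = 0; i != u.size(); ++i)
    u_pert[i] = u[i] + eps*v[i];

  const std::vector<double> Fu  = F(u);
  const std::vector<double> Fup = F(u_pert);

  std::vector<double> Jv (u.size());
  for (std::size_t i = 0; i != u.size(); ++i)
    Jv[i] = (Fup[i] - Fu[i]) / eps;

  return Jv;
}

In practice eps is usually scaled by the norms of u and v (which is roughly what PETSc's matrix-free option does internally) rather than held fixed.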
>> A lot of people (including myself) are still skeptical that it's even
>> a good idea. I personally think that the complexity involved in
>> creating MPI+SMP software outweighs any potential gains. MPI software
>> is hard to write... and so is SMP; put the two together and you are
>> just ask
> hi Roy,
> thanks for the explanation. But how did you solve ((u
> * grad)u, v)_Omega? It's a square term. I heard there
> are some other methods, streamline, least square FEM
> ... I would like to hear your comments.
By square I guess you mean asymmetric?
The convection term is asymmetric and
> libMesh just hooks to PETSc and LASPACK for sparse linear algebra,
> whereas deal.II has its own multithreaded linear solvers (which IIRC
> were more efficient than PETSc?) for shared memory systems. If you're
> running on a four-core workstation, for example, I think deal.II only
> needs to sto
> In general, using pointers to class methods doesn't really work like
> you would expect. The reason? Because you have to have a class
> instance to call it on... meaning you have to store two pieces of
> data... the pointer to the method and a pointer to a class instance.
> Here is some readi
>> In libMesh this is as simple as providing a residual function and then
>> using the -ksp_matrix_free or -ksp_mf_operator (I think) options to PETSc.
>
> I understand this. But my concern is how would you compute the local
> contributions to the mat-vec product ? Since I do not want to store th
> I am currently looking for a library that can work well with PETSc and can
> provide me an FEM framework to handle a set of coupled nonlinear PDEs in 2
> and 3 dimensions.
>
> I hope to compare the usability of LibMesh and Deal II for this purpose.
Glad to help!
> 2) What additional code chang
> I had the exact same problem with python-numpy, downloaded from
> sourceforge. So maybe a problem on the sf end? Ondrej
>
> On 12/3/07, Mladen Jurak <[EMAIL PROTECTED]> wrote:
>> Hi everyone,
>>
>> I have a problem with unpacking libmesh-0.6.2.tar.gz:
>> tar gives me "unexpected end of file" e
> I've done like this after building a mesh object:
>
> // Partition the mesh with ParMetis package.
> ParmetisPartitioner pmetis;
> pmetis.partition(mesh, 4); // Partition the mesh on 4 processors
>
> // Print information about the mesh to the screen.
> mesh.print_info();
>
> Hello,
>
> I am looking for the piece of MPI/ParMetis code that distributes the
> tetrahedral mesh
> over the processors and also the associated unknowns to "update".
> Please note that I am looking for the low level ones coded in Libmesh
> and not
> the high level ones.
>
> Could you please let
> You can't mix stdio.h (which I think g++ uses internally) with
> MPICH2's C++ bindings, because for some reason the MPI-2 C++ binding
> reuses macro names from the C standard. We used to have a workaround
> for this in libMesh, but it caused its own problems, so since we only
> use the MPI C bin
>> I'm not getting stellar performances with the petsc linear solver on
>> a 64 bit Xeon (8 CPUs with 64 Gb RAM). The machine processors are
>> clocked at 3 GHz, but -log_summary tells me I'm running at 1e8 flops/s
>> (on a single processor; I don't see a big speedup with more processors,
>> but th
In the current implementation of BoundaryInfo, when the side of an element
is added all the nodes on that side are added with bc_id tags as well. This
is fine for non-refined meshes, but not with adaptivity. This is because
children inherit boundary condition information from their parent and are
The libmesh-user mailing list is being archived on gmane.org. This
provides, among other things, a nice digest option and RSS feeds.
Enjoy.
http://dir.gmane.org/gmane.comp.mathematics.libmesh.user
Thanks. The write_ascii() predates the 1D support in libMesh, and was never
updated.
I'll fix that.
-Ben
On 10/30/07 11:18 AM, "Ingo Schmidt" <[EMAIL PROTECTED]> wrote:
> Dear libmesh community,
>
> I've detected a bug at the TecplotIO::write_ascii() function. I'm
> messing around with "1D"