code to thousands of processors, I probably wouldn't worry
about it.
A
On Tue, Sep 4, 2012 at 11:17 AM, TAY wee-beng wrote:
> On 4/9/2012 11:11 AM, Aron Ahmadia wrote:
>>
>> This doesn't strike me as a particularly large problem. I'm not sure
>> it's wor
This doesn't strike me as a particularly large problem. I'm not sure
it's worth doing unless you are going to be looking at more unknowns
in the future.
A
On Tue, Sep 4, 2012 at 10:08 AM, TAY wee-beng wrote:
> Hi,
>
> My Fortran CFD code is currently partitioned in the z direction. Total grid
>
>> Would be really grateful for any advice. Since I am using an IDE to build
>> the code, non-command line option tricks would also be of great help.
Any IDE worth the bandwidth needed to download it will have options
for setting environment variables and command line arguments for
different runs.
The CMake-generated Makefiles.
A
Sent from my iPhone
On Aug 29, 2012, at 6:56 AM, Thomas Witkowski wrote:
> What's the most efficient way to recompile PETSc after making some small
> changes in very few files?
>
> Best regards,
>
> Thomas
Can you re-run both codes with: -log_summary please?
A
On Wed, Aug 22, 2012 at 10:43 AM, Feng Li wrote:
> The performance of petsc_3.3p2 is five times faster than petsc_dev-r24197.
> Why?
>
> It is tested with pflotran example problem 100x100x100.
> config parameters: --with-mpi=0 --CC=gcc --FC=gf
Xin,
Unless you have modified your PYTHONPATH variable or your site-packages
settings, this command:
python setup.py install --user
will not actually install petsc4py in a location visible to your Python
installation. Unfortunately, the Python procedures and documentation for
"user-local" insta
Can you send the complete configure.log to "PETSc Maint" <
petsc-maint at mcs.anl.gov>? The developers are probably still sleeping right
now (they're all mostly on US Central time), it's not a problem I've seen
before.
A
On Wed, Jun 20, 2012 at 11:52 AM, Juha Jäykkä wrote:
> Dear list,
>
> I h
In the meanwhile, use Google's index:
query: "site:lists.mcs.anl.gov/pipermail/petsc-users XXX"
query: "site:lists.mcs.anl.gov/pipermail/petsc-dev XXX"
A
On Wed, Jun 13, 2012 at 2:39 PM, Jed Brown wrote:
> Thanks, we're looking into it.
>
>
> On Wed, Jun 13, 2012 at 6:32 AM, Thomas Witkowski
Dear Chris,
Sorry, Lisandro and I missed this. I answered your question over on
scicomp: http://scicomp.stackexchange.com/a/2356/9
Thanks for the persistence!
Regards,
Aron
On Tue, May 22, 2012 at 9:27 PM, Christian Staudt
wrote:
> Hi petsc4py users,
>
> I am running into the following error:
>
> Besides, I have a puzzle about the process (mpiexec -n 2). As you
> know, I only use PETSc in the subroutine (a function named PETSCSOLVE, which
> may be taken as a tool function) and the main program calls it many times.
> So, where should I call "PetscInitialize" and "PetscFinalize", in the main
>
>
> - There are probably functions for which pure Python does not deliver the
> necessary speed (and PETSc probably does not provide the operations
> needed). I am researching how to rewrite such performance-critical parts in
> C/C++, and embed them in the Python code - using methods like scipy.wea
You do realize that MatDuplicate is making a copy, right?
A
On Wed, May 16, 2012 at 5:17 PM, Andrew Spott wrote:
> Ok, so now I'm leaking memory without even creating the extra matrix:
>
> PetscErrorCode HamiltonianJ(TS timeStepContext, PetscReal t, Vec u, Mat
> *A, Mat *B, MatStructure *flag, v
Umnn, I hope this isn't too obvious, but you're duplicating your matrix at
every time step without freeing it and expecting that to not be a problem?
A
On Wed, May 16, 2012 at 9:01 AM, Andrew Spott wrote:
> I'm attempting to run some code with a rather large (>1GB) sparse matrix,
> and I keep ge
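A minimal sketch of the fix being hinted at above (hypothetical names, not the poster's code): either pair every per-step MatDuplicate with a MatDestroy, or hoist the copy out of the time-step loop and reuse it.

    Mat Acopy;
    ierr = MatDuplicate(A, MAT_COPY_VALUES, &Acopy);CHKERRQ(ierr);
    /* ... use Acopy for this time step ... */
    ierr = MatDestroy(&Acopy);CHKERRQ(ierr);  /* without this, memory grows every step */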
>
> Thanks for the hint. I was wondering whether I should start with a working
> prototype based on numpy, then do parallelization with PETSc on demand - or
> start with PETSc wherever matrices and vectors are needed.
>
You need to write parallel code in a Single Program Multiple Data fashion
if y
Please let us know if you find them to be useful.
A
On Tue, May 8, 2012 at 5:21 PM, Aron Ahmadia wrote:
> b) In MATLAB, arrays are used everywhere, also for small collections,
>> where one would use tuples or lists in Python (e.g. multiple return values
>> from a function). When I e
>
> b) In MATLAB, arrays are used everywhere, also for small collections,
> where one would use tuples or lists in Python (e.g. multiple return values
> from a function). When I encounter an array in the original MATLAB code, I
> have to decide whether a tuple, a list, a numpy.ndarray or a
> PETSc.
this pretty often with mumps compiled with petsc (whereas when
> mumps is used directly it's quite rare to come along with this problem).
>
> Regards,
> Alexander
>
> - Reply message -
> From: "Aron Ahmadia"
> To: "PETSc users list"
> Subj
I'm not sure if this is related, but Parmetis+Mumps+PETSc 3.2 on BlueGene/P
was causing similar behavior without even setting any options. The only
way I was able to get a direct solver going was by switching over to
SuperLU.
A
On Tue, Apr 24, 2012 at 10:01 PM, Alexander Grayver wrote:
> Can y
running, correct?
>
>
> On Fri, Apr 20, 2012 at 11:44 AM, Aron Ahmadia wrote:
>
>> If I use, say Np = 16 processes on one node, MPI is running 16 versions
>>> of the code on a single node (which has 16 cores). How does OpenMP figure
>>> out how to for
>
> If I use, say Np = 16 processes on one node, MPI is running 16 versions of
> the code on a single node (which has 16 cores). How does OpenMP figure out
> how to fork? Does it fork a total of 16 threads/MPI process = 256 threads
> or is it smart to just fork a total of 16 threads/node = 1 thread
http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex34.c.html has
examples of both Neumann and Dirichlet BCs...
A
On Sun, Apr 15, 2012 at 3:00 AM, Zhenglun (Alan) Wei wrote:
> Dear All,
> Any suggestions on a 3D Poisson Solver with both Dirichlet and Neumann
> BC's?
>
Just to clarify this a little more as it came up recently in some of Amal's
thesis work using PyClaw. On a BlueGene/P the I/O nodes are available at
some (usually fixed) density throughout the network. On the BlueGene/P
system at KAUST, the majority of the system is available at the I/O node
dens
The PETSc user's manual has a section on working with Eclipse (13.10) which
describes one way to add the libraries*, have you consulted this?
A
* I notice that TeX overflowed the library link line in the PDF in my copy
for 3.2, but the idea should be clear
On Mon, Mar 5, 2012 at 1:24 PM, Jose E.
Hi Thomas,
That wasn't quite Matt's question, since MPI_Init gets called regardless of
whether you use mpiexec. He wants to know if your code breaks if
MPI_Init() is called, in which case we can blame this on your MPI
implementation and not PETSc :)
-A
On Tue, Dec 13, 2011 at 5:55 PM, Thomas L
>
> It would be very helpful to be able to run 2 processes in a debugger
> without relying on X11 at all...
>
As several others suggested, you can probably do this on your own laptop or
workstation if you can reproduce the problem locally. This is how I debug
all of my parallel code.
A
-
Dear Dominik,
One trick for getting around this that works on LoadLeveler (and I suspect
SLURM) is running:
xterm
Instead of the usual "mpirun" when your batch script gets executed. As
long as the scheduler's batch script inherits your X11 environment and is
running on the login node, you'll th
n PetscError
if you want to catch PETSc errors percolating up the stack.
A
On Mon, Dec 12, 2011 at 3:48 PM, Aron Ahmadia wrote:
> Not on a BG/P you can't.
>
> A
>
>
> On Mon, Dec 12, 2011 at 3:46 PM, Matthew Knepley wrote:
>
>> On Mon, Dec 12, 2011 at 2:48 AM, D
Not on a BG/P you can't.
A
On Mon, Dec 12, 2011 at 3:46 PM, Matthew Knepley wrote:
> On Mon, Dec 12, 2011 at 2:48 AM, Dominik Szczerba wrote:
>
>> Hi,
>> I am debugging my code on a system that does not allow any X11
>> connections, therefore the following does not work:
>>
>> mpi
| But here I need to test my designed matrix and know what PETSc brings out
in a dense format to check my procedure.
This sentence doesn't make any sense. The matrix is the same whether it is
stored in a dense or sparse format, you can query individual values of the
matrix and find out whether th
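For what it's worth, a small sketch of the querying being suggested (assuming an assembled Mat A; the indices are illustrative and must refer to locally owned rows):

    PetscInt    row = 0, col = 0;
    PetscScalar v;
    ierr = MatGetValues(A, 1, &row, 1, &col, &v);CHKERRQ(ierr);   /* read one entry */
    ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);   /* or dump a small matrix for inspection */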
at /home/dsz/src/framework/trunk/solve/cd3t10mpi_main.cxx:526
(gdb)
On Fri, Aug 19, 2011 at 8:22 PM, Dominik Szczerba wrote:
> What do you mean by "the second break"?
>
> Dominik
>
> On Fri, Aug 19, 2011 at 6:47 PM, Aron Ahmadia
> wrote:
> > You want to do
You want to do a 'where' on the second break, when your program is raising
an abort signal...
A
On Fri, Aug 19, 2011 at 6:57 PM, Dominik Szczerba wrote:
> (gdb) where
> #0 0x7fae5b941590 in __nanosleep_nocancel () at
> ../sysdeps/unix/syscall-template.S:82
> #1 0x7fae5b94143c in __slee
PetscFinalize();
>
> }
>
>
> Thanks,
>
> Debao
> --
>
> From: petsc-users-bounces at mcs.anl.gov [mailto:
> petsc-users-bounces at mcs.anl.gov] On Behalf Of Aron Ahmadia
> Sent: Tuesday, August 0
Debao,
What is the complete traceback for this problem?
A
On Tue, Aug 9, 2011 at 11:12 AM, Debao Shao wrote:
> DA,
>
>
> Do you happen to know what may cause this error?
>
> "[0]PETSC ERROR: PetscFinalize() line 968 in
> src/sys/objects/pinit.c"
>
>
>
> Aft
"But I encounter a new problem, the situation is:
1, the matrix is big, and can be partitioned to several blocks;
2, started several threads to handle each block of matrix;
3, integrated all block matrices together."
You should be using PETSc+MPI to handle this distribution for
>
> Found a PDF, "MATRICES IN PETSc", (after much googling) but not sure which
> of the many forms will work and which is best.
>
> ---John
>
>
As a general piece of advice, always consult the documentation provided by
the developers of a software package before typing search terms in Google.
This
What Matt is getting at is that typically we measure the computational
difficulty of a problem as a function of the 'unknowns'. If you are looking
at turning a sparse matrix O(n) bytes into a dense inverse O(n^2) bytes,
you've taken what was originally a potentially optimal problem and turned it
i
Dear Rebecca,
PETSc+UPC is certainly possible since UPC is an extension to C99, so
underneath PETSc+UPC your parallelism model would be MPI+UPC. If you are
planning on using PETSc for your distributed-memory parallelism and UPC for
your 'accelerator' such as multicore or GPU, this would just requ
On Mon, Jul 4, 2011 at 10:48 PM, Matthew Emmett wrote:
> Hey Aaron,
>
> Thanks for the tip! I'll play around with the indexing and reshaping.
>
> Matt
>
> On Mon, Jul 4, 2011 at 3:33 PM, Aron Ahmadia
> wrote:
> > Hi Matt!
> > We deal with this same issue in
Hi Matt!
We deal with this same issue in PyClaw/PetClaw, I think Amal could do a much
better job describing the approach (or copying a relevant section from her
Master's thesis) we take to avoid copying, but the idea is to follow the
native PETSc ordering with interleaved degrees-of-freedom to kee
Hi Adam,
It sounds like you are creating a 'blocked' matrix. PETSc's format for this
is described briefly here:
http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatCreateMPIBAIJ.html#MatCreateMPIBAIJ
You will also be interested in adding values blocked:
http://ww
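A rough sketch of the blocked workflow referred to above, with made-up sizes (bs unknowns per grid point; M, N, i, j and the preallocation numbers are placeholders):

    Mat         A;
    PetscInt    bs = 3;                 /* e.g. 3 coupled unknowns per node */
    PetscScalar block[9] = {0.0};       /* one bs x bs block, row-major */
    ierr = MatCreateMPIBAIJ(PETSC_COMM_WORLD, bs, PETSC_DECIDE, PETSC_DECIDE,
                            M, N, 7, PETSC_NULL, 3, PETSC_NULL, &A);CHKERRQ(ierr);
    ierr = MatSetValuesBlocked(A, 1, &i, 1, &j, block, INSERT_VALUES);CHKERRQ(ierr);
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);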
We (KAUST) are also interested in seeing a release soon, and I am happy to
help verify the 64-bit index code on our BG/P here when you have a release
candidate ready.
A
On Wed, Jun 22, 2011 at 4:14 PM, wrote:
> Hi,
>
> A few weeks ago you wrote about releasing PETSc 3.2 "before the summer
Hi Miguel,
Please send the complete configure.log and make.log to
petsc-maint at mcs.anl.gov
Are you able to build with --with-shared=0?
A
On Wed, May 25, 2011 at 1:38 PM, Miguel Fosas wrote:
> Dear all,
>
> I'm trying to compile PETSc (3.1-p8) on an SGI Altix using Intel C/C++
> compilers (th
Hi Vish,
What is 'painfully slow'? Do you have a profile or an estimate in terms of
GB/s? Have you taken a look at your process's memory allocation and checked
to see if it is swapping? My first guess would be that you are exceeding
RAM and your program is thrashing as parts of the page table g
Alex,
If you are adding a sparse matrix to a dense one, you are better off just
iterating through the values of your sparse matrix and adding them to your
dense matrix.
As far as I know, there are no routines in PETSc that will do this
automatically for you (and this sort of thing is really not P
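A sketch of that row-by-row loop (names are illustrative; S is the assembled sparse matrix, D the dense one, both with matching row distributions):

    PetscInt          rstart, rend, row, ncols;
    const PetscInt    *cols;
    const PetscScalar *vals;
    ierr = MatGetOwnershipRange(S, &rstart, &rend);CHKERRQ(ierr);
    for (row = rstart; row < rend; row++) {
      ierr = MatGetRow(S, row, &ncols, &cols, &vals);CHKERRQ(ierr);
      ierr = MatSetValues(D, 1, &row, ncols, cols, vals, ADD_VALUES);CHKERRQ(ierr);
      ierr = MatRestoreRow(S, row, &ncols, &cols, &vals);CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(D, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(D, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);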
I am very suspicious of any results where the fortran blas is
out-performing the MKL.
A
On Thu, Mar 17, 2011 at 3:24 AM, Natarajan CS wrote:
> Hello Rob,
> Thanks for the update, this might be very valuable for other developers
> out there!
> I am still a little surprised by the performance
I've seen a few threads in this direction:
See Sanjukta Bhowmick's work on combining machine learning with PETSc to
start:
http://cs.unomaha.edu/~bhowmick/Blog/Entries/2010/9/12_Solvers_for_Large_Sparse_Linear_Systems.html
HYPRE has something along the lines of this as well, but I have not seen a
directories
> and libraries automatically (like everyone elses mpicc etc does?) Seems very
> cumbersome that users need to know these strange directories and include them
> themselves? A real step backwards in usability?
>
>Barry
>
> On Feb 10, 2011, at 7:44 AM, Aron Ahma
add /opt/ibmhpc/ppe.poe/include/ibmmpi/ to your ./configure options like
this:
--with-mpi-include=/opt/ibmhpc/ppe.poe/include/ibmmpi/
You may have to manually add the MPI libraries and their path as well, since
BuildSystem tends to like these packaged together. Ping the list back if
you can't fig
Gaurish,
I would suggest you spend some time reading the PETSc user's manual
available here:
http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manual.pdf
These two lines are incorrect.
ierr = VecCreateMPI(PETSC_COMM_WORLD,m,PETSC_DETERMINE,&x);CHKERRQ(ierr);
ierr = VecCreateMPI(
; Have you ever tried or worked on a parallel file reader to a common global
> matrix in PETSc, where each processor has part of the file? Thank you
>
> Regards,
> Zuhair Khayyat
>
>
> On Sat, Jan 8, 2011 at 8:42 PM, Aron Ahmadia wrote:
>
>> if the f
if the file is stored in the PETSc format, you can use PETSc to pull the
file in using MPIIO, which should be (hopefully) faster.
http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Viewer/PetscViewerBinarySetMPIIO.html
Feel free to pop these questions to petsc-users (c
MatStencil only makes sense if you are using a distributed grid (DA), where
it corresponds to physical field locations. You probably just want
MatSetValuesBlocked (
http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatSetValuesBlocked.html
)
Warm Regards,
Aron
On
This is quite odd. The symbol __intel_sse2_strlen should be provided
by libirc.a (which should be brought in by icc or icpc, the Intel
C/C++ compilers). I am surprised that configure passed with this
error. What C compiler are you supplying to PETSc configure?
I suggest you send the complete co
Hi Jinshan,
You can pass your initial guess in to the KSP object as X, then at the
command line set the flag:
-ksp_initial_guess_nonzero
Happy Computing,
Aron
On Fri, Nov 5, 2010 at 8:41 AM, jinshan wu wrote:
> Hi all,
>
> I am using the GMRES linear solver in PETSC. I have a very good initial
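A minimal sketch of that setup (ksp, b and x are assumed to exist already; x holds the good guess when KSPSolve is called):

    ierr = KSPSetInitialGuessNonzero(ksp, PETSC_TRUE);CHKERRQ(ierr);  /* or set -ksp_initial_guess_nonzero */
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);   /* x is used as the starting iterate */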
It sounds like you will want to shift to matrix-free methods, but it
is hard to make that assessment without seeing your problem
formulation. I can tell you that I have seen matrix-free methods
implemented more efficiently than if you had kept the entire sparse
system, but I cannot tell you if it
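To make "matrix-free" concrete, here is a bare-bones shell-matrix sketch (all names are illustrative, not from the original thread): the operator's action is supplied by a user routine instead of stored entries.

    /* y = A*x, computed on the fly from user data in ctx */
    PetscErrorCode MyMatMult(Mat A, Vec x, Vec y)
    {
      void           *ctx;
      PetscErrorCode ierr;
      ierr = MatShellGetContext(A, &ctx);CHKERRQ(ierr);
      /* ... apply the operator to x, writing the result into y ... */
      return 0;
    }

    /* in the setup code */
    Mat A;
    ierr = MatCreateShell(PETSC_COMM_WORLD, m, n, M, N, userctx, &A);CHKERRQ(ierr);
    ierr = MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MyMatMult);CHKERRQ(ierr);
    /* A can now be handed to KSP or SNES like any assembled matrix */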
ng I could prevent that by using a package that allow for
> templates. That being said, I am not an expert on PETSc by any measure! As a
> result I highly appreciate any ideas and comments if you think this is
> possible to do with PETSc.
> All the best,
> Mohammad
>
> On Fri, Oc
Dear Mohammad,
As a user of PETSc for the last 8 years, since my days as an
undergraduate, and now as a professional staff scientist at a
supercomputing center, I can say with some confidence that there are
no codes like PETSc in C++ or any other language in terms of quality
of implementation, doc
Thanks for the extra notes Lisandro, I've migrated some of the results of
this discussion to the petclaw development wiki here:
https://bitbucket.org/knepley/petclaw/wiki/How_do_I
A
On Sat, Sep 18, 2010 at 8:55 PM, Lisandro Dalcin wrote:
> On 18 September 2010 21:01, Aron Ahmadia
Dear Amal,
Thanks for the questions. These are great! I think they show a good
fundamental approach, you are thinking about these problems like a PETSc
scientist would. I am going to cc petsc-users on the reply in case anybody
wants to add or comment:
*How to set up a DA for multiple equations
Dear Nemanja,
Welcome to PETSc! It is important to note that PETSc's parallel
functionality is implemented on top of MPI. Your MPI vendor (Microsoft,
MPICH, OpenMPI, etc...) is responsible for providing you the interface for
launching your jobs in parallel, PETSc does not care how your MPI jobs
You couldn't simply template the dereference, you would need to have a way
to reformat the data into single/double-precision, and PETSc assumes you are
giving it a raw C pointer. This would have the effect of potentially
generating an expensive data copy every time you need to hand your object to
I'll take a look at this and report back Rebecca. I was seeing
similar bus errors on some PETSc example code calling VecView and we
haven't tracked it down yet.
A
On Tue, Apr 27, 2010 at 5:20 AM, Xuefei YUAN via RT
wrote:
>
> Tue Apr 27 12:20:11 2010: Request 847 was acted upon.
> Transaction:
A SEGV is definitely a memory access problem, as PETSc suggests, it is
likely to be a memory access out of range.
I don't recommend trying to debug this problem on amdahl, can you reproduce
the problem just running with multiple processes on your workstation?
Warm Regards,
Aron
On Wed, Apr 21, 2
ng binary
and MPI compatibility.
Hope this helps...
Warm Regards,
Aron Ahmadia
On Mon, Apr 19, 2010 at 11:05 PM, Li, Zhisong (lizs) wrote:
> Mat and Jed,
>
> Thank you for your reply.
>
> As far as I remembered, the make test was successful except the Fortran
> compiler, but I
Does anybody have good references in the literature analyzing the memory
access patterns for sparse solvers and how they scale? I remember seeing
Barry's talk about multigrid memory access patterns, but I'm not sure if
I've ever seen a good paper reference.
Cheers,
Aron
On Wed, Nov 18, 2009 at 6
Hi Braxton,
I don't think there's an explicit manual page in PETSc for doing it.
You would need to do:
VecGetArray
VecGetOwnershipRange
(iterate over range on data from array)
VecRestoreArray
I cc the PETSc user's list in case anyone else has a brighter idea.
Cheers,
A
On Thu, Nov 5, 2009 at
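Spelled out as a sketch (v is any parallel Vec; each rank touches only the entries it owns):

    PetscInt    rstart, rend, i;
    PetscScalar *a;
    ierr = VecGetOwnershipRange(v, &rstart, &rend);CHKERRQ(ierr);
    ierr = VecGetArray(v, &a);CHKERRQ(ierr);
    for (i = 0; i < rend - rstart; i++) {
      a[i] *= 2.0;             /* a[i] is global entry rstart + i */
    }
    ierr = VecRestoreArray(v, &a);CHKERRQ(ierr);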
ex19 comes to mind, though it's a bit overkill for what you're doing...
http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/snes/examples/tutorials/ex19.c.html
ex19 uses DAs and DMMG, which is kind of like a meta-DA for using
multigrid-style solvers. Both work well with structured
Hi All,
I can take charge of the development of this. Ulisses, we can take this
discussion offline. The PETSc team only takes patches against their current
working version, but I can work against an earlier PETSc if that is
preferred.
Thanks,
Aron
On Wed, Aug 12, 2009 at 2:19 PM, Matthew Knepl
Wouldn't it be better in this case to use an MPI_Scatterv?
~A
On Mon, Sep 22, 2008 at 4:03 PM, Barry Smith wrote:
>
> I would only expect good performance if you used MPI calls to send the
> blocks of rows of the matrix to the process
> they belong to and use MatGetArray() to pass into the MPI
Hi Adolph,
What are the results of running make test after you've completed PETSc
installation? Do those tests pass? If so, you should try copying over an
example makefile and using that to build your code. If they fail, something
is wrong with your PETSc installation and you should send a copy
unless you're on an OS X machine, in which case you should use libgmalloc:
http://developer.apple.com/documentation/Darwin/Reference/ManPages/man3/libgmalloc.3.html
~A
On Wed, Jul 23, 2008 at 2:25 PM, Barry Smith wrote:
>
> To emphasis Satish's point you should definitely use www.valgrind.org
Michel,
I would recommend investing the time to write your C/C++ wrapper code a
little higher up around the Newton iteration, since PETSc provides a great
abstraction interface for it. Then you could write code to build the matrix
(or assemble a matrix-free routine!) in C/C++, and pass the parame
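As a rough illustration of wrapping at the Newton level (present-day calling sequence, illustrative names; the residual assembly stays in the application), the SNES skeleton looks something like:

    /* r = F(x): the application fills in the nonlinear residual */
    PetscErrorCode FormFunction(SNES snes, Vec x, Vec r, void *ctx)
    {
      /* ... evaluate the residual into r ... */
      return 0;
    }

    SNES snes;
    ierr = SNESCreate(PETSC_COMM_WORLD, &snes);CHKERRQ(ierr);
    ierr = SNESSetFunction(snes, r, FormFunction, userctx);CHKERRQ(ierr);
    ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);   /* choose solver options at run time */
    ierr = SNESSolve(snes, PETSC_NULL, x);CHKERRQ(ierr);
    ierr = SNESDestroy(&snes);CHKERRQ(ierr);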
Hi all,
I'm preparing a multigrid lecture for Tuesday that will be motivated
by a demonstration of PETSc using Multigrid to solve the thermally and
lid-driven cavity flow problem using the DMMG solver framework.
Since only about 20-30 minutes of time will be spent to describing
this, I want to ma
112032952 bytes is about 100 MB.
Are you really running out of memory or is something else going on?
~A
On Wed, Mar 5, 2008 at 7:00 PM, Gideon Simpson wrote:
> I'm getting the following error with some code, running on a serial
> machine with, I think, 3 gigs of memory. Is there any way to
>
ok for function
> in a shared library [instead of resolving these functions at link-time]
>
> If petsc is built with dynamic usage- then PETSC_USE_DYNAMIC_LIBRARIES
> flag is set in petscconf.h. Shared libs can be identified by looking
> at the library names.
>
> Satish
>
Hey Matt,
You should probably clean up the documentation for MatMatSolve while
you're at it, it's indicating that x and b are vectors... Also,
should you reference the factor routine you need to use to get a
factored matrix?
~A
On Wed, Feb 27, 2008 at 2:12 PM, Matthew Knepley wrote:
> On Wed,
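For reference, a hedged sketch of the factor-then-solve sequence being discussed, using the present-day calling sequence (A sparse, B and X dense with matching sizes; all names are illustrative):

    Mat           F;
    IS            rowperm, colperm;
    MatFactorInfo info;
    ierr = MatFactorInfoInitialize(&info);CHKERRQ(ierr);
    ierr = MatGetOrdering(A, MATORDERINGNATURAL, &rowperm, &colperm);CHKERRQ(ierr);
    ierr = MatGetFactor(A, MATSOLVERPETSC, MAT_FACTOR_LU, &F);CHKERRQ(ierr);
    ierr = MatLUFactorSymbolic(F, A, rowperm, colperm, &info);CHKERRQ(ierr);
    ierr = MatLUFactorNumeric(F, A, &info);CHKERRQ(ierr);
    ierr = MatMatSolve(F, B, X);CHKERRQ(ierr);   /* solves A X = B for all columns at once */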
On Wed, Feb 27, 2008 at 2:11 AM, amjad ali wrote:
> Hello all,
>
> Please answer the following,
>
> 1) What is the difference between static and dynamic versions of petsc?
>
Start here:
http://en.wikipedia.org/wiki/Library_(computer_science)#Static_libraries
In PETSc the primary differences end
Hi Ben,
You're asking a question that is very specific to the program you're
running. I think the general consensus on this list has been that for
the more common uses of PETSc, getting dual-cores will not speed up
your performance as much as dual-processors. For OS X, dual-cores are
pretty much
Dear Tim,
It is possible to carry out the explicit inversion of a sparse matrix
using the PETSc framework with the methodology you outlined below. I
would encourage you to consider Cholesky/LU factorizations of the
matrix, which occasionally result in sparser triangular solve times
than an expli
This is a direct consequence of the so-called memory mountain, i.e.
the increasing costs of accessing memory further and further away from
the processor on a standard Von Neumann architecture: cache -> RAM ->
hard drive. Larger matrices don't completely fit in the cache, and
you're paying a cache
Sumit,
The compilers are specified at the command-line when you make your
configure call.
i.e. from $PETSC_DIR
./config/configure.py --with-cc=gcc
if you need more information, try
./config/configure.py --help
And to answer your question, the compiler to use is stored in a
configuration file
Hi Sumit,
I've posted this message to petsc-users, which is the right mailing
list for this sort of question.
You want to head to the official PETSc page if you'd like to download
the source:
http://www-unix.mcs.anl.gov/petsc/petsc-2/download/index.html
The installation instructions are also lo
The term you are looking for is non-blocking, a non-collective reduce
is almost an oxymoron.
And no, non-blocking reduces are not anywhere in the MPI Standard,
maybe one of these days.
Your best bet is to write an implementation yourself using MPI_ISEND
and a tree structure which takes advantage
Dear Pan,
I don't see everything going on here, but you have to account for
around 1966080*(4 or 8) bytes + 1966080*21*(4 or 8) bytes of indexing
information for storing the locations of the data in the Matrix, then
if you're using doubles, 1966080*21*8 bytes of data information.
Adding these up,
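Roughly, taking 4-byte indices as the low end and 8-byte as the high end, that works out to about 8-16 MB for the row index array, 165-330 MB for the column indices, and another ~330 MB for the double-precision values, so the assembled matrix alone needs on the order of 500-700 MB before any solver work space is counted.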
Hi David,
You're looking to use a PetscViewer. I see that there's not much in
the user's manual on how to use them, but the basic idea is that you
create a binary or ascii viewer (PetscViewerASCIIOpen,
PetscViewerBinaryOpen), then call VecView to save it to disk. The
inverse call is VecLoad. I
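A sketch of that round trip with the present-day calling sequence (file name and vector are illustrative):

    PetscViewer viewer;
    ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "field.dat", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
    ierr = VecView(v, viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

    /* later, or in another program, after creating v */
    ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "field.dat", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
    ierr = VecLoad(v, viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);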
And as a correction to my last email, PETSC_ARCH is a 'global'
environment variable, it along with PETSC_DIR help you coordinate
multiple PETSc installations and builds.
~A
On 6/27/07, Aron Ahmadia wrote:
> Dear Tim,
>
> I was just in Ireland a few weeks ago, had a
Dear Tim,
I was just in Ireland a few weeks ago, had a great time climbing
Carantouhil and the Pilgrim's Path :D
Have you tried declaring a new $PETSC_ARCH and then overriding the
optimization flags in ./conf/configure? PETSc likes to use the
$PETSC_ARCH flag to maintain all local builds under t
Hi Matt,
SIGTERM (signal 15) is a very generic termination signal that can be
sent by kill or killall, but also by a batch system. We'll be able to
diagnose this better if you send the entire PETSC and MPI output from
your program with the -log_info command.
(i.e.) mpirun -np 4 ./my_prog -log_in
Try removing that -batch argument on the command-line for your
original configure statement. I don't think it's doing what you want
it to do.
~A
On 3/14/07, Bin wrote:
> Hello,
>
> Thank you.
>
> I tried: mpirun -np 1 ./conftest and I got the same results.
>
> There is no executable file "conft
Hi Ben,
This is a bad idea. Unless the systems are virtually identical (same
versions of gcc, same processor architecture, same kernel, same system
layout) just to name a few, you WILL have problems with the libraries
after they've been copied over, though fewer problems are likely to
spring up i
Hi Jianing,
The way I understand things, since a DA is fundamentally designed to
handle spatially discretized differential equations, it is good for
distributing your data and automatically handling situations that
arise in PDE/ODE-solving, like updating of ghost points as fields
change over time.
Sorry, that should have been subroutine, not application.
~A
On 1/27/07, Aron Ahmadia wrote:
> Brian,
>
> Work vectors should never be created/deleted every time an application
> is run, they should be generated at the beginning and then accessed.
>
> In this case calling get
Brian,
Work vectors should never be created/deleted every time an application
is run, they should be generated at the beginning and then accessed.
In this case calling get/restore should be fine.
~A
On 1/27/07, Matthew Knepley wrote:
> On 1/27/07, Aron Ahmadia wrote:
> > Soun
n21 at gmail.com
>
> http://www.columbia.edu/~bag2107/
> http://www.apam.columbia.edu/ctx/ctx.html
>
>
> On Jan 27, 2007, at 6:41 PM, Aron Ahmadia wrote:
>
> Hi Brian,
>
> I took a quick peek at the source for DAGetGlobalVector in the current
> release:
>
> "
>
Hi Brian,
I took a quick peek at the source for DAGetGlobalVector in the current release:
"
for (i=0; i<DA_MAX_WORK_VECTORS; i++) {
  if (da->globalin[i]) {
    *g              = da->globalin[i];
    da->globalin[i] = PETSC_NULL;
    goto alldone;
  }
}
ierr = DACreateGlobalVector(da,g);CHKERRQ(ierr);
alldone:
for (i
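A hedged usage sketch for the routine quoted above (DA-era API): grab a cached global work vector and hand it back when finished so it can be reused.

    Vec g;
    ierr = DAGetGlobalVector(da, &g);CHKERRQ(ierr);
    /* ... use g as temporary global storage ... */
    ierr = DARestoreGlobalVector(da, &g);CHKERRQ(ierr);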
Is there a good script lying around somewhere for setting the X11
connections up from the master/interactive node? This seems like it could
be a huge pain if you've got a bunch of worker nodes sitting in a private
network behind the master in classic Beowulf style and you don't have a
systems admi
Hi Jianing,
I don't think the orion cluster is set up for the worker nodes to be able to
connect back to a remote workstation. I would collect the data locally on
the master node, and use PETSC_VIEWER_DRAW_SELF.
Please double-check that the display code works on single-processor jobs,
that error
Hi Ben,
These are some really difficult questions to answer quickly. PETSc
performance is dependent on a large variety of factors, from the
architecture of your computer to the layout of your network to how much RAM
you have, most importantly the problem you are trying to solve.
I have an Intel
Hi Ben,
My PETSc install on an OS X machine requires about 343 MB of space,
about 209 MB of which is MPICH. Unfortunately this has the potential
of exceeding 500 MB temporarily I believe as the make process
generates a lot of object files during the software build.
I think what you want to do is
Dear Ben,
If I was doing this for an optimized code on a stationary grid, I
would run the if tests once at the beginning to generate lists of
points for each boundary, and then run the for loops over those lists.
~Aron
On 12/30/06, Ben Tay wrote:
> Hi,
>
> I'm now trying to modify my source cod
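A small sketch of that suggestion (ngrid, is_wall(), is_inlet() and the apply_*_bc() routines are placeholders for the application's own grid size, tests and boundary updates):

    PetscInt i, k, nwall = 0, ninlet = 0;
    PetscInt *wall, *inlet;
    ierr = PetscMalloc(ngrid*sizeof(PetscInt), &wall);CHKERRQ(ierr);
    ierr = PetscMalloc(ngrid*sizeof(PetscInt), &inlet);CHKERRQ(ierr);
    for (i = 0; i < ngrid; i++) {          /* done once, at setup */
      if (is_wall(i))  wall[nwall++]   = i;
      if (is_inlet(i)) inlet[ninlet++] = i;
    }
    /* every time step: loop over the lists, no conditionals in the hot loop */
    for (k = 0; k < nwall;  k++) apply_wall_bc(wall[k]);
    for (k = 0; k < ninlet; k++) apply_inlet_bc(inlet[k]);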