Dan,
Using os.environ instead of tempfile seems to be the correct incantation. I
also had to stop setting the $TMPDIR environment variable in the PBS startup
script, because MPI warns against using an external filesystem like Lustre for
its temporary files. Sometimes using the Lustre filesystem worked, but it ...
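A minimal sketch of the kind of change described above, assuming the fix is to
point Python's tempfile machinery at node-local scratch space (the /tmp path
and the placement before the fipy import are assumptions, not taken from the
thread):

    import os

    # Point tempfile (and therefore the Gmsh wrapper's temporary files) at
    # node-local scratch space rather than a shared filesystem such as Lustre.
    # Set this before fipy is imported so nothing caches the old value.
    os.environ['TMPDIR'] = '/tmp'   # assumed node-local path

    import fipy as fp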
On Thu, Apr 3, 2014 at 1:08 PM, Seufzer, William J. (LARC-D307)
bill.seuf...@nasa.gov wrote:
Dan,
Do you mean to set the environment variable with os.environ? Would this be
different from setting the variable as part of the PBS startup script?
Yes.
Are you suggesting that all cores are ...
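For what it's worth, a small sketch of the distinction being asked about
(paths are illustrative, not from the thread):

    # In the PBS startup script the variable is exported by the shell before
    # mpirun launches python, e.g.:
    #     export TMPDIR=/lustre/scratch/$USER
    # whereas with os.environ each Python process sets it for itself:
    import os
    os.environ['TMPDIR'] = '/tmp'   # runs inside every MPI rank, after launch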
Dan,
We're not completely out of the woods yet. I've gotten several successful runs
this morning but occasionally the program hangs on the Gmsh3D command. I'm
going to have to let this sit for the next week, but I've added to your comments
below.
Bill
On Apr 3, 2014, at 10:58 AM, Daniel Wheeler ...
Dan,
Before you pursue the error too deeply: I have been able to load the file
across 3 nodes (12 cores), but it has succeeded only twice.
It may be that we have a node or two going bad, or that I need to move my
entire fipy installation, including gmsh, onto the high-performance disk.
When I get ...
Dan,
We've made a little progress. I now get an error! Answers to your questions are
embedded.
On Mar 27, 2014, at 2:26 PM, Daniel Wheeler daniel.wheel...@gmail.com wrote:
On Thu, Mar 27, 2014 at 10:25 AM, Seufzer, William J. (LARC-D307)
bill.seuf...@nasa.gov wrote:
Dan,
We're not there ...
Dan,
We're not there yet. I got back to this issue late yesterday and the fix
doesn't work on my cluster.
I added the code that was recommended in the workaround
(https://gist.github.com/wd15/9693712), but it appears now that we hang on the
line:
mesh = fp.Gmsh3D(mshFile)
As with reading the ...
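For context, a minimal sketch of the call that hangs (the file name is a
placeholder; passing the .geo contents as a string follows the usage mentioned
elsewhere in this thread):

    import fipy as fp

    # Every MPI rank executes these lines; FiPy's Gmsh wrapper partitions the
    # resulting mesh across the ranks.
    mshFile = open('tmsh3d.geo').read()   # contents of the hand-built .geo
    mesh = fp.Gmsh3D(mshFile)             # the line that occasionally hangs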
On Thu, Mar 27, 2014 at 10:25 AM, Seufzer, William J. (LARC-D307)
bill.seuf...@nasa.gov wrote:
Dan,
We're not there yet. I got back to this issue late yesterday and the fix
doesn't work on my cluster.
I added the code that was recommended in the workaround ...
Bill,
In my case, the following error occurred when running on multiple nodes:
Traceback (most recent call last):
  File "thermX.py", line 15, in <module>
    mesh = fp.Gmsh3D(geo)
  File "/users/wd15/git/fipy/fipy/meshes/gmshMesh.py", line 1937, in __init__
    background=background)
  File ...
On Fri, Mar 21, 2014 at 12:35 PM, Daniel Wheeler
daniel.wheel...@gmail.com wrote:
Anyway, there is a workaround, which I will send to you offline.
The workaround: https://gist.github.com/wd15/9693712
--
Daniel Wheeler
Thanks Dan,
Yes, I ran across 4 nodes (32 cores) and my log file returned a randomized list
of integers 0 through 31. With other information from PBS I could see the names
of the 4 nodes that were allocated (I believe I didn't have 32 processes on one
node).
Previous to this I inserted lines ...
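The rank-logging lines themselves are cut off above; a sketch of what such
lines could look like, assuming mpi4py is available alongside the Trilinos
build:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    # Each process reports its rank and the node it landed on; across
    # 4 nodes x 8 cores this should list ranks 0 through 31.
    print("rank %d of %d on %s" % (comm.Get_rank(), comm.Get_size(),
                                   MPI.Get_processor_name()))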
On Wed, Mar 19, 2014 at 9:02 AM, Seufzer, William J. (LARC-D307)
bill.seuf...@nasa.gov wrote:
Thanks Dan,
Yes, I ran across 4 nodes (32 cores) and my log file returned a randomized
list of integers 0 through 31. With other information from PBS I could see
the names of the 4 nodes that were
On Mar 19, 2014, at 11:12 AM, Daniel Wheeler daniel.wheel...@gmail.com wrote:
Can you send me the script and geo file? I'll try running it on
multiple nodes and check that it at least works for me, or try to
debug it.
I wonder if the `open('tmsh3d.geo').read()` isn't what's blocking. Try ...
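The suggestion after "Try" is cut off above. One common pattern for avoiding a
blocking read (an assumption here, not necessarily what was recommended) is to
read the .geo file on a single rank and broadcast the text:

    from mpi4py import MPI
    import fipy as fp

    comm = MPI.COMM_WORLD
    # Only rank 0 touches the shared filesystem; the other ranks receive the
    # same .geo text via broadcast.
    geoText = open('tmsh3d.geo').read() if comm.Get_rank() == 0 else None
    geoText = comm.bcast(geoText, root=0)
    mesh = fp.Gmsh3D(geoText)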
On Tue, Mar 18, 2014 at 7:28 AM, Seufzer, William J. (LARC-D307)
bill.seuf...@nasa.gov wrote:
Fipy developers,
I have Fipy running with Trilinos on our cluster but I can't seem to go
beyond a single node (with multiple cores).
I have a 3D .geo file that I built by hand that does not cause ...
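A minimal sketch of a cross-node smoke test for this setup (the file name is a
placeholder for the hand-built .geo file; parallelComm is assumed to live
under fipy.tools in this FiPy version):

    import fipy as fp
    from fipy.tools import parallelComm

    mesh = fp.Gmsh3D("test3d.geo")        # hand-built 3D geometry
    phi = fp.CellVariable(mesh=mesh, value=0.0)

    # Each rank reports how much of the partitioned mesh it owns; on a
    # healthy multi-node run every allocated core should show up here.
    print("rank %d of %d owns %d cells"
          % (parallelComm.procID, parallelComm.Nproc, mesh.numberOfCells))

Submitted from the PBS job script with something like
mpirun python test3d.py --trilinos.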