My original question was based on the mistaken assumption that I was on a node running lenny. In fact it was etch, and the lenny version of rmpi uses openmpi (NOT lam, as I said earlier).
However, I'm still having trouble with Rmpi on lenny (amd64). When I run under R without invoking mpirun, I get

  [n7:04654] mca: base: component_find: unable to open osc pt2pt: file not found (ignored)

When I use mpirun with R I get nothing at all. The message about pt2pt is odd, since the changelog shows that problem was fixed. Maybe the fact that it's not in a proper MPI environment is contributing. I suspect something about my local setup, or the fact that the saved image was created under lam, is behind this.

Ross

On Wed, 2009-04-15 at 13:55 -0700, Ross Boylan wrote:
> First, thank you for getting rmpi working with openmpi.
>
> Second, I have a question. The r-cran-rmpi changelog (0.5-6-3) mentions
> a problem with openmpi and amd64. Do you have any sense of whether this
> would be an issue for the Lenny version of lam? I know openmpi is a
> descendant of lam.
>
> Also, is the problem likely to have effects for uses other than rmpi?
>
> We have a new cluster that is migrating toward lenny (currently most
> nodes are etch) and is amd64 architecture (Xeon, really). Since the
> lenny version of rmpi uses lam, a lam/amd64 problem might bite us.
>
> It looks as if using rmpi with openmpi would require a fairly
> significant chunk of unstable (certainly R and openmpi, maybe more
> libraries they depend on). I think our sysadmin would prefer to keep
> things more stable.
>
> Thanks.
> Ross
>
> --
> To UNSUBSCRIBE, email to [email protected] with a subject of
> "unsubscribe". Trouble? Contact [email protected]

