Which version of MEEP does it run? I don't see any specs here:
http://nano.anl.gov/facilities/theory_modeling_cap.html
http://thread.gmane.org/gmane.comp.science.electromagnetism.meep.general/2482
http://thread.gmane.org/gmane.comp.science.electromagnetism.meep.general/2526
http://thread.gmane.org/gmane.comp.science.electromagnetism.meep.general/3178

Looking at your code and going by:
http://ab-initio.mit.edu/wiki/index.php/Meep_C-plus-plus_Reference
"Note: Prior to Meep version 1.1, what is now the grid_volume class was
called volume, and what is now the volume class was called
geometric_volume."

You are running a more up-to-date version than I am. I ran it with the
following changes:

$ diff maine.cpp maink.cpp 
1,3c1
< #include"main.hpp"
< #include<fftw3.h>
< 
---
> #include <meep.hpp>
34,35c32,33
<   grid_volume v = vol3d(xbox+2*dpmlx,ybox+2*dpmly, zbox+2*dpmlz,resolution);
<   volume src_plane(vec(dpmlx,dpmly,dpmlz),vec(dpmlx,ybox+dpmly,zbox+dpmlz));
---
>   volume v = vol3d(xbox+2*dpmlx,ybox+2*dpmly, zbox+2*dpmlz,resolution);
>   geometric_volume src_plane(vec(dpmlx,dpmly,dpmlz),vec(dpmlx,ybox+dpmly,zbox+dpmlz));
37c35
<   dum = 1000;
---
>   double dum = 1000;
54c52
<   volume box(vec(x0,y0,z0),vec(xn,yn,zn));
---
>   geometric_volume box(vec(x0,y0,z0),vec(xn,yn,zn));
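
(For anyone stuck on an older MEEP, a pair of aliases near the top of the
file makes the rename mechanical; sim_grid and sim_region are just names I
made up here, not anything MEEP defines.)

  // Older MEEP (pre-1.1):
  // typedef meep::volume           sim_grid;    // what is now grid_volume
  // typedef meep::geometric_volume sim_region;  // what is now volume
  // MEEP 1.1 and later:
  typedef meep::grid_volume sim_grid;
  typedef meep::volume      sim_region;
  // then declare:  sim_grid v = vol3d(...);  sim_region src_plane(...), box(...);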

650 s runtime with the dft code, 420 s with it commented out.
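
(For reference, this is roughly how the stepping loop can be bracketed to get
such numbers; I'm assuming wall_time() and master_printf() from meep.hpp
here.)

  double t_start = wall_time();          // wall-clock time before stepping
  while (f0.time() < dum) {
    f0.step();
  }
  double t_stop = wall_time();           // wall-clock time after stepping
  master_printf("stepping took %g s\n", t_stop - t_start);

The two runs differ only in whether the DFT accumulation is performed at each
step.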

-----Original Message-----
From: meep-discuss-boun...@ab-initio.mit.edu
[mailto:meep-discuss-boun...@ab-initio.mit.edu] On Behalf Of
adepri...@anl.gov
Sent: March 22, 2010 5:42 PM
To: meep-discuss@ab-initio.mit.edu
Subject: [Meep-discuss] poor parallelization traced to function update_dft

Hi Steven and others,

I am profiling the C++ interface of MEEP on the BlueGene/P system at
Argonne.

Right now, all I'm doing is simulating an empty 3D box and accumulating the
flux with add_dft_flux_box().  I am running the simulation on 64 processors.
What bothers me is the distribution of the function 

void meep::dft_chunk::update_dft

which only seems to be entered by 8 of the 64 processors.  What's more
troublesome is that it is easily the most expensive function in the
run.  As far as I can tell, the other 56 procs are hung up in MPI_Allreduce
waiting for those 8.  I have tried the simulation at two different
resolutions, one giving 125000 total grid points and one giving 1000000 grid
points.  I find it hard to believe that the million point case is too small
to warrant 64 processors.  Why would only 8 processors enter this routine?
Is there any way to distribute the effort in this function?
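
(For what it's worth, a per-rank timing check along these lines, wrapped
around the stepping loop below, should make the imbalance visible even
without a profiler; I'm assuming my_rank(), count_processors(), wall_time(),
and all_wait() are available from meep.hpp, plus <cstdio> for printf.)

  all_wait();                            // line up all ranks before timing
  double t0 = wall_time();
  while (f0.time() < dum) {
    f0.step();
  }
  double t1 = wall_time();
  // each rank reports its own stepping time; the 8 ranks that own dft
  // chunks should stand out from the 56 that mostly sit in MPI_Allreduce
  printf("rank %d of %d: stepping took %g s\n",
         my_rank(), count_processors(), t1 - t0);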

Any help would be greatly appreciated.  My code follows.

Thanks very much,
Eugene

#include"main.hpp"
#include<fftw3.h>

using namespace meep;

double empty(const vec &p);
complex<double> one(const vec &p);

double ev_to_meep;
double xbox,ybox,zbox;
double dpmly=20;
double dpmlz=20;
double dpmlx=20;

int main(int argc, char *argv[]) {

  int i,j,nfreq;
  double resolution = 1.0;
  double fcen,freq,df,freq_min,freq_max,dfreq;

  initialize mpi(argc,argv);
  xbox = 100;
  ybox = 100;
  zbox = 100;

  ev_to_meep = 2.4179836E14 / (2.997925E8 * 1.E9 );   // eV -> MEEP frequency units (lengths in nm): Hz per eV divided by c in nm/s

  nfreq    = 200;
  freq_min = 1./450.;
  freq_max = 1./300.;
  dfreq    = (freq_max -freq_min) / (nfreq-1);
  fcen     = 0.5*(freq_min + freq_max);

  grid_volume v = vol3d(xbox+2*dpmlx,ybox+2*dpmly, zbox+2*dpmlz,resolution);
  volume src_plane(vec(dpmlx,dpmly,dpmlz),vec(dpmlx,ybox+dpmly,zbox+dpmlz));

  dum = 1000;
  gaussian_src_time src(fcen,0.5/(freq_max-freq_min),0.,dum);  // Gaussian pulse at fcen; temporal width 0.5/(freq_max-freq_min), active from t=0 to t=dum

  // EMPTY: vacuum structure (epsilon = 1 everywhere) with PML of thickness dpmlx
  structure s0(v, empty, pml(dpmlx));
  fields f0(&s0);
  f0.use_bloch(vec(0,0,0));                        // k = 0 periodic boundary conditions
  f0.add_volume_source(Ey,src,src_plane,one,1.0);  // Ey source, uniform amplitude over src_plane

  double x0,xn,y0,yn,z0,zn;
  x0 = dpmlx + 12;
  xn = dpmlx + xbox - 12;
  y0 = dpmly + 12;
  yn = dpmly + ybox - 12;
  z0 = dpmlz + 12;
  zn = dpmlz + zbox - 12;

  volume box(vec(x0,y0,z0),vec(xn,yn,zn));
  dft_flux f0box = f0.add_dft_flux_box(box,freq_min,freq_max,nfreq);  // accumulate DFT fields on the six faces of box at nfreq frequencies

  while(f0.time()<dum){
       f0.step();
  }

  return 0;
}


complex<double> one(const vec &p){   // uniform (unit) source amplitude
  return 1.0;
}

double empty(const vec &p){          // permittivity function: epsilon = 1 everywhere (empty box)
  return 1.0;
}


_______________________________________________
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

