Re: [gmx-users] solvate using genbox results in water in the center of the bilayer. How to edit pdb file contents in gromacs?
Dear Chris,

Thanks for your time and suggestion. I tried all the possible pressure couplings (including semiisotropic) and ran for around 500 ps. The gap between the head groups and the water disappears, but I am still getting an uneven distribution of water molecules. I am pasting my previous mail again; I hope I will get a solution for my problem.

Best Regards, Alok

##

Dear Mark,

Thanks a lot for your valuable time, and sorry for the inappropriate description. I am describing it again; I hope this time I can make it clear. I took a pre-equilibrated POPE.pdb file which already had SPC water molecules. I deleted these water molecules and changed the box size along the Z axis only, so I could accommodate more water. Then, using genbox, I added TIP4P water molecules, but it also added water molecules in the interior of the bilayer. So I deleted these waters by the criterion that the Z coordinate of the water lies between the Z_min and Z_max of the C13 atoms (where the branching of the POPE molecules starts). After that I had a file with no water in the interior of the bilayer, but there was a vacuum between the lipid head groups and the TIP4P water molecules (I defined it as a ZONE in my previous mail). As discussed on the mailing list many times, I could do the same thing by increasing the VdW radius of the lipid atoms.

I was expecting this vacuum to vanish and the water molecules to spread homogeneously after a short span of MD, as suggested on the mailing list. But here the problem started: I ran MD to 500 ps, but the water molecules are clustered in some places, while in other places there is no water or very little water, i.e. I am getting an uneven distribution of water molecules over the lipid head groups. So I thought this problem might be due to the pressure coupling or the type of ensemble I am using (I might be wrong here!). I ran four different short MD runs using isotropic, semiisotropic, and anisotropic pressure coupling, and a last one with no pressure coupling (NVT ensemble). But in all cases I get a similar final structure, with an uneven distribution of TIP4P water molecules over the head groups. The parameters I used for the different couplings are all listed below.

Isotropic (first simulation):
    Pcoupl          = Berendsen
    Pcoupltype      = isotropic
    tau_p           = 2.0
    compressibility = 4.5e-5
    ref_p           = 1

Semiisotropic (second simulation):
    Pcoupl          = Berendsen
    Pcoupltype      = semiisotropic
    tau_p           = 2 2
    compressibility = 0 4.5e-5
    ref_p           = 0 1.0

Anisotropic (third simulation):
    Pcoupl          = Berendsen
    Pcoupltype      = anisotropic
    tau_p           = 10.0 10.0 10.0 0 0 0
    compressibility = 4.5e-5 4.5e-5 4.5e-5 0 0 0
    ref_p           = 1.0 1.0 1.0 0 0 0

NVT (fourth simulation).

I hope I have made my problem clear. Could someone give me some idea what parameters/ensemble I should use to overcome this problem? Please suggest where I am making a mistake.

Thanks and Regards, Alok

----- Original Message -----
From: Mark Abraham [EMAIL PROTECTED]
To: Discussion list for GROMACS users gmx-users@gromacs.org
Sent: Friday, October 19, 2007 11:58 AM
Subject: Re: [gmx-users] uneven distribution of water across the bilayer

Alok wrote:

Dear All, I am trying to simulate a lipid-water system (340 POPE lipids, 6120 TIP4P waters). During solvation by genbox, it also adds water in the interior of the bilayer. I removed those water molecules with my perl script. But after removing these water molecules I observed a zone between the lipid head groups and the water.

You'll have to describe that zone better if you want us to understand what you're talking about.
Read genbox -h where it mentions vdwradii.dat.

I tried to do small simulations (50 to 250 ps) using different pressure couplings, but I am still not getting a structure with a homogeneous arrangement of water over the lipid head groups. There is an uneven distribution of water across the bilayer.

Are these last two observations related, or not?

During these short simulations, position restraints were applied to the lipids.

Check your waters aren't restrained too.

I tried isotropic, semiisotropic, and anisotropic pressure coupling with the following parameters, but no luck.

I think you need to read section 7.3.14 of the manual. You're using combinations of parameter values that don't make sense.

Isotropic:
    Pcoupl          = Berendsen
    Pcoupltype      = isotropic
    tau_p           = 2.0
    compressibility = 4.5e-5
    ref_p           = 1

Semiisotropic:
    Pcoupl     = Berendsen
    Pcoupltype = semiisotropic
    tau_p
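For readers following the vdwradii.dat pointer above: the usual trick is to place an edited copy of the file in the working directory (GROMACS tools look there before the installed topology directory), so that genbox treats the listed atoms as larger when deciding where solvent fits and leaves no water inside the bilayer core. A sketch of such an edited copy follows; the column layout (residue, atom, radius in nm) is my recollection of the 3.x data file, and the enlarged 0.375 nm carbon value is an illustrative number often quoted on this list, not a tested one:

    ; vdwradii.dat (local, edited copy) - illustrative sketch only
    ; resname  atomname  radius (nm)
    ???        C         0.375   ; enlarged from the default to keep water out
    ???        N         0.110
    ???        O         0.105
    ???        H         0.040

After solvating, remove the local copy so that later steps use the default radii again, and remember to update the solvent count in the topology.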
Re: [gmx-users] solvate using genbox results in water in the center of the bilayer. How to edit pdb file contents in gromacs?
Alok wrote:

I took a pre-equilibrated POPE.pdb file which already had SPC water molecules. I deleted these water molecules ...

Why not leave them?

... After that I had a file with no water in the interior of the bilayer, but there was a vacuum between the lipid head groups and the TIP4P water molecules (I defined it as a ZONE in my previous mail).

The Z-coordinate-based water-removal procedure you describe can't create such a vacuum, so I can't follow your description.

... I am getting an uneven distribution of water molecules over the lipid head groups.

I'm afraid I can't understand what you mean by an uneven distribution in the absence of a picture or a structure.

... I ran four different short MD runs using isotropic, semiisotropic, and anisotropic pressure coupling, and a last one with no pressure coupling (NVT ensemble). But in all cases I get a similar final structure.

I made suggestions about your parameters last time. You don't seem to have followed them.

Mark
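For concreteness, here is a minimal sketch of the Z-criterion water removal discussed in this thread, assuming a .gro-format coordinate file. The file names and the SOL/OW/C13 names are assumptions based on the thread; this illustrates the criterion only and is not Alok's actual perl script:

    # Remove solvent residues whose oxygen lies, in z, inside the range
    # spanned by the lipid C13 atoms. Sketch only: fixed .gro columns,
    # hypothetical file names, no handling of residue-number wrap-around.
    def fields(line):
        # .gro fixed columns: resid 1-5, resname 6-10, atom name 11-15,
        # then three 8-character floats for x, y, z (nm)
        return (line[0:5], line[5:10].strip(), line[10:15].strip(),
                float(line[36:44]))

    with open("system_solvated.gro") as fh:          # hypothetical input
        lines = fh.readlines()
    body = lines[2:-1]                               # skip title, count, box

    z_c13 = [z for resid, resname, atom, z in map(fields, body) if atom == "C13"]
    z_min, z_max = min(z_c13), max(z_c13)

    # Residue ids of waters whose oxygen sits inside the bilayer interior.
    bad = {resid for resid, resname, atom, z in map(fields, body)
           if resname == "SOL" and atom.startswith("OW") and z_min < z < z_max}

    kept = [l for l in body
            if not (fields(l)[1] == "SOL" and fields(l)[0] in bad)]

    with open("system_cleaned.gro", "w") as out:     # hypothetical output
        out.write(lines[0])                          # title line
        out.write("%d\n" % len(kept))                # new atom count
        out.writelines(kept)
        out.write(lines[-1])                         # box vectors

After any such removal, the SOL count in the .top file must be reduced by the number of deleted waters.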
[gmx-users] solvate using genbox results in water in the center of the bilayer. How to edit pdb file contents in gromacs?
Dear Chris, Thanks for your time and suggestion. I tried all the possible pressure couplings (including semiisotropic) and ran for around 500 ps. ... In all cases I get a similar final structure, with an uneven distribution of TIP4P water molecules over the head groups. The parameters I used for the different couplings are all listed below. ...

Semiisotropic (second simulation):
    Pcoupl          = Berendsen
    Pcoupltype      = semiisotropic
    tau_p           = 2 2
    compressibility = 0 4.5e-5
    ref_p           = 0 1.0

What's going on here? Apparently x/y stays the same and Z can scale? I am relatively sure that this is your problem. Try this:

Semiisotropic (second simulation):
    Pcoupl          = Berendsen
    Pcoupltype      = semiisotropic
    tau_p           = 2 2
    compressibility = 4.5e-5 4.5e-5
    ref_p           = 1.0 1.0

**Note that I personally use tau_p = 4, but 2 should be fine also. Also note that there may _possibly_ be some issue with gromacs that makes the zeroes not work as they should, but I would rather suspect that everything is working as it should and that it is just not working as you might expect. Try the above suggestion and let me know how it works out. Chris.

Anisotropic (third simulation):
    Pcoupl          = Berendsen
    Pcoupltype      = anisotropic
    tau_p           = 10.0 10.0 10.0 0 0 0
    compressibility = 4.5e-5 4.5e-5 4.5e-5 0 0 0
    ref_p           = 1.0 1.0 1.0 0 0 0

I would avoid anisotropic. In any event, I would still avoid zeroes.

NVT (fourth simulation).

I hope I have made my problem clear. Could someone give me some idea what parameters/ensemble I should use to overcome this problem? Please suggest where I am making a mistake.
[gmx-users] No improvement in scaling on introducing flow control
Hi,

We tried turning on flow control on the switches of our local cluster (www.dcsc.sdu.dk) but were unable to achieve any improvement in scale-up whatsoever. I was wondering if you folks could shed light on how we should go ahead with this. (We have not installed the all-to-all patch yet.) The cluster architecture is as follows:

## * Computing nodes
160x Dell PowerEdge 1950 1U rack-mountable servers with 2 2.66 GHz Intel Woodcrest CPUs, 4 GB RAM, 2x 160 GB HDD (7200 rpm, 8 MB buffer, SATA150), 2x Gigabit Ethernet
40x Dell PowerEdge 1950 1U rack-mountable servers with 2 2.66 GHz Intel Woodcrest CPUs, 8 GB RAM, 2x 160 GB HDD (7200 rpm, 8 MB buffer, SATA150), 2x Gigabit Ethernet

## * Switches
9 D-Link SR3324
2 D-Link SRi3324
The switches are organised in two stacks, each connected to the infrastructure switch with an 8 Gb/s LACP trunk.

## * Firmware build on the switches: 3.00-B16
There are newer firmware builds available, but according to the update logs, there is no update to the IEEE flow control protocol in the newer firmware.

## * Tests (run using OpenMPI, not LAM/MPI)
DPPC-bilayer system of ~ 4 atoms, with PME and cutoffs, 1 fs time step. The scale-up data is as follows. We are also currently running some tests with larger systems.

# Procs   nanoseconds/day   Scaleup
    1         0.526           1
    2         1.0             1.90
    4         1.768           3.36
    8         1.089           2.07
   16         0.39            0.74

Any inputs will be very helpful. Thank you.

Best, -himanshu
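For reference, the scale-up column above is just the throughput on N processes relative to one process, scaleup(N) = (ns/day on N procs) / (ns/day on 1 proc), e.g. 1.768 / 0.526 ≈ 3.36 on 4 procs. A few lines of Python reproduce the whole column from the numbers in the table:

    # Scaleup(N) = (ns/day on N procs) / (ns/day on 1 proc)
    perf = {1: 0.526, 2: 1.0, 4: 1.768, 8: 1.089, 16: 0.39}
    for n in sorted(perf):
        print("%3d procs  %6.3f ns/day  scaleup %.2f" % (n, perf[n], perf[n] / perf[1]))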
Re: [gmx-users] solvate using genbox results in water in the center of the bilayer. How to edit pdb file contents in gromacs?
Dear Chris,

Thanks a lot for your suggestions. I have started the MD based on your suggestions; I will tell you as soon as I get the results.

PS: Has it already been reported that Gromacs has some problem with ZERO?

Regards, Alok
Re: [gmx-users] No improvement in scaling on introducing flow control
Hi Himanshu,

Maybe your problem is not even flow control, but the limited network bandwidth, which is shared among 4 CPUs in your case. I have also done benchmarks on Woodcrests (2.33 GHz) and was not able to scale an 8 atom system beyond 1 node with Gbit Ethernet. Looking in more detail, the time gained by the additional 4 CPUs of a second node was exactly balanced by the extra communication. I used only 1 network interface for that benchmark, leaving effectively only 1/4 of the bandwidth for each CPU.

Using two interfaces with OpenMPI did not double the network performance on our cluster. In my tests, nodes with 2 CPUs sharing one NIC were faster than nodes with 4 CPUs sharing two NICs. Could be on-node contention, since both interfaces probably end up on the same bus internally.

Regards, Carsten

--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics Department
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/research/dep/grubmueller/
http://www.gwdg.de/~ckutzne
Re: [gmx-users] No improvement in scaling on introducing flow control
Hi Carsten,

Thank you very much for the prompt reply. I know very little about network architecture, and therefore understand your explanation only partly. Based on what you say, however, would it be fair to conclude that on the quad-core Woodcrests it will not be possible to improve scale-up without altering the network hardware itself? Do you think it will be worthwhile to test an all-to-all optimization at all?

Thank you, -Himanshu
Re: [gmx-users] No improvement in scaling on introducing flow control
himanshu khandelia wrote:

Hi Carsten, thank you very much for the prompt reply. I know very little about network architecture, and therefore understand your explanation only partly. Based on what you say, however, would it be fair to conclude that on the quad-core Woodcrests it will not be possible to improve scale-up without altering the network hardware itself? Do you think it will be worthwhile to test an all-to-all optimization at all?

If my guess is right and bandwidth is the problem here, the patch will not improve the scaling. Were the benchmarks made with 1 or 2 NICs per node? If with 1 NIC per node, then there should be no network congestion for the case of 8 CPUs (= 2 nodes). You could try a back-to-back connection between two nodes to be absolutely sure that the rest of the network (switch etc.) does not play a role. I would try that, repeat the benchmark for 8 CPUs, and see if you get a different value.

Regards, Carsten
[gmx-users] g_wham
Hi guys,

I am running into trouble when trying to use g_wham with my Gromacs 3.3. The input to g_wham was as follows:

    g_wham pull.pdo pull_1.pdo ... pull_10.pdo -o cabe.xvg -hist histo.xvg

The output I received from g_wham is:

    gunzip: stdin: not in gzip format
    ---
    Program g_wham, VERSION 3.3
    Source code file: gmx_wham.c, line: 90
    Fatal error:
    This does not appear to be a valid pdo file
    ---

Furthermore, we also supplied pull.pdo.gz instead of pull.pdo, and then obtained the following message:

    Opening file pull.pdo.gz.
    ---
    Program g_wham, VERSION 3.3
    Source code file: gmx_wham.c, line: 90
    Fatal error:
    This does not appear to be a valid pdo file
    ---

Can anyone help me with g_wham?

Thanks a lot in advance for your collaboration.

Regards, Javier Lopez

--
Dr. Jose Javier Lopez Cascales
Profesor Titular de Escuela Universitaria, Area de Quimica Fisica
Universidad Politecnica de Cartagena
Centro de Electroquimica y Materiales Inteligentes (CEMI)
Campus de Alfonso XIII, Aulario II, 30203 Cartagena, Murcia, Spain
Phone: +34-968-325567, Fax: +34-968-325931
Skype: jjlopezcascales
e-mail: [EMAIL PROTECTED]
http://www.upct.es/electroquimica/laboratorio/javierc.htm
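A possible way to narrow this down: the "gunzip: stdin: not in gzip format" line in the first error suggests g_wham 3.3 pipes its input through gunzip, so plain-text .pdo files would need to be gzip-compressed first; and a file that still fails as .gz probably does not carry the header g_wham expects. Below is a small sketch that checks both conditions for each file; the "# UMBRELLA" header string is my recollection of the 3.x pdo format and should be treated as an assumption:

    # Check whether each pull .pdo file is gzipped and starts with the
    # expected pdo header (header string is an assumption, not verified).
    import gzip, sys

    for fname in sys.argv[1:]:
        try:
            with gzip.open(fname) as fh:              # succeeds only on gzipped files
                first = fh.readline().decode("ascii", "replace")
            gzipped = True
        except (IOError, OSError):
            with open(fname) as fh:
                first = fh.readline()
            gzipped = False
        ok = first.startswith("# UMBRELLA")
        print("%s: gzipped=%s header_ok=%s (%r)" % (fname, gzipped, ok, first.strip()))

Running it as "python check_pdo.py pull*.pdo*" (hypothetical script name) should show whether the files are compressed and whether their first line looks like a pdo header at all.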
[gmx-users] problems with opls_
I am a new user of Gromacs and I would like to know the OPLS atom types for the C, =O, and -O in RCOOR' (an ester). I have tried many times but still fail to get the proper answer. Thanks.
Re: [gmx-users] problems with opls_
huan wrote:

I am a new user of Gromacs and I would like to know the OPLS atom types for the C, =O, and -O in RCOOR'. I have tried many times but still fail to get the proper answer. Thanks.

If you mean the atom types, check ffoplsaa.atp.

--
David. David van der Spoel, PhD, Assoc. Prof., Molecular Biophysics group,
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: 46 18 471 4205, fax: 46 18 511 755
[EMAIL PROTECTED]
http://folding.bmc.uu.se
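To follow David's pointer, the quickest route is to search the atom-type file for ester entries; as far as I recall, the OPLS-AA ester carbonyl C, carbonyl =O, and ester -O- sit in the opls_465 to opls_467 range, but verify that against your own copy of ffoplsaa.atp. A tiny sketch (the fallback path for GMXLIB is an assumption):

    # Print OPLS atom types whose comment mentions "ester" (sketch).
    import os

    top = os.environ.get("GMXLIB", "/usr/local/gromacs/share/gromacs/top")  # assumed default
    with open(os.path.join(top, "ffoplsaa.atp")) as fh:
        for line in fh:
            if "ester" in line.lower():
                print(line.rstrip())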
[gmx-users] solvate using genbox results in water in the center of the bilayer. How to edit pdb file contents in gromacs?
Dear Chris, Thanks a lot for your suggestions. I have started the MD based on your suggestions; I will tell you as soon as I get the results.

OK, great.

PS: Has it already been reported that Gromacs has some problem with ZERO?

No. And I don't believe that gromacs has a problem here. But it is something that you could test. For example: is the z dimension changing at all in your simulation? I only mentioned it because I have no personal knowledge from my own experience that it works correctly with zeroes. That of course doesn't make it incorrect :) but I would prefer for you to try a set of conditions that I do know personally to work correctly.
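One concrete way to run the test Chris suggests: extract the box dimensions from the energy file with g_energy (the energy file should contain Box-X/Box-Y/Box-Z terms to select) and check whether the z dimension actually fluctuates. A sketch that inspects the resulting .xvg; the file name and column layout are assumptions:

    # Inspect box-Z from an .xvg written by g_energy (sketch; assumes the
    # file has time in column 0 and Box-Z in column 1).
    zs = []
    with open("box.xvg") as fh:                 # hypothetical g_energy output
        for line in fh:
            if line.startswith(("#", "@")):     # skip xvg comments/legends
                continue
            cols = line.split()
            if len(cols) >= 2:
                zs.append(float(cols[1]))
    print("Box-Z: min %.4f nm, max %.4f nm, drift %.4f nm"
          % (min(zs), max(zs), max(zs) - min(zs)))

If Box-Z never moves under semiisotropic coupling with a nonzero z compressibility, something is wrong with the setup.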
[gmx-users] MSD near specific molecules
Dear users:

1. I have a lipid bilayer of POPC with cholesterols. I am trying to calculate the MSD of the POPCs that are nearest to cholesterols. The nearest POPCs are defined as those POPCs whose C13-atom-to-cholesterol-O6-atom distances are less than a certain cutoff. Is there a good way to do that?

2. The overall lateral diffusion of each leaflet should be removed first. Using GMX 3.3.2, I intend to get this with trjconv -pbc nojump -center -boxcenter tric; is that right?

3. The two leaflets have been separated, and if I apply g_dist to the groups upper-cholesterol and upper-POPC, and then use g_analyze -msd, I can only get the mutual diffusion of the centers of mass of these two groups, is that right?

Thank you. -DJ
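No single GROMACS 3.3 tool does the selection in question 1 directly, so a custom script is the usual route. Below is a rough numpy sketch of the whole recipe: pick the POPCs whose C13 is within a cutoff of any cholesterol O6 in a reference frame, remove the leaflet centre-of-mass motion, and accumulate the lateral (x/y) MSD. The arrays here are synthetic stand-ins; in practice they would be filled from a trajectory that was unwrapped first (e.g. with trjconv -pbc nojump), and the cutoff, array names, and shapes are all assumptions for illustration:

    # Sketch: lateral MSD of POPCs whose C13 starts within a cutoff of any
    # cholesterol O6, with the leaflet centre-of-mass drift removed.
    import numpy as np

    rng = np.random.default_rng(0)
    n_frames, n_popc, n_chol = 100, 64, 16
    # c13[f, i, :] - C13 of POPC i in frame f; o6[f, j, :] - O6 of cholesterol j
    c13 = rng.uniform(0, 6, (1, n_popc, 3)) + np.cumsum(
        rng.normal(0, 0.02, (n_frames, n_popc, 3)), axis=0)
    o6 = rng.uniform(0, 6, (1, n_chol, 3)) + np.cumsum(
        rng.normal(0, 0.02, (n_frames, n_chol, 3)), axis=0)
    leaflet_com = c13.mean(axis=1)           # crude stand-in for the leaflet COM
    cutoff = 0.7                             # nm; assumed definition of "nearest"

    # 1. Select POPCs near cholesterol in the reference frame (frame 0).
    d0 = np.linalg.norm(c13[0][:, None, :] - o6[0][None, :, :], axis=-1)
    sel = np.where(d0.min(axis=1) < cutoff)[0]
    if sel.size == 0:
        raise SystemExit("no POPC within cutoff of a cholesterol O6")

    # 2. Remove leaflet COM motion, keep x/y only, accumulate the MSD.
    xy = c13[:, sel, :2] - leaflet_com[:, None, :2]
    msd = ((xy - xy[0]) ** 2).sum(axis=-1).mean(axis=1)

    for f in range(0, n_frames, 10):
        print("frame %3d  MSD %.4f nm^2" % (f, msd[f]))

Averaging the per-molecule squared displacements, as here, also sidesteps the issue in question 3: g_dist followed by g_analyze would only report the relative motion of the two group centres of mass, not the diffusion of the individual lipids.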