Build gcc on the target machine, for example. Also, to simplify the build, you
can take a look at http://prefix.gentoo.org, for example.
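For reference, a minimal sketch of a fully static 4.6-era build; the CMake options below exist in the GROMACS 4.6 build system, but the version numbers and paths are placeholders, not details from this thread:

  # build a static single-precision FFTW first
  cd fftw-3.3.x
  ./configure --enable-float --enable-sse2 --enable-static --disable-shared
  make && make install

  # then build GROMACS, preferring static libraries over shared ones
  cd gromacs-4.6.x
  mkdir build && cd build
  cmake .. -DBUILD_SHARED_LIBS=OFF -DGMX_PREFER_STATIC_LIBS=ON
  make && make install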
2013/5/15 Alexey Shvetsov ale...@omrb.pnpi.spb.ru
In a message dated 15 May 2013 00:11:49, Андрей Гончар wrote:
Hi. I'm trying to compile gromacs statically from source. Everything goes well,
but when I
move it to another machine and try to launch mdrun, I get the message:
mdrun: error while loading shared libraries: libfftw3f.so.3: cannot
Hello,
In a message dated 28 January 2013 16:06:20, Juliette N. wrote:
Dear all,
I am trying to calculate the structure factor or scattering density of a
polymer in solution. The only tool I know is g_rdf -f .trr -s .tpr -sq -n
index
g_rdf can calculate the SAXS structure factor.
g_sans can
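For a concrete g_rdf invocation (the file names here are placeholders, not from the original mail; -sq is the structure-factor output of g_rdf in the 4.x series):

  g_rdf -f traj.trr -s topol.tpr -n index.ndx -sq sq.xvg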
Hi!
What files do you have in the ffname.ff directory?
Do you have the files
ffbonded.itp, ffnonbonded.itp, forcefield.doc, forcefield.itp?
forcefield.doc should contain the force field description, e.g.
AMBER99SB-ILDN force field (Lindorff-Larsen et al., Proteins 78, 1950-58, 2010)
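For comparison, the top of forcefield.itp in the stock AMBER ports looks roughly like this (quoted from memory, so treat it as illustrative rather than exact):

  #define _FF_AMBER
  #define _FF_AMBER99SB

  [ defaults ]
  ; nbfunc  comb-rule  gen-pairs  fudgeLJ  fudgeQQ
    1       2          yes        0.5      0.8333

  #include "ffnonbonded.itp"
  #include "ffbonded.itp"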
In a message dated 23 January 2013 18:42:43
Justin Lemkul wrote on 22-12-2012 00:33:
On 12/21/12 12:55 PM, Alexey Shvetsov wrote:
Justin Lemkul wrote on 19-12-2012 01:10:
On 12/18/12 3:59 PM, XUEMING TANG wrote:
Hi there
I searched through the website for g_sans, which is a simple tool to
compute Small Angle Neutron Scattering spectra
Hello!
Actually, g_sans from release-4.6 can only compute SANS scattering for a
single structure. Trajectory averaging has already been merged to master, so
you can cherry-pick that commit, or I can send you a patch against
release-4.6.
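The cherry-pick itself is short; the commit id below is a placeholder, since the thread does not name it, and release-4-6 is the usual GROMACS branch name:

  git checkout release-4-6
  git cherry-pick <commit-id-of-the-trajectory-averaging-change>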
Justin Lemkul wrote on 19-12-2012 01:10:
On 12/18/12 3:59 PM, XUEMING TANG wrote:
Hi there
I searched through the website for g_sans, which is a simple tool to
compute Small Angle Neutron Scattering spectra. But I cannot find it
in the gromacs folder?
I found it in the following website:
Justin Lemkul wrote on 07-11-2012 16:20:
On 11/7/12 7:10 AM, Steven Neumann wrote:
On Wed, Nov 7, 2012 at 11:56 AM, Justin Lemkul jalem...@vt.edu
wrote:
On 11/7/12 4:19 AM, Steven Neumann wrote:
Dear Gmx Users,
I am trying to simulate protein-ligand interactions at a specific
pH = 5.
I
Hi!
Will it use CUDA or OpenCL? The latter would be more portable, since it
works on a wider range of platforms (CPU, GPU, FPGA).
Szilárd Páll wrote on 27.11.2011 23:50:
Native acceleration = not relying on external libraries. ;)
--
Szilárd
On Sun, Nov 27, 2011 at 9:33 PM, Peter C. Lai
Also, for an additional performance benefit, you can use the ATLAS math library
and compile gromacs and fftw with extra CFLAGS like '-mfpmath=sse
-msse<version>' (e.g. -msse2). This enables SSE math (by default, 32-bit gcc
generates x87 math).
PS Also you can use CFLAGS=-O2 -pipe -march=native -mfpmath=sse -
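Spelled out for the autotools-era build, with -msse2 standing in for whatever SSE level your CPU supports:

  export CFLAGS="-O2 -pipe -march=native -mfpmath=sse -msse2"
  ./configure     # run with the same CFLAGS in both the fftw and gromacs trees
  make && make install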
Hi.
There are two possibilities:
1. the utility written by Oliver F. Lange and Helmut Grubmüller [1], which
computes
generalized correlation coefficients;
2. my utility, which computes Pearson correlation coefficients [2]
[1]
yourself a correlation matrix.
Hope it helps,
Tsjerk
On Sun, Jun 19, 2011 at 11:55 AM, Alexey Shvetsov
ale...@omrb.pnpi.spb.ru wrote:
Hi all!
There is also another problem: you are using Intel compilers on an AMD-based
machine. Intel compilers use CPUID to select processor-specific optimizations,
and they don't include AMD CPUIDs at all, so you will get poor performance on
AMD CPUs. I'd recommend using a recent version of gcc.
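If you do switch, a quick way to see which instruction sets -march=native enables on the AMD box (a plain gcc query, nothing GROMACS-specific):

  gcc -march=native -Q --help=target | grep -E 'march|sse'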
Hi,
How to write your hdb block is described in chapter 5.6.4 of the
gromacs-4.5.3 manual.
On Mon, 14 Mar 2011 11:58:17 +0200, Yulian Gavrilov wrote:
Dear gromacs users,
Please, can you explain the structure of the FFAMBER99.HDB file?
For example,
LYN 7
1 1 H N -C CA
1 5 HA
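As a reading aid, here is the same block annotated with the column meanings from that chapter (the control atoms of the HA line are cut off in the quote above, so they are left out here too):

  LYN 7            ; residue name and number of hydrogen-addition blocks
  1 1 H  N -C CA   ; add 1 H of type 1 (planar), named H, on atom N,
                   ; using -C and CA as control atoms
  1 5 HA           ; add 1 H of type 5 (single tetrahedral H), named HA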
2010/5/2 David van der Spoel sp...@xray.bmc.uu.se
On 2010-05-02 10.32, Alexey Shvetsov wrote:
Hi,
Actually, Supermicro offers 4-socket machines with 12-core AMD CPUs, so
you get 48 cores in 1U.
GROMACS performs well on such hardware =)
Sure, but a simple rule of thumb is that you want
Hi
It looks like your system simply runs out of memory, so power cycling the nodes
isn't needed. If your cluster runs Linux, then it already has the OOM killer,
which will kill processes that run out of memory. Also, having swap on the
nodes is a good idea even with a huge amount of memory.
Memory usage for mpi
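To confirm that the OOM killer did fire on a node, the standard Linux checks (not from the original thread) are enough:

  dmesg | grep -i 'out of memory'   # kernel log entries left by the OOM killer
  free -m                           # current memory and swap usage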
On Friday 18 December 2009 23:47:34, David van der Spoel wrote:
Alexey Shvetsov wrote:
On Friday 18 December 2009 19:29:15, XAvier Periole wrote:
Dears,
we have been trying to run gmx-4.0.X on new machines with the
Intel i7 CPUs. It is a quad-core Intel on which multi-threading
For me, Gentoo and gcc 4.3.x work fine =)
And yes, gcc 4.1.x is broken.
On Tuesday 17 March 2009 12:44:43, alkasrivast...@iitb.ac.in wrote:
Hi,
Since the following warning is given on the GROMACS home page,
WARNING: do not use the gcc 4.1.x set of compilers. They are