Dear Sir,

Are these requirements for RAM?


Thanks
Vasudev

On 25 May 2016 at 12:58, Georgios Michalareas <giorgos.michalar...@esi-frankfurt.de> wrote:

> Hi,
>
> the recommended memory for MEG pipelines is:
>
> hcp_baddata          32 GB
> hcp_icaclass         32 GB
> hcp_tmegpreproc      32 GB
> hcp_eravg            32 GB
> hcp_tfavg            32 GB
> hcp_srcavglcmv       16 GB
> hcp_srcavgdics       16 GB
> hcp_tmegconnebasic   16 GB
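>
> (If you run these under a batch scheduler such as SLURM, as discussed
> further down in this thread, the figures above become the job's memory
> request. A minimal sketch, assuming the standard #SBATCH syntax:
>
> #SBATCH --mem=32G   # hcp_baddata, hcp_icaclass, hcp_tmegpreproc, hcp_eravg, hcp_tfavg
> #SBATCH --mem=16G   # hcp_srcavglcmv, hcp_srcavgdics, hcp_tmegconnebasic
>
> Only the one line matching the pipeline being submitted would be used.)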
>
> Best
>
> Giorgos
>
> On 5/25/2016 12:02 PM, Dev vasu wrote:
>
> Dear Sir,
>
>
> How much working memory is needed to run the tasks in the MEG pipeline?
> Most often I am encountering the following error:
>
> *" Out of memory. Type HELP MEMORY for your options. Error in
> ft_read_cifti (line 362) Error in megconnectome (line 129) " *
> I have 14.5 GB of Linux swap space and 3.9 GB of RAM:
>
> *" vasudev@vasudev-OptiPlex-780:~$ grep MemTotal /proc/meminfo
> MemTotal:        3916512 kB " *
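>
> (Swap can be checked from the same /proc interface, for example with
>
> grep SwapTotal /proc/meminfo
>
> and "free -h" prints RAM and swap together.)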
>
> Could you please let me know if this is sufficient to run the pipelines?
>
> Thanks
> Vasudev
>
>
> On 24 May 2016 at 20:34, Timothy B. Brown <tbbr...@wustl.edu> wrote:
>
>> You will then need to learn how to write a script to be submitted to the
>> SLURM scheduler.
>>
>> I am not familiar with the SLURM scheduler, but from a very brief look at
>> the documentation you linked to, I would think that the general form of a
>> script for the SLURM scheduler would be:
>>
>> #!/bin/bash
>> #  ... SLURM scheduler directives ... e.g. #SBATCH ... telling the system
>> #      such things as how much memory you expect to use and how long you
>> #      expect the job to take to run
>> #  ... initialization of the modules system ... e.g. source /etc/profile.d/modules.sh
>> #  ... loading of the required software modules ... e.g. module load fsl
>> ... command to run the HCP Pipeline script you want to run (e.g. Structural
>>     Preprocessing, MEG processing, etc.) for the subject and scans you want
>>     to process
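>>
>> As a concrete illustration only (the job name, memory, time limit, module
>> names, and pipeline command below are placeholders, not LRZ-specific
>> values), such a script might look like:
>>
>> #!/bin/bash
>> #SBATCH --job-name=hcp_meg        # a name for the job
>> #SBATCH --mem=32G                 # memory to reserve for the job
>> #SBATCH --time=08:00:00           # expected run time (hh:mm:ss)
>> source /etc/profile.d/modules.sh  # initialize the modules system
>> module load fsl                   # load required software
>> # run the chosen HCP pipeline for one subject (placeholder command)
>> /path/to/Pipelines/MEGProcessing.sh --subject=100307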
>>
>> Once you've written such a script (for example, named myjob.cmd), it
>> appears that you would submit the job using a command like:
>>
>> sbatch myjob.cmd
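>>
>> Assuming the standard SLURM client tools are available on the cluster, you
>> should then be able to monitor the job with:
>>
>> squeue -u $USER    # list your queued and running jobs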
>>
>> At the link that you provided, there is a section titled "Introductory
>> Articles and Tutorials by LRZ". I would suggest you follow the links
>> provided in that section, read that documentation, and submit any
>> questions you have to the LRZ service desk (a link to the service desk is
>> also on the page you linked to).
>>
>>   Tim
>>
>> On Tue, May 24, 2016, at 13:00, Dev vasu wrote:
>>
>> Dear Sir,
>>
>> I have a cluster available; here is the link to it:
>> http://www.lrz.de/services/compute/linux-cluster/
>>
>>
>> Thanks
>> Vasudev
>>
>> On 24 May 2016 at 19:43, Timothy B. Brown <tbbr...@wustl.edu> wrote:
>>
>> Do you already have a cluster set up and available, or are you looking to
>> set up the cluster?
>>
>> If you are looking to set up a cluster, do you have the group of machines
>> you want to use already available, or are you thinking of setting up a
>> cluster "in the cloud" (e.g. a group of Amazon EC2 instances)?
>>
>>   Tim
>>
>> On Tue, May 24, 2016, at 12:18, Dev vasu wrote:
>>
>> Dear Sir,
>>
>> Currently I am running the HCP Pipelines on a standalone computer, but I
>> would like to set up the pipelines on a Linux cluster. If possible, could
>> you please provide me some details about the procedure I would have to
>> follow?
>>
>>
>>
>> Thanks
>> Vasudev
>>
>> --
>> Timothy B. Brown
>> Business & Technology Application Analyst III
>> Pipeline Developer (Human Connectome Project)
>> tbbrown(at)wustl.edu
>> ________________________________________
>> The material in this message is private and may contain Protected
>> Healthcare Information (PHI). If you are not the intended recipient, be
>> advised that any unauthorized use, disclosure, copying, or the taking of
>> any action in reliance on the contents of this information is strictly
>> prohibited. If you have received this email in error, please immediately
>> notify the sender via telephone or return mail.
>>
>>
>

_______________________________________________
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users
