Sorry, correct my call to:

julia -p 32 exper.jl > myout.out

(without the leading `<` — not sure whether that makes a difference)
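For reference, here is a minimal sketch of what such a driver script could look like on Julia 0.3 (the filenames and the function name `simulate` are assumptions, not from the thread):

```julia
# exper.jl -- run as: julia -p 32 exper.jl > myout.out
# In Julia 0.3, require() loads the file on the master AND on all workers
# started with -p, so every process gets the definitions from analysis.jl.
require("analysis.jl")

# Suppose analysis.jl defines simulate(seed) (hypothetical name).
# pmap farms the jobs out across the 32 worker processes.
results = pmap(simulate, 1:100)

println(results)
```

Note that `require` looks the file up relative to the current working directory (and `LOAD_PATH`), so the script should be launched from the directory containing analysis.jl.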

On Saturday, 30 August 2014 12:58:57 UTC+1, Florian Oswald wrote:
>
> @require should work for what you want. i usually run batch jobs like this
>
> julia -p 32 < exper.jl > myout.out
>
> maybe give it a try?
> also, do you have 32 CPUs? not sure how stable this is if you use many 
> more processes than cores.
>
> here is a working example for a large cluster:
> https://github.com/floswald/parallelTest/tree/master/julia/iridis
>
> the setup is different, but you should be able to figure out from sge.jl 
> how I load the functions. also make sure you are in the right directory.
>
> On Saturday, 30 August 2014 04:01:00 UTC+1, Travis Porco wrote:
>>
>> julia> versioninfo()
>> Julia Version 0.3.1-pre+405
>> Commit 444fafe* (2014-08-27 20:11 UTC)
>> Platform Info:
>>   System: Linux (x86_64-linux-gnu)
>>   CPU: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
>>   WORD_SIZE: 64
>>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
>>   LAPACK: libopenblas
>>   LIBM: libopenlibm
>>   LLVM: libLLVM-3.3
>>
>>
>> On Friday, August 29, 2014 10:15:54 AM UTC-7, Travis Porco wrote:
>>>
>>> Hello--I'd like to be able to run something like this:
>>> nohup ../julia/julia -p 32 < mscript.jl
>>> where inside mscript.jl, I would like each worker to read in and have 
>>> access to a large script (something like require("analysis.jl") )
>>> and then call a function defined in my own file, inside which various 
>>> pieces of a computation are done in parallel.
>>> Does anyone have a working example? Nothing I have tried has worked (I 
>>> must have just misunderstood the manual). 
>>> Thanks.
>>>
>>
