Oh, Main
import Main.foo
Thanks.
On Wednesday, October 26, 2016 at 3:27:50 PM UTC-4, Ryan Gardner wrote:
>
> say I have code:
>
>
> type foo
>     a
> end
>
> module MyModule
>     # how do I use foo here?
>     # can I
>     import .foo
>     # ??
> end
>
say I have code:

type foo
    a
end

module MyModule
    # how do I use foo here?
    # can I
    import .foo
    # ??
end
There must be a way to use global types in modules. Is there a name for
the "global module" (if you will)? Thanks.
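A runnable sketch of the answer given above (written in current Julia syntax, where `struct` replaces the 0.5-era `type`; the names `Foo`, `MyModule`, and `wrap` are illustrative): top-level definitions live in the module `Main`, so a nested module can import them from there.

```julia
# Top-level definitions in a script belong to the module Main.
struct Foo
    a::Int
end

module MyModule
    import Main.Foo       # bring the top-level type into scope

    # `wrap` is a made-up helper, just to show the imported type in use
    wrap(x) = Foo(x)
end

f = MyModule.wrap(42)
@assert f.a == 42
```

The relative form `import ..Foo` also works here, since `MyModule`'s parent is `Main`.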
Alright, well I hacked up a copy of ClusterManagers such that I added an
obtain_procs() function that actually gets the available processes and
generates and stores the relevant WorkerConfigs with the ClusterManager.
This function does not require any locks to be obtained.
Then addprocs() (speci
...)
    finally
        unlock(worker_lock)
    end
end
guess that confirms what's going on... hah.
On Monday, October 24, 2016 at 1:40:22 PM UTC-4, Ryan Gardner wrote:
>
> I'm trying to write code for sun grid engine (sge) although I think the
> general idea applies to any addproc
I'm trying to write code for sun grid engine (sge) although I think the
general idea applies to any addprocs. I would like to be able to request a
gazillion nodes, and start using each shortly after it becomes available.
An example of what I want is roughly this code:
for j=1:100
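One possible shape for this (a sketch only, not SGE-specific; the job list, the worker count, and the squaring function are all made up): issue `addprocs(1)` from several background tasks, so each task blocks only until its own worker is up and then starts pulling jobs immediately, without waiting for the rest of the request to be granted.

```julia
using Distributed

jobs    = collect(1:8)        # work items, illustrative
results = Int[]

@sync for _ in 1:2            # pretend the cluster grants us 2 nodes
    @async begin
        pid = addprocs(1)[1]  # returns as soon as this one worker is up
        while !isempty(jobs)
            x = popfirst!(jobs)   # tasks are cooperative: no lock needed here
            # anonymous functions serialize to the worker, so no @everywhere
            push!(results, remotecall_fetch(y -> y^2, pid, x))
        end
    end
end

@assert sort(results) == [1, 4, 9, 16, 25, 36, 49, 64]
```

Each `@async` task blocks inside its own `addprocs(1)` call, so a slow node only delays its own task, not the others.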
I'm looking for a way to reliably kill asynchronous tasks.
My code is roughly:
task = @async call_external_program_that_may_never_return
#do stuff of interest
exit(0) #please really exit now, no matter what
Currently, if the external program never returns, neither does my program,
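One pattern that avoids this (a sketch in current Julia; `sleep 100` stands in for the external program, and in the 0.5 era the call was `spawn(cmd)` rather than `run(cmd, wait=false)`): keep the `Process` handle instead of burying the call in a bare `@async`, then kill the child explicitly before exiting.

```julia
# Launch without waiting; we get a Process handle back immediately.
proc = run(`sleep 100`, wait=false)

# ... do stuff of interest ...

kill(proc)   # send SIGTERM so the child can't keep the session alive
wait(proc)   # reap it; does not throw even though the child was killed

@assert process_exited(proc)
```

With the handle in scope, the child can be terminated on any exit path (e.g. from a `finally` block), rather than hoping `exit(0)` tears it down.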
Thanks. Makes sense now.
On Tuesday, October 18, 2016 at 3:53:00 PM UTC-4, Ryan Gardner wrote:
>
> The documentation for Julia 0.5.0 says that the lock returned by
> ReentrantLock() "is NOT threadsafe" (
> http://docs.julialang.org/en/release-0.5/stdlib/parallel
The documentation for Julia 0.5.0 says that the lock returned by
ReentrantLock() "is NOT threadsafe"
(http://docs.julialang.org/en/release-0.5/stdlib/parallel/ see
ReentrantLock()). What does that mean? I interpret it to mean that I
cannot safely call lock or unlock simultaneously with diff
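For context: the 0.5-era caveat meant the lock only coordinated tasks on a single OS thread (in current Julia, `ReentrantLock` is thread-safe as well). A small sketch of the task-level use it always supported, echoing the `try`/`finally` `unlock` pattern from the ClusterManagers hack above; the counter and `bump!` helper are made up:

```julia
const worker_lock = ReentrantLock()
counter = Ref(0)

function bump!()
    lock(worker_lock)
    try
        n = counter[]
        yield()               # another task may be scheduled here; the lock keeps it out
        counter[] = n + 1
    finally
        unlock(worker_lock)   # always released, even if the body throws
    end
end

@sync for _ in 1:10
    @async bump!()
end

@assert counter[] == 10       # without the lock, the yield() could lose updates
```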
I was running with --depwarn=no to suppress deprecation warnings. Now I'm
parallelizing, and the warnings still seem to be printed for all the
workers (which is probably making everything super slow - they are printed
every single time they are hit, not just at parsing or compilation).
Anyone
:47 AM UTC-4, Ryan Gardner wrote:
>
> I was running with --depwarn=no to suppress deprecation warnings. Now I'm
> parallelizing, and the warnings still seem to be printed for all the
> workers (which is probably making everything super slow - they are printed
> every single
Can anyone comment on the underlying memory allocation behavior of these
two operations? I believe push! can allocate more memory than the
resulting array needs whenever the backing memory has to be reallocated.
Does it double the size of the underlying allocated memory?
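A rough way to observe the effect (a sketch; `build` is a made-up helper). Historically the buffer roughly doubled when full, though the exact growth factor is an implementation detail and has changed across versions. Either way, the total memory allocated while push!-building a vector exceeds the final size, because each regrowth allocates a fresh, larger buffer; `sizehint!` avoids the intermediate buffers:

```julia
function build(n; hint = false)
    v = Int[]
    hint && sizehint!(v, n)   # preallocate capacity for n elements
    for i in 1:n
        push!(v, i)           # amortized O(1); occasionally reallocates + copies
    end
    return v
end

build(10); build(10; hint = true)     # warm up so @allocated excludes compilation

n = 100_000
a = @allocated build(n)               # sum of all intermediate buffers
b = @allocated build(n; hint = true)  # essentially one allocation up front

@assert b < a
```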
ple, and it took 290 seconds on a machine with enough
> RAM. Given that it is creating a matrix with half a billion nonzeros, this
> doesn’t sound too bad.
>
> -viral
>
>
>
> On 30-Apr-2014, at 8:48 pm, Ryan Gardner wrote:
>
> > I've got 16GB of RAM on
:
>> > Sorry for pointing out a probably obvious problem, but as there are
>> others that might try debug this issue on their laptop, I ask how much
>> memory do you have? 70*700 floats + indexes, will spend a minimum of 11
>> GB (if my math is correct) and p
Creating sparse arrays seems exceptionally slow.
I can set up the non-zero data of the array relatively quickly. For
example, the following code takes about 80 seconds on one machine.
vec_len = 70
row_ind = Uint64[]
col_ind = Uint64[]
value = Float64[]
for j = 1:70
for k = 1:700
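A small-scale sketch of where that triplet code appears to be headed (in current Julia, where `SparseArrays` must be loaded and `Int` indices are idiomatic rather than the 0.3-era `Uint64`; the values here are made up). `sizehint!` avoids repeated reallocation while accumulating, and `sparse` then assembles the CSC matrix from the triplets in one pass, which is normally far cheaper than inserting entries into a sparse matrix one at a time:

```julia
using SparseArrays

nrow, ncol = 70, 700
nnz_est = nrow * ncol
row_ind = Int[]; col_ind = Int[]; value = Float64[]
sizehint!(row_ind, nnz_est); sizehint!(col_ind, nnz_est); sizehint!(value, nnz_est)

for j in 1:nrow, k in 1:ncol
    push!(row_ind, j)
    push!(col_ind, k)
    push!(value, j + k)       # placeholder values
end

# One-shot COO -> CSC assembly; duplicates (none here) would be summed.
A = sparse(row_ind, col_ind, value, nrow, ncol)

@assert size(A) == (70, 700)
@assert nnz(A) == 70 * 700
```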
schedule yet. It has been about six months since 0.2
> was released, and we are very close to 0.3 now... My rough guess for 0.4
> would be August-ish, bumping to LLVM3.5, but that's really just a guess.
>
>
>
>
> On Mon, Apr 28, 2014 at 10:31 AM, Ryan Gardner
> >
Similarly, is there any schedule for the releases (either one with rough
objectives or a harder one)?
On Monday, April 28, 2014 9:48:18 AM UTC-4, Ryan Gardner wrote:
>
> Can anyone point me to something that describes or briefly describe the
> process for determining/ensuring that a r
Can anyone point me to something that describes, or briefly describe, the
process for determining/ensuring that a release is stable? A few
sentences would be fine.
Is there essentially a large set of test cases that are run on the code
before the release is made, while those test cases aren't run on t
A project I work on has a very high interest in cross-compiling Julia to
embedded architectures, so that Julia code could be run on embedded
systems. (I can update this with a more definitive list of target
architectures later, but I think the main two are PowerPC and ARM.)
I've been reading s