Does that mean that an empty array comprehension is always Array{Any}?
> that array comprehensions are now type-inference-independent. That means
> that the type of the resulting array only depends on the actual types of
> values produced, not what the compiler can prove about the expression in
>
Very nice summary, thanks for posting. One question I had was what should
the signature of a function be to receive a generator? For example, if the
only method of extrema is extrema(A::AbstractArray), is that too
restrictive?
Jared Crean
On Tuesday, October 11, 2016 at 1:05:03 PM UTC-4, S
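A hedged sketch of one answer: leave the argument untyped (or duck-typed to any iterable) so generators work without being collected first. `myextrema` is a hypothetical name chosen to avoid clashing with `Base.extrema`; the loop uses Julia 0.5's `start`/`next`/`done` iteration protocol.

```julia
# Hypothetical method that accepts any iterable, including generators.
function myextrema(itr)
    state = start(itr)              # Julia 0.5 iteration protocol
    (v, state) = next(itr, state)
    lo = hi = v
    while !done(itr, state)
        (v, state) = next(itr, state)
        lo = min(lo, v)
        hi = max(hi, v)
    end
    return (lo, hi)
end

myextrema(x^2 for x in -3:3)        # → (0, 9), no temporary array built
```

Restricting the signature to `AbstractArray` would force callers to `collect` the generator first, defeating its laziness.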
To make it concrete, I have
type A{T}
x
a::Array{Any,1}
end
The elements of the array a are numbers, Symbols, strings, etc., as well as
more instances of type A{T}. They
may be nested to arbitrary depth. If I call show on an instance of A{T},
then show will be called recursively
on all p
The `showall` function does this.
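For reference, a minimal Julia 0.5 sketch of that suggestion: the REPL display elides the middle of long arrays, while `showall` prints every element.

```julia
A = collect(1:1000)

# At the REPL, displaying A shows an abbreviated form with the middle elided.
# `showall` writes the value without that truncation:
showall(A)      # prints all 1000 elements
```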
On Monday, October 10, 2016 at 3:16:20 PM UTC-4, John Paul Vasquez wrote:
>
> Is there a command/option to display all the elements of a very long
> array? The output gets cut in the middle. For example when calling
> Pkg.available(), I only see the head and tai
Jeff's approach should be included in the performance hints doc
On Tuesday, October 11, 2016 at 1:04:43 PM UTC-4, Jeff Bezanson wrote:
>
> If the performance of this matters, it will probably be faster to
> iterate over `1:nfields(myt)`, as that will avoid (1) constructing the
> array of symbols
Are you saying a and b and c and d?
(a) that you have an outer type which has a Rational field and has another
field of a type that has a field which is typed Rational or is typed e.g.
Vector{Rational}
(b) and displaying a value of the outer type includes displaying the
Rationals from withi
What worked:
function bloop(size; write_size=10)
data = Array{UInt8}("f" ^ size)
println("start")
(pout, pin, p) = readandwrite(`cat -`)
println("read")
read_task = @async read(pout)
for chunk in chunks(data, write_size)
println("write")
write(pin, chu
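The digest cuts the snippet off; here is a hedged completion, under the assumption that `chunks` (not defined in the visible part) splits the byte vector into `write_size`-byte slices:

```julia
# Assumed helper: yield successive slices of `data` of at most `n` bytes.
chunks(data, n) = (data[i:min(i + n - 1, end)] for i in 1:n:length(data))

function bloop(size; write_size=10)
    data = Array{UInt8}("f" ^ size)
    (pout, pin, p) = readandwrite(`cat -`)
    read_task = @async read(pout)       # drain stdout so `cat` never blocks
    for chunk in chunks(data, write_size)
        write(pin, chunk)
    end
    close(pin)                          # signal EOF so `cat` can exit
    return wait(read_task)              # the echoed bytes
end
```

The `@async` reader is the important part: without it, a large write can deadlock once the pipe buffers on both sides fill up.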
On Tue, Oct 11, 2016 at 1:13 PM, Evan Fields
wrote:
> I'm unsure if "bit shared" is a technical term I should know, or if "bit
> shared" is a smartphone typo for "not shared" which would describe my
> understanding of normal loops, where it seems each iteration doesn't have
> access to loop-only
I think I understand what you are saying (not sure). A problem that arises
is that if I call show or print on an object, then show or print may be
called many times on fields and fields of fields, etc., including from
within Base code before the call returns. I don't know how to tell the
built
I don't know much about @simd. I see it pop up when people use loops made
up of very simple arithmetic operations, but I don't know if map can take
advantage of it for your more complicated function.
On Friday, October 7, 2016 at 4:29:20 AM UTC-4, Martin Florek wrote:
>
> Thanks Andrew for answe
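For context, a minimal sketch of the kind of loop `@simd` typically helps with: straight-line floating-point arithmetic with no branches, where allowing the compiler to reassociate the reduction lets it vectorize. Whether `map` with a more complicated function gets the same treatment is, as noted above, unclear.

```julia
function simdsum(x::Vector{Float64})
    s = 0.0
    @inbounds @simd for i in 1:length(x)
        s += x[i]       # reassociation permitted, so the loop can vectorize
    end
    return s
end

simdsum(ones(10_000))   # → 10000.0
```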
I'm unsure if "bit shared" is a technical term I should know, or if "bit
shared" is a smartphone typo for "not shared" which would describe my
understanding of normal loops, where it seems each iteration doesn't have
access to loop-only variables defined in a previous iteration. :)
I guess the
Since the 0.5 release affects everyone here, I wrote a longish blog post
about what the major changes are:
http://julialang.org/blog/2016/10/julia-0.5-highlights.
One other change that I left out of the post because it was getting pretty
long and it seems a bit esoteric is that array comprehension
If the performance of this matters, it will probably be faster to
iterate over `1:nfields(myt)`, as that will avoid (1) constructing the
array of symbols, (2) looking up the index within the type for that
field.
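A hedged sketch of that suggestion: iterate over field indices and use `getfield` with an integer, so no symbol array is built and no name-to-index lookup happens. The `Point` type here is only for illustration.

```julia
immutable Point
    x::Float64
    y::Float64
    z::Float64
end

function fieldsum(p)
    s = 0.0
    for i in 1:nfields(typeof(p))
        s += getfield(p, i)     # index-based access, no Symbol involved
    end
    return s
end

fieldsum(Point(1.0, 2.0, 3.0))  # → 6.0
</imports>
```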
On Mon, Oct 10, 2016 at 7:20 PM, K leo wrote:
> Thanks so much Mauro. That does it.
On Oct 11, 2016 12:34 PM, "Evan Fields" wrote:
>
> Let's say I have a type MyType and function f(mt::MyType) which is slow
and stochastic. I have an object y::MyType, and I'd like to compute f(y)
many times.
>
> If I write a loop like
> fvals = Vector{Float64}(100)
> Threads.@threads for i in 1:le
Let's say I have a type MyType and function f(mt::MyType) which is slow and
stochastic. I have an object y::MyType, and I'd like to compute f(y) many
times.
If I write a loop like
fvals = Vector{Float64}(100)
Threads.@threads for i in 1:length(fvals)
ycopy = deepcopy(y)
fvals[i] = f(ycopy)
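A hedged variation on the loop above: if a copy can be reused between calls on the same thread (i.e. `f`'s mutation of its argument does not require a fresh object each time), one `deepcopy` per thread avoids one per iteration.

```julia
# One private copy of `y` per thread, indexed by threadid().
copies = [deepcopy(y) for _ in 1:Threads.nthreads()]
fvals = Vector{Float64}(100)
Threads.@threads for i in 1:length(fvals)
    fvals[i] = f(copies[Threads.threadid()])
end
```

If `f` needs a pristine copy on every call, keep the `deepcopy` inside the loop as in the original.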
You can do it with 2 (e.g. integer) channels per worker (requests and
replies) and a task for each pair in the main process. That's so ugly I'd
be tempted to write an
interface to named system semaphores. Or just use a separate file for each
worker.
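A hedged sketch of the two-channels-per-worker idea (all names hypothetical): each worker puts a token on its request channel before touching the shared resource, a gatekeeper task on the main process grants access via the reply channel, and a single local `Channel` serializes the grants.

```julia
# One request/reply RemoteChannel pair per worker; `token` serializes access.
token = Channel{Int}(1)
put!(token, 1)

pairs = [(RemoteChannel(() -> Channel{Int}(1)),
          RemoteChannel(() -> Channel{Int}(1))) for _ in workers()]

for (req, rep) in pairs
    @async while true
        take!(req)          # worker asks to enter the critical section
        t = take!(token)    # wait until no other worker holds access
        put!(rep, t)        # grant
        take!(req)          # worker signals it is done
        put!(token, t)      # release
    end
end
```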
On Monday, October 10, 2016 at 11:09:39 AM U
Hi,
I have a dataset of many (about 30 million) observations of the type
Tuple{Person, Array{DataA,1}, Array{DataB,1}}
where
immutable Person # simplified
id::Int32
female::Bool
age::Int8
end
immutable DataA
startdate::Int32
enddate::Int32
typ::Int8   # `type` is a reserved word in Julia 0.5, so renamed here
extra::UInt8
end
and DataB is