Hi Steven,
Thank you very much for taking the time to try to answer my
question.
Perhaps it was not clear enough: yes, I have a usage question that I am
asking here. I'll try to rephrase it. Could you kindly answer it?
I have data (could be a string) that is stored in an IOBuffer. I
Yes, sorry, I'm sure the devil is in the details, otherwise it would have
been done a long time ago...
On Sunday, July 24, 2016 at 11:08:07 AM UTC-4, Yichao Yu wrote:
>
> > On Sun, Jul 24, 2016 at 9:44 AM, Cedric St-Jean wrote:
> > Thank you for the explanations. With
On Sun, Jul 24, 2016 at 5:22 PM, 'Bill Hart' via julia-users wrote:
> I built the dlls we make use of in our Nemo package a slightly odd way, but
> everything worked, all tests passed.
>
> I decided not to be lazy and built the dlls the correct way, and all of a
>
I built the dlls we make use of in our Nemo package a slightly odd way, but
everything worked, all tests passed.
I decided not to be lazy and built the dlls the correct way, and all of a
sudden I get a ReadOnlyMemoryError() whilst running our test code.
This is with either Julia 0.4.0 or 0.4.6
I've opened an issue (https://github.com/JuliaLang/julia/issues/17593) -
and realized that with Julia v0.5 we may not need this at all. :-)
On Saturday, July 23, 2016 at 8:47:57 PM UTC+2, Stefan Karpinski wrote:
>
> I would file an issue about having an fma! function. Since it's only
>
fma is a very specific operation defined by an IEEE standard. BLAS axpy
does not implement it.
If you extend the fma definition to linear algebra, then you'd arrive at a
definition such as "no intermediate result will be rounded". The BLAS
definition of axpy does not state that, and it's unlikely
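To make the single-rounding distinction concrete, here is a small Float64 sketch (inputs chosen so the naive expression rounds away the interesting bits):

```julia
e = 2.0^-27
a = 1.0 + e          # exactly representable in Float64
b = 1.0 - e          # exactly representable in Float64
# a*b is mathematically 1 - e^2, but the rounded product is exactly 1.0,
# so the naive two-step expression loses the -e^2 term entirely:
naive = a * b - 1.0          # 0.0
# fma rounds only once, after the full a*b + c, so the tiny term survives:
fused = fma(a, b, -1.0)      # -e^2
```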
The deeper problem though is: what if I want to use all the currently installed
packages at the same time, but don’t want the downgraded versions of them? Say
I want to use package A and B, and both have a dependency on C, but A requires
some older version of C, and that prevents me from
On Sun, Jul 24, 2016 at 3:18 PM, Yichao Yu wrote:
> On Sun, Jul 24, 2016 at 3:12 PM, Joosep Pata wrote:
>> Right, thanks for the tip. To confirm: `ui/repl.c` is still the code that
>> gets compiled to the julia(-debug) binary, right?
>
> Yes.
>
>> If I
On Sun, Jul 24, 2016 at 3:12 PM, Joosep Pata wrote:
> Right, thanks for the tip. To confirm: `ui/repl.c` is still the code that
> gets compiled to the julia(-debug) binary, right?
Yes.
> If I call "Base._start()" via libjulia, I still need to reproduce the usual
> argv
https://github.com/JuliaLang/julia/issues/17571
Not the full thing you recommend, but on the other hand probably much easier to
implement.
From: julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] On
Behalf Of Chris Rackauckas
Sent: Saturday, July 23, 2016 7:38 AM
To:
Right, thanks for the tip. To confirm: `ui/repl.c` is still the code that
gets compiled to the julia(-debug) binary, right?
If I call "Base._start()" via libjulia, I still need to reproduce the usual
argv logic of the julia binary.
I'll just patch `repl.c` to my needs then without changing the
On Sun, Jul 24, 2016 at 2:39 PM, Yichao Yu wrote:
> On Sun, Jul 24, 2016 at 2:37 PM, Joosep Pata wrote:
>> I'd like to not re-implement all the REPL boiler-plate, like
>> ~~~
>> ios_puts("\njulia> ", ios_stdout);
>> ios_flush(ios_stdout);
>>
On Sun, Jul 24, 2016 at 2:37 PM, Joosep Pata wrote:
> I'd like to not re-implement all the REPL boiler-plate, like
> ~~~
> ios_puts("\njulia> ", ios_stdout);
> ios_flush(ios_stdout);
> line = ios_readline(ios_stdin);
> ~~~
> and so on.
That's not
I'd like to not re-implement all the REPL boiler-plate, like
~~~
ios_puts("\njulia> ", ios_stdout);
ios_flush(ios_stdout);
line = ios_readline(ios_stdin);
~~~
and so on.
In effect, I want to launch the usual julia REPL, but call some of my own
initialization procedures
That explanation is a bit off, actually: it's not that f1 can't optimize
for t, it's that f1 has to do a method lookup every time it's called.
type X; x::Int end
g1(x) = x.x+1
g2(x::X) = x.x+1
x = X(1)
const y = X(1)
@code_warntype g1(x)
#...
# begin
#return
On Sun, Jul 24, 2016 at 1:21 PM, Joosep Pata wrote:
> Hi,
>
> I'd like to compile ui/repl.c into a shared library so that I could dlopen
> julia after some other initialization procedures that would otherwise
> conflict with the LLVM linked to julia.
You should **NOT**
Hi,
I'd like to compile ui/repl.c into a shared library so that I could dlopen
julia after some other initialization procedures that would otherwise conflict
with the LLVM linked to julia.
I succeeded in doing that on OSX using:
~~~
diff --git a/ui/Makefile b/ui/Makefile
+julia-release:
Actually the performance drop there is just because of the always looming
threat of global var inefficiency :P
Because the var t is global, f1 can't optimize for its type. But f2 has a
type declared on its argument, so it can. Remove the ::mytype on f2 or make
t a const and you'll see no
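A standalone sketch of that effect, with illustrative names rather than the thread's actual code:

```julia
t = 1.0                        # non-const global: its type could change at any time
f_global() = t + 1             # the compiler can't assume t's type here
f_typed(x::Float64) = x + 1    # argument type is known, so this specializes fully
const s = 1.0
f_const() = s + 1              # const global: type is fixed, so this also specializes
```

`@code_warntype f_global()` should show the untyped global access, while the other two come out concretely typed.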
Maybe it does. It's unclear to me whether operations like BLAS axpy
implement fma or muladd.
On Sunday, July 24, 2016, Oliver Schulz wrote:
> Uh, sorry, I don't quite get that. I thought muladd was basically the
> same as fma - with the difference that fma has to
On Sun, Jul 24, 2016 at 9:44 AM, Cedric St-Jean wrote:
> Thank you for the explanations. With that in mind, why is stack-allocating
> heap-object-containing immutables hard? Doesn't it boil down to inspecting
> the immutable type at compile-time to find the offset of the
As long as you don't mind preserving exactly what's between the {},
(t.parameters...)
is an easy way to get this.
--Tim
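Spelled out as a function (the name `tupletypes` is just for illustration; this assumes a concrete tuple type, i.e. a DataType — an `NTuple{3}` with no element type would need extra handling):

```julia
# T.parameters holds the component types of a concrete Tuple type
# as a Core.SimpleVector; splatting it gives a plain tuple of types
tupletypes(T::DataType) = (T.parameters...,)

tupletypes(Tuple{Int64, Float64})   # -> (Int64, Float64)
```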
On Sunday, July 24, 2016 5:21:56 AM CDT Kristoffer Carlsson wrote:
> Maybe;
>
> type MyType end
>
> function f(t::DataType)
> I = ()
> for t in t.types
> if isa(t,
On Sunday, July 24, 2016 at 7:52:47 AM (UTC-4), jw3126 wrote:
>
> I need a function, which accepts an arbitrary tuple type and returns the
> types of the components of the tuple. For example
> ```
> f(Tuple{Int64, Float64})--> (Int64, Float64)
> f(Tuple{Int64, MyType,
I think you are right btw, the compiler got rid of the wrapper function for
the "+" call, since all I see above is Base.add_float.
On Sun, Jul 24, 2016 at 4:55 PM, Marius Millea wrote:
> Here's my very simple test case. I will also try on my actual code.
>
> using
Here's my very simple test case. I will also try on my actual code.
using SelfFunctions
using TimeIt
@selftype self type mytype
    x::Float64
end
t = mytype(0)
# Test @self'ed version:
@self @inline function f1()
    1+x
end
@timeit f1(t)
println(@code_warntype(f1(t)))
# 100 loops,
Alan and I stopped by a couple of times. We did sponsor bags and flyers -
but were unable to have a booth this year with many of us travelling after
juliacon.
-viral
On Thursday, July 21, 2016 at 8:36:44 PM UTC-4, Sheehan Olver wrote:
>
> Julia Computing sponsored the bags. Though I was
Thank you Josef! It has been too long since I have used MATLAB, I guess!
On Sunday, July 24, 2016 at 5:11:42 AM UTC-4, Josef Heinen wrote:
>
> The usage of the subplot() command should be the same as in MATLAB, e.g.
>
> using GR
>
> x = linspace(-3.8, 3.8)
>
> y_cos = cos(x)
>
> y_poly = 1 -
The compiler is pretty smart about removing these extra function calls, so
I didn't get any extra overhead on my test cases. I went ahead and added
`@inline` to the selfcall definitions. You can also do this:
@self @inline function inc2()
    inc()
    inc()
end
Update from the gist and try
Thank you for the explanations. With that in mind, why is stack-allocating
heap-object-containing immutables hard? Doesn't it boil down to inspecting
the immutable type at compile-time to find the offset of the heap-allocated
objects' pointers, and pushing those onto the GC frame?
On Sunday,
On Sun, Jul 24, 2016 at 8:26 AM, Uwe Fechner wrote:
> Thanks for your answer. Some more questions:
> - is a gc frame just a pointer?
- a parent pointer (it forms a linked list)
- the length
- the GC roots
> - if not, which information does it hold? In which file is the gc frame
> type/
Thanks for your answer. Some more questions:
- is a gc frame just a pointer?
- if not, which information does it hold? In which file is the gc frame
type/
structure defined?
- is there exactly one gc frame for each local variable?
- why is it called "frame"? The term implies that it is around
Maybe;
type MyType end
function f(t::DataType)
    I = ()
    for t in t.types
        if isa(t, TypeVar)
            I = (I..., Any)
        else
            I = (I..., t)
        end
    end
    return I
end
julia> f(Tuple{Int64, Float64})
(Int64,Float64)
julia> f(Tuple{Int64, MyType, Float32})
(Int64,MyType,Float32)
julia> f(NTuple{3})
(Any,Any,Any)
On Sun, Jul 24, 2016 at 6:03 AM, Uwe Fechner wrote:
> Hello,
> in the issues on github I see a lot that refer to gc frames.
>
> What is a gc frame? I know how garbage collection works in general,
> but I don't understand the meaning of gc frames in the context of Julia.
I need a function, which accepts an arbitrary tuple type and returns the
types of the components of the tuple. For example
```
f(Tuple{Int64, Float64})--> (Int64, Float64)
f(Tuple{Int64, MyType, Float32}) --> (Int64, MyType, Float32)
f(NTuple{3})
Very nice! Didn't understand your hint earlier but now I do!
My only problem with this solution is the (perhaps unavoidable) run-time
overhead, since every single function call gets wrapped in one extra
function call. With a very simple test function that just does some
arithmetic, I'm seeing
Hello,
in the issues on github I see a lot that refer to gc frames.
What is a gc frame? I know how garbage collection works in general,
but I don't understand the meaning of gc frames in the context of Julia.
Could someone explain:
- What they are used for?
- When they need to be created?
- If
The usage of the subplot() command should be the same as in MATLAB, e.g.
using GR
x = linspace(-3.8, 3.8)
y_cos = cos(x)
y_poly = 1 - x.^2./2 + x.^4./24
subplot(2, 2, 1)
plot(x, y_cos)
subplot(2, 2, 2)
plot(x, y_poly, "g")
subplot(2, 2, [3, 4])
plot(x, y_cos, "b", x, y_poly, "g")
...
Uh, sorry, I don't quite get that. I thought muladd was basically the same
as fma - with the difference that fma has to do it without rounding the
intermediate result, while muladd is free to do whatever is most efficient
on the given hardware? But why wouldn't muladd! make sense for
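For reference, both operations already exist as scalar functions in Base; a quick sketch of the difference in their contracts:

```julia
a, b, c = 2.0, 3.0, 1.0
muladd(a, b, c)   # a*b + c, free to fuse or not, whichever is fastest here
fma(a, b, c)      # a*b + c with a guaranteed single rounding
```

For these inputs both give exactly 7.0; they can differ in the last bit when the hardware has no fused instruction and muladd falls back to two roundings.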
Yes, I know axpy. However - as far as I understand - that is explicitly a
BLAS function. So it's not something that, e.g., a GPU library would
provide specialized methods for. Also, axpy always results in a call, even
for small arrays (please correct me if I'm wrong). I think in-place
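For anyone following along, the BLAS wrapper being discussed is available from Julia (in current versions it lives under LinearAlgebra); a minimal sketch of its in-place contract:

```julia
using LinearAlgebra

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
BLAS.axpy!(2.0, x, y)   # y <- 2.0*x + y, updated in place
# y is now [12.0, 24.0, 36.0]
```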