Your builder doesn't return int, it returns some proc returning int. As simple
as that.
Oh, probably. I thought that by "attach" you meant:
type Shape {.inheritable.} = ref object
  ...
  area: proc(s: Shape): float

type Square = ref object of Shape
  ...

proc newShape(...): Shape =
  ...
  area = areaShape

proc newSquare(...):
@Araq
Well, I don't think it's possible to use generics in this solution. As far as I
know, Rust's trait objects are implemented in a similar manner and they lack
generic methods (that's one of the reasons why it's often advised to use
generic traits rather than generic methods within
@mashingan
No, it's not about number of arguments. See that:
proc fun(x,y: int) = echo x+y
proc gun(x,y: int): int = x+y
fun 1, 2 # works fine
let x = gun 1, 2 # doesn't compile
let x = gun 1: 2 # compiles (!), although it looks weird
Just like GULPF
@GULPF
That's very interesting, actually! I never thought I was SO mainstream, using
so many macros.
@didlybom
But strings and sequences ARE already treated a little differently than plain
reference object types, aren't they? The most trivial example: strings have
their literals, and sequences (and arrays) have `openArray`, but neither
strings nor sequences have an object constructor. So what
Why not just treat `discard` like a function call which takes another function
call? Then `discard fun x, y` would parse as `discard(fun(x), y)`, just like
with, say, `echo`. Then it just makes sense.
@jzakiya
So you DID inspect C code differences and also checked another compiler, but
you still blame Nim, not gcc, for that time difference? Even to the point of
calling it a security issue?
Surely, why not?
Actually, Julia's main advantage isn't pure performance. It's ease of use.
Please notice Julia is NOT a true HPC language as the only type of parallelism
it provides is master-slave (as far as I know) which isn't even common in HPC.
However, it's handy to be able to use Python, C and Fortran
I'm wondering whether it's possible to iterate over all entities available in
a module. My main idea is a wrapper module macro, which could sort-of-borrow
other type's procedures declared in another module.
@planhths
Yes, I actually overlooked where it happens. Now I see that what it does is
essentially a partial transposition (I mentioned before that transposition can
give nice results; it also plays nicely with parallelization).
But did you really call THAT one a "naive algorithm"? Doesn't really look
I do use {.experimental.}. The message is precisely: "Warning: overloaded
'.' and '()' operators are now .experimental; () is deprecated [Deprecated]"
Cool! I didn't see the () operator coming. Last time I asked about it, there
was no proposal for it yet, if my memory serves me well.
I would like to have a function behaving like an object, so I could overload an
operator on that function. I need the following to be true:
assert 1.fun ^ 2 == fun ^ 2
I know I could use fun() with a default argument but that syntax isn't
acceptable in this context.
But that's just how the terminal works. For me, it returns:
* ESC: 27
* F1: 27, then 79, then 80 (THREE chars per one keydown)
* →: 27, then 91, then 67
The problem's not with the lib. Go ahead and try it in C, you should get the
same results.
You allocated a wholly new seq in optMatrixProduct, no wonder it's slower
than the unoptimized version. By the way: have you tried changing the
representation of the matrix before multiplication (so that for k in 0 ..< a.n:
a.data[i,k] and b.data[k,j] are both linear in memory)? As far as I
Update: I completed the first project. Used gnuplot-nim for plotting, it's not
bad (I'll make some PR soon, though).
Hi folks!
I've been more or less active on the forum for quite a long time but to tell
you the truth, I haven't really used Nim for any "serious" stuff. Now that I
started a Physical Processes Modeling course as part of my studies, a friend
of mine asked me: "Will you use Matlab or Python?". It was
Well, if you need gmatch3 then it won't do. But if gmatch1 and gmatch2 are
enough, use a C++-like approach and make those a proc which returns a
structure with items and pairs iterators.
type MatchWrap = distinct seq[string]
proc gmatch(src, pat: string): MatchWrap =
@mashingan @StasB What if a macro uses, let's say, system date? Or connects to
a database (you can't check whether it changed until you do connect)? And
still, even a noSideEffect macro is a code generator/transformer, not a textual
substituter so there is no analogy to C.
@Demos Well, I do but I consider it dirty.
@guibar Thank you, guibar, I forgot I need a static[T] to change macros'
semantics. I haven't used that for some time now.
@mratsim Thank you. I haven't read this blog post before, actually.
Thank you guys, I didn't notice custom field pragmas were possible in the
latest devel. I waited for them for quite a long time.
@Araq How can I do it with macros? getImpl returns the const's default value,
not the strdefined one:
import macros

const module {.strdefine.}: string = "math"

macro importconst(name: string): untyped =
  let value = name.symbol.getImpl
  echo "variable
And that's one of the reasons why I like functional programming... Firstly,
you don't really need a new seq for the job you're doing. You should iterate
over sections separated by ',' but you can modify them in-place (or even
better --- directly write them to the file!).
Here is a
> Interesting, I am also a Rust user. (...) Nim is amazingly productive.
I prefer Rust for more complex projects but heretically use Nim for reusable
scripts (interchangeably with Python).
Actually, it works for ALL of a type's parameters, including types:
type Matrix[W, H: static[int], T] = object
  data: array[W * H, T]

var m: Matrix[3, 2, int]
m.data = [1,2, 3,4, 5,6]

proc `[]`(m: Matrix, x, y: int): m.T {.inline.} = # here m.T is a return
Well... the template is the right thing to do but without closure magic ---
just use a block:
template scope(code): auto =
  block:
    code

let a = scope:
  echo "hello"
  1
echo a
Better yet, use a single seq and iterate over it as if it was NxM. It will be
more cache-friendly.
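A minimal sketch of that flat-seq layout (the `at` accessor and the sizes are
illustrative):

```nim
# Store an N x M grid in one flat seq and index it as row*M + col;
# contiguous memory keeps row-wise traversal cache-friendly.
const
  N = 3  # rows
  M = 4  # columns

var grid = newSeq[int](N * M)

# Illustrative accessor; the index arithmetic is the whole trick.
template at(g: seq[int], row, col: int): untyped =
  g[row * M + col]

for i in 0 ..< N:
  for j in 0 ..< M:
    grid.at(i, j) = i * M + j

doAssert grid.at(1, 2) == 6
doAssert grid.at(2, 3) == 11
```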
@Serenitor It's different in many ways:
* also changes semantics of == ("is it actually the same object" instead of
"is this (other) object the same")
* no dynamic dispatch is usable anyway unless {.inheritable.} is applied first
* poorer performance due to dynamic heap allocation (and GC)
Bizarrely enough, the following seems to work:
# module A
static:
  var test = 0

proc test_proc(): void {.discardable, compileTime.} =
  var a = test
  echo a

static:
  test_proc()

# module B
import A
static:
I think so. I tried it with both typed and untyped macro arguments. Funnily
enough, it works as expected for untyped ones:
macro sth(code: untyped): untyped =
  echo code.repr

sth:
  let s = 5
  echo s
But not for typed:
macro sth(code:
As for writing REALLY Python-like Nim code, please have a look at
[nimpylib](https://github.com/Yardanico/nimpylib). In some simple cases and
when good design-patterns are followed in the Python code, it can get almost
1:1.
ast = getAst(inner(body))
code = $toStrLit(ast)
is replaceable by:
code = body.repr
Other than that, it seems to be a bug as body.repr.parseStmt should be an
identity and here it's not (parse error due to invalid indentation).
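For simple statement lists the round-trip does behave as an identity; a
minimal sketch of the check (the macro name is illustrative):

```nim
import macros

# Round-trip: stringify the AST with repr, parse it back with parseStmt,
# and emit the re-parsed code. For well-formed input this is a no-op.
macro roundtrip(body: untyped): untyped =
  let code = body.repr      # AST -> source text
  result = parseStmt(code)  # source text -> AST again

roundtrip:
  let x = 1 + 2
  doAssert x == 3
```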
Well, it is not an issue but I don't get why it doesn't work for a block:
{.experimental.}

type MyObj = object
  a: int

proc `=destroy`(m: MyObj) =
  echo "Destruct"

block:
  let x = MyObj(a: 5)
I find it misleading as a block should
Rewriting doesn't have to be bad. One of the reasons is that it may be possible
to write a more efficient implementation more easily in another language (see:
Rust's enums are faster than C++ virtual, especially for small ones). Also:
just like a wrapper with native types (uses seq instead of
You're welcome. I mostly use Nim for metaprogramming fun so I'm kind of used
to this kind of trick.
I think I had a similar problem with printing some time ago... Anyway: use
BiggestUInt instead of uint64. It has better semantics and
could be platform-adjusted, e.g. we could add uint128 if there was a platform
which natively supports it and the BiggestInt would then be
# Replace:
proc atomicIncRelaxed*[T: AtomType](p: VolatilePtr[T], x: T = 1): T
# with:
proc atomicIncRelaxed*[T: AtomType](p: VolatilePtr[T], x: T = 1.int32): T
Note it will still work for int64 thanks to the conversion.
The reason for this problem is that int32 is int at
Well, the whole problem is that = can't be AST-overloaded. That would be the
best and most nimish solution. However, I found three other solutions as well,
one of which I will quote. Sadly, all three require patterns so if you use
--patterns:off, the checks will be disabled. Here it comes,
In fact, I consider it a bug in the compiler that the following doesn't work:
proc button*[W, H: UILength = UIAbsLength](self: UISystem,
buttonLabel = none(string),
width = W(0.0),
I just grabbed the first part of code that is very easy to understand. I didn't
know that changing the order of yielded values doesn't make any difference for
you (I did think about how greatly it would simplify transform but forgot when
posting).
I only had a glance at the rest of the
It should not forbid it but allow it. Please notice that generics are almost
replaceable by the () operator (they should be replaceable by the [] operator
but then it only works when called explicitly, not by operator syntax):
template Sth(t: typedesc = int): typedesc =
  type `Sth t` =
It's quite easy to speed it up, actually. Let's take a look at your transform
iterator:
proc flip(s: seq[string]): seq[string] =
  result = s # copy
  result[0] = s[^1]
  result[^1] = s[0]

proc transpose(s: seq[string]): seq[string] =
  result = s # copy
I followed the manual as for how to use higher-kinded concepts. I was quite
surprised when the code containing genericHead actually compiled, but returned
something I don't really get...
import future, typetraits, options

type Functor[A] = concept f
  f.get is A
No, you're wrong. Iterable is ANY container that can be iterated over
(including lists, sets etc.) while openArray is anything that has an
array-like memory layout, i.e. array or seq. Your code fails for containers
with a non-linear memory layout:
import lists
var li =
{.noSideEffect, codeGenDecl: "__attribute__((pure)) $# $#$#".} still doesn't
help for my Nim 0.17.2 and gcc 5.4.0.
Compiles just fine for my Nim 0.17.2.
Use object variants, they're exactly for cases like that --- the number of
variants is fixed.
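A minimal object-variant sketch (type and field names are illustrative):

```nim
# The set of alternatives is fixed by the enum discriminator, which
# matches the "fixed number of variants" situation above.
type
  MsgKind = enum
    mkInt, mkText
  Msg = object
    case kind: MsgKind
    of mkInt:
      num: int
    of mkText:
      text: string

proc describe(m: Msg): string =
  case m.kind
  of mkInt: "int: " & $m.num
  of mkText: "text: " & m.text

doAssert describe(Msg(kind: mkInt, num: 42)) == "int: 42"
doAssert describe(Msg(kind: mkText, text: "hi")) == "text: hi"
```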
@mratsim Well, it seemed to me the current idea is to push the GC aside so that
Nim will become scope-based "by default" with optional GC just where it's
needed. Thanks for the explanation though.
If it suggests the new direction Nim is heading in (according to Araq's blog
post), i.e. turning away from GC, it would more or less ruin things for the
guy talking.
@Araq Do you mean the slices have to be passed directly, without a local
binding?
@jzakiya Too bad there is no seq constructor from a raw pointer and size. This
way, you could just make a seq which is, in fact, a view of another seq.
Hypothetical example:
var s = @[3,1,4,1,5,9,2]
var v = ptrToSeq(s[2].addr, 3)
assert(v == @[4,1,5])
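Such a `ptrToSeq` can be approximated today, though only by copying (names as
in the hypothetical example above; this is a snapshot, not a true view):

```nim
# Copying stand-in for the hypothetical ptrToSeq: allocates a fresh seq
# and copies `len` elements starting at `p`.
proc ptrToSeq[T](p: ptr T, len: int): seq[T] =
  result = newSeq[T](len)
  if len > 0:
    copyMem(result[0].addr, p, len * sizeof(T))

var s = @[3, 1, 4, 1, 5, 9, 2]
let v = ptrToSeq(s[2].addr, 3)
doAssert v == @[4, 1, 5]
```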
@adrien79 Yes, that's how it works. There is no items for types, as far as I
know.
I don't remember who it was, but someone here on the forums complained about
indentation-based syntax as they preferred braces. The answer was to use
parentheses and semicolons, just like I did right now.
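For the record, a small sketch of that style (the proc is illustrative):

```nim
# A statement-list expression: statements separated by semicolons inside
# parentheses, with the last expression providing the value.
proc sum3(a, b, c: int): int = (var s = a; s += b; s += c; s)

doAssert sum3(1, 2, 3) == 6
```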
@mratsim As far as I know, you can do the same for Julia so it sounds like
cheating.
Seems like a bug to me but it can be hacked around quite easily:
const required_fields: array[0, tuple[f1: int, f2: string]] =
  static(var a: array[0, tuple[f1: int, f2: string]]; a)
@adrien79 Actually, if you tried to iterate over Points as you illustrated, it
would break too:
for pt in Points:
  do_sth(pt)
You should use explicit low and high:
for pt in Points.low .. Points.high:
  do_sth(pt)
What I mean is a vectorized structure-of-arrays for x, y, z (and possibly
others) for a set of particles. They should be ordered according to their place
in the space grid. As I said, in numpy I can have an any-D numpy array and sort it,
no python lists involved. I imagine the same for tensors in
That is not true. Vtable pointers can be elements of a seq, just like any ptr
or ref type.
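A minimal sketch of that: refs with dynamic dispatch stored in a plain seq
(the shape types are illustrative):

```nim
import math

# Each ref carries its runtime type information, so `area` dispatches
# dynamically even though all elements share the static type Shape.
type
  Shape = ref object of RootObj
  Circle = ref object of Shape
    r: float
  Rect = ref object of Shape
    w, h: float

method area(s: Shape): float {.base.} = 0.0
method area(c: Circle): float = PI * c.r * c.r
method area(r: Rect): float = r.w * r.h

var shapes: seq[Shape] = @[]
shapes.add Circle(r: 1.0)
shapes.add Rect(w: 2.0, h: 3.0)

doAssert shapes[1].area == 6.0
```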
@mratsim Vtptrs are not ready yet, as I've heard? If they were, I guess they
would solve the problem (it is how it would probably be solved in Rust,
actually).
Let's say I want to do some operations on particles. They should be vectorized
(and maybe parallelized; some calculations could also benefit from GPU) and it
would be really nice if particles from the same space grid would be in the same
place in the sequence, as they will need to access the
@monster Is the number of different kinds of messages fixed? If it is, you can
use variant types.
@mratsim Oh, really, you don't know any example of an operation whose cost
depends on the values? Well, I easily know one: sorting.
@mratsim No, it's not. That's why I asked whether you use dynamic scheduling.
Imagine you have a sequence of 1, 2, 4, 8, ..., 1048576. Now, map it with an
operation with O(N) complexity, where N is the value of your number. If you use
static scheduling, it's entirely possible most of the work
@monster Why not use inheritance?
type
  ThreadID* = distinct uint8
  AnyMsg = object {.inheritable.}
    sender*: ThreadID
    receiver*: ThreadID
    previous: ptr[AnyMsg]
  Msg*[T] = object of AnyMsg
    content*: T
let space =
@Araq Happy to hear that!
Could you elaborate on the main thread being the only one able to create and
destroy the objects? It sounds quite restrictive so I'd like to hear
what your motivation and the general idea was.
nodejs package? Does it mean Nim is to be compiled on JS backend for Android or
am I wrong (please say I'm wrong)?
@dawkot To put it simply, what Araq says is: in the first case, the macro
operates directly on procA at compile time so it behaves as expected but in the
second case, it actually operates on p argument (which has no implementation as
it's not an actual procedure, therefore its implementation is
@Jehan The fact that an arbitrary string is ambiguous without a context is
probably the reason a context is passed as a separate parameter in Rust macros,
I guess. I sometimes miss that possibility in Nim; it would make some tricks
unnecessary and macros would be less magical.
@mratsim Would you mind if I make a reference to your lib in my bachelor thesis
about optimization?
I don't think it's possible, actually. By using parens, you force non-standard
operator (let's assume : could be an operator) precedence. Then, it's possible
for ? to just eat an untyped block, not caring too deeply about whether : is
really an operator or not. But without parens, things are
@woggioni Well, I guess Nim's philosophy is different here. If the let-binding
exists, it must have a value. But you're in the middle of describing the proc's
body so the value isn't ready yet. Why is it practical? Let's say you call a
macro from within your fibo's body. A macro on a recursive
Personally, I think I started with CBOT. Then some JavaScript and Lua but
nothing serious, really. Just simple scripts for a website and hacking some
Battle for Wesnoth's hidden functionality (I guess it was adding a new status
icon for units). Then I learned C as a part of my studies. The
@Lando Thank you for pointing it out. Too bad it's not documented, I tend to
avoid using undocumented stuff. ^^" Actually, I think I even used lineInfoObj
once like a year ago so I probably forgot about it.
Is it possible to generate a docs link based on NimSym? Or, even better, based
on an argument's type?
Here comes an example:
# a.nim
type Mock = object

# b.nim
import a

macro add2docs(sym: typed, docs: string): typed =
@Araq Oh, I should have bolded it: I mean Nim optimization and inlining, not
C's ones... as weird as it sounds. I mean, any pattern-template or
pattern-macro which matches a noSideEffect routine would match here, despite
the fact this routine actually has side effects. I guess whether it's good or not
@Araq That looks nasty! Wouldn't it confuse the compiler? Also --- it prevents
inlining, I guess?
I'm not sure it's possible, actually. You can retrieve t's initialization value
(!), using t.symbol.getImpl, which is useful when handling const, but I can't
see any way (or at least an obvious way) how to extract variable pragmas from a
variable symbol.
@mratsim Well, you mentioned game programming at first so "serious deep
learning" didn't come to my mind. ^^"
@mratsim Well, if you call GC_collect manually then it's much worse than manual
allocation and deallocation from memory pool, I guess.
Could you explain further what getOccupiedMem changes in the case we were
talking about (ref to ptr to memory external to thread heap)?
Call me a weirdo but I think a converter which could fail should return an
Option[T] instead of T itself. This way you can either unwrap the option
(works like an exception) or pass it somewhere first, e.g. into a container,
so that you could use unwrap inside a higher-order routine.
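The idea sketched with a plain proc (an actual `converter` would apply
implicitly; here the fallible conversion is explicit, and the name is
illustrative):

```nim
import options, strutils

# A fallible conversion returning Option[int] instead of raising:
# the failure can be stored, mapped over, or unwrapped later.
proc toIntOpt(s: string): Option[int] =
  try:
    some(parseInt(s))
  except ValueError:
    none(int)

doAssert toIntOpt("42").get == 42
doAssert toIntOpt("oops").isNone
```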
@Araq Maybe you don't need flexibility but please imagine using a it-template
inside of an it-template (e.g. an apply inside of an apply). Just like @olwi
said, => is better and I think it should be brought to standard Nim.
People only mention two reasons to use it-templates, correct me if I'm
@cdome That's funny, actually, I've used a container that seems to work just
like yours, despite the fact the use case was totally different. It was for an
evolutionary program.
@mratsim If I get it, you need at least 93 MB * 32 images = 11 GB 904 MB per
batch?
By the way: did you look at
Well, you said "today you can do" as general advice, so I assumed you were
trying to show us "how simple it is to do it by hand", not "how easy it is to
make sth only for internal usage".
About this object being a ref one --- no, please read my comment again. Long
story short: ref object can be
I don't think it's something you should actually do, as the compiler can do
certain optimizations when it knows a routine has no side effects.
Optimizations which may optimize out your side effects, I guess.
@jxy Well, I wanted to make a call optimization library, actually. Including
various recursion optimizations (I know there is memo but it provides only one
opt method and even my pull request to it was never accepted). Inlining is, of
course, one of the possible optimizations (so I guess it's
My version would not. The old code can't use parameters that didn't exist back
then, so you'll just have to add another split argument which works like
splitWhitespace does today and then make splitWhitespace an alias for some
split call (with a deprecation annotation) for backwards compatibility.
@mratsim I've already said it's a bad idea (even on many architectures on which
it's possible). And even if you don't really have a heap, it doesn't mean you
couldn't use dynamic memory if a language supports custom allocators. You can
provide a memory pool on a stack, that's what I actually
Why can't split behave like splitWhitespace? I guess it should, and just be
more general.
Well, there is a similar library in Fortran. If I recall, there is a function a
little similar to split (it's also an iterator). It separates the concepts of
separator characters and unmatched characters.
@Varriount It doesn't? I was pretty sure there was something called shared
heap... Well, whatever. I think GC is much better for functional languages;
RAII seems better for stateful ones.
Oh, by the way --- I didn't really try it, but I guess idiomatic Nim programs
could bring problems on some
@Araq
I probably mistook the general spirit of your answer for an answer to this
particular thing. Sorry then.
Well, I'm puzzled about this particular case (searching for float formatting)
too, as... well, I've already said this --- if you're looking for a single
specific function (which is,
@olwi
I'd say it looks like a bug. I always use split though.
@dom96
> Doc writing is boring
>
> Creating documentation PRs is the easiest thing in the world (but also the
> most boring)
It depends. I, for instance, quite like it. I care (although "deeply" may be a
big word) about my users so even if I have no time, energy or I'd like to move
to
@cdome Plus it's not that handy when used with objects, as far as I know?
Well, it certainly helps a lot when you find some code which uses some cool
routines you know nothing about. It helped me with pegs that way if I recall.
Although I must admit that I've already assumed it was something similar to
regexes as ~= was used and Perl also uses it.
@bpr Not in the core (or std, I guess, but that's more realistic). But I never
said I was talking about the core.
@rayman22201 I was referring to a GC library for Rust. I can see @mratsim
already mentioned one example (there are more, if my memory serves me well).
There are two main reasons for it