Re: Write Nim in Matlab/Julia style using macros while still deploy to Cloud/PC/GPU/embedded?
I am not sure about the current status of Arraymancer, but I believe it supports most of your requests. Neo, instead, supports only:

* slicing, concatenation, transposing... these sorts of array operations;
* broadcasting basic math functions to any array;
* linear algebra (e.g. matrix multiplication, solving linear equations);
* all of the above running on CPU only (with MKL and/or automated multi-threading, e.g. for large FFT/IFFT), plus running on GPU with minimal code change (well, some LAPACK things are CPU only for now).

The fact is that when I was developing Neo, @mratsim started Arraymancer and improved it a lot. Nowadays, Arraymancer is more advanced and faster: @mratsim has implemented Laser, Weave for multithreading and more, and I don't really see a point in adding many new features to Neo. It will surely be maintained, and what it does, it does decently, but I don't want to duplicate the great effort of @mratsim.
Re: How mature is async/threading in Nim?
> for new projects I wouldn't use anything else because the tooling is so much better

That would be great, but it requires an introduction that explains to users what ARC is, how to make use of it, how it impacts multithreading, the new `sink` and `lent` parameters, how to design collections and libraries without a GC, and much more.
I cannot understand ARC
I am trying to understand ARC, which seems to be promised as the future for Nim. That, in turn, redirects to [this document](https://nim-lang.org/docs/destructors.html), so I am reading that, but I cannot make heads or tails of the whole document.

It starts with a motivating example, which I cannot follow because I don't yet know the meaning of the procs declared there, like `=sink`. The example is not explained in English, so even if it purports to explain how to implement seqs as a library, I fail to see what the design behind it is, and what features make this possible with ARC that were missing before (or is it the new runtime? I am not sure what the difference between the two is).

It goes on by saying

> The memory management for Nim's standard string and seq types as well as other standard collections is performed via so called "Lifetime-tracking hooks" or "type-bound operators".

which is... weird. I mean, I would expect a high-level panoramic of what this document is even about. Why talk about memory management for strings and seqs? What about other things? At this point in the document I am still wondering about much higher-level things, like: will there still be `ptr` and `ref`? Are things garbage collected, or do I need to put in some annotations, like Rust, to track lifetimes?

It then goes on into hooks, starting with `=destroy`.

> Variables are destroyed via this hook when they go out of scope or when the routine they were declared in is about to return.

Fine? I guess... How do things survive out of a scope? I am still lacking a high-level description of what we are even trying to do. Then there is this example

```nim
proc `=destroy`(x: var T) =
  # first check if 'x' was moved to somewhere else:
  if x.field != nil:
    freeResource(x.field)
    x.field = nil
```

First check if x was moved somewhere else? What is a move? Isn't it explained **later** in the document? Then we go into `=sink`.
> A =sink hook moves an object around, the resources are stolen from the source and passed to the destination. It is ensured that source's destructor does not free the resources afterwards by setting the object to its default value (the value the object's state started in)

I cannot make any sense of the above sentence. And so on. For me, this is completely unreadable. I cannot tell

* what we are trying to achieve
* whether this document describes the new runtime or ARC (or both)
* what the general strategy is that we would like to implement
* how the new features will help writing better collections
* how to design things that intentionally should share storage

and so on. If we look at, say, the [chapter in Rust's book](https://doc.rust-lang.org/book/ch04-00-understanding-ownership.html) about ownership, it describes which rules are in place, what they mean in simple examples, and how they help to track memory. I understand that ARC is still in development and I do not expect this level of polish in the documentation, but if library authors should port their libraries to work with ARC, there needs to be a minimum level of explanation of how this will work. Are there any other resources to understand the way this will work and impact Nim developers?
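For what it's worth, here is my best guess at what the quoted sentence about `=sink` means, written as a toy resource type. This is entirely my own speculation with invented names (`Handle`, `freeResource`), so it may well be wrong:

```nim
type Handle = object
  fd: int   # 0 plays the role of "no resource" (the default value)

proc freeResource(fd: int) = discard  # stand-in for a real cleanup call

proc `=destroy`(h: var Handle) =
  if h.fd != 0:        # skip handles whose resource was moved away
    freeResource(h.fd)
    h.fd = 0

proc `=sink`(dest: var Handle; src: Handle) =
  `=destroy`(dest)     # release whatever dest was holding
  dest.fd = src.fd     # "steal" the resource from src
  # the compiler resets src to its default value, so src's destructor
  # later finds fd == 0 and frees nothing: no double free

proc take(h: sink Handle): int = h.fd

var a = Handle(fd: 5)
echo take(a)   # `a` is moved into `take`: no copy, no double free
```

If this reading is right, "setting the object to its default value" is what prevents the source's destructor from freeing the stolen resource a second time.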
Re: Nim 1.2 is here
By the way, if Nim 1.2 requires a specific version of OpenSSL, this should be mentioned in [https://nim-lang.org/blog/2020/04/03/version-120-released.html](https://nim-lang.org/blog/2020/04/03/version-120-released.html)
Re: Nim 1.2 is here
Isn't OpenSSL a system library? I am not sure how to update it without breaking a lot of things. I am using macOS Sierra, which is probably why I am having trouble.
Re: Nim 1.2 is here
I am now getting the error `could not import: X509_check_host` every time I invoke nimble :-(
Re: Will void be unified with empty tuple?
There's this [https://github.com/nim-lang/Nim/issues/7370](https://github.com/nim-lang/Nim/issues/7370)
Re: Code cleanup suggestions: ctors
I think it would be better to give your objects IDs and use those as keys to the table. Pointers, by design, can be moved.
Re: private type in public API is allowed
There's nothing wrong with this in my opinion. It just means that `T` is an opaque type. You can get it from exported functions, and pass it to other exported functions, but you can't create one yourself or manipulate one without using one of the functions in the module that exports it. I would even say that this is a useful pattern. For instance the following works:

```nim
# dep.nim
type T = object
  x: int

proc makeT*(x: int): T = T(x: x)
proc printT*(t: T) = echo t.x
```

and then

```nim
import dep

let t = makeT(12)
printT(t)
```

but the following does not compile

```nim
import dep

let t = T(x: 12)
```
Re: reader macro
Do you produce your AST at runtime? Macros run at compile time, so that would be too late. It sounds like you are trying to use Nim macros like a JIT, if I understand correctly.
Re: How to print output from two echo in a single line?
Are you sure you used `write` and not `writeLine`? `write` does exactly that, without adding newlines: [https://play.nim-lang.org/#ix=2bvn](https://play.nim-lang.org/#ix=2bvn)
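A minimal illustration (my own example):

```nim
# `write` prints its arguments verbatim and never appends a newline,
# so consecutive calls end up on the same line:
write(stdout, "2 + 2 = ")
write(stdout, 4)       # write accepts non-string arguments too
write(stdout, "\n")    # a newline appears only if you print one yourself
# output: 2 + 2 = 4
```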
Re: How to print output from two echo in a single line?
`write(stdout, someString)`
Re: Compile time FFI
One thing to keep in mind is that having FFI in the VM allows it to also work in the `nim secret` interpreter. This ensures that one can try out more things interactively, such as anything that needs `times`, for instance.
Re: ELI5: newruntime and arc
> there is no reason to despair and stop developing Nim libraries.

Thank you, I assure you that I do not despair, but I feel a little lost with all the recent changes :-)
Re: ELI5: newruntime and arc
The issue for me is that posts about `--gc:arc` still reference concepts like `sink` and `lent` parameters. These are described in [the post about destructors](https://github.com/nim-lang/Nim/blob/devel/doc/destructors.rst#nim-destructors-and-move-semantics), but, well, that document is not very understandable. Also, it is not clear whether the two mechanisms are going to coexist or `--gc:arc` will be a substitute for `--newruntime` (which then won't ever be enabled). All of this is complicated by the fact that - as far as I understand - `--gc:arc` will not support exceptions as they are today, thus introducing other changes.

In short: I am lost. I don't know which mechanism will be used in place of a GC. I don't know if that requires me, as a library author, to introduce `sink` or `lent` parameters, which I still do not understand. (Related to this, I have stopped writing Nim libraries, because I am unsure about its direction.) I don't even know if existing libraries are supposed to be working with `--gc:arc` or `--newruntime`, so I don't submit issues about them. I think we really need an ELI5 of these mechanisms, and this thread, until now, does not explain much.

Let me give some examples of what I do not understand in the destructors document. For instance, let's look at sink parameters.

> To move a variable into a collection usually sink parameters are involved.

Ok-ish. Let us see what this entails.

> A location that is passed to a sink parameter should not be used afterwards. This is ensured by a static analysis over a control flow graph.

Not sure what a location is, but if the compiler ensures it, seems fine.

> If it cannot be proven to be the last usage of the location, a copy is done instead and this copy is then passed to the sink parameter.

So the compiler does not ensure this, after all. Looks like I can introduce copies by accidentally using a location (variable?) after putting it into a collection?
Not sure if I am reading correctly

> A sink parameter may be consumed once in the proc's body but doesn't have to be consumed at all. The reason for this is that signatures like proc put(t: var Table; k: sink Key, v: sink Value) should be possible without any further overloads and put might not take ownership of k if k already exists in the table.

What is ownership? Is it defined elsewhere? It is not defined earlier in the document. As far as I know, Nim does not currently (v1) have a concept of ownership, so if it is relevant, it should be explained. I sort of understand the concept intuitively, and know a little Rust, but if Nim deviates from Rust (I assume it does) it should explain what it means for a procedure to take ownership of a parameter.

> The employed static analysis is limited and only concerned with local variables

So what happens for things that have a longer lifetime?

> however object and tuple fields are treated as separate entities

This sentence is supposed to be explained by the following code sample, but I do not understand what the code sample is meant to convey

```nim
proc consume(x: sink Obj) = discard "no implementation"

proc main =
  let tup = (Obj(), Obj())
  consume tup[0]
  # ok, only tup[0] was consumed, tup[1] is still alive:
  echo tup[1]
```

> Sometimes it is required to explicitly move a value into its final position:

Uh... sometimes when? Why should I explicitly move a value into its final position? The following code sample does not explain at all why one should explicitly use `system.move` here

```nim
proc main =
  var dest, src: array[10, string]
  # ...
  for i in 0..high(dest):
    dest[i] = move(src[i])
```

And so on, and so on. Sorry for the rant, but I think an ELI5 is really needed, even if the whole concept is still in flux. At least the final vision should be clear.
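To check my reading of the "last usage" rule, here is a toy example I put together myself (not from the document, so take it with a grain of salt):

```nim
proc consume(s: sink string) =
  # consume takes ownership of its argument
  echo s

proc main() =
  var a = "moved"
  consume a      # last use of `a`: the compiler can move it, no copy
  var b = "copied"
  consume b      # `b` is used again below, so a copy is passed instead
  echo b         # still valid: the sink call above did not steal it

main()
```

If I understand correctly, that is the whole meaning of "a copy is done instead": the program stays correct either way, you just silently pay for a copy when the analysis cannot prove last use.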
Re: Goto based exception handling
How can I validate if computing 2^500 is safe without, you know, asking the computer to compute 2^500 and dealing with the overflow?
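As far as I can tell, the only honest answer is a try/except around the computation itself, e.g. this sketch (which relies precisely on the overflow defect being catchable, the very thing the goto-based mode removes; on recent Nim the exception is called `OverflowDefect`, older versions call it `OverflowError`):

```nim
proc tryPow(base, exp: int): (bool, int) =
  ## Returns (true, base^exp), or (false, 0) if the computation overflows.
  try:
    var r = 1
    for _ in 1 .. exp:
      r = r * base        # raises OverflowDefect when it no longer fits
    (true, r)
  except OverflowDefect:
    (false, 0)

echo tryPow(2, 10)    # (true, 1024)
echo tryPow(2, 500)   # overflows: a calculator can now say "number too large"
```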
Re: How to manage local dependencies with nimble?
Or even when compiling - just use `nimble c` in place of `nim c` and so on. Even better, write a nimble task and call that
Re: Goto based exception handling
> I'm not sure what would be a reasonable way to handle an out of bounds exceptions or an overflow even if it were catchable. It's not like a file not found where we can expect user interaction behind.

For an HTTP server: return an error 500 and go on processing other requests (Araq's point still stands). For anything with user interaction: would you prefer a calculator to display "number too large", or rather crash, when you enter "2^2^500"? I really don't see why crashing can be an acceptable way to deal with errors.
Re: Goto based exception handling
> In the "goto based exceptions" mode checked runtime errors like "Index out of bounds" or integer overflows are not catchable and terminate the process.

Hard to love the new exceptions if one cannot defensively protect against out-of-bounds errors and risks seeing their process terminated... :-(
Re: Nim is the friendliest language to start
I actually find these error messages very very helpful. Sure, sometimes they are a little verbose, but it is nice to see all existing overloads with the corresponding mismatch
Re: Practical examples showing how macros lead to better code?
[https://github.com/unicredit/csvtools](https://github.com/unicredit/csvtools) relies on type definitions to automatically generate a CSV parser.

[https://github.com/andreaferretti/memo](https://github.com/andreaferretti/memo) defines a `memoized` macro that performs memoization of a function. Memoization is easy to do for non-recursive functions, but doing it for recursive functions requires changing all self-calls into calls to the generated memoized version, hence a macro.

[This macro](https://github.com/unicredit/neo/blob/master/neo/private/neocommon.nim#L64) makes it possible to perform FFI calls using the Fortran ABI instead of the C one, as used [here](https://github.com/unicredit/neo/blob/master/neo/dense.nim#L1086) for example. Admittedly this example does not require a macro, but using a macro can automate an otherwise tedious and repetitive pattern.
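For concreteness, using the memo macro looks roughly like this (sketched from memory, so check the package README for the authoritative API):

```nim
import memo   # nimble install memo

# The macro rewrites `fib` so that self-calls go through a cache:
proc fib(n: int): int {.memoized.} =
  if n < 2: n
  else: fib(n - 1) + fib(n - 2)

echo fib(90)  # near-instant, while the naive version would take ages
```

Writing this by hand for every recursive function would mean duplicating the function and redirecting every self-call, which is exactly the tedium the macro removes.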
Re: Disabling unused import warning locally
Great, thank you
Disabling unused import warning locally
I frequently write test suites in different files. Then I have a main test file which imports all suites. One example from [emmy](https://github.com/unicredit/emmy) looks like this:

```nim
import tstructures, tpairs, toperations, ttableops, tmodular, tfractions,
  tpolynomials, tlinear, tprimality, tintegers_modulo, tfinite_fields,
  tnormal_forms
```

The problem is that I get a lot of warnings like

```
/Users/andrea/progetti/emmy/tests/all.nim(16, 8) Warning: imported and not used: 'tstructures' [UnusedImport]
/Users/andrea/progetti/emmy/tests/all.nim(16, 21) Warning: imported and not used: 'tpairs' [UnusedImport]
/Users/andrea/progetti/emmy/tests/all.nim(16, 29) Warning: imported and not used: 'toperations' [UnusedImport]
/Users/andrea/progetti/emmy/tests/all.nim(16, 42) Warning: imported and not used: 'ttableops' [UnusedImport]
/Users/andrea/progetti/emmy/tests/all.nim(16, 53) Warning: imported and not used: 'tmodular' [UnusedImport]
/Users/andrea/progetti/emmy/tests/all.nim(16, 63) Warning: imported and not used: 'tfractions' [UnusedImport]
/Users/andrea/progetti/emmy/tests/all.nim(17, 3) Warning: imported and not used: 'tpolynomials' [UnusedImport]
/Users/andrea/progetti/emmy/tests/all.nim(17, 17) Warning: imported and not used: 'tlinear' [UnusedImport]
/Users/andrea/progetti/emmy/tests/all.nim(17, 26) Warning: imported and not used: 'tprimality' [UnusedImport]
/Users/andrea/progetti/emmy/tests/all.nim(17, 38) Warning: imported and not used: 'tintegers_modulo' [UnusedImport]
/Users/andrea/progetti/emmy/tests/all.nim(17, 56) Warning: imported and not used: 'tfinite_fields' [UnusedImport]
/Users/andrea/progetti/emmy/tests/all.nim(18, 3) Warning: imported and not used: 'tnormal_forms' [UnusedImport]
```

I tried to locally disable the warning with this small modification

```nim
{.warning[UnusedImport]: off.}
import tstructures, tpairs, toperations, ttableops, tmodular, tfractions,
  tpolynomials, tlinear, tprimality, tintegers_modulo, tfinite_fields,
  tnormal_forms
{.warning[UnusedImport]: on.}
```

but apparently it has no effect. What is the correct incantation to persuade the compiler not to give warnings for these imports?
Re: CSources are gone - How to bootstrap?
> They are not gone. They are frozen Good to know, I had assumed the repo was frozen because building csources was deprecated
Re: Repeated templates don't work anymore - alternatives?
I am wondering as well... maybe it didn't? :-D In any case, I have updated memo per @mratsim's suggestion, it seems the best approach for now
CSources are gone - How to bootstrap?
I used to rely on CSources to bootstrap Nim, but I just realized that [the repository](https://github.com/nim-lang/csources) has been archived. I have a custom script called [MyNim](https://gist.github.com/andreaferretti/d40af6a276fb3275d97d0f9585d8197e) to manage Nim versions, as well as using something like this on Travis on all repositories

```yaml
language: c
compiler:
  - gcc
before_install:
  # Install nim
  - git clone -b devel git://github.com/nim-lang/Nim.git --depth 1
  - cd Nim
  - git clone -b devel --depth 1 git://github.com/nim-lang/csources
  - cd csources && sh build.sh
  - cd ..
  - bin/nim c koch
  - ./koch boot -d:release
  - export PATH=$PWD/bin:$PATH
  - cd ..
script:
  - nim c --run test
```

What is the alternative now? What is the current recommended way to bootstrap?

(Before someone mentions Choosenim, I don't like it for a few reasons:

* it doesn't support updating without rebuilding everything from scratch [https://github.com/dom96/choosenim/issues/12](https://github.com/dom96/choosenim/issues/12)
* it doesn't use symlinks (I don't remember exactly, but this used to cause me issues)
* it does not allow removing old versions [https://github.com/dom96/choosenim/issues/123](https://github.com/dom96/choosenim/issues/123)

whereas MyNim supports all of this.)
Re: Repeated templates don't work anymore - alternatives?
Yeah, I agree it works, and this is probably the solution I am going to take. I just don't much like the fact that I am generating a function whose name never appears anywhere, but I don't see any better solution.
Re: Repeated templates don't work anymore - alternatives?
@cdome: the problem with using gensym is that users have to be able to call this function, which they could not do with a generated name. @mratsim: your solution should work. It is not really optimal to generate a function with a magic name such as `resetCacheFib()`, but if there are no other solutions, I am going for it, thank you!
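For the record, the generated code then has roughly this shape (my own sketch of the idea, with readable names in place of the gensymmed ones):

```nim
import tables

var fibCache = initTable[int, int]()

proc fibImpl(n: int): int   # forward declaration for the original body

proc fib(n: int): int =
  if n notin fibCache:
    fibCache[n] = fibImpl(n)
  fibCache[n]

proc fibImpl(n: int): int =
  # the original body, with self-calls going through the cache
  if n < 2: n
  else: fib(n - 1) + fib(n - 2)

# one uniquely named reset proc per memoized function, in place of
# several clashing `resetCache` templates
proc resetCacheFib*() =
  fibCache.clear()

echo fib(40)  # 102334155
resetCacheFib()
```

The per-function name is ugly, but unlike the repeated templates it is legal Nim, and users can still call it directly.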
Repeated templates don't work anymore - alternatives?
In Nim 1.0, a fix was made to disallow declaring a template with an untyped parameter more than once. That is, this is now disallowed

```nim
template foo(x: untyped) = discard
template foo(x: untyped) = discard
```

This is correct, and one may wonder why it was allowed before. Unfortunately, it turns out I had used this pattern in [memo](https://github.com/andreaferretti/memo), inadvertently, as it was generated by a macro. Or maybe it comes from some PR I accepted, I don't recall exactly. But it turns out that memo generates code like this

```nim
template resetCache(x: untyped) =
  when x == someFunction:
    # do stuff

template resetCache(x: untyped) =
  when x == someOtherFunction:
    # do stuff

template resetCache(x: untyped) =
  when x == yetAnotherFunction:
    # do stuff
```

As you can see, there is a `when` guard, which guarantees that at most one of the templates will apply for a given function. But still, the generated code does not make sense, and does not compile. The intention here is to memoize functions - that is, store computed values in a hash table - and be able to reset that hash table when needed. In other words, the declaration

```nim
proc f(x: int): int {.memoized.} =
  # body here
```

is translated into something like

```nim
var table12345 = initTable[int, int]()

proc f12345(x: int): int =
  # body here

proc f(x: int): int =
  if x notin table12345:
    table12345[x] = f12345(x)
  return table12345[x]

template resetCache(x: untyped) =
  when x == f:
    table12345 = initTable[int, int]()
```

where the names `table12345` and `f12345` are gensymmed. What could be a way to prevent the name clash? I could collect all associations function -> cache at compile time, but then I need some way to generate a single `resetCache` template before it is used. Right now, memo does not compile, and I am trying to fix it as I have been updating all my libraries for Nim 1.0 compatibility.
Re: Library for linear algebra
Hi Royi, as I have written in the README, [linear-algebra](https://github.com/unicredit/linear-algebra) is now discontinued in favor of its sequel [neo](https://github.com/unicredit/neo). But be sure to also check [Arraymancer](https://github.com/mratsim/Arraymancer), which is much more advanced and optimized (but has a different API, based on tensors).
Re: Call to all nimble package authors
@bobd

> There's one package, for example, that I was planning to use that gets a 2 for code quality

That's precisely my point! You are making a decision based on a cursory assessment by a person who has looked at hundreds of packages in a month. You should **not** reconsider using that package just because its code quality is listed as 2! If you, and other people, do that, the author of that package may lose motivation, essentially for no reason at all. Moreover, say the author uses this as an opportunity to improve the code quality (which may already be high!): should they update packages.json to signal that they have made improvements? It feels awkward. I don't know of any other platform that gives votes to packages in this way.

An example from my own packages: I know for a fact that - say - the code quality of Neo is much higher than that of CSVtools or memo. This is because I have written them. But judging from the spreadsheet you may think that it is a good idea to use CSVtools and a bad idea to use Neo. My only package that gets a 4 on code quality is Alea. While that is not especially bad, I have no clue why that should get a 4 while others get a 2. From what I can tell on my own packages, these votes are essentially arbitrary.
Re: Call to all nimble package authors
I agree. It can be arbitrary and debatable to attach a number such as code quality to a project. Here is the criterion for code quality:

1. Poor code quality. No structure. No comments in code. No nimble support.
2. Code is structured. Some comments in code. Nimble enabled.
3. Code is well-structured. Code is commented. Tested and run on a single platform. Code examples.
4. Code is well-structured. Code is commented. Test sets. Tested and run on multiple platforms. Multiple examples provided.

Let me take as an example my own project [neo](https://github.com/unicredit/neo), which is rated with a 2 on the provided sheet. It is kind of demoralizing to see this number after all the time I dedicated to the project. I don't know if the code is well-structured (I find it easy to maintain, others may disagree) and it is certainly not commented. On the other hand, it has test sets, it is tested and run on multiple platforms, and it has multiple examples provided (shouldn't these last points affect docs quality?). On the other hand, another project of mine is [csvtools](https://github.com/unicredit/csvtools), which was more like a couple-of-days project, yet it has code quality 3 (is it because it has comments? It definitely has less structure and fewer tests). I don't take this personally; I know you had to rate a lot of projects on a lot of aspects, and maybe a 2 is deserved here, and I don't really give it much weight. But giving official votes to the quality of various aspects of projects can carry a lot of negative feelings.
Re: Rosencrantz: a DSL to write web servers
I don't know, I have never mixed middleware from different libraries even using Python - I always used Django, or Flask, or Bottle on their own. The existence of a basic HTTP server in the standard library can be used as a common ground on which one can develop frameworks (this is what Rosencrantz does), but then there are standalone projects such as [HTTPBeast](https://github.com/dom96/httpbeast) or [Mofuw](https://github.com/2vg/mofuw) (now archived). One thing I wanted to do is to abstract the notion of handler in order to be able to support both the stdlib server AND HTTPBeast, but I could not find the time for this yet.
Re: Rosencrantz: a DSL to write web servers
Since handlers are nested, you should be able to see outer variables in the inner handlers. For instance something like

```nim
scope do:
  let user = getUserData()
  return someHandler[
    someOtherHandler[
      ok(user)
    ]
  ]
```

I guess one could abstract some pattern here: if you find yourself repeating some logic in such situations, please file a PR with some generic handler for user authentication. Another possibility would be to make the context generic (`Context[A]`): this would allow attaching arbitrary data to the context. I am not sure whether this is worth the extra complexity, but it is something to consider.
Re: how to integrate existing react components in karax?
I published [bindings](https://github.com/andreaferretti/react.nim) to actually use React in frontend Nim applications. I can't say it is very actively maintained, but I accept PRs.
Re: State of Nimble packages
Wow, @spip, you did a great job! As you mention, it will be complex to keep this up to date, but just having this information as of today is a great start!
Re: Fortran bindings
I use some Fortran bindings to the LAPACK library in [neo](https://github.com/unicredit/neo/blob/master/neo/dense.nim#L1086). Here I make use of a [fortran macro](https://github.com/unicredit/neo/blob/master/neo/private/neocommon.nim#L49-L68) that abstracts for me the task of passing everything by address etc.
Re: Nim vs V language
That's sad :-( But it shows the power of advertising...
Re: Nim's future: GC and the newruntime
@cdome It can be implemented with 100 lines of code, but can it be implemented **transparently** for users, or will GC users have to write more convoluted code than today? Moreover, can **deferred** RC be implemented with 100 lines of code (I guess not, one has to distinguish between local variables and other references)? If not, your proposed library GC would be less efficient than today. Finally, will mark and sweep also be implementable as a library? Again, I guess not, even though it is generally the most efficient of the available GC algorithms in Nim. Things are not as easy as you picture them
Re: What prevents you from using Nim as your main programming language?
I see that different people really have different priorities. :-) For instance, the standard library, documentation and tooling have been really good enough for me for a long time, but I keep seeing people complain about those.
Re: What prevents you from using Nim as your main programming language?
> That's all there is to it, the important questions are:
> Is this annoying to use in practice?
> Does it produce catastrophic crashes in practice long after the code has been tested extensively?

Agreed.

> Come on, that's a terrible argument.

Well, sort of. Things always evolve, but work on Scala 3 started when Scala was at version 2.11, not when it was at 2.0RC. If that had been the case, people would have skipped the Scala 2 release line altogether.
Re: What prevents you from using Nim as your main programming language?
> seem to be implying that the GC is going to be completely removed for 1.0

Sorry, I am not implying this at all. I am just saying that 1.0 will be released with a mechanism for memory management (GC) which is already in the process of being replaced. This is bad for two reasons:

* people will not invest in this language, since they know they are buying something that is somehow on the verge of obsolescence. For how long will the GC be available once the new runtime is deemed stable?
* the standard library, which should be providing examples of good code, will be littered with annotations that are for the new runtime and ignored by the current runtime. People will read it and be puzzled.

> By this reasoning, the only language you should use are languages that have been formally verified like Ada Spark, Idris, or APL

Sorry again, I did not explain myself. I did not mean formal verification as in a computer verifying the correctness of code. I was talking about proving as in people doing the proofs. For instance, you mention how Haskell is based on many research articles. Well, most of them come with a (human) proof of correctness of the mechanism they propose. Many GC algorithms have also been proved correct; it's not something especially difficult. But the way the new runtime is developing is people finding new examples where the proposed mechanism leaves dangling pointers, or frees memory too early, and so on. This is good, but it should be a phase of experimentation - even on paper. Then, when one is sure that the mechanism is sound (does not leave dangling pointers and so on) it should not be too difficult to prove it correct (on paper again, I am not talking about writing an interpreter in Coq).
Re: What prevents you from using Nim as your main programming language?
I have used Nim for many small experiments, but the main reason why I am not using Nim as my main programming language is that it is not popular enough. I know this is circular, but where I work I am the Nim person. If anything should go wrong on a project where Nim fails to deliver on some point, the burden of having chosen it lies on me. If a project goes well, I am bound to maintain it indefinitely because I am the local Nim expert. All this means that using Nim in anger is too risky for me now, although I like to use it for experiments of many kinds.

Other than that, I am worried about the new runtime. Starting to plan the move to a new runtime just before 1.0 is... well, not a good sign. I also don't especially like that this new runtime requires a complex set of lifetime annotations (sink, move, owned...), which makes code less understandable, while before that Nim could be considered a benchmark of readability. Finally, the new runtime does not seem to be based on sound research on formal type systems (there's just Bacon and Dingle, but it seems an abandoned approach) and new special cases that evade the analyses done so far keep popping up. I think if one wants to really follow such an approach, it must be tried on paper and **proved correct** with an actual demonstration before jumping to the implementation.
Re: cannot evaluate at compile time: i
The fact is that `entry` is a tuple, hence the type of `entry[i]` depends on `i`: for `i < 5` it is `float`, while for `i == 5` it is `string`. The type checker, to perform its work, must be able to evaluate `i` at compile time. If, say, `i` were a constant, what you did would be ok. The compiler lacks the sophistication to infer that, since `i` is 0, 1, 2, 3 or 4, the type of `entry[i]` is `float` in any case.
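A toy example of the distinction (my own; the `fields` iterator from the system module is unrolled at compile time, which is what makes the last loop legal):

```nim
let entry = (1.0, 2.5, 4.0, 8.5, 16.0, "label")

const j = 2        # a constant index is known at compile time
echo entry[j]      # fine: the type of entry[j] is known to be float

# var i = 2
# echo entry[i]    # does not compile: i cannot be evaluated at compile time

# To visit all float fields, unroll the loop at compile time instead:
for f in entry.fields:
  when f is float:   # decided per field, at compile time
    echo f
```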
Re: Generic methods are deprecated?
> concepts were deemed too experimental yet to build another key feature on top

Uh, that's sad; it would have been nice to have static and runtime polymorphism coexist nicely within the same mechanism.
Re: Future of Nim ?
> do you have any links providing examples on how to do it in Nim?

I am not sure what you are asking. It is an optimization, and as such it should be invisible in frontend (i.e. Nim) code. It is the task of a compiler backend to implement it, not something you do by hand. Or maybe I just misunderstood the question?
Re: Future of Nim ?
> even lowly shr is still in process of being changed to be more in line with other languages (a good thing IMO)

which completely broke [cello](https://github.com/unicredit/cello) :-(

> the latest memory management model eliminating GC is huge and excellent!

We will have to see; it may very well alienate the already small pool of Nim users.

> if much simpler languages such as Elm can automatically turn recursive "procs" into loops, surely Nim could too

It is easy to do TCO for self-recursive functions. It is essentially not doable in general, unless one controls the generated assembly (you have to omit a stack frame). This is why Nim relies on GCC and Clang to do it. For comparison, Scala is in the same position, since it lives on the JVM, and in fact Scala only supports TCO for self-recursive functions.

> A cleaner syntax for Algebraic Data Types... this must also be supported in pattern matching

Not ideal, but this is doable with macros, see [patty](https://github.com/andreaferretti/patty) and [gara](https://github.com/alehander42/gara).

> currying and the related partial function application

The issue here is not varargs (well, it may be, but most functions do not use varargs anyway). The thing is that partial application turns any function into a closure over the partially applied arguments, with the obvious cost. If you really want it, this is again doable with macros, see [currying](https://github.com/t8m8/currying).
Re: Owned refs
Thank you for the explanation. I am still curious, though, since it is not difficult to create a variant of the problem where one actually needs to keep a thing in two different collections. For instance, say I am designing a cache where I keep objects of type `ref T`. I want to be able to evict the cache smartly, so I keep two collections:

* `accessTimes: Table[ref T, int]`, which keeps, for each object, the last time it was accessed
* `frequencies: Table[ref T, int]`, which keeps, for each object, the number of times it was accessed

I can query these two structures to decide that some object is needed rarely, and not recently, hence I can free some associated resource, say close some files or handles. The problem is again the same: how can I do that if only one of the collections can own `ref T`?
Re: Owned refs
I have read the new spec, and I have one more question. With the new model, most collections in the stdlib will be written to use sink parameters and take ownership of the objects that they contain. Many algorithms require keeping the same object in more than one collection at a time. For example, say I have a `seq[T]` and want to split it according to whether an item appears an even or an odd number of times. In the current Nim language, I would write it like this

```nim
import tables

proc split[T](things: seq[T]): tuple[even, odd: seq[T]] =
  var counter = newCountTable[T]()
  for thing in things:
    counter.inc(thing)
  for thing, count in counter:
    if count mod 2 == 0:
      result.even.add(thing)
    else:
      result.odd.add(thing)
```

How would this work with the new runtime when T is a ref type? I need `thing` to be at the same time in the original sequence and among the keys of the count table. Next, I need to move all these items from the count table to their destination: do I need some form of iteration that consumes the items in the table? Even if this is doable with the new runtime: will this be doable in a generic fashion, as above, or will I need two different implementations, one for ref types and one for non-ref types?
Re: Owned refs
@cdome There is a point in discussing this now, otherwise we will end up with something unsatisfactory in 2020. The post by @GordonBGood - while having problems with the benchmark, as pointed out by @Jehan - discusses some shortcomings of the new runtime that are worth debating, in my view. The issues that I see are:

* supporting both runtimes will make the stdlib more complex, more brittle, and less understandable
* the new runtime does not seem to bring many advantages with respect to multithreading, which is the current biggest limitation
* not everyone comes from a C/C++/Rust background. Most programmers are used to having a GC, and an alternative mechanism makes a **huge** impact on how Nim is perceived

Apart from this, I have a question for @Jehan: you say

> The main cost of naive reference counting comes from assigning pointers to local variables (including argument passing)

Isn't this problem solved by having **deferred** reference counting?
Re: Rosencrantz is Routing DELETE Method not working?
Try writing rosencrantz.delete explicitly; I think there may be an issue because the function delete is overloaded
Re: Is there way to change «proc» naming?
> it sounds for me like proc[urrent] not like proc[essor]

Actually, it is short for proc[edure]. What would "procurrent" even mean?
Re: Not-so-distinct types
Type1 and Type2 are not distinct types, they are just aliases. You are declaring a different name for the same type uint32; there is no subtyping here
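A minimal sketch of the difference (the names `Meters` and `Seconds` are just illustrative): aliases are interchangeable with the base type, while `distinct` creates a genuinely new type that requires explicit conversions.

```nim
type
  Type1 = uint32
  Type2 = uint32            # alias: just another name for uint32

var a: Type1 = 5
var b: Type2 = a            # fine: both are uint32 under the hood

type
  Meters = distinct uint32  # a genuinely new, incompatible type
  Seconds = distinct uint32

var m = Meters(5)
# var s: Seconds = m        # compile error: Meters is not Seconds
var n = uint32(m) + 1       # conversions to/from the base type are explicit
echo b, " ", n
```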
Re: "Nim needs better documentation" - share your thoughts
https://github.com/wicast/nim-docset
Re: [some offtopic] 33 = (8866128975287528)^3+(-8778405442862239)^3+(-2736111468807040)^3
Well, in any case a trivial implementation will have no chance to succeed whatsoever, be it in Nim, C or assembly. If you want to replicate the result, you should at the very least use the methods described in the paper [https://people.maths.bris.ac.uk/~maarb/papers/cubesv1.pdf](https://people.maths.bris.ac.uk/~maarb/papers/cubesv1.pdf) Even then, consider the following quote from the paper:

> The total computation used approximately 15 core-years over three weeks of real time.

:-)
Re: Immutability -- more ideas
Yes, please! The weird semantics for immutability is my number one issue with Nim! :-)
Re: Legal Threats In Nimble Packages
> Limiting the official Nimble module list to just the code with acceptable licenses.

Please. You haven't contributed a single library to Nim's ecosystem; I made 18 (including wrappers), all under Apache2. But for some reason, you keep talking as if this was a threat to the purity of Nim's ecosystem. Yes, two people chose to use a weird license, which is of dubious legal standing and probably not enforceable. Big deal
Re: Buggy concepts
The situation is not as dire as described, I use them quite extensively in [emmy](https://github.com/unicredit/emmy) Still, I find some issues with them, they are not completely stable right now
Re: Some nim builtin libs not doing enough error checking?
Well, if there is no directory, then there are no files inside it to iterate on. Makes sense to me, I would not want an exception there
Re: "Nim needs better documentation" - share your thoughts
I generally find Nim's documentation ok, so it is not my top priority. That said, the areas that are not well documented are:

* The various modes of GC: which ones are available? How do they differ? Which external requirements do they have (Boehm, the Go libs, etc.)? Especially, it is not clear how the Nim **language** is affected by changing GC, e.g.: with Boehm there is a single heap; what does this imply for multithreading? Some constructs require the Go GC; what happens with different GCs? What if you call GC_ref/GC_unref with mark and sweep? What about compiling to JS? If we use regions, what happens with constructs that are not amenable to escape analysis?
* Multithreading, especially `parallel`. For all I can tell, `parallel` simply does not work. `spawn` works, but it relies on a threadpool which is easy to deadlock when using channels. Channels are advertised only for use with bare threads, but then when using `spawn` there is no form of communication. Basically, the threading model is not clear.
* How to integrate NimScript in Nim applications
Re: interesting exercise in Nim metaprogramming: Clojure-inspired data
I am not sure about the rationale for this design. Why don't you just export

```nim
type PostalCode* = distinct string
```

and then use this type elsewhere? In this way, you can leverage Nim's type system. I am not really familiar with Clojure, but it looks like this design tries to replace what a type system would do for free. Can you share some more examples and motivation to understand what you are getting at?
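To illustrate what the type system gives you for free, here is a small sketch (the `deliver` proc is hypothetical, just for illustration): a `distinct string` cannot be confused with a plain string at call sites.

```nim
type PostalCode = distinct string

# Borrow `$` from string so PostalCode values can be printed
proc `$`(p: PostalCode): string {.borrow.}

# Hypothetical consumer: only accepts validated postal codes
proc deliver(code: PostalCode): string =
  "delivering to " & $code

echo deliver(PostalCode("10115"))
# echo deliver("10115")  # compile error: a string is not a PostalCode
```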
Re: Should we get rid of style insensitivity?
Constraining types and variables to have different namespaces would be a solution. The only issue with that is that sometimes types can be used as values (typedesc) and in that case there could be a clash (unlikely, but still).
Re: Should we get rid of style insensitivity?
@preempalver It used to be the case that the first letter was also insensitive. That was changed because many people - me included - like to do something like

```nim
let person: Person = ...
```
Re: Heterogen lists
Great if that works for you! In any case, the constraint on types comes from the fact that the only way to construct an HCons is with the cons function (because the h and t fields are private) and the only two exported overloads only allow valid constructions
Re: Heterogen lists
Uh? I just proposed a solution that implements HLists. Is there anything missing?
Re: Heterogen lists
In any case, I think that in Nim a more idiomatic solution would be to use tuples instead, which are of course isomorphic. One cannot use recursion with tuples, but that can be easily circumvented with macros.
Re: Heterogen lists
Yes, since the list is heterogeneous, the type changes according to the various parameters. It is very similar to a tuple - as explained in the blog post I linked - but it behaves like a list in certain respects and this helps in writing generic procedures that would otherwise require a macro
Re: Heterogen lists
@alehander42 What @trtt is trying to implement is called an HList, and is a common construction in, say, Haskell or Scala. This is an [example of implementation in Scala](https://apocalisp.wordpress.com/2010/07/06/type-level-programming-in-scala-part-6a-heterogeneous-list%C2%A0basics/), this is [the actual implementation](https://github.com/milessabin/shapeless/blob/master/core/src/main/scala/shapeless/hlists.scala) in a popular library, and this is [a blog post](http://rnduja.github.io/2016/01/19/a_shapeless_primer/#hlists-and-product-types) introducing the feature. I copy the basic implementation here as an example

```scala
sealed trait HList

final case class HCons[H, T <: HList](head : H, tail : T) extends HList {
  def ::[T](v : T) = HCons(v, this)
}

sealed class HNil extends HList {
  def ::[T](v : T) = HCons(v, this)
}

// aliases for building HList types and for pattern matching
object HList {
  type ::[H, T <: HList] = HCons
  val :: = HCons
}
```
Re: Heterogen lists
@trtt Here it is

```nim
# hlist.nim
type
  HNil* = object
  HCons*[H, T] = ref object
    h: H
    t: T
  HList = HNil or HCons

let hNil* = HNil()

proc cons*[H; T: HList](hd: H; tl: T): auto =
  HCons[H, T](h: hd, t: tl)

template `<>`*(hd, tl: untyped): untyped = cons(hd, tl)

proc head*[H, T](c: HCons[H, T]): H {.inline.} = c.h
proc tail*[H, T](c: HCons[H, T]): T {.inline.} = c.t
```

and how to use it

```nim
# example.nim
import hlist

proc printAll(n: HNil) = discard
proc printAll[H; T](hl: HCons[H, T]) =
  echo hl.head
  printAll(hl.tail)

let l = "hi" <> (2 <> hNil)
printAll(l)
```
Re: Heterogen lists
I am on my phone now, I will try to give an example tomorrow
Re: Heterogen lists
No, you don't need parameters there, you can leave them unspecified. HCons is just the existential type HCons[H, T] for some H and T
Re: Heterogen lists
If you want to prevent T from being anything other than HNil or HCons, just make the constructor private and expose two overloads of cons
Re: Heterogen lists
@trtt Ok, I have a version that works

```nim
type
  HNil = object
  HCons*[H, T] = ref object
    head*: H
    tail*: T

let hNil = HNil()

proc cons*[H; T](hd: H; tl: T): auto =
  HCons[H, T](head: hd, tail: tl)

let l = cons("hi", cons(2, hNil))

proc printAll(n: HNil) = discard
proc printAll[H; T](hl: HCons[H, T]) =
  echo hl.head
  printAll(hl.tail)

printAll(l)
```
Re: Heterogen lists
@trtt I agree that the code I posted is not working. But I am convinced that it **should** work, and the fact that it is not working is a bug in Nim. It is true that a generic T does not have fields like head and tail. On the other hand, all concrete types T which are ever used in specializing that function **do** have head and tail. I will investigate more
Re: Heterogen lists
@trtt You can add such a constraint with `T: HList` if you want. It seems not to work very well now, maybe I will try to figure out why. But there is another point which is important. You **do not** lose the generic type information if you don't specify T. This is because the type checking and type inference happens when T is specialized to a concrete type. So, the checking is not done _a priori_ on the generic proc - that is only a recipe which is eventually converted into a concrete proc which is type checked as if it was not generic
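To make the point about deferred checking concrete, here is a small sketch (the `Box` type and `getHead` proc are mine, purely illustrative): the generic body is only type checked when instantiated with a concrete type, so an unconstrained T causes no error until a bad instantiation happens.

```nim
type Box = object
  head: string

# A generic proc is only a recipe: the body is type checked when T is
# specialized to a concrete type, not when the proc is declared.
proc getHead[T](x: T): auto = x.head

echo getHead(Box(head: "hi"))  # ok: Box has a `head` field
# echo getHead(42)             # error, but only at this instantiation
```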
Re: Heterogen lists
@trtt Apart from using solutions for pattern matching such as gara, you can iterate recursively over HLists and use if/case statements. This is how I would write a barebones HList, but for some reason the recursive printAll does not seem to work

```nim
type
  HListKind = enum
    hkNil, hkCons
  HList*[H, T] = ref object
    case kind*: HListKind
    of hkNil: discard
    of hkCons:
      head*: H
      tail*: T

let hNil*: HList[void, void] = HList[void, void](kind: hkNil)

proc cons*[H; T](hd: H, tl: T): HList[H, T] =
  HList[H, T](kind: hkCons, head: hd, tail: tl)

let l = cons("hi", cons(2, hNil))

# proc printAll[H; T](hl: HList[H, T]) =
#   case hl.kind
#   of hkNil:
#     discard
#   of hkCons:
#     echo hl.head
#     printAll(hl.tail)

# printAll(l)
```
Re: FE web libraries
@miran thank you, I did not think of that! :-D
Re: FE web libraries
What's a FE lib?
Re: Should we get rid of style insensitivity?
@gemath Actually, a pure Nim library may well not respect NEP1 and follow a different naming style. If I use such a library, I will call it using NEP1 identifiers - no importc in sight. I think it could make sense to add a warning in the compiler for the case where mixed styles are used **inside** the same project (that is, not in code coming from Nimble libraries)
Re: Should we get rid of style insensitivity?
@allochi that experiment does not really make much sense, because you are not supposed to use style insensitivity to mix styles inside a codebase, unless you are a masochist. Style insensitivity is meant to let you take a library written in a different style and use it inside your project without having to adapt to the library's style
Re: TechEmpower Web Framework Benchmarks
@moerm I think the misunderstanding stems from your claims

> To be frank, almost everything seems to be faster than Nim in the techempower benchmark. A quick look at Single query, Multiple queries, Fortunes, Data updates (all "physical"), each with all sub-categories, showed that Nim was not even in the results.

This seems to imply that you think that Nim does not appear in these categories because it is slower than everything else - or at least, this is the way @dom96 and I interpreted it. Instead, the reason why Nim does not appear in these categories is that entries for these categories were simply not submitted :-)
Re: TechEmpower Web Framework Benchmarks
@moerm Nim was not in the results for these categories because - well, probably no entry for Nim was submitted :-)
Re: Should we get rid of style insensitivity?
Style insensitivity is my least favourite feature. That said, I also don't want a vote. I'd rather have a stable 1.0 sooner instead of another language change
Re: Nim as a hot-patching scripting language?
Nim does not have a REPL (well, there is one but it does not work very well), but it has an interpreter, which is used, among other things, to evaluate macros and to run NimScript files such as those used by Nimble. You can embed the interpreter in your application - a starting point could be [https://github.com/Serenitor/embeddedNimScript](https://github.com/Serenitor/embeddedNimScript)
Re: should we deprecate nim.cfg in favor of config.nims format?
In fact, `--clang.cpp.options.linker = "-Oz -s ASM_JS=1 --separate-asm"` would do as well, if not for the dots in the option name. Options without dots can be configured in a way that is almost identical to nim.cfg
Re: should we deprecate nim.cfg in favor of config.nims format?
I guess

```nim
switch("clang.cpp.options.linker", "-Oz -s ASM_JS=1 --separate-asm")
```

and so on
Re: should we deprecate nim.cfg in favor of config.nims format?
> Deprecation of the old config system is tough as long as the new config > system is so slow. :-( Anecdotally I have yet to experience any difference whatsoever. This may be due to the fact that I build using Nimble anyway, so the only point where I have to use config.nims/nim.cfg is to configure nimsuggest for editors - and in that case the cost of parsing the file is completely irrelevant I am also in favor of using config.nims everywhere
Re: Does Nim need package-level visibility?
I don't know which solution I would appreciate most, but I should mention that I have also frequently run into 1), 2), 3) and 4)
Re: how to increase velocity for merging PRs?
It's not only about the compiler; in any case, most PRs are actually for the stdlib
Re: unary operators are often best replaced by explicit names, eg: `%*` => toJson
If you have ever tried to write some complex JSON structure in a typed language, you will quickly realize that such shortcuts are indeed very useful. Now, this is true especially for `%`, which will be repeated often, while `%*` is only used once per structure. That said, I find `%*` consistent with `%`, and I would never deprecate `%` for the above reason
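To see why the shortcut matters, compare the two forms using the stdlib json module: `%*` builds a whole JsonNode tree from literal syntax in one shot, while `%` lifts one value at a time and must be repeated on every field.

```nim
import json

# `%*` converts an entire nested literal into a JsonNode tree
let person = %*{
  "name": "Ada",
  "age": 36,
  "langs": ["Nim", "Fortran"]
}

doAssert person["name"].getStr == "Ada"
doAssert person["langs"][0].getStr == "Nim"

# The same structure with `%` alone is much noisier:
# `%` must be applied to each value separately
let manual = %{"name": %"Ada", "age": %36}
doAssert manual["age"].getInt == 36
```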
Re: Proc parameters, local copy
When you write `var a = a`, there is no local variable `a` in scope yet, so the (new) `a` gets assigned the value of the parameter `a`. When you write `var a: int`, you introduce a local variable `a` that shadows the parameter, and then `a = a` is a no-op
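A minimal sketch of the two cases (the proc names are mine, purely illustrative):

```nim
proc demo(a: int): int =
  var a = a   # new local `a`, initialized from the parameter `a`
  a += 1
  a

proc demo2(a: int): int =
  var a: int  # local `a` shadows the parameter and starts at 0
  a = a       # no-op: reads and writes the same local
  a

echo demo(5)   # 6
echo demo2(5)  # 0
```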
Re: Gara: pattern matching DSL
I am really happy to see a complete pattern matching library for Nim! Congratulations for your efforts!
Re: Concepts syntax and interfaces - declarative vs procedural
I don't think it is a good idea to be explicit about `<` being a proc. For me concepts are about (static) duck typing: if it supports `<`, then it satisfies the concept. This is good, because I will want to call `<` on members of the type. I see concepts as a promise that the code I write next will compile when specialized. I don't really care if `<` is a template or a proc, or possibly it is defined on a different type but `a < b` still compiles because of converters.
Re: Introducing the nimgen family of Nim wrappers
Yup, that's it
Re: Nim partners with Status.im
Congratulations! I am very happy to hear about this partnership with the very talented Status team!!
Re: Concepts syntax and interfaces - declarative vs procedural
Agreed with GULPF: the imperative syntax of concepts makes it more suited to some kind of duck typing. Moreover, I find

```nim
type Comparable = concept x, y
  (x < y) is bool
```

simpler than

```nim
type Comparable[T] = concept
  proc `<`(x, y: T): bool
```

(if anything it is at least less verbose)
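To show the duck-typing style in action, here is a small sketch using the first form (the `largest` proc is mine, purely illustrative): any type for which `x < y` yields a bool satisfies the concept, however `<` happens to be defined.

```nim
type Comparable = concept x, y
  (x < y) is bool

# Generic proc constrained by the concept: it compiles for any T
# that supports `<` with a bool result
proc largest[T: Comparable](xs: seq[T]): T =
  result = xs[0]
  for x in xs:
    if result < x:
      result = x

echo largest(@[3, 1, 4, 1, 5])  # 5
```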
Re: Introducing the nimgen family of Nim wrappers
I have done wrappers for BLAS, LAPACK and CUDA using only c2nim and some manual work. BLAS and LAPACK are ok and they are unlikely to change, but it would be nice if my CUDA wrappers could be automated using Nimgen (ideally in a backward compatible way), so that they could be kept up to date with newer CUDA releases.
Re: [best practice] we should use `check` or `require` instead of `echo` + `discard` magic in tests
I'd say both are needed. Testament certainly works well for the compiler, but a standard library unit testing module is very useful and enough for most libraries
Re: How do I trace procs/things back to the module they come from?
The simplest way is to use an editor with nimsuggest support - for instance I am happy with Visual studio code and its Nim extension, but many other editors support Nim integration
Re: Globally-invoked macros
This discussion was rather strange until now because the first post was not visible. Now that we have more context, my impression is that the feature makes sense, but maybe it would need more use cases. recur is one, and another one I can think of is adding unobtrusive serialization support. Any more use cases?