Re: Connection-Pooling Compile-Time ORM
Can't help myself:

```nim
const data: SqlQuery = sqls:
  SELECT Pet.name
  FROM Pet
  INNER JOIN House ON House.id = Pet.houseId
  WHERE House.country = "CA"
```

That would be really clean.
Re: Connection-Pooling Compile-Time ORM
To follow up on @Variount's question: is the ORM goal to be something like:

```nim
type
  Pet = object {. tableName: "pets" .}
    name: string
    age: int {. fieldName: "age_years" .}

const data: SqlQuery = sqls:
  select Pet.*
  `from` Pet
  where Pet.age = 4
```
Re: Connection-Pooling Compile-Time ORM
As to case, in this particular application, one could have a version that supports the typical styling guidelines for SQL statements:

```sql
SELECT blah FROM xyz WHERE zin=4
```

could be DSL'd as:

```nim
const data: SqlQuery = sqls:
  SELECT "blah"
  FROM "users"
  WHERE "zin = 4"
```

It would avoid the use of backticks and have a certain visual appeal to SQL coders. But then you would be violating Nim's case guidelines. Sometimes one can't win. :-)
Re: advanced `nim doc` use
Or, I could just give up and generate two html files and use the "exports" section at the bottom. Not the end of the world. Just curious if there is a work-around.
Re: advanced `nim doc` use
Are there any `when` conditions I could use just for documentation? Aka:

```nim
when defined(docgen):
  type
    ABC = object
      ## this is the fancy ABC object. Use it to track blah and stuff.
      blah: int
else:
  import common  # this imports the REAL type ABC for shared use

import seesaw    # brings in "seeSawAction"; that module also imports common

var x = ABC(blah: 4)
let bing = x.seesawAction()
```
Re: advanced `nim doc` use
The `--docInternal` simply documented the non-exported items also. But the items from `common.nim` still didn't make it. But..that option tells me I've not seen all the options yet. I'm off to hunt...
advanced `nim doc` use
I'm working on a library that is fairly long. After a single nim file reached about 2000 lines, I realized a refactoring was way overdue. So I broke the file into four source files. In past projects, I could often abstract the subtending files so that there was no dependency on the main file. But this one was not so easy because, at a fundamental level, these source files all depended on a common set of types. No problem: I simply put the shared types into a common.nim file and had all of them, including the main file, import from common. The main file also does `export` on each of the types so that the user of the library can also see them. All is good. Then I used nim doc. Now the very important, and needs-to-be-well-documented, types are no longer part of the generated documentation. Nor can I see a way to import them into the documentation. (For procedures, I simply create stubs. Aka `proc abc*(): string = submoduleA.abc()`. Not the most performant thing to do, but I get my docs generated.) I have tried a Linux bash workaround that kind-of-sort-of works with a lot of tweaking and some oddly conditional compilation weirdness. An excerpt from the nimble file (pretending the library is called "main"):

```nim
task docgen, "Generate HTML documentation":
  exec "cat common.nim main.nim > temp.nim && nim doc -o:doc/main.html temp.nim && rm temp.nim"
```

But I'm really not a fan of that as an answer. Is there a better way to do this?
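One avenue worth checking before resorting to the cat-the-files hack (hedged: I haven't verified it fits this exact layout): `nim doc` has a project mode that generates linked pages for every imported module, including a `common.nim`:

```
nim doc --project --index:on --outdir:htmldocs main.nim
```

The shared types end up documented on their own module page rather than inlined into main's page, so this helps discoverability but may not satisfy a single-page goal.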
Re: jester: one handler for several routes?
While not as flexible as @treeform's solution, if the naming allows for it, you could do a patterned naming. Either as a regex (see [https://github.com/dom96/jester#regex](https://github.com/dom96/jester#regex)) or a simple pattern. For example:

```nim
import jester

routes:
  get "/hello@suffix":
    resp "Hello World"
```

I also have an experimental fork for plugins that also supports subrouters grouped by url prefix, but that is very iffy right now.
Re: Using generic objects and inheritance at the same time.
Off-topic: is there the equivalent of a "when case"? I like that nim checks for missing scenarios when writing "case" statements. I like that nim allows for compile-time code decisions with "when" directives. Putting those together would be spiffy. Not a top-tier feature by any means, but it would be nice.
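Until/unless such a feature exists, one partial workaround sketch: when the selector is a compile-time constant, an ordinary `case` used as an expression is already evaluated at compile time and keeps the exhaustiveness checking (the enum and names below are made up for illustration):

```nim
type Domestication = enum Wild, Feral, Domestic

const mode = Feral

# Evaluated at compile time since `mode` is a const; removing one of the
# `of` branches still triggers a "not all cases are covered" error.
const label =
  case mode
  of Wild: "wild"
  of Feral: "feral"
  of Domestic: "tame"

echo label   # "feral"
```

It doesn't give the full `when`-style dead-code elimination across branches, but the missing-scenario check survives.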
Using generic objects and inheritance at the same time.
After many many hours of trying to make this work, I finally got it to go. I suspect I'm not the only person to struggle with this, so I figured I'd document the answer here. Rather than getting into theoreticals, let's just jump into code. Nim supports inheritance for objects. An example:

```nim
# Parent object
type
  Animal = ref object of RootObj
    id: int

method sayFeet(a: Animal): string =
  result = "[" & $a.id & "] has an unknown number of feet"

# Child object
type
  Dog = ref object of Animal
    hairLength: float
    legCount: int

method sayFeet(d: Dog): string =
  result = "Dog [" & $d.id & "] has " & $d.legCount & " feet."

# Child object
type
  Cat = ref object of Animal
    whiskerCount: int
    pawCount: int

method sayFeet(c: Cat): string =
  result = "Cat [" & $c.id & "] has " & $c.pawCount & " feet."

# generic proc
proc describe(foo: Animal) =
  echo "animal detail: " & sayFeet(foo)

# use it
var
  sparky = Dog(id: 1, legCount: 4)
  mittens = Cat(id: 2, pawCount: 4)

sparky.describe()   # "animal detail: Dog [1] has 4 feet."
mittens.describe()  # "animal detail: Cat [2] has 4 feet."
```

Yes, this and all the examples are silly. This is more about proof-of-concept. Nim also supports generics in object types. Among other things, this allows for compile-time optimization.

```nim
type
  Domestication = enum
    Wild
    Feral
    Domestic

type
  Animal[wildness: static[Domestication]] = ref object of RootObj
    id: int

proc sayFeet(a: Animal): string =
  result = "[" & $a.id & "] has an unknown number of feet"

proc describe(foo: Animal) =
  when foo.wildness == Wild:
    echo "Wild animal detail: " & sayFeet(foo)
  elif foo.wildness == Feral:
    echo "Feral animal detail: " & sayFeet(foo)
  else:
    echo "Tame animal detail: " & sayFeet(foo)

var
  sparky = Animal[Feral](id: 1)
  mittens = Animal[Domestic](id: 2)

sparky.describe()   # "Feral animal detail: [1] has an unknown number of feet"
mittens.describe()  # "Tame animal detail: [2] has an unknown number of feet"
```

But, can you put both ideas together?
Yes, but there are apparently a few things to keep in mind. The example of both:

```nim
# Parent object
type
  Domestication = enum
    Wild
    Feral
    Domestic

type
  Animal[wildness: static[Domestication]] = ref object of RootObj
    id: int

proc sayFeet(a: Animal): string =
  result = "[" & $a.id & "] has an unknown number of feet"

# Child object
type
  Dog[wildness: static[Domestication]] = ref object of Animal[wildness]
    hairLength: float
    legCount: int

proc sayFeet(d: Dog): string =
  result = "Dog [" & $d.id & "] has " & $d.legCount & " feet."

# Child object
type
  Cat[wildness: static[Domestication]] = ref object of Animal[wildness]
    whiskerCount: int
    pawCount: int

proc sayFeet(c: Cat): string =
  result = "Cat [" & $c.id & "] has " & $c.pawCount & " feet."

# generic proc
proc describe(foo: Animal) =
  when foo.wildness == Wild:
    echo "Wild animal detail: " & sayFeet(foo)
  elif foo.wildness == Feral:
    echo "Feral animal detail: " & sayFeet(foo)
  else:
    echo "Tame animal detail: " & sayFeet(foo)

# use it
var
  sparky = Dog[Feral](id: 1, legCount: 4)
  mittens = Cat[Domestic](id: 2, pawCount: 4)

sparky.describe()   # "Feral animal detail: Dog [1] has 4 feet."
mittens.describe()  # "Tame animal detail: Cat [2] has 4 feet."
```

Some things to watch out for:

* For inheritance in general, you really need "ref object" rather than "object" objects. Things get lost otherwise.
* Notice the `ref object of Animal[wildness]` on the child object definitions. The parent object reference must include the generics.
* Inheritance WITHOUT generics: use "method" not "proc". WITH generics, use "proc" not "method". And no, I don't know why.

Hopefully this is helpful to somebody. :-)
A good word for idiomatic nim?
This will be a fun/light post. The following is idiomatic to Nim:

```nim
let myNumber = 3 + 9
```

whereas the following is not:

```nim
let My_Number = `+`(3, 9)
```

Prior to my introduction to Python I really never cared much about being idiomatic; but in that context I learned how much of a difference it can make to shared code. The cultural effect was strong enough that it generated a new word: _pythonic_, which is much quicker to say/write than "in a form idiomatic to Python". So, has the community here thought of any good words for "in a form idiomatic to Nim"? nimic? nimetic? nimal? nimium? araqal? clearium? Feel free to pointlessly discuss words in this thread. :)
Call-for-Help: a 128-bit Decimal library expansion
I have just released a new decimal library for Nim that is based on the IEEE 754-2008 specification. [https://github.com/JohnAD/decimal128](https://github.com/JohnAD/decimal128) It has a pending PR for inclusion in the nimble directory. Examples of use:

```nim
let a = newDecimal128("0.9")
let b = newDecimal128("-Infinity")
let c = newDecimal128("1.23E+4023")
let d = newDecimal128("12.3E+4022")

assert c === d  # for === to be true, both the numeric value and the
                # number of significant digits must match
```

This is the spec used by both the BSON protocol and MongoDB database, which is why I wrote it. It successfully encodes and decodes NumberDecimal() fields. I'll be formally adding it to my `bson` and `mongopool` libraries this next week. In general, however, it should also work for folks wanting decimal support and the associated tracking of significant digits. For me, I pretty much have what I needed: import/export. But I suspect folks will notice a big glaring problem: **mathematical operators do not work yet**. You can't do `a + b` because a

```nim
proc `+`*(left, right: Decimal128): Decimal128
```

procedure has not been written yet. So, the call-for-help: if you are interested in having a decimal library for Nim, please consider helping out by writing one of the operator procedures! An issue has been made for tracking progress: [https://github.com/JohnAD/decimal128/issues/1](https://github.com/JohnAD/decimal128/issues/1) Thanks for considering this. I know folks have asked for a decimal library in Nim in the past. This could be a good opportunity to flesh one out based on a known standard.
Re: Semantic grep, a very cool idea (currently mostly for Python)
I like the idea. I'll have to play with it sometime. I'm glad you linked the slides. I started by looking at the examples in the `semgrep` repo, and they all showed semgrep run through a docker, which makes sense for PHP/Python etc. but would be odd for a compiled language. I would think putting the source code into a docker instance would be a security hazard. I'll probably play with this the next time I need to do a deep code search on legacy code.
Re: Nim programming book for kids
I do both video and written books in other contexts (in fact I own a small book publishing company called Purple Squirrel Productions). I would simply say that different people learn things differently. But I agree that YouTube videos can be boring for me. That is especially true of "do this exactly" instruction videos. I'd rather have an online doc that I can cut&paste from. For that reason, I try to infuse my videos with the philosophy of why things are the way they are. I want to give things a sense of context or history that helps explain them. I remember in one of my Python db driver videos going on a small rant about the horrible and misleading naming of sequences in JS and JSON and how that propagated into BSON and MongoDB. Oddly, people seemed to appreciate that rant.
need for a decimal library?
In both

* working on projects that involve financial transactions, and
* maintaining the nim MongoDB libraries `mongopool` and `bson`, which have a decimal type in addition to float,

I've been tempted multiple times to build out a library for Nim that supports decimal numbers (base 10 math and storage). Specifically, I've looked at implementing _Decimal128_ based on _IEEE 754_. More details:

* [https://en.wikipedia.org/wiki/Decimal128_floating-point_format](https://en.wikipedia.org/wiki/Decimal128_floating-point_format)
* [https://metacpan.org/pod/BSON::Decimal128](https://metacpan.org/pod/BSON::Decimal128)

I enjoy both math and machine-level work, so it is, as I say, tempting to start this. I'd want to write a pure-Nim library. I could either:

1. write a standalone Decimal128 library
2. incorporate Decimal128 into the `bson` library
3. write it for possible inclusion into Nim as a standard library

But here is the problem: this is a non-trivial project and I'm fairly swamped with work at the moment (I'm a remote contract programmer). So, I'm writing to the forum to ask:

* Is this something that would be very useful to others here?
* Are there other programmers who would be interested in helping out?
Re: Video series: Making a WebSite with Nim
Yes, the code is at: > [https://github.com/JohnAD/bookclub](https://github.com/JohnAD/bookclub) There is a Release pointing to the commit made at the end of each episode.
Re: Video series: Making a WebSite with Nim
Video #3 is out: > [https://www.youtube.com/watch?v=R1FOKpg5UNs](https://www.youtube.com/watch?v=R1FOKpg5UNs)
Re: Where can I deploy a Nim web application? Is there a "NimAnywhere" yet?
@dom96 True. I run docker-compose for consistency and separation mostly.

**consistency**

It almost guarantees that the instance on my local laptop will be truly replicated on any instance I run on the cloud. My nginx configs always include `*.localtest.me` support. Basically, I can test the website by:

* running the app in the source directory (127.0.0.1:5000)
* running the server in a docker-compose instance on my laptop with all the bells-and-whistles surrounding it, just like on the cloud (thedomain.com.localtest.me)
* running the server in production on the cloud (thedomain.com)

I also use `docker-compose`'s virtual environment variable support to avoid storing passwords and credentials in repos. Mostly. I'm not always consistent about that :).

**separation**

I typically run a couple dozen low-traffic websites on a docker-compose instance. I don't want the websites messing with each other. Actually, with Nim, I suspect I can run many more than that. Nim-compiled websites are really fast and have a tiny footprint. In fact, right now, one of my Nim-based websites is really messing up due to some obscure db-related bug. But the other 6 Nim sites on that same instance are running smooth and unaffected.

**downside**

The biggest downside, IMO, is the learning curve. If you are not comfortable with Docker, that is a non-trivial amount to learn. It has a lot of conceptual curve balls. It's like git: only easy once you already understand it. A catch-22.

But in general you are right. Anything that can run nginx and a compiled unix program can host a nim-based website. In fact, one would be hard-pressed to find systems that _can't_ run a nim website. I've put them up on linode also, where I don't use Docker. (I could, but I use linode for experiments usually.)
Re: Where can I deploy a Nim web application? Is there a "NimAnywhere" yet?
Myself, I use Digital Ocean for hosting, using docker-compose. I keep the source code (w/o credentials) for each site in separate private repos and put the results in a separate repo for the docker instance. Essentially, each website is compiled with C. I write a move.sh bash script to move only the important files, including the executable, to the distribution repo. So, to publish changes from any of the websites:

1. After saving my work in a commit and pushing it to GitHub, I run the move.sh script.
2. I go to the distribution repo directory, commit the new changes and push to GitHub.
3. I shell into the DigitalOcean host(s).
4. I take the docker down, pull from GitHub, build, and restart the docker. Done.

The distribution directory has a subdirectory for each website containing:

* the website executable (which I call webapp)
* any support files such as html templates; and any subdirectories
* a Dockerfile. An example:

```dockerfile
FROM python:3.7
WORKDIR /ngosite
ADD . /ngosite
```

(Ignore the word "python" up there. I'm not running any python; it is just a convenient starting image for me.)
In the "nginx" subdirectory, I have a single conf file for each website similar to:

```nginx
# ngo.conf
server {
    listen 80;
    server_name nimgame.online.localtest.me;
    location / {
        proxy_pass http://ngo:5000/;
        proxy_set_header Host "nimgame.online";
    }
}
server {
    listen 80;
    server_name nimgame.online;
    return 301 https://nimgame.online$request_uri;
}
server {
    listen 443 ssl;
    server_name nimgame.online;
    ssl_certificate nimgame.online.crt;
    ssl_certificate_key nimgame.online.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    location / {
        proxy_pass http://ngo:5000/;
        proxy_set_header Host "nimgame.online";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded_Proto $scheme;
    }
}
```

My overall docker-compose.yaml file looks something like this:

```yaml
version: '3.3'
services:
  ngo:
    restart: always
    build: ./ngo
    ports:
      - "5000"
    volumes:
      - "./ngo:/ngosite"
    command: /ngosite/webapp
  tr:
    restart: always
    build: ./tr
    ports:
      - "5000"
    volumes:
      - "./tr:/trsite"
    command: /trsite/webapp
  ps:
    restart: always
    build: ./ps
    ports:
      - "5000"
    volumes:
      - "./ps:/pssite"
    command: /pssite/webapp
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
      - "443:443"
    links:
      - ngo
      - tr
      - ps
volumes:
  datavolume:
```

I don't run any database myself. IMO, that is a good way to lose data unless you are a skilled DB admin and have built a full cluster. Instead I have a subscription with ScaleGrid for shared databases. In fact, I choose my database instances on ScaleGrid that are on the same AWS network as my Digital Ocean instances. No point in doing database queries across the open Internet backbone. In fact, I generally never store important new data on the Docker. I do store cache data locally. For example, I store IP Geo lookup caches locally. It's okay if that gets wiped out from time to time. I'm not claiming my setups are ideal, but this can be a starting point.
Re: Format() problem with Jester
I recognize this code. :) If you would like the original, I've been placing the website code after each section of the YouTube series at: [https://github.com/JohnAD/bookclub](https://github.com/JohnAD/bookclub) If you look at the releases, each release points to the commit for that part of the video. That reminds me...I've written episode 3 but need to get it published.
Re: Video series: Making a WebSite with Nim
It took considerably longer than I intended to get the video #2 out. But it _is_ out now: > [https://www.youtube.com/watch?v=tz6av3o4XUI](https://www.youtube.com/watch?v=tz6av3o4XUI) Video 3 is written and should be posted this weekend.
Re: A 'made with Nim' website?!
Thanks for the suggestions! Both the new game (smalltrek) and the "Add Game" form are up on the nimgame.online website.
Re: Async web servers and database
An alternative is to have a pooled driver that is friendly to async. My MongoDB driver does this: `mongopool` (on nimble). Essentially, the connections to the database are pooled and used dynamically by the server threads. So no slowdown at scale. With postgresql and mysql this is not as pronounced as it is with MongoDB. On MongoDB an authenticated SCRAM connection can take 1 full second to form but the query is almost instantaneous. So having each thread start its own connection is a very bad idea. I think some of the Python drivers do both: pooled connections and async.
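To illustrate the pattern (this is not `mongopool`'s actual API, just a conceptual sketch with invented names):

```nim
import std/[asyncdispatch, deques]

type
  Connection = object   # stand-in for a real driver's connection type
    id: int
  Pool = ref object
    idle: Deque[Connection]

proc newPool(size: int): Pool =
  # Pay the slow handshake cost once, up front, for the whole pool.
  result = Pool(idle: initDeque[Connection]())
  for i in 1 .. size:
    result.idle.addLast Connection(id: i)

proc acquire(p: Pool): Future[Connection] {.async.} =
  while p.idle.len == 0:
    await sleepAsync(1)        # yield until a connection is released
  result = p.idle.popFirst()

proc release(p: Pool, c: Connection) =
  p.idle.addLast c

proc handler(p: Pool) {.async.} =
  let conn = await p.acquire() # cheap: no per-request handshake
  try:
    discard conn               # ...run the query here...
  finally:
    p.release(conn)

waitFor handler(newPool(4))
```

The key point is that the expensive SCRAM handshake happens once per pooled connection, not once per request.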
Re: A 'made with Nim' website?!
@Divy, that is a great idea. I'll add such a form this weekend when I'm adding the smalltrek game.
Re: A 'made with Nim' website?!
@hyl I'm happy to add any nim javascript game. My searches on GitHub have not been very fruitful so far. Either the games are not actually finished or they don't compile to Javascript. My incentive for making the site, in part, is to make it easier for future devs to find examples of nim js games. The game you linked to is a great one! I'll place it up on the site this weekend. In fact, I'll try hunting for more on itch.io.
Re: A 'made with Nim' website?!
Sounds like a useful resource. Be sure to post a link. In a similar manner, I've been slowly maintaining a website showcasing open-source nim javascript games: [https://nimgame.online](https://nimgame.online)/
Suggestions for performance-tracking a nim database driver?
So, for the `mongopool` MongoDB database driver, I'd like to create a set of performance benchmarks so that the impact of future code changes can be relatively measured. There is, AFAIK, no real means of reliably "mocking" a MongoDB server (esp. since I'm wanting to mock the performance characteristics, not just the API.) So, I'm thinking:

* Create a docker-compose YAML file for a container on the repo under a `/performance` directory. In it include:
  * a specific copy of MongoDB
  * a generic ubuntu instance that pulls in and runs a script that:
    * runs a test executable that saves a temporary text file
    * runs a nim webserver that serves that text file
* Create a "performance" task in the nimble file that:
  * compiles the test executable; moves it to the `/performance` directory
  * compiles the mini webserver; moves it also
  * builds the docker instance
  * starts the docker
  * pulls the results from the webserver that eventually appears; and saves the results in a file with a timestamp in the name
  * shuts down the docker instance and deletes it (forcing a fresh empty virtual drive for the next MongoDB server)
  * displays the % of performance difference on the screen

Then, one simply runs `nimble performance` on a machine that supports docker and, many many minutes later, you have some new results. Two huge problems with this approach:

* I'm doing a lot of re-inventing for this to work.
* This is only really meaningful when run from the same machine.

In fact, now that I've written this all out, I'm somewhat less motivated to do all this. Does anyone have any suggestions for making this easier and more consistent?
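The docker-compose piece of that plan might start as small as this sketch (service names, the image tag, and the script path are placeholders I invented):

```yaml
# /performance/docker-compose.yml
version: '3.3'
services:
  mongo:
    image: mongo:4.2       # pin a specific MongoDB version so runs are comparable
  runner:
    image: ubuntu:20.04
    depends_on:
      - mongo
    volumes:
      - ./:/performance
    command: /performance/run_benchmarks.sh  # runs the test exe, then serves results
    ports:
      - "8080:8080"
```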
Re: conditional compilation from macro-generated code?
doh! Looking at old code of mine, I should have recalled that.
conditional compilation from macro-generated code?
If I do:

```nim
var x: int
if declared(joe):
  x = joe()
else:
  x = 4
```

Essentially, that is conditional compilation: the existence or non-existence of "joe" is checked at compile time. All is good. If I use a macro to generate some code, that works also. All good. But if I use a macro to generate code that contains a conditional compilation check: not so good. If "joe" is not declared, I get an `Error: undeclared identifier: 'joe'`, despite the conditional nature of the check. Is this correct behavior? If so, that is fine; I can see how the order of compilation makes some things not possible. If it is correct behavior, what are some good workarounds?
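One workaround sketch (hedged: behavior may vary by compiler version): have the macro emit a `when` rather than an `if`, since the untaken branch of a `when` is never semantically checked. `joe` here is the hypothetical maybe-declared proc:

```nim
import macros

macro joeOrDefault(): untyped =
  # Emit a `when`, not an `if`: the false branch of a `when` is
  # discarded before semantic checking, so the possibly-undeclared
  # identifier never triggers an error at the expansion site.
  result = quote do:
    when declared(joe):
      joe()
    else:
      4

var x: int = joeOrDefault()
echo x   # expected: 4, since no `joe` is declared here
```

The trick is that the `declared` check is deferred to where the macro is *expanded*, instead of being resolved while the macro builds its AST.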
Re: conditional compilation from macro-generated code?
I guess the question would be: is there a good way for the macro itself to detect, using a string, if an identifier has been declared?
Re: Fizzbuzz game
saved 5 more:

```nim
for i in 1..100:
  case i mod 15:
  of 0: echo "FizzBuzz"
  of 3,6,9,12: echo "Fizz"
  of 5,10: echo "Buzz"
  else: echo $i
```

I'm going to stop now. Down these paths lies OCD madness. :)
Re: Fizzbuzz game
five characters saved:

```nim
for i in 1..100:
  if   i mod 15==0: echo "FizzBuzz"
  elif i mod  5==0: echo "Buzz"
  elif i mod  3==0: echo "Fizz"
  else:             echo $i
```

Of course, you could also then remove all the fancy spacing in the if statement to make it shorter.
Re: future of htmlgen
> You can use any of the macro-based html-generating nimble libraries or create your own html builder. Not sure why would you use something like moustache when macros exist.

There are pros and cons to each direction. I use the macro approach on one of my larger websites. It now takes 8 minutes to compile every time I change a web page (plus a new submission of the exec to the docker). Nor can I allow self-manipulation of pages via the web interface, wordpress-style. (Though, for some pages, that is a good thing.) It is insanely fast though. A minimal docker instance can handle a phenomenal amount of traffic. So, for something like an API or a Karax backend, a template library is likely a bad idea. For a large traditional page-centric web site, it scales better, IMO. As always, it depends on the details.
future of htmlgen
I'm about finished with production on my next YouTube video, which is about demonstrating some of the methods of generating the HTML views under Jester. They currently are:

1. simple string manipulation (aka `strutils`)
2. using `htmlgen`
3. using Source Code Filters
4. from nimble, using the logic-free templating library `moustachu` (this is what the remainder of the series will use)

But... I vaguely recall hearing somewhere that `htmlgen` is slated for removal from the standard library. But I might be remembering wrong. Is it slated for removal? If so, I'll skip it. Also, should I categorize source code filters as the "officially recommended way" to generate html? Or is that extrapolating too much?
Video series: Making a WebSite with Nim
The first two episodes are up on YouTube. I hope to do about one a week; there will be at least nine of them in the end: > [https://www.youtube.com/playlist?list=PL6RpFCvmb5SGw7aJK1E4goBxpMK3NvkON](https://www.youtube.com/playlist?list=PL6RpFCvmb5SGw7aJK1E4goBxpMK3NvkON) On an adjacent topic: I will be at FOSDEM at the start of February. I see at least 4 talks on Nim, including ones from `Araq` and `dom96`. I look forward to meeting up with folks in the Nim community!
Re: Error converting json to types
BSON uses Extended JSON (v2); and BSON does have a UTC Time/Date function. Granted, it converts to Time rather than DateTime. BSON also has marshalling support to/from objects. If interested in that: [https://nimble.directory/pkg/bson](https://nimble.directory/pkg/bson) Example of use with objects:

```nim
import json
import times
import bson
import bson/marshal

type
  MyType = object
    cannonicalDate: Time
    relaxedDate: Time

marshal(MyType)

let example = """{
  "cannonicalDate": {"$date": {"$numberLong": "1567367316000"}},
  "relaxedDate": {"$date": "2019-08-11T17:54:14"}
}"""

let parsedJsonDoc = parseJson(example)
let bsonDoc = interpretExtendedJson(parsedJsonDoc)

var exampleObj = MyType()
exampleObj.pull(bsonDoc)  # this applies the BSON to the object

assert exampleObj.relaxedDate.format("yyyy-MM-dd'T'HH:mm:ss", utc()) == "2019-08-11T17:54:14"
```

But, this does not solve the original poster's problem if the JSON he receives is not ExtendedJSON compliant. As to modifying the library: IMO Araq is correct; since JSON does not support dates, putting such support into the standard library would be improper. Someone could write a nimble support library though.
Re: Error converting json to types
(disclaimer: I'm the author of BSON)
Re: Walking trees without recursive iterators
It is always possible to move a recursive algorithm to a non-recursive one; even avoiding pseudo-stacks or tail-recursion games. It is, ironically, something I enjoy doing because it is quite the intellectual puzzle sometimes. The toughest one I did was with Negamax (Minimax with restrictions) in a python library. (Sorry, I've not moved it to Nim yet.) For context: writing a normal negamax algo takes, perhaps, a few hours in any language. But writing the non-recursive one involved multiple convoluted arrays and more complex edge cases than I've ever encountered before. It took over a week of intense work. So, anything your head can do can be described in an algorithm with enough scrutiny. (BTW, the non-recursive version of negamax is about 3 to 8 times faster. But it uses lots more memory.)
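A sketch of the simpler end of that spectrum: a non-recursive depth-first walk using an explicit stack (the "pseudo-stack" version, not the fully stack-free transformation described above). `Node` here is a made-up type for illustration:

```nim
type Node = ref object
  value: int
  kids: seq[Node]

iterator walk(root: Node): Node =
  # Iterative pre-order traversal: no recursion, just a seq as a stack.
  var stack = @[root]
  while stack.len > 0:
    let n = stack.pop()
    yield n
    # push children in reverse so the leftmost child is visited first
    for i in countdown(n.kids.high, 0):
      stack.add n.kids[i]

let tree = Node(value: 1, kids: @[
  Node(value: 2, kids: @[Node(value: 4)]),
  Node(value: 3)])

for n in walk(tree):
  stdout.write $n.value & " "   # 1 2 4 3
```

Because it is a plain iterator rather than a `{.closure.}` recursive one, it also sidesteps Nim's lack of recursive inline iterators.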
interest in a decimal library
I'm about to expand the `BSON` protocol library to support 128-bit decimal numbers. But, to do that, I would need to write 128-bit decimal number support in Nim. So, I could either:

1. put the new data type directly into the `BSON` library, or
2. create a separate 128-bit decimal library and have BSON depend on it, or
3. put partial support into the `BSON` library and save/read using the arbitrary precision library at [https://github.com/status-im/nim-decimal](https://github.com/status-im/nim-decimal) (which is not on nimble yet).

Has there been much need for such decimal support? I know if Nim is ever used for financial work, some kind of decimal arithmetic would be needed to prevent rounding issues. If I made it a separate library I would definitely put it on nimble.directory. Or, I could submit it to be part of the Nim language standard library if there was interest. Opinions?
Re: interest in a decimal library
Other notes:

1. The library would be written such that it conforms to the BSON spec:

   * [https://github.com/mongodb/specifications/blob/master/source/bson-decimal128/decimal128.rst](https://github.com/mongodb/specifications/blob/master/source/bson-decimal128/decimal128.rst)

   which uses the IEEE 754 spec:

   * [https://en.wikipedia.org/wiki/IEEE_754](https://en.wikipedia.org/wiki/IEEE_754)
   * [https://en.wikipedia.org/wiki/Decimal128_floating-point_format](https://en.wikipedia.org/wiki/Decimal128_floating-point_format)
   * [http://speleotrove.com/decimal/decbits.html](http://speleotrove.com/decimal/decbits.html)

2. Yes, there is another decimal library on nimble.directory, but it does not have high-enough precision.

3. The arbitrary precision library by status-im has the benefit of near-infinite precision and uses Python's mpdecimal C library. In theory, the downside is performance, but that C library is pretty highly optimized.
Re: indentation
Nice font. A quick test with a nim sample: [https://d.pr/i/Cj8kSE](https://d.pr/i/Cj8kSE) I wonder if using a proportional font would further discourage me from doing any...

```nim
proc myP( i: int,
          c: char )
```

...space-alignment tricks. Hmmm... a project I'll probably never get around to: design a sans-serif coding font where [space] is the same size as [0..9], [a..f], and dot (.), but otherwise everything is proportional.
Re: Editor support for Nim 1.0
I use SublimeText 3 also. While the Nim support is out of date and seems to have quirks, it does not actually cause me any problems.
confirming the purpose of `$` stringify operator
I'm about to make a few changes in some of my libraries and I'd like to ask the language devs a key question about the $ stringify operator. I want to make sure my libraries follow "nimish" convention as much as possible. So, my question: what is the specific purpose of the operator in Nim and, more specifically, who is the intended audience for the generated string? In the world of JSON and XML, the word "stringify" has a very specific meaning: converting the data structure to the string version of the spec. Essentially, if you "stringify" an XML document, you can turn around and "parse" that string and get the original data structure back (in full). Outside of that, I've only seen the term used in human grammar, where it simply means a "textual representation", which can be a summary or a spelling or what-have-you. If the former is the intended meaning of $, then if you had a type called, for example, `DoctoralThesis` and I followed general Nim convention, then: let a = newDoctoralThesis() # # add details to a, then... # let stringifiedA = $a let recoveredA = parseDoctoralThesis(stringifiedA) # `recoveredA` should have the same content as `a` would be expected to work. I'm not saying everyone's implementation of $ follows this model. I've seen lots of examples that don't. :) In fact, I've also seen it used in two other ways: 1. Generating a useful representative string of the type with a short summary of details. So, very useful to programmers and debugging. > `DocThesis(Brown, 1995)` 2. Generating an easily understood string equivalent. Very useful for creating output to be seen by end users. But, not reliably parsable. > `Brown, C. (1995). Understanding and care of fictional beagles. Washington archives. (1445202-23)` But if it is meant to be parsable/reconstructable, the string could look something like: > `("Brown", "C.", 1995, "Understanding and care of fictional beagles", "Washington archives", "1445202-23")` Anyway, sorry to ask such a pedantic question. I'm happy to write a PR to the docs if the answer to this question is useful to others.
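To make the round-trip expectation concrete, here is a minimal sketch with an invented `Thesis` type and an invented `|` separator (not any established convention) — a `$` written so that a matching parser can reconstruct the value:

```nim
import std/strutils

type Thesis = object
  author: string
  year: int

# A `$` whose output is deliberately designed to be parsed back.
proc `$`(t: Thesis): string =
  t.author & "|" & $t.year

proc parseThesis(s: string): Thesis =
  let parts = s.split('|')
  Thesis(author: parts[0], year: parseInt(parts[1]))

let a = Thesis(author: "Brown", year: 1995)
doAssert parseThesis($a) == a   # the round trip recovers the original
```

This is the "stringify means serializable" interpretation; the debugging-summary and end-user interpretations described above would drop the parser entirely.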
Re: FOSDEM Call for Participation
It's confirmed finally: I will be at FOSDEM 2020. I'll be sure to visit the devroom on Sunday.
Re: the "type" of the curly-bracket structure
As much as I wanted that to work, it doesn't. Once I place a type like that in the macro parameter, the compiler re-encodes the table constructor as an array (nnkTableConstr disappears from the NimNode). And then it stops supporting mixed structures like: `{"a": "blah", "b": 4}` Not a big deal. I'll just use different macro names.
the "type" of the curly-bracket structure
If I write a macro as such: macro `@@@`(x: typed): FancyType = # stuff goes here var a = @@@{"fromStruct": "blah", "sub": {"joe": 10}} All is good. Or if I do: macro `@@@`(x: Blork): FancyType = # stuff goes here let xyz = Blork() var a = @@@xyz All is still good. But if I do both, then one macro overwrites the other. Okay, no problem. I'll make the first one have a specific type for that curly-bracket-thing to separate it from the other. I can't seem to find the name for it though. Reverse engineering implies the type is determined on a case-by-case basis during compile-time. Makes sense. Looking at the `json` module's %* example. Yep, it uses NimNode generically. Drat. But, I'll ask anyway: is there a way to "declare" that one version of the macro is expecting the curly-bracket-thing? Also... is there an official name for the... curly-bracket-looks-like-json-but-isn't-thing? Giving me a name will help with further communication. :)
nimgame.online - open-source nim javascript games (and references to the source)
I was going to put up one of my games online anyway, so I figured I'd make it more educational: this could be an online repository of javascript nim games with links to the corresponding source code. Details at [https://nimgame.online](https://nimgame.online) If nothing else, it has two of my games on it :). But, I'm happy to put more up if anyone volunteers. (My two games use my `webterminal` library, so they are essentially shell games. But that need not be an expectation; any javascript game is okay.) John P.S. The website is also written in Nim. You are welcome to look at it on the repo, but don't expect to compile it on your machine. I've still got about 3 more libraries to put up on nimble. So much to do...
Re: FOSDEM Call for Participation
I plan on attending the upcoming FOSDEM. While I'm not prepared to "present a talk", I'm happy to volunteer my help in less-intense ways. :)
Research questions for open-source library authors re communicating with general users
I'm doing research for a possible startup where I'm writing a tool (in Nim) for supporting library repos. I also have a few libraries up on nimble.directory. So, I figured this forum might be an apt place to ask some open-ended research questions. If it is inappropriate, please let me know. * * * One of the purposes of an open-source library is for other people to use it. The other is for contributors to help develop it. And while the "users group" and "contributors group" definitely overlap, there are pretty much always more users than contributors. That makes sense. Though I help out regularly with about a dozen libraries, I also "only use" hundreds of other libraries. (Actually, since my main OS is an Ubuntu derivative, I'm probably using many thousands of other libraries indirectly every day.) For the libraries where I'm an active contributor, one of the challenges is that users often have questions. For a little-used library, the occasional "tech support" question thrown into GitHub Issues is no big deal and easily answered. But for a popular library or project, this can be more problematic. I'd love to hear people's opinions and solutions for the following. 1. For the libraries you support, how has communication with users been handled? > (Nim itself has this forum to help out, of course...) 2. Any examples of complications? 3. How often do you encounter the same questions? Is it easy to find "FAQ maintenance volunteers", etc.? 4. Can you think of other ways communication with end-users could be helpfully improved, increased, or decreased? Example stories?
Re: Introducing Norm: a Nim ORM
... checked the range. `Time` is good to the year 292277026596 and presumably backwards that far. So, you will have to worry about the Y292277026K problem.
Re: Introducing Norm: a Nim ORM
In the expansion of `norm` to support mongodb that I'm developing, I've defaulted to only supporting `Time`. The `DateTime` type breaks things on so many levels, mostly because `DateTime()` generates a runtime error. Fortunately, the `Time` type, though based on the traditional Unix timestamp (seconds from Jan 1, 1970), is signed and 64 bit. So, the actual time range is massive and includes time before 1970.
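A quick check of that claim with the standard times module — `fromUnix` with a negative offset lands before the epoch:

```nim
import std/times

# Time is a signed 64-bit unix offset, so pre-1970 instants are valid:
let t = fromUnix(-86_400)        # one day before the epoch
doAssert t.utc.year == 1969
```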
Is it possible to detect context in relation to a threadvar?
I'm developing a library that maintains a global pool of resources that can be seen in both threaded and non-threaded contexts. Rather than jump into great detail, let me show this with a silly example: import strutils import jester var someGlobal: string = "value at the moment" var someThread {.threadvar.}: string proc getSomeMain(): string = result = "copy of $1".format(someGlobal) proc getSomeFromThread(): string {.gcsafe.} = someThread = "copy of $1".format(someGlobal) return someThread someGlobal = "another change" var x = getSomeMain() echo x routes: get "/": var y = getSomeFromThread() resp y Everything works great. But, I ask, would it be possible to write a generic equivalent of "getSome"? Something like: proc getSome(): string = if MAGIC_DETECT_METHOD: return "copy of $1".format(someGlobal) else: someThread = "copy of $1".format(someGlobal) return someThread Or, if not at runtime, some kind of compile-time thing that detects whether "getSome" is being called from a thread, such as a macro or template. I apologize if the answer to this is well documented somewhere. I've not been able to find an answer yet.
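One hedged runtime possibility for the MAGIC_DETECT_METHOD placeholder (not a compile-time answer): capture the main thread's id when the module loads and compare against it later. `inMainThread` is a name invented for this sketch, and it assumes module top-level code runs on the main thread:

```nim
# Assumption: top-level statements execute on the main thread, so the id
# captured here identifies it for all later comparisons.
let mainThreadId = getThreadId()

proc inMainThread(): bool =
  # system.getThreadId returns the OS thread id of the calling thread.
  getThreadId() == mainThreadId

doAssert inMainThread()   # true when called from the main thread
```

A proc could then branch to the global or the threadvar copy based on `inMainThread()`.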
Re: Jester question: passing gcsafe variable into routes
> To fix this to work with threads you should use a threadvar, and initialise > the DB connection in each thread. Ah, but that gets me back to the 2 second delay per page. Just for fun, I started diving in. I created a new element called `decorate`: decorate <name>: beforeThread <identifier>, <type>: # causes a COPY of identifier to be passed as a parameter to the route. beforeRoutes: # statements to run prior to running the code in any route # this is different than "before:" (as found in routers) because any variables here # are still in context for the route itself. afterRoutes: # statements to run immediately after running the code in any route # used for cleanup such as saving cookies etc. afterThread: # in the same context as 'beforeThread'. Meant for cleanup after the async thread has closed. The decorators are set up to be chainable. In fact, if someone wrote templates, one could add functionality to `jester` with stuff like: jesterUserManagement() jesterTrafficLogger() Here is an example of `decorate` I am testing: import jester, sequtils type fakePoolEntry = tuple id: int free: bool var fakePool: seq[fakePoolEntry] = @[] for i in 0 .. 9: fakePool.add (id: i, free: true) var tryNext = 0 proc getOne(): fakePoolEntry = for i in 0 .. 9: tryNext = (tryNext + 1) mod 10 if fakePool[tryNext].free: fakePool[tryNext].free = false result = fakePool[tryNext] break proc returnOne(db: fakePoolEntry) = for i in 0 .. 9: if fakePool[i].id == db.id: fakePool[i].free = true decorate helloWorld: beforeThread db, fakePoolEntry: var db = getOne() beforeRoutes: var name = "Joe" var something = 3 afterRoutes: echo "something = ", something afterThread: returnOne(db) routes: get "/": something = 2 resp "hello, " & name & " pool number =" & $db.id It kind-of-sort-of works. It does what it says, but the underlying `asynchttpserver` is _also_ running threads. So the `asynchttpserver` complains about `beforeThread` accessing global memory, but the async call to the route itself is okay. Will play with it some more...
Jester question: passing gcsafe variable into routes
I have a web project that uses an external database. I'm wanting to write the site in Nim using the jester library. One of the gotchas of the database, which I cannot control, is that though the database itself is very fast, establishing an authenticated connection to it takes about two seconds. So far, I've found two ways to make this work: 1. Create a new connection to the database for each web query. Some pseudo-code to demonstrate: routes: get "/index.html": var db = connectToDatabase() r = db.query("select update from today") resp someIndexHtml(r) That works and is fully thread safe, but delays each and every page by 2 seconds. Ouch. 2. Make it not thread safe by accessing a global variable. More pseudo-code: var db = connectToDatabase() routes: get "/index.html": r = db.query("select update from today") resp someIndexHtml(r) Now the delay is on server startup. That is fine. The pages are nice and fast. The compiler warns me about accessing a global variable. If two page threads access the db simultaneously it is very likely that things will break. As long as traffic is light, I will probably get away with it. How have people worked around this problem? Just thinking out loud: an answer could be a template of some kind, such as: var pool = connectToDatabaseAsPool(20) template inbound(body: untyped) {.dirty.} = # this is run before the async call to the route var db = pool.getOneThread() template outbound(body: untyped) {.dirty.} = # this is run after the async call to the route pool.returnOneThread(db) customRoutes(inbound, outbound): get "/index.html": r = db.query("select update from today") resp someIndexHtml(r) I imagine if I clone jester I could make a custom version of routes that does this for me. But I'd rather not go down that path. Thoughts?
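To sketch the pooling idea, here is a minimal, hypothetical pool with placeholder types. `DbConn`, `newPool`, `acquire`, and `release` are all invented for this sketch; a real version would open actual connections and guard acquire/release with a lock (or use per-thread state):

```nim
type
  DbConn = object          # stand-in for the real driver's connection type
    id: int
  Pool = object
    conns: seq[DbConn]
    inUse: seq[bool]

proc newPool(size: int): Pool =
  for i in 0 ..< size:
    result.conns.add DbConn(id: i)   # real code: connectToDatabase()
    result.inUse.add false

proc acquire(p: var Pool): int =
  ## Returns the index of a free connection, or -1 if exhausted.
  ## A real version must wrap this in a lock to be thread safe.
  result = -1
  for i in 0 ..< p.conns.len:
    if not p.inUse[i]:
      p.inUse[i] = true
      return i

proc release(p: var Pool, i: int) =
  p.inUse[i] = false

var pool = newPool(2)
let first = pool.acquire()
let second = pool.acquire()
doAssert first != second
doAssert pool.acquire() == -1     # pool exhausted
pool.release(first)
doAssert pool.acquire() == first  # freed slot is reusable
```

The two-second connection cost is then paid once per slot at startup, which is the trade-off the post is after.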
Re: nimongo and MongoDb.Atlas connection
Update for sake of completeness: MongoDb Inc. didn't work simply because they require SCRAM-SHA-256 auth encryption on the cheaper accounts. Nimongo seems to only support SCRAM-SHA-1 at the moment. So... it is not a conspiracy; simply a protocol mismatch for which I don't have a work-around.
Re: nimongo and MongoDb.Atlas connection
Never mind. While I am still curious as to what is needed to get it to work, I gave up and created an account at cloudclusters.io and it worked on the first try. Given the other providers are all running 3.x of MongoDb and they are running 4.0, I kind of wonder if MongoDb Inc. is running some kind of "purposely-incompatible-but-still-open-source" game to stop the other cloud providers. Just a guess given some of their other behaviors recently.
nimongo and MongoDb.Atlas connection
I've been using the pure nim nimongo library for a while and it has been very useful. I am attempting to get a project into full production and am wanting to move to a live database, and am having trouble with that. Specifically, I have an account at mongodb.com (MongoDb Inc.'s Atlas), but I'm having difficulty with it. Has anyone worked out a combination that works with this hoster and is willing to give details/hints? John My details: The recommended connect string: "mongodb+srv://<username>:<password>@cluster0-abcde.mongodb.net/test?retryWrites=true&w=majority" (Hostname slightly mod'd above to not give out security details. And of course, I substitute the actual username/password.) But that creates a DNS lookup failure. I figure that SRV isn't supported yet. Nor is listing the cluster's shards working AFAIK: "mongodb://<username>:<password>@cluster0-shard-00-00-abcde.mongodb.net:27017,cluster0-shard-00-01-abcde.mongodb.net:27017,cluster0-shard-00-02-abcde.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin&retryWrites=true&w=majority" But even when I use a single shard: "mongodb://<username>:<password>@cluster0-shard-00-00-abcde.mongodb.net:27017" It is not connecting. But neither am I getting a descriptive error. Simply a "cannot read from stream". The Db is at version 4.0.11 and the password is encoded with SCRAM.
Re: Problem with var objects and adding values to them
This can be solved with a converter. Add: converter fromBtoA(b: B): A = result = A(x: b.x) The y field will not be stored, of course, since x is a seq[A].
get a type via a string?
I'm writing a macro that parses over object types and generates various functions on its fields. For the most part it works. However, I will likely encounter folks trying to do this: type ssnType = Option[int] type Person = object age: Option[int] ssn: ssnType My macro can handle the "age" field because its true type is easily parsed in the Node tree. Both "Option" and "int" are known types and in a bracket. Got it. "ssnType" is also visible, but that is an unknown type made by the library user. Is there a way, given the string or symbol "ssnType", to somehow glean its earlier definition of "Option[int]"? I've tried various plays with `getTypeImpl` and `getTypeInst`, but I suspect I'm missing some key knowledge.
Re: timezone in unit tests
Thanks for the helpful work-around. Still can't test the $ function without manipulating an environment variable, but that isn't the end of the world I suppose. Perhaps, if I get motivated, I'll make a PR for a compiler variable for the times module that allows for an override for $.
timezone in unit tests
I'm writing a library that reads Time in and out of a database. As part of that, I am writing unit tests for the output of $ for Time. $ does not take extra parameters by its nature. The problem is that $ generates a string in my local time zone. For myself, that kind of works. But if/when the library is released, asking each developer to change their timezone to run nimble test is not particularly friendly or ideal. I've come up with a hack-ish workaround for now. But, is there a way to override the timezone in nim? Or perhaps a command-line switch to place in the t*.cfg files?
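One workaround that sidesteps the environment variable entirely: assert against `times.format` with an explicit zone rather than `$`, which is pinned to the local zone. A sketch:

```nim
import std/times

let t = fromUnix(0)
# Deterministic regardless of the developer's local timezone:
doAssert t.utc.format("yyyy-MM-dd HH:mm:ss") == "1970-01-01 00:00:00"
```

This doesn't test `$` itself, but it does pin down the underlying value in a portable way.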
challenges with bool parameter on template
import macros template test(a: bool) = if a: echo "true seen" else: echo "false seen" macro joe(body: untyped): untyped = result = newStmtList() let temp = getAst test(true) result.add(temp) joe: echo "never seen" I've been able to narrow down my problem scenario. A template with a bool parameter works normally except when pulled through getAst inside a macro. In that scenario, it seems to "become" an integer. I get the error: /home/johnad/test.nim(14, 1) template/generic instantiation of `joe` from here /home/johnad/test.nim(11, 25) Error: type mismatch: got but expected 'bool' Am I misusing something? Or is this a bug?
can already defined types be modified at compile time with a macro?
I can write a macro that allows me to do this: my_macro_adds_age: type SomeType = object name: string let a = SomeType(name: "Bob", age: 30) No problem. But, I'm curious, is it even possible (and safe) to modify a type after the definition? Something akin to: type SomeType = object name: string my_other_macro_adds_age(SomeType) let a = SomeType(name: "Bob", age: 30)
International meetup or conference?
With the upcoming 1.0 release, are there any thoughts of having a developer/users meetup or conference? Perhaps a developer room at FOSDEM 2020?
Re: Future direction of db_* modules
thanks @mikra! I did a search in Nim issues and somehow didn't find that one. Araq mentioned possibly moving `ndb` to the standard library if it can act as a replacement for the db_* modules. I'll look into volunteering to help with development of `ndb`. I like what I see so far, especially the handling of nulls and blobs with `dbNull` and `dbBlob`.
Future direction of db_* modules
Currently the database modules (db_mysql, db_postgres, and db_sqlite) all rely on using `nil` to represent NULL when passing data in/out of the module. As most folks know, Nim is moving away from production support of `nil`, so it would be good to make changes to those libraries. I'm happy to help with this, but rather than arbitrarily deciding on a solution, I want to get opinions and/or direction from the community. But, to get started, let's formally describe the current solution: **CURRENT** Rows of data are passed in/out using: seq[string] Where NULL values are represented by entries with a `nil`. All data types are represented by the string equivalent of the data. So, if you have three columns of "name VARCHAR(50), desc VARCHAR(50), age INT" and "desc" is NULL, you would pass: @["joe", nil, "30"] I see five ways we could change this: **STRINGIFY** Rows of data are passed in/out using: seq[string] Where NULL values are represented by entries with the string NULL. Strings are quoted. All other data types are represented by the string equivalent of the data. So, if you have three columns of "name VARCHAR(50), desc VARCHAR(50), age INT" and "desc" is NULL, you would pass: @["'joe'", "NULL", "30"] Notice the extra inner quotes around the real string value. **OPTION[STRING]** Rows of data are passed in/out using: seq[Option[string]] Where NULL values are represented by entries with none. All other data types are represented by the string equivalent of the data. So, if you have three columns of "name VARCHAR(50), desc VARCHAR(50), age INT" and "desc" is NULL, you would pass: @[some[string]("joe"), none(string), some[string]("30")] **TUPLES** Rows of data are passed in/out using: tuple[data: seq[string], are_null: seq[bool]] The data types are represented by the string equivalent of the data, and the null items are indicated in the separate sequence. So, if you have three columns of "name VARCHAR(50), desc VARCHAR(50), age INT" and "desc" is NULL, you would pass: (data: @["joe", "", "30"], are_null: @[false, true, false]) **JSON** Rows of data are passed in/out using: seq[JsonNode] Where all data types are represented by their JSON equivalents. So, if you have three columns of "name VARCHAR(50), desc VARCHAR(50), age INT" and "desc" is NULL, you would pass: @[parseJson("""["joe", null, 30]""")] **NULLABLE** Rows of data are passed in/out using: seq[nstring] Where all data types are represented by string equivalents. So, if you have three columns of "name VARCHAR(50), desc VARCHAR(50), age INT" and "desc" is NULL, you would pass: @["joe", NULL, "30"] Note: I've not actually finished this type library yet. In fact, I only have `nint`'s type operations fully fleshed out. Okay, and now: NOTES: * A subtle problem with both the CURRENT method and the OPTION[STRING] method is that nil and none are best interpreted as "no value". But on a database, NULL means "unknown value". That is why, in SQL, "NULL != NULL". Two real but unknown values cannot be assumed to be equal. [https://www.essentialsql.com/get-ready-to-learn-sql-server-what-is-a-null-value](https://www.essentialsql.com/get-ready-to-learn-sql-server-what-is-a-null-value)/ Handling this correctly solves many odd border-case scenarios, especially with aggregation functions. * I mention my "nullable" database types library for completeness. I'm _not_ pushing for this. In fact, using that would require that nullable be added to the standard library, which I'm not entirely sure is a good idea. But, if curious, details are at [https://github.com/JohnAD/nullable](https://github.com/JohnAD/nullable) . * Personal opinion, so far: I'm really not a fan of using JSON given the overhead that would add. I think that STRINGIFY and OPTION[STRING] are the best candidates. But, I'm happy to go along with the community.
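For concreteness, the OPTION[STRING] candidate expressed as running code, using the example row above:

```nim
import std/options

# The example row: name = "joe", desc is NULL, age = 30
let row: seq[Option[string]] = @[some("joe"), none(string), some("30")]

doAssert row[0].get == "joe"
doAssert row[1].isNone          # the NULL column
doAssert row[2].get == "30"
```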
Re: Is allowing non-matching types for assignment overloading in the development timeline?
No need to apologize! Holy cow... how did I not know about `converter`? I've been coding Nim for over a year now and I just now learned of it. I will have to re-read the manuals again. I wonder if there are any other hidden nuggets I've missed.
Is allowing non-matching types for assignment overloading in the development timeline?
If, for example, I were wanting to create a crazy new int type that stored its value as a product of primes, and I was not concerned with performance, I could create my own pure nim type in a library. type intAP = object setOfPrimes: array[0..50, int] ... and, using operator overloading, it would mostly work just like any other int. I could add, subtract, etc. and it would all behave just like any other int. I can even mix the types. I could even add an int64 and an intAP together and return either a new int64 or intAP (depending on how I wrote it.) But the one thing I can't do is plain assignment. # works: var a: int = 5 var b: int64 = 5 var c: int16 = 5 var d: float = 5 # doesn't work: var e: intAP = 5 # workaround: var e: intAP = createIntAP(5) This isn't a deal breaker. But it would really be nice if the operator overload proc allowed for different target types. Akin to: proc `=`(n: var intAP, target: intAP) = n.setOfPrimes = target.setOfPrimes proc `=`(n: var intAP, target: int) = # blah blah blah mathy stuff Various places in the forum it has been hinted that this might one day be possible. So, two questions: Does this cause any fundamental problems? Is it on the roadmap? Just curious.
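For reference, Nim's existing `converter` feature already covers the plain-assignment case. A sketch with a simplified `IntAP` (a single int field standing in for the prime array, purely for brevity):

```nim
type IntAP = object
  value: int   # simplified stand-in for the prime-product storage

# An implicit conversion from int, applied automatically by the compiler:
converter toIntAP(n: int): IntAP =
  IntAP(value: n)

# Plain assignment from an int literal now compiles:
var e: IntAP = 5
doAssert e.value == 5
```

This is what resolves the question in this thread; no custom `=` overload is needed for the assignment case.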
Re: How do I trace procs/things back to the module they come from?
If you wish to get into the habit of it, you can also: from x import nil Then, in your code, reference `x.thing1` and `x.thing2`. Or, if the module name is really long or involves directory separation: from reallylongname/subdir/awesomelib as x import nil To make such behavior even more practical, I'm thinking of writing a PR to support Nim importing associated methods when an object is explicitly pulled. That would allow: from x import ThingObject to import not just `ThingObject` but also any methods/procs that have `ThingObject` as the first parameter. But, I'm still learning Nim and I'll definitely wait until 1.0 is out before making such a PR. I have the impression they have enough stuff on the plate right now. I'm a big fan of explicitness. (And, I like old-school paper printouts to do debugging and study.)
any value in a dynamiclists module for the public?
Running down the rabbit hole of my earlier questions, I've decided to make the equivalent of a dynamic list as you would see in an interpreted language. Would such a module be of interest to the public? If not, I'll write a nice limited version for myself. Essentially, the following would be possible: import dynamiclists generateDynamicList("MM", int, float, string) var x = MM() x.add(3.14) x.add(3) x.add(5) x.add("hello") for a in x: echo type(a), ", ", $a # outputs: # float, 3.14 # int, 3 # int, 5 # string, hello Or, more usefully in my case: import dynamiclists # NOTE: the following two objects are NOT related. They merely # have the same procedure. type Zippy = ref object of RootObj a: int DD = ref object of RootObj b: string proc say_hi(self: Zippy): void = echo "ding" proc say_hi(self: DD): void = echo "dong" var z = Zippy() var doodah = DD() generateDynamicList("NN", Zippy, DD) attachProcToDynamicList("NN", "say_hi") var y = NN() y.add(z) y.add(doodah) y[0].say_hi() y[1].say_hi() # outputs: # ding # dong If no interest, I'll keep it to myself, as the level of meta is way high and there are a lot of border cases to track.
Re: get "size" of a tuple at compile time?
@def, Thanks! In fact, now that I'm browsing the "system" module ... I do believe I'll be sitting back on the couch reading the "system" module doc in depth tonight.
Re: get "size" of a tuple at compile time?
Actually, since I'm bugging you all with a question, I'll jump into some detail for feedback: The framework is for turn-based games. A **Game** object handles "different" kinds of players. The default **ConsolePlayer** is a "human sitting at the text console". So, if var move = player.get_move() is called, then the next move is returned as a string key by prompting the person running the program with text prompts. But if **NegamaxPlayer**, then '**get_move**' runs a negamax AI algo. Or if the framework user creates a **GodotPlayer**, it might be grabbed from GUI responses. And so on. The idea is consistent responses for methods, but wildly varying means of getting the data. Anyway, I can't have a list of **n** players in a sequence because I need to mix/match **ConsolePlayer**, **NegamaxPlayer**, etc. in that list. But I need to keep track of the player numbers. So it needs to behave like a multi-type list rather than an array or seq. But, to my knowledge, that really isn't possible in a compiled language. So, I'm doing: Game* = ref object of RootObj # ... stuff removed ... player_count*: int players*: tuple[ a: Player, b: Player, c: Player, d: Player, e: Player, f: Player ] current_player_number*: int # ... The framework user would then override the tuple with their own variant: type MyCrazyGame* = ref object of Game # ... player_count*: int players*: tuple[ a: ConsolePlayer, b: NegamaxPlayer, c: RemoteServerPlayer, # ... ] current_player_number*: int # ... Which is... arduous to deal with. But okay. Lots of if/elif sequences to call the same method name in key places. But before I go down this path too far: is this the best and most nim-like way to do this? Or am I missing something... Thanks for any feedback on this...
get "size" of a tuple at compile time?
Is there a way to get the number of elements inside a tuple? This is known at compile time, of course, as tuples are fixed structures. At first glance, this probably sounds like an odd request, but I'm writing a macro for an inheritable framework with objects referencing other inheritable objects. And it would be convenient for the user of the framework if this were resolved without the need for human counting. One less mistake to make and debug. Not a deal breaker, of course. Just fewer "oops I forgot to change that" mistakes.
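For reference, the standard library exposes exactly this via `typetraits.tupleLen` (assuming a reasonably recent Nim; older versions may lack it):

```nim
import std/typetraits

let players = (a: "console", b: "negamax", c: "remote")
# tupleLen resolves at compile time from the tuple's type:
doAssert tupleLen(typeof(players)) == 3
```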
Re: Statistics for standard library usage
quick correction regarding my earlier post: apparently there is already one website for nimble packages: [https://nimble.directory](https://nimble.directory)/ I just happened to come across it via a random google query. I don't see it referenced on nim-lang.org. Or even on the packages repo. It is on the wiki; though not under the packages part. It appears to be coming from [https://github.com/FedericoCeratto/nim-package-directory](https://github.com/FedericoCeratto/nim-package-directory) Is this an official source of things? If so, I'll create PRs for both [https://github.com/nim-lang/website](https://github.com/nim-lang/website) and [https://github.com/nim-lang/packages](https://github.com/nim-lang/packages) to add references. If not official, I'll just update the wiki perhaps.
Re: Statistics for standard library usage
Adding a short opinion regarding what should be in the standard library for v1.0. Keep in mind my background is mostly python; however, I'm still a newb with Nim. One of Python's selling points is the large standard library that is distributed. But one of its flaws, however, is that it only has two levels of libraries: * Standard Libraries * All external libraries (PyPi) That means if a needed library is not in standard, it is _very_ non-obvious which competing library is a good one. One has to hunt around the similar libraries and try to get a "general impression" and hope you are right. Looking at repo update traffic helps, etc. My recommendation for nim: Standard Library: These are libraries installed with the compiler itself. Always available. * Keep it large * but if a lib is not ready to be "frozen" to v1.0, keep it out; you can always easily add it in later. But change becomes problematic once it is distributed with the compiler. * Do not allow libraries dependent on external software projects. So, remove "db_postgres" and "db_mysql", for example. These can, potentially, change too fast for a standard lib. Dependencies on external protocols (such as http) are okay, however, as they change more slowly. Nimble Libraries: * Eventually add a website and server system to gather stats; it should use the JSON distribution (not replace it.) * Helps create consensus. * Possibly add forums for packages * Add a **voluntary** collection system to nimble that sends a "I just installed this" to that server to help with stats. ...and, something not seen elsewhere, a possible third category... Nimble Canonical: This is a list of libraries "highly recommended" by the nim-lang developers. * Could be a single "nim-canon" distribution package on some OSes. Otherwise, requires nimble. * This is where libraries pending "standard lib" are placed. * perhaps all repo'd at something like github.com/nim-canonical * Part of the point is to focus developer attention Just ideas... BTW, I've toyed with the idea of arbitrarily creating the nimble website, as I would have found it useful myself. But I don't want to step on anyone's toes.
Re: idiomatic name for object setup
Thanks! I'll change the libraries to match up with these guidelines on the next release.
clarification: overloading is strictly on parameter types?
Just a quick point of clarification: is it correct that nim can only do overloading in terms of parameter types and ignores the return type? Aka, the following example code would generate a compiler error: proc by_two(val: int): int = result = val * 2 proc by_two(val: int): string = result = $val & " and " & $val var x: int = 3 a: int b: string a = x.by_two() b = x.by_two() echo a echo b The compiler error is Error: ambiguous call; both test2.by_two(val: int)[declared in test2.nim(2, 5)] and test2.by_two(val: int)[declared in test2.nim(5, 5)] match for: (int) This is something I came across while writing a framework library. Any suggestions for working around this? Essentially, with my framework, the programmer inherits an object and overrides various methods as needed before running the main algorithm. One of the methods could either return a sequence (faster) or a table (slower but adds descriptions while running). In my Python version of the library, it's the same procedure. But with strong static typing, it clearly can't work the same way in Nim. One workaround I've come across already is adding the return var to the parameters: proc by_two(val: int, dummy: int): int = result = val * 2 proc by_two(val: int, dummy: string): string = result = $val & " and " & $val var x: int = 3 a: int b: string a = x.by_two(a) b = x.by_two(b) echo a echo b Kind of confusing, but it works. Any suggestions? Thanks in advance for any help with this!
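Another workaround sketch: pass the desired return type as a `typedesc` parameter, which disambiguates the overloads without a dummy runtime value:

```nim
# Overload on typedesc: the call site names the return type explicitly.
proc byTwo(val: int, T: typedesc[int]): int =
  val * 2

proc byTwo(val: int, T: typedesc[string]): string =
  $val & " and " & $val

doAssert byTwo(3, int) == 6
doAssert byTwo(3, string) == "3 and 3"
```

Compared to the dummy-value approach above, the intent is visible at the call site and no throwaway variable is needed.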
Re: [RFC] Adding @ prefix (or similar) to import
Araq, I like your idea even better. I'm not involved in the compiler coding...is this easily implemented while keeping the method and data separation underneath?
[RFC] Adding @ prefix (or similar) to import
As we all know, nim separates the procedures that operate on an object from the data of the object itself. I understand the thinking and largely agree with it. However, a side-effect of this is that when importing an object from a module, it is difficult to control what is being imported. Such control is useful, IMO, not because of avoiding name conflicts (though that is nice also); but avoiding blanket imports makes debugging faster and easier. It makes a visual scan of the code much more obvious from any context, even from dead-tree paper printouts. For example, if someone showed me the following code because they had a problem with x: import a_package import b_package var x = Joe() x.something() All I know is that Joe is a class from _somewhere_. I now get to do a hunt-and-find exercise. Or I need to get myself to a computer, install the packages and source, and look at it with a nim-enabled editor. Alternatively, the imports could have been: import a_package as a from b_package import nil The user would then prefix everything with the module name or alias. This totally works, but it gets very tedious the bigger the program becomes. Another alternative is: from a_package import Joe, something, something_else, ... This also works, but if Joe has 50 methods, then I either import everything as a huge dump, or I import just what I need and keep adding/removing stuff as needed. Again, tedious, but it works. **My suggestion: add an indicator to do a well defined import of an object and any associated methods found.** So: from a_package import @Joe or perhaps from a_package import Joe.* The key here is providing the compiler a way to automatically import both Joe and any non-generic method or proc in the form of "method *(self: Joe, ...". In other words, it is a wild card that finds any proc whose first parameter is the designated class and conforms to UFCS usage.
So, the example program would become: from a_package import @Joe, @Larry from b_package import process_y, @ZZ var x = Joe() x.something() Now I know exactly where Joe came from by just looking at it. While I'm still a newbie with nim, I've been doing Python programming for six years. I've come to appreciate the Python community's dislike of "from X import *" (the equivalent of "import X" in nim.) It has saved me countless headaches over the years. Ironically, my background before that was in C/C++ and other languages that embrace such open imports. I suspect, if something like this were implemented, much of the nim community would also eventually use such a feature as a default way of doing things. Thoughts?