I'm not sure it's possible, actually. You can retrieve t's initialization value
(!) using t.symbol.getImpl, which is useful when handling const, but I can't
see any way (or at least no obvious way) to extract variable pragmas from a
variable symbol.
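For the const case mentioned above, a minimal sketch of pulling the initializer out via `getImpl` might look like this (`initValue` is a made-up helper name; only `getImpl` itself is the real API, and the `[2]` index assumes the usual name/type/value shape of the definition node):

```nim
import std/macros

const answer = 42

# Hypothetical helper: getImpl on a typed symbol returns its definition
# tree; for a const, the last child is the initializer expression.
macro initValue(x: typed): untyped =
  result = x.getImpl[2]  # [name, type, value] -> value

echo initValue(answer)
```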
Actually, _InterlockedCompareExchange8 does exist; it was added silently in
VS2012 and is not mentioned in the docs.
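Incidentally, that intrinsic is enough to emulate the atomic load asked about elsewhere in this thread: the classic trick is a compare-exchange with itself. The intrinsic name is the real MSVC one; the Nim wrappers here are a sketch, and the non-MSVC branch is only a non-atomic placeholder:

```nim
when defined(windows) and defined(vcc):
  # Real MSVC intrinsic (present since VS2012, per the post above).
  proc interlockedCompareExchange8(dest: ptr int8; exchange, comparand: int8): int8
    {.importc: "_InterlockedCompareExchange8", header: "<intrin.h>".}

  proc atomicLoad8(p: ptr int8): int8 =
    # CAS with exchange == comparand == 0: a nonzero value is left alone,
    # a zero is rewritten with zero, and the old value is returned -
    # i.e. an atomic read (with a full barrier, which CAS implies).
    interlockedCompareExchange8(p, 0'i8, 0'i8)
else:
  proc atomicLoad8(p: ptr int8): int8 =
    # Placeholder only: on GCC/Clang you would use the __atomic_load_n
    # builtin instead of this plain (non-atomic) dereference.
    p[]

var x: int8 = 7
echo atomicLoad8(addr x)  # 7
```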
@mratsim Well, you mentioned game programming at first so "serious deep
learning" didn't come to my mind. ^^"
@mratsim Well, if you call GC_collect manually, then it's much worse than
manual allocation and deallocation from a memory pool, I guess.
Could you explain further what getOccupiedMem changes in the case we were
talking about (ref to ptr to memory external to thread heap)?
I submitted an example to the docs, but it is still [a
PR](https://github.com/nim-lang/Nim/pull/6711), so have a look at the PR.
And try something like the following (and remove the push and pop pragmas):
const
  hdr = "nimfuzz/fts_fuzzy_match.h"
proc
Have you tried something like
when AVX2_Available():
Of course that can only work when your AVX2_Available() is known at compile
time...
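A minimal sketch of that compile-time variant, assuming the choice is gated on a `-d:avx2` define rather than a runtime check (the proc bodies are stand-ins, not real kernels):

```nim
# Hypothetical compile-time switch: compile with -d:avx2 to pick the
# AVX2 path. `when` is resolved entirely at compile time, so only the
# chosen branch ends up in the generated code.
when defined(avx2):
  proc kernelName(): string = "avx2"  # stand-in for the AVX2 kernel
else:
  proc kernelName(): string = "sse"   # stand-in for the SSE kernel

echo kernelName()
```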
Here is some pseudocode for what I would like to do:
template SIMD(actions: untyped) =
  if AVX2_Available():
    import AVX2  # use the AVX2 version of Add/Mul etc.
    actions
  else:
    import SSE  # use the SSE version of Add/Mul etc.
    actions
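Since `import` is a top-level, compile-time construct in Nim, the pseudocode above cannot work as written. One common workaround, sketched here with stand-in procs (all names hypothetical), is to compile in both kernels and pick one once at startup through a proc variable:

```nim
# Stand-ins for kernels that would normally live in separate modules.
proc addSse(a, b: float32): float32 = a + b
proc addAvx2(a, b: float32): float32 = a + b

proc avx2Available(): bool = false  # stand-in for a real CPUID query

# One runtime decision at startup; every later call goes through the
# proc variable with no further branching.
let vecAdd: proc (a, b: float32): float32 {.nimcall.} =
  if avx2Available(): addAvx2 else: addSse

echo vecAdd(1.0'f32, 2.0'f32)  # 3.0
```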
I'd like to check that some "memory location" is annotated with {.volatile.},
to make sure my code only compiles if I use {.volatile.} in the right place. I
searched the lib code but didn't really find much, except this (asyncmacro.nim):
proc asyncSingleProc(prc: NimNode): NimNode
Yes, an Nvidia Titan X is quite a common GPU for deep learning and comes with
12 GB of VRAM; a GTX 1080 Ti has 11 GB, and the GTX 1070/1080 have 8 GB.
Whatever the GPU, the goal is to saturate it with the biggest batch it can
handle.
There is a lot of research going into more memory-efficient networks, and how
One other wrinkle more in line with this thread topic is making things work for
general object types but specific standard key types. In C one might handle
that by taking an `offsetof` for where the key field is within an object and
`sizeof` for the size of an object in the array. In Nim, one
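For reference, Nim exposes the same two primitives the C approach relies on, `offsetOf` and `sizeof` (assuming a reasonably recent compiler); the record type below is just an illustration:

```nim
type
  Rec = object
    tag: char    # forces padding before the key
    key: int32   # the field a generic sort would locate by byte offset

# The byte offset of the key within the object and the stride of one
# array element are exactly what the C-style scheme needs.
echo offsetOf(Rec, key)
echo sizeof(Rec)
```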
@Stefan_Salewski - I think this idea of having a family of overloaded procs to
sort on standard types is very good. I have almost raised it myself several
times.
It would be useful for `cmp`-less `sort` procs on such type to be _stable_
sorts (meaning elements with identical keys retain
Hi,
I'm trying to understand what goes on in "lib\system\atomics.nim". This is in
part because I'm missing atomicLoadN/atomicStoreN on Windows, and I'm trying to
work out how to implement that myself. I've just stumbled upon this declaration
(atomics.nim, line #220):
Reading this blog post
[http://nibblestew.blogspot.de/2017/11/aiming-for-c-sorting-speed-with-plain-c.html](http://nibblestew.blogspot.de/2017/11/aiming-for-c-sorting-speed-with-plain-c.html)
> I just remembered a discussion some time ago:
S. Salewski wrote:
> A consequence may be, that for
I do like the It templates; they make the code shorter.
Although I also found that nesting them was awkward when trying to multiply two
sequences, it made me think of different ways to achieve the same thing and
make it look even better than the initial implementation; it ended up looking like
Call me a weirdo, but I think a converter that could fail should return an
Option[T] instead of T itself. This way you can either unwrap the option
(which works like an exception) or pass it somewhere first, e.g. into a container,
so that you could use unwrap inside of a higher-order routine.
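The idea in std/options terms, with a hypothetical fallible conversion (written as a plain proc here; `toIntOpt` is a made-up name):

```nim
import std/[options, strutils]

# Hypothetical fallible conversion: returns none instead of raising.
proc toIntOpt(s: string): Option[int] =
  try:
    some(parseInt(s))
  except ValueError:
    none(int)

echo toIntOpt("42").get()     # unwrap; raises if the option is empty
echo toIntOpt("oops").isSome  # false: the failure travels as a value
```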
@Araq Maybe you don't need flexibility, but please imagine using an it-template
inside of an it-template (e.g. an apply inside of an apply). Just like @olwi
said, `=>` is better, and I think it should be brought to standard Nim.
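What nesting looks like with `=>` from std/sugar, where each lambda names its own parameter (sample data made up):

```nim
import std/[sugar, sequtils]

let rows = @[@[1, 2], @[3, 4]]

# With nested mapIt, the inner `it` would shadow the outer one; `=>`
# lets each level pick its own parameter name.
let doubled = rows.map(r => r.map(x => x * 2))
echo doubled  # @[@[2, 4], @[6, 8]]
```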
People only mention two reasons to use it-templates, correct me if I'm
@cdome That's funny, actually. I've used a container that seems to work just
like yours, even though the use case was totally different. It was for an
evolutionary program.
@mratsim If I get it, you need at least 93 MB * 32 images = 11 GB 904 MB per
batch?
By the way: did you look at
Well, you said "today you can do" as general advice, so I assumed you were
trying to show us "how simple it is to do it by hand", not "how easy it is to
make something only for internal usage".
About this object being a ref one: no, please read my comment again. Long
story short: a ref object can be
I don't think it's something you should actually do, as the compiler can do
certain optimizations when it knows a routine has no side effects.
Optimizations which may optimize away your side effects, I guess.
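Nim enforces this annotation at compile time: a `func` (shorthand for a proc marked {.noSideEffect.}) will not compile if its body touches global state, which is what gives the optimizer its guarantee. A small illustration:

```nim
var counter = 0

func pureAdd(a, b: int): int =
  a + b          # fine: depends only on the parameters

# This version is rejected by the compiler ("can have side effects"):
# func impureAdd(a, b: int): int =
#   inc counter
#   a + b

echo pureAdd(2, 3)  # 5
```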
E.g. in Kotlin, when a lambda with only one parameter is passed to a function,
the parameter name can be omitted in its definition and it is implicitly named
`it`:
strings.filter { it.length == 5 }.sortedBy { it }.map { it.toUpperCase() }
So there it's even part of the
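For comparison, the closest Nim spelling of that Kotlin line using the sequtils it-templates (sample data made up):

```nim
import std/[sequtils, algorithm, strutils]

# filterIt and mapIt give the same implicit `it` as Kotlin's lambdas.
let strings = @["apple", "fig", "melon"]
echo strings.filterIt(it.len == 5).sorted.mapIt(it.toUpperAscii)
# -> @["APPLE", "MELON"]
```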