Got everything updated and fixed up, thanks everyone. I updated my OpenGL
wrapper and learnopengl tutorial port project:
[https://github.com/jackmott/easygl](https://github.com/jackmott/easygl)
great thank you!
I went to check if my OpenGL library would still work with 2.0; when I try to
build I get:
"/home/jmott/.nimble/pkgs/sdl2-1.1/sdl2.nim(3, 8) Error: cannot open file:
unsigned"
any ideas?
thank you, that is handy
yes! that is it, thanks!
what is the distinction between using nimble install, and nimble develop?
I have a vague memory of someone tweeting me a way to avoid having to put a
type annotation on every field in an array like so:
var gradZ = [
  0'f32, 0'f32, 0'f32, 0'f32,
  1'f32, 1'f32, -1'f32, -1'f32,
  1'f32, 1'f32, -1'f32, -1'f32]
But I can't remember it.
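For anyone who finds this later: one way to avoid the per-element suffix is to annotate the declaration instead; plain integer literals should then convert implicitly (a sketch relying on Nim's implicit literal conversion, not necessarily the trick from the tweet):

```nim
# Annotate the variable once instead of suffixing every element;
# the integer literals convert to float32 implicitly.
var gradZ: array[12, float32] = [
  0, 0, 0, 0,
  1, 1, -1, -1,
  1, 1, -1, -1]
```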
I am familiar with how to use 'requires' to pull in a library that is in the
package manager. What if I have my own library that is on my filesystem, and
I'm starting a new project and I want to use it, how do I use my nimble file to
point at it?
I think I have figured it out:
use 'nimble install' on the library; then you can just refer to it by name with
'requires'.
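A minimal sketch of that workflow, assuming the local library is named `mylib` (a hypothetical name):

```nim
# myapp.nimble
# After running `nimble install` inside mylib's source directory,
# the locally installed package can be required by name like any other.
version     = "0.1.0"
author      = "you"
description = "App using a locally installed library"
license     = "MIT"

requires "nim >= 1.0.0"
requires "mylib"   # hypothetical library installed locally via `nimble install`
```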
can you implement the same with intrinsics? That is generally the
recommendation from Microsoft now.
I have a SIMD project that does CPU feature detection and uses macros to
provide a nice SIMD api, and the sample app works for me on Windows, and Linux,
but a user has reported an issue on OSX:
[https://github.com/jackmott/nim_simd/issues/4](https://github.com/jackmott/nim_simd/issues/4)
I
AVX2 CPU, forgot that i7 doesn't really narrow it down anymore! The code is
ported from a friend's C++ library, which should be good, but I could definitely
have introduced mistakes with some of the obscure C bindings.
Runtime detection is now in; tested only on my i7 on Linux so far.
Ok I've fixed the .gitignore / bin directory setup
> I don't know much about SIMD, it looks like your approach is to figure out
> how to take nim code and SIMDify it?
No not quite. You will write explicit SIMD instructions, but it will
automatically transform them to use the best possible option given runtime
detection. So you can write a loop
Yes runtime detection is the plan, the prompt is just a placeholder, so that I
know the decision is happening at runtime.
Thanks on the .gitignore; it's ignoring exe but I am on Linux!
I don't want the for loop to run at compile time, so that is ok.
I was able to print out the template expansion at compile time and it looks
like when I added the inner template, the body gets a single quote character
appended to it.
I have a template like so:
template SIMD(width: untyped, body: untyped) =
and I'd like to make a template like this:
template simdFor[T](a: openarray[T], body: untyped) =
  for i in countup(0, a.len - 1, width): # etc.
where the second template depends on the width variable from the
Nim maintainers should really iterate through all the broken nimble packages
and fix or remove them. This kind of thing can be very frustrating to people
exploring a new language, and cause them to leave.
note: I have tried to be the change I want to see in the world here, when I
came across
Details are in the readme. If anyone is interested in this and would like to
provide input or help out please let me know.
[https://github.com/jackmott/nim_simd](https://github.com/jackmott/nim_simd)
No WebGL, but no reason you couldn't do something similar.
Just wanted to share this:
[github link](https://github.com/jackmott/easygl)
I may tidy this up and add it to nimble when I have some time.
This would have to generate all versions of a given statement list at compile
time, then execute the correct statement list at run time. So for every block
that you use the macro on, there would be N variations of that block in the
binary where N is the number of SIMD instructions sets you want
In cases where there isn't an equivalent function you would have a fallback
that does it in non vectorized fashion. So if you used the gather instruction
the SSE fallback would just loop over the elements of the simd vector and do
them one by one.
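A hand-written sketch of such a fallback, with hypothetical names (SSE has no gather instruction, so a scalar loop stands in for it):

```nim
# Scalar stand-in for a gather: fetch 4 floats from arbitrary
# indices one element at a time.
proc gatherFallback(base: ptr UncheckedArray[float32],
                    idx: array[4, int32]): array[4, float32] =
  for i in 0 .. 3:
    result[i] = base[idx[i]]
```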
No doubt you would not be able to write code
That is an interesting idea, but I suppose it would make it impossible to
inline each SIMD call right? There would be a pointer hop each time? That would
be no good.
the idea is to take a statement list, and generate SSE and AVX versions of it
at compile time. Then at **runtime** select the proper version to use.
I'm working on a macro idea to allow a SIMD library where you can write simd
code once, and at _runtime_ the correct simd functions will be used based on
feature detection of the cpu.
I have a simple proof of concept here, this works, but I am unsure if this is
the best way to accomplish this:
yep, looking for a nice runtime solution.
Here is some pseudocode for what I would like to do:
template SIMD(actions: untyped) =
  if AVX2_Available():
    import AVX2  # use the AVX2 version of Add/Mul etc.
    actions
  else:
    import SSE   # use the SSE version of Add/Mul etc.
    actions
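The `import` inside the branches is pseudocode; what a macro would actually emit is closer to this hand-written dispatch, where both bodies exist in the binary and only the branch is decided at runtime (`hasAvx2` and the kernel bodies are hypothetical stand-ins):

```nim
# Both variants are compiled in; the feature check runs at runtime.
proc kernelSse(data: var seq[float32]) =
  for i in 0 ..< data.len: data[i] += 1.0'f32   # stand-in for SSE code

proc kernelAvx2(data: var seq[float32]) =
  for i in 0 ..< data.len: data[i] += 1.0'f32   # stand-in for AVX2 code

proc hasAvx2(): bool = false   # placeholder for real CPUID detection

proc kernel(data: var seq[float32]) =
  if hasAvx2(): kernelAvx2(data)
  else: kernelSse(data)
```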
What would be the idiomatic way to do something like a compile-time assert?
Say that you had proc foo(bar: int) where you want to restrict bar to the values
1, 3, 5 and 8. I don't think you can use a range here because they are not
contiguous. But you could use a when to check at compile time, but
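One possibility (a sketch, not necessarily the idiomatic answer): make the parameter `static` so the check can run in a `static:` block during compilation:

```nim
# bar must be known at compile time; the assert then fires during
# compilation, not at runtime.
proc foo(bar: static[int]) =
  static: assert bar in [1, 3, 5, 8], "bar must be 1, 3, 5 or 8"
  echo bar

foo(5)      # compiles
# foo(4)   # would be rejected at compile time
```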
what does the call to sync() do?
I would like some guidance on how to best parallelize this loop. I've played
around with the parallel: macro, and using spawn, but haven't gotten a working
solution yet. It isn't entirely clear what the best approach should be:
for i,face in faces: #face is a path to an image file
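A rough sketch with the `threadpool` module, assuming the per-image work is a GC-safe proc (`loadFace` is a hypothetical stand-in for the real image loading):

```nim
import threadpool  # compile with --threads:on

proc loadFace(path: string): int =
  # hypothetical stand-in for loading an image file
  path.len

proc loadAll(faces: seq[string]) =
  var pending = newSeq[FlowVar[int]](faces.len)
  for i, face in faces:             # face is a path to an image file
    pending[i] = spawn loadFace(face)
  sync()                            # wait for all spawned tasks to finish
  for fv in pending:
    echo ^fv                        # ^ blocks until the FlowVar is ready
```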
range types like:
type
  MySubrange = range[0..5]
Are they also checked at runtime? Is there a way to disable them being checked
at runtime in release mode?
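For reference, a small example of the runtime behavior as I understand it (the checks are controlled by the `rangeChecks` switch, and builds like `-d:danger` disable them; the exception name is version-dependent):

```nim
type
  MySubrange = range[0..5]

var x: MySubrange = 3
let y = 9
# In a checked build, an out-of-range conversion raises a defect:
try:
  x = MySubrange(y)
except RangeDefect:   # was RangeError in older Nim versions
  echo "out of range"
```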
thank you!
if I have a C library that returns a pointer to an array like this:
proc glMapBuffer(target: GLenum, access: GLenum): pointer
Is it possible to turn that into a seq without copying the whole contents and
allocating a new array? If not, can you cast it to an unchecked array
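The cast approach does work; a self-contained sketch, with a local array standing in for the pointer the C library would return (the float32 element type is an assumption):

```nim
# Stand-in for the pointer a C call like glMapBuffer would return:
var backing: array[4, float32]
let p: pointer = addr backing

# Reinterpret without copying; note that nothing bounds-checks this.
let buf = cast[ptr UncheckedArray[float32]](p)
buf[0] = 1.0'f32
echo backing[0]   # the write went straight into the backing memory
```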
Thank you, that was indeed the problem, and it makes sense why. As a template
it is just being treated as a seq, rather than converted to an openarray as a
proc, I suppose.
The following code works fine as a proc, but not as a template:
proc BufferData*[T](target: BufferTarget, data: openarray[T],
                    usage: BufferDataUsage) =
  glBufferData(target.GLenum, data.len * T.sizeof().GLsizeiptr,
               data.unsafeAddr, usage.GLenum)
Any obvious reason
on compile I'm getting this error from gcc:
gcc: error: /home/jmott/easygl/examples/model_loading/nimcache/read.o: No such
file or directory
How can I track down the cause of this?
edit: nim -f fixed it.
Ok I figured it out, not sure if bug or user error:
proc BufferData is in File A, which imports opengl; opengl defines GLenum, which
BufferData uses.
File B calls BufferData but does not import opengl, and I get an error about
the type defined in opengl.
If I add the opengl import to File B, it
I have this function:
proc BufferData*[T](target: BufferTarget, data: openarray[T],
                    usage: BufferDataUsage) {.inline.} =
  glBufferData(target.GLenum, data.len * T.sizeof().GLsizeiptr,
               data.unsafeAddr, usage.GLenum)
It works fine if I pass it a seq[float32]. If I pass it a
aha! thanks, that is a pretty simple workaround.
Interesting, I wonder if there is any technical reason for this or if it could
be enhanced to handle this. Other languages like F# can handle it.
The following is thrown off by the echo. Is there a way to make this work?
let format =
  if channels == 1:
    TextureInternalFormat.RED
  elif channels == 3:
    TextureInternalFormat.RGB
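Every branch of an `if` used as an expression has to produce a value, so adding an `else` makes it work (the `RGBA` default here is a hypothetical choice):

```nim
let format =
  if channels == 1:
    TextureInternalFormat.RED
  elif channels == 3:
    TextureInternalFormat.RGB
  else:
    TextureInternalFormat.RGBA  # hypothetical default branch
```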
Krux02- it looks like you are using a struct with a single field instead of a
distinct type for things like shader/program ids. What are the pros/cons of
that approach?
Thanks everyone, some good things to think about.
your opengl id types. Also the compatible GLenum's for
functions are put into enums for easy discovery of what your options are. You
can see the [library code
here](https://github.com/jackmott/easygl/blob/master/src/easygl/easygl.nim) and
some example usage
[here](https://github.com/jackmott
ahh, I see.
Do people tend to bother much with methods in Nim? Or just leverage the
universal call syntax on normal procs? If one was publishing a library, what
would people expect? Or what are the pros/cons?
Working on a camera object, using the glm vector/matrix library. The compiler
says Front is an undeclared identifier, I can't figure out why:
type Camera* = ref object
  Position*, Front*, Up*, Right*, WorldUp*: Vec3f
Thanks Dom, done.
I think I found the answers I need here:
[https://github.com/nim-lang/nimble#project-structure](https://github.com/nim-lang/nimble#project-structure)
In case anyone else finds this with a search, this worked:
# Package
version = "0.1.0"
author= "Jack
awesome, yes this is perfect!
> From looking at other nim repos, it looks like the accepted pattern if you are
> making a library is to have a src directory under your project's root directory
> with the library code, then an example directory under the root for example
> code. How do you set this up so that you can conveniently
I spent about an hour tonight tracking down a weird bug where it turned out I
had a proc with a return value, but I forgot to return a value. This compiled
fine, but ran wrong. I assume what happened is that the implicit return value
was returned, with the default value for the type. Should it
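A minimal reproduction of the surprise: a proc with a return type compiles even when `result` is never assigned, and the zero default comes back:

```nim
proc answer(): int =
  discard 21 * 2   # computed, but never assigned to result

echo answer()   # prints 0, not 42: result stays default-initialized
```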
A couple of quick questions:
If I have a function that accepts an array as input via an openarray, does the
array get copied when you call the function, or not?
Is it possible to write a function using generics, or otherwise, that can
accept a sequence OR an array, in an efficient manner?
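As far as I know, `openarray` parameters are passed as pointer plus length, so no copy happens, and one proc takes both containers; a quick sketch:

```nim
# One proc for both seq and array, no generics needed for this case.
proc total(xs: openarray[float]): float =
  for x in xs:
    result += x

echo total(@[1.0, 2.0, 3.0])  # called with a seq
echo total([1.0, 2.0, 3.0])   # called with an array
```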
jxy - I did look at that, and perhaps I am reading the code wrong but I think
that one does compile time decision about which SIMD feature set is available,
not runtime, is that correct?
I'm looking to be able to build one exe, send it to a computer with SSE or AVX
or AVX512 and have it use
While modern C compilers can do some nice auto vectorization, there are many
cases where you have to do it by hand. For instance, fractal noise:
[https://github.com/jackmott/FastNoise-SIMD/blob/master/FastNoise/FastNoise3d.cpp#L25](https://github.com/jackmott/FastNoise-SIMD/blob/master/FastNoise/FastNoise3d.cpp#L25)
I am learning some Nim, and have a hunch that the metaprogramming features of
Nim may allow for a user friendly SIMD library. The primary challenge with SIMD
is that various processors support different SIMD features. So to write code
that will run as fast as possible on every CPU, you have to
thanks wiffel, that helps
Context: I'm new to Nim, not new to SIMD intrinsics. I'm using this binding
library:
[https://github.com/bsegovia/x86_simd.nim](https://github.com/bsegovia/x86_simd.nim)
I can successfully add two m128i values if I do something like:
let a = set1_epi32(1)
let b = set1_epi32(1)
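For completeness, the add itself would look like this, assuming the binding also exposes `add_epi32` in the style of `set1_epi32` (the module path and proc name are assumptions about that library):

```nim
import x86_simd/x86_sse2   # hypothetical module path in the binding

let a = set1_epi32(1)
let b = set1_epi32(1)
let c = add_epi32(a, b)    # lane-wise 32-bit add of the two m128i values
```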