Re: Connection-Pooling Compile-Time ORM

2020-07-01 Thread Varriount
While this is a neat use of templates to create an SQL DSL, where does the ORM 
part come in? I don't see any place to automatically load values into an 
object/ref, aside from the procedures returning sequences.


Re: Norm 2.0.0

2020-06-24 Thread Varriount
I don't quite understand this:


Re: Module queues is not working? Is it deprecated?

2020-03-03 Thread Varriount
What problems are you having? Can you post code that demonstrates your 
problems? 


Re: reader macro

2020-02-19 Thread Varriount
Do you mean something like [NPeg](https://github.com/zevv/npeg)?


Re: closure function types are Compatible with nimcall

2020-02-06 Thread Varriount
Can you post the rest of your code, including the procedure definitions? 


Re: How do I extract a particular block from an external module?

2020-02-04 Thread Varriount
That's actually a rather nifty use of the include statement! I can see using 
that for something like generating mocks and tests. 


Re: Nim calling Lemon parser and SIGSEGV

2020-02-01 Thread Varriount
Hm, have you looked at the generated C code? 


Re: Overloaded proc dispatch won't work with object type

2020-01-30 Thread Varriount
A variable declared with var can be used as both a var and regular parameter.

The following is valid: 


type Foo = object
  a: int

proc bar(x: Foo) =
  echo x.a

proc baz(x: var Foo) =
  echo x.a

var y = Foo(a: 1)
bar(y)
baz(y)



[https://play.nim-lang.org/#ix=28KD](https://play.nim-lang.org/#ix=28KD)

Notice that both `bar` and `baz` can be called with `y`. If the two functions 
share the same name, then the call is ambiguous, because y is valid for both 
functions.
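
For example, a (hypothetical, untested) sketch of what happens when the two 
overloads do share a name - the commented-out call is the one the compiler 
rejects as ambiguous: 


type Foo = object
  a: int

proc bar(x: Foo) = echo "non-var: ", x.a
proc bar(x: var Foo) = echo "var: ", x.a

var y = Foo(a: 1)
# bar(y)        # ambiguous: y is valid for both overloads
bar(Foo(a: 2))  # fine: an rvalue can only match the non-var overload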


Re: Nim calling Lemon parser and SIGSEGV

2020-01-30 Thread Varriount
It appears that (for some reason) the upper 32 bits of the tree pointer in 
PState.tree are being cleared. You can see this if you add `printf("Node at 
address: %p\n", (void *)e);` to the mkNode functions, and then print the value 
of the tree pointer.


Re: Is there a help() method, or dir(), like in python?

2019-09-02 Thread Varriount
You can also use `echo repr(variable)` to print out a basic representation of a 
variable. 
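
For example (a trivial sketch): 


let person = (name: "Ada", langs: @["Nim", "Python"])
echo repr(person)  # dumps the fields, contents, and addresses of heap data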


Test Post

2019-06-29 Thread Varriount
Test Text


Re: Owned refs

2019-03-27 Thread Varriount
How are the owned/unowned reference concepts different from the strong/weak 
reference concepts found in other languages?


Re: Reading very long strings in chunks

2019-03-08 Thread Varriount
readChars


Re: Httpclient and hangs

2019-03-08 Thread Varriount
It would help if we had more context for your problem. Is the server sending a 
lot of data? Are you using threads? What have you tried with regards to 
debugging?


Re: Some weird Heisenbug with the implicit result variable which results in a segfault

2019-03-07 Thread Varriount
I would double-check the ZLIB binding code that you have, to make sure that 
everything is correct. This looks suspiciously like a memory-corruption issue, 
which tends to happen when Nim bindings don't accurately represent the 
functions/structures they are wrapping.

You _might_ have some luck with defining the symbol 'checkAbi' (so 
`-d:checkAbi`) to insert C binding checks.


Re: Can I access arrays faster than this?

2019-03-04 Thread Varriount
Ok, so I was able to get the Nim and C++ examples to compile when using the 
heap (rather than the stack).

Comparing the code, I think the difference in performance is caused by two 
things:


  * The Nim code isn't in a main procedure
  * The C++ code is using `int`s (32 bits) while Nim is using `int`s (64 bits). 
Nim's `int` type is always the size of a pointer, so you can use it for 
indexing arrays.



The time I got for the modified code was about the same: 


/tmp $> time ./temp
result: 1998

real    0m0.847s
user    0m0.792s
sys     0m0.048s

/tmp $> time ./temp_cpp
result: 1998

real    0m0.905s
user    0m0.851s
sys     0m0.051s



And the code I used: 


## compiled with: nim c -d:release filename
## nim v 0.19.9

proc main =
  const N = 20_000_000
  var data = newSeqUninitialized[int](N)
  # custom init
  for i in 0 ..< N:
    data[i] = i
  # busy work
  for r in 1 .. 50:
    for i in 3 ..< N-1:
      data[i] = (data[i+1]-data[i])+data[i-1]
  echo "result: ",data[N-2]

main()




// compiled with: c++ -O3 -o filename filename.cpp
#include <iostream>
const int N = 2000;
int main()
{
  size_t* data = new size_t[N];
  // custom init
  for (size_t i=0; i<N; i++)
    data[i] = i;
  // busy work
  for (int r=1; r<=50; r++)
    for (size_t i=3; i<N-1; i++)
      data[i] = (data[i+1]-data[i])+data[i-1];
  std::cout << "result: " << data[N-2] << std::endl;
  delete[] data;
}

Re: Can I access arrays faster than this?

2019-03-04 Thread Varriount
No.. I'm running it on a MacBook with 16GB of ram


Re: Can I access arrays faster than this?

2019-03-04 Thread Varriount
Hm, when I compile the C++ code, I get a segfault.


Re: Introducing Norm: a Nim ORM

2019-03-04 Thread Varriount
Are all the incoming types from the database strings? How does this handle NULL?


Re: Dereference a pointer to its underlying type

2019-03-03 Thread Varriount
Hm, are you sure you need pointers? Usually references are better (as 
references are tracked by the garbage collector).

Also, runtime type data doesn't work like this. The is operator works at 
compile time. What I think you want to do is this: 


import tables

type
  NodeValueKind = enum
    nvString
    nvBool
  
  NodeValue = ref object
    case kind: NodeValueKind
    of nvString:
      stringValue: string
    of nvBool:
      boolValue: bool
  
  Node = ref object of RootObj
    Values*: Table[string, NodeValue]
    Map: Table[string, string]

proc `[]`(parser: Node, key: string): NodeValue =
  if parser.Values.hasKey(key):
    return parser.Values[key]



Unlike languages such as Python or Go, Nim doesn't rely on 
runtime-type-information all that much. There is a module for run-time-type 
information, but it's somewhat advanced... and unsafe.


Re: cannot countProcessor in compile time

2018-05-22 Thread Varriount
Woops, yes, I meant to say alloca.


Re: cannot countProcessor in compile time

2018-05-21 Thread Varriount
If you _really_ need to know the number of processors on a system, you could 
always compile a sub-program during compile-time and invoke it.
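
Something along those lines (an untested sketch; cpucount.nim is a made-up 
helper file that just echoes countProcessors()): 


import strutils

const cpuCountOutput = staticExec("nim c -r --verbosity:0 --hints:off cpucount.nim")
const cpuCount = parseInt(cpuCountOutput.strip())

echo "processors found at compile time: ", cpuCount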

Regarding allocating arrays at runtime, is there a reason that a 
[sequence](https://nim-lang.org/docs/manual.html#types-array-and-sequence-types)
 won't work? You could technically do something by wrapping C's _calloc_ , 
however that's not portable.


Re: Need help with macro

2018-03-26 Thread Varriount
Your setupDepFile macro can only take static parameters - the parameter values 
must be known at compile-time. On line 53, you are passing a string and a 
boolean that may have values unknown at compile time. Yes, the function calls 
on lines 54 and 55 have known values, but the compiler doesn't look that far 
for static inferencing.


Re: ASM on Windows basically dead?

2017-12-29 Thread Varriount
Huh? Why does UE4 only work with vcc?


Re: ASM on Windows basically dead?

2017-12-29 Thread Varriount
Visual Studio support for assembly has always been tricky (if I recall 
correctly, it took quite a while for inline assembly to be supported).

Have you tried using Gcc/Mingw? That's what most people use.


Re: owerflowChecks - how to make it work

2017-12-27 Thread Varriount
From the [compiler 
manual](https://nim-lang.org/docs/nimc.html#compiler-usage-command-line-switches): 
`--overflowChecks:on|off` - turn int over-/underflow checks on|off
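
For example (an untested sketch; test.nim is a made-up file name, compiled with 
`nim c --overflowChecks:on test.nim`): 


var x = high(int)
try:
  x += 1                     # raises OverflowError when checks are on
except OverflowError:
  echo "overflow caught"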


Re: Hiring Nim Devs for Ethereum Implementation

2017-12-08 Thread Varriount
Are you accepting part-time applications?


Re: Error invalid module name: nim_1.1.1

2017-12-02 Thread Varriount
geo555: I don't see how this is unexpected - Nearly all languages that have a 
"namespace" or "module" concept restrict namespace/module names to valid 
identifiers. This is so the module can be referenced within the code.

  * Python allows running scripts with invalid identifiers, however it doesn't 
allow importing modules with invalid identifiers without import trickery. I 
believe Ruby also has a similar restriction.
  * C++ requires namespaces to be valid identifiers.
  * Java requires packages to be valid identifiers.



C and C++ both allow for _including_ files with arbitrary names, however that 
is because the inclusion is handled by the preprocessor - there is no way to 
use the name of an included file in the same way you would use a module name.


Re: Do we really like the ...It templates?

2017-11-11 Thread Varriount
I like Stefan_Salewski's version as well. It reminds me of python generator 
expressions.


Re: What's happening with destructors?

2017-11-09 Thread Varriount
@Udiknedorm: The biggest problem with the current GC is that it doesn't support 
shared memory (though, I don't know why we can't do something like Go's 
channels).

To me, these changes seem like an effort to reasonably support programs that 
don't want to rely on the GC.


Re: compile time 'asserts'

2017-11-05 Thread Varriount
Well, the only way to do type checking like this is through static analysis, 
which is typically performed by the compiler. The other way to do this is to 
make a variant of a procedure that accepts static parameters, and then use 
`when` to test their value. 
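
Something like this (untested) sketch of the second approach: 


proc setBits(n: static[int]) =
  when n < 1 or n > 64:
    {.error: "n must be between 1 and 64".}
  echo "using ", n, " bits"

setBits(8)       # compiles
#setBits(100)    # rejected at compile time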


Re: Bitwise lowercase

2017-11-04 Thread Varriount
What website is it?

Normally I would say that it depends on the website and the license. If the 
website doesn't have a license for the code, I would look into contacting 
whoever administrates it.

If it's just small snippets and optimizations, then it's probably fine to 
translate the code, provided you put some attribution. 


Re: real inline for inline procs or converters

2017-10-25 Thread Varriount
You can always run your output code through [Google Closure 
Compiler](https://developers.google.com/closure/compiler/).


Re: Cannot get name of type using typedesc

2017-10-24 Thread Varriount
@LeuGim Look under the line, in the same area as the 'Run' button.


Re: Execution speed Nim vs. Python

2017-09-27 Thread Varriount
Python caches hashing of strings. Nim does not (it would be a challenge, as Nim 
strings are mutable). I suggest using string references or Hash objects if you 
want to compare performance.

Has anyone benchmarked C++ for this kind of test?


Re: Move semantic and manuel memory management

2017-09-12 Thread Varriount
Copying semantics in Nim are fairly straightforward:


  * Object types, sequences, and strings copy on assignment
  * References do not copy on assignment
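
A small (untested) sketch of those rules: 


type
  Obj = object
    x: int
  RefObj = ref Obj

var a = Obj(x: 1)
var b = a             # objects copy on assignment
b.x = 2
echo a.x              # 1 - a is unchanged

var s1 = @[1, 2, 3]
var s2 = s1           # sequences copy too
s2[0] = 99
echo s1[0]            # 1 - s1 is unchanged

var r1 = RefObj(x: 1)
var r2 = r1           # references do not copy the underlying object
r2.x = 2
echo r1.x             # 2 - both names point at the same heap object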



shallowCopy and shallow sidestep the fact that strings and sequences copy on 
assignment. If you use these procedures, any affected variables (which in the 
case of shallowCopy means both arguments) must not be modified.

Udiknedormin: shallow and shallowCopy should not be considered equivalent to 
C++ move operations. If they are used as such, data corruption will result.


Re: String slice performance!

2017-07-21 Thread Varriount
V2:


import tables, times, hashes

const WORD_SIZE = 4
const K = 1

iterator wordSlices(line: string, size: int): Slice[int] =
  for startIndex in 0 .. len(line) - size:
    let endIndex = startIndex + size - 1
    yield startIndex..endIndex

proc counter(file: string, wordSize: int, counters: int): TableRef[string, int] =
  var lines = ""
  for line in lines(file):
    lines.add(line)
  echo "ready"
  
  var
    hashToWordMap = newTable[Hash, string]()
    wordCountsMap = newTable[Hash, int]()
  
  for wordSlice in wordSlices(lines, wordSize):
    let wordHash = hash(lines, wordSlice.a, wordSlice.b)
    
    if wordHash notin hashToWordMap:
      hashToWordMap[wordHash] = lines[wordSlice]
    
    if wordHash in wordCountsMap:
      wordCountsMap[wordHash] += 1
    elif len(wordCountsMap) < counters:
      wordCountsMap[wordHash] = 1
    else:
      var keysToDelete = newSeq[Hash]()
      for key, value in wordCountsMap:
        if value == 1:
          keysToDelete.add(key)
        else:
          wordCountsMap[key] -= 1
      for key in keysToDelete:
        wordCountsMap.del(key)
  
  result = newTable[string, int]()
  for key, value in wordCountsMap:
    result[hashToWordMap[key]] = value

proc printTop(table: TableRef[string, int], top: int): void =
  var sorted = newCountTable[string]()
  for key in table.keys:
    sorted[key] = table[key]
  sorted.sort
  
  var n = 0
  for pair in sorted.pairs:
    inc n
    if n > top: break
    let (key, value) = pair
    echo $n, ": '", key, "' -> ", value

let t0 = cpuTime()
let res = counter("enwik8", WORD_SIZE, K)
echo "CPU time [s] ", cpuTime() - t0

res.printTop(30)



Re: String slice performance!

2017-07-21 Thread Varriount
I've come up with an optimized version which takes about 5 seconds on my 
machine, vs the 48 seconds that the Python script takes. This version doesn't 
print out the top words (that would take a bit more work, and another map of 
hashes to strings)


import tables, times, hashes

const WORD_SIZE = 4
const K = 1

iterator window(line: string, size: int): Hash =
  for start_pos in 0 .. line.len - size:
    let end_pos = start_pos + size - 1
    yield hash(line, start_pos, end_pos)

proc counter(file: string, wordSize: int, counters: int): TableRef[Hash, int] =
  var lines = ""
  for line in lines(file):
    lines.add(line)
  echo "ready"
  
  var counts = newTable[Hash, int]()
  for word_hash in window(lines, wordSize):
    if counts.contains(word_hash):
      counts[word_hash] += 1
    elif counts.len < counters:
      counts[word_hash] = 1
    else:
      var keys_to_delete = newSeq[Hash]()
      for key, value in counts:
        if value == 1:
          keys_to_delete.add(key)
        else:
          counts[key] -= 1
      for key in keys_to_delete:
        counts.del(key)
  
  return counts

# proc printTop(table: TableRef[string, int], top: int): void =
#   var sorted = newCountTable[string]()
#   for key in table.keys:
#     sorted[key] = table[key]
#   sorted.sort
#
#   var n = 0
#   for pair in sorted.pairs:
#     inc n
#     if n > top: break
#     let (key, value) = pair
#     echo $n, ": '", key, "' -> ", value

let t0 = cpuTime()
let res = counter("enwik8", WORD_SIZE, K)
echo "CPU time [s] ", cpuTime() - t0

#res.printTop(30)



Re: String slice performance!

2017-07-20 Thread Varriount
Could you post your Python source code too? Often when people post these kinds 
of comparisons, their programs are doing slightly different things.


Re: How To - Proper Interfacing In Nim

2017-06-10 Thread Varriount
With the exception of code that is interfacing with some external language (C, 
C++, etc) this doesn't really make sense. Nim is module based, so you just 
import the types from the files you need and Nim takes care of all the type 
linking. If you need to use mutually recursive types, put all the types in a 
[single 'type' block](https://nim-lang.org/docs/manual.html#type-sections).
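
For example (a minimal sketch): 


type
  Employee = ref object
    name: string
    team: Team          # refers to Team, declared below in the same block
  Team = ref object
    lead: Employee
    members: seq[Employee]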


Re: How to export data to C

2017-05-25 Thread Varriount
Are you exporting a const? 


Re: HELP!! Mentioning Nim is resulting in the drain of all my karma at Hacker News.

2017-05-08 Thread Varriount
I agree with dom96 - comments like 
[this](https://news.ycombinator.com/item?id=14286359) and 
[this](https://news.ycombinator.com/item?id=14287623) are what people should 
see (even if the karma whining in the latter detracts from the message :/ )


Re: when will [] ambiguous be solved?

2017-05-05 Thread Varriount
It will probably be solved:


  * After all other higher-priority features/bugs are solved
  * When someone contributes a patch that resolves the ambiguity



It's not really a big problem. Just use "foo[T](42)" or "foo(42)[T]". I don't 
think it warrants a syntax change.


Re: vcc didn't run the second spawn, any idea?

2017-05-03 Thread Varriount
It's likely that this is a bug in the threadpool implementation - I suggest you 
file an issue in the main repository.


Re: Creating a new seq is not that fast

2017-04-18 Thread Varriount
Probably. I wouldn't be surprised if part of the reason Nim's allocator is 
slightly slower is due to zeroing memory too.


Re: Creating a new seq is not that fast

2017-04-17 Thread Varriount
Unfortunately, creating good benchmarks is hard. The benchmark above has some 
subtle faults that lessen its effectiveness.

First off, MyBuffer is a tuple type: 


type
  MyBuffer = tuple
d: array[128, int]
len: int


Tuple types are object types, which means they can be allocated on the stack. 
The only time object types are not allocated on the stack is when the object 
type is part of a reference type.

Because MyBuffer is a tuple, all tx() has to do is allocate ~129 integers worth 
of memory from the stack, which is a simple bump allocation (all the program 
has to do is move the current stack pointer up by X). A better way to test 
Nim's allocator is to compare it with the system malloc implementation.

Below is my version of the above benchmark. I've not tested it for Windows 
users - the worst that might happen is that you get a rather large file called 
'nul' full of numbers.


import times, random

proc malloc(size: uint): pointer {.header: "<stdlib.h>", importc: "malloc".}
proc free(p: pointer) {.header: "<stdlib.h>", importc: "free".}

const bufferSize = 128

type
  MyBuffer = array[bufferSize, int]

proc testStackAllocation(): int =
  var s: MyBuffer
  result = cast[int](addr s)

proc testSequenceAllocation(): int =
  var s = newSeqOfCap[int](bufferSize)
  result = cast[int](addr s)

proc testMallocAllocation(): int =
  var res = malloc(uint(sizeof(MyBuffer)))
  free(res)
  result = cast[int](res)

proc testRandom(): int =
  result = random(7)


proc main =
  # Use writing to /dev/null to prevent compiler optimizations
  when defined(posix):
    let nullfh = open("/dev/null", fmReadWrite)
  else:
    let nullfh = open("nul") # untested!
  
  var baseline: float
  
  # Establish a baseline time of a really simple operation + writing to stdout.
  # This way we can essentially measure how fast stdout can be written to, and
  # factor that out of other measurements.
  let z = cpuTime()
  for i in 0..10_000_000:
    nullfh.write(i)
  baseline = cpuTime() - z
  echo "Baseline time:", baseline
  
  # Template to run test procedures.
  template runProc(testProc: typed, testName: string): untyped =
    let t = cpuTime()
    for _ in 0..10_000_000:
      var i = testProc()
      nullfh.write(i)
    echo "Time for ", testName, ": ", (cpuTime() - t) - baseline
  
  # Test:
  #  - Stack allocation (which is usually a bump allocator)
  #  - Malloc allocation (system defined)
  #  - Nim allocation
  #  - Random number generation
  runProc(testStackAllocation, "stack allocation test")
  runProc(testSequenceAllocation, "sequence allocation test")
  runProc(testMallocAllocation, "malloc allocation test")
  runProc(testRandom, "random number generation test")

main()


Output using various GC backends (Mac OS Sierra, 2.5 GHz Intel Core i7) : 


# nim c -d:release --passC:"-flto" --passL:"-flto" --gc:markAndSweep 
benchmark.nim && ./benchmark
Baseline time: 1.072926
Time for stack allocation test: 0.16431101
Time for sequence allocation test: 1.142116
Time for malloc allocation test: 0.81331399
Time for random number generation test: -0.14664202

# nim c -d:release --passC:"-flto" --passL:"-flto" benchmark.nim && 
./benchmark
Baseline time: 1.053752
Time for stack allocation test: 0.14567701
Time for sequence allocation test: 1.227938
Time for malloc allocation test: 0.82476394
Time for random number generation test: -0.10941602

# Version of the benchmark with mark and sweep cycle collection disabled
# nim c -d:release --passC:"-flto" --passL:"-flto" benchmark.nim && 
./benchmark
Baseline time: 1.075432
Time for stack allocation test: 0.146002
Time for sequence allocation test: 1.189319
Time for malloc allocation test: 0.76232396
Time for random number generation test: -0.17652802


As you can see, Nim's allocator is slower than the system malloc allocator, but 
not by much (I chalk some of this up to the compiler being able to use better 
inlining and intrinsics for malloc). Neither comes close to stack allocation, 
but again, that's expected.


Re: ref object or object with ref field

2017-04-11 Thread Varriount
Krux02: It would be better to store the index. Using addr is unsafe - the 
object won't have the same address if it's copied from place-to-place on the 
stack.

mratsim: First, I would look at the section of the manual regarding [reference 
and pointer 
types](https://nim-lang.org/docs/manual.html#types-reference-and-pointer-types). 
That should provide some information on how references work in general. As a 
summary, object types are equivalent to C structs, while reference types are 
somewhat equivalent to C pointers (but safer).

Objects are stored on the stack or directly within the memory allocated for 
another object; references always point to memory allocated on the heap (and 
never to memory within the bounds of another reference-allocated block).


Re: How to properly bind a function to a compiler buildin?

2017-04-09 Thread Varriount
What do you mean by 'bind'?


Re: File, FileDescriptor, Handle, Windows

2017-02-18 Thread Varriount
It looks like you're mixing standard C IO functions with Windows IO functions, 
which usually leads to trouble. The standard output handle on Windows isn't 
exactly like a normal file handle, and doesn't support all the states that a 
regular handle does.


Re: Amicable numbers in Nim and a few questions

2017-01-24 Thread Varriount
The Nim compiler knows nothing about CFLAGS and related things, however the C 
compiler it uses should (if you're using GCC or Clang). You can also pass in 
arguments via `--passC` and `--passL` arguments.

When benchmarking, it's good to remember some things:


  * Nim's `int` datatype is always the size of the target architecture's 
pointer type (32 bits on x86, 64 on x86-64). This can cause disparities in 
benchmarks.
  * Putting main code in the global scope of a module prevents certain 
optimizations. When aiming for optimization, put things in a main procedure 
(see the sketch after this list).
  * Profile guided optimization and link time optimization can work really well 
with regards to speed. Link time optimization also tends to have a significant 
effect on executable size too.
  * Depending on how fair you want to be, you can turn off the mark and sweep 
portion of the garbage collector if you're sure the benchmark doesn't generate 
any reference cycles. The regular reference counting garbage collector will 
still run.
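
A quick sketch of the second and third points (the file name and flags are just 
examples): 


# nim c -d:release --passC:"-flto" --passL:"-flto" bench.nim
proc main =
  var total = 0
  for i in 1 .. 10_000_000:
    total += i          # work on locals inside a proc, not on globals
  echo total

main()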




Maintainer wanted for NimLime

2017-01-17 Thread Varriount
Hello all!

Unfortunately, due to a recent increase in work and educational demands, I 
don't have any free time left to devote to 
[NimLime](https://github.com/Varriount/NimLime), the Nim integration plugin for 
Sublime Text. As such, I'm reaching out to the community to request if there's 
anyone who would like to take over - The codebase is fairly straightforward, 
and I'd be happy to explain the architecture.

As a whole the plugin is fairly complete, featuring syntax highlighting, 
autoindenting, comment continuation, and the like. The only sore point left is 
integration with Nimsuggest (which, when I was trying to support it, was 
something of a moving target).

[https://github.com/Varriount/NimLime](https://github.com/Varriount/NimLime)


Re: Installation on 64-bit Windows

2017-01-13 Thread Varriount
The big problem with a self-extracting executable is that we need the 
capability to download extra components - namely MinGW. Bundling MinGW with all 
installs dramatically increases download size, not to mention that if the MinGW 
component needs to be updated, so does the rest of the installer.


Re: Installation on 64-bit Windows

2017-01-10 Thread Varriount
Anyway, does anyone know some alternative that we might try out? I know that 
we've tried WiX (too much XML), NSIS (too much assembly), and Inno (way, way 
too much programming needed). Any others?


Re: Installation on 64-bit Windows

2017-01-10 Thread Varriount
@mindplay I suspect the main reason for araq's vehemence is not malicious, but 
maintenance fatigue.

The old installer used [NSIS](http://nsis.sourceforge.net/Main_Page), which is 
a headache to deal with (the installer is built using the NSIS language, which 
closely resembles assembly and a [declarative 
language](http://nsis.sourceforge.net/Sample_installation_script_for_an_application))


Re: Surprises with Generics

2017-01-08 Thread Varriount
The symbol binding rules for generics are outlined in the manual 
[here](http://manual.nim-lang.org/docs/manual.html#generics-symbol-lookup-in-generics).
 It might help if the section defined what exactly 'open' and 'closed' symbols 
are, since most programmers don't know about compiler-specific terminology.


Re: Surprises with Generics

2017-01-07 Thread Varriount
Why should it be surprising? Procedure calls and field accesses in generics 
aren't resolved until instantiation time. What if wuff was a field of 'a'? 
Anyway, if by 'inlining' you mean, 'can the Nim compiler/backend compiler 
inline generics', then yes, it can be inlined.

I don't quite understand your last question. If there is a generic cmp[T](x, y: 
T) that applies to all objects, and a more specific cmp(x, y: Foo) defined in 
the current module/an imported module, the more specific procedure will take 
precedence during the resolution process.

Looking at the algorithm module's 
[sort](http://nim-lang.org/docs/algorithm.html#sort,openArray\[T\],proc\(T,T\)) 
procedure, it looks like a sensible addition might be to have a 'sort' 
procedure which defaults to calling 'cmp' on the items, but the current 
implementation is ok. I wouldn't be surprised if a compiler inlined any calls to 
'sort', detected the function pointer, and inlined the referenced function too.


Re: Why do custom types need to be reference counted objects for dynamic dispatch to work.

2017-01-04 Thread Varriount
Yes, objects need to be reference counted for methods to work. This is because 
only reference types can point to variable-length memory regions.

Take the below code: 


type
  Animal = ref object of RootObj
    name: string
  
  Dog = ref object of Animal
    breed: string

method makeNoise(this: Animal) =
  echo "Hi, I'm ", this.name

method makeNoise(this: Dog) =
  echo "*Bark!* [said ", this.name, "]"


These type definitions translate roughly to the equivalent structures: 


# TypeInfo is an object containing type information
# makeTypeInfo creates a TypeInfo object holding a type's information

type
  AnimalObjBase = object of RootObj
    typeInfo: ptr TypeInfo
  
  AnimalBase = ptr AnimalObjBase
  
  AnimalObj = object of RootObj
    typeInfo: ptr TypeInfo
    name: pointer
  
  Animal = ptr AnimalObj
  
  DogObj = object of RootObj
    typeInfo: ptr TypeInfo
    name: pointer
    breed: pointer
  
  Dog = ptr DogObj


const
  animalTypeInfo: TypeInfo = makeTypeInfo(AnimalObj)
  dogTypeInfo: TypeInfo = makeTypeInfo(DogObj)


proc makeNoise_Animal(this: Animal) =
  echo "Hi, I'm ", this.name

proc makeNoise_Dog(this: Dog) =
  echo "*Bark!* [said ", this.name, "]"

proc makeNoise(this: AnimalBase) =
  if this.typeInfo == animalTypeInfo:
    makeNoise_Animal(cast[Animal](this))
  elif this.typeInfo == dogTypeInfo:
    makeNoise_Dog(cast[Dog](this))


(Note that this isn't exactly valid code, nor is it precisely how methods are 
implemented)

Note that 'AnimalObjBase', 'AnimalObj', and 'DogObj' all share common fields, 
'typeInfo' for all three, and 'name' for the latter two. This means that, given 
a region of memory holding data from one of these three types, we will always 
be able to access the 'typeInfo' field, and given a region of memory holding 
data from AnimalObj or DogObj, we can access the 'name' field (this 
field-sharing is the basis for subtyping).


+---------------+   +---------------+   +---------------+
| AnimalObjBase |   | AnimalObj     |   | DogObj        |
+---------------+   +---------------+   +---------------+
| typeInfo      |   | typeInfo      |   | typeInfo      |
+---------------+   +---------------+   +---------------+
                    | name          |   | name          |
                    +---------------+   +---------------+
                                        | breed         |
                                        +---------------+


The typeInfo field is used to mark these regions of memory. As long as every 
AnimalObj's 'typeInfo' member points to 'animalTypeInfo' and every DogObj's 
'typeInfo' member points to 'dogTypeInfo', we can reinterpret (cast) these 
regions of memory to their appropriate types, and pass them into their 
corresponding procedures/methods.

Now lets look at how objects are stored in memory. In contrast to references, 
which are pointers that always point to heap-allocated memory, object data may 
be located either in the heap _or_ the stack. It's this latter case that 
reveals why methods won't work on object types.

Say we create Animal and Dog variables in a main method, then pass those 
variables into a procedure which calls the 'makeNoise' method: 


method makeNoise(this: AnimalBase)

proc makeLotsOfNoise(someAnimal: Animal) =
  makeNoise(someAnimal)
  makeNoise(someAnimal)
  makeNoise(someAnimal)

proc main =
  var animal = Animal(name: "Unknown")
  var dog = Dog(name: "Spot", breed: "Poodle")
  
  makeLotsOfNoise(animal)
  makeLotsOfNoise(dog)

main()


When 'main' is called, after the variables are created, the stack holds two 
references that point to regions of heap memory: 


main():
  animal: 8 byte pointer -> 16 byte heap memory region
  dog:    8 byte pointer -> 24 byte heap memory region


And when makeLotsOfNoise is called, the stack layout looks something like this: 


main():
  animal: 8 byte pointer -> 16 byte heap memory region
  dog:    8 byte pointer -> 24 byte heap memory region
  makeLotsOfNoise(someAnimal = animal):
someAnimal: 8 byte pointer -> 16 byte heap memory region
makeNoise(this = someAnimal):
  this: 8 byte pointer -> 16 byte heap memory region
  ...
  makeLotsOfNoise(someAnimal = dog):
someAnimal: 8 byte pointer -> 24 byte heap memory region
makeNoise(this = someAnimal):
  this: 8 byte pointer -> 24 byte heap memory region
  ...


Make note of the size of the parameter passed into 'makeLotsOfNoise' - it's 
always an 8 byte pointer. This is a constraint of how procedure calls work, as 
the size of the parameters usually needs to be known at compile time.

Re: Return values question

2016-12-13 Thread Varriount
@flyx Are you quite sure the sequence is being resized? When I modify the code 
a bit, I get some very strange results:


proc foo() =
  var a = @[0, 0]
  a.add(1)
  var b: seq[int]
  shallowCopy(b, a)
  a[0] = 0 # modify sequence after copying
  a.add(2) # further modification
  echo a
  echo b
  for i in 0..20:
a.add(i)
  echo a
  echo b

foo()


This produces: 


@[0, 0, 1, 2]
@[0, 0, 1, 2]
@[0, 0, 1, 2, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 
18, 19, 20]
@[2318280822927416128, 3184080310742559793, 3683993088988819744, 
2318286320485802028, 3186332110640131126, 2318280895740262688, 
2318283094763516209, 2318285293786772273, 2318287492810028337, 
2318289691833284401, 26230164680358193, 0, 0, 0, 0, 11, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]



Re: Return values question

2016-12-13 Thread Varriount
Well, as I said, only the actual reference (the pointer) is copied on return. 
It's when it returns as the right-hand side of an assignment that copying tends 
to occur.

Personally, I would just use a sequence reference. You'll have to manually 
dereference it occasionally (using the [] operator) but its the most flexible 
solution. You could also just store the sequence as part of a larger reference 
type too.

Alternatively, you could try rolling your own collection using the memory 
allocation functions and unchecked pointers.


Re: Return values question

2016-12-12 Thread Varriount
> I guess that one could put the return value as the first one on the stack 
> frame, so there is no copy involved. It seems to me that this is safe to do 
> when one uses the special variable result. It depends on the calling 
> convention, but I think it should be doable?

This is close to how most calling conventions return results larger than a 
pointer. For many calling conventions, the caller of the function allocates the 
storage for the result (usually from the stack) then passes a pointer to that 
memory as a hidden argument to the called function. The called function then 
writes to that memory upon returning.

Basically, it's like this: 


type LargeObj = object
  a: array[0..10, int]
  b: string

proc foo(): LargeObj =
  result.a[0] = 1
  result.b = "hello"
  return result

# The above becomes this when compiled:
proc foo(result: ptr LargeObj) =
  result.a[0] = 1
  result.b = "hello"
  return



Re: Return values question

2016-12-12 Thread Varriount
**TL;DR**: Strings and sequences, like objects and integers, are copy on 
_assignment_. References are not.

Technically, values are always copied when returned from a procedure - there's 
not really any other way.* If values were to be always passed by 
pointer/reference, how would values that are stored on the function call stack 
persist after the procedure has returned?

What differs between the types is _what_ is copied. Though objects, integers, 
and references are all copied to the previous procedure frame, the memory 
referenced by references is not copied.

What people often get confused about in Nim (and other low-level languages) is 
the difference between _return_ and _assignment_ semantics. Assignment 
semantics in Nim are quite similar to return semantics, with two** exceptions: 
string and sequence types. Nim attempts to make these types semantically 
similar to arrays, despite both strings and sequences being references to 
dynamically allocated memory. Strings and sequences, like objects, copy their 
contents when assigned. This can be demonstrated by the code below: 


var a = @[1,2,3]
var b = a

a[0] = 4
echo "A: ", a
echo "B: ", b


Now, before you declare that this is an awful design flaw, I have the following 
explanation. For the C backends, the sequence (and string) types are 
represented roughly like this: 


type
  # An array of sequence data dynamically allocated at runtime.
  SeqData{.unchecked.}[T] = array[0..0, T]
  Sequence[T] = object
len, cap: int
data: SeqData[T]


Since sequences and strings are mutable, their array of data must be 
occasionally resized and reallocated when space is needed. Since there is no 
guarantee that the reallocated block of memory will have the same pointer, all 
references to the old data throughout the entire program's memory must be 
updated. Under the current scheme, this is a simple operation. Since sequence 
and string types are always copied on assignment, there is always at most one 
reference to the old data - the current variable holding the string/sequence. 
Other schemes would require tracking all points in memory that a 
string/sequence is referenced, which would be difficult (though not impossible, 
as most copying-style garbage collectors function like this).

Now, there are ways to circumvent this behavior:


  * Using a reference to a sequence or string
  * Marking the string or sequence as shallow
  * Using the shallowCopy procedure



Of these three the first is the most safe, allowing the string to be resized 
while also allowing it to be referenced by multiple parts of the program: 


type SeqRef[T] = ref seq[T]

var seqr: SeqRef[int]
new(seqr)

seqr[] = @[1, 2, 3]
var seqr2 = seqr

seqr[0] = 4
echo "Address of sequence reference one: ", repr(addr seqr[][0])
echo "Address of sequence reference two: ", repr(addr seqr2[][0])


(This is the scheme used by Python for its list type)

The second and third options involve using shallow operations. Marking a 
sequence or string with the 
[shallow](https://nim-lang.org/docs/system.html#shallow,seq\[T\]) procedure 
will bypass the usual data-copying behavior for all further assignments to 
that sequence, while using the 
[shallowCopy](https://nim-lang.org/docs/system.html#shallowCopy,T,T) operator 
will perform a single assignment operation that bypasses the behavior. 


var a, b, c, d, e: seq[int]
a = @[1,2,3]

# Perform a shallow assignment operation from a to b
shallowCopy(b, a)

# Perform a normal (copying) assignment from b to c
c = b

# Make c shallow, then perform shallow assignments to d and e
shallow(c)
d = c
e = d

echo "a: ", repr(addr a[0])
echo "b: ", repr(addr b[0])
echo "c: ", repr(addr c[0])
echo "d: ", repr(addr d[0])
echo "e: ", repr(addr e[0])


The problem with shallow operations is that once a sequence or string has been 
shallowly copied, it _must not_ be modified. If it is, then you can end up 
with some versions of the string that are out-of-sync. When a shallow sequence 
is resized, only the variable currently being modified has its reference 
updated; the other variables will still have references to the old data. Though 
the old data will still persist (so you shouldn't get null reference errors), 
this kind of behavior is unpredictable.


Re: Nim GC Performance

2016-12-06 Thread Varriount
Turning on cycle detection doesn't seem to affect the pause times for me. I 
still get sub-millisecond pauses for Araq's Nim snippet.

This is the snippet I'm using: 


# Compile and run with 'nim c -r -d:useRealtimeGC -d:release main.nim'

import strutils
#import times

include "$lib/system/timers"

const
  windowSize = 20
  msgCount   = 100

type
  Msg = seq[byte]
  Buffer = seq[Msg]

var worst: Nanos

proc mkMessage(n: int): Msg =
  result = newSeq[byte](1024)
  for i in 0 ..  worst:
worst = elapsed

proc main() =
  # Don't use GC_disable() and GC_step(). Instead use GC_setMaxPause().
  # GC_disableMarkAndSweep()
  GC_setMaxPause(300)
  
  var b = newSeq[Msg](windowSize)
  # we need to warmup Nim's memory allocator so that not most
  # of the time is spent in mmap()... Hopefully later versions of Nim
  # will be smarter and allocate larger pieces from the OS:
  for i in 0 .. 

Re: Nim GC Performance

2016-12-06 Thread Varriount
Here's an amendment to my previous timing. I compiled the Go snippet with 
standard arguments (I don't know if there's a 'release' mode) and Nim with 
'-d:release', then ran each executable 20 times. 


Nim
---
Mean: 0.08501 ms
Median:   0.07966 ms
Std Dev.: 0.03227 ms
Lowest:   0.053365 ms
Highest:  0.175821 ms


Go
---
Mean: 5.949 ms
Median:   5.81 ms
Std Dev.: 0.5054 ms
Lowest:   5.359414 ms
Highest:  7.220875 ms


As you can see, Go's garbage collector takes quite a bit longer than Nim's... 
Although it does have the benefit of being able to handle multiple threads (I 
think).


Re: Nim GC Performance

2016-12-06 Thread Varriount
@jlp765 It varied ±1 millisecond for the Go snippet, and ±0.1 millisecond for 
the Nim snippet.


Re: Nim GC Performance

2016-12-06 Thread Varriount
On my laptop, I get ~6 milliseconds for the Go code snippet in the linked post, 
and ~0.3 milliseconds for the Nim snippet posted by Araq


Re: passing references

2016-12-05 Thread Varriount
Or if you want to compare the addresses only: 


echo "IS IT THE SAME? ", (cast[int](xobjref) == cast[int](result))



Re: Question about the interaction of Concepts, Generic types, and typedesc

2016-11-28 Thread Varriount
This looks mainly like an edge case that hasn't been covered. You should 
probably file an issue for it.


Re: messaging - or communicating between GUI's

2016-11-15 Thread Varriount
You could also roll your own mechanism via shared memory maps (although this 
works better for fixed-length structures).


Re: Code page 65001

2016-10-25 Thread Varriount
Personally I dislike that particular behavior. The C runtime that ships with 
Windows is meant for internal use.

[https://blogs.msdn.microsoft.com/oldnewthing/20140411-00/?p=1273](https://blogs.msdn.microsoft.com/oldnewthing/20140411-00/?p=1273)


Re: strutils.toLower deprecation?

2016-10-12 Thread Varriount
Eh, my biggest complaint about strings is (and probably always will be) their 
copy-on-assignment behavior. At least UTF8 handling can be implemented fairly 
transparently through subtyping.

(Yes, I'm aware that changing current string assignment behavior would break 
things)


Re: Nim Chess 2 with transposition table support is available

2016-10-04 Thread Varriount
Don't forget link-time-optimizations. Turning those on tends to make code quite 
a bit smaller.


Re: reactor.nim 0.0.1 - an asynchronous networking library - is released

2016-08-29 Thread Varriount
I would argue that the most performant networking model would use multiple 
threads, with each thread hosting its own asynchronous event loop. Multiple 
processes work as well, however you have the overhead that comes with complete 
process separation.


Re: async I/O API: why strings?

2016-08-29 Thread Varriount
This actually puzzles me too, especially since Nim strings are particularly 
inefficient for this kind of work (they are copy-on-assignment, meaning the 
data is going to be copied at least twice on its way to a socket).


Re: Send data structures between threads?

2016-08-28 Thread Varriount
@Araq This sounds like object mapping, in which data from a database is mapped 
into a set of objects. That's quite common, and makes working with data much 
easier.


Re: asynchttpserver and big request body

2016-08-27 Thread Varriount
I'd love interfaces like the ones shown by 
[requests](https://docs.python-requests.org/en/master/) and 
[unirest](http://unirest.io)...


Re: Send data structures between threads?

2016-08-27 Thread Varriount
@Araq By "implemented", I believe jyelon meant "design"; using existing 
database software (postgresql).

Anyway, your reply still doesn't answer the question - how would he use json 
functions in one thread to act on data from another thread?


Re: Send data structures between threads?

2016-08-27 Thread Varriount
In those cases, do you really need to access the exact same memory, or will a 
copy do?


Re: Nim and COM

2016-07-04 Thread Varriount
Since COM has a C interface, it's quite possible (although I have yet to see a 
real-world example). What kind of scenario are you planning to use COM in?


Re: Jester: slow route affects other routes

2016-07-03 Thread Varriount
Jester relies on Nim's built-in async modules, which are unfortunately built on 
a single-threaded _asynchronous_ approach. This means any blocking operation 
done on the main thread will stall all operations.

If your aim is to pause a route, I suggest using something like 
[sleepAsync](https://nim-lang.org/docs/asyncdispatch.html#sleepAsync,int). For 
operations that don't have an asynchronous version, you'll need to run the 
procedure on another thread.
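
For example, something like this (untested) sketch inside an async proc: 


import asyncdispatch

proc slowRoute() {.async.} =
  await sleepAsync(5000)   # pauses 5 seconds without blocking other requests
  echo "done"

waitFor slowRoute()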


Re: Split at an empty separator does not work

2016-07-03 Thread Varriount
I'd prefer throwing an error (or returning the original string). Splitting with 
a delimiter of 'nothing' makes little sense, and doesn't have an intuitive 
result.


Re: Go: Embedding provides automatic delegation.

2016-07-03 Thread Varriount
This looks like it would fit Nim far better than classical object-oriented 
multi-inheritance.


Re: Concepts, name resolution and polymorphic behavior

2016-06-27 Thread Varriount
This looks to be more a limitation in the current concept implementation 
(although whether the limitation will be lifted is a design decision that Araq 
will have to make).

Currently the compiler only considers globally-scoped procedures when testing 
whether a concept "fits" a type, likely for complexity reasons.

For example, what would happen in your code if a global 'cost' procedure is 
already declared? What happens if you try to pass `g` in `print_route1` to 
another procedure expecting a `Graph` argument (`g` may be considered a Graph 
in the scope of `print_route1`, but not in the scope of sub-procedure calls).

Again, this appears to be more of a limitation of the current concepts 
implementation than a limitation in the abstract idea of concepts.

As a workaround, you can either just use plain generics (I think), templates, 
or as a final resort, procedure parameters or procedure tables/structs (like 
C++ vtables).


Re: Windows nim binaries freeze

2016-06-27 Thread Varriount
Could you retry the above actions with an elevated process explorer (one run as 
an administrator)? Also, do you have an Antivirus program or something similar 
that could be causing this? Have you had any similar problems with other 
programs in the past?

Sorry for the questions, it's just that this is a very strange problem. The 
only time a process has _just_ those three dlls loaded is very early in the 
program startup sequence when setting up the runtime environment, before the 
"main" function is called (before the "main" function is run, the C runtime 
environment must be set up, dlls loaded, etc).


Re: Windows nim binaries freeze

2016-06-27 Thread Varriount
It's possible to 'suspend' (pause) threads in various ways - a process can be 
created in a suspended state, a debugger can pause threads, or threads can be 
suspended via an undocumented kernel function 
[NtSuspendProcess](https://stackoverflow.com/questions/11010165/how-to-suspend-resume-a-process-in-windows)
 (process explorer uses the undocumented function when you right click an entry 
and select 'suspend'). Doing this to random processes can have strange 
consequences - for example, suspending a game with sound output will prevent 
some sound manipulation programs (such as Windows' built-in sound mixer) from 
responding/spawning, as they try to communicate with the suspended process and 
wait until the communication is successful.

Tell me, are you able to generate a full dump of the process via process 
explorer? That doesn't require opening the properties menu. Also, could you use 
process explorer to look at the list of dlls loaded by the process, and see if 
there is anything out of the ordinary? You can also paste them in a gist or 
something and I can look over them.

If this is too much trouble for you, we could arrange a time to screenshare, 
and I could try doing to some diagnostics. You can usually find me on the #nim 
IRC channel.


Re: Windows nim binaries freeze

2016-06-26 Thread Varriount
:

Re: Concepts, name resolution and polymorphic behavior

2016-06-26 Thread Varriount
:

Re: Windows nim binaries freeze

2016-06-26 Thread Varriount
: