Should 'in' Imply 'ref' as Well for Value Types?

2018-05-04 Thread Vijay Nayar via Digitalmars-d
While working on a library built for high efficiency, avoiding 
unnecessary copies of structs became an issue.  I had assumed 
that `in` was doing this, but a bit of experimentation revealed 
that it does not.  However, `ref in` works great.


My question is, should `in` by default also imply `ref` for value 
types like structs?  Is there a reason not to do this?


This is the test program I used for reference:

```
import std.stdio;

struct Bob {
  int a;
  this(this) {
    writeln("");  // postblit: a blank line in the output marks a copy
  }
}

void main()
{
  Bob b = Bob(3);
  writeln("&b = ", &b);

  void showAddrIn(in Bob b) {
    writeln("(showAddrIn)&b = ", &b);
  }
  showAddrIn(b);

  void showAddrRefIn(ref in Bob b) {
    writeln("(showAddrRefIn) &b = ", &b);
  }
  showAddrRefIn(b);
}
```

The output is as follows:

```
&b = 7FFD9F526AD0

(showAddrIn)&b = 7FFD9F526AB0
(showAddrRefIn) &b = 7FFD9F526AD0
```
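As I understand it, under the default compiler switches `in` currently just means `const scope`, so the struct is still passed by value; only `ref` changes the calling convention.  A small sketch that makes the copy visible without reading console output (names are my own):

```d
struct S { int[64] payload; }

// `in` is currently just `const scope`: the struct is still copied.
const(S)* addrByIn(in S s) { return &s; }

// `ref in` passes by reference, but binds only to lvalues.
const(S)* addrByRefIn(ref in S s) { return &s; }

void main() {
    S s;
    assert(addrByIn(s) != &s);    // different address: a copy was made
    assert(addrByRefIn(s) == &s); // same address: no copy
}
```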


Bug?: Presence of "init()" Method Causes std.array.appender to Fail to Compile

2018-05-13 Thread Vijay Nayar via Digitalmars-d
I encountered a very unexpected error when working on a project.  
It seems that the Appender and RefAppender structs created from 
the std.array.appender() method are sensitive to the mere 
presence of a method called "init()" on the element type of the 
array.


Here is a minimal example:

```
import std.array;

struct S1 {
  // The mere presence of this method causes the error; deleting it
  // fixes the compilation.
  void init(string p1, int p2, int p3) { }
}

struct S2 {
  S1[] a;
  RefAppender!(S1[]) getAppender() {
    return appender(&a);
  }
}

void main() { }
```

The compiler produces the following output:
```
/dlang/dmd/linux/bin64/../../src/phobos/std/array.d(2907): Error: cannot have array of `void(string, int, int)`
/dlang/dmd/linux/bin64/../../src/phobos/std/array.d(2976): Error: cannot have array of `inout void(string, int, int)`
/dlang/dmd/linux/bin64/../../src/phobos/std/array.d(3369): Error: template instance `std.array.Appender!(S1[])` error instantiating
/dlang/dmd/linux/bin64/../../src/phobos/std/array.d(3879):        instantiated from here: `RefAppender!(S1[])`
onlineapp.d(12):        instantiated from here: `appender!(S1[]*, S1)`
/dlang/dmd/linux/bin64/../../src/phobos/std/array.d(3429): Error: cannot have array of `inout void(string, int, int)`
```

Is this a bug or a misunderstanding on my part?
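For anyone hitting the same error: my current understanding is that every D type has a built-in `.init` property (its default value), and generic code such as Appender's internals evaluates `T.init`.  A member named `init` shadows that property, so the generic code suddenly sees a function where it expected a value.  A minimal sketch (names illustrative):

```d
struct Plain { int x = 7; }

struct Shadowed {
    // Shadows the built-in `.init` property, so generic code that
    // evaluates `Shadowed.init` now sees a function, not a value.
    void init(string p1, int p2, int p3) { }
}

void main() {
    assert(Plain.init.x == 7);  // the compiler-provided default value
    static assert(!is(typeof(Shadowed.init) == Shadowed));  // no longer a value
}
```

Renaming the method (e.g. to `initialize`) avoids the clash.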


Clash When Using Function as Template Value-Parameters?

2018-05-26 Thread Vijay Nayar via Digitalmars-d
I've been experimenting with code that uses std.functional : 
binaryFun and unaryFun, but I have found that using these methods 
makes it impossible to add function attributes like @safe, @nogc, 
pure, and nothrow, because no guarantee can be made about the 
functions created from a string.  For example, if you expect a 
comparator function like "a == b", someone can pass in "a.data--" 
instead.


That being said, I started trying out using strongly typed and 
attributed template parameters instead, relying on lambdas to 
keep the syntax for the user short. But when I tried this, I 
found that the very existence of templates with different 
parameter values causes a collision during compilation.


The following code snippet demonstrates the error:

```
import std.stdio;

final class BTree(
    ValueT, KeyT = ValueT,
    const(KeyT) function(ValueT) @safe @nogc nothrow pure KeyF = (a) => a) {

  KeyT getKey(ValueT val) {
    return KeyF(val);
  }
}

void main()
{
  auto btree1 = new BTree!(char);  // Removing this line eliminates the error.
  auto btree2 = new BTree!(int);
}
```

The error is:
```
onlineapp.d(8): Error: function literal `__lambda6(char a)` is not callable using argument types `(int)`
onlineapp.d(8):        cannot pass argument `val` of type `int` to parameter `char a`
onlineapp.d(15): Error: template instance `onlineapp.BTree!(int, int, function (char a) => a)` error instantiating
```

Is this an error in the compiler or in my own understanding of 
the D language?
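A workaround sketch, in case it's useful: keep the short lambda syntax by accepting the function as an `alias` parameter, but enforce the attributes in a template constraint instead of in the parameter's type.  This is my own construction, not tested against every case:

```d
// Accept KeyF as an alias, but require that calling it compiles inside an
// @safe nothrow pure @nogc context, so unattributed functions are rejected.
final class BTree(ValueT, KeyT = ValueT, alias KeyF = (ValueT a) => a)
if (is(typeof(() @safe nothrow pure @nogc { KeyT k = KeyF(ValueT.init); })))
{
    KeyT getKey(ValueT val) {
        return KeyF(val);
    }
}

void main() {
    auto btree1 = new BTree!char;
    auto btree2 = new BTree!int;  // no clash: the lambda's parameter is typed
    assert(btree2.getKey(5) == 5);
    assert(btree1.getKey('a') == 'a');
}
```

Because the default lambda's parameter is explicitly typed per instantiation, it should sidestep the type-deduction sharing seen with the untyped `(a) => a`.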


Re: Friends in D, a new idiom?

2018-05-26 Thread Vijay Nayar via Digitalmars-d
On Sunday, 27 May 2018 at 05:25:53 UTC, IntegratedDimensions 
wrote:



Re: Friends in D, a new idiom?


In D, there's no exact equivalent to friend, but there are a few 
more specialized tools at your disposal. Normally all code in the 
same module is essentially a friend, so if the classes you are 
dealing with are tightly coupled, they can simply be in the same 
module.


For example:

```
module m;

class C {
  // This is still visible in the same module.
  // See https://dlang.org/spec/attribute.html#VisibilityAttribute
  private int data;
  ...
}

class CAccessor {
  C _this;
  this(C c) {
    _this = c;
  }
  @property void data(int v) {
    _this.data = v;
  }
  ...
}
```

Initially I thought nested classes contained an inherent super 
but I guess that is not the case?


Super is for inheritance rather than inner classes. So another 
way to tackle your problem using super would be this:


```
class C {
  protected int _data;
  @property int data() {
    return _data;
  }
}

class CAccessor : C {
  @property void data(int v) {
    _data = v;
  }
  C toC() {
    return this;
  }
}
```

I also imagine that one could enhance this so that write access 
could also be allowed by certain types.


The 'package' visibility attribute can also be given a parameter 
if you need to limit access only to certain modules.


Any ideas about this type of pattern, how to make it better, 
already exists etc?


You might be looking for the "Builder Pattern" which uses a 
separate object to construct and modify objects, and then it 
creates a read-only object with those values upon request.


Also, I would recommend using "const" to control access as well.  
Setter methods will not be const, but getters will be.  Those 
that have a `const(C)` reference will only be able to read, and 
those with a `C` will be able to call all methods.


For example:

```
class C {
  private int _data;
  @property int data() const { return _data; }
  @property void data(int v) { _data = v; }
}

void main() {
  C a = new C();
  const(C) b = a;

  a.data(3);
  a.data();
  b.data();
  // b.data(4);  // Compile error: cannot call a mutating method on const.
}
```


Re: General problem I'm having in D with the type system

2018-05-27 Thread Vijay Nayar via Digitalmars-d
On Sunday, 27 May 2018 at 06:00:30 UTC, IntegratedDimensions 
wrote:



The problem description is not very clear, but the catfood 
example gives a bit more to work with.



animal  ->  food
  |          |
  v          v
 cat    ->  catfood


Of course, I'm not sure how to avoid the problem in D of


animal a = new cat();

a.f = new food()
auto c = cast(cat)a;


Cast operations are generally not guaranteed to preserve type 
safety and should be avoided when possible.  But if I understand 
your description, you have the following relations and 
transitions:


  animal owns food
  cat owns catfood
  animal may be treated as a cat (hence the casting)
  food may be treated as catfood (hence the casting)

It may be that the inheritance relationship is backwards in your 
use case.  If "animal" may be treated as a "cat", then the 
inheritance should be the other way around, and "animal" would 
inherit from "cat".


What specific kinds of relationships are you trying to model 
among what kinds of entities?
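To make the earlier suggestion concrete: D supports covariant return types on overrides, which lets the cat/catfood pairing work without casts.  A minimal sketch with made-up names:

```d
class Food { }
class CatFood : Food { }

class Animal {
    Food food() { return new Food; }
}

class Cat : Animal {
    // Covariant return type: a Cat statically yields CatFood.
    override CatFood food() { return new CatFood; }
}

void main() {
    auto c = new Cat;
    CatFood cf = c.food();  // no cast needed

    Animal a = c;
    // Through the base interface the result is still a CatFood at runtime.
    assert(cast(CatFood) a.food() !is null);
}
```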




Re: Clash When Using Function as Template Value-Parameters?

2018-05-27 Thread Vijay Nayar via Digitalmars-d

On Saturday, 26 May 2018 at 11:56:30 UTC, Vijay Nayar wrote:

The error is:
```
onlineapp.d(8): Error: function literal `__lambda6(char a)` is 
not callable using argument types `(int)`
onlineapp.d(8):cannot pass argument `val` of type `int` 
to parameter `char a`
onlineapp.d(15): Error: template instance 
`onlineapp.BTree!(int, int, function (char a) => a)` error 
instantiating

```


Just to clarify.  In the example above, if I create a 'BTree!int' 
by itself, it's fine.  If I create a 'BTree!char' by itself, it's 
fine also.  But if I create both, even if they are created in 
different modules, the compiler seems to mix up the types of the 
function template-parameter, and tries to fit a 'char' to the 
'int' function or an 'int' to the 'char' function, depending on 
which was declared first.


Re: Friends in D, a new idiom?

2018-05-27 Thread Vijay Nayar via Digitalmars-d
On Sunday, 27 May 2018 at 06:37:56 UTC, IntegratedDimensions 
wrote:


I'm looking for something lightweight and direct. It is not for 
total encapsulation control but to simply provide an extra 
level of indirection for write access to make the object look 
read only to those that directly use it.


I think const is something that may be helpful then.  If applied 
consistently, especially with methods, it can also protect you 
from accidentally making mutations in functions that were 
originally intended to be read-only.  Having an object "look" 
read-only is more of a stylistic thing based on conventions about 
method naming, etc.  Personally I lean towards having the compiler 
enforce it.


Re: Clash When Using Function as Template Value-Parameters?

2018-05-27 Thread Vijay Nayar via Digitalmars-d

On Sunday, 27 May 2018 at 20:38:25 UTC, Daniel Kozak wrote:

I would rewrite it to something like this:

template BTree(ValueT, KeyT = ValueT,
    alias KeyF = unaryFun!"cast(const)a")
{
class BTree
{


This is roughly what I originally had, but it creates a number of 
problems that I wanted to get around.  Changing KeyF back to an 
alias means that any function that uses it can no longer be 
const, pure, @nogc, or nothrow.  Essentially the parameter is 
just anything the user provides.


If I use a template value-parameter, then it forces any lambda 
the user passes in to either match the type I enter in (with 
const, pure, etc.) or the program to fail to compile.  That is, I 
don't want the user to pass in any function, but only functions 
with the desired attributes.  I.e., I wouldn't want them to pass 
in for KeyF something like "a.data--".


Listing out the full type does indeed work correctly with various 
examples, and letting the user pass in something like `a => 
a._id` does compile, but the only problem is that when there are 
two such template instances in the same program.


Logically `BTree!(MyStruct, int, a => a.id)`, 
`BTree!(AnotherStruct, char, a => a.name[0])`, `BTree!int` and 
`BTree!char` should all be totally independent.  But for reasons 
unknown, the individual parameters seem to be swapped and 
confused during compilation.


In the error I listed above, the function parameter from 
`BTree!char` is being used to create a compile error against 
`BTree!int`, which is very odd.  Each of these classes compiles 
and runs just fine individually; the compilation only breaks when 
both exist.





Re: Clash When Using Function as Template Value-Parameters?

2018-05-29 Thread Vijay Nayar via Digitalmars-d

On Tuesday, 29 May 2018 at 11:36:11 UTC, Yuxuan Shui wrote:


No, wait a second. (a)=>a is in default argument list, so it is 
in the global scope. And it was instantiated when you 
instantiate BTree with char.


Could you explain that part a bit for me?  Yes, (a) => a is a 
default value, but when you say it is in the global scope, are 
you saying that a single object "(a) => a" is created in the 
global scope and not created for each template argument list, 
e.g. "BTree!int" and "BTree!char"?


I actually do not know in what scope such objects would be 
created, I had assumed it was per template-parameter list, but 
are you saying this is not the case?


Re: Clash When Using Function as Template Value-Parameters?

2018-05-29 Thread Vijay Nayar via Digitalmars-d

On Tuesday, 29 May 2018 at 12:58:20 UTC, Yuxuan Shui wrote:

I believe that is the case. Normally that will be fine, because 
you can't modify them. Type-deduced lambda is a very special 
case, as in their parameter types are deduced on first use, so 
in a sense, they are "modified" by the first instantiation.


BTW, I can't find the documentation about defining lambda with 
their parameter types omitted anywhere.


I tried this again, this time completely ignoring lambdas and 
completely specifying the desired type like so:


```
final class BTree(
    ValueT,
    KeyT = ValueT,
    const(KeyT) function(ValueT) nothrow pure @nogc KeyF =
        function KeyT(ValueT a) { return a; }) {

  KeyT getKey(ValueT val) {
    return KeyF(val);
  }
}
```

But unfortunately, the following code still produces an error:

```
void main()
{
    auto btree1 = new BTree!(char);
    auto btree2 = new BTree!(int);  // The error is on this line.
}
```

```
onlineapp.d(17): Error: template instance `BTree!int` does not match template declaration `BTree(ValueT, KeyT = ValueT, const(char) function(char) pure nothrow @nogc KeyF = function KeyT(ValueT a)
{
return a;
}
)`
```

I think at this point this may be a bug in the compiler.  What do 
you think?




Re: Clash When Using Function as Template Value-Parameters?

2018-05-30 Thread Vijay Nayar via Digitalmars-d

On Tuesday, 29 May 2018 at 19:17:37 UTC, Vijay Nayar wrote:

On Tuesday, 29 May 2018 at 12:58:20 UTC, Yuxuan Shui wrote:


[...]


I tried this again, this time completely ignoring lambdas and 
completely specifying the desired type like so:


[...]


Issue created:  https://issues.dlang.org/show_bug.cgi?id=18917


Re: What's happening with the `in` storage class

2018-06-12 Thread Vijay Nayar via Digitalmars-d

On Saturday, 9 June 2018 at 02:38:14 UTC, SonicFreak94 wrote:

On Saturday, 9 June 2018 at 02:17:18 UTC, Adam D. Ruppe wrote:

On Saturday, 9 June 2018 at 02:13:00 UTC, Walter Bright wrote:
But it was never enforced, meaning that suddenly enforcing it 
is just going to break code left and right.



It isn't going to break anything. It is going to *correctly 
diagnose already broken code*.


That's a significant difference. Real world D users don't like 
broken code, but they DO like the compiler catching new bugs 
that slipped by before.


I agree. I would rather my potentially broken code be pointed 
out to me rather than removing the much more concise `in` from 
my code. In any case, I feel as though the concept of both `in` 
and `out` should be fairly intuitive. `in` would be a read-only 
reference (C# has received this recently), and `out` is a 
reference with the intention to write.


100% agreed.

I always found "in" to be consistent with what I view as one of 
D's core philosophies, that the simple thing should be the right 
thing.  For example, when you have a class parameter, it is 
automatically passed by reference without any other special 
considerations by the programmer.


To me, "in" has been a shorthand to communicate my desire to make 
sure that the parameter is treated strictly as an input, and not 
modified in any way or having ways to pass its reference to 
others who may then modify it.


Where I may be doing something wrong, a helpful message from the 
compiler is welcome.
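For completeness, `out` already carries that write intention in D today: the argument is passed by reference and reset to `.init` on entry.  A small sketch (the function name is my own):

```d
import std.conv : ConvException, to;

// `out` passes by reference and default-initializes the argument on entry,
// so a failed parse can never leave a stale value behind.
bool tryParse(string s, out int value) {
    try {
        value = s.to!int;
        return true;
    } catch (ConvException) {
        return false;  // value remains int.init (0)
    }
}

void main() {
    int v = 99;
    assert(!tryParse("oops", v));
    assert(v == 0);  // reset by `out`, not left at 99
    assert(tryParse("7", v) && v == 7);
}
```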


Re: Expanding tool (written in D) use, want advice

2018-06-25 Thread Vijay Nayar via Digitalmars-d

On Friday, 22 June 2018 at 14:45:46 UTC, Jesse Phillips wrote:
Should I be looking more at the benefits of having D as a tool? 
It was a good choice for me since I know D so well (and other 
reasons at the time), but C# is a reasonable language in this 
space. I'm thinking, like should I go into how learning D 
wouldn't be too hard for new hire since it has similar syntax 
to C# and so on.


One strong argument to make is based on performance. Give them 
numbers about how fast your tool runs and make it efficient. The 
idea is that because the linting tool will be run for every 
incremental build a developer makes, slower running times are a 
barrier to productivity.


But once performance targets are defined, and if the company 
thinks that C# can also meet those targets, then really it's 
their call. Ultimately it is their company and their assets.


In such a case, I would generalize your tool for use outside of 
the specific context of your company, and make it the basis of an 
open source project.


Associative Array that Supports upper/lower Ranges

2018-06-25 Thread Vijay Nayar via Digitalmars-d
I was in need of an associative array / dictionary object that 
could also support getting ranges of entries with keys below or 
above a given value.  I couldn't find anything that would do 
this, and ended up using the RedBlackTree to store key/value 
pairs, and then wrap the relevant functions with key lookups.


I feel that there was probably an easier way to do this, but I 
didn't find one.  Regardless, if anyone else has this kind of 
problem, you can get around it like this:


```
module rbtree_map;

import std.container.rbtree;
import std.algorithm : map;
import std.functional : binaryFun;
import std.meta : allSatisfy;
import std.range : ElementType, isInputRange;
import std.traits : isDynamicArray, isImplicitlyConvertible;

/**
 * A dictionary or associative array backed by a Red-Black tree.
 */

unittest {
  auto rbTreeMap = new RBTreeMap!(string, int)();
  rbTreeMap["a"] = 4;
  rbTreeMap["b"] = 2;
  rbTreeMap["c"] = 3;
  rbTreeMap["d"] = 1;
  rbTreeMap["e"] = 5;
  assert(rbTreeMap.length() == 5);
  assert("c" in rbTreeMap);
  rbTreeMap.removeKey("c");
  assert("c" !in rbTreeMap);
  rbTreeMap.lowerBound("c");  // Range of ("a", 4), ("b", 2)
  rbTreeMap.upperBound("c");  // Range of ("d", 1), ("e", 5)
}

final class RBTreeMap(KeyT, ValueT, alias KeyLessF = "a < b", 
bool allowDuplicates = false) {

public:
  static struct Pair {
KeyT key;
ValueT value;
  }

  alias keyLess = binaryFun!KeyLessF;

  alias RedBlackTreeT =
  RedBlackTree!(Pair, (pair1, pair2) => keyLess(pair1.key, 
pair2.key), allowDuplicates);


  RedBlackTreeT rbTree;

  // Forward methods like empty(), length(), opSlice(), etc. to rbTree.
  alias rbTree this;

  this() {
rbTree = new RedBlackTreeT();
  }

  this(Pair[] elems...) {
rbTree = new RedBlackTreeT(elems);
  }

  this(PairRange)(PairRange pairRange)
  if (isInputRange!PairRange && 
isImplicitlyConvertible!(ElementType!PairRange, Pair)) {

rbTree = new RedBlackTreeT(pairRange);
  }

  override
  bool opEquals(Object rhs) {
RBTreeMap that = cast(RBTreeMap) rhs;
if (that is null) return false;

return rbTree == that.rbTree;
  }

  /// Insertion
  size_t stableInsert(K, V)(K key, V value)
  if (isImplicitlyConvertible!(K, KeyT) && 
isImplicitlyConvertible!(V, ValueT)) {

return rbTree.stableInsert(Pair(key, value));
  }
  alias insert = stableInsert;

  ValueT opIndexAssign(ValueT value, KeyT key) {
rbTree.stableInsert(Pair(key, value));
return value;
  }

  /// Membership
  bool opBinaryRight(string op)(KeyT key) const
  if (op == "in") {
return Pair(key) in rbTree;
  }

  /// Removal
  size_t removeKey(K...)(K keys)
  if (allSatisfy!(isImplicitlyConvertibleToKey, K)) {
KeyT[K.length] toRemove = [keys];
return removeKey(toRemove[]);
  }

  //Helper for removeKey.
  private template isImplicitlyConvertibleToKey(K)
  {
enum isImplicitlyConvertibleToKey = 
isImplicitlyConvertible!(K, KeyT);

  }

  size_t removeKey(K)(K[] keys)
  if (isImplicitlyConvertible!(K, KeyT)) {
auto keyPairs = keys.map!(key => Pair(key));
return rbTree.removeKey(keyPairs);
  }

  size_t removeKey(KeyRange)(KeyRange keyRange)
  if (isInputRange!KeyRange
      && isImplicitlyConvertible!(ElementType!KeyRange, KeyT)
      && !isDynamicArray!KeyRange) {
    auto keyPairs = keyRange.map!(key => Pair(key));
    return rbTree.removeKey(keyPairs);
  }

  /// Ranges
  RedBlackTreeT.Range upperBound(KeyT key) {
return rbTree.upperBound(Pair(key));
  }

  RedBlackTreeT.ConstRange upperBound(KeyT key) const {
return rbTree.upperBound(Pair(key));
  }

  RedBlackTreeT.ImmutableRange upperBound(KeyT key) immutable {
return rbTree.upperBound(Pair(key));
  }

  RedBlackTreeT.Range lowerBound(KeyT key) {
return rbTree.lowerBound(Pair(key));
  }

  RedBlackTreeT.ConstRange lowerBound(KeyT key) const {
return rbTree.lowerBound(Pair(key));
  }

  RedBlackTreeT.ImmutableRange lowerBound(KeyT key) immutable {
return rbTree.lowerBound(Pair(key));
  }

  auto equalRange(KeyT key) {
return rbTree.equalRange(Pair(key));
  }

}
```


Re: Interesting Observation from JAXLondon

2018-10-11 Thread Vijay Nayar via Digitalmars-d

On Thursday, 11 October 2018 at 11:50:39 UTC, Joakim wrote:
On Thursday, 11 October 2018 at 07:58:39 UTC, Russel Winder 
wrote:

This was supposed to come to this list not the learn list.

On Thu, 2018-10-11 at 07:57 +0100, Russel Winder wrote:
It seems that in the modern world of Cloud and Kubernetes, 
and the charging
model of the Cloud vendors, that the startup times of JVMs is 
becoming a
financial problem. A number of high profile companies are 
switching from

Java
to Go to solve this financial difficulty.

It's a pity D is not in there with a pitch.

I suspect it is because the companies have heard of Go (and 
Rust), but not

D.


I doubt D could make a pitch that would be heard, no google 
behind it and all that jazz. D is better aimed at startups like 
Weka who're trying to disrupt the status quo than Java shops 
trying to sustain it, while shaving off some up-front time.


Personally I think this is going to change soon depending on what 
options are available.  The amount of time and money that 
companies, especially companies using Java and AWS, are putting 
into saving money with Nomad or Kubernetes on the promise of 
having more services per server is quite high.  However, these 
JVM based services run in maybe 1-2GB of RAM at the minimum, so 
they get maybe 4 services per box.


A microservice built using D and vibe.d could easily perform the 
same work using less CPU and maybe only 500MB of RAM.  The scale 
of improvement is roughly the same as what you would get by 
moving to containerization.


If D has the proper libraries and integrations available with the 
tools that are commonly used, it could easily break through and 
become the serious language to use for the competitive business 
of the future.


But libraries and integrations will make or break that.  It's not 
just Java you're up against, it's all the libraries like 
SpringBoot and all the integrations with AWS systems like SQS, 
SNS, Kinesis, MySQL, Postgres, Redis, etc.


My hope is that D will be part of that future and I'm trying to 
add libraries as time permits.


Re: Interesting Observation from JAXLondon

2018-10-12 Thread Vijay Nayar via Digitalmars-d

On Friday, 12 October 2018 at 07:13:33 UTC, Russel Winder wrote:
On Thu, 2018-10-11 at 13:00 +, bachmeier via Digitalmars-d 
wrote: […]

Suggestions?

My guess is that the reason they've heard of those languages 
is because their developers were writing small projects using 
Go and Rust, but not D.


I fear it may already be too late. Go, and now Rust, got 
marketing hype from an organisation putting considerable 
resources into the projects. This turned into effort from the 
community that increased rapidly, turning the hype into 
frameworks and libraries, and word of mouth marketing. It is 
the libraries and frameworks that make for traction. Now the 
hype is gone, Go and Rust, and their libraries and frameworks, 
are well positioned and with significant penetration into the 
minds of developers.


Talk to Java developers and they have heard of Go and Rust, but 
not D. Go is
more likely to them because of Docker and the context of The 
Web, for which Go
has a strong pitch. They have heard of Rust but usually see it 
as not relevant

to them, despite Firefox.

Talk to Python developers and they know of Go, many of them of 
Rust, but
almost never D. C and C++ are seen as the languages of 
performance extensions,

though Rust increasingly has a play there.

D has vibe.d, PyD, GtkD, and lots of other bits, but they've 
never quite had the resources of the equivalents in Go and Rust.


Also the D community as a whole is effectively introvert, 
whereas Go and Rust communities have been quite extrovert. 
"Build it and they will come" just doesn't work, you have to be 
pushy and market stuff, often using guerilla marketing, to get 
mindshare.


D has an excellent position against Python (for speed of 
development but without the performance hit) but no chance of 
penetrating the places where Python is strong due to lack of 
libraries and frameworks that people use – cf. Pandas, 
SciKit.Learn, etc.


D has an excellent position against Go as a language except 
that Go has goroutines and channels. The single threaded event 
loop and callback approach is losing favour. Kotlin is 
introducing Kotlin Coroutines which is a step on from the 
observables system of Rx. Structured concurrency abstracting 
away from fibres and threadpools. Java may well get this via 
Project Loom which is Quasar being inserted into the JVM 
directly. Whatever D has it doesn't seem to be going to compete 
in this space.


D without the GC has a sort of position against Rust, but I 
think that battle has been lost. Rust has won in the "the new C 
that isn't Go and doesn't have a garbage collector, and isn't 
C++, but does have all the nice monads stuff, oh and memory 
safety mostly".


When it comes down to it D will carry on as a niche language 
loved by a few unknown to most.


In my opinion, I don't think the game is over just yet.  One of 
D's biggest strengths has been its ability to adapt and innovate. 
 Despite being around since 2001, it is still forging ahead and 
many of the new features coming out in programming languages are 
coming to fruition in D before being back-ported to other 
languages.


The D crowd is certainly very introverted and very technically 
minded, it really seems to be an amazing hub for innovators and 
compiler designers.  But the D community has also been very 
receptive of changes to the language which allows it to evolve at 
a pace few other languages can match.


My personal opinion is that languages that grow up too fast get 
stuck, because they accumulate so much legacy code that certain 
options they may have originally wanted become unavailable.


Go and Rust are gaining traction, especially among developers 
getting tired of very hard to work with languages.  Java is very 
very slow to evolve and there's a huge amount of effort invested 
in learning other JVM languages like Scala, I think largely 
because people are looking for alternatives.


Rust, while intriguing, is very alien in syntax and concept for 
many developers.  Go gets wider adoption than other languages 
I've seen, but the race is still on in my book.



One thing that does concern me is the avenues in which people 
can discover D.  For me personally, after a particularly nasty 
C++ project, I just googled for "alternatives to C++" and that's 
how I found D back in 2009 or so.  But the same search today 
turns up nothing about D.  I'm not sure how people are supposed 
to find D.


Re: A Friendly Challenge for D

2018-10-13 Thread Vijay Nayar via Digitalmars-d

On Friday, 12 October 2018 at 21:08:03 UTC, Jabari Zakiya wrote:

On Friday, 12 October 2018 at 20:05:29 UTC, welkam wrote:
On Friday, 12 October 2018 at 16:19:59 UTC, Jabari Zakiya 
wrote:
The real point of the challenge is too see what idiomatic 
code...


There is no idiomatic D code. There is only better 
implementations.


D doesnt tell you how to write your code. It gives you many 
tools and you choose which tools to use. That`s what people 
like about D.


I await your implementation(s)! :-)


I downloaded the reference NIM implementation and got the latest 
nim compiler, but I received the following error:

  $ nim c --cc:gcc --d:release --threads:on twinprimes_ssoz.nim
  twinprimes_ssoz.nim(74, 11) Error: attempting to call 
undeclared routine: 'sort'


For a person not familiar with nim, what's the fastest way to fix 
that?


Re: A Friendly Challenge for D

2018-10-13 Thread Vijay Nayar via Digitalmars-d

On Saturday, 13 October 2018 at 14:32:33 UTC, welkam wrote:

On Saturday, 13 October 2018 at 09:22:16 UTC, Vijay Nayar wrote:


I downloaded the reference NIM implementation and got the 
latest nim compiler, but I received the following error:

  $ nim c --cc:gcc --d:release --threads:on twinprimes_ssoz.nim
  twinprimes_ssoz.nim(74, 11) Error: attempting to call 
undeclared routine: 'sort'


For a person not familiar with nim, what's the fastest way to 
fix that?


import algorithm

thats all but then it spits out

lib/nim/pure/algorithm.nim(144, 11) Error: interpretation 
requires too many iterations


I ran into the same problem as you did, and then followed the 
instructions from the error.  I modified the compiler source and 
increased the number of maximum iterations from 3_000_000 to 
1_000_000_000, rebuilt and installed it, but still ran into the 
exact same problem.  There may be something up with the algorithm 
itself.


Re: A Friendly Challenge for D

2018-10-13 Thread Vijay Nayar via Digitalmars-d

On Saturday, 13 October 2018 at 15:19:07 UTC, Jabari Zakiya wrote:

On Saturday, 13 October 2018 at 14:32:33 UTC, welkam wrote:
On Saturday, 13 October 2018 at 09:22:16 UTC, Vijay Nayar 
wrote:

[...]


import algorithm

thats all but then it spits out

lib/nim/pure/algorithm.nim(144, 11) Error: interpretation 
requires too many iterations


My mistake. I updated the file and forgot to include the 
'import algorithm' directive. The file is now fixed to include 
it. Download the corrected version or patch your file 
accordingly.


As stated in the file intro **YOU MUST DO THIS** to get it to 
compile with current Nim (they were supposed to fix this in 
this version 0.19.0 but didn't).


 To compile for nim versions <= 0.19.0 do following:
 1) in file: ~/nim-0.19.0/compiler/vmdef.nim
 2) set variable: MaxLoopIterations* = 1_000_000_000 (1 Billion 
or >)

 3) then rebuild system: ./koch boot -d:release

If you are using 'choosenim' to install Nim (highly advisable) 
the full path is:


 ~/.choosenim/toolchains/nim-0.19.0/compiler/vmdef.nim

I'll post performance results from my laptop to give reference 
times to compare against.


Ok, now it builds.  I was previously following the build 
instructions from the Nim website and am not super clear what the 
"koch" tool does, but following your instructions, the program 
does build and run.  I'll take a stab at making a D version.


Re: A Friendly Challenge for D

2018-10-13 Thread Vijay Nayar via Digitalmars-d

On Saturday, 13 October 2018 at 15:50:06 UTC, Vijay Nayar wrote:
[...]


Interesting results so far.  I have a partially converted program 
here:  
https://gist.github.com/vnayar/79e2d0a9850833b8859dd9f08997b4d7


The interesting part is that during compilation (with the command 
"dmd twinprimes_ssoz.d"), the compilation will abort with the 
message "Killed" and no further information. That's a new one for 
me, so I'm looking into the cause.


Re: A Friendly Challenge for D

2018-10-13 Thread Vijay Nayar via Digitalmars-d

On Saturday, 13 October 2018 at 18:05:45 UTC, Jabari Zakiya wrote:


It may be also running into a hard time limit imposed on 
compilation that Nim had/has that prevented my code from 
initially compiling. I'm generating a lot of PG parameter 
constants at compile time, and it's doing a lot of number 
crunching and building larger and larger arrays of constants as 
the PG's get larger.


Try compiling with successive PG's (just P5, then P5 and P7, 
etc) to see where it fails. That will let you know the code is 
working correctly, and that the compiler is choking because of a 
hard time limit, a memory limit, or both. That's why I 
put in a compiler output statement in 'genPGparameters' to see 
the progression of the PG parameters being built by the 
compiler to initially find when the compiler started choking. 
You may also need to patch whatever facility in the D compiler 
chain that controls this too.


It's P17, the biggest one that takes the longest to build in the 
Nim version. I actually don't know what memory limits exist for 
the D compiler at compile-time, so I may need to do some homework.
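
For context, the D port mirrors the Nim approach by forcing the 
parameter tables to be computed during compilation. In D, 
assigning a function call to an `enum` triggers compile-time 
function evaluation (CTFE); a minimal illustration follows, where 
`squaresUpTo` is only a stand-in for the port's actual 
`genPGparameters`:

```
import std.stdio : writeln;

// Stand-in for a parameter-generating function; the real port
// builds far larger arrays, which is what strains the compiler.
int[] squaresUpTo(int n) {
    int[] result;
    foreach (i; 1 .. n + 1)
        result ~= i * i;
    return result;
}

// An enum initializer forces CTFE: the array is computed
// entirely at compile time and baked into the binary.
enum parameters = squaresUpTo(5);

void main() {
    writeln(parameters);  // [1, 4, 9, 16, 25]
}
```

The "Killed" messages above suggest the compilers exhaust memory 
while doing exactly this kind of evaluation for the P17 tables.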


Re: A Friendly Challenge for D

2018-10-13 Thread Vijay Nayar via Digitalmars-d

On Saturday, 13 October 2018 at 18:14:20 UTC, Vijay Nayar wrote:
On Saturday, 13 October 2018 at 18:05:45 UTC, Jabari Zakiya 
wrote:


It may be also running into a hard time limit imposed on 
compilation that Nim had/has that prevented my code from 
initially compiling. I'm generating a lot of PG parameter 
constants at compile time, and it's doing a lot of number 
crunching and building larger and larger arrays of constants 
as the PG's get larger.


Try compiling with successive PG's (just P5, then P5 and P7, 
etc) to see where it fails. That will let you know the code is 
working correctly, and that the compiler is choking either/and 
because of a hard time limit and/or memory limit. That's why I 
put in a compiler output statement in 'genPGparameters' to see 
the progression of the PG parameters being built by the 
compiler to initially find when the compiler started choking. 
You may also need to patch whatever facility in the D compiler 
chain that controls this too.


It's P17, the biggest one that takes the longest to build in 
the Nim version. I actually don't know what memory limits exist 
for the D compiler at compile-time, so I may need to do some 
homework.


It's not just DMD either.

$ ldc2 twinprimes_ssoz.d
...
generating parameters for P17
Killed

$ gdc twinprimes_ssoz.d
...
generating parameters for P17
gdc: internal compiler error: Killed (program cc1d)
Please submit a full bug report,
with preprocessed source if appropriate.
See  for instructions.

$ dmd twinprimes_ssoz.d
...
generating parameters for P17
Killed



Re: A Friendly Challenge for D

2018-10-14 Thread Vijay Nayar via Digitalmars-d

On Saturday, 13 October 2018 at 19:04:48 UTC, Jabari Zakiya wrote:

On Saturday, 13 October 2018 at 18:31:57 UTC, Vijay Nayar wrote:
On Saturday, 13 October 2018 at 18:14:20 UTC, Vijay Nayar 
wrote:
On Saturday, 13 October 2018 at 18:05:45 UTC, Jabari Zakiya 
wrote:


It may be also running into a hard time limit imposed on 
compilation that Nim had/has that prevented my code from 
initially compiling. I'm generating a lot of PG parameter 
constants at compile time, and it's doing a lot of number 
crunching and building larger and larger arrays of constants 
as the PG's get larger.


Try compiling with successive PG's (just P5, then P5 and P7, 
etc) to see where it fails. That will let you know the code 
is working correctly, and that the compiler is choking 
either/and because of a hard time limit and/or memory limit. 
That's why I put in a compiler output statement in 
'genPGparameters' to see the progression of the PG 
parameters being built by the compiler to initially find 
when the compiler started choking. You may also need to 
patch whatever facility in the D compiler chain that 
controls this too.


It's P17, the biggest one that takes the longest to build in 
the Nim version. I actually don't know what memory limits 
exist for the D compiler at compile-time, so I may need to do 
some homework.


It's not just DMD either.

$ ldc2 twinprimes_ssoz.d
...
generating parameters for P17
Killed

$ gdc twinprimes_ssoz.d
...
generating parameters for P17
gdc: internal compiler error: Killed (program cc1d)
Please submit a full bug report,
with preprocessed source if appropriate.
See  for instructions.

$ dmd twinprimes_ssoz.d
...
generating parameters for P17
Killed


In the Nim code, starting line 91 is when the PG constants are 
being generate at compile time.


-
# Generate at compile time the parameters for PGs P5-P17.
const parametersp5  = genPGparameters(5)
const parametersp7  = genPGparameters(7)
const parametersp11 = genPGparameters(11)
const parametersp13 = genPGparameters(13)
const parametersp17 = genPGparameters(17)
-

Can it compile just using P5 (the first line, others commented 
out), and then P7, etc?


I'm not understanding your comments now.

If you can get a working version up and running (with correct 
output) we can solve the P17 compiler issues later (or in a 
parallel forum thread), especially if you have to delve into 
the weeds of the compiler chain.


In my mind (same with Nim process) getting working code using 
any PG is first order priority (because you'll need getting 
multi-threading working too). Once you can do that, by default, 
you can then use any generator you want if you create the 
correct parameters for it. That's one of the advantages of the 
algorithm, it's PG agnostic (as long as your hardware will 
accommodate it).


So don't prioritize getting P17 to compile right now (in this 
thread). Create the working generic structure that can work 
with any PG first.


Updated:  
https://gist.github.com/vnayar/79e2d0a9850833b8859dd9f08997b4d7


I still get a few runtime errors likely from mistakes in my 
conversion for certain primes.  I'll resolve those after I get 
back from the gym.


But as previous posters have said, the code is not really very 
different between Nim and D.  Most of it is array manipulation 
and arithmetic operations, and not many of the features of either 
D or Nim are very different.  Both turn into fast code, both have 
garbage collection, and both have generally similar operators and 
libraries for this kind of problem.


The biggest differences I observed revolved not around the 
languages themselves, but around code style.  For example, can 
you put a loop and 3 additional statements on a single line in D? 
 Yes.  But by common D style conventions, that code is considered 
hard to read.
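
A contrived sketch of the difference (both halves compile and 
compute the same thing):

```
import std.stdio : writeln;

void main() {
    int[] xs = [3, 1, 4, 1, 5];

    // Legal D: a loop plus three more statements crammed onto one line.
    int sum = 0; foreach (x; xs) sum += x; writeln(sum); writeln(xs.length);

    // Conventional D style: one statement per line.
    int total = 0;
    foreach (x; xs)
        total += x;
    writeln(total);      // 14
    writeln(xs.length);  // 5
}
```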


Once I get the bugs out, I'm curious to see if any performance 
differences crop up.  There's the theory that says they should be 
the same, and then there's the practice.


Re: A Friendly Challenge for D

2018-10-15 Thread Vijay Nayar via Digitalmars-d

On Sunday, 14 October 2018 at 10:51:11 UTC, Vijay Nayar wrote:
Once I get the bugs out, I'm curious to see if any performance 
differences crop up.  There's the theory that says they should 
be the same, and then there's the practice.


I don't actually understand the underlying algorithm, but I at 
least understand the flow of the program and the structure.  The 
algorithm utilized depends heavily on using shared memory access, 
which can be done in D, but I definitely wouldn't call it 
idiomatic.  In D, message passing is preferred, but it really 
can't be well demonstrated on your algorithm without a deeper 
understanding of the algorithm itself.


A complete working version can be found at: 
https://gist.github.com/vnayar/79e2d0a9850833b8859dd9f08997b4d7


I modified both versions of the program to utilize the 
pgParameters13 for more of an apples-to-apples comparison.


The final results are as follows:
$ nim c --cc:gcc --d:release --threads:on twinprimes_ssoz.nim && 
echo "30" | ./twinprimes_ssoz

Enter integer number: threads = 8
each thread segment is [1 x 65536] bytes array
twinprime candidates = 175324676; resgroups = 1298702
each 135 threads has nextp[2 x 5566] array
setup time = 0.000 secs
perform twinprimes ssoz sieve
sieve time = 0.222 secs
last segment = 53518 resgroups; segment slices = 20
total twins = 9210144; last twin = 299712+/-1
total time = 0.223 secs

$ dub build --compiler=ldc2 -b=release && echo "30" | 
./twinprimes

Enter integer number:
threads = 8
each thread segment is [1 x 65536] bytes array
twinprime candidates = 175324676; resgroups = 1298702
each 135 threads has nextp[2 x 5566] array
setup time = 1 ms, 864 μs, and 7 hnsecs
perform twinprimes ssoz sieve
sieve time = 196 ms, 566 μs, and 5 hnsecs
last segment = 53518 resgroups; segment slices = 20
total twins = 9210144; last twin = 299712+/- 1
total time = 198 ms, 431 μs, and 2 hnsecs

My understanding is that the difference in performance is largely 
due to slightly better optimization from the LLVM based ldc2 
compiler, where I believe Nim is using gcc.




Re: A Friendly Challenge for D

2018-10-16 Thread Vijay Nayar via Digitalmars-d

On Monday, 15 October 2018 at 22:17:57 UTC, Jabari Zakiya wrote:
$ dub build --compiler=ldc2 -b=release && echo "30" | 
./twinprimes

Enter integer number:
threads = 8
each thread segment is [1 x 65536] bytes array
twinprime candidates = 175324676; resgroups = 1298702
each 135 threads has nextp[2 x 5566] array
setup time = 1 ms, 864 μs, and 7 hnsecs
perform twinprimes ssoz sieve
sieve time = 196 ms, 566 μs, and 5 hnsecs
last segment = 53518 resgroups; segment slices = 20
total twins = 9210144; last twin = 299712+/- 1
total time = 198 ms, 431 μs, and 2 hnsecs

My understanding is that the difference in performance is 
largely due to slightly better optimization from the LLVM 
based ldc2 compiler, where I believe Nim is using gcc.


Here's what I get on my system.

$ echo 3_000_000_000 | ./twinprimes_test7yc.0180.gcc821
Enter integer number: threads = 8
each thread segment is [1 x 65536] bytes array
twinprime candidates = 175324676; resgroups = 1298702
each 135 threads has nextp[2 x 5566] array
setup time = 0.000 secs
perform twinprimes ssoz sieve
sieve time = 0.144 secs
last segment = 53518 resgroups; segment slices = 20
total twins = 9210144; last twin = 299712+/-1
total time = 0.144 secs

Could you list your hardware, D ver, compiler specs.

I will run your code on my system with your D version and 
compiler, if I can.


Excellent work!


D has multiple compilers, but for the speed of the finished 
binary, LDC2 is generally recommended.  I used version 1.11.0.  
https://github.com/ldc-developers/ldc/releases/tag/v1.11.0


I was using DUB to manage the project, but to build the 
stand-alone file from the gist link, use this command:  $ ldc2 
-release -O3 twinprimes_ssoz.d

And to run it:  $ echo "30" | ./twinprimes_ssoz

Running the program a bunch of times, I get variable results, 
mostly between 195ms and 250ms.  Running the Nim version, I also 
get variable results, mostly between 230ms and 280ms.


My system is an Alienware 14x R2 from 2012.  Specs can be found 
here: 
https://www.cnet.com/products/alienware-m14xr2-14-core-i7-3630qm-8-gb-ram-750-gb-hdd/specs/


Re: Shared - Another Thread

2018-10-18 Thread Vijay Nayar via Digitalmars-d

On Wednesday, 17 October 2018 at 21:12:49 UTC, Stefan Koch wrote:

Hi,

reading the other shared thread  "shared - i need to be 
useful"(https://forum.dlang.org/thread/mailman.4299.1539629222.29801.digitalmar...@puremagic.com)


led me to an important realisation concerning why sharing data 
across threads is so unintuitive and hard to get right.
The reason is that sharing in the real world has nothing to do 
with using something and the same time.
For example: If I share my flat with another person, that 
person, while occupying the same flat as me, cannot actually 
occupy the same space. It is physically impossible.


In other words, sharing does not mean that multiple entities own 
something; rather, it is about dividing and managing the 
(temporary) ownership of fragments.


Therefore if ownership is unclear sharing is impossible.
The safest default for something shared with unclear ownership 
is to view it as untouchable/unreadable/unwritable until 
ownership is established.


My understanding is that the "shared" keyword can be useful 
especially with array types that are operated on by multiple 
threads.  Some algorithms partition their data following specific 
rules about which thread may touch which fragment.


Imagine a simple algorithm that does logic on very long numbers, 
split into bytes.  One multi-threaded implementation may use 4 
threads.  The first operating on bytes 0, 4, 8, etc.  The second 
operating on bytes 1, 5, 9, etc.


In this case, a mutex or lock isn't actually needed, because the 
algorithm itself assures that threads don't collide.
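
A hedged sketch of that idea in D (the names here are 
illustrative, not the actual sieve code): each worker owns the 
indices congruent to its id, so the index sets are disjoint and 
no mutex is needed.

```
import core.thread : Thread;
import std.stdio : writeln;

enum numThreads = 4;
__gshared ubyte[16] data;  // storage visible to all threads

// Factory so each worker captures its own copy of myId
// (D closures created directly in a loop would share one variable).
Thread makeWorker(int myId) {
    return new Thread({
        // This worker writes only indices i with i % numThreads == myId,
        // so no two threads ever touch the same element.
        for (size_t i = myId; i < data.length; i += numThreads)
            data[i] = cast(ubyte)(myId + 1);
    });
}

void main() {
    Thread[] workers;
    foreach (id; 0 .. numThreads)
        workers ~= makeWorker(id);
    foreach (t; workers) t.start();
    foreach (t; workers) t.join();
    writeln(data);  // [1, 2, 3, 4, ...] — each slot filled by exactly one worker
}
```

The safety here comes from the index partitioning, not from any 
locking primitive, which matches the structure described above.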


It's an over-simplification, but I think this is basically what 
the prime-number finding algorithm by Jabari Zakiya is doing.