Builtin Support for Comparison Function for Primitive Types

2015-05-02 Thread via Digitalmars-d-learn
Why doesn't std.algorithm.comparison.cmp support primitive types 
such as


assert(cmp(0,1) == true);

?


Re: A slice can lose capacity simply by calling a function

2015-05-02 Thread Jonathan M Davis via Digitalmars-d-learn
On Saturday, May 02, 2015 01:21:14 Ali Çehreli via Digitalmars-d-learn wrote:
2)  void foo(const(int[]) arr);  // cannot affect anything
 // (even capacity)

Actually, you can modify the capacity of arr quite easily. All you have to
do is slice it and append to the slice, e.g.

auto arr2 = arr;
arr2 ~= 7;

Any code which has access to a dynamic array can affect that array's
capacity unless the array's capacity is already at its length or 0, because
if you have access to the array, you can slice, and that slice will have
access to exactly the same memory as the original. const prevents altering
the elements, of course, but it has no effect whatsoever on the ability of
other slices to expand into the memory beyond the end of that array.

And of course, if you misuse something like assumeSafeAppend (i.e. when it's
_not_ actually safe to append), then you can _really_ bork things.

Really, if you're dealing with a thread-local, dynamic array, and you check
its capacity immediately before doing something, then you can be sure that
its capacity will be what it was when that something starts, but unless you
follow every line of code after checking the capacity and verify that none
of it could possibly have appended to a slice which referred to the last
point used in the memory block that that array points to or done something
like call assumeSafeAppend or anything else which could have affected the
capacity of that array, then you have to assume that it's possible that the
capacity of the array has changed.

I really don't think that it's reasonable in the general case to expect to
be able to guarantee that the capacity of a dynamic array won't change. If
you know exactly what the code is up to, and the array or any other array
that might refer to that same block of memory is only going to be appended
to under very controlled circumstances that you fully understand, then you
can know that the array's capacity won't change. But in general, if there's
any possibility of an array being appended to, then all bets are off.

- Jonathan M Davis




Re: Merging one Array with Another

2015-05-02 Thread via Digitalmars-d-learn

On Friday, 1 May 2015 at 19:30:08 UTC, Ilya Yaroshenko wrote:

Both variants are wrong because uniq needs sorted ranges.

Probably you need something like that:

x = x.chain(y).sort.uniq.array;


Interesting.

Is

x = x.chain(y).sort

faster than

x ~= y;
x.sort;

?

If so why?


Re: Builtin Support for Comparison Function for Primitive Types

2015-05-02 Thread via Digitalmars-d-learn

On Saturday, 2 May 2015 at 08:29:11 UTC, Per Nordlöw wrote:
Why doesn't std.algorithm.comparison.cmp support primitive 
types such as


assert(cmp(0,1) == true);

?


Correction: I meant

  assert(cmp(0,1) == -1);


Re: Builtin Support for Comparison Function for Primitive Types

2015-05-02 Thread via Digitalmars-d-learn

On Saturday, 2 May 2015 at 08:29:11 UTC, Per Nordlöw wrote:
Why doesn't std.algorithm.comparison.cmp support primitive 
types such as


assert(cmp(0,1) == true);

?


I just found some forgotten code of mine that solves it through

import std.range: only;
assert(cmp(only(0), only(1)) == -1);

Why the detour over std.range.only()?
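(For reference, a tiny overload sketch avoids the detour entirely; `scalarCmp` is a made-up name, not something in Phobos:)

```d
import std.algorithm.comparison : cmp;
import std.range : only;

// Hypothetical helper (not in Phobos): three-way comparison for
// arithmetic types, mirroring what cmp does for ranges.
int scalarCmp(T)(T a, T b)
    if (__traits(isArithmetic, T))
{
    return a < b ? -1 : a > b ? 1 : 0;
}

void main()
{
    assert(scalarCmp(0, 1) == -1);        // direct
    assert(cmp(only(0), only(1)) == -1);  // the only() detour
}
```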


Re: Merging one Array with Another

2015-05-02 Thread Meta via Digitalmars-d-learn

On Saturday, 2 May 2015 at 10:18:07 UTC, Per Nordlöw wrote:

On Friday, 1 May 2015 at 19:30:08 UTC, Ilya Yaroshenko wrote:

Both variants are wrong because uniq needs sorted ranges.

Probably you need something like that:

x = x.chain(y).sort.uniq.array;


Interesting.

Is

x = x.chain(y).sort

faster than

x ~= y;
x.sort;

?

If so why?


Probably the latter is slower than the former, at the very least 
because the latter requires memory allocation whereas the former 
does not.


A slice can lose capacity simply by calling a function

2015-05-02 Thread Ali Çehreli via Digitalmars-d-learn
This is related to a discussion[1] that I had started recently but I 
will give an even shorter example here:


void main()
{
// Two slices to all elements
auto a = [ 1, 2, 3, 4 ];
auto b = a;

// Initially, they both have capacity (strange! :) )
assert(a.capacity == 7);
assert(b.capacity == 7);

// The first one that gets a new element gets the capacity
b ~= 42;
assert(a.capacity == 0);// -- a loses
assert(b.capacity == 7);
}

The interesting thing is that this situation is the same as appending to 
a slice parameter:


void foo(int[] b)
{
// Since stomping is prevented by the runtime, I am
// foolishly assuming that I can freely append to my
// parameter. Unfortunately, this action will cost the
// original slice its capacity.
b ~= 42;
}

void main()
{
auto a = [ 1, 2, 3, 4 ];

assert(a.capacity == 7);
foo(a);
assert(a.capacity == 0);// -- Capacity is gone :(
}

Note that the code above is about a function appending to a parameter 
for its own implementation. Otherwise, the appended element cannot be 
seen by the original slice anyway.


Also note that it would be the same if the parameter were const(int)[].

This is a new discovery of mine. Note that this is different from the 
non-determinism of when two slices stop sharing elements.[2] To me, this 
is a very strange consequence of passing a slice /by value/ despite the 
common expectation that by value leaves the original variable untouched. 
After this, I am tempted to come up with the following guideline.


(I am leaving 'immutable' and 'shared' out of this discussion.)

Guideline: Slice parameters should either be passed by reference to 
non-const or passed by value to const:


  1a) void foo(ref   int [] arr);  // can modify everything

  1b) void foo(ref const(int)[] arr);  // cannot modify elements

  2)  void foo(const(int[]) arr);  // cannot affect anything
   // (even capacity)

Only then can the caller be sure that the capacity of the original slice 
will not change. (I am assuming that the function is smart enough not to 
call assumeSafeAppend.)


Since 1a and 1b are by reference, the function would not append to the 
parameter for its own implementation purposes anyway. If it did append, 
it would be with the intention of that particular side-effect.


For 2, thanks to the const parameter, the function must copy the slice 
first before appending to it.


If the parameter is to a mutable slice (even with const elements) then 
the original slice can lose capacity. It is possible to come up with 
solutions that preserve the capacity of the original slice but I think 
the previous guideline is sufficient:


foo(a.capacityPreserved);

(capacityPreserved can be a function, returning a RAII object, which 
calls assumeSafeAppend in its destructor on the original slice.)


Does the guideline make sense?

Ali

[1] http://forum.dlang.org/thread/mhtu1k$2it8$1...@digitalmars.com

[2] http://dlang.org/d-array-article.html


Re: Merging one Array with Another

2015-05-02 Thread via Digitalmars-d-learn

On Saturday, 2 May 2015 at 11:16:30 UTC, Meta wrote:
Probably the latter is slower than the former, at the very 
least because the latter requires memory allocation whereas the 
former does not.


Ahh!,

auto x = [11, 3, 2, 4, 5, 1];
auto y = [0, 3, 10, 2, 4, 5, 1];

auto z = x.chain(y).sort; // sort them in place

assert(x == [0, 1, 1, 2, 2, 3]);
assert(y == [3, 4, 4, 5, 5, 10, 11]);

is very cool, provided that y is allowed to mutate. Now I 
understand why chain is useful.


A reallocation will still be needed in the final assignment, 
though. But no temporary!


Re: stdx.data.json - enhancement suggestions

2015-05-02 Thread Laeeth Isharc via Digitalmars-d-learn

On Friday, 1 May 2015 at 20:04:58 UTC, Ilya Yaroshenko wrote:

This line can be removed:
.map!(ch => ch.idup)


On Friday, 1 May 2015 at 20:02:46 UTC, Ilya Yaroshenko wrote:

Current std.stdio is deprecated. This ad-hoc version should work.

auto json = File(fileName)
.byChunk(1024 * 1024) //1 MB. Data cluster equals 1024 * 4
.map!(ch => ch.idup)
.joiner
.map!(b => cast(char)b)
.parseJSON;


Thanks for this.  I am trying to call parseJSONStream, so I will 
see if that works.


(Sample code here:
https://github.com/s-ludwig/std_data_json/blob/master/source/stdx/data/json/parser.d
)


Re: A slice can lose capacity simply by calling a function

2015-05-02 Thread Ali Çehreli via Digitalmars-d-learn

On 05/02/2015 01:56 AM, Jonathan M Davis via Digitalmars-d-learn wrote:

 I really don't think that it's reasonable in the general case to 
expect to

 be able to guarantee that the capacity of a dynamic array won't change.

Yes, it is so different from other languages like C and C++ that this 
behavior is worth talking about.


The elements are const, the slice is passed by value, but the reserved 
capacity disappears:


void foo(const(int)[] b)
{
b ~= 42;
}

a.reserve(million);
assert(a.capacity >= million);

foo(a);
assert(a.capacity == 0);// WAT

Ali



Re: Reducing source code: weak+alias values in array

2015-05-02 Thread Jens Bauer via Digitalmars-d-learn

On Saturday, 2 May 2015 at 13:08:27 UTC, Artur Skawina wrote:

On 05/02/15 05:28, Jens Bauer via Digitalmars-d-learn wrote:

On Saturday, 2 May 2015 at 03:21:38 UTC, Jens Bauer wrote:

For some reason, my build time has increased dramatically...

Building with 1 vector takes 0.6 seconds.
Building with 2 vector takes 0.7 seconds.
Building with 4 vector takes 0.9 seconds.
Building with 8 vector takes 1.1 seconds.
Building with 16 vectors takes 1.7 seconds.
Building with 32 vectors takes 3.4 seconds.
Building with 64 vectors takes 12.4 seconds.
Building with 112 vectors takes 55.5 seconds.
Building with 113 vectors takes 56.7 seconds.


Apparently CTFE can be very inefficient sometimes -- compiler
issue. Can't think of a workaround right now; manually parsing
(instead of using mixins) might help, but that would make the
solution less obvious...


I'll try and make a few experiments to see if there's something 
that helps speeding it up.



http://pastebin.com/pCh9e7hQ


For some reason I was never really affected by the horrible
CTFE perf. For example, your code from that link, after a few
tweaks to get it to build, compiles in ~3s for me. (64 bit x86
linux gdc build)


That's quick. I'd expect your computer to be a bit faster than 
mine. ;)
I have a QuadCore 2.5GHz PowerMac G5. But I'll also be building 
on a Dual 2GHz ARM Cortex-A7 based CubieBoard2 if I succeed in 
building a D compiler for it. I think it's important for the 
user that the compilation time is kept low, because many people 
will be building on Cortex-A based devices.


Re: stdx.data.json - enhancement suggestions

2015-05-02 Thread Laeeth Isharc via Digitalmars-d-learn

It doesn't like it.  Any thoughts ?

lexer.d(257): Error: safe function 
'stdx.data.json.parser.JSONLexerRange!(MapResult!(__lambda3, 
Result), cast(LexOptions)0, __lambda31).JSONLexerRange.empty' 
cannot call system function 
'app.lookupTickers.MapResult!(__lambda3, Result).MapResult.empty'


string[2][] lookupTickers(string dataSource,string[] searchItems)
{
import stdx.data.json;
import std.conv:to;
import std.algorithm:canFind,countUntil,joiner,map;
import std.string:toLower;
bool found=false;
bool checkedCode=false;
bool checkedName=false;
string[2][] ret;
string buf;
auto filename="../importquandl/"~dataSource~".json";
//auto data=cast(string)std.file.read(filename);
auto data = File(filename)
.byChunk(100*1024 * 1024) // 100 MB chunks
// .map!(ch => ch.idup)
.joiner
.map!(b => cast(char)b);
auto range1=parseJSONStream(data);
}


Reading bzipped files

2015-05-02 Thread via Digitalmars-d-learn
Has anybody cooked up any range adaptors for on-the-fly decoding 
of bzipped files? Preferably compatible with Phobos' standard 
interfaces for file I/O.


Should probably be built on top of

http://code.dlang.org/packages/bzip2


Re: Reducing source code: weak+alias values in array

2015-05-02 Thread Artur Skawina via Digitalmars-d-learn
On 05/02/15 05:28, Jens Bauer via Digitalmars-d-learn wrote:
 On Saturday, 2 May 2015 at 03:21:38 UTC, Jens Bauer wrote:
 For some reason, my build time has increased dramatically...

 Building with 1 vector takes 0.6 seconds.
 Building with 2 vector takes 0.7 seconds.
 Building with 4 vector takes 0.9 seconds.
 Building with 8 vector takes 1.1 seconds.
 Building with 16 vectors takes 1.7 seconds.
 Building with 32 vectors takes 3.4 seconds.
 Building with 64 vectors takes 12.4 seconds.
 Building with 112 vectors takes 55.5 seconds.
 Building with 113 vectors takes 56.7 seconds.

Apparently CTFE can be very inefficient sometimes -- compiler
issue. Can't think of a workaround right now; manually parsing
(instead of using mixins) might help, but that would make the
solution less obvious...

 Here's the source code for the file I'm building:
 
 http://pastebin.com/pCh9e7hQ

For some reason I was never really affected by the horrible
CTFE perf. For example, your code from that link, after a few
tweaks to get it to build, compiles in ~3s for me. (64 bit x86
linux gdc build)

artur


Re: Destruction in D

2015-05-02 Thread via Digitalmars-d-learn

On Friday, 1 May 2015 at 18:37:41 UTC, Idan Arye wrote:

On Friday, 1 May 2015 at 17:45:02 UTC, bitwise wrote:

On Friday, 1 May 2015 at 02:35:52 UTC, Idan Arye wrote:

On Thursday, 30 April 2015 at 23:27:49 UTC, bitwise wrote:
Well, the third thing was just my reasoning for asking in 
the first place. I need to be able to acquire/release shared 
resources reliably, like an OpenGL texture, for example.


If you want to release resources, you are going to have to 
call some functions that do that for you, so you can't escape 
that special stack frame(what's so special about it?) - 
though the compiler might inline it.


When you use a GC the compiler doesn't need to invoke the 
destructor at the end of the scope because the object is 
destroyed in the background, but that also means you can't 
rely on it to release your resources, so languages like Java 
and C# use try-with-resources and using statements 
(respectively) to call something at the end of the scope and 
end up using that stack frame anyway.


I'm not sure I understand you 100%, but my plan was to have an 
asset management system give out ref counted textures/etc. 
Whenever the last one went out of scope, the asset would be 
destroyed. This only works if the destructor is called on the 
graphics thread due to limitations of graphics APIs. In a 
single threaded C++ app, this is fine, destructors are called 
at end of scope. I was confused though, because like C#, D has 
both reference and value types. But, while in C#, value types 
still do not have destructors(grrr...) D structs do have 
destructors, which apparently run when the struct goes out of 
scope. However, the D port of my code will most likely use 
multithreaded rendering, which removes the guarantee that the 
assets will go out of scope on the graphics thread, so this 
idea is a no-go anyways.


Structs allow you to implement ref-counting smart pointers like 
you can do in C++. There is an implementation in the standard 
library: http://dlang.org/phobos/std_typecons.html#.RefCounted
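(A minimal sketch of the std.typecons.RefCounted approach; `Texture` here is a stand-in struct, not a real graphics handle:)

```d
import std.typecons : RefCounted;

struct Texture { int id; }

void main()
{
    auto tex = RefCounted!Texture(42);
    {
        auto tex2 = tex;       // copying bumps the reference count
        assert(tex2.id == 42); // payload reached via alias this
    }                          // tex2 destroyed, count drops back to 1
    // The payload is destroyed deterministically when the last
    // copy goes out of scope -- no waiting for the GC.
}
```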


But for something as structured and as heavy as gaming 
resources, I would go for a more manual approach like a 
repository-style architecture, where you manually tell the 
repository to load/release the the resources.


A very easy and simple solution is to have a queue of unused
textures in your asset manager, that any thread can push to, but
only one thread consumes from. Whenever the last ref is released
you push a texture into this queue, and at the end of each frame,
while the GPU is busy rendering or flipping, you go through this
queue on the appropriate thread and delete the textures.
This has the advantage of making your timings deterministic, i.e.
a function won't suddenly take 200x longer because it happened
to release the last ref to a texture.

Also, remember that acquiring/releasing refs in a threaded
environment is quite expensive, and you don't want to be doing
this while passing a texture around in your graphics pipeline, so
you probably shouldn't be counting texture refs at the texture
level, but maybe when you load/destroy a material, for example.
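(The deferred-deletion queue described above could be sketched like this; `DeletionQueue` and the texture-id type are assumptions for illustration -- any thread pushes when the last ref dies, only the graphics thread drains once per frame:)

```d
import core.sync.mutex : Mutex;

final class DeletionQueue
{
    private uint[] dead;
    private Mutex m;

    this() { m = new Mutex; }

    // any thread: called when the last ref to a texture is released
    void push(uint texId)
    {
        m.lock();
        scope(exit) m.unlock();
        dead ~= texId;
    }

    // graphics thread only: drain once per frame, e.g. while the
    // GPU is busy rendering or flipping, then delete the textures
    uint[] drain()
    {
        m.lock();
        scope(exit) m.unlock();
        auto batch = dead;
        dead = null;
        return batch;
    }
}

void main()
{
    auto q = new DeletionQueue;
    q.push(3);
    q.push(7);
    assert(q.drain() == [3, 7]);
    assert(q.drain().length == 0);  // queue is empty after a drain
}
```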


Re: What wrong?

2015-05-02 Thread Dennis Ritchie via Digitalmars-d-learn

On Saturday, 2 May 2015 at 02:51:52 UTC, Fyodor Ustinov wrote:

Simple code:

http://pastebin.com/raw.php?i=7jVeMFXQ

This code works compiled by DMD v2.066.1 and LDC2 (0.15.1) 
based on DMD v2.066.1 and LLVM 3.5.0.


$ ./z
TUQLUE
42
11

Compiled by DMD v2.067.1 the program crashes:
$ ./aa
TUQLUE
Segmentation fault

What I'm doing wrong?



I think the problem is in these lines:

-
receive(
(supervisorAnswer a) => r = a.ret
);

Partially it works :)

-
import std.variant;

private struct Exit{};
private struct supervisorAnswer {
Variant ret;
}

private __gshared Tid supervisorTid;

private void supervisor() {
static Variant[string] zval;
bool done = false;
void _store(T)(string k, T v) {
assert(k.length > 0);
zval[k] = v;
}

void _get(Tid id, string k) {
id.send(supervisorAnswer(zval.get(k, Variant("NOTFOUND"))));
}

while (!done) {
supervisorAnswer answer;
receive(
(Exit s) { done = true; },
_store!long,
_store!ulong,
_store!int,
_store!uint,
_store!float,
_store!double,
_store!string,
_store!Variant,
_get,
(Variant e) {  writeln(e); },
);
}
}

Variant Get(const string s) {
Variant r;
supervisorTid.send(thisTid, s);
writeln("TUQLUE");
/*receive(
(supervisorAnswer a) = r = a.ret
);*/
writeln(42);
return r;
}

void Set(T)(const string s, T v) {
supervisorTid.send(s, v);
}

shared static this() {
supervisorTid = spawn(supervisor);
}

shared static ~this() {
send(supervisorTid, Exit());
}

void main() {
Set("1", 11);
writeln(Get("1"));
send(supervisorTid, Exit());
thread_joinAll();
}


Re: A slice can lose capacity simply by calling a function

2015-05-02 Thread Jonathan M Davis via Digitalmars-d-learn
On Saturday, May 02, 2015 07:46:27 Ali Çehreli via Digitalmars-d-learn wrote:
 On 05/02/2015 01:56 AM, Jonathan M Davis via Digitalmars-d-learn wrote:

   I really don't think that it's reasonable in the general case to
 expect to
   be able to guarantee that the capacity of a dynamic array won't change.

 Yes, it is so different from other languages like C and C++ that
 this behavior is worth talking about.

It really comes down to how the memory itself is owned and managed, and with
dynamic arrays, it's the runtime that does that, and dynamic arrays simply
don't own or manage their memory, so expecting them to isn't going to
work.

 The elements are const, the slice is passed by value, but the reserved
 capacity disappears:

 void foo(const(int)[] b)
 {
  b ~= 42;
 }

  a.reserve(million);
  assert(a.capacity >= million);

  foo(a);
  assert(a.capacity == 0);// WAT

Yeah. If you just code in a way that you're making sure that you're not
slicing an array and then appending to the slices, then you can rely on
reserve working as you'd expect, and situations like this work just fine

int[] foo;
foo.reserve(target);

while(blah)
{
//...
foo ~= next;
//...
}

so long as you're not doing anything else with the array in the process, but
if you start passing a dynamic array around to functions which might append
to it or a to a slice of it, then all bets are off.

I really don't think that it's an issue in general, but if you do want to
guarantee that nothing affects the capacity of your array, then you're going
to need to either wrap all access to it so that nothing else can actually
get at a slice of the array itself, or you need to use another container -
one which is actually a container rather than the weird half-container,
half-iterator/range construct that a D dynamic array is.
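(The "wrap all access" option could be sketched like this; `Buffer` is a made-up name for illustration -- since the raw array is never handed out, no outside slice can claim its spare capacity:)

```d
struct Buffer
{
    private int[] data;

    void put(int x) { data ~= x; }
    int opIndex(size_t i) const { return data[i]; }
    @property size_t length() const { return data.length; }
    // deliberately no way to obtain the underlying slice
}

void main()
{
    Buffer b;
    b.put(1);
    b.put(2);
    assert(b.length == 2 && b[1] == 2);
}
```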

Ultimately, I think that the semantics of D arrays make sense and are
extremely useful, but they are definitely weird in comparison to pretty much
anything else that you're going to run into.

- Jonathan M Davis




Re: Destruction in D

2015-05-02 Thread bitwise via Digitalmars-d-learn

On Fri, 01 May 2015 14:37:40 -0400, Idan Arye generic...@gmail.com wrote:
Structs allow you to implement ref-counting smart pointers like you can  
do in C++. There is an implementation in the standard library:  
http://dlang.org/phobos/std_typecons.html#.RefCounted


Yeah, I guess I should have taken the existence of RefCounted as  
confirmation that D has deterministic destruction for structs, but the GC  
reference makes some pretty broad generalizations that do not seem to be  
entirely true.


But for something as structured and as heavy as gaming resources, I  
would go for a more manual approach like a repository-style  
architecture, where you manually tell the repository to load/release the  
the resources.


A quick googling of repository style architecture yielded a large  
variety of pages, including a 143 slide presentation, which I have no  
intention of reading =/ I think such an architecture would be overkill for  
something like this.


In any case, manually loading/releasing assets is a huge step backward.



On Sat, 02 May 2015 09:22:57 -0400, Márcio Martins marcio...@gmail.com  
wrote:

A very easy and simple solution is to have a queue of unused
textures in your asset manager, that any thread can push to, but
only one thread consumes from. Whenever the last ref is released
you push a texture into this queue, and at the end of each frame,
while the GPU is busy rendering or flipping, you go through this
queue on the appropriate thread and delete the textures.
This has the advantage of making your timings deterministic, i.e.
a function won't suddently take 200x longer because it happened
to release the last ref to a texture.


This sounds like it could work.


Also, remember that aquiring/releasing refs in a threaded
environment is quite expensive, and you don't want to be doing
this while passing a texture around in your graphics pipeline, so
you probably shouldn't be counting texture refs at the texture
level, but maybe when you load/destroy a material, for example.


True. I was going to pass the RefCounted assets by (ref const), but the  
annoyance that brings hardly seems worth it at this point.


I think I would wrap the assets in a class instead though, and let the  
destructor decrement a ref count in the asset manager instead of pushing  
it to a queue. That way, it would be at the discretion of the asset  
manager to destroy the assets as needed. At first, I would most likely  
destroy the assets immediately, but eventually, the destruction could be  
delayed until memory runs thin to cover awesome code like this:


while(true) {
Texture tex = Asset.GetTexture(card);
drawQuad(tex);
}


Re: Reading bzipped files

2015-05-02 Thread tom via Digitalmars-d-learn

On Saturday, 2 May 2015 at 13:50:10 UTC, Per Nordlöw wrote:
Have anybody cooked up any range adaptors for on the fly 
decoding of bzipped files? Preferable compatible with phobos 
standard interfaces for file io.


Should probably be built on top of

http://code.dlang.org/packages/bzip2


I use Stephan Schiffels' code from

http://forum.dlang.org/thread/djhteyhpcnaskpabx...@forum.dlang.org?page=2



Re: What wrong?

2015-05-02 Thread Dennis Ritchie via Digitalmars-d-learn

On Saturday, 2 May 2015 at 19:38:01 UTC, Fyodor Ustinov wrote:

I see it by the lack of 42. :)

But why is this receive breaks down?


Please report it to the (D)evelopers :)
https://issues.dlang.org/


Re: stdx.data.json - enhancement suggestions

2015-05-02 Thread Ilya Yaroshenko via Digitalmars-d-learn
You can use std.json or create a TrustedInputRangeShell template 
with @trusted methods:



struct TrustedInputRangeShell(Range)
{
Range* data;

auto front() @property @trusted { return (*data).front; }

//etc
}

But I am not sure about other parseJSONStream bugs.


How to I translate this C++ structure/array

2015-05-02 Thread WhatMeWorry via Digitalmars-d-learn


This is probably trivial but I just can't make a breakthrough.

I've got C++ code using glm like so:

   struct Vertex
{
glm::vec3 position;
glm::vec3 color;
}

Vertex triangle[] =
{
glm::vec3(0.0,  1.0, 0.0),
glm::vec3(1.0,  0.0, 0.0),   // red

// code removed for brevity.
};


And in D I'm using the gl3n library instead of glm.


struct Vertex
{
vec3 position;
vec3 color;
}

Vertex triangle[6] =
[
vec3(0.0,  1.0, 0.0),
vec3(1.0,  0.0, 0.0),   // red

   // code removed for brevity.
];

I keep getting the following:

MeGlWindow.d(171): Error: cannot implicitly convert expression 
(Vector([0.00F, 1.0F, 0.00F])) of type Vector!(float, 
3) to Vertex
MeGlWindow.d(172): Error: cannot implicitly convert expression 
(Vector([1.0F, 0.00F, 0.00F])) of type Vector!(float, 
3) to Vertex
MeGlWindow.d(174): Error: cannot implicitly convert expression 
(Vector([-1F, -1F, 0.00F])) of type Vector!(float, 3) to 
Vertex


why std.process.Pid and std.process.Environment are classes?

2015-05-02 Thread Baz via Digitalmars-d-learn

In std.process, the following declarations:

- final class Pid
- abstract final class environment

could be structs, couldn't they?

Any particular reason behind this choice ?


Re: How to I translate this C++ structure/array

2015-05-02 Thread anonymous via Digitalmars-d-learn

On Saturday, 2 May 2015 at 22:01:10 UTC, WhatMeWorry wrote:

struct Vertex
{
vec3 position;
vec3 color;
}

Vertex triangle[6] =
[
vec3(0.0,  1.0, 0.0),
vec3(1.0,  0.0, 0.0),   // red

   // code removed for brevity.
];

I keep getting the following:

MeGlWindow.d(171): Error: cannot implicitly convert expression 
(Vector([0.00F, 1.0F, 0.00F])) of type 
Vector!(float, 3) to Vertex


You have to be explicit:

Vertex[6] triangle = /* [1] */
[
Vertex(
vec3(0.0,  1.0, 0.0),
vec3(1.0,  0.0, 0.0),   // red
),
...
];

[1] `Vertex triangle[6]` works, but please don't do that.


Parameter storage class 'in' transitive like 'const'?

2015-05-02 Thread Ali Çehreli via Digitalmars-d-learn

We know that 'in' is equivalent to const scope:

  http://dlang.org/function.html#parameters

So, the constness of 'in' is transitive as well, right?

Ali
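(A quick check suggests the answer is yes; the names below are illustrative only. Since 'in' is const scope and const is transitive, the referenced data is read-only inside the function as well:)

```d
// 'in' = const scope, and const is transitive, so the inner
// arrays and their elements are read-only inside f as well.
size_t f(in int[][] a)
{
    // a[0] = [1];    // error: cannot modify const expression
    // a[0][0] = 1;   // error: cannot modify const expression
    return a.length;  // reading is fine
}

void main()
{
    assert(f([[1, 2], [3]]) == 2);
}
```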


Re: What wrong?

2015-05-02 Thread Fyodor Ustinov via Digitalmars-d-learn

On Saturday, 2 May 2015 at 19:13:45 UTC, Dennis Ritchie wrote:

On Saturday, 2 May 2015 at 02:51:52 UTC, Fyodor Ustinov wrote:

Simple code:

http://pastebin.com/raw.php?i=7jVeMFXQ

This code works compiled by DMD v2.066.1 and LDC2 (0.15.1) 
based on DMD v2.066.1 and LLVM 3.5.0.


$ ./z
TUQLUE
42
11

Compiled by DMD v2.067.1 the program crashes:
$ ./aa
TUQLUE
Segmentation fault

What I'm doing wrong?



I think the problem is in these lines:

-
receive(
(supervisorAnswer a) => r = a.ret
);

Partially it works :)


I see it by the lack of 42. :)

But why is this receive breaks down?


Re: Ada to D - an array for storing values of each of the six bits which are sufficient

2015-05-02 Thread Dennis Ritchie via Digitalmars-d-learn

On Friday, 1 May 2015 at 23:22:31 UTC, Dennis Ritchie wrote:
Maybe someone will show a primitive packed array. I really can 
not imagine how to do it on D.


Maybe you can somehow use bitfields. While what happened is 
something like this:


-
import std.stdio, std.bitmanip;

struct A
{
mixin(bitfields!(
    uint, "bit",   6,
    bool, "flag1", 1,
    bool, "flag2", 1));
}


void main() {

A obj;

int[] a;

foreach (e; 0 .. 64) {
obj.bit = e;
a ~= e;
}

writeln(a);

// obj.bit = 64; // Value is greater than
//the maximum value of bitfield 'bit'
}
-
http://ideone.com/Opr4zM


Re: How to I translate this C++ structure/array

2015-05-02 Thread WhatMeWorry via Digitalmars-d-learn

On Saturday, 2 May 2015 at 22:36:29 UTC, anonymous wrote:

On Saturday, 2 May 2015 at 22:01:10 UTC, WhatMeWorry wrote:

   struct Vertex
   {
   vec3 position;
   vec3 color;
   }

   Vertex triangle[6] =
   [
   vec3(0.0,  1.0, 0.0),
   vec3(1.0,  0.0, 0.0),   // red

  // code removed for brevity.
   ];

I keep getting the following:

MeGlWindow.d(171): Error: cannot implicitly convert expression 
(Vector([0.00F, 1.0F, 0.00F])) of type 
Vector!(float, 3) to Vertex


You have to be explicit:

Vertex[6] triangle = /* [1] */
[
Vertex(
vec3(0.0,  1.0, 0.0),
vec3(1.0,  0.0, 0.0),   // red
),
...
];

[1] `Vertex triangle[6]` works, but please don't do that.


Thanks. I assume you would prefer I use `Vertex[] triangle`, but 
with OpenGL calls dynamic arrays don't work. But maybe that is 
another question for another time.