Re: Compile time initialization of AA
On 23/03/2018 at 23:43, Xavier Bigand wrote:

I am trying to initialize a global immutable associative array of structs, but it doesn't compile. I get the following error message: "Error: not an associative array initializer". As I really need to store my data for compile-time purposes, if it can't be done with an AA I'll use arrays instead. Here is my code:

struct EntryPoint
{
    string moduleName;
    string functionName;
    bool   beforeForwarding = false;
}

immutable EntryPoint[string] entryPoints = [
    "wglDescribePixelFormat": {moduleName: "opengl32.forward_initialization", functionName: "wglDescribePixelFormat"}
];

I finally found something that works great:

enum entryPoints = [
    "wglChoosePixelFormat":   EntryPoint("opengl32.forward_initialization", "client_wglChoosePixelFormat"),
    "wglDescribePixelFormat": EntryPoint("opengl32.forward_initialization", "client_wglDescribePixelFormat")
];

I am able to use this enum like an AA.
Compile time initialization of AA
I am trying to initialize a global immutable associative array of structs, but it doesn't compile. I get the following error message: "Error: not an associative array initializer". As I really need to store my data for compile-time purposes, if it can't be done with an AA I'll use arrays instead. Here is my code:

struct EntryPoint
{
    string moduleName;
    string functionName;
    bool   beforeForwarding = false;
}

immutable EntryPoint[string] entryPoints = [
    "wglDescribePixelFormat": {moduleName: "opengl32.forward_initialization", functionName: "wglDescribePixelFormat"}
];
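The reason this fails is that AA literals are built at run time and cannot be placed in the data segment, so a module-level immutable AA cannot be statically initialized. Besides the enum workaround shown in the reply, a common alternative is to fill the immutable AA once in a module constructor. A minimal sketch, assuming nothing beyond the struct from the question:

```d
struct EntryPoint
{
    string moduleName;
    string functionName;
    bool   beforeForwarding = false;
}

// The AA itself is declared immutable at module scope...
immutable EntryPoint[string] entryPoints;

// ...and filled once at program start-up, before main() runs.
// Module constructors are allowed to initialize immutable globals.
shared static this()
{
    entryPoints = [
        "wglDescribePixelFormat": EntryPoint("opengl32.forward_initialization",
                                             "wglDescribePixelFormat"),
    ];
}

unittest
{
    assert("wglDescribePixelFormat" in entryPoints);
    assert(entryPoints["wglDescribePixelFormat"].moduleName
           == "opengl32.forward_initialization");
}
```

Unlike the enum, this builds the AA once instead of re-evaluating the literal at every use site; the trade-off is that lookups are no longer usable in static if or other compile-time contexts.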
Re: CTFE and -betterC
On 16/03/2018 at 22:58, Xavier Bigand wrote: On 15/03/2018 at 01:09, Flamaros wrote: On Wednesday, 14 March 2018 at 01:17:54 UTC, rikki cattermole wrote: You will still need DllMain, that is a platform requirement.

I am not sure about that, because DllAnalyser doesn't see it in the opengl32.dll from the system32 directory, and the documentation indicates that it is optional. I finally chose to put the entry point generation in a sub-project that writes them into a .d file; that way it is easier to make the CTFE work, and it will be much better for debugging and compilation time. I also have a few other questions:
- Is it a bug that ctRegex doesn't work with the return of allMembers?
- What is the status of the new CTFE engine?
- Will CTFE be able to write files, or expose a way to see the resulting generated code for debugging purposes?
- Is there a reason why CTFE is impacted by the -betterC option?

I actually found token strings, but I can't figure out how to cascade them. Is it even possible?
I tried things like this:

enum loadSystemSymbolsCode = q{
    version (Windows)
    {
        extern (Windows) void loadSytemSymbols()
        {
            import core.sys.windows.windows;

            immutable string dllFilePath = "C:/Windows/System32/opengl32.dll";
            auto hModule = LoadLibraryEx(dllFilePath, null, 0);

            if (hModule == null)
            {
                return;
            }
            writeln(dllFilePath ~ " loaded.");
            "%SYSTEM_BINDINGS%"
        }
    }
};

enum moduleCode = q{
    module api_entry;

    import std.stdio : writeln;
    import derelict.util.wintypes;

    export extern (C)
    {
        mixin(loadSystemSymbolsCode);
    }
};

string getLoadSystemSymbolsCode(string bindinsCode)()
{
    return loadSystemSymbolsCode.replace("%SYSTEM_BINDINGS%", bindinsCode);
}

string getModuleCode(string loadSystemSymbolsCode)()
{
    return moduleCode.replace("%LOAD_SYSTEM_SYMBOLS%", loadSystemSymbolsCode);
}

void main()
{
    import std.stdio : File;

    auto file = File("../opengl32/src/api_entry.d", "w");
    file.writeln(
        getModuleCode!(
            getLoadSystemSymbolsCode!("test;")())
    );
}

Is there some material for learning to do this kind of thing with CTFE? I feel a little stupid; I don't need the " in the token string with the %WordToReplace%. So I think the magic will happen.
Re: CTFE and -betterC
On 15/03/2018 at 01:09, Flamaros wrote: On Wednesday, 14 March 2018 at 01:17:54 UTC, rikki cattermole wrote: You will still need DllMain, that is a platform requirement.

I am not sure about that, because DllAnalyser doesn't see it in the opengl32.dll from the system32 directory, and the documentation indicates that it is optional. I finally chose to put the entry point generation in a sub-project that writes them into a .d file; that way it is easier to make the CTFE work, and it will be much better for debugging and compilation time. I also have a few other questions:
- Is it a bug that ctRegex doesn't work with the return of allMembers?
- What is the status of the new CTFE engine?
- Will CTFE be able to write files, or expose a way to see the resulting generated code for debugging purposes?
- Is there a reason why CTFE is impacted by the -betterC option?

I actually found token strings, but I can't figure out how to cascade them. Is it even possible? I tried things like this:

enum loadSystemSymbolsCode = q{
    version (Windows)
    {
        extern (Windows) void loadSytemSymbols()
        {
            import core.sys.windows.windows;

            immutable string dllFilePath = "C:/Windows/System32/opengl32.dll";
            auto hModule = LoadLibraryEx(dllFilePath, null, 0);

            if (hModule == null)
            {
                return;
            }
            writeln(dllFilePath ~ " loaded.");
            "%SYSTEM_BINDINGS%"
        }
    }
};

enum moduleCode = q{
    module api_entry;

    import std.stdio : writeln;
    import derelict.util.wintypes;

    export extern (C)
    {
        mixin(loadSystemSymbolsCode);
    }
};

string getLoadSystemSymbolsCode(string bindinsCode)()
{
    return loadSystemSymbolsCode.replace("%SYSTEM_BINDINGS%", bindinsCode);
}

string getModuleCode(string loadSystemSymbolsCode)()
{
    return moduleCode.replace("%LOAD_SYSTEM_SYMBOLS%", loadSystemSymbolsCode);
}

void main()
{
    import std.stdio : File;

    auto file = File("../opengl32/src/api_entry.d", "w");
    file.writeln(
        getModuleCode!(
            getLoadSystemSymbolsCode!("test;")())
    );
}

Is there some material for learning to do this kind of thing with CTFE?
CTFE and -betterC
As I am trying to build a DLL that acts exactly like one written in C, I am trying to compile my code with the -betterC option, so I would not need the DllMain function. I am not sure I am using the best syntax for my CTFE function to make it work with -betterC and remain maintainable afterwards. In particular I have the following issues (my code is at the end of the message):
* the startsWith function doesn't compile with -betterC
* I can't put static before the first foreach
* I don't really know how to factorize small expressions (oglFunctionName, oglFunctionName, ...)
* how can I make the code I generate less polluted by conditions and iterations? Certainly by naming some previously computed parts with alias?
* after that, how can I see the generated result and debug it?
Thank you in advance for any help.

module api_entry;

import std.stdio : writeln;
import std.algorithm.searching;
import missing_ogl;
import std.traits;
import std.meta;

static string implementFunctionsOf(string Module, bool removeARB = false)()
{
    import std.traits;
    import std.regex;
    import std.conv;

    mixin("import " ~ Module ~ ";");

    string res;
    res ~= "extern (C) {\n";
    foreach (name; __traits(allMembers, mixin(Module)))
    {
        static if (name.startsWith("da_") && mixin("isCallable!" ~ name))
        {
            alias derelict_oglFunctionName = Alias!(name[3..$]);
            alias oglFunctionName = derelict_oglFunctionName;
            alias returnType = Alias!(ReturnType!(mixin(name)).stringof);
            alias parametersType = Alias!(Parameters!(mixin(name)).stringof);

            static if (removeARB && name.endsWith("ARB"))
                oglFunctionName = oglFunctionName[0..$ - 3];

            res ~= "export\n" ~ returnType ~ "\n" ~ oglFunctionName ~ parametersType ~ "\n"
                ~ "{\n"
                ~ "writeln(\"" ~ oglFunctionName ~ " is not specialized\");\n";

            // Forward the call to the driver (with arguments and return the
            // value of the forward directly)
            res ~= "import " ~ Module ~ ";";

            // For a reason I do not understand the compiler can not
            // compile with returnType
            static if (ReturnType!(mixin(name)).stringof == "int function()")
                res ~= "alias extern (C) " ~ returnType ~ " returnType;\n"
                    ~ "return cast(returnType) ";
            else if (returnType != "void")
                res ~= "return ";

            res ~= "" ~ Module ~ "." ~ derelict_oglFunctionName ~ "(";
            foreach (i, parameter; Parameters!(mixin(name)))
            {
                if (i > 0)
                    res ~= ", ";
                // We use the default parameter name variable "_param_x" where x
                // is the index of the parameter starting from 0
                res ~= "_param_" ~ to!string(i);
            }
            res ~= ");";
            res ~= "}\n";
        }
    }
    res ~= "}\n";
    return res;
}

mixin(implementFunctionsOf!("derelict.opengl3.functions"));
mixin(implementFunctionsOf!("derelict.opengl3.deprecatedFunctions"));
Re: How give a module to a CTFE function
On 12/03/2018 at 23:28, Xavier Bigand wrote: On 12/03/2018 at 23:24, Xavier Bigand wrote: On 12/03/2018 at 22:30, arturg wrote: On Monday, 12 March 2018 at 21:00:07 UTC, Xavier Bigand wrote:

Hi, I have a CTFE function that I want to make more generic by giving it a module as a parameter. My actual code looks like:

mixin(implementFunctionsOf());

string implementFunctionsOf()
{
    import std.traits;
    string res;
    foreach (name; __traits(allMembers, myHardCodedModule))
    {
    }
    return res;
}

I tried many things but I can't figure out the type of the parameter I should use for the function implementFunctionsOf.

You can use an alias or a string:

void fun(alias mod, string mod2)()
{
    foreach (m; __traits(allMembers, mod))
        pragma(msg, m);
    foreach (m; __traits(allMembers, mixin(mod2)))
        pragma(msg, m);
}

void main()
{
    import std.stdio;
    fun!(std.stdio, "std.stdio");
}

I tried both without success. Here is my full code:

module api_entry;

import std.stdio : writeln;
import std.algorithm.searching;
import derelict.opengl3.functions;
import std.traits;

string implementFunctionsOf(string mod)
{
    import std.traits;
    string res;
    static foreach (name; __traits(allMembers, mixin(mod)))
    {
        static if (mixin("isCallable!" ~ name) && name.startsWith("da_"))
        {
            string oglFunctionName = name[3..$];
            string returnType = ReturnType!(mixin(name)).stringof;
            string parametersType = Parameters!(mixin(name)).stringof;

            res ~= "export\n"
                ~ "extern (C)\n"
                ~ returnType ~ "\n"
                ~ oglFunctionName ~ parametersType ~ "\n"
                ~ "{\n"
                ~ "    writeln(\"" ~ oglFunctionName ~ "\");\n";
            static if (ReturnType!(mixin(name)).stringof != "void")
            {
                res ~= "    " ~ returnType ~ " result;\n"
                    ~ "    return result;";
            }
            res ~= "}\n";
        }
    }
    return res;
}

mixin(implementFunctionsOf("derelict.opengl3.functions"));

As a string I get the following error:
..\src\api_entry.d(16): Error: variable `mod` cannot be read at compile time
..\src\api_entry.d(48): called from here: `implementFunctionsOf("derelict.opengl3.functions")`
I also tried to make implementFunctionsOf a mixin template. I forgot to mention that I don't have a main, because I am trying to create an opengl32.dll. This is why I already have a mixin to inject the function definitions into the root scope. OK, it works with the alias; I didn't see the last () in the implementFunctionsOf prototype. Thanks a lot.
Re: How give a module to a CTFE function
On 12/03/2018 at 23:24, Xavier Bigand wrote: On 12/03/2018 at 22:30, arturg wrote: On Monday, 12 March 2018 at 21:00:07 UTC, Xavier Bigand wrote:

Hi, I have a CTFE function that I want to make more generic by giving it a module as a parameter. My actual code looks like:

mixin(implementFunctionsOf());

string implementFunctionsOf()
{
    import std.traits;
    string res;
    foreach (name; __traits(allMembers, myHardCodedModule))
    {
    }
    return res;
}

I tried many things but I can't figure out the type of the parameter I should use for the function implementFunctionsOf.

You can use an alias or a string:

void fun(alias mod, string mod2)()
{
    foreach (m; __traits(allMembers, mod))
        pragma(msg, m);
    foreach (m; __traits(allMembers, mixin(mod2)))
        pragma(msg, m);
}

void main()
{
    import std.stdio;
    fun!(std.stdio, "std.stdio");
}

I tried both without success. Here is my full code:

module api_entry;

import std.stdio : writeln;
import std.algorithm.searching;
import derelict.opengl3.functions;
import std.traits;

string implementFunctionsOf(string mod)
{
    import std.traits;
    string res;
    static foreach (name; __traits(allMembers, mixin(mod)))
    {
        static if (mixin("isCallable!" ~ name) && name.startsWith("da_"))
        {
            string oglFunctionName = name[3..$];
            string returnType = ReturnType!(mixin(name)).stringof;
            string parametersType = Parameters!(mixin(name)).stringof;

            res ~= "export\n"
                ~ "extern (C)\n"
                ~ returnType ~ "\n"
                ~ oglFunctionName ~ parametersType ~ "\n"
                ~ "{\n"
                ~ "    writeln(\"" ~ oglFunctionName ~ "\");\n";
            static if (ReturnType!(mixin(name)).stringof != "void")
            {
                res ~= "    " ~ returnType ~ " result;\n"
                    ~ "    return result;";
            }
            res ~= "}\n";
        }
    }
    return res;
}

mixin(implementFunctionsOf("derelict.opengl3.functions"));

As a string I get the following error:
..\src\api_entry.d(16): Error: variable `mod` cannot be read at compile time
..\src\api_entry.d(48): called from here: `implementFunctionsOf("derelict.opengl3.functions")`
I also tried to make implementFunctionsOf a mixin template.
I forgot to mention that I don't have a main, because I am trying to create an opengl32.dll. This is why I already have a mixin to inject the function definitions into the root scope.
Re: How give a module to a CTFE function
On 12/03/2018 at 22:30, arturg wrote: On Monday, 12 March 2018 at 21:00:07 UTC, Xavier Bigand wrote:

Hi, I have a CTFE function that I want to make more generic by giving it a module as a parameter. My actual code looks like:

mixin(implementFunctionsOf());

string implementFunctionsOf()
{
    import std.traits;
    string res;
    foreach (name; __traits(allMembers, myHardCodedModule))
    {
    }
    return res;
}

I tried many things but I can't figure out the type of the parameter I should use for the function implementFunctionsOf.

You can use an alias or a string:

void fun(alias mod, string mod2)()
{
    foreach (m; __traits(allMembers, mod))
        pragma(msg, m);
    foreach (m; __traits(allMembers, mixin(mod2)))
        pragma(msg, m);
}

void main()
{
    import std.stdio;
    fun!(std.stdio, "std.stdio");
}

I tried both without success. Here is my full code:

module api_entry;

import std.stdio : writeln;
import std.algorithm.searching;
import derelict.opengl3.functions;
import std.traits;

string implementFunctionsOf(string mod)
{
    import std.traits;
    string res;
    static foreach (name; __traits(allMembers, mixin(mod)))
    {
        static if (mixin("isCallable!" ~ name) && name.startsWith("da_"))
        {
            string oglFunctionName = name[3..$];
            string returnType = ReturnType!(mixin(name)).stringof;
            string parametersType = Parameters!(mixin(name)).stringof;

            res ~= "export\n"
                ~ "extern (C)\n"
                ~ returnType ~ "\n"
                ~ oglFunctionName ~ parametersType ~ "\n"
                ~ "{\n"
                ~ "    writeln(\"" ~ oglFunctionName ~ "\");\n";
            static if (ReturnType!(mixin(name)).stringof != "void")
            {
                res ~= "    " ~ returnType ~ " result;\n"
                    ~ "    return result;";
            }
            res ~= "}\n";
        }
    }
    return res;
}

mixin(implementFunctionsOf("derelict.opengl3.functions"));

As a string I get the following error:
..\src\api_entry.d(16): Error: variable `mod` cannot be read at compile time
..\src\api_entry.d(48): called from here: `implementFunctionsOf("derelict.opengl3.functions")`
I also tried to make implementFunctionsOf a mixin template.
How give a module to a CTFE function
Hi, I have a CTFE function that I want to make more generic by giving it a module as a parameter. My actual code looks like:

mixin(implementFunctionsOf());

string implementFunctionsOf()
{
    import std.traits;
    string res;
    foreach (name; __traits(allMembers, myHardCodedModule))
    {
    }
    return res;
}

I tried many things but I can't figure out the type of the parameter I should use for the function implementFunctionsOf.
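As the replies in this thread point out, the answer boils down to taking the module as an alias template parameter rather than a run-time value. A minimal self-contained sketch (std.ascii is used here purely as an example module; the function name is made up for illustration):

```d
import std.ascii; // any module works; imported so the symbol is visible below

// The module is passed as an alias template parameter, so __traits can
// inspect it at compile time; no run-time value is ever read.
string listMembersOf(alias mod)()
{
    string res;
    foreach (name; __traits(allMembers, mod))
        res ~= name ~ "\n";
    return res;
}

// Forcing evaluation at compile time with an enum:
enum members = listMembersOf!(std.ascii)();

unittest
{
    import std.algorithm.searching : canFind;
    assert(members.canFind("isDigit")); // std.ascii really has this member
}
```

Taking the parameter as `string mod` instead fails with "variable `mod` cannot be read at compile time" because a regular function parameter is a run-time value even when the function is CTFE-able; only template parameters (alias or value) are available to `__traits` and `mixin` at expansion time.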
Re: Need help to compile code with traits
On 05/02/2017 at 18:32, Basile B. wrote: On Sunday, 5 February 2017 at 14:59:04 UTC, Xavier Bigand wrote:

Hi, I am trying to create an allocator that doesn't use the GC, and I have issues with the initialization of members before calling the constructor. Here is my actual code:

mixin template NogcAllocator(T)
{
    static T nogcNew(T, Args...)(Args args) @nogc
    {
        import core.stdc.stdlib : malloc;
        import std.traits;

        T instance;
        instance = cast(T) malloc(__traits(classInstanceSize, T));
        foreach (string member; __traits(allMembers, T))
        {
            static if (isType!(__traits(getMember, T, member)))
                __traits(getMember, instance, member) = typeof(__traits(getMember, T, member)).init;
        }
        instance.__ctor(args);
        return instance;
    }

    static void nogcDelete(T)(T instance) @nogc
    {
        import core.stdc.stdlib : free;
        instance.__dtor();
        free(instance);
    }
}

unittest
{
    struct Dummy
    {
        int field1 = 10;
        int field2 = 11;
    }

    class MyClass
    {
        mixin NogcAllocator!MyClass;

        int a = 0;
        int[] b = [1, 2, 3];
        Dummy c = Dummy(4, 5);
        int d = 6;

        this() @nogc {}
        this(int val) @nogc { d = val; }
    }

    MyClass first = MyClass.nogcNew!MyClass();
    MyClass second = MyClass.nogcNew!MyClass(7);
    assert(first.a == 0);
    assert(first.b == [1, 2, 3]);
    assert(first.c.field1 == 4);
    assert(first.d == 6);
    assert(second.c.field1 == 4);
    assert(second.d == 7);
}

And the compilation errors:
..\src\core\nogc_memory.d(16): Error: no property 'this' for type 'core.nogc_memory.__unittestL39_3.MyClass'
..\src\core\nogc_memory.d(17): Error: type Monitor is not an expression
..\src\core\nogc_memory.d(63): Error: template instance core.nogc_memory.__unittestL39_3.MyClass.NogcAllocator!(MyClass).nogcNew!(MyClass) error instantiating
..\src\core\nogc_memory.d(16): Error: no property 'this' for type 'core.nogc_memory.__unittestL39_3.MyClass'
..\src\core\nogc_memory.d(17): Error: type Monitor is not an expression
..\src\core\nogc_memory.d(64): Error: template instance core.nogc_memory.__unittestL39_3.MyClass.NogcAllocator!(MyClass).nogcNew!(MyClass, int) error instantiating

I don't understand my mistake with the getMember and isType traits. And I am curious about what the Monitor is.

The whole thing you do to initialize could be replaced by a copy of the initializer, which is what emplace does:

static T nogcNew(T, Args...)(Args args) @nogc
{
    import core.stdc.stdlib : malloc;
    import std.traits, std.meta;

    T instance;
    enum s = __traits(classInstanceSize, T);
    instance = cast(T) malloc(s);
    (cast(void*) instance)[0..s] = typeid(T).initializer[];
    instance.__ctor(args);
    return instance;
}

Nice, thank you for that, it is much more elegant ;-)

Your nogcDelete() is bug-prone & leaky. Certainly, I didn't think a lot about it for the moment.
- use __xdtor, which also calls the __dtor injected by the mixin.
- even if you do so, __xdtors are not inherited!! Instead, dtors in parent classes are called by destroy() directly. Currently, what I do to simulate inherited destructors is to mix this in for each new generation:

mixin template inheritedDtor()
{
private:
    import std.traits : BaseClassesTuple;

    alias B = BaseClassesTuple!(typeof(this));
    enum hasDtor = __traits(hasMember, typeof(this), "__dtor");

    static if (hasDtor && !__traits(isSame, __traits(parent, typeof(this).__dtor), typeof(this)))
        enum inDtor = true;
    else
        enum inDtor = false;

    public void callInheritedDtor(classT = typeof(this))()
    {
        import std.meta : aliasSeqOf;
        import std.range : iota;

        foreach (i; aliasSeqOf!(iota(0, B.length)))
            static if (__traits(hasMember, B[i], "__xdtor"))
            {
                mixin("this." ~ B[i].stringof ~ ".__xdtor;");
                break;
            }
    }

    static if (!hasDtor || inDtor)
        public ~this() { callInheritedDtor(); }
}

When a dtor is implemented, it has to call callInheritedDtor() at the end of its implementation. Thank you a lot for this great help.
Need help to compile code with traits
Hi, I am trying to create an allocator that doesn't use the GC, and I have issues with the initialization of members before calling the constructor. Here is my actual code:

mixin template NogcAllocator(T)
{
    static T nogcNew(T, Args...)(Args args) @nogc
    {
        import core.stdc.stdlib : malloc;
        import std.traits;

        T instance;
        instance = cast(T) malloc(__traits(classInstanceSize, T));
        foreach (string member; __traits(allMembers, T))
        {
            static if (isType!(__traits(getMember, T, member)))
                __traits(getMember, instance, member) = typeof(__traits(getMember, T, member)).init;
        }
        instance.__ctor(args);
        return instance;
    }

    static void nogcDelete(T)(T instance) @nogc
    {
        import core.stdc.stdlib : free;
        instance.__dtor();
        free(instance);
    }
}

unittest
{
    struct Dummy
    {
        int field1 = 10;
        int field2 = 11;
    }

    class MyClass
    {
        mixin NogcAllocator!MyClass;

        int a = 0;
        int[] b = [1, 2, 3];
        Dummy c = Dummy(4, 5);
        int d = 6;

        this() @nogc {}
        this(int val) @nogc { d = val; }
    }

    MyClass first = MyClass.nogcNew!MyClass();
    MyClass second = MyClass.nogcNew!MyClass(7);
    assert(first.a == 0);
    assert(first.b == [1, 2, 3]);
    assert(first.c.field1 == 4);
    assert(first.d == 6);
    assert(second.c.field1 == 4);
    assert(second.d == 7);
}

And the compilation errors:
..\src\core\nogc_memory.d(16): Error: no property 'this' for type 'core.nogc_memory.__unittestL39_3.MyClass'
..\src\core\nogc_memory.d(17): Error: type Monitor is not an expression
..\src\core\nogc_memory.d(63): Error: template instance core.nogc_memory.__unittestL39_3.MyClass.NogcAllocator!(MyClass).nogcNew!(MyClass) error instantiating
..\src\core\nogc_memory.d(16): Error: no property 'this' for type 'core.nogc_memory.__unittestL39_3.MyClass'
..\src\core\nogc_memory.d(17): Error: type Monitor is not an expression
..\src\core\nogc_memory.d(64): Error: template instance core.nogc_memory.__unittestL39_3.MyClass.NogcAllocator!(MyClass).nogcNew!(MyClass, int) error instantiating

I don't understand my mistake with the getMember and isType traits.
And I am curious about what the Monitor is.
Re: @nogc and opengl errors check
On 21/01/2017 at 13:24, Jerry wrote: On Friday, 20 January 2017 at 22:47:17 UTC, Xavier Bigand wrote: Hi, I am writing some code with OpenGL commands that I want to check in debug builds, so I am using the checkgl function (from the glamour lib). The issue is that checkgl throws exceptions and can't be @nogc. I tried to use std.experimental.logger in place of exceptions, but it doesn't work either. I mostly want to be able to check OpenGL errors only in debug builds, in a way that can make the debugger break. On another note, as I will certainly have to log some events (even in release), I would appreciate the logger being usable in @nogc functions, maybe with allocators?

Don't use checkgl, it just bloats your code and there's an actual debug feature in OpenGL now. It provides more information than just an enum as well. So when a function has multiple errors that use the same enum, you can actually know what the error was rather than guessing. https://www.khronos.org/opengl/wiki/Debug_Output

I have never used this API, as it doesn't work on older devices, but I may try to use it when available instead of glGetError. Thank you for reminding me about this API.
@nogc and opengl errors check
Hi, I am writing some code with OpenGL commands that I want to check in debug builds, so I am using the checkgl function (from the glamour lib). The issue is that checkgl throws exceptions and can't be @nogc. I tried to use std.experimental.logger in place of exceptions, but it doesn't work either. I mostly want to be able to check OpenGL errors only in debug builds, in a way that can make the debugger break. On another note, as I will certainly have to log some events (even in release), I would appreciate the logger being usable in @nogc functions, maybe with allocators?
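One way to get a breakpoint-friendly, @nogc error check is to skip exceptions entirely and halt in debug builds. The sketch below is an assumption-laden illustration, not the glamour API: glGetError/GL_NO_ERROR are stubbed here (they would come from your GL binding) so the snippet is self-contained.

```d
// Stubs standing in for the real GL binding (assumptions for this sketch):
enum GL_NO_ERROR = 0;
uint glGetError() @nogc nothrow { return GL_NO_ERROR; }

void checkGL(string file = __FILE__, int line = __LINE__) @nogc nothrow
{
    // Code inside a debug block is exempt from @nogc/pure/nothrow checks,
    // so richer logging could go here; fprintf needs no GC in any case.
    debug
    {
        import core.stdc.stdio : fprintf, stderr;
        const err = glGetError();
        if (err != GL_NO_ERROR)
        {
            fprintf(stderr, "GL error 0x%x at %s:%d\n", err, file.ptr, line);
            assert(0); // halts, so an attached debugger breaks right here
        }
    }
}

unittest
{
    checkGL(); // with the stub there is no error, so this is a no-op
}
```

In release builds (no -debug flag) the whole body compiles away, which matches the "check only in debug" requirement; the real binding's glGetError would simply replace the stub.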
Re: Dynamic arrays with static initialization and maybe a bug with sizeof
On 13/12/2016 at 23:44, Johan Engelen wrote: On Tuesday, 13 December 2016 at 21:27:57 UTC, Xavier Bigand wrote: Hi, I have the following code snippet:

void set()
{
    GLfloat[] data = [
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
         0.0f,  1.0f, 0.0f,
    ];
    glBindVertexArray(mVAO);
    glBufferData(GL_ARRAY_BUFFER, data.sizeof, cast(void*) data, GL_STATIC_DRAW);
}

And I ask myself about the memory management of data: as my data array is statically initialized, is it allocated on the stack?

Note that if you can define the data array as immutable, you save on heap memory allocation + copying (LDC, from -O0): https://godbolt.org/g/CNrZR7 -Johan

Thank you for the tips.
Using Nsight with VisualD
Hi, I am trying to use Nsight with VisualD, but it tells me that it cannot start the program "". It seems that it does not use the right project property to retrieve the path of my generated binary. I do not know if it is an issue in VisualD directly, or if DUB doesn't correctly generate the VisualD project and misses filling in a field. Does someone use Nsight with VisualD?
Re: Dynamic arrays with static initialization and maybe a bug with sizeof
On 13/12/2016 at 22:39, ag0aep6g wrote: On 12/13/2016 10:27 PM, Xavier Bigand wrote:

void set()
{
    GLfloat[] data = [
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
         0.0f,  1.0f, 0.0f,
    ];
    glBindVertexArray(mVAO);
    glBufferData(GL_ARRAY_BUFFER, data.sizeof, cast(void*) data, GL_STATIC_DRAW);
}

And I ask myself about the memory management of data: as my data array is statically initialized, is it allocated on the stack?

data is a function-local variable, so there is no static initialization going on. The array is allocated on the heap at run-time.

On another note, I have a strange behavior with sizeof, which returns 8 and not 36 (9 * 4) as I am expecting.

sizeof returns the size of the dynamic array "struct", which is a pointer and a length. Instead of sizeof, use .length and multiply with the element type's .sizeof: data.length * GLfloat.sizeof

Seems logical; I just read the wrong table in the documentation, because the properties listed there say static and dynamic arrays are contiguous. Thank you.
Dynamic arrays with static initialization and maybe a bug with sizeof
Hi, I have the following code snippet:

void set()
{
    GLfloat[] data = [
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
         0.0f,  1.0f, 0.0f,
    ];
    glBindVertexArray(mVAO);
    glBufferData(GL_ARRAY_BUFFER, data.sizeof, cast(void*) data, GL_STATIC_DRAW);
}

And I ask myself about the memory management of data: as my data array is statically initialized, is it allocated on the stack? On another note, I have a strange behavior with sizeof, which returns 8 and not 36 (9 * 4) as I am expecting.
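The sizeof surprise above can be checked directly: on a slice, .sizeof measures the slice handle (pointer + length), not the payload. A small sketch, assuming GLfloat is the usual alias for float:

```d
alias GLfloat = float; // assumption: the standard OpenGL typedef

void main()
{
    GLfloat[] data = [
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
         0.0f,  1.0f, 0.0f,
    ];

    // The slice "struct" is exactly a pointer plus a length:
    static assert(data.sizeof == (void*).sizeof + size_t.sizeof);

    // The payload size must be computed from the length:
    assert(data.length * GLfloat.sizeof == 36); // 9 elements * 4 bytes

    // So for glBufferData, the size argument would be
    // data.length * GLfloat.sizeof, and the pointer data.ptr.
}
```

On a 32-bit target the handle is 8 bytes (matching the 8 the question observed), 16 bytes on 64-bit; the payload size is the same either way.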
Question about DUB
Hi, I am using DUB with the SDL language and I have two questions: 1. How can I add text files to my project? I want to add shaders to my Visual Studio project. 2. How can I make a compiler option depend on the platform and the debug mode at the same time? Thanks.
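For (1), DUB's copyFiles directive ships resource files next to the built binary; for (2), most build settings accept a platform attribute, which can be combined with a build type. A hedged sketch of a dub.sdl fragment (the flag names and paths are placeholders, not taken from any real project; check the DUB package format documentation for the exact settings available):

```sdl
// copy shader sources next to the executable after each build
copyFiles "shaders/*"

// a flag applied only on Windows (platform attribute on dflags)
dflags "-version=UseWGL" platform="windows"

// combining platform + debug: settings inside a buildType block
// apply only when building with that type
buildType "debug" {
    buildOptions "debugMode" "debugInfo"
    dflags "-version=DebugGL" platform="windows"
}
```

Note that copyFiles places the files beside the output binary; making them appear inside the generated VisualD project is a separate concern of the project generator.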
Re: Memory usage of dmd
On 10/11/2014 at 20:52, "Marc Schütz" wrote: If your server runs systemd, I would strongly recommend using that instead of a shell script. You can use "Restart=always" or "Restart=on-failure" in the unit file. It also provides socket activation, which will allow you to restart the program without downtime.

Good to know, I wasn't aware systemd could do things like that.
Re: Memory usage of dmd
On 10/11/2014 at 18:17, Etienne wrote: On 2014-11-10 12:02 PM, Xavier Bigand wrote: As far as I know, to have no downtime with vibe we need to be able to build directly on the server where the program runs. Maybe I just need to wait until I have some users to pay for a better server with more memory.

With a low number of users, there's no reason to worry about a 1 second downtime from closing the process and replacing the application file. You should use a bash script to keep the process open though:

# monitor.sh
nohup ./autostart.sh > stdout.log 2> crash.log >/dev/null &

# autostart.sh
while true ; do
    if ! pgrep -f '{processname}' > /dev/null ; then
        sh /home/{mysitefolder}/start.sh
    fi
    sleep 1
done

# start.sh
nohup ./{yourapplication} --uid={user} --gid={group} >> stdout.log 2>> crash.log >> stdout.log &

# install.sh
pkill -f '{processname}'
/bin/cp -rf {yourapplication} /home/{mysitefolder}/

Using a console, run monitor.sh, and the autostart.sh script will re-launch your server through start.sh as a daemon. Checks will be made every second to ensure your server is never down because of a badly placed assert. If you need to replace your server application with an update, run the install.sh script from the folder containing the update.

Thank you for the tips. I'll start from your scripts.
Re: Memory usage of dmd
On 10/11/2014 at 17:41, Etienne wrote: On 2014-11-10 11:32 AM, Xavier Bigand wrote: Are there some options that can help me reduce the memory consumption? As it's for production purposes, I don't think it is a good idea to remove compiler optimizations.

The memory issues are probably related to Diet templates. Yes, I think so too. LDC and GDC won't help. You should definitely work and build on a machine with 4 GB of RAM. The server application could use as little as 8 MB of RAM, but compiling requires a workstation. Perhaps renting an Amazon instance for a few minutes of compilation would be a better idea?

I already have a computer with Linux to build it, so Amazon won't improve the situation. As far as I know, to have no downtime with vibe we need to be able to build directly on the server where the program runs. Maybe I just need to wait until I have some users to pay for a better server with more memory.
Memory usage of dmd
I am developing a web site with vibe, but because I am using a Virtual Private Server I get some memory issues. The server only has 1 GB of memory (> 900 MB free), and it seems I can't compile even a simple static page (70 lines) directly on it. I get the following message when building with dub:

Running dmd...
FAIL ../../../.dub/packages/vibe-d-master/.dub/build/libevent-release-linux.posix-x86_64-dmd_2066-EB47C82EE359A00A02828E314FCE5409/ vibe-d staticLibrary
Error executing command build: Orphan format specifier: %%s failed with exit code %s. This may indicate that the process has run out of memory.

So for the moment I build the web site on a physical machine, where I saw the compilation take around 1.6 GB of memory. Are there some options that can help me reduce the memory consumption? As it's for production purposes, I don't think it is a good idea to remove compiler optimizations. Are gdc or ldc better in this case?
Is it normal that unittests of phobos are executed with my project build?
I get a failure in a unittest in format.d when I build my own project with unittests enabled. I thought importing Phobos headers would not regenerate their unittest modules. Any idea what can cause this issue? I have already completely reinstalled dmd with VisualD.
Re: Working on a library: request for code review
Le 12/06/2014 20:35, Xavier Bigand a écrit : Le 12/06/2014 20:09, Rene Zwanenburg a écrit : On Thursday, 12 June 2014 at 15:46:12 UTC, Mike wrote: On Thursday, 12 June 2014 at 00:20:28 UTC, cal wrote: Might it be worth stitching things together into a proper image processing package? Well I started working on TGA because I was disappointed that no image abstraction is present in Phobos. Go has some imaging APIs and I think D would benefit from having one too (out of the box). Would I work on std.image? Sure. Best, Mike I'm looking over your code ATM but I'd like to reply to this first. IMO it's not a good idea to create something like std.image. The std lib should ideally never have breaking changes, so it's easy to get stuck with a sub optimal API or become increasingly hard to maintain. We have Dub. Better keep the std lib lean and maintainable. If you're looking to create an awesome idiomatic D image library you're probably better of building it on top of derelict-devil or derelict-freeimage, then publish it on code.dlang.org The discoverability of good code.dlang.org projects is still limited. Some kind of rating or like system would be useful, but that's another discussion. I think it can be a great advantage to have some things like image management in phobos, cause often dub projects are big and users don't want necessary a complete multimedia library but just small pieces that are standard. For example a GUI library, will allow image manipulations, but not only, and extracting only the image modules can be hard. That the case of our DQuick project. For a project like DQuick, I would be happy to find things like images, geometric algebra,environment analysis,... in phobos. This will allow use to be focused on other things making an UI framework (Windows, events, rendering, resource management,...). Having such minimalistic APIs would be a great benefit IMO, maybe in this case some extending libraries would appear in dub. 
I am thinking another benefit is what you get by default: the assurance of conformance to D standards (portability, safety, quality, support, ...).
Re: Working on a library: request for code review
On 12/06/2014 at 20:09, Rene Zwanenburg wrote: On Thursday, 12 June 2014 at 15:46:12 UTC, Mike wrote: On Thursday, 12 June 2014 at 00:20:28 UTC, cal wrote: Might it be worth stitching things together into a proper image processing package? Well, I started working on TGA because I was disappointed that no image abstraction is present in Phobos. Go has some imaging APIs and I think D would benefit from having one too (out of the box). Would I work on std.image? Sure. Best, Mike I'm looking over your code ATM but I'd like to reply to this first. IMO it's not a good idea to create something like std.image. The std lib should ideally never have breaking changes, so it's easy to get stuck with a suboptimal API or become increasingly hard to maintain. We have Dub. Better to keep the std lib lean and maintainable. If you're looking to create an awesome idiomatic D image library, you're probably better off building it on top of derelict-devil or derelict-freeimage, then publishing it on code.dlang.org. The discoverability of good code.dlang.org projects is still limited. Some kind of rating or like system would be useful, but that's another discussion. I think it can be a great advantage to have some things like image management in Phobos, because dub projects are often big and users don't necessarily want a complete multimedia library, just small pieces that are standard. For example, a GUI library will allow image manipulation, but not only that, and extracting only the image modules can be hard. That is the case for our DQuick project. For a project like DQuick, I would be happy to find things like images, geometric algebra, environment analysis, ... in Phobos. This would allow us to focus on the other parts of making a UI framework (windows, events, rendering, resource management, ...). Having such minimalistic APIs would be a great benefit IMO; maybe in that case some extension libraries would appear on dub.
Re: fix struct API with an interface
On 06/03/2014 at 19:08, Dicebot wrote: On Thursday, 6 March 2014 at 14:28:13 UTC, Flamaros wrote: OK, it's as I thought: final classes and structs are equivalent when calling a method (except for the pointer dereference, but that's minor I think). I don't think there is a real performance problem for us; it's more about learning how to have a clean design. They are equivalent when calling directly through a class instance. When called through an interface pointer, a final class still results in vtable dispatch (because the type system can't know the stored class is final). But if you need to inherit from an existing interface, it is the best thing you have anyway. In my case, as I don't build the OpenGL and DirectX modules into the same binary, they can share the same module and class names. So I'll use them through the class instead of the interface. Thank you for your explanations.
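The distinction above can be sketched in a few lines of D. `IRenderer` and `GLRenderer` are hypothetical names for illustration: a call through a reference of the final class type may be devirtualized, while the same call through an interface reference still goes through the vtable.

```d
import std.stdio;

interface IRenderer { void draw(); }

// GLRenderer is a hypothetical name for illustration.
final class GLRenderer : IRenderer
{
    int calls;
    void draw() { ++calls; }
}

void main()
{
    auto r = new GLRenderer;
    r.draw();          // direct call through the final class reference:
                       // the compiler may devirtualize this
    IRenderer i = r;
    i.draw();          // through the interface: still a vtable dispatch,
                       // since the static type IRenderer cannot prove
                       // the instance is final
    writeln(r.calls);  // 2 -- only the dispatch mechanism differs
}
```

The observable behavior is identical either way; only the generated call sequence differs, which is why Dicebot's advice to keep the interface is reasonable unless profiling says otherwise.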
adding static return?
I thought it could be nice to have a static return. My idea is to remove the unnecessary bracket encapsulation required by some static if statements. It would work like this: module xxx.opengl; import buildSettings; // contains some global constants static if (renderMode == directX) return; ... So there would be no more need to scope the module code and indent it. Is it a good idea?
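For comparison, a minimal sketch of today's idiom, where the entire conditional module body has to sit inside the static if block and be indented (`RenderMode` and `renderMode` are hypothetical stand-ins for the buildSettings constants mentioned above):

```d
// Today's idiom: the whole conditional module body lives inside a
// static if block, one indent level deep. The names below are
// hypothetical stand-ins for the buildSettings constants.
enum RenderMode { openGL, directX }
enum renderMode = RenderMode.openGL;

static if (renderMode == RenderMode.openGL)
{
    // ... the entire OpenGL implementation lives in this block ...
    int glOnlyValue() { return 42; }
}

void main()
{
    assert(glOnlyValue() == 42); // only compiles when the branch is taken
}
```

A "static return" would let the declarations after the guard stay at module scope, un-indented.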
Re: GC for noobs
On 28/02/2014 at 13:22, Szymon Gatner wrote: On Friday, 28 February 2014 at 11:43:58 UTC, Dicebot wrote: On Friday, 28 February 2014 at 11:28:01 UTC, Szymon Gatner wrote: I didn't mean "basic" in the sense of "easy" but in the sense of something that has to be dealt with all the time / is a common requirement. Yes, it needs to be dealt with all the time, but in different ways. The problem is getting sensible defaults. D makes a reasonable assumption that most applications don't actually care about tight bullet-proof resource management and defaults to GC. I may not like it, but it fits the criteria of "built-in resource management" and pretty much shows that it is not as basic as one may think. Not really different though. The actual function call sequence might be different, but the scheme is always the same: acquire resource, allocate, connect, take from pool vs release, deallocate, disconnect, return to pool. All of those fall under resource management - there is a finite amount of a resource, whether it is memory, a system process, a file or a database connection, and it is crucial to system stability that all of them are properly returned / released AND in proper order (which is of course the reverse of acquisition). I had a lot of difficulties too with the release order of resources. Of course, I am coming from C++, in which it's easy to manage. I got some headaches from resource management; maybe a DIP should be written here about a module dedicated to resource management? Or at least a tutorial in the wiki? I finally solved my issues, but I am not happy, because the way it's done seems too error prone (resource leaks). I used to work on mobile devices, where some kinds of resources have to be released as soon as possible. I also don't really like having a lot of applications running on devices that never release resources; it can break the principle of a multi-tasking OS. Just try to launch many Java/C# applications at the same time and you'll have to buy more memory.
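The acquire/release-in-reverse-order scheme described above maps naturally onto struct destructors in D. A minimal sketch (the `Resource` type is hypothetical): locals are destroyed in reverse declaration order when the scope exits, deterministically and without involving the GC.

```d
import std.stdio;

// Hypothetical resource handle: the destructor records the release
// order so the reverse-of-acquisition rule is visible.
struct Resource
{
    string name;
    static string[] releaseLog;      // records the release order
    ~this() { if (name.length) releaseLog ~= name; }
    @disable this(this);             // forbid copies holding the same handle
}

void useResources()
{
    auto file = Resource("file");            // acquired first
    auto conn = Resource("connection");      // acquired second
}   // released here: "connection" first, then "file"

void main()
{
    useResources();
    writeln(Resource.releaseLog);    // ["connection", "file"]
}
```

This is the C++-style RAII pattern the post is missing from GC-managed classes; for class objects, scope(exit) or std.typecons.scoped give similar determinism.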
Re: [Windows & DMD] No callstack when crash with Access violation reading location 0x00000000
On 22/01/2014 at 14:13, Flamaros wrote: On Wednesday, 22 January 2014 at 02:11:02 UTC, TheFlyingFiddle wrote: On Saturday, 18 January 2014 at 19:40:38 UTC, Xavier Bigand wrote: I am not sure the issue really comes from my code, since it works just fine on ATI cards; I do something the Nvidia drivers dislike. I tried to replace GL_LINE_LOOP with triangles, increase the buffer size, and put the GL_ELEMENT_ARRAY_BUFFER bind right before glDrawElements, without success. The crash only happens when I fill the text mesh before those ones. So I need to dig more. From what I saw in your code, you are not using Vertex Array Objects. I have had similar problems where code ran fine on ATI but crashed on Nvidia. The problem went away for me when I just created and bound a global VAO right after context creation. Also, I would recommend calling glGetError after every call; it helps finding errors. Here is a simple trick to do this automatically: struct gl { static auto opDispatch(string name, Args...)(Args args) { enum glName = "gl" ~ name[0].toUpper.to!string ~ name[1 .. $]; debug scope(exit) checkGLError(); // Do glGetError and log it or something. mixin("return " ~ glName ~ "(args);"); } } After this, simply change all glFunctionName(args) calls to gl.functionName(args). I will try the global VAO. I already check glError with the "checkgl!" function. I finally found the issue: glDisableVertexAttribArray calls were missing. I just don't understand why it works the majority of the time and why no OpenGL debugging tool was able to warn me about that. Maybe I should try DirectX and see if the debugging tools are better.
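The opDispatch forwarding trick quoted above can be made self-contained. In this runnable sketch the OpenGL entry points are replaced by hypothetical stand-in D functions (the `glClear` below is not the real C binding), so only the dispatch-plus-automatic-error-check mechanism is demonstrated:

```d
import std.ascii : toUpper;
import std.conv : to;

int errorChecks;                       // counts automatic error checks
void checkGLError() { ++errorChecks; } // stand-in for glGetError logging

int glClear(int mask) { return mask; } // stand-in for the real C function

struct gl
{
    static auto opDispatch(string name, Args...)(Args args)
    {
        // "clear" becomes "glClear" at compile time
        enum glName = "gl" ~ name[0].toUpper.to!string ~ name[1 .. $];
        scope (exit) checkGLError();   // runs after every forwarded call
        mixin("return " ~ glName ~ "(args);");
    }
}

void main()
{
    assert(gl.clear(7) == 7);  // forwarded to glClear(7)
    assert(errorChecks == 1);  // the error check ran automatically
}
```

With the real bindings in scope, every `gl.functionName(...)` call would forward to `glFunctionName(...)` and run the error check on scope exit, exactly as the quoted post suggests.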
Re: [Windows & DMD] No callstack when crash with Access violation reading location 0x00000000
On 13/01/2014 at 22:47, Benjamin Thaut wrote: On 13.01.2014 at 21:52, Xavier Bigand wrote: glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,9) glBindBuffer(GL_ARRAY_BUFFER,10) glEnableVertexAttribArray(0) glVertexAttribPointer(0,3,GL_FLOAT,false,12,) glBindBuffer(GL_ARRAY_BUFFER,11) glEnableVertexAttribArray(1) glVertexAttribPointer(1,4,GL_FLOAT,false,16,) glDrawElements(GL_LINE_LOOP,4,GL_UNSIGNED_INT,) GLSL=4 ->glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT) ->->glUseProgram(4) ->->glUniformMatrix4fv(0,1,false,[0.002497,0.00,0.00,0.00,0.00,-0.00,0.00,0.00,0.00,0.00,-0.01,0.00,-1.00,1.00,0.00,1.00]) ->glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,9) ->->glBindBuffer(GL_ARRAY_BUFFER,10) ->->glEnableVertexAttribArray(0) ->->glVertexAttribPointer(0,3,GL_FLOAT,false,12,) ->->glBindBuffer(GL_ARRAY_BUFFER,11) ->->glEnableVertexAttribArray(1) ->->glVertexAttribPointer(1,4,GL_FLOAT,false,16,) ->->glDrawElements(GL_LINE_LOOP,4,GL_UNSIGNED_INT,) GLSL=4 This extract seems to correspond to the latest GL commands called just before the crash after the window resize. It doesn't seem to have any errors here. Yes, this indeed looks correct. Maybe it's even a bug in the driver. Because it happens right after the window resize, graphics resources might have become invalid and the driver would need to re-create them. The problem is most likely that you use two array buffers, one for each attribute, instead of using one array buffer and interleaving the attributes (this is the usual way). I would bet that if you switch over to the interleaved variant, the problem goes away. You could also try to make the three buffers slightly larger and specify different pointers to see which one actually causes the invalid read.
So that the calls become: > ->glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,9) > ->->glBindBuffer(GL_ARRAY_BUFFER,10) > ->->glEnableVertexAttribArray(0) > ->->glVertexAttribPointer(0,3,GL_FLOAT,false,12,) > ->->glBindBuffer(GL_ARRAY_BUFFER,11) > ->->glEnableVertexAttribArray(1) > ->->glVertexAttribPointer(1,4,GL_FLOAT,false,16,0016) > ->->glDrawElements(GL_LINE_LOOP,4,GL_UNSIGNED_INT,0004) GLSL=4 You could then see from the access violation at which of the three buffers the read attempt fails. You could also try to move the glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,9) right before the glDrawElements call. Kind Regards Benjamin Thaut I am not sure the issue really comes from my code, since it works just fine on ATI cards; I do something the Nvidia drivers dislike. I tried to replace GL_LINE_LOOP with triangles, increase the buffer size, and put the GL_ELEMENT_ARRAY_BUFFER bind right before glDrawElements, without success. The crash only happens when I fill the text mesh before those ones. So I need to dig more.
Re: [Windows & DMD] No callstack when crash with Access violation reading location 0x00000000
On 13/01/2014 at 20:42, Xavier Bigand wrote: On 12/01/2014 at 18:01, Benjamin Thaut wrote: On 12.01.2014 at 17:18, Xavier Bigand wrote: On 12/01/2014 at 11:16, Benjamin Thaut wrote: On 12.01.2014 at 00:47, Xavier Bigand wrote: I didn't know about this menu setting, but activating Access Violation doesn't change anything. It seems that your crash happens inside the OpenGL part of the graphics driver. It is caused by DQuick\src\dquick\renderer3D\openGL\mesh.d line 125. I assume that you set up a few OpenGL parameters invalidly and thus the driver reads from a null pointer. I found it by single stepping. I started the application, then set the breakpoint as shown below and single stepped into the draw() method. debug { if (mRebuildDebugMeshes) updateDebugMesh(); // put breakpoint here mDebugMesh.draw(); if ((implicitWidth != float.nan && implicitHeight != float.nan) && (implicitWidth != mSize.x && implicitHeight != mSize.y)) mDebugImplicitMesh.draw(); } Have fun debugging ;-) Kind Regards Benjamin Thaut Thanks for your support and your time. I already tried to debug OpenGL with gDEBugger, which is used to find these kinds of issues. But it doesn't seem to work fine with D binaries. I highly recommend using either glIntercept (http://code.google.com/p/glintercept/) or glslDevil (http://www.vis.uni-stuttgart.de/glsldevil/) to debug OpenGL applications. If you have an Nvidia card, you could also use Nvidia Nsight to debug your application: https://developer.nvidia.com/nvidia-nsight-visual-studio-edition My guess would be that either your vertex buffer or index buffer is no longer valid, thus does not get bound, and as a result the graphics driver reads from client memory at address null. Kind Regards Benjamin Thaut I took a look at the buffers manually just before the glDrawElements call, and all the values seem good. I also checked whether any glDeleteBuffers or glDeleteShader calls happened before the glDrawElements.
I need to find some more time to test with glIntercept or Nvidia Nsight. I finally tried glIntercept, but I am not sure how to interpret the output: glViewport(0,0,801,600) glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT) glBindBuffer(GL_ARRAY_BUFFER,10) glBufferData(GL_ARRAY_BUFFER,48,0604C000,GL_DYNAMIC_DRAW) glBindBuffer(GL_ARRAY_BUFFER,0) glBindBuffer(GL_ARRAY_BUFFER,11) glBufferData(GL_ARRAY_BUFFER,64,0604E600,GL_DYNAMIC_DRAW) glBindBuffer(GL_ARRAY_BUFFER,0) glBindBuffer(GL_ARRAY_BUFFER,13) glBufferData(GL_ARRAY_BUFFER,48,0604FFC0,GL_DYNAMIC_DRAW) glBindBuffer(GL_ARRAY_BUFFER,0) glBindBuffer(GL_ARRAY_BUFFER,14) glBufferData(GL_ARRAY_BUFFER,64,0604E580,GL_DYNAMIC_DRAW) glBindBuffer(GL_ARRAY_BUFFER,0) glUseProgram(4) glUniformMatrix4fv(0,1,false,[0.002497,0.00,0.00,0.00,0.00,-0.00,0.00,0.00,0.00,0.00,-0.01,0.00,-1.00,1.00,0.00,1.00]) glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,9) glBindBuffer(GL_ARRAY_BUFFER,10) glEnableVertexAttribArray(0) glVertexAttribPointer(0,3,GL_FLOAT,false,12,) glBindBuffer(GL_ARRAY_BUFFER,11) glEnableVertexAttribArray(1) glVertexAttribPointer(1,4,GL_FLOAT,false,16,) glDrawElements(GL_LINE_LOOP,4,GL_UNSIGNED_INT,) GLSL=4 ->glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT) ->->glUseProgram(4) ->->glUniformMatrix4fv(0,1,false,[0.002497,0.00,0.00,0.00,0.00,-0.00,0.00,0.00,0.00,0.00,-0.01,0.00,-1.00,1.00,0.00,1.00]) ->glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,9) ->->glBindBuffer(GL_ARRAY_BUFFER,10) ->->glEnableVertexAttribArray(0) ->->glVertexAttribPointer(0,3,GL_FLOAT,false,12,) ->->glBindBuffer(GL_ARRAY_BUFFER,11) ->->glEnableVertexAttribArray(1) ->->glVertexAttribPointer(1,4,GL_FLOAT,false,16,) ->->glDrawElements(GL_LINE_LOOP,4,GL_UNSIGNED_INT,) GLSL=4 This extract seems to correspond to the latest GL commands called just before the crash after the window resize. It doesn't seem to have any errors here.
Re: [Windows & DMD] No callstack when crash with Access violation reading location 0x00000000
On 12/01/2014 at 18:01, Benjamin Thaut wrote: On 12.01.2014 at 17:18, Xavier Bigand wrote: On 12/01/2014 at 11:16, Benjamin Thaut wrote: On 12.01.2014 at 00:47, Xavier Bigand wrote: I didn't know about this menu setting, but activating Access Violation doesn't change anything. It seems that your crash happens inside the OpenGL part of the graphics driver. It is caused by DQuick\src\dquick\renderer3D\openGL\mesh.d line 125. I assume that you set up a few OpenGL parameters invalidly and thus the driver reads from a null pointer. I found it by single stepping. I started the application, then set the breakpoint as shown below and single stepped into the draw() method. debug { if (mRebuildDebugMeshes) updateDebugMesh(); // put breakpoint here mDebugMesh.draw(); if ((implicitWidth != float.nan && implicitHeight != float.nan) && (implicitWidth != mSize.x && implicitHeight != mSize.y)) mDebugImplicitMesh.draw(); } Have fun debugging ;-) Kind Regards Benjamin Thaut Thanks for your support and your time. I already tried to debug OpenGL with gDEBugger, which is used to find these kinds of issues. But it doesn't seem to work fine with D binaries. I highly recommend using either glIntercept (http://code.google.com/p/glintercept/) or glslDevil (http://www.vis.uni-stuttgart.de/glsldevil/) to debug OpenGL applications. If you have an Nvidia card, you could also use Nvidia Nsight to debug your application: https://developer.nvidia.com/nvidia-nsight-visual-studio-edition My guess would be that either your vertex buffer or index buffer is no longer valid, thus does not get bound, and as a result the graphics driver reads from client memory at address null. Kind Regards Benjamin Thaut I took a look at the buffers manually just before the glDrawElements call, and all the values seem good. I also checked whether any glDeleteBuffers or glDeleteShader calls happened before the glDrawElements. I need to find some more time to test with glIntercept or Nvidia Nsight.
Re: [Windows & DMD] No callstack when crash with Access violation reading location 0x00000000
On 12/01/2014 at 11:16, Benjamin Thaut wrote: On 12.01.2014 at 00:47, Xavier Bigand wrote: I didn't know about this menu setting, but activating Access Violation doesn't change anything. It seems that your crash happens inside the OpenGL part of the graphics driver. It is caused by DQuick\src\dquick\renderer3D\openGL\mesh.d line 125. I assume that you set up a few OpenGL parameters invalidly and thus the driver reads from a null pointer. I found it by single stepping. I started the application, then set the breakpoint as shown below and single stepped into the draw() method. debug { if (mRebuildDebugMeshes) updateDebugMesh(); // put breakpoint here mDebugMesh.draw(); if ((implicitWidth != float.nan && implicitHeight != float.nan) && (implicitWidth != mSize.x && implicitHeight != mSize.y)) mDebugImplicitMesh.draw(); } Have fun debugging ;-) Kind Regards Benjamin Thaut Thanks for your support and your time. I already tried to debug OpenGL with gDEBugger, which is used to find these kinds of issues. But it doesn't seem to work fine with D binaries.
Re: [Windows & DMD] No callstack when crash with Access violation reading location 0x00000000
On 12/01/2014 at 00:30, Benjamin Thaut wrote: On 11.01.2014 at 22:56, Xavier Bigand wrote: On 11/01/2014 at 22:15, Benjamin Thaut wrote: On 11.01.2014 at 20:50, Xavier Bigand wrote: Yes, I have no stack trace, and adding import core.sys.windows.stacktrace changes nothing. That is very strange. Can you reduce this? Or is this project on github somewhere? Did you try using a debugger? Kind Regards Benjamin Thaut Yes, it's on github: https://github.com/Flamaros/DQuick/tree/Missing_RAII_Warning To reproduce the crash: - Launch the DQuick-VisualD.sln solution file that is in the root folder. - Launch the Text project (in debug mode). - Resize the window; it crashes directly. It seems to be related to the GraphicItem class in the startPaint method, particularly this section of code: debug { if (mRebuildDebugMeshes) updateDebugMesh(); mDebugMesh.draw(); if ((implicitWidth != float.nan && implicitHeight != float.nan) && (implicitWidth != mSize.x && implicitHeight != mSize.y)) mDebugImplicitMesh.draw(); } Commenting it out removes the crash. Thank you for your help. If you use VisualD, why don't you go to "Debugging->Exceptions" in Visual Studio and activate "Access Violation" under "Win32 Exceptions" to debug that access violation with the Visual Studio debugger? Kind Regards Benjamin Thaut I didn't know about this menu setting, but activating Access Violation doesn't change anything.
Re: [Windows & DMD] No callstack when crash with Access violation reading location 0x00000000
On 11/01/2014 at 22:15, Benjamin Thaut wrote: On 11.01.2014 at 20:50, Xavier Bigand wrote: Yes, I have no stack trace, and adding import core.sys.windows.stacktrace changes nothing. That is very strange. Can you reduce this? Or is this project on github somewhere? Did you try using a debugger? Kind Regards Benjamin Thaut Yes, it's on github: https://github.com/Flamaros/DQuick/tree/Missing_RAII_Warning To reproduce the crash: - Launch the DQuick-VisualD.sln solution file that is in the root folder. - Launch the Text project (in debug mode). - Resize the window; it crashes directly. It seems to be related to the GraphicItem class in the startPaint method, particularly this section of code: debug { if (mRebuildDebugMeshes) updateDebugMesh(); mDebugMesh.draw(); if ((implicitWidth != float.nan && implicitHeight != float.nan) && (implicitWidth != mSize.x && implicitHeight != mSize.y)) mDebugImplicitMesh.draw(); } Commenting it out removes the crash. Thank you for your help.
Re: [Windows & DMD] No callstack when crash with Access violation reading location 0x00000000
On 11/01/2014 at 19:40, Benjamin Thaut wrote: On 11.01.2014 at 19:16, Xavier Bigand wrote: On 11/01/2014 at 18:45, Benjamin Thaut wrote: On 11.01.2014 at 17:24, Xavier Bigand wrote: I am having trouble solving a memory bug, because I don't have any information for debuggers and I can't use DrMemory either. Is it possible to get the callstack when calling a method on a null pointer with specific DMD flags? Maybe DMD should add null-pointer call checks in debug mode? For x64 executables, compile with -g. For x86 executables, compile with -g and then run cv2pdb on the final executable. cv2pdb is part of VisualD. Kind Regards Benjamin Thaut Yep, I am using VisualD with cv2pdb, and I build in debug mode with the -g flag. And it does not print a stack trace? Is it possible that this access violation happens within a module constructor? Try importing core.sys.windows.stacktrace into every single one of your modules and see if that changes something. Kind Regards Benjamin Thaut Yes, I have no stack trace, and adding import core.sys.windows.stacktrace changes nothing.
Re: [Windows & DMD] No callstack when crash with Access violation reading location 0x00000000
On 11/01/2014 at 17:24, Xavier Bigand wrote: I am having trouble solving a memory bug, because I don't have any information for debuggers and I can't use DrMemory either. Is it possible to get the callstack when calling a method on a null pointer with specific DMD flags? Maybe DMD should add null-pointer call checks in debug mode? I am using VisualD with cv2pdb. I also tried to put checks manually in the code section which seems to crash (no crash when it is commented out), but I almost don't use pointers and it never enters my check conditions. It's like a real memory corruption in another part of the code.
Re: [Windows & DMD] No callstack when crash with Access violation reading location 0x00000000
On 11/01/2014 at 18:45, Benjamin Thaut wrote: On 11.01.2014 at 17:24, Xavier Bigand wrote: I am having trouble solving a memory bug, because I don't have any information for debuggers and I can't use DrMemory either. Is it possible to get the callstack when calling a method on a null pointer with specific DMD flags? Maybe DMD should add null-pointer call checks in debug mode? For x64 executables, compile with -g. For x86 executables, compile with -g and then run cv2pdb on the final executable. cv2pdb is part of VisualD. Kind Regards Benjamin Thaut Yep, I am using VisualD with cv2pdb, and I build in debug mode with the -g flag.
Re: [Windows & DMD] No callstack when crash with Access violation reading location 0x00000000
On 11/01/2014 at 18:20, Namespace wrote: On Saturday, 11 January 2014 at 16:24:08 UTC, Xavier Bigand wrote: I am having trouble solving a memory bug, because I don't have any information for debuggers and I can't use DrMemory either. Is it possible to get the callstack when calling a method on a null pointer with specific DMD flags? Maybe DMD should add null-pointer call checks in debug mode? Try to compile with -gc Doesn't change anything.
[Windows & DMD] No callstack when crash with Access violation reading location 0x00000000
I am having trouble solving a memory bug, because I don't have any information for debuggers and I can't use DrMemory either. Is it possible to get the callstack when calling a method on a null pointer with specific DMD flags? Maybe DMD should add null-pointer call checks in debug mode?
Re: Getting backtrace
On 08/01/2014 at 21:29, Benjamin Thaut wrote: On 08.01.2014 at 21:25, Xavier Bigand wrote: Is there a way to get a backtrace outside of exceptions? Found a platform-independent way: import core.runtime; import std.stdio; void main(string[] args) { auto trace = defaultTraceHandler(null); foreach(t; trace) { writefln("%s", t); } } It's exactly what I need, thank you.
Getting backtrace
Is there a way to get a backtrace outside of exceptions?
Re: [Windows] Building in 64bits
On 20/12/2013 at 02:54, Rikki Cattermole wrote: On Thursday, 19 December 2013 at 20:37:32 UTC, Xavier Bigand wrote: I am trying to build in 64 bits with dmd to be able to use the VS tools. Please note that on Linux our project builds fine in 64 bits. Here is my error: E:\Dev\Personal\DQuick\src\samples\Minesweeper>dub --arch=x86_64 Checking dependencies in 'E:\Dev\Personal\DQuick\src\samples\Minesweeper' Building configuration "application", build type debug Compiling... Linking... Mine Sweeper.obj : fatal error LNK1179: fichier non valide ou endommagé : '_D6dquick6script5utils162__T31fullyQualifiedNameImplForTypes2TDFC6dquick6script11itemBinding65__T11ItemBindingTC6dquick4item15declarativeItem15DeclarativeItemZ11ItemBindingZvVb0Vb0Vb0Vb0Z29__T20storageClassesStringVk0Z20storageClassesStringFNaNdNfZAya' COMDAT dupliqué --- errorlevel 1179 Error executing command run: Link command failed with exit code 1179 Something hasn't been recompiled. The binary you're trying to link against is an OMF (aka 32-bit) library. Microsoft's linker uses PE-COFF. My guess is to recompile DQuick as 64-bit, since you're compiling an example. OK, it's certainly the gdi32.lib that I forgot. Thanks.
[Windows] Building in 64bits
I am trying to build in 64 bits with dmd to be able to use the VS tools. Please note that on Linux our project builds fine in 64 bits. Here is my error: E:\Dev\Personal\DQuick\src\samples\Minesweeper>dub --arch=x86_64 Checking dependencies in 'E:\Dev\Personal\DQuick\src\samples\Minesweeper' Building configuration "application", build type debug Compiling... Linking... Mine Sweeper.obj : fatal error LNK1179: fichier non valide ou endommagé : '_D6dquick6script5utils162__T31fullyQualifiedNameImplForTypes2TDFC6dquick6script11itemBinding65__T11ItemBindingTC6dquick4item15declarativeItem15DeclarativeItemZ11ItemBindingZvVb0Vb0Vb0Vb0Z29__T20storageClassesStringVk0Z20storageClassesStringFNaNdNfZAya' COMDAT dupliqué --- errorlevel 1179 Error executing command run: Link command failed with exit code 1179
Re: How to link to libdl under linux
On 19/12/2013 at 13:46, MrSmith wrote: Still need help. I've tried compiling a little test project with dub and it compiled. Then I tried to compile it by hand and got the same error. I think there is some issue in my command with parameter ordering. Here is the test project: module test; import derelict.glfw3.glfw3; import std.stdio; void main() { DerelictGLFW3.load(); writeln("test"); } with package { "targetName": "test", "dependencies": { "derelict-glfw3": "~master" }, "targetType":"executable", "name": "test", "sourceFiles":["./test.d"] } Dub does a two-step compilation: dmd -m32 -of.dub/build/application-debug-x86-dmd-DA39A3EE5E6B4B0D3255BFEF95601890AFD80709/test -c -of.dub/build/application-debug-x86-dmd-DA39A3EE5E6B4B0D3255BFEF95601890AFD80709/test.o -debug -g -w -version=Have_test -version=Have_derelict_glfw3 -version=Have_derelict_util -I../../.dub/packages/derelict-glfw3-master/source -I../../.dub/packages/derelict-util-1.0.0/source test.d ../../.dub/packages/derelict-glfw3-master/source/derelict/glfw3/package.d ../../.dub/packages/derelict-glfw3-master/source/derelict/glfw3/glfw3.d ../../.dub/packages/derelict-util-1.0.0/source/derelict/util/xtypes.d ../../.dub/packages/derelict-util-1.0.0/source/derelict/util/exception.d ../../.dub/packages/derelict-util-1.0.0/source/derelict/util/system.d ../../.dub/packages/derelict-util-1.0.0/source/derelict/util/loader.d ../../.dub/packages/derelict-util-1.0.0/source/derelict/util/sharedlib.d ../../.dub/packages/derelict-util-1.0.0/source/derelict/util/wintypes.d Linking... dmd -of.dub/build/application-debug-x86-dmd-DA39A3EE5E6B4B0D3255BFEF95601890AFD80709/test .dub/build/application-debug-x86-dmd-DA39A3EE5E6B4B0D3255BFEF95601890AFD80709/test.o -L-ldl -m32 -g Copying target from /home/andrey/test/.dub/build/application-debug-x86-dmd-DA39A3EE5E6B4B0D3255BFEF95601890AFD80709/test to /home/andrey/test So, do I need to use two-step compilation, or do I need a proper ordering of parameters?
One more question: why does dub use the -of flag twice? I use pragma(lib, "dl"), but it doesn't work with DUB because it separates the build and link steps. I like the idea that sources know themselves how they have to be built. I think it's possible to simply do rdmd main.d when using pragma(lib, "xxx"). For dub, add this line to your package.json: "libs-posix": ["dl"],
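Putting the advice above together, the test project's package.json from earlier in the thread would become something like this (a sketch; the derelict-glfw3 dependency and file names are kept as in the original post):

```json
{
    "name": "test",
    "targetName": "test",
    "targetType": "executable",
    "sourceFiles": ["./test.d"],
    "dependencies": { "derelict-glfw3": "~master" },
    "libs-posix": ["dl"]
}
```

The "-posix" suffix restricts the libdl link flag to POSIX platforms, so the same recipe still builds on Windows.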
Re: Deimos rules?
On 13/11/2013 at 23:01, Xavier Bigand wrote: I am working on XCB integration, so I think that I can add the bindings to deimos. Are C headers translated to D modules using DStep or manually? If manually, do I need to respect some syntactical rules? I think it's mature enough to be integrated into deimos. It builds fine, but I have some issues with XCB; the official sample written in C doesn't work either, so the bindings seem correct. You can fork it from: https://github.com/D-Quick/XCB
Re: longjmp crashes on Windows
On 16/11/2013 at 23:26, Rene Zwanenburg wrote: On Saturday, 16 November 2013 at 16:22:17 UTC, Piotr Podsiadły wrote: On Saturday, 16 November 2013 at 14:41:46 UTC, Maxim Fomin wrote: What kind of problem are you trying to solve by manually defining system data structures? Why not use platform-independent valid declarations? Why did you decide that _JBLEN is 64, 256, 528 depending on version? Why did you decide that having _JBLEN bytes filled with zeros is a valid value of a jmp_buf object? Why should setjmp/longjmp take the buffer by reference? I couldn't find these declarations for Windows in druntime (there is only a POSIX version). The values of _JBLEN are based on constants from headers from DMC (x86 version) and Visual Studio (x86_64 and ia64). These are the declarations copied from setjmp.h from DMC: #define _JBLEN 16 typedef int jmp_buf[_JBLEN]; #define __CLIB __cdecl int __CLIB _setjmp(jmp_buf); void __CLIB longjmp(jmp_buf,int); jmp_buf is initialized by the first call to setjmp, so its initial value doesn't matter. In C, arrays are always passed as a pointer - that's why I used ref. Try to use the proper version of setjmp/jmp_buf from druntime. By the way, why did you decide to use it in the D language in the first place? I'm trying to use libpng and libjpeg directly from D, without any wrappers. These libraries use longjmp to handle errors (the only alternative is to call exit() or abort()). As an alternative to those libraries, may I suggest using the DevIL binding in Derelict? It's quite easy to use: https://github.com/aldacron/Derelict3 Piotr Podsiadły helped us remove SDL_Image from DQuick. Our goal is to avoid big dependencies, to provide a lightweight and easy to build/install library. Another point is to simplify porting to new platforms such as smartphone OSes, or any other embedded devices, video game consoles, ... It's important for D to support this kind of feature correctly.
Re: Deimos rules?
On 14/11/2013 at 13:13, Jacob Carlborg wrote: On 2013-11-13 23:01, Xavier Bigand wrote: I am working on XCB integration, so I think that I can add the bindings to deimos. Are C headers translated to D modules using DStep or manually? If manually, do I need to respect some syntactical rules? I would say stay as close to the original C code as possible, although I do prefer to translate typedefs like int8_t to real D types, like byte, if they exist. I started by changing the extension of the files from .h to .d and translating everything in place. I took libX11 as a model. I'll certainly make a pull request when I am able to run a simple demo. I kept all the original comments as-is during the translation.
Deimos rules?
I am working on XCB integration, so I think that I can add the bindings to deimos. Are C headers translated to D modules using DStep or manually? If manually, do I need to respect some syntactical rules?
Re: [Font] Getting font folder on all platforms
On 08/11/2013 at 21:05, Flamaros wrote: On Tuesday, 15 October 2013 at 23:10:32 UTC, Flamaros wrote: On Friday, 6 September 2013 at 20:54:53 UTC, Flamaros wrote: On Friday, 6 September 2013 at 16:05:43 UTC, Tourist wrote: On Thursday, 5 September 2013 at 19:48:07 UTC, Flamaros wrote: I am searching for the right way to find the fonts folder on each platform (Windows, Linux, macOS X). On Windows it's generally "C:\Windows\Fonts", but direct access seems brutal; it's certainly expected to retrieve this path using some registry keys? Does someone know how it works for Linux and/or macOS X? I need to be able to retrieve the right file from the font and family name as fast as possible. Windows: call SHGetKnownFolderPath with FOLDERID_Fonts as rfid. http://msdn.microsoft.com/en-us/library/windows/desktop/bb762188%28v=vs.85%29.aspx Nice, thanks. Do you know if there is a table of fonts and their families, or do I need to open all the font files myself? I need to do some more tests, but scanning the registry seems to work under Windows.
Here is my test code : string fontPathFromName(in string name, in Font.Family family = Font.Family.Regular) { version(Windows) { import std.windows.registry; string fontPath = "C:/Windows/Fonts/"; string fontFileName; Key fontKey; fontKey = Registry.localMachine().getKey("Software\\Microsoft\\Windows NT\\CurrentVersion\\Fonts"); if (family == Font.Family.Regular) fontFileName = fontKey.getValue(name ~ " (TrueType)").value_EXPAND_SZ(); else if (family == Font.Family.Bold) fontFileName = fontKey.getValue(name ~ " Bold (TrueType)").value_EXPAND_SZ(); else if (family == Font.Family.Italic) fontFileName = fontKey.getValue(name ~ " Italic (TrueType)").value_EXPAND_SZ(); else if (family == (Font.Family.Bold | Font.Family.Italic)) fontFileName = fontKey.getValue(name ~ " Bold Italic (TrueType)").value_EXPAND_SZ(); return fontPath ~ fontFileName; } } unittest { assert(fontPathFromName("Arial") == "C:/Windows/Fonts/arial.ttf"); assert(fontPathFromName("arial") == "C:/Windows/Fonts/arial.ttf"); // Test with wrong case assert(fontPathFromName("Arial", Font.Family.Bold | Font.Family.Italic) == "C:/Windows/Fonts/arialbi.ttf"); } I made some progress with fontconfig under Linux; I'll try to use it for Windows too. I found a way to make it work under Windows. I also bound almost all of the fontconfig API in a way similar to Derelict. If someone is interested, look at font.d in: https://github.com/D-Quick/DQuick PS: I took the fontconfig dll from the GTK packages.