Re: Trouble with Cortex-M Hello World
On Friday, 3 April 2015 at 08:06:03 UTC, Jens Bauer wrote: I'd better also mention that the problem is the same for floats in Lexer::inreal(Token *). Ignore that - it's always read/written as a long double. Sorry for the noise.
Re: Named unittests
On 2015-04-02 22:31, Dicebot wrote: I don't think anyone is going to put those in the same inline style as unittest blocks, so this is not truly relevant. At least I hope so. You mean inline with the code it tests? No, I hope so too. I put my unit tests in separate files as well, but that's just the way I prefer it. -- /Jacob Carlborg
Re: Mid-term vision review
On Thursday, 2 April 2015 at 22:44:56 UTC, Andrei Alexandrescu wrote: It's the end of Q1. Walter and I reviewed our vision document. We're staying the course with one important addition: switching to ddmd, hopefully with 2.068. http://wiki.dlang.org/Vision/2015H1 Andrei Regarding C++ integration. I'm working on https://issues.dlang.org/show_bug.cgi?id=14178 which is the first step to get STL working.
Mistake of opening of a file having a name in cp1251.
Greetings to all! I work on Windows with cp1251, and I get an error in this program:

import std.stdio;
int main(string[] args)
{
    string nameFile = `«Ёлки с объектами №876».txt`;
    File f = File(nameFile, "w");
    f.writeln("Greetings!");
    return 0;
}

The error looks like this:

std.exception.ErrnoException@std\stdio.d(372): Cannot open file ` ┬л╨Б ╨╗╨║╨╕ ╨Ю ╨▒ ╤К ╨╡╨║╤ 876 ┬╗. txt ' in mode ` w ' (Invalid argument)

To work around this error I had to modify the file std\stdio.d, adding a function fromUtf8toAnsiW() to it. The relevant piece of std\stdio.d with my changes is shown below.

private FILE* fopen(in char[] name, in char[] mode = "r") @trusted
// nothrow @nogc - mgw: disabled because of fromUtf8toAnsiW
{
    import std.internal.cstring : tempCString;
    version(Windows)
    {
        // 02.04.2015 8:31:19 fix for dmd 2.067.0
        wchar* fromUtf8toAnsiW(in char[] s, uint codePage = 0) @trusted
        {
            import std.c.windows.windows : WideCharToMultiByte, GetLastError;
            import std.windows.syserror : sysErrorString;
            import std.conv : to;
            char[] result, rez;
            int readLen;
            auto ws = std.utf.toUTF16z(s);
            result.length = WideCharToMultiByte(codePage, 0, ws, -1, null, 0, null, null);
            if (result.length)
            {
                readLen = WideCharToMultiByte(codePage, 0, ws, -1, result.ptr, to!int(result.length), null, null);
                for (int i; i != result.length; i++)
                {
                    rez ~= result[i];
                    rez ~= 0;
                }
                rez ~= 0;
                rez ~= 0;
            }
            if (!readLen || readLen != result.length)
            {
                throw new Exception("Couldn't convert string: " ~ sysErrorString(GetLastError()));
            }
            return cast(wchar*)rez.ptr;
        }
        import std.internal.cstring : tempCStringW;
        // return _wfopen(name.tempCStringW(), mode.tempCStringW());
        return _wfopen(fromUtf8toAnsiW(name), mode.tempCStringW());
    }
    else version(Posix)
    {
        import core.sys.posix.stdio : fopen;
        return fopen(name.tempCString(), mode.tempCString());
    }
    else
    {
        return .fopen(name.tempCString(), mode.tempCString());
    }
}

With this correction everything works correctly. Question:
Is such a change correct from the point of view of D programming conventions?
[Issue 14402] New: std.conv.emplace for classes segfaults for nested class
https://issues.dlang.org/show_bug.cgi?id=14402 Issue ID: 14402 Summary: std.conv.emplace for classes segfaults for nested class Product: D Version: D2 Hardware: x86_64 OS: Linux Status: NEW Severity: major Priority: P1 Component: Phobos Assignee: nob...@puremagic.com Reporter: mkline.o...@gmail.com I came across this while doing work on std.typecons.Unique (see https://github.com/D-Programming-Language/phobos/pull/3139). Any class that accesses a local context seems to segfault when emplaced. A minimal test case follows: import core.stdc.stdlib : malloc, free; import std.conv : emplace; import std.traits : classInstanceAlignment; void main() { int created; int destroyed; class Foo { this() { ++created; } ~this() { ++destroyed; } } immutable size_t size = __traits(classInstanceSize, Foo); void* m = malloc(size); assert(m); Foo f = emplace!Foo(m[0 .. size]); assert(created == 1); f.destroy(); free(m); assert(destroyed == 1); } The problem doesn't seem to be the amount of memory I am allocating, given that it makes it past the testEmplaceChunk call in emplace. A stack trace from GDB is as follows: Program received signal SIGSEGV, Segmentation fault. 0x0042afd4 in wat.main().Foo.this() (this=0x66d450) at wat.d:10 10this() { ++created; } (gdb) where #0 0x0042afd4 in wat.main().Foo.this() (this=0x66d450) at wat.d:10 #1 0x0042b167 in std.conv.emplace!(wat.main().Foo).emplace(void[]) (chunk=...) at /home/mrkline/src/dlang/phobos/std/conv.d:5005 #2 0x0042af76 in D main () at wat.d:19 where the relevant line in conv.d is result.__ctor(args); This was seen on 2.067 and the current master for dmd and phobos as of 2015-04-03 01:20 PST. --
[Issue 14402] std.conv.emplace for classes segfaults for nested class
https://issues.dlang.org/show_bug.cgi?id=14402 Matt Kline mkline.o...@gmail.com changed: What|Removed |Added CC||mkline.o...@gmail.com --
[Issue 14397] dmd: Provide full source range for compiler errors [enhancement]
https://issues.dlang.org/show_bug.cgi?id=14397 Jacob Carlborg d...@me.com changed: What|Removed |Added CC||d...@me.com --- Comment #1 from Jacob Carlborg d...@me.com --- Would be nice of have. Clang and Xcode (which uses Clang) both have this feature. --
Re: Mistake of opening of a file having a name in cp1251.
According to the documentation https://msdn.microsoft.com/de-de/library/yeby3zcb.aspx, _wfopen already takes a wide-character string, not an ANSI string. So return _wfopen(name.tempCStringW(), mode.tempCStringW()); would be the correct way. All these weird ANSI versions are Windows 98 era legacy; they aren't commonly used anymore. Please also check whether your C runtime implements _wfopen correctly, or whether your file name is somehow broken (maybe it's on a FAT filesystem etc.). For that, please try opening the file in a C program using _wfopen(_T("filenamehere"), _T("r")); For comparison, try to create a file with the same name on an NTFS filesystem and open it the same way. Does that work? Also, what version and flavour (DMD, GDC, LDC) of D do you use?
[Issue 14401] typeid(shared X).init is empty for class types
https://issues.dlang.org/show_bug.cgi?id=14401 Walter Bright bugzi...@digitalmars.com changed: What|Removed |Added CC||bugzi...@digitalmars.com Hardware|x86 |All OS|Mac OS X|All --
[Issue 10606] DMD Exit code 139
https://issues.dlang.org/show_bug.cgi?id=10606 Daniel Kozak kozz...@gmail.com changed: What|Removed |Added Status|NEW |RESOLVED Resolution|--- |FIXED --- Comment #3 from Daniel Kozak kozz...@gmail.com --- With 2.067 it works ok --
linking C library with D, creating a OpenVG D port
hi! I'm trying to run OpenVG examples in D. So far I have compiled ShivaVG (an implementation of the OpenVG standard) in C++ and created a shared library. Now I'd like to link my OpenVG.lib with a D program. (The library was compiled with MSVC 2013 x86; I'd like to avoid switching compilers.) To link the library I added my C++ lib to my dub file: "libs": ["OpenVG"], but dmd complains: Error 43: Not a Valid Library File. After some googling I found out I have to convert/create my lib file in the appropriate format to be used with dmd, so I used implib to create a new lib from my dll. Viewing this lib, no exports are visible with dumpbin; dumpbin /EXPORTS OpenVG_D.lib also reports a warning: OpenVG_D.lib : warning LNK4048: Invalid format file; ignored. Maybe I have to use another tool here to view the exports? But dumpbin can view PE and COFF, and dmd uses COFF I think, so this should work, I guess. If I try to compile my application, it now tells me about the obviously missing export: Error 42: Symbol Undefined _vgCreateContextS
Re: Mid-term vision review
On Thursday, 2 April 2015 at 22:44:56 UTC, Andrei Alexandrescu wrote: It's the end of Q1. Walter and I reviewed our vision document. We're staying the course with one important addition: switching to ddmd, hopefully with 2.068. http://wiki.dlang.org/Vision/2015H1 Andrei Are there any plans for the creation of a powerful AST-macro system in D? http://wiki.dlang.org/DIP50 On Friday, 3 April 2015 at 08:30:18 UTC, Guillaume Chatelet wrote: Regarding C++ integration. I'm working on https://issues.dlang.org/show_bug.cgi?id=14178 which is the first step to get STL working. Fine.
Re: Dgame RC #1
On Friday, 3 April 2015 at 04:55:42 UTC, Mike Parker wrote: On Thursday, 2 April 2015 at 09:38:05 UTC, Namespace wrote: Dgame is based on SDL 2.0.3 (as described in the installation tutorial), but tries to wrap any function call which is introduced after SDL 2.0.0: static if (SDL_VERSION_ATLEAST(2, 0, 2)) so that Dgame should be usable with any SDL 2.x version. I will investigate which function is calling SDL_HasAVX. None of that matters. This has nothing to do with what Dgame is calling, but what Derelict is actually trying to load. SDL_HasAVX was added to the API in 2.0.2 so does not exist in previous versions of SDL, therefore an exception will be thrown when Derelict tries to load older versions and that function is missing. Dgame will load DerelictSDL2 as usual and then it will check if the supported version is below 2.0.2. If so, DerelictSDL2 will be reloaded with SharedLibVersion(MAX_SUPPORTED_VERSION). That should work, right? No, it won't. By default, Derelict attempts to load functions from the 2.0.2 API (which includes 2.0.3, since the API did not change). That means anything below 2.0.2 will *always* fail to load because they are missing the functions added to the API in 2.0.2. The right way to do this is to use the selective loading mechanism to disable exceptions for certain functions. With the 1.9.x versions of DerelictSDL2, you no longer have to implement that manually. As I wrote above, you can do this: DerelictSDL2.load(SharedLibVersion(2,0,0)); With that, you can load any version of SDL2 available on the system, from 2.0.0 on up. It uses selective loading internally. For example, 2.0.0 will load even though it is missing SDL_HasAVX and several other functions added in 2.0.1 and 2.0.2. But you should only do this if you are absolutely sure that you are not calling any functions that were not present in 2.0.0. For example, the SDL_GetPrefPath/SDL_GetBasePath functions were added in 2.0.1.
If you require those and need nothing from 2.0.2, then you should do this: DerelictSDL2.load(SharedLibVersion(2,0,1)); Now, 2.0.0 will fail to load, but 2.0.1 and higher will succeed. You can look at the functions allowSDL_2_0_0 and allowSDL_2_0_1 in sdl.d [1] to see exactly which functions were added in 2.0.1 and 2.0.2 so that you can determine if you require any of them. I also encourage you to go and do a diff of the SDL headers for each release to see things other than functions, like new constants, that were added in each release (and to protect against the possibility that I've made a mistake somewhere). That won't affect whether or not Derelict loads, but a new constant added in SDL 2.0.2 won't work with a function that existed in 2.0.0, for example. Yes, you're right. I'll undo my changes and I'll set SDL 2.0.2 as a basis for Dgame. Thank you for the explanation. :)
[Issue 14340] AssertError in std.algorithm.sorting: unstable sort fails to sort an array with a custom predicate
https://issues.dlang.org/show_bug.cgi?id=14340 Ivan Kazmenko ga...@mail.ru changed: What|Removed |Added Severity|major |critical --- Comment #1 from Ivan Kazmenko ga...@mail.ru --- The culprit is optimisticInsertionSort. When the range hasAssignableElements, it changes the array too heavily while calling the predicate: https://github.com/D-Programming-Language/phobos/blob/4abe95ef/std/algorithm/sorting.d#L799-L806 The problem is: 1. The predicate (count) depends on array integrity (order does not matter, contents do) at all times it is called. 2. The library is too optimistic, taking an element to a temporary variable and then overwriting the elements, all the while calling the predicate. I'm unsure what should be done. On one hand, the user has the right to abstract away from the sort implementation. On the other hand, the speedup for a range which hasAssignableElements may be significant for more trivial cases. I'd suggest being on the safe side. First, take the element to a temporary and find the greatest value of j using pred, just like now: auto temp = r[i]; for (; j < maxJ && pred(r[j + 1], temp); ++j) {} Only after that, perform all swaps: for (size_t k = i; k < j; ++k) { r[k] = r[k + 1]; } r[j] = temp; After all, the last thing one wants is a failing library sort function, no matter how weird its usage may be. Another solution would be to note in the documentation that, with unstable sort, the predicate must not depend on the range. But most users won't care to read or remember that unless the sort goes wrong. Ivan Kazmenko. --
Re: Mid-term vision review
It would be great to have dmd on embedded platforms. On Thursday, 2 April 2015 at 22:44:56 UTC, Andrei Alexandrescu wrote: It's the end of Q1. Walter and I reviewed our vision document. We're staying the course with one important addition: switching to ddmd, hopefully with 2.068. http://wiki.dlang.org/Vision/2015H1 Andrei
D, Python, and Chapel
Chapel 1.11 just got released, and they are making a big play on the integration of Chapel with Python. This could be huge and potentially disrupt the complacency of the NumPy-based folk. Chapel is a rather pleasant PGAS language that makes parallelism and clustering quite nice. Certainly if the choice is Python+C++ vs Python+Chapel, this is now a no contest. This may put a kibosh on the whole Python+D thing. -- Russel. = Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: rus...@winder.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Re: Mid-term vision review
On Thursday, 2 April 2015 at 22:44:56 UTC, Andrei Alexandrescu wrote: It's the end of Q1. Walter and I reviewed our vision document. We're staying the course with one important addition: switching to ddmd, hopefully with 2.068. http://wiki.dlang.org/Vision/2015H1 That's great news, looking forward to ddmd getting out soon. Looks like this will move about 50-60% of dmd's source to D, which should also help a bit with creating a D cross-compiler. One move that might help is to list specific steps you'd like taken for each of those broader goals, i.e. a list of action items to further each of them. Otherwise, many of those are too vague for people to actually pick up and act on: how do they improve the quality of implementation or the brand? You have already been doing this recently by listing various needed jobs on the forum with the [WORK] label, which is helpful for those looking for ways to pitch in, but it would be better if they were listed on a single page in the wiki. Things just get lost in bugzilla, as it's hard to search, though you could put together a list on the wiki with links to the relevant bugzilla issues, if you wanted to keep bugzilla as the main archive. Also, now that the download page splits up the zip files by OS, a long-needed move, it would be interesting to see stats of downloads by OS. :)
[Issue 14395] default value collapsed (dmd2.067)
https://issues.dlang.org/show_bug.cgi?id=14395 ag0ae...@gmail.com changed: What|Removed |Added CC||ag0ae...@gmail.com --- Comment #2 from ag0ae...@gmail.com --- Introduced in https://github.com/D-Programming-Language/dmd/pull/4015 --
Re: Trouble with Cortex-M Hello World
Jens Bauer wrote in message news:ckqcspcptqazbawds...@forum.dlang.org... Well, it seems I found the problem. lexer.h, line 203 reads: Yeah, I thought it might be that. Looking for union-tricks, I also found ... stringtable.c:24 hash will not be the same value on Big Endian, Mixed Endian and Little Endian machines. To hash correctly, read one byte at a time, then use bit-shifting by 8 if another byte is to be read. It doesn't matter for this hash table.
Re: Mistake of opening of a file having a name in cp1251.
The decision, which I have applied using function fromUtf8toAnsiW() already works correctly. I use dmd 2.067.0
Re: Trouble with Cortex-M Hello World
Well, it seems I found the problem. lexer.h, line 203 reads: union { d_int32 int32value; d_uns32 uns32value; d_int64 int64value; d_uns64 uns64value; ... }; While this optimization is neat, it does not produce correct code on Big Endian or Mixed Endian platforms. If we write a value to int64value and read it from int32value, then it will always be 0. This is because reading int32value on a Big Endian platform yields the upper 32 bits of int64value, which are zero whenever the stored value fits in 32 bits. A Big Endian platform stores the most significant bits/bytes first, so the 32-bit member overlaps the upper 32 bits, not the lower 32 bits. lexer.c:1874; Lexer::number(Token *t) correctly writes n to t->uns64value. But let's take parse.c:6384 - here token.uns32value is read, thus we'll get a zero, no matter which value was written by number(Token *). In parse.c:6379 we would get a minus one always. Looking for union-tricks, I also found ... stringtable.c:24 hash will not be the same value on Big Endian, Mixed Endian and Little Endian machines. To hash correctly, read one byte at a time, then use bit-shifting by 8 if another byte is to be read.
Re: Trouble with Cortex-M Hello World
I'd better also mention that the problem is the same for floats in Lexer::inreal(Token *).
Re: extending 'import' using 'with'
On 2015-04-02 21:21, weaselcat wrote: no this is the reason java is unusable without an IDE. Yeah, in a Java IDE it would automatically add the missing imports. -- /Jacob Carlborg
[Issue 14402] std.conv.emplace segfaults for nested class
https://issues.dlang.org/show_bug.cgi?id=14402 Matt Kline mkline.o...@gmail.com changed: What|Removed |Added Summary|std.conv.emplace for|std.conv.emplace segfaults |classes segfaults for |for nested class |nested class| --
Re: linking C library with D, creating a OpenVG D port
progress ... I think in some forum posts I've read that 64bit dmd uses a different linker which supports COFF. At least I can now link my app in 64bit mode without errors: dmd -m64 source/app.d OpenVG.lib. Also an exported test function prints to stdout, so my problem is solved for x64 :) If anyone could explain how this could be done with 32bit builds I'd appreciate it. Why does dmd use different linkers for 32 and 64bit anyway?
Re: linking C library with D, creating a OpenVG D port
On 4/04/2015 1:00 a.m., ddos wrote: progress ... I think in some forum posts I've read that 64bit dmd uses a different linker which supports COFF. At least I can now link my app in 64bit mode without errors: dmd -m64 source/app.d OpenVG.lib. Also an exported test function prints to stdout, so my problem is solved for x64 :) If anyone could explain how this could be done with 32bit builds I'd appreciate it. Why does dmd use different linkers for 32 and 64bit anyway? In the beginning, Symantec had their own C/C++ compiler. That compiler was made by a humble but brilliant developer called Walter Bright. At the time the Microsoft toolchain was horribly incompatible between versions. In fact it was so bad that a linker was made specifically for it. At the time OMF was the standard binary format for Windows. Not long after, PE-COFF was transitioned to for Windows 95. Not long after this, Symantec dropped their toolchain, and that clever developer negotiated the rights to it and in the process created a little company. Furthermore, that clever developer didn't enjoy C++ nearly as much as others do, and instead chose to create a new language using the existing toolchain. Fast forward about 12 years. The compiler still used that old linker, but only for 32bit on Windows. But now the need was strong for 64bit support. After the successful addition of 64bit support on *nix, Windows was worked on. Nobody wanted to add 64bit support to that old linker, as it was written in an arcane dialect of assembly; almost nobody understood it. So the clever people implemented it using the standard toolchain on the Windows platform instead of their own custom one, using PE-COFF as the binary format. Unfortunately the need was not great enough to add it also for 32bit, causing a great divide. That divide has since been resolved and will be forgotten about after one more major release. Disclaimer: not 100% accurate, a bunch of stuff I missed out. But a more or less accurate timeline.
[Issue 14399] std.json cannot parse its own output for nan
https://issues.dlang.org/show_bug.cgi?id=14399 ag0ae...@gmail.com changed: What|Removed |Added Keywords||pull CC||ag0ae...@gmail.com Hardware|x86_64 |All OS|Linux |All --- Comment #2 from ag0ae...@gmail.com --- https://github.com/D-Programming-Language/phobos/pull/3141 --
Re: Making regex replace CTFE by removing malloc
On Friday, 3 April 2015 at 03:58:33 UTC, ketmar wrote: On Thu, 02 Apr 2015 17:14:24 +, Pierre Krafft wrote: What can replace malloc that can run at compile time and won't make it slower at run time? this is actually two questions, so i'll answer two questions. 1. What can replace malloc that can run at compile time? new ubyte[](size) 2. ...and won't make it slower at run time? but we can still use malloc at runtime! `if (__ctfe)` allows us to select the necessary code branch. p.s. i don't think that this is the only problem, though. but i never read std.regex source. it's bad, 'cause i want to make it work with any range, not only with strings. this will allow to run regex on anything -- and open the way to my rbtree-based document system. Thanks! I'll take a look and see if I can make it work.
Re: Typeinfo
On 4/2/15 8:21 PM, Andrei Alexandrescu wrote: Hey folks, is there any way to figure out whether a type is immutable or shared given its typeinfo? I see there's only the flags() method that doesn't tell that. I'm thinking we'd do good to extend that. This is needed for allocators. I'm thinking an allocator might be interested at creation time to use different allocation strategies for qualified types, especially shared ones. I use this technique in lifetime.d: // typeof(ti) is TypeInfo auto isshared = typeid(ti) is typeid(TypeInfo_Shared); https://github.com/D-Programming-Language/druntime/blob/master/src/rt/lifetime.d#L640 -Steve
Re: Mistake of opening of a file having a name in cp1251.
On 4/3/15 2:26 AM, MGW wrote: Greetings to all! I work on Windows with cp1251 and I get an error in this program: import std.stdio; int main(string[] args) { string nameFile = `«Ёлки с объектами №876».txt`; File f = File(nameFile, "w"); f.writeln("Greetings!"); return 0; } Please see if this will fix your problem, currently open PR: https://github.com/D-Programming-Language/phobos/pull/3138 Seems like the same issue -Steve
[Issue 14207] [REG2.065] [CTFE] ICE on unsupported reinterpret cast in compile time
https://issues.dlang.org/show_bug.cgi?id=14207 Kenji Hara k.hara...@gmail.com changed: What|Removed |Added Summary|[REG2.065] Assertion|[REG2.065] [CTFE] ICE on |failure:|unsupported reinterpret |'(vd-storage_class|cast in compile time |(STCout | STCref)) ?| |isCtfeReferenceValid(newval | |) : | |isCtfeValueValid(newval)' | |on line 6724 in file| |'interpret.c' | --
Placing variable/array in a particular section
Today I finally succeeded in building my first Hello World D program (after fixing the endian problem). Is there a way of setting the target section for a variable or an array? E.g. the equivalent way of doing this using gcc is: __attribute__((section(".isr_vector"))) VectorFunc g_pfnVectors[] = { ... }; I need this functionality because, on microcontrollers, it's necessary to control where in RAM / flash memory the data is written. I could of course do this using C or assembly language, but I'd like to use pure D only, if possible.
Re: D1 operator overloading in D2
On 3/30/15 11:25 AM, Steven Schveighoffer wrote: I'll put in a doc PR to reference the D1 documentation. https://github.com/D-Programming-Language/dlang.org/pull/953 -Steve
Re: Making regex replace CTFE by removing malloc
On Friday, 3 April 2015 at 03:58:33 UTC, ketmar wrote: On Thu, 02 Apr 2015 17:14:24 +, Pierre Krafft wrote: What can replace malloc that can run at compile time and won't make it slower at run time? this is actually two questions, so i'll answer two questions. 1. What can replace malloc that can run at compile time? new ubyte[](size) 2. ...and won't make it slower at run time? but we can still use malloc at runtime! `if (__ctfe)` allows us to select the necessary code branch. p.s. i don't think that this is the only problem, though. but i never read std.regex source. it's bad, 'cause i want to make it work with any range, not only with strings. this will allow to run regex on anything -- and open the way to my rbtree-based document system. It seems I have wandered into something outside my knowledge domain. The malloc is indeed one of the lesser problems with that code. The code makes use of completely unsafe code with pointer casts that are disallowed in CTFE. If someone knows how to replace the pointer casts, that would probably be everything needed. Otherwise I think it would need a rewrite to make it use safe D. The new code would probably be slower, so for most people it would be a step back. A solution would be to have different code for CTFE and runtime, but that seems unmaintainable and subpar.
Re: Mid-term vision review
On 4/3/15 1:30 AM, Guillaume Chatelet wrote: On Thursday, 2 April 2015 at 22:44:56 UTC, Andrei Alexandrescu wrote: It's the end of Q1. Walter and I reviewed our vision document. We're staying the course with one important addition: switching to ddmd, hopefully with 2.068. http://wiki.dlang.org/Vision/2015H1 Andrei Regarding C++ integration. I'm working on https://issues.dlang.org/show_bug.cgi?id=14178 which is the first step to get STL working. Sounds great, thanks! Keep us posted. -- Andrei
Re: Mid-term vision review
On 4/3/15 3:10 AM, Andrea Fontana wrote: It would be great to have dmd on embedded platforms. I agree. We just don't have the champion for that yet. -- Andrei
Re: Trouble with Cortex-M Hello World
On Friday, 3 April 2015 at 14:39:35 UTC, Jens Bauer wrote: On Friday, 3 April 2015 at 14:22:43 UTC, Kai Nacke wrote: On Wednesday, 1 April 2015 at 13:59:34 UTC, Johannes Pfau wrote: I'm not sure if anybody ever used GDC/DMD/LDC on a big-endian system. LDC is endian-clean. I used LDC on big-endian Linux/PPC64. Unfortunately, I can't build LLVM on my PowerMac. :/ ... I find it a little strange that LDC is endian-clean; don't they all use the same parser? LDC's frontend is a fork of DMD's frontend - modified to use the LLVM backend, with a few other tweaks - that tracks upstream. Not exactly the same.
Re: D, Python, and Chapel
On 4/3/15 6:36 AM, Gary Willoughby wrote: Chapel overview: http://chapel.cray.com/overview.html Their hello world examples do a fantastic job of illustrating their main selling point. My hat's off to whoever put that on their site. D may have difficulty coming up with something like that, since its selling point(s) may be harder to distil. But if we could, it should get front 'n' center attention on dlang.org.
Re: Trouble with Cortex-M Hello World
On Friday, 3 April 2015 at 14:22:43 UTC, Kai Nacke wrote: On Wednesday, 1 April 2015 at 13:59:34 UTC, Johannes Pfau wrote: I'm not sure if anybody ever used GDC/DMD/LDC on a big-endian system. LDC is endian-clean. I used LDC on big-endian Linux/PPC64. Unfortunately, I can't build LLVM on my PowerMac. :/ ... I find it a little strange that LDC is endian-clean; don't they all use the same parser?
Re: Trouble with Cortex-M Hello World
Am Fri, 03 Apr 2015 07:32:21 + schrieb Jens Bauer doc...@who.no: Well, it seems I found the problem. lexer.h, line 203 reads: union { d_int32 int32value; d_uns32 uns32value; d_int64 int64value; d_uns64 uns64value; ... }; While this optimization is neat, it does not produce correct code on Big Endian or Mixed Endian platforms. If we write a value to int64value and read it from int32value, then it will always be 0. This is because the int32value on a Big Endian platform overlaps the upper 32 bits of int64value, which are zero whenever the stored value fits in 32 bits. And since a Big Endian platform stores the most significant bits/bytes first, they'll read the upper 32 bits, not the lower 32 bits. lexer.c:1874; Lexer::number(Token *t) correctly writes n to t->uns64value. But let's take parse.c:6384 - here token.uns32value is read, thus we'll get a zero, no matter which value was written by number(Token *). In parse.c:6379 we would get a minus one always. Nice find. If you open a pull request on https://github.com/D-Programming-Language/dmd please notify me (@jpf91). I'll make sure to backport the fix to gdc once it's been committed to dmd upstream. Looking for union-tricks, I also found ... stringtable.c:24 hash will not be the same value on Big Endian, Mixed Endian and Little Endian machines. To hash correctly, read one byte at a time, then use bit-shifting by 8 if another byte is to be read. IIRC it does not really matter if the hash is different here, as it's only used internally in the compiler. So as long as it still hashes correctly ('no' collisions) it shouldn't matter.
Re: Escape codes are not 100% portable
You can convert to host encoding; it gets more interesting if you have worked with data from 390's. Anyway, here is the newline reference from Unicode: http://www.unicode.org/versions/Unicode4.0.0/ch05.pdf#G10213 na On Thursday, 2 April 2015 at 13:57:32 UTC, Steven Schveighoffer wrote: On 4/2/15 9:05 AM, Jens Bauer wrote: On Thursday, 2 April 2015 at 12:24:22 UTC, Kagamin wrote: On Thursday, 2 April 2015 at 11:42:50 UTC, Jens Bauer wrote: On the other hand, if a file was copied to a platform where \r = 13 and \n = 10, and the file contains lines ending in 0x0d, then this compiler would not be able to build the file. Where will it fail? It can see extra lines, but those are whitespace; the source should compile just fine. You're right here, because the D compiler does not require reading line-by-line. The line numbers reported will be incorrect, but that's probably the worst that can happen. However, in a case like PPM (Portable Pixmap Format), the problem is that when the first \n character is met, the format switches to binary; but that will not occur until we've already read a bunch of bytes from the binary stream, resulting in the picture being out of sync. After reading all this thread, I can safely say I'm OK with D not targeting these platforms. In addition, "not portable" doesn't mean it can't be built at all - just not without changes. Is it not considered a porting activity to just change those constants for that version of DMD? And finally, if the files are written for that platform, won't they have this wonky coding anyway? And if they are files from another platform which treats \n and \r traditionally, won't editors on that platform do the same thing with line numbers? I really see no problem with the way the code is. -Steve
Re: unittests are really part of the build, not a special run
On 2015-04-02 21:11, Ary Borenszweig wrote: We can. But then it becomes harder to understand what's going on. In RSpec I don't quite understand what's going on really, and I like a bit of magic but not too much of it. It's quite straightforward to implement, in Ruby at least. Something like this:

module DSL
  def describe(name, &block)
    context = Class.new(self)
    context.send(:extend, DSL)
    context.instance_eval(&block)
  end

  def it(name, &block)
    send(:define_method, name, &block)
  end
end

class Foo
  extend DSL

  describe 'foo' do
    it 'bar' do
      p 'asd'
    end
  end
end

You need to register the tests somehow and also be able to run them, but this is the basic idea. I cheated here and used a class to start with to simplify the example. In fact, with macros it's not that simple, because you need to remember the context where you are defining stuff, so that might need adding those capabilities to macros, which would complicate the language. Yeah, I don't really know how macros work in Crystal. -- /Jacob Carlborg
[Issue 14403] New: DDox: std.algorithm index links are 404
https://issues.dlang.org/show_bug.cgi?id=14403

          Issue ID: 14403
           Summary: DDox: std.algorithm index links are 404
           Product: D
           Version: D2
          Hardware: All
               URL: http://dlang.org/library/std/algorithm.html
                OS: All
            Status: NEW
          Keywords: ddoc
          Severity: normal
          Priority: P1
         Component: websites
          Assignee: nob...@puremagic.com
          Reporter: thecybersha...@gmail.com
                CC: c...@dawg.eu, slud...@outerproduct.org

All the links in the overview table on http://dlang.org/library/std/algorithm.html are 404. Looks like the links are using _ (underscore) as the module path delimiter, but DDox expects / (forward slash).
--
[Issue 14399] std.json cannot parse its own output for nan
https://issues.dlang.org/show_bug.cgi?id=14399

bb.t...@gmx.com changed:

           What    |Removed    |Added
           CC      |           |bb.t...@gmx.com

--- Comment #1 from bb.t...@gmx.com ---
According to this discussion, it seems that the problem happens when the value is written: null should be written instead of nan: http://tools.ietf.org/html/rfc4627 "Numeric values that cannot be represented as sequences of digits (such as Infinity and NaN) are not permitted." (https://code.google.com/p/go/issues/detail?id=3480)
--
Re: Redirecting dead links on the website
On Thursday, 2 April 2015 at 04:38:50 UTC, Martin Nowak wrote: On Wednesday, 1 April 2015 at 21:20:49 UTC, w0rp wrote: We sould track down the old links and redirect to the new documentation pages. Working on a fix, will hopefully be deployed tomorrow. https://github.com/D-Programming-Language/dlang.org/pull/951 BTW, we could really need more people with frontend/web skills to help us with dlang.org. I can probably help a little, if there's a bug list of things that need to be done, etc.
Re: Trouble with Cortex-M Hello World
On Friday, 3 April 2015 at 16:11:59 UTC, Iain Buclaw wrote: On 3 April 2015 at 17:58, Jens Bauer via Digitalmars-d Basically because it requires GCC 4.2 - but unfortunately there's more. Once upon a time, LLVM did support being built with GCC 4.2, but I can't get those sources anymore, so I can't get a 'bootstrap LLVM' that way. I guess if you're happy to work with D1... (which is far easier to port than D2 will ever be). It's actually only clang I have problems with, not LDC. D2 probably doesn't have any serious errors on my platform. -The only part that was a bit challenging was the BE issue, but I'm confident that it will be solved the right way. ;) At some point, I might want to try building dmd as well; currently it does not seem to support Mac/PPC (I had a short look at it, but it seems I need to write some header files in order to get it working).
Re: Mid-term vision review
On Friday, 3 April 2015 at 15:07:57 UTC, Andrei Alexandrescu wrote: On 4/3/15 3:10 AM, Andrea Fontana wrote: It would be great to have dmd on embedded platforms. I agree. We just don't have the champion for that yet. -- Andrei I might obviously be biased, but to be honest I don't see much value in starting to port a largely obsolete backend to a whole new processor architecture. Sure, it might be a fun exercise for somebody interested in learning about code generation. But in terms of pushing D forward, I think the much better option is to encourage people to contribute to GDC or LDC instead, where backends for virtually all important embedded platforms already exist. — David
Re: Mid-term vision review
On Friday, 3 April 2015 at 16:03:11 UTC, Joakim wrote: On Friday, 3 April 2015 at 11:04:36 UTC, Dennis Ritchie wrote: Are there any plans for the creation of a powerful AST-macro system in D? http://wiki.dlang.org/DIP50 No, Walter and Andrei are against it: http://forum.dlang.org/thread/l5otb1$1dhi$1...@digitalmars.com?page=20#post-l62466:242n8p:241:40digitalmars.com Thanks.
Re: Reggae v0.0.5 super alpha: A build system in D
On Friday, 3 April 2015 at 17:10:33 UTC, Dicebot wrote: On Friday, 3 April 2015 at 17:03:35 UTC, Atila Neves wrote: . Separate compilation. One file changes, only one file gets rebuilt This immediately has caught my eye as a huge no in the description. We must ban C-style separate compilation; there is simply no way to move forward otherwise. At the very least not endorse it in any way. I understand that. But:

1. One of D's advantages is fast compilation. I don't think that means we should compile everything all the time because we can (it's fast anyway!)
2. There are measurable differences in compile time. While working on reggae I got much faster edit-compile-unittest cycles because of separate compilation
3. This is valuable feedback. I was wondering what everybody else would think. It could be configurable, your "not endorse it in any way" notwithstanding. I for one would rather have it compile separately
4. CTFE and memory consumption can go through the roof (anecdotally anyway; it's never been a problem for me) when compiling everything at once.
Re: Redirecting dead links on the website
On Wednesday, 1 April 2015 at 23:23:27 UTC, Martin Nowak wrote: I might look into that, it's quite some work though. Can we feed them the sitemap instead? http://dlang.org/library/sitemap.xml It looks like the sitemap has the IPv4 localhost IP address in it, instead of the site's domain name. I'm pretty sure that will need to be fixed too.
Re: Trouble with Cortex-M Hello World
On Friday, 3 April 2015 at 15:58:03 UTC, Jens Bauer wrote: Basically because it requires GCC 4.2 - but unfortunately there's more. Once upon a time, LLVM did support being built with GCC 4.2, but I can't get those sources anymore, so I can't get a 'bootstrap LLVM' that way. Can't you just bootstrap your way up to a recent GCC first? — David
Re: Trouble with Cortex-M Hello World
On Friday, 3 April 2015 at 16:39:40 UTC, David Nadlinger wrote: On Friday, 3 April 2015 at 15:58:03 UTC, Jens Bauer wrote: Basically because it requires GCC 4.2 - but unfortunately there's more. Once upon a time, LLVM did support being built with GCC 4.2, but I can't get those sources anymore, so I can't get a 'bootstrap LLVM' that way. Can't you just bootstrap your way up to a recent GCC first? Unfortunately, GCC 4.2 is not patched by Apple. When Apple suddenly decided to stop using GCC, they withdrew all patches and made clear to the GCC team that they were not allowed to use any of those patches. This causes some problems with building Mach-O binaries, which are native to Mac/PPC. It also causes incompatibilities regarding ObjC and poor Xcode integration (!) -So it would have been better to get the old clang sources and build clang that way. Or perhaps someone with an Intel Mac could build clang for PPC, heh. ;)
C++ to D - recursion with std.variant
Hi, Is it possible to write this recursion in D using std.variant?

#include <boost/variant.hpp>
#include <iostream>

struct Nil {};
auto nil = Nil{};

template <typename T> struct Cons;

template <typename T>
using List = boost::variant<Nil, boost::recursive_wrapper<Cons<T>>>;

template <typename T>
struct Cons {
    Cons(T val, List<T> list) : head(val), tail(list) {}
    T head;
    List<T> tail;
};

template <typename T>
class length_visitor : public boost::static_visitor<size_t> {
public:
    int operator()(Nil) const { return 0; }
    int operator()(const Cons<T>& c) const { return 1 + length(c.tail); }
};

template <typename T>
auto cons(T head, List<T> tail) { return List<T>(Cons<T>(head, tail)); }

template <typename T>
auto cons(T head, Nil) { return List<T>(Cons<T>(head, List<T>(Nil{}))); }

template <typename T>
size_t length(const List<T>& list) {
    return boost::apply_visitor(length_visitor<T>(), list);
}

int main() {
    auto l = cons(3, cons(2, cons(1, nil)));
    std::cout << length(l) << std::endl; // prints 3
    return 0;
}

http://ideone.com/qBuOvJ
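A sketch of one possible answer, using std.variant.Algebraic, which supports self-referential types through the `This` placeholder. The names (Nil, IntList, Pair) and the choice of a concrete int element type are mine, not anything from the thread; the `This*` indirection plays the role of boost::recursive_wrapper.

```d
import std.stdio;
import std.typecons : Tuple, tuple;
import std.variant : Algebraic, This;

struct Nil {}

// A list is either Nil or a (head, pointer-to-tail) pair;
// `This` stands for the Algebraic type itself.
alias IntList = Algebraic!(Nil, Tuple!(int, This*));
alias Pair = Tuple!(int, IntList*); // what This* expands to inside the Tuple

IntList cons(int head, IntList tail)
{
    auto rest = new IntList;
    *rest = tail;
    return IntList(tuple(head, rest));
}

size_t length(IntList list)
{
    if (list.peek!Nil)
        return 0;
    return 1 + length(*list.get!Pair[1]);
}

void main()
{
    auto l = cons(3, cons(2, cons(1, IntList(Nil()))));
    writeln(length(l)); // prints 3
}
```

A generic `alias List(T) = ...` template should work the same way in principle, though template argument deduction through such an alias can be finicky, which is why the sketch sticks to a concrete instantiation.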
[Issue 14395] [REG2.067] Typesafe variadic function call collapsed if being used for default value
https://issues.dlang.org/show_bug.cgi?id=14395

Kenji Hara k.hara...@gmail.com changed:

           What    |Removed                  |Added
           Keywords|                         |pull, wrong-code
           Hardware|x86_64                   |All
           Summary |default value collapsed  |[REG2.067] Typesafe
                   |(dmd2.067)               |variadic function call
                   |                         |collapsed if being used for
                   |                         |default value
           OS      |Windows                  |All

--- Comment #3 from Kenji Hara k.hara...@gmail.com ---
https://github.com/D-Programming-Language/dmd/pull/4551
--
[Issue 14341] [REG 2.067] Crash with -O -release -inline after sort and map!(to!string)
https://issues.dlang.org/show_bug.cgi?id=14341

Kenji Hara k.hara...@gmail.com changed:

           What      |Removed    |Added
           Status    |NEW        |RESOLVED
           Resolution|---        |FIXED

--- Comment #4 from Kenji Hara k.hara...@gmail.com ---
(In reply to Kenji Hara from comment #2) (In reply to Vladimir Panteleev from comment #1) Introduced in https://github.com/D-Programming-Language/dmd/pull/4415 This is a dup of issue 14220, but its fix is not yet cherry-picked into the 2.067 branch. The 14220 fix was cherry-picked into the stable branch.
--
Re: Reggae v0.0.5 super alpha: A build system in D
On Friday, 3 April 2015 at 17:13:41 UTC, Dicebot wrote: Also I don't see any point in yet another meta build system. The very point of the initial discussion was about getting a D-only cross-platform solution that won't require installing any additional software but a working D compiler. I was also thinking of a binary backend (producing a binary executable that does the build, kinda like what ctRegex does but for builds), and also something that just builds it on the spot. The thing is, I want to get feedback on the API first and foremost, and delegating the whole do-I-or-do-I-not-need-to-build-it logic to programs that already do that (and well) first was the obvious (for me) choice. Also, Ninja is _really_ fast.
Re: Mid-term vision review
On Friday, 3 April 2015 at 11:04:36 UTC, Dennis Ritchie wrote: Are there any plans for the creation of a powerful AST-macro system in D? http://wiki.dlang.org/DIP50 No, Walter and Andrei are against it: http://forum.dlang.org/thread/l5otb1$1dhi$1...@digitalmars.com?page=20#post-l62466:242n8p:241:40digitalmars.com On Friday, 3 April 2015 at 13:41:02 UTC, Wyatt wrote: On Friday, 3 April 2015 at 10:56:24 UTC, Joakim wrote: You have already been doing this recently by listing various needed jobs on the forum with the [WORK] label, which is helpful for those looking for ways to pitch in, but it would be better if they were listed on a single page in the wiki. Or use the built-in keywords functionality of Bugzilla. Or tracking bugs. Those would both work, but you still need to surface them somewhere on the wiki or main site, because the bugzilla interface doesn't appear that great at surfacing such topics. Things just get lost in bugzilla, as it's hard to search, though you could put together a list on the wiki with links to the relevant bugzilla issues, if you wanted to keep bugzilla as the main archive. Is it? I don't understand, what problem are you having? (IME, It's a damn sight better than any of the alternatives when you have this many issues.) I've had problems searching for dmd error strings in bugzilla and had a much better experience doing a google custom search on forum.dlang.org instead, using google search to find it through the web mirror. Perhaps I'm just not adequately aware of how bugzilla search works, but the moment you need casual users to guess the right keywords to search for, you lose 99% of them. Even if it's all done in bugzilla underneath, you need to have some sort of external links from the wiki or somewhere easier to find, because bugzilla is a very effective haystack.
Re: Trouble with Cortex-M Hello World
On 3 April 2015 at 17:58, Jens Bauer via Digitalmars-d digitalmars-d@puremagic.com wrote: On Friday, 3 April 2015 at 15:41:34 UTC, David Nadlinger wrote: On Friday, 3 April 2015 at 14:39:35 UTC, Jens Bauer wrote: Unfortunately, I can't build LLVM on my PowerMac. :/ Why would that be so? Basically because it requires GCC 4.2 - but unfortunately there's more. Once upon a time, LLVM did support being built with GCC 4.2, but I can't get those sources anymore, so I can't get a 'bootstrap LLVM' that way. I guess if you're happy to work with D1... (which is far easier to port than D2 will ever be). Iain.
Reggae v0.0.5 super alpha: A build system in D
I wanted to work on this a little more before announcing it, but it seems I'm going to be busy working on trying to get unit-threaded into std.experimental, so here it is: http://code.dlang.org/packages/reggae If you're wondering about the name, it's because it's supposed to build on dub. You might wonder at some of the design decisions. Some of them are solutions to weird problems caused by writing build descriptions in a compiled language; others I'm not too sure of. Should compiler flags be an array of strings or a string? I got tired of typing square brackets, so it's a string for now. Please let me know if the API is suitable or not, preferably by trying to actually use it to build your software. Existing dub projects might work by just doing this from a build directory of your choice: reggae -b make /path/to/project. That should generate a Makefile (or equivalent Ninja ones if `-b ninja` is used) to do what `dub build` usually does. It _should_ work for all dub projects, but it doesn't right now. For at least a few projects it's due to bugs in `dub describe`. For others it might be bugs in reggae, I don't know yet. Any dub.json files that use dub configurations extensively are likely to not work.

Features:
. Make and Ninja backends (tup will be the next one)
. Automatically imports dub projects and writes the reggae build configuration
. Access to all objects to be built with dub (including dependencies) when writing custom builds (reggae does this itself)
. Out-of-tree builds, like CMake
. Arbitrary build rules but pre-built ease-of-use higher level targets
. Separate compilation. One file changes, only one file gets rebuilt
. Automatic dependency detection for D, C, and C++ source files
. Can build itself (but includes too many object files, another `dub describe` bug)

There are several runnable examples in the features directory, in the form of Cucumber tests. They include linking D code to C++.
I submitted a proposal to talk about this at DConf but I'll be talking about testing instead. Maybe next year? Anyway, destroy! Atila
Re: Reggae v0.0.5 super alpha: A build system in D
On Friday, 3 April 2015 at 17:03:35 UTC, Atila Neves wrote: . Separate compilation. One file changes, only one file gets rebuilt This immediately has caught my eye as a huge no in the description. We must ban C-style separate compilation; there is simply no way to move forward otherwise. At the very least not endorse it in any way.
Re: D, Python, and Chapel
On Friday, 3 April 2015 at 15:34:05 UTC, John Colvin wrote: On Friday, 3 April 2015 at 10:18:11 UTC, Russel Winder wrote: Chapel 1.11 just got released, and they are making a big play of the integration of Chapel with Python. This could be huge and potentially disrupt the complacency of the NumPy-based folk. Chapel is a rather pleasant PGAS language that makes parallelism and clustering quite nice. Certainly if the choice is Python+C++ vs Python+Chapel, this is now no contest. This may put a kibosh on the whole Python+D thing. I've had a look at Chapel and I don't get what the big deal is. There's some nice syntax and good thinking about parallelism in there, but I don't see what's exciting after that... Maybe D has spoiled me for seeing power in a language. I guess what I'm saying is I can see that they've put a lot of thought into good abstractions for parallelism in HPC; we should steal a bunch of it, because D is eminently capable of supporting similar abstractions while being a much more rounded language in other regards. The big deal is that it is being developed in open collaboration with most companies and research labs that matter in HPC.
Re: Reggae v0.0.5 super alpha: A build system in D
Also I don't see any point in yet another meta build system. The very point of the initial discussion was about getting a D-only cross-platform solution that won't require installing any additional software but a working D compiler.
Re: Typeinfo
On 4/3/15 4:53 AM, Steven Schveighoffer wrote: On 4/2/15 8:21 PM, Andrei Alexandrescu wrote: Hey folks, is there any way to figure out whether a type is immutable or shared given its typeinfo? I see there's only the flags() method that doesn't tell that. I'm thinking we'd do good to extend that. This is needed for allocators. I'm thinking an allocator might be interested at creation time to use different allocation strategies for qualified types, especially shared ones. I use this technique in lifetime.d: // typeof(ti) is TypeInfo auto isshared = typeid(ti) is typeid(TypeInfo_Shared); https://github.com/D-Programming-Language/druntime/blob/master/src/rt/lifetime.d#L640 Thanks Adam and Steve. Guess I should have asked this in the learn forum :o). -- Andrei
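The lifetime.d trick Steve quotes generalizes into small standalone helpers. This is a hedged sketch (the helper names are mine); it relies on the compiler wrapping qualified types in TypeInfo_Shared and TypeInfo_Invariant, the latter being the historical name for immutable's wrapper class.

```d
import std.stdio;

// Classify a TypeInfo by the qualifier wrapper class the compiler emitted,
// the same comparison lifetime.d uses for shared allocations.
bool isSharedTI(const TypeInfo ti)
{
    return typeid(ti) is typeid(TypeInfo_Shared);
}

bool isImmutableTI(const TypeInfo ti)
{
    // immutable's wrapper kept its old name, TypeInfo_Invariant
    return typeid(ti) is typeid(TypeInfo_Invariant);
}

void main()
{
    writeln(isSharedTI(typeid(shared(int))));       // shared(int)
    writeln(isImmutableTI(typeid(immutable(int)))); // immutable(int)
    writeln(isSharedTI(typeid(int)));               // unqualified
}
```

An allocator could run such a check once at creation time and pick a different strategy when the hosted type is shared, which is the use case Andrei describes.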
Re: Mid-term vision review
On 4/3/15 4:04 AM, Dennis Ritchie wrote: On Thursday, 2 April 2015 at 22:44:56 UTC, Andrei Alexandrescu wrote: It's the end of Q1. Walter and I reviewed our vision document. We're staying the course with one important addition: switching to ddmd, hopefully with 2.068. http://wiki.dlang.org/Vision/2015H1 Andrei Are there any plans for the creation of a powerful AST-macro system in D? http://wiki.dlang.org/DIP50 Not for the time being. -- Andrei
Re: Mid-term vision review
On Friday, 3 April 2015 at 17:51:00 UTC, Andrei Alexandrescu wrote: On 4/3/15 4:04 AM, Dennis Ritchie wrote: On Thursday, 2 April 2015 at 22:44:56 UTC, Andrei Alexandrescu wrote: It's the end of Q1. Walter and I reviewed our vision document. We're staying the course with one important addition: switching to ddmd, hopefully with 2.068. http://wiki.dlang.org/Vision/2015H1 Andrei Are there any plans for the creation of a powerful AST-macro system in D? http://wiki.dlang.org/DIP50 Not for the time being. -- Andrei Conclusion with which I agree. However, I'd like us to keep in mind that the proposal is around, as I think it is highly relevant for discussions like the ones related to unittests - behavior could be easily customized if the damn thing was a macro. Also, what are the plans for GDC and LDC if we move toward DDMD?
Re: I submitted my container library to code.dlang.org
On Wednesday, 1 April 2015 at 06:31:28 UTC, thedeemon wrote: On Tuesday, 31 March 2015 at 21:17:04 UTC, Martin Nowak wrote: Robin Hood sounds like a good idea, but it really isn't. Keep your load factor reasonable and distribute values evenly, then you don't need a LRU lookup. Is there a D version of a hash table with open addressing and quadratic probing? It would be interesting to compare speeds. I like Robin Hood for cache-friendliness, and it works quite well for many combinations of key-value types. Now there is. I just changed the hashmap in my container library to use open addressing and quadratic probing. There's a benchmark program in the library for testing it in comparison to the standard associative array. I tested using 2.066 with optimisations on, and it seems to be about the same. https://github.com/w0rp/dstruct/blob/master/source/dstruct/map.d Now, I am not the most fantastic Computer Science guy in the world, and I probably got a few things wrong. If anyone would like to look at my code and point out mistakes, please do. I will add any improvements suggested.
Re: Trouble with Cortex-M Hello World
On Friday, 3 April 2015 at 17:05:28 UTC, John Colvin wrote: There is a universal binary of LLVM 2.1 with clang (llvm-gcc back then, I think) available here: http://llvm.org/releases/2.1/llvm-llvm-gcc4.0-2.1-darwin-univ.tar.gz Thank you so much; I'll try it immediately. I don't know why I hadn't noticed it!
Re: Reggae v0.0.5 super alpha: A build system in D
On Friday, 3 April 2015 at 17:17:50 UTC, Atila Neves wrote: On Friday, 3 April 2015 at 17:13:41 UTC, Dicebot wrote: Also I don't see any point in yet another meta build system. The very point of the initial discussion was about getting a D-only cross-platform solution that won't require installing any additional software but a working D compiler. I was also thinking of a binary backend (producing a binary executable that does the build, kinda like what ctRegex does but for builds), and also something that just builds it on the spot. The thing is, I want to get feedback on the API first and foremost, and delegating the whole do-I-or-do-I-not-need-to-build-it logic to programs that already do that (and well) first was the obvious (for me) choice. Also, Ninja is _really_ fast. The thing is, it may actually affect the API. The way I had originally expected it, any legal D code would be allowed for build commands instead of a pure DSL approach. So instead of providing a high-level abstraction like this:

const mainObj  = Target("main.o",  "dmd -I$project/src -c $in -of$out", Target("src/main.d"));
const mathsObj = Target("maths.o", "dmd -c $in -of$out", Target("src/maths.d"));
const app      = Target("myapp",   "dmd -of$out $in", [mainObj, mathsObj]);

.. you instead define dependency building blocks in the D domain:

struct App
{
    enum path = "./myapp";
    alias deps = Depends!(mainObj, mathsObj);

    static void generate()
    {
        import std.exception, std.process;
        enforce(execute(["dmd", "-ofmyapp", deps[0].path, deps[1].path]).status == 0);
    }
}

And provide higher-level helper abstractions on top of that, tuned for D projects. This is just random syntax I have invented as an example, of course. It is already possible to write decent cross-platform scripts in D - only a dependency tracking library is missing. But of course that would make using other build systems as backends impossible.
Re: I submitted my container library to code.dlang.org
On Friday, 3 April 2015 at 16:12:00 UTC, w0rp wrote: On Wednesday, 1 April 2015 at 06:31:28 UTC, thedeemon wrote: On Tuesday, 31 March 2015 at 21:17:04 UTC, Martin Nowak wrote: Robin Hood sounds like a good idea, but it really isn't. Keep your load factor reasonable and distribute values evenly, then you don't need a LRU lookup. Is there a D version of a hash table with open addressing and quadratic probing? It would be interesting to compare speeds. I like Robin Hood for cache-friendliness, and it works quite well for many combinations of key-value types. Now there is. I just changed the hashmap in my container library to use open addressing and quadratic probing. There's a benchmark program in the library for testing it in comparison to the standard associative array. I tested using 2.066 with optimisations on, and it seems to be about the same. https://github.com/w0rp/dstruct/blob/master/source/dstruct/map.d Now, I am not the most fantastic Computer Science guy in the world, and I probably got a few things wrong. If anyone would like to look at my code and point out mistakes, please do. I will add any improvements suggested. Also, I changed the API slightly.

1. I removed setDefault for the standard associative array type. Now it's only there for my type. The module now only deals with the added library type.
2. I changed my range functions to match the names for range functions in 2.067, so they are now named byKey, byValue, and byKeyValue. I left out 'keys' and 'values', which I think are mistakes, when you can do byKey.array yourself, etc.
3. I renamed ItemRange to KeyValueRange. (My ranges for the hashmaps are all named, which is supposed to make it easier to build higher order ranges from them.)
4. I added a constructor for the hashmaps which lets you maybe avoid some allocations by providing a minimum size, where the hashmap will probably use a size larger than the one you provide.
5. I started using prime sizes for the hashmaps, because quadratic probing doesn't seem to work without primes.

I think those are the bigger changes at least.
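For readers wondering what open addressing with quadratic probing looks like, here is a minimal, self-contained sketch. It is not dstruct's implementation: all names are invented, key 0 is reserved as the empty marker, and the table size is assumed prime and fixed (a real map would also track load factor and grow).

```d
import std.typecons : Nullable, nullable;

// Toy open-addressing map from nonzero long keys to int values.
struct IntMap
{
    private long[] keys; // 0 marks an empty slot, so key 0 is unsupported
    private int[] vals;

    this(size_t primeSize)
    {
        keys.length = primeSize;
        vals.length = primeSize;
    }

    // Quadratic probe sequence: h, h+1, h+4, h+9, ... (mod table size).
    // With a prime size and a low load factor the sequence reaches enough
    // distinct slots before the "table full" assert can fire.
    private size_t probe(long key) const
    {
        immutable n = keys.length;
        immutable home = cast(size_t)(cast(ulong) key % n);
        foreach (k; 0 .. n)
        {
            immutable slot = (home + k * k) % n;
            if (keys[slot] == 0 || keys[slot] == key)
                return slot;
        }
        assert(false, "table full");
    }

    void opIndexAssign(int value, long key)
    {
        immutable slot = probe(key);
        keys[slot] = key;
        vals[slot] = value;
    }

    Nullable!int get(long key) const
    {
        immutable slot = probe(key);
        return keys[slot] == key ? nullable(vals[slot]) : Nullable!int.init;
    }
}
```

With a size of 13, the keys 42 and 55 both hash to home slot 3, so inserting both exercises the probe sequence: after `auto m = IntMap(13); m[42] = 1; m[55] = 2;`, `m.get(55)` still finds its value in the next probed slot.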
Re: Mid-term vision review
On Thursday, 2 April 2015 at 22:44:56 UTC, Andrei Alexandrescu wrote: It's the end of Q1. Walter and I reviewed our vision document. We're staying the course with one important addition: switching to ddmd, hopefully with 2.068. http://wiki.dlang.org/Vision/2015H1 Andrei I'm glad to see embedded systems mentioned there. I'm planning on seeing if I can get a D program to run on a Pebble watch, so I've been asking around for limited runtimes recently. I think it would be a big bonus to have some support for limited runtimes, as I think D can do very well as a better C with some support there. I won't promise much from my experiments, but I'll certainly post about it if it gets somewhere. When std.allocator becomes part of Phobos, I'll probably experiment with writing containers with different allocators.
[Issue 14368] stdio.rawRead underperforms stdio
https://issues.dlang.org/show_bug.cgi?id=14368 --- Comment #1 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/D-Programming-Language/phobos https://github.com/D-Programming-Language/phobos/commit/e741c24f8578498be6b11bcc76e150b4f001be3a Fix Issue 14368 - rawRead performance reduce performance gap between fread and rawRead cf. https://issues.dlang.org/show_bug.cgi?id=14368 https://github.com/D-Programming-Language/phobos/commit/4d30c1d15d47eee3f4341feb3ee25eb0102fb6ae Merge pull request #3127 from charles-cooper/issue_14368 Fix Issue 14368 - rawRead performance --
[Issue 14368] stdio.rawRead underperforms stdio
https://issues.dlang.org/show_bug.cgi?id=14368

github-bugzi...@puremagic.com changed:

           What      |Removed    |Added
           Status    |NEW        |RESOLVED
           Resolution|---        |FIXED

--
Re: Reggae v0.0.5 super alpha: A build system in D
On Friday, 3 April 2015 at 17:59:22 UTC, Atila Neves wrote: Well, I took your advice (and one of my acceptance tests is based on your simplified real-world example) and started with the low-level any-command-will-do API first. I built the high-level ones on top of that. It doesn't seem crazy to me that certain builds can only be done by certain backends. The fact that the make backend can track C/C++/D dependencies wasn't a given, and the implementation is quite ugly. In any case, the Target structs aren't high-level abstractions, they're just data. Data that can be generated by any code. Your example is basically how the `dExe` rule works: run dmd at run-time, collect dependencies and build all the `Target` instances. You could have a D backend that outputs (then compiles and runs) your example. The only problem I can see is execution speed. Maybe I didn't include enough examples. I also need to think about your example a bit more. I may have misunderstood how it works judging only by the provided examples. Give me a bit more time to investigate the actual sources and I may reconsider :)
Re: Digger 1.1
On 2015-03-18 12:14:01 +, Vladimir Panteleev said: I've pushed support for DMD bootstrapping, so if you need to build master now, build latest Digger from source. I'll make a binary release after 2.067 is out. Any news on this? And will there be COFF32 support as well? -- Robert M. Münch http://www.saphirion.com smarter | better | faster
Re: Mid-term vision review
On Friday, 3 April 2015 at 16:41:14 UTC, David Nadlinger wrote: On Friday, 3 April 2015 at 15:07:57 UTC, Andrei Alexandrescu wrote: On 4/3/15 3:10 AM, Andrea Fontana wrote: It would be great to have dmd on embedded platforms. I agree. We just don't have the champion for that yet. -- Andrei I might obviously be biased, but to be honest I don't see much value in starting to port a largely obsolete backend to a whole new processor architecture. Sure, it might be a fun exercise for somebody interested in learning about code generation. But in terms of pushing D forward, I think the much better option is to encourage people to contribute to GDC or LDC instead, where backends for virtually all important embedded platforms already exist. — David I think I might agree with that. What matters more is having lightweight runtimes for embedded systems. There's no single good answer, so it will have to come down to switching features on and off (GC for sure, exceptions, type info, etc.).
Re: Trouble with Cortex-M Hello World
On Friday, 3 April 2015 at 15:58:03 UTC, Jens Bauer wrote: On Friday, 3 April 2015 at 15:41:34 UTC, David Nadlinger wrote: On Friday, 3 April 2015 at 14:39:35 UTC, Jens Bauer wrote: Unfortunately, I can't build LLVM on my PowerMac. :/ Why would that be so? Basically because it requires GCC 4.2 - but unfortunately there's more. Once upon a time, LLVM did support being built with GCC 4.2, but I can't get those sources anymore, so I can't get a 'bootstrap LLVM' that way. There is a universal binary of LLVM 2.1 with clang (llvm-gcc back then, I think) available here: http://llvm.org/releases/2.1/llvm-llvm-gcc4.0-2.1-darwin-univ.tar.gz
Re: Reggae v0.0.5 super alpha: A build system in D
On Friday, 3 April 2015 at 17:25:51 UTC, Ben Boeckel wrote: On Fri, Apr 03, 2015 at 17:10:31 +, Dicebot via Digitalmars-d-announce wrote: On Friday, 3 April 2015 at 17:03:35 UTC, Atila Neves wrote: . Separate compilation. One file changes, only one file gets rebuilt This immediately has caught my eye as a huge no in the description. We must ban C-style separate compilation; there is simply no way to move forward otherwise. At the very least not endorse it in any way. Why? Other than the -fversion=... stuff, what is really blocking this? I personally find unity builds to not be worth it, but I don't see anything blocking separate compilation for D if dependencies are set up properly. --Ben There are two big problems with C-style separate compilation:

1) It complicates whole-program optimization possibilities. Old-school object files are simply not good enough to preserve the information necessary to produce optimized builds, and we are not in a position to create our own metadata + linker combo to circumvent that. This also applies to attribute inference, which has become a really important development direction to handle the growing attribute hell. During the last D Berlin Meetup we had an interesting conversation on the attribute inference topic with Martin Nowak, and dropping legacy C-style separate compilation seemed to be recognized as unavoidable to implement anything decent in that domain.

2) Ironically, it is just very slow. Those who come from the C world got used to using separate compilation to speed up rebuilds, but it doesn't work that way in D. It may look better if you change only 1 or 2 modules, but as the number of modified modules grows, incremental rebuild quickly becomes _slower_ than a full program build with all files processed in one go. It can sometimes result in an order of magnitude slowdown (personal experience).
The difference from C is that repeated imports are very cheap in D (you don't copy-paste module content again and again like with headers), but at the same time semantic analysis of an imported module is more expensive (because D semantics are more complicated). When you do separate compilation you discard already processed imports and repeat the work again and again from the very beginning for each newly compiled file, accumulating a huge slowdown for the application in total. To get the best compilation speed in D you want to process as many modules with shared imports at one time as possible. At the same time, for really big projects that stops being feasible at some point, especially if CTFE is heavily used and memory consumption explodes. In that case the best approach is partial separate compilation - decoupling parts of a program as static libraries and doing parallel compilation of each separate library - but still compiling each library in one go. That allows you to get parallelization without doing the same costly work again and again.
Re: I submitted my container library to code.dlang.org
On Friday, 3 April 2015 at 16:12:00 UTC, w0rp wrote: Is there a D version of a hash table with open addressing and quadratic probing? Now there is. https://github.com/w0rp/dstruct/blob/master/source/dstruct/map.d Great! I'll experiment with it and do some comparisons.
[Issue 14398] Segfault when nested struct in static array accesses context
https://issues.dlang.org/show_bug.cgi?id=14398

Kenji Hara k.hara...@gmail.com changed:

           What    |Removed    |Added
           Keywords|           |pull, wrong-code

--- Comment #1 from Kenji Hara k.hara...@gmail.com ---
https://github.com/D-Programming-Language/dmd/pull/4552
--
Re: Trouble with Cortex-M Hello World
On Friday, 3 April 2015 at 15:41:34 UTC, David Nadlinger wrote: On Friday, 3 April 2015 at 14:39:35 UTC, Jens Bauer wrote: Unfortunately, I can't build LLVM on my PowerMac. :/ Why would that be so? Basically because it requires GCC 4.2 - but unfortunately there's more. Once upon a time, LLVM did support being built with GCC 4.2, but I can't get those sources anymore, so I can't get a 'bootstrap LLVM' that way.
Re: Typeinfo
On Friday, 3 April 2015 at 17:51:33 UTC, Andrei Alexandrescu wrote: Thanks Adam and Steve. Guess I should have asked this in the learn forum :o). -- Andrei Yeah, I totally expected some revolutionary proposal for RTTI improvement in D from you when opening the topic :( You have broken my heart!
Re: Reggae v0.0.5 super alpha: A build system in D
On Friday, 3 April 2015 at 17:40:42 UTC, Dicebot wrote: On Friday, 3 April 2015 at 17:17:50 UTC, Atila Neves wrote: On Friday, 3 April 2015 at 17:13:41 UTC, Dicebot wrote: Also I don't see any point in yet another meta build system. The very point of the initial discussion was about getting a D-only cross-platform solution that won't require installing any additional software but a working D compiler. I was also thinking of a binary backend (producing a binary executable that does the build, kinda like what ctRegex does but for builds), and also something that just builds it on the spot. The thing is, I want to get feedback on the API first and foremost, and delegating the whole do-I-or-do-I-not-need-to-build-it logic to programs that already do that (and well) first was the obvious (for me) choice. Also, Ninja is _really_ fast. The thing is, it may actually affect the API. The way I had originally expected it, any legal D code would be allowed for build commands instead of a pure DSL approach. So instead of providing a high-level abstraction like this:

const mainObj = Target("main.o", "dmd -I$project/src -c $in -of$out", Target("src/main.d"));
const mathsObj = Target("maths.o", "dmd -c $in -of$out", Target("src/maths.d"));
const app = Target("myapp", "dmd -of$out $in", [mainObj, mathsObj]);

.. you instead define dependency building blocks in the D domain:

struct App
{
    enum path = "./myapp";
    alias deps = Depends!(mainObj, mathsObj);

    static void generate()
    {
        import std.process;
        enforce(execute(["dmd", "-ofmyapp", deps[0].path, deps[1].path]).status == 0);
    }
}

And provide higher-level helper abstractions on top of that, tuned for D projects. This is just random syntax I have invented for the example, of course. It is already possible to write decent cross-platform scripts in D - only a dependency tracking library is missing. But of course that would make using other build systems as backends impossible. 
Well, I took your advice (and one of my acceptance tests is based on your simplified real-world example) and started with the low-level any-command-will-do API first. I built the high-level ones on top of that. It doesn't seem crazy to me that certain builds can only be done by certain backends. The fact that the make backend can track C/C++/D dependencies wasn't a given, and the implementation is quite ugly. In any case, the Target structs aren't high-level abstractions; they're just data. Data that can be generated by any code. Your example is basically how the `dExe` rule works: run dmd at run-time, collect dependencies and build all the `Target` instances. You could have a D backend that outputs (then compiles and runs) your example. The only problem I can see is execution speed. Maybe I didn't include enough examples. I also need to think about your example a bit more.
Re: Reggae v0.0.5 super alpha: A build system in D
On Friday, 3 April 2015 at 17:22:42 UTC, Atila Neves wrote: On Friday, 3 April 2015 at 17:10:33 UTC, Dicebot wrote: On Friday, 3 April 2015 at 17:03:35 UTC, Atila Neves wrote: . Separate compilation. One file changes, only one file gets rebuilt This immediately caught my eye as a huge no in the description. We must ban C-style separate compilation, there is simply no way to move forward otherwise. At the very least not endorse it in any way. I understand that. But:
1. One of D's advantages is fast compilation. I don't think that means we should compile everything all the time because we can (it's fast anyway!)
2. There are measurable differences in compile time. While working on reggae I got much faster edit-compile-unittest cycles because of separate compilation
3. This is valuable feedback. I was wondering what everybody else would think. It could be configurable, your "not endorse it in any way" notwithstanding. I for one would rather have it compile separately
4. CTFE and memory consumption can go through the roof (anecdotally anyway, it's never been a problem for me) when compiling everything at once. See http://forum.dlang.org/post/nhaoahnqucqkjgdwt...@forum.dlang.org
tl;dr: separate compilation support is necessary, but not at single-module level.
Re: Reggae v0.0.5 super alpha: A build system in D
On Friday, 3 April 2015 at 17:55:00 UTC, Dicebot wrote: On Friday, 3 April 2015 at 17:25:51 UTC, Ben Boeckel wrote: On Fri, Apr 03, 2015 at 17:10:31 +, Dicebot via Digitalmars-d-announce wrote: On Friday, 3 April 2015 at 17:03:35 UTC, Atila Neves wrote: . Separate compilation. One file changes, only one file gets rebuilt This immediately caught my eye as a huge no in the description. We must ban C-style separate compilation; there is simply no way to move forward otherwise. At the very least, not endorse it in any way. Why? Other than the -fversion=... stuff, what is really blocking this? I personally find unity builds to not be worth it, but I don't see anything blocking separate compilation for D if dependencies are set up properly. --Ben There are 2 big problems with C-style separate compilation: 1) It complicates whole-program optimization possibilities. Old-school object files are simply not good enough to preserve the information necessary to produce optimized builds, and we are not in a position to create our own metadata + linker combo to circumvent that. This also applies to attribute inference, which has become a really important development direction for handling the growing attribute hell. During the last D Berlin Meetup we had an interesting conversation on the attribute inference topic with Martin Nowak, and dropping legacy C-style separate compilation seemed to be recognized as unavoidable to implement anything decent in that domain. 2) Ironically, it is just very slow. Those who come from the C world are used to using separate compilation to speed up rebuilds, but it doesn't work that way in D. It may look better if you change only 1 or 2 modules, but as the number of modified modules grows, an incremental rebuild quickly becomes _slower_ than a full program build with all files processed in one go. It can sometimes result in an order-of-magnitude slowdown (personal experience). 
The difference from C is that repeated imports are very cheap in D (you don't copy-paste module content again and again like with headers), but at the same time semantic analysis of an imported module is more expensive (because D semantics are more complicated). When you do separate compilation you discard already-processed imports and repeat the work again and again from the very beginning for each newly compiled file, accumulating a huge slowdown for the application in total. To get the best compilation speed in D you want to process as many modules with shared imports at one time as possible. At the same time, for really big projects that becomes infeasible at some point, especially if CTFE is heavily used and memory consumption explodes. In that case the best approach is partial separate compilation - decoupling parts of a program as static libraries and doing parallel compilation of each separate library - but still compiling each library in one go. That allows you to get parallelization without doing the same costly work again and again. Interesting. It's true that it's not always faster to compile each module separately, I already knew that. It seems to me, however, that when that's actually the case, the practical difference is negligible. Even if it's 10x slower, the linker will take longer anyway. Because it'll all still be under a second. That's been my experience anyway. i.e. It's either faster or it doesn't make much of a difference. All I know is I've seen a definite improvement in my edit-compile-unittest cycle by compiling modules separately. How would the decoupling happen? Is the user supposed to partition the binary into suitable static libraries? Or is the system supposed to be smart enough to figure that out? Atila
Re: Reggae v0.0.5 super alpha: A build system in D
On 2015-04-03 20:06, Atila Neves wrote: Interesting. It's true that it's not always faster to compile each module separately, I already knew that. It seems to me, however, that when that's actually the case, the practical difference is negligible. Even if 10x slower, the linker will take longer anyway. Because it'll all still be under a second. That's been my experience anyway. i.e. It's either faster or it doesn't make much of a difference. I just tried compiling one of my projects. It has a makefile that does separate compilation and a shell script I use for unit testing which compiles everything in one go. The makefile takes 5.3 seconds, not including linking since it builds a library. The shell script takes 1.3 seconds, which includes compiling unit tests and linking as well. -- /Jacob Carlborg
Issue with free() for linked list implementation
Hello. I’m trying to write my own version of a list that doesn’t rely on the garbage collector. I’m working on a very bare-bones implementation using malloc and free, but I’m running into an exception when I attempt to call free. Here is a very minimal code sample to illustrate the issue:

// Some constant values we can use
static const int two = 2, ten = 10;

// Get memory for two new nodes
Node* head = cast(Node*)malloc(two.sizeof);
Node* node1 = cast(Node*)malloc(ten.sizeof);

// Initialize the nodes
node1.value = ten;
node1.next = null;
head.value = two;
head.next = node1;

// Attempt to free the head node
Node* temp = head.next;
head.next = null;
free(head); // Exception right here
head = temp;

Note, if I comment out the line ‘head.next = node1’, this code works. Does anyone know what I’m doing wrong with my manual memory management?
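The likely culprit is the size passed to malloc: `two.sizeof` is the size of the int variable `two` (4 bytes), not of a `Node`, so writing `head.next` runs past the end of the allocation and corrupts the heap, which is what free() then trips over. A corrected sketch, assuming `Node` is the obvious value/next struct (the original post doesn't show its definition):

```d
import core.stdc.stdlib : free, malloc;

struct Node
{
    int value;
    Node* next;
}

// Allocate room for a whole Node, not for an int:
// Node.sizeof, not two.sizeof.
Node* makeNode(int value)
{
    auto n = cast(Node*) malloc(Node.sizeof);
    n.value = value;
    n.next = null;
    return n;
}

void main()
{
    Node* node1 = makeNode(10);
    Node* head = makeNode(2);
    head.next = node1;

    // Unlink and free the head, as in the original snippet.
    Node* temp = head.next;
    head.next = null;
    free(head);
    head = temp;

    assert(head.value == 10);
    free(head);
}
```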
[Issue 14401] typeid(shared X).init is empty for class types
https://issues.dlang.org/show_bug.cgi?id=14401 Kenji Hara k.hara...@gmail.com changed: Keywords added: pull, wrong-code; Component changed: DMD -> druntime --- Comment #1 from Kenji Hara k.hara...@gmail.com --- The root problem is in the definition of TypeInfo_Class. So it's a druntime issue. https://github.com/D-Programming-Language/druntime/pull/1205 https://github.com/D-Programming-Language/phobos/pull/3143 --
Re: Reggae v0.0.5 super alpha: A build system in D
On Friday, 3 April 2015 at 19:07:09 UTC, Jacob Carlborg wrote: On 2015-04-03 20:06, Atila Neves wrote: Interesting. It's true that it's not always faster to compile each module separately, I already knew that. It seems to me, however, that when that's actually the case, the practical difference is negligible. Even if 10x slower, the linker will take longer anyway. Because it'll all still be under a second. That's been my experience anyway. i.e. It's either faster or it doesn't make much of a difference. I just tried compiling one of my projects. It has a makefile that does separate compilation and a shell script I use for unit testing which compiles everything in one go. The makefile takes 5.3 seconds, not including linking since it builds a library. The shell script takes 1.3 seconds, which includes compiling unit tests and linking as well. Change one file and see which one is faster with an incremental build.
Re: Reggae v0.0.5 super alpha: A build system in D
On 2015-04-03 19:03, Atila Neves wrote: I wanted to work on this a little more before announcing it, but it seems I'm going to be busy working on trying to get unit-threaded into std.experimental, so here it is: http://code.dlang.org/packages/reggae One thing I noticed immediately (unless I'm mistaken) is that compiling a D project without dependencies is too complicated. It should just be:

$ cd my_d_project
$ reggae

-- /Jacob Carlborg
Re: Loading of widgets from DML markup and DML Editor in DlangUI
If you are interested, we are doing a GUI system inspired by QtQuick/QMLEngine : https://github.com/D-Quick/DQuick
Re: Mid-term vision review
On 4/3/15 9:41 AM, David Nadlinger wrote: On Friday, 3 April 2015 at 15:07:57 UTC, Andrei Alexandrescu wrote: On 4/3/15 3:10 AM, Andrea Fontana wrote: It would be great to have dmd on embedded platforms. I agree. We just don't have the champion for that yet. -- Andrei I might obviously be biased, but to be honest I don't see much value in starting to port a largely obsolete backend to a whole new processor architecture. Sure, it might be a fun exercise for somebody interested in learning about code generation. But in terms of pushing D forward, I think the much better option is to encourage people to contribute to GDC or LDC instead, where backends for virtually all important embedded platforms already exist. The matter of finding a champion applies all the same :o). -- Andrei
Re: Reggae v0.0.5 super alpha: A build system in D
On Friday, 3 April 2015 at 19:08:58 UTC, weaselcat wrote: I just tried compiling one of my projects. It has a makefile that does separate compilation and a shell script I use for unit testing which compiles everything in one go. The makefile takes 5.3 seconds, not including linking since it builds a library. The shell script takes 1.3 seconds, which includes compiling unit tests and linking as well. Change one file and see which one is faster with an incremental build. I don't care if an incremental build is 10x faster if the full build still stays at ~1 second. However, I do care (and consider it unacceptable) if support for incremental builds makes the full build 10 seconds long.
Re: Reggae v0.0.5 super alpha: A build system in D
On 4/3/15 12:07 PM, Jacob Carlborg wrote: On 2015-04-03 20:06, Atila Neves wrote: Interesting. It's true that it's not always faster to compile each module separately, I already knew that. It seems to me, however, that when that's actually the case, the practical difference is negligible. Even if 10x slower, the linker will take longer anyway. Because it'll all still be under a second. That's been my experience anyway. i.e. It's either faster or it doesn't make much of a difference. I just tried compiling one of my projects. It has a makefile that does separate compilation and a shell script I use for unit testing which compiles everything in one go. The makefile takes 5.3 seconds, not including linking since it builds a library. The shell script takes 1.3 seconds, which includes compiling unit tests and linking as well. Truth be told, that's 5.3 seconds for an entire build, so the comparison is only partially relevant. -- Andrei
Re: Reggae v0.0.5 super alpha: A build system in D
On Friday, 3 April 2015 at 18:06:42 UTC, Atila Neves wrote: All I know is I've seen a definite improvement in my edit-compile-unittest cycle by compiling modules separately. How would the decoupling happen? Is the user supposed to partition the binary into suitable static libraries? Or is the system supposed to be smart enough to figure that out? Ideally both. The build system should be smart enough to group into static libraries automatically if the user doesn't care (Andrei's suggestion of one package per library makes sense), but an option for explicit definition of compilation units is still necessary, of course.
[Issue 14327] Unhandled exception from writeln() in C++/D application
https://issues.dlang.org/show_bug.cgi?id=14327 --- Comment #7 from Szymon Gatner szymon.gat...@gmail.com --- Build started 2015-04-03 22:49:35.
1>Project C:\Users\bravo\documents\visual studio 2012\Projects\CppDMix\CppDMix\CppDMix.vcxproj on node 2 (Build target(s)).
1>ClCompile: C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\x86_amd64\CL.exe /c /Zi /nologo /W3 /WX- /Od /D WIN32 /D _DEBUG /D _CONSOLE /D _UNICODE /D UNICODE /Gm /EHsc /RTC1 /MDd /GS /fp:precise /Zc:wchar_t /Zc:forScope /Fox64\Debug\\ /Fdx64\Debug\vc110.pdb /Gd /TP /errorReport:prompt main.cpp
main.cpp
Link: C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\x86_amd64\link.exe /ERRORREPORT:PROMPT /OUT:C:\Users\bravo\documents\visual studio 2012\Projects\CppDMix\x64\Debug\CppDMix.exe /INCREMENTAL /NOLOGO /LIBPATH:D:\devel\D\dmd2\windows\lib64 /LIBPATH:C:\Users\bravo\Documents\visual studio 2012\Projects\CppDMix\dlib\Debug phobos64.lib dlib.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /MANIFEST /MANIFESTUAC:level='asInvoker' uiAccess='false' /manifest:embed /DEBUG /PDB:C:\Users\bravo\documents\visual studio 2012\Projects\CppDMix\x64\Debug\CppDMix.pdb /SUBSYSTEM:CONSOLE /TLBID:1 /DYNAMICBASE /NXCOMPAT /IMPLIB:C:\Users\bravo\documents\visual studio 2012\Projects\CppDMix\x64\Debug\CppDMix.lib /MACHINE:X64 x64\Debug\main.obj
CppDMix.vcxproj - C:\Users\bravo\documents\visual studio 2012\Projects\CppDMix\x64\Debug\CppDMix.exe
1>Done Building Project C:\Users\bravo\documents\visual studio 2012\Projects\CppDMix\CppDMix\CppDMix.vcxproj (Build target(s)).
Build succeeded. Time Elapsed 00:00:00.32 --
[Issue 14381] It is too difficult to contribute to the auto-tester
https://issues.dlang.org/show_bug.cgi?id=14381 Martin Nowak c...@dawg.eu changed: CC added: c...@dawg.eu --- Comment #5 from Martin Nowak c...@dawg.eu --- Let's take a concrete example. I've been trying to get dlang.org/documentation building integrated into our CI for 4 months, because it breaks too often and regularly stalls releases. https://github.com/braddr/d-tester/issues/41 I tried various ways (email, newsgroup, github) to get a bit of your attention on that topic. I also tried to implement it myself, but couldn't get the client script to run on my machine. This is asking for 10-20 minutes of your time for an important feature; it is mainly a communication issue, though. --
Re: Issue with free() for linked list implementation
On Friday, 3 April 2015 at 22:08:52 UTC, Kitt wrote: Thanks for the help =) I guess I've been in C# land at work for way too long now, my low-level C skills are evaporating! I've written a straightforward linked list implementation here: https://github.com/nomad-software/etcetera/blob/master/source/etcetera/collection/linkedlist.d Even though I'm using the GC to manage memory, maybe it will help you.
Re: Redirecting dead links on the website
On 04/03/2015 06:40 PM, w0rp wrote: I can probably help a little, if there's a bug list of things that need to be done, etc. Well, there is https://issues.dlang.org/buglist.cgi?component=websites&list_id=199823&query_format=advanced&resolution=---, but it's mostly about documentation issues. I think the best help would be to enable notifications (watch) for https://github.com/D-Programming-Language/dlang.org and review pull requests. https://github.com/D-Programming-Language/dlang.org/pull/949 https://github.com/D-Programming-Language/dlang.org/pull/938 The other help would be to improve ugly parts, e.g. the code run example.
Re: I submitted my container library to code.dlang.org
On 04/03/2015 06:11 PM, w0rp wrote: Now, I am not the most fantastic Computer Science guy in the world, and I probably got a few things wrong. If anyone would like to look at my code and point out mistakes, please do. I will add any improvements suggested. You should use triangular numbers and power-of-2 bucket sizes instead of quadratic numbers and prime-sized buckets, because that guarantees full utilisation of the buckets and a minimal period of the numbers. The necessary loop is really trivial. https://github.com/D-Programming-Language/dmd/blob/f234c39a0e633fc9a0b5474fe2def76be9a373ef/src/root/stringtable.c#L162 If you replace i = (i + j) & (tabledim - 1); with this i = (i + 1) & (tabledim - 1); you get linear probing btw.
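That loop, sketched in D (hypothetical bucket representation; the step/mask arithmetic mirrors the stringtable code linked above): with a power-of-two table size, probe offsets that grow by 1 each step - the triangular numbers 0, 1, 3, 6, 10, ... - visit every slot exactly once.

```d
// Find a free slot by triangular-number probing. With tableDim a
// power of two, the probe sequence visits every slot exactly once,
// so the search terminates as long as at least one slot is free.
size_t findSlot(const bool[] occupied, size_t hash)
{
    immutable tableDim = occupied.length;
    assert(tableDim > 0 && (tableDim & (tableDim - 1)) == 0,
           "table size must be a power of two");

    size_t i = hash & (tableDim - 1);
    for (size_t j = 1; occupied[i]; ++j)
        i = (i + j) & (tableDim - 1);  // step grows by 1 each probe
    return i;
}

void main()
{
    auto occupied = new bool[8];
    occupied[0] = occupied[1] = true;

    // hash 0 probes slots 0 and 1 (both taken), then lands on 3.
    assert(findSlot(occupied, 0) == 3);
    assert(findSlot(occupied, 5) == 5);
}
```

Replacing `i = (i + j) & ...` with `i = (i + 1) & ...` in the loop gives linear probing, as the post notes.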
Re: unittests are really part of the build, not a special run
On 2015-04-02 23:46, Wyatt wrote: Dealing with it at work, I find it puts us scarily at the mercy of regexen in Ruby, which is unsettling to say the least. More pressingly, the plain-English method of writing tests hinders my ability to figure out what the test is actually trying to do. There's not enough structure to give you good visual anchors that are easy to follow, so I end up having to build a mental model of an entire feature file every time I look at it. It's hugely inconvenient. And if I can't remember what a phrase corresponds to, I have to hunt down the implementation and read that anyway, so it's not saving any time or making life any easier. At work we're using Turnip [1], which basically is Gherkin (Cucumber) files running on top of RSpec, best of both worlds Dicebot ;). It has two big advantages compared to regular Cucumber:
* It doesn't use regular expressions for the steps, just plain strings
* The steps are implemented in modules which are later included where needed. They're not floating around in global space like in Cucumber
We also made some modifications so we have one file with one module matching one scenario, which is automatically included based on the scenario name. This made it possible to have steps that don't interfere with each other. We can have two steps which are identical in two different scenarios with two different implementations that don't conflict. This also made it possible to take full advantage of RSpec, by creating instance variables that keep the data across steps. We're also currently experimenting with a gem (I can't recall its name right now) which allows writing the Cucumber steps inline in the RSpec tests, looking like this:

describe "foobar" do
  Steps "this is a scenario" do
    Given "some kind of setup" do
    end
    When "something cool happens" do
    end
    Then "something even cooler will happen" do
    end
  end
end

[1] https://github.com/jnicklas/turnip -- /Jacob Carlborg