Re: Quality of errors in DMD
On Sunday, 4 September 2016 at 10:33:44 UTC, Walter Bright wrote: As I mentioned before, assert failures are usually the result of the last edit one did. The problem is already narrowed down. That's just not true in my case. Most of the asserts I triggered appeared after updating to a newer compiler version. I don't find -v helpful, as it only points to a single haystack out of many; I still have to find the needle. The backend source is hard to get into, so it would really help to know a little more about the cause of the error. One assert made me delay updating to a newer DMD version for about a year. I was reluctant to make the bug report because of the size of my codebase and my inability to make a simple test case; I didn't even know for certain which function was the cause. Surely just stating that an assert was triggered on such-and-such a line of the backend is not enough? Having more information would help create those bug reports.
Re: Dub needs some additions
On Saturday, 23 January 2016 at 20:24:05 UTC, Igor wrote: Some simple extensions to dub are required for proper Windows support: 1. The ability to generate full build configurations for Visual D. I only get Win32 when using `dub generate VisualD`. Win64 support should be added, along with alternate compiler support (GDC and LDC). dub generate visuald --arch=x86_64 etc. You have to use flags to configure the project. 2. The ability to refresh a project by adding new dependencies and files to the previously generated Visual D project. It should retain the modified settings of the project file. I'm thinking something like `dub refresh VisualD` would work. Yeah, that would be nice. The current generate settings are just child's toys without the ability to retain and update project info. Thank you.
Re: Dub needs some additions
On Saturday, 23 January 2016 at 21:24:27 UTC, Igor wrote: No, I am talking about adding the different build options inside the Visual Studio project. When one generates the project, it should add the 64-bit build options regardless. It is a standard with all Visual Studio projects in C++, C#, and VB. It should not require a flag to set. It doesn't change the architecture but only provides a simple build option. When I use the flag, I only get 64... I want both!!! I should be able to switch seamlessly between 64-bit and 32-bit by using the configuration manager/combo box, like every other normal VS project has by default. Ah, yeah, that makes sense. To me personally, setting the project working directory to $(OUTDIR) when generating would be nice. Doesn't sound like a hard thing to do; I'll try making a pull request for that.
Re: DirectX 12 bindings.
On Wednesday, 2 December 2015 at 04:26:39 UTC, Rikki Cattermole wrote: On 02/12/15 5:17 PM, Denis Gladkiy wrote: On Wednesday, 2 December 2015 at 02:05:22 UTC, Rikki Cattermole wrote: On 02/12/15 3:45 AM, Denis Gladkiy wrote: Where can I find DirectX 12 bindings for D? There is https://github.com/evilrat666/directx-d which, by the looks of things, plans to support it eventually. Right now it sits at DX11. The last update was in 2014. What are the signs of upcoming DirectX 12 support? Okay, I may have stretched it a little on that one. But it shouldn't be too hard to do that last bit of work. Why? The two APIs are so different that it's no use basing the work on the current DirectX 11 bindings. That said, though, the DirectX 12 API is much smaller, so it's less work anyway.
Re: Release D 2.069.1
On Wednesday, 11 November 2015 at 12:05:16 UTC, Martin Nowak wrote: This is an unplanned point release whose sole purpose is to fix a severe Windows installer bug. http://dlang.org/download.html http://downloads.dlang.org/releases/2.x/2.069.1/ http://dlang.org/changelog/2.069.1.html -Martin People not being able to install DMD on a fresh PC nearly caused a disaster for me yesterday. Thank you a lot for releasing this so soon!
Re: D for Game Development
On Thursday, 30 July 2015 at 13:43:35 UTC, karabuta wrote: D is really cool and makes a good candidate for developing a game. Are there any guys out there using D for indie games? Not an indie game, but Remedy is making Quantum Break using D.
Re: Final by default?
On Saturday, 15 March 2014 at 08:50:00 UTC, Daniel Murphy wrote: This is nonsense. I tried out the warning on some of my projects, and they required ZERO changes - because it's a warning! Phobos requires 37 virtual:s to be added - or just change the makefile to use '-wi' instead of '-w'. Druntime needed 25. We don't even need to follow the usual six-months-per-stage deprecation - we could leave it as a warning for 2 years if we wanted! Grepping for class declarations and sticking in virtual: is as trivial as a fix can possibly be. When the virtual keyword was introduced in GitHub master I immediately went to add a bunch of virtual in my projects... only to find myself done after a few minutes. I see some irony in the fact that if classes are made final-by-default, removing all the unnecessary final attributes would be an order of magnitude longer task.
Re: D-Scanner 0.1.0-beta3 and DCD 0.3.0-beta4
On Thursday, 6 March 2014 at 13:36:46 UTC, Jussi Jumppanen wrote: modulecache.d(138): Error: cannot implicitly convert expression (f.size()) of type ulong to uint I fixed that by adding cast(size_t) at that location. But after fixing that problem the build still fails; I had to remove the -g flag in the build.bat file. I assume it was added by mistake? The problem appears on Windows 7 32-bit.
Re: Nobody understands templates?
On Wednesday, 5 March 2014 at 22:46:40 UTC, sclytrack wrote: Are there any disadvantages of using a fixed size array for fixed size coordinates and vectors, over creating an actual typedef or struct Vec3? I don't know what the current situation in druntime is, but when I tried static arrays a while ago in my engine, megabytes of garbage were generated every second. I ended up using structs with fields.
Re: Nobody understands templates?
On Wednesday, 5 March 2014 at 23:47:33 UTC, H. S. Teoh wrote: Whoa. What did you do with those arrays?? Either you did something wrong, or there's a nasty bug somewhere in the compiler/language; AFAIK static arrays are supposed to be value types, so they shouldn't generate any garbage at all. I think it was a case of using array literals, like this (I didn't know much about D back then): this(float x, float y, float z) { this.vector = [x, y, z]; } The megabytes accumulated because there were hundreds of objects all doing complicated stuff every frame, passing and constructing vectors and matrices. The garbage could have been avoided, but still, one should be careful when using array literals.
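For reference, a minimal sketch of the allocation-free alternative (the Vec3 struct here is illustrative, not the poster's actual code): assign the static array's elements individually instead of going through an array literal, which at the time could allocate on the GC heap before copying.

```d
// Hypothetical Vec3; the point is the constructor body.
struct Vec3
{
    float[3] vector;

    this(float x, float y, float z)
    {
        // this.vector = [x, y, z]; // array literal: may GC-allocate first
        vector[0] = x;              // element-wise stores: no allocation
        vector[1] = y;
        vector[2] = z;
    }
}
```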
Re: Smart pointers instead of GC?
On Saturday, 1 February 2014 at 12:04:56 UTC, JR wrote: In your opinion, of how much value would deadlining be? As in, okay handyman, you may sweep the floor now BUT ONLY FOR 6 MILLISECONDS; Unrelated to the whole post, a note: 6 ms is too high; I would allow at most 1 ms a frame for the game I'm developing, and even that is somewhat painful. If I turn on the GC in the game I'm making, it takes 80 ms every frame when memory usage is 800 MB (shown in Task Manager). This was actually surprising; previously it was 6 ms. Good thing I made my engine GC-independent.
Re: Smart pointers instead of GC?
On Saturday, 1 February 2014 at 12:29:10 UTC, Nick Sabalausky wrote: Come to think of it, I wonder what language Frostbite 3 uses for game-level scripting code. From what I've heard about them, Frostbite 3 and Unreal Engine 4 both sound like they have some notable similarities with Unity3D (from the standpoint of the user experience for game developers), although AIUI Unreal Engine 4 still uses C++ for game code (unless the engine user wants to do Lua or something on their own). I imagine Frostbite's probably the same, C++, but I haven't actually heard anything. Frostbite has an option for both Lua and C++; C++ is the preferred one. Unreal Engine went from UnrealScript to C++, and everyone applauded that.
Re: Smart pointers instead of GC?
On Saturday, 1 February 2014 at 16:37:23 UTC, Paulo Pinto wrote: Do you have any experience with Project Anarchy? -- Paulo No I have not. I haven't got a smartphone yet, so I guess that explains it.
Re: Smart pointers instead of GC?
On Saturday, 1 February 2014 at 14:24:17 UTC, Francesco Cattoglio wrote: On Saturday, 1 February 2014 at 12:20:33 UTC, develop32 wrote: On Saturday, 1 February 2014 at 12:04:56 UTC, JR wrote: In your opinion, of how much value would deadlining be? As in, okay handyman, you may sweep the floor now BUT ONLY FOR 6 MILLISECONDS; Unrelated to the whole post, a note: 6 ms is too high; I would allow at most 1 ms a frame for the game I'm developing, and even that is somewhat painful. If I turn on the GC in the game I'm making, it takes 80 ms every frame when memory usage is 800 MB (shown in Task Manager). This was actually surprising; previously it was 6 ms. Good thing I made my engine GC-independent. Wow! That's interesting! What kind of game are you working on? I don't want to reveal too much now; it's early in development, and I plan to have a proper trailer a few months from now. It's an economic space sim, set in an asteroid field of 3000 km radius, filled with 100k asteroids, a hundred stations, and a few hundred AI ships. All of the characters/stations are simulated simultaneously. http://imgur.com/QAKzJWb
Re: Smart pointers instead of GC?
On Saturday, 1 February 2014 at 13:16:02 UTC, JR wrote: On Saturday, 1 February 2014 at 12:20:33 UTC, develop32 wrote: If I turn on the GC in the game I'm making, it takes 80 ms every frame when memory usage is 800 MB (shown in Task Manager). This was actually surprising; previously it was 6 ms. Good thing I made my engine GC-independent. You wouldn't happen to have a blog post someplace with reflections on the steps you took to avoid the GC, by any chance? Hint hint! I'm genuinely curious. Or is it merely a matter of manual allocation? I don't have a blog... It is mostly a matter of memory reuse (LOTS of it) and manual freeing where needed.
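The reuse pattern described can be sketched roughly like this (illustrative only; the poster's actual code is not shown): preallocate a fixed pool once, then recycle slots instead of allocating per frame.

```d
// A free-list pool: one up-front allocation, then no GC activity at runtime.
struct Pool(T)
{
    T[] items;          // preallocated storage
    size_t[] freeList;  // stack of reusable indices
    size_t freeCount;

    this(size_t capacity)
    {
        items = new T[capacity];
        freeList = new size_t[capacity];
        foreach (i; 0 .. capacity)
            freeList[i] = i;
        freeCount = capacity;
    }

    size_t acquire()
    {
        assert(freeCount > 0, "pool exhausted");
        return freeList[--freeCount];   // reuse a slot, don't allocate
    }

    void release(size_t index)
    {
        freeList[freeCount++] = index;  // hand the slot back for reuse
    }
}
```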
Re: Aurora Graphics Library Initial Design Discussion
On Sunday, 19 January 2014 at 17:44:57 UTC, ponce wrote: I think you should target D3D9 instead of D3D11 on Windows. I have to disagree. I doubt that Aurora will use D3D11-only features; I imagine any GPU that supports Feature Level 9 will be fine. The only requirement for the user is to have an OS newer than Windows XP.
Re: Microsoft working on new systems language
On Monday, 30 December 2013 at 11:23:22 UTC, JN wrote: I'm kind of an outsider to this discussion, but take a look at how many games are written using GC languages: Minecraft is written in Java, Terraria in C#, and all Unity3D games use Mono underneath (usually C#). And these languages don't allow you to use malloc even if you wanted to (you can do some of that stuff with NIO buffers in Java, but it's a PITA). The best you can do in those languages is usually just not to allocate anything during gameplay. So arguing that GC is useless for games is an overstatement. Sure, a game engine of the magnitude of Unreal Engine 3 might have problems with the use of GC, but for most other projects it will be OK. As far as I know, Unreal Engine 3 has its own GC implementation for its scripting system.
Re: Microsoft working on new systems language
On Sunday, 29 December 2013 at 23:14:59 UTC, Ola Fosheim Grøstad wrote: On Sunday, 29 December 2013 at 22:27:43 UTC, Walter Bright wrote: Your reply doesn't take into account that you can control if and when the GC runs fairly simply. So you can run it at a time when it won't matter to the cache. In a computer game the GC should run frequently. You don't want to waste memory that could be used to hold textures on GC headroom. Realtime audio applications should run for 1 hour+ with absolutely no hiccups and very low latency. Working around the limitations of a naive GC implementation is probably more work than it is worth. I work on a somewhat large game using D; there is no GC running because there are no allocations. As far as I know, people tend not to use system primitives like malloc/free in C++ games either, as even those are too slow. And why would you store textures in RAM?
Re: Microsoft working on new systems language
Because you want to stream them to the GPU when you walk across the land in a seamless engine? Indeed, but the arrays used for holding the data are allocated (not in the GC heap) before the main loop and always reused.
Re: Microsoft working on new systems language
I assumed GC would be useful for AI/game world. It will benefit from GC because you have complex interdependencies, many different classes and want to be able to experiment. Experimentation and explicit memory deallocation is a likely source for memory leaks… However this will only work well with a basic GC if the AI/game world representation consists of relatively few objects. So it limits your design space. I can imagine that, and I have been there in my C# days. I agree that it's not a GC-friendly pattern. But then why use it in a language with a GC? In my engine, a world object (an entity) is basically just a number. All of the entity data is stored in multiple components: functionless structs held in a few global arrays. All of the game logic is done by separate managers. In the end, AI/game logic uses the same mechanism as texture streaming - reuse of previously allocated memory.
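The layout described above might look roughly like this (all names are illustrative, not the poster's engine):

```d
// An entity is only an index; component data lives in flat arrays.
alias Entity = size_t;

struct Position { float x, y, z; }   // functionless structs, plain data
struct Velocity { float x, y, z; }

Position[] positions;   // global component arrays, indexed by entity id
Velocity[] velocities;

// A separate "manager" owns the logic and walks the arrays.
void integrate(float dt)
{
    foreach (e; 0 .. positions.length)
    {
        positions[e].x += velocities[e].x * dt;
        positions[e].y += velocities[e].y * dt;
        positions[e].z += velocities[e].z * dt;
    }
}
```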
Re: Microsoft working on new systems language
That is a possibility of course, but for a heterogenous environment you risk running out of slots. E.g. online games, virtual worlds, sandbox games where users build etc. No, not really; just allocate more. The memory is managed by a single closed class, so I can do whatever I want with it. "online games" - MMO games are the source of this idea; components are easy to store in DB tables. "virtual worlds" - Remove unneeded render/physics components when an entity is out of range, etc. "sandbox games where users build" - No idea how to fix the problem of not enough RAM. The thing is, all of that game logic data takes a surprisingly small amount of memory.
Re: Microsoft working on new systems language
On Monday, 30 December 2013 at 02:48:30 UTC, Ola Fosheim Grøstad wrote: On Monday, 30 December 2013 at 02:44:27 UTC, develop32 wrote: The thing is, all of that game logic data takes a surprisingly small amount of memory. In that case you probably could use GC, so why don't you? Because there is nothing for the GC to free in my engine. In other engine architectures it surely could be a possibility. In my experiments, running the GC at the end of each game loop took 3 ms.
Re: Autobounty?
On Wednesday, 4 December 2013 at 12:24:38 UTC, Shammah Chancellor wrote: I don't know if this was discussed in the recent bounty thread. However, I thought it might be neat if we were to set up our own bounty website that parses the D bugzilla better and does something that I've seen in many board games -- autobounty. My idea is this: allow people to donate to D bounties in general, and each day add a small amount to the bounty of each bug. The longer bugs are open, the larger the bounty becomes. Eventually it will become large enough to make it worth someone's time to fix. Right now, with around 4000 open bugs in the bugzilla, this would be a lot of money each day, but maybe D will eventually become popular enough to make the bounties more than very small amounts. Also, the benefit to this is we can avoid the 10% fee that bountysource has. Thoughts? I would be willing to write the code to do this in vibe-d if there is interest. Personally, I'm willing to donate a good amount of money to bounties, but I'd rather do it in the above format. -Shammah Wouldn't that make people ignore fixing fresh bugs with no accumulated bounties?
Re: D and C++
On Monday, 21 October 2013 at 11:08:15 UTC, qznc wrote: On Saturday, 19 October 2013 at 13:20:28 UTC, develop32 wrote: Hi, Are there any recent improvements in how D interfaces with C++? I got the impression that some work has been done on that, in order to make DMD a self-hosting compiler. I do not know of any recent improvements. The current plan to make DMD self-hosting seems to be a conversion tool. The transition would be one step then. The challenge is mostly on LDC then to use the C++ LLVM bindings. I'm aware of the conversion tool, but it seems it will only apply to the frontend? The DMD backend will be left as-is.
D and C++
Hi, Are there any recent improvements in how D interfaces with C++? I got the impression that some work has been done on that, in order to make DMD a self-hosting compiler.
Memory leaks
Windows Task Manager shows constantly increasing memory usage in my D application, yet there is ZERO allocation done by it at the time. I have made a hook inside rt/lifetime.d for _d_allocmemory, and I do get notified when an array or a class object is constructed. What could be the source of the rapidly climbing (hundreds of kilobytes per second) memory usage?
Re: Memory leaks
On Sunday, 13 October 2013 at 13:18:47 UTC, Benjamin Thaut wrote: Am 13.10.2013 14:08, schrieb develop32: Windows Task Manager shows constantly increasing memory usage in my D application, yet there is ZERO allocation done by it at the time. I have made a hook inside rt/lifetime.d for _d_allocmemory, and I do get notified when an array or a class object is constructed. What could be the source of the rapidly climbing (hundreds of kilobytes per second) memory usage? If your program is a 32-bit application, the GC most likely leaks memory because it thinks it's still referenced. Try compiling your application for 64-bit. If the problem goes away, it's not your fault. A possible workaround for this situation is to manually allocate large blocks of memory which you know will not contain any pointers with GC.malloc, specifying the don't-scan flag. If the problem still exists in a 64-bit application, you most likely continue to allocate memory that remains referenced. Make sure you don't have any infinitely growing arrays or other containers that still reference the memory you allocated. Thanks, I will try 64-bit when possible. No, there are no allocations in my code; I have set breakpoints wherever I can throughout it, and no allocations are being made every frame (it's a video game). I don't use the built-in dynamic array directly; I have my own wrapper for it. My question is: are there any more places in druntime that I can change so that I get a notification when an allocation occurs, besides _d_allocmemory?
Re: Memory leaks
On Sunday, 13 October 2013 at 13:18:47 UTC, Benjamin Thaut wrote: Am 13.10.2013 14:08, schrieb develop32: Windows Task Manager shows constantly increasing memory usage in my D application, yet there is ZERO allocation done by it at the time. I have made a hook inside rt/lifetime.d for _d_allocmemory, and I do get notified when an array or a class object is constructed. What could be the source of the rapidly climbing (hundreds of kilobytes per second) memory usage? If your program is a 32-bit application, the GC most likely leaks memory because it thinks it's still referenced. Try compiling your application for 64-bit. If the problem goes away, it's not your fault. A possible workaround for this situation is to manually allocate large blocks of memory which you know will not contain any pointers with GC.malloc, specifying the don't-scan flag. If the problem still exists in a 64-bit application, you most likely continue to allocate memory that remains referenced. Make sure you don't have any infinitely growing arrays or other containers that still reference the memory you allocated. I've added GC.collect() and GC.minimize() at the end of each frame, and it looks like the growth has stopped, so it's not a 32-bit problem yet.
Re: Memory leaks
On Sunday, 13 October 2013 at 14:01:09 UTC, Adam D. Ruppe wrote: On Sunday, 13 October 2013 at 13:46:13 UTC, develop32 wrote: My question is: are there any more places in druntime that I can change so that I get a notification when an allocation occurs, besides _d_allocmemory? I set breakpoints on gc_malloc and gc_qalloc too. I'm pretty sure they catch more than _d_allocmemory. qalloc in particular catches accidental usage of array literals. Yes, they always allocate at runtime unless specifically marked static... I'd love for that to change! Thank you, that helped me find it. It's a shame that usage of static arrays doesn't prevent heap allocation.
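A sketch of the pitfall being discussed (behavior as of the compilers of that era; later releases optimize some of these cases): initializing even a static array from a literal could route through a runtime allocation, while static immutable data is evaluated at compile time.

```d
void f()
{
    int[3] a = [1, 2, 3];                   // literal may allocate at runtime
    static immutable int[3] b = [1, 2, 3];  // compile-time data, no allocation

    int[3] c = void;                        // manual, allocation-free fill
    c[0] = 1; c[1] = 2; c[2] = 3;
}
```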
Re: Memory leaks
On Sunday, 13 October 2013 at 14:31:13 UTC, Benjamin Thaut wrote: Am 13.10.2013 16:23, schrieb develop32: On Sunday, 13 October 2013 at 14:13:39 UTC, Benjamin Thaut wrote: Am 13.10.2013 15:52, schrieb develop32: On Sunday, 13 October 2013 at 13:18:47 UTC, Benjamin Thaut wrote: Am 13.10.2013 14:08, schrieb develop32: Windows Task Manager shows constantly increasing memory usage in my D application, yet there is ZERO allocation done by it at the time. I have made a hook inside rt/lifetime.d for _d_allocmemory, and I do get notified when an array or a class object is constructed. What could be the source of the rapidly climbing (hundreds of kilobytes per second) memory usage? If your program is a 32-bit application, the GC most likely leaks memory because it thinks it's still referenced. Try compiling your application for 64-bit. If the problem goes away, it's not your fault. A possible workaround for this situation is to manually allocate large blocks of memory which you know will not contain any pointers with GC.malloc, specifying the don't-scan flag. If the problem still exists in a 64-bit application, you most likely continue to allocate memory that remains referenced. Make sure you don't have any infinitely growing arrays or other containers that still reference the memory you allocated. I've added GC.collect() and GC.minimize() at the end of each frame, and it looks like the growth has stopped, so it's not a 32-bit problem yet. Well, the GC doesn't always kick in. It only collects memory when necessary (when a certain limit is reached). For a game it is usually a good idea to run the GC every frame to avoid the occasional long pause times that otherwise occur. Or you could work around the GC entirely. Kind Regards Benjamin Thaut Indeed, I tried to work around it and use manual memory management a year ago, but everything felt so ugly that I decided to throw that away and go the idiomatic way.
I'm a bit surprised that even with the GC running every frame I haven't noticed a drop in performance, even when using more than 1 GB of RAM. Did you profile how much time the GC takes every frame? For me it was 8-9 ms every frame. That was not acceptable for my case. Currently it is 3 ms; I would love to drop that to one. Though I'm using a very slow laptop; I can't play any AAA game released in the last 5 years.
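The per-frame collection pattern from this thread, with a timing probe, might look like this (a sketch; StopWatch lived in std.datetime in releases of that period, and moved to std.datetime.stopwatch later):

```d
import core.memory : GC;
import std.datetime : StopWatch;

void endOfFrame()
{
    StopWatch sw;
    sw.start();
    GC.collect();    // collect every frame to avoid occasional long pauses
    GC.minimize();   // return unused pages to the OS
    sw.stop();
    // watch sw.peek().msecs to catch multi-millisecond pauses
}
```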
Re: ArtemisD: A D port of Artemis Entity System Framework for games.
On Tuesday, 8 October 2013 at 10:35:53 UTC, Kiith-Sa wrote: In my implementation I don't even use a getComponent equivalent; a process() (or opApply() in the older version I linked) function directly takes component references, and all components are in plain arrays. Processing of a System is done (in generated code) by iterating over the components the System specifies in its signature, which is cache-friendly, and means the components needed are always available directly to the process() function without any lookup. However, this is a less flexible approach than what you're doing (although intentional, again, to eventually enable very easy threading). Looks similar to what I do in my engine, although entities and components are not classes. Basic example: struct State2D { Vector2 position; float rotation; } struct BoundKill {} // Acts as a tag class BoundManager : Manager!(State2D, BoundKill) { override void process(Entity entity, State2D state, BoundKill tag) { if (state.position.y < 0) world.remove(entity); } } And an entity is just a struct containing a size_t id.
Re: dub: should we make it the de jure package manager for D?
On Thursday, 26 September 2013 at 07:44:25 UTC, Manu wrote: I've used dub once; I was very satisfied with the experience. I think a front-end where you can browse the repository and select/deselect packages conveniently would be a great addition to the experience, if it's not already available. Do you mean a tool that edits the package.json file? I see little gain in having a GUI for that.
Re: VisualD now on github.com/d-programming-language
On Tuesday, 10 September 2013 at 06:42:06 UTC, Walter Bright wrote: https://github.com/D-Programming-Language/visuald Congratulations to Rainer Schuetze and collaborators for this great work! Do you have plans to change the homepage for VisualD? The one on dsource relates more to the old repository.
Re: Move VisualD to github/d-programming-language ?
On Monday, 9 September 2013 at 09:29:11 UTC, Ramon wrote: Well, to me someone who talks in a negative way about another user (or smartly works out subtle differences in such remarks) rather than about D related issues looks like an idiot. You do understand that by saying Windoze you *are* insulting users of that platform? On a related note, I have been using VisualD for more than a year, and I'm happy that it will get bigger support.
Re: std.logger
On Thursday, 22 August 2013 at 14:13:29 UTC, Robert Schadek wrote: I'm still missing a logging facility in D, and as the last attempt seems to have stopped, I want to throw in my version. After reading through the std.log thread I made my conclusions and created my own logger. People seemed to be unhappy with the naming and the way of configuration. Additionally, when to throw or not to throw seemed to be an argument. My attempt is to provide a very small functional interface to logging. IMO it is impossible to fulfill all requirements a D developer can have through configuration classes and such, so I designed an abstract Logger class that can easily be adapted to one's own needs. As a quick-start feature I created a Stdio- and a File-Logger. If no Logger is provided to the log function, a defaultLogger will be used. Docu: http://burner.github.io/phobos/phobos-prerelease/std_logger.html Pull Request: https://github.com/D-Programming-Language/phobos/pull/1500 I hope this will lead to some progress in Phobos when it comes to message logging. Why do the logging functions accept only a string? I would expect them to behave like std.stdio with its variadic parameters. It would be more straightforward to write logging code: log("Moving ", data, " to ", destination); Where 'data' and 'destination' are any variables. I use such a setup in my projects, and it helps greatly to identify what went wrong when not using a debugger.
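The suggested variadic interface is easy to sketch on top of std.conv.text (illustrative only; this is not std.logger's actual API):

```d
import std.conv : text;
import std.stdio : writeln;

void log(Args...)(Args args)
{
    // text() converts and concatenates any mix of argument types,
    // the same way writeln formats its arguments.
    writeln(text(args));
}

// usage: log("Moving ", data, " to ", destination);
```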
Re: Are properties actually all that good?
On Monday, 22 July 2013 at 16:42:57 UTC, Land wrote: Thank you for the replies. So, what's the bottom line? Should I use accessor methods or should I use properties? (I've read quite a bit of the mentioned 500-post topic, by the way, but I'm still not clear on what's the most logical step here) In my case I use both. If a function is bigger than a certain threshold (say, 5 lines) or allocates/returns a new object each time, I use getSomething(); otherwise it's a property. It can also depend on the logic. For example, in graphics programming there are render targets, and if I want to get a texture representing one I may do it in two ways: .getTexture() and .texture. At least to me, .texture means it would be the same object each time, possibly a reference to a private variable, whereas getTexture() implies that the function creates a new texture, copies the pixel data, and returns it.
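A sketch of that convention (the Texture class and its members here are hypothetical, just to make the contrast concrete):

```d
class Texture
{
    int width, height;
    this(int w, int h) { width = w; height = h; }
    void copyPixelsFrom(Texture src) { /* copy pixel data */ }
}

class RenderTarget
{
    private Texture _texture;

    // Property: cheap, returns the same underlying object every call.
    @property Texture texture() { return _texture; }

    // Method: does real work, returns a fresh copy every call.
    Texture getTexture()
    {
        auto copy = new Texture(_texture.width, _texture.height);
        copy.copyPixelsFrom(_texture);
        return copy;
    }
}
```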
Re: Problem with Variant
On Saturday, 29 June 2013 at 00:14:19 UTC, Andrei Alexandrescu wrote: Could you please paste your note into a bug report? In a perfect world you may want to also submit a pull request! Filed a bug: http://d.puremagic.com/issues/show_bug.cgi?id=10500 And I'll try to make the world a better place too...
Problem with Variant
void main() { struct Vector { float length(); } import std.variant; Variant v = Vector(); } Currently this does not work, as it seems Variant thinks length() is a property restricted to arrays. cannot implicitly convert expression ((*zis).length()) of type float to int C:\D\dmd2\src\phobos\std\variant.d 488 I quickly changed that line in variant.d to static if (is(typeof(zis.length)) && is(ReturnType!(zis.length) == size_t)) and my code compiles. Currently it's a hack, but could it be developed further, and would it not interfere with something else?
Emplace using private constructor
In a project I'm working on there are classes that are available publicly, but I want to disable their construction outside of their modules. class Display { const string name; private this(string name) { this.name = name; } static Display[] all() { // Returns all available displays, the only place where Display objects // are to be constructed. } } This is easy when using new. But I'm using std.conv.emplace, which does not work with private constructors: Don't know how to initialize an object of type Display with arguments (string) Is there any solution to this problem? I know I can just make the constructor public, but it feels wrong; I want to keep the public API as small as possible, and it bugs me that the language/library limits me here.
Re: Emplace using private constructor
Never mind, the problem was not worth the question. I just copied the code from Phobos' std.conv.emplace and placed it directly in my own code; it works since it is in the same module as the private constructor.
Re: Emplace using private constructor
On Friday, 7 June 2013 at 16:14:25 UTC, Ali Çehreli wrote: But it is too heavy-handed. Your problem exposes a real weakness. Other... Well, it's not *that* heavy. // Inside Display class. auto display = alloc!Display; display.__ctor(name); Where alloc!(T) is the bigger part of the original emplace code and is widely used. But generally, yeah, it would be nice to fix mixins.
Re: Strange output
On Friday, 7 June 2013 at 19:10:54 UTC, Daemon wrote: The following program is supposed to print out only numbers that are less than 5, yet the number 63 gets printed. module main; import std.stdio; import std.conv; int main(string[] argv) { auto de = find!(delegate(a) { return a < 5; })([10, 11, 15, 16, 27, 20, 2, -4, -17, 8, 64, 6]); foreach (int b; de[0 .. $]) { writeln(b); } readln(); return 0; } T[] find(alias pred, T)(T[] input) if (is(typeof(pred(input[0])) == bool)) { for (; input.length > 0; input = input[1 .. $]) { if (pred(input[0])) { break; } } return input; } I realize it's extremely improbable that I've discovered a bug, so what exactly could I be doing wrong? Why is the output: 2 -4 -17 8 63 6 I'm probably missing something obvious. If the input is just [63], then nothing gets printed out. It looks like your find function just stops at the first item that is less than five (which is 2) and returns a range consisting of all the following items. If you look closer, your output is just all the items after and including 2.
Re: Boost.units ported to D
How about something like meters!15 or newtons!30? For me it is more pleasing and shorter than multiplication.
Re: Boost.units ported to D
On Monday, 11 February 2013 at 22:33:02 UTC, Arlen wrote: 'meters!15' is supposed to be '15 * meters', right? Then how would you express '15 / meters'? You cannot; therefore, 'meters!15' doesn't make sense. 15 * meters has an output of '15 m', which is short for '15 m^1'. The exponent is not shown when it's 1. 15 / meters has an output of '15 m^-1'. Oh, somehow that slipped my mind. But 15 / meters could be meters!(15, -1). It suits me more, as it stands out from the whole expression and I would be less likely to make a mistake while writing it.
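The proposed spelling could be sketched like this (entirely hypothetical; not the ported library's API, and Quantity/meters are made-up names):

```d
import std.format : format;

// A value tagged with a unit name and an exponent.
struct Quantity(string unit, int exp)
{
    double value;
    string toString() const
    {
        return exp == 1 ? format("%s %s", value, unit)
                        : format("%s %s^%s", value, unit, exp);
    }
}

// meters!15 and meters!(15, -1), as suggested in the post.
auto meters(double v, int exp = 1)()
{
    return Quantity!("m", exp)(v);
}
```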