Re: Tkd - Cross platform GUI toolkit based on Tcl/Tk
On 08/05/14 22:43, Nick Sabalausky wrote: I wasn't trying to blame Qt/Gtk (actually, I kinda like Qt stuff - I've heard it's not technically native UI, but hell if I can actually tell the difference. They've done a damn fine job.) I think it's quite easy to tell the difference on OS X. But that might be a problem with the actual application, which doesn't follow OS X conventions, and not Qt itself. -- /Jacob Carlborg
Re: Tkd – Cross platform GUI toolkit based on Tcl/Tk
On 05/08/2014 06:05 PM, Andrei Alexandrescu wrote: https://hn.algolia.com/#!/story/forever/0/Tkd I'm unable to find the HN link. This search shows the reddit link and a link straight to the forum. I even tried to go through several pages of newest on HN doing a search for tkd without any luck. Did it get deleted..?
Re: Tkd – Cross platform GUI toolkit based on Tcl/Tk
On 5/9/14, simendsjo via Digitalmars-d-announce digitalmars-d-announce@puremagic.com wrote: I'm unable to find the HN link. This search shows the reddit link and a link straight to the forum. I even tried to go through several pages of newest on HN doing a search for tkd without any luck. Did it get deleted..? We should just link to the post with some remove me characters. E.g.: https://remove_menews.ycombinator.com/item?id=7716010 You copy-paste the URL, and that should avoid any issues with hotlinking, wouldn't it?
Re: Tkd – Cross platform GUI toolkit based on Tcl/Tk
On 05/09/2014 11:32 AM, Andrej Mitrovic via Digitalmars-d-announce wrote: On 5/9/14, simendsjo via Digitalmars-d-announce digitalmars-d-announce@puremagic.com wrote: I'm unable to find the HN link. This search shows the reddit link and a link straight to the forum. I even tried to go through several pages of newest on HN doing a search for tkd without any luck. Did it get deleted..? We should just link to the post with some remove me characters. E.g.: https://remove_menews.ycombinator.com/item?id=7716010 You copy-paste the URL, and that should avoid any issues with hotlinking, wouldn't it? Thanks. There's no tracking info in the url, so it's probably the best way as long as the search site doesn't work as intended.
Re: Tkd - Cross platform GUI toolkit based on Tcl/Tk
On Thursday, 8 May 2014 at 20:51:17 UTC, Sönke Ludwig wrote: On 08.05.2014 21:41, Nick Sabalausky wrote: (...)my trackpad's scrolling gestures don't even fucking work on it (they work fine on nearly anything else). To be fair, some time ago I had the joy of trying to properly support scrolling gestures for my UI framework, and I wound up naming the window class of my windows OperaWindowClass, because that triggers a special-case path in the touchpad driver, which actually sends useful window messages. I didn't find another way to get useful data. The whole (Synaptics) driver is obviously nothing but a crapload of special-case junk to make the most popular applications and controls work, because the people involved obviously don't manage to develop a standard API for pixel-perfect scrolling. Now that was a surprise! I just could not understand why I receive WM_MOUSEWHEEL only with 120 as the delta, while Internet Explorer receives fine-grained values (e.g. 24). Setting the window class name as you said indeed 'solves' the problem!
Re: How I Came to Write D -- by Walter Bright
On 16/04/2014 09:21, Bienlein wrote: There are a number of job ads for Go developers (see http://golangprojects.com). Go seems to be a good complement for Ruby, Python, and PHP, which are slow and have bad concurrency. Whoa, that's quite a few jobs already! (Given how relatively new Go is...) -- Bruno Medeiros https://twitter.com/brunodomedeiros
Re: How I Came to Write D -- by Walter Bright
On 08/04/2014 22:44, Andrei Alexandrescu wrote: http://www.reddit.com/r/programming/comments/22jwcu/how_i_came_to_write_d/ We were using C because it was the only high-level language we could find that actually worked on the PC. C + high-level... those were different times indeed! :) -- Bruno Medeiros https://twitter.com/brunodomedeiros
Re: GDC binaries updated
On 9 May 2014 12:20, Bruno Medeiros via Digitalmars-d-announce digitalmars-d-announce@puremagic.com wrote: On 07/05/2014 17:42, Johannes Pfau wrote: On Wed, 07 May 2014 14:38:32 +0100, Bruno Medeiros bruno.do.medeiros+...@gmail.com wrote: On 04/05/2014 10:38, Johannes Pfau wrote: We've just uploaded new binary releases to http://gdcproject.org/downloads/ ## GDC changes ## As we merged the first parts of Daniel Green's MinGW changes back into GDC, we now also provide initial (automated) MinGW builds. These builds are mostly unsupported and will likely have many more bugs than the older releases posted by Daniel, so don't expect too much. Glad to hear there is some progress here, but are there plans to make this supported in the future? Also, what is the difference between Daniel Green's build and the native Standard Builds? Daniel's builds apply some more patches; see https://github.com/venix1/MinGW-GDC for details. The builds on gdcproject.org use the standard git sources of GDC, which only include the subset of these patches that's necessary to compile and run a hello world program. I'm not familiar with the internals of the compiler and runtime architecture, but I'm curious: why is it that so many complicated patches are necessary? I understand the D runtime has to access the Windows API, correct? But that should all be available in the MinGW target as well, no? Otherwise, what is the difference here when DMD for Windows is compiled, vs when GDC is compiled? DMD x86 on Windows uses the Digital Mars toolchain for linking, etc. DMD x86_64 on Windows uses the MSVC toolchain for linking, etc. GDC on Windows uses the GNU toolchain for linking, etc. Another potentially crucial difference is that DMD compiles directly to object files. GCC requires an assembler installed. This probably does make it easier for DMD to invent custom sections for its own abuse.
Re: Tkd – Cross platform GUI toolkit based on Tcl/Tk
On 5/9/14, 2:44 AM, simendsjo wrote: On 05/09/2014 11:32 AM, Andrej Mitrovic via Digitalmars-d-announce wrote: On 5/9/14, simendsjo via Digitalmars-d-announce digitalmars-d-announce@puremagic.com wrote: I'm unable to find the HN link. This search shows the reddit link and a link straight to the forum. I even tried to go through several pages of newest on HN doing a search for tkd without any luck. Did it get deleted..? We should just link to the post with some remove me characters. E.g.: https://remove_menews.ycombinator.com/item?id=7716010 You copy-paste the URL, and that should avoid any issues with hotlinking, wouldn't it? Thanks. There's no tracking info in the url, so it's probably the best way as long as the search site doesn't work as intended. I thought they track the referrer, no? Anyhow if I go to https://hn.algolia.com/#!/story/forever/0/Tkd the story is the second hit. Andrei
Re: Tkd – Cross platform GUI toolkit based on Tcl/Tk
On 05/09/2014 06:09 PM, Andrei Alexandrescu wrote: On 5/9/14, 2:44 AM, simendsjo wrote: On 05/09/2014 11:32 AM, Andrej Mitrovic via Digitalmars-d-announce wrote: On 5/9/14, simendsjo via Digitalmars-d-announce digitalmars-d-announce@puremagic.com wrote: I'm unable to find the HN link. This search shows the reddit link and a link straight to the forum. I even tried to go through several pages of newest on HN doing a search for tkd without any luck. Did it get deleted..? We should just link to the post with some remove me characters. E.g.: https://remove_menews.ycombinator.com/item?id=7716010 You copy-paste the URL, and that should avoid any issues with hotlinking, wouldn't it? Thanks. There's no tracking info in the url, so it's probably the best way as long as the search site doesn't work as intended. I thought they track the referrer, no? But if you go directly through the browser, no referrer is added. Anyhow if I go to https://hn.algolia.com/#!/story/forever/0/Tkd the story is the second hit. Ah, I see why I was confused now. Your entry links to the reddit announcement, not the forums.
Re: Tkd – Cross platform GUI toolkit based on Tcl/Tk
On Friday, 9 May 2014 at 16:09:15 UTC, Andrei Alexandrescu wrote: On 5/9/14, 2:44 AM, simendsjo wrote: On 05/09/2014 11:32 AM, Andrej Mitrovic via Digitalmars-d-announce wrote: On 5/9/14, simendsjo via Digitalmars-d-announce digitalmars-d-announce@puremagic.com wrote: I'm unable to find the HN link. This search shows the reddit link and a link straight to the forum. I even tried to go through several pages of newest on HN doing a search for tkd without any luck. Did it get deleted..? We should just link to the post with some remove me characters. E.g.: https://remove_menews.ycombinator.com/item?id=7716010 You copy-paste the URL, and that should avoid any issues with hotlinking, wouldn't it? Thanks. There's no tracking info in the url, so it's probably the best way as long as the search site doesn't work as intended. I thought they track the referrer, no? Anyhow if I go to https://hn.algolia.com/#!/story/forever/0/Tkd the story is the second hit. Andrei If you go to an address from the url bar rather than clicking a link, most browsers won't include a referrer. If you used https to access the forums, it likely would not include a referrer either.
Submit your D presentation to Strangeloop now! (Deadline is today)
https://thestrangeloop.com/sessions-page/call-for-presentations The deadline is today. I submitted mine! Everyone who submitted a proposal to present at DConf should submit it here as well.
We're gearing up for DConf!
https://twitter.com/fbOpenSource/status/464850637402812417 Andrei
Livestreaming DConf?
Hi folks, We at Facebook are very excited about the upcoming DConf 2014. In fact, so excited we're considering livestreaming the event for the benefit of the many of us who can't make it to Menlo Park, CA. Livestreaming entails additional costs so we're trying to assess the size of the online audience. Please follow up here and on twitter: https://twitter.com/D_Programming/status/464854296001933312 Thanks, Andrei
Re: Livestreaming DConf?
On Fri, 09 May 2014 12:48:28 -0700, Andrei Alexandrescu wrote: Hi folks, We at Facebook are very excited about the upcoming DConf 2014. In fact, so excited we're considering livestreaming the event for the benefit of the many of us who can't make it to Menlo Park, CA. Livestreaming entails additional costs so we're trying to assess the size of the online audience. Please follow up here and on twitter: https://twitter.com/D_Programming/status/464854296001933312 Thanks, Andrei +1
Re: Livestreaming DConf?
On 5/9/2014 3:48 PM, Andrei Alexandrescu wrote: We at Facebook are very excited about the upcoming DConf 2014. In fact, so excited we're considering livestreaming the event for the benefit of the many of us who can't make it to Menlo Park, CA. Livestreaming entails additional costs so we're trying to assess the size of the online audience. Please follow up here and on twitter: Yes please!!! I'd like some of that!
Re: Livestreaming DConf?
I *might* watch some of it on the 'net myself since I most likely won't actually be in Menlo Park until Friday the 23rd. The whole week is gonna be hell for me tho so idk if I actually would stream or not. Watching later on youtube like we did last year is cool by me too.
Re: Livestreaming DConf?
On Friday, 9 May 2014 at 19:48:20 UTC, Andrei Alexandrescu wrote: Hi folks, We at Facebook are very excited about the upcoming DConf 2014. In fact, so excited we're considering livestreaming the event for the benefit of the many of us who can't make it to Menlo Park, CA. Livestreaming entails additional costs so we're trying to assess the size of the online audience. Please follow up here and on twitter: https://twitter.com/D_Programming/status/464854296001933312 Thanks, Andrei Very keen for this.
Re: Livestreaming DConf?
On 5/9/14, 12:48 PM, Andrei Alexandrescu wrote: Hi folks, We at Facebook are very excited about the upcoming DConf 2014. In fact, so excited we're considering livestreaming the event for the benefit of the many of us who can't make it to Menlo Park, CA. Livestreaming entails additional costs so we're trying to assess the size of the online audience. Please follow up here and on twitter: https://twitter.com/D_Programming/status/464854296001933312 Thanks, Andrei I'm interested but probably wouldn't be able to catch it all live due to work schedule. I'll definitely watch it all later at my own pace whether livestreamed or not. What I would hate is for livestreaming to be the only opportunity to watch it online.
Re: Livestreaming DConf?
On Friday, 9 May 2014 at 19:48:20 UTC, Andrei Alexandrescu wrote: Hi folks, We at Facebook are very excited about the upcoming DConf 2014. In fact, so excited we're considering livestreaming the event for the benefit of the many of us who can't make it to Menlo Park, CA. Livestreaming entails additional costs so we're trying to assess the size of the online audience. Please follow up here and on twitter: https://twitter.com/D_Programming/status/464854296001933312 Thanks, Andrei Definitely support the idea and would watch.
Re: Livestreaming DConf?
On Friday, 9 May 2014 at 19:48:20 UTC, Andrei Alexandrescu wrote: We at Facebook are very excited about the upcoming DConf 2014. In fact, so excited we're considering livestreaming the event for the benefit of the many of us who can't make it to Menlo Park, CA. Livestreaming entails additional costs so we're trying to assess the size of the online audience. That would be fantastic. I really hope you can make this happen.
Re: Livestreaming DConf?
On Friday, 9 May 2014 at 19:48:20 UTC, Andrei Alexandrescu wrote: Hi folks, We at Facebook are very excited about the upcoming DConf 2014. In fact, so excited we're considering livestreaming the event for the benefit of the many of us who can't make it to Menlo Park, CA. Livestreaming entails additional costs so we're trying to assess the size of the online audience. Please follow up here and on twitter: https://twitter.com/D_Programming/status/464854296001933312 Thanks, Andrei This would be cool, but I'd hope that it doesn't replace having videos posted to be viewable afterwards.
Re: Livestreaming DConf?
On Friday, 9 May 2014 at 19:48:20 UTC, Andrei Alexandrescu wrote: Hi folks, We at Facebook are very excited about the upcoming DConf 2014. In fact, so excited we're considering livestreaming the event for the benefit of the many of us who can't make it to Menlo Park, CA. Livestreaming entails additional costs so we're trying to assess the size of the online audience. Please follow up here and on twitter: https://twitter.com/D_Programming/status/464854296001933312 Thanks, Andrei Yes please! I've got finals that week, so I can't make it out to CA but I'd love to be able to watch as much as I can.
Re: Livestreaming DConf?
On Friday, 9 May 2014 at 19:48:20 UTC, Andrei Alexandrescu wrote: Hi folks, We at Facebook are very excited about the upcoming DConf 2014. In fact, so excited we're considering livestreaming the event for the benefit of the many of us who can't make it to Menlo Park, CA. Livestreaming entails additional costs so we're trying to assess the size of the online audience. Please follow up here and on twitter: https://twitter.com/D_Programming/status/464854296001933312 Thanks, Andrei I'd watch the presentations live. I imagine one or two will fall on a lunch time for me.
Re: Livestreaming DConf?
On Friday, 9 May 2014 at 19:48:20 UTC, Andrei Alexandrescu wrote: Hi folks, We at Facebook are very excited about the upcoming DConf 2014. In fact, so excited we're considering livestreaming the event for the benefit of the many of us who can't make it to Menlo Park, CA. Livestreaming entails additional costs so we're trying to assess the size of the online audience. Please follow up here and on twitter: https://twitter.com/D_Programming/status/464854296001933312 Thanks, Andrei This would be cool, but I'd also hope that it doesn't replace having videos, (and perhaps any presentation slides) be posted to be viewable afterwards. Nick
Re: Livestreaming DConf?
On Friday, 9 May 2014 at 19:48:20 UTC, Andrei Alexandrescu wrote: Hi folks, We at Facebook are very excited about the upcoming DConf 2014. In fact, so excited we're considering livestreaming the event for the benefit of the many of us who can't make it to Menlo Park, CA. Livestreaming entails additional costs so we're trying to assess the size of the online audience. Please follow up here and on twitter: https://twitter.com/D_Programming/status/464854296001933312 Thanks, Andrei Well, I'd certainly watch, I was actually intending to request that someone do this, even if it were as simple as someone with a webcam broadcasting to twitch.
Re: Livestreaming DConf?
On Friday, 9 May 2014 at 19:48:20 UTC, Andrei Alexandrescu wrote: Hi folks, We at Facebook are very excited about the upcoming DConf 2014. In fact, so excited we're considering livestreaming the event for the benefit of the many of us who can't make it to Menlo Park, CA. Livestreaming entails additional costs so we're trying to assess the size of the online audience. Please follow up here and on twitter: https://twitter.com/D_Programming/status/464854296001933312 Thanks, Andrei I'll definitely watch as much as I can. But my schedule might not permit watching all of it. If at least the keynotes could be live-streamed I would be sure to watch those. Posting any videos later like last year would be a huge plus. Thanks for considering. Joseph
Re: Livestreaming DConf?
On 10/05/2014 7:48 a.m., Andrei Alexandrescu wrote: Hi folks, We at Facebook are very excited about the upcoming DConf 2014. In fact, so excited we're considering livestreaming the event for the benefit of the many of us who can't make it to Menlo Park, CA. Livestreaming entails additional costs so we're trying to assess the size of the online audience. Please follow up here and on twitter: https://twitter.com/D_Programming/status/464854296001933312 Thanks, Andrei Definitely would if I'm available to watch!
Re: More on Rust language
On Friday, 9 May 2014 at 04:55:28 UTC, Caligo via Digitalmars-d wrote: On Thu, Nov 3, 2011 at 10:43 PM, Walter Bright newshou...@digitalmars.com wrote: How do you implement a moving GC in D if D has raw pointers? It can be done if the D compiler emits full runtime type info. It's a solved problem with GCs. D semantics doesn't allow the GC to automatically modify those pointers when the GC moves the data. Yes, it does. I've implemented a moving collector before designing D, and I carefully defined the semantics so that it could be done for D. Besides, having two pointer types in D would be disastrously complex. C++/CLI does, and C++/CLI is a failure in the marketplace. (I've dealt with multiple pointer types from the DOS daze, and believe me it is a BAD BAD BAD idea.) Given the recent discussion on radical changes to GC and dtors, could someone please explain why having multiple pointer types is a bad idea? It increases the complexity of reasoning about code. If the compiler does not give a helping hand, bugs are too easy to create. -- Paulo
Re: [OT] Go officially won't get generics
On Friday, 9 May 2014 at 05:22:36 UTC, Russel Winder via Digitalmars-d wrote: On Thu, 2014-05-08 at 19:37 +, Jesse Phillips via Digitalmars-d wrote: […] Ah, well context around it removes all my claims. It is clear he is saying that Go 1.x will not have generics. Given the statements about backward compatibility there is no way Go 1.x can have generics. I'm fairly sure the core Go team are convinced the interface way and manual overloading is the way to go, and that generics are an unnecessary burden. Many people argue they are wrong without actually trying the style of programming inherent in the language. You mean programming in - Turbo Pascal with object as root - Oberon with object as root - Smalltalk with object as root - Modula-3 with ROOTANY as root - C++ without templates - Java with object as root - C# with object as root Me, I know what it means. -- Paulo
Re: [OT] Go officially won't get generics
Well, he had previously stated that there would be no breaking changes, and that if there were changes it would have to be called Go version 2 or something. So when generics were brought up he stated that there were no plans for generics and I said we are going to leave the language, we are done (with version 1 semantics). Ola.. Rob Pike says in this thread (https://groups.google.com/forum/?hl=de#!topic/golang-nuts/3fOIZ1VLn1o): Go has type switches, and therefore no need for the Visitor Pattern. He has exactly the same mindset as Niklaus Wirth, and Oberon never got templates. The future will tell... Would be a nice thing to bet a dime on whether Go will have generics or not. I bet not ;-).
Re: Porting DMD compiler to haiku OS
On Thursday, 8 May 2014 at 10:02:53 UTC, Joakim wrote: On Thursday, 8 May 2014 at 08:18:16 UTC, iridium wrote: On Thursday, 8 May 2014 at 07:55:04 UTC, Jacob Carlborg wrote: On 08/05/14 08:53, iridium wrote: That's what happens when linking: http://itmages.ru/image/view/1655772/669acb30 You need to link with both Phobos and the C standard library. Run dmd -v main.d and look at the linking step at the end. dmd -v test.d result: http://itmages.ru/image/view/1655879/d0bb1c62 gcc -o main test.o ../../phobos/generated/haiku/release/32/libphobos2.a -lstdc++ -lroot result: http://itmages.ru/image/view/1655886/0be9d48f Try linking to druntime alone first, before you start messing with phobos. You'll notice that the makefile for druntime builds a libdruntime-haiku32.a library, if you modify the makefile for Haiku. Try building some simple executables with that first before moving on to phobos. You need to go through all the TARGET_FREEBSD blocks in the dmd source and all the version(FreeBSD) blocks in druntime and add TARGET_HAIKU, version(Haiku), and the appropriate source for Haiku inside those blocks. We can't sit here and help you do that, unless it's something truly unusual that you're unable to figure out. Most of these errors seem to be the result of missing some blocks here and there. Admittedly, porting to a new OS is not a small job, and you seem to be going pretty fast. Keep plugging away at it and I'm sure you'll be able to figure it out. I'm not asking you to solve the problem for me. Just tell me what this is. I tried building some simple executables and got this: http://itmages.ru/image/view/1657447/92442093 dmd -c -v main.d: http://itmages.ru/image/view/1657453/3f46cbd9 main.d contains: module main; void main() { }
Re: Ranges of char and wchar
On 08/05/14 23:33, Walter Bright wrote: It's true that when I first encountered C#'s LINQ, I was surprised that it was lazy. It's also true that most of std.algorithm is lazy. Apart from coming up with a new naming convention (and renaming algorithms in Phobos), I don't see any obvious solution to what's lazy and what's not. One possibility is to informally (i.e. in the documentation rather than the core language spec) call something an 'algorithm' if it is lazy and 'function' if it is eager. Don't know if it helps, but we could add a UDA indicating which functions are lazy. This wouldn't require any renaming of existing functions. -- /Jacob Carlborg
Re: [OT] Go officially won't get generics
On Friday, 9 May 2014 at 07:05:59 UTC, Bienlein wrote: Well, he had previously stated that there would be no breaking changes, and that if there were changes it would have to be called Go version 2 or something. So when generics were brought up he stated that there were no plans for generics and I said we are going to leave the language, we are done (with version 1 semantics). Ola.. Rob Pike says in this thread (https://groups.google.com/forum/?hl=de#!topic/golang-nuts/3fOIZ1VLn1o): Go has type switches, and therefore no need for the Visitor Pattern. He has exactly the same mindset as Niklaus Wirth, and Oberon never got templates. The future will tell... Would be a nice thing to bet a dime on whether Go will have generics or not. I bet not ;-). Oberon did eventually get some basic form of templates in Active Oberon, but that was not under Wirth's supervision. He actually went in the other direction by making a minimalist version of Oberon, Oberon-07. I had the opportunity to meet Wirth at CERN, when he and a few ETHZ members took part in the Oberon Day, back in 2004. He is a really great guy, but he could not understand why Oberon was being ignored in the industry, as he expected the desire for quality to drive developers to it. In a similar vein to Rob Pike's blog post about why he thinks C++ developers don't care for Go, Niklaus Wirth wrote a long article about the industry's lack of interest in minimalist languages. What they fail to understand, or acknowledge, is that large-scale architectures in simple languages usually lead to complex code with lots of boilerplate. -- Paulo
Re: FYI - mo' work on std.allocator
On Wednesday, 7 May 2014 at 02:48:45 UTC, Brian Schott wrote: Here's an interesting anecdote: I have a version of DCD that I've been working on for a while that uses allocators and an allocator-backed container library. (You can find this on Github easily enough, but I'm not announcing it yet.) The allocator version uses 1/3 the memory that the older GC version used. This new version of DCD is now checked in at https://github.com/Hackerpilot/DCD.
Re: More on Rust language
On Friday, 4 November 2011 at 03:14:29 UTC, bearophile wrote: Through Reddit I've found two introductions to the system language Rust being developed by Mozilla. This is one of them: http://marijnhaverbeke.nl/rust_tutorial/ This is an alpha-state tutorial, so some parts are unfinished and some parts will probably change, in the language too. Unfortunately this first tutorial doesn't discuss typestates and syntax macros (yet), two of the most significant features of Rust. The second tutorial discussed typestates a bit too. Currently the Rust compiler is written in Rust and is based on the LLVM back-end. This allows it to eat its own dog food (there are a few descriptions of typestate usage in the compiler itself) and the back-end is efficient enough. Compared to DMD the Rust compiler is in an earlier stage of development; it works and is able to compile itself, but I think it's not usable yet for practical purposes. On the GitHub page the Rust project has 547 Watch and 52 Fork, while DMD has 159 and 49 of them, despite Rust being a much younger compiler/software compared to D/DMD. So it seems enough people are interested in Rust. Most of the text below is quotations from the tutorials. --- http://marijnhaverbeke.nl/rust_tutorial/control.html Pattern matching Rust's alt construct is a generalized, cleaned-up version of C's switch construct. You provide it with a value and a number of arms, each labelled with a pattern, and it will execute the arm that matches the value. alt my_number { 0 { std::io::println("zero"); } 1 | 2 { std::io::println("one or two"); } 3 to 10 { std::io::println("three to ten"); } _ { std::io::println("something else"); } } There is no 'falling through' between arms, as in C; only one arm is executed, and it doesn't have to explicitly break out of the construct when it is finished. The part to the left of each arm is called the pattern. Literals are valid patterns, and will match only their own value.
The pipe operator (|) can be used to assign multiple patterns to a single arm. Ranges of numeric literal patterns can be expressed with to. The underscore (_) is a wildcard pattern that matches everything. If the arm with the wildcard pattern was left off in the above example, running it on a number greater than ten (or negative) would cause a run-time failure. When no arm matches, alt constructs do not silently fall through; they blow up instead. A powerful application of pattern matching is destructuring, where you use the matching to get at the contents of data types. Remember that (float, float) is a tuple of two floats: fn angle(vec: (float, float)) -> float { alt vec { (0f, y) when y < 0f { 1.5 * std::math::pi } (0f, y) { 0.5 * std::math::pi } (x, y) { std::math::atan(y / x) } } } A variable name in a pattern matches everything, and binds that name to the value of the matched thing inside of the arm block. Thus, (0f, y) matches any tuple whose first element is zero, and binds y to the second element. (x, y) matches any tuple, and binds both elements to a variable. Any alt arm can have a guard clause (written when EXPR), which is an expression of type bool that determines, after the pattern is found to match, whether the arm is taken or not. The variables bound by the pattern are available in this guard expression. Record patterns Records can be destructured in alt patterns. The basic syntax is {fieldname: pattern, ...}, but the pattern for a field can be omitted as a shorthand for simply binding the variable with the same name as the field. alt mypoint { {x: 0f, y: y_name} { /* Provide sub-patterns for fields */ } {x, y} { /* Simply bind the fields */ } } The field names of a record do not have to appear in a pattern in the same order they appear in the type. When you are not interested in all the fields of a record, a record pattern may end with , _ (as in {field1, _}) to indicate that you're ignoring all other fields.
Tags Tags [FIXME terminology] are datatypes that have several different representations. For example, the type shown earlier: tag shape { circle(point, float); rectangle(point, point); } A value of this type is either a circle, in which case it contains a point record and a float, or a rectangle, in which case it contains two point records. The run-time representation of such a value includes an identifier of the actual form that it holds, much like the 'tagged union' pattern in C, but with better ergonomics. Tag patterns For tag types with multiple variants, destructuring is the only way to get at their contents. All variant constructors can be used as patterns, as in this definition of area: fn area(sh: shape) -> float { alt sh { circle(_, size) { std::math::pi * size * size } rectangle({x, y}, {x: x2, y: y2}) { (x2 - x) * (y2 - y) } } }
Re: Ranges of char and wchar
On Thursday, 8 May 2014 at 22:27:11 UTC, Luís Marques wrote: (I guess that's one drawback of providing functional programming without immutability?) Another issue is that iteration might not be repeatable, especially if a closure accidentally slips into the range. In C# iteration is not destructive; in D one can save a range.
Re: Suggestion to implement __traits(getImports, Scope)
On Friday, 9 May 2014 at 04:09:46 UTC, captaindet wrote: by coincidence, i have use for this too. also thought __traits(allMembers, ...) would work. too bad it doesn't. is this a bug or expected behavior? /det Just out of curiosity, what's your use case?
Re: The Current Status of DQt
On Friday, 9 May 2014 at 05:00:48 UTC, Russel Winder via Digitalmars-d wrote: Pacific Standard Time, UTC−8:00 Pakistan Standard Time, UTC+5:00 Philippine Standard Time, UTC+8:00 :-) I think we can infer that Andrei meant to say 09:00-08:00. Unless there is some shenanigans with moving the clocks forward an hour. He probably refers to some study of reddit activity; the results of that study might not match his time zone. Also, you detected it as UTC-7. Please see this public service announcement: http://xkcd.com/1179/ Though it lists 20130227 as a discouraged format, it's valid ISO 8601, and Phobos's Date.toISOString generates strings in that format: http://dlang.org/phobos/std_datetime.html#.Date.toISOString
Re: Removing zlib1.dll in favor of zlib1.lib
You can just compile a sample application using zlib from phobos and see if it works without dll.
Re: opApply and const
On Friday, 9 May 2014 at 05:26:12 UTC, Arne Ludwig wrote: Hello, when using opApply it seems natural to have two versions: one normal and one const. My problem is that I cannot find a way to describe both versions with one code block. Since there could be a number of basic variants with different numbers of delegate arguments this can lead to serious code duplication. This problem was discussed years ago: http://www.digitalmars.com/d/archives/digitalmars/D/opApply_and_const_63436.html Small example: http://pastebin.com/kRrPp6Yg In that example I need (mostly) the same code four times. There should be some way around that. Does anyone have ideas? I answered a similar question on stackoverflow [1]. The same approach can be used here. If dg doesn't mutate, the mutable overloads are de-facto const. So, if you have a const dg, it's safe to cast the object's const away and call the mutable versions of opApply: /* de-facto const if dg doesn't mutate */ int opApply(int delegate (T*) dg) { ... implementation ... } /* ditto */ int opApply(int delegate (size_t, T*) dg) { ... implementation ... } int opApply(int delegate(const T*) dg) const { return (cast() this).opApply(cast(int delegate(T*)) dg); } int opApply(int delegate (size_t, const T*) dg) const { return (cast() this).opApply(cast(int delegate(size_t, T*)) dg); } [1] http://stackoverflow.com/questions/22442031/how-to-make-a-template-function-const-if-the-template-is-true/22442425
Re: [OT] Go officially won't get generics
On 08/05/2014 22:09, Bienlein wrote: On Wednesday, 7 May 2014 at 15:54:42 UTC, Paulo Pinto wrote: So the videos of the Gophercon 2014 are being made available. Rob Pike did the keynote. At the expected question about generics, his answer was "There are no plans for generics. I said we're going to leave the language; we're done." Discussion ongoing on HN, https://news.ycombinator.com/item?id=7708904 -- Paulo I agree with Paulo. At 54:40 he says what Paulo has already quoted. And "we are done" means that's it, folks. It even sounds to me like the language is finished and it will be left like that. -- Bienlein I find this aspect much more interesting than the "get generics or not" one. So Rob Pike and the other guy are leaving the language then? I wonder what that means for the future of Go. I guess the community will take over, but will there be someone from Google still in charge? And how many resources/manpower from Google will they still dedicate to Go? The thing about generics is that, if Go were to break through and become a mainstream popular language, generics would likely be added to it somehow. Maybe in the main language, as in Go 2.0, or maybe as a side-project/language-extension (Go++ ?) that someone else would take on. Similar to Java, which tried to keep the language as simple as possible in the beginning (no operator overloading, no metaprogramming or generics, etc.), but eventually saw the shortcomings as too significant... (even if the only thing they added was type-parameterization generics, just that makes a big difference) -- Bruno Medeiros https://twitter.com/brunodomedeiros
Re: [OT] Go officially won't get generics
On Friday, 9 May 2014 at 07:38:46 UTC, Paulo Pinto wrote: On Friday, 9 May 2014 at 07:05:59 UTC, Bienlein wrote: Well, he had previously stated that there would be no breaking changes, and that if there were changes it would have to be called go version 2 or something. So when generics were brought up he stated that there were no plans for generics and "I said we are going to leave the language, we are done" (with version 1 semantics). Ola.. Robert Pike says in this thread (https://groups.google.com/forum/?hl=de#!topic/golang-nuts/3fOIZ1VLn1o): "Go has type switches, and therefore no need for the Visitor Pattern." He has exactly the same mindset as Niklaus Wirth, and Oberon never got templates. The future will tell... Would be a nice thing to bet a dime on whether Go will have generics or not. I bet not ;-). Oberon did eventually get some basic form of templates in Active Oberon, but that was not under Wirth's supervision. He actually went in the other direction by making a minimalist version of Oberon with Oberon-07. I had the opportunity to meet Wirth at CERN, when he and a few ETHZ members took part in the Oberon Day back in 2004. He is a really great guy, but he could not understand why Oberon was being ignored in the industry, as he expected the desire for quality to drive developers to it. In a similar vein to Rob Pike writing the blog post on why he thinks C++ developers don't care for Go, Niklaus Wirth wrote a long article about the industry's lack of interest in minimalist languages. What they fail to understand, or acknowledge, is that large scale architectures in simple languages usually lead to complex code with lots of boilerplate. -- Paulo By generics you mean templates, right? I started to use templates in D, although I wasn't convinced. But the more I use them the more I appreciate them. If you work with one or two basic types, templates don't seem to make much sense.
But when you use the power of D, like having arrays of structs that hold arrays of structs etc., then templates start to make sense. The thing is, the language has to be designed in a way that templates make sense and can be used throughout the language. If not, better not to introduce them. D, at a certain point in time, started to be designed around templates, or with templates in mind. I think it was Andrei who convinced Walter to do that. But in my view the language has to cater for templates for them to be useful. Introducing them randomly for the sake of having them will not work.
Re: [OT] DConf - How to survive without a car?
On Thursday, 8 May 2014 at 14:34:13 UTC, Steven Schveighoffer wrote: On Wed, 07 May 2014 00:11:45 -0400, Mike n...@none.com wrote: On Tuesday, 6 May 2014 at 02:20:46 UTC, Lionello Lunesu wrote: Hi all, After last year's incident with my tires getting slashed, I'm really hoping I can do without a car during this year's DConf. How feasible is this? I'll be staying at Aloft. Would be great if there's someone I can share a ride with. I've also seen there's a public bus going more or less to FB and back, so I should be good there. (Right?) But how about getting to SFO or downtown? Am I causing myself a whole lot of pain (albeit of a different kind) by not renting a car? To be clear, I'm not looking for an economical option, just peace of mind. Lio. I'm wondering about this myself. My current plan to get from SFO to the Aloft is via BART (SFO - Balboa Park - Bay Fair - Fremont) and then take a bus or a taxi from Fremont to the hotel. Just FYI, I took BART and buses to my hotel last year, it took 2.5 hours. When Andrew drove me back to the airport from facebook, it took 30 minutes. Something to think about :) Also, the site 511.org is awesome for using public transportation in the bay area. But, thinking about it a little more, a car is starting to look pretty good. I will have a car this year, and will play taxi driver from Aloft to facebook. Excellent. Could I get a lift if possible? If so, please tell me when I should be in the Aloft lobby. /Jonas
Re: Porting DMD compiler to haiku OS
On Friday, 9 May 2014 at 07:21:36 UTC, iridium wrote: I'm not asking to solve the problem for me. Just tell me what is that. I try to building some simple executables and get this and that: http://itmages.ru/image/view/1657447/92442093 Those are linker errors, because you are missing symbols like __stdoutp and __stderrp. My guess is that you simply cut and pasted the stdc.stdio block from FreeBSD, but Haiku likely doesn't define the same symbols. You need to look at stdio.h and other header files in Haiku and fill those druntime blocks in with the declarations appropriate to Haiku. It also looks like you messed something up with setting up rt.sections_haiku. Finally, as Jacob mentioned, you should look at the flags that dmd normally uses when it links, as you're missing some of those.
Re: Parallel execution of unittests
On Thursday, 8 May 2014 at 18:54:30 UTC, Jacob Carlborg wrote: I mean, what the h*ll does this unit test test: https://github.com/D-Programming-Language/phobos/blob/master/std/numeric.d#L995 It is explained in comments there. And it won't become simpler if you add some fancy syntax there. It looks complicated because it _is_ complicated, not because the syntax is bad. @describe("foo") This is redundant, as D unittest blocks are associated with the symbols they are placed next to. { @it("should do something useful") unittest { This is essentially @name with an overly smart name and weird attribute placement. It is not so much different from what you suggested with named unit tests. It introduces a bunch of artificial annotations for something that can be taken care of by a single attribute as a side effect. Not KISS.
Re: isUniformRNG
On Friday, 9 May 2014 at 00:43:10 UTC, Nick Sabalausky wrote: On 5/8/2014 5:29 PM, Joseph Rushton Wakeling via Digitalmars-d wrote: That seems a problematic fix for me -- doesn't it mean that there can only ever be one instance of any individual RNG? There can technically be multiple instances, but yea, they're all effectively tied together. However, I'm leaning towards the belief that's correct behavior for a RNG. It's *definitely* correct for a crypto RNG - you certainly wouldn't want two crypto RNGs ever generating the same sequence, not even deliberately. That would defeat the whole point of a crypto RNG. As for ordinary non-crypto RNGs, I honestly can't imagine any purpose for reliably generating the same values other than playing back a previous sequence. But if that's what you want, then IMO it's better to record the sequence of emitted values, or even record/replay the higher-level decisions which the randomness was used to influence. For randomness, record/replay is just less fiddly and error-prone than reliably regenerating values from an algorithm that intentionally tries to imitate non-predictability. One slip-up while regenerating the sequence (example: trying to replay from the same seed on a RNG that has since had a bug fixed, or on a different architecture which the RNG wasn't well-tested on) and then the inaccuracies just cascade. Plus this way you can swap in a non-deterministic RNG and everything will still work. Or more easily skip back/ahead. I'm just not seeing a legitimate use-case for multiple states of the same RNG engine (at least on the same thread) that wouldn't be better served by a different approach. I strongly disagree here. If a user has explicitly requested a PRNG, they should be able to rely on its most basic property, being deterministic.
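The determinism being argued for is easy to state concretely: two generators seeded identically must emit identical sequences. A minimal sketch with phobos' std.random (seed value chosen arbitrarily):

```d
import std.random : Mt19937, uniform;

void main()
{
    // Identically seeded PRNGs must produce identical sequences;
    // this reproducibility is the property that replayable
    // simulations and procedural generation rely on.
    auto a = Mt19937(42);
    auto b = Mt19937(42);
    foreach (i; 0 .. 5)
        assert(uniform(0, 100, a) == uniform(0, 100, b));
}
```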
Re: isUniformRNG
On Friday, 9 May 2014 at 00:43:10 UTC, Nick Sabalausky wrote: As for ordinary non-crypto RNGs, I honestly can't imagine any purpose for reliably generating the same values other than playing back a previous sequence. But if that's what you want, then IMO it's better to record the sequence of emitted values, or even record/replay the higher-level decisions which the randomness was used to influence. There's lots of uses, e.g. Minecraft.
Re: From slices to perfect imitators: opByValue
On Thursday, 8 May 2014 at 21:08:36 UTC, bearophile wrote: I think T1 and T2 should be equivalent for built-in tuples. There are no built-in tuples in D.
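For context: D has no tuple literal in the language itself, but phobos provides a library tuple in std.typecons. A minimal sketch:

```d
import std.typecons : Tuple, tuple;

void main()
{
    // A library type, not a language construct: Tuple is an
    // ordinary templated struct defined in phobos.
    Tuple!(int, string) t = tuple(1, "one");
    assert(t[0] == 1);
    assert(t[1] == "one");
}
```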
Re: [OT] Go officially won't get generics
On Friday, 9 May 2014 at 11:46:20 UTC, Chris wrote: On Friday, 9 May 2014 at 07:38:46 UTC, Paulo Pinto wrote: On Friday, 9 May 2014 at 07:05:59 UTC, Bienlein wrote: Well, he had previously stated that there would be no breaking changes, and that if there were changes it would have to be called go version 2 or something. So when generics were brought up he stated that there were no plans for generics and "I said we are going to leave the language, we are done" (with version 1 semantics). Ola.. Robert Pike says in this thread (https://groups.google.com/forum/?hl=de#!topic/golang-nuts/3fOIZ1VLn1o): "Go has type switches, and therefore no need for the Visitor Pattern." He has exactly the same mindset as Niklaus Wirth, and Oberon never got templates. The future will tell... Would be a nice thing to bet a dime on whether Go will have generics or not. I bet not ;-). Oberon did eventually get some basic form of templates in Active Oberon, but that was not under Wirth's supervision. He actually went in the other direction by making a minimalist version of Oberon with Oberon-07. I had the opportunity to meet Wirth at CERN, when he and a few ETHZ members took part in the Oberon Day back in 2004. He is a really great guy, but he could not understand why Oberon was being ignored in the industry, as he expected the desire for quality to drive developers to it. In a similar vein to Rob Pike writing the blog post on why he thinks C++ developers don't care for Go, Niklaus Wirth wrote a long article about the industry's lack of interest in minimalist languages. What they fail to understand, or acknowledge, is that large scale architectures in simple languages usually lead to complex code with lots of boilerplate. -- Paulo By generics you mean templates, right? I started to use templates in D, although I wasn't convinced. But the more I use them the more I appreciate them. If you work with one or two basic types, templates don't seem to make much sense.
But when you use the power of D, like having arrays of structs that hold arrays of structs etc., then templates start to make sense. The thing is, the language has to be designed in a way that templates make sense and can be used throughout the language. If not, better not to introduce them. D, at a certain point in time, started to be designed around templates, or with templates in mind. I think it was Andrei who convinced Walter to do that. But in my view the language has to cater for templates for them to be useful. Introducing them randomly for the sake of having them will not work. Agreed. Designing a strongly typed language in 2007 without support for genericity does not make much sense, when all other mainstream languages have adopted it. Let's not forget that CLU (1975) and Ada (1980) were among the first ones to support it. The initial version of the C++ STL was actually based on a preliminary version done in Ada. Even .NET was actually designed with generics support in mind (1999): http://blogs.msdn.com/b/dsyme/archive/2011/03/15/net-c-generics-history-some-photos-from-feb-1999.aspx -- Paulo
Re: Allocating a wstring on the stack (no GC)?
On Wed, 07 May 2014 19:41:16 +0100, Maxime Chevalier-Boisvert maximechevali...@gmail.com wrote: Unless I'm misunderstanding it should be as simple as: wchar[100] stackws; // alloca() if you need it to be dynamically sized. A slice of this static array behaves just like a slice of a dynamic array. I do need it to be dynamically sized. I also want to avoid copying my string data if possible. Basically, I just want to create a wstring view on an existing raw buffer that exists in memory somewhere, based on a pointer to this buffer and its length. import std.stdio; import core.stdc.stdlib : malloc; import core.stdc.wchar_ : wcscpy; wchar[] toWChar(const void *ptr, int len) { // Cast pointer to wchar*, create slice (on the heap?) from it (copies no data) return (cast(wchar*)ptr)[0..len]; } void main() { // Pre-existing data int len = 12; wchar *ptr = cast(wchar*)malloc(len * wchar.sizeof); wcscpy(ptr, "Hello World"); // Create slice of data wchar[] slice = toWChar(ptr, len); writefln("%s", slice); } R -- Using Opera's revolutionary email client: http://www.opera.com/mail/
Re: D for Android
On 2014-05-08 12:16 PM, Joakim wrote: All you need to get going is to download the latest Android NDK (http://developer.android.com/tools/sdk/ndk/index.html) and run Android/x86 (http://www.android-x86.org/, I recommend the 4.3 build) in a VM. I'll put up some basic setup and build instructions if someone is interested. Thanks for this, it's more than necessary and I believe no time will have been wasted.
Re: D for Android
On Thursday, 8 May 2014 at 16:16:22 UTC, Joakim wrote: Well, Android/x86 for now. I've been plugging away at getting D running on Android/x86 and got all of the druntime modules' unit tests and 37 of 50 phobos modules' unit tests to pass. I had to hack dmd into producing something like packed TLS for ELF, my patch is online here: http://164.138.25.188/dmd/packed_tls_for_elf.patch I simply turned off all TLS flags for ELF and spliced in the el_picvar patch from OS X to call ___tls_get_addr. Somebody who knows dmd better than me should verify to make sure this is right. I've also put online preliminary pulls for druntime and phobos: https://github.com/D-Programming-Language/druntime/pull/784 https://github.com/D-Programming-Language/phobos/pull/2150 Now that a significant chunk of D is working on Android/x86, I'm looking for others to pitch in. We really need to get D on mobile, and Android/x86 is an ideal place to start. Dan Olson has done some nice work getting D on iOS using ldc, I'm sure he could use help too: http://forum.dlang.org/thread/m2txc2kqxv@comcast.net http://forum.dlang.org/thread/m2d2h15ao3@comcast.net Stuff remaining to be done: 1. Fix all phobos unit tests. Those who know the failing modules better would be best equipped to get them to work. 2. I tried creating an Android app, ie an apk, which is really just a shared library called from the Dalvik JVM, as opposed to the standalone executables I've been running from the Android command line so far. The apk enters the D code and then segfaults in the new TLS support, I'll debug that next. 3. Use ldc/gdc to build for Android/ARM. 4. Start translating various headers on Android so they can be called from D, ie EGL, OpenGL ES, sensors, etc. 5. Integrate the D compilers into the existing Makefile-based build system of the Android NDK. Right now, I extract the necessary compiler and linker commands and run them by hand when necessary. 
All you need to get going is to download the latest Android NDK (http://developer.android.com/tools/sdk/ndk/index.html) and run Android/x86 (http://www.android-x86.org/, I recommend the 4.3 build) in a VM. I'll put up some basic setup and build instructions if someone is interested. I can't tell you how much I appreciate this! It's wonderful. All the stuff I'm working on will have to go on smart phones and tablets sooner or later. People do ask for it, because everything is an app these days. Much as I appreciate all the efforts to improve D as a language (GC, library etc.), if we can't get into the mobile market, D won't take off. People think app. What do you think? - Think? Is there an app for that?
Re: D for Android
On Thursday, 8 May 2014 at 16:16:22 UTC, Joakim wrote: Great to hear! Much appreciated.
Re: [OT] Go officially won't get generics
On Friday, 9 May 2014 at 07:38:46 UTC, Paulo Pinto wrote: I had the opportunity to meet Wirth at CERN, when he and a few ETHZ members took part in the Oberon Day back in 2004. He is a really great guy, but he could not understand why Oberon was being ignored in the industry, as he expected the desire for quality to drive developers to it. From Wirth's published books and known projects I get the impression that he is a brilliant scientist who has an extremely basic understanding of what engineers need. His languages are all about proposing academic solutions to practical problems.
Re: [OT] Go officially won't get generics
On Friday, 9 May 2014 at 13:59:38 UTC, Dicebot wrote: On Friday, 9 May 2014 at 07:38:46 UTC, Paulo Pinto wrote: I had the opportunity to meet Wirth at CERN, when he and a few ETHZ members took part in the Oberon Day back in 2004. He is a really great guy, but he could not understand why Oberon was being ignored in the industry, as he expected the desire for quality to drive developers to it. From Wirth's published books and known projects I get the impression that he is a brilliant scientist who has an extremely basic understanding of what engineers need. His languages are all about proposing academic solutions to practical problems. There is this conflict between textbooks and the real world (hardware / software interaction). Walter once said on this forum that when he sees textbook examples, he says that things don't really work that way. Through D, I've learned a lot about that, because the language was designed with hindsight and with the machine in mind.
Re: More radical ideas about gc and reference counting
On 8 May 2014 16:11, Paulo Pinto via Digitalmars-d digitalmars-d@puremagic.com wrote: On Wednesday, 7 May 2014 at 20:09:07 UTC, Xavier Bigand wrote: 4MB?! That is a world of pleasure. Try to cram a Z80 application into 48 KB. :) I've never heard of a Z80 program running a tracing GC. (I have used refcounting on a Z80 though...) The main problem nowadays is not automatic memory management, in whatever form, be it GC, RC, compiler dataflow, dependent types or whatever. The problem is how many developers code like the memory was infinite without pausing a second to think about their data structures and algorithms. Okay, please don't be patronising. Let's assume the programmers you're talking to aren't incompetent. This discussion's existence should prove that your theory is incorrect. Let's consider your argument though, it sounds EXACTLY like the sort of programmer that would invent and rely on a tracing GC! The technology practically implies a presumption of infinite (or significant excess) memory. Such an excess suggests either the software isn't making full use of the system it's running on, or the programmers writing the software are like you say. Just yesterday I have re-written a Java application that in the application hot path does zero allocations on the code under our control. It requires execution analysis tooling, and thinking how to write the said code. That's it. Congratulations. But I'm not sure how that helps me? You know my arguments, how does this example of yours address them? What were the operating and target hardware requirements of your project? How much memory did you have? How much memory did you use? What is the frequency of runtime resource loading/swapping? What was the pause period and the consequence when you did collect? Let's also bear in mind that Java's GC is worlds ahead of D's. 
I am getting very tired of repeating myself and having my points basically ignored, or dismissed with something like my project which doesn't actually share those requirements works fine (not that I'm saying you did that just now; I don't know, you need to tell me more about your project). I'd really like to establish as fact or fiction whether tracing GC is _practically_ incompatible with competitive embedded/realtime environments where pushing the hardware to the limits is a desire. (Once upon a time, this was a matter of pride for all software engineers) If people can prove that I'm making it all up, and my concerns are invalid if I just ..., or whether my points are actually true. It doesn't matter what awesome GC research is out there if it's incompatible with D and/or small devices that may or may not have a robust operating system. D is allegedly a systems language, and while there is no such definition, my own take is that means it shouldn't be incompatible with, or discourage certain classes of computers or software by nature, otherwise it becomes a niche language. Please, prove me wrong. Show me how tracing collection can satisfy the basic requirements I've raised on countless prior posts, or practical workarounds that you would find reasonable if you were to consider working within those restrictions yourself and still remain compelling enough to adopt D in your corporation in the first place (implying a massive risk, and cost in retraining all the staff and retooling). I don't know how to reconcile the problem with the existing GC, and I am not happy to sacrifice large parts of the language for it. I've made the argument before that sacrificing large parts of the language as a 'work-around' is, in essence, sacrificing practically all libraries. That is a truly absurd notion; to suggest that anybody should take advice to sacrifice access to libraries is being unrealistic. I refer again to my example from last weekend. 
I was helping my mates try and finish up their PS4 release. It turns out, the GC is very erratic, causing them massive performance problems, and they're super stressed about this. Naturally, the first thing I did was scolded them for being stupid enough to use C# on a game in the first place, but then as I tried to search for practical options, I started to realise the gravity of the situation, and I'm really glad I'm not wearing their shoes! I've made the argument in the past that this is the single most dangerous class of issue to encounter; one that emerges as a problem only at the very end of the project and the issues are fundamental and distributed, beyond practical control. Massive volumes of code are committed, and no time or budget exists to revisit and repair the work that was already signed off. The only practical solution in this emergency situation is to start cutting assets (read: eliminate your competitive edge against competition), and try and get the system resource usage down to that half-ish of the system, as demonstrated from the prior android vs iphone comparison. It comes to this; how can a GC ever work in a memory
Re: [OT] Go officially won't get generics
Beyond being fodder for people who don't write Go but hate it for some reason, this seems to be an ongoing non-event. The official mailing list has practically no mention of generics anymore.
Re: [OT] DConf - How to survive without a car?
I live in the area and will be driving by Aloft on my way to FB. I can fit 3 other people (silver RAV4). I'll just pull up with a clever D-themed sign on my car and pick up any stragglers if needed.
Re: [OT] Go officially won't get generics
On 5/9/2014 7:18 AM, Chris wrote: There is this conflict between textbooks and real world (hardware / software interaction). Walter once said on this forum that when he sees textbook examples, he says that things don't really work that way. Found that out when implementing textbook optimization algorithms. It's sort of like watching those miracle cleaning products on an infomercial, and then trying them out yourself :-)
Re: D for Android
Le 09/05/2014 15:22, Chris a écrit : On Thursday, 8 May 2014 at 16:16:22 UTC, Joakim wrote: Well, Android/x86 for now. I've been plugging away at getting D running on Android/x86 and got all of the druntime modules' unit tests and 37 of 50 phobos modules' unit tests to pass. I had to hack dmd into producing something like packed TLS for ELF, my patch is online here: http://164.138.25.188/dmd/packed_tls_for_elf.patch I simply turned off all TLS flags for ELF and spliced in the el_picvar patch from OS X to call ___tls_get_addr. Somebody who knows dmd better than me should verify to make sure this is right. I've also put online preliminary pulls for druntime and phobos: https://github.com/D-Programming-Language/druntime/pull/784 https://github.com/D-Programming-Language/phobos/pull/2150 Now that a significant chunk of D is working on Android/x86, I'm looking for others to pitch in. We really need to get D on mobile, and Android/x86 is an ideal place to start. Dan Olson has done some nice work getting D on iOS using ldc, I'm sure he could use help too: http://forum.dlang.org/thread/m2txc2kqxv@comcast.net http://forum.dlang.org/thread/m2d2h15ao3@comcast.net Stuff remaining to be done: 1. Fix all phobos unit tests. Those who know the failing modules better would be best equipped to get them to work. 2. I tried creating an Android app, ie an apk, which is really just a shared library called from the Dalvik JVM, as opposed to the standalone executables I've been running from the Android command line so far. The apk enters the D code and then segfaults in the new TLS support, I'll debug that next. 3. Use ldc/gdc to build for Android/ARM. 4. Start translating various headers on Android so they can be called from D, ie EGL, OpenGL ES, sensors, etc. 5. Integrate the D compilers into the existing Makefile-based build system of the Android NDK. Right now, I extract the necessary compiler and linker commands and run them by hand when necessary. 
All you need to get going is to download the latest Android NDK (http://developer.android.com/tools/sdk/ndk/index.html) and run Android/x86 (http://www.android-x86.org/, I recommend the 4.3 build) in a VM. I'll put up some basic setup and build instructions if someone is interested. I can't tell you how much I appreciate this! It's wonderful. All the stuff I'm working on will have to go on smart phones and tablets sooner or later. People do ask for it, because everything is an app these days. Much as I appreciate all the efforts to improve D as a language (GC, library etc.), if we can't get into the mobile market, D won't take off. People think app. What do you think? - Think? Is there an app for that? +1
Re: [OT] Go officially won't get generics
On Friday, 9 May 2014 at 11:46:20 UTC, Chris wrote: If not, better not to introduce them. D, at a certain point in time, started to be designed around templates, or with templates in mind. I think it was Andrei who convinced Walter to do that. It wasn't Andrei, but I don't remember who Walter gave credit to.
Re: [OT] Go officially won't get generics
On Friday, 9 May 2014 at 11:38:13 UTC, Bruno Medeiros wrote: I find this aspect much more interesting than the "get generics or not" one. So Rob Pike and the other guy are leaving the language then? No, the context around what he said is very important. Google isn't leaving Go development, generics are not nixed for Go 2.0, the language will continue to see bug fixes. This is all very clear with context. I don't expect they will look to add generics in Go 2.0 (others will, but not them).
Re: More on Rust language
It increases the complexity to reason about code. No, that's wrong. If the compiler does not give a helping hand, bugs are too easy to create. Usually a type system is used to increase safety...
Re: From slices to perfect imitators: opByValue
On Thursday, 8 May 2014 at 11:05:20 UTC, monarch_dodra wrote: On Thursday, 8 May 2014 at 07:09:24 UTC, Sönke Ludwig wrote: Just a general note: This is not only interesting for range/slice types, but for any user defined reference type (e.g. RefCounted!T or Isolated!T). Not necessarily: As soon as indirections come into play, you are basically screwed, since const is turtles all the way down. So for example, the conversion from const RefCounted!T to RefCounted!(const T) is simply not possible, because it strips the const-ness of the ref count. What we would *really* need here is NOT: const RefCounted!T = RefCounted!(const T) But rather RefCounted!T = RefCounted!(const T) The idea is to cut out the head const directly. This also applies to most ranges too BTW. Skip paragraph. Okay deadalnix. Second attempt. Started with Container!(const(T)). Thought about separating the const. Container!(T, const) and then only one const. None of that Container!(A,B, immutable, const). Then thought about int qual(*) * a. With qual as entry point. Then decided to go tail const only, single head mutable. But then seeing that there are two cases above, decided to go acceptor, copy. --- a) I'm going to call this the copying case where the value types are copied const RefCounted!T = RefCounted!(const T) immutable int [] a = immutable (int) [] a; immutable to mutable b) Acceptor case. RefCounted!T = RefCounted!(const T) e.g. a const field accepts the mutable or immutable field. --- struct DemoStruct { int * * a; acceptor int * * b; void demonstrate() acceptor { assert(typeof(a).stringof == "int * *"); assert(typeof(b).stringof == "acceptor(int *) *"); } } void test(acceptor(const) DemoStruct v) { assert(typeof(v.a).stringof == "int * *"); assert(typeof(v.b).stringof == "const(int *) *"); } void main() { DemoStruct m; test(m); acceptor(immutable) i; test(i); } Like having acceptor behave like inout or something. The acceptor field can receive the following.
int * * a; immutable (int *) * b; const(int *) * acceptorfield = a; acceptorfield = b; const(int) * * oops = b; // not valid acceptor. const(int * *) meh = b; // will choose the first pointer mutable since // is a copy so meh. The acceptor field is tail const or something with the first entry being mutable. --- struct DemoStruct { int * * a; acceptor int * * b; void demonstrate() copy { assert(typeof(a).stringof == "copy(int *) *"); assert(typeof(b).stringof == "copy(int *) *"); } } void test(copy(const) DemoStruct v) { assert(typeof(v.a).stringof == "const(int *) *"); assert(typeof(v.b).stringof == "const(int *) *"); } void main() { immutable DemoStruct i; test(i); } For the copying version, immutable == mutable and the acceptor is applied to all fields. Please forgive me for pressing the send button. Sclytrack
Re: [OT] DConf - How to survive without a car?
This is an offer to all of you who will be staying somewhere on the way from Los Altos to FB (Mountain View and Palo Alto should be fine): I have one available seat in my car every day of the conference for * Going to the conference from your place * Going to Aloft after the conference * Going back to your place after Aloft * Potentially driving you to the airport If anybody needs that seat just let me know. Ali
Re: More on Rust language
On 09.05.2014 21:53, Araq wrote: It increases the complexity to reason about code. No, that's wrong. Why is it wrong? Have you ever seen a programmer reason about unique pointers, shared pointers, weak pointers, naked pointers, references and cyclic data structures without mistakes, in any language that provides them? If the compiler does not lend a helping hand, bugs are too easy to create. Usually a type system is used to increase safety... That is why Rust provides a type system that knows about pointer types, lifetimes and usage dataflow. In languages that don't go that far, the desired outcome is not always achieved. -- Paulo
Re: [OT] Go officially won't get generics
On Friday, 9 May 2014 at 19:07:24 UTC, Jesse Phillips wrote: No, the context around what he said is very important. Google isn't leaving Go development, generics are not nixed for Go 2.0, the language will continue to see bug fixes. This is all very clear with context. I see this as a good thing. What would you rather use - a third-party library written against abstractions, or one written against concrete types? I would rather use a library based on concrete types. My observation is that the more abstraction people indulge in, the greater the chance I will regard one of their abstractions as a code smell. And it isn't the case that the lack of generics is inhibiting participation. Go's library selection is already very good and getting better daily. Just yesterday I needed a Go lz4 compression library and was able to find three distinct implementations. Go is not hurting for third-party libraries.
Re: [OT] Go officially won't get generics
On Friday, 9 May 2014 at 21:03:06 UTC, brad clawsie wrote: On Friday, 9 May 2014 at 19:07:24 UTC, Jesse Phillips wrote: No, the context around what he said is very important. Google isn't leaving Go development, generics are not nixed for Go 2.0, the language will continue to see bug fixes. This is all very clear with context. I see this as a good thing. What would you rather use - a third-party library written against abstractions, or one written against concrete types? I would rather use a library based on concrete types. My observation is that the more abstraction people indulge in, the greater the chance I will regard one of their abstractions as a code smell. Quite likely you won't be able to use that third-party library with your types at all, and will need runtime conversion between library types and your own. std.algorithm is a prime example of how generalization improves code reuse. And it isn't the case that the lack of generics is inhibiting participation. Go's library selection is already very good and getting better daily. Just yesterday I needed a Go lz4 compression library and was able to find three distinct implementations. Go is not hurting for third-party libraries. This has nothing to do with the language. Existing mainstream languages are so bad that people will contribute to anything that is backed by a solid brand and has enough fuss about it. I clearly remember seeing several articles about crazy library designs that are forced by Go's lack of generics of any sort.
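A small illustration of the code-reuse point made above: std.algorithm's sort works unchanged on built-in and user-defined element types, with no conversion layer between library types and your own. (The Price type here is invented for the example.)

```d
import std.algorithm : sort, equal;

// A user type participates in std.algorithm as soon as it defines
// comparison - no wrapper types, no runtime conversion.
struct Price
{
    int cents;
    int opCmp(const Price rhs) const { return cents - rhs.cents; }
}

void main()
{
    auto prices = [Price(300), Price(100), Price(200)];
    sort(prices);                       // same generic algorithm...
    assert(equal(prices, [Price(100), Price(200), Price(300)]));

    auto ints = [3, 1, 2];
    sort(ints);                         // ...as for plain int[]
    assert(ints == [1, 2, 3]);
}
```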
Re: More radical ideas about gc and reference counting
On Friday, 9 May 2014 at 16:12:00 UTC, Manu via Digitalmars-d wrote: Let's also bear in mind that Java's GC is worlds ahead of D's. Is the Sun/Oracle reference implementation actually any good? I am getting very tired of repeating myself and having my points basically ignored, or dismissed with something like "my project, which doesn't actually share those requirements, works fine" (not that I'm saying you did that just now; I don't know, you need to tell me more about your project). I'd really like to establish as fact or fiction whether tracing GC is _practically_ incompatible with competitive embedded/realtime environments where pushing the hardware to the limits is a requirement. I've actually become very curious about this, too. I know that our GC isn't good, but I've seen a lot of handwaving. The pattern lately has involved someone talking about ARC being insufficient, leading somehow to Manu asserting sufficient GC is impossible (why?), and everyone kind of ignoring it after that. I've been digging into research on the subject while I wait for test scripts to run, and my gut feeling is it's definitely possible to get GC at least into striking distance, but I'm not nearly an expert on this area. (Some of these are dead clever, though! I just read this one today: https://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/local-gc.pdf) I don't know how to reconcile the problem with the existing GC, and I am not happy to sacrifice large parts of the language for it. I've made the argument before that sacrificing large parts of the language as a 'work-around' is, in essence, sacrificing practically all libraries. That is a truly absurd notion; to suggest that anybody should take advice to sacrifice access to libraries is being unrealistic. This is important, and simply throwing up our collective hands and saying to just not use major language features (I believe I recall slices were in that list?) really doesn't sit well with me either.
But conversely, Manu, something has been bothering me: aren't you restricted from using most libraries anyway, even in C++? Decent or acceptable performance isn't anywhere near maximum, so shouldn't any library code that allocates in any language be equally suspect? So from that standpoint, isn't any library you use in any language going to _also_ be tuned for performance in the hot path? Maybe I'm barking up the wrong tree, but I don't recall seeing this point addressed. More generally, I feel like we're collectively missing some important context: What are you _doing_ in your 16.6ms timeslice? I know _I'd_ appreciate a real example of what you're dealing with without any hyperbole. What actually _must_ be done in that timeframe? Why must collection run inside that window? What must be collected when it runs in that situation? (Serious questions.) See, in the final-by-default discussions, you clearly explained the issues and related them well to concerns that are felt broadly, but this... yeah, I don't really have any context for this, when D would already be much faster than the thirty years of C navel lint (K&R flavour!) that I grapple with in my day job. -Wyatt
Re: More radical ideas about gc and reference counting
On Friday, 9 May 2014 at 21:05:18 UTC, Wyatt wrote: But conversely, Manu, something has been bothering me: aren't you restricted from using most libraries anyway, even in C++? Decent or acceptable performance isn't anywhere near maximum, so shouldn't any library code that allocates in any language be equally suspect? So from that standpoint, isn't any library you use in any language going to _also_ be tuned for performance in the hot path? Maybe I'm barking up the wrong tree, but I don't recall seeing this point addressed. More generally, I feel like we're collectively missing some important context: What are you _doing_ in your 16.6ms timeslice? I know _I'd_ appreciate a real example of what you're dealing with without any hyperbole. What actually _must_ be done in that timeframe? Why must collection run inside that window? What must be collected when it runs in that situation? (Serious questions.) I'll try to guess: if you want something running at 60 frames per second, 16.6ms is the time you have to do everything between frames. This means that in that timeframe you have to:
-update your game state.
-possibly process all network I/O.
-prepare the rendering pipeline for the next frame.
Updating the game state can imply making computations on lots of stuff: physics, animations, creation and deletion of entities and particles, AI logic... pick your poison. At every frame you will have a handful of objects being destroyed and a few resources that might go forgotten. One frame would probably only need very few objects collected. But given some time, the amount of junk can grow out of control easily. Your code will end up stuttering at some point (because of random collections at random times), and this can be really bad.
Re: More radical ideas about gc and reference counting
On 6.5.2014. 20:10, Walter Bright wrote: On 5/6/2014 10:47 AM, Manu via Digitalmars-d wrote: On 7 May 2014 01:46, Andrei Alexandrescu via Digitalmars-d I'm not even sure what the process is... if I go through and LGTM a bunch of pulls, does someone accept my judgement and click the merge button? You can see why I might not feel qualified to do such a thing? You don't need to be qualified (although you certainly are) to review PRs. The process is that anyone can review/comment on them. Non-language-changing PRs can be pulled by anyone on Team DMD. Language-changing PRs need to be approved by Andrei and me. Team DMD consists of people who have a consistent history of doing solid work reviewing PRs. Interesting. This really needs to be pointed out on the site.
Re: -nofloat flag = should we destroy it?
The -nofloat flag is probably important for OS kernel and device driver developers. The kernel of an operating system will usually not save the floating point registers during a context switch (to the kernel). For this reason, it's important that the compiler can guarantee never to use floating point numbers or the registers. Removing such a flag may prevent the compiler from being used to write things like Linux device drivers. I know this is usually done in C, but there might be an OS in D one day. On 23/04/2014 10:22 AM, Mike via Digitalmars-d wrote: On Tuesday, 22 April 2014 at 23:57:49 UTC, Andrej Mitrovic wrote: See: https://issues.dlang.org/show_bug.cgi?id=8196 Are there any D platforms where -nofloat is useful? If we're not getting rid of it then it needs to be documented (the above issue). Well, I couldn't find any documentation on what this means, so I can't really say. Does it disable floating point usage completely, or does it force software emulation? There are a couple of people in this community interested in bringing D to 32-bit microcontrollers. Most of the 32-bit ARM Cortex microcontrollers don't have an FPU. For the few that do, here are the attributes in GCC (http://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html) needed to specify the configuration: -mfloat-abi=name -mfpu=name -mfp16-format=name But these are target specific. Since DMD doesn't support any ARM platform, I suspect this is irrelevant, but there you have it anyway. Mike
Re: Any chance to avoid monitor field in my class?
Yuriy: but I like D, and i strongly believe it's the next big language. Oh, good. Do you want to briefly explain why? :) Bye, bearophile
Re: Any chance to avoid monitor field in my class?
Imho, offtop, also I'm a C++/Obj-C guy and that might partially explain my preferences, but here are some more reasons: 1. I like the concept of CT-reflection and CTFE a lot. This makes metaprogramming extremely powerful without any RT overheads. It brings a lot more control over what goes to RT. I guess D still needs to shrink its runtime a bit more, and __monitor is just another example of that. 2. It's extremely easy for C++/C#/Java/Obj-C developers to switch to D without losing any bit of their productivity, but gaining lots of possibilities that can be used in the future. And C++/C#/Java/Obj-C is the majority of the world now. Even PHP developers should think of D one day =). 3. That's the most arguable, but D's syntax and semantics look much cleaner and more uniform to me than Rust's.
Avoiding __traits(getAttributes, ...) on alias
I've been playing with UDAs a bit and I wanted to find all variables with a particular attribute in various modules. I thought I had it cracked, until I added a module that contains an alias declaration, which makes it choke when trying to execute __traits(getAttributes, ...). A small example is shown below. Is there any conditional I can insert between the two foreach lines to make it detect such an alias declaration and move on to the next derived member? Or should getAttributes handle this by just returning no attributes?

import std.traits;

@(testattr) int foo;
alias char[256] MyChar;
@(testattr) int bar;

void main()
{
    foreach (e; __traits(derivedMembers, mixin(__MODULE__)))
    {
        foreach (t; __traits(getAttributes, mixin(e)))
        {
            pragma(msg, t);
        }
    }
}

// testattr
// test.d(9): Error: first argument is not a symbol
// test.d(9): Error: invalid foreach aggregate false
// testattr

Any hints would be appreciated! Kind regards, Stefan Frijters
Re: Avoiding __traits(getAttributes, ...) on alias
On Friday, 9 May 2014 at 12:19:12 UTC, John Colvin wrote: On Friday, 9 May 2014 at 11:53:59 UTC, Stefan Frijters wrote: I've been playing with UDAs a bit and I wanted to find all variables with a particular attribute in various modules. I thought I had it cracked, until I added a module that contains an alias declaration, which makes it choke when trying to execute __traits(getAttributes, ...). A small example is shown below. Is there any conditional I can insert between the two foreach lines to make it detect such an alias declaration and move on to the next derived member? Or should getAttributes handle this by just returning no attributes?

import std.traits;

@(testattr) int foo;
alias char[256] MyChar;
@(testattr) int bar;

void main()
{
    foreach (e; __traits(derivedMembers, mixin(__MODULE__)))
    {
        foreach (t; __traits(getAttributes, mixin(e)))
        {
            pragma(msg, t);
        }
    }
}

// testattr
// test.d(9): Error: first argument is not a symbol
// test.d(9): Error: invalid foreach aggregate false
// testattr

Any hints would be appreciated! Kind regards, Stefan Frijters

You could always do a static if with __traits(compiles, __traits(getAttributes, mixin(e)))

Thank you for the fast reply; this solves my problem. I actually tried this before, but in my actual code instead of the example, where I'm deep into backticks and quotes and escaped quotes, so I probably made a mistake there...
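Putting John's suggestion together, a sketch of the guarded loop. Note the `enum testattr;` declaration is an assumption here, since the original example elides how testattr is defined:

```d
// Guard the getAttributes call with __traits(compiles, ...) so plain
// aliases like MyChar (not symbols with attributes) are skipped.
enum testattr;  // assumed attribute definition (not in the original post)

@(testattr) int foo;
alias char[256] MyChar;
@(testattr) int bar;

void main()
{
    // foreach over a compile-time tuple is unrolled, so static if works here
    foreach (e; __traits(derivedMembers, mixin(__MODULE__)))
    {
        static if (__traits(compiles, __traits(getAttributes, mixin(e))))
        {
            foreach (t; __traits(getAttributes, mixin(e)))
                pragma(msg, t);  // prints testattr twice, skips MyChar
        }
    }
}
```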
Re: Down the VisualD0.3.38-1.exe ,found virus!
Trend Micro and Comodo have (from my limited experience) been pretty good about dealing with false positives, so does anyone want to inform them and the others as well? On 5/8/14, sigod via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: On Friday, 9 May 2014 at 01:02:39 UTC, FrankLike wrote: Hi everyone, I downloaded VisualD from http://rainers.github.io/visuald/visuald/StartPage.html and found the virus: Win32.Troj.Undef.(kcloud). Why? Frank https://www.virustotal.com/en/file/bbd76ddb41a80f0526f6cf1e37a2db2736cfa8f29ed3f5fd7a4336bf4c8bbe43/analysis/ Just 5 of 52. Probably a false alarm.
sort struct of arrays
If you have an array of structs, such as...

struct Foo { int x; int y; }
Foo[] foos;

...and you wanted to sort the foos, then you'd do something like...

foos.sort!((a, b) => a.x < b.x);

...and, of course, both of the fields x and y get sorted together. If you have a so-called struct of arrays, or an equivalent situation, such as...

int[] fooX;
int[] fooY;

...is there a simple way to sort fooX and fooY together/coherently (keyed on, say, fooX), using the standard lib?
Re: sort struct of arrays
On Friday, 9 May 2014 at 14:23:41 UTC, Luís Marques wrote: If you have an array of structs, such as... struct Foo { int x; int y; } Foo[] foos; ...and you wanted to sort the foos then you'd do something like... foos.sort!((a, b) => a.x < b.x); ...and, of course, both of the fields x and y get sorted together. If you have a so-called struct of arrays, or an equivalent situation, such as... int[] fooX; int[] fooY; ...is there a simple way to sort fooX and fooY together/coherently (keyed on, say, fooX), using the standard lib?

std.range.zip(fooX, fooY).sort!((a, b) => a[0] < b[0]);

I wasn't sure if that's supposed to work. Turns out the documentation on zip [1] has this exact use case as an example. [1] http://dlang.org/phobos/std_range.html#zip
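For completeness, a runnable sketch of the zip approach (sample data is made up): sorting the zipped range swaps corresponding elements of both arrays together, so fooY stays coherent with fooX.

```d
import std.range : zip;
import std.algorithm : sort;

void main()
{
    int[] fooX = [3, 1, 2];
    int[] fooY = [30, 10, 20];

    // Sorting the zip of the two arrays keys on fooX (element [0])
    // and moves the paired fooY elements (element [1]) along with it.
    zip(fooX, fooY).sort!((a, b) => a[0] < b[0]);

    assert(fooX == [1, 2, 3]);
    assert(fooY == [10, 20, 30]);
}
```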
Re: Any chance to avoid monitor field in my class?
One thing I hate about C# (which is what I use professionally) is the sync block index in every single class instance. Why not have the developer decide when he needs a Monitor and manually use it?! I am disappointed D took the same route.
Re: sort struct of arrays
On Friday, 9 May 2014 at 14:48:50 UTC, anonymous wrote: std.range.zip(fooX, fooY).sort!((a, b) => a[0] < b[0]); I wasn't sure if that's supposed to work. Turns out the documentation on zip [1] has this exact use case as an example. [1] http://dlang.org/phobos/std_range.html#zip Ha! Awesome! Sorry that I missed that example.
Re: sort struct of arrays
On Friday, 9 May 2014 at 14:23:41 UTC, Luís Marques wrote: If you have an array of structs, such as... struct Foo { int x; int y; } Foo[] foos; ...and you wanted to sort the foos then you'd do something like... foos.sort!((a, b) => a.x < b.x); ...and, of course, both of the fields x and y get sorted together. If you have a so-called struct of arrays, or an equivalent situation, such as... int[] fooX; int[] fooY; ...is there a simple way to sort fooX and fooY together/coherently (keyed on, say, fooX), using the standard lib?

For some situations (expensive/impossible to move/copy elements of fooY), you would be best off with this:

auto indices = zip(iota(fooX.length).array, fooX).sort!"a[1] < b[1]".map!"a[0]";
auto sortedFooY = fooY.indexed(indices);

bearing in mind that this causes an allocation for the index, but if you really can't move the elements of fooY (and fooX isn't already indices into fooY) then you don't have much of a choice.
Re: Any chance to avoid monitor field in my class?
On Friday, 9 May 2014 at 14:56:21 UTC, flamencofantasy wrote: One thing I hate about C# (which is what I use professionally) is the sync block index in every single class instance. Why not have the developer decide when he needs a Monitor and manually use it?! I am disappointed D took the same route. If it can be changed without breaking existing code, you might be able to convince people to make it somehow optional or elided when unnecessary.
Re: Any chance to avoid monitor field in my class?
flamencofantasy, thanx for that! Where do we vote here? =)
Re: sort struct of arrays
On Friday, 9 May 2014 at 15:52:51 UTC, John Colvin wrote: On Friday, 9 May 2014 at 14:23:41 UTC, Luís Marques wrote: If you have an array of structs, such as... struct Foo { int x; int y; } Foo[] foos; ...and you wanted to sort the foos then you'd do something like... foos.sort!((a, b) => a.x < b.x); ...and, of course, both of the fields x and y get sorted together. If you have a so-called struct of arrays, or an equivalent situation, such as... int[] fooX; int[] fooY; ...is there a simple way to sort fooX and fooY together/coherently (keyed on, say, fooX), using the standard lib? For some situations (expensive/impossible to move/copy elements of fooY), you would be best off with this: auto indices = zip(iota(fooX.length).array, fooX).sort!"a[1] < b[1]".map!"a[0]"; auto sortedFooY = fooY.indexed(indices); bearing in mind that this causes an allocation for the index, but if you really can't move the elements of fooY (and fooX isn't already indices into fooY) then you don't have much of a choice. It's probably better to use makeIndex: http://dlang.org/phobos/std_algorithm.html#makeIndex
Re: sort struct of arrays
On Friday, 9 May 2014 at 16:26:22 UTC, Rene Zwanenburg wrote: On Friday, 9 May 2014 at 15:52:51 UTC, John Colvin wrote: On Friday, 9 May 2014 at 14:23:41 UTC, Luís Marques wrote: If you have an array of structs, such as... struct Foo { int x; int y; } Foo[] foos; ...and you wanted to sort the foos then you'd do something like... foos.sort!((a, b) => a.x < b.x); ...and, of course, both of the fields x and y get sorted together. If you have a so-called struct of arrays, or an equivalent situation, such as... int[] fooX; int[] fooY; ...is there a simple way to sort fooX and fooY together/coherently (keyed on, say, fooX), using the standard lib? For some situations (expensive/impossible to move/copy elements of fooY), you would be best off with this: auto indices = zip(iota(fooX.length).array, fooX).sort!"a[1] < b[1]".map!"a[0]"; auto sortedFooY = fooY.indexed(indices); bearing in mind that this causes an allocation for the index, but if you really can't move the elements of fooY (and fooX isn't already indices into fooY) then you don't have much of a choice. It's probably better to use makeIndex: http://dlang.org/phobos/std_algorithm.html#makeIndex good call, I didn't realise that existed.
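A sketch of the makeIndex variant Rene points to, assuming an integral index array and std.range.indexed (sample data is made up): the index is built once over fooX, then both arrays can be viewed in sorted order without moving any elements.

```d
import std.algorithm : makeIndex;
import std.range : indexed;
import std.array : array;

void main()
{
    int[] fooX = [3, 1, 2];
    string[] fooY = ["c", "a", "b"];

    // Build a sorted index over fooX; fooX and fooY are left untouched.
    auto index = new size_t[fooX.length];
    makeIndex!((a, b) => a < b)(fooX, index);

    // View both arrays through the same index.
    assert(fooX.indexed(index).array == [1, 2, 3]);
    assert(fooY.indexed(index).array == ["a", "b", "c"]);
}
```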
core.sync.rwmutex example
The example code from core.sync.rwmutex seems bugged. After copying it I added an import for core.sync.rwmutex, and moved the executions of runTest into... well:

void main()
{
    runTest(ReadWriteMutex.Policy.PREFER_READERS);
    runTest(ReadWriteMutex.Policy.PREFER_WRITERS);
}

Then I tried to compile it. I got the following error messages:

test3.d(36): Error: class core.sync.rwmutex.ReadWriteMutex member m_commonMutex is not accessible
test3.d(38): Error: class core.sync.rwmutex.ReadWriteMutex member m_numQueuedReaders is not accessible
test3.d(39): Error: class core.sync.rwmutex.ReadWriteMutex member m_numQueuedWriters is not accessible

Checking out the documentation, I don't see that they SHOULD be accessible, so I think the compiler's correct, and the example is wrong. P.S.: Does anyone have a good working example of rwmutex? I'm trying to build a hash table that is write-accessible from one thread and readable from anywhere, and this looked like the best way to do it... except that when I start to figure out how to use it I get errors. Also, does anyone have any examples of two-directional communication between two particular threads (a bit more than just yes/no) in the presence of multiple other threads, so that when a thread asks another for information, only that other thread is allowed to reply? Perhaps that's a better way to implement the shared-read hash table. (I'd like to use std.concurrency, but I can't figure out how one is supposed to manage specific inter-thread communications.) -- Charles Hixson
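In case it helps with the P.S.: a minimal sketch that sticks to ReadWriteMutex's public API, taking reader/writer locks via synchronized blocks on the .reader and .writer proxies. The table and thread structure are made up for illustration:

```d
import core.sync.rwmutex;
import core.thread;

// Shared state guarded by the ReadWriteMutex below.
__gshared int[string] table;
__gshared ReadWriteMutex lock;

void readerFn()
{
    synchronized (lock.reader)          // many readers may hold this at once
    {
        auto p = "answer" in table;
        assert(p is null || *p == 42);  // either not written yet, or 42
    }
}

void writerFn()
{
    synchronized (lock.writer)          // writers get exclusive access
        table["answer"] = 42;
}

void main()
{
    lock = new ReadWriteMutex(ReadWriteMutex.Policy.PREFER_WRITERS);
    auto w = new Thread(&writerFn);
    auto r = new Thread(&readerFn);
    w.start(); r.start();
    w.join(); r.join();
    assert(table["answer"] == 42);
}
```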
Re: CMake for D
On Monday, 24 March 2014 at 23:55:14 UTC, Dragos Carp wrote: I moved cmaked2 to github [1], updated and simplified the usage a little (system cmake patch not necessary anymore). You can give it a try. Dub registry support is also on the way. Dragos What is the best way to specify a mixin include directory for dmd under CMake? I can arbitrarily add a -J option to the compiler flags, but should the cmake-d module include a variable that should be set? -- Chris
why can't I call const methods on shared objects?
Error: non-shared const method is not callable using a shared mutable object Why not? If the method is const, it can't modify the object anyway.
Re: why can't I call const methods on shared objects?
On Fri, 09 May 2014 17:37:35 -0400, Vlad Levenfeld vlevenf...@gmail.com wrote: Error: non-shared const method is not callable using a shared mutable object Why not? If the method is const, it can't modify the object anyway. Non-shared methods cannot be called on shared objects. Otherwise, you could have unintended race conditions. -Steve
Re: CMake for D
On Friday, 9 May 2014 at 21:11:54 UTC, Chris Piker wrote: On Monday, 24 March 2014 at 23:55:14 UTC, Dragos Carp wrote: I moved cmaked2 to github [1], updated and simplified the usage a little (system cmake patch not necessary anymore). You can give it a try. Dub registry support is also on the way. Dragos What is the best way to specify a mixin include directory for dmd under CMake? I can arbitarily add a -J option to the compiler flags but should the cmake-d module include a variable that should be set? -- Chris The way I've tackled that in my (still work-in-progress) CMake fork[1] is to add an `include_directories(TEXT ...)` signature. Unfortunately, you'll need to build my CMake from source, though that isn't difficult. Also, while I am hopeful about getting my changes merged upstream, there is no guarantee of that, so proceed with caution. Do note that my CMake work is independent of CMakeD2 and its forks. See my project wiki for more info. - Trent [1] https://github.com/trentforkert/cmake
Re: why can't I call const methods on shared objects?
Is this still the case if the method is const or pure?
Re: why can't I call const methods on shared objects?
Is there any way to declare a method as safe regardless of shared/mutability/etc (or some other way to avoid cast(Type)object.property every time I want to check a property which won't affect any state)?
Re: core.sync.rwmutex example
Hi Charles, would the following work (just a shot in the dark)?

//---
module test;

import std.stdio;
import std.concurrency;

void spawnedFuncFoo(Tid tid, Tid tidBar)
{
    receive(
        (int i)
        {
            writeln("Foo Received the number ", i);
            send(tidBar, i, thisTid);
            auto barSuccessful = receiveOnly!(string);
            writeln("Bar got my (Foo) message");
        }
    );
    send(tid, true);
}

void spawnedFuncBar(Tid tid)
{
    receive(
        (int i, Tid tidFoo)
        {
            writeln("Foo passed me (Bar) the number ", i);
            send(tidFoo, "done");
        }
    );
    receive(
        (string sig)
        {
            writeln("Main says I'm (Bar) done.");
            send(tid, 42);
        }
    );
}

void main()
{
    auto tidBar = spawn(&spawnedFuncBar, thisTid);
    auto tidFoo = spawn(&spawnedFuncFoo, thisTid, tidBar);
    send(tidFoo, 42);
    auto fooWasSuccessful = receiveOnly!(bool);
    assert(fooWasSuccessful);
    send(tidBar, "your done");
    auto barWasSuccessful = receiveOnly!(int);
    assert(barWasSuccessful == 42);
    writeln("Successfully had two separate threads communicate with each other");
}
//---
Re: why can't I call const methods on shared objects?
I mean I thought that's what pure was for but the compiler complains all the same.
Re: why can't I call const methods on shared objects?
On Fri, 09 May 2014 17:45:37 -0400, Vlad Levenfeld vlevenf...@gmail.com wrote: Is there any way to declare a method as safe regardless of shared/mutability/etc (or some other way to avoid cast(Type)object.property every time I want to check a property which won't affect any state)? Not really for shared. For everything else, there's const for value properties, and inout for reference properties. Shared is quite different, because the method has to be cognizant of race conditions. It has to be implemented differently. Casting away shared is somewhat dangerous, but OK if you logically know there is no race condition. -Steve
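Steven's point can be sketched like this: the compiler rejects the non-shared call outright, and the escape hatch is an explicit cast that the programmer must vouch for. (The Counter class is invented for the example.)

```d
class Counter
{
    private int value;
    int get() const { return value; }   // non-shared const method
}

void main()
{
    shared Counter c = cast(shared) new Counter;

    // "non-shared method is not callable using a shared object":
    static assert(!__traits(compiles, c.get()));

    // Explicitly strip shared for the call - the programmer is vouching
    // that no race is possible at this point (e.g. under a lock, or
    // before any other thread can see the object).
    auto snapshot = (cast(const Counter) c).get();
    assert(snapshot == 0);
}
```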
Re: CMake for D
On Friday, 9 May 2014 at 21:43:04 UTC, Trent Forkert wrote: The way I've tackled that in my (still work-in-progress) CMake fork[1] is to add an `include_directories(TEXT ...)` signature. I like that, it seems clean. Unfortunately, you'll need to build my CMake from source, though that isn't difficult. Also, while I am hopeful about The standard CMake build instructions just don't work on the CentOS 6.5 systems we are currently using at work. I gave up after 2 hours. Since CMakeD2 is working for me I'm sticking with it for now but... getting my changes merged upstream, there is no guarantee of that, so proceed with caution. ... it would be great to have better D support in CMake, so I really hope your changes do get committed upstream. I plan on using D somewhat quietly for about a year and, after I've worked out the gotchas, start talking to my programming friends. An upstream merge would get used. Do note that my CMake work is independent of CMakeD2 and its forks. See my project wiki for more info. [1] https://github.com/trentforkert/cmake Thanks, I'll check it out. -- Chris
Re: Improving IO Speed
Try this:

import std.mmfile;

scope mmFile = new MmFile("T201212A.IDX");
TaqIdx* arr = cast(TaqIdx*) mmFile[0 .. mmFile.length].ptr;
for (ulong i = 0; i < mmFile.length / TaqIdx.sizeof; ++i)
{
    // do something...
    writeln(arr[i].symbol);
}

On Friday, 14 March 2014 at 18:00:58 UTC, TJB wrote: I have a program in C++ that I am translating to D as a way to investigate and learn D. The program is used to process potentially hundreds of TB's of financial transactions data, so it is crucial that it be performant. Right now the C++ version is orders of magnitude faster. Here is a simple example of what I am doing in D:

import std.stdio : writefln;
import std.stream;

align(1) struct TaqIdx
{
    align(1) char[10] symbol;
    align(1) int tdate;
    align(1) int begrec;
    align(1) int endrec;
}

void main()
{
    auto input = new File("T201212A.IDX");
    TaqIdx tmp;
    int count;
    while (!input.eof())
    {
        input.readExact(&tmp, TaqIdx.sizeof);
        // Do something with the data
    }
}

Do you have any suggestions for improving the speed in this situation? Thank you! TJB
Re: why can't I call const methods on shared objects?
Let me see if I understand this right... let's say I have some (unshared) class that launches threads to do its real work in.

class Foo
{
    this () { thread = spawn (&work); }
    shared void work () { ... }
    void send_message (T) (T msg) { thread.send (msg); }
    Tid thread;
}

It has an unshared method to pass messages to the worker thread, and everything is copacetic, but what is the best practice for when I want another thread to interact with one of these instances?

class Bar
{
    this () { thread = spawn (&pass); }
    shared void pass ()
    {
        receive (
            (shared Foo F) { this.F = F; },                 // (1)
            (T msg) { (cast(Foo) F).send_message (msg); },  // (2)
        );
    }
    void send_message (T) (T msg) { thread.send (msg); }
    Tid thread;
    Foo F;
}

void main ()
{
    auto F = new Foo;
    auto B = new Bar;
    B.send_message (cast(shared) F);  // (3)
    // sleep
}

When I interact with F at (1), this.F is automatically shared (because Bar is shared), so this compiles. Here, this.F refers to the same F I would refer to if I accessed this.F from the main method; it just gets automatically shared whenever Bar is shared. So when I spawn a worker thread, the worker thread sees a shared Bar (and shared F), while the main thread sees a Bar that it owns, but they are both looking at the same Bar (just with different access rules, like a const and non-const reference to the same object). At (2), I have to cast F to unshared in order to send it a message. But this is ok, because I am only sending messages, and am not risking a race condition. Basically I am announcing I know that multiple threads can see this, but I am taking ownership of it and taking responsibility for preventing races. And when I cast to shared at (3), I am announcing I am explicitly recognizing that multiple threads may now access this, and that there is a potential for races. Is this the right way to handle this situation?
And on a more general note: the shared keyword, as I understand it, is intended to be a sort of compiler-reinforced tag that I can use to prevent races, the same way const helps prevent unintended side-effects, and to aid my reasoning when I do need to debug a race?