Re: Report generator for D
I found this: http://xlslib.sourceforge.net/. However, it hasn't been updated in about two and a half years.
Re: Report generator for D
On Tuesday, 6 May 2014 at 06:36:40 UTC, Rikki Cattermole wrote:
> On Tuesday, 6 May 2014 at 06:28:20 UTC, simendsjo wrote:
>> On 05/06/2014 08:08 AM, Rikki Cattermole wrote:
>>> On Tuesday, 6 May 2014 at 04:34:26 UTC, Sergey wrote:
>>>> Please, help... I want to use D (Vibe.d) to create a web client to access the database of medical institutions. Tell me, please, what about the reports (report generator)? I need to upload reports in DOC and XLS. Thanks in advance. Regards, Sergey
>>>
>>> I'm afraid I don't believe there to be any libraries for dealing with those formats. But via a quick google search I did find a (paid) library that you could easily bind to via extern(C) [0]. I also found another library [1] for word documents and supposedly excel. I don't know what the quality is for either. Also if you want to do a shared library binding instead of static you may want to check out derelict-util.
>>> [0] http://www.libxl.com/
>>> [1] http://libopc.codeplex.com/
>>
>> There is also using COM directly, which I've used previously. It's quite tedious, but it works. For D1 there's also Juno: http://www.dsource.org/projects/juno Not sure if someone has updated it for D2.
>
> It has been according to its description [0]. But it looks like it hasn't been updated in over a year. Also Windows only, which may not be what Sergey is wanting.
> [0] https://github.com/JesseKPhillips/Juno-Windows-Class-Library

Need a library for Linux.
Re: More radical ideas about gc and reference counting
On 6 May 2014 16:28, Jacob Carlborg via Digitalmars-d wrote:
> On 06/05/14 05:51, HaraldZealot wrote:
>> Manu, can you direct me to what ARC is? This abbreviation is very hard to google.
>
> Automatic Reference Counting. Like regular RC, but the compiler automatically inserts calls to release/free.

And further, more importantly, it automatically *eliminates* redundant calls to add/dec ref, which I think has much greater potential in D's type system than in Obj-C.
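[Editor's illustration] What "compiler-inserted calls" and "eliminated redundant calls" mean can be sketched with a toy intrusive refcount; this is C++ for concreteness, with invented names (`retain`, `release`, `useNaive`), not actual ARC compiler output:

```cpp
#include <cassert>

// Toy object with an intrusive reference count, standing in for what an
// ARC compiler manages implicitly behind the scenes.
struct Object {
    int refs = 1;          // starts with one reference, held by the creator
};

void retain(Object* o)  { ++o->refs; }
bool release(Object* o) { return --o->refs == 0; }  // true => would be freed

// A naive ARC compiler emits retain on entry and release on exit for a
// borrowed parameter, so the count briefly rises to 2 inside the call.
int useNaive(Object* o) {
    retain(o);             // compiler-inserted
    int r = o->refs;       // observes the temporarily bumped count (2)
    release(o);            // compiler-inserted
    return r;
}

// An optimizing ARC compiler proves the caller's reference outlives the
// call, elides the retain/release pair, and touches the count not at all.
int useOptimized(Object* o) {
    return o->refs;        // no refcount traffic
}
```

The elision matters because every retain/release is an atomic read-modify-write in real implementations; removing provably-balanced pairs is where most of the performance comes from.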
Re: Report generator for D
On Tuesday, 6 May 2014 at 06:28:20 UTC, simendsjo wrote:
> On 05/06/2014 08:08 AM, Rikki Cattermole wrote:
>> On Tuesday, 6 May 2014 at 04:34:26 UTC, Sergey wrote:
>>> Please, help... I want to use D (Vibe.d) to create a web client to access the database of medical institutions. Tell me, please, what about the reports (report generator)? I need to upload reports in DOC and XLS. Thanks in advance. Regards, Sergey
>>
>> I'm afraid I don't believe there to be any libraries for dealing with those formats. But via a quick google search I did find a (paid) library that you could easily bind to via extern(C) [0]. I also found another library [1] for word documents and supposedly excel. I don't know what the quality is for either. Also if you want to do a shared library binding instead of static you may want to check out derelict-util.
>> [0] http://www.libxl.com/
>> [1] http://libopc.codeplex.com/
>
> There is also using COM directly, which I've used previously. It's quite tedious, but it works. For D1 there's also Juno: http://www.dsource.org/projects/juno Not sure if someone has updated it for D2.

It has been according to its description [0]. But it looks like it hasn't been updated in over a year. Also Windows only, which may not be what Sergey is wanting.

[0] https://github.com/JesseKPhillips/Juno-Windows-Class-Library
Re: More radical ideas about gc and reference counting
On Tuesday, 6 May 2014 at 06:07:41 UTC, HaraldZealot wrote:
> I notice that I see only part of the problem. Can anybody link or describe for me the complete state and problems of current garbage collection and other resource management? It would help me find an existing solution (at least a theoretical one).

A precise scanning GC is the only robust general solution. RC with weak pointers can only account for a subset of all possible models. But I agree with you: the language should be redesigned to have a GC-friendly set of D constructs, where FFI is followed by programmer-guaranteed postconditions (specified by library authors). In other words, the @nogc approach is not sufficient; a @gc approach is needed.
Re: More radical ideas about gc and reference counting
On 6 May 2014 14:09, Andrei Alexandrescu via Digitalmars-d wrote:
> On 5/5/14, 8:19 PM, Manu via Digitalmars-d wrote:
>> On 5 May 2014 14:09, Andrei Alexandrescu via Digitalmars-d wrote:
>>> This is nice, but on the face of it it's just this: an idea on how other people should do things on their free time. I'd have difficulty convincing people they should work that way. The kind of ideas that I noticed are successful are those that actually carry the work through and serve as good examples to follow.
>>
>> There's imperfect but useful pull requests hanging around for years, extern(Obj-C) for instance, which may be useful as an experimental feature to many users, even if it's not ready for inclusion in the official feature list and support. I suspect its (experimental) presence would stimulate further contribution towards D on iOS, for instance; it may be an enabler for other potential contributors.
>
> So it would be nice if you reviewed that code.

I don't really know anything about it... and that's not the point. I'm just suggesting by my prior email that some steps, like creating an experimental space with a lower barrier to entry, might encourage growth in the number of overall contributors, which I think was the basic flavour of the emails leading up to it.

>> What about AST macros? It seems to me that this is never going to be explored and there are competing proposals, but I wonder if there's room for experimental implementations that anyone in the community can toy with?
>
> There would of course be room as long as there'd be one or more champions for it. Would that be something you'd be interested in?

I have no horse in that race, but I see it come up all the time, and it is something I am passively interested in.
There's at least one DIP which received little attention, afaict; it's an example of something that I think would probably manifest into code in an experimental space, but clearly couldn't be accepted as a language feature without lots of field time. In lieu of an experimental space, there will be no action.

It's an interesting example, actually. I think lots of people feel the DIP isn't really an effective solution, but nobody has the motivation or ideas to refine it. The DIP author clearly has no motivation to test it experimentally, but perhaps that's what it needs to progress? The DIP's shortcomings might be discovered by experimental users in the field? It's hard to know, but it's an example of the sort of thing that may have a stifling effect on progress and contribution.

>> UDA's are super-useful, but they're still lacking the thing to really set them off, which is the ability to introduce additional boilerplate code at the site of the attribute.
>
> Interesting. Have you worked on a related proposal?

Not really; I've initiated numerous discussions which always seem to end at AST macros. The only other semi-reasonable idea I've had is the concept that tagging mixin templates as UDA's might be a practical angle, but it doesn't really make clean sense, creates a syntactic special case, and also doesn't seem powerful enough, so I'm not left with any solid proposal that I can imagine within the current language framework. There are presently bigger issues that keep me awake at night.

>> I reckon there's a good chance that creating a proper platform for experimental features would also have an advantage for community building and increase contribution in general. If new contributors can get in, have some fun, and start trying their ideas while also being able to share them with the community for feedback without fear they'll just be shot down and denied after all their work... are they not more likely to actually make a contribution in the first place?
> I'd say so, but we'd need initiative and quite a bit of work for such a platform. Would you be interested?

Well, in Phobos, just approve 'exp', which has been raised countless times. I've got contributions that should be in exp, but instead they're in limbo, and I've lost momentum and motivation, since their completion is blocked by other issues and I'm receiving no feedback from field testing.

What happened to std.serialization? There was motion there a year or so back... I was looking forward to it, and did some minor reviewing at the time. I wonder if that's an interesting case study? (I haven't looked.)

In the compiler... I may be interested, but I don't have any such compiler feature in mind to motivate the effort. I have no idea what an experimental feature platform should look like in the compiler, and if it were to exist, I have no such feature in mind to make use of it, but I have raised examples of others that have.

>> Once they've made a single contribution of any sort, are they then more likely to continue making other contributions in the future (having now taken the time to acclimatise themselves with the codebase)?
Re: More radical ideas about gc and reference counting
On 06/05/14 08:07, HaraldZealot wrote:
> I notice that I see only part of the problem. Can anybody link or describe for me the complete state and problems of current garbage collection and other resource management? It would help me find an existing solution (at least a theoretical one).

The major issue with the garbage collector is that it's not guaranteed to run a collection. When a collection is run, the GC will call the destructors for the objects it collects. If there's no guarantee a collection is run, there can be no guarantee that destructors are called. A collection is usually run when allocating new memory and there's not enough memory available.

-- 
/Jacob Carlborg
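[Editor's illustration] The "no collection, no destructors" point can be made concrete with a toy tracing heap; this is an illustrative C++ sketch with invented names (`ToyHeap`, `collect`), nothing like a real GC, but it mirrors the semantics Jacob describes: finalizers fire inside `collect()` and nowhere else.

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// Minimal stand-in for a GC heap: an object's finalizer runs only when
// collect() actually executes. If the program never triggers a
// collection, an unreachable object is simply never finalized.
struct ToyHeap {
    struct Cell {
        bool reachable;
        std::function<void()> finalizer;
        bool finalized = false;
    };
    std::vector<Cell> cells;

    int allocate(std::function<void()> fin) {
        cells.push_back({true, std::move(fin), false});
        return int(cells.size()) - 1;
    }

    void drop(int i) { cells[i].reachable = false; }  // last reference dies

    // Destructors are called here and only here, exactly once per object.
    void collect() {
        for (auto& c : cells)
            if (!c.reachable && !c.finalized) {
                c.finalizer();
                c.finalized = true;
            }
    }
};
```

In the real GC, `collect()` corresponds to the collection that "is usually run when allocating new memory and there's not enough memory available", which is why a program that never allocates again may never run pending destructors.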
Porting DMD compiler to haiku OS
Good day. Sorry for my bad English. I'm trying to build the DMD compiler on Haiku OS. At compile time I get the following error: http://itmages.ru/image/view/1652327/b501e37b What could be the reason? Thanks.
Re: Report generator for D
On 05/06/2014 08:08 AM, Rikki Cattermole wrote:
> On Tuesday, 6 May 2014 at 04:34:26 UTC, Sergey wrote:
>> Please, help...
>>
>> I want to use D (Vibe.d) to create a web client to access the database of medical institutions. Tell me, please, what about the reports (report generator)? I need to upload reports in DOC and XLS.
>>
>> Thanks in advance.
>>
>> Regards, Sergey
>
> I'm afraid I don't believe there to be any libraries for dealing with those formats. But via a quick google search I did find a (paid) library that you could easily bind to via extern(C) [0]. I also found another library [1] for word documents and supposedly excel.
>
> I don't know what the quality is for either. Also if you want to do a shared library binding instead of static you may want to check out derelict-util.
>
> [0] http://www.libxl.com/
> [1] http://libopc.codeplex.com/

There is also using COM directly, which I've used previously. It's quite tedious, but it works. For D1 there's also Juno: http://www.dsource.org/projects/juno Not sure if someone has updated it for D2.
Re: More radical ideas about gc and reference counting
On 06/05/14 05:51, HaraldZealot wrote:
> Manu, can you direct me to what ARC is? This abbreviation is very hard to google.

Automatic Reference Counting. Like regular RC, but the compiler automatically inserts calls to release/free.

-- 
/Jacob Carlborg
Re: FYI - mo' work on std.allocator
On Mon, 05 May 2014 21:13:10 +0400, Dmitry Olshansky wrote:
> 05-May-2014 20:57, Marco Leise wrote:
>> That sounds like a more complicated topic than anything I had in mind. I think a »std.virtualmemory« module should already implement all the primitives in a portable form, so we don't have to do that again for the next use case. Since cross-platform code is always hard to get right, it could also avoid latent bugs.
>
> I had an idea of core.vmm. It didn't survive the last review though, plus I never got around to testing OSes aside from Windows & Linux. Comments on the initial design are welcome.
> https://github.com/D-Programming-Language/druntime/pull/653

That's exactly what I had in mind and more. :) These are all free functions that can be used as building blocks for more specific objects. Was there a dedicated review thread on the newsgroup? All I could find was a discussion about why not to use a VMM struct with static functions as a namespace replacement.

-- 
Marco
Re: A new trait to retrieve doc comments (if available).
On 06/05/14 02:49, Mason McGill wrote:
> **I'm fairly new to D, so let me know if this belongs in another thread.**
>
> I'd like to contribute a new feature to the DMD front-end, and I'd appreciate some feedback on the design before I start on a pull request.
>
> Feature: `__traits(comment, symbol)` will evaluate to the doc-comment of `symbol`, if it is available, and "" otherwise. For DMD, this means it will provide comment information if the "-D" compiler option is used. Other implementations can choose to always evaluate it to "".
>
> Use Cases: Here's my use case: I'm building an automatic wrapper generator for binding D to dynamic languages (mostly for scientific applications, at the moment). It's like SWIG, but more automated and narrower in scope. Right now, I have two suboptimal options for supporting documentation comments:

I have thought about it a couple of times before. I would say, just go for it. __traits is a pretty good starting point for someone not familiar with the DMD source.

-- 
/Jacob Carlborg
Re: Report generator for D
On Tuesday, 6 May 2014 at 04:34:26 UTC, Sergey wrote:
> Please, help...
>
> I want to use D (Vibe.d) to create a web client to access the database of medical institutions. Tell me, please, what about the reports (report generator)? I need to upload reports in DOC and XLS.
>
> Thanks in advance.
>
> Regards, Sergey

I'm afraid I don't believe there to be any libraries for dealing with those formats. But via a quick google search I did find a (paid) library that you could easily bind to via extern(C) [0]. I also found another library [1] for word documents and supposedly excel.

I don't know what the quality is for either. Also if you want to do a shared library binding instead of static you may want to check out derelict-util.

[0] http://www.libxl.com/
[1] http://libopc.codeplex.com/
Re: More radical ideas about gc and reference counting
I have to say that all this discussion (more precisely, the understanding on the side of key developers) makes me very upset. It's good that Andrei agreed that the harebrained disallowing of class destructors is impossible. But I was very surprised that such a thought got as far as it did, because such a solution contradicts the D spirit totally. There are many languages which are very popular and have many dark moments in their design. I (and I think not only me) came to D not for its popularity, but for its clarity, power and SANITY (based on strong guarantees). The strong solutions founded on strong decisions are what make D what it is. (Examples of such strong solutions: immutability, default unsharedness, struct and class as distinct beings.) A path that leads us to a state where structs have dtors and classes don't, but a struct with a dtor is allowed as a class member and people have to call the struct dtor manually, isn't the D way, because such a path relies on programmer discipline. As Andrei has written: "If there's one thing that decades of computing have taught us, it must be that discipline-oriented programming does not scale." [TDPL, p. 399]

Flooding out our negative feelings may be sane from a psychological point of view, but it is neither sane nor constructive for D's future. To solve a problem, one must first formulate it. We have to state that the current state (the lack of a guarantee that structs' dtors are called) is insane, and that the harebrained disallowing of class destructors is insane too. So what is sane? If I properly understand the philosophy of D, we need a semi-automated (not fully automated) resource manager with strong guarantees and good performance, whose automated mode covers the majority of use cases. That is the target. Garbage collection, reference counting, or any possible third way is a detail and a means, not a target. And one task that lies on the way to the target is minimal disruption of the D2 language (even if the solution will be D3), so IMO dtors (perhaps only for structs) must survive.
I notice that I see only part of the problem. Can anybody link or describe for me the complete state and problems of current garbage collection and other resource management? It would help me find an existing solution (at least a theoretical one).

---
Alaksiej Stankievič
Re: Running Phobos unit tests in threads: I have data
https://issues.dlang.org/show_bug.cgi?id=12708

On Sunday, 4 May 2014 at 16:07:30 UTC, Andrei Alexandrescu wrote:
> On 5/4/14, 1:44 AM, Atila Neves wrote:
>> On Saturday, 3 May 2014 at 22:46:03 UTC, Andrei Alexandrescu wrote:
>>> On 5/3/14, 2:42 PM, Atila Neves wrote:
>>>> gdc gave _very_ different results. I had to use different modules because at some point tests started failing, but with gdc the threaded version runs ~3x faster. On my own unit-threaded benchmarks, running the UTs for Cerealed over and over again was only slightly slower with threads than without. With dmd the threaded version was nearly 3x slower.
>>>
>>> Sounds like a severe bug in dmd or dependents. -- Andrei
>>
>> Seems like it. Just to be sure I swapped ld.gold for ld.bfd and the problem was still there. I'm not entirely sure how to file this bug: with just my simple example above?
>
> The simpler the better. -- Andrei
Re: More radical ideas about gc and reference counting
On Mon, 05 May 2014 17:24:38, "Dicebot" wrote:
>> That experimental package idea that was discussed months ago comes to my mind again. Add that thing as exp.rational and have people report bugs or shortcomings to the original author. When it seems to be usable by everyone interested, it can move into Phobos proper after the formal review (which includes code style checks, unit tests etc. that mere users don't take as seriously).
>
> And same objections still remain.

Sneaky didn't work this time.

-- 
Marco
Re: API
On 5/5/2014 7:51 PM, Andrei Alexandrescu wrote:
> On 5/5/14, 5:54 PM, Walter Bright wrote:
>> 2. why "make" instead of "construct" or "factory"?
>
> Shorter.

"Less" is 4 characters. I win!
Re: FYI - mo' work on std.allocator
On Mon, 05 May 2014 11:23:58 -0700, Andrei Alexandrescu wrote:
> On 5/5/14, 9:57 AM, Marco Leise wrote:
>> That sounds like a more complicated topic than anything I had in mind. I think a »std.virtualmemory« module should already implement all the primitives in a portable form, so we don't have to do that again for the next use case. Since cross-platform code is always hard to get right, it could also avoid latent bugs. That module would also offer functionality to get the page size and allocation granularity, and wrappers for common needs like getting n KiB of writable memory. Management, however (i.e. RAII structs), would not be part of it. It sounds like not too much work with great benefit for a systems programming language.
>
> I think adding portable primitives to http://dlang.org/phobos/std_mmfile.html (plus, better yet, refactoring its existing code to use them) would be awesome and wouldn't need a DIP. -- Andrei

I like Dmitry's core.vmm better, since conceptually we are not necessarily dealing with memory-mapped files, but perhaps with just-in-time compilation, circular buffers, memory access tracing etc. Virtual memory really is a basic building block.

-- 
Marco
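[Editor's illustration] The kind of primitives being discussed for a »std.virtualmemory«/core.vmm module look roughly like this on POSIX; a minimal C++ sketch under the assumption of a POSIX host (the names `pageSize`, `reservePages`, `releasePages` are invented, and a Windows port would wrap VirtualAlloc/VirtualFree behind the same interface):

```cpp
#include <cstddef>
#include <sys/mman.h>   // mmap, munmap (POSIX)
#include <unistd.h>     // sysconf

// Query the hardware page size, the basic allocation granularity.
std::size_t pageSize() {
    return static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
}

// Get `bytes` of zero-initialized, writable, page-aligned memory
// straight from the OS, bypassing malloc.
void* reservePages(std::size_t bytes) {
    void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? nullptr : p;
}

// Return the pages to the OS. Management (RAII wrappers) would sit on
// top of these free functions, as Marco suggests.
void releasePages(void* p, std::size_t bytes) {
    munmap(p, bytes);
}
```

These free functions are exactly the "building blocks for more specific objects" mentioned above: a circular buffer or JIT code arena would be a thin type layered on top.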
Re: API
On 5/5/14, 9:56 PM, Walter Bright wrote:
> On 5/5/2014 7:51 PM, Andrei Alexandrescu wrote:
>> On 5/5/14, 5:54 PM, Walter Bright wrote:
>>> 2. why "make" instead of "construct" or "factory"?
>>
>> Shorter.
>
> "Less" is 4 characters. I win!

Semantics suk. -- Andrei
Report generator for D
Please, help... I want to use D (Vibe.d) to create a web client to access the database of medical institutions. Tell me, please, what about the reports (report generator)? I need to upload reports in DOC and XLS. Thanks in advance. Regards, Sergey
Re: More radical ideas about gc and reference counting
On 5/5/14, 8:19 PM, Manu via Digitalmars-d wrote:
> On 5 May 2014 14:09, Andrei Alexandrescu via Digitalmars-d wrote:
>> This is nice, but on the face of it it's just this: an idea on how other people should do things on their free time. I'd have difficulty convincing people they should work that way. The kind of ideas that I noticed are successful are those that actually carry the work through and serve as good examples to follow.
>
> There's imperfect but useful pull requests hanging around for years, extern(Obj-C) for instance, which may be useful as an experimental feature to many users, even if it's not ready for inclusion in the official feature list and support. I suspect its (experimental) presence would stimulate further contribution towards D on iOS, for instance; it may be an enabler for other potential contributors.

So it would be nice if you reviewed that code.

> What about AST macros? It seems to me that this is never going to be explored and there are competing proposals, but I wonder if there's room for experimental implementations that anyone in the community can toy with?

There would of course be room as long as there'd be one or more champions for it. Would that be something you'd be interested in?

> UDA's are super-useful, but they're still lacking the thing to really set them off, which is the ability to introduce additional boilerplate code at the site of the attribute.

Interesting. Have you worked on a related proposal?

> I reckon there's a good chance that creating a proper platform for experimental features would also have an advantage for community building and increase contribution in general. If new contributors can get in, have some fun, and start trying their ideas while also being able to share them with the community for feedback without fear they'll just be shot down and denied after all their work... are they not more likely to actually make a contribution in the first place?
I'd say so, but we'd need initiative and quite a bit of work for such a platform. Would you be interested?

> Once they've made a single contribution of any sort, are they then more likely to continue making other contributions in the future (having now taken the time to acclimatise themselves with the codebase)?

I agree - and that applies to you, too.

> I personally feel the perceived unlikeliness of any experimental contribution being accepted is a massive deterrence to making compiler contributions in the first place by anyone other than the most serious OSS advocates.

Contributions make it into the compiler and standard library if they are properly motivated, well done, and reviewed by the core team, which is literally self-appointed. The key to being on the core team is just reviewing contributions. Have you considered looking at submissions that are "hanging around for years"?

> I have no prior experience with OSS, and it's certainly a factor that's kept me at arms length.

It's as easy as just reviewing stuff. Acta, non verba.

Andrei
Re: More radical ideas about gc and reference counting
On 6 May 2014 13:51, HaraldZealot via Digitalmars-d wrote:
>> That said, I really want my destructors, and would be very upset to see them go. So... ARC?
>
> Manu, can you direct me to what ARC is? This abbreviation is very hard to google.

Automatic reference counting, the solution used by Apple in Obj-C. There has been massive debate on the topic already, but it's generally been dismissed.
Re: More radical ideas about gc and reference counting
> That said, I really want my destructors, and would be very upset to see them go. So... ARC?

Manu, can you direct me to what ARC is? This abbreviation is very hard to google.
Re: More radical ideas about gc and reference counting
On 3 May 2014 18:49, Benjamin Thaut via Digitalmars-d wrote:
> On 30.04.2014 22:21, Andrei Alexandrescu wrote:
>> Walter and I have had a long chat in which we figured our current offering of abstractions could be improved. Here are some thoughts. There's a lot of work ahead of us on that and I wanted to make sure we're getting full community buy-in and backup.
>>
>> First off, we're considering eliminating destructor calls from within the GC entirely. It makes for a faster and better GC, but the real reason here is that destructors are philosophically bankrupt in a GC environment. I think there's no need to argue that in this community.
>>
>> The GC never guarantees calling destructors even today, so this decision would be just a point in the definition space (albeit an extreme one).
>>
>> That means classes that need cleanup (either directly or by having fields that are structs with destructors) would need to garner that by other means, such as reference counting or manual. We're considering deprecating ~this() for classes in the future.
>>
>> Also, we're considering a revamp of built-in slices, as follows. Slices of types without destructors stay as they are.
>>
>> Slices T[] of structs with destructors shall be silently lowered into RCSlice!T, defined inside object.d. That type would occupy THREE words, one of which being a pointer to a reference count. That type would redefine all slice primitives to update the reference count accordingly.
>>
>> RCSlice!T will not convert implicitly to void[]. Explicit cast(void[]) will be allowed, and will ignore the reference count (so if a void[] extracted from a T[] via a cast outlives all slices, dangling pointers will ensue).
>>
>> I foresee any number of theoretical and practical issues with this approach. Let's discuss some of them here.
>>
>> Thanks,
>>
>> Andrei
>
> Honestly, that sounds like entirely the wrong approach to me.
> You're approaching the problem in this way:
>
> "We can not implement a proper GC in D because the language design prevents us from doing so. So let's remove destructors to mitigate the issue of false pointers."
>
> While the approach should be:
>
> "The language does not allow implementing a proper GC (anything other than a dirty mark & sweep); what needs to be changed to allow an implementation of a more sophisticated GC?"

Couldn't agree more. Abandoning destructors is a disaster. Without destructors, you effectively have manual memory management, or rather, manual 'resource' management, which is basically the same thing, even if you have a GC. It totally undermines the point of memory management as a foundational element of the language if most things are to require manual release/finalisation/destruction or whatever you wanna call it.

> Also let me tell you that at work we have a large C# codebase which heavily relies on resource management. So basically every class in there inherits from C#'s IDisposable interface, which is used to manually call the finalizer on the class (but the C# GC will also call that finalizer!). Basically the entire codebase feels like manual memory management. You have to think about manually destroying every class, and the entire advantage of having a GC, e.g. not having to think about memory management and thus being more productive, vanishes. It really feels like writing C++ with C# syntax. Do we really want that for D?

It is interesting to hear someone else say this. I have always found C# - an alleged GC language - to result in extensive manual memory management in practise too. I've ranted enough about it already, but I have come to the firm conclusion that the entire premise of a mark & sweep GC is practically corrupt. Especially in D.
Given this example that you raise with C#, and my own experience that absolutely parallels it, I realise that GC's failure extends into far more cases than just the ones I'm usually representing.

I also maintain that GC isn't future-proof in essence. Computers grow exponentially, and GC performance inversely tracks the volume of memory in the system. Anything with an exponential growth curve is fundamentally not future-proof. I predict a 2025 Wikipedia entry: "GC was a cute idea that existed for a few years in the early 2000's while memory ranged in the 100's of MB to a few GB, but quickly became unsustainable as computer technology advanced".

> And what if I want unsafe slices of structs with destructors, for performance? Maybe I know perfectly well that the memory behind the slice will outlive the slice, and I don't want the overhead of all the reference counting behind it?
>
> If you actually deprecate ~this, there would be two options for me.
> 1) Migrate my entire codebase to some user-defined finalizer function (which doesn't have compiler support), which would be a lot of work.

Does ~this() actually work, or just usually work? Do you call your destructors manual
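[Editor's illustration] The RCSlice!T lowering quoted in this thread - three words, one of them a pointer to a shared reference count - can be sketched roughly as below. This is a hand-written C++ approximation of the shape of the proposal only; the actual design would live in object.d, hook every slice primitive, and run element destructors on the last release, none of which is modeled here beyond the counting.

```cpp
#include <cstddef>

// Rough shape of the proposed RCSlice: pointer + length + a pointer to a
// count shared by every slice of the same block (the "THREE words").
template <typename T>
struct RCSlice {
    T*          ptr   = nullptr;
    std::size_t len   = 0;
    long*       count = nullptr;   // third word: shared reference count

    explicit RCSlice(std::size_t n)
        : ptr(new T[n]), len(n), count(new long(1)) {}

    // Taking another slice of the block bumps the shared count.
    RCSlice(const RCSlice& o) : ptr(o.ptr), len(o.len), count(o.count) {
        ++*count;
    }

    // Last owner frees the block; this is where element destructors
    // would run deterministically in the real proposal.
    ~RCSlice() {
        if (count && --*count == 0) {
            delete[] ptr;
            delete count;
        }
    }

    RCSlice& operator=(const RCSlice&) = delete;  // kept minimal

    long refs() const { return *count; }
};
```

Note how the unsafe `cast(void[])` escape hatch in the quoted proposal corresponds to handing out `ptr` raw: the count is bypassed, so a raw view outliving all counted slices dangles.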
Re: More radical ideas about gc and reference counting
On 5 May 2014 14:09, Andrei Alexandrescu via Digitalmars-d wrote:
> On 5/4/14, 5:38 PM, Caligo via Digitalmars-d wrote:
>> On Sun, May 4, 2014 at 12:22 AM, Andrei Alexandrescu via Digitalmars-d wrote:
>>
>> Here is an idea: include new features in DMD/Phobos as soon as they arrive, and make them part of the official binary release so that the average D user can try them out. Make sure they are marked as unstable, and put an on/off switch on them (something like what Rust/Haskell have; not a compiler switch). If the feature receives no implementation bug reports for X consecutive days AND no design bug reports for Y consecutive days, then the feature is marked stable and officially becomes part of DMD/Phobos. The X and the Y can be decreased as D's number of users increases over the years. The whole idea is very much like farming: you are planting seeds. As the plants grow, some of them will not survive, others will be destroyed, and some of them will take years to grow. In any case, you harvest the fruits when they are ready.
>>
>> Here are good starting values for X and Y:
>> X = 90 days
>> Y = 180 days
>
> This is nice, but on the face of it it's just this: an idea on how other people should do things on their free time. I'd have difficulty convincing people they should work that way. The kind of ideas that I noticed are successful are those that actually carry the work through and serve as good examples to follow.

There's imperfect but useful pull requests hanging around for years, extern(Obj-C) for instance, which may be useful as an experimental feature to many users, even if it's not ready for inclusion in the official feature list and support. I suspect its (experimental) presence would stimulate further contribution towards D on iOS, for instance; it may be an enabler for other potential contributors.

What about AST macros?
It seems to me that this is never going to be explored and there are competing proposals, but I wonder if there's room for experimental implementations that anyone in the community can toy with?

UDA's are super-useful, but they're still lacking the thing to really set them off, which is the ability to introduce additional boilerplate code at the site of the attribute.

I reckon there's a good chance that creating a proper platform for experimental features would also have an advantage for community building and increase contribution in general. If new contributors can get in, have some fun, and start trying their ideas while also being able to share them with the community for feedback without fear they'll just be shot down and denied after all their work... are they not more likely to actually make a contribution in the first place? Once they've made a single contribution of any sort, are they then more likely to continue making other contributions in the future (having now taken the time to acclimatise themselves with the codebase)?

I personally feel the perceived unlikeliness of any experimental contribution being accepted is a massive deterrence to making compiler contributions in the first place by anyone other than the most serious OSS advocates. I have no prior experience with OSS, and it's certainly a factor that's kept me at arms length.
Re: API
On 5/5/14, 5:54 PM, Walter Bright wrote: On 5/5/2014 5:10 PM, Andrei Alexandrescu wrote: So I'm looking at creation functions and in particular creation functions for arrays. My bikeshed color comments: 1. "allok" is too kute for a name. Rather have "allocate" or "alloc". Fortunately that's not part of the API, just of the example. 2. why "make" instead of "construct" or "factory"? Shorter. Andrei
Re: More radical ideas about gc and reference counting
On 4 May 2014 19:00, via Digitalmars-d wrote: > On Saturday, 3 May 2014 at 11:12:56 UTC, Michel Fortin wrote: >> >> On 2014-05-01 17:35:36 +, "Marc Schütz" said: >> >>> Maybe the language should have some way to distinguish between GC-managed >>> and manually-managed objects, preferably in the type system. Then it could >>> be statically checked whether an object is supposed to be GC-managed, and >>> consequently shouldn't have a destructor. >> >> >> Or turn the rule on its head: make it so having a destructor makes the >> heap memory block reference counted. With this, adding a destructor always >> causes deterministic destruction. >> >> The compiler knows statically whether a struct has a destructor. For a >> class you need a runtime trick because the root object can be either. >> Use a virtual call or a magic value in the reference count field to handle >> the reference count management. You also need a way to tag a class as >> guaranteed to have no derived class with a destructor (to provide a static >> proof for the compiler it can omit ARC code), perhaps @disable ~this(). >> >> Then remains the problem of cycles. It could be a hard error if the >> destructor is @safe (error thrown when the GC collects it). The destructor >> could be allowed to run (in any thread) if the destructor is @system or >> @trusted. >> >> The interesting thing with this is that the current D semantics are >> preserved, destructors become deterministic (except in the presence of >> cycles, which the GC will detect for you), and if you're manipulating >> pointers to pure memory (memory blocks having no destructor) there's no ARC >> overhead. And finally, no new pointer attributes; Walter will like this last >> one. > > > This is certainly also an interesting idea, but I suspect it is bound to > fail, simply because it involves ARC. Reference counting always makes things > so much more complicated...
See for example the cycles problem you > mentioned: If you need a GC for that, you cannot guarantee that the objects > will be collected, which was the reason to introduce ARC in the first place. So specify that improper weak reference attribution may lead to interference with proper execution of destructors. People generally understand this, and at least they'd have such a tool to make their code behave correctly. Perhaps even have rules that things with destructors create static errors if they are used in a way that may create circular references when effective weak attribution is not detected by the compiler (if such a thing is statically possible?). > Then there are the problems with shared vs. thread-local RC (including > casting between the two), The problem is exactly the same one 'shared' already has. What makes it any different? shared <-> not-shared requires blunt casting, and the same can apply to shared RC. 'shared' implies RC access must use atomics, and otherwise not; I don't imagine any distinction in data structure? > and arrays/slices of RC objects. Slices need to know their offset (or base pointer), or have an explicit RC pointer. Either way, I don't see how slices are a particularly troublesome case. 12-byte slices on 32-bit need an extra field; 16 bytes should still be sufficient on x64, considering that 64-bit pointers only use 40-48 bits, which means there are loads of spare bits in the pointer and in the slice length field; should be plenty to stash an offset. > And, of course, > Walter doesn't like it ;-) True. But I'm still waiting to see another even theoretically workable solution.
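To make the "slice that knows its offset" idea above concrete, here is a rough D sketch. Everything in it is hypothetical: the struct, its field names, and the refcount-header scheme are illustrations of the post, not an existing or proposed implementation, and a real version would need atomics for `shared` references as discussed.

```d
// Hypothetical layout only -- NOT how D slices work today.
struct RCSlice(T)
{
    T* ptr;        // first visible element of the window
    size_t length; // visible element count
    size_t offset; // bytes back from ptr to the allocation's refcount header

    private size_t* refCount()
    {
        // The count is assumed to live just before the payload.
        return cast(size_t*)(cast(ubyte*) ptr - offset);
    }

    this(this) // postblit: copying the slice adds a reference
    {
        ++*refCount();
    }

    ~this()
    {
        if (--*refCount() == 0)
        {
            // return the whole allocation (header + payload) here
        }
    }
}
```

Note the space cost matches the post's arithmetic: on 32-bit this is 12 bytes instead of 8; on x64 the offset could instead be packed into spare pointer or length bits to stay at 16 bytes.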
Re: API
On Tuesday, 6 May 2014 at 00:10:36 UTC, Andrei Alexandrescu wrote: So I'm looking at creation functions and in particular creation functions for arrays. 1. Follow the new int[n] convention: auto a = allok.make!(int[])(42); assert(a.length == 42); assert(a.equal(repeat(0, 42))); 2. Follow the [ literal ] convention: auto a = allok.make!(int[])(42); assert(a.length == 1); assert(a[0] == 42); For the second option, to create longer arrays: auto a = allok.make!(int[])(42, 43, 44); assert(a.length == 3); assert(a.equal(iota(42, 45))); Nice ways to repeat things: auto a = allok.make!(int[])(42, repeat(43, 5), 44); And even nice ways to create holes for efficiency: auto a = allok.make!(int[])(42, uninitialized(5), 44); Destroy. Andrei Does it have to be one function? allok.makeLength!(int[])(42) as per new int[n] convention allok.makeUsing!(int[])(42, 43, 44) as per literal convention s/makeUsing/makeFilled -- or just makeFill s/makeUsing/makeFrom s/makeUsing/makeWith etc. etc. Cheers, Ed
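Ed's split into two differently named functions could look roughly like this. The names (`makeLength`, `makeFrom`) and the `allocate(size_t)` call are assumptions in the spirit of the std.allocator prototype, not existing API:

```d
// As per the `new int[n]` convention: n default-initialized elements.
T[] makeLength(T, Allocator)(ref Allocator a, size_t n)
{
    auto arr = cast(T[]) a.allocate(n * T.sizeof);
    arr[] = T.init; // mirror `new T[n]` initialization
    return arr;
}

// As per the `[ literal ]` convention: one element per argument.
T[] makeFrom(T, Allocator, Args...)(ref Allocator a, Args values)
{
    auto arr = cast(T[]) a.allocate(values.length * T.sizeof);
    foreach (i, v; values) // compile-time foreach over the argument tuple
        arr[i] = v;
    return arr;
}
```

With UFCS this reads as `allok.makeLength!int(42)` versus `allok.makeFrom!int(42, 43, 44)`, so neither call site is ambiguous about what the integer arguments mean.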
[OT] DConf - How to survive without a car?
Hi all, After last year's incident with my tires getting slashed, I'm really hoping I can do without a car during this year's DConf. How feasible is this? I'll be staying at Aloft. Would be great if there's someone I can share a ride with. I've also seen there's a public bus going more or less to FB and back, so I should be good there. (Right?) But how about getting to SFO or downtown? Am I causing myself a whole lot of pain (albeit of a different kind) by not renting a car? To be clear, I'm not looking for an economical option, just peace of mind. Lio.
Re: API
On Tuesday, 6 May 2014 at 00:10:36 UTC, Andrei Alexandrescu wrote: So I'm looking at creation functions and in particular creation functions for arrays. I like Adam's input range idea. It gives you the best of both worlds, I think. It clears the conflict between ints and lengths using an interface. 1. Follow the new int[n] convention: auto a = allok.make!(int[])(42); assert(a.length == 42); assert(a.equal(repeat(0, 42))); auto a = allok.make(repeat(int.init).take(42)); A bit verbose but a shortcut could be added. I believe you may have proposed a second length argument to repeat before, which would be nice and clean with UFCS, e.g., int.init.repeat(42). 2. Follow the [ literal ] convention: auto a = allok.make!(int[])(42); assert(a.length == 1); assert(a[0] == 42); auto a = allok.make(42); // maybe require only() or don't use IFTI. For the second option, to create longer arrays: auto a = allok.make!(int[])(42, 43, 44); assert(a.length == 3); assert(a.equal(iota(42, 45))); auto a = allok.make(iota(42, 45)); Nice ways to repeat things: auto a = allok.make!(int[])(42, repeat(43, 5), 44); auto a = allok.make(42, repeat(43).take(5), 44); // recognize RoRs and expand them (or joiner() with only()) And even nice ways to create holes for efficiency: auto a = allok.make!(int[])(42, uninitialized(5), 44); uninitialized would be useful. Maybe uninitialized!int.take(5) to keep with the theme though I haven't fully considered if that would penalize performance. Destroy. Andrei One last thought. If array() accepted a second argument for an allocator you could just use a lot of existing code and slap an allocator into that final argument on your UFCS chain. auto arr = iota(1, 5).joiner(only(7, 13)).array(allok);
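The closing idea, an `array()` overload that takes an allocator, might look like this for ranges with a known length. The `allocate` signature is assumed from the std.allocator prototype and is not settled API:

```d
import std.range : isInputRange, hasLength, ElementType;

// Sketch: copy a length-bearing range into allocator-backed memory.
auto array(Range, Allocator)(Range r, ref Allocator a)
    if (isInputRange!Range && hasLength!Range)
{
    alias T = ElementType!Range;
    auto result = cast(T[]) a.allocate(r.length * T.sizeof);
    size_t i;
    foreach (e; r)
        result[i++] = e; // single pass, one allocation
    return result;
}
```

A full version would also need an appending fallback for ranges without a `length` property, such as the `joiner` chain in the post, where the result size is not known up front.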
Re: API
On Tuesday, 6 May 2014 at 00:54:33 UTC, Walter Bright wrote: On 5/5/2014 5:10 PM, Andrei Alexandrescu wrote: So I'm looking at creation functions and in particular creation functions for arrays. My bikeshed color comments: 2. why "make" instead of "construct" or "factory"? Or "build" ? Matheus.
Re: API
On 5/5/2014 5:10 PM, Andrei Alexandrescu wrote: So I'm looking at creation functions and in particular creation functions for arrays. My bikeshed color comments: 1. "allok" is too kute for a name. Rather have "allocate" or "alloc". 2. why "make" instead of "construct" or "factory"?
Re: API
On Tuesday, 6 May 2014 at 00:10:36 UTC, Andrei Alexandrescu wrote: So I'm looking at creation functions and in particular creation functions for arrays. 1. Follow the new int[n] convention: auto a = allok.make!(int[])(42); assert(a.length == 42); assert(a.equal(repeat(0, 42))); 2. Follow the [ literal ] convention: auto a = allok.make!(int[])(42); assert(a.length == 1); assert(a[0] == 42); For the second option, to create longer arrays: auto a = allok.make!(int[])(42, 43, 44); assert(a.length == 3); assert(a.equal(iota(42, 45))); Nice ways to repeat things: auto a = allok.make!(int[])(42, repeat(43, 5), 44); And even nice ways to create holes for efficiency: auto a = allok.make!(int[])(42, uninitialized(5), 44); Destroy. Andrei The first option should have better performance (write fast code). The second method allows more expressiveness (write code fast). I generally like expressiveness, but direct memory allocation is low-level programming, so fast code should take precedence.
Re: API
On Tuesday, 6 May 2014 at 00:39:44 UTC, bearophile wrote: If you know that the item is always the same (coming from a repeat) can't you optimize the memory filling better? Oh yeah, that's a good point, you can similarly optimize if you know the size of the copy that's needed at compile time (generate better inline memcpy code).
A new trait to retrieve doc comments (if available).
**I'm fairly new to D, so let me know if this belongs in another thread.** I'd like to contribute a new feature to the DMD front-end, and I'd appreciate some feedback on the design before I start on a pull request. Feature: `__traits(comment, symbol)` will evaluate to the doc-comment of `symbol`, if it is available, and "", otherwise. For DMD, this means it will provide comment information if the "-D" compiler option is used. Other implementations can choose to always evaluate it to "". Use Cases: Here's my use case: I'm building an automatic wrapper generator for binding D to dynamic languages (mostly for scientific applications, at the moment). It's like SWIG, but more automated and narrower in scope. Right now, I have two suboptimal options for supporting documentation comments: 1) Have users put their documentation in UDAs (instead of comments), and extract those. This means departing from D style guidelines, and that DDOC doesn't work. 2) Dig comments out of DMD's JSON output. This means users have to inform the wrapping tool of all of their D source files (not just a shared library), complicating build setups. It also means DMD loads and parses each file twice. Having doc-comments accessible at compile time would let me simplify the wrapping process for my users. Other applications include metaprogramming (e.g. forwarding documentation from a template argument) and simplifying documentation generators. I'm sure having doc-comments accessible in Python made things like Sphinx and IPython easier to build. Implementation: I'm not too familiar with DMD, but it seems like evaluating `__traits(comment, symbol)` would just require reading out the relevant `DSymbol`'s `comment` field.
Alternatives: Alternative names: - `__traits(getComment, symbol)` - `__traits(documentation, symbol)` - `__traits(getDocumentation, symbol)` Alternative behaviors: - `__traits(comment, symbol)` could evaluate to `null` (rather than "") if there is no comment associated with `symbol`. Thoughts?
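A usage sketch of the proposed trait may help the discussion. To be clear, none of this compiles today; `__traits(comment, ...)` is exactly the feature being proposed, and `docFor` is a made-up helper:

```d
/// Adds two integers.
int add(int a, int b) { return a + b; }

// With -D, this would evaluate to the raw doc-comment text; without it, "".
enum doc = __traits(comment, add);

// Metaprogramming use case from the post: forward documentation
// from a wrapped symbol, falling back when none is available.
string docFor(alias symbol)()
{
    enum c = __traits(comment, symbol);
    return c.length ? c : "(undocumented)";
}
```

A wrapper generator could then call `docFor!add()` at compile time and emit the text into the generated Python (or other target-language) docstrings.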
Re: API
On Tuesday, 6 May 2014 at 00:31:04 UTC, bearophile wrote: This is OK if "make" recognizes the repeat.take type statically and uses this information to allocate the array efficiently. I don't think it needs any special beyond hasLength!T so it can allocate it all in one go, so it wouldn't be specialized on Take specifically, just anything with an explicit length property.
Re: API
Adam D. Ruppe: I don't think it needs any special beyond hasLength!T so it can allocate it all in one go, so it wouldn't be specialized on Take specifically, just anything with an explicit length property. If you know that the item is always the same (coming from a repeat) can't you optimize the memory filling better? Bye, bearophile
Re: API
On Tuesday, 6 May 2014 at 00:10:36 UTC, Andrei Alexandrescu wrote: So I'm looking at creation functions and in particular creation functions for arrays. 1. Follow the new int[n] convention: I prefer this one.
Re: API
On Tue, May 06, 2014 at 12:28:19AM +, bearophile via Digitalmars-d wrote: > Andrei Alexandrescu: > > >So I'm looking at creation functions and in particular creation > >functions for arrays. > > > >1. Follow the new int[n] convention: > > > >auto a = allok.make!(int[])(42); > >assert(a.length == 42); > >assert(a.equal(repeat(0, 42)); > > > >2. Follow the [ literal ] convention: > > > >auto a = allok.make!(int[])(42); > >assert(a.length == 1); > >assert(a[0] == 42); > > Both cases are needed. Unfortunately we don't have named arguments in > D, otherwise you can use the same name for both operations, using > different arguments. [...] Adam's trick with a Length struct neatly solves this problem. :-) struct Length { size_t n; } auto a = allok.make!(int[])(Length(42), 1); ... // etc It's a very clever idea, and I like it! T -- In theory, software is implemented according to the design that has been carefully worked out beforehand. In practice, design documents are written after the fact to describe the sorry mess that has gone on before.
Re: API
Adam D. Ruppe: I guess if it takes an input range with lengths though it could just as well do alloc.make(repeat(0).take(45)) This is OK if "make" recognizes the repeat.take type statically and uses this information to allocate the array efficiently. In general such pattern recognition tricks should be more present in Phobos (in Haskell there is even a syntax to add them). Bye, bearophile
Re: FYI - mo' work on std.allocator
On Sunday, 27 April 2014 at 05:43:07 UTC, Andrei Alexandrescu wrote: Added SbrkRegion, SimpleBlocklist, and Blocklist. http://erdani.com/d/phobos-prerelease/std_allocator.html#.SbrkRegion http://erdani.com/d/phobos-prerelease/std_allocator.html#.SimpleBlocklist http://erdani.com/d/phobos-prerelease/std_allocator.html#.Blocklist https://github.com/andralex/phobos/blob/allocator/std/allocator.d Destruction is as always welcome. I plan to get into tracing tomorrow morning. Andrei These are my biggest concerns with the allocator API: 1. Struct postblit/destructors don't work reliably, so knowing when to call deallocate() is very difficult. 2. As hard as I try, I still end up with the only references to GC-allocated memory being in my allocator-backed containers. The GC frees all sorts of memory that it shouldn't. It's a giant game of whack-a-mole trying to find the GC memory, which leads me to: 3. GC.removeRange is one of the slowest functions I've ever used. My allocator-backed binary tree implementation took 14 seconds to load a very large data set (compared to RedBlackTree's 35 seconds) and then spent the next five minutes in GC.removeRange before I got bored and killed it.
Re: API
Andrei Alexandrescu: So I'm looking at creation functions and in particular creation functions for arrays. 1. Follow the new int[n] convention: auto a = allok.make!(int[])(42); assert(a.length == 42); assert(a.equal(repeat(0, 42))); 2. Follow the [ literal ] convention: auto a = allok.make!(int[])(42); assert(a.length == 1); assert(a[0] == 42); Both cases are needed. Unfortunately we don't have named arguments in D, otherwise you could use the same name for both operations, using different arguments. Also keep in mind that in std.array we have functions like assocArray, uninitializedArray, and minimallyInitializedArray. I suggest taking a look at a similar function in Scala and especially in Lisp. Lisp has decades of experience to steal from. A common desire is to create an array based on a given function of its indexes: immutable a = allok.make!(int[][])(tuple(2, 3), (r, c) => r * 10 + c); ==> [[0, 1, 2], [10, 11, 12]] This is rather handy when you want to create immutable arrays. auto a = allok.make!(int[])(42, uninitialized(5), 44); From my experience this is a very uncommon situation, I don't remember if I have ever had such a need. (In most cases you want an initialized or an uninitialized array. More rarely you want to initialize the first N items and leave M items not initialized). Bye, bearophile
Re: API
I guess if it takes an input range with lengths though it could just as well do alloc.make(repeat(0).take(45))
Re: API
On Tuesday, 6 May 2014 at 00:10:36 UTC, Andrei Alexandrescu wrote: 1. Follow the new int[n] convention: 2. Follow the [ literal ] convention: We could combine these pretty easily: struct Length { size_t length; } allok.make!(int[])(Length(42)); // #1 allok.make!(int[])(1,2,3); // #2 btw could also use IFTI here which is nice This also potentially gives a third option: allok.make!(Length(42), 5); // length == 42, all elements == 5, typeof(return) == typeof(args[1])[] I kinda like that, though I'm not sure if I'd still like it if I used it regularly. Then, of course, we can also take other ranges to initialize too.
API
So I'm looking at creation functions and in particular creation functions for arrays. 1. Follow the new int[n] convention: auto a = allok.make!(int[])(42); assert(a.length == 42); assert(a.equal(repeat(0, 42))); 2. Follow the [ literal ] convention: auto a = allok.make!(int[])(42); assert(a.length == 1); assert(a[0] == 42); For the second option, to create longer arrays: auto a = allok.make!(int[])(42, 43, 44); assert(a.length == 3); assert(a.equal(iota(42, 45))); Nice ways to repeat things: auto a = allok.make!(int[])(42, repeat(43, 5), 44); And even nice ways to create holes for efficiency: auto a = allok.make!(int[])(42, uninitialized(5), 44); Destroy. Andrei
Re: Adding a chocolatey package
On 5/5/2014 3:41 PM, Etienne Cimon wrote: On 2014-05-05 18:21, Vladimir Panteleev wrote: On Monday, 5 May 2014 at 20:05:09 UTC, Etienne wrote: Are we allowed to put a DMD installer on http://chocolatey.org as a package? I'd be interested in building a bulk installer for DMD + dub + Mono-D but I'm not sure about the licensing terms b/c the dmd back-end is supposedly proprietary. The Windows installer does not include DMD in itself, it just downloads the zip file and sets it up. You could go the same way. This way, you don't need to get permission from WB. Ok, so if the zip from dlang.org downloads is downloaded and unpacked on the computer automatically it's all good? Yes, and you'll also be getting the latest that way, rather than having to constantly update your installer.
Re: The Current Status of DQt
On 2014-05-04 09:26, w0rp wrote: Qt 4 support basically arises from what is easy to do right now. Supporting Qt 5 doesn't seem that far off. I went with Qt 4 for now because it's easier, and at this stage it's more important to work with something that can actually work and learn from that, than to try and work with something which might not actually work at all. Nice work, I think Qt 4 is a very nice start and can help bring a lot more interest in D from the C++ crowd if it's successfully implemented, I think these people worry mostly about using the same data types and interface in a new programming language.
Re: The Current Status of DQt
http://forum.dlang.org/thread/wdddgiowaidcojbrk...@forum.dlang.org Worth a reddit announcement tomorrow morning? -- Andrei TkD is nice, but the exe's memory usage is 6.8~7 MB, while DFL's is only 2.8~3 MB, and it's a single file on Windows 7. https://github.com/Rayerd/dfl, https://github.com/FrankLIKE/dfl
Re: Adding a chocolatey package
On 2014-05-05 18:21, Vladimir Panteleev wrote: On Monday, 5 May 2014 at 20:05:09 UTC, Etienne wrote: Are we allowed to put a DMD installer on http://chocolatey.org as a package? I'd be interested in building a bulk installer for DMD + dub + Mono-D but I'm not sure about the licensing terms b/c the dmd back-end is supposedly proprietary. The Windows installer does not include DMD in itself, it just downloads the zip file and sets it up. You could go the same way. This way, you don't need to get permission from WB. Ok, so if the zip from dlang.org downloads is downloaded and unpacked on the computer automatically it's all good?
Re: Adding a chocolatey package
On Monday, 5 May 2014 at 20:05:09 UTC, Etienne wrote: Are we allowed to put a DMD installer on http://chocolatey.org as a package? I'd be interested in building a bulk installer for DMD + dub + Mono-D but I'm not sure about the licensing terms b/c the dmd back-end is supposedly proprietary. The Windows installer does not include DMD in itself, it just downloads the zip file and sets it up. You could go the same way. This way, you don't need to get permission from WB.
Re: Running Phobos unit tests in threads: I have data
On Monday, 5 May 2014 at 17:56:11 UTC, Dicebot wrote: On Saturday, 3 May 2014 at 12:26:13 UTC, Rikki Cattermole wrote: On Saturday, 3 May 2014 at 12:24:59 UTC, Atila Neves wrote: Out of curiosity are you on Windows? No, Arch Linux 64-bit. I also just noticed a glaring threading bug in my code as well that somehow's never turned up. This is not a good day. Atila I'm surprised. Threads should be cheap on Linux. Something funky is definitely going on I bet. Threads are never cheap. Regarding this, I found this talk interesting: https://www.youtube.com/watch?v=KXuZi9aeGTw
Re: Get object address when creating it in for loop
Hi Jonathan, Thanks for your reply. So actually I was getting the pointer to n itself. I understand now what my problem was. The problem was that I did not know that arrays support references to objects, so I thought that I had to fill it with pointers to objects. But it's great that I do not have to use pointers :) Thanks a lot.
Re: Adding a chocolatey package
On 5/5/2014 4:26 PM, Nick Sabalausky wrote: Use the "Send email to Walter Bright" and request permission. He's known to be cool about this sort of thing. IIUC, the whole "permission" thing is just a formality necessitated by the backend's former life as part of various companies's commercial compilers. Ahem, "...necessitated by **DMD's backend's** former life..."
Re: Adding a chocolatey package
On 5/5/2014 4:05 PM, Etienne wrote: Are we allowed to put a DMD installer on http://chocolatey.org as a package? I'd be interested in building a bulk installer for DMD + dub + Mono-D but I'm not sure about the licensing terms b/c the dmd back-end is supposedly proprietary. Go here (with JS on): http://www.walterbright.com/ Use the "Send email to Walter Bright" and request permission. He's known to be cool about this sort of thing. IIUC, the whole "permission" thing is just a formality necessitated by the backend's former life as part of various companies's commercial compilers.
Re: Running Phobos unit tests in threads: I have data
On 5 May 2014 19:07, Orvid King via Digitalmars-d wrote: > Going to take a wild guess, but as core.atomic.casImpl will never be > inlined anywhere with DMD, due to its inline assembly, you have the > cost of building and destroying a stack frame, the cost of passing the > args in, moving them into registers, saving potentially trashed > registers, etc. every time it even attempts to acquire a lock, and the > GC uses a single global lock for just about everything. As you can > imagine, I suspect this is far from optimal, and, if I remember right, > GDC uses intrinsics for the atomic operations. > Aye, and atomic intrinsics though they may be, it could even be improved by switching over to C++ atomic intrinsics, which map directly to core.atomics. :)
Adding a chocolatey package
Are we allowed to put a DMD installer on http://chocolatey.org as a package? I'd be interested in building a bulk installer for DMD + dub + Mono-D but I'm not sure about the licensing terms b/c the dmd back-end is supposedly proprietary.
Re: Scenario: OpenSSL in D language, pros/cons
On 2014-05-05 2:54 PM, Daniele M. wrote: Have you thought about creating an SSL/TLS implementations tester instead? You mean testing existing TLS libraries using this information? The advantage of using all-D is having zero-copy buffers that inline with the other layers of streams when built inside another D project. I can also add processor-specific assembler implementations of AES and RSA from OpenSSL (optimizing the critical parts can put it on par in speed, or better). To answer the question about safety, the code is very modular, so when you decide to zero out the memory of keys before/after serialization/deserialization, or even for the buffers, it happens everywhere regardless of the complexity of the application. It's definitely easier to make it safer!
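The key-scrubbing idea described above could be centralized in a small wrapper type along these lines. The type and its name are illustrative, not taken from the library being discussed, and production code needs a compiler barrier so the wipe is not optimized away as a dead store:

```d
// Sketch: a buffer that deterministically scrubs key material on scope exit.
struct SecretBuffer
{
    ubyte[] data;

    @disable this(this); // forbid copies, so there is one buffer to scrub

    ~this()
    {
        foreach (ref b; data)
            b = 0; // zero the key bytes before the memory is released
        // A volatile write or compiler barrier belongs here in real code,
        // otherwise the optimizer may elide the zeroing.
    }
}
```

Because every key and buffer passes through one type, the "zero out memory" policy holds across the whole codebase instead of being re-implemented at each call site, which is exactly the separation-of-concerns benefit claimed in the post.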
Re: More radical ideas about gc and reference counting
On Monday, 5 May 2014 at 00:44:43 UTC, Caligo via Digitalmars-d wrote: On Sun, May 4, 2014 at 12:22 AM, Andrei Alexandrescu via Digitalmars-d < digitalmars-d@puremagic.com> wrote: The on/off switch may be a nice idea in the abstract but is hardly the perfect recipe to good language feature development; otherwise everybody would be using it, and there's not overwhelming evidence to that. (I do know it's been done a few times, such as the (in)famous "new scoping rule of the for statement" for C++ which has been introduced as an option by VC++.) No, it's nothing abstract, and it's very practical and useful. Rust has such a thing, #![feature(X,Y,Z)]. So does Haskell, with {-# feature #-}. Even Python has __future__, and many others. Well, Python's __future__ is not exactly that: it's for introducing changes that impact the actual codebase... It's some sort of extreme care for not breaking anything out there. /Paolo
Re: Formal review of std.lexer
On 17-Mar-2014 02:13, Martin Nowak wrote: On 02/22/2014 09:31 PM, "Marc Schütz" wrote: But that still doesn't explain why a custom hash table implementation is necessary. Maybe a lightweight wrapper around built-in AAs is sufficient? I'm also wondering what benefit this hash table provides. Getting back to this. The custom hash map originally was a product of optimization; the benefits over built-in AAs are: a) Allocation was amortized by allocating nodes in batches. b) Allowed a custom hash function to be used with a built-in type (string). Not sure how much of that stands today. -- Dmitry Olshansky
Re: Enforced @nogc for dtors?
On 05.05.2014 19:46, Orvid King via Digitalmars-d wrote: The current GC cannot allocate within a destructor because of the fact that it has to acquire a global lock on the GC before calling the actual destructor, meaning that attempting to allocate or do anything that requires a global lock on the GC is impossible, because the lock has already been acquired by the thread. Admittedly this isn't the way it actually fails, but it is the flaw in the design that causes it to fail. This is precisely the point. I see this whole discussion as going around in circles instead of fixing the GC. Which is fine, assuming that at the end of the day, D gets a sound automatic memory management model, be it RC/GC/compiler dataflow based, which doesn't keep being questioned all the time. Otherwise, I see this as the second coming of Tango vs Phobos. -- Paulo
Re: Thread name conflict
On 2014-05-05 15:32, Jonathan M Davis via Digitalmars-d wrote: Maybe they should still be visible for the purposes of reflection or some other case where seeing the symbols would be useful Yes, it's useful for .tupleof to access private members. -- /Jacob Carlborg
Re: Parallel execution of unittests
On 5/5/14, 11:47 AM, Dicebot wrote: On Monday, 5 May 2014 at 18:29:40 UTC, Andrei Alexandrescu wrote: My understanding here is you're trying to make dogma out of engineering choices that may vary widely across projects and organizations. No thanks. Andrei I am asking to either suggest an alternative solution or to clarify why you don't consider it an important problem. "Clean /tmp/ judiciously." A dogmatic approach that solves the issue is still better than ignoring it completely. The problem with your stance ("Unittests should do no I/O because any sort of I/O can fail because of reasons you don't control from the test suite" is an appropriate generalization of my statement) is that it immediately generalizes into the unreasonable: "Unittests should do no $X because any sort of $X can fail because of reasons you don't control from the test suite". So that gets into machines not having any memory available, with full disks etc. Just make sure test machines are prepared for running unittests to the extent unittests are expecting them to. We're wasting time trying to frame this as a problem purely related to unittests alone. Right now I am afraid you will push for quick changes that will reduce the elegant simplicity of the D unittest system without providing a sound replacement that will actually fit into more ambitious use cases (as the whole "parallel" thing implies). If I had my way I'd make parallel the default and single-threaded opt-in, thus penalizing unittests that had issues to start with. But I understand the merits of not breaking backwards compatibility so probably we should start with opt-in parallel unittesting. Andrei
Re: Scenario: OpenSSL in D language, pros/cons
On Monday, 5 May 2014 at 10:41:41 UTC, Jonathan M Davis via Digitalmars-d wrote: On Mon, 05 May 2014 10:24:27 + via Digitalmars-d wrote: On Monday, 5 May 2014 at 09:32:40 UTC, JR wrote: > On Sunday, 4 May 2014 at 21:18:24 UTC, Daniele M. wrote: >> And then comes my next question: except for that >> malloc-hack, >> would it have been possible to write it in @safe D? I guess >> that if not, module(s) could have been made un-@safe. Not >> saying that a similar separation of concerns was not >> possible >> in OpenSSL itself, but that D could have made it less >> development-expensive in my opinion. > > TDPL SafeD visions notwithstanding, @safe is very very > limiting. > > I/O is forbidden so simple Hello Worlds are right out, let > alone advanced socket libraries. I/O is not forbidden, it's just that writeln and friends currently can't be made safe, but that is being worked on AFAIK. While I/O usually goes through the OS, the system calls can be manually verified and made @trusted. As the underlying OS calls are all C functions, there will always be @system code involved in I/O, but in most cases, we should be able to wrap those functions in D functions which are @trusted. Regarldess, I would think that SSL could be implemented without sockets - that is, all of its operations should be able to operate on arbitrary data regardless of whether that data is sent over a socket or not. And if that's the case, then even if the socket operations themselves had to be @system, then everything else should still be able to be @safe. Most of the problems with @safe stem either from library functions that don't use it like they should, or because the compiler does not yet do a good enough job with attribute inference on templated functions. Both problems are being addressed, so the situation will improve over time. 
Regardless, there's nothing fundamentally limited about @safe except for operations which are actually unsafe with regard to memory, and any case where something isn't @safe when it's actually memory safe should be and will be fixed (as well as any situation which isn't memory safe but is considered @safe anyway - we do unfortunately still have a few of those). - Jonathan M Davis You nailed it. If we wanted to translate the theoretical exercise into something real, it would be nice to have an implementation of PolarSSL that works on ring buffers only, then leave network layer integration to clients. Much cleaner separation of concerns.
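As a minimal illustration of the @trusted wrapping described above: the POSIX call and its prototype are real, while the wrapper name and design are just a sketch of the pattern, not existing Phobos code.

```d
// The raw C call is @system: it takes an unchecked pointer/length pair.
extern (C) @system ptrdiff_t write(int fd, const(void)* buf, size_t count);

// The wrapper is @trusted: it vouches for memory safety because the
// pointer and length both come from the same D slice and cannot disagree.
@trusted ptrdiff_t safeWrite(int fd, const(ubyte)[] buf)
{
    return write(fd, buf.ptr, buf.length);
}
```

@safe code can then call `safeWrite` freely, which is the point being made: the @system surface is confined to a handful of manually verified wrappers, and everything above them, including an SSL layer operating on plain buffers, can be @safe.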
Re: Scenario: OpenSSL in D language, pros/cons
On Monday, 5 May 2014 at 14:59:13 UTC, Etienne wrote: On 2014-05-04 4:34 AM, Daniele M. wrote: I have read this excellent article by David A. Wheeler: http://www.dwheeler.com/essays/heartbleed.html And since D language was not there, I mentioned it to him as a possible good candidate due to its static typing and related features. However, now I am asking the community here: would a D implementation (with GC disabled) of OpenSSL have been free from Heartbleed-type vulnerabilities? Specifically http://cwe.mitre.org/data/definitions/126.html and http://cwe.mitre.org/data/definitions/20.html as David mentions. I find this perspective very interesting, please advise :) I'm currently working on a TLS library using only D. I've shared the ASN.1 parser here: https://github.com/globecsys/asn1.d The ASN.1 format allows me to compile the data structures to D from the tls.asn1 in the repo I linked to. It uses the equivalent of D template structures extensively with what's called an Information Object Class. Obviously, when it's done I need a DER serializer/deserializer which I intend on editing MsgPackD, and then I can do a handshake (read a ASN.1 certificate) and encrypt/decrypt AES/RSA using the certificate information and this cryptography library: https://github.com/apartridge/crypto . I've never expected any help so I'm not sure what the licensing will be. I'm currently working on the generation step for the ASN.1 to D compiler, it's very fun to make a compiler in D. This is a quite radical approach, I am very interested to see its development! Have you thought about creating an SSL/TLS implementations tester instead? With the compiled information I see this goal quite well in range.
Re: Parallel execution of unittests
On Monday, 5 May 2014 at 18:29:40 UTC, Andrei Alexandrescu wrote: My understanding here is you're trying to make dogma out of engineering choices that may vary widely across projects and organizations. No thanks. Andrei I am asking you to either suggest an alternative solution or to clarify why you don't consider it an important problem. A dogmatic approach that solves the issue is still better than ignoring it completely. Right now I am afraid you will push for quick changes that will reduce the elegant simplicity of D's unittest system without providing a sound replacement that actually fits more ambitious use cases (as the whole "parallel" thing implies).
Re: Enforced @nogc for dtors?
On Monday, 5 May 2014 at 17:46:35 UTC, Orvid King via Digitalmars-d wrote: Destructors and finalizers are the same thing. That is exactly the point that I am arguing against. That they are confused in D (or 'unified', if you think that is a good thing) I accept, but I think it's a language design error, or at least an unfortunate omission. Did you read the citation I provided? I think Boehm's argument is convincing; you've provided no rebuttal. The entire brouhaha going on now is because they're different: we assume that destructors will be called at a precise time, so we can use them to manage constrained resources, and we don't know that about finalizers.
Re: Parallel execution of unittests
On 5/5/14, 11:25 AM, Dicebot wrote: On Monday, 5 May 2014 at 18:24:43 UTC, Andrei Alexandrescu wrote: On 5/5/14, 10:08 AM, Dicebot wrote: On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote: On 5/5/14, 8:55 AM, Dicebot wrote: It was just a most simple example. "Unittests should do no I/O because any sort of I/O can fail because of reasons you don't control from the test suite" is an appropriate generalization of my statement. Full /tmp is not a problem, there is nothing broken about system with full /tmp. Problem is test reporting that is unable to connect failure with /tmp being full unless you do environment verification. Different strokes for different folks. -- Andrei There is nothing subjective about it. Of course there is. -- Andrei You are not helping your point to look reasonable. My understanding here is you're trying to make dogma out of engineering choices that may vary widely across projects and organizations. No thanks. Andrei
Re: Parallel execution of unittests
On Monday, 5 May 2014 at 18:24:43 UTC, Andrei Alexandrescu wrote: On 5/5/14, 10:08 AM, Dicebot wrote: On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote: On 5/5/14, 8:55 AM, Dicebot wrote: It was just a most simple example. "Unittests should do no I/O because any sort of I/O can fail because of reasons you don't control from the test suite" is an appropriate generalization of my statement. Full /tmp is not a problem, there is nothing broken about system with full /tmp. Problem is test reporting that is unable to connect failure with /tmp being full unless you do environment verification. Different strokes for different folks. -- Andrei There is nothing subjective about it. Of course there is. -- Andrei You are not helping your point to look reasonable.
Re: Parallel execution of unittests
On 5/5/14, 10:08 AM, Dicebot wrote: On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote: On 5/5/14, 8:55 AM, Dicebot wrote: It was just a most simple example. "Unittests should do no I/O because any sort of I/O can fail because of reasons you don't control from the test suite" is an appropriate generalization of my statement. Full /tmp is not a problem, there is nothing broken about system with full /tmp. Problem is test reporting that is unable to connect failure with /tmp being full unless you do environment verification. Different strokes for different folks. -- Andrei There is nothing subjective about it. Of course there is. -- Andrei
Re: FYI - mo' work on std.allocator
On 5/5/14, 9:57 AM, Marco Leise wrote: That sounds like a more complicated topic than anything I had in mind. I think a »std.virtualmemory« module should already implement all the primitives in a portable form, so we don't have to do that again for the next use case. Since cross-platform code is always hard to get right, it could also avoid latent bugs. That module would also offer functionality to get the page size and allocation granularity and wrappers for common needs like getting n KiB of writable memory. Management however (i.e. RAII structs) would not be part of it. It sounds like not too much work with great benefit for a systems programming language. I think adding portable primitives to http://dlang.org/phobos/std_mmfile.html (plus better yet refactoring its existing code to use them) would be awesome and wouldn't need a DIP. -- Andrei
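For reference, a minimal sketch of what std.mmfile already provides today (the file name is made up; the Mode values are from the current API): MmFile papers over the mmap/MapViewOfFile difference that a portable primitives module would have to handle.

```d
import std.mmfile;

void main()
{
    // Map a 4 KiB read/write region backed by "scratch.bin"; MmFile
    // wraps mmap on POSIX and CreateFileMapping/MapViewOfFile on Windows.
    auto mmf = new MmFile("scratch.bin", MmFile.Mode.readWrite, 4096, null);

    auto bytes = cast(ubyte[]) mmf[];
    bytes[0] = 0xAB;        // writes go straight through the mapping
    assert(mmf[0] == 0xAB); // opIndex reads back from the same mapping
}
```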
Why not memory specific destructors?
I never got the real issue with destructors (I haven't seen the issue explained, just a lot of talk about it being a problem and how to fix it) but I think doing away with them would be a very bad idea. Assuming the only/main issue is with the GC not guaranteeing to call them, then that is really throwing out the baby with the bathwater. Some of us do not want to be locked down by the GC. If you shape the D language around using the GC then you just dig our hole deeper and deeper. (We are trying to get out of this hole, remember?) So, instead of removing destructors why not have multiple types? If the object is manually allocated then we can guarantee the destructor will be called when the object is freed. But basically, since they would be different types of destructors there would be no confusion about when they would or wouldn't be called. 1. GC destructors - Never called when the object is managed by the GC. (or maybe one can flag certain ones to always be called and the GC will respect that) 2. Manual memory management destructors - Always called when the object is allocated manually. 3. Others (ARC, etc.) - Same principle. So, while this could provide different behavior depending on how you use memory (not a great thing but possibly necessary), it at least provides the separation for a choice. (and it's all about choice, not about forcing people to use something that doesn't work for them) It seems to me we have 4 basic lifetimes of an object: 1. Fixed/Physical Scope - The object lives and dies very quickly and is well defined. 2. Unfixed/Logical Scope - The scope is not well defined, but something somewhere frees the object in a predictable way when it (the programmer) decides it should be freed. 3. Auto Scope - A combination of the above where an object can live in both at the same time and automatically determines when it goes out of the last scope. This is like ARC-type stuff. 4. Unknown/Non-Deterministic/Unpredictable - There are no scopes. 
Objects' lifetimes are completely handled by God (the GC). We don't have to worry about any of it. Unfortunately D's GC hasn't had its `god mode` flag set. 1 and 2 essentially are old-school manual memory management. If we have objects' lifetimes that exist in different ways, then having different destructors for these possibilities seems logical. The problem may simply be that we are trying to fit one destructor to all the cases and it simply doesn't work that way. Anyways... just food for thought.
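For what it's worth, case 2 above (a destructor guaranteed to run when the object is manually freed) can already be spelled out today with emplace/destroy; the Res type and its flag are invented names for illustration.

```d
import core.stdc.stdlib : free, malloc;
import std.conv : emplace;

struct Res
{
    bool* destroyed;
    ~this() { if (destroyed) *destroyed = true; }
}

void main()
{
    bool flag;

    // Manually allocate raw memory and construct a Res in place.
    void[] mem = malloc(Res.sizeof)[0 .. Res.sizeof];
    auto r = emplace!Res(mem, &flag);

    destroy(*r);   // destructor guaranteed to run, at a point we choose
    free(mem.ptr); // then release the raw memory

    assert(flag);  // the destructor really did run
}
```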
Re: Running Phobos unit tests in threads: I have data
Going to take a wild guess, but as core.atomic.casImpl will never be inlined anywhere with DMD, due to its inline assembly, you have the cost of building and destroying a stack frame, the cost of passing the args in, moving them into registers, saving potentially trashed registers, etc. every time it even attempts to acquire a lock, and the GC uses a single global lock for just about everything. As you can imagine, I suspect this is far from optimal, and, if I remember right, GDC uses intrinsics for the atomic operations. On 5/5/14, Atila Neves via Digitalmars-d wrote: > On Sunday, 4 May 2014 at 17:01:23 UTC, safety0ff wrote: >> On Saturday, 3 May 2014 at 22:46:03 UTC, Andrei Alexandrescu >> wrote: >>> On 5/3/14, 2:42 PM, Atila Neves wrote: gdc gave _very_ different results. I had to use different modules because at some point tests started failing, but with gdc the threaded version runs ~3x faster. On my own unit-threaded benchmarks, running the UTs for Cerealed over and over again was only slightly slower with threads than without. With dmd the threaded version was nearly 3x slower. >>> >>> Sounds like a severe bug in dmd or dependents. -- Andrei >> >> This reminds me of when I was parallelizing a project euler >> solution: atomic access was so much slower on DMD that it made >> performance worse than the single threaded version for one >> stage of the program. >> >> I know that std.parallelism does make use of core.atomic under >> the hood, so this may be a factor when using DMD. > > Funny you should say that, a friend of mine tried porting a > lock-free algorithm of his from Java to D a few weeks ago. The D > version ran 3 orders of magnitude slower. Then I tried gdc and > ldc on his code. ldc produced code running at around 80% of the > speed of the Java version, gdc was around 30%. But dmd... >
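For context, the primitive being discussed is core.atomic.cas; every call like the ones below pays the full function-call overhead under DMD when it cannot be inlined.

```d
import core.atomic;

void main()
{
    shared int locked = 0;

    // cas(&value, ifThis, writeThis): atomically swap 0 -> 1.
    // Under DMD each such call goes through non-inlined casImpl.
    assert(cas(&locked, 0, 1));
    assert(atomicLoad(locked) == 1);

    // A second attempt fails: the "lock" is already taken.
    assert(!cas(&locked, 0, 1));
}
```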
Re: Running Phobos unit tests in threads: I have data
On Saturday, 3 May 2014 at 12:26:13 UTC, Rikki Cattermole wrote: On Saturday, 3 May 2014 at 12:24:59 UTC, Atila Neves wrote: Out of curiosity are you on Windows? No, Arch Linux 64-bit. I also just noticed a glaring threading bug in my code as well that somehow's never turned up. This is not a good day. Atila I'm surprised. Threads should be cheap on Linux. Something funky is definitely going on I bet. Threads are never cheap.
Re: Enforced @nogc for dtors?
The current GC cannot allocate within a destructor because of the fact that it has to acquire a global lock on the GC before calling the actual destructor, meaning that attempting to allocate or do anything that requires a global lock on the GC is impossible, because the lock has already been acquired by the thread. Admittedly this isn't the way it actually fails, but it is the flaw in the design that causes it to fail. Destructors and finalizers are the same thing. They are declared the same, function the same, and do the same things. In D, the deterministic invocation of a destructor is a side-effect of the optimization of allocations to occur on the stack rather than the heap, whether this is done by the user by declaring a value a struct, or by the compiler when it determines the value never escapes the scope. Currently the GC doesn't invoke the destructor of a struct that has been heap allocated, but I view this as a bug, because it is the same thing as if it had been declared as a class instead, and a destructor must take this into account, and not be dependent on the deterministic destruction qualities of stack-allocated values. On 5/5/14, Brian Rogoff via Digitalmars-d wrote: > On Monday, 5 May 2014 at 14:17:04 UTC, Orvid King via > Digitalmars-d wrote: >> Also, the @nogc for destructors is specific to the current GC, >> and is a limitation that isn't really needed were destructors >> implemented properly in the current GC. > > How does one implement destructors (described below) properly in > a garbage collector? > > I'm a bit puzzled by the recent storm over destructors. I think > of garbage collected entities (classes in Java) as possibly > having "finalizers", and scoped things as possibly having > "destructors". The two concepts are related but distinct. > Destructors are supposed to be deterministic, finalizers by being > tied to a tracing GC are not. 
Java doesn't have stack allocated > objects, but since 1.7 has try-'with resources' and AutoCloseable > to cover some cases in RAII-like fashion. My terminology is from > this http://www.hpl.hp.com/techreports/2002/HPL-2002-335.html > > IMO, since D has a GC, and stack allocated structs, it would make > sense to use different terms for destruction and finalization, so > what you really want is to properly implement finalizers in your > GC. > I'm a lot more reluctant to use classes in D now, and I'd like to > see a lot more code with @nogc or compiled with a the previously > discussed and rejected no runtime switch. > > Interestingly, Ada finalization via 'controlled' types is > actually what we call destructors here. The Ada approach is > interesting, but I don't know if a similar approach would fit > well with D, which is a much more pointer intensive language. > > >
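The destructor/finalizer distinction Brian draws maps onto D roughly like this (a minimal sketch; the type names are invented):

```d
import std.stdio;

struct Scoped
{
    // A struct destructor: runs deterministically when the value
    // leaves its scope, so it can manage constrained resources.
    ~this() { writeln("struct destructor: runs at end of scope"); }
}

class Collected
{
    // A class ~this is effectively a finalizer: the GC may run it
    // at an arbitrary time, on an arbitrary thread, or never.
    ~this() { /* must not rely on timing or allocate via the GC */ }
}

void main()
{
    {
        Scoped s;
    } // prints here, deterministically

    auto c = new Collected(); // finalization time is unspecified
}
```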
Re: Get object address when creating it in for loop
On Mon, 05 May 2014 16:15:42 + hardcoremore via Digitalmars-d wrote: > How to get and address of newly created object and put it in > pointer array? > > > int maxNeurons = 100; > Neuron*[] neurons = new Neuron*[](maxNeurons); > > Neuron n; > > for(int i = 0; i < maxNeurons; i++) > { > n = new Neuron(); > neurons[] = &n; // here &n always returns same adress > } > > writefln("Thread func complete. Len: %s", neurons); > > > This script above will print array with all the same address > values, why is that? &n gives you the address of the local variable n, not of the object on the heap that it points to. You don't normally get at the address of class objects in D. There's rarely any reason to. Classes always live on the heap, so they're already references. Neuron* is by definition a pointer to a class _reference_ not to an instance of Neuron. So, you'd normally do Neuron[] neurons; for your array. I very much doubt that you really want an array of Neuron*. IIRC, you _can_ get at an address of a class instance by casting its reference to void*, but I'm not sure, because I've never done it. And even then, you're then using void*, not Neuron*. Also FYI, questions like this belong in D.learn. The D newsgroup is for general discussions about D, not for questions related to learning D. - Jonathan M Davis
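A short sketch of the point above, with Neuron reduced to an empty stand-in class: class variables are already references, so the array type is Neuron[], and each element refers to a distinct heap object.

```d
class Neuron {}

void main()
{
    // A Neuron variable is already a reference to a heap object,
    // so an array of references is just Neuron[].
    auto neurons = new Neuron[](3);
    foreach (i; 0 .. neurons.length)
        neurons[i] = new Neuron();

    // Each element refers to a distinct instance...
    assert(neurons[0] !is neurons[1]);

    // ...and casting a reference to void* exposes the instance address.
    assert(cast(void*) neurons[0] !is cast(void*) neurons[1]);
}
```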
Re: More radical ideas about gc and reference counting
On Mon, 5 May 2014 09:39:30 -0700, "H. S. Teoh via Digitalmars-d" wrote: > On Mon, May 05, 2014 at 03:55:12PM +, bearophile via Digitalmars-d wrote: > > Andrei Alexandrescu: > > > > >I think the "needs to support BigInt" argument is not a blocker - we > > >can release std.rational to only support built-in integers, and then > > >adjust things later to expand support while keeping backward > > >compatibility. I do think it's important that BigInt supports > > >appropriate traits to be recognized as an integral-like type. > > > > Bigints support is necessary for usable rationals, but I agree this > > can't block their introduction in Phobos if the API is good and > > adaptable to the successive support of bigints. > > Yeah, rationals without bigints will overflow very easily, causing many > usability problems in user code. > > > > >If you, Joseph, or both would want to put std.rational again through > > >the review process I think it should get a fair shake. I do agree > > >that a lot of persistence is needed. > > > > Rationals are rather basic (important) things, so a little of > > persistence is well spent here :-) > [...] > > I agree, and support pushing std.rational through the queue. So, please > don't give up, we need it get it in somehow. :) > > > T That experimental package idea that was discussed months ago comes to my mind again. Add that thing as exp.rational and have people report bugs or shortcomings to the original author. When it seems to be usable by everyone interested it can move into Phobos proper after the formal review (that includes code style checks, unit tests etc. that mere users don't take as seriously). As long as there is nothing even semi-official, it is tempting to write such a module from scratch in a quick&dirty fashion and ignore existing work. The experimental package makes it clear that this code is eventually going to be the official way, and home-brewed stuff won't have a future. 
Something in the standard library is much less likely to be reinvented. On the other hand, once a module is in Phobos proper, it is close to impossible to change the API to accommodate a new use case. That's why I think the most focused library testing and development can happen in the experimental phase of a module. The longer that phase is, the more people will have tried it in their projects before formal review, which would greatly improve informed decisions. The original std.rational proposal could have been in active use now for months! -- Marco
Re: More radical ideas about gc and reference counting
On Monday, 5 May 2014 at 17:22:58 UTC, Marco Leise wrote: Am Mon, 5 May 2014 09:39:30 -0700 schrieb "H. S. Teoh via Digitalmars-d" : On Mon, May 05, 2014 at 03:55:12PM +, bearophile via Digitalmars-d wrote: > Andrei Alexandrescu: > > >I think the "needs to support BigInt" argument is not a > >blocker - we > >can release std.rational to only support built-in integers, > >and then > >adjust things later to expand support while keeping backward > >compatibility. I do think it's important that BigInt > >supports > >appropriate traits to be recognized as an integral-like > >type. > > Bigints support is necessary for usable rationals, but I > agree this > can't block their introduction in Phobos if the API is good > and > adaptable to the successive support of bigints. Yeah, rationals without bigints will overflow very easily, causing many usability problems in user code. > >If you, Joseph, or both would want to put std.rational > >again through > >the review process I think it should get a fair shake. I do > >agree > >that a lot of persistence is needed. > > Rationals are rather basic (important) things, so a little of > persistence is well spent here :-) [...] I agree, and support pushing std.rational through the queue. So, please don't give up, we need it get it in somehow. :) T That experimental package idea that was discussed months ago comes to my mind again. Add that thing as exp.rational and have people report bugs or shortcomings to the original author. When it seems to be usable by everyone interested it can move into Phobos proper after the formal review (that includes code style checks, unit tests etc. that mere users don't take as seriously). And same objections still remain.
Re: Running Phobos unit tests in threads: I have data
On Sunday, 4 May 2014 at 17:01:23 UTC, safety0ff wrote: On Saturday, 3 May 2014 at 22:46:03 UTC, Andrei Alexandrescu wrote: On 5/3/14, 2:42 PM, Atila Neves wrote: gdc gave _very_ different results. I had to use different modules because at some point tests started failing, but with gdc the threaded version runs ~3x faster. On my own unit-threaded benchmarks, running the UTs for Cerealed over and over again was only slightly slower with threads than without. With dmd the threaded version was nearly 3x slower. Sounds like a severe bug in dmd or dependents. -- Andrei This reminds me of when I was parallelizing a project euler solution: atomic access was so much slower on DMD that it made performance worse than the single threaded version for one stage of the program. I know that std.parallelism does make use of core.atomic under the hood, so this may be a factor when using DMD. Funny you should say that, a friend of mine tried porting a lock-free algorithm of his from Java to D a few weeks ago. The D version ran 3 orders of magnitude slower. Then I tried gdc and ldc on his code. ldc produced code running at around 80% of the speed of the Java version, gdc was around 30%. But dmd...
Re: FYI - mo' work on std.allocator
On 05-May-2014 20:57, Marco Leise wrote: That sounds like a more complicated topic than anything I had in mind. I think a »std.virtualmemory« module should already implement all the primitives in a portable form, so we don't have to do that again for the next use case. Since cross-platform code is always hard to get right, it could also avoid latent bugs. I had an idea for core.vmm. It didn't survive the last review though, plus I never got around to testing OSes aside from Windows & Linux. Comments on the initial design are welcome. https://github.com/D-Programming-Language/druntime/pull/653 -- Dmitry Olshansky
Re: Parallel execution of unittests
On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote: On 5/5/14, 8:55 AM, Dicebot wrote: It was just a most simple example. "Unittests should do no I/O because any sort of I/O can fail because of reasons you don't control from the test suite" is an appropriate generalization of my statement. Full /tmp is not a problem, there is nothing broken about system with full /tmp. Problem is test reporting that is unable to connect failure with /tmp being full unless you do environment verification. Different strokes for different folks. -- Andrei There is nothing subjective about it. It is a very well-defined practical goal - getting either reproducible or informative reports for test failures from machines you don't have routine access to, while still keeping test sources maintainable (OK, this part is subjective). It is a relatively simple engineering problem, but you discard the widely adopted solution for it (strict control of test requirements) without proposing any real alternative. "I will yell at someone when it breaks" is not really a solution.
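The environment-verification boilerplate being argued about might look like this (a hypothetical scratch-file test; the point is how much setup surrounds the one real assertion):

```d
unittest
{
    import std.exception : collectException;
    import std.file : remove, tempDir, write;
    import std.path : buildPath;

    // Environment verification: distinguish "tmp is full/unwritable"
    // from an actual logic failure in the code under test.
    auto path = buildPath(tempDir(), "unittest_scratch.tmp");
    if (auto e = collectException(write(path, "payload")))
        assert(false, "environment problem, not a test failure: " ~ e.msg);
    scope (exit) remove(path);

    // ...only now the actual test logic, dwarfed by the setup above...
    assert(true);
}

void main() {} // compile with -unittest to run the block
```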
Re: Get object address when creating it in for loop
Hi Guys, Thanks so much for your reply. This fixes my problem like Adam D. Ruppe suggested: int maxNeurons = 100; Neuron[] neurons = new Neuron[](maxNeurons); Neuron n; for(int i = 0; i < maxNeurons; i++) { n = new Neuron(); neurons[] = n; } But can you give me more details so I can understand what is going on. What is the difference between Neuron[] neurons = new Neuron[](maxNeurons); and Neuron*[] neurons = new Neuron*[](maxNeurons); As I understand it, Neuron*[] should create an array whose elements are pointers? Is it possible to instantiate 100 objects in a for loop and get an address of each object instance and store it in an array of pointers? Thanks
Re: FYI - mo' work on std.allocator
Am Sun, 04 May 2014 21:05:01 -0700 schrieb Andrei Alexandrescu : > I've decided that runtime-chosen page sizes are too much of a > complication for the benefits. Alright. Note however, that on Windows the allocation granularity is larger than the page size (64 KiB). So it is a cleaner design in my eyes to use portable wrappers around page size and allocation granularity. > > 2) For embedded Linux systems there is the flag > > MAP_UNINITIALIZED to break the guarantee of getting > > zeroed-out memory. So if it is desired, »zeroesAllocations« > > could be a writable property there. > > This can be easily done, but from what MAP_UNINITIALIZED is strongly > discouraged and only implemented on small embedded systems. Agreed. > > In the cases where I used virtual memory, I often wanted to > > exercise more of its features. As it stands now »MmapAllocator« > > works as a basic allocator for 4k blocks of memory. Is that > > the intended scope or are you open to supporting all of it? > > For now I just wanted to get a basic mmap-based allocator off the > ground. I am aware there's a bunch of things to do. The most prominent > is that (according to Jason Evans) Linux is pretty bad at munmap() so > it's actually better to advise() pages away upon deallocation but never > unmap them. > > > Andrei That sounds like a more complicated topic than anything I had in mind. I think a »std.virtualmemory« module should already implement all the primitives in a portable form, so we don't have to do that again for the next use case. Since cross-platform code is always hard to get right, it could also avoid latent bugs. That module would also offer functionality to get the page size and allocation granularity and wrappers for common needs like getting n KiB of writable memory. Management however (i.e. RAII structs) would not be part of it. It sounds like not too much work with great benefit for a systems programming language. -- Marco
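The page-size/allocation-granularity wrappers Marco has in mind could be sketched like this (the pageInfo name is invented; on Windows the two values genuinely differ, since mappings are aligned to 64 KiB while pages are 4 KiB):

```d
import std.stdio;

version (Windows)
{
    import core.sys.windows.windows;

    void pageInfo(out size_t page, out size_t granularity)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        page = si.dwPageSize;                      // typically 4096
        granularity = si.dwAllocationGranularity;  // typically 65536
    }
}
else version (Posix)
{
    import core.sys.posix.unistd;

    void pageInfo(out size_t page, out size_t granularity)
    {
        // On POSIX, mmap granularity is the page size itself.
        page = granularity = cast(size_t) sysconf(_SC_PAGESIZE);
    }
}

void main()
{
    size_t page, gran;
    pageInfo(page, gran);
    writefln("page size: %s, allocation granularity: %s", page, gran);
}
```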
Re: More radical ideas about gc and reference counting
On Mon, May 05, 2014 at 03:55:12PM +, bearophile via Digitalmars-d wrote: > Andrei Alexandrescu: > > >I think the "needs to support BigInt" argument is not a blocker - we > >can release std.rational to only support built-in integers, and then > >adjust things later to expand support while keeping backward > >compatibility. I do think it's important that BigInt supports > >appropriate traits to be recognized as an integral-like type. > > Bigints support is necessary for usable rationals, but I agree this > can't block their introduction in Phobos if the API is good and > adaptable to the successive support of bigints. Yeah, rationals without bigints will overflow very easily, causing many usability problems in user code. > >If you, Joseph, or both would want to put std.rational again through > >the review process I think it should get a fair shake. I do agree > >that a lot of persistence is needed. > > Rationals are rather basic (important) things, so a little of > persistence is well spent here :-) [...] I agree, and support pushing std.rational through the queue. So, please don't give up, we need to get it in somehow. :) T -- I see that you JS got Bach.
Re: Parallel execution of unittests
On 5/5/14, 8:55 AM, Dicebot wrote: It was just a most simple example. "Unittests should do no I/O because any sort of I/O can fail because of reasons you don't control from the test suite" is an appropriate generalization of my statement. Full /tmp is not a problem, there is nothing broken about system with full /tmp. Problem is test reporting that is unable to connect failure with /tmp being full unless you do environment verification. Different strokes for different folks. -- Andrei
Re: Get object address when creating it in for loop
On 5/5/2014 12:15 PM, hardcoremore wrote: How to get and address of newly created object and put it in pointer array? int maxNeurons = 100; Neuron*[] neurons = new Neuron*[](maxNeurons); Neuron n; for(int i = 0; i < maxNeurons; i++) { n = new Neuron(); neurons[] = &n; // here &n always returns same adress } writefln("Thread func complete. Len: %s", neurons); This script above will print array with all the same address values, why is that? Thanks These sorts of questions should go in digitalmars.D.learn, but your problem is a simple typo here: neurons[] = &n; That sets the *entire* array to "&n". You forgot the index: neurons[i] = &n;
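The typo matters because `arr[] = x` is slice assignment, which broadcasts the value to every element; `arr[i] = x` assigns only one. A small demonstration with plain int pointers:

```d
void main()
{
    int*[] arr = new int*[](3);
    int x = 42, y = 7;

    arr[] = &x;               // slice assignment: EVERY element becomes &x
    assert(arr[0] is arr[2]); // all elements hold the same address

    arr[1] = &y;              // indexed assignment: only element 1 changes
    assert(arr[0] !is arr[1]);
    assert(*arr[1] == 7);
}
```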
Re: Enforced @nogc for dtors?
On Monday, 5 May 2014 at 14:17:04 UTC, Orvid King via Digitalmars-d wrote: Also, the @nogc for destructors is specific to the current GC, and is a limitation that isn't really needed were destructors implemented properly in the current GC. How does one implement destructors (described below) properly in a garbage collector? I'm a bit puzzled by the recent storm over destructors. I think of garbage collected entities (classes in Java) as possibly having "finalizers", and scoped things as possibly having "destructors". The two concepts are related but distinct. Destructors are supposed to be deterministic; finalizers, being tied to a tracing GC, are not. Java doesn't have stack allocated objects, but since 1.7 has try-with-resources and AutoCloseable to cover some cases in RAII-like fashion. My terminology is from this http://www.hpl.hp.com/techreports/2002/HPL-2002-335.html IMO, since D has a GC, and stack allocated structs, it would make sense to use different terms for destruction and finalization, so what you really want is to properly implement finalizers in your GC. I'm a lot more reluctant to use classes in D now, and I'd like to see a lot more code with @nogc or compiled with the previously discussed and rejected no-runtime switch. Interestingly, Ada finalization via 'controlled' types is actually what we call destructors here. The Ada approach is interesting, but I don't know if a similar approach would fit well with D, which is a much more pointer intensive language.
Re: Get object address when creating it in for loop
On Monday, 5 May 2014 at 16:15:43 UTC, hardcoremore wrote: neurons[] = &n; // here &n always returns same adress You're taking the address of the pointer, which isn't changing. Just use plain n - when you new it, it is already a pointer so just add that value to your array.
Get object address when creating it in for loop
How to get an address of a newly created object and put it in a pointer array? int maxNeurons = 100; Neuron*[] neurons = new Neuron*[](maxNeurons); Neuron n; for(int i = 0; i < maxNeurons; i++) { n = new Neuron(); neurons[] = &n; // here &n always returns same address } writefln("Thread func complete. Len: %s", neurons); This script above will print an array with all the same address values, why is that? Thanks
Re: Scenario: OpenSSL in D language, pros/cons
On 05-May-2014 18:59, Etienne wrote: On 2014-05-04 4:34 AM, Daniele M. wrote: I have read this excellent article by David A. Wheeler: http://www.dwheeler.com/essays/heartbleed.html And since D language was not there, I mentioned it to him as a possible good candidate due to its static typing and related features. However, now I am asking the community here: would a D implementation (with GC disabled) of OpenSSL have been free from Heartbleed-type vulnerabilities? Specifically http://cwe.mitre.org/data/definitions/126.html and http://cwe.mitre.org/data/definitions/20.html as David mentions. I find this perspective very interesting, please advise :) I'm currently working on a TLS library using only D. I've shared the ASN.1 parser here: https://github.com/globecsys/asn1.d Cool, keep us posted. The ASN.1 format allows me to compile the data structures to D from the tls.asn1 in the repo I linked to. It uses the equivalent of D template structures extensively with what's called an Information Object Class. Obviously, when it's done I need a DER serializer/deserializer which I intend on editing MsgPackD, and then I can do a handshake (read a ASN.1 certificate) and encrypt/decrypt AES/RSA using the certificate information and this cryptography library: https://github.com/apartridge/crypto . I've never expected any help so I'm not sure what the licensing will be. I'm currently working on the generation step for the ASN.1 to D compiler, it's very fun to make a compiler in D. Aye, D seems to be a nice choice for writing compilers. -- Dmitry Olshansky
Re: More radical ideas about gc and reference counting
Andrei Alexandrescu: I think the "needs to support BigInt" argument is not a blocker - we can release std.rational to only support built-in integers, and then adjust things later to expand support while keeping backward compatibility. I do think it's important that BigInt supports appropriate traits to be recognized as an integral-like type. Bigints support is necessary for usable rationals, but I agree this can't block their introduction in Phobos if the API is good and adaptable to the successive support of bigints. If you, Joseph, or both would want to put std.rational again through the review process I think it should get a fair shake. I do agree that a lot of persistence is needed. Rationals are rather basic (important) things, so a little of persistence is well spent here :-) Bye, bearophile
Re: Parallel execution of unittests
On Monday, 5 May 2014 at 15:36:19 UTC, Andrei Alexandrescu wrote: On 5/5/14, 8:11 AM, Dicebot wrote: On Thursday, 1 May 2014 at 16:08:23 UTC, Andrei Alexandrescu wrote: It got full because of tests (surprise!). Your actions? Fix the machine and reduce the output created by the unittests. It's a simple engineering problem. -- Andrei You can't. You have not control over that machine you don't even exactly know that test has failed because of full /tmp/ - all you got is a bug report that can't be reproduced on your machine. It is not that simple already and it can get damn complicated once you get to something like network I/O I know, incidentally the hhvm team has had the same problem two weeks ago. They fixed it (wthout removing file I/O from unittests). It's fixable. That's it. It is possible to write a unit test which provides graceful failure reporting for such issues but once you get there it becomes hard to see actual tests behind boilerplate of environmental verification and actual application code behind tests. Any tests that rely on I/O need some sort of commonly repeated initialize-verify-test-finalize pattern, one that is simply impractical to do with unit tests. This segment started with your claim that unittests should do no file I/O because they may fail with a full /tmp/. I disagree with that, and with framing the full /tmp/ problem as a problem with the unittests doing file I/O. It was just a most simple example. "Unittests should do no I/O because any sort of I/O can fail because of reasons you don't control from the test suite" is an appropriate generalization of my statement. Full /tmp is not a problem, there is nothing broken about system with full /tmp. Problem is test reporting that is unable to connect failure with /tmp being full unless you do environment verification.
Re: Parallel execution of unittests
Meta: However, the community is starting to standardize around Dub as the standard package manager. Dub makes downloading a package as easy as editing a JSON file (and it scales such that you can download a project of any size this way). Having package manager(s) in Python doesn't make single-module Python projects less popular or less appreciated. Most Python projects are very small, thanks to both the standard library and the succinctness of the language (which allows a small program to do a lot), and the presence of a healthy ecosystem of third-party modules that you can import to avoid rewriting things already done by other people. All this should become more common in the D world :-) Did Python have a proper package manager before this idiom arose? Both are very old, and I am not sure, but I think the main-module idiom came first. Bye, bearophile
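The JSON file Meta refers to is Dub's package description, dub.json (called package.json in older Dub releases). A minimal example - the package name, description, and version constraint here are illustrative - looks like:

```json
{
    "name": "myapp",
    "description": "Example project pulling in a dependency via Dub.",
    "dependencies": {
        "vibe-d": "~>0.7.19"
    }
}
```

Running `dub` in the project directory then fetches the listed dependency and builds the project.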
Re: Parallel execution of unittests
On 5/5/14, 8:11 AM, Dicebot wrote: On Thursday, 1 May 2014 at 16:08:23 UTC, Andrei Alexandrescu wrote: It got full because of tests (surprise!). Your actions? Fix the machine and reduce the output created by the unittests. It's a simple engineering problem. -- Andrei You can't. You have no control over that machine, and you don't even know for sure that the test failed because of a full /tmp/ - all you got is a bug report that can't be reproduced on your machine. It is already not that simple, and it can get damn complicated once you get to something like network I/O. I know; incidentally the hhvm team had the same problem two weeks ago. They fixed it (without removing file I/O from unittests). It's fixable. That's it. This segment started with your claim that unittests should do no file I/O because they may fail with a full /tmp/. I disagree with that, and with framing the full /tmp/ problem as a problem with the unittests doing file I/O. Andrei
Re: Parallel execution of unittests
On 5/5/14, 8:16 AM, Dicebot wrote: On Thursday, 1 May 2014 at 19:22:36 UTC, Andrei Alexandrescu wrote: On 5/1/14, 11:49 AM, Jacob Carlborg wrote: On 2014-05-01 17:15, Andrei Alexandrescu wrote: That's all nice, but I feel we're going gung-ho with overengineering already. If we give unittests names and then offer people a button "parallelize unittests" to push (don't even specify the number of threads! let the system figure it out depending on cores), that's a good step toward a better world. Sure. But on the other hand, why should D not have a great unit testing framework built in? It should. My focus is to get (a) unittest names and (b) parallel testing into the language ASAP. Andrei It is the wrong approach. The proper one is to be able to define any sort of test running system in library code while still being 100% compatible with a naive `dmd -unittest`. We are almost there; the only missing step is transferring attributes to runtime unittest block reflection. Penalizing unittests that were bad in the first place is pretty attractive, but propagating attributes properly is even better. -- Andrei
Re: More radical ideas about gc and reference counting
On 5/4/14, 11:16 PM, Arlen wrote: A couple years ago I submitted std.rational, but it didn't go anywhere. About a year later I discovered that someone else had done a similar thing, but it never made it into Phobos either. Of course, it's not because we didn't belong to some "inner circle", but I think it has to do with the fact that D has a very poor development process. The point being, something as simple as a Rational library shouldn't take years to become part of Phobos, especially when people are taking the time to do the work. I looked into this (not sure to what extent it's representative of a pattern), and probably we could and should fix it. Looks like back in 2012 you did the right things (http://goo.gl/kbYQJM) but for whatever reason there was not enough response from the community. Later on, Joseph Rushton Wakeling tried (http://goo.gl/XyQu3D) to put std.rational through the review process but things got stuck at https://github.com/D-Programming-Language/phobos/pull/1616 over BigInt's support for the relevant traits. I think the "needs to support BigInt" argument is not a blocker - we can release std.rational to only support built-in integers, and then adjust things later to expand support while keeping backward compatibility. I do think it's important that BigInt supports appropriate traits to be recognized as an integral-like type. If you, Joseph, or both would want to put std.rational through the review process again, I think it should get a fair shake. I do agree that a lot of persistence is needed. Andrei
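The staged design Andrei proposes - ship with built-in integers first, widen the constraint later - might look roughly like this hypothetical, heavily simplified sketch (not the actual std.rational submission; only addition is shown):

```d
import std.numeric : gcd;
import std.traits : isIntegral;

// Constraining on isIntegral today keeps the door open: relaxing the
// constraint to an "integral-like" trait later would admit BigInt
// without changing this API.
struct Rational(T) if (isIntegral!T)
{
    T num;
    T den = 1;

    this(T n, T d)
    {
        assert(d != 0, "zero denominator");
        if (d < 0) { n = -n; d = -d; }          // keep denominator positive
        immutable g = gcd(n < 0 ? -n : n, d);   // reduce to lowest terms
        num = n / g;
        den = d / g;
    }

    Rational opBinary(string op : "+")(Rational rhs) const
    {
        return Rational(num * rhs.den + rhs.num * den, den * rhs.den);
    }
}

unittest
{
    auto a = Rational!int(1, 2);
    auto b = Rational!int(1, 3);
    auto c = a + b;
    assert(c.num == 5 && c.den == 6);
}
```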
Re: Parallel execution of unittests
On Thursday, 1 May 2014 at 19:22:36 UTC, Andrei Alexandrescu wrote: On 5/1/14, 11:49 AM, Jacob Carlborg wrote: On 2014-05-01 17:15, Andrei Alexandrescu wrote: That's all nice, but I feel we're going gung-ho with overengineering already. If we give unittests names and then offer people a button "parallelize unittests" to push (don't even specify the number of threads! let the system figure it out depending on cores), that's a good step toward a better world. Sure. But on the other hand, why should D not have a great unit testing framework built in? It should. My focus is to get (a) unittest names and (b) parallel testing into the language ASAP. Andrei It is the wrong approach. The proper one is to be able to define any sort of test running system in library code while still being 100% compatible with a naive `dmd -unittest`. We are almost there; the only missing step is transferring attributes to runtime unittest block reflection.
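The library-side approach Dicebot alludes to already mostly works via compile-time reflection: __traits(getUnitTests, ...) lists a module's unittest blocks, and their UDAs are readable via getAttributes. A sketch (the @("...") naming convention is just an illustrative convention, not a language feature; compile with -unittest, and note druntime's default runner will also execute the tests once before main):

```d
module runner;

import std.stdio : writeln;

@("addition works")
unittest
{
    assert(1 + 1 == 2);
}

void main()
{
    // Iterate over this module's unittest blocks at compile time.
    foreach (test; __traits(getUnitTests, mixin(__MODULE__)))
    {
        // Treat the first string UDA as the test's name, if present.
        string name = "(unnamed)";
        foreach (attr; __traits(getAttributes, test))
            static if (is(typeof(attr) == string))
                name = attr;

        writeln("running: ", name);
        test();
    }
}
```

The missing piece Dicebot mentions is exactly the `getAttributes` half: making those attributes available on the *runtime* unittest reflection that druntime exposes, not just at compile time.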
Re: Parallel execution of unittests
On Thursday, 1 May 2014 at 16:08:23 UTC, Andrei Alexandrescu wrote: It got full because of tests (surprise!). Your actions? Fix the machine and reduce the output created by the unittests. It's a simple engineering problem. -- Andrei You can't. You have no control over that machine, and you don't even know for sure that the test failed because of a full /tmp/ - all you got is a bug report that can't be reproduced on your machine. It is already not that simple, and it can get damn complicated once you get to something like network I/O.
Re: GC vs Resource management.
On 5/5/14, 3:18 AM, Marc Schütz wrote: On Sunday, 4 May 2014 at 16:13:23 UTC, Andrei Alexandrescu wrote: On 5/4/14, 4:42 AM, Marc Schütz wrote: But I'm afraid your suggestion is unsafe: there also needs to be a way to guarantee that no references to the scoped object exist when it is destroyed. Actually, it should be fine to call the destructor, then blast T.init over the object, while keeping the actual memory in the GC. This possible approach has come up a number of times, and I think it has promise. -- Andrei Then accesses at runtime would still appear to work, but you're actually accessing something other than what you believe you are. IMO, this is almost as bad as silent heap corruption. Not as bad, because memory safety is preserved and the errors are reproducible. Such code should just be rejected at compile time, if at all possible. Yah, that would be best. Andrei
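For class objects, druntime's destroy already behaves like the scheme Andrei describes: it runs the destructor and overwrites the instance with its .init state while the GC keeps owning the memory. A small demonstration (class and field names are illustrative) of both the memory safety Andrei cites and the semantic hazard Marc objects to:

```d
class Handle
{
    bool open;
    this() { open = true; }
    ~this() { open = false; }
}

void main()
{
    auto h = new Handle;
    auto other = h;     // a second reference to the same object
    assert(other.open);

    destroy(h);         // run ~this, then blast Handle.init over the object

    // The memory is still GC-owned and correctly typed, so this access is
    // memory-safe - but it observes the reset .init state, which is
    // semantically wrong: exactly the hazard Marc points out.
    assert(!other.open);
}
```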
Re: Scenario: OpenSSL in D language, pros/cons
On 5/5/14, 2:32 AM, JR wrote: On Sunday, 4 May 2014 at 21:18:24 UTC, Daniele M. wrote: And then comes my next question: except for that malloc-hack, would it have been possible to write it in @safe D? I guess that if not, module(s) could have been made un-@safe. Not saying that a similar separation of concerns was not possible in OpenSSL itself, but that D could have made it less development-expensive in my opinion. TDPL SafeD visions notwithstanding, @safe is very very limiting. I/O is forbidden so simple Hello Worlds are right out, let alone advanced socket libraries. Sounds like a library bug. Has it been submitted? -- Andrei
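The standard workaround for JR's complaint is to keep the bulk of the code @safe and route the operations the compiler cannot verify through a small, auditable @trusted wrapper. A sketch (function names are illustrative; with 2014-era Phobos even writeln was not callable from @safe code, which is why such wrappers were common):

```d
import std.stdio : writeln;

// The only unverified part, isolated behind @trusted: the programmer,
// not the compiler, vouches for this function's memory safety.
void log(string msg) @trusted
{
    writeln(msg);
}

int parseAndDouble(string s) @safe
{
    import std.conv : to;
    auto n = to!int(s);   // verified @safe code
    log("doubling " ~ s); // calling a @trusted function is allowed
    return 2 * n;
}

void main() @safe
{
    assert(parseAndDouble("21") == 42);
}
```

The cost is that every @trusted function is a hole the compiler no longer checks, so the idiom only pays off when the wrappers stay tiny.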
Re: Scenario: OpenSSL in D language, pros/cons
On 2014-05-04 4:34 AM, Daniele M. wrote: I have read this excellent article by David A. Wheeler: http://www.dwheeler.com/essays/heartbleed.html And since the D language was not there, I mentioned it to him as a possible good candidate due to its static typing and related features. However, now I am asking the community here: would a D implementation (with GC disabled) of OpenSSL have been free from Heartbleed-type vulnerabilities? Specifically http://cwe.mitre.org/data/definitions/126.html and http://cwe.mitre.org/data/definitions/20.html as David mentions. I find this perspective very interesting, please advise :) I'm currently working on a TLS library using only D. I've shared the ASN.1 parser here: https://github.com/globecsys/asn1.d The ASN.1 format allows me to compile the data structures to D from the tls.asn1 file in the repo I linked to. It makes extensive use of the equivalent of D's templated structs, via what ASN.1 calls an Information Object Class. Obviously, when it's done I need a DER serializer/deserializer, which I intend to build by adapting MsgPackD; then I can do a handshake (read an ASN.1 certificate) and encrypt/decrypt AES/RSA using the certificate information and this cryptography library: https://github.com/apartridge/crypto . I've never expected any help, so I'm not sure what the licensing will be. I'm currently working on the generation step of the ASN.1-to-D compiler; it's very fun to make a compiler in D.