Re: building of D for...
Nick Sabalausky wrote: Thank you, I'll see. Unfortunately I'm not sure dmd can help, and of course people strongly hope for a native solution... :) I think he means that some people expect unix apps to do things the unix way, which is scattering the parts of each application across the filesystem. No, I mean an application coherently built across the distribution, i.e., using dynamically linked libraries, not conflicting with any other apps/libraries, and so on. -- /GLeb * Institute of Atmospheric Optics, SB RAS, Tomsk, Russia
Re: building of D for...
Jordi Sayol wrote: What's the approximate total amount of people that uses ALT-Linux? Hard to estimate, I'm afraid. Looking at some statistics from ALT's forum (http://forum.altlinux.org/index.php?action=stats), there are circa 6680 *registered* users on the forum, and even more on the mailing lists (http://lists.altlinux.org/). On the other hand, ALT is the second most mentioned distribution in the xUSSR (after Ubuntu, of course). And finally, it is the base of the state project for adopting free and open software in schools and government. By 2008 statistics, there are circa 7 general schools (years 1 -- 11 of education) in Russia (http://medportal.ru/enc/parentschildren/school/16/), so the total number of pupils may be estimated at 20 000 000, of which at least 9 000 000 should be taught IT. 30% of schools should use Linux exclusively or together with Windows, which gives an upper estimate of 2 700 000 active users. As far as I can tell from discussions, 60% of those choose ALT and derivatives, while the rest prefer Ubuntu (Kubuntu, Edubuntu). A few percent select openSUSE and Mandriva. So a rough and optimistic estimate of ALT's total user base may be circa 1 620 000 pupils. Of course, many of that 30% of schools do not use any Linux at all, so the estimate should be reduced; but on the other side, there are many IT professionals and companies besides education that definitely do. In any case, the estimated user base is substantially large, and it may be worthwhile to promote D among them. :) -- /GLeb * Institute of Atmospheric Optics, SB RAS, Tomsk, Russia
Re: Google Summer of Code 2011 application
On 2011-03-09 00:14, Daniel Gibson wrote: Am 08.03.2011 20:37, schrieb Andrei Alexandrescu: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas Thanks, Andrei Two (ok, maybe three) IDE-related ideas: 1. Integration of the profiler (use the profiler's output to directly jump to related sections in the code, mark time-intensive sections, stuff like that). 2. The possibility to show assembly code from the (de-)compiled executable inline in the source, so it's easier for developers who don't know much assembly language to understand how much machine code is generated by their code, possibly revealing bottlenecks from harmless-looking statements etc. 3. Any work on IDEs should be for cross-platform IDEs, maybe Eclipse DDT or Code::Blocks. Or maybe somebody could port D-IDE (d-ide.sf.net), which is pretty good as far as I know, to Mono so it can be used on platforms other than Windows? Wouldn't it be better to use a platform-independent GUI library? (I post this here for discussion before inserting it in the wiki.) Cheers, - Daniel -- /Jacob Carlborg
Re: Google Summer of Code 2011 application
Andrei Alexandrescu wrote: On 3/8/11 3:11 PM, Jens Mueller wrote: Andrei Alexandrescu wrote: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas Great. I find all of the provided projects useful. Maybe one should add ZeroMQ to the bindings project. I haven't used it myself, but it seems useful. Before I change the wiki page I'd like to receive some feedback on a few ideas. Somehow I think it's important not to offer too much. Just tell me what's important to you and what you really miss in D. =Improved documentation= I think there should be a standard theme (like the one you find for Java; Candydoc is a good step in that direction) and unittests should be included in the documentation (http://d.puremagic.com/issues/show_bug.cgi?id=2630). This is not much; maybe more can be added. =Testing Framework= Building a testing framework on top of the available built-in unittests and assertions. This might also include improving the built-in assert and built-in unittests (named unittests). Something like GoogleTest. =Logging Library= It was once on the list but it didn't lead to anything. Something like Google-glog. =Units Library= It was once on the list but it didn't lead to anything. Like Boost.Units. =Containers= What about better/more containers for std.container. Which? =std.sysinfo/core.cpuid= There is core.cpuid. I think it has no unittests, which is bad. At least on Linux one could try parsing /proc/cpuinfo to have some tests. std.sysinfo would be an extension to core.cpuid providing system-specific information that is not covered by core.cpuid. I'm going to write better descriptions on the wiki page if a project is considered interesting. Jens Of these I think logging, units, and containers would be good. I added those. Jens
Re: Google Summer of Code 2011 application
On 2011-03-08 20:37, Andrei Alexandrescu wrote: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas Thanks, Andrei How about a GUI library. Probably helping with an already existing one, DWT for example. -- /Jacob Carlborg
64bit port for FreeBSD
I've tried to build dmd2-2.0.52 on my newly installed PCBSD 9.0, but unfortunately it's not for x86_64. Any idea when we might find dmd2 for 64 bit port ready? Sincerely, Gour -- “In the material world, conceptions of good and bad are all mental speculations…” (Sri Caitanya Mahaprabhu) http://atmarama.net | Hlapicina (Croatia) | GPG: CDBF17CA signature.asc Description: PGP signature
Re: Google Summer of Code 2011 application
On 03/09/2011 01:21 AM, Jens Mueller wrote: %u wrote: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas Thanks, Andrei Uh... how about helping fix compiler bugs? Could we help with that? I feel that's *much* more important than benchmarking, for instance, since it doesn't make sense to benchmark something if it has bugs. :\ I have the same feeling. I'd like to see such projects. But I believe students are more likely to pick feature-oriented projects. The stuff that sounds cool. Compare: I implemented a garbage collector for D that improved performance dramatically vs. I fixed bugs in the compiler. I do not think that fixing bugs is less demanding. Actually, I believe it's more difficult, and it is fun. You know the feeling when you finally understand the cause of the problem and know how to fix it properly. Do you have an idea for packaging bug fixing in a way that makes it look more interesting? The real issue, I guess, is that fixing bugs (efficiently) requires getting an intimate knowledge of the app. Denis -- _ vita es estrany spir.wikidot.com
Re: Google Summer of Code 2011 application
On 03/09/2011 01:52 AM, Andrei Alexandrescu wrote: On 3/8/11 4:11 PM, %u wrote: Uh... how about helping fix compiler bugs? Could we help with that? I feel that's *much* more important than benchmarking, for instance, since it doesn't make sense to benchmark something if it has bugs. :\ The funny thing is that sometimes it makes perfect sense, as benchmarks _do_ push the limits of, for instance, the GC and may reveal a latent bug ;) Those are a very specific class of bugs -- bigger bugs like compiler errors with handling templates are completely unrelated to benchmarking, and they can be a deal breaker for many people. I don't think anyone cares about *speed* as much as *correctness*... would you rather have your 50% accurate program be twice as fast, or have your 100% accurate program be half as fast? In machine learning it's very common to trade off accuracy for speed. Accuracy is not correctness. A result can be inaccurate and correct inside a tolerance field, which is precisely one common path for machine learning. If the program were incorrect, the machine would not learn (what one expects it to learn). Denis -- _ vita es estrany spir.wikidot.com
Re: Google Summer of Code 2011 application
spir wrote: On 03/09/2011 01:21 AM, Jens Mueller wrote: %u wrote: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas Thanks, Andrei Uh... how about helping fix compiler bugs? Could we help with that? I feel that's *much* more important than benchmarking, for instance, since it doesn't make sense to benchmark something if it has bugs. :\ I have the same feeling. I'd like to see such projects. But I believe students are more likely to pick feature-oriented projects. The stuff that sounds cool. Compare: I implemented a garbage collector for D that improved performance dramatically vs. I fixed bugs in the compiler. I do not think that fixing bugs is less demanding. Actually, I believe it's more difficult, and it is fun. You know the feeling when you finally understand the cause of the problem and know how to fix it properly. Do you have an idea for packaging bug fixing in a way that makes it look more interesting? The real issue, I guess, is that fixing bugs (efficiently) requires getting an intimate knowledge of the app. That is true. But I believe there are students who are amazingly good at this. Further, there is always the mentor who can give advice, and of course the community. If %u really likes fixing bugs and has enough time over the summer, he/she should submit it as a project. To attract more students for this kind of work one needs a good project description. That is what I find difficult. Jens
Re: Google Summer of Code 2011 application
On 03/09/2011 10:57 AM, spir wrote: On 03/09/2011 01:52 AM, Andrei Alexandrescu wrote: On 3/8/11 4:11 PM, %u wrote: Uh... how about helping fix compiler bugs? Could we help with that? I feel that's *much* more important than benchmarking, for instance, since it doesn't make sense to benchmark something if it has bugs. :\ The funny thing is that sometimes it makes perfect sense, as benchmarks _do_ push the limits of, for instance, the GC and may reveal a latent bug ;) Those are a very specific class of bugs -- bigger bugs like compiler errors with handling templates are completely unrelated to benchmarking, and they can be a deal breaker for many people. I don't think anyone cares about *speed* as much as *correctness*... would you rather have your 50% accurate program be twice as fast, or have your 100% accurate program be half as fast? In machine learning it's very common to trade off accuracy for speed. Accuracy is not correctness. A result can be inaccurate and correct inside a tolerance field, which is precisely one common path for machine learning. If the program were incorrect, the machine would not learn (what one expects it to learn). Sorry, I was unclear. I meant inaccuracy and incorrectness can often be two different notions, depending on the topic. Just like simplicity and difficulty, people often mistake one for the other. Denis -- _ vita es estrany spir.wikidot.com
Re: Google Summer of Code 2011 application
How about a GUI library. Probably helping with an already existing one, DWT for example. Good idea, but rather improve GtkD or QtD.
Re: Discussion on XOmB on Hacker News
Andrei Alexandrescu wrote: http://apps.ycombinator.com/item?id=2301249 That is great. It dates back to 2007. But they use D1 and ldc, I think. Otherwise this would be a good promotional project for D2. Jens
Re: Google Summer of Code 2011 application
On 2011-03-09 11:11, Trass3r wrote: How about a GUI library. Probably helping with an already existing one, DWT for example. Good idea, but rather improve GtkD or QtD. Too bad that's the general opinion people seem to have about GUI libraries. I don't understand what they don't like about DWT. BTW, I received a patch for DWT which makes it work with D2. -- /Jacob Carlborg
Re: Google Summer of Code 2011 application
On 03/09/2011 11:46 AM, Jacob Carlborg wrote: On 2011-03-09 11:11, Trass3r wrote: How about a GUI library. Probably helping with an already existing one, DWT for example. Good idea, but rather improve GtkD or QtD. Too bad that's the general opinion people seem to have about GUI libraries. I don't understand what they don't like about DWT. I think the advantage of Gtk or Qt is that people can reinvest previous knowledge of the framework. (I mean, they are cross-language in addition to being cross-platform ;-) I would personally prefer a cleanly designed D-specific GUI system to Gtk's huge mess. (Dunno about Qt, people seem to find it far better designed, but recent events...) Denis -- _ vita es estrany spir.wikidot.com
Re: 64bit port for FreeBSD
On Wednesday 09 March 2011 01:28:43 Gour wrote: I've tried to build dmd2-2.0.52 on my newly installed PCBSD 9.0, but unfortunately it's not for x86_64. Any idea when we might find dmd2 for the 64-bit port ready? Do you mean having dmd _itself_ as a 64-bit binary or having dmd _build_ 64-bit binaries? There are _no_ plans, as far as I know, to have a 64-bit binary for dmd. As I understand it, Walter doesn't really see much point in doing so (not to mention, there are plenty of other pressing issues). So don't expect a 64-bit binary of dmd any time soon. As for producing 64-bit binaries with dmd, as of dmd 2.052, if you build with -m64 on Linux, dmd should produce 64-bit binaries just fine. However, I don't know how well it will work for any of the BSDs. - Jonathan M Davis
Re: Google Summer of Code 2011 application
I think the advantage of Gtk or Qt is that people can reinvest previous knowledge of the framework. (I mean, they are cross-language in addition to being cross-platform ;-) I would personally prefer a cleanly designed D-specific GUI system to Gtk's huge mess. (Dunno about Qt, people seem to find it far better designed, but recent events...) Denis There's something I absolutely ***HATE*** about Gtk, and it's the fact that the controls aren't real controls: The buttons don't fade the way they're supposed to in Windows 7, because they aren't even buttons in the first place. (They're just rectangles drawn to _look_ like buttons, but they fail at imitating them.) Maybe I'm OCD, but I just can't stand developing with Gtk. :(
Resizing an array: Dangerous? Possibly buggy?
Increasing the size of an array has always given me the shivers, as beautiful as it is. Could someone explain why this code behaves the way it does? string s = "1234"; s.length = 7; writefln("%s", s); prints: 1234��� Given that it makes no sense to extend a const-size array, shouldn't there be a run-time check on the new size of the array, so that it throws whenever it's bigger than the current size? Also, another issue: Let's say you have an array that you dynamically resize (meaning you grow and shrink it a lot of times). In fact, let's say you have: int[] arr = new int[1024 * 1024]; and then you decide, hey, that's too big for how much space I actually needed: arr.length = 5; Can the rest of the array be garbage collected? a. If the answer is yes, then how does the GC know? b. If the answer is no, then doesn't that: (1) Mean that we can have memory leaks? (2) Mean that we still need an ArrayList!(T) type? (Note that using Array!(T) does _NOT_ help here, because it can't hold object references due to garbage collection issues.)
Re: Google Summer of Code 2011 application
Am 09.03.2011 09:39, schrieb Jacob Carlborg: On 2011-03-09 00:14, Daniel Gibson wrote: Am 08.03.2011 20:37, schrieb Andrei Alexandrescu: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas Thanks, Andrei Two (ok, maybe three) IDE-related ideas: 1. Integration of the profiler (use the profiler's output to directly jump to related sections in the code, mark time-intensive sections, stuff like that). 2. The possibility to show assembly code from the (de-)compiled executable inline in the source, so it's easier for developers who don't know much assembly language to understand how much machine code is generated by their code, possibly revealing bottlenecks from harmless-looking statements etc. 3. Any work on IDEs should be for cross-platform IDEs, maybe Eclipse DDT or Code::Blocks. Or maybe somebody could port D-IDE (d-ide.sf.net), which is pretty good as far as I know, to Mono so it can be used on platforms other than Windows? Wouldn't it be better to use a platform-independent GUI library? When starting from scratch - certainly. But if D-IDE should be improved/ported - which currently uses .NET - Mono is probably the best choice (better than rewriting it completely). Or do you think it should be ported to Qyoto (Qt for C#) or GTK# or something like that for a more native look and feel? And if some other IDE should be improved (probably it should be discussed what specific IDE this should be anyway), then *please* improve a cross-platform IDE like Eclipse DDT or Code::Blocks (and not, e.g., Visual D or Poseidon, because those are only available on Windows anyway). Something else to consider: for improvements of existing IDEs, the current IDE developers should probably be involved as mentors. Cheers, - Daniel
Re: Google Summer of Code 2011 application
Am 09.03.2011 11:55, schrieb spir: On 03/09/2011 11:46 AM, Jacob Carlborg wrote: On 2011-03-09 11:11, Trass3r wrote: How about a GUI library. Probably helping with an already existing one, DWT for example. Good idea, but rather improve GtkD or QtD. Too bad that's the general opinion people seem to have about GUI libraries. I don't understand what they don't like about DWT. I think the advantage of Gtk or Qt is that people can reinvest previous knowledge of the framework. (I mean, they are cross-language in addition to being cross-platform ;-) I would personally prefer a cleanly designed D-specific GUI system to Gtk's huge mess. (Dunno about Qt, people seem to find it far better designed, but recent events...) Denis AFAIK DWT is modeled after (Java) SWT (used in Eclipse). I'd love to see DWT improved, so it really works on all/most platforms that are supported by SWT, especially Linux i386/amd64 and OSX. (I haven't looked at DWT's homepage for some time and it currently seems to be down due to dsource issues.) Cheers, - Daniel
Re: Pretty please: Named arguments
Named arguments are useful when you have a function that takes a large number of parameters, the vast majority of which have default values. For example, have a look at this constructor in wxWidgets: http://docs.wxwidgets.org/trunk/classwx_frame.html#01b53ac2d4a5e6b0773ecbcf7b5f6af8 wxFrame::wxFrame(wxWindow *parent, wxWindowID id, const wxString &title, const wxPoint &pos = wxDefaultPosition, const wxSize &size = wxDefaultSize, long style = wxDEFAULT_FRAME_STYLE, const wxString &name = wxFrameNameStr) If you want to change the name argument you need to call new wxFrame(a_parent, wxID_ANY, "Hello world", wxDefaultPosition, wxDefaultSize, wxDEFAULT_FRAME_STYLE, "My Custom name str") which meant I had to look up what the default values of pos, size and style were, even though I was happy with those default values. The more arguments, the more of a pain this setup is without named arguments. Contrast with a hypothetical C++ syntax: new wxFrame(a_parent, wxID_ANY, "Hello world", name = "My Custom name str") Here I haven't bothered with the arguments I don't care about, and my function call has ended up less sensitive to changes in the wxFrame constructor. On 28/02/11 21:50, Jonathan M Davis wrote: On Monday, February 28, 2011 13:38:34 Don wrote: spir wrote: On 02/28/2011 07:51 PM, Jonathan M Davis wrote: I'm not entirely against named arguments being in D, however I do think that any functions that actually need them should be refactored anyway. I agree. CreateFont() in the Windows API, I'm looking at you. (For Linux people, that function has about 12 parameters). ??? In actuality, if I were to vote on whether named arguments should be in the language, I would definitely vote against it (I just plain don't want the code clutter, [...] Just don't use them! You don't have that option. At least, if you're a library developer, you don't. (I'm a bit sick of people saying you don't have to use it if you don't want to in language design. If it is in the language, you don't have a choice. 
You will encounter it). There are a couple of things that I really, really don't like about the named arguments idea: 1. It makes parameter names part of the API. Providing no way for the function writer to control whether it is part of the API or not, and especially, doing it retrospectively, strikes me as extremely rude. 2. It introduces a different syntax for calling a function. foo(4, 5); foo(x: 4, y: 5); They look different, but they do exactly the same thing. I don't like that redundancy. Especially since, as far as I can tell, the named arguments are just comments (which the compiler can check). If so, a syntax like this would be possible, with no language change at all: pragma(namedarguments); // applies to whole module foo(/*x*/ 4, /*y*/ 5); --- if a function parameter has a comment which forms a valid identifier, it's a named parameter. But I still don't see the need for this feature. Aren't people using IDEs where the function signature (with parameter names) pops up when you're entering the function, and when you move the mouse over the function call? And if you really want to see them all the time, why not build that feature into the IDE? (hit ctrl-f9 to show all parameter names, hit it again to hide them). I agree with pretty much everything said here. However, as I understand it, named parameters (at least as they work in Python) would allow you to reorder parameters and give values for parameters which normally have defaults without giving values for the default parameters before them, and those changes could not be dealt with by comments. However, I consider them to be a big _problem_, not a feature - _especially_ the ability to rearrange the function arguments. All of a sudden you could have foo(4, 5); foo(x : 4, y : 5); foo(y : 5, x : 4); all making _exactly_ the same function call. That seems _very_ bug-prone and confusing to me. foo(x : 4, y : 5) was bad enough, but allowing foo(y : 5, x : 4)? Not good. 
The amount of effort to understand the code becomes considerably higher. You could be very familiar with foo and know exactly what parameters it takes and totally mistake what it's really getting passed, because the arguments were flipped in comparison to the function parameters. I agree with Don. I think that named parameters would cause far more intellectual overhead and problems than they'd solve. - Jonathan M Davis
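As an aside, part of the convenience being argued over can already be approximated in today's D with a settings struct whose fields act as named, defaulted parameters (a sketch only; every name below is made up for illustration, this is not an actual wxWidgets or Phobos API):

```d
import std.stdio;

// Hypothetical frame options: each field plays the role of a named
// parameter with a default value.
struct FrameOptions
{
    string title = "";
    int width  = 640;
    int height = 480;
    string name = "frame";
}

void createFrame(FrameOptions opts)
{
    writeln("title=", opts.title, ", name=", opts.name);
}

void main()
{
    // Only the fields we care about are spelled out (in declaration
    // order); the rest keep their defaults, much like named arguments
    // would allow.
    FrameOptions opts = { title: "Hello world", name: "My Custom name str" };
    createFrame(opts);
}
```

The obvious cost, compared to real named arguments, is an extra type per call site and the loss of out-of-order field naming in older compilers.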
Re: Is DMD 2.052 32-bit?
Am 09.03.2011 08:24, schrieb Jason E. Aten: On Wed, Mar 9, 2011 at 12:55 AM, Walter Bright newshou...@digitalmars.com mailto:newshou...@digitalmars.com wrote: On 3/8/2011 1:23 PM, Trass3r wrote: Yes, but you can compile an x64 dmd yourself on Linux. Is there any how-to? IIRC you have to edit linux.mak to use -m64 instead of -m32. It has worked in the past, but the 64-bit build is not regularly tested. When I tried this last week, changing -m32 to -m64 got me a clean 64-bit build of dmd2, with no build errors. Very easy. A caveat: I tried the same search and replace s/32/64/ in places in the druntime and phobos makefiles, but there was something more subtle going on -- the generated libraries built fine but were still 32-bit objects inside the .a archives. Somehow I wasn't passing the right flags to generate 64-bit libraries. Is there a flag to tell dmd to compile to 64-bit objects? Thanks, Jason Since 2.052 the -m64 flag should be supported to compile 64-bit binaries (on Linux).
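For reference, the edit-and-rebuild procedure described above amounts to something like the following (an untested sketch based on the 2011-era dmd source layout; the directory and makefile names are assumptions, so adjust them to your checkout):

```shell
# Build a 64-bit dmd from source on Linux, per Walter's suggestion above.
cd dmd/src

# Swap the host-compiler flags from 32-bit to 64-bit in linux.mak.
sed -i 's/-m32/-m64/g' linux.mak

make -f linux.mak clean
make -f linux.mak

# Verify the result is a 64-bit binary.
file dmd   # expect: ELF 64-bit LSB executable
```

As Jason notes, the druntime/phobos makefiles need the analogous flag changes passed through to the compiler invocations, not just a blind s/32/64/ on the file.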
Re: Google Summer of Code 2011 application
On 08/03/2011 19:37, Andrei Alexandrescu wrote: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas Thanks, Andrei I've added two ideas in the IDE category, for Eclipse. (dunno why this NG message wasn't sent earlier) I was thinking whether anything related to debugger integration could be added, but I suspect (from what I recall of my conversations with Ary a long time ago) that adding Eclipse support for a debugger should be fairly easy (and in particular a lot of code from Descent might be reusable, since this is not much related to semantic analysis). Instead, the greatest effort comes from debugger support itself. On Linux OSes the situation is fine, gdb works well, but on Windows things are not so good. There is the ddbg debugger, but it is no longer maintained (I'm not sure how good it still is); there is Mago, the Visual D debugger, but from what I understand it can't properly be used from the command line (and thus be integrated with other IDEs); and there's gdb for Windows, but that requires compiling and using GDC, which apparently has a host of issues and problems as well. I wonder what is the best way to address these issues. I definitely hope the Windows platform doesn't further become a second-rate target for D development. -- Bruno Medeiros - Software Engineer
Re: Resizing an array: Dangerous? Possibly buggy?
On Wed, 09 Mar 2011 06:41:54 -0500, %u wfunct...@hotmail.com wrote: Increasing the size of an array has always given me the shivers, as beautiful as it is. Since dmd around 2.042, array resizing and memory management has been extremely safe. It should be very difficult for you to get into trouble. Could someone explain why this code behaves the way it does? string s = "1234"; s.length = 7; writefln("%s", s); prints: 1234��� Given that it makes no sense to extend a const-size array, shouldn't there be a run-time check on the new size of the array, so that it throws whenever it's bigger than the current size? A string is an immutable(char)[], it is not a fixed size. Try this, and you will get an error: int[5] x; x.length = 7; I will also point out that setting the length of a string is not a useful thing -- once the elements are added, they are set in stone (immutable), so essentially, you just added 3 unchangeable chars of 0xFF. It's better to append to a string, which will do it in place if possible. Also, another issue: Let's say you have an array that you dynamically resize (meaning you grow and shrink it a lot of times). In fact, let's say you have: int[] arr = new int[1024 * 1024]; and then you decide, hey, that's too big for how much space I actually needed: arr.length = 5; Can the rest of the array be garbage collected? No. An array is one contiguous memory-allocated block. It cannot be partially collected. a. If the answer is yes, then how does the GC know? b. If the answer is no, then doesn't that: (1) Mean that we can have memory leaks? Somewhat, as long as you keep a reference to that small array. But once that array is unreferenced, the memory should be collected. Note that if you do know that you are holding lots of memory with a small array, you can do something like this: arr = arr[0..5].dup; and this makes a copy just large enough to hold the 5 elements. The original 1MB array should be collected. (2) Mean that we still need an ArrayList!(T) type? 
It depends on what you want to do, and how ArrayList is implemented. In dcollections, ArrayList stores its elements in one contiguous memory block, just like an array. -Steve
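To make Steven's shrink-then-dup advice concrete, here is a small self-contained D sketch (variable names are mine; the behavior described applies to dmd around 2.042 and later):

```d
import std.stdio;

void main()
{
    // A deliberately large allocation.
    int[] arr = new int[1024 * 1024];

    // Shrinking the slice does not free anything: the slice still
    // references the original ~4 MB block, so the GC must keep it alive.
    arr.length = 5;

    // Copying the survivors into a right-sized allocation drops the
    // last reference to the big block, making it collectable.
    arr = arr[0 .. 5].dup;

    writeln(arr.length);  // 5
}
```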
Re: Uh... destructors?
On Tue, 08 Mar 2011 18:33:31 -0500, Bruno Medeiros brunodomedeiros+spam@com.gmail wrote: I'm not saying all pointer arithmetic and manipulation should be illegal. It could be allowed, but only so long as the coder maintains the contract of the pure attribute. So this means that you could use pointers to manipulate whatever is transitively reachable from the function parameters (or stuff that was created inside the pure function), but the rest should not be accessed through pointer arithmetic, precisely because the compiler would not be able to determine that from the function signature. Note that when I said illegal I didn't necessarily mean compiler-verified illegal code. That might be too complex to implement in the language, so instead it might just be an unchecked contract. Breaking that contract would result in /undefined behavior/. Then I think we are saying the same thing :) -Steve
Re: LLVM 3.0 type system changes
Caligo Wrote: And maybe, just maybe, today we would have a production-quality free and open source D compiler that just works. Good luck trying to compile dil, ldc, etc., let alone have them compile your D code and produce an executable that runs the way it should. What's the problem? If dmd doesn't work for you, try GDC.
Re: Pretty please: Named arguments
On 08/03/2011 21:37, Steven Schveighoffer wrote: On Tue, 08 Mar 2011 15:29:28 -0500, Bruno Medeiros brunodomedeiros+spam@com.gmail wrote: On 28/02/2011 22:13, Steven Schveighoffer wrote: Dunno, vim doesn't do that for me currently. I feel tempted to say something very short and concise regarding vim and emacs, but it would require a large amount of justification to properly expose such a point. I am planning to write a blog post about that, but I haven't gotten to it yet. Also, if reviewing code on GitHub, there is no IDE. -Steve A) Should we give any significant weight to very minute and relatively infrequent code-reading situations, like web-based code reviewing on GitHub or whatever, or reading code in books? I doubt so. I contest that web-based code reviewing is going to be infrequent, since all major Phobos changes now must be reviewed by 2 peers before inclusion. Please look at a recent pull request review I participated in, without ever opening an editor/IDE. GitHub provides very good collaborative review. If I have to install an IDE that I only use for reviewing, um... no. https://github.com/jmdavis/phobos/commit/aca1a2d7cfe7d5e934668e06028b78ffb6796245 Hum, looking at that GitHub pull request (first time I have done so), GitHub does look quite nice in terms of code-reviewing functionality. So, hum, I do agree that web-based code reviewing might become significantly frequent (if it is not already). (Note that I am not talking about Phobos development only.) Although in the particular case of named arguments, I still don't feel it is worthwhile. Not unless it could be done in a very orthogonal way (both in semantics and syntax), and even so it should likely be very low priority (as in, not any time soon...). B) If the pull request is large, it should be near effortless to put those changes in the IDE and review them there. Again, don't have one. -Steve Just because you don't have or don't use an IDE, that is not an argument against doing B). 
What should be considered is whether it is worthwhile to review it in an IDE or in the web application, for a given change request. At the moment, probably not (although I would say it depends a lot on the D code and the underlying project). But in the future things might change. It could even be that a web-based application like GitHub would grow some IDE features, like the one about parameter context information. -- Bruno Medeiros - Software Engineer
Re: Pretty please: Named arguments
On 09/03/2011 06:10, Brad Roberts wrote: Personally, I spend _way_ more time reading code (mine or other people's) than I spend writing code. Ignoring time, I also read far more lines/files/whatever of code than I write. I suspect these things are going to vary highly from person to person and from job role to job role. Those that primarily produce new applications, tools, websites, etc. and do little maintenance programming will be the polar opposite. Me too, I also spend a lot of time reading and trying to understand code, especially when working with Eclipse, which is a huge code base. But I do it inside the IDE. I didn't mean reading in any kind of situation, but only minute ones when tools are not available. Like reading a book, reading an online guide/tutorial/snippet, etc. -- Bruno Medeiros - Software Engineer
Re: Google Summer of Code 2011 application
On 2011-03-09 12:12, %u wrote: I think the advantage of gtk or Qt is people can reinvest previous knowledge of the framework. (I mean, they are cross-language in addition to being cross-platform ;-) I would personally prefer a clearly designed D-specific GUI system to gtk's huge mess. (Dunno about Qt, people seem to find it far better designed, but recent events...) Denis There's something I absolutely ***HATE*** about Gtk, and it's the fact that the controls aren't real controls: The buttons don't fade the way they're supposed to in Windows 7, because they aren't even buttons in the first place. (They're just rectangles drawn to _look_ like buttons, but they fail at imitating them.) I feel exactly the same. Maybe I'm OCD, but I just can't stand developing with Gtk. :( -- /Jacob Carlborg
Re: Google Summer of Code 2011 application
On 2011-03-09 11:55, spir wrote: On 03/09/2011 11:46 AM, Jacob Carlborg wrote: On 2011-03-09 11:11, Trass3r wrote: How about a GUI library. Probably helping with an already existing one, DWT for example. Good idea, but rather improve GtkD or QtD. Too bad that's the general opinion people seem to have about GUI libraries. I don't understand what they don't like about DWT. I think the advantage of gtk or Qt is people can reinvest previous knowledge of the framework. (I mean, they are cross-language in addition to being cross-platform ;-) I would personally prefer a clearly designed D-specific GUI system to gtk's huge mess. (Dunno about Qt, people seem to find it far better designed, but recent events...) Denis Since DWT is a port of SWT, people can reinvest previous knowledge there as well. In fact, that's what I did. I can also add that Java is probably the language that looks most like D, syntactically. You don't have to learn some kind of object-oriented wrapper like the one GtkD possibly uses. -- /Jacob Carlborg
Re: Google Summer of Code 2011 application
On 2011-03-09 13:09, Daniel Gibson wrote: On 09.03.2011 09:39, Jacob Carlborg wrote: On 2011-03-09 00:14, Daniel Gibson wrote: On 08.03.2011 20:37, Andrei Alexandrescu wrote: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas Thanks, Andrei Two (ok, maybe three) IDE related ideas: 1. integration of the profiler (use the profiler's output to directly jump to related sections in the code, mark time-intensive sections, stuff like that) 2. possibility to show assembly code from the (de-)compiled executable inline in the source, so it's easier for developers who don't know much assembly language to understand how much machine code is generated by their code, possibly creating bottlenecks from harmless-looking statements, etc. 3. Any work on IDEs should be for cross-platform IDEs, maybe eclipse DDT or codeblocks. Or maybe somebody could port D-IDE (d-ide.sf.net), which is pretty good as far as I know, to mono so it can be used on platforms other than windows? Wouldn't it be better to use a platform-independent GUI library? When starting from scratch - certainly. But if D-IDE should be improved/ported - which currently uses .Net - mono is probably the best choice (better than rewriting it completely). Or do you think it should be ported to Qyoto (Qt for C#) or GTK# or something like that for a more native look and feel? Sorry, I assumed it was written in D, don't know where I got that from. And if some other IDE should be improved (probably it should be discussed what specific IDE this should be anyway) then *please* improve a cross-platform IDE like eclipse DDT or codeblocks (and not, e.g., Visual D or Poseidon, because those are only available on Windows anyway). Something else to consider: for improvements of existing IDEs the current IDE developers should probably be involved as mentors. Cheers, - Daniel -- /Jacob Carlborg
Re: Google Summer of Code 2011 application
On Wed, 09 Mar 2011 04:37:43 +0900, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas Bindings to popular IPC/RPC protocols such as Protocol Buffers and Apache Thrift Key Skills: Intimate knowledge of cross-machine communication protocols. Large-scale programming using D requires bindings to cross-machine and cross-language communication protocols. Such include Google's Protocol Buffers, Apache Thrift, and others. D's standard library currently includes no such protocol implementation. Providing such would motivate adoption of D for large-scale development. Unlike MessagePack, BERT, etc., Protocol Buffers and Thrift need code generation from an IDL. Are such protocols acceptable for Phobos?
Re: Google Summer of Code 2011 application
On 2011-03-09 13:30, Bruno Medeiros wrote: On 08/03/2011 19:37, Andrei Alexandrescu wrote: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas Thanks, Andrei I've added two ideas in the IDE category, for Eclipse. (dunno why this NG message wasn't sent earlier) I was wondering whether anything related to debugger integration could be added, but I suspect (from what I recall from my conversations with Ary a long time ago) that adding Eclipse support for a debugger should be fairly easy (and in particular a lot of code from Descent might be reusable, since this is not much related to semantic analysis). Instead, the greatest effort comes from debugger support itself. On Linux OSes the situation is fine, gdb works well, but on Windows things are not so good. There is the ddbg debugger, but it is no longer maintained (I'm not sure how good it still is); there is Mago, the Visual D debugger, but from what I understand it can't properly be used from the command line (and thus be integrated with other IDEs); and there's gdb for Windows, but that requires compiling and using GDC, which apparently has a host of issues and problems as well. I wonder what is the best way to address these issues. I definitely hope the Windows platform doesn't further become a second-rate target for D development. GDB doesn't work on Mac OS X as well as it does on Linux. Anything related to line numbers won't work. DMD still can't output the correct DWARF info on Mac OS X. -- /Jacob Carlborg
Re: Pretty please: Named arguments
On 9 March 2011 13:22, Gareth Charnock gareth.charn...@gmail.com wrote: Named arguments are useful when you have a function that takes a large number of parameters, the vast majority of which have default values. For example, have a look at this constructor in wxWidgets: http://docs.wxwidgets.org/trunk/classwx_frame.html#01b53ac2d4a5e6b0773ecbcf7b5f6af8

wxFrame::wxFrame(
    wxWindow* parent,
    wxWindowID id,
    const wxString& title,
    const wxPoint& pos = wxDefaultPosition,
    const wxSize& size = wxDefaultSize,
    long style = wxDEFAULT_FRAME_STYLE,
    const wxString& name = wxFrameNameStr
)

If you want to change the name argument you need to call

new wxFrame(a_parent, wxID_ANY, "Hello world", wxDefaultPosition, wxDefaultSize, wxDEFAULT_FRAME_STYLE, "My Custom name str")

which meant I had to look up what the default values of pos, size and style were, even though I was happy with those default values. The more arguments, the more of a pain this setup is without named arguments. Contrast with a hypothetical C++ syntax:

new wxFrame(a_parent, wxID_ANY, "Hello world", name = "My Custom name str")

I haven't bothered with the arguments I don't care about, and my function call has ended up less sensitive to changes in the wxFrame constructor. On 28/02/11 21:50, Jonathan M Davis wrote: On Monday, February 28, 2011 13:38:34 Don wrote: spir wrote: On 02/28/2011 07:51 PM, Jonathan M Davis wrote: I'm not entirely against named arguments being in D, however I do think that any functions that actually need them should be refactored anyway. I agree. CreateFont() in the Windows API, I'm looking at you. (For Linux people, that function has about 12 parameters). ??? In actuality, if I were to vote on whether named arguments should be in the language, I would definitely vote against it (I just plain don't want the code clutter, [...] Just don't use them! You don't have that option. At least, if you're a library developer, you don't. (I'm a bit sick of people saying you don't have to use it if you don't want to in language design.
If it is in the language, you don't have a choice. You will encounter it). There are a couple of things that I really, really don't like about the named arguments idea: 1. It makes parameter names part of the API. Providing no way for the function writer to control whether it is part of the API or not, and especially, doing it retrospectively, strikes me as extremely rude. 2. It introduces a different syntax for calling a function.

foo(4, 5);
foo(x: 4, y: 5);

They look different, but they do exactly the same thing. I don't like that redundancy. Especially since, as far as I can tell, the named arguments are just comments (which the compiler can check). If so, a syntax like this would be possible, with no language change at all:

pragma(namedarguments); // applies to whole module
foo(/*x*/ 4, /*y*/ 5);

--- if a function parameter has a comment which forms a valid identifier, it's a named parameter. But I still don't see the need for this feature. Aren't people using IDEs where the function signature (with parameter names) pops up when you're entering the function, and when you move the mouse over the function call? And if you really want to see them all the time, why not build that feature into the IDE? (hit ctrl-f9 to show all parameter names, hit it again to hide them). I agree with pretty much everything said here. However, as I understand it, named parameters (at least as they work in Python) would allow you to reorder parameters and give values for parameters which are normally default parameters without giving values for the default parameters before them, and those changes could not be dealt with by comments. However, I consider them to be a big _problem_, not a feature - _especially_ the ability to rearrange the function arguments. All of a sudden you could have

foo(4, 5);
foo(x : 4, y : 5);
foo(y : 5, x : 4);

all making _exactly_ the same function call. That seems _very_ bug-prone and confusing to me.
foo(x : 4, y : 5) was bad enough, but allowing foo(y : 5, x : 4)? Not good. The amount of effort to understand the code becomes considerably higher. You could be very familiar with foo and know exactly what parameters it takes and totally mistake what it's really getting passed, because the arguments were flipped in comparison to the function parameters. I agree with Don. I think that named parameters would cause far more intellectual overhead and problems than they'd solve. - Jonathan M Davis I actually like the idea of named arguments for those functions that take a lot of parameters, mostly because if the library dev chooses to change the default value of something, my old placeholder value will end up being wrong without my knowing anything about it. Say: wxFrame::wxFrame( wxWindow * parent,
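Since the thread keeps pointing at Python's semantics, here is a minimal Python sketch of the two disputed properties: skipping defaulted parameters and reordering arguments. The wx-flavored names are placeholders invented for illustration, not a real binding:

```python
# Hypothetical wx-style constructor; the names only mirror the
# wxFrame example above and are not a real wx API.
def make_frame(parent, id, title,
               pos=(0, 0), size=(100, 100),
               style="DEFAULT", name="frame"):
    return (parent, id, title, pos, size, style, name)

# Skip the defaults you are happy with and set only `name`:
a = make_frame("p", 1, "Hello world", name="My Custom name str")

# Reordered keyword arguments make exactly the same call -- the
# property Don and Jonathan object to:
b = make_frame("p", 1, "Hello world", style="DEFAULT", name="My Custom name str")
assert a == b
```

Whether that reorderability is a convenience or a bug magnet is exactly the disagreement above.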
Re: 64bit port for FreeBSD
On Wed, 9 Mar 2011 02:58:25 -0800 Jonathan M Davis jmdavisp...@gmx.com wrote: As for producing 64-bit binaries with dmd, as of dmd 2.052, if you build with -m64 on Linux, dmd should produce 64-bit binaries just fine. However, I don't know how well it will work for any of the BSDs. Well, when I attempted to build the dmd2-2.0.52 port on my x86_64, I was informed that the port is available for i386 only. :-( Sincerely, Gour -- “In the material world, conceptions of good and bad are all mental speculations…” (Sri Caitanya Mahaprabhu) http://atmarama.net | Hlapicina (Croatia) | GPG: CDBF17CA
Re: Google Summer of Code 2011 application
On 3/9/11 1:24 AM, Jacob Carlborg wrote: On 2011-03-08 20:37, Andrei Alexandrescu wrote: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas Thanks, Andrei How about a GUI library. Probably helping with an already existing one, DWT for example. Ideally we'd get the authors of the respective libraries weigh in to assess what help they need. Andrei
Re: Google Summer of Code 2011 application
On 3/9/11, Bruno Medeiros brunodomedeiros+spam@com.gmail wrote: but that requires compiling and using GDC, which apparently has a host of issues and problems as well; It doesn't have many build problems anymore. There's a couple of patches that need to be applied, but everything is described here: https://gist.github.com/857381 I've successfully used GDB as well. I pass the -g flag for debug symbols, and it works fine this way when loading the exe in GDB.
Re: Is DMD 2.052 32-bit?
On Wed, 09 Mar 2011 01:24:56 -0600, Jason E. Aten wrote: On Wed, Mar 9, 2011 at 12:55 AM, Walter Bright newshou...@digitalmars.com wrote: On 3/8/2011 1:23 PM, Trass3r wrote: Yes, but you can compile an x64 dmd yourself on Linux. Is there any how-to? IIRC you have to edit linux.mak to use -m64 instead of -m32. It has worked in the past, but the 64 bit build is not regularly tested. When I tried this last week, changing -m32 to -m64 got me a clean 64-bit build of dmd2, with no build errors. Very easy. A caveat: I tried the same search and replace s/32/64/ in places on the druntime and phobos makefiles, but there was something more subtle going on -- the generated libraries built fine but were still 32-bit objects inside the .a archives. Somehow I wasn't passing the right flags to generate 64-bit libraries. Is there a flag to tell dmd to compile to 64-bit objects? Thanks, Jason In addition to changing the MODEL env variable in the makefiles, I also had to pass the -m64 flag to the dmd binary to build x64 libphobos.a and libdruntime.a, i.e. make -f posix.mak DMD="PATH_TO_DMD -m64"
Re: Google Summer of Code 2011 application
On Wed, 09 Mar 2011 04:37:43 +0900, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas Networking I think high-level networking is a part of IO. But unfortunately, Phobos does not have an IO model. Does this idea include a new IO model discussion?
Re: Google Summer of Code 2011 application
On 3/9/11 7:34 AM, Masahiro Nakagawa wrote: On Wed, 09 Mar 2011 04:37:43 +0900, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas Bindings to popular IPC/RPC protocols such as Protocol Buffers and Apache Thrift Key Skills: Intimate knowledge of cross-machine communication protocols. Large-scale programming using D requires bindings to cross-machine and cross-language communication protocols. Such include Google's Protocol Buffers, Apache Thrift, and others. D's standard library currently includes no such protocol implementation. Providing such would motivate adoption of D for large-scale development. Unlike MessagePack, BERT, etc., Protocol Buffers and Thrift need code generation from an IDL. Are such protocols acceptable for Phobos? I'm not sure how to go about the IDL compilers themselves. What I can do is try to add D generation to Thrift if someone implements it. Andrei
Re: Pretty please: Named arguments
On Wed, 09 Mar 2011 09:02:14 -0500, Bruno Medeiros brunodomedeiros+spam@com.gmail wrote: Although in the particular case of named arguments, I still don't feel it is worthwhile. Not unless it could be done in a very orthogonal way (both in semantics and syntax), and even so it should likely be very low priority (as in, not any time soon...). It's one of those things where, in some cases, not having named arguments makes code unreadable. But having named arguments does not make *all* code more readable. In many, many cases, code is readable just fine without using the named arguments. But in those cases where it does help, it's essential. Like Don's CreateFont example (actually almost any GUI function). We might do something like require explicit comments for confusing functions like:

foo(
    null, // paramA
    null, // paramB
    null, // paramC
);

during the review process. It would at least document properly what is happening. I don't see much value in the skip-default-parameters prospect; most code can do fine without that via overloading. But the documentation and compiler verification of parameter names is what I see as the killer feature here. B) If the pull request is large, it should be near effortless to put those changes in the IDE and review them there. Again, don't have one. -Steve Just because you don't have or don't use an IDE, that is not an argument against doing B). It absolutely is. I use NetBeans for doing php development. It took me hours and hours to set it up properly, install the right add-ons, etc. I'm not about to do that for something like descent or DDT so I can properly read code; I'm way more likely to just go look up the function signature myself for the few times I need it. Just having the named arguments idea makes so much more sense than having to load a specialized (and I might add, way overkill for just reviewing code in both size and complexity) tool. It's not one of those absolutely-have-to-have-it features, it's a nice-to-have.
I certainly can and have lived without it, and there are certainly ways to compensate for not having it. It's kind of like arrays in D. Every time I have to use another language, I miss D's array syntax features. All the same functionality is there, it just takes more effort to do the same thing. That little effort is not terrible, but I much prefer not having to do it. -Steve
Re: Google Summer of Code 2011 application
On 2011-03-09 17:00, Andrei Alexandrescu wrote: On 3/9/11 1:24 AM, Jacob Carlborg wrote: On 2011-03-08 20:37, Andrei Alexandrescu wrote: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas Thanks, Andrei How about a GUI library. Probably helping with an already existing one, DWT for example. Ideally we'd get the authors of the respective libraries weigh in to assess what help they need. Andrei For DWT I can answer that:

* Finish porting to D2 (I've received a patch that does this, not applied yet)
* Finish the Mac OS X port and merge it with the DWT2 repository
* Update to later versions of SWT
* Port 64bit versions of SWT (probably we want to merge the 32bit and 64bit ports)

-- /Jacob Carlborg
Re: Resizing an array: Dangerous? Possibly buggy?
Huh, interesting, okay. I think pitfalls like this one (with the garbage collector, for example) should definitely be documented somewhere. I would imagine that quite a few people who try to set the length of an array won't realize that they can run out of memory this way, especially because it's nondeterministic in many cases. Anyway, thanks for the response!
Re: Google Summer of Code 2011 application
Andrei Alexandrescu wrote: I just submitted an application for GSoC 2011 on behalf of Digital Mars. Please review and contribute to the project ideas page: http://prowiki.org/wiki4d/wiki.cgi?GSOC_2011_Ideas I did some research on Protocol Buffers. I found https://256.makerslocal.org/wiki/index.php/ProtocolBuffer Further, it seems that Google encourages writing a plugin for their compiler (http://code.google.com/apis/protocolbuffers/docs/reference/other.html). Maybe the above project can be adapted, because a plugin only has to read a special request from stdin and output a special response to stdout. Jens
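The plugin protocol Jens mentions boils down to a filter process. A skeleton of that shape, sketched in Python (hypothetical stub only -- a real protoc plugin would parse a serialized CodeGeneratorRequest and emit a CodeGeneratorResponse, not plain bytes):

```python
import sys

def generate_d(request: bytes) -> bytes:
    # Stub: a real implementation would walk the protobuf descriptors
    # in the request and emit D source for each message type.
    return b"// generated D code for %d request bytes" % len(request)

def run_plugin(stdin=None, stdout=None):
    # protoc hands the whole request on stdin and reads the whole
    # response from stdout; the plugin is just a filter.
    stdin = stdin or sys.stdin.buffer
    stdout = stdout or sys.stdout.buffer
    stdout.write(generate_d(stdin.read()))
```

A D code generator written this way would sit behind protoc rather than inside Phobos, which may be the cleaner split.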
Re: Haskell infix syntax
On 07/03/11 01:01, Caligo wrote: On Sun, Mar 6, 2011 at 12:24 PM, Peter Alexander peter.alexander...@gmail.com wrote: On 6/03/11 4:22 PM, bearophile wrote: So I think it's not worth adding to D. But if you don't agree... talk. Bye, bearophile I agree. It would be nice in some situations (like cross and dot products for vectors), but otherwise it's unnecessary and just adds confusion in exchange for a tiny bit of convenience in a handful of scenarios. With C++, for example, Eigen uses expression templates. How does one do expression templates in D? Could someone rewrite this (http://en.wikipedia.org/wiki/Expression_templates) in D? How does one do expression templates in D? Same basic idea, but it should be a lot saner because D metaprogramming isn't a Turing tar pit like C++'s. The return type of your function is a template encoding the types of the inputs. So you could define a + operator that has a return type of opBinary(RHS, "+", LHS) or similar. The types of RHS and LHS could be other opBinary instantiations or a leaf type. Sooner or later you've got something that looks like a parse tree, and you can process that at compile time using CTFE and then mixin the result. Heck, you might even be able to do something crazy like flatten the tree and re-parse it, and thus screw with the operator precedence, or run your own specialized compiler backend and output inline asm. Trouble is, each time I feel motivated to give this a go I run into compiler bugs. Hopefully things will get better soon.
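The opBinary idea sketched above can be mimicked in any language with operator overloading. Here is a toy run-time version in Python (the class names are invented for illustration; D's version would encode the same tree in types at compile time and could mix in generated code instead of interpreting):

```python
# Toy expression tree built via operator overloading: a run-time
# analog of the compile-time opBinary sketch above.
class Expr:
    def __add__(self, rhs):
        return Node("+", self, rhs)
    def __mul__(self, rhs):
        return Node("*", self, rhs)

class Leaf(Expr):
    def __init__(self, value):
        self.value = value
    def eval(self):
        return self.value

class Node(Expr):
    def __init__(self, op, lhs, rhs):
        self.op, self.lhs, self.rhs = op, lhs, rhs
    def eval(self):
        # "Process the parse tree": here we simply interpret it; a
        # compile-time version could instead emit optimized code.
        l, r = self.lhs.eval(), self.rhs.eval()
        return l + r if self.op == "+" else l * r

tree = Leaf(2) + Leaf(3) * Leaf(4)   # Node("+", Leaf(2), Node("*", ...))
assert tree.eval() == 14
```

Once the whole expression is available as a tree, rewrites like flattening or reassociation become plain tree transformations, which is the point of the technique.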
Re: Pretty please: Named arguments
On 03/09/2011 06:20 PM, Steven Schveighoffer wrote: It's kind of like arrays in D. Every time I have to use another language, I miss D's array syntax features. All the same functionality is there, it just takes more effort to do the same thing. That little effort is not terrible, but I much prefer not having to do it. Agreed. This overall programmer-friendly design is D's very big + for me. Denis -- _ vita es estrany spir.wikidot.com
Code Sandwiches
Although D is currently not widely used, it's not hard for me to find references to D in the computer science papers I come across. This paper, titled Code Sandwiches, by Matt Elder, Steve Jackson, and Ben Liblit, discusses D scope guards too (page 7 and several successive pages): http://pages.cs.wisc.edu/~liblit/tr-1647/ One of the things the paper says about D scope guards is: Scope guards do not provide encapsulation. Bye, bearophile
Re: Code Sandwiches
== Quote from bearophile (bearophileh...@lycos.com)'s article One of the things the paper says about D scope guards is: Scope guards do not provide encapsulation. (Rolls eyes.) I feel like this is a standard criticism of language features that's code for I don't like this feature. IIRC they said the same thing about delegates in Java. Without even reading the paper, there are two reasons why this is an idiotic thing to say: 1. D also provides struct destructors, which are a more encapsulated way of accomplishing the same thing. Scope guards are intended for one-off use cases where declaring a type, etc. is just extra overhead and accomplishes nothing. 2. Encapsulation is only a means, not an end in itself. Sometimes people lose sight of this. The end goal is to write correct, efficient, readable, maintainable programs. If increasing encapsulation hurts these goals instead of helping them (as excessive encapsulation as practiced by obsessive-compulsive people does), then it's a Bad Thing.
Re: Code Sandwiches
bearophile wrote: One of the things the paper says about D scope guards is: Scope guards do not provide encapsulation. Yep, they don't. So? -- Tomek
Re: Code Sandwiches
On 09.03.2011 23:15, bearophile wrote: Although D is currently not widely used, it's not hard for me to find references to D in the computer science papers I come across. This paper, titled Code Sandwiches, by Matt Elder, Steve Jackson, and Ben Liblit, discusses D scope guards too (page 7 and several successive pages): http://pages.cs.wisc.edu/~liblit/tr-1647/ One of the things the paper says about D scope guards is: Scope guards do not provide encapsulation. Indeed, they are not related to encapsulation directly. What they do provide is a convenient way to write exception-safe code, with less fuss and fewer bugs. It's a practical shortcut feature meant to be used inside an encapsulated entity, not to create or support one. Bye, bearophile -- Dmitry Olshansky
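For readers without D at hand, the effect Dmitry describes -- cleanup that runs when the scope exits, exception or not -- can be approximated in Python with contextlib.ExitStack. This is a run-time stand-in for D's scope(exit), not an equivalent:

```python
# Rough Python analog of D's scope(exit): callbacks registered on an
# ExitStack run in LIFO order when the block exits, exception or not.
from contextlib import ExitStack

log = []

def work(fail):
    with ExitStack() as guard:
        log.append("open")
        guard.callback(lambda: log.append("close"))  # ~ scope(exit) log ~= "close";
        if fail:
            raise RuntimeError("boom")
        log.append("body")

work(fail=False)
try:
    work(fail=True)
except RuntimeError:
    pass

assert log == ["open", "body", "close", "open", "close"]
```

As in D, the cleanup sits right next to the acquisition instead of at the bottom of a try/finally, which is the readability win the feature is really about.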
Re: Is DMD 2.052 32-bit?
On Tue, 08 Mar 2011 22:23:19 +0100, Trass3r wrote: Yes, but you can compile an x64 dmd yourself on Linux. Is there any how-to? IIRC you have to edit linux.mak to use -m64 instead of -m32. Ok, I wrote a simple bash script:

===BEGIN===
#!/bin/bash

echo building dmd...
cd ./dmd
make -f linux.mak MODEL=-m64
cd ..
if [ ! -f ./dmd/dmd ]
then
    echo failed.
    exit 1
fi

echo building druntime...
cd ./druntime
make -f posix.mak MODEL=64 DMD=../dmd/dmd
cd ..

echo building phobos...
cd ./phobos
make -f posix.mak MODEL=64 DMD=../dmd/dmd
cd ..
===END===

You have to put it in dmd2/src. I got a clean build on Ubuntu 10.04 x86_64 with GCC 4.4.3 and GNU Make 3.81.
Re: 64bit port for FreeBSD
On Wed, 09 Mar 2011 16:52:51 +0100, Gour wrote: On Wed, 9 Mar 2011 02:58:25 -0800 Jonathan M Davis jmdavisp...@gmx.com wrote: As for producing 64-bit binaries with dmd, as of dmd 2.052, if you build with -m64 on Linux, dmd should produce 64-bit binaries just fine. However, I don't know how well it will work for any of the BSDs. Well, when I attempted to build dmd2-2.0.52 port on my x86_64, I was informed that the port is available for i386 only. :-( Sincerely, Gour I'm not sure whether that will do, but you can check it out: http://www.digitalmars.com/pnews/read.php?server=news.digitalmars.com&group=digitalmars.D&artnum=131658
Re: Code Sandwiches
Nick Sabalausky a@a.a wrote in message news:il8rmg$176i$1...@digitalmars.com... But why is it that academic authors have a chronic inability to release any form of text without first cramming it into a goddamn PDF of all things? It's like how my dad tries to email photos by sticking them into a Word document first. WTF's the point? This is one example of why I despise Adobe's predominance: PDF is fucking useless for anything but printing, and no one seems to know it. Isn't it about time the ivory tower learned about Mosaic? The web is more than a PDF-distribution tool...Really! It is! Welcome to the mid-90's. Sheesh.
Re: Code Sandwiches
dsimcha dsim...@yahoo.com wrote in message news:il8nlh$10c1$1...@digitalmars.com... == Quote from bearophile (bearophileh...@lycos.com)'s article One of the things the paper says about D scope guards is: Scope guards do not provide encapsulation. (Rolls eyes.) I feel like this is a standard criticism of language features that's code for I don't like this feature. IIRC they said the same thing about delegates in Java. Without even reading the paper, there are two reasons why this is an idiotic thing to say: 1. D also provides struct destructors, which are a more encapsulated way of accomplishing the same thing. Scope guards are intended for one-off use cases where declaring a type, etc. is just extra overhead and accomplishes nothing. 2. Encapsulation is only a means, not an end in itself. Sometimes people lose sight of this. The end goal is to write correct, efficient, readable, maintainable programs. If increasing encapsulation hurts these goals instead of helping them (as excessive encapsulation as practiced by obsessive-compulsive people does), then it's a Bad Thing. Section 5.2 makes it clear that's not what he's trying to say. Although, the second-last sentence of that section is But [D's scope guards] remain statements, not functions or classes, and thus do not form reusable sandwich encapsulations. Frankly, though, that's true. Of course, they can be used as part of a mechanism for creating a reusable code sandwich, but by themselves they're not an encapsulation and not intended to be. And he never actually says that that's a bad thing. And if there's any subtext implying it's a bad thing, then as far as I can tell it's small enough it may as well not even exist. That said, I wouldn't put much stock in the average academic paper anyway. (Although, from what little I read, this one doesn't seem quite as bad as some. It's actually readable by programmers, which is a nice change from the usual. And I didn't notice any blatantly stupid comments either.
It doesn't seem to make much of a point, but it does still seem to have value in discussing the concept of code sandwiches and enumerating various approaches to them.) But why is it that academic authors have a chronic inability to release any form of text without first cramming it into a goddamn PDF of all things? This is one example of why I despise Adobe's predominance: PDF is fucking useless for anything but printing, and no one seems to know it. Isn't it about time the ivory tower learned about Mosaic? The web is more than a PDF-distribution tool...Really! It is! Welcome to the mid-90's. Sheesh.
Re: Code Sandwiches
On 09.03.2011 22:33, Nick Sabalausky wrote: Nick Sabalausky a@a.a wrote in message news:il8rmg$176i$1...@digitalmars.com... But why is it that academic authors have a chronic inability to release any form of text without first cramming it into a goddamn PDF of all things? It's like how my dad tries to email photos by sticking them into a Word document first. WTF's the point? No it's not. At least PDF is a standard format with free and open viewers on about any platform. And while sticking photos into a Word document is pretty pointless, using PDF for papers does make sense. One thing is that papers are usually published in printed form; the PDFs are more or less a by-product of that. Also, they're usually written with LaTeX (or something similar), and the obvious (digital) formats to publish stuff written in *TeX are Postscript and PDF - I guess you agree that PDF is preferable, as it can be searched etc ;) You can also export *TeX to HTML, but that'll probably fuck up formatting and formulas. So you'd have to use some LaTeX-HTML converter and clean up stuff afterwards to make sure the formatting is OK, the formulas are like they were intended to be (missing a small detail like a ' or an index or whatever will make a formula unusable) etc. This may not be a problem for this specific paper (it's only text, source code and some tables I think), but for many other scientific papers it is. That's the reason why they're mostly published as PDFs. Cheers, - Daniel
Re: Code Sandwiches
On 09.03.2011 22:49, Daniel Gibson wrote: On 09.03.2011 22:33, Nick Sabalausky wrote: Nick Sabalausky a@a.a wrote in message news:il8rmg$176i$1...@digitalmars.com... But why is it that academic authors have a chronic inability to release any form of text without first cramming it into a goddamn PDF of all things? It's like how my dad tries to email photos by sticking them into a Word document first. WTF's the point? No it's not. At least PDF is a standard format with free and open viewers on about any platform. And while sticking photos into a Word document is pretty pointless, using PDF for papers does make sense. One thing is that papers are usually published in printed form; the PDFs are more or less a by-product of that. Also, they're usually written with LaTeX (or something similar), and the obvious (digital) formats to publish stuff written in *TeX are Postscript and PDF - I guess you agree that PDF is preferable, as it can be searched etc ;) You can also export *TeX to HTML, but that'll probably fuck up formatting and formulas. So you'd have to use some LaTeX-HTML converter and clean up stuff afterwards to make sure the formatting is OK, the formulas are like they were intended to be (missing a small detail like a ' or an index or whatever will make a formula unusable) etc. This may not be a problem for this specific paper (it's only text, source code and some tables I think), but for many other scientific papers it is. That's the reason why they're mostly published as PDFs. Cheers, - Daniel One more thing: published papers will probably be cited by other papers or theses. With PDF this is easier; you can write "XYZ, page 42, l. 13" - with HTML pages it's not that easy, you could maybe write "in chapter 3, somewhere in the 5th paragraph" or something like that, but that sucks. Or worse, "on the fourth page in the third paragraph" - and once a new CMS is used that splits pages differently, that is completely meaningless.
Library Documentation
I've started to push more of my smaller work projects through D now, which means I had to dive a lot through the standard library source files, something I've previously complained about. As a result of (my) complaining and being a huge fan of XMind, I decided to try to organize the library for my own references as I encounter new sections of it. I have a decent portion of it in place now. I thought I'd post a link in case it can help anyone else out as well. http://polish.slavic.pitt.edu/~swan/theta/Phobos.xmind
Re: Google Summer of Code 2011 application
On Wed, Mar 9, 2011 at 4:46 AM, Jacob Carlborg d...@me.com wrote: On 2011-03-09 11:11, Trass3r wrote: How about a GUI library. Probably helping with an already existing one, DWT for example. Good idea, but rather improve GtkD or QtD. Too bad that's the general opinion people seem to have about GUI libraries. I don't understand what they don't like about DWT. BTW, I received a patch for DWT which makes it work with D2. Coming from the Java world, I'm a big fan of SWT because it's fast and native, and I started out using DWT, but I was frightened away when I realized that DWT contains a reimplementation of a significant portion of the Java standard library. It just seems like a decent UI framework for D shouldn't require another language's standard library to be ported over, but maybe I'm just critical. Where would I find DWT for D2?
Re: Code Sandwiches
Daniel Gibson metalcae...@gmail.com wrote in message news:il8t79$2t70$2...@digitalmars.com... Am 09.03.2011 22:49, schrieb Daniel Gibson: Am 09.03.2011 22:33, schrieb Nick Sabalausky: Nick Sabalausky a@a.a wrote in message news:il8rmg$176i$1...@digitalmars.com... But why is it that academic authors have a chronic inability to release any form of text without first cramming it into a goddamn PDF of all things? It's like how my dad tries to email photos by sticking them into a Word document first. WTF's the point? No it's not. At least PDF is a standard format with free and open viewers on about any platform. Vaguely free, open and standard. Only in the same sense that swf, doc and docx are free, open and standard. HTML (bad as it may be) still wins here. And while sticking photos into a Word document is pretty pointless using PDF for papers does make sense. One thing is that papers are usually published in printed form, Still? the PDFs are more or less a by-product of that. Also they're usually written with LaTeX (or something similar) and the obvious (digital) formats to publish stuff written in *TeX are Postscript and PDF - I guess you agree that PDF is preferable, as it can be searched etc ;) *Some* PDFs can be searched. You can also export *TeX to HTML, but that'll probably fuck up formatting and formulas. So you'd have to use some LaTeX-HTML converter and clean up stuff afterwards to make sure the formatting is OK, the formulas are like they were intended to be (missing a small detail like a ' or an index or whatever will make a formula unusable) etc.. So after 15 years there still isn't a good Latex-HTML converter? Sounds more like the matter is a lack of interest in using anything other than PDF rather than a lack of a good Latex-HTML converter. This may not be a problem for this specific paper (it's only text, sourcecode and some tables I think), but for many other scientific papers it is. That's the reason why they're mostly published as PDFs. 
Cheers, - Daniel One more thing: Published papers will probably be cited by other papers or theses. With PDF this is easier, you can write XYZ, page 42, l 13 - with HTML pages it's not that easy, you could maybe write in chapter 3 somewhere in the 5th paragraph or something like that, but that sucks. Or worse on the fourth page in the third paragraph and once a new CMS is used that splits pages differently that is completely meaningless.. These formal papers are divided into sections and subsections, plus HTML supports links and anchors, and even supports disabled word wrapping if that's really needed, so those are non-issues.
Question about D, garbage collection and fork()
Where I work, we find it very useful to start a process, load data, then fork() to parallelize. Our data is large, such that we'd run out of memory trying to run a complete copy on each core. Once the process is loaded, we don't need that much writable memory, so fork is appealing to share the loaded pages. It's possible to use mmap for some of the data, but inconvenient for other data, even though it's read-only at runtime. So here's my question: In D, if I create a lot of data in the garbage-collected heap that will be read-only, then fork the process, will I get the benefit of the operating system's copy-on-write and only use a small amount of additional memory per process? In case you're wondering why I wouldn't use threading, one argument is that if you have a bug and the process crashes, you only lose one process instead of N threads. That's actually useful for robustness. Thoughts?
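The load-then-fork pattern described above can be sketched in code. This is a hypothetical illustration in Python (standing in for the D program under discussion), showing the parent building the large read-only structure once and each forked child reading its share of the pages:

```python
# Hypothetical sketch of the load-then-fork pattern: build the large
# read-only structure once in the parent, then fork() so the children
# share those pages copy-on-write. Python stands in for D here purely
# for illustration; the data set and worker function are invented.
import os

def load_data():
    # Stand-in for the expensive load step; imagine gigabytes of
    # read-only data built before forking.
    return list(range(100_000))

def worker(data, idx, nworkers):
    # Children only *read* the shared pages, so in principle the kernel
    # never copies them. Caveat: a GC'd runtime may write mark bits (or,
    # in CPython's case, reference counts) into those pages, dirtying
    # them and forcing copies - exactly the crux of the question about
    # D's garbage-collected heap.
    return sum(data[idx::nworkers]) % 251  # exit status must fit in a byte

def parallel_sums(nworkers=4):
    data = load_data()              # pages populated once, in the parent
    pids = []
    for idx in range(nworkers):
        pid = os.fork()             # children inherit the pages COW
        if pid == 0:                # child: compute and exit immediately
            os._exit(worker(data, idx, nworkers))
        pids.append(pid)
    results = []
    for pid in pids:                # collect each child's exit status
        _, status = os.waitpid(pid, 0)
        results.append(os.WEXITSTATUS(status))
    return results

if __name__ == "__main__":
    print(parallel_sums())
```

A crash in one child here loses only that child's slice of the work, which is the robustness argument made above for fork over threads.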
Re: Code Sandwiches
Am 09.03.2011 23:38, schrieb Nick Sabalausky: Daniel Gibson metalcae...@gmail.com wrote in message news:il8t79$2t70$2...@digitalmars.com... Am 09.03.2011 22:49, schrieb Daniel Gibson: Am 09.03.2011 22:33, schrieb Nick Sabalausky: Nick Sabalausky a@a.a wrote in message news:il8rmg$176i$1...@digitalmars.com... But why is it that academic authors have a chronic inability to release any form of text without first cramming it into a goddamn PDF of all things? It's like how my dad tries to email photos by sticking them into a Word document first. WTF's the point? No it's not. At least PDF is a standard format with free and open viewers on about any platform. Vaguely free, open and standard. Only in the same sense that swf, doc and docx are free, open and standard. HTML (bad as it may be) still wins here. No, PDF is an ISO standard, swf and doc aren't and docx isn't either, because it doesn't really conform with the OOXML ISO standard.. As mentioned before: there are free and open viewers for PDF for (almost?) all platforms that work reasonably well. Can't say the same about doc(x) or swf.. That HTML is rendered almost the same on different browsers is a pretty recent development as well... Nevertheless HTML doesn't have as much formatting possibilities as LaTeX, especially for formulas, so you'd end up using a lot of images which is suboptimal. (Yeah I know there's MathML, but AFAIK it's not properly supported by all browsers). And while sticking photos into a Word document is pretty pointless using PDF for papers does make sense. One thing is that papers are usually published in printed form, Still? I think so. And even if they aren't they're formatted like that anyway. the PDFs are more or less a by-product of that. Also they're usually written with LaTeX (or something similar) and the obvious (digital) formats to publish stuff written in *TeX are Postscript and PDF - I guess you agree that PDF is preferable, as it can be searched etc ;) *Some* PDFs can be searched. 
Most can, the others are - most probably deliberately - broken. You can do the same with HTML if you want, just use images instead of real text.. You can also export *TeX to HTML, but that'll probably fuck up formatting and formulas. So you'd have to use some LaTeX-HTML converter and clean up stuff afterwards to make sure the formatting is OK, the formulas are like they were intended to be (missing a small detail like a ' or an index or whatever will make a formula unusable) etc.. So after 15 years there still isn't a good Latex-HTML converter? Sounds more like the matter is a lack of interest in using anything other than PDF rather than a lack of a good Latex-HTML converter. I don't know. I think I don't have to tell someone who still uses Firefox2 that people don't have the motivation to try new software all the time just because it may finally be usable ;) This may not be a problem for this specific paper (it's only text, sourcecode and some tables I think), but for many other scientific papers it is. That's the reason why they're mostly published as PDFs. Cheers, - Daniel One more thing: Published papers will probably be cited by other papers or theses. With PDF this is easier, you can write XYZ, page 42, l 13 - with HTML pages it's not that easy, you could maybe write in chapter 3 somewhere in the 5th paragraph or something like that, but that sucks. Or worse on the fourth page in the third paragraph and once a new CMS is used that splits pages differently that is completely meaningless.. These formal papers are divided into sections and subsections, plus HTML supports links and anchors, and even supports disabled word wrapping if that's really needed, so those are non-issues. If anchors etc are used.. fine. But you can't take that for granted.
Re: LLVM 3.0 type system changes
This is fantastic news! Many thanks for all your hard work. Not only does LDC2 seem to be coming closer to supporting the current D2, there is now SDC too (I must admit that a self-hosting compiler (front end + LLVM back-end in D, I mean) is a big statement for a language in my view). Please keep up the good work! Although I don't post on this NG much, I've been following it for a while, and if the increased diversification in the people participating here verbally/code/project-wise is representative, then I can only see the momentum that D as a language has recently been gathering as very exciting. I work at a company where I have a big influence on the programming languages to be used and have been porting some of our code to D as an experiment/me learning the language/reading TDPL - and I'm very pleased with the results so far...to the point that if it were not for a more broadly developed Phobos (database access, messaging framework bindings (zeroMQ, Google protocol buffers, Thrift), cross-platform GUI toolkits, etc.) I would be keen to push the language for production usage at this point (knowing the existing bugs in the compiler, but where I feel one can work around these). Anyway, I just want this to be a message of support for D and great appreciation of what has been achieved so far. thanks, fil On 08/03/2011 01:54, Bernard Helyer wrote: On Mon, 07 Mar 2011 20:03:36 +, filgood wrote: as described here: http://nondot.org/sabre/LLVMNotes/TypeSystemRewrite.txt Btw, what is the status of the D2 LLVM compiler? You're probably wondering about LDC2, but I'll chip in with SDC's ( https://github.com/bhelyer/SDC ) status here: On the road to some kind of 0.1, but a lot of work to be done -- it should land some time this year, however. Keeping current with DMD releases, current with LLVM releases.
Re: Code Sandwiches
On 03/09/2011 09:24 PM, dsimcha wrote: 2. Encapsulation is only a means, not an end in itself. Sometimes people lose sight of this. The end goal is to write correct, efficient, readable, maintainable programs. If increasing encapsulation hurts these goals instead of helping them (as excessive encapsulation as practiced by obsessive-compulsive people does), then it's a Bad Thing. Oy, que yes! Denis -- _ vita es estrany spir.wikidot.com
Re: Code Sandwiches
On 03/09/2011 10:30 PM, Nick Sabalausky wrote: But why is it that academic authors have a chronic inability to release any form of text without first cramming it into a goddamn PDF of all things? This is one example of why I despise Adobe's predominance: PDF is fucking useless for anything but printing, and no one seems to know it. Isn't it about time the ivory tower learned about Mosaic? The web is more than a PDF-distribution tool...Really! It is! Welcome to the mid-90's. Sheesh. Agreed. Actually, AFAIK, PDF was born as a paper-printing format (PS legacy, in fact). That's why it systematically has a white background. (Thank godS, Ubuntu's doc viewer recently got an inverse video mode. Unthank gods, white on black is far from being the most legible color combination. Anyway, better than the opposite...) Denis -- _ vita es estrany spir.wikidot.com
Re: Is DMD 2.052 32-bit?
On 09/03/2011 06:55, Walter Bright wrote: On 3/8/2011 1:23 PM, Trass3r wrote: Yes, but you can compile an x64 dmd yourself on Linux. Is there any how to? IIRC you have to edit linux.mak to use -m64 instead of -m32. It has worked in the past, but the 64 bit build is not regularly tested. Does the toolchain compile on windows in 64 bit too? It's awesome you're finally starting the transition :)
Re: Is DMD 2.052 32-bit?
On Wednesday, March 09, 2011 16:23:15 Nebster wrote: On 09/03/2011 06:55, Walter Bright wrote: On 3/8/2011 1:23 PM, Trass3r wrote: Yes, but you can compile an x64 dmd yourself on Linux. Is there any how to? IIRC you have to edit linux.mak to use -m64 instead of -m32. It has worked in the past, but the 64 bit build is not regularly tested. Does the toolchain compile on windows in 64 bit too? It's awesome you're finally starting the transition :) No. Regardless of whether you could build dmd itself as 64-bit (which is questionable), the linker is only 32-bit, and since it's written in assembly, you _definitely_ can't compile that as 64-bit. So, you don't have 64-bit on Windows - either the dmd binary or the binaries that it produces. And honestly, I'd be _very_ leery - make that _extremely leery - of using a 64-bit build of dmd on _any_ OS before Walter actually makes sure that it works as 64-bit and maintains it as such. Much as I'd love to have a 64-bit binary of dmd, I don't think that the gain is even vaguely worth the risk at this point. - Jonathan M Davis
Re: Is DMD 2.052 32-bit?
On 10/03/2011 00:30, Jonathan M Davis wrote: On Wednesday, March 09, 2011 16:23:15 Nebster wrote: On 09/03/2011 06:55, Walter Bright wrote: On 3/8/2011 1:23 PM, Trass3r wrote: Yes, but you can compile an x64 dmd yourself on Linux. Is there any how to? IIRC you have to edit linux.mak to use -m64 instead of -m32. It has worked in the past, but the 64 bit build is not regularly tested. Does the toolchain compile on windows in 64 bit too? It's awesome you're finally starting the transition :) No. Regardless of whether you could build dmd itself as 64-bit (which is questionable), the linker is only 32-bit, and since it's written in assembly, you _definitely_ can't compile that as 64-bit. So, you don't have 64-bit on Windows - either the dmd binary or the binaries that it produces. And honestly, I'd be _very_ leery - make that _extremely_ leery - of using a 64-bit build of dmd on _any_ OS before Walter actually makes sure that it works as 64-bit and maintains it as such. Hehe, thanks for the response. I know about the linker but I was looking into the new version of UniLink which has support for D now afaik!
It's not that much of a problem for me at the moment anyway :) Thanks, Nebster
Re: Code Sandwiches
spir: (Thank godS, Ubuntu's doc viewer recently got an inverse video mode. Unthank gods, white on black is far from being the most legible color combination. Anyway, better than the opposite...) Two of the most important PDF viewers have an option to change the background color of the pages to the color you want :-) Bye, bearophile
Re: LLVM 3.0 type system changes
On Tue, Mar 8, 2011 at 9:54 AM, spir denis.s...@gmail.com wrote: On 03/08/2011 03:33 PM, Caligo wrote: On Tue, Mar 8, 2011 at 12:29 AM, Bernard Helyerb.hel...@gmail.com wrote: On Tue, 08 Mar 2011 00:15:54 -0600, Caligo wrote: On Mon, Mar 7, 2011 at 11:34 PM, Bernard Helyerb.hel...@gmail.com wrote: On Mon, 07 Mar 2011 20:41:39 -0600, Caligo wrote: Do we really need another D compiler that doesn't work? Name me a working D2 compiler that doesn't have a front-end based based on DMD. Furthermore, name me an in progress independent implementation further along than SDC. The only candidate is Dil. SDC _will_ be finished, mark my words. Regards, Some one who remembers why they don't use the NG that much. Yes, I know about dil, but I wonder why you chose not to contribute to dil instead of starting a new project. AFAIK dil is D1. I know about Dil. aziz is great, the project is great. However, I know and want to use D2/Phobos. Plus, if I'm going to spend years on a project, I may as well use stuff I like. Furthermore, SDC didn't start out as a full compiler. Just playing around with lexing/parsing D. NIH syndrome, too. I was aware of your NIH syndrome, and that's why I have a problem with this. The main reason you are doing this is to serve your own needs, to make yourself feel good, to earn some kind of recognition, but not to serve the community in any meaningful way. Even if the project fails, it will look good on your resume because you took the time to develop a compiler. This kind of behavior is very common in the FOSS community, and it has become a disease. It's the reason why there are thousands of dead software projects that were never completed and maintained. I spoke with Aziz two years ago, and he had the same exact attitude. His excuse? He hates C++ and he thinks Walter's front end is horrible. So instead of contributing to ldc, he continued to work on his own compiler. After years of hard work, I doubt many would be willing to call dil a success. 
Instead of wasting all his time and energy on dil, Aziz could have contributed to ldc and ldc would have been in a much better shape today. And maybe, just maybe, today we would have a production quality free and open source D compiler that just works. Good luck trying to compile dil, ldc, etc, let alone have them compile your D code and produce an executable that runs the way it should. I just don't understand why people can't work together. Life is too damn short. I for one long for D tools in D. Thank you very much Aziz and Bernard for your efforts in this direction. I consider the initial choice of building a C++ front-end to be a dead-end. Even more since many D programmers precisely come to D out of C++ disgust. What would you, Caligo, recommend as an initiative for D to have tools the D community can easily and happily use *and* contribute to? What would be the ideal direction, process, sequence of actions, in your views? How would you personally engage in this, according to your own moral stances expressed above? Denis -- _ vita es estrany spir.wikidot.com I want to see people work together and not branch off and do their own thing when there is no good reason for it. We don't need to keep reinventing the wheel, especially when to this day we don't have a viable, production quality, free and open source alternative to DMD for D2. GDC could have been ready a long time ago, but it's not, because development is moving very slowly. So why start a new project and make an announcement about it when you could be helping out with GDC?
Re: LLVM 3.0 type system changes
On Tue, Mar 8, 2011 at 9:48 AM, Iain Buclaw ibuc...@ubuntu.com wrote: == Quote from Caligo (iteronve...@gmail.com)'s article --bcaec51a83ee693a30049df97ef8 Content-Type: text/plain; charset=ISO-8859-1 On Tue, Mar 8, 2011 at 12:29 AM, Bernard Helyer b.hel...@gmail.com wrote: On Tue, 08 Mar 2011 00:15:54 -0600, Caligo wrote: On Mon, Mar 7, 2011 at 11:34 PM, Bernard Helyer b.hel...@gmail.com wrote: On Mon, 07 Mar 2011 20:41:39 -0600, Caligo wrote: Do we really need another D compiler that doesn't work? Name me a working D2 compiler that doesn't have a front-end based based on DMD. Furthermore, name me an in progress independent implementation further along than SDC. The only candidate is Dil. SDC _will_ be finished, mark my words. Regards, Some one who remembers why they don't use the NG that much. Yes, I know about dil, but I wonder why you chose not to contribute to dil instead of starting a new project. AFAIK dil is D1. I know about Dil. aziz is great, the project is great. However, I know and want to use D2/Phobos. Plus, if I'm going to spend years on a project, I may as well use stuff I like. Furthermore, SDC didn't start out as a full compiler. Just playing around with lexing/parsing D. NIH syndrome, too. I was aware of your NIH syndrome, and that's why I have a problem with this. The main reason you are doing this is to serve your own needs, to make yourself feel good, to earn some kind of recognition, but not to serve the community in any meaningful way. Even if the project fails, it will look good on your resume because you took the time to develop a compiler. This kind of behavior is very common in the FOSS community, and it has become a disease. It's the reason why there are thousands of dead software projects that were never completed and maintained. IMHO, there's no such thing as a completed project. And if there is, then it will need maintaining in 6-12 months time regardless. Libraries change, systems change, compilers change. 
Ever tried compiling a 'finished' project written 5 years ago with a modern GCC compiler? It can be rather tricky, especially if said project depended on certain mis-features of the language implementation at the time. Sure, I agree, many software projects are constantly changing and improving. But many software projects also have released versions that one could say are complete; the user could use that version for months and perhaps years with no problem. Our university Unix servers have gcc 3.4.4, which is about 7 years old. Hundreds of students use it every day with no problem. It would be nice to have the latest version, but it works and does what it's supposed to. Which version of GDC do you feel comfortable using for the next 12 months to compile your D2 code? How about ldc2?
Re: LLVM 3.0 type system changes
Am 10.03.2011 02:02, schrieb Caligo: On Tue, Mar 8, 2011 at 9:54 AM, spir denis.s...@gmail.com wrote: On 03/08/2011 03:33 PM, Caligo wrote: On Tue, Mar 8, 2011 at 12:29 AM, Bernard Helyerb.hel...@gmail.com wrote: On Tue, 08 Mar 2011 00:15:54 -0600, Caligo wrote: On Mon, Mar 7, 2011 at 11:34 PM, Bernard Helyerb.hel...@gmail.com wrote: On Mon, 07 Mar 2011 20:41:39 -0600, Caligo wrote: Do we really need another D compiler that doesn't work? Name me a working D2 compiler that doesn't have a front-end based based on DMD. Furthermore, name me an in progress independent implementation further along than SDC. The only candidate is Dil. SDC _will_ be finished, mark my words. Regards, Some one who remembers why they don't use the NG that much. Yes, I know about dil, but I wonder why you chose not to contribute to dil instead of starting a new project. AFAIK dil is D1. I know about Dil. aziz is great, the project is great. However, I know and want to use D2/Phobos. Plus, if I'm going to spend years on a project, I may as well use stuff I like. Furthermore, SDC didn't start out as a full compiler. Just playing around with lexing/parsing D. NIH syndrome, too. I was aware of your NIH syndrome, and that's why I have a problem with this. The main reason you are doing this is to serve your own needs, to make yourself feel good, to earn some kind of recognition, but not to serve the community in any meaningful way. Even if the project fails, it will look good on your resume because you took the time to develop a compiler. This kind of behavior is very common in the FOSS community, and it has become a disease. It's the reason why there are thousands of dead software projects that were never completed and maintained. I spoke with Aziz two years ago, and he had the same exact attitude. His excuse? He hates C++ and he thinks Walter's front end is horrible. So instead of contributing to ldc, he continued to work on his own compiler. 
After years of hard work, I doubt many would be willing to call dil a success. Instead of wasting all his time and energy on dil, Aziz could have contributed to ldc and ldc would have been in a much better shape today. And maybe, just maybe, today we would have a production quality free and open source D compiler that just works. Good luck trying to compile dil, ldc, etc, let alone have them compile your D code and produce an executable that runs the way it should. I just don't understand why people can't work together. Life is too damn short. I for one long for D tools in D. Thank you very much Aziz and Bernard for your efforts in this direction. I consider the initial choice of building a C++ front-end to be a dead-end. Even more since many D programmers precisely come to D out of C++ disgust. What would you, Caligo, recommand as an initiative for D to have tools the D community can easily and happily use *and* contribute to? What would be the ideal direction, process, sequence of actions, in your views? How would you personly engage in this, according to your own moral stances expressed above? Denis -- _ vita es estrany spir.wikidot.com I want to see people work together and not branch off and do their own thing when there is no good reason for it. We don't need to keep reinventing the wheel, specially when to this day we don't have a viable, production quality, free and open source alternative to DMD for D2. GDC could have been ready a long time ago, but it's not because development is moving very slow. So why start a new project and make an announcement about it when you could be helping out with GDC? Why discuss the usefulness of n D compilers when *you* could be helping out with GDC?
Re: Is DMD 2.052 32-bit?
On 3/9/2011 4:30 PM, Jonathan M Davis wrote: Much as I'd love to have a 64-bit binary of dmd, I don't think that the gain is even vaguely worth the risk at this point. What is the gain? The only thing I can think of is some 64 bit OS distributions are hostile to 32 bit binaries.
Re: Is DMD 2.052 32-bit?
On Wednesday 09 March 2011 17:56:13 Walter Bright wrote: On 3/9/2011 4:30 PM, Jonathan M Davis wrote: Much as I'd love to have a 64-bit binary of dmd, I don't think that the gain is even vaguely worth the risk at this point. What is the gain? The only thing I can think of is some 64 bit OS distributions are hostile to 32 bit binaries. Well, the fact that you then have a binary native to your system is obviously a gain (and is likely the one which people will cite most often), and that _does_ count for quite a lot. However, regardless of that, it's actually pretty easy to get dmd to run out of memory when compiling if you do much in the way of CTFE or template stuff. Granted, fixing some of the worst memory-related bugs in dmd will go a _long_ way towards fixing that, but even if they are, you're theoretically eventually supposed to be able to do pretty much anything at compile time which you can do at runtime in SafeD. And using enough memory that you require the 64-bit address space would be one of the things that you can do in SafeD when compiling for 64-bit. As long as the compiler is only 32-bit, you can't do that at compile time even though you can do it at runtime (though the current limitations of CTFE do reduce the problem in that you can't do a lot of stuff at compile time period). In any case, the fact that dmd runs out of memory fairly easily makes having a 64-bit version which could use all of my machine's memory really attractive. And honestly, having an actual 64-bit binary to run on a 64-bit system is something that people generally want, and it _is_ definitely a problem to get a 32-bit binary into the 64-bit release of a Linux distro. Truth be told, I would have thought that it would be a given that there would be a 64-bit version of dmd when going to support 64-bit compilation, and I was quite surprised when that was not your intention. - Jonathan M Davis
Re: Code Sandwiches
On Wednesday 09 March 2011 13:30:27 Nick Sabalausky wrote: But why is it that academic authors have a chronic inability to release any form of text without first cramming it into a goddamn PDF of all things? This is one example of why I despise Adobe's predominance: PDF is fucking useless for anything but printing, and no one seems to know it. Isn't it about time the ivory tower learned about Mosaic? The web is more than a PDF-distribution tool...Really! It is! Welcome to the mid-90's. Sheesh. And what format would you _want_ it in? PDF is _way_ better than having a file for any particular word processor. What else would you pick? HTML? Yuck. How would _that_ be any better than a PDF? These are _papers_ after all, not some web article. They're either written up in a word processor or with latex. Distributing them as PDFs makes perfect sense. And yes, most of these papers are published in print format as their main form of release. You're usually lucky to be able to get a PDF format instead of having to have bought the appropriate magazine or book of papers from a particular conference. - Jonathan M Davis
GZip File Reading
I noticed last night that Phobos actually has all the machinery required for reading gzipped files, buried in etc.c.zlib. I've wanted a high-level D interface for reading and writing compressed files with an API similar to normal file I/O for a while. I'm thinking about what the easiest/best design would be. At a high level there are two designs: 1. Hack std.stdio.File to support gzipped formats. This would allow an identical interface for normal and compressed I/O. It would also allow reuse of things like ByLine. However, it would require major refactoring of File to decouple it from the C file I/O routines so that it could call either the C or GZip ones depending on how it's configured. Probably, it would make sense to make an interface that wraps I/O functions and make an instance for C and one for gzip, with bzip2 and other goodies possibly being added later. 2. Write something completely separate. This would keep std.stdio.File doing one thing well (wrapping C file I/O) but would be more of a PITA for the user and possibly result in code duplication. I'd like to get some comments on what an appropriate API design and implementation for reading and writing gzipped files would be. Two key requirements are that it must be as easy to use as std.stdio.File and it must be easy to extend to support other single-file compression formats like bz2.
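For a sense of the API shape being proposed, here is a sketch using Python's standard-library gzip module as a stand-in for a hypothetical D wrapper over etc.c.zlib: a compressed file is opened and iterated by line exactly as a plain file would be, which is the "as easy to use as std.stdio.File" requirement (the function names are invented for illustration):

```python
# Sketch of the proposed API shape, with Python's stdlib gzip module
# standing in for a D wrapper over etc.c.zlib. This follows design 2
# from the post: a separate wrapper that mimics the plain-file
# interface rather than modifying the existing File type.
import gzip
import os
import tempfile

def write_gzip_lines(path, lines):
    # gzip.open in text mode behaves like the built-in open(),
    # compressing transparently on write.
    with gzip.open(path, "wt", encoding="utf-8") as f:
        for line in lines:
            f.write(line + "\n")

def read_gzip_lines(path):
    # The same loop a user would write for an uncompressed file -
    # the ByLine-style usage the post wants to preserve.
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "demo.txt.gz")
    write_gzip_lines(path, ["hello", "compressed", "world"])
    print(read_gzip_lines(path))
```

Swapping the gzip calls for bz2 ones would leave the interface unchanged, which suggests the extensibility requirement is satisfied by keeping the compression backend behind a common open/read/write surface.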
Re: GZip File Reading
Am 10.03.2011 05:53, schrieb dsimcha: I noticed last night that Phobos actually has all the machinations required for reading gzipped files, buried in etc.c.zlib. I've wanted a high-level D interface for reading and writing compressed files with an API similar to normal file I/O for a while. I'm thinking about what the easiest/best design would be. At a high level there are two designs: 1. Hack std.stdio.file to support gzipped formats. This would allow an identical interface for normal and compressed I/O. It would also allow reuse of things like ByLine. However, it would require major refactoring of File to decouple it from the C file I/O routines so that it could call either the C or GZip ones depending on how it's configured. Probably, it would make sense to make an interface that wraps I/O functions and make an instance for C and one for gzip, with bzip2 and other goodies possibly being added later. 2. Write something completely separate. This would keep std.stdio.File doing one thing well (wrapping C file I/O) but would be more of a PITA for the user and possibly result in code duplication. I'd like to get some comments on what an appropriate API design and implementation for writing gzipped files would be. Two key requirements are that it must be as easy to use as std.stdio.File and it must be easy to extend to support other single-file compression formats like bz2. Maybe a proper stream API would help. It could provide ByLine etc, could be used for any kind of compression format (as long as an appropriate input-stream is provided), ... (analogous for writing) Cheers, - Daniel
Re: GZip File Reading
On Wednesday 09 March 2011 21:10:59 Daniel Gibson wrote: Am 10.03.2011 05:53, schrieb dsimcha: I noticed last night that Phobos actually has all the machinations required for reading gzipped files, buried in etc.c.zlib. I've wanted a high-level D interface for reading and writing compressed files with an API similar to normal file I/O for a while. I'm thinking about what the easiest/best design would be. At a high level there are two designs: 1. Hack std.stdio.file to support gzipped formats. This would allow an identical interface for normal and compressed I/O. It would also allow reuse of things like ByLine. However, it would require major refactoring of File to decouple it from the C file I/O routines so that it could call either the C or GZip ones depending on how it's configured. Probably, it would make sense to make an interface that wraps I/O functions and make an instance for C and one for gzip, with bzip2 and other goodies possibly being added later. 2. Write something completely separate. This would keep std.stdio.File doing one thing well (wrapping C file I/O) but would be more of a PITA for the user and possibly result in code duplication. I'd like to get some comments on what an appropriate API design and implementation for writing gzipped files would be. Two key requirements are that it must be as easy to use as std.stdio.File and it must be easy to extend to support other single-file compression formats like bz2. Maybe a proper stream API would help. It could provide ByLine etc, could be used for any kind of compression format (as long as an appropriate input-stream is provided), ... (analogous for writing) That was my thought. We really need proper streams... The other potential issue with compressed files is that they can contain directories and such. A gzipped/bzipped file is not necessarily a file that you can read, even once it's been uncompressed. 
That may or may not matter for this particular application of them, but it is something to be aware of. - Jonathan M Davis
Re: Is DMD 2.052 32-bit?
On Wed, 9 Mar 2011 19:08:04 -0800 Jonathan M Davis jmdavisp...@gmx.com wrote: Truth be told, I would have thought that it would be a given that there would be a 64-bit version of dmd when going to support 64-bit compilation and was quite surprised when that was not your intention. +1 Sincerely, Gour -- “In the material world, conceptions of good and bad are all mental speculations…” (Sri Caitanya Mahaprabhu) http://atmarama.net | Hlapicina (Croatia) | GPG: CDBF17CA
Re: Code Sandwiches
Jonathan M Davis jmdavisp...@gmx.com wrote in message news:mailman.2409.1299728378.4748.digitalmar...@puremagic.com... On Wednesday 09 March 2011 13:30:27 Nick Sabalausky wrote: But why is it that academic authors have a chronic inability to release any form of text without first cramming it into a goddamn PDF of all things? This is one example of why I despise Adobe's predominance: PDF is fucking useless for anything but printing, and no one seems to know it. Isn't it about time the ivory tower learned about Mosaic? The web is more than a PDF-distribution tool...Really! It is! Welcome to the mid-90's. Sheesh. And what format would you _want_ it in? PDF is _way_ better than having a file for any particular word processor. What else would you pick? HTML? Yuck. How would _that_ be any better than a PDF? These are _papers_ after all, not some web article. They're either written up in a word processor or with latex. Distributing them as PDFs makes perfect sense. They're text. With minor formatting. That alone makes html better. Html is lousy for a lot of things, but formatted text is the one thing it's always been perfectly good at. And frankly I think I'd *rather* go with pretty much any word processing format if the only other option was pdf. Of course, show me a pdf viewer that's actually worth a damn for viewing documents on a PC instead of just printing, and maybe I could be persuaded to not mind so much. So far I've used (as far as I can think of, I know there's been others), Acrobat Reader (which I don't even allow on my computer anymore), the one built into OSX, and FoxIt. And yes, most of these papers are published in print format as their main form of release. You're usually lucky to be able to get a PDF format instead of having to have bought the appropriate magazine or book of papers from a particular conference. 
I'm all too well aware how much academia considers us unwashed masses lucky to ever be granted the privilege to so much as glance upon any of their pristine excellence.
Re: Code Sandwiches
On Wednesday 09 March 2011 22:18:53 Nick Sabalausky wrote: Jonathan M Davis jmdavisp...@gmx.com wrote in message news:mailman.2409.1299728378.4748.digitalmar...@puremagic.com... On Wednesday 09 March 2011 13:30:27 Nick Sabalausky wrote: But why is it that academic authors have a chronic inability to release any form of text without first cramming it into a goddamn PDF of all things? This is one example of why I despise Adobe's predominance: PDF is fucking useless for anything but printing, and no one seems to know it. Isn't it about time the ivory tower learned about Mosaic? The web is more than a PDF-distribution tool...Really! It is! Welcome to the mid-90's. Sheesh. And what format would you _want_ it in? PDF is _way_ better than having a file for any particular word processor. What else would you pick? HTML? Yuck. How would _that_ be any better than a PDF? These are _papers_ after all, not some web article. They're either written up in a word processor or with latex. Distributing them as PDFs makes perfect sense. They're text. With minor formatting. That alone makes html better. Html is lousy for a lot of things, but formatted text is the one thing it's always been perfectly good at. And frankly I think I'd *rather* go with pretty much any word processing format if the only other option was pdf. I'm afraid that I don't understand at all. The only time that I would consider html better than a pdf is if the pdf isn't searchable (and most papers _are_ searchable). And I _definitely_ don't like dealing with whatever word processor format someone happens to be using. PDF is nice and universal. I don't have to worry about whether I have the appropriate fonts or if I even have a program which can read their word processor format of choice. I don't really have any gripes with PDF at all. - Jonathan M Davis
Re: Code Sandwiches
Daniel Gibson metalcae...@gmail.com wrote in message news:il90m3$2t70$3...@digitalmars.com... Am 09.03.2011 23:38, schrieb Nick Sabalausky: Daniel Gibson metalcae...@gmail.com wrote in message news:il8t79$2t70$2...@digitalmars.com... Am 09.03.2011 22:49, schrieb Daniel Gibson: Am 09.03.2011 22:33, schrieb Nick Sabalausky: Nick Sabalausky a@a.a wrote in message news:il8rmg$176i$1...@digitalmars.com... But why is it that academic authors have a chronic inability to release any form of text without first cramming it into a goddamn PDF of all things? It's like how my dad tries to email photos by sticking them into a Word document first. WTF's the point? No it's not. At least PDF is a standard format with free and open viewers on about any platform. Vaguely free, open and standard. Only in the same sense that swf, doc and docx are free, open and standard. HTML (bad as it may be) still wins here. No, PDF is an ISO standard, swf and doc aren't and docx isn't either, because it doesn't really conform with the OOXML ISO standard.. Yea, well, it's still heavily rooted in Adobe. As mentioned before: there are free and open viewers for PDF for (almost?) all platforms that work reasonably well. Reasonably well only as far as viewing pdfs on a pc *ever* works reasonably well. Can't say the same about doc(x) or swf.. No argument here. Never been a fan of doc or swf anyway. That HTML is rendered almost the same on different browsers is a pretty recent development as well... They're documents. They have no need for perfectly consistent rendering. Hell, these are exactly the sorts of things html was *created* for. It was *intended* for people to view documents however they want to view them. Nevertheless HTML doesn't have as much formatting possibilities as LaTeX, especially for formulas, so you'd end up using a lot of images which is suboptimal. I don't see what's wrong with using images for formulas. 
As for other types of formatting, how much does a document (that isn't pretending to be real software or a multimedia experience) really need? the PDFs are more or less a by-product of that. Also they're usually written with LaTeX (or something similar) and the obvious (digital) formats to publish stuff written in *TeX are Postscript and PDF - I guess you agree that PDF is preferable, as it can be searched etc ;) *Some* PDFs can be searched. Most can, the others are - most probably deliberately - broken. You can do the same with HTML if you want, just use images instead of real text.. Yea, you can do the same with html, but nobody ever does. OTOH, I've come across plenty of pdfs with text that isn't really text. But I'll grant that's not much of a reason against the content producer choosing pdf, since they're fully capable of choosing to make it searchable. You can also export *TeX to HTML, but that'll probably fuck up formatting and formulas. So you'd have to use some LaTeX-HTML converter and clean up stuff afterwards to make sure the formatting is OK, the formulas are like they were intended to be (missing a small detail like a ' or an index or whatever will make a formula unusable) etc.. So after 15 years there still isn't a good Latex-HTML converter? Sounds more like the matter is a lack of interest in using anything other than PDF rather than a lack of a good Latex-HTML converter. I don't know. I think I don't have to tell someone who still uses Firefox2 that people don't have the motivation to try new software all the time just because it may finally be usable ;) I don't use FF2 because I like it. And I *certainly* don't use it for lack of trying all the alternatives. I have a *huge* amount of interest in a variant of FF3 or SRWare Iron or even IE that gets rid of all the crap that I don't have to deal with in FF2. The problem is, I'm the *only* one that has such interest. One more thing: Published papers will probably be cited by other papers or theses. 
With PDF this is easier, you can write XYZ, page 42, l 13 - with HTML pages it's not that easy, you could maybe write in chapter 3 somewhere in the 5th paragraph or something like that, but that sucks. Or worse on the fourth page in the third paragraph and once a new CMS is used that splits pages differently that is completely meaningless.. These formal papers are divided into sections and subsections, plus HTML supports links and anchors, and even supports disabled word wrapping if that's really needed, so those are non-issues. If anchors etc are used.. fine. But you can't take that for granted. Strawman argument. We're talking about the party that *releases* the document choosing pdf, not the party viewing it. The person putting the paper out is in *exactly* the position to include anchors. Of course they can take that ability for granted. Additionally, if one of the main reasons they choose pdf is because
Re: Code Sandwiches
Jonathan M Davis jmdavisp...@gmx.com wrote in message news:mailman.2411.1299739219.4748.digitalmar...@puremagic.com... On Wednesday 09 March 2011 22:18:53 Nick Sabalausky wrote: Jonathan M Davis jmdavisp...@gmx.com wrote in message news:mailman.2409.1299728378.4748.digitalmar...@puremagic.com... On Wednesday 09 March 2011 13:30:27 Nick Sabalausky wrote: But why is it that academic authors have a chronic inability to release any form of text without first cramming it into a goddamn PDF of all things? This is one example of why I despise Adobe's predominance: PDF is fucking useless for anything but printing, and no one seems to know it. Isn't it about time the ivory tower learned about Mosaic? The web is more than a PDF-distribution tool...Really! It is! Welcome to the mid-90's. Sheesh. And what format would you _want_ it in? PDF is _way_ better than having a file for any particular word processor. What else would you pick? HTML? Yuck. How would _that_ be any better than a PDF? These are _papers_ after all, not some web article. They're either written up in a word processor or with latex. Distributing them as PDFs makes perfect sense. They're text. With minor formatting. That alone makes html better. Html is lousy for a lot of things, but formatted text is the one thing it's always been perfectly good at. And frankly I think I'd *rather* go with pretty much any word processing format if the only other option was pdf. I'm afraid that I don't understand at all. The only time that I would consider html better than a pdf is if the pdf isn't searchable (and most papers _are_ searchable). And I _definitely_ don't like dealing with whatever word processor format someone happens to be using. PDF is nice and universal. I don't have to worry about whether I have the appropriate fonts or if I even have a program which can read their word processor format of choice. I don't really have any gripes with PDF at all. 
PDF: *Complete* inability to adapt appropriately to the viewing device, *completely* useless page breaks and associated top/bottom page margins in places that have absolutely *no* use for them, no flowing layout, frequent horizontal scrolling, poor (if any) linking, inability for the reader to choose the fonts/etc that *they* find readable. Oh, and ever tried reading one of those pdf's that use a multi-column layout? All of this together makes PDF the #1 worst document format for viewing on a PC. All for what? Increased accuracy the *few* times it ever gets printed? Outside of print-shops, pdf needs to die a horrible death.
Re: Is DMD 2.052 32-bit?
Jonathan M Davis jmdavisp...@gmx.com wrote in message news:mailman.2408.1299726495.4748.digitalmar...@puremagic.com... On Wednesday 09 March 2011 17:56:13 Walter Bright wrote: On 3/9/2011 4:30 PM, Jonathan M Davis wrote: Much as I'd love to have a 64-bit binary of dmd, I don't think that the gain is even vaguely worth the risk at this point. What is the gain? The only thing I can think of is some 64 bit OS distributions are hostile to 32 bit binaries. Well, the fact that you then have a binary native to your system is obviously a gain (and is likely the one which people will cite most often), and that _does_ count for quite a lot. Specifically? However, regardless of that, it's actually pretty easy to get dmd to run out of memory when compiling if you do much in the way of CTFE or template stuff. Granted, fixing some of the worst memory-related bugs in dmd will go a _long_ way towards fixing that, but even if they are, you're theoretically eventually supposed to be able to do pretty much anything at compile time which you can do at runtime in SafeD. And using enough memory that you require the 64-bit address space would be one of the things that you can do in SafeD when compiling for 64-bit. As long as the compiler is only 32-bit, you can't do that at compile time even though you can do it at runtime (though the current limitations of CTFE do reduce the problem in that you can't do a lot of stuff at compile time period). In any case, the fact that dmd runs out of memory fairly easily makes having a 64-bit version which could use all of my machine's memory really attractive. And honestly, having an actual, 64-bit binary to run on a 64-bit system is something that people generally want, and it _is_ definitely a problem to get a 32-bit binary into the 64-bit release of a Linux distro.
Truth be told, I would have thought that it would be a given that there would be a 64-bit version of dmd when going to support 64-bit compilation and was quite surprised when that was not your intention. I'd be more interested in a build of DMD that just doesn't eat memory like popcorn.
Re: Code Sandwiches
On Wednesday 09 March 2011 23:15:01 Nick Sabalausky wrote: Jonathan M Davis jmdavisp...@gmx.com wrote in message news:mailman.2411.1299739219.4748.digitalmar...@puremagic.com... On Wednesday 09 March 2011 22:18:53 Nick Sabalausky wrote: Jonathan M Davis jmdavisp...@gmx.com wrote in message news:mailman.2409.1299728378.4748.digitalmar...@puremagic.com... On Wednesday 09 March 2011 13:30:27 Nick Sabalausky wrote: But why is it that academic authors have a chronic inability to release any form of text without first cramming it into a goddamn PDF of all things? This is one example of why I despise Adobe's predominance: PDF is fucking useless for anything but printing, and no one seems to know it. Isn't it about time the ivory tower learned about Mosaic? The web is more than a PDF-distribution tool...Really! It is! Welcome to the mid-90's. Sheesh. And what format would you _want_ it in? PDF is _way_ better than having a file for any particular word processor. What else would you pick? HTML? Yuck. How would _that_ be any better than a PDF? These are _papers_ after all, not some web article. They're either written up in a word processor or with latex. Distributing them as PDFs makes perfect sense. They're text. With minor formatting. That alone makes html better. Html is lousy for a lot of things, but formatted text is the one thing it's always been perfectly good at. And frankly I think I'd *rather* go with pretty much any word processing format if the only other option was pdf. I'm afraid that I don't understand at all. The only time that I would consider html better than a pdf is if the pdf isn't searchable (and most papers _are_ searchable). And I _definitely_ don't like dealing with whatever word processor format someone happens to be using. PDF is nice and universal. I don't have to worry about whether I have the appropriate fonts or if I even have a program which can read their word processor format of choice. I don't really have any gripes with PDF at all. 
PDF: *Complete* inability to adapt appropriately to the viewing device, *completely* useless page breaks and associated top/bottom page margins in places that have absolutely *no* use for them, no flowing layout, frequent horizontal scrolling, poor (if any) linking, inability for the reader to choose the fonts/etc that *they* find readable. Oh, and ever tried reading one of those pdf's that use a multi-column layout? All of this together makes PDF the #1 worst document format for viewing on a PC. All for what? Increased accuracy the *few* times it ever gets printed? Outside of print-shops, pdf needs to die a horrible death. LOL. It's _supposed_ to have a fixed look. That's part of what's so wonderful about it. You _know_ that it will look right every time. I think that it's quite clear that we're never going to see eye-to-eye on this one. - Jonathan M Davis
Re: Code Sandwiches
On 03/10/2011 08:15 AM, Nick Sabalausky wrote: Jonathan M Davisjmdavisp...@gmx.com wrote in message news:mailman.2411.1299739219.4748.digitalmar...@puremagic.com... On Wednesday 09 March 2011 22:18:53 Nick Sabalausky wrote: Jonathan M Davisjmdavisp...@gmx.com wrote in message news:mailman.2409.1299728378.4748.digitalmar...@puremagic.com... On Wednesday 09 March 2011 13:30:27 Nick Sabalausky wrote: But why is it that academic authors have a chronic inability to release any form of text without first cramming it into a goddamn PDF of all things? This is one example of why I despise Adobe's predominance: PDF is fucking useless for anything but printing, and no one seems to know it. Isn't it about time the ivory tower learned about Mosaic? The web is more than a PDF-distribution tool...Really! It is! Welcome to the mid-90's. Sheesh. And what format would you _want_ it in? PDF is _way_ better than having a file for any particular word processor. What else would you pick? HTML? Yuck. How would _that_ be any better than a PDF? These are _papers_ after all, not some web article. They're either written up in a word processor or with latex. Distributing them as PDFs makes perfect sense. They're text. With minor formatting. That alone makes html better. Html is lousy for a lot of things, but formatted text is the one thing it's always been perfectly good at. And frankly I think I'd *rather* go with pretty much any word processing format if the only other option was pdf. I'm afraid that I don't understand at all. The only time that I would consider html better than a pdf is if the pdf isn't searchable (and most papers _are_ searchable). And I _definitely_ don't like dealing with whatever word processor format someone happens to be using. PDF is nice and universal. I don't have to worry about whether I have the appropriate fonts or if I even have a program which can read their word processor format of choice. I don't really have any gripes with PDF at all. 
PDF: *Complete* inability to adapt appropriately to the viewing device, *completely* useless page breaks and associated top/bottom page margins in places that have absolutely *no* use for them, no flowing layout, frequent horizontal scrolling, poor (if any) linking, inability for the reader to choose the fonts/etc that *they* find readable. Oh, and ever tried reading one of those pdf's that use a multi-column layout? All of this together makes PDF the #1 worst document format for viewing on a PC. All for what? Increased accuracy the *few* times it ever gets printed? Outside of print-shops, pdf needs to die a horrible death. Agreed. pdf (or maybe rather the more powerful ps) should be an end-of-chain format just before printing. Delivering pdf docs for anything else makes no sense. pdf is a printing format (a poor one, according to typography professionals, please ask); nothing else. Also, nowadays, it's no longer necessary to use ps or pdf to get (correct) printing. Nearly anything can be composed and printed as is. An exception may be complex math formulas (in latex indeed). Even then, one can precompose them into plain graphics. Denis -- _ vita es estrany spir.wikidot.com
Re: Templated struct doesn't need the parameterized type in return type definitions?
On Tue, 08 Mar 2011 15:25:27 -0500, Steven Schveighoffer wrote: Hey, wouldn't it be cool if I could add a custom allocator to all classes!?... class Collection(T, alloc = DefaultAllocator!T) { Collection!(T) add(T t) { ...; return this; } // 20 other now subtly incorrect functions like add... } See the problem? This seems like a good reason to keep allowing the feature. It would be nice if it could be documented clearly somewhere, maybe here: http://www.digitalmars.com/d/2.0/template.html#ClassTemplateDeclaration
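The "subtly incorrect" point above can be made concrete with a minimal sketch (the allocator names here are made up for illustration):

```d
// Minimal sketch of the pitfall (hypothetical allocator names).
struct DefaultAllocator(T) { }
struct MyAllocator(T) { }

class Collection(T, alloc = DefaultAllocator!T)
{
    // Spelling out Collection!(T) silently picks the *default* allocator,
    // so inside Collection!(int, MyAllocator!int) this return type names
    // a different instantiation, and `return this;` no longer type-checks:
    //     Collection!(T) add(T t) { return this; }

    // The bare template name (or typeof(this)) always means the
    // current instantiation, whatever allocator was supplied:
    typeof(this) add(T t) { return this; }
}
```

That is the trade-off the feature buys: using the bare name (or `typeof(this)`) keeps all twenty forwarding methods correct when a new template parameter is added later.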
Re: Mocking framework
I gave this some thought, and I'm probably just a bit braindamaged by C#. Consider you wish to unittest a class that fetches data from a database and sends an email. The common scenario here is to use IoC and mock the objects so you can check that FetchData was called and SendEmail is called using the data from your FetchData. So this is the scenario: interface SomeBigInterface { void doStuff(); // And many more methods here } class SomeClassImplementingInterface : SomeBigInterface { void doStuff() {} // And many more methods here } class SomeClass { SomeBigInterface _i; this(SomeBigInterface i) { _i = i; } void someComplexStuff() { _i.doStuff(); } } And how can I mock the entire SomeBigInterface containing 20 methods just to make sure doStuff is called? And what if it didn't take an interface, but an SmtpClient? But then I thought I might be forcing D into the limitations of C# and other purely OO languages. Then I created this instead: // Now I'm only depending on the methods I use, not an entire interface or a specific class/struct class SomeClass2(T) if( is(typeof({ return T.init.doStuff(); })) ) { T _i; this(T i) { _i = i; } void someComplexStuff() { _i.doStuff(); } } Then I can mock it up a lot easier: class MyMock { int called; void doStuff() { ++called; } } auto mock = new MyMock(); auto a = new SomeClass2!MyMock(mock); assert(mock.called == 0); a.someComplexStuff(); assert(mock.called == 1); a.someComplexStuff(); assert(mock.called == 2); And voila.. It's simpler to create the mock object by hand than through a library. The question is, is this good practice? Should I often just create templates like isSomeInterface (like the range api does) checking for methods and properties instead of using interfaces directly?
Iterating over 0..T.max
In a (template) data structure I'm working on, I had the following thinko: auto a = new T[n]; foreach (T i, ref e; a) { e = i; } Then I instantiated it with T=bool, and n=256. Infinite loop, of course -- the problem being that i wraps around to 0 after the last iteration. Easily fixed, and not that much of a problem (given that I caught it) -- I'll just use e = cast(T) i. (Of course, even with that solution, I'd get wrap problems if n=257, but I just want to make sure I allow T.max as a size.) But I'm wondering: given D's excellent value range propagation and overflow checking, would it be possible to catch this special case (i.e., detect when the array can be too long for the index type)? Or are any other safeguards possible to prevent this sort of thing? -- Magnus Lie Hetland http://hetland.org
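The "easily fixed" version mentioned above can be written as a small sketch: keep the loop index at size_t and narrow only at the assignment, so the counter itself can never wrap.

```d
// Sketch: the loop variable stays size_t, so it cannot wrap even
// when a.length == T.max + 1; the cast truncates only the stored value.
void fillWithIndex(T)(T[] a)
{
    foreach (size_t i, ref e; a)
        e = cast(T) i;
}
```

This sidesteps the infinite loop but, as noted, still silently truncates for n > T.max + 1; catching that would need the value-range check the post is asking about.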
struct opEquals
1) Why does this code not work (dmd 2.051) and how do I fix it: struct S { static S New() { S s; return s; } const bool opEquals(ref const(S) s) { return true; } } void main() { S s; assert(s == S.New); } 2) Why does the type of struct opEquals have to be const bool opEquals(ref const(T) s)? Why is it even enforced to be anything in particular (it's not like there's an Object or something to inherit from)? -SiegeLord
Re: struct opEquals
On Wed, 09 Mar 2011 11:40:25 -0500, SiegeLord n...@none.com wrote: 1) Why does this code not work (dmd 2.051) and how do I fix it: struct S { static S New() { S s; return s; } const bool opEquals(ref const(S) s) { return true; } } void main() { S s; assert(s == S.New); } Because passing an argument via ref means it must be an lvalue. New() returns an rvalue. However, that restriction is lifted for the 'this' parameter, so the following should actually work: assert(S.New() == s); 2) Why is the type of struct opEquals have to be const bool opEquals(ref const(T) s)? Why is it even enforced to be anything in particular (it's not like there's an Object or something to inherit from)? It's a mis-designed feature of structs. There is a bug report on it: http://d.puremagic.com/issues/show_bug.cgi?id=3659 -Steve
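For completeness, a small sketch of the workarounds Steve describes: bind the rvalue to a named temporary so the ref parameter has an lvalue, or put the rvalue on the left so it becomes the 'this' parameter.

```d
// Sketch of the two workarounds for the ref-parameter restriction.
struct S
{
    static S New() { S s; return s; }
    const bool opEquals(ref const(S) s) { return true; }
}

void main()
{
    S s;
    auto tmp = S.New();    // bind the rvalue to an lvalue first...
    assert(s == tmp);      // ...so it can be passed by ref
    assert(S.New() == s);  // or make the rvalue the 'this' side instead
}
```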
Re: struct opEquals
Steven Schveighoffer Wrote: It's a mis-designed feature of structs. There is a bug report on it: http://d.puremagic.com/issues/show_bug.cgi?id=3659 It worked fine in D1. Or did you mean that the mis-designed feature is the const system? Anyway, thanks for the link to the bug report. I'll work around it for now. -SiegeLord
Re: struct opEquals
On Wed, 09 Mar 2011 12:15:26 -0500, SiegeLord n...@none.com wrote: Steven Schveighoffer Wrote: It's a mis-designed feature of structs. There is a bug report on it: http://d.puremagic.com/issues/show_bug.cgi?id=3659 It worked fine in D1. Or did you mean that the mis-designed feature is the const system? No, the mis-designed feature is the compiler requiring that specific signature in D2. Const is not the mis-designed feature. It works in D1 because D1 doesn't generate intelligent opEquals for structs that do not have them, it just does a bit compare. For example, if you did this in D1, it fails the assert: struct S { string name; } void main() { S s1, s2; s1.name = "hello".dup; s2.name = "hello".dup; assert(s1 == s2); // fails in D1, should pass in D2. } -Steve
Re: Iterating over 0..T.max
On 03/09/2011 09:09 AM, Magnus Lie Hetland wrote: In a (template) data structure I'm working on, I had the following thinko: auto a = new T[n]; foreach (T i, ref e; a) { e = i; } Then I instantiated it with T=bool, and n=256. Infinite loop, of course -- the problem being that i wraps around to 0 after the last iteration. Easily fixed, and not that much of a problem (given that I caught it) -- I'll just use e = cast(T) i. (Of course, even with that solution, I'd get wrap problems if n=257, but I just want to make sure I allow T.max as a size.) But I'm wondering: given D's excellent value range propagation and overflow checking, would it be possible to catch this special case (i.e., detect when the array can be too long for the index type)? Or are any other safeguards possible to prevent this sort of thing? I don't see how that works in dmd2, and I don't have much experience with dmd1, so I'll admit that this may be different on dmd1. I get compilation errors with your code: import std.stdio; void main() { make_new!(bool)(256); } void make_new(T)(size_t n) { auto a = new T[n]; foreach (T i, ref e; a) { e = i; } } This gives me: tmax.d(12): Error: operation not allowed on bool 'i' tmax.d(5): Error: template instance tmax.make_new!(bool) error instantiating As far as I understand, a foreach over an array makes the first value (i) a uint for the index, and the second value (e) a copy of the value in the array (in our case a bool). import std.stdio; void main() { make_new!(bool)(1); } void make_new(T)(size_t n) { auto a = new T[n]; writef("%s\n", typeof(a).stringof); foreach (i, e; a) { writef("%s\n", typeof(i).stringof); writef("%s\n", typeof(e).stringof); } } /* Output: bool[] uint bool */ But to the question about boundaries, 'i' is a uint (size_t), and on a 4GB machine, I can't call auto a = new bool[size_t.max]; -- the program segfaults. size_t.max / 2 for a size gives me "Memory allocation failed". size_t.max / 4 works fine.
I'm not sure how to get a program to do what you describe.
Re: full ident name without mangle/demange?
Nick Sabalausky wrote: Is there a way to get the fully-qualified name of an identifier without doing demangle( mangledName!(foo) )? Heh, looks like there isn't. It may be worth filing an enhancement request for __traits(fullyQualifiedName, foo). BTW, what do you need it for? -- Tomek
Best way in D2 to rotate a ubyte[4] array
What is the most efficient way of implementing a rotation of a ubyte[4] array? By rotation I mean: rotateRight([1, 2, 3, 4]) -> [4, 1, 2, 3] TIA, Tom;
I seem to be able to crash writefln
This is on Windows 7. Using a def file to stop the terminal window coming up. win.def: EXETYPE NT SUBSYSTEM WINDOWS bug.d: import std.stdio; import std.string; void main() { auto f = File( "z.txt", "w" ); scope( exit ) f.close; string foo = "bar"; foreach( n; 0 .. 10 ) { writefln( "%s", foo ); f.write( format( "count duck-u-lar: %s\n", n ) ); } } output (in z.txt): count duck-u-lar: 0
Re: Best way in D2 to rotate a ubyte[4] array
Tom: What is the most efficient way of implementing a rotation of a ubyte[4] array? By rotation I mean: rotateRight([1, 2, 3, 4]) -> [4, 1, 2, 3] Two versions, I have done no benchmarks so far: import std.c.stdio: printf; union Four { ubyte[4] a; uint u; } void showFour(Four f) { printf("f.u: %u\n", f.u); printf("f.a: [%d, %d, %d, %d]\n", cast(int)f.a[0], cast(int)f.a[1], cast(int)f.a[2], cast(int)f.a[3]); } void main() { Four f; f.a[] = [1, 2, 3, 4]; showFour(f); f.u = (f.u << 8) | (f.u >> 24); showFour(f); printf("\n"); // alternative f.a[] = [1, 2, 3, 4]; uint u2 = f.u; showFour(f); printf("u2: %u\n", u2); asm { rol u2, 8; } f.u = u2; showFour(f); } /* dmd -O -release test.d __Dmain comdat push EBP mov EBP,ESP sub ESP,8 push 4 mov EAX,offset FLAT:_D12TypeInfo_xAh6__initZ push 4 push 3 push 2 push 1 push 4 mov dword ptr -8[EBP],0 push EAX call near ptr __d_arrayliteralT add ESP,018h push EAX lea EAX,-8[EBP] push EAX call near ptr _memcpy mov EAX,-8[EBP] call near ptr _D4test8showFourFS4test4FourZv mov EAX,-8[EBP] mov ECX,-8[EBP] shl EAX,8 ; <= shr ECX,018h or EAX,ECX mov -8[EBP],EAX mov EAX,-8[EBP] call near ptr _D4test8showFourFS4test4FourZv mov EAX,offset FLAT:_DATA[024h] push EAX call near ptr _printf mov EAX,offset FLAT:_D12TypeInfo_xAh6__initZ push 4 push 4 push 3 push 2 push 1 push 4 push EAX call near ptr __d_arrayliteralT add ESP,018h push EAX lea EAX,-8[EBP] push EAX call near ptr _memcpy mov EAX,-8[EBP] mov -4[EBP],EAX mov EAX,-8[EBP] call near ptr _D4test8showFourFS4test4FourZv mov EAX,offset FLAT:_DATA[028h] push dword ptr -4[EBP] push EAX call near ptr _printf add ESP,024h rol -4[EBP],8 ; <= mov EAX,-4[EBP] mov -8[EBP],EAX mov EAX,-4[EBP] call near ptr _D4test8showFourFS4test4FourZv mov ESP,EBP pop EBP ret */ In theory a C/C++/D compiler has to compile an expression like (x << 8)|(x >> 24) with a ROL instruction, in practice DMD doesn't do it. Months ago I have asked the two (four in X86) rol instructions to be added to the Phobos core intrinsics module, but I am not sure what Walter answered me.
Bye, bearophile
Re: Best way in D2 to rotate a ubyte[4] array
On 03/09/2011 03:41 PM, Tom wrote:
> What is the most efficient way of implementing a rotation of a ubyte[4] array? By rotation I mean: rotateRight([1, 2, 3, 4]) -> [4, 1, 2, 3] TIA, Tom;

I don't know of anything more efficient than:

ubyte[4] bytes = [1,2,3,4];
bytes = bytes[$-1] ~ bytes[0..$-1]; // Rotate right
bytes = bytes[1..$] ~ bytes[0];     // Rotate left

Both static arrays and dynamic arrays (ubyte[] bytes = [1,2,3,4];) perform about the same between 1 and 10 million rotations in either direction. I think a temporary array might be created for the rhs, and then have the values of the rhs array copied to the lhs array, but I don't know. With static arrays, I'm not sure there would be a way to get around it without at least a temporary value for the one that's moving between the first and last positions.
std.process.shell throws exception with garbage string
import std.process;

void main() {
    char[] chBuffer = new char[](256);
    chBuffer[] = '\0';
    chBuffer[0..3] = "dir".dup;
    auto result = shell(chBuffer.idup);
}

It does two things:

1. It prints out the result of the shell invocation to stdout. This shouldn't happen.
2. It throws this: std.file.FileException@std\file.d(295): 5a5785b9a9ef300e292f021170a6bb2e34b80c86bb8decbb6b9b8d3b5e852cd

This sample works:

import std.process;

void main() {
    string chBuffer = "dir";
    auto result = shell(chBuffer);
}

Here the shell invocation isn't printed to the screen, but stored in result like it should be. The problem is that I'm working with the Win32 API and I can't use lovely D strings. I've tried using to!string and .idup on the call to 'shell', and I've tried appending nulls to the char[], but nothing seems to help. What am I doing wrong?
Re: Best way in D2 to rotate a ubyte[4] array
On Wednesday, March 09, 2011 15:35:29 Kai Meyer wrote:
> On 03/09/2011 03:41 PM, Tom wrote:
>> What is the most efficient way of implementing a rotation of a ubyte[4] array? By rotation I mean: rotateRight([1, 2, 3, 4]) -> [4, 1, 2, 3] TIA, Tom;
>
> I don't know of anything more efficient than:
>
> ubyte[4] bytes = [1,2,3,4];
> bytes = bytes[$-1] ~ bytes[0..$-1]; // Rotate right

I'm stunned that this works. I'd even consider reporting it as a bug. You're concatenating a ubyte[] onto a ubyte...

> bytes = bytes[1..$] ~ bytes[0]; // Rotate left

You're concatenating a ubyte onto a slice of the array (so it's ubyte[] instead of ubyte[4]). That will result in a temporary whose value will then be assigned to the original ubyte[4].

> Both static arrays and dynamic arrays (ubyte[] bytes = [1,2,3,4];) perform about the same between 1 and 10 million rotations in either direction. I think a temporary array might be created for the rhs, and then have the values of the rhs array copied to the lhs array, but I don't know. With static arrays, I'm not sure there would be a way to get around it without at least a temporary value for the one that's moving between the first and last positions.

Honestly, given that this is 4 ubytes, I would fully expect that the fastest way to do this would involve casting it to a uint and shifting it - something along the lines of what Bearophile suggested. I'd be _very_ surprised if this implementation were faster, since it involves creating a temporary array.

- Jonathan M Davis
Re: Best way in D2 to rotate a ubyte[4] array
On 03/09/2011 04:25 PM, bearophile wrote:
> Tom:
>> What is the most efficient way of implementing a rotation of a ubyte[4] array? By rotation I mean: rotateRight([1, 2, 3, 4]) -> [4, 1, 2, 3]
>
> Two versions, I have done no benchmarks so far:
>
> import std.c.stdio: printf;
>
> union Four { ubyte[4] a; uint u; }
>
> void showFour(Four f) {
>     printf("f.u: %u\n", f.u);
>     printf("f.a: [%d, %d, %d, %d]\n", cast(int)f.a[0], cast(int)f.a[1],
>            cast(int)f.a[2], cast(int)f.a[3]);
> }
>
> void main() {
>     Four f;
>     f.a[] = [1, 2, 3, 4];
>     showFour(f);
>     f.u = (f.u << 8) | (f.u >> 24);
>     showFour(f);
>     printf("\n");
>
>     // alternative
>     f.a[] = [1, 2, 3, 4];
>     uint u2 = f.u;
>     showFour(f);
>     printf("u2: %u\n", u2);
>     asm { rol u2, 8; }
>     f.u = u2;
>     showFour(f);
> }
>
> [asm listing snipped; see the quoted post above]
>
> In theory a C/C++/D compiler has to compile an expression like (x << 8) | (x >> 24) to a ROL instruction; in practice DMD doesn't do it. Months ago I asked for the two (four on X86) rotate instructions to be added to the Phobos core intrinsics module, but I am not sure what Walter answered me.
>
> Bye, bearophile

I love it. I've done a little benchmark that just repeats the rotate left a certain number of times, and then the rotate right a certain number of times. It looks like shifting ((x << 8) | (x >> 24)) is faster than the process of copying the uint value, shifting it, and copying it back. If I move the assignment for the rol outside of the for loop, the rol is about twice as fast. http://dl.dropbox.com/u/12135920/rotate.d

Both are anywhere from 30 to 80 times faster than the slicing method I proposed (also included in the rotate.d file).

--
Rotating static array left to right 500 times
Finished in 1971.46 milliseconds
Rotating static array right to left 500 times
Finished in 1987.60 milliseconds
Rotating dynamic array left to right 500 times
Finished in 1932.40 milliseconds
Rotating dynamic array right to left 500 times
Finished in 1981.71 milliseconds
Shifting Union left to right 500 times
Finished in 33.46 milliseconds
Shifting Union right to left 500 times
Finished in 34.26 milliseconds
Rolling Union left to right 500 times
Finished in 67.51 milliseconds
Rolling Union right to left 500 times
Finished in 67.47 milliseconds
Rolling Union left to right 500 times with the temporary-variable assignment outside of the loop
Finished in 28.81 milliseconds
Rolling Union right to left 500 times with the temporary-variable assignment outside of the loop
Finished in 25.57 milliseconds