Re: Synchronized classes have no public members
On Tuesday, 20 October 2015 at 18:15:05 UTC, Bruno Medeiros wrote: On 16/10/2015 08:02, Jacob Carlborg wrote: On 2015-10-16 08:49, Dicebot wrote: As far as I understand, the topic is about deprecating direct field access on synchronized classes; method calls on synchronized classes and `synchronized () {}` blocks will remain untouched. Is it even possible to do synchronized classes in Java? That is, putting synchronized on the class declaration as in D. No, it's not possible. `synchronized` in Java can only apply to methods, or the synchronized statement. And (for a change), rightly so that it's not possible. This synchronized-class feature seems to me a clumsy mis-feature, at first glance at least. This change seems like a good idea. As far as having synchronized classes goes, I think they can be useful. If, as some of the respondents have said, a synchronized class is wrong, then perhaps their classes are too big and indeed require fine-grained locks everywhere. Or, if it is performance you are after, then that is the way you might do it. If, however, you would like a better defence against multi-threading-related breakage in your non-time-critical class, a class-wide lock would surely be of benefit.
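For readers unfamiliar with the feature being debated, here is a minimal sketch of the two styles in D (the names are invented for illustration):

```d
// Class-wide lock: every member function implicitly runs under the
// object's monitor, so the whole class is guarded by one coarse lock.
synchronized class SafeCounter
{
    private int count;                // direct outside access is what the
                                      // proposed deprecation targets
    void increment() { ++count; }     // implicitly locked
    int get() { return count; }       // implicitly locked
}

// Fine-grained alternative: lock only the critical sections yourself.
class Counter
{
    private int count;
    void increment()
    {
        synchronized (this) { ++count; }   // synchronized statement
    }
}
```

The first form trades throughput for a simple whole-class guarantee, which is exactly the defensive use case argued for above.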
Re: iterate over a directory, dealing with permission errors
On Friday, 18 September 2015 at 14:35:39 UTC, Dmitry Olshansky wrote: On 18-Sep-2015 15:03, Adrian Matoga wrote: On Friday, 18 September 2015 at 11:35:45 UTC, John Colvin wrote: Posting here instead of learn because I think it uncovers a design flaw:

void main(string[] args)
{
    import std.file : dirEntries, SpanMode;
    import std.stdio : writeln;
    foreach (file; dirEntries(args[1], SpanMode.depth))
        writeln(file.name);
}

Modify this program such that it will print "access denied" instead of crashing with an exception whenever it hits a permissions problem. Remember that you might not even have permission to read the directory given in args[1]. Remember that access permissions can change at any time. It can be done, but it is seriously ugly. https://github.com/D-Programming-Language/phobos/pull/931 I had to move on to some urgent stuff instead of improving the PR, and later I forgot about it. The discussion points out what not to do when solving this issue. :) FYI https://github.com/D-Programming-Language/phobos/pull/2768 I came across the same problem a few years ago. I can't remember if a bug was raised. It would be very handy to document the way to get around this on the dirEntries page, especially if it involves a little convolution.
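A sketch of the kind of workaround discussed in the thread (not the code from either PR): recurse manually with SpanMode.shallow so a permissions failure in one subdirectory can be caught without aborting the whole traversal, since dirEntries' lazy range throws mid-iteration and a plain foreach cannot recover from that.

```d
import std.file : dirEntries, FileException, SpanMode;
import std.stdio : writeln;

// Walk 'path' depth-wise, reporting unreadable directories instead of
// letting the exception kill the whole traversal.
void walk(string path)
{
    try
    {
        foreach (entry; dirEntries(path, SpanMode.shallow))
        {
            writeln(entry.name);
            if (entry.isDir)
                walk(entry.name);     // recurse into subdirectories
        }
    }
    catch (FileException)
    {
        writeln(path, ": access denied");
    }
}
```

It works, but having to abandon SpanMode.depth and rebuild the recursion by hand is precisely the ugliness being complained about.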
Re: GDC vs dmd speed
On 14/10/2013 22:22, Walter Bright wrote: On 10/14/2013 12:24 PM, Spacen Jasset wrote: dmd32 v2.063.2 with flags: ["-O", "-release", "-noboundscheck", "-inline"] gdc 4.6 (0.29.1-4.6.4-1ubuntu4) Which I assume might be v2.020? with flags: ["-O2"] dmd uses the x87 for 32 bit code for floating point, while gdc uses the SIMD instructions, which are faster. For 64 bit code, dmd uses SIMD instructions too. Thanks Walter. I shall find a 64 bit system at some point to compare.
Re: GDC vs dmd speed
On 14/10/2013 22:06, bearophile wrote: Spacen Jasset:

const float pi = 3.14159265f;
float dx = cast(float)(Clock.currSystemTick.length % (TickDuration.ticksPerSec * 10)) / (TickDuration.ticksPerSec * 10);
float xRot = sin(dx * pi * 2) * 0.4f + pi / 2;
float yRot = cos(dx * pi * 2) * 0.4f;
float yCos = cos(yRot);
float ySin = sin(yRot);
float xCos = cos(xRot);
float xSin = sin(xRot);
float ox = 32.5f + dx * 64;
float oy = 32.5f;
float oz = 32.5f;
for (int x = 0; x < width; ++x) {
    float ___xd = cast(float)(x - width / 2) / height;
    for (int y = 0; y < height; ++y) {
        float __yd = cast(float)(y - height / 2) / height;
        float __zd = 1;

The performance difference between the DMD and GDC compilers is kind of expected for FP-heavy code. Also try the new LDC2 compiler (ldmd2 for the same compilation switches), which is sometimes better than GDC. More comments:
- There is a PI in std.math (but it's not a float);
- Add immutable/const to every variable that doesn't need to change. This is a good habit, like washing your hands before eating;
- "for (int x = 0; x < width; ++x)" ==> "foreach (immutable x; 0 .. width)";
- I suggest avoiding the many leading/trailing underscores in identifier names;
- It's probably worth replacing all those "float"s with another name, like "FP", and then defining "alias FP = float;" at the beginning. That way you can see how much performance you lose/gain using floats/doubles. In many cases in my code there is no difference, but floats are less precise. Floats can be useful when you have many of them, in a struct or array. Floats can also be useful when you call certain numerical functions that compute their result by approximation, but on some CPUs sin/cos are not among those functions.
Bye, bearophile

Thank you. I may take up some of those suggestions. It was a direct port of some C++, hence the style.
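Two of those suggestions combined look roughly like this (a sketch with invented names, not code from the thread):

```d
// One alias controls the precision everywhere: switch to double to
// compare speed and precision, as suggested.
alias FP = float;

immutable FP pi = 3.14159265;

// foreach with an immutable index replaces the C-style for loop.
void outerLoop(int width, int height)
{
    foreach (immutable x; 0 .. width)
    {
        immutable FP xd = cast(FP)(x - width / 2) / height;
        // ... inner loops as before ...
    }
}
```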
GDC vs dmd speed
Hello, Whilst porting some C++ code I have discovered that the compiled output from the gdc compiler seems to be 47% quicker than the dmd compiler. The code that I believe is the 'busy' code is below, although I could provide a complete test if anyone is interested. Is this an expected result, and/or is there something I could change to make the compilers perform similarly? The function render_minecraft gets called repeatedly to render single frames. framebufindex is a simple function to return a buffer index. Perhaps it is not being inlined? rgba is another simple function. Further details are: dmd32 v2.063.2 with flags ["-O", "-release", "-noboundscheck", "-inline"]; gdc 4.6 (0.29.1-4.6.4-1ubuntu4), which I assume might be v2.020, with flags ["-O2"].

// render the next frame into the given 'frame_buf'
void render_minecraft(void * private_renderer_data, uint32_t * frame_buf)
{
    render_info * info = cast(render_info *)private_renderer_data;
    const float pi = 3.14159265f;
    float dx = cast(float)(Clock.currSystemTick.length % (TickDuration.ticksPerSec * 10)) / (TickDuration.ticksPerSec * 10);
    float xRot = sin(dx * pi * 2) * 0.4f + pi / 2;
    float yRot = cos(dx * pi * 2) * 0.4f;
    float yCos = cos(yRot);
    float ySin = sin(yRot);
    float xCos = cos(xRot);
    float xSin = sin(xRot);
    float ox = 32.5f + dx * 64;
    float oy = 32.5f;
    float oz = 32.5f;
    for (int x = 0; x < width; ++x)
    {
        float ___xd = cast(float)(x - width / 2) / height;
        for (int y = 0; y < height; ++y)
        {
            float __yd = cast(float)(y - height / 2) / height;
            float __zd = 1;
            float ___zd = __zd * yCos + __yd * ySin;
            float _yd = __yd * yCos - __zd * ySin;
            float _xd = ___xd * xCos + ___zd * xSin;
            float _zd = ___zd * xCos - ___xd * xSin;
            uint32_t col = 0;
            uint32_t br = 255;
            float ddist = 0;
            float closest = 32;
            for (int d = 0; d < 3; ++d)
            {
                float dimLength = _xd;
                if (d == 1) dimLength = _yd;
                if (d == 2) dimLength = _zd;
                float ll = 1 / (dimLength < 0 ? -dimLength : dimLength);
                float xd = (_xd) * ll;
                float yd = (_yd) * ll;
                float zd = (_zd) * ll;
                float initial = ox - cast(int)ox;
                if (d == 1) initial = oy - cast(int)oy;
                if (d == 2) initial = oz - cast(int)oz;
                if (dimLength > 0) initial = 1 - initial;
                float dist = ll * initial;
                float xp = ox + xd * initial;
                float yp = oy + yd * initial;
                float zp = oz + zd * initial;
                if (dimLength < 0)
                {
                    if (d == 0) xp--;
                    if (d == 1) yp--;
                    if (d == 2) zp--;
                }
                while (dist < closest)
                {
                    uint tex = info.map[mapindex(xp, yp, zp)];
                    if (tex > 0)
                    {
                        uint u = cast(uint32_t)((xp + zp) * 16) & 15;
                        uint v = (cast(uint32_t)(yp * 16) & 15) + 16;
                        if (d == 1)
                        {
                            u = cast(uint32_t)(xp * 16) & 15;
                            v = (cast(uint32_t)(zp * 16) & 15);
                            if (yd < 0) v += 32;
                        }
                        uint32_t cc = info.texmap[u + v * 16 + tex * 256 * 3];
                        if (cc > 0)
                        {
                            col = cc;
                            ddist = 255 - cast(int)(dist / 32 * 255);
                            br = 255 * (255 - ((d + 2) % 3) * 50) / 255;
                            closest = dist;
                        }
                    }
                    xp += xd;
                    yp += yd;
                    zp += zd;
                    dist += ll;
                }
            }
            const uint32_t r = cast(uint32_t)(((col >> 16) & 0xff) * br * ddist / (255 * 255));
            const uint32_t g = cast(uint32_t)(((col >> 8) & 0xff) * br * ddist / (255 * 255));
            const uint32_t b = cast(uint32_t)(((col) & 0xff) * br * ddist / (255 * 255));
            frame_buf[framebufindex(x, y)] = rgba(r, g, b);
        }
    }
}
Re: std.parallelism is accepted into Phobos
On 26/04/2011 18:37, Andrei Alexandrescu wrote: On 4/26/11 11:32 AM, Russel Winder wrote: On Tue, 2011-04-26 at 09:16 -0700, Sean Kelly wrote: Bootstrapping issue. Make is almost guaranteed to exist, while the fancier tools are not. That may be the case, but Makefiles are either incomprehensibly complicated or platform-specific; it is nigh on impossible to create a single Makefile that does its job sensibly on Linux, Solaris, Mac OS X and Windows. In reality, if you target GNU make, things are very portable. The advantage of tools such as Waf, SCons, CMake is that they manage the platform issues for you. I hope that's not the only advantage. Anyhow, the problem here is that you already mentioned three tools, none of which I know. If you could write the equivalent of posix.mak in your favorite tool, we'd be in a better position to evaluate how it improves our build process. Andrei Egr... I *might* give it a go at some point in SCons, which is the only tool I've found to (my) liking. It supports D too. I must say that when dsss and rebuild were maintained, it was very handy indeed to be able to build with (yeah, almost) identical build files on the platforms I built for, Win & Linux at the time. I can't help but think you haven't written enough makefiles, Andrei, for enough cross-platform development, but I dare say you have.
Re: Twitter hashtag for D?
On 30/07/2009 10:12, MIURA Masahiro wrote: > Walter Bright wrote: >> How about #d-lang ? >> >> #dpl ? > > I just tested those two. > Although no one else uses #d-lang, it seems that twitter.com > doesn't treat it as a hashtag (because it contains a dash?). > #dpl gives a few false positives. So what do people currently use for C and C++ then?
Re: LLVM Coding Standards
On 12/04/2011 11:21, Mafi wrote: On 12.04.2011 00:31, Spacen Jasset wrote:

std::getline(is, line);
while (line.size() != 0) {
    ...some things...
    std::getline(is, line);
}

What's wrong with

while (std::getline(is, line), (line.size() != 0)) {
    //... some things
}

I mean, that's what the comma operator is for. Ah yes, well. I was wondering who was going to throw that in. Well... okay. But sometimes it's not just one statement that you always need to do before your test condition. Also, I would say that sort of code is a little bit more difficult to read. I don't mind typing a little bit more to make it look a bit better. As other posters have pointed out, it seems to me, at least, that having a way to express your model/idea or view of a problem directly is the most useful thing a language can give you. In other words: less code structure designed around safety and/or other issues, and more of the problem's higher-level design visible in the code. As an example (perhaps not a great one), take the RAII pattern. It's a kind of code design used to manage resources, and specifically to prevent resource leaks, rather than anything to do with the particular problem being solved, e.g. that of reading and processing a file. So we end up with (a) a solution to some problem and (b) a solution to the method of expressing the solution to the problem, as you have put above:

while (std::getline(is, line), (line.size() != 0)) {

There is a strong component of (b), rather than just (a), which ideally (in utopia) we don't want to spend time thinking about.
Re: LLVM Coding Standards
On 11/04/2011 20:58, spir wrote: [slightly OT] Hello, I'm reading (just for interest) the LLVM Coding Standards at http://llvm.org/docs/CodingStandards.html. I find them very interesting because their purposes are clearly explained. Sample below. Denis That all seems fairly sensible. It also reminds me of open source projects written in C, where GOTO is used, like so:

HANDLE handle1 = open(...);
...
if (out_of_memory) goto cleanup;
if (invalid_format) goto cleanup;
...
cleanup:
    if (handle1) close(handle1);
    if (handle2) close(handle2);

This code uses the dreaded goto statement, but I believe you can see that the author is trying to make the code more readable, or at least get rid of the nested-indents/multiple-cleanup problem you inevitably come across at some point in C code. It does tend to be more readable than the alternative, too. I think that people like to follow rules, that is, as soon as they have internalised them and made them their own. What this means is that they often then follow them to a fault, and you get deeply nested but "structured" code, where instead you would be better off with more logically linear code, as in the case of the early exit. Coding standards should probably just say: try to write readable code. Everyone knows what readable code looks like. It is just not always quick or easy to make it that way. While I am on the subject, I've *always* thought major languages have poor loop constructs:

(A)
for (;;) {
    std::getline(is, line);
    if (line.size() == 0) break;
    ...some things...
}

You have to call getline always at least once, then you need to test if the line is empty to terminate the loop. So how do you do it another way?

(B)
std::getline(is, line);
while (line.size() != 0) {
    ...some things...
    std::getline(is, line);
}

Isn't there something a bit wrong here? N.B. a do .. while doesn't help here either.
In (A) there is no duplication. In essence, what I am saying is that there should be a loop whereby you can put the exit condition where you need it, but *also* the compiler should check that you have an exit condition, to prevent mistakes. This whole WHILE vs DO vs FOR loops thing is strange to me. Instead you could just have:

loop {
    ...
    if (condition) exit;
    ...
}

instead of WHILE and DO, whereby you *must* have an exit condition. But I suppose you need a FOR loop because the following may be error-prone:

int x = 0;
loop {
    if (x > 9) exit;
    ...
    x++;
}

So you would then end up with a LOOP, a FOREVER (which is perhaps for(;;) by convention anyway), and a FOR loop. I'll put the coffee down now...
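The mid-exit pattern being asked for can at least be approximated in D today with an endless loop and a break exactly where the test belongs (a sketch; the function name is invented):

```d
import std.stdio : File;

// The "(A)" shape from above, in D: one call site for readln, with the
// exit condition placed mid-body rather than duplicated around the loop.
void processLines(File input)
{
    for (;;)                       // the "FOREVER" construct, by convention
    {
        auto line = input.readln();
        if (line.length == 0)      // EOF: readln returns an empty string
            break;                 // exit condition where it is needed
        // ...some things...
    }
}
```

What the post asks for beyond this, of course, is a compiler check that such a loop actually contains an exit, which for(;;) does not give you.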
Re: Well, it's been a total failure
On 11/09/2010 23:52, Vladimir G. Ivanovic wrote: I'm running Fedora 13.x86_64 and I've tried various ways of getting a D compiler to work. None have succeeded.

1. a. I can't install dmd 2.048:
      # rpm -Uvh /downloads/dmd-2.048-0.i386.rpm
      error: Failed dependencies: gcc(x86-32) >= 4.2.3 is needed by dmd-2.048-0.i386
      I don't know what package will satisfy this dependency.
   b. dmd is a closed compiler. Not good. I'm really not comfortable running a compiler for which I don't have access to the source. The risk of undetected malware is too great.
   c. So, I give up on dmd.

2. I can't run ldc because:
   a. The ldc RPM requires Tango, even though this is not an RPM dependency for ldc, i.e. you can install ldc without any errors.
   b. The installation instructions for Fedora on the LDC web site are incorrect. "yum install ldc" works, but "yum install tango" doesn't. "yum install tango-devel" is the correct command. (This is the first time I've heard of a -devel package without a corresponding base package.)
   c. After I've gotten everything installed, it still doesn't work. I get:
      $ ldc hello.d
      hello.d(5): Error: module stdio cannot read file 'std/stdio.d'
   d. OK, so I link /usr/include/d/tango/stdc to /usr/include/d/tango/std, but it still doesn't work. I get:
      $ ldc hello.d
      hello.d(8): Error: undefined identifier writefln
      hello.d(8): Error: function expected before (), not writefln of type int
   e. ldc only supports D v1.
   f. All of this is too much for me. I give up on ldc.

3. I can't get gdc to compile.
   a. First I have to get gcc-4.4.4 to compile, but that requires a 4-year-old version of automake. I have to downgrade.
   b. After that's fixed, I'm still running into errors that prevent a build. The errors change from changeset to changeset. So, I'm giving up on gdc.

Getting a D compiler to run on x86_64 Linux is too hard. I'm giving up on D. I'm posting this message not as a plea for help, but to illustrate how hard it is to get D to run on Fedora.x86_64.
The success of D depends on high-quality, open source compilers being available (my belief), and so far, D doesn't seem to be mature enough to be considered, at least on Fedora.x86_64. But, on the plus side, the existence of the book "The D Programming Language" is a major step in getting D accepted as a serious system programming language. Maybe installation will improve and D will move forward. --- Vladimir

When installing an rpm package it's usually best to try

    yum localinstall /downloads/dmd-2.048-0.i386.rpm

first; yum will then go off and fetch the dependencies if (a) they are available and (b) they have been specified properly in the rpm package.
Re: Errors in TDPL
Andrei Alexandrescu wrote: On 06/21/2010 05:35 PM, Spacen Jasset wrote: I am only on page ten. I believe I saw a minor typo somewhere in the preface, that's all so far. I look forward to pondering the rest in the coming days. Oh yes. Preface: "D is a language that attempts to consistently do the right thing within the constraints it choose: sys" etc. Missing s, I guess. Thanks for your note. The current text uses "chose", i.e. the past tense of "to choose", so I think it is correct. Andrei Heh. I can't read. I lose a cookie.
Re: Errors in TDPL
I am only on page ten. I believe I saw a minor typo somewhere in the preface, that's all so far. I look forward to pondering the rest in the coming days. Oh yes. Preface: "D is a language that attempts to consistently do the right thing within the constraints it choose: sys" etc. Missing s, I guess.
Re: [OT] The One Hundred Year Data Model
Justin Johansson wrote: Perhaps off topic for this NG, though certainly a good topic for LtU, but nevertheless D people might have some interesting insight on the topic of data models (for programming languages). So I'll begin with saying, "Forget the Hundred-Year Language" c.f. http://www.paulgraham.com/hundred.html and http://tapestryjava.blogspot.com/2008/12/clojure-hundred-year-language.html and drop the notion of the "Next Big Language" per se. Let's take a step back and instead ask what might be the Next Big Data Model or the Hundred-Year Data Model, in the same vein as Paul Graham contemplates (link above) a Hundred-Year Programming Language. While discussions about programming languages, syntax, static vs dynamic typing, etc. are ubiquitous and can be as emotional as political and religious ideological discussions, it seems (to me at least) that in-depth discussions about data models are few and far between. Apart from the Third Manifesto (the relational database data model) made famous in decades past http://www.thethirdmanifesto.com/ there have been few advancements in abstract data models since then. While there may be others, the only significant new data model in the last decade that I know of is the XQuery 1.0 and XPath 2.0 Data Model (XDM) http://www.w3.org/TR/xpath-datamodel/ With the above preamble, I would like to ask members of the D community to contemplate what the ubiquitous data model of the future, perhaps the Hundred-Year Data Model, might be in shape or form, taking a programming-language-agnostic position. Cheers, Justin Johansson What about the dimensional database model, where each attribute is effectively a dimension? This has been around for a while and is popular in data warehousing applications. Data is king these days, and extracting information from it easily is what everyone with a lot of data wants to do.
I am surprised to find that there don't really appear to be many open source libraries to support this sort of thing, like, for example, sqlite in the RDBMS field. There is jbase with its multi-valued fields and so on, but as far as I can tell it is built on top of a relational model, which is more OLAP; whereas I think I am thinking more along the lines of MOLAP.
Re: Patches, bottlenecks, OpenSource
bearophile wrote: After the last posts about patches, I can write something myself about this topic :-) I am far from being an expert on big software projects, but I think I am now able to understand some things about the D project. I like D, it's an alive project, but from what I've seen D has so far not had significant success. I think a lot of people (especially young programmers) don't want "a better C++", they want no stinking C++ at all. They want something quite different. (On the other hand, now I am not so sure Scala will have widespread success; its type system makes it not easy to learn.) A C++-class language compiler is a big project that requires quite different skill sets: in the D compiler there is work to do on the vector operations, lent/uniqueness, multi-core, 64 bit, work to keep it well updated on Mac/Linux/Win and well packaged, development of Phobos and its data structures, work on the back-end to use the SSE registers and future CPUs or GPUs, tuning of the associative arrays, future improvements of the core language, improvements in the unit test system, large improvements in the D web site and the documentation, big improvements needed for the GC to turn it into something more modern, and so on. Such things have to be refined, debugged, efficient, cared for, polished. So as D develops and grows it will get hard for a single person to write all the patches to the language and other parts. As D grows this will become a bottleneck, and from what I've seen it already is. The disadvantage of allowing other people the rights to patch the front-end is some lost control, but Walter can review patches after they are already applied & working (in a big project having a director is positive). This can speed up the patching process itself. The compiler can become more like a patchwork of different programming styles, and Walter will have to ask other people how some parts not written by him work. This can be doable if the front-end becomes more modular.
LLVM shows a more modular design that can be copied. This is a lot of refactoring work. Increasing openness can increase the influx of devs and their willingness to help the project. This is quite positive and its importance can't be overstated. Pyrex (http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/ ) was forked into Cython (http://www.cython.org/ ) because the Pyrex author was too slow in applying patches. A fork will not happen to D soon because there are not enough people yet who care about and work on D enough to make a fork viable. D is not a project as alive as Pyrex. Cython is a tiny rib of the Python language circus. I suggest giving patching rights to Don; it would be one more step toward open-sourcing D's development style :-) In future such rights can be given to a few other good people who have shown they deserve enough trust. I am not sure the future of D is in the dmd back-end; llvm looks in better shape for this purpose. LLVM is able to support 99.9% of the features of D2 (and I presume llvm devs are willing to add the few missing things if someone gives them patches; from the unladen-swallow project I have seen they are open enough), and it supports a large number of good features currently not done/present by/in the back-end of dmd. Currently ldc is a much better compiler than dmd (try it if you don't believe me!), but its devs do not yet much appreciate the D2 language. I don't like to see the Walter locomotive so detached from the present nice ldc D1 compiler, and from the future of LDC and llvm. I'd still like ldc to become the official D2 compiler, developed and cared for :-) Bye, bearophile It's all true. I also think we should use a DVCS, so that Walter can apply patches easily and we can upload patch branches directly to somewhere they can be merged and/or reconciled by the original authors in case of conflict. It's all good. The history of the patch itself can also be tracked even after merging.
I suggest Bazaar but I am deeply biased on the matter. Further, Don et al. could act as gatekeeper to prevent Walter getting bogged down with too many patches. And then things will be all good and I may find the time to get interested in D again.
Re: Converting Optlink from Assembler to C
Vladimir Panteleev wrote: On Mon, 30 Nov 2009 23:02:13 +0200, Walter Bright wrote: http://www.reddit.com/r/programming/comments/a9lxo/assembler_to_c/ The link posted on reddit is broken: "You are not authorised to view this resource. You need to login." Broken link: http://dobbscodetalk.com/index.php?option=com_myblog&amp;show=Assembler-to-C.html&amp;Itemid=29 Working link: http://dobbscodetalk.com/index.php?option=com_myblog&show=Assembler-to-C.html&Itemid=29 Note the HTML entities... The article doesn't really say why Optlink. Why not use another linker (which may already be written in C)?
Re: CPAN for D
Clay Smith wrote: Bill Baxter wrote: On Tue, Nov 10, 2009 at 3:08 PM, Walter Bright wrote: http://www.reddit.com/r/programming/comments/a2nfz/guido_people_want_cpan/ http://search.cpan.org/ Over and over, I hear that CPAN is one of the great reasons people use Perl. Such a thing for D would be a tremendous boost to the D community. CPAN is so bad that people run away from Perl in horror over to comfortable but boring old Java? :-P DSSS was supposed to be a sort of CPAN for D. I think it's still the easiest way to get the Derelict library installed. Unfortunately it's really only a very humble start. It lacks any sort of versioning, and has no web face. And now it's unmaintained. --bb It would be great if there could be a new DSSS maintainer, or someone to fix DSSS. Then I'd imagine a site like dsource.org could easily add CPAN-like functionality; however it would need to make sure to only search for completed / near-completed projects. Yes, DSSS is/was good. It probably needs more than one point of contact / maintainer. I'd also suggest using bzr/hg/git (in that order) so that users could more easily contribute changes; it may also help provide more visibility into exactly who is doing what on the project, if branches can be registered. As for CPAN: it's the only thing that is good about Perl.
Re: What is an ANNOTATION?
sclytrack wrote: == Quote from Lars T. Kyllingstad (pub...@kyllingen.nospamnet)'s article Lars T. Kyllingstad wrote: So, it seems that @annotations will become a part of D. But which of the existing attributes should become annotations, and which should remain as they are? What is the rule for determining whether a new feature should be introduced in terms of annotations? @safe @pure @nothrow @immutable int foo() { ... } -Lars Sorry, bit of a typo in the message subject there. ;) -Lars

a) Things that show up in reflection.
b) Things that, when added to stuff, don't create a new type or overload.
c) Annotations provide data about a software program that is not part of the program itself. They have no direct effect on the operation of the code they annotate. (wikipedia)

Pick one, ... or two. (c) is the one.
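For context on where this ended up: D today has both built-in attributes and user-defined attributes (UDAs) that are visible through compile-time reflection but have no effect on the annotated code, which matches definition (c). A minimal sketch (the Todo type is invented):

```d
// A user-defined annotation: plain data attached to a declaration.
struct Todo { string note; }

@safe int foo() { return 1; }                  // built-in attribute

@Todo("rewrite me") int bar() { return 2; }    // user-defined attribute

// The annotation shows up in compile-time reflection...
static assert(__traits(getAttributes, bar).length == 1);
// ...but has no effect whatsoever on what bar does.
```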
Re: bug #2607 building DMD against and older version of glibc [was: objdump src]
dsimcha wrote: == Quote from Spacen Jasset (spacenjas...@yahoo.co.uk)'s article Spacen Jasset wrote: I am trying to build dmd such that it will work on older versions of ...stuff Given that there will be a book out in the fairly near future, and people may therefore want to run dmd on GNU/Linux, perhaps bug #2607 would be useful to fix. That is: ensure that dmd will run on a respectable set of GNU/Linux versions easily (and perhaps this applies to FreeBSD too). I'm forced to use Linux caveman edition (kernel 2.6.8 or something) on some of the computers I deal with for my research because my sysadmin refuses to upgrade. Before DMD came with buildable source this was problematic, because I was constantly searching for one of the few nodes that DMD would run on. However, now I don't see it as much of a problem, as it takes approximately 2 minutes to just compile from source. Furthermore, I think I asked Walter about this back when it was still a problem and he said changing the glibc version would break people with a newer version. Ok. glibc is backward compatible, on purpose: binaries built against an older glibc keep working on newer ones. We build stuff for redhat 2 at our workplace, which works on 5 (and ubuntu and suse) without any trouble. Drepper goes into a lot of detail here: http://people.redhat.com/drepper/dsohowto.pdf The problem Walter had was first with glibc, which was fixed by building on an older platform (there are other ways, but this is easiest), and second with the older libstdc++, which was missing on the newer systems. The latter is fixed by either:
a) statically linking in only libstdc++ and libgcc, or
b) shipping libstdc++ and changing the link-loader path with the -L options (this requires a startup stub, I think, if you don't always use the same installation location, i.e. /usr/local/dmd).
This is very similar to the way you fix missing msvcr71.dll problems on Windows.
bug #2607 building DMD against and older version of glibc [was: objdump src]
Spacen Jasset wrote: I am trying to build dmd such that it will work on older versions of ...stuff Given that there will be a book out in the fairly near future, and people may therefore want to run dmd on GNU/Linux. Perhaps bug #2607 would be useful to fix. That is; ensure that dmd will run on a respectable set of GNU/linux versions easily. (and perhaps this applies to FreeBSD too)
objdump src
I am trying to build dmd such that it will work on older versions of GNU/Linux as well as newer versions, with a single binary. This requires some special attention for libstdc++ and libgcc (or a link-loader wrapper script). I have proceeded like this to start off:

--- linux.mak.orig	2009-11-08 23:07:22.0 +
+++ linux.mak	2009-11-08 23:30:29.0 +
@@ -2,6 +2,7 @@
 C=backend
 TK=tk
 ROOT=root
+STATIC_LIBCPP=--static-libgcc -lm -Wl,-Bstatic,-lstdc++,-Bdynamic
 CC=g++ -m32
@@ -86,7 +87,7 @@
 all: dmd
 dmd: id.o optabgen $(DMD_OBJS)
-	gcc -m32 -lstdc++ $(COV) $(DMD_OBJS) -o dmd
+	gcc -m32 $(COV) $(DMD_OBJS) -o dmd $(STATIC_LIBCPP)
 clean:
 	rm -f $(DMD_OBJS) dmd optab.o id.o impcnvgen idgen id.c id.h \

This works assuming you compile it on redhat 3; it is then upward compatible (glibc-wise). But objdump and the other tools also need similar treatment, and I can't seem to find their source.
Re: "with" should be deprecated with extreme prejudice
BCS wrote: Hello Andrei, Robert Fraser wrote: This. Just issue an error/warning if something in the with shadows something outside. That way, if the struct/class/template/whatever changes, affected code will be notified. No warning, error! Warning if it exists, error if used? It probably should be an error. Then, though, you have the unfortunate thing whereby adding something to, say, struct X will shadow and generate a compile error, which you can only fix by:
a) not using with
b) renaming the identifier in your struct
c) renaming the local variable
I seem to remember some language (I think it's VB) supporting with that required a "." prefix in front of with enclosures. Like so:

auto a = 0;
with (foo)
{
    a = 0;
    .a = 1;
}

This is a bit dangerous too, though, in terms of typos.
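The shadowing hazard being debated fits in a few lines (a sketch with invented names; note that current dmd in fact rejects this assignment with an error along the lines of "with symbol is shadowing local symbol", essentially the behavior proposed above):

```d
struct X { int a; }

void f()
{
    int a = 0;
    X x;
    with (x)
    {
        a = 1;  // local 'a' or x.a? dmd now reports a shadowing error here,
                // so adding a field named 'a' to X breaks this code loudly
                // rather than silently changing which 'a' is assigned.
    }
}
```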
Re: D1 and Phobos Fixes
Walter Bright wrote: Spacen Jasset wrote: This goes for the compiler too, but presumably a lot of the code is shared and it isn't so easy. Many compiler patches get posted to bugzilla. About half of them are correct, the other half aren't (often fixing the symptom rather than the cause of the problem). Essentially, the compiler is fairly complicated and not well commented; I want to review any changes before they're committed. Otherwise I'll lose track of how it works and how it's supposed to work. Yes, fair enough. What about Phobos? Do many patches for Phobos come in?
Re: D1 and Phobos Fixes
Jason House wrote: Spacen Jasset Wrote: Since D1 is stable, I wonder why bugs in Phobos1 and/or D can't be ironed out quicker by allowing certain members of the D community to commit changes to Phobos directly. This goes for the compiler too, but presumably a lot of the code is shared and it isn't so easy. I presume that all the bug fixes for Phobos are incorporated by Walter, and wonder if this need be the case. Would it not be useful to have a "team" of some sort that can operate on Phobos1 directly, and take submissions from the wider community? These fixes of course would be bug fixes only, and perhaps in some cases fixes where no behaviour has been defined, but existing behaviour is (clearly) wrong or unintentional. Direct anonymous access can lead to quality-control issues. I doubt that'll happen. It's much more reasonable to give write access on a case-by-case basis. If you want to make a case for yourself, start by adding quality patches to bugzilla. If they get no reaction, you can complain publicly on this newsgroup. More than just Walter has write access to Phobos. Andrei, Sean, and Don also have access. I'm pretty sure Don and Sean got access after frequent high-quality contributions. Ok. I did mean some "trusted" people rather than everyone, and I also meant perhaps someone(s) who can funnel patches in from the community, as a gatekeeper. I don't know if Andrei, Sean, or Don do that sort of thing on a regular basis. I don't mean to whine, but people here have complained, and continue to complain, about the rate of fixes from time to time. So perhaps for D1/Phobos1 a case can be made for more of a community momentum, as it were. It's just an idea. I have been following D's development for a few years now. Following is the word, too: I haven't done much with it, so my contributions are minimal. I might aim to change that though. Can't sit around when there are projects to be worked on.
Perhaps someone with commit rights could look at 2429 (has patch) or 2413. They aren't that important on the one hand, but on the other hand they are bugs. Perhaps, since the focus will be switching to D2, it isn't so important to fix D1 bugs now anyway. Since this is the modern age of distributed revision control, we could use such a system. I have played around with launchpad, bzr, and git, and see that the model works quite well. (I am not necessarily suggesting bzr and launchpad here.) While bugzilla patch files and svn work quite well, it is possible, and probably easier, to stage branches in various states that people can pull from and then submit patches to, which can ultimately be put in the main Phobos branch; in the meantime people can pull down development branches to get the latest fixes that aren't in the official release yet. I am going off on all sorts of tangents here, but a colleague of mine has just ventured into D (after Andrei bandied the D word about at the ACCU conference), and I think it could help a lot if people were able to contribute more easily to D/Phobos or any other related projects. I am not sure I am getting across what I think I mean with all this waffle, so perhaps less of that and more doing is in order.
D1 and Phobos Fixes
Since D1 is stable I wonder why bugs in Phobos1 and/or D can't be ironed out quicker by allowing certain members of the D community to commit changes to Phobos directly. This goes for the compiler too, but presumably a lot of the code is shared and it isn't so easy. I presume that all the bug fixes for Phobos are incorporated by Walter, and wonder if this need be the case. Would it not be useful to have a "team" of some sort that can operate on Phobos1 directly, and take submissions from the wider community? These fixes would of course be bug fixes only, and perhaps in some cases fixes where no behaviour has been defined, but existing behaviour is (clearly) wrong or unintentional.
Re: OT: Worthwhile *security-competent* web host?
Nick Sabalausky wrote: Anyone know of a reliable, reasonably-priced web host that...and here's the key part...actually understands even the most basic security concepts? ... snip Not really, no. In which country? I have some suggestions if you can live without a control panel and therefore want to do a bit of DIY. Firstly there is freeshell.org (US), which has charity status, so you can't really do any commercial things with it. You get a shell account on NetBSD, and they like a donation now and then, but it's sort of optional. In the UK there is Bytemark Hosting, starting at ~150 GBP per year for User-Mode Linux; I believe Bytemark to be competent. There is also xtrahost for slightly more, but with Xen virtualisation, and they offer Windows too. I think both of the above offer some sort of Plesk control panel, but that may only be for domain management.
Re: Glibc hell
Walter Bright wrote: Spacen Jasset wrote: For us Linux DMD users a bug should be raised against dmd so that Walter will hopefully compile against an older glibc on future releases. Yet when I do that the other half of the linux users have a problem. Do you know what problems they had, Walter? I think it should be possible to iron this problem out somehow. I believe that it is OK to statically link with the C++ libraries* and we do this at our workplace; otherwise the target users also have to have the correct C++ libraries installed. The link flags I use for our builds are like this: LINK_FLAGS=-Wl,-Bstatic,-lstdc++,-Bdynamic -static-libgcc and are passed to gcc for the linking phase. Would it be possible for you to generate a version of DMD built in this way for testing, as well as how you do it currently? If you have already done this and found no way to make it work then perhaps it's not worthwhile, but this really should work on Linux. * - you don't get the "benefit" of bug fixes via dynamic library updates.
Re: Glibc hell
Steven Schveighoffer wrote: "Spacen Jasset" wrote Steven Schveighoffer wrote: ...chop For us Linux DMD users a bug should be raised against dmd so that Walter will hopefully compile against an older glibc on future releases. As long as it doesn't cause weird problems. I have had weird stuff happen when running an older app against a newer lib (specifically, the program exited silently). Ok, fair enough. I've done it a few times, and we have not had problems. Perhaps some sort of trial with DMD would be helpful. Plus, I think the OP essentially was looking to run programs that DMD compiled, not necessarily running DMD itself. -Steve Ideally, we (as in someone) will want to do both of these things.
Re: Glibc hell
Steven Schveighoffer wrote: "Spacen Jasset" wrote Steven Schveighoffer wrote: You may have to statically link (which, of course, is not officially supported by glibc, for very stupid reasons). I am not sure that the reasons are "stupid". It is similar, for example, to kernel32.dll on Windows, which you cannot link to statically *at all*; libc is a comparable interface in that it calls into the kernel. The reasons are not so concrete. It's more a matter of opinion than requirement. For example, see this web page: http://people.redhat.com/drepper/no_static_linking.html I ran into this when trying to get busybox working. It took me a while to figure out, but I ended up using a dynamic glibc. Apparently, there are bugs in glibc with static linking that the developers refuse to fix because "static linking glibc isn't a valid requirement." Yet some very important programs are statically linked. For example, ldconfig and nash (Red Hat's system loader shell). It just seems stupid to me when they preach that you should *always* dynamically link, yet there are cases where they found it more suitable to statically link. Just my opinion. -Steve It is the way it is at the end of the day. I don't consider it too unreasonable to require dynamic linking for things such as glibc (again, just like kernel32.dll and user32.dll on Windows). At our company we compile against older versions of glibc and the same code runs without problem on the latest versions. This is not the case with static linking; it often breaks unless you run on the same system you linked on. As for bugs in glibc preventing static linking, I can't comment, except to say that the kernel APIs do sometimes change and I have read that static linking will cause crashes, which it does. It appears they don't want to maintain static compatibility in this way. However, if you do build glibc from source there is a configurable option to provide compatibility wrappers; I noticed this when building a cross compiler.
I know that static linking of glibc is sometimes used so that the libraries are not required for base operating system components; in particular, some maintenance binaries are linked in this way so that a system can be recovered without the need for a working /usr/lib directory. Unfortunately it does seem easier to compile against an older glibc using an older distribution, but there may be a better way; installing an older gcc may suffice. At our company we took the "easy way" and used a virtual machine. For us Linux DMD users a bug should be raised against dmd so that Walter will hopefully compile against an older glibc on future releases.
Re: Glibc hell
Steven Schveighoffer wrote: "dsimcha" wrote Apparently, DMD for Linux requires a non-ancient version of glibc to work, and worse yet, all the stuff compiled by it requires a similarly non-ancient version. The problem is that I'm trying to run some jobs on a cluster that has an ancient version of Linux that my sysadmin doesn't want to upgrade. Given that I don't have any root/admin privileges, and will never get them, does anyone know of a workaround to make DMD, or at least stuff compiled by it, work with ancient versions of glibc? P.S. Nothing illegal/unethical, such as cracking the root acct, please. You may have to statically link (which, of course, is not officially supported by glibc, for very stupid reasons). I'm not sure how much of the new glibc is required by dmd; part of glibc is in the dynamic linker, which I think is hard to use a local copy of. However, the linker does support a way to override which libraries to use. I'd suggest the following, assuming you have a local account with some space:

mkdir ~/libs
cp my_glibc.so ~/libs
cp my_supporting_lib.so ~/libs
...
# bash syntax, not sure of csh syntax
export LD_LIBRARY_PATH=~/libs

What this does is make the dynamic linker look in ~/libs for dynamic libs before the default ones. You can check which libraries the dynamic linker is using to resolve dependencies with 'ldd executable'. Hope this helps -Steve I am not sure that the reasons are "stupid". It is similar, for example, to kernel32.dll on Windows, which you cannot link to statically *at all*; libc is a comparable interface in that it calls into the kernel. Linking to an older libc should ensure a good level of compatibility going forward, but I am not an expert on this. We do this for some of our software at the moment, which works without trouble.
Re: Glibc hell
dsimcha wrote: Apparently, DMD for Linux requires a non-ancient version of glibc to work, and worse yet, all the stuff compiled by it requires a similarly non-ancient version. The problem is that I'm trying to run some jobs on a cluster that has an ancient version of Linux that my sysadmin doesn't want to upgrade. Given that I don't have any root/admin privileges, and will never get them, does anyone know of a workaround to make DMD, or at least stuff compiled by it, work with ancient versions of glibc? P.S. Nothing illegal/unethical, such as cracking the root acct, please. Get Walter to recompile DMD on CentOS 2, or install the gcc-compat libraries and compiler and again recompile DMD with that so it's forward compatible. glibc shouldn't be statically linked; it leads to crashes. libstdc++, however, can be.
Re: Threads
DF wrote:

/**
 * Testing.
 */
module Test;

import std.thread;
import std.stdio;

class DerivedThread : Thread
{
    this()
    {
        super(&run);
    }

private:
    int run()
    {
        writefln("Derived thread running.\n");
        return 0;
    }
}

void main()
{
    Thread derived = new DerivedThread();
    derived.start();
}

This code makes no output. Why? Not quite sure, but most likely main returns before the new thread gets a chance to run. You could try derived.wait() in main to wait for the thread to finish.
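The underlying race isn't specific to D: when the main thread returns, the process exits and takes any still-starting threads with it, so the output is lost. The start-then-wait pattern is the usual join idiom, sketched here in shell form with `&`/`wait` standing in for start()/wait() (the sleep simulates thread startup latency):

```shell
# Background job analogue of the D program above.
(sleep 0.1; echo "Derived thread running.") &   # like derived.start()
wait                                            # like derived.wait()
```

The final `wait` blocks until the background job finishes, just as derived.wait() blocks until the thread completes before main is allowed to return.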