Re: Descent with compile-time debug for testing
I do not know why, but every time I install Descent into my Eclipse 3.4.2 on Mac OS X, on exit (quitting Eclipse) it gives me a Java NullPointerException... it's a pity. Cheers!

Ary Borenszweig Wrote:

Hi! I just uploaded a new test version of Descent (0.5.6) with the new compile-time debugging feature. I tested it with some functions and templates and it seems to be working (but not with string mixins), so I wanted you to play with it a little and see what you think, what could be improved, what is wrong, etc. As always, you can update from Eclipse itself as described here: http://www.dsource.org/projects/descent

And now a little explanation of how to get it working: right-click on the function call or template instance you want to debug at compile time, select Source - Debug at Compile-Time, and that's it! The debugger interface will appear and you can step into/over/return or continue, place breakpoints (but not yet in external files, sorry), and it will also break on errors, giving you the full stack trace to see what went wrong. :-) (Maybe I'll do a video about this later, for the curious.)

Just note that if you do this:

---
int foo(int x) {
    return x * 2;
}

void main() {
    int x = foo(10);
}
---

and you try to debug foo(10), it will show the debugger interface, but stepping into it will end the debugging session. Why? Because in the semantic analysis for that code, the call foo(10) isn't evaluated at compile time (Descent just hooks into the normal semantic analysis of the module). For this you have to write:

void main() {
    const x = foo(10);
}

and now foo(10) is evaluated at compile time, since its return value is assigned to a const. (const int will also work.)

During execution you can see variables and evaluate expressions (it supports any kind of expression; for example, typing 1 + 2*3 will print 7).

This is only for D1; in D2 it will not work as expected. Enjoy!
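To make the runtime-vs-CTFE distinction concrete, here is a small sketch (my own illustration, not from the Descent docs); static assert is another context that forces compile-time evaluation:

```d
int foo(int x) {
    return x * 2;
}

void main() {
    // Runtime call: the semantic analysis never evaluates this at
    // compile time, so stepping into it ends the debugging session.
    int a = foo(10);

    // Assigning to a const forces the compiler to run foo(10) via
    // CTFE -- this is the call you can step through in Descent.
    const b = foo(10);

    // static assert is another CTFE-forcing context.
    static assert(foo(10) == 20);
}
```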
Re: Descent with compile-time debug for testing
Pablo Ripolles wrote: I do not why but every time I install Descent into my Eclipse 3.4.2 on MacOSX, on the exit (quiting eclipse) it gives me a JavaNullPointer error... it's a pity. Cheers! And what's in the Error Log?
Re: Descent with compile-time debug for testing
Hello, just before it closes a window appears titled "Problems saving workspace", and the message is: "Problems occurred while trying to save the state of the workbench." Under Details it says: "Problems occurred during save. java.lang.NullPointerException". Thanks! Ary Borenszweig Wrote: Pablo Ripolles wrote: I do not why but every time I install Descent into my Eclipse 3.4.2 on MacOSX, on the exit (quiting eclipse) it gives me a JavaNullPointer error... it's a pity. Cheers! And what's in the Error Log?
Re: Taunting
Reply to Ary, (I came to this conclusion when trying to debug the scrapple:units project.) I'm sorry <g> OTOH, that is a rather pathological case. One option that might be doable (I don't know how the insides work, so I'm guessing here) is to have the debug mode highlight expressions that can undergo CTFE and constant folding. The user would work by selecting an expression, and it would be replaced with its value. To make things manageable, long string expressions could be truncated and accessed via a pop-up; as for string mixins, run the result through the general code formatter.
Re: Descent with compile-time debug for testing
Reply to Pablo, Hello, just before it closes there appears a window message tittled Problems saving worksppace and the message is: Problems occurred while trying to save the state of the workbench. On details it's written: Problems ocurred during save. java.lang.NullPointerException Somewhere there should be a .log file that has all sorts of goodies, like stack traces for the error and the like. Thanks! Ary Borenszweig Wrote: And what's in the Error Log?
ODBCD
I have an alpha version of an ODBCD working. I have tested it so far with MySQL and SQL Express 2005 on Windows XP. Both have their shortcomings, but I don't think they have that much to do with the ODBCD code. If you have a different database and/or ODBC driver, and are interested in testing/helping, please email me and I'll send you a zip. It's quite small and relatively easy to work with, since ODBC drivers tend to produce pretty reasonable error information, but it still needs a lot of work. This will eventually be part of DCat - http://www.britseyeview.com/dcat/ - however because ODBC now seems to be on everyone else's back burner I'm thinking of trying some closer-to-native implementations as well. Steve
Re: Descent with compile-time debug for testing
Hello Ary! Yes indeed, it is fixed! And yes, I had no D project in that workspace. Thanks! Ary Borenszweig Wrote: Hmm, that's strange. You should only get that error if you don't have any D project in the workspace. Anyway, it's a bug and I fixed it; you can update from Eclipse and it should be fixed. Please tell me if this happened and you had at least one D project in the workspace... Pablo Ripolles wrote: OK there you go! Let's see if this helps. Thanks! BCS Wrote: Reply to Pablo, Hello, just before it closes there appears a window message tittled Problems saving worksppace and the message is: Problems occurred while trying to save the state of the workbench. On details it's written: Problems ocurred during save. java.lang.NullPointerException somewhere there should be a .log file that has alls sorts of goodies like stacktraces for the error and the like. Thanks! Ary Borenszweig Wrote: And what's on the Error Log?
Re: Why are void[] contents marked as having pointers?
On Mon, 01 Jun 2009 02:21:33 +0300, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: To argue that convincingly, you'd need to disable conversions from arrays of class objects to void[]. You're right. Perhaps implicit cast of reference types to void[] should result in an error. -- Best regards, Vladimir mailto:thecybersha...@gmail.com
Re: Why are void[] contents marked as having pointers?
On Mon, 01 Jun 2009 05:28:39 +0300, Christopher Wright dhase...@gmail.com wrote: Vladimir Panteleev wrote: std.boxer is actually a valid counter-example for my post. The specific fix is simple: replace the void[] with void*[]. The generic fix is just to add a line to http://www.digitalmars.com/d/garbage.html adding that hiding your only reference in a void[] results in undefined behavior. I don't think this should be an inconvenience to any projects? What do you use for may contain unaligned pointers? Sorry, what do you mean? I don't understand why such a type is needed? Implementing support for scanning memory ranges for unaligned pointers will slow down the GC even more. -- Best regards, Vladimir mailto:thecybersha...@gmail.com
Re: Why are void[] contents marked as having pointers?
Vladimir Panteleev wrote: On Mon, 01 Jun 2009 02:21:33 +0300, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: To argue that convincingly, you'd need to disable conversions from arrays of class objects to void[]. You're right. Perhaps implicit cast of reference types to void[] should result in an error. If only there were a way to indicate that void[]s could contain pointers, then they would behave uniformly across types... Oh wait.
Re: Why are void[] contents marked as having pointers?
On Sun, 31 May 2009 23:24:09 +0300, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: Another alternative would be to allow implicitly casting arrays of any type to const(ubyte)[] which is always safe. But I think this is too much ado about nothing - you're avoiding the type system to start with, so use ubyte, insert a cast, and call it a day. If you have too many casts, the problem is most likely elsewhere so that argument I'm not buying. I've thought about this for a bit. If we allow any *non-reference* type except void[] to implicitly cast to ubyte[], but still allow implicitly casting ubyte[] to void[], it will put ubyte[] in the perfect spot in the type hierarchy - it'll allow safely (portability issues notwithstanding) getting the representation of value-type (POD) arrays, while still allowing abstracting it even further to the "might have pointers" type - at which point it is unsafe to access individual bytes, which void[] disallows without casts. -- Best regards, Vladimir mailto:thecybersha...@gmail.com
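A sketch of the lattice being proposed (the commented-out conversions are hypothetical; today only T[] -> void[] is implicit, and byte-level access needs a cast):

```d
void main() {
    int[] ints = [1, 2, 3];
    Object[] objs;

    // Today: any array, even one holding references, converts
    // implicitly to void[], so the GC must assume it may hold pointers.
    void[] v1 = ints;
    void[] v2 = objs;

    // Proposed: value-type arrays would implicitly become ubyte[]
    // (pointer-free, byte-addressable), and ubyte[] would still
    // implicitly become void[]:
    //     ubyte[] bytes = ints;   // hypothetical
    //     void[]  v3    = bytes;  // hypothetical
    // while Object[] -> ubyte[] would be rejected.

    // For now the representation is only reachable with a cast;
    // the array cast rescales the length to bytes:
    ubyte[] rep = cast(ubyte[]) ints;
    assert(rep.length == ints.length * int.sizeof);
}
```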
Re: visualization of language benchmarks
Nick Sabalausky wrote: Denis Koroskin 2kor...@gmail.com wrote in message news:op.uuthxivwo7c...@soldat.creatstudio.intranet... On Mon, 01 Jun 2009 03:21:42 +0400, Tim Matthews tim.matthe...@gmail.com wrote: Knud Soerensen wrote: Tim Matthews wrote: It's things like this that make me want to get into visualization. Great article! Where's the D It is on 3,3 called Dlang. OK it is was on the 05 chart but I was expecting it to be on the updated 09 chart though. They seem to believe D is less of a player now. IIRC, there was no stable 64bit D compiler for Linux at the moment they moved to new hardware and thus D support was dropped. So they're benchmarks are only accurate for 64-bit? The shootout has 32-bit and 64-bit versions of the benchmarks, but they wanted to have the same benchmarks on both architectures. I don't know which version was used to generate the charts though. Jerome -- mailto:jeber...@free.fr http://jeberger.free.fr Jabber: jeber...@jabber.fr
Re: visualization of language benchmarks
Jérôme M. Berger wrote: Nick Sabalausky wrote: Denis Koroskin 2kor...@gmail.com wrote in message news:op.uuthxivwo7c...@soldat.creatstudio.intranet... On Mon, 01 Jun 2009 03:21:42 +0400, Tim Matthews tim.matthe...@gmail.com wrote: Knud Soerensen wrote: Tim Matthews wrote: It's things like this that make me want to get into visualization. Great article! Where's the D It is on 3,3 called Dlang. OK it is was on the 05 chart but I was expecting it to be on the updated 09 chart though. They seem to believe D is less of a player now. IIRC, there was no stable 64bit D compiler for Linux at the moment they moved to new hardware and thus D support was dropped. So they're benchmarks are only accurate for 64-bit? The shootout have 32-bit and 64-bit versions of the benchmarks, but they wanted to have the same benchmarks on both architectures. I don't know which version was used to generate the charts though. Jerome Well now that LDC supports 64-bit, could we convince them to put it back in?
Re: Source control for all dmd source
Christopher Wright wrote: Leandro Lucarella wrote: Jérôme M. Berger, on May 31 at 19:03, you wrote: Leandro Lucarella wrote: Well, that's great to hear! As Robert said, please tell if you need any help. Please, please, please consider using a distributed SCM (I think git would be ideal but mercurial is good too). That would make merging and branching a lot easier. Git is a bad choice because of its poor Windows support. Mercurial or Bazaar are much better in this regard. That's a myth; Git is pretty well supported on Windows now. I know people who use it on a regular basis. See: http://code.google.com/p/msysgit/ I've used tortoise-git: http://code.google.com/p/tortoisegit/ It worked pretty well, though I didn't spend much time with it. Wow, tortoisegit? Seems so bloated, look at those screen shots, it's so confusing! IMO it's wrong to just put the same command-line commands on the right-click menu, and call that a gui. The humble git-gui is much better.
Compiling in 64 system 32 bit program
I have GDC 64 on Linux Fedora 11. I want to compile programs for 32-bit systems, but I have a 64-bit system. Is this possible? When I type --target-help I get this error: gdc: error trying to exec 'cc1': execvp: Not Found
Re: Compiling in 64 system 32 bit program
Dan900 wrote: I have GDC 64 on Linux Fedora 11, i wanna compile program to 32 bit systems but i have 64 bit system. Is this possible? when i type --target-help i have this error gdc: error trying to exec 'cc1': execvp: Not Found To compile 32-bit programs, you use the -m32 flag. --anders
Re: Compiling in 64 system 32 bit program
[dan...@localhost bin]$ ./gdc -m32 dupa.d /usr/bin/ld: crt1.o: No such file: No such file or directory collect2: ld returned 1 exit status ??
Re: Compiling in 64 system 32 bit program
Dan900 wrote: [dan...@localhost bin]$ ./gdc -m32 dupa.d /usr/bin/ld: crt1.o: No such file: No such file or directory collect2: ld returned 1 exit status You will also need the 32-bit runtime libraries*... By default you would only have the 64-bit variants. * the package would be glibc-devel, I believe So something like: yum install glibc-devel.i386 --anders
Re: Compiling in 64 system 32 bit program
Yeah, thanks! It works! yum install glibc-devel.i586 Dan900 wrote: [dan...@localhost bin]$ ./gdc -m32 dupa.d /usr/bin/ld: crt1.o: No such file: No such file or directory collect2: ld returned 1 exit status You will also need the 32-bit runtime libraries*... By default you would only have the 64-bit variants. * the package would be glibc-devel, I believe So something like: yum install glibc-devel.i386 --anders
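For anyone landing here later, the whole recipe on a 64-bit Fedora box boils down to this (package name/arch may vary, e.g. i386 vs i586 as above; hello.d is a placeholder file name):

```shell
# 1. Install the 32-bit C runtime/development libraries (as root):
yum install glibc-devel.i386    # Dan's box wanted glibc-devel.i586

# 2. Compile with the -m32 flag to target 32-bit:
gdc -m32 hello.d -o hello

# 3. Optionally verify the result:
file hello    # should report a 32-bit ELF executable
```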
Re: Source control for all dmd source
hasen wrote: ... Wow, tortoisegit? Seems so bloated, look at those screen shots, it's so confusing! Wow, git cli? Seems so bloated, look at all those commands, it's so confusing! add am annotate apply archive bisect blame branch bundle cat-file check-attr checkout checkout-index check-ref-format cherry cherry-pick citool clean clone commit commit-tree config count-objects describe diff diff-files diff-index diff-tree fast-export fast-import fetch fetch-pack fetch--tool fmt-merge-msg for-each-ref format-patch fsck fsck-objects gc get-tar-commit-id grep hash-object http-fetch http-push index-pack init init-db log lost-found ls-files ls-remote ls-tree mailinfo mailsplit merge merge-base merge-index merge-octopus (wtf?) merge-one-file merge-ours merge-recursive merge-resolve merge-subtree mergetool merge-tree mktag mktree mv name-rev pack-objects pack-redundant pack-refs parse-remote patch-id peek-remote prune prune-packed pull push quiltimport read-tree rebase rebase-interactive receive-pack reflog relink remote repack repo-config request-pull rerere reset revert rev-list rev-parse rm send-pack shortlog show show-branch show-index show-ref sh-setup stage stash status stripspace submodule svn symbolic-ref tag tar-tree unpack-file unpack-objects update-index update-ref update-server-info upload-archive upload-pack var verify-pack verify-tag web-browse whatchanged write-tree As for your assertion that tortoisegit is confusing, let's look at those screenshots. The first is a context-dependent list of git commands. If you think that's confusing, then I can't imagine you're able to use the git cli at all, so let's assume you don't mean that one. Next we have a shot of it detecting a patch set and giving you context commands for those. I fail to see how that could be confusing. The commit dialog? Well, that's basically git commit -a, except you can actually modify what files are being committed in the dialog itself and it has a built-in editor.
Actually, if you look at them, the dialogs are all just GUI versions of the various git tools themselves. So if you really find those confusing, I can only assume you find Git itself confusing. IMO it's wrong to just put the same command-line commands on the right-click menu, and call that a gui. Yes, because heaven forbid anyone makes our lives as developers easier. After all, no developer should EVER use a simplified interface when what they SHOULD be doing is memorising every command and all of its switches. The heresy! The humble git-gui is much better. Integrating those commands into the shell itself so that only the commands that make sense in context are shown, and actually tell the user what they're going to do is much better. Incidentally, I use both. TortoiseGit is great for anything of relative complexity as it actually helps you do it correctly (it took me a while to work out how to properly apply patch sets without it). What's more, it also helps you learn how to use git without constantly screwing up.
Re: Source control for all dmd source
Daniel Keep wrote: hasen wrote: ... Wow, tortoisegit? Seems so bloated, look at those screen shots, it's so confusing! Wow, git cli? Seems so bloated, look at all those commands, it's so confusing! It *is* confusing, but it doesn't claim to be intuitive; it's aimed at developers, who are supposed to read tutorials/man pages to know how it works. Providing a GUI that maps a pull button to a pull command is confusing because GUIs are supposed to be discoverable and intuitive. No wonder I never knew how to properly use svn; I was so dependent on those GUIs that I never knew what was going on under the surface. At least for me, the only sensible way to use svn is through the command line. GUIs for svn and git are confusing because, well, you're not exactly sure what they're doing. As for your assertion that tortoisegit is confusing, let's look at those screenshots. The first is a context-dependant list of git commands. If you think that's confusing, then I can't imagine you're able to use the git cli at all, so let's assume you don't mean that one. Yes, I think it's confusing. Read above. I do use the git cli; it's easier because it makes more sense. Next we have a shot of it detecting a patch set and giving you context commands for those. I fail to see how that could be confusing. I haven't used patches with git, so I don't know what's in that corner of git. The commit dialog? Well, that's basically git commit -a, except you can actually modify what files are being committed in the dialog itself and it has a built-in editor. Yeah? What are these options, [Sign] and [OK]? What's the difference between them? Actually, if you look at them, the dialogs are all just GUI versions of the various git tools themselves. So if you really find those confusing, I can only assume you find Git itself confusing.
Look at this and tell me it's not confusing: http://www.jensroesner.de/wgetgui/wgetgui.png IMO it's wrong to just put the same command-line commands on the right-click menu, and call that a gui. Yes, because heaven forbid anyone makes our lives as developers easier. After all, no developer should EVER use a simplified interface when what they SHOULD be doing is memorising every command and all of its switches. The heresy! Well, if it makes your life easier, good for you. The humble git-gui is much better. Integrating those commands into the shell itself so that only the commands that make sense in context are shown, and actually tell the user what they're going to do is much better. Incidentally, I use both. TortoiseGit is great for anything of relative complexity as it actually helps you do it correctly (it took me a while to work out how to properly apply patch sets without it). What's more, it also helps you learn how to use git without constantly screwing up. I think if you don't know what you're doing you'll screw up anyway. I used to use SmartSVN, TortoiseSVN, and probably some other GUIs, and every so often I would screw something up and have no idea how to fix it. Why? because the GUI made me feel comfortable using svn without really knowing what the hell was going on. Many items under menus were confusing as hell. Many options for various tasks were also confusing as hell.
Re: visualization of language benchmarks
Robert Fraser wrote: Jérôme M. Berger wrote: Nick Sabalausky wrote: Denis Koroskin 2kor...@gmail.com wrote in message news:op.uuthxivwo7c...@soldat.creatstudio.intranet... On Mon, 01 Jun 2009 03:21:42 +0400, Tim Matthews tim.matthe...@gmail.com wrote: Knud Soerensen wrote: Tim Matthews wrote: It's things like this that make me want to get into visualization. Great article! Where's the D It is on 3,3 called Dlang. OK it is was on the 05 chart but I was expecting it to be on the updated 09 chart though. They seem to believe D is less of a player now. IIRC, there was no stable 64bit D compiler for Linux at the moment they moved to new hardware and thus D support was dropped. So they're benchmarks are only accurate for 64-bit? The shootout have 32-bit and 64-bit versions of the benchmarks, but they wanted to have the same benchmarks on both architectures. I don't know which version was used to generate the charts though. Jerome Well now that LDC supports 64-bit, could we convince them to put it back in? From the FAQ: Why don't you include language X? 8--- Is the language implementation * Used? There are way too many dead languages and unused new languages - see The Language List and Computer Languages History * Interesting? Is there something significant and interesting about the language, and will that be revealed by these simple benchmark programs? (But look closely and you'll notice that we sometimes include languages just because we find them interesting.) If that wasn't discouraging enough: in too many cases we've been asked to include a language implementation, and been told that of course programs would be contributed, but once the language didn't seem to perform as-well-as hoped no more programs were contributed. We're interested in the whole range of performance - not just in the 5 programs which show a language implementation at it's best. 
We have no ambition to measure every Python implementation or every Haskell implementation or every C implementation - that's a chore for all you Python enthusiasts and Haskell enthusiasts and C enthusiasts, a chore which might be straightforward if you use our measurement scripts. We are unable to publish measurements for many commercial language implementations simply because their license conditions forbid it. We will accept and reject languages in a capricious and unfair fashion - so ask if we're interested before you start coding. 8=== http://shootout.alioth.debian.org/u64/faq.php#acceptable So we can always ask, but we have to be careful how we phrase it: somebody asked about LLVM and LDC on the forums and the discussion centred around LLVM as a C compiler: https://alioth.debian.org/forum/forum.php?thread_id=14508&forum_id=999&group_id=30402 Moreover, we have to be prepared to argue that D is used (should be easy: just point at the number of projects on dsource) and interesting. The second is a lot more difficult because the definition of interesting is subjective: 8--- Yes, there are just too many languages. Interesting means more like unusual - http://shootout.alioth.debian.org/u32/ats.php http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=lisaac&lang2=gpp&box=1 8=== https://alioth.debian.org/forum/message.php?msg_id=181473&group_id=30402 Jerome -- mailto:jeber...@free.fr http://jeberger.free.fr Jabber: jeber...@jabber.fr
Re: Source control for all dmd source
Daniel Keep wrote: Incidentally, I use both. TortoiseGit is great for anything of relative complexity as it actually helps you do it correctly (it took me a while to work out how to properly apply patch sets without it). hg unbundle patchset Sorry, couldn't resist ;) Jerome -- mailto:jeber...@free.fr http://jeberger.free.fr Jabber: jeber...@jabber.fr
Re: Why are void[] contents marked as having pointers?
Vladimir Panteleev wrote: On Mon, 01 Jun 2009 05:28:39 +0300, Christopher Wright dhase...@gmail.com wrote: Vladimir Panteleev wrote: std.boxer is actually a valid counter-example for my post. The specific fix is simple: replace the void[] with void*[]. The generic fix is just to add a line to http://www.digitalmars.com/d/garbage.html adding that hiding your only reference in a void[] results in undefined behavior. I don't think this should be an inconvenience to any projects? What do you use for may contain unaligned pointers? Sorry, what do you mean? I don't understand why such a type is needed? Implementing support for scanning memory ranges for unaligned pointers will slow down the GC even more. Because you can have a struct with align(1) that contains pointers. Then these pointers can be unaligned. Then an array of those structs cast to a void*[] would contain pointers, but as an optimization, the GC would consider the pointers in this array aligned because you tell it they are.
Re: Why are void[] contents marked as having pointers?
On Mon, 01 Jun 2009 02:18:46 +0300, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: Vladimir Panteleev wrote: On Mon, 01 Jun 2009 00:00:45 +0300, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: const(ubyte)[] getRepresentation(T)(T[] data) { return cast(typeof(return)) data; } This is functionally equivalent to (forgive the D1): ubyte[] getRepresentation(void[] data) { return cast(ubyte[]) data; } Since no allocation is done in this case, the use of void[] is safe, and it doesn't instantiate a version of the function for every type you call it with. I remarked about this in my other reply. Which is why I wrote forgive the D1 :) I've yet to switch to D2, but it's obvious that the const should be there to ensure safety. -- Best regards, Vladimir mailto:thecybersha...@gmail.com
Re: Why are void[] contents marked as having pointers?
On Mon, 01 Jun 2009 14:10:57 +0300, Christopher Wright dhase...@gmail.com wrote: Vladimir Panteleev wrote: On Mon, 01 Jun 2009 05:28:39 +0300, Christopher Wright dhase...@gmail.com wrote: Vladimir Panteleev wrote: std.boxer is actually a valid counter-example for my post. The specific fix is simple: replace the void[] with void*[]. The generic fix is just to add a line to http://www.digitalmars.com/d/garbage.html adding that hiding your only reference in a void[] results in undefined behavior. I don't think this should be an inconvenience to any projects? What do you use for may contain unaligned pointers? Sorry, what do you mean? I don't understand why such a type is needed? Implementing support for scanning memory ranges for unaligned pointers will slow down the GC even more. Because you can have a struct with align(1) that contains pointers. Then these pointers can be unaligned. Then an array of those structs cast to a void*[] would contain pointers, but as an optimization, the GC would consider the pointers in this array aligned because you tell it they are. The GC will not see unaligned pointers, regardless if they're in a struct or void[] array. The GC doesn't know the type of the data it's scanning - it just knows if it might contain pointers or it definitely doesn't contain pointers. -- Best regards, Vladimir mailto:thecybersha...@gmail.com
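To illustrate the per-block flag Vladimir is describing: in the D2 druntime API (core.memory) it is exposed as BlkAttr.NO_SCAN; the thread is about D1, whose runtime keeps an equivalent internal bit, so treat this as an illustrative sketch rather than the API under discussion:

```d
import core.memory;

void main() {
    // The GC tracks one bit per allocated block: "may contain
    // pointers" (scan it, aligned words only) or "definitely
    // doesn't" (skip it). It never looks for unaligned pointers.
    void* scanned    = GC.malloc(64);
    void* notScanned = GC.malloc(64, GC.BlkAttr.NO_SCAN);

    assert((GC.getAttr(notScanned) & GC.BlkAttr.NO_SCAN) != 0);

    GC.free(scanned);
    GC.free(notScanned);
}
```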
Re: Source control for all dmd source (Git propaganda =)
Anyways, my point was, putting DMDFE in a SCM would be great, even if it's svn. For me the ideal would be Git; Mercurial or another distributed SCM would be nice too, but even svn is better than what we have now =) -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
Re: forward ranges must offer a save() function
On Sat, 30 May 2009 13:18:06 -0400, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: If we want to allow people to create ranges that are classes (as opposed to structs) the requirement for a save() function is a must. This is because copying class ranges with Range copy = original; only creates a new alias for original; the two share the same state. So this solves our forward vs. input ranges dilemma quite nicely: the save() function at the same time allows distinction between the two (input ranges don't define a save() function) and also allows ranges to be implemented as classes as well as structs. To summarize: Input ranges: empty front popFront Forward ranges: + save Bidir ranges: + back popBack Random-access ranges: + opIndex opIndexAssign The disadvantage is that all non-input ranges must define save(), which is a trivial functions in most cases: Range save() { return this; } I have two questions that came to mind: 1. What is the use case for classes as ranges? I can't think of one. With structs, you can control the aliasing behavior when it is copied, so the issue is less critical. 2. Even if there is a use case, why put the band-aid on the ranges that aren't affected? That is, why affect all ranges except input ranges when input ranges are the issue? Even if this does work, you are going to have mistakes where you copy the range instead of calling .save, which will compile successfully. Since you have to special case it, why not just put a flag in the non-copyable input ranges? Like an enum noCopy = true; or something. Then you can special case those and have them fail to compile when they don't support the algo. For classes, you wouldn't need to do this, because they are by default reference types, and this can be statically checked. The point is, make the frequent usage easy, and the one-time range design require the careful usage. -Steve
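A minimal sketch (D2-style, names my own) of why save() carries information that plain copying can't: for a struct range it is the trivial value copy, while a class range must clone its state explicitly:

```d
// A simple counting range, once as a struct and once as a class.
struct Iota {
    int front;
    int limit;
    bool empty() { return front >= limit; }
    void popFront() { ++front; }
    Iota save() { return this; }            // trivial: value copy
}

class IotaC {
    private int cur, limit;
    this(int c, int l) { cur = c; limit = l; }
    bool empty() { return cur >= limit; }
    int front() { return cur; }
    void popFront() { ++cur; }
    IotaC save() { return new IotaC(cur, limit); } // must clone state
}

void main() {
    auto r = Iota(0, 3);
    auto s = r.save();      // independent snapshot
    r.popFront();
    assert(s.front == 0 && r.front == 1);

    auto c = new IotaC(0, 3);
    auto alias_ = c;        // plain copy: just another reference
    auto snap = c.save();   // a real snapshot
    c.popFront();
    assert(alias_.front == 1 && snap.front == 0);
}
```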
Re: Source control for all dmd source
Leandro Lucarella wrote: Jérôme M. Berger, on June 1 at 12:23, you wrote: Daniel Keep wrote: Incidentally, I use both. TortoiseGit is great for anything of relative complexity as it actually helps you do it correctly (it took me a while to work out how to properly apply patch sets without it). hg unbundle patchset Sorry, couldn't resist ;) git am mailbox Sorry, me neither ;) Well, *I* am not the one who said it was difficult to find out with git ;) Jerome -- mailto:jeber...@free.fr http://jeberger.free.fr Jabber: jeber...@jabber.fr
Re: Source control for all dmd source (Git propaganda =)
Leandro Lucarella wrote: Anyways, my point was, putting DMDFE in a SCM would be great, even when it's svn. For me the ideal would be Git, Mercurial or other distributed SCM would be nice, but even svn is better than we have now =) Even under svn, we can track it with git-svn
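For the record, tracking an svn repository with git-svn looks roughly like this (the URL is made up for illustration):

```shell
# One-time import of the svn history into a local git repository:
git svn clone http://svn.example.org/dmdfe/trunk dmdfe
cd dmdfe

# Pull new svn revisions and rebase local work on top of them:
git svn rebase

# Push local git commits back to svn (if you have commit access):
git svn dcommit
```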
Re: Source control for all dmd source (Git propaganda =)
Leandro Lucarella wrote: Anyways, my point was, putting DMDFE in a SCM would be great, even when it's svn. For me the ideal would be Git, Mercurial or other distributed SCM would be nice, but even svn is better than we have now =) Oh, I agree. However, IMO git is a poor choice. Mercurial, Bazaar or svn would be better. For some interesting reading: - Here are the reasons that pushed Python to use Mercurial (includes the reasons for the change from centralized to distributed SCM and the reasons for choosing Mercurial): http://www.python.org/dev/peps/pep-0374/ - Here's the same for Mozilla: http://weblogs.mozillazine.org/preed/2007/04/version_control_system_shootou_1.html Jerome -- mailto:jeber...@free.fr http://jeberger.free.fr Jabber: jeber...@jabber.fr
Re: Source control for all dmd source (Git propaganda =)
Jérôme M. Berger wrote: - Here's the same for Mozilla: http://weblogs.mozillazine.org/preed/2007/04/version_control_system_shootou_1.html From the article: While they've made recent progress, Git was lacking in Win32 support and it was unclear that this would ever change and if it did change, it was unclear that Git-on-Win32 would ever become something more than a second-class citizen. As good, performant Win32 (and Mac and Linux) is a hard-requirement, Git lost in early Kombat rounds. This is unfortunate because (as we would soon find out), lots of issues with the other systems did just work in Git. git's Win32 support via TortoiseGit is nearly as good as SVN's now, so I can't see that being an issue here. So I'd say Git is a great choice for the DMDFE.
Re: Source control for all dmd source (Git propaganda =)
Jérôme M. Berger wrote: Leandro Lucarella wrote: Anyways, my point was, putting DMDFE in a SCM would be great, even when it's svn. For me the ideal would be Git, Mercurial or other distributed SCM would be nice, but even svn is better than we have now =) Oh, I agree. However, IMO git is a poor choice. Mercurial, Bazaar or svn would be better. After having used both git and svn, I'll have to VERY strongly disagree with that last part. I'd imagine that *any* half-way sane DVCS would be better than svn. As for the others, you don't provide any objective reasons for WHY they're better than git. For some interesting reading: - Here are the reasons that pushed Python to use Mercurial (includes the reasons for the change from centralized to distributed SCM and the reasons for choosing Mercurial): http://www.python.org/dev/peps/pep-0374/ The weakness of Git's support for Windows relative to the others is probably a valid point; I haven't used the other two. It's worth noting, however, that I *do* use Git in both Windows and Linux and I've only felt disadvantaged on Windows in one specific case: the Windows build lacks svn integration. [1] Another reason was it's not Python, which is obviously irrelevant in this case. The only other reason was popularity. The table doesn't really specify WHY the others were more popular. This could be either that mercurial is much easier to use, or the majority of Python devs prefer typing two characters instead of three. :P - Here's the same for Mozilla: http://weblogs.mozillazine.org/preed/2007/04/version_control_system_shootou_1.html That's over three years old and the only reason stated is that it is lacking Win32 support which is obviously no longer true. [1] Apparently this is because the svn integration depends on the perl svn bindings, and no one can get both perl AND perl-svn compiling with msys.
Re: Source control for all dmd source (Git propaganda =)
Daniel Keep wrote: Jérôme M. Berger wrote: Leandro Lucarella wrote: Anyways, my point was, putting DMDFE in a SCM would be great, even when it's svn. For me the ideal would be Git; Mercurial or another distributed SCM would be nice too, but even svn is better than what we have now =) Oh, I agree. However, IMO git is a poor choice. Mercurial, Bazaar or svn would be better. After having used both git and svn, I'll have to VERY strongly disagree with that last part. I'd imagine that *any* half-way sane DVCS would be better than svn. As for the others, you don't provide any objective reasons for WHY they're better than git. Well, the reason *I* don't use git is that at the time I started using a DVCS, it didn't run at all on Windows (some people reported partial success with cygwin, but that was all). Even if support has improved, it still seems to me like Windows is a second-rate citizen in the git world, and this leads me to worry about how git handles the idiosyncrasies of Windows. Plus, my experience with other cygwin/msys based projects leaves me worried about git's speed on Windows (although I haven't tested it). Moreover, everything I've read on the web seems to indicate that git is difficult to use if you want to do more than add/commit/update. Mercurial is *very* easy to both set up and use on Windows as well as Linux. I'm less familiar with Bazaar, but from what I've seen it's very similar to Mercurial. Finally, no matter how good it is, TortoiseGit is not enough (in the same way that TortoiseCVS, TortoiseSVN and TortoiseHg are not enough). You need good command-line support so that you can access it easily from custom tools (for example to generate releases automatically). On a side note, we have to ask ourselves: is a DSCM really needed for D? So long as there are only a few developers, a centralized system might be enough (in which case svn becomes the only real choice).
Jerome PS: http://texagon.blogspot.com/2008/02/use-mercurial-you-git.html -- mailto:jeber...@free.fr http://jeberger.free.fr Jabber: jeber...@jabber.fr signature.asc Description: OpenPGP digital signature
Re: Source control for all dmd source
Reply to hasen, IMO it's wrong to just put the same command-line commands on the right-click menu, and call that a gui. I like exactly that kind of approach: it gives you the full power of the command line without having to keep a command shell (or several) around in the correct directories. All the advantages of both the CLI world and the GUI world. OTOH that is all theoretical, as I just installed it about a week ago and have already discovered that several key features are unavailable from it (not hooked up, it would seem).
Re: Source control for all dmd source
Reply to hasen, Daniel Keep wrote: hasen wrote: ... Wow, tortoisegit? Seems so bloated, look at those screen shots, it's so confusing! Wow, git cli? Seems so bloated, look at all those commands, it's so confusing! It *is* confusing; it doesn't claim to be intuitive, it's aimed at developers, and they're supposed to read tutorials/man pages to know how it works. No they're not. The difference between a technical and non-technical user is that the technical one *will* read the manual if they have to. If there is any other choice, no program should *ever* be designed so that the user is forced to read the manual. Whenever possible, no matter the user, the UI should be intuitive. (OK, I'll grant that in the technical-user case some intuitiveness can be traded for speed of use, but in this case I don't think there is a conflict.) Providing a GUI that maps a pull button to a pull command is confusing, because GUIs are supposed to be discoverable and intuitive. Both GUIs and CLIs are. No wonder I never knew how to properly use svn; I was so dependent on those GUIs that I never knew what was going on under the surface. Odd, I started using SVN via a GUI and now I rarely if ever use the CLI, and I never had a problem figuring out what was going on under the surface. At least for me, the only sensible way to use svn is through the command line. GUIs for svn and git are confusing because, well, you're not exactly sure what they're doing. That is not true. I am sure of what my SVN GUI is doing. As for Git, I have yet to have any problems with it that were caused by it hiding what it was doing (the only problem I've had is it not even showing some functionality). Actually, if you look at them, the dialogs are all just GUI versions of the various git tools themselves. So if you really find those confusing, I can only assume you find Git itself confusing. Look at this and tell me it's not confusing: http://www.jensroesner.de/wgetgui/wgetgui.png Is that from TortoiseGit?
IMO it's wrong to just put the same command-line commands on the right-click menu, and call that a gui. Yes, because heaven forbid anyone makes our lives as developers easier. After all, no developer should EVER use a simplified interface when what they SHOULD be doing is memorising every command and all of its switches. The heresy! Well, if it makes your life easier, good for you. It does. The humble git-gui is much better. Integrating those commands into the shell itself, so that only the commands that make sense in context are shown and the user is actually told what they're going to do, is much better. Incidentally, I use both. TortoiseGit is great for anything of relative complexity, as it actually helps you do it correctly (it took me a while to work out how to properly apply patch sets without it). What's more, it also helps you learn how to use git without constantly screwing up. I think if you don't know what you're doing you'll screw up anyway. I used to use SmartSVN, TortoiseSVN, and probably some other GUIs, and every so often I would screw something up and have no idea how to fix it. Yup, I know what you are talking about. But what does that have to do with GUIs? Why? Because the GUI made me feel comfortable using svn without really knowing what the hell was going on. I wouldn't feel very comfortable using svn or git without knowing what it's doing, regardless of the UI. It seems you don't like GUIs. Obviously, others don't agree with you.
Re: Source control for all dmd source
BCS wrote: It seems you don't like GUIs. Obviously, others don't agree with you. Actually no, I don't hate GUIs, but I come from Windows, and in the world of Windows, you get used to expecting programs to work without you having to read manuals. For example, I used to play with the Half-Life SDK, and while I had to read some tutorials about how to download it and set it up, etc, the rest just worked in an out-of-the-box kind of way. I never had to read any manuals for Visual Studio, it just kinda worked; the expectations you have about it turn out to be true most of the time, and if they're not, you can sort of work it out in your head and discover how it works. With something like svn, you can't have any expectations; you just have to learn it through the manual/tutorials. What the hell is a commit? What's a log? What's revert? Etc etc. There's just no way around it other than to read and learn all these concepts. And just what the hell is a working directory? That thing kept confusing the hell out of me. Now, git comes with its own set of concepts that are completely different from svn's. Checkout in git has nothing to do with checkout in svn. What's a branch? What's merging? Where/how does merging happen? There's just no way you could work with git without knowing about all of this. It's not complicated or anything, but you have to learn it. So, when you learn to use it from the command line, then what's the point of the GUI? See, because I come from Windows, GUI to me means that I don't have to learn anything; that I can just sail through it and it will somehow work out on its own. That's why GUIs that just offer buttons that map directly to CLI commands are confusing. Imagine a file explorer where you have to right-click a directory and choose cd from the menu, and then it will cd into that directory, but not show you the files inside it; instead it shows a blank, and then you right-click somewhere in that empty area and choose ls.
Better yet, ls would bring up a sub-menu, where one of the choices is -al, or maybe it would just be smart and write it out as all, to confuse the user and make him think: Oh, so it usually lists the first 10 files only unless I choose all? To conserve screen space maybe?
Re: Source control for all dmd source
hasen wrote: BCS wrote: It seems you don't like GUIs. Obviously, others doesn't agree with you. Actually no, I don't hate guis, but I come from windows, and in the world of windows, you get used to expect programs to work without you having to read manuals. For example, I used to play with the Half-Life SDK, and while I had to read some tutorials about how to download it and set it up, etc, the rest just worked in an out-of-the-box kind of way. I never had to read any manuals for Visual Studio, it just kinda worked, the expectations you have about it turn out to be true most of the time, and if they're not, you can sort if work it out in your head and discover how it works. With something like svn, you can't have any expectation; you just have to learn it through the manual/tutorials. What the hell is a commit? what's a log? what's revert? etc etc. There's just no other way around it other than to read and learn all these concepts. And just what the hell is a working directory? That thing kept confusing the hell out of me. Now, git comes with its own set of concepts, that are completely different from svn. checkout in git has nothing to do with checkout in svn. What's a branch? what's merging? where/how does merging happen? There's just no way you could work with git without knowing about all of this. It's not complicated or anything, but you have to learn it. So, when you learn to use it from the command line, then what's the point of the gui? See, because I come from windows, GUI to me means that I don't have to learn anything; that I can just sail through it and it will somehow work out on its own. That's why GUIs that just offer buttons that map directly to CLI commands are confusing. I don't see TortoiseSVN as a one-to-one mapping between buttons and command-line actions. For example when you select SVN Commit you can add files from there, ignore files, revert changes, see differences. So it basically unified all the tools in a comfortable way. 
It also lets you see which files are new and gives you the option to add them. How do you do that with the command line?
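For the command-line side of that question: showing new (untracked) files and adding them takes two short commands. A sketch with git, since that's what the thread is weighing (the file and directory names are invented for the example):

```shell
# Command-line equivalent of the TortoiseSVN commit dialog's
# "show new files and offer to add them" feature, using git.
cd "$(mktemp -d)" && git init -q .
echo "hello" > newfile.txt
git status --porcelain        # prints "?? newfile.txt": new/untracked
git add newfile.txt
git status --porcelain        # prints "A  newfile.txt": staged for commit
```

svn is analogous: `svn status` marks new files with `?`, and `svn add` puts them under version control.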
Re: Source control for all dmd source
Reply to hasen, BCS wrote: It seems you don't like GUIs. Obviously, others don't agree with you. Now, git comes with its own set of concepts that are completely different from svn's. Checkout in git has nothing to do with checkout in svn. What's a branch? What's merging? Where/how does merging happen? There's just no way you could work with git without knowing about all of this. It's not complicated or anything, but you have to learn it. So, when you learn to use it from the command line, then what's the point of the GUI? See, because I come from Windows, GUI to me means that I don't have to learn anything; that I can just sail through it and it will somehow work out on its own. That's why GUIs that just offer buttons that map directly to CLI commands are confusing. OK, I see where you are coming from and why we are seeing things so differently. To me, a GUI is not about "it just works" kinds of things. That's just good design and can be had (with more design work) in a CLI as well. To me, a GUI is about having more dimensions of input, more simultaneous channels of communication. I can see many different bits of information in a non-linear way. I can have many contexts available at the same time. I can access many actions from the same point. I'd much rather select from a set of files with check boxes than by typing in file names. Imagine a file explorer where you have to right-click a directory and choose cd from the menu, and then it will cd into that directory, but not show you the files inside it; instead it shows a blank, and then you right-click somewhere in that empty area and choose ls. Better yet, ls would bring up a sub-menu, where one of the choices is -al, or maybe it would just be smart and write it out as all, to confuse the user and make him think: Oh, so it usually lists the first 10 files only unless I choose all? To conserve screen space maybe?
There is a big difference; with Git/SVN, many of the commands have side effects and many of the others take a bit of time and space to operate. The way I've got SVN/Git set up, the most useful information (file status) is shown right up front as icon overlays. I don't /want/ any more information than that, because the next most useful bits (diffs, for instance) wouldn't fit on any screen I could use.
Re: Source control for all dmd source (Git propaganda =)
Jérôme M. Berger, on June 1 at 19:55, wrote to me: Daniel Keep wrote: Jérôme M. Berger wrote: Leandro Lucarella wrote: Anyways, my point was, putting DMDFE in a SCM would be great, even when it's svn. For me the ideal would be Git; Mercurial or another distributed SCM would be nice too, but even svn is better than what we have now =) Oh, I agree. However, IMO git is a poor choice. Mercurial, Bazaar or svn would be better. After having used both git and svn, I'll have to VERY strongly disagree with that last part. I'd imagine that *any* half-way sane DVCS would be better than svn. As for the others, you don't provide any objective reasons for WHY they're better than git. Well, the reason *I* don't use git is that at the time I started using a DVCS, it didn't run at all on Windows (some people reported partial success with cygwin, but that was all). Even if support has improved, it still seems to me like Windows is a second-rate citizen in the git world, and this leads me to worry about how git handles the idiosyncrasies of Windows. Plus, my experience with other cygwin/msys based projects leaves me worried about git's speed on Windows (although I haven't tested it). Why don't you test it and stop talking about what you think is going on, and start talking about what's *really* going on. It doesn't seem very fair to discard something just because you have the feeling that it wouldn't work well (especially when other people use it and say it works well). Anyway, I insist that the main point is having DMDFE in a SCM. If Walter feels comfortable with svn *now*, I think it should be svn *now*. I prefer a non-ideal SCM *now* to the ideal SCM *in a distant future*. We can always migrate the repo to something else when the time is right... Moreover, everything I've read on the web seems to indicate that git is difficult to use if you want to do more than add/commit/update. Mercurial is *very* easy to both set up and use on Windows as well as Linux.
I'm less familiar with Bazaar, but from what I've seen it's very similar to Mercurial. Again with the "I've read". =) I'm telling you, git is easy; it's just a little harder to get used to, but it's so much better when you do... Finally, no matter how good it is, TortoiseGit is not enough (in the same way that TortoiseCVS, TortoiseSVN and TortoiseHg are not enough). You need good command-line support so that you can access it easily from custom tools (for example to generate releases automatically). You have good command-line support. On a side note, we have to ask ourselves: is a DSCM really needed for D? So long as there are only a few developers, a centralized system might be enough (in which case svn becomes the only real choice). I think it's not *needed*, as a SCM is not *needed* either. Having one (an SCM) would just make things easier (and might encourage developers to hack on DMDFE, as following the changes in a repo might give you a better insight into how it is written, and maybe people can spot bugs too). Having a DSCM would make things easier for people that integrate DMDFE into projects other than DMD. -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
Re: Source control for all dmd source (Git propaganda =)
On Mon, 01 Jun 2009 18:20:49 +0300, Jérôme M. Berger jeber...@free.fr wrote: [snip] Since no one posted this link in this thread yet: http://whygitisbetterthanx.com/ -- Best regards, Vladimir mailto:thecybersha...@gmail.com
Re: Why are void[] contents marked as having pointers?
Vladimir Panteleev wrote: On Mon, 01 Jun 2009 14:10:57 +0300, Christopher Wright dhase...@gmail.com wrote: Vladimir Panteleev wrote: On Mon, 01 Jun 2009 05:28:39 +0300, Christopher Wright dhase...@gmail.com wrote: Vladimir Panteleev wrote: std.boxer is actually a valid counter-example for my post. The specific fix is simple: replace the void[] with void*[]. The generic fix is just to add a line to http://www.digitalmars.com/d/garbage.html adding that hiding your only reference in a void[] results in undefined behavior. I don't think this should be an inconvenience to any projects? What do you use for may contain unaligned pointers? Sorry, what do you mean? I don't understand why such a type is needed? Implementing support for scanning memory ranges for unaligned pointers will slow down the GC even more. Because you can have a struct with align(1) that contains pointers. Then these pointers can be unaligned. Then an array of those structs cast to a void*[] would contain pointers, but as an optimization, the GC would consider the pointers in this array aligned because you tell it they are. The GC will not see unaligned pointers, regardless if they're in a struct or void[] array. The GC doesn't know the type of the data it's scanning - it just knows if it might contain pointers or it definitely doesn't contain pointers. Okay, so currently the GC doesn't do anything interesting with its type information. You're suggesting that that be enforced and codified.
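The rule being proposed (the only reference to an object must not live solely inside a void[]) can be illustrated with a short D sketch; the type and variable names here are invented, not from the thread:

```d
// Sketch of the proposed rule: if the GC treated void[] as pointer-free,
// stashing the *only* reference to an object in one would be undefined
// behavior, while void*[] would always be scanned conservatively.
class Node { int value; }

void*[] scannedStore;   // element type is a pointer: the GC must scan it
void[]  opaqueStore;    // under the proposal, may be skipped by the GC

void keepAlive(Node n)
{
    // Safe: the GC sees this reference, so n stays alive.
    scannedStore ~= cast(void*) n;

    // Hiding the *only* reference to n in opaqueStore instead would be
    // the undefined behavior the post suggests documenting; the fix it
    // names is exactly this: use void*[] rather than void[].
}
```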
Unique as a transitive type?
Andrei has stated previously that unique was left out of the type system because it added little value to the const system. Now that shared and multithreading are here, unique has more value. I have two basic questions: 1. What would make unique difficult to add? 2. What benefits do you foresee? Here are my thoughts: 1) Escape analysis - constructors of unique objects must be careful with how they manipulate their members. This would require scope parameters (AKA lent in Bartosz's blog/literature). After construction, it may also be necessary to update or manipulate the unique object while preserving its uniqueness. Personally, I like the idea of scope by default, but I'm probably in the minority. 2) Plugs a gaping hole in the type system. Even with shared added, I had a moderate multi-threaded code base compile and run without use of shared or casting. That boiled down to an inability to start threads without (silently) subverting the type system. It's also possible to optimize operations on unique data in ways that one can't do when a rogue write reference might exist somewhere. That's not the same as invariant, but it's similar. Unique can be implicitly cast to scope invariant.
Re: Operator overloading, structs
bearophile wrote: This post is mostly about D2 operator overloading in general, but I also talk about problems in the API of Tango BigInts. A small program that uses multiprecision integers of Tango: import tango.stdc.stdio: printf; import tango.math.BigInt: bint = BigInt; void main() { bint i = 1; // #1 if (i) // #2 i++; auto a = [10, 20, 30, 40]; printf("%d\n", a[i]); // #3 } If you replace BigInt with int that program works. With BigInt it doesn't work, because I think: - In #1 it doesn't call static opCall. Works for me with int. I've fixed BigInt so that it now works with long as well. - In #2 D doesn't have a standard method that returns true/false. In Python 2.6 such a method is named __nonzero__ and in Python 3 it's named __bool__. - In #3 there's no implicit cast to int. I think that improving such D2 operator overloading is very good: it allows using BigInts, SafeInts, Complex, etc. in a much more transparent way, allowing very similar code to work with native and user-defined types. --- Regarding specifically BigInt: D1 has opCast, but here I think not even a[cast(long)i] works, because BigInt doesn't define it yet. When a big int can't be converted to long, it can throw an exception. Yes, that's not a bad idea. Until a more efficient solution is found, I think BigInts must have a toString too. (In most situations you don't print such numbers; you do a lot of computations with them and then you finally print some of them. So often the time spent printing them is not much.) Not toString(), though. You MUST be able to specify if you want leading zeros, and if you want hex or decimal. I just (a) haven't got around to it; and (b) haven't been sure about what the interface should be. But I hate toString() so much, I'm _never_ going to support it. It's an OMDB (over my dead body). Sorry. I think BigInt also may enjoy a lot of methods like: int opEquals(int y), etc. Ouch! That's a simple omission. Fixed in Tango SVN 4717.
Re: Operator overloading, structs
- In #2 D doesn't have a standard method that returns true/false. In Python2.6 such method is named __nonzero__ and in python3 it's named __bool__. No one has commented about that, but I think that having a way to overload cast(bool)foo is important. I use it in Python and I have shown why and how it can be used in D too. It's an easy thing to do and I think it's safe. Don: D1 has opCast, but here I think not even a[cast(long)i] works, because BigInt doesn't define it yet. When a big int can't be converted to long, it can throw an exception. Yes, that's not a bad idea. But eventually an implicit cast will be better (even if a bit less safe) in D2. Not toString(), though. You MUST be able to specify if you want leading zeros, and if you want hex or decimal. toString() is for the default case, when you just want the decimal number with no leading zeros. Then you can add other methods to output hex and all you want. (In what situations do you want to print leading zeros of a big multiprecision integral value? I have never faced such need so far). But I hate toString() so much, I'm _never_ going to support it. It's an OMDB (over my dead body). Sorry. I have patched a copy of the bigint module to add the toString method. I'll keep using such patch for my programs. I want you well alive and happy too. Fixed in Tango SVN 4717. Thank you. Bye, bearophile
Re: Unique as a transitive type?
In the short time I was at the Kahili coffeehouse summit I heard Walter say more than once that he was troubled by the idea of more transitive construction types because he was afraid of the combinatorial explosion that would occur. And that was just adding 'unique'. Bartosz was trying to talk him into 'unique' and 'lent'. I don't know if Bartosz and Andrei and Walter came to any agreement -- they seemed to be in three mutually exclusive frames of mind at the start anyway. From my knothole I think unique and lent would be positive additions, even if it does offend Walter's finer sensibilities! Paul Jason House Wrote: Andrei has stated previously that unique was left out of the type system because it added little value to the const system. Now that shared and multithreading are here, unique has more value. I have two basic questions: 1. What would make unique difficult to add? 2. What benefits do you forsee? Here are my thoughts: 1) Escape analysis - construction of unique objects must be careful with how they manipulate their members. This would require scope parameters (AKA lent in Bartosz's blog/literature). After construction, it may also be necessary to update the manipulate the unique object while preserving its uniqueness. Personally, I like the idea of scope by default, but I'm probably in the minority. 2) Plugs a gaping hole in the type system. Even with shared added, I had a moderate multi-threaded code base compile and run without use of shared or casting. That boiled down to an inability to start threads without (silently) subverting the type system. It's also possible to optimize operations on unique data in ways that one can't do when a rogue write reference might exist somewhere. That's not the same as invariant, but it's similar. Unique can be implicitly cast to scope invariant.
Re: Unique as a transitive type?
On Mon, 01 Jun 2009 19:16:55 -0400, Jason House jason.james.ho...@gmail.com wrote: Andrei has stated previously that unique was left out of the type system because it added little value to the const system. Now that shared and multithreading are here, unique has more value. I have two basic questions: 1. What would make unique difficult to add? 2. What benefits do you forsee? Here are my thoughts: 1) Escape analysis - construction of unique objects must be careful with how they manipulate their members. This would require scope parameters (AKA lent in Bartosz's blog/literature). After construction, it may also be necessary to update the manipulate the unique object while preserving its uniqueness. Personally, I like the idea of scope by default, but I'm probably in the minority. 2) Plugs a gaping hole in the type system. Even with shared added, I had a moderate multi-threaded code base compile and run without use of shared or casting. That boiled down to an inability to start threads without (silently) subverting the type system. It's also possible to optimize operations on unique data in ways that one can't do when a rogue write reference might exist somewhere. That's not the same as invariant, but it's similar. Unique can be implicitly cast to scope invariant. 1) Well, the difficulty I see, is that you can't just add unique. When I think of unique, I generally think of it as being shallow - i.e. that object is unique, but it can contain references to non-unique objects (not-transitive). Though this makes construction, etc, easier, there are times you'd like to have non-unique members which are protected by the uniqueness of the parent. This adds a 'owned' type, which starts to run into Escape analysis issues. Then there's ref unique, which is an odd duck: first the unique is moved into the function, and then on scope(exit) it is moved back out. This subtly comes into play with things like tasks/futures. 
Of course, there's also 'lent', which allows unique to be used with a wider range of functions. 2) To me, unique is all about performance. Shared can do all the same things; it just costs taking a thin-lock to do so, which is pretty cheap. Having it as a type (instead of its current form as a library template) does have some integration benefits with the lent system, but that's about it. BTW, shared isn't implemented yet (beyond basic keyword recognition).
Re: Unique as a transitive type?
Jason House wrote: Andrei has stated previously that unique was left out of the type system because it added little value to the const system. Now that shared and multithreading are here, unique has more value. I have two basic questions: 1. What would make unique difficult to add? Move semantics: without them, merely passing a unique value type to a function performs a copy. 2. What benefits do you foresee? Message passing could be done without needless copying and with a reasonable degree of safety. I've done this before in C++ by storing the reference in a modified shared_ptr (so it can live in containers), then asserting is_unique() and transferring the reference to an auto_ptr when passing into the message. Kind of messy, but it works well enough.
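The hand-off Sean describes in C++ (assert the reference is unique, then transfer it into the message) can be sketched as a small D analogue: the sender's reference is nulled on release, so at most one live reference exists. The names are invented for the example; this is not std.typecons or Sean's actual code:

```d
// Hedged sketch: emulating 'unique' by moving a reference into a message
// and nulling the source, so the payload is never copied or shared.
class Message { int payload; this(int p) { payload = p; } }

struct UniqueRef
{
    private Message m;

    // Take ownership; the argument should be the only live reference.
    this(Message msg) { m = msg; }

    // Move out: hand the reference over and drop ours.
    Message release()
    {
        auto tmp = m;
        m = null;
        return tmp;
    }
}

void main()
{
    auto u = UniqueRef(new Message(42));
    Message delivered = u.release();   // ownership transferred, no copy
    assert(u.m is null);               // the sender no longer holds it
    assert(delivered.payload == 42);
}
```

This is exactly the "zero out the source when done" move that Jason asks about in the follow-up: the transfer is a pointer assignment plus a null write, not a copy of the payload.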
Re: Unique as a transitive type?
Sean Kelly Wrote: Jason House wrote: Andrei has stated previously that unique was left out of the type system because it added little value to the const system. Now that shared and multithreading are here, unique has more value. I have two basic questions: 1. What would make unique difficult to add? Move semantics. Just passing a unique value type to a function, a copy is performed. Why would copying occur? Can't a function simply pass it by reference and then zero out that register when done? 2. What benefits do you forsee? Message passing could be done without needless copying and with a reasonable degree of safety. I've done this before in C++ by storing the reference in a modified shared_ptr (so it can live in containers), then asserting is_unique() and transferring the reference to an auto_ptr when passing into the message. Kind of messy, but it works well enough.
Re: Unique as a transitive type?
Jason House wrote: Sean Kelly Wrote: Jason House wrote: Andrei has stated previously that unique was left out of the type system because it added little value to the const system. Now that shared and multithreading are here, unique has more value. I have two basic questions: 1. What would make unique difficult to add? Move semantics. Just passing a unique value type to a function, a copy is performed. Why would copying occur? Can't a function simply pass it by reference and then zero out that register when done? Technically, yes. I meant more from a language standpoint. A copy theoretically occurs when returning a value from a function as well.
Re: Operator overloading, structs
bearophile wrote: - In #2 D doesn't have a standard method that returns true/false. In Python2.6 such method is named __nonzero__ and in python3 it's named __bool__. No one has commented about that, but I think that having a way to overload cast(bool)foo is important. I use it in Python and I have shown why and how it can be used in D too. It's an easy thing to do and I think it's safe. It's definitely required for completeness. In C++ there's a hack to do it safely (you return a pointer to a private member class). Still, I wonder if D could simply do something like defining that for classes: if (x) is always transformed into if ( x!=0 ) if (!x) is always transformed into if ( x==0 ) Don: D1 has opCast, but here I think not even a[cast(long)i] works, because BigInt doesn't define it yet. When a big int can't be converted to long, it can throw an exception. Yes, that's not a bad idea. But eventually an implicit cast will be better (even if a bit less safe) in D2. On second thoughts, y = x.toLong or y = to!(long)(x) is probably better. Casts are evil, implicit casts even more so. Not toString(), though. You MUST be able to specify if you want leading zeros, and if you want hex or decimal. toString() is for the default case, when you just want the decimal number with no leading zeros. Then you can add other methods to output hex and all you want. (In what situations do you want to print leading zeros of a big multiprecision integral value? I have never faced such need so far). BigFloat, for example.
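For reference, D2 did later grow exactly the hook being asked for here. A sketch of a BigInt-like struct opting into if (x), using opCast with a bool target (syntax that postdates this thread):

```d
// Sketch: overloading the boolean conversion so "if (i)" works on a
// user-defined type, implementing the if (x) => if (x != 0) rule
// Don suggests. Illustrative type, not Tango's BigInt.
struct BigIntish
{
    long value;

    bool opCast(T : bool)() const
    {
        return value != 0;
    }
}

void main()
{
    auto i = BigIntish(1);
    if (i)            // lowered to cast(bool) i
        i.value++;
    assert(i.value == 2);
}
```

This is the D counterpart of Python's __bool__/__nonzero__ mentioned earlier in the thread, and because the target type is spelled out, it avoids the accidental-integer-conversion problem that the C++ "pointer to a private member" hack exists to work around.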
Re: legal identifier check
BCS wrote: Hello Saaa, You have to write it yourself. Here's a good starting point: http://www.digitalmars.com/d/1.0/lex.html#identifier Yes, that was my starting point and it seemed quite complex, thus my question :) I think I'll stay with my simple check for now as it isn't really necessary to be as strict as D's identifiers. Just thought that if there was an easy check I'd implement that. Thanks anyways everybody. if you are only working with ASCII: use the regex `_A-Za-z[_A-Za-z0-9]*` Shouldn't that be [_A-Za-z][_A-Za-z0-9]*? Jerome -- mailto:jeber...@free.fr http://jeberger.free.fr Jabber: jeber...@jabber.fr signature.asc Description: OpenPGP digital signature
Re: legal identifier check
Reply to Jérôme, BCS wrote: Hello Saaa, You have to write it yourself. Here's a good starting point: http://www.digitalmars.com/d/1.0/lex.html#identifier Yes, that was my starting point and it seemed quite complex, thus my question :) I think I'll stay with my simple check for now as it isn't really necessary to be as strict as D's identifiers. Just thought that if there was an easy check I'd implement that. Thanks anyways everybody. if you are only working with ASCII: use the regex `_A-Za-z[_A-Za-z0-9]*` Shouldn't that be [_A-Za-z][_A-Za-z0-9]*? Jerome Oops :(
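With the brackets restored, the check is also easy to hand-roll without a regex engine; a D sketch (ASCII only and no keyword check, which matches the relaxed requirements stated above):

```d
// ASCII-only equivalent of the corrected regex [_A-Za-z][_A-Za-z0-9]*.
// Deliberately ignores D's Unicode identifier rules and keywords.
bool isIdentStart(char c)
{
    return c == '_' || ('A' <= c && c <= 'Z') || ('a' <= c && c <= 'z');
}

bool isLegalIdentifier(string s)
{
    if (s.length == 0 || !isIdentStart(s[0]))
        return false;
    foreach (c; s[1 .. $])
        if (!isIdentStart(c) && !('0' <= c && c <= '9'))
            return false;
    return true;
}

unittest
{
    assert(isLegalIdentifier("_foo42"));
    assert(!isLegalIdentifier("9lives"));  // may not start with a digit
    assert(!isLegalIdentifier(""));
}
```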
Re: const confusion
On Sunday, 2009-05-31, at 15:36 -0400, Jarrett Billingsley wrote: On Sun, May 31, 2009 at 3:26 PM, Witold Baryluk bary...@smp.if.uj.edu.pl wrote: Horrible. How do I ensure constness of the data and still have the possibility of changing the references held in local variables? Rebindable. http://www.digitalmars.com/d/2.0/phobos/std_typecons.html#Rebindable

Thanks. I was already thinking about implementing something like this. It is very thin, and probably doesn't eat even a single byte more than the original reference. So generally we need to cheat: a union with const and non-const versions + opAssign/opDot, and some hidden casts. If everybody is doing this, why not. The only problem is that I need to make some wrappers for it:

alias Rebindable!(C) CC;

First try:
auto c1 = CC(new C(1));
auto c2 = CC(new C(2, c1)); // oops, doesn't work
c2 = c2.b();

Second try:
auto c1 = CC(new C(1));
auto c2 = CC(new C(2, c1.opDot())); // ok, works
c2 = c2.b();

Define some function on the original data:
int something(in C c) { return c.a; }
something(c2);         // oops, doesn't work
something(c2.opDot()); // ok, works

So now I need to overload all my functions to also support Rebindable!(C), where I unwrap the object and call the original function? The same with constructors. Can't this be done more simply? As I remember there was something like opCast (for explicit casts)? Maybe Rebindable should have it, casting to the original type (with const)?
Re: const confusion
On Mon, Jun 1, 2009 at 4:01 PM, Witold Baryluk bary...@smp.if.uj.edu.pl wrote: [...] So now I need to overload all my functions to also support Rebindable!(C), where I unwrap the object and call the original function? The same with constructors. Can't this be done more simply? As I remember there was something like opCast (for explicit casts)? Maybe Rebindable should have it, casting to the original type (with const)?

This seems like a perfect application for opImplicitCast, a feature that has been bandied about for years and which Andrei seems to have hinted at coming soon. Using implicit casts, you would be able to make a perfectly transparent wrapper type such as Rebindable. For now, you're stuck overloading on Rebindable!(T). :\
Re: const confusion
On Mon, 01 Jun 2009 16:01:04 -0400, Witold Baryluk bary...@smp.if.uj.edu.pl wrote: [...] Can't this be done more simply? As I remember there was something like opCast (for explicit casts)? Maybe Rebindable should have it, casting to the original type (with const)?

You are running into limitations that are planned to be fixed. For example, Rebindable probably shouldn't use opDot anymore... it should use alias this. With opDot, you don't have implicit casting back to the original type, but alias this provides that. Not sure if aliasing a union member has been tested... Also, I believe Rebindable!(const C) is what you really want (I've argued in the past that Rebindable should just assume that its type should be const). Rebindable!(T) is just an alias to T if T is not const, which is IMO absolutely useless.

Another thing I just noticed, which probably should be fixed: there is an alias for get which gets the original item. get is a pretty common member name; I don't think it should be taken over by Rebindable. In fact, I think Rebindable needs almost a rewrite given the recent developments of D2. The goal of Rebindable is to transparently implement the sort of tail-const behavior you want, without any of the pain you are currently experiencing. If it doesn't work seamlessly (except where you wish to explicitly declare that this is a rebindable reference), then it's not finished. Andrei? -Steve
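The transparent-wrapper behavior the thread is after can be sketched in Python terms: attribute reads are forwarded to the wrapped object (playing the role of opDot/alias this), writes through the wrapper are refused (the payload acts const), and the local name itself can still be rebound. This is only an analogy to D's Rebindable, with illustrative class names:

```python
# Analogy (not D semantics): a wrapper whose payload is read-only but whose
# binding can be replaced, mimicking the tail-const goal of Rebindable.
class Rebindable:
    def __init__(self, obj):
        object.__setattr__(self, "_obj", obj)

    def __getattr__(self, name):
        # forward reads to the wrapped object, like opDot / alias this
        return getattr(self._obj, name)

    def __setattr__(self, name, value):
        # the payload is treated as const: refuse mutation through the wrapper
        raise AttributeError("wrapped object is read-only")

class C:
    def __init__(self, a):
        self.a = a

c = Rebindable(C(1))
print(c.a)            # 1: reads pass through transparently
c = Rebindable(C(2))  # the *reference* can be rebound freely
print(c.a)            # 2
```

The pain point in the D thread is precisely that this forwarding is not fully transparent with opDot: calls expecting a C reject a Rebindable!(C), which is what alias this (or implicit casts) would fix.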
[Issue 3034] Template instance name wrongly mangled as LName
http://d.puremagic.com/issues/show_bug.cgi?id=3034 Shin Fujishiro rsi...@gmail.com changed: Attachment #384 is obsolete: changed from 0 to 1. --- Comment #1 from Shin Fujishiro rsi...@gmail.com 2009-06-01 03:07:57 PDT --- Created an attachment (id=387) -- (http://d.puremagic.com/issues/attachment.cgi?id=387) Fix the problem (DMD 2.030) I forgot to deal with TemplateMixin. It should be mangled as an LName. -- Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email --- You are receiving this mail because: ---
[Issue 3043] New: Template symbol arg cannot be demangled
http://d.puremagic.com/issues/show_bug.cgi?id=3043 Summary: Template symbol arg cannot be demangled Product: D Version: 2.030 Platform: All OS/Version: All Status: NEW Keywords: patch, spec Severity: minor Priority: P2 Component: DMD AssignedTo: bugzi...@digitalmars.com ReportedBy: rsi...@gmail.com Created an attachment (id=388) -- (http://d.puremagic.com/issues/attachment.cgi?id=388) Patch (DMD 2.030)

=== Problem ===

Under the current spec, a template symbol argument is mangled to an LName:

TemplateArg:
    T Type       // type argument
    V Type Value // value argument
    S LName      // symbol argument

LName:
    Number Name

This rule is troublesome for demangling. When Name is a QualifiedName (e.g. a template symbol), which itself starts with a Number, there will be contiguous Numbers in the mangled argument:

S Number Number Name Number Name

A demangler will not be able to demangle such input correctly. For example, this code

module test;
struct Temp(alias a) {}
template sym() {}
pragma(msg, Temp!(sym).mangleof);

prints 4test20__T4TempS94test3symZ4Temp. Here sym is mangled to S94test3sym; the Number is 9 and the Name is 4test3sym. But a demangler will recognize the Number and the Name as 94 and test3sym, respectively.

=== Proposal ===

A template symbol argument may be (a) a template declaration, template instance, template mixin, package, or module, or (b) a variable or function. (a) is mangled to a QualifiedName and (b) is mangled to a MangledName. These two groups should be treated differently. My proposal is this:

TemplateArg:
    S TemplateSymbolArg

TemplateSymbolArg:
    QualifiedName // (a) qualified name
    M LName       // (b) mangled var/func name (_D, _Z, etc.)

This grammar does not generate contiguous Numbers. The prefix M is necessary to avoid a same-mangled-name collision between QualifiedName and LName. The attached patch modifies DMD 2.030 so that template symbol arguments are mangled with this rule.
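The ambiguity described in issue 3043 is easy to reproduce with a toy parser for length-prefixed names (a sketch, not DMD's actual demangler): a greedy digit scan cannot tell where the outer Number ends and the inner QualifiedName's Number begins.

```python
# Hypothetical mini-parser for LName ("Number Name") illustrating the bug
# in issue 3043. This is an illustration, not DMD's real demangling code.
def read_lname(s):
    """Read one length-prefixed name; return (name, rest)."""
    i = 0
    while i < len(s) and s[i].isdigit():
        i += 1  # greedy digit scan, as a demangler must do
    n = int(s[:i])
    return s[i:i + n], s[i + n:]

# Unambiguous simple case: "3sym" -> "sym"
print(read_lname("3sym"))  # ('sym', '')

# Ambiguous case from the report: "94test3sym" was produced as length 9
# followed by the QualifiedName "4test3sym" (i.e. test.sym), but the greedy
# scan reads the Number as 94 and recovers the wrong name.
print(len("4test3sym"))          # 9, the intended length
print(read_lname("94test3sym"))  # ('test3sym', '') -- not the intended parse
```

This is exactly why the proposal separates QualifiedName arguments from plain LNames: the grammar must never place two Numbers back to back.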
[Issue 3044] New: Bus error compiling the following code
http://d.puremagic.com/issues/show_bug.cgi?id=3044 Summary: Bus error compiling the following code Product: D Version: 2.030 Platform: Other OS/Version: Mac OS X Status: NEW Keywords: ice-on-valid-code Severity: normal Priority: P2 Component: DMD AssignedTo: bugzi...@digitalmars.com ReportedBy: s...@invisibleduck.org

import std.typecons;

void fn(T)( T val )
{
    T* tmp = new T;
}

void main()
{
    fn( tuple(5) );
}
[Issue 3045] New: Can't use ref with foreach on tuple
http://d.puremagic.com/issues/show_bug.cgi?id=3045 Summary: Can't use ref with foreach on tuple Product: D Version: 2.030 Platform: Other OS/Version: Mac OS X Status: NEW Keywords: rejects-valid Severity: normal Priority: P2 Component: DMD AssignedTo: bugzi...@digitalmars.com ReportedBy: s...@invisibleduck.org This code should compile:

void fn(T...)( ref T args )
{
    foreach( ref e; args )
        e = e.init;
}

void main()
{
    int x, y;
    fn( x, y );
}