Re: Dejan Lekic created the D Developers Network on LinkedIn
Nick Sabalausky wrote: Dejan Lekic dejan.le...@gmail.com wrote in message news:pxvtdhlbncuaonfhi...@forum.dlang.org... and eventually make this group a convenient place to post important news related to D and its community, job announcements, good ideas, etc. A LinkedIn group for those who like that sort of thing is one thing, but D-related announcements, ideas, discussion, etc. belong here in the D newsgroup so they're available to everyone, not hidden behind LinkedIn's gates, and all in one unified place. We don't need to be fracturing the D community.

Nick, nobody said LinkedIn is going to be the *main place* for announcements. That would be, as you said, a bad idea. I doubt we will have any serious discussion there that can't be available on our news server. However, based on my LinkedIn experience with other FLOSS groups, it is very much possible that, for example, job announcements by future recruiters are going to be posted on LinkedIn only. You would not expect someone from a recruiting agency to go through various newsgroups and post announcements, would you? :) - I certainly would not.
Re: Dejan Lekic created the D Developers Network on LinkedIn
Jesse Phillips wrote: On Tuesday, 20 March 2012 at 19:09:59 UTC, Dejan Lekic wrote: My idea when I made it was to gather professionals who use D *in production environments*. Does that mean you shouldn't join if D only supports a product going to production, and isn't officially software used in the company?

Everybody is welcome (otherwise I would have made the group invite-only), but the idea is to have more professionals there than just enthusiasts who use D in some toy project... New D users will either become professionals after some time of evaluation and adoption, or they will give up on D. I honestly do not think the LinkedIn group needs people who forget D after a few weeks of trying. :) It is also misleading when a recruiter sees that someone is a member of the DDN who never wrote any serious piece of D code. Do you agree?
Re: D web apps: cgi.d now supports scgi
Adam D. Ruppe wrote: https://github.com/adamdruppe/misc-stuff-including-D-programming-language-web-stuff

Some docs: http://arsdnet.net/web.d/cgi.html http://arsdnet.net/web.d/cgi.d.html

The file cgi.d in there is my base library for web apps. Previously, it spoke regular CGI, FastCGI (with help from a C lib), and HTTP (with help from the netman.d and httpd.d files in that github). Now, in addition to those options, it also speaks SCGI - all by itself - and it can speak HTTP without needing helper modules. The new embedded HTTP server should work on all platforms too, not just Linux like the old one, but I haven't tested it yet. This finishes out all the major web app interfaces that I'm aware of. To use them, you write your app with a GenericMain and always communicate through the Cgi object it passes you.

===
import arsd.cgi;

void hello(Cgi cgi) {
    cgi.write("Hello, world! " ~ cgi.request("name") ~ "\n");
}

mixin GenericMain!hello;
===

And then compile:

dmd hello.d arsd/cgi.d                          # builds a CGI binary
dmd hello.d arsd/cgi.d -version=fastcgi        # FastCGI; needs the libfcgi C lib
dmd hello.d arsd/cgi.d -version=scgi           # SCGI
dmd hello.d arsd/cgi.d -version=embedded_httpd # built-in http server

The API is the same with all four options. With cgi or fastcgi, you put the binary where your web server can run it. With scgi and embedded_httpd, you run the binary and it persists as an application server. On the command line, you can use the option --port 5000, for example, to change the listening TCP port. The default for httpd right now is 8085. The default for scgi is 4000. Well, I don't have much else to say, but since it now does all four of the big interfaces easily, I thought I'd say something here. If you're interested in web programming with D, this will lay the foundation for you.

Amazing! Well done, Adam!
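To make the Cgi object's role a bit more concrete, here is a hedged sketch of a slightly bigger handler. The member names used below (requestMethod, post, setResponseContentType) are taken from cgi.d's docs as I read them, so treat the exact names and signatures as assumptions rather than confirmed API:

```d
import arsd.cgi;

// A small form round-trip: GET shows the form, POST echoes the submission.
// NOTE: real code should HTML-escape user input before writing it back.
void app(Cgi cgi) {
    cgi.setResponseContentType("text/html");
    if (cgi.requestMethod == Cgi.RequestMethod.POST) {
        cgi.write("<p>You said: " ~ cgi.post["msg"] ~ "</p>");
    } else {
        cgi.write(`<form method="POST"><input name="msg"><input type="submit"></form>`);
    }
}

mixin GenericMain!app;
```

The same source builds against all four interfaces via the -version switches above, which is the point of coding only against the Cgi object.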
Re: dcaflib
Nathan M. Swan wrote: In a post from a few weeks ago, someone mentioned terminal colors. Currently, I have one that works with bash (cmd pending) at https://github.com/carlor/dcaflib. Example code:

import dcaflib.ui.terminal;
import std.stdio;

void main() {
    fgColor = TermColor.RED;
    writeln("this is red!");
    fgColor = TermColor.BLUE;
    writeln("this is blue!");
}

Nathan, what terminals are supported? Only ANSI / VT* or some other types of terminals as well?
Adam Wilson is now a GSoC 2012 mentor!
We're very happy and honored to have Adam Wilson on board as a GSoC 2012 mentor. Adam brings solid project management experience and has a specific interest in the Mono-D project. Please join me in welcoming Adam to the ranks of GSoC mentors! Thanks, Andrei
Re: Adam Wilson is now a GSoC 2012 mentor!
On Monday, 26 March 2012 at 15:27:29 UTC, Andrei Alexandrescu wrote: We're very happy and honored to have Adam Wilson on board as a GSoC 2012 mentor. Adam brings solid project management experience and has a specific interest in the Mono-D project. Please join me in welcoming Adam to the ranks of GSoC mentors! Thanks, Andrei

Welcome Adam, and congratulations Alex. I am using Mono-D and I almost enjoy it. One thing is for sure: code lookup / IntelliSense is great in Mono-D, code outline simply rox, and Mono-D is (in this regard) light years ahead of Visual D. The sheer speed of Alex's code analyzer is just amazing. Alex? Benchmarks? But it is a GTK# and Mono based project, and this means it is ultimately a C# project. I am pretty sure that we will have a complete wxWidgets 2.9.3 binding in a few days/weeks (and we will have a TOOL to create wxWidgets 2.4, 2.5, 3.0 bindings almost automatically, incl. say Gtk 3.0 and iOS support). So, wouldn't it make more sense to ask Alex to port and enhance his code analyzer into D2 as a GSoC project, to become part of a wxD2-driven IDE? I think, yep. Despite that, Alex, thanks for Mono-D, very well done. My 2 cents, Bjoern
Re: dcaflib
On Monday, 26 March 2012 at 13:56:46 UTC, Dejan Lekic wrote: Nathan, what terminals are supported? Only ANSI / VT* or some other types of terminals as well? I've only tested it on OSX Terminal, but I read about the features on a Linux website, and have used the ANSI escape code Wikipedia article as reference, so I assume it's ANSI. NMS
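For readers wondering what "ANSI" means here: coloring boils down to writing SGR (Select Graphic Rendition) escape sequences to the terminal. A minimal self-contained sketch, not dcaflib's actual implementation (the enum values are the standard SGR foreground codes):

```d
import std.stdio;

// Standard ANSI SGR foreground color codes; works on VT100-compatible
// terminals (xterm, OS X Terminal, most Linux terminal emulators).
enum Color : int { red = 31, green = 32, blue = 34 }

void setFgColor(Color c)
{
    writef("\033[%dm", c);   // e.g. "\033[31m" switches foreground to red
}

void resetColors()
{
    write("\033[0m");        // SGR 0 resets all attributes
}

void main()
{
    setFgColor(Color.red);
    writeln("this is red!");
    setFgColor(Color.blue);
    writeln("this is blue!");
    resetColors();
}
```

Windows cmd.exe of that era does not interpret these sequences, which is presumably why cmd support is "pending" and needs the Console API instead.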
Re: Dejan Lekic created the D Developers Network on LinkedIn
Dejan Lekic dejan.le...@gmail.com wrote in message news:jkprtm$1us3$1...@digitalmars.com... Jesse Phillips wrote: On Tuesday, 20 March 2012 at 19:09:59 UTC, Dejan Lekic wrote: My idea when I made it was to gather professionals who use D *in production environments*. Does that mean you shouldn't join if D only supports a product going to production, and isn't officially software used in the company? Everybody is welcome (otherwise I would have made the group invite-only), but the idea is to have more professionals there than just enthusiasts who use D in some toy project... New D users will either become professionals after some time of evaluation and adoption, or they will give up on D. I honestly do not think the LinkedIn group needs people who forget D after a few weeks of trying. :)

No argument with that.

It is also misleading when a recruiter sees that someone is a member of the DDN who never wrote any serious piece of D code. Do you agree?

That seems to be based on the false (but disturbingly common around drooling-HR-monkey circles) myth that a candidate with, for example, 10*X amount of experience in language Y is better than a candidate with X experience in each of 10 languages. Or at least it would seem to help propagate that myth. HR-morons have this retarded idea that all programming languages are completely different from each other and have very few transferable skills, which of course, any real programmer knows to be an obvious load of complete bullshit. Besides, 9 times out of 10, the only thing those mouth-breathers in HR bother to look at is degrees (which, of course, is even more retarded).
Re: Adam Wilson is now a GSoC 2012 mentor!
On Mon, 26 Mar 2012 11:15:54 -0700, BLS bizp...@orange.fr wrote: On Monday, 26 March 2012 at 15:27:29 UTC, Andrei Alexandrescu wrote: We're very happy and honored to have Adam Wilson on board as a GSoC 2012 mentor. Adam brings solid project management experience and has a specific interest in the Mono-D project. Please join me in welcoming Adam to the ranks of GSoC mentors! Thanks, Andrei

Welcome Adam, and congratulations Alex. I am using Mono-D and I almost enjoy it. One thing is for sure: code lookup / IntelliSense is great in Mono-D, code outline simply rox, and Mono-D is (in this regard) light years ahead of Visual D. The sheer speed of Alex's code analyzer is just amazing. Alex? Benchmarks? But it is a GTK# and Mono based project, and this means it is ultimately a C# project. I am pretty sure that we will have a complete wxWidgets 2.9.3 binding in a few days/weeks (and we will have a TOOL to create wxWidgets 2.4, 2.5, 3.0 bindings almost automatically, incl. say Gtk 3.0 and iOS support). So, wouldn't it make more sense to ask Alex to port and enhance his code analyzer into D2 as a GSoC project, to become part of a wxD2-driven IDE? I think, yep. Despite that, Alex, thanks for Mono-D, very well done. My 2 cents, Bjoern

I think that the best thing that we can do right now is to focus on bringing the parser to completion. It's still missing some key features of D, especially in terms of code completion and syntax highlighting. It's also missing UFCS from 2.058, which is a pretty big deal, I think.
For a full list of tasks that Alex would like to get done, please see this list: https://github.com/aBothe/Mono-D/blob/master/MonoDevelop.DBinding/Remaining%20features.txt As to an IDE written in D, that's a HUGE project and well outside the scope of what can be accomplished in a GSoC project. It takes millions of lines of code to make a *DECENT* IDE. Not to mention that UI design is something that will always polarize the community: some basically want a glorified VIM/EMACS, while others will settle for nothing less than a Visual Studio clone, and still more people will want a radically different UI from anything previously seen (I personally am intrigued by Code Bubbles, for instance). Plus, why bother with that when we can integrate into existing solutions like MonoDevelop or Visual Studio *much* quicker? I personally think that Mono-D represents the most capable path forward for D IDEs right now; maybe later that might change as D grows, but for the moment we need a complete IDE fast, and integration can deliver that. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Re: Adam Wilson is now a GSoC 2012 mentor!
On Monday, 26 March 2012 at 19:43:56 UTC, Adam Wilson wrote: I think that the best thing that we can do right now is to focus on bringing the parser to completion. It's still missing some key features of D, especially in terms of code completion and syntax highlighting. It's also missing UFCS from 2.058, which is a pretty big deal, I think. For a full list of tasks that Alex would like to get done, please see this list: https://github.com/aBothe/Mono-D/blob/master/MonoDevelop.DBinding/Remaining%20features.txt As to an IDE written in D, that's a HUGE project and well outside the scope of what can be accomplished in a GSoC project. It takes millions of lines of code to make a *DECENT* IDE. Not to mention that UI design is something that will always polarize the community: some basically want a glorified VIM/EMACS, while others will settle for nothing less than a Visual Studio clone, and still more people will want a radically different UI from anything previously seen (I personally am intrigued by Code Bubbles, for instance). Plus, why bother with that when we can integrate into existing solutions like MonoDevelop or Visual Studio *much* quicker? I personally think that Mono-D represents the most capable path forward for D IDEs right now; maybe later that might change as D grows, but for the moment we need a complete IDE fast, and integration can deliver that.

And one of the very nice things about Mono-D is that the parser is completely standalone. It would not be difficult to integrate into Visual Studio in the future. Both are done in C#, and both are somewhat similar to write code for. Instead of making a D-specific IDE, we can just use a very nice plugin for both Visual Studio and MonoDevelop, while being able to use the same code base for the logic.
Re: Adam Wilson is now a GSoC 2012 mentor!
On 03/26/2012 01:11 PM, Kapps wrote: And one of the very nice things about Mono-D is that the parser is completely standalone. It would not be difficult to integrate into Visual Studio in the future.

Well, I am almost always on Windows (not valid for all of us). AFAIK almost everything needed to integrate D into Visual Studio is done in D (incl. the IDL stuff) - correct me if I am wrong. So yes, Alex's code analyser should fit, as a .NET assembly but also as a D shared library. Writing a state-of-the-art D2 IDE will not necessarily require a million lines of code. I am convinced that developing in wxD2*** code will be very close to what you do in wxPython, maybe even smarter. But I am losing the point. Even if Alex carries on with Mono-D during GSoC, it is a good thing. And... if we are not able to translate C# stuff into D2, then the D2 design fails.
Re: Adam Wilson is now a GSoC 2012 mentor!
On Mon, 26 Mar 2012 14:52:19 -0700, bls bizp...@orange.fr wrote: On 03/26/2012 01:11 PM, Kapps wrote: And one of the very nice things about Mono-D is that the parser is completely standalone. It would not be difficult to integrate into Visual Studio in the future.

Well, I am almost always on Windows (not valid for all of us). AFAIK almost everything needed to integrate D into Visual Studio is done in D (incl. the IDL stuff) - correct me if I am wrong. So yes, Alex's code analyser should fit, as a .NET assembly but also as a D shared library.

Yes, and IMHO that is really holding it back, because not everything that VS has to offer is available via COM. For example, anything that wants to touch VS's WPF interface directly needs to go through .NET. In the case of integrations, building the integration in anything other than the language used to build the IDE itself is intentionally tying one hand behind your back in the name of 'purity'. I support Alex's choice to use C# to build the Mono-D binding; it's the most sensible decision that can be made.

Writing a state-of-the-art D2 IDE will not necessarily require a million lines of code.

Mono is over a million, Visual Studio is almost as much as the Windows kernel (5M+ IIRC), and Eclipse... well, I don't know what they are doing wrong over there, but the bloat is epic. In other words, a good IDE is a massively complicated beast. Integrations are much quicker, and we don't have to reinvent the wheel all over the place.

I am convinced that developing in wxD2*** code will be very close to what you do in wxPython, maybe even smarter. But I am losing the point. Even if Alex carries on with Mono-D during GSoC, it is a good thing. And... if we are not able to translate C# stuff into D2, then the D2 design fails.

Actually, I'm porting the ANTLR runtime from C# to D right now. The languages are VERY similar; where the whole thing falls apart is the standard library - Phobos is brutally underpowered compared to the .NET BCL.
I wrote a List(T) class just to make the pain stop. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Re: Adam Wilson is now a GSoC 2012 mentor!
Hi there, yeah, I'm very grateful that Adam wants to be a mentor for GSoC this year. Nevertheless, I'm still not sure which feature(s) I want to focus on - since there are so many features that sound interesting but are obviously complex and time-intensive (like showing all possible methods e.g. after a string literal, so that while you're typing "asdf." all available/matching methods pop up) -- things like CTFE and pre-compile-time mixin resolution are also interesting, and surely features which are hard to fit into a relatively strict timetable. So my actual problem/goal is to fill those 3 months efficiently. My application to GSoC and other formal things are going to follow 'later on' - so I guess in a couple of days. Oh, btw, there's a new Mono-D version :D
Mono-D 0.3.5
Couple of bug fixes + new refactoring feature:

- [Expression Evaluator] Began with the expression eval stuff -- added a few class stubs
- [Resolver] Fixed 2 small completion bugs (very precise, indeed! :-D)
- [Parser] Fixed block boundaries determination bug
- [Highlighting] Small highlighting change (added __vector keyword and recolored 'mixin')
- [Building/Settings] Small usability improvement
- [Settings] Fixed saving bug
- [Generic] Add 'lib' prefix to library name when creating new (linux/mac) projects
- [Refactoring] Added renaming validation check
- [Refactoring] Finished a rough implementation of symbol import refactoring (accessible via keystroke or context menu - just hover an undefined symbol, and right-click)
- [Internal] Further code refactoring -- a lot of code could be abstracted
- [Completion] Improved method parameter insight

Article: http://mono-d.alexanderbothe.com/?p=355
Issues: https://github.com/aBothe/Mono-D/issues
Re: Adam Wilson is now a GSoC 2012 mentor!
Hi, to make it absolutely sure!! I hope that Alex's project will make it. (And as one who has worked on a concrete project with Alex, having had several private phone conversations, I am sure that Alex will deliver pretty cool stuff - most probably more than one might expect.)

On 03/26/2012 03:00 PM, Adam Wilson wrote: Mono is over a million, Visual Studio is almost as much as the Windows kernel (5M+ IIRC), and Eclipse... well, I don't know what they are doing wrong over there, but the bloat is epic. In other words, a good IDE is a massively complicated beast. Integrations are much quicker, and we don't have to reinvent the wheel all over the place.

IMO this is questionable. What do you count as required LOC? Say this is what could be done by plug-ins: SVN/Git support, database explorer, ER designer, UML designer, XML/XSL support, SOAP/REST support, etc. So the core IDE has to support a flexible doc/view model, a plug-in architecture, and source code analysis. Maybe an internal project management that supports a build/make tool. Debug support. Period. All that visual stuff, say panel docking and GUI persistence, does not have to be written from scratch - it is part of the GUI lib. Exotic stuff: you want the best-ever Ultimate Development Environment, say real-time developer collaboration/video conferencing... a piece of cake in Python (using async IO / XMPP). No rocket science at all.

I am convinced that developing in wxD2*** code will be very close to what you do in wxPython, maybe even smarter. But I am losing the point. Even if Alex carries on with Mono-D during GSoC, it is a good thing. And... if we are not able to translate C# stuff into D2, then the D2 design fails.

Actually, I'm porting the ANTLR runtime from C# to D right now. The languages are VERY similar; where the whole thing falls apart is the standard library - Phobos is brutally underpowered compared to the .NET BCL. I wrote a List(T) class just to make the pain stop.
Well, here I definitely should shut up... std.collections... Anyway, from time to time I think it would make sense to port the Mono/.NET collection stuff into D, simply to make porting of .NET code possible without too much pain. But that's another story. Thanks for being a mentor for this project.
Re: D web apps: cgi.d now supports scgi
On 3/25/12 12:43 PM, Adam D. Ruppe wrote: https://github.com/adamdruppe/misc-stuff-including-D-programming-language-web-stuff some docs: http://arsdnet.net/web.d/cgi.html http://arsdnet.net/web.d/cgi.d.html

Very nice! I'd recommend moving those two HTML pages to github's wiki, or some other wiki. If people start using your library, they can contribute explanations, example usages, etc. I also see many empty or short sections in those documents, which again I think is asking for a wiki. I'm also not sure about the format you provide for getting the code: many unrelated modules all in a single directory. If I want to start developing web apps using your framework, I need to clone that repo, figure out which files to import, etc. If all the related web stuff were in a separate repository, I could just clone it, import an "all" file, and that's it. (Well, the last point isn't really your fault; something like Jacob Carlborg's Orbit is really needed to make D code universally accessible and searchable.)
Re: avgtime - Small D util for your everyday benchmarking needs
On 3/23/12 4:11 PM, Juan Manuel Cabo wrote: On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote: Dude, this is awesome. I tend to just use time, but if I was doing anything more complicated, I'd use this. I would suggest changing the name while you still can. avgtime is not that informative a name given that it now does more than just average times. -- James Miller

Dude, this is awesome. Thanks!! I appreciate your feedback! I would suggest changing the name while you still can. Suggestions welcome!! --jm

give_me_d_average
Re: avgtime - Small D util for your everyday benchmarking needs
On Tuesday, 27 March 2012 at 00:58:26 UTC, Ary Manzana wrote: On 3/23/12 4:11 PM, Juan Manuel Cabo wrote: On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote: Dude, this is awesome. I tend to just use time, but if I was doing anything more complicated, I'd use this. I would suggest changing the name while you still can. avgtime is not that informative a name given that it now does more than just average times. -- James Miller

Dude, this is awesome. Thanks!! I appreciate your feedback! I would suggest changing the name while you still can. Suggestions welcome!! --jm

give_me_d_average

Hahahah, naahh, I prefer avgtime or timestats, because timestab would autocomplete to timestats. What have you been up to all this time? Thanks for mentioning D to me years ago - it stuck in my head, and last year when I started a new job I had the chance to dig into D. Cheers Ary, I hope you're doing well!! --jm
Re: D web apps: cgi.d now supports scgi
On Tuesday, 27 March 2012 at 00:53:45 UTC, Ary Manzana wrote: I'd recommend moving those two html pages to github's wiki, or some other wiki. If people start using your library they can contribute with explanations, example usages, etc.

Yeah, I started that for dom.d but haven't gotten around to much yet.

I also see many empty or short sections in those documents, which again I think is asking for a wiki.

Or for me to finally finish writing it :)

I'm also not sure about the format you provide for getting the code: many unrelated modules all in a single directory.

They aren't really unrelated; most of them work together to some extent. If you grab web.d, for instance, you also need to grab cgi.d, dom.d, characterencodings.d, sha.d, html.d, and color.d. If you are doing a database site, you can add database.d and mysql.d (or postgres.d or sqlite.d) to the list. curl.d and csv.d are nice for working with external sources of data. rtud.d depends on cgi.d for pushing real-time updates. So, all of it really does go together, but you don't necessarily need all of it. dom.d and characterencodings.d can be used independently. cgi.d has no external dependencies. Etc. They are independent, but complementary.

(well, the last point isn't really your fault, something like Jacob Carlborg's Orbit is really needed to make D code universally accessible and searchable)

I could add my build.d up there too... which offers auto-downloading and module adding, but it is kinda slow (it runs dmd twice).
Re: D web apps: cgi.d now supports scgi
On 3/27/12 10:25 AM, Adam D. Ruppe wrote: On Tuesday, 27 March 2012 at 00:53:45 UTC, Ary Manzana wrote: I'd recommend moving those two html pages to github's wiki, or some other wiki. If people start using your library they can contribute with explanations, example usages, etc. Yeah, I started that for dom.d but haven't gotten around to much yet. (snip) (well, the last point isn't really your fault, something like Jacob Carlborg's Orbit is really needed to make D code universally accessible and searchable) I could add my build.d up there too... which offers auto-downloading and module adding, but it is kinda slow (it runs dmd twice).

How slow is it compared to a developer doing it manually? :-)
Re: avgtime - Small D util for your everyday benchmarking needs
On Tuesday, 27 March 2012 at 01:19:22 UTC, Juan Manuel Cabo wrote: On Tuesday, 27 March 2012 at 00:58:26 UTC, Ary Manzana wrote: On 3/23/12 4:11 PM, Juan Manuel Cabo wrote: On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote: Dude, this is awesome. I tend to just use time, but if I was doing anything more complicated, I'd use this. I would suggest changing the name while you still can. avgtime is not that informative a name given that it now does more than just average times. -- James Miller

Dude, this is awesome. Thanks!! I appreciate your feedback! I would suggest changing the name while you still can. Suggestions welcome!! --jm

give_me_d_average

Hahahah, naahh, I prefer avgtime or timestats, because timestab would autocomplete to timestats. What have you been up to all this time? Thanks for mentioning D to me years ago - it stuck in my head, and last year when I started a new job I had the chance to dig into D. Cheers Ary, I hope you're doing well!! --jm

I said the name as a joke :-P I was really surprised to see you on the list! I thought, "Juanma?" How cool that you like D. I like it too, but it has some ugly parts which unfortunately I don't see changing soon... (or ever). So you are using D for work?
Re: avgtime - Small D util for your everyday benchmarking needs
On Thursday, 22 March 2012 at 17:13:58 UTC, Manfred Nowak wrote: Juan Manuel Cabo wrote: like the unix 'time' command. version(linux) is missing. -manfred

Done! It works on Windows now too (release 0.5 on github). --jm
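For readers unfamiliar with the version(linux) mechanism Manfred refers to: version blocks select platform-specific code at compile time, which is how a tool like avgtime grows Windows support alongside its Linux path. A minimal illustrative sketch:

```d
import std.stdio;

void main()
{
    // Only one branch is compiled in, chosen by the target platform.
    version (linux)
    {
        writeln("Linux-specific timing path");
    }
    else version (Windows)
    {
        writeln("Windows-specific timing path");
    }
    else
    {
        static assert(0, "unsupported platform");
    }
}
```

The static assert in the fallback branch turns "forgot to port this" into a compile error rather than a silent gap, which is presumably the failure mode the original report hit.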
Re: avgtime - Small D util for your everyday benchmarking needs
On Tuesday, 27 March 2012 at 03:39:56 UTC, Ary Manzana wrote: On Tuesday, 27 March 2012 at 01:19:22 UTC, Juan Manuel Cabo wrote: On Tuesday, 27 March 2012 at 00:58:26 UTC, Ary Manzana wrote: On 3/23/12 4:11 PM, Juan Manuel Cabo wrote: On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote: Dude, this is awesome. I tend to just use time, but if I was doing anything more complicated, I'd use this. I would suggest changing the name while you still can. avgtime is not that informative a name given that it now does more than just average times. -- James Miller

Dude, this is awesome. Thanks!! I appreciate your feedback! I would suggest changing the name while you still can. Suggestions welcome!! --jm

give_me_d_average

Hahahah, naahh, I prefer avgtime or timestats, because timestab would autocomplete to timestats. What have you been up to all this time? Thanks for mentioning D to me years ago - it stuck in my head, and last year when I started a new job I had the chance to dig into D. Cheers Ary, I hope you're doing well!! --jm

I said the name as a joke :-P [...]

ahhaha, I know you said it as a joke! --jm
Re: some regex vs std.ascii vs handcode times
On Thursday, 22 March 2012 at 04:29:41 UTC, Jay Norwood wrote: On the use of larger files... yes, that will be interesting, but for these current measurements the file reads are only taking on the order of 30ms for 20MB, which tells me they are already being cached either by Win7 or by the SSD's cache. I'll use the article instructions below and put the files being read into the cache prior to the test, so that the file read time should be small and consistent relative to the other buffer processing time inside the loops. http://us.generation-nt.com/activate-windows-file-caching-tip-tips-tricks-2130881-0.html

Thanks. I tried using a ramdisk from ImDisk, because the above article was just for caching network drives to your local disk. The first set of times is from the SSD, the second from the RAM disk, and both are about the same. So I guess Win7 is caching these file reads already. I got ImDisk for the ramdisk here: http://www.ltr-data.se/opencode.html/#ImDisk

These are the times for the ImDisk reads (still executing from the G hard drive, but reading from the F RAM disk):

G:\d\a7\a7\Release>wctest f:\al*.txt
finished wcp_nothing!        time: 1 ms
finished wcp_whole_file!     time: 31 ms
finished wcp_byLine!         time: 525 ms
finished wcp_byChunk!        time: 22 ms
finished wcp_lcByChunk!      time: 33 ms
finished wcp_lcDcharByChunk! time: 30 ms
finished wcp_lcRegex!        time: 141 ms
finished wcp_lcCtRegex!      time: 104 ms
finished wcp_lcStdAlgoCount! time: 139 ms
finished wcp_lcChar!         time: 37 ms
finished wcp_wcPointer!      time: 121 ms
finished wcp_wcCtRegex!      time: 1269 ms
finished wcp_wcRegex!        time: 2908 ms
finished wcp_wcRegex2!       time: 2693 ms
finished wcp_wcSlices!       time: 179 ms
finished wcp_wcStdAscii!     time: 222 ms

This is reading from the SSD (Intel 510 series, 120GB):

G:\d\a7\a7\Release>wctest h:\al*.txt
finished wcp_nothing!        time: 1 ms
finished wcp_whole_file!     time: 32 ms
finished wcp_byLine!         time: 518 ms
finished wcp_byChunk!        time: 23 ms
finished wcp_lcByChunk!      time: 33 ms
finished wcp_lcDcharByChunk! time: 31 ms
finished wcp_lcRegex!        time: 159 ms
finished wcp_lcCtRegex!      time: 89 ms
finished wcp_lcStdAlgoCount! time: 144 ms
finished wcp_lcChar!         time: 34 ms
finished wcp_wcPointer!      time: 118 ms
finished wcp_wcCtRegex!      time: 1273 ms
finished wcp_wcRegex!        time: 2889 ms
finished wcp_wcRegex2!       time: 2688 ms
finished wcp_wcSlices!       time: 175 ms
finished wcp_wcStdAscii!     time: 220 ms

I added the source and the test text files on github: https://github.com/jnorwood/wc_test
Issue with module destructor order
Hi, I came across an issue with the order in which druntime calls the module destructors. I created a small repro case for it:

file a.d:

module a;
import logger;
shared static this() { log("a init"); }
shared static ~this() { log("a deinit"); }

file b.d:

module b;
import mix;
class Foo { mixin logIt; }

file logger.d:

module logger;
import std.stdio;
shared static this() { writefln("logger init"); }
void log(string message) { writefln(message); }
shared static ~this() { writefln("logger deinit"); }

file main.d:

module main;
import std.stdio;
import a;
import b;
int main(string[] argv) {
    auto foo = new Foo();
    writefln("main");
    return 0;
}

file mix.d:

module mix;
public import logger;
mixin template logIt() {
    static shared this() { log(typeof(this).stringof ~ " init"); }
    static shared ~this() { log(typeof(this).stringof ~ " deinit"); }
}

Compile with: dmd a.d b.d logger.d main.d mix.d -oftest.exe

Running gives the following output:

logger init
a init
shared(Foo) init
main
a deinit
logger deinit
shared(Foo) deinit

That means the logger module destructor is called before the b module actually gets destructed, which leads to the logger being used in a destructed state. In my real-world case this leads to a file operation on a closed file handle. Is this intended behaviour, or is this a bug? I assume this happens because of the mixin template and the public import. I'm using dmd 2.058. -- Kind regards, Benjamin Thaut
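For contrast, the documented rule is that druntime runs module constructors in import-dependency order and destructors in the reverse order, so an importing module should always be torn down before the modules it depends on. A minimal two-module sketch of the expected behaviour (illustrative, not taken from Benjamin's repro):

```d
// logger.d
module logger;
import std.stdio;
shared static this()  { writeln("logger init");   }
shared static ~this() { writeln("logger deinit"); }

// app.d -- imports logger, so logger is constructed first, destructed last
module app;
import logger;
shared static this()  { writeln("app init");   }
shared static ~this() { writeln("app deinit"); }

// Expected run output:
//   logger init
//   app init
//   app deinit
//   logger deinit
```

In the repro above, b depends on logger only transitively (b imports mix, which publicly imports logger), and the mixed-in destructor is emitted into b; the reported output suggests that this transitive edge is not being honored when ordering the destructors, which is exactly the question Benjamin raises.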
Re: How to use D for cross platform development?
I am using D for cross-platform development. I recently implemented C wrappers for D code, and it works fine (Mac OS X). I could also create a Python module that consists of both D and C code (the C code is really just the wrapper for the module's functionality, which is entirely in D). It also works with Lua. I think the decision to make C interoperability part of the language was a very, very good idea. The dream of every cross-platform developer.
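The mechanism behind this is D's extern (C) linkage: a D function exported with a C signature can be called from anything that speaks the C ABI, which is what makes Python/Lua wrapping possible. A hedged sketch (the names here are illustrative, not from the poster's project):

```d
import core.runtime;

// Callable from C as: int square(int);
extern (C) int square(int x)
{
    return x * x;
}

// The host application (e.g. the Python interpreter loading this as a
// shared library) must start and stop the D runtime around any D calls,
// so expose C-callable hooks for that:
extern (C) void mylib_init()   { Runtime.initialize(); }
extern (C) void mylib_deinit() { Runtime.terminate(); }
```

From the C side one declares the three prototypes in a header, calls mylib_init() once at load time, and the GC, module constructors, etc. behave normally inside the D code.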
Issue 3789, structs equality
Issue 3789 is an enhancement request; I think it fixes one small but quite important problem in D's design. The situation is shown by this simple code:

struct String { char[] data; }
void main()
{
    auto foo = String("foo".dup);
    auto bar = String("foo".dup);
    assert(bar !is foo, "structs aren't the same bit-wise");
    assert(bar == foo, "oops, structs aren't equal");
}

The D Zen says D is designed to be safe by default and to perform unsafe (and faster) things on request. Not comparing the strings as strings in the code above breaks the Principle of Least Astonishment, so it breaks that rule. An acceptable alternative to fixing Bug 3789 is statically disallowing the equality operator (==) in such cases (or even in all cases). D has the is operator for the situations where you want to perform bitwise comparison, for structs too. For the other situations where I use == among structs, I want it to do the right thing, like comparing the contained strings correctly instead of arbitrarily using bitwise comparison of the sub-struct that represents the string. There is already a patch for this, from the extra-good Kenji Hara: https://github.com/D-Programming-Language/dmd/pull/387 Making == behave like is for structs means using one operator for the purpose of another, and it has caused some bugs in my code. And it will cause bugs in D code to come. Another example, reduced/modified from a real bug in a program of mine:

import std.stdio;
struct Foo { int x; string s; }
void main()
{
    int[Foo] aa;
    aa[Foo(10, "hello")] = 1;
    string hel = "hel";
    aa[Foo(10, hel ~ "lo")] = 2;
    writeln(aa);
}

Here D defines a hashing for the Foo struct, but it uses the standard == to compare the struct keys. So the output is this, which I believe is what almost no one will ever want:

[Foo(10, "hello"):1, Foo(10, "hello"):2]

Bye, bearophile
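Until something like Kenji's patch is merged, a hedged workaround is to give the struct a hand-written opEquals that compares each field with its own ==. This is only a sketch built from the String example above (the exact opEquals signature D accepts has varied between compiler versions):

```d
import std.stdio;

struct String
{
    char[] data;

    // Member-wise equality: compare the array contents with ==,
    // instead of the bitwise (ptr, length) comparison the compiler
    // falls back to by default.
    bool opEquals(const String rhs) const
    {
        return data == rhs.data;
    }
}

void main()
{
    auto foo = String("foo".dup);
    auto bar = String("foo".dup);
    assert(bar !is foo);  // distinct allocations, not bitwise identical
    assert(bar == foo);   // but equal contents, thanks to opEquals
    writeln("both asserts pass");
}
```

The same idea applies to the associative-array example: defining opEquals (and a matching toHash) on Foo makes the two "hello" keys collapse into one entry.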
Re: reading formatted strings: readf(%s, stringvar)
First of all, thank you very much for your assistance. On Sunday, 25 March 2012 at 15:04:30 UTC, Ali Çehreli wrote: On 03/25/2012 06:00 AM, Tyro[17] wrote: I am trying to figure out the cause of my problem in the following post: http://forum.dlang.org/thread/qfbugjkrerfboqhvj...@forum.dlang.org Sorry that I missed your question there. :( and encountered something peculiar about reading strings. Whenever a distinct terminator is indicated in the input format (e.g. "%s@", @ being the terminator), readf() leaves the terminator on the input buffer after reading the data. If no terminator is specified, the only way to indicate end of input is to use Ctrl-D (Ctrl-Z on Windows); however, that causes the eof indicator to be set to true and the stream is marked as empty for all future attempts to access stdin. void main(string[] args) { string s1; double d; string s2; writeln("Enter a @ terminated string (multiline ok):"); readf(" %s@", s1); auto arr = s1.split(); if (!stdin.eof()) { writeln("The stream is not empty."); } else { writeln("The stream is empty."); } writeln("Enter another string (terminated with Ctrl-D|Ctrl-Z):"); I am not sure about the Ctrl-D|Ctrl-Z part though. Since it terminates the input, the program should not be able to read any more characters. readf(" %s", s2); // No matter how many read attempts I advise reading the string by readln(). You can call chomp() to get rid of whitespace around it: while (s2.length == 0) { s2 = chomp(readln()); } You can achieve the same with: readf(" %s\n", s2); My goal, however, is not to read one line of information. Rather, it is to read multiple lines of information from standard input. I get close to being able to do so if I don't include \n as part of my format string, or if I change your suggestion to while (!stdin.eol()) { s2 = chomp(readln()); } but again I run into the same predicament as before: a need to close the stream with Ctrl-D/Ctrl-Z. Andrew
Re: Getting around the non-virtuality of templates
On Sun, 25 Mar 2012 18:36:08 -0400, Stewart Gordon smjg_1...@yahoo.com wrote: I'm coming up against some interesting challenges while porting stuff in my utility library to D2. Firstly, D2 uses opBinary and opOpAssign, rather than the operator-specific op* and op*Assign. While the latter still work, they aren't mentioned in the current D2 docs. Which would imply that they're on the way out; however, there's no mention at https://github.com/D-Programming-Language/d-programming-language.org/blob/master/deprecate.dd (See also http://d.puremagic.com/issues/show_bug.cgi?id=7779 ) Still, it seems clear that opBinary/opOpAssign is the D2 way of doing it. But it has the drawback that, because it's a template, it isn't virtual. One way around it is to keep the D1-style op functions and make opUnary/opBinary/opOpAssign call these. But is there a better way? I have definitely had issues with this. In dcollections, I have versions of opBinary commented out because, at the time of writing, templates weren't allowed in the D compiler. I filed this bug: http://d.puremagic.com/issues/show_bug.cgi?id=4174 Looks like it hasn't been closed yet... So for now, I use the undocumented old-style functions. One other thing that this wrapper method loses is covariance, which I use a lot in dcollections. I haven't filed a bug on it, but there is at least a workaround for this one -- the template can capture the type of this from the call site as a template parameter. The other isn't a D2-specific issue, though D2 increases its significance. I have a method with the signature Set opAnd(bool delegate(Element) dg) I would like to enable a user of the library to pass in a delegate whose parameter is any type to which Element is implicitly convertible. This could be the same type as Element with the top-level constancy changed (probably the main use case), or a type that is distinct beyond the constancy level.
Turning it into a template Set opAnd(E)(bool delegate(E) dg) would address this, but prevent overriding with the appropriate code for each set implementation. What I would do is this (assuming template interfaces worked):

Set opAnd(E)(bool delegate(E) dg) if (is(E == Element))
{
    // call protected virtual opAnd equivalent which takes delegate of Element
}

Set opAnd(E)(bool delegate(E) dg) if (!is(E == Element) && implicitlyConvertsTo!(E, Element))
{
    bool _dg(Element e) { return dg(e); }
    // call protected virtual opAnd equivalent with _dg
}

Note, with proper delegate implicit conversions, you could probably get some better optimization (including delegates that only differ by const) by checking whether the delegate implicitly converts instead of the element. -Steve
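The forward-to-a-virtual-method idea can be sketched as a complete program like this. The names Set, Element, opAndImpl and HashSet are invented for illustration; dcollections' real interfaces differ:

```d
import std.stdio;

class Set
{
    alias Element = int;  // stand-in element type for the sketch

    // Non-virtual template entry point required by the D2 operator
    // protocol; it forwards immediately to a virtual method.
    Set opBinary(string op)(bool delegate(Element) dg)
        if (op == "&")
    {
        return opAndImpl(dg);
    }

protected:
    // Virtual: each set implementation overrides this with real logic.
    Set opAndImpl(bool delegate(Element) dg)
    {
        return this;
    }
}

class HashSet : Set
{
protected:
    override Set opAndImpl(bool delegate(Element) dg)
    {
        writeln("HashSet filtering");  // implementation-specific work
        return this;
    }
}

void main()
{
    Set s = new HashSet;
    bool delegate(int) dg = x => x > 0;
    auto r = s & dg;  // the template dispatches virtually to HashSet
}
```

The template itself is instantiated per static type, but since it only forwards, covariant and overridden behavior lives entirely in the virtual opAndImpl.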
Re: reading formatted strings: readf(%s, stringvar)
On 3/26/12 5:55 AM, Tyro[17] wrote: You can achieve the same with: readf(" %s\n", s2); My goal, however, is not to read one line of information. Rather, it is to read multiple lines of information from standard input. I get close to being able to do so if I don't include \n as part of my format string, or if I change your suggestion to while (!stdin.eol()) { s2 = chomp(readln()); } but again I run into the same predicament as before: a need to close the stream with Ctrl-D/Ctrl-Z. I made the decision for the current behavior while implementing readf. Basically I tried to avoid what I think was a mistake of scanf, i.e. that of stopping string reading at the first whitespace character, which is fairly useless. Over the years scanf was improved with %[...], which allows reading strings containing any characters in a set. Anyway, if I understand correctly, there's no way to achieve what you want unless you read character-by-character and define your own control character. There's no out-of-band character that means "end of this input, but not that of the file". Andrei
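A minimal sketch of the character-by-character approach Andrei describes, using C's getchar and '~' as an arbitrary, made-up control character (any character not needed in the data would do):

```d
import core.stdc.stdio : EOF, getchar;
import std.stdio : writeln;

void main()
{
    // '~' stands in for a user-defined "end of this input" marker
    // that terminates one chunk without ending the whole stream.
    char[] buf;
    int c;
    while ((c = getchar()) != EOF && c != '~')
        buf ~= cast(char) c;
    writeln("chunk: ", buf);
    // stdin is still open here (unless EOF was hit), so the program
    // can loop and read further chunks the same way.
}
```

This sidesteps the Ctrl-D/Ctrl-Z problem entirely, at the cost of reserving one character as the in-band terminator.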
Re: Use tango.core.Atomic.atomicLoad and atomicStore from Tango
Can somebody tell me what is wrong in my code?
Re: Regex performance
On Sunday, 25 March 2012 at 16:31:40 UTC, James Blewitt wrote: I'm currently trying to figure out what I'm doing differently in my original program. At this point I am assuming that I have an error in my code which causes the D program to do much more work than its Ruby counterpart (although I am currently unable to find it). When I know more I will let you know. James Blewitt

That was the same type of thing I was seeing with very simple regex expressions. The regex was on the order of 30 times slower than hand code for finding words in strings. The ctRegex is on the order of 13x slower than hand code. The times below are from parallel processing on 100MB of text files, just finding the word boundaries. I uploaded those tests to https://github.com/jnorwood/wc_test I believe in all these cases the files are being cached by the OS, since I was able to see the same measurements from a ramdisk created with imdisk. So in these cases the file reads are about 30ms of the result. The rest is CPU time, finding the words.

This is with the default 7 threads:
finished wcp_wcPointer! time: 98 ms
finished wcp_wcCtRegex! time: 1300 ms
finished wcp_wcRegex! time: 2946 ms
finished wcp_wcRegex2! time: 2687 ms
finished wcp_wcSlices! time: 157 ms
finished wcp_wcStdAscii! time: 225 ms

This is processing the same data with 1 thread:
finished wcp_wcPointer! time: 188 ms
finished wcp_wcCtRegex! time: 2219 ms
finished wcp_wcRegex! time: 5951 ms
finished wcp_wcRegex2! time: 5502 ms
finished wcp_wcSlices! time: 318 ms
finished wcp_wcStdAscii! time: 446 ms

And this is processing the same data with 13 threads:
finished wcp_wcPointer! time: 93 ms
finished wcp_wcCtRegex! time: 1110 ms
finished wcp_wcRegex! time: 2531 ms
finished wcp_wcRegex2! time: 2321 ms
finished wcp_wcSlices! time: 136 ms
finished wcp_wcStdAscii! time: 200 ms

The only change in the program that is uploaded is to add the suggested defaultPoolThreads(13); at the start of main to change the ThreadPool default thread count.
Re: parallel optimizations based on number of memory controllers vs cpus
On Friday, 23 March 2012 at 13:56:09 UTC, Timon Gehr wrote: On program startup, do: ThreadPool.defaultPoolThreads(14); // or 13

Yes, thank you. I just tried adding that. The gains aren't scalable in this particular test, which is apparently dominated by CPU processing, but even here you can see incremental improvements at 13 vs 7 threads on all the numbers. I'd probably have to identify operations that were being limited by memory accesses in order to see the type of gains stated in that other app.

This is with the default 7 threads:
finished wcp_wcPointer! time: 98 ms
finished wcp_wcCtRegex! time: 1300 ms
finished wcp_wcRegex! time: 2946 ms
finished wcp_wcRegex2! time: 2687 ms
finished wcp_wcSlices! time: 157 ms
finished wcp_wcStdAscii! time: 225 ms

This is processing the same data with 1 thread:
finished wcp_wcPointer! time: 188 ms
finished wcp_wcCtRegex! time: 2219 ms
finished wcp_wcRegex! time: 5951 ms
finished wcp_wcRegex2! time: 5502 ms
finished wcp_wcSlices! time: 318 ms
finished wcp_wcStdAscii! time: 446 ms

And this is processing the same data with 13 threads:
finished wcp_wcPointer! time: 93 ms
finished wcp_wcCtRegex! time: 1110 ms
finished wcp_wcRegex! time: 2531 ms
finished wcp_wcRegex2! time: 2321 ms
finished wcp_wcSlices! time: 136 ms
finished wcp_wcStdAscii! time: 200 ms

These were from the tests uploaded at https://github.com/jnorwood/wc_test. The only change in the program that is uploaded is to add the suggested defaultPoolThreads(13); at the start of main to change the ThreadPool default thread count.
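For reference, a self-contained sketch of the knob being discussed: std.parallelism's defaultPoolThreads must be assigned before taskPool is first used, and the value 13 below simply mirrors the runs above:

```d
import std.parallelism;
import std.stdio;

void main()
{
    // Must be set before taskPool is first touched; once the default
    // pool has been constructed, changing this has no effect on it.
    defaultPoolThreads = 13;

    // A toy parallel foreach, just to force the pool into existence.
    auto data = new int[1_000];
    foreach (i, ref x; taskPool.parallel(data))
        x = cast(int) i * 2;

    writeln("worker threads: ", taskPool.size);
}
```

The default pool size is totalCPUs - 1, which matches the "default 7 threads" seen on an 8-hardware-thread machine.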
Re: Array ops give sharing violation under Windows 7 64 bit?
Seems like you got your answer (anti-virus). I've been struggling with this when creating a temp file and then deleting it. Disabling AV isn't a solution, so I have it run the delete when the program exits :P Also, I've been using system("pause") to do waiting; ha, if the user closes the console with the X, this command reads it as "any key" and will execute the next command, which can complete before the application receives TERM. So now all my system("pause")s have a thread wait so nothing is run. MS needs to get rid of the damn file locks!
Re: Use tango.core.Atomic.atomicLoad and atomicStore from Tango
Problem has been solved by me.
Re: reading formatted strings: readf(%s, stringvar)
On 03/26/2012 04:55 AM, Tyro[17] wrote: void main(string[] args) { string s1; double d; string s2; writeln("Enter a @ terminated string (multiline ok):"); readf(" %s@", s1); auto arr = s1.split(); if (!stdin.eof()) { Ah! That's one of the problems. stdin has no idea whether there are more characters available. eof() being true is not dependable unless an attempt to read a character has been made and failed. This is the case in C and C++ as well. writeln("The stream is not empty."); } else { writeln("The stream is empty."); } writeln("Enter another string (terminated with Ctrl-D|Ctrl-Z):"); I am not sure about the Ctrl-D|Ctrl-Z part though. Since it terminates the input, the program should not be able to read any more characters. I would like to repeat: ending the stream is not a solution because you want to read more data. readf(" %s", s2); // No matter how many read attempts That's the actual problem, and ironically it is already known to you. :) Use a \n at the end of that format string. I advise reading the string by readln(). You can call chomp() to get rid of whitespace around it: while (s2.length == 0) { s2 = chomp(readln()); } You can achieve the same with: readf(" %s\n", s2); Thank you. However, that method does not remove trailing whitespace. My goal, however, is not to read one line of information. Rather, it is to read multiple lines of information from standard input. I get close to being able to do so if I don't include \n as part of my format string, or if I change your suggestion to while (!stdin.eol()) { s2 = chomp(readln()); } but again I run into the same predicament as before: a need to close the stream with Ctrl-D/Ctrl-Z.
If I understand you correctly, the following program works for me:

import std.stdio;
import std.string;

void main(string[] args)
{
    string s1;
    double d;
    string s2;
    writeln("Enter a @ terminated string (multiline ok):");
    readf(" %s@", s1);
    auto arr = s1.split();
    writeln("Enter a line of string:");
    readf(" %s\n", s2);
    writeln("Enter a decimal value:");
    readf(" %s", d);
    writeln("d = ", d);
    writeln("arr = ", arr);
    writeln("s = ", s2);
}

Ali
regex direct support for sse4 intrinsics
The SSE4 capabilities include a range mode of string matching that lets you match characters in a 16-byte string against a 16-byte set of start and stop character ranges. See the _SIDD_CMP_RANGES mode in the table. For example, the pattern in some of our examples for finding the start of a word is [a-zA-Z], and for the other characters in the word [a-zA-Z0-9]. Either of these patterns could be tested for a match on a 16-byte input in a single operation in the SSE4 engine. http://msdn.microsoft.com/en-us/library/bb531465.aspx Looking at the Microsoft intrinsics, it seems like the D ones could be more efficient and elegant-looking using D slices, since they pass the string and the length of the string as separate parameters. It would be good if the D regex processing could detect simple range-match patterns and use the SSE4 extensions when available. There is also an article from Intel where they demo the use of these instructions for XML parsing. http://software.intel.com/en-us/articles/xml-parsing-accelerator-with-intel-streaming-simd-extensions-4-intel-sse4/
Re: regex direct support for sse4 intrinsics
On 26.03.2012 22:14, Jay Norwood wrote: The SSE4 capabilities include a range mode of string matching that lets you match characters in a 16-byte string against a 16-byte set of start and stop character ranges. See the _SIDD_CMP_RANGES mode in the table. For example, the pattern in some of our examples for finding the start of a word is [a-zA-Z], and for the other characters in the word [a-zA-Z0-9]. Either of these patterns could be tested for a match on a 16-byte input in a single operation in the SSE4 engine. http://msdn.microsoft.com/en-us/library/bb531465.aspx Nice idea. Though I can assure you that the biggest problem is not in comparing things at all. Most of the time is spent accessing various lookup tables, saving/loading state, updating the match location and so on. Especially so because of Unicode. Also add the overhead of UTF decoding into the mix (which I plan to avoid). To put it bluntly: the R-T version spends around 20% of its time doing actual matching at best (averaged; it depends on the pattern), and half of this could be avoided by optimizing the framework code. The C-T version spends around 35% on matching, but the main inefficiency is still in the framework part of the code. The C-T matcher admittedly has a very simple framework. Speaking more of the run-time version of regex, it is essentially running a VM that executes instructions that do various kinds of match-this, match-that. The VM dispatch code is quite slow; optimal _threaded_ code requires either doing it in _assembly_ or having _computed_ goto in the language. The VM _dispatch_ takes up to 30% of the time in the default matcher. Looking at the Microsoft intrinsics, it seems like the D ones could be more efficient and elegant-looking using D slices, since they pass the string and the length of the string as separate parameters. It would be good if the D regex processing could detect simple range-match patterns and use the SSE4 extensions when available. There is also an article from Intel where they demo the use of these instructions for XML parsing.
http://software.intel.com/en-us/articles/xml-parsing-accelerator-with-intel-streaming-simd-extensions-4-intel-sse4/ Worth looking into. -- Dmitry Olshansky
Re: Regex performance
On 26.03.2012 20:00, Jay Norwood wrote: On Sunday, 25 March 2012 at 16:31:40 UTC, James Blewitt wrote: I'm currently trying to figure out what I'm doing differently in my original program. At this point I am assuming that I have an error in my code which causes the D program to do much more work than its Ruby counterpart (although I am currently unable to find it). When I know more I will let you know. James Blewitt That was the same type of thing I was seeing with very simple regex expressions. The regex was on the order of 30 times slower than hand code for finding words in strings.

This is a sad fact of life: the general tool can't beat highly specialized things. Ideally it can be on par though. Even in the best case ctRegex has to do a lot of things a simple == '\n' doesn't do, like storing the boundaries of a match. That's something to keep in mind. By the way, regex does a fine job on (semi-)fixed strings of length >= 3-4, often easily beating plain find/indexOf. I haven't tested the Boyer-Moore version of find; that should be faster than regex for sure.

The ctRegex is on the order of 13x slower than hand code. The times below are from parallel processing on 100MB of text files, just finding the word boundaries. I uploaded those tests to https://github.com/jnorwood/wc_test I believe in all these cases the files are being cached by the OS, since I was able to see the same measurements from a ramdisk created with imdisk. So in these cases the file reads are about 30ms of the result. The rest is CPU time, finding the words.

This is with the default 7 threads:
finished wcp_wcPointer! time: 98 ms
finished wcp_wcCtRegex! time: 1300 ms
finished wcp_wcRegex! time: 2946 ms
finished wcp_wcRegex2! time: 2687 ms
finished wcp_wcSlices! time: 157 ms
finished wcp_wcStdAscii! time: 225 ms

This is processing the same data with 1 thread:
finished wcp_wcPointer! time: 188 ms
finished wcp_wcCtRegex! time: 2219 ms
finished wcp_wcRegex! time: 5951 ms
finished wcp_wcRegex2! time: 5502 ms
finished wcp_wcSlices! time: 318 ms
finished wcp_wcStdAscii! time: 446 ms

And this is processing the same data with 13 threads:
finished wcp_wcPointer! time: 93 ms
finished wcp_wcCtRegex! time: 1110 ms
finished wcp_wcRegex! time: 2531 ms
finished wcp_wcRegex2! time: 2321 ms
finished wcp_wcSlices! time: 136 ms
finished wcp_wcStdAscii! time: 200 ms

The only change in the program that is uploaded is to add the suggested defaultPoolThreads(13); at the start of main to change the ThreadPool default thread count. -- Dmitry Olshansky
Re: Regex performance
Hello everybody, Thanks once again for the interest in my problem. I have posted the details and source code that recreates (at least for me) the poor performance. I didn't know how to post the code to the forum, so I posted it to my blog instead (see post update): http://jblewitt.com/blog/?p=462 Again, if I'm doing something stupid in my code (which is possible) then I apologise in advance. I'll take a look at the ctRegex as soon as I can. Regards, James
Re: reading formatted strings: readf(%s, stringvar)
On Monday, 26 March 2012 at 14:41:41 UTC, Andrei Alexandrescu wrote: On 3/26/12 5:55 AM, Tyro[17] wrote: You can achieve the same with: readf(" %s\n", s2); My goal, however, is not to read one line of information. Rather, it is to read multiple lines of information from standard input. I get close to being able to do so if I don't include \n as part of my format string, or if I change your suggestion to while (!stdin.eol()) { s2 = chomp(readln()); } but again I run into the same predicament as before: a need to close the stream with Ctrl-D/Ctrl-Z. I made the decision for the current behavior while implementing readf. Basically I tried to avoid what I think was a mistake of scanf, i.e. that of stopping string reading at the first whitespace character, which is fairly useless. Couldn't the state of stdin be checked upon entrance into readf and the stream reopened if it is already closed? Wouldn't that accomplish the desired effect while avoiding the pitfalls of scanf? Over the years scanf was improved with %[...], which allows reading strings containing any characters in a set. Anyway, if I understand correctly, there's no way to achieve what you want unless you read character-by-character and define your own control character. There's no out-of-band character that means "end of this input, but not that of the file". Andrei
Re: regex direct support for sse4 intrinsics
On 03/26/2012 09:10 PM, Dmitry Olshansky wrote: Speaking more of run-time version of regex, it is essentially running a VM that executes instructions that do various kinds of match-this, match-that. The VM dispatch code is quite slow, the optimal _threaded_ code requires either doing it in _assembly_ or _computed_ goto in the language. Language-enforced tail call optimization would probably work too.
Re: reading formatted strings: readf(%s, stringvar)
On Monday, 26 March 2012 at 17:34:37 UTC, Ali Çehreli wrote: On 03/26/2012 04:55 AM, Tyro[17] wrote: readf(" %s", s2); // No matter how many read attempts That's the actual problem, and ironically it is already known to you. :) Use a \n at the end of that format string. Thanks. I'll use chomp(readln()) in the future. I advise reading the string by readln(). You can call chomp() to get rid of whitespace around it: while (s2.length == 0) { s2 = chomp(readln()); } You can achieve the same with: readf(" %s\n", s2); Thank you. However, that method does not remove trailing whitespace. My goal, however, is not to read one line of information. Rather, it is to read multiple lines of information from standard input. I get close to being able to do so if I don't include \n as part of my format string, or if I change your suggestion to while (!stdin.eol()) { s2 = chomp(readln()); } but again I run into the same predicament as before: a need to close the stream with Ctrl-D/Ctrl-Z. If I understand you correctly, the following program works for me: import std.stdio; import std.string; void main(string[] args) { string s1; double d; string s2; Actually the problem is right here: writeln("Enter a @ terminated string (multiline ok):"); readf(" %s@", s1); auto arr = s1.split(); I don't want to provide an explicit terminator, but instead rely on Ctrl-D/Ctrl-Z to do the job while being able to continue processing read requests. As explained by Andrei, this is not possible. But in my mind, if the stdin stream can be opened once, it can be opened again. What is the negative effect of testing whether it is closed and reopening it on entering readf? Especially since there is a unique implementation of readf to deal with input from stdin. What is wrong with implementing reopen() in File for specific use with stdin and then implementing readf like this:

uint readf(A...)(in char[] format, A args)
{
    if (stdin.eof) stdin.reopen();
    return stdin.readf(format, args);
}

Andrew
Re: reading formatted strings: readf(%s, stringvar)
On 03/26/2012 02:12 PM, Tyro[17] wrote: I don't want to provide an explicit terminator, but instead rely on Ctrl-D/Ctrl-Z to do the job while being able to continue processing read requests. As explained by Andrei, this is not possible. But in my mind, if the stdin stream can be opened once, it can be opened again. What is the negative effect of testing whether it is closed and reopening it on entering readf? Especially since there is a unique implementation of readf to deal with input from stdin. What is wrong with implementing reopen() in File for specific use with stdin and then implementing readf like this: uint readf(A...)(in char[] format, A args) { if (stdin.eof) stdin.reopen(); return stdin.readf(format, args); } Andrew That doesn't fit the way standard input and output streams work. These streams are bound to the application from the environment that has started it. The program itself does not have a way of manipulating how these streams are ended or connected. Imagine that your program's stdin is piped from the output of another process: other | yours Once 'other' finishes its output, that's the end of the input of 'yours'. 'yours' cannot communicate to the environment that it would like to continue reading more. What you are asking for could be achieved only if both the environment and the program agreed that this would be the case. Maybe I am missing something, but that has been the standard behaviour in many environments. Ali
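For the pipe scenario Ali describes, where "end of stdin" really does mean "no more data", a minimal sketch that reads every line until the stream ends (readln returns null once stdin is exhausted, so no sentinel character is needed):

```d
import std.stdio;
import std.string;

void main()
{
    string[] lines;
    // readln() yields null once stdin is exhausted (Ctrl-D / Ctrl-Z
    // at a terminal, or the end of an 'other | yours' pipe), which
    // terminates the loop cleanly.
    for (string line; (line = readln()) !is null; )
        lines ~= chomp(line);

    writefln("read %s line(s)", lines.length);
}
```

This covers the "read multiple lines" goal; what it cannot do, as the thread concludes, is resume reading from the same stdin after that end has been signaled.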
Re: Issue 3789, stucts equality
On 26/03/2012 13:58, bearophile wrote: Issue 3789 is an enhancement request; I think it fixes one small but quite important problem in D's design. The situation is shown by this simple code: struct String { char[] data; } void main() { auto foo = String("foo".dup); auto bar = String("foo".dup); assert(bar !is foo, "structs aren't the same bit-wise"); assert(bar == foo, "oops, structs aren't equal"); } The D Zen says D is designed to be safe by default and to perform unsafe (and faster) things on request. Not comparing the strings as strings in the code above breaks the Principle of Least Astonishment, so it breaks that rule. Maybe it makes more sense for struct == struct to apply == to each of its fields. It would be the same as bitwise comparison for simple primitive types, but would be more useful with other types such as strings.
Re: Regex performance
On 27.03.2012 0:27, James Blewitt wrote: Hello everybody, Thanks once again for the interest in my problem. I have posted the details and source code that recreates (at least for me) the poor performance. I didn't know how to post the code to the forum, so I posted it to my blog instead (see post update): http://jblewitt.com/blog/?p=462 Again, if I'm doing something stupid in my code (which is possible) then I apologise in advance. No need to apologize, but you are using 2.054, which is unfashionable :) More importantly, 2.054 contains an old and rusty version of std.regex; the new version was included in 2.057+. BTW, the current release is 2.058. I'll take a look at the ctRegex as soon as I can. Yup, just update compiler+phobos. Regards, James -- Dmitry Olshansky
Re: Regex performance
On 03/26/2012 02:41 PM, Dmitry Olshansky wrote: On 27.03.2012 0:27, James Blewitt wrote: Hello everybody, Thanks once again for the interest in my problem. I have posted the details and source code that recreates (at least for me) the poor performance. I didn't know how to post the code to the forum, so I posted it to my blog instead (see post update): http://jblewitt.com/blog/?p=462 Again, if I'm doing something stupid in my code (which is possible) then I apologise in advance. No need to apologize, but you are using 2.054, which is unfashionable :) More importantly 2.054 contains old and rusty version of std.regex, the new version was included in 2.057+. BTW The current release is 2.058. I'll take a look at the ctRegex as soon as I can. Yup, just update compiler+phobos. Regards, James My unofficial results comparing 2.056 to 2.058 on 64 bits: shakespeare.txt, 2.056 - 1868 msecs shakespeare.txt, 2.058 - 632 msecs data.csv, 2.056 - 51953 msecs data.csv, 2.058 - 1329 msecs That last line is pretty impressive. :) Ali
Re: Regex performance
On 27 March 2012 11:05, Ali Çehreli acehr...@yahoo.com wrote: My unofficial results comparing 2.056 to 2.058 on 64 bits: shakespeare.txt, 2.056 - 1868 msecs shakespeare.txt, 2.058 - 632 msecs data.csv, 2.056 - 51953 msecs data.csv, 2.058 - 1329 msecs That last line is pretty impressive. :) Dmitry did impressive work over those few versions of Phobos/DMD. The performance is even more impressive when you consider that std.regex supports things like named matching and lookbehind, which often slow down a regex (and also kind of remove the "regular" from the name "regular expression", technically). -- James Miller
Re: Array ops give sharing violation under Windows 7 64 bit?
On 3/26/12, Walter Bright newshou...@digitalmars.com wrote: On 3/25/2012 2:50 PM, Kagamin wrote: Microsoft has antivirus bundled with windows. Go to security center and see whether Windows Defender is working. Well, I'll be hornswoggled. That did the trick! I really don't think that's the bottom issue. I've had defender off, and I can still reproduce the issue but it seems to happen in random phases. Several hundred runs it's ok, and then it's not ok.
Re: Regex performance
On Monday, 26 March 2012 at 22:05:34 UTC, Ali Çehreli wrote: My unofficial results comparing 2.056 to 2.058 on 64 bits: shakespeare.txt, 2.056 - 1868 msecs shakespeare.txt, 2.058 - 632 msecs data.csv, 2.056 - 51953 msecs data.csv, 2.058 - 1329 msecs That last line is pretty impressive. :) Ali

Unofficial 2.056/2.058/Ruby 1.9.3 Windows 32-bit data.csv results:
data.csv, 2.056 - 76351 msecs
data.csv, 2.058 - 2573 msecs
data.csv, 1.9.3 - 9170 msecs

Also, I had to modify line 48 of the Ruby file, not knowing what I'm doing: if text.force_encoding("UTF-8") =~ /#{rule}/u Couldn't build it with ctRegex (some error, then it ran out of memory).
Re: reading formatted strings: readf(%s, stringvar)
On 3/26/12 2:52 PM, Tyro[17] wrote: Couldn't the state of stdin be checked upon entrance into readf and reopened if it is already closed? That won't work. Wouldn't that accomplish the desired effect while avoiding the pitfalls of scanf? I don't think this is a pitfall. Essentially you don't have a definition of what constitutes a chunk of input. Once you get that define, you should be able to express it more or less easily. Andrei
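Andrei's point is that readf can only behave predictably once "a chunk of input" has a concrete definition. As a hypothetical, language-agnostic sketch (in Python, not D), one such definition is scanf-style whitespace-delimited tokens:

```python
def chunks(text):
    # One possible definition of an input "chunk": a maximal run of
    # non-whitespace characters -- the unit that scanf's %s consumes.
    # Other definitions (lines, fixed widths, delimited fields) are
    # equally valid; the point is that one must be chosen.
    return text.split()

assert chunks("  10  hello\n3.14 ") == ["10", "hello", "3.14"]
```

Once a chunk is defined this way, "read the next value" reduces to "take the next chunk and convert it", which is what a readf-style API can then express.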
Re: some regex vs std.ascii vs handcode times
On Monday, 26 March 2012 at 07:10:00 UTC, Jay Norwood wrote: On Thursday, 22 March 2012 at 04:29:41 UTC, Jay Norwood wrote: On the use of larger files ... yes that will be interesting, but for these current measurements the file reads are only taking on the order of 30ms for 20MB, which tells me they are already either being cached by win7, or else by the ssd's cache. I'll use the article instructions below and put the files being read into the cache prior to the test, so that the file read time should be small and consistent relative to the other buffer processing time inside the loops. http://us.generation-nt.com/activate-windows-file-caching-tip-tips-tricks-2130881-0.html Thanks I tried using a ramdisk from imdisk, because the above article was just for caching network drives to your local disk. The first set of times are from the ssd, the second from the ram disk, and both are about the same. So I guess win7 is caching these file reads already. I got imdisk for the ramdisk here http://www.ltr-data.se/opencode.html/#ImDisk These are the times for the imdisk reads (still executing from G hard drive , but reading from F ram disk) G:\d\a7\a7\Releasewctest f:\al*.txt finished wcp_nothing! time: 1 ms finished wcp_whole_file! time: 31 ms finished wcp_byLine! time: 525 ms finished wcp_byChunk! time: 22 ms finished wcp_lcByChunk! time: 33 ms finished wcp_lcDcharByChunk! time: 30 ms finished wcp_lcRegex! time: 141 ms finished wcp_lcCtRegex! time: 104 ms finished wcp_lcStdAlgoCount! time: 139 ms finished wcp_lcChar! time: 37 ms finished wcp_wcPointer! time: 121 ms finished wcp_wcCtRegex! time: 1269 ms finished wcp_wcRegex! time: 2908 ms finished wcp_wcRegex2! time: 2693 ms finished wcp_wcSlices! time: 179 ms finished wcp_wcStdAscii! time: 222 ms This is reading from the ssd Intel 510 series 120GB G:\d\a7\a7\Releasewctest h:\al*.txt finished wcp_nothing! time: 1 ms finished wcp_whole_file! time: 32 ms finished wcp_byLine! time: 518 ms finished wcp_byChunk! 
time: 23 ms finished wcp_lcByChunk! time: 33 ms finished wcp_lcDcharByChunk! time: 31 ms finished wcp_lcRegex! time: 159 ms finished wcp_lcCtRegex! time: 89 ms finished wcp_lcStdAlgoCount! time: 144 ms finished wcp_lcChar! time: 34 ms finished wcp_wcPointer! time: 118 ms finished wcp_wcCtRegex! time: 1273 ms finished wcp_wcRegex! time: 2889 ms finished wcp_wcRegex2! time: 2688 ms finished wcp_wcSlices! time: 175 ms finished wcp_wcStdAscii! time: 220 ms I added the source and the test text files on github https://github.com/jnorwood/wc_test I downloaded and tried your benchmark. I first tried it with the ten 10Mb files that you put in github, then truncated them to 2Mb to get results comparable to the test you said did. * Used dmd 2.058. * I tested both Windows7 64bit and then booted into Linux Kubuntu 64bits to test there too. * I tested in the following desktop computer, previously disabling cpu throttling (disabled coolquiet in the bios setup). vendor_id : AuthenticAMD cpu family : 15 model : 107 model name : AMD Athlon(tm) 64 X2 Dual Core Processor 4000+ stepping: 1 cpu MHz : 2109.443 cache size : 512 KB * The computer has 4Gb of RAM. * Runned wcTest many times (more than 10) before saving the results. Results in Windows7 x64 with ten 10Mb files: --- finished wcp_nothing! time: 1 ms finished wcp_whole_file! time: 130 ms finished wcp_byLine! time: 1574 ms finished wcp_byChunk! time: 133 ms finished wcp_lcByChunk! time: 207 ms finished wcp_lcDcharByChunk! time: 181 ms finished wcp_lcRegex! time: 579 ms finished wcp_lcCtRegex! time: 365 ms finished wcp_lcStdAlgoCount! time: 511 ms finished wcp_lcChar! time: 188 ms finished wcp_wcPointer! time: 438 ms finished wcp_wcCtRegex! time: 5448 ms finished wcp_wcRegex! time: 17277 ms finished wcp_wcRegex2! time: 15524 ms finished wcp_wcSlices! time: 632 ms finished wcp_wcStdAscii! time: 814 ms Results in Windows7 x64 with ten 2Mb files: --- finished wcp_nothing! time: 1 ms finished wcp_whole_file! 
time: 27 ms finished wcp_byLine! time: 329 ms finished wcp_byChunk! time: 34 ms finished wcp_lcByChunk! time: 79 ms finished wcp_lcDcharByChunk! time: 79 ms finished wcp_lcRegex! time: 298 ms finished wcp_lcCtRegex! time: 150 ms finished wcp_lcStdAlgoCount! time: 216 ms finished wcp_lcChar! time: 77 ms finished wcp_wcPointer! time: 127 ms finished wcp_wcCtRegex! time: 3250 ms finished wcp_wcRegex! time: 6164 ms finished wcp_wcRegex2! time: 5724 ms finished wcp_wcSlices! time: 171 ms finished wcp_wcStdAscii! time: 194 ms Results in Kubuntu 64bits with ten 2Mb files: -- finished wcp_nothing! time: 0 ms finished wcp_whole_file! time: 28 ms finished wcp_byLine! time: 212 ms finished wcp_byChunk! time: 20 ms finished wcp_lcByChunk! time: 90 ms finished wcp_lcDcharByChunk! time: 77 ms finished wcp_lcRegex! time: 190
Re: some regex vs std.ascii vs handcode times
On Tuesday, 27 March 2012 at 00:23:46 UTC, Juan Manuel Cabo wrote: [] I forgot to mention that for Linux Kubuntu x64 I had to change the type of a variable to auto in this line of wcTest.d: auto c_cnt = input.length; And that in Windows7 x64, the default for dmd is -m32, so I compiled and tried your benchmark that way in Windows. --jm
Re: some regex vs std.ascii vs handcode times
On Tuesday, 27 March 2012 at 00:23:46 UTC, Juan Manuel Cabo wrote: I downloaded and tried your benchmark. I first tried it with the ten 10Mb files that you put on github, then truncated them to 2Mb to get results comparable to the test you said you did. --jm All my times were with the ten 10MB files that I uploaded. I tried on ssd, on hd and on an imdisk ramdrive, and got about the same times for all. I suppose win7 is caching these. I'd expect several of the measured times to be dominated by disk read time if they weren't somehow being cached. Thanks for testing on the other systems.
Re: std.json
On 03/25/2012 08:26 AM, AaronP wrote: Could I get a "hello, world" example of parsing json? The docs look simple enough, but I could still use an example. For what it's worth, I've just sent the following program to a friend before seeing this thread. 1) Save this sample text to a file named "json_file": { "employees": [ { "firstName":"John", "lastName":"Doe" }, { "firstName":"Anna", "lastName":"Smith" }, { "firstName":"Peter", "lastName":"Jones" } ] } 2) The following program makes struct Employee objects from that file: import std.stdio; import std.json; import std.conv; import std.file; struct Employee { string firstName; string lastName; } void main() { // Assumes UTF-8 file auto content = to!string(read("json_file")); JSONValue[string] document = parseJSON(content).object; JSONValue[] employees = document["employees"].array; foreach (employeeJson; employees) { JSONValue[string] employee = employeeJson.object; string firstName = employee["firstName"].str; string lastName = employee["lastName"].str; auto e = Employee(firstName, lastName); writeln("Constructed: ", e); } } The output of the program: Constructed: Employee(John, Doe) Constructed: Employee(Anna, Smith) Constructed: Employee(Peter, Jones) Ali
Re: GC collecting too much..
On Sun, 25 Mar 2012 22:18:02 +0200, bearophile bearophileh...@lycos.com wrote: On Sunday, 25 March 2012 at 19:15:05 UTC, simendsjo wrote: I'm doing some coding against a C library, and D's GC keeps collecting C-owned objects (I think - disabling the GC makes everything work) Three alternative solutions: - Allocate from the C heap the memory that C will need to use, and free it manually or in a struct destructor (RAII) or with scope(exit). - Keep a pointer to the D-GC memory in the D code too. - In core.memory there are ways to disable scanning of a memory zone. Maybe it's usable for your purposes too. Bye, bearophile I've been able to find where the code fails, but now I don't understand what's happening at all. Is calling GC.collect() from an extern(C) function undefined? The following code just starts a mongoose web server and tries to run GC.collect in the handler. extern(C) void* cb(mg_event event, mg_connection* conn, mg_request_info* request_info) { GC.collect(); // segfault return null; } void main() { auto opts = ["listening_ports", "6969"].map!(toUTFz!(char*))().array(); mg_start(&cb, null, cast(const(char**))opts); GC.collect(); // no problem readln(); } If I collect memory from main(), it works as expected (removing the collect from cb, of course) while(readln().chomp() != "q") { GC.collect(); } The documentation in core.memory is a bit sparse on how the GC works. Are there any articles on the D GC?
Re: GC collecting too much..
On Mon, 26 Mar 2012 10:13:35 +0200, simendsjo simend...@gmail.com wrote: On Sun, 25 Mar 2012 22:18:02 +0200, bearophile bearophileh...@lycos.com wrote: On Sunday, 25 March 2012 at 19:15:05 UTC, simendsjo wrote: I'm doing some coding against a C library, and D's GC keeps collecting C-owned objects (I think - disabling the GC makes everything work) Three alternative solutions: - Allocate from the C heap the memory that C will need to use, and free it manually or in a struct destructor (RAII) or with scope(exit). - Keep a pointer to the D-GC memory in the D code too. - In core.memory there are ways to disable scanning of a memory zone. Maybe it's usable for your purposes too. Bye, bearophile I've been able to find where the code fails, but now I don't understand what's happening at all. Is calling GC.collect() from an extern(C) function undefined? The following code just starts a mongoose web server and tries to run GC.collect in the handler. extern(C) void* cb(mg_event event, mg_connection* conn, mg_request_info* request_info) { GC.collect(); // segfault return null; } void main() { auto opts = ["listening_ports", "6969"].map!(toUTFz!(char*))().array(); mg_start(&cb, null, cast(const(char**))opts); GC.collect(); // no problem readln(); } If I collect memory from main(), it works as expected (removing the collect from cb, of course) while(readln().chomp() != "q") { GC.collect(); } The documentation in core.memory is a bit sparse on how the GC works. Are there any articles on the D GC? It seems threads created in the C library are totally unknown to D. How can I make D aware of these threads when there is no library support for it?
Re: Map with maintained insertion order
I've done that in C++ before using Boost.MultiIndex, and I saw a post about a D port of it recently (I don't have the link to hand, though).
Re: GC collecting too much..
On 03/26/2012 11:55 AM, simendsjo wrote: It seems threads created in the c library is totally unknown to D. How can I make D aware of these threads when there is no library support for it? You may be looking for this: http://dlang.org/phobos/core_thread.html#thread_attachThis
Re: GC collecting too much..
On Mon, 26 Mar 2012 17:10:34 +0200, Timon Gehr timon.g...@gmx.ch wrote: On 03/26/2012 11:55 AM, simendsjo wrote: It seems threads created in the C library are totally unknown to D. How can I make D aware of these threads when there is no library support for it? You may be looking for this: http://dlang.org/phobos/core_thread.html#thread_attachThis Thanks, but I tried that too and couldn't get it to work. I added the following: extern(C) void handler() { synchronized // needed here to keep the GC from collecting while attaching the thread? { if(!Thread.getThis()) // thread unknown to D { thread_attachThis(); assert(Thread.getThis()); // now D knows about it } } GC.collect(); // still segfaults } Actually, using attachThis segfaults GC.collect() outside the thread handling code too.
Re: Making sense of ranges
On Sat, 24 Mar 2012 19:07:01 -0400, Stewart Gordon smjg_1...@yahoo.com wrote: On 24/03/2012 18:57, Ali Çehreli wrote: snip Iterating an output range is also by popFront(). So what it says is, put this element to the output range and advance the range. There is a gotcha about this when the output range is a slice: Whatever is just put into the range is popped right away! :) [2] I'm beginning to get it now: the purpose of an output range is to put new data into the underlying container. So once you've put something in, the remaining range is what's left to be populated. I had been thinking of outputting in terms of appending to the range, hence the confusion. The output range is almost an entirely orthogonal concept to an input range. It basically defines a way to output elements. How an output range directs its elements is up to the output range. It may append, it may overwrite, it may prepend, it can do anything it wants. The only commonality is that a *writable* input range can also be an output range (writable meaning r.front = x works). -Steve
Re: Rewrite of std.range docs (Was: Re: Making sense of ranges)
On Monday, 26 March 2012 at 00:50:32 UTC, H. S. Teoh wrote: This thread has further convinced me that std.range's docs *need* this rewrite. So here's my first attempt at it: https://github.com/quickfur/phobos/tree/stdrange_docs I find that opening to be much better. I look forward to the improvement.
[Issue 7774] I've found a bug in D2
http://d.puremagic.com/issues/show_bug.cgi?id=7774 --- Comment #4 from Andrey Derzhavin vangelisfore...@yandex.ru 2012-03-25 23:46:44 PDT --- (In reply to comment #3) You have found a bug in DMD 2.058, not in D2. Smaller test case: struct A{ double x; void f(double d){x%=cast(double)d;} } void main(){ auto a=A(10); a.f(8); assert(a.x!=10.0); } Works with DMD 2.059head, fails with DMD 2.058. How can I get a DMD 2.059 linux version or DMD 2.0? -- Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email --- You are receiving this mail because: ---
[Issue 7783] compiler generated struct equality doesn't compare array fields
http://d.puremagic.com/issues/show_bug.cgi?id=7783 --- Comment #1 from Kenji Hara k.hara...@gmail.com 2012-03-25 23:54:49 PDT --- I think this is just the same as bug 3789.
[Issue 7774] I've found a bug in D2
http://d.puremagic.com/issues/show_bug.cgi?id=7774 --- Comment #5 from timon.g...@gmx.ch 2012-03-26 00:04:47 PDT --- (In reply to comment #4) How can I get a DMD 2.059 linux version or DMD 2.0? It hasn't been released yet, but you can build it from source: https://github.com/D-Programming-Language/dmd
[Issue 3731] Derived class implicitly convertible to base class with arbitrary change of constancy
http://d.puremagic.com/issues/show_bug.cgi?id=3731 --- Comment #10 from Steven Schveighoffer schvei...@yahoo.com 2012-03-26 03:11:50 PDT --- (In reply to comment #9) (In reply to comment #1) The solution would be to make it illegal to have a mutable class reference to the base class. No, because a Zmienna is perfectly allowed to be mutable. It's Stala that isn't. I think you misunderstand. I don't mean because Stala exists, Zmienna should be prevented from being mutable, even for cases where Stala is not involved. I mean, it should be illegal to have a mutable Stala reference point at a Zmienna.
[Issue 3731] Derived class implicitly convertible to base class with arbitrary change of constancy
http://d.puremagic.com/issues/show_bug.cgi?id=3731 --- Comment #11 from Stewart Gordon s...@iname.com 2012-03-26 03:57:16 PDT --- (In reply to comment #10) I think you misunderstand. I don't mean because Stala exists, Zmienna should be prevented from being mutable, even for cases where Stala is not involved. I mean, it should be illegal to have a mutable Stala reference point at a Zmienna. I haven't misunderstood. There's no such thing as a mutable Stala reference. Because Stala is an immutable class, any reference to a Stala is automatically immutable. See for yourself: - void main() { Stala st = new Stala(5); pragma(msg, typeof(st)); // immutable(Stala) st = new Stala(4); // errors, because st is immutable Zmienna zm = st; // accepts-invalid } - The bug is that DMD allows the implicit conversion of immutable(Stala) to Zmienna. It's just mb_id in my last example.
[Issue 3789] Structs members that require non-bitwise comparison not correctly compared
http://d.puremagic.com/issues/show_bug.cgi?id=3789 --- Comment #15 from bearophile_h...@eml.cc 2012-03-26 04:30:41 PDT --- This is an answer to Walter in Bug 7783: (In reply to comment #2) In the absence of a user-defined opEquals for the struct, equality is defined as a bitwise compare. This is working as expected. Not a bug. I agree, it's not a DMD implementation bug because here it's working as designed. But I voted for bug 3789 some time ago because I consider this to be in the top 15 of the design decisions to fix/change: it is a large source of bugs, and it's unnatural. D is designed to be safe by default. Not comparing the strings as strings in the following code breaks the Principle of Least Astonishment: struct Foo { string name; bool b; } void main() { auto a = Foo("foobar".idup, true); auto b = Foo("foobar".idup, true); assert(a == b); } For me the only acceptable alternative to fixing Bug 3789 is statically disallowing the equality operator (==) in such cases. The D language has "is" for the situations where you want to perform bitwise comparison of structs too. This is OK. For the other situations where I use == among structs, I want it to do the right thing, like comparing the contained strings correctly instead of arbitrarily choosing bitwise comparison of the sub-struct that represents the string. Making == work as "is" does for structs means using one operator for the purpose of the other, and it has caused some bugs in my code. And it will cause bugs in future D code.
[Issue 3789] Structs members that require non-bitwise comparison not correctly compared
http://d.puremagic.com/issues/show_bug.cgi?id=3789 --- Comment #16 from Steven Schveighoffer schvei...@yahoo.com 2012-03-26 04:46:26 PDT --- (In reply to comment #15) This is an answer to Walter in Bug 7783: (In reply to comment #2) In the absence of a user-defined opEquals for the struct, equality is defined as a bitwise compare. This is working as expected. Not a bug. I agree, it's not a DMD implementation bug because here it's working as designed. The statement above is not completely true: struct S { string x; bool opEquals(const ref S other) const { return x == x; } } struct T { S s; // no user-defined opEquals } void main() { T t1, t2; t1.s.x = "foobar".idup; t2.s.x = "foobar".idup; assert(t1 == t2); } This passes in 2.058. Essentially, bitwise comparison is done unless a member requires a special-case opEquals, in which case a generated opEquals function is made for that struct. I contend that for the compiler to defer to user-defined structs, but not to builtin types, on how to compare themselves is not only inconsistent, but leads to endless boilerplate code. It does not help anyone. We have 'is', which does a bitwise compare. There is no reason to make == duplicate that functionality. For the rare cases where you actually *need* bitwise comparison on arrays or floats, you can define a special opEquals.
[Issue 3789] Structs members that require non-bitwise comparison not correctly compared
http://d.puremagic.com/issues/show_bug.cgi?id=3789 --- Comment #17 from Simen Kjaeraas simen.kja...@gmail.com 2012-03-26 08:07:29 PDT --- To further underline this point: struct S { string x; bool opEquals(const ref S other) const { return x == x; } } struct T { S s; string r; } void main() { T t1, t2; t1.s.x = "foobar".idup; t1.r = "foobaz".idup; t2.s.x = "foobar".idup; t2.r = "foobaz".idup; assert(t1 == t2); } Yes, the string values in r are compared not bitwise, but for content.
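The bitwise-versus-member-wise distinction being debated in this issue can be shown outside D as well. This tiny Python analogue (hypothetical names, not code from the thread) mirrors the Foo example: the two "struct" members hold equal string contents but distinct references, so an identity-based (bitwise-style) comparison says unequal while a content comparison says equal.

```python
class Foo:
    def __init__(self, name, b):
        self.name = name  # analogous to a D string member
        self.b = b

# Build two distinct-but-equal strings; str.join always constructs a new
# object here, so the two names are separate allocations like .idup.
a = Foo("".join(["foo", "bar"]), True)
b = Foo("".join(["foo", "bar"]), True)

# Identity of the member references differs (the "pointer bits" differ),
# which is what a bitwise compare -- D's 'is', or 2.058's default for
# nested structs -- actually sees.
assert a.name is not b.name

# Member-wise content comparison, what the thread argues '==' should do.
assert a.name == b.name and a.b == b.b
```

The same pair of objects is "unequal" or "equal" depending purely on which definition of equality the language picks as the default, which is exactly the design question in bug 3789.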
[Issue 7783] compiler generated struct equality doesn't compare array fields
http://d.puremagic.com/issues/show_bug.cgi?id=7783 --- Comment #4 from d...@dawgfoto.de 2012-03-26 13:38:49 PDT --- I didn't find the specs. http://dlang.org/expression.html#EqualExpression
[Issue 3789] Structs members that require non-bitwise comparison not correctly compared
http://d.puremagic.com/issues/show_bug.cgi?id=3789 --- Comment #18 from bearophile_h...@eml.cc 2012-03-26 15:42:03 PDT --- See also the thread: http://forum.dlang.org/thread/jkpllu$1hss$1...@digitalmars.com
[Issue 5550] std.range.enumerate()
http://d.puremagic.com/issues/show_bug.cgi?id=5550 --- Comment #1 from bearophile_h...@eml.cc 2012-03-26 15:40:12 PDT --- This is a basic implementation of enumerate() (it's only an InputRange, but probably a richer range too should be supported). The demo code in main() shows the task of processing the Sieve of Eratosthenes flags without and with an enumerate(). The version with enumerate() is shorter and simpler to understand than the version with zip. import std.stdio, std.algorithm, std.range, std.typecons, std.traits, std.array; struct Enumerate(R) { R r; int i; alias r this; @property Tuple!(typeof(this.i), typeof(R.init.front)) front() { return typeof(return)(i, this.r.front); } void popFront() { this.r.popFront(); this.i++; } } Enumerate!R enumerate(R)(R range, int start=0) if (isInputRange!R) { return Enumerate!R(range, start); } void main() { // not prime flags, from a Sieve of Eratosthenes. // 0 = prime number, 1 = not prime number. starts from index 2. auto flags = [0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1]; // not using enumerate iota(2, int.max).zip(flags).filter!q{!a[1]}().map!q{a[0]}().writeln(); iota(int.max).zip(flags).filter!q{!a[1]}().map!q{a[0] + 2}().writeln(); // using enumerate //flags.enumerate(2).filter!q{!a[1]}().map!q{a[0]}().writeln(); // error filter!q{!a[1]}(flags.enumerate(2)).map!q{a[0]}().writeln(); } Note: this produces a compilation error, I don't know why yet, maybe it's a bad interaction with alias this: flags.enumerate(2).filter!q{!a[1]}().map!q{a[0]}().writeln();
[Issue 5550] std.range.enumerate()
http://d.puremagic.com/issues/show_bug.cgi?id=5550 --- Comment #2 from bearophile_h...@eml.cc 2012-03-26 17:20:14 PDT --- The problem was caused by the alias this. This avoids the problem: import std.stdio, std.algorithm, std.range, std.typecons, std.traits, std.array; struct Enumerate(R) { R r; int i; @property bool empty() { return this.r.empty; } @property Tuple!(typeof(this.i), typeof(R.init.front)) front() { return typeof(return)(i, this.r.front); } void popFront() { this.r.popFront(); this.i++; } } Enumerate!R enumerate(R)(R range, int start=0) if (isInputRange!R) { return Enumerate!R(range, start); } void main() { auto flags = [0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1]; flags.enumerate(2).filter!q{!a[1]}().map!q{a[0]}().writeln(); } The last line is noisy but it's readable. This is less noisy but longer: flags.enumerate(2).filter!(t => !t[1])().map!(t => t[0])().writeln();
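The enumerate() proposed in this issue mirrors Python's built-in of the same name, which may help readers see the intended semantics. The same sieve-flags example looks like this in Python (illustrative only, not the proposed D API):

```python
flags = [0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1]

# enumerate(iterable, start) pairs each element with a running index,
# exactly what the proposed Enumerate range's front() tuple provides.
# Keeping the indices whose flag is 0 yields the primes from the sieve.
primes = [i for i, flag in enumerate(flags, 2) if not flag]
assert primes == [2, 3, 5, 7, 11, 13, 17, 19]
```

The index/value pairing is the whole feature; the D version's extra ceremony comes from expressing the same thing as a lazy range with tuple elements.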
[Issue 7784] New: stack overflow in Expression::apply with cyclic references
http://d.puremagic.com/issues/show_bug.cgi?id=7784 Summary: stack overflow in Expression::apply with cyclic references Product: D Version: D2 Platform: All OS/Version: All Status: NEW Keywords: CTFE, ice Severity: normal Priority: P2 Component: DMD AssignedTo: nob...@puremagic.com ReportedBy: d...@dawgfoto.de --- Comment #0 from d...@dawgfoto.de 2012-03-26 17:53:04 PDT --- cat bug.d CODE struct Foo { void bug() { // cyclic reference tab["A"] = Bar(&this); auto pbar = "A" in tab; // triggers stack overflow in Expression::apply for hasSideEffect auto bar = *pbar; } Bar[string] tab; } struct Bar { Foo* foo; int val; } int ctfe() { auto foo = Foo(); foo.bug(); return 0; } enum run = ctfe(); CODE We should probably flag all literal expressions during visiting and either skip them or return a different apply result.
[Issue 7768] More readable template error messages
http://d.puremagic.com/issues/show_bug.cgi?id=7768 --- Comment #3 from github-bugzi...@puremagic.com 2012-03-26 18:42:04 PDT --- Commit pushed to master at https://github.com/D-Programming-Language/dmd https://github.com/D-Programming-Language/dmd/commit/28c0da04f3640f3313b1229161232fee53ca1cd2 Merge pull request #839 from 9rnsr/fix7768 Issue 7768 - More readable template error messages
[Issue 7785] New: [CTFE] ICE when slicing pointer to variable
http://d.puremagic.com/issues/show_bug.cgi?id=7785 Summary: [CTFE] ICE when slicing pointer to variable Product: D Version: D2 Platform: All OS/Version: All Status: NEW Severity: normal Priority: P2 Component: DMD AssignedTo: nob...@puremagic.com ReportedBy: d...@dawgfoto.de --- Comment #0 from d...@dawgfoto.de 2012-03-26 20:02:09 PDT --- cat bug.d CODE bool foo() { int val; auto p = &val; auto ary = p[0 .. 1]; return true; } enum res = foo(); CODE dmd -c bug
[Issue 7768] More readable template error messages
http://d.puremagic.com/issues/show_bug.cgi?id=7768 Walter Bright bugzi...@digitalmars.com changed: Status: NEW -> RESOLVED; CC: added bugzi...@digitalmars.com; Resolution: FIXED
[Issue 7757] Inout function with lazy inout parameter doesn't compile
http://d.puremagic.com/issues/show_bug.cgi?id=7757 --- Comment #2 from github-bugzi...@puremagic.com 2012-03-26 21:00:55 PDT --- Commit pushed to master at https://github.com/D-Programming-Language/dmd https://github.com/D-Programming-Language/dmd/commit/e74a26ad05629d72d353552ef144ac69c3e9994a Merge pull request #836 from 9rnsr/fix7757 Issue 7757 - Inout function with lazy inout parameter doesn't compile
[Issue 7757] Inout function with lazy inout parameter doesn't compile
http://d.puremagic.com/issues/show_bug.cgi?id=7757 Walter Bright bugzi...@digitalmars.com changed: Status: NEW -> RESOLVED; CC: added bugzi...@digitalmars.com; Resolution: FIXED
[Issue 2367] Overloading error with string literals
http://d.puremagic.com/issues/show_bug.cgi?id=2367 --- Comment #13 from github-bugzi...@puremagic.com 2012-03-26 22:14:43 PDT --- Commit pushed to master at https://github.com/D-Programming-Language/dmd https://github.com/D-Programming-Language/dmd/commit/58f284e625cb6591f476c1fbcad38a6dc66ca039 Merge pull request #834 from 9rnsr/fix2367 Issue 2367 - Overloading error with string literals
[Issue 2367] Overloading error with string literals
http://d.puremagic.com/issues/show_bug.cgi?id=2367 Walter Bright bugzi...@digitalmars.com changed: Status: NEW -> RESOLVED; Resolution: FIXED