[OT] - A hacker stole $31M of Ether — how it happened, and what it means for Ethereum
See - https://medium.freecodecamp.org/a-hacker-stole-31m-of-ether-how-it-happened-and-what-it-means-for-ethereum-9e5dc29e33ce A long read. Someone has stolen $31M of Ether. Interesting quote near the end of the article: In blockchain, code is intrinsically unrevertible. Once you deploy a bad smart contract, anyone is free to attack it as long and hard as they can, and there’s no way to take it back if they get to it first. Unless you build intelligent security mechanisms into your contracts, if there’s a bug or successful attack, there’s no way to shut off your servers and fix the mistake. Being on Ethereum by definition means everyone owns your server. A common saying in cybersecurity is “attack is always easier than defense.” Blockchain sharply multiplies this imbalance. It’s far easier to attack because you have access to the code of every contract, know how much money is in it, and can take as long as you want to try to attack it. And once your attack is successful, you can potentially steal all of the money in the contract. Imagine that you were deploying software for vending machines. But instead of a bug allowing you to simply steal candy from one machine, the bug allowed you to simultaneously steal candy from every machine in the world that employed this software. Yeah, that’s how blockchain works. But can digital wallets/cryptocurrencies ever be secure? Nick
Re: [your code here] HexViewer
On Wednesday, 2 August 2017 at 19:39:18 UTC, Andre Pany wrote: This application opens the file passed as argument and displays the content in hex and text format: Is this code on GitHub or DUB? Is there a link? Nick
Re: Experience with https://www.patreon.com
On Thursday, 6 July 2017 at 13:53:08 UTC, Andrei Alexandrescu wrote: Does anyone have experience with https://www.patreon.com either as a patron or creator? Thanks! -- Andrei I support someone who is now a pure intellectual and can no longer survive in the university system with his radical ideas on economics. He provides various content at different subscription tiers. What do you have in mind? Nick
Re: DConf 2017 Berlin - Streaming ?
On Friday, 5 May 2017 at 01:41:25 UTC, bachmeier wrote: Nick https://www.youtube.com/watch?v=IqiXMN03968 thanks :)
Re: DConf 2017 Berlin - Streaming ?
On Thursday, 4 May 2017 at 07:28:42 UTC, Mike Parker wrote: https://www.youtube.com/watch?v=MqrJZg6PgnM&utm_content=buffercc4c1&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer Wow - Walter's talk has 1,000 views already! How do we see the next talk? Nick
Re: NG technical issues: Is it just me?
On Thursday, 20 April 2017 at 21:16:38 UTC, David Gileadi wrote: On 4/20/17 2:05 PM, lawrence wrote: On 04/20/2017 02:09 PM, Timon Gehr wrote: On 20.04.2017 21:45, Nick Sabalausky (Abscissa) wrote: [snip] It's not just you. I have the same issues. I used to have this same problem, until I set the server settings to check for updates every 2 minutes. I don't have this problem, but my server settings check every 2 minutes as well. Browser access is a fast alternative.
DConf 2017 Berlin - Streaming ?
Hi Can anyone advise if there will be live streaming, or will there only be YouTube videos after the event? Not that I'm complaining. thanks Nick
Re: Walter and Andrei and community relationship management
On Thursday, 6 April 2017 at 19:27:50 UTC, Andrei Alexandrescu wrote: We commit to be more formal about the process, but overall it is correct that we have more say in what gets in the language. Allow me to add a couple of things. First, this is the way things are commonly done in language design - a small committee defines a formal process and ultimately decides on features. In fact it is unusual that we put unfinished ideas up for discussion, which we hope raises the level of responsibility in the community. I understand how what we did has been misunderstood as us just considering ourselves exempt from the due process. We have a very strong interest to follow a formal process and have the trail serve as a template to follow. (That intent is visible in https://github.com/dlang/DIPs/blob/master/DIPs/DIP1005.md as well, with the unexpected twist of an interesting idea that obsoleted it. The idea has come from Daniel Nielsen in this forum and has been adapted with credit in https://github.com/dlang/druntime/pull/1756.) Second, we are very much open to increasing the size of our committee. This is already happening - it is obvious that known strong contributors with a good track record and who make consistently valuable contributions have a huge impact on the language and library definition. Fortunately we have quite a few of those. In contrast, our attention is more difficult to be commanded by commentators who have little history of pull requests, good-quality DIPs, articles etc. and attempt to strong-arm us into pursuing underspecified ideas. Third, all of this is a process, not an immutable status. We are learning leadership on the job, and although I think we have made large strides since only e.g. one year ago, there is much more to improve. Expect more changes in the future and please bear with us and grant us your understanding as we are getting the hang of it. Thank you for the detailed reply. It helps the community's understanding.
Re: Walter and Andrei and community relationship management
On Thursday, 6 April 2017 at 19:17:53 UTC, Walter Bright wrote: There's one big difference. The proposal I put forth is fairly complete, and I am well along implementing it. deadalnix's requires a great deal of further work just to figure out what it means - as presented, it is not much more than an idea. Nor is it a simple idea. It will upend D's type system. It'll likely affect much of the semantic code in the compiler, and will require a lot of retrofitting in Phobos. Who knows how extensive that will be. I understand. It was a major change, and you likely felt the risks were not worth it. I don't know any language process that would accept it as it stands - it would get bounced back with "needs more work". Yes, but if you had detailed which areas, he might have been more receptive. Somebody has to work on it to move it forward - who do you propose should do it? We don't have a team anywhere whose job it is to create detailed proposals based on other peoples' ideas (which appear in the forum every day). Things rarely move forward unless a champion for it self-selects with the will and motivation to push it relentlessly. That sets a high bar. Can you give an example of when this has worked well, or have they been mostly minor changes? (The general attitude of the C++ committee is if no champion emerges for a proposal that is willing to fix it and address all concerns about it and fight for it, then the proposal is not worth considering. It works for them.) So this is your and Andrei's approach? If so, perhaps you want to document it, so everyone understands. If you or anyone else wants to be the champion for deadalnix's idea, I encourage you to do so. Collaborate here or in any way that works for you. I'm not going to shut you or anyone down on such discussions. I have already done a review of it and identified where it needs more work, so the next step is up to you. No, it's his big idea, and I don't understand it well enough to push it.
But I also think that your vision of the language seems to be fluid at present, with the requirements to support a GC, ARC, and the ability to remove the run-time. Again, perhaps you and Andrei want to confirm this direction. My intent for this post was to bring to both your attentions how this was perceived by the outsiders/community, and a perceived (if incorrect) double standard. That was all. (I also did not submit it as a DIP because the DIP process at the time was in limbo due to Dicebot exiting it. Now that Mike Parker is the new DIP czar, things should be moving again.) Good to hear.
Re: What are we going to do about mobile?
On Friday, 7 April 2017 at 14:47:03 UTC, Marco Leise wrote: Am Thu, 06 Apr 2017 05:24:07 + schrieb Joakim : D is currently built and optimized for that dying PC platform. As long as the world still needs headless machines running web sites, simulations, cloud services, ...; as long as we still need to edit office documents, run multimedia software to edit photos and video, play AAA video games the "PC master race" way; I'm confident that we have a way to go until all the notebooks, PCs and Macs disappear. :) I'd say we just have /more/ fully capable computers around us nowadays. I'd probably roughly split it into - web/cloud server machines, often running VMs - scientific computation clusters - desktops (including notebooks) - smart phones - embedded devices running Linux/Android (TVs, receivers, refrigerators, photo boxes, etc...) Perhaps we need real data as to which markets are really growing?
Re: What are we going to do about mobile?
On Saturday, 8 April 2017 at 05:37:24 UTC, Jethro wrote: On Thursday, 6 April 2017 at 05:24:07 UTC, Joakim wrote: I have been saying for some time now that mobile is going to go after the desktop next (http://forum.dlang.org/thread/rionbqmtrwyenmhmm...@forum.dlang.org), Samsung just announced it, for a flagship device that will ship tens of millions: [...] The D community should start a D-based operating system for Android and possibly iPhone devices. Since D can compile to many different platforms, the OS could be platform agnostic. For industrial usage, how about the QNX OS on ARM processors? This is a big market.
Walter and Andrei and community relationship management
I'm going to address this post to Walter and Andrei, as the joint captains of the D ship, so to speak. As an outsider I can see there are two major issues at play, at present, in this series of threads. 1. The technical proposals and arguments for x & y, or against x & y. On one side is Walter & Andrei with Walter's proposal, and on the other is Deadalnix. 2. The relationship management between the co-captains and the community, and the perception of different rules for the captains versus the community, which causes resentment and friction among members of this community. I'm not going to discuss item 1 in depth, because item 2 is far more important. So let's go back to the beginning. Andrei starts a thread, 4 days ago, titled 'Exceptions in @nogc code'. "Walter and I discussed the following promising setup: Use "throw new scope Exception" from @nogc code. That will cause the exception to be allocated in a special stack-like region. If the catching code uses "catch (scope Exception obj)", then a reference to the exception thus created will be passed to catch. At the end of the catch block there's no outstanding reference to "obj" so it will be freed. All @nogc code must use this form of catch. If the catching code uses "catch (Exception obj)", the exception is cloned on the gc heap and then freed. Finally, if an exception is thrown with "throw new Exception" it can be caught with "catch (scope Exception obj)" by copying the exception from the heap into the special region, and then freeing the exception on the heap. Such a scheme preserves backward compatibility and leverages the work done on "scope". Deadalnix enters the discussion in Andrei's thread, with: "It doesn't need any kind of throw new scope Exception, and was proposed, literally, years ago during discussion around DIP25 and alike. I urge you to reconsider the proposals that were made at the time. They solve all the problems you are discovering now, and more.
And, while more complex than DIP25 alone, considering DIP25+DIP1000+this thing+the RC object thing, you are already in the zone where the "simple" approach is not so simple already. Things are unfolding exactly as predicted at the time. Ad hoc solutions to various problems are proposed one by one and the overall complexity is growing much larger than initially proposed solutions." Others (Eugene Wissner & Dmitry Olshansky) also comment on the proposed syntax. Walter responds to Deadalnix by asking for more details: > It doesn't need any kind of throw new scope Exception, and was proposed, > literally, years ago during discussion around DIP25 and alike. "A link to that proposal would be appreciated." Walter adds an additional requirement in response to Deadalnix: "This does not address the stated need (by many programmers) to not even have to link in the GC code. A solution that falls short of this will be rejected. The rejections may not be technically well founded, but we're not in a good position to try to educate the masses on this. A solution that does not require linking to the GC sidesteps that problem." Walter then adds the following: > I urge you to reconsider the proposals that were made at the time. They solve all > the problems you are discovering now, and more. And, while more complex than > DIP25 alone, considering DIP25+DIP1000+this thing+the RC object thing, you are > already in the zone where the "simple" approach is not so simple already. "I did some searching for this previous proposal discussion, but could not find it. Instead, I'll go by your description of it here. I've written a more fleshed out proposal and started a new thread with it. Feel free to rip it to shreds!" Walter then starts a new thread, 3 days ago, titled Proposal: "Exceptions and @nogc". Within the post he lays out sub-headings: Problem; Solution; throw Expression; catch (Exception e); Chained Exceptions; Copying Exceptions; Legacy Code Breakage; Conclusion; and References.
It's like a DIP in structure, but it's NOT a formal DIP. Walter replies to a request by Rikki Cattermole for a DIP: > And as probably expected, DIP please. It's a big set of changes and warrants > documenting in that form. "If it survives the n.g. discussion I will. Though the DIP process is in limbo at the moment since Dicebot is no longer running it." Here Walter is saying, let's discuss it thoroughly on the n.g. and then LATER, if it's any good, he will formally put it into a DIP. Fair enough. Discussions continue. Deadalnix then composes a long reply, in Andrei's thread, to Walter's request. He is hopeful he will be heard: "The forum search isn't returning anything useful so I'm not sure how to get that link. However, it goes
Re: Exceptions in @nogc code
On Sunday, 2 April 2017 at 21:27:07 UTC, deadalnix wrote: On Saturday, 1 April 2017 at 22:08:27 UTC, Walter Bright wrote: On 4/1/2017 7:54 AM, deadalnix wrote: It doesn't need any kind of throw new scope Exception, and was proposed, literally, years ago during discussion around DIP25 and alike. A link to that proposal would be appreciated. The forum search isn't returning anything useful so I'm not sure how to get that link. However, it goes roughly as follows. Note that it's a solution to solve DIP25+DIP1000+RC+nogc exceptions and a slew of other issues, and that comparing it to any of these independently will yield the obvious conclusion that it is more complex. But that wouldn't be a fair comparison, as one should compare it to the sum of all these proposals, not to any of them independently. [snip] This mechanism solves numerous other issues. Notably and non-exhaustively: - General reduction in the amount of garbage created. - Ability to transfer ownership of data between threads safely (without casts to/from shared). - Safe std.parallelism. - Elaborate construction of shared and immutable objects. - Safe reference counting. - Safe "arena" style reference counting such as: https://www.youtube.com/watch?v=JfmTagWcqoE - Solves problems with collection ownership and the like. This silence is killing me! Can one assume that Walter is thinking about deadalnix's detailed proposal above, and that he will give a formal response once he has given it serious thought and discussed it with Andrei? cheers Nick
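For readers skimming this thread, the mechanism Andrei described ("throw new scope Exception" allocated in a stack-like region, freed when a scope catch block ends, cloned onto the GC heap for a non-scope catch) would look roughly as follows. This is the proposal's hypothetical syntax as quoted in the thread, not valid current D:

```d
// Hypothetical syntax from the proposal -- NOT compilable with current D.
@nogc void mayFail(int x)
{
    if (x < 0)
        throw new scope Exception("negative"); // allocated in the special
                                               // stack-like region, not the GC heap
}

@nogc void nogcCaller()
{
    try
        mayFail(-1);
    catch (scope Exception e)  // the form all @nogc code must use
    {
        // e refers to the region-allocated exception; since no reference
        // escapes the catch block, it is freed when the block ends
    }
}

void gcCaller()
{
    try
        mayFail(-1);
    catch (Exception e)  // non-scope catch: the exception is cloned onto
    {                    // the GC heap and the region copy is freed
        // e is an ordinary GC-managed object here
    }
}
```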
OT: Tiobe Index - December Headline: What is happening to good old language C?
The programming language of all programming languages, C, has been going down consistently since November 2015. The language was in a range of 15% to 20% for more than 15 years and this year it suddenly started to suffer. Its ratings are now less than 10% and there is no clear way back to the top. So what happened to C? Some months ago we already listed some possible reasons: it is not a language that you think of while writing programs for popular fields such as mobile apps or websites, it is not evolving that much and there is no big company promoting the language. Maybe there are more reasons. If you happen to know one, please share it with us. source: http://www.tiobe.com/tiobe-index/ (Dec 2016) cheers Nick
Re: D Flowgraph GUI Interface
On Monday, 5 December 2016 at 12:35:46 UTC, D.Rex wrote: Howdy, I am embarking on a project to create a Flowgraph (node based) GUI interface, much like Blender's Node Editor or Unreal Engine 4's Blueprint System, for other future projects. I have been looking around for many months now for tutorials but I can never quite find anything. But I really like D as a language, and want this interface to be written in it. Cheers! Try here: https://s3.amazonaws.com/gamedev-tutorials/Tutorials/Scripting-Flow_Graph-(01)_Introduction_to_Flow_Graph.pdf
Re: Earthquakes - New Zealand
On Monday, 14 November 2016 at 14:25:02 UTC, Meta wrote: On Sunday, 13 November 2016 at 12:17:22 UTC, rikki cattermole wrote: I saw in the news that there was a tsunami and many aftershocks coming after the initial quake. Hope everyone's okay. The top of the South Island and Wellington took the biggest hits. Wellington is recovering well, but the South Island is another matter, and will take longer. Nick
Re: Unum II announcement
On Monday, 10 October 2016 at 05:32:55 UTC, Nick B wrote: On Saturday, 8 October 2016 at 00:35:31 UTC, Nick B wrote: On Sunday, 25 September 2016 at 02:22:01 UTC, Nick B wrote: I suggest that now programmers would/may have a choice: be slow and correct, or fast and incorrect, and that would depend on whether real accuracy is important or not, the types of problems being worked on, and the cost of failure. (see examples in John's PowerPoint presentation). Hi Everyone. Here is a link: [ http://ubiquity.acm.org/article.cfm?id=2913029 ] to an interview between John L Gustafson & Walter Tichy (Professor of Computer Science) published in the ACM Digital Library. Published April 2016. pdf - 11 pages. It is an easy and informative read with a computer science lens. I loved this quote at the end of the interview: "Unums are to floats what floats are to integers. They are the next step in the evolution of computer arithmetic". Perhaps what D will need in the future. cheers Nick
Re: Unum II announcement
On Saturday, 8 October 2016 at 00:35:31 UTC, Nick B wrote: On Sunday, 25 September 2016 at 02:22:01 UTC, Nick B wrote: I suggest that now programmers would/may have a choice: be slow and correct, or fast and incorrect, and that would depend on whether real accuracy is important or not, the types of problems being worked on, and the cost of failure. (see examples in John's PowerPoint presentation). But I will ask John G about the types of users showing interest in unums. Hi. Below is a copy of John's reply, which is interesting and insightful! [starts] There are some kinds of problems that can only be solved by unums and not by floats. Initially, those are the main focus. Examples include: * Global optimization where proof is needed that all optima have been found * Root-finding methods for fully general functions, including non-differentiable functions and other poorly-behaved functions * N-body dynamics with rigorous bounds on the orbital trajectories that grow only linearly in the number of time steps * Methods that need an ultra-fast but ultra-low-precision initial solution with guaranteed mathematical correctness * Solutions of systems of nonlinear equations that also reveal whether the problem is stiff or unstable. It is a misconception, more common than I would like, that the purpose of unums is to substitute for floats in existing code and then show some kind of superiority. That can happen in terms of getting better answers with fewer bits, and I gave some examples in my book, but they won't be "faster," whatever that means. Floats are a guess about the answer, so they contain no rigorous mathematical bound on the answer; how do I compare their speed at guessing with the speed of a method that is rigorous? Most people don't even think about the information in an answer as the goal of a benchmark, and just measure the time to finish an algorithm and print a result.
Put another way, if you don't care whether an answer is mathematically correct, then I can compute very fast indeed. Instantly, in fact. [ends] Insightful indeed. Of course, these types of problems may be too specialised for the general D community. I really don't know for sure. I decided to pop this [John G's reply] up again, in case anyone was interested in * rigorous mathematical bound [solutions] on [an] answer * even if this is for a small D audience. Nick
Re: Unum II announcement
On Sunday, 25 September 2016 at 02:22:01 UTC, Nick B wrote: I suggest that now programmers would/may have a choice: be slow and correct, or fast and incorrect, and that would depend on whether real accuracy is important or not, the types of problems being worked on, and the cost of failure. (see examples in John's PowerPoint presentation). But I will ask John G about the types of users showing interest in unums. Hi. Below is a copy of John's reply, which is interesting and insightful! [starts] There are some kinds of problems that can only be solved by unums and not by floats. Initially, those are the main focus. Examples include: * Global optimization where proof is needed that all optima have been found * Root-finding methods for fully general functions, including non-differentiable functions and other poorly-behaved functions * N-body dynamics with rigorous bounds on the orbital trajectories that grow only linearly in the number of time steps * Methods that need an ultra-fast but ultra-low-precision initial solution with guaranteed mathematical correctness * Solutions of systems of nonlinear equations that also reveal whether the problem is stiff or unstable. It is a misconception, more common than I would like, that the purpose of unums is to substitute for floats in existing code and then show some kind of superiority. That can happen in terms of getting better answers with fewer bits, and I gave some examples in my book, but they won't be "faster," whatever that means. Floats are a guess about the answer, so they contain no rigorous mathematical bound on the answer; how do I compare their speed at guessing with the speed of a method that is rigorous? Most people don't even think about the information in an answer as the goal of a benchmark, and just measure the time to finish an algorithm and print a result. Put another way, if you don't care whether an answer is mathematically correct, then I can compute very fast indeed. Instantly, in fact. [ends] Insightful indeed.
Of course, these types of problems may be too specialised for the general D community. I really don't know for sure. In light of what John has stated above, I should therefore correct my previous statement: "I suggest that now, programmers going forward will have a choice: be slower with a rigorous mathematical bound on the answer, or use a fast and incorrect guess, and that would depend on whether real accuracy is important or not, the types of problems being worked on, and the cost of failure". Nick
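John's distinction between a float's "guess" and a rigorous bound can be made concrete with ordinary interval arithmetic. This is NOT unum arithmetic, just the simplest bounded scheme, sketched here purely as an illustration of what a rigorous bound buys you:

```d
// A toy interval type: instead of a single rounded "guess", carry a
// [lo, hi] pair guaranteed to contain the true result.
import std.math : nextDown, nextUp;
import std.stdio : writeln;

struct Interval
{
    double lo, hi;

    // Outward-rounded addition: widen each endpoint by one ulp so the
    // exact real-number sum provably lies inside the result. (Real
    // implementations switch the FPU rounding mode instead.)
    Interval opBinary(string op : "+")(Interval rhs) const
    {
        return Interval(nextDown(lo + rhs.lo), nextUp(hi + rhs.hi));
    }

    bool contains(double x) const { return lo <= x && x <= hi; }
}

void main()
{
    auto a = Interval(0.1, 0.1);
    auto b = Interval(0.2, 0.2);
    auto sum = a + b;

    writeln(0.1 + 0.2 == 0.3);   // false: the float guess misses exactly
    writeln(sum.contains(0.3));  // true: the bound brackets the answer
}
```

The interval is wider than a single float, and computing it costs more; that is exactly the trade-off between "slower with a rigorous bound" and "a fast guess" described above.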
Re: Unum II announcement
On Sunday, 25 September 2016 at 21:47:15 UTC, H. S. Teoh wrote: A dub package seems like the best approach at present. I would be quite interested in such a library solution, FWIW. If it turns out to be a good idea, then we can consider putting it into Phobos, or perhaps even the language. But let's not jump the gun here. T Agreed. This would be a good first step. Just need to wait for a reference implementation and test cases to be developed, and published. Nick
Re: Unum II announcement
On Saturday, 24 September 2016 at 12:56:48 UTC, Andrei Alexandrescu wrote: From what I read in the freely-available materials on Unum (actually I also skimmed the book) it seems to me Unum is predicated on a hardware implementation. It seems there would be little interest in a slow software emulation. -- Andrei Andrei, I suggest that now programmers would/may have a choice: be slow and correct, or fast and incorrect, and that would depend on whether real accuracy is important or not, the types of problems being worked on, and the cost of failure. (see examples in John's PowerPoint presentation). But I will ask John G about the types of users showing interest in unums. Nick
Re: Unum II announcement
On Wednesday, 17 August 2016 at 16:03:22 UTC, H. S. Teoh wrote: On Wed, Aug 17, 2016 at 03:44:48AM +, Nick B via Digitalmars-d wrote: On Monday, 15 August 2016 at 00:42:16 UTC, H. S. Teoh wrote: [...] > Thanks to operator overloading and alias this, we can > probably do a pretty good job implementing unums as a > library so that people can try it out. It may be easier to link to the reference C implementation first. Easier, certainly. But I think D may offer some advantages over a purely C implementation. It's worth exploring, at any rate. For anyone interested, John Gustafson has recently posted a PowerPoint presentation (dated 2 June 2016), which details the failures (both technical and dollar-wise) of floating-point calculations. At the end of the presentation, he details the state of the current implementations (eg Lawrence Livermore National Lab, and others) and the proposed implementations in hardware! http://www.johngustafson.net/presentations/UnumArithmetic-ICRARseminar.pptx enjoy. Nick
Re: Unum II announcement
On Wednesday, 17 August 2016 at 11:34:15 UTC, Seb wrote: If you want Andrei or Walter's opinion on whether they could in principle imagine Unum as part of Phobos or even the language, you should ping them directly via mail. Agreed.
Re: Implement the "unum" representation in D ?
On Wednesday, 21 September 2016 at 07:52:18 UTC, Ethan Watson wrote: On Tuesday, 20 September 2016 at 22:52:57 UTC, Nic Brummell Is there some central repository with links to the active projects? I'll try and wrap my head fully around the math before we get to that point though. Ethan There is another thread on this subject, called: » General » Unum II announcement. There is a lot of technical discussion on unums at: https://groups.google.com/forum/#!forum/unum-computing Note that Skip Cave is even claiming to have unums running in the J programming language. You may also want to read John G's post at the bottom of the thread called "The Great Debate: John Gustafson and William Kahan (Video?)" cheers N.
Re: Implement the "unum" representation in D ?
On Tuesday, 20 September 2016 at 22:52:57 UTC, Nic Brummell wrote: If anyone is still interested in this concept whatsoever, we are holding a mini-workshop on the current developments of Unums at the University of California Santa Cruz on Oct 24th. We'd love to have some participation from interested parties, including presentations on any attempts to implement (in D?) etc. Please see https://systems.soe.ucsc.edu/2016-symposium or contact me via here. Nic. Nic, thanks for the heads up. John Gustafson will have the best understanding of the progress towards implementing this in C, I believe. Perhaps you could post back an update after the conference? Nick
Re: Unum II announcement
On Monday, 15 August 2016 at 00:42:16 UTC, H. S. Teoh wrote: On Sun, Aug 14, 2016 at 09:08:31PM +, Nick B via Digitalmars-d wrote: While I certainly hope this research would eventually revolutionize numerical applications, it would take a long time before it would catch on, and until then, I think it's better to have unums available as a library rather than introducing them into the language itself. Having it available via dub or something like that would probably be a very good idea. I totally agree. Thanks to operator overloading and alias this, we can probably do a pretty good job implementing unums as a library so that people can try it out. It may be easier to link to the reference C implementation first. I note that I haven't seen any feedback from Walter or Andrei. I can only assume that neither is interested in this subject. Nick
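Teoh's suggestion, that operator overloading and `alias this` make a library numeric type nearly drop-in, can be sketched in a few lines. The `Unum` struct below is only a placeholder wrapping a double (a real library would supply the unum encoding and arithmetic); the point is the two language features doing the work:

```d
// Skeleton of the library approach described above: a user-defined
// numeric type made drop-in via operator overloading and `alias this`.
struct Unum
{
    double payload;   // placeholder for a real unum bit encoding

    this(double v) { payload = v; }

    // Arithmetic via operator overloading; a real unum library would
    // implement the unum rules here instead of deferring to double.
    Unum opBinary(string op)(Unum rhs) const
    {
        return Unum(mixin("payload " ~ op ~ " rhs.payload"));
    }

    // `alias this` lets a Unum decay to double where one is expected,
    // so existing float-based code can be tried with minimal edits.
    double toDouble() const { return payload; }
    alias toDouble this;
}

void main()
{
    import std.stdio : writeln;
    auto a = Unum(1.5), b = Unum(2.5);
    Unum c = a + b;   // operator overloading
    double d = c;     // implicit conversion via alias this
    writeln(d);
}
```

With this shape, swapping `double` for `Unum` in user code is mostly a matter of changing declarations, which is what makes the dub-library route a practical way for early adopters to experiment.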
Re: Unum II announcement
On Wednesday, 10 August 2016 at 08:36:41 UTC, Nick B wrote: On Tuesday, 9 August 2016 at 09:45:58 UTC, Nick B wrote: http://www.johngustafson.net/pubs/RadicalApproach.pdf Please note that Figure 8 on page 9 has errors. Please note these errors have now been corrected, and the paper is distributed under the terms of the Creative Commons Attribution-Non Commercial 3.0 License, which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is properly cited. Does anyone have any feedback on this paper? Does anyone feel that unums should be part of D? Nick
Re: Unum II announcement
On Tuesday, 9 August 2016 at 09:45:58 UTC, Nick B wrote: On Tuesday, 23 February 2016 at 21:47:10 UTC, H. S. Teoh wrote: On Tue, Feb 23, 2016 at 09:20:23PM +, Nick B via Digitalmars-d wrote: ? I hope to be able to present a link to the finalised paper on Unums within 24 to 48 hours. Here is the link to the paper as promised. http://www.johngustafson.net/pubs/RadicalApproach.pdf Please note that Figure 8 on page 9 has errors. Three of the four-bit unums in this figure are wrong, as there are duplicates of 0110, 0100, 0010. For the correct diagram please see page 20 of http://www.johngustafson.net/presentations/Unums2.0.pdf Nick
Re: Unum II announcement
On Tuesday, 23 February 2016 at 21:47:10 UTC, H. S. Teoh wrote: On Tue, Feb 23, 2016 at 09:20:23PM +, Nick B via Digitalmars-d wrote: I do want to clarify, though, that I think at this point implementing unum in the D compiler is almost certainly premature. What I had in mind was more of a unum library that early adopters can use to get a feel for how things would work. If the unum system turns out to garner enough support that it starts getting hardware support, then it should be relatively easy to transition it into a built-in type. I don't see this happening for at least another 5-10 years, though. It took at least as long (probably longer) for hardware manufacturers to adopt the IEEE standard, and right now unums aren't even standardized yet. T Just a quick update re Unum 2.0: I hope to be able to present a link to the finalised paper on Unums within 24 to 48 hours. Also, below is a quick update on the software development of Unums from Prof Gustafson. "Work is progressing on a reference C implementation on two fronts, and I am presently trying to unite the two. One is the coding by Isaac Yonemoto, which is being funded by my Singapore employer (A*STAR)... which is like having a team of about eight programmers, right there. He's an ultra-fast programmer and a crack mathematician as well, so he finds amazing shortcuts and insights that I don't notice. Isaac tends to code in Julia first, and C second. When he does C, he almost does assembler; he carefully studies what x86 instructions are available and squeezes every cycle. The other is a small team at UC Santa Cruz with one applied math grad student doing most of the coding and a couple of faculty advisors (plus me dialing in now and then), and it just about has plus-minus-times-divide working, the last I heard. This is part of an Open Source initiative, so you can bet that this will be documented and released into the wild the way open source software should be. That one is going for C directly.
There may be other efforts. Some MIT folks also created a Julia version of type 2 unums, and it may be almost as fast as C. There may be others."
Re: D mentioned but Rust wins
On Tuesday, 17 May 2016 at 05:39:41 UTC, Nick B wrote: source: http://www.wired.com/2016/03/epic-story-dropboxs-exodus-amazon-cloud-empire/ Also, from the same article: But Go’s “memory footprint”—the amount of computer memory it demands while running Magic Pocket—was too high for the massive storage systems the company was trying to build. Dropbox needed a language that would take up less space in memory, because so much memory would be filled with all those files streaming onto the machine. So, in the middle of this two-and-a-half-year project, they switched to Rust on the Diskotech machines.
D mentioned but Rust wins
[snip] Cowling, Turner, and others originally built Magic Pocket using a new programming language from Google called Go. Here too, Dropbox is riding a much larger trend: languages designed specifically for the new world of massively distributed online systems. Apple has one called Swift, Mozilla makes one called Rust, and there’s an independent one called D. All these languages let coders quickly build software that runs quickly—even executed across hundreds or thousands of machines. source: http://www.wired.com/2016/03/epic-story-dropboxs-exodus-amazon-cloud-empire/
Re: How are you enjoying DConf? And where to go next?
On Friday, 6 May 2016 at 14:13:35 UTC, Andrei Alexandrescu wrote: The atmosphere here is great, and I'm curious how it feels for those who are watching remotely. Is the experience good? What can we do better? I also viewed from New Zealand, but I thought it was hard to view the images, and to see the person's face when they were talking. Sometimes the images were of the speaker standing on the stage at a distance. Too much of the stream was black around the sides of the speaker. The best way to stream/record such an event of someone talking to slides is a split-image approach: part of the image is a close-up of the speaker, with good audio, and the rest of the image is on the presentation. Later, when it's a Q&A, this can change, so you can see the interaction with the audience. Nick
Re: DConf 2016 Berlin - Live Streaming ?
On Thursday, 21 April 2016 at 00:42:51 UTC, Dicebot wrote: Almost certainly yes. I will make an announcement as soon as some last details are figured out (expect it within 24 hours ;)). Hi. Any news on this ? Nick
DConf 2016 Berlin - Live Streaming ?
Hi. Can anyone advise if there will be live streaming of this event ? cheers Nick
Re: Opportunity: Software Execution Time Determinism
On Wednesday, 13 April 2016 at 15:50:40 UTC, Nordlöw wrote: I'm currently in an industry with extremely high demands on software determinism, both in space and time (hard realtime). I'm aware of the lack of absolute time-determinism in the CPU architectures of today. What is absolute time-determinism in a CPU architecture ? And why is it important in hard real-time environments ? Nick
Re: Suggestion for a book
On Wednesday, 2 March 2016 at 17:12:39 UTC, karabuta wrote: Whilst coding in D, there are so many approaches one can take to structure a project. As the code base grows large, one can get really confused as to how best to structure code (modules, directories, classes, using classes or structs, utilizing language features, etc.). Making a good decision initially will save a project a whole lot of development time (+ debugging & testing time). This would certainly be a useful book. I would buy one. Nick
Re: Unum II announcement
On Thursday, 25 February 2016 at 10:36:08 UTC, Robbert van Dalen wrote: On Wednesday, 24 February 2016 at 21:43:59 UTC, Nick B wrote: On Wednesday, 24 February 2016 at 20:11:39 UTC, Robbert van Dalen wrote: Nick, I've just asked Dr. Gustafson to create a group on his behalf and he was fine with it: https://groups.google.com/forum/#!forum/unum-computing It would be nice if you'd subscribe to it. Robbert, I will subscribe, as you suggested. I see that the new user group has been getting page views already. That's quick. Nick
Re: Unum II announcement
On Wednesday, 24 February 2016 at 21:14:46 UTC, Ola Fosheim Grøstad wrote: On Wednesday, 24 February 2016 at 20:59:20 UTC, Timon Gehr wrote: The basic idea for Unums seems that you get an estimate of the bounds and then recompute using higher precision or better algorithm when necessary. Agreed. This seems to be my understanding as well. Nick
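To make the "estimate the bounds, then recompute at higher precision when necessary" idea concrete, here is a small illustrative sketch. To be clear, this is not Gustafson's actual unum design, and it is in Python rather than D: it just carries a [lo, hi] enclosure of a sum using directed rounding, and doubles the working precision whenever the enclosure is too wide.

```python
# Illustrative only: not unums, just the underlying idea of keeping an
# enclosure [lo, hi] and recomputing at higher precision when it is too wide.
from decimal import Decimal, getcontext, ROUND_FLOOR, ROUND_CEILING

def bounded_harmonic(n, precision):
    """Sum 1/1 + 1/2 + ... + 1/n at the given decimal precision,
    returning a (lo, hi) enclosure via directed rounding."""
    bounds = []
    for rounding in (ROUND_FLOOR, ROUND_CEILING):
        getcontext().prec = precision
        getcontext().rounding = rounding
        total = Decimal(0)
        for k in range(1, n + 1):
            total += Decimal(1) / Decimal(k)  # every op rounds down (lo) or up (hi)
        bounds.append(total)
    return bounds[0], bounds[1]

def adaptive_harmonic(n, tolerance=Decimal("1e-12"), start_prec=4, max_prec=64):
    """Double the working precision until the enclosure is tight enough."""
    prec = start_prec
    while True:
        lo, hi = bounded_harmonic(n, prec)
        if hi - lo <= tolerance or prec >= max_prec:
            return lo, hi, prec
        prec *= 2

lo, hi, prec = adaptive_harmonic(100)
# The exact value of H_100 is guaranteed to lie inside [lo, hi].
```

The point of the sketch is the control flow: cheap low-precision arithmetic first, with the enclosure width itself telling you when a higher-precision (or better) computation is actually needed.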
Re: Unum II announcement
On Wednesday, 24 February 2016 at 20:11:39 UTC, Robbert van Dalen wrote: Sorry to hijack this thread, but I think unum II is the best thing since sliced bread! :) It would be great if Dr. Gustafson would initiate a google group so we can discuss the inner workings of unum II. If not, I guess I will start one :) Where do you suggest that such a group hang out ? Robbert, what is your background ? I have asked Dr. Gustafson how he would like to respond to some of the questions raised on this forum. Nick
Re: Unum II announcement
On Wednesday, 24 February 2016 at 21:07:01 UTC, Ola Fosheim Grøstad wrote: Implement unum-computing as GPU-compute-shaders. They are present in OpenGL ES 3.1, so they will become ubiquitous. Are you sure ? Here is a link to the spec (pdf, 505 pages), and I can find no mention of unums: https://www.khronos.org/registry/gles/specs/3.1/es_spec_3.1.pdf Nick
Re: Unum II announcement
On Tuesday, 23 February 2016 at 21:47:10 UTC, H. S. Teoh wrote: On Tue, Feb 23, 2016 at 09:20:23PM +, Nick B via Digitalmars-d wrote: On Tuesday, 23 February 2016 at 18:35:47 UTC, H. S. Teoh wrote: What I had in mind was more of a unum library that early adopters can use to get a feel for how things would work. If the unum system turns out to garner enough support that it starts getting hardware support, then it should be relatively easy to transition it into a built-in type. I agree that it needs to become a test library first for the early adopters to play with. Nick
Re: Unum II announcement
On Tuesday, 23 February 2016 at 18:35:47 UTC, H. S. Teoh wrote: This is very interesting, and looks more promising than the previous unum presentation. While it's too early to hope for a hardware implementation, I'm interested in implementing a software emulation in D. D's powerful templating system could let us experiment with various implementation possibilities (e.g., different word sizes, variable vs. fixed sizes, etc.), to get a feel for how it would work in real life. T Excellent suggestion. At least one firm supporter. Would Andrei or Walter like to comment ? Please note that the error identified in this thread has now been corrected in the Powerpoint presentation. The error may still be in the PDF. Nick
Re: Unum II announcement
On Monday, 22 February 2016 at 17:15:54 UTC, Charles wrote: On Monday, 22 February 2016 at 13:11:47 UTC, Guillaume Piolat wrote: Slide 12, 0101 is repeated. The top one should actually be 0111 I believe (this error also repeats). I will check with John re this error. Aside from that, the notes were super useful, not sure if you could add them in there. It's likely that we cannot add the Notes to the PDF, which is why I recommended that everyone download the presentation and read it via Powerpoint; then you can see all the Notes. Nick
Unum II announcement
"For those of you who think you have already seen unums, this is a different approach. Every one of the slides here is completely new and has not been presented before the Multicore 2016 conference [in Wgtn, NZ]." Here is the link, as promised, to the new presentation by John Gustafson: http://www.johngustafson.net/unums.html I strongly recommend that you download the presentation [Powerpoint, 35 pages], as there are lots of Notes with the presentation. Note that the previous thread re Unums can be found here: https://forum.dlang.org/thread/quzsjahniokjotvta...@forum.dlang.org I welcome any feedback, especially from Walter or Andrei. cheers Nick
Re: Implement the "unum" representation in D ?
On Saturday, 20 February 2016 at 23:25:40 UTC, Nick B wrote: On Wednesday, 17 February 2016 at 16:35:41 UTC, jmh530 wrote: On Wednesday, 17 February 2016 at 08:11:21 UTC, Nick B wrote: Having just looked at the slides again, I believe this will break compatibility with std.math (for example, it throws out NaN), just as D has broken full compatibility with all of C++. UNUM II is also proposing to break completely from IEEE 754 floats and gain computation with mathematical rigor ... Can anyone tell me who the maths experts and hard-science users around here are ? Nick
Re: Implement the "unum" representation in D ?
On Wednesday, 17 February 2016 at 16:35:41 UTC, jmh530 wrote: On Wednesday, 17 February 2016 at 08:11:21 UTC, Nick B wrote: Wrt Phobos, I would just recommend that whatever unum library eventually gets written has a companion with the equivalent of the functions from std.math. Having just looked at the slides again, I believe this will break compatibility with std.math (for example, it throws out NaN), just as D has broken full compatibility with all of C++. I hope to have a link to the revised presentation within 7 days. Can anyone tell me who the maths experts and hard-science users around here are ? Nick
Re: Implement the "unum" representation in D ?
Hi. John Gustafson was in town (Wellington, NZ) for the Multicore World Conference 2016 ( http://www.multicoreworld.com/). I caught up with him tonight and spoke to him for about two hours. Here is a quick summary of what we discussed. John has just redesigned Unums, to address the design issues in version 1.0. He presented his Powerpoint presentation to the conference, with the details of Unums 2.0 (a tentative name at present). It's an improved design, but I will only briefly detail it: "It will have more dynamic range with 16-bit values than IEEE half-precision, but only by a small amount. Still remarkable to be uniformly better in dynamic range and precision, with support for inexact values and perfect reciprocation. If a language supports just one unum data type, John believes it should be the 16-bit one". John has agreed to provide a link to the Powerpoint presentation in a couple of weeks, and then later a link to his new published paper on the subject, when it is ready. There will likely be a new book, building on version 1.0, again tentatively titled 'Unums 2.0'. I also discussed with him integrating it with D. At present there is a C codebase under construction, but this could be rewritten in D in the future. D may require some language changes, and a new Phobos library, to support this advanced functionality. Of course Walter will have to decide if he wants this advanced number system as a part of D. As an aside, John mentioned that Rex Computing (http://www.rexcomputing.com/) is using Unums with the Julia language for their new hyper-efficient processor architecture. It will be interesting to see what these whiz kids deliver in time. cheers Nick
Re: OT: 'conduct unbecoming of a hacker'
On Wednesday, 10 February 2016 at 02:11:25 UTC, Laeeth Isharc wrote: http://sealedabstract.com/rants/conduct-unbecoming-of-a-hacker/ (His particular suggestion about accept patches by default is not why I post this). ' ... Hacking should be about making things. And yet a great many of our institutions are set up to discourage, distract, destroy, and derail the making of anything. It’s time we called it what it is: conduct unbecoming of a hacker. ' Great post, and very funny. Nick
Re: vibe.d benchmarks
On Thursday, 31 December 2015 at 12:44:37 UTC, Daniel Kozak wrote: V Thu, 31 Dec 2015 12:26:12 + yawniek via Digitalmars-d napsáno: obvious typo and thanks for investigating etienne. @daniel: i made similar results over the network. i want to redo them with a more optimized setup though. my wrk server was too weak. the local results are still relevant as its a common setup to have nginx distribute to a few vibe instances locally. One thing I forgot to mention: I had to modify a few things. vibe.d (probably) has a bug: it uses threadsPerCPU instead of coresPerCPU in setupWorkerThreads. Here is a commit which makes it possible to set it up by hand: https://github.com/rejectedsoftware/vibe.d/commit/f946c3a840eab4ef5f7b98906a6eb143509e1447 (I just modified the vibe.d code to use all my 4 cores and it helped a lot.) Can someone tell me what changes need to be committed, so that we have a chance at getting some decent (or even average) benchmark numbers ?
Re: vibe.d benchmarks
On Monday, 28 December 2015 at 13:10:59 UTC, Charles wrote: On Monday, 28 December 2015 at 12:24:17 UTC, Ola Fosheim Grøstad wrote: https://www.techempower.com/benchmarks/ The entries for vibe.d are either doing very poorly or fail to complete. Maybe someone should look into this? Sönke is already on it. http://forum.rejectedsoftware.com/groups/rejectedsoftware.vibed/post/29110 Correct me if I am wrong here, but as far as I can tell there are no independent benchmarks showing performance (superior or good enough) of D versus Go, or against just about any other language either ? https://www.techempower.com/benchmarks/#section=data-r11&hw=peak&test=json&l=cnc&f=zik0vz-zik0zj-zik0zj-zik0zj-hra0hr
Re: Implement the "unum" representation in D ?
On Thursday, 17 September 2015 at 23:53:30 UTC, Anthony Di Franco wrote: I read the whole book and did not regret it at all, but I was already looking for good interval arithmetic implementations. I found that the techniques are not too different (though improved in important ways) from what is mainstream in verified computing. It would seem to depend on how much people want the standard library to support verified numerical computing. Anthony Good to know that you enjoyed reading the book. Can you describe what YOU mean by 'verified numerical computing', as I could not find a good description of it, and why it is important to have it. Nick
OT: Hack (Type design features to improve legibility in the harsh conditions of the screen)
An interesting set of features: http://sourcefoundry.org/hack/ Destroy ? Nick
Re: Rant after trying Rust a bit
On Thursday, 23 July 2015 at 06:46:14 UTC, ponce wrote: I've not used Rust, but don't plan to. On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote: While code.dlang.org has 530 packages, crates.io has 2610 packages, I think this tells something foremost about the size of the community. More people leads to more code. But does it reflect the size of the community? Look at these numbers below: D is ranked No. 26; Rust is not in the top 50 !! http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html Nick
Re: Implement the "unum" representation in D ?
On Monday, 13 July 2015 at 21:25:12 UTC, Per Nordlöw wrote: Can we do anything useful with unums in numeric algorithms if we only have forward or bidirectional access? Similar to algorithms such as Levenshtein that are compatible with UTF-8 and UTF-16, Andrei? :) Question for Andrei, above, if he would like to reply.
Re: Implement the "unum" representation in D ?
On Sunday, 12 July 2015 at 03:52:32 UTC, jmh530 wrote: On Saturday, 11 July 2015 at 03:02:24 UTC, Nick B wrote: FYI John Gustafson's book is now out: I wouldn't have known about this way to deal with it if you hadn't bumped this thread. So thanks, it's interesting (not sure if this book will adequately address Walter's original concern that it won't really reduce power consumption). I also found the discussion of rational numbers earlier in the thread interesting. I'm glad that you guys have found this interesting :) Nick
Re: Implement the "unum" representation in D ?
On Thursday, 20 February 2014 at 10:10:13 UTC, Nick B wrote: Hi everyone. John Gustafson will be presenting a keynote on Thursday 27th February at 11:00 am. The abstract is here: http://openparallel.com/multicore-world-2014/speakers/john-gustafson/ There is also an excellent background paper (PDF - 64 pages) which can be found here: FYI John Gustafson's book is now out. It can be found here: http://www.amazon.com/End-Error-Computing-Chapman-Computational/dp/1482239868/ref=sr_1_1?s=books&ie=UTF8&qid=1436582956&sr=1-1&keywords=John+Gustafson&pebp=1436583212284&perid=093TDC82KFP9Y4S5PXPY Here is one of the reviewers' comments: "This book is revolutionary", by David Jefferson, April 18, 2015: This book is revolutionary. That is the only way to describe it. I have been a professional computer science researcher for almost 40 years, and only once or twice before have I seen a book that is destined to make such a profound change in the way we think about computation. It is hard to imagine that after 70 years or so of computer arithmetic there is anything new to say about it, but this book reinvents the subject from the ground up, from the very notion of finite precision numbers to their bit-level representation, through the basic arithmetic operations, the calculation of elementary functions, all the way to the fundamental methods of numerical analysis, including completely new approaches to expression calculation, root finding, and the solution of differential equations. On every page from the beginning to the end of the book there are surprises that just astonished me, making me re-think material that I thought had been settled for decades. The methods described in this book are profoundly different from all previous treatments of numerical methods. Unum arithmetic is an extension of floating point arithmetic, but mathematically much cleaner. It never does rounding, so there is no rounding error. 
It handles what in floating point arithmetic is called "overflow" and "underflow" in a far more natural and correct way that makes them normal rather than exceptional. It also handles exceptional values (NaN, +infinity, -infinity) cleanly and consistently. Those contributions alone would have been a profound contribution. But the book does much more. One of the reasons I think the book is revolutionary is that unum-based numerical methods can effortlessly provide provable bounds on the error in numerical computation, something that is very rare for methods based on floating point calculations. And the bounds are generally as tight as possible (or as tight as you want them), rather than the useless or trivial bounds as often happens with floating point methods or even interval arithmetic methods. Another reason I consider the book revolutionary is that many of the unum-based methods are cleanly parallelizable, even for problems that are normally considered to be unavoidably sequential. This was completely unexpected. A third reason is that in most cases unum arithmetic uses fewer bits, and thus less power, storage, and bandwidth (the most precious resources in today's computers) than the comparable floating point calculation. It is hard to believe that we get this advantage in addition to all of the others, but it is amply demonstrated in the book. Doing efficient unum arithmetic takes more logic (e.g. transistors) than comparable floating point arithmetic does, but as the author points out, transistors are so cheap today that that hardly matters, especially when compared to the other benefits. Some of the broader themes of the book are counterintuitive to people like me with advanced conventional training, so that I have to re-think everything I "knew" before. For example, the discussion of just what it means to "solve" an equation numerically is extraordinarily thought provoking. 
Another example is the author's extended discussion of how calculus is not the best inspiration for computational numerical methods, even for problems that would seem to absolutely require calculus-based thinking, such as the solution of ordinary differential equations. Not only is the content of the book brilliant, but so is the presentation. The text is so well written, a mix of clarity, precision, and reader friendliness, that it is a pure pleasure to read, rather than the dense struggle that mathematical textbooks usually require of the reader. But in addition, almost every page has full color graphics and diagrams that are completely compelling in their ability to clearly communicate the ideas. I cannot think of any technical book I have ever seen that is so beautifully illustrated all the way through. I should add that I read the Kindle edition on an iPad, and for once Amazon did not screw up the presentation of a technical book, at least for this pl
Re: Stable partition3 implementation
On Friday, 10 July 2015 at 00:39:16 UTC, Xinok wrote: On Thursday, 9 July 2015 at 21:57:39 UTC, Xinok wrote: I found this paper which describes an in-place algorithm with O(n) time complexity but it's over my head at the moment. [snip] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.25.5554&rep=rep1&type=pdf From the pdf above, in case readers, like me, don't know the context of a stable partition implementation: Stable minimum space partitioning in linear time. Jyrki Katajainen and Tomi Pasanen. Abstract. In the stable 0-1 sorting problem the task is to sort an array of n elements with two distinct values such that equal elements retain their relative input order. Recently, Munro, Raman and Salowe [BIT 1990] gave an algorithm which solves this problem in O(n log* n) time and constant extra space. We show that by a modification of their method the stable 0-1 sorting is possible in O(n) time and O(1) extra space. Stable three-way partitioning can be reduced to stable 0-1 sorting. This immediately yields a stable minimum space quicksort, which sorts multisets in asymptotically optimal time with high probability. CR categories: E.5, F.2.2. The stable 0-1 sorting problem is defined as follows: Given an array of n elements and a function f mapping each element to the set {0,1}, the task is to rearrange the elements such that all elements whose f-value is zero come before elements whose f-value is one. Moreover, the relative order of elements with equal f-values should be maintained. For the sake of simplicity, we hereafter refer to bits instead of the f-values of elements. Stable partitioning is a special case of stable 0-1 sorting, where the f-values are obtained by comparing every element xi to some pivot element xj (which will not take part in partitioning):
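For readers who want the problem itself in code: stable 0-1 partitioning is trivial if you allow O(n) extra space, which is exactly the luxury the paper removes. Here is a naive Python sketch of what "stable" demands (this is not the paper's O(1)-extra-space algorithm, just a statement of the goal):

```python
# Naive stable 0-1 partition: O(n) time, but also O(n) extra space.
# The paper above achieves the same result with only O(1) extra space.
def stable_partition(items, f):
    """Move items with f(x) == 0 before items with f(x) == 1,
    preserving relative order within each group."""
    zeros = [x for x in items if f(x) == 0]
    ones = [x for x in items if f(x) == 1]
    return zeros + ones

# Stable partitioning around a pivot: f maps elements below the pivot to 0.
pivot = 5
data = [7, 2, 9, 1, 5, 3, 8]
result = stable_partition(data, lambda x: 0 if x < pivot else 1)
# Relative order is preserved within each group: [2, 1, 3] then [7, 9, 5, 8]
```

An unstable partition (e.g. the classic Hoare scheme) would be free to reorder within each group; stability is what makes the in-place O(n)/O(1) version hard.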
Re: PHP verses C#.NET verses D.
On Thursday, 25 June 2015 at 13:48:38 UTC, Etienne wrote: On Wednesday, 24 June 2015 at 05:34:08 UTC, Nick B wrote: On Tuesday, 23 June 2015 at 11:22:40 UTC, Etienne Cimon wrote: Thanks for the responses and your detailed replies. I'm going to talk to the CEO of the company described at the beginning of this long thread, and find out whether they are willing to consider a proof-of-concept web site based on your work. This could take a couple of weeks. What is the best way to get in touch with you if I have more questions ? Nick Skype: Etcimon or gmail the same username Thanks again. Nick
Re: PHP verses C#.NET verses D.
On Tuesday, 23 June 2015 at 11:22:40 UTC, Etienne Cimon wrote: Nick I don't have current performance results because I've been focused on adding features, but these results were taken on a previous version: https://atilanevesoncode.wordpress.com/2013/12/05/go-vs-d-vs-erlang-vs-c-in-real-life-mqtt-broker-implementation-shootout/ Etienne Thanks for the responses and your detailed replies. I'm going to talk to the CEO of the company described at the beginning of this long thread, and find out whether they are willing to consider a proof-of-concept web site based on your work. This could take a couple of weeks. What is the best way to get in touch with you if I have more questions ? Nick
Re: PHP verses C#.NET verses D.
On Thursday, 18 June 2015 at 03:44:08 UTC, Etienne Cimon wrote: So now I can build a full web application/server executable in less than 2mb packed, and it runs faster than anything out there. Etienne Do you have any performance numbers as to how fast your web application/server is, or is this based on your personal experience ? Nick
Re: PHP verses C#.NET verses D.
On Thursday, 18 June 2015 at 03:44:08 UTC, Etienne Cimon wrote: On Wednesday, 17 June 2015 at 18:40:01 UTC, Laeeth Isharc wrote: Any idea how far away it might be from being something that someone could use in an enterprise environment simply, in the same kind of way that vibed is easy? I appreciate that making it broadly usable may not be what interests you, and may be a project for someone else. I would say 3 months. So it'll probably be a year considering how off my last estimates were. Of course, I never calculated any help (and haven't gotten any really) Etienne Would you like to detail what still needs to be completed/on the to-do list ? What would be the best way to learn it ? Does it need documentation as well ? Nick
Re: PHP verses C#.NET verses D.
On Friday, 19 June 2015 at 11:28:30 UTC, Etienne Cimon wrote: On Thursday, 18 June 2015 at 05:23:25 UTC, Nick B wrote: On Thursday, 18 June 2015 at 03:44:08 UTC, Etienne Cimon wrote: Will you explain how it is different to Vibe.d ? It has HTTP/2, a new encryption library, it uses a native TCP event library, lots of refactoring. In short, the entire thing is in D rather than linking with OpenSSL and libevent. Etienne Can you explain the benefits of writing these libraries in D, as opposed to just linking to them ? Is it for faster execution, or better debugging, or some other reason ?
Re: PHP verses C#.NET verses D.
On Thursday, 18 June 2015 at 05:23:25 UTC, Nick B wrote: On Thursday, 18 June 2015 at 03:44:08 UTC, Etienne Cimon wrote: On Wednesday, 17 June 2015 at 18:40:01 UTC, Laeeth Isharc wrote: Any idea how far away it might be from being something that someone could use in an enterprise environment simply, in the same kind of way that vibed is easy? I appreciate that making it broadly usable may not be what interests you, and may be a project for someone else. I would say 3 months. So it'll probably be a year considering how off my last estimates were. Etienne - Interesting back story. So now I can build a full web application/server executable in less than 2mb packed, and it runs faster than anything out there. It's standalone, works cross-platform, etc. Will you explain how it is different to Vibe.d ? Thanks everyone for your suggestions. Thanks, Etienne, for the heads-up on your new web application/server executable. If anyone wants to add any final comments they are most welcome. Nick
Re: PHP verses C#.NET verses D.
On Thursday, 18 June 2015 at 03:44:08 UTC, Etienne Cimon wrote: On Wednesday, 17 June 2015 at 18:40:01 UTC, Laeeth Isharc wrote: Any idea how far away it might be from being something that someone could use in an enterprise environment simply, in the same kind of way that vibed is easy? I appreciate that making it broadly usable may not be what interests you, and may be a project for someone else. I would say 3 months. So it'll probably be a year considering how off my last estimates were. Etienne - Interesting back story. Will this be under a Boost licence ? Will you provide a link ? Even the vibe.d library was much more advanced than what I could find with an open source license that allowed static compilation at the time (1 1/2 yrs ago), so I went forward with that and worked my way through. [snip] So now I can build a full web application/server executable in less than 2mb packed, and it runs faster than anything out there. It's standalone, works cross-platform, etc. Will you explain how it is different to Vibe.d ?
Re: PHP verses C#.NET verses D.
On Wednesday, 17 June 2015 at 18:40:01 UTC, Laeeth Isharc wrote: On Wednesday, 17 June 2015 at 16:28:42 UTC, Etienne wrote: I've been working on developing the entire Web Stack in D, down from the kernel to the multiplexed HTTP/2 protocol and the high-level framework that queries the database and serves the response in json. Your work looks very interesting, although I haven't used it yet. Any idea how far away it might be from being something that someone could use in an enterprise environment simply, in the same kind of way that vibed is easy? I appreciate that making it broadly usable may not be what interests you, and may be a project for someone else. If I did so at a high personal investment cost, it was so those insane web development languages wouldn't bother me anymore. In D, if you have a bug, you are 100% certain that you can resolve it yourself. You have all the C/C++ tools available to go all the way down to the memory and debug anything you want. You have a statically typed language that allows huge projects to breathe very healthily and increment features at low cost. Nothing beats D in my opinion. It's 20 years ahead of everything out there. All you need to do is know how to use it and understand it enough to resolve any bugs you come up with. Any chance you could write a bit more on this? Your personal story and why you believe this. Yes, I too would be interested in more background as to why you believe it's 20 years ahead of everything else out there.
Re: PHP verses C#.NET verses D.
On Wednesday, 17 June 2015 at 04:51:44 UTC, Rikki Cattermole wrote: On 17/06/2015 6:41 a.m., Nick B wrote: On Tuesday, 16 June 2015 at 06:29:46 UTC, Rikki Cattermole wrote: Oh please say Christchurch! Sorry for the confusion. It's Wellington.
Re: PHP verses C#.NET verses D.
On Tuesday, 16 June 2015 at 08:47:40 UTC, John Colvin wrote: On Monday, 15 June 2015 at 23:53:06 UTC, Nick B wrote: Hi. Any comments or suggestions on the above? Both C# and D sound like good fits there. It depends on whether it's the sort of team who like to innovate and explore new possibilities or whether they want a completely fleshed-out, stable ecosystem. Is anyone else able to comment on the comparisons/differences between C#.NET & D ? Any comments on cost ? Any comments on getting bugs fixed ?
Re: PHP verses C#.NET verses D.
On Tuesday, 16 June 2015 at 06:29:46 UTC, Rikki Cattermole wrote: On 16/06/2015 11:53 a.m., Nick B wrote: Hi. There is a startup in New Zealand that I have some dealings with at present. Any comments or suggestions on the above? Hello fellow Kiwi! Hello, Kiwi from the South Island. :)
PHP verses C#.NET verses D.
Hi. There is a startup in New Zealand that I have some dealings with at present. They have built most of their original code in PHP (as this was quick and easy), but they also use some C#.NET for interfacing to accounting applications on clients' machines. The core PHP application runs in the cloud at present and talks to accounting applications in the cloud. They use the PHP Symfony framework. High speed is not important, but accuracy, error handling, and scalability are, as they are processing accounting transactions. They have a new CEO on board, and he would like to review the company's technical direction. Their client base is small but growing quickly. I know that PHP is not a great language; my knowledge of D is reasonable, while I have poor knowledge of C#.NET. Looking to the future, as volumes grow, they could: 1. Stay with PHP & C#.NET, and bring on servers as volumes grow. 2. Migrate to C#.NET in time. 3. Migrate to D in time. Any comments or suggestions on the above?
Re: Interrogative: What's a good blog title?
On Monday, 27 April 2015 at 23:44:51 UTC, Nick B wrote: On Monday, 27 April 2015 at 23:33:15 UTC, Justin Whear wrote: On Mon, 27 Apr 2015 23:30:11 +, Anonymouse wrote: assumeBlog Or perhaps, getting D-syntax specific, "assumeUnique(wisdom)" How about '[A] Research Scientist Musings' ? Has there been a decision ?
Re: Interrogative: What's a good blog title?
On Monday, 27 April 2015 at 23:33:15 UTC, Justin Whear wrote: On Mon, 27 Apr 2015 23:30:11 +, Anonymouse wrote: assumeBlog Or perhaps, getting D-syntax specific, "assumeUnique(wisdom)" How about '[A] Research Scientist Musings' ?
Re: Today's programming challenge - How's your Range-Fu ?
On Monday, 20 April 2015 at 03:39:54 UTC, ketmar wrote: On Mon, 20 Apr 2015 01:27:36 +, Nick B wrote: Perhaps Unicode needs to be rebuilt from the ground up ? alas, it's too late. now we'll live with that "unicode" crap for many years. Perhaps. Or perhaps not. This community got together under Walter's and Andrei's leadership to build a new programming language on the pillars of the old. Perhaps a new Unicode standard could start that way as well ?
Re: Today's programming challenge - How's your Range-Fu ?
On Sunday, 19 April 2015 at 19:58:28 UTC, ketmar wrote: On Sun, 19 Apr 2015 07:54:36 +, John Colvin wrote: it's not crazy, it's just broken in all possible ways: http://file.bestmx.net/ee/articles/uni_vs_code.pdf Ketmar, great link, and a really good argument about the problems with Unicode. Quote from 'Instead of Conclusion': Yes. This is the root of Unicode misdesign. They mixed up two mutually exclusive approaches. They blended badly two different abstraction levels: the textual level which corresponds to a language idea and the graphical level which does not care of a language, yet cares of writing direction, subscripts, superscripts and so on. In other words we need two different Unicodes built on these two opposite principles, instead of the one built on an insane mix of controversial axioms. End quote. Perhaps Unicode needs to be rebuilt from the ground up ?
Re: Thoughts on replacement languages (Reddit + D)
On Sunday, 11 January 2015 at 22:55:52 UTC, Andrei Alexandrescu wrote: On 1/11/15 2:54 PM, Nick B wrote: On Sunday, 11 January 2015 at 17:44:59 UTC, Andrei Alexandrescu wrote: Ionno how to measure that with the data we have. -- Andrei Perhaps it's better to have a number (average or mean) than no number. Just ask 50 or 100 users (or more) for their number of downloads over the last 12 or 18 months. This in turn will give you a rough estimate of the size of the community. If the average number of downloads per user is small, say 4, then this will indicate that the community is large, perhaps near 100,000 users. Nick
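The back-of-the-envelope arithmetic behind this estimate can be sketched as follows. All the inputs are assumptions for illustration: the monthly figure is the rate read off the download chart discussed in this thread, and the per-user figure is the hypothetical survey result mentioned above.

```python
# Back-of-the-envelope community-size estimate (all inputs are assumed
# values for illustration, not measured data).
monthly_downloads = 36_000     # sustained rate read off the download chart
months = 18                    # survey window proposed above
downloads_per_user = 4         # hypothetical surveyed average per user

total_downloads = monthly_downloads * months
estimated_users = total_downloads // downloads_per_user
print(estimated_users)  # 162000, i.e. on the order of 100,000+ users
```

The point of the exercise is only the order of magnitude: halving or doubling the per-user figure still leaves an estimate in the high five to low six figures.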
Re: Thoughts on replacement languages (Reddit + D)
On Sunday, 11 January 2015 at 17:44:59 UTC, Andrei Alexandrescu wrote: On 1/11/15 9:43 AM, Andrei Alexandrescu wrote: I just regenerated the 28-day moving average graph: erdani.com/d/downloads.daily.png http://erdani.com/d/downloads.daily.png that is. -- Andrei Looking at the chart, it is showing a sustained 36,000 downloads (1,200 x 30) per month currently. Perhaps an interesting question is how often an average user does a download? Nick
Re: Lost a new commercial user this week :(
On Tuesday, 16 December 2014 at 05:48:27 UTC: https://drive.google.com/file/d/0B-EiBquZktsLc0czUzZVeGlLM00/view?usp=sharing No guarantees of how long it'll stay up there. And to reiterate, it's only just a start. Agreed, it's a great start. Nick
Re: Lost a new commercial user this week :(
On Sunday, 14 December 2014 at 15:03:27 UTC, Sönke Ludwig wrote: Lastly, when judging all these things, please always try to remember that almost all the work that goes into D (and vibe.d) is non-profit and everyone usually only contributes what (s)he is missing. If I would get paid through a support contract for my work on vibe.d, I could adjust my priorities to suit the requirements of others more, but like this I still have to somehow make sure to be able to pay my bills and can't just work full time to help other (commercial) projects (although I always try to help as far as possible). Sönke, can you advise how much a support contract for an individual or company seriously interested in using vibe.d might cost?
Re: More radical ideas about gc and reference counting
On Saturday, 10 May 2014 at 08:18:30 UTC, w0rp wrote: I've seen this discussion ("it's almost performance-free", "it's a performance killer") so many times, I can't even say who has the burden of proof anymore. I wish that someone would take the time and implement ARC in D. That's the only way to prove anything. If you implement it and you can provide clear evidence for its advantages, then that just ends all discussions. How hard would this be, exactly? Perhaps then, and only then, could you make an apples-to-apples comparison? Nick
Re: More radical ideas about gc and reference counting
I forgot to add these comments by Walter at the top of my previous post: [Walter Bright wrote] the thing is, GC is a terrible and unreliable method of managing non-memory resource lifetimes. Destructors for GC objects are not guaranteed to ever run. So now it looks like dynamic arrays also can't contain structs with destructors :o). -- Andrei Nick
Re: More radical ideas about gc and reference counting
[monarch_dodra wrote] Well, that's always been the case, and even worse, since in a dynamic array, destructors are guaranteed to *never* be run. https://issues.dlang.org/show_bug.cgi?id=2757 Resource Management. An issue that has been discussed since 2009, and still no *GOOD* solution. Look at these arguments made back then. Email of 23 Mar 2009 from the D.d list, subject: "Re: new D2.0 + C++ language". Sat, 21 Mar 2009 20:16:07 -0600, Rainer Deyke wrote:
> Sergey Gromov wrote:
>> I think this is an overstatement. It's only abstract write buffers where GC really doesn't work, like std.stream.BufferedFile. In any other resource management case I can think of GC works fine.
>
> OpenGL objects (textures/shader programs/display lists).
> SDL surfaces.
> Hardware sound buffers.
> Mutex locks.
> File handles.
> Any object with a non-trivial destructor.
> Any object that contains or manages one of the above.
>
> Many of the above need to be released in a timely manner. For example, it is a serious error to free a SDL surface after closing the SDL video subsystem, and closing the SDL video subsystem is the only way to close the application window under SDL. Non-deterministic garbage collection cannot work.
>
> Others don't strictly need to be released immediately after use, but should still be released as soon as reasonably possible to prevent resource hogging. The GC triggers when the program is low on system memory, not when the program is low on texture memory.
>
> By my estimate, in my current project (rewritten in C++ after abandoning D due to its poor resource management), about half of the classes manage resources (directly or indirectly) that need to be released in a timely manner. The other 50% does not need RAII, but also wouldn't benefit from GC in any area other than performance.
The language sets up the defaults for when these are to run. The programmer has to override the defaults.
[Sure this is crude, but it is deterministic] [comment by dsimcha in 2009] Come to think of it, as simple and kludgey sounding as it is, this is an incredibly good idea if you have an app that does a lot of sitting around waiting for input, etc. and therefore not allocating memory, and you want an easy way to make sure it releases resources in a reasonable amount of time. This belongs in an FAQ somewhere.
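The contrast the quoted thread keeps returning to, deterministic release at scope exit versus release whenever the collector eventually runs, can be sketched in any GC'd language. A minimal illustration in Python (not D; `Surface` is a hypothetical stand-in for an SDL surface, GL texture, or file handle):

```python
# Sketch of deterministic resource release in a GC'd language: a context
# manager releases the resource exactly at scope exit, whereas a GC
# finalizer would run whenever (if ever) the collector decides.
log = []

class Surface:
    """Hypothetical stand-in for an SDL surface / GL texture / file handle."""
    def close(self):
        log.append("released")
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        self.close()        # deterministic: runs exactly here, not "eventually"

with Surface() as s:
    assert log == []        # resource still live inside the scope
print(log)                  # released the moment the scope ended
```

This is the property the quoted posters say GC destructors cannot give them: the release point is tied to program structure, not to memory pressure.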
Re: re-open of Issue 2757
On Thursday, 17 April 2014 at 10:10:34 UTC, Brad Roberts via Digitalmars-d wrote: According to the modification history for that bug, you reopened it back on May 4, 2009. Walter merely changed the version id recently from 1.041 to D1. https://issues.dlang.org/show_activity.cgi?id=2757 Why not close it instead, if this has not been implemented and is not planned to be implemented?
re-open of Issue 2757
I have noticed that Walter has re-opened this enhancement (re Resource Management) quite recently (Feb 2014). I originally filed it in 2009. Is anyone able to say why? Nick
Re: Implement the "unum" representation in D ?
Hi I will ask my question again. Is there any interest in this format within the D community ? Nick
Re: Implement the "unum" representation in D ?
On Thursday, 20 February 2014 at 10:10:13 UTC, Nick B wrote: Hi everyone. I'm attending the SKA conference in Auckland next week and I would like to discuss an opportunity for the D community. Sorry if I was not clear what the SKA is. In a nutshell, it is a truly massive telescope project which will require massive computing resources. https://www.skatelescope.org/ Nick
Implement the "unum" representation in D ?
Hi everyone. I'm attending the SKA conference in Auckland next week and I would like to discuss an opportunity for the D community. I am based in Wellington, New Zealand. In Auckland, NZ, from Tuesday to Friday next week there will be two seminars held. The first 2 days (Tuesday and Wednesday) are for the multicore conference. Details are at http://www.multicoreworld.com/ Here is the schedule for the 2 days (Thursday & Friday) of the SKA conference: http://openparallel.com/multicore-world-2014/computing-for-ska/schedule-computing-for-ska-2014/ John Gustafson will be presenting a keynote on Thursday 27th February at 11:00 am. The abstract is here: http://openparallel.com/multicore-world-2014/speakers/john-gustafson/ There is also an excellent background paper (PDF - 64 pages) which can be found here: http://sites.ieee.org/scv-cs/files/2013/03/Right-SizingPrecision1.pdf The math details are beyond me, but I understand his basic idea. I would like to bring your attention to page 34 and his comments re "standard committees", and page 62 and his comments "Coded in Mathematica for now. Need a fast native version..". I am sure you can see where I am going with this. 1. Would it be possible to implement the "unum" representation in D and therefore make it a contender for the SKA? 2. Is there any interest in this format within the D community? Destroy. Nick
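For readers who have not looked at the paper, the central unum idea can be caricatured in a few lines. This is a toy sketch, not Gustafson's actual bit layout: a trailing "ubit" records whether a value is exact or lies somewhere in the open interval between two adjacent representable numbers, so a unum never silently rounds.

```python
# Toy sketch of the key unum idea (illustrative only, not the real format):
# a trailing "ubit" marks a value as exact, or as lying in the open
# interval between the stored value and the next representable one.
from fractions import Fraction

class Unum:
    def __init__(self, value, ubit):
        self.value = Fraction(value)   # the representable endpoint
        self.ubit = ubit               # 0 = exact, 1 = open interval above

    def interval(self, ulp=Fraction(1, 16)):
        # With the ubit clear the unum means exactly `value`; with it set,
        # the open interval (value, value + one unit in the last place).
        if self.ubit == 0:
            return (self.value, self.value)
        return (self.value, self.value + ulp)

exact = Unum(Fraction(1, 2), 0)        # represents exactly 1/2
inexact = Unum(Fraction(1, 2), 1)      # represents "somewhere in (1/2, 9/16)"
print(exact.interval())
print(inexact.interval())
```

The real format additionally makes the exponent and fraction fields variable-width; the sketch only shows why the ubit turns rounding error into an explicit interval.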
Re: Graphics Library for D
On Monday, 6 January 2014 at 04:11:07 UTC, Adam Wilson wrote: Hello Fellow D Heads, Recently, I've been working to evaluate the feasibility and reasonability of building out a binding to Cinder in D. And while it is certainly feasible to wrap Cinder, a binding would be necessarily complex and feel very unnatural in D. So after talking it over with Walter and Andrei, we feel that, while we like how Cinder is designed and would very much like to have something like it available in D, wrapping Cinder is not the best approach in the long-term. With that in mind, we would like to start a discussion with interested parties about building a graphics library in the same concept as Cinder, but using an idiomatic D implementation from the ground up. I assume that the licence will be Boost? A picture is worth a thousand words, so is this the type of graphics library output you are referring to: http://marcinignac.com/projects/cindermedusae/ Nick
Re: Rectangular multidimensional arrays for D
On Tuesday, 8 October 2013 at 17:26:46 UTC, Stefan Frijters wrote: Andrei wrote: * We need to have a battery of multidimensional array shapes along with simple iteration and access primitives, at least for interfacing with scientific libraries that define and expect such formats. I'm thinking rectangular (generally hyperrectangular) matrices, triangular matrices, sparse matrices, and band matrices. I too am interested in this area. Dennis, do you plan to focus on multidimensional arrays only, or will you incorporate the above matrices as well? What features are you proposing? Nick
OT; Will MS kill .NET ?
Hi. I came across this off-topic article. It is 2 years old, but it seems well written. http://i-programmer.info/professional-programmer/i-programmer/2591-dumping-net-microsofts-madness.html Here is a preview book from MS to back up the point above: http://blogs.msdn.com/b/microsoft_press/archive/2012/08/20/free-ebook-programming-windows-8-apps-with-html-css-and-javascript-second-preview.aspx Has anyone heard anything more up to date? Will MS just let .NET slowly die? Nick
Re: GPGPUs
On Tuesday, 13 August 2013 at 18:35:28 UTC, luminousone wrote: On Tuesday, 13 August 2013 at 16:27:46 UTC, Russel Winder wrote: The era of GPGPUs for Bitcoin mining is now over; they moved to ASICs. http://developer.amd.com/wordpress/media/2012/10/hsa10.pdf This will be available on AMD APUs in December, and will trickle out to ARM and other platforms over time. What a very interesting concept, redesigned from the ground up. How about this for the future: D2 > LLVM Compiler > HSAIL Finaliser > Architected Queueing Language Here is another useful link: http://developer.amd.com/resources/heterogeneous-computing/what-is-heterogeneous-system-architecture-hsa/ Nick
Re: Cryptography
On Monday, 5 August 2013 at 18:11:55 UTC, Larry wrote: Well, whether it is cryptographically strong or not, I cannot judge. But thanks for reminding people to read before asking. If you want something secure, you could check out Blowfish by Bruce Schneier. http://www.schneier.com/paper-blowfish-fse.html Here is a link to some of his other algorithms: http://www.schneier.com/cryptography.html
Re: memory allocation in dmd
On Thursday, 27 June 2013 at 22:12:49 UTC, Nick B wrote: On Sunday, 23 June 2013 at 15:22:22 UTC, Jacob Carlborg wrote: On 2013-06-23 15:12, qznc wrote: That would be SystemTap on Linux. However, I wonder if it is the right tool for the job. [snip] Here is a comparison of SystemTap and DTrace http://sourceware.org/systemtap/wiki/SystemtapDtraceComparison and here is the SystemTap FAQ http://sourceware.org/systemtap/wiki/SystemTapFAQ It looks to be a very useful tool. Nick
Re: Migrating dmd to D?
On Sunday, 31 March 2013 at 23:48:31 UTC, Zach the Mystic wrote: On Sunday, 31 March 2013 at 18:31:33 UTC, Suliman wrote: So, what is the final decision about porting D to D? It's not a "final decision", but Daniel Murphy/yebblies has already made so much progress with his automatic conversion program, https://github.com/yebblies/magicport2 that I feel like he carries the torch right now. Please refer to this discussion: http://forum.dlang.org/thread/kgn24n$5u8$1...@digitalmars.com#post-kgumek:242tp4:241:40digitalmars.com Basically: 1) Daniel seems to have this project under control, and he's way ahead of anyone else on it. 2) The current hurdle is the glue layer. 3) The project is mostly being kept private, presumably because he wants to come out with a finished product. 4) All I know is, my gut says YES! Question: does this imply that once Daniel has finished this task, the code will be frozen and a new major release, i.e. D 3.0, announced? Nick
Re: Implementing Sparse Vectors With Associative Arrays/Compiler Bug?
On Thursday, 7 March 2013 at 07:03:04 UTC, Ed wrote: I'm new to D and am trying to implement simple sparse vectors using associative arrays, but I'm getting fairly large floating point errors. Example code for sparse dot product:

import std.stdio;
import std.math;
import std.random;
static import std.datetime;

int main(string[] args)
{
    double[int] v1;
    double[int] v2;
    Random gen;
    gen.seed(cast(uint)std.datetime.Clock.currTime().stdTime());

    // Fill both vectors with identical random values and accumulate
    // the dot product in insertion order.
    double accum = 0;
    double val;
    foreach (i; 1 .. 1000)
    {
        val = uniform(-1000.0, 1000.0, gen);
        accum += val * val;
        v1[i] = val;
        v2[i] = val;
    }

    // Recompute the dot product, iterating in the AA's (unspecified) key order.
    double accum2 = 0;
    double v2Val;
    foreach (k; v1.byKey())
    {
        v2Val = v2.get(k, 0);
        if (v2Val != 0)
            accum2 += v1.get(k, 0) * v2Val;
    }

    writefln("accum - accum2 = %e", accum - accum2);
    return 0;
}

This outputs values such as: accum - accum2 = -4.172325e-07 accum - accum2 = 2.384186e-07 accum - accum2 = 4.172325e-07 Are errors of this magnitude to be expected using doubles, or is this a compiler bug? Hi Ed, I am also interested in simple sparse vectors. Any chance this code could be published or put in a library? Nick
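For what it's worth, differences of this size are what plain double rounding predicts: the two loops add the same products in different orders (associative-array iteration order is unspecified), and floating-point addition is not associative. A small language-agnostic sketch in Python, which also uses IEEE doubles (the magnitudes mirror the D example but are otherwise illustrative):

```python
# Floating-point addition is not associative: summing the same IEEE
# doubles in a different order usually gives a slightly different result.
# With ~1000 terms of magnitude up to 1e6, absolute differences around
# 1e-7 are ordinary rounding, not a compiler bug.
import random

random.seed(1)
vals = [random.uniform(-1000.0, 1000.0) ** 2 for _ in range(999)]

forward = 0.0
for v in vals:              # one summation order
    forward += v
backward = 0.0
for v in reversed(vals):    # the same terms, reversed
    backward += v

diff = abs(forward - backward)
print(diff)   # a handful of ULPs of the roughly 3e8 total, i.e. around 1e-7
```

The total is on the order of 3e8, and an ULP at that magnitude is about 6e-8, so a discrepancy of a few times 1e-7 between two summation orders is entirely expected.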
Re: the Disruptor framework vs The Complexities of Concurrency
On Sunday, 16 December 2012 at 14:58:46 UTC, Andrei Alexandrescu wrote: [Request again] Would Andrei like to make any comment, especially on the Martin Fowler article? Others commenting here have said that they consider that the pattern has some value. What I think about a paper upon first sight is unlikely to be all that insightful, and I already have a long reading list. Why do you find it so important? Are you considering proposing a pattern implementation for Phobos and would like to gauge initial interest? Thanks, Andrei Are you considering proposing a pattern implementation for Phobos and would like to gauge initial interest? Yes, I am interested to see if there is community interest in this pattern, and also whether it could be included in Phobos some time in the future. Nick
Re: the Disruptor framework vs The Complexities of Concurrency
On Wednesday, 12 December 2012 at 08:09:40 UTC, Dmitry Olshansky wrote: 12/12/2012 6:00 AM, Nick B wrote: On Monday, 10 December 2012 at 23:04:56 UTC, Nick B wrote: On Monday, 10 December 2012 at 20:08:32 UTC, Andrei Alexandrescu wrote: On 12/9/12 10:58 PM, Nick B wrote: [about the Disruptor framework] Would Andrei like to comment on any of the comments so far? Sorry, I'd need to acquire expertise in Disruptor before discussing it. I found http://disruptor.googlecode.com/files/Disruptor-1.0.pdf quite difficult to get into because it spends the first page stating how good Disruptor is, then the next 3 pages discussing unrelated generalities, to then start discussing implementation details still without really defining the pattern. I had to stop there. Is there a more concise description of the pattern? Ok, does anyone consider that this pattern, though not well described, has any value? Nick It surely has. I'd consider http://martinfowler.com/articles/lmax.html to be quite a nice description, as it helped me to decipher most of the details. The paper itself lacks illustrative material, focuses too much on overcoming Java limitations and is generally too terse with important (to me) details of the framework. [Request again] Would Andrei like to make any comment, especially on the Martin Fowler article? Others commenting here have said that they consider that the pattern has some value. Nick
Re: the Disruptor framework vs The Complexities of Concurrency
On Monday, 10 December 2012 at 23:04:56 UTC, Nick B wrote: On Monday, 10 December 2012 at 20:08:32 UTC, Andrei Alexandrescu wrote: On 12/9/12 10:58 PM, Nick B wrote: [about the Disruptor framework] Would Andrei like to comment on any of the comments so far? Sorry, I'd need to acquire expertise in Disruptor before discussing it. I found http://disruptor.googlecode.com/files/Disruptor-1.0.pdf quite difficult to get into because it spends the first page stating how good Disruptor is, then the next 3 pages discussing unrelated generalities, to then start discussing implementation details still without really defining the pattern. I had to stop there. Is there a more concise description of the pattern? Andrei I have looked, but I think the answer to this question is no. Nick Correction: the answer to this question is actually YES. Read the Martin Fowler article; see the link below and Dmitry's comments: [Dmitry's comments] I'd consider http://martinfowler.com/articles/lmax.html to be quite a nice description, as it helped me to decipher most of the details. The paper itself lacks illustrative material, focuses too much on overcoming Java limitations and is generally too terse with important (to me) details of the framework. Totally agree. Nick
Re: the Disruptor framework vs The Complexities of Concurrency
On Thursday, 13 December 2012 at 16:07:13 UTC, Dmitry Olshansky wrote: 12/13/2012 4:59 AM, David Piepgrass wrote: Maybe, but I'm still not clear what the differences are between a normal ring buffer (not a new concept) and this "disruptor" pattern. Key differences with a typical lock-free queue: But for a start I'd define it as a framework for concurrent processing of a stream of tasks/requests/items on a well structured multi-staged pipeline. An excellent one-sentence description! [snip] There is also highly flexible (policy-based design) selection of how consumers wait on data: either busy _spin_ on it, getting the highest responsiveness at the cost of wasted CPU cycles; lazy spin (that yields), with no outright burning of CPU resources but higher latency; or even locking with wait-notify, which saves greatly on CPU but kills responsiveness and throughput (though it gives freedom to spend CPU elsewhere). Thanks for pointing this out. [snip] Another important IMHO observation is that the order of processed items is preserved*, and this is an interesting property if you consider doing the same stages as lock-free queues with a pool of consumers at each stage. Things will get O-o-O very quickly. What does O-o-O mean? Nick
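The three wait strategies Dmitry lists can be sketched in a few lines. This is an illustrative sketch in Python, not the Disruptor API; the function names are made up, and a real implementation would wait on a ring-buffer sequence counter rather than a flag:

```python
# Hedged sketch of the consumer wait strategies described above.
import threading
import time

def busy_spin(ready):            # lowest latency, burns a whole core
    while not ready():
        pass

def yielding_spin(ready):        # backs off to the scheduler between polls
    while not ready():
        time.sleep(0)            # yield the timeslice

def blocking_wait(cond, ready):  # cheapest on CPU, worst latency
    with cond:
        while not ready():
            cond.wait(timeout=0.1)

# Demo: a producer publishes one item; each strategy returns once it sees it.
flag = {"set": False}
cond = threading.Condition()

def producer():
    time.sleep(0.01)             # simulate work before publishing
    with cond:
        flag["set"] = True
        cond.notify_all()

t = threading.Thread(target=producer)
t.start()
blocking_wait(cond, lambda: flag["set"])   # sleeps until notified
busy_spin(lambda: flag["set"])             # returns immediately now
yielding_spin(lambda: flag["set"])         # likewise
t.join()
print(flag["set"])  # True
```

The policy choice is exactly the latency/CPU trade-off described above: spin when you can afford a dedicated core, yield when you can't, block when throughput matters less than leaving CPU for other work.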
Re: the Disruptor framework vs The Complexities of Concurrency
On Monday, 10 December 2012 at 23:04:56 UTC, Nick B wrote: On Monday, 10 December 2012 at 20:08:32 UTC, Andrei Alexandrescu wrote: On 12/9/12 10:58 PM, Nick B wrote: [about the Disruptor framework] Would Andrei like to comment on any of the comments so far? Sorry, I'd need to acquire expertise in Disruptor before discussing it. I found http://disruptor.googlecode.com/files/Disruptor-1.0.pdf quite difficult to get into because it spends the first page stating how good Disruptor is, then the next 3 pages discussing unrelated generalities, to then start discussing implementation details still without really defining the pattern. I had to stop there. Is there a more concise description of the pattern? Ok, does anyone consider that this pattern, though not well described, has any value? Nick
Re: the Disruptor framework vs The Complexities of Concurrency
On Monday, 10 December 2012 at 20:08:32 UTC, Andrei Alexandrescu wrote: On 12/9/12 10:58 PM, Nick B wrote: [about the Disruptor framework] Would Andrei like to comment on any of the comments so far? Sorry, I'd need to acquire expertise in Disruptor before discussing it. I found http://disruptor.googlecode.com/files/Disruptor-1.0.pdf quite difficult to get into because it spends the first page stating how good Disruptor is, then the next 3 pages discussing unrelated generalities, to then start discussing implementation details still without really defining the pattern. I had to stop there. Is there a more concise description of the pattern? Andrei I have looked, but I think the answer to this question is no. Nick
Re: the Disruptor framework vs The Complexities of Concurrency
On Saturday, 8 December 2012 at 19:54:29 UTC, Dmitry Olshansky wrote: 12/8/2012 9:08 PM, Nick Sabalausky wrote: On Fri, 07 Dec 2012 19:55:50 +0400 Dmitry Olshansky wrote: 12/7/2012 1:43 PM, deadalnix wrote: On Friday, 7 December 2012 at 09:03:58 UTC, Dejan Lekic wrote: On Friday, 7 December 2012 at 09:00:48 UTC, Nick B wrote: [Andrei's comment] Cross-pollination is a good thing indeed. I came across this while searching the programme of the conference that Walter is attending in Australia. This gentleman, Martin Thompson http://www.yowconference.com.au/general/details.html?speakerId=2962 The main idea is in this paper (11 pages, pdf): http://disruptor.googlecode.com/files/Disruptor-1.0.pdf and here is a review of the architecture by Martin Fowler: http://martinfowler.com/articles/lmax.html Fascinating. So the last problem is I don't see how it cleanly scales with the number of messages: there is only one instance of a specific consumer type on each stage. How do these get scaled if one core working on each is not enough? As Fowler's article mentions at one point, you can have multiple consumers of the same type working concurrently on the same ring by simply having each of them skip N-1 items (for N consumers of the same type). I.e., if you have two consumers of the same type, one operates on the even-numbered items, the other on the odd. I thought about that even-odd style, but it muddies the waters a bit. Now producers, or whoever comes next on the "circle", have to track all of the split counters (since they could outpace each other at different times). The other way is to have them contend on a single counter with CAS, but again that's not as nice. The other point is that the system becomes that much more dependent on a single component's failure, and they get around this by running multiple copies of the whole system in sync. A wise move to ensure stability of a complex system (and keeping in mind the stock exchange reliability requirements).
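The even/odd split and its tracking cost can be made concrete in a few lines. An illustrative sketch (not the Disruptor API; names are made up): N same-type consumers share one ring by each claiming every N-th sequence number, and a downstream stage may only advance to the minimum of their progress counters.

```python
# Sketch of the even/odd consumer split described above: N consumers of
# the same type partition the ring's sequence numbers by residue mod N,
# so they never contend for the same slot. (Illustrative, not real API.)
def claimed(consumer_id, n_consumers, upto):
    """Sequence numbers handled by consumer `consumer_id` (0-based)."""
    return [s for s in range(upto) if s % n_consumers == consumer_id]

even_consumer = claimed(0, 2, 8)
odd_consumer = claimed(1, 2, 8)
print(even_consumer)   # [0, 2, 4, 6]
print(odd_consumer)    # [1, 3, 5, 7]

# The split-counter tracking cost mentioned above: since the consumers can
# outpace each other, whoever comes next on the ring may only advance to
# the slowest consumer's position.
progress = {"even": 6, "odd": 3}       # hypothetical per-consumer positions
safe_sequence = min(progress.values())
print(safe_sequence)   # 3: everything up to here has been fully consumed
```

The alternative the post mentions, one shared counter advanced with CAS, avoids the min() bookkeeping but reintroduces contention on that counter.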
Would Andrei like to comment on any of the comments so far?