On Wednesday, 26 June 2013 at 15:52:33 UTC, Joakim wrote:
I suggest you read my original post more carefully. I have not suggested closing up the entire D toolchain, as you seem to imply. I have suggested working on optimization patches in a closed-source manner and providing two versions of the D compiler: one that is faster, closed, and paid, with these optimization patches; another that is slower, open, and free, without them.

Over time, the optimization patches are merged back into the free branch, so that the funding from the closed compiler makes even the free compiler faster, but only after some delay, so that users who value performance will actually pay for the closed compiler. There can be a hard time limit, say nine months, so that you know any closed patches from nine months back will be opened and applied to the free compiler. I suspect the money will be good enough that any bugfixes or features added by the closed developers will be added to the free compiler right away, with no delay.

Perhaps you'd like to explain to the maintainers of GDC and LDC why, after all they've done for D, you think it would be acceptable to turn to them and say: "Hey guys, we're going to make improvements and keep them from you for 9 months so we can make money" ... ?

Or doesn't the cooperative relationship between the 3 main D compilers mean much to you?

Thanks for the work that you and Don have done with Sociomantic. Why do you think more companies don't do this? My point is that if there were money coming in from a paid compiler, Walter could fund even more such work.

Leaving aside the moral issues, you might consider that any work paid for by revenues would be offset by a drop in voluntary contributions, including those from corporate contributors. And sensible companies will avoid "open core" solutions.

A few articles worth reading on these factors:
http://webmink.com/essays/monetisation/
http://webmink.com/essays/open-core/
http://webmink.com/essays/donating-money/

I think this ignores the decades-long history we have with open source software by now. It is not merely "wanting to make the jump": most volunteers simply do not want to do painful tasks like writing documentation, or cannot put as much time into development when no money is coming in. Simply saying "We have to try harder to be professional" seems naive to me.

Odd that you talk about ignoring things, because the general trend we've seen in the decades-long history of free software is that the software business has been getting more and more open with every year. These days there's a strong expectation of free licensing.

If I understand your story right, the volunteers need to put a lot of effort into "bootstrapping" the project to be more professional, companies will see this and jump in, and from then on they fund development? It's possible, but is there any example you have in mind? The languages that go this completely FOSS route tend not to have as much adoption as those with closed implementations, like C++.

It's hardly fair to compare languages without also taking into account their relative age. C++ has its large market share substantially due to historical factors -- it was a major "first mover", and until the advent of D, it was arguably the _only_ language that had that combination of power/flexibility and performance.

So far as compiler implementations are concerned, I'd say that it was the fact that there were many different implementations that helped C++. On the other hand, proprietary implementations may in some ways have damaged adoption, as before standardization you'd have competing, incompatible proprietary versions which limited the portability of code.

And yet the Linux kernel ships with many binary blobs, almost all the time. I don't know how they legally do it, considering the GPL, yet it is much more common to run a kernel with binary blobs than a purely FOSS version. The vast majority of Linux installs are due to Android, and every single one has significant binary blobs and closed-source modifications to the Android source. That is allowed because most of Android is under the more liberal Apache license, with only the Linux kernel under the GPL.

The binary blobs are nevertheless part of the vanilla kernel, not something "value added" that gets charged for. They're irrelevant to the development model of the kernel -- they are an irritation that's tolerated for practical reasons, rather than a design feature.

Again, I don't know how they get away with all the binary drivers in the kernel; perhaps that is a grey area with the GPL. For example, even the most open source Android devices, the Nexus devices sold directly by Google and running stock Android, have many binary blobs:

https://developers.google.com/android/nexus/drivers

Other than Android, Linux is really only popular on servers, where you can "change it in a closed way" because you are not "distributing a binary." Google takes advantage of this to run Linux on a million servers powering its search engine, but does not release the proprietary patches for its Linux kernel.

So if one looks at Linux in any detail, hybrid models are more the norm than the exception, even with the GPL. :)

But no one is selling proprietary extensions to the kernel (not that they could anyway, but ...). They're building services that _use_ the kernel, and in building those services they sometimes write patches to serve particular needs.

In a similar way, other companies are building services using D and sometimes they write replacements for existing D functionality that better serve their needs (e.g. Leandro's GC work).

I believe all of these projects have commercial implementations, with the possible exception of Haskell. Still, all of them combined have much less market share than C++, possibly because they use the weaker consulting/support commercial model most of the time. One of the main reasons C++ is much more popular is that it has very high-performance closed implementations -- do you disagree? I'm suggesting D will need something similar to get as popular.

C++'s popularity is most likely down to historical contingency. It was a first mover in its particular design space, and use begets use.

What you should rather be thinking of is: can you think of _any_ major new programming language (as in, rising to prominence in the last 10 years and enjoying significant cross-platform success) that wasn't open source?

The only one I can think of is C#, which was able to succeed simply because if Microsoft makes a particular language a key tool for Windows development, that language will get used.

But the others?  The reference versions are all open.

Let me turn this argument around on you: if there is always competition from LDC and GDC, why are you so scared of one more option, a slightly closed, paid compiler? If it's not "a good business," it will fail and go away. I think it would be very successful.

No one is scared of the idea of a slightly or even wholly closed, paid compiler -- anyone is free to implement one and try to sell it.

People are objecting to the idea of the reference implementation of the D language being distributed in a two-tier version with advanced features only available to paying customers.

You need to understand the difference between proprietary implementations of a language existing, versus the mainstream development work on the language following any kind of proprietary model.

OK, so it looks like you are fine with commercial models that keep all the source open, but not with those that close _any_ of the source.

For the reference implementation of the D language?  Absolutely.

The problem is that your favored consulting or support models are much weaker business models than a product model, which is much of the reason why Microsoft still makes almost two orders of magnitude more revenue from its software products than Red Hat makes from its consulting/support model.

Microsoft is a virtual monopolist in at least two areas of commercial software -- desktop OS and office document suites. The business models that work for them are not necessarily going to bring success for other organizations. A large part of Red Hat's success comes from the fact that it offers customers a different deal from Microsoft.

I am suggesting a unique hybrid product model because I think it will bring in the most money for the least discomfort. That ratio is one that D developers often talk about optimizing in technical terms, I'm suggesting the same in business terms. :)

I suggest that you have not thought through the variety of business options available. It is a shame that your first thought for commercializing D was "Let's close bits of it up!" instead of "How can I make a successful commercial model for D that works _with_ its wonderful open community?"

First off, they both have commercial implementations. Second, they still have only a small fraction of C++'s share: part of this is probably because they don't have as many closed, performant implementations as C++ does.

The reference implementations are free software, and if they weren't, they'd never have got any decent traction.

I realize this is a religious issue for some people and they cannot be convinced. In a complex, emerging field like this, it is easy to claim that if OSS projects just try harder, they can succeed. But after two decades, it has never happened without stepping back and employing a hybrid model.

I think your understanding of free software history is somewhat flawed, and that you conflate rather too many quite different business models under the single title of "hybrid". (To take just one of your examples, Red Hat's consulting/support model is not the same as paying for closed additions to the software. The _software_ of Red Hat Enterprise Linux is still free!)

You also don't seem to readily appreciate the differences between what works for software services, versus what works for programming languages -- or the impact that free vs. non-free can have on adoption rates. (D might have gained more traction much earlier, if not for fears around the DMD backend.)

I have examined the evidence and presented arguments for those who are willing to listen; I'm simply interested in pragmatically using whatever model works best. I think recent history has shown that hybrid models work very well, possibly the best. :)

Please name a recent successful programming language (other than those effectively imposed by diktat by big players like Microsoft or Apple) that did not build its success around a reference implementation that is free software. Then we'll talk. :-)
