On Monday, 17 December 2012 at 21:03:04 UTC, Rob T wrote:
On Monday, 17 December 2012 at 18:14:54 UTC, foobar wrote:
At the moment we may use git commands, but really we are still developing on what is mostly a Subversion model. Walter used to accept patches, and those were simply replaced by pull requests. There hasn't been the change in mental model that is required to really benefit from a decentralized system such as git. This is what the process discussion is ultimately meant to fix.

I think you've made a very good point. Making the most of the way decentralized systems work, as opposed to centralized ones, will require an adjustment in the way people think. If we choose to follow the centralized model, nothing will be gained from using a decentralized service. Centralization does have its place, however: there's an obvious need to merge decentralized branches into a common centralized branch, so that there's a shared branch to use for testing, performing releases, etc.

I find it's great to debate ideas in here first rather than on the talk page, but any conclusions or interesting points of view should be posted to the wiki talk page so they are not lost. IMO this one should go in the wiki, if only to remind people that we have the flexibility of a decentralized model to take advantage of.

--rt

DVCS is not about centralized vs. non-centralized. This is a common misunderstanding, one which I too had when I started using git. The actual difference is a client-server topology (CVS, SVN, etc.) vs. a P2P, or perhaps "free-form", topology. By making all users equal, each with their own full copy of the repository, git and similar systems made the topology an aspect of the human workflow instead of part of the technical design and implementation.

This gives you the freedom to have whatever topology you want - a star, a circle, whatever. For instance, Linux is developed with a "web of trust": the topology represents trust relationships. All the people Linus pulls from directly are "core" developers he personally trusts; they in turn trust other developers, and so on and so forth. Linus's version is the semi-official release of the Linux kernel, but it is not the only release. Linux distributions can have their own repositories, and Google maintains its own repository for the Android fork. So in fact there are *multiple* repositories that serve as graph roots in this "web of trust" topology.
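To make this concrete, here's a minimal sketch (the remote names and URLs are made up for illustration) of how any clone can track several "roots" at once - the topology lives entirely in which remotes each developer chooses to pull from:

    # every clone is a full repository; remotes simply record whom you pull from
    git clone https://example.org/kernel/linux.git
    cd linux
    git remote add distro https://example.org/some-distro/linux.git
    git remote add android https://example.org/android/kernel.git
    git fetch --all        # no remote is technically more "central" than another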

What about D?
The current GitHub repository owned by Walter and the core devs (the GitHub organization) is the official repository. *But* we could also treat other compilers as roots. Moreover, there's no requirement for developers to go through the "main" GitHub repository to share and sync. E.g., Walter can pull directly from Don's repository *without* going through a formal branch on GitHub. This in fact should be *the default workflow* for internal collaboration, to reduce clutter and facilitate better organization.
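As a sketch of what that direct collaboration could look like (the remote name, URL, and branch below are hypothetical):

    # pull Don's work-in-progress branch straight from his repository,
    # without it ever appearing as a branch on the main GitHub repo
    git remote add don https://example.org/don/dmd.git    # hypothetical URL
    git fetch don
    git merge don/ctfe-fixes                               # hypothetical branch name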

This is why I'm arguing fiercely against having any sort of official alpha stage. There is no need to standardize this, and it only mixes "private", developer-only code and builds with builds aimed at end users (that would be us, people writing D code and compiling with DMD). If you look at SVN servers, you often find an endless list of developer folders, "test" branches, "experimental" branches, etc., etc.

As a user, it is highly annoying to figure out which branches are meant for user consumption (releases, betas, previews of a new feature) and which aren't, being merely dev X's private place to conduct experiments. This is purely a result of the client-server topology imposed by SVN's architecture.

The official process should only standardize and focus on the points where integration is required. Everything else is much better served by being left alone. On the other hand, I think we need to put more focus on the pre/post stages of the actual coding. This means planning and setting priorities (a roadmap, milestones, etc.) as well as reducing regressions. I saw a post suggesting an automated build-bot that would run the test suite and build nightlies. And what about DIPs - how do they integrate into the workflow? I think Andrei talked about DIPs before, but I don't think it was discussed as part of this thread.
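For what it's worth, a nightly build-bot along those lines could start out as little more than a cron-driven script; the sketch below assumes the usual posix.mak layout of the dmd repository, and the final publish step is left as a placeholder:

    # nightly.sh - a hedged sketch, not an actual proposal
    git clone https://github.com/D-Programming-Language/dmd.git
    cd dmd/src && make -f posix.mak     # build the compiler
    cd ../test && make                  # run the test suite
    # if everything passed, package the binaries and publish the nightly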
