Yes, Git allows for many different models for development. As Julia is a 
pretty small project (compared to the Linux kernel), we have a much simpler 
structure. Julia is also in an early phase, so we are exploring different 
options, and we need a branch to distribute and try out new ideas. We 
also occasionally make backwards-incompatible changes, so it is really great 
that we have now left the single rolling-release model we had before 
0.3.0 and instead keep a stable 0.3 branch without breaking changes.

Currently we are maintaining two main branches (master and release-0.3). I 
can definitely see the point that it would be great to have an additional 
develop branch, in order to get more widespread testing of changes before 
committing to master. Unfortunately, that would require tons of extra 
effort, cause confusion, and raise the barrier to contribution. Considering 
the current size of the community, I don't think it would be much of a 
blessing.

We test all significant changes in a branch-backed PR and run automated 
regression tests on multiple platforms. Some issues will naturally not be 
caught in such a process, and some will only occasionally trigger a test 
failure when a race condition occurs. Still other issues will only fail the 
build on a VM with a specific processor (or amount of memory), and those 
are hard to figure out.
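
(If you want to reproduce locally what the PR testing does, here is a 
rough sketch -- from memory, so check the Makefile for the exact targets:

    # from the root of a Julia source checkout
    make testall                     # build Julia and run the full test suite
    ./julia test/runtests.jl core    # or run just the tests from one file

)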

Ivar

On Thursday, 1 January 2015 at 17:18:09 UTC+1, Ismael VC wrote:
>
> Ok, so the branching models in the "git book" are just examples, not 
> instructions.
>
>
> Thanks Sean!
>
> On Thu, Jan 1, 2015 at 9:36 AM, Sean Marshallsay <srm....@gmail.com> wrote:
>
>> Ismael,
>>
>> I think you're over-complicating Julia's workflow slightly. In that 
>> first image you posted 
>> (http://git-scm.com/book/en/v2/Git-Branching-Branching-Workflows), 
>> just replace the word "master" with "stable/release" and the word "develop" 
>> with "master" and that's pretty much it.
>>
>> On Thursday, 1 January 2015 09:14:29 UTC, Ismael VC wrote:
>>>
>>> Tobias: I don't think that Julia is more frequently broken, but Dan 
>>> experienced this (his blog post started this discussion), I have also 
>>> experienced it several times (but I'm an inexperienced noob), and of 
>>> course I'm sure others have experienced it too.
>>>
>>> I just wanted to know the advantages of Julia's approach compared to 
>>> following things by the "book".
>>>
>>> I know the correct thing is to check whether there is an open issue and 
>>> otherwise open one (I've spent the last year studying git and a lot of 
>>> other stuff), and all I know comes from whatever I have available to 
>>> study, like the git book, and since I clearly don't understand yet, I 
>>> just want to understand the issue.
>>>
>>> Keno: I certainly want to provide feedback and learn, you'll be having 
>>> me around a lot starting from this year. :D
>>>
>>> Obviously I didn't follow the 0.3 dev cycle, but now I have configured 
>>> Gmail to receive absolutely every notification from the Julia project.
>>>
>>> As a matter of fact, I'll start building Julia from master again tonight 
>>> and report any issues I might encounter, something I stopped doing 
>>> because of my lack of knowledge and the availability of binaries.
>>>
>>> Thank you both for taking the time to answer my concerns.
>>>
>>> On Thu, Jan 1, 2015 at 2:45 AM, Tobias Knopp <tobias...@googlemail.com> 
>>> wrote:
>>>
>>>> Hi Ismael,
>>>>
>>>> why do you think that master is more frequently broken in Julia than in 
>>>> other projects?
>>>> This really does not happen often. People develop in branches, and only 
>>>> after serious review are those branches merged into master.
>>>>
>>>> Furthermore, this discussion is too isolated and does not take into 
>>>> account that Julia is a programming language, and that it is very 
>>>> important to test out language changes during a development period.
>>>>
>>>> The discussion is, by the way, very funny, because during the 0.3 dev 
>>>> period we effectively had a "rolling release", i.e. development 
>>>> snapshots were made regularly and kept stable.
>>>>
>>>> Cheers,
>>>>
>>>> Tobi
>>>>
>>>>
>>>>
>>>> On Thursday, 1 January 2015 at 09:12:59 UTC+1, Ismael VC wrote:
>>>>>
>>>>> Perhaps we could add a diagram of the Julia workflow, because I think 
>>>>> we are using neither of those models; we don't have lieutenants, nor a 
>>>>> dictator, do we?
>>>>> [inline diagram of the proposed workflow omitted]
>>>>>
>>>>> I'm sorry for the ugly diagram, I just want to really understand the 
>>>>> current workflow, so correct me if I'm wrong.
>>>>>
>>>>> I don't know about Linux, but I wonder how frequently they happen to 
>>>>> have a broken master. Is this also a common situation among distributed 
>>>>> open-source projects? (I'm going to study Rust's approach too.) 
>>>>>
>>>>> I just thought that master had to be as stable as possible 
>>>>> (reference), and I assume that by using the 
>>>>> dictator/lieutenant/public_dev approach one gets way more testing, but 
>>>>> also needs way more resources. 
>>>>>
>>>>> After all, Linus has to really trust his lieutenants, as the key to 
>>>>> this model is delegation and trust.
>>>>>
>>>>> Since Julia uses neither (a mix?), what's the advantage of the current 
>>>>> approach?
>>>>>
>>>>>
>>>>> On Thu, Jan 1, 2015 at 1:27 AM, Viral Shah <vi...@mayin.org> wrote:
>>>>>
>>>>>> While the basic assert-based tests are good enough for me, I do wish 
>>>>>> that the test framework could be more flexible. Some of this is 
>>>>>> historic -- we started out not wanting separate unit and comprehensive 
>>>>>> test suites. The goal with the unit tests was to have something that 
>>>>>> could be run rapidly during development and catch regressions in the 
>>>>>> basic system. This evolved into something more than what it was 
>>>>>> intended to do. We even added some very basic perf tests to this 
>>>>>> framework.
>>>>>>
>>>>>> I find myself wanting a few more things from it as I have worked on 
>>>>>> the ARM port on and off. Some thoughts follow.
>>>>>>
>>>>>> I'd love to be able to run the entire test suite, knowing how many 
>>>>>> tests there are in all, how many pass and how many fail. Over time, it 
>>>>>> is 
>>>>>> nice to know how the total number of tests has increased along with the 
>>>>>> code in base. Currently, on ARM, tons of stuff fails and I run all the 
>>>>>> tests by looping over all the test files, and they all give up after the 
>>>>>> first failure.
>>>>>>
>>>>>> If I had, say, the serial numbers of the failing cases, I could keep 
>>>>>> repeatedly testing just those as I try to fix a particular issue. 
>>>>>> Currently, the level of granularity is a whole test file.
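>>>>>>
>>>>>> (A minimal sketch of the kind of runner I mean -- hypothetical code, 
>>>>>> nothing like this exists in Base -- which tallies results and keeps 
>>>>>> going instead of giving up at the first failure:
>>>>>>
>>>>>>     # run each test file, record failures, and continue
>>>>>>     passed = 0
>>>>>>     failed = String[]
>>>>>>     for f in ["core.jl", "numbers.jl", "strings.jl"]
>>>>>>         try
>>>>>>             include(f)        # each file is a pile of asserts
>>>>>>             passed += 1
>>>>>>         catch err
>>>>>>             push!(failed, f)  # note the failure and move on
>>>>>>         end
>>>>>>     end
>>>>>>     println("$passed passed, $(length(failed)) failed: $failed")
>>>>>> )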
>>>>>>
>>>>>> Documentation of the test framework in the manual has been on my mind 
>>>>>> for a while. We have it in the standard library documentation, but not 
>>>>>> in the manual.
>>>>>>
>>>>>> Code coverage is essential - but that has already been discussed in 
>>>>>> detail in this thread, and some good work has already started.
>>>>>>
>>>>>> Beyond basic correctness testing, numerical codes need to also have 
>>>>>> tests for ill-conditioned inputs. For the most part, we depend on our 
>>>>>> libraries to be well-tested (LAPACK, FFTW, etc.), but increasingly, we 
>>>>>> are 
>>>>>> writing our own libraries. Certainly package authors are pushing 
>>>>>> boundaries 
>>>>>> here.
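>>>>>>
>>>>>> (A toy example of what I mean, purely illustrative: solving against 
>>>>>> the notoriously ill-conditioned Hilbert matrix, where the tolerance 
>>>>>> has to scale with the condition number rather than with eps():
>>>>>>
>>>>>>     n = 10
>>>>>>     H = [1/(i + j - 1) for i in 1:n, j in 1:n]  # Hilbert matrix
>>>>>>     b = H * ones(n)
>>>>>>     x = H \ b                                   # exact answer: ones(n)
>>>>>>     relerr = norm(x - ones(n)) / norm(ones(n))
>>>>>>     @assert relerr < n * cond(H) * eps()        # cond-aware tolerance
>>>>>> )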
>>>>>>
>>>>>> A better perf test framework would also be great to have. Ideally, 
>>>>>> the perf tests would cover everything and have the ability to compare 
>>>>>> against performance in the past. Elliot's Codespeed was meant to do 
>>>>>> this, but somehow it hasn't worked out yet. I am quite hopeful that we 
>>>>>> will figure it out.
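>>>>>>
>>>>>> (The core of it could be as small as this sketch -- names and 
>>>>>> thresholds made up:
>>>>>>
>>>>>>     # time a micro-benchmark and compare against a stored baseline
>>>>>>     function perf_check(name, f, baseline_seconds)
>>>>>>         f()                         # warm up, force compilation
>>>>>>         t = @elapsed f()
>>>>>>         println("$name: $t s ($(t / baseline_seconds)x of baseline)")
>>>>>>         t < 1.5 * baseline_seconds  # flag regressions over 50%
>>>>>>     end
>>>>>>
>>>>>>     perf_check("vector sum", () -> sum(rand(10^6)), 0.005)
>>>>>>
>>>>>> The hard part is storing and tracking the baselines across machines 
>>>>>> and commits, which is what Codespeed was supposed to solve.)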
>>>>>>
>>>>>> Stuff like QuickCheck that generates random test cases is useful, but 
>>>>>> I am not convinced that it should be in Base.
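>>>>>>
>>>>>> (In case the idea is unfamiliar, here is a hand-rolled property test, 
>>>>>> just to illustrate; a QuickCheck-style library would generate the 
>>>>>> inputs and shrink failing cases for you:
>>>>>>
>>>>>>     # property: sorting twice gives the same result as sorting once
>>>>>>     for i in 1:100
>>>>>>         v = rand(1:100, rand(0:50))  # random vector, random length
>>>>>>         @assert sort(sort(v)) == sort(v)
>>>>>>     end
>>>>>> )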
>>>>>>
>>>>>> -viral
>>>>>>
>>>>>> On Tuesday, December 30, 2014 3:35:27 AM UTC+5:30, Jameson wrote:
>>>>>>>
>>>>>>> I imagine there are advantages to frameworks in that you can mark 
>>>>>>> expected failures and continue through the test suite after one 
>>>>>>> fails, to give a better success/failure percentage than Julia's 
>>>>>>> simplistic go/no-go approach.
>>>>>>>
>>>>>>> I used JUnit many years ago for a high school class and found that, 
>>>>>>> relative to `@assert` statements, it had more options for asserting 
>>>>>>> various approximate and conditional statements that would otherwise 
>>>>>>> have been very verbose to write in Java. Browsing back through its 
>>>>>>> website now (http://junit.org/ under Usage and Idioms), it apparently 
>>>>>>> now has some more features for testing, such as rules, theories, 
>>>>>>> timeouts, and concurrency. Those features would likely help improve 
>>>>>>> testing coverage by making tests easier to describe.
>>>>>>>
>>>>>>> On Mon Dec 29 2014 at 4:45:53 PM Steven G. Johnson <
>>>>>>> steve...@gmail.com> wrote:
>>>>>>>
>>>>>>>> On Monday, December 29, 2014 4:12:36 PM UTC-5, Stefan Karpinski 
>>>>>>>> wrote: 
>>>>>>>>
>>>>>>>>> I didn't read through the broken builds post in detail – thanks 
>>>>>>>>> for the clarification. Julia basically uses master as a branch for 
>>>>>>>>> merging 
>>>>>>>>> and simmering experimental work. It seems like many (most?) projects 
>>>>>>>>> don't 
>>>>>>>>> do this, and instead use master for stable work.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Yeah, a lot of projects use the Gitflow model, in which a develop 
>>>>>>>> branch is used for experimental work and master is used for (nearly) 
>>>>>>>> release candidates. 
>>>>>>>>
>>>>>>>> I can understand where Dan is coming from in terms of finding 
>>>>>>>> issues continually when using Julia, but in my case it's more 
>>>>>>>> commonly "this behavior is annoying / could be improved" than "this 
>>>>>>>> behavior is wrong". It's rare for me to code for a few hours in 
>>>>>>>> Julia without filing issues in the former category, but out of the 
>>>>>>>> 300 issues I've filed since 2012, it looks like fewer than two dozen 
>>>>>>>> are in the latter "definite bug" category.
>>>>>>>>
>>>>>>>> I don't understand his perspective on "modern test frameworks" in 
>>>>>>>> which FactCheck is light-years better than a big file full of 
>>>>>>>> asserts. Maybe my age is showing, but from my perspective FactCheck 
>>>>>>>> (and its Midje antecedent) just gives you a slightly more verbose 
>>>>>>>> assert syntax and a way of grouping asserts into blocks (which 
>>>>>>>> doesn't seem much better than just adding a comment at the top of a 
>>>>>>>> group of asserts). Tastes vary, of course, but Dan seems to be 
>>>>>>>> referring to some dramatic advantage that isn't a matter of mere 
>>>>>>>> spelling. What am I missing?
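>>>>>>>>
>>>>>>>> (Side by side, as far as I can tell -- FactCheck syntax from 
>>>>>>>> memory, so check its README:
>>>>>>>>
>>>>>>>>     using FactCheck
>>>>>>>>     facts("arithmetic") do               # grouping block
>>>>>>>>         @fact 1 + 1 => 2
>>>>>>>>         @fact 2.0 * 2.0 => roughly(4.0)
>>>>>>>>     end
>>>>>>>>
>>>>>>>>     # versus the Base style:
>>>>>>>>     # arithmetic
>>>>>>>>     @assert 1 + 1 == 2
>>>>>>>>     @assert isapprox(2.0 * 2.0, 4.0)
>>>>>>>> )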
>>>>>>>>
