Slightly OT, but I imagine the try/catch Dan refers to is the display
system. Unfortunately it's a horribly brittle way to implement that code,
and it still has the potential to cause bugs (because you can't tell where
in the stack the error came from). I'm prototyping something to try to
solve that and a lot of the other issues with the current display system,
though who knows if it'll ever end up in Base.
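
Roughly the failure mode, as a hypothetical sketch (this is not the actual
display code in Base; display_value is made up for illustration):

    # Hypothetical sketch: not the actual display code in Base.
    function display_value(io, x)
        try
            show(io, x)        # the value's own show method may throw...
        catch err
            # ...but so can the IO layer or the display machinery itself.
            # Here there's no way to tell which stack frame the error came
            # from, so any fallback risks masking real bugs in either layer.
            print(io, "<error displaying value>")
        end
    end

    display_value(STDOUT, 1:3)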

On 29 December 2014 at 19:38, Stefan Karpinski <ste...@karpinski.org> wrote:

> There are a lot of very legitimate complaints in this post, but also
> other things I find frustrating.
>
> *On-point*
>
> Testing & coverage could be much better. Some parts of Base were written a
> long time ago before we wrote tests for new code. Those can have a scary
> lack of test coverage. Testing of Julia packages ranges from non-existent
> to excellent. This also needs a lot of work. I agree that the current
> way of measuring coverage is nearly useless. We need a better approach.
>
> The package manager really, really needs an overhaul. This is my fault and
> I take full responsibility for it. We've been waiting a frustratingly long
> time for libgit2 integration to be ready to use. Last I checked, I think
> there was still some Windows bug pending.
>
> Julia's uptime on Travis isn't as high as I would like it to be. There
> have been a few periods (one of which Dan unfortunately hit) when Travis
> was broken for weeks. This sucks and it's a relief whenever we fix the
> build after a period like that. Fortunately, since that particularly bad
> couple of weeks, there hasn't been anything like that, even on Julia
> master, and we've never had Julia stable (release-0.3 currently) broken for
> any significant amount of time.
>
> Documentation of Julia internals. This is getting a bit better with the
> developer documentation <http://julia.readthedocs.org/en/latest/devdocs/julia/>
> that has recently been added, but Julia's internals are pretty inscrutable.
> I'm not convinced that many other programming language implementations are
> any better about this, but that doesn't mean we shouldn't improve this a
> lot.
>
> *Frustrating*
>
> Mystery Unicode bug – Dan, I've been hearing about this for months now.
> Nobody has filed any issues with UTF-8 decoding in years (I just checked).
> The suspense is killing me – what is this bug? Please file an issue, no
> matter how vague it may be. Hell, that entire throwaway script can just be
> the body of the issue and other people can pick it apart for specific bugs.
>
> The REPL rewrite, among other things, added tests to the REPL. Yes, it was
> a disruptive transition, but the old REPL needed to be replaced. It was a
> massive pile of hacks around GNU readline and was incomprehensible and
> impossible to test. Complaining about the switch to the new REPL, which
> *is* actually tested, seems misplaced.
>
> Unlike Python, catching exceptions in Julia is not considered a valid way
> to do control flow. Julia's philosophy here is closer to Go's than to
> Python's – if an exception gets thrown it should only ever be because the
> *caller* screwed up and the program may reasonably panic. You can use
> try/catch to handle such a situation and recover, but any Julia API that
> requires you to do this is a broken API. So the fact that
>
> When I grepped through Base to find instances of actually catching an
>> exception and doing something based on the particular exception, I could
>> only find a single one.
>
>
> actually means that the one instance is a place where we're doing it
> wrong and hacking around something we know to be broken. The next move
> is to get rid of that one instance, not add more code like this. The UDP
> thing is a problem and needs to be fixed.
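>
> To illustrate the distinction, here's a minimal sketch (trysqrt is a
> hypothetical function made up for this example, not an API in Base):
>
>     # Control flow via exceptions: the style we consider a broken API.
>     x = try
>         sqrt(-1.0)           # throws a DomainError
>     catch
>         0.0                  # caller forced into try/catch just to branch
>     end
>
>     # Failure as part of the return value: the style we prefer.
>     # (trysqrt is hypothetical, for illustration only.)
>     trysqrt(x) = x < 0 ? nothing : sqrt(x)
>     r = trysqrt(-1.0)
>     y = r === nothing ? 0.0 : r
>
> In the second form, failure is an ordinary value the caller can branch
> on, with no try/catch needed at the call site.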
>
> The business about fixing bugs getting Dan into the top 40 is weird. It's
> not quite accurate – Dan is #47 by commits (I'm assuming that's the metric
> here) with 28 commits, so he's in the top 50 but not the top 40. There are
> 23 people who have 100 commits or more, and that's roughly the group I
> would consider to be the "core devs". This paragraph is frustrating because
> it gives the imo unfair impression that not many people are working on
> Julia. Having 23+ people working actively on a programming language
> implementation is a lot.
>
> Ranking languages by how likely their Travis builds are to fail doesn't
> seem meaningful. A huge part of this is how aggressively each project uses
> Travis. We automatically test just about everything, even completely
> experimental branches. Julia packages can turn Travis testing on with a
> flip of a switch. So lots of things are broken on Travis because we've made
> it easy to use. We should, of course, fix these things, but other projects
> having higher uptime numbers doesn't imply that they're more reliable – it
> probably just means they're using Travis less.
>
> In general, a lot of Dan's issues only crop up if you are using Julia
> master. The Gadfly dates regression is probably like this, and the two weeks
> of Travis failures were only on master during a "chaos month" – i.e. a month
> where we make lots of reckless changes, typically right after releasing a
> stable version (in this case it was right after 0.3 came out). These days,
> I've seen a lot of people using Julia 0.3 for work and it's pretty smooth
> (package management is by far the biggest issue and I just take care of
> that myself). If you're a normal language user, you definitely should not
> be using Julia master.
>
