On 2018-09-27 16:55, Andrew Dinn wrote:
Hi Raffaello,

On 27/09/18 15:20, raffaello.giulie...@gmail.com wrote:
Hi Andrew,
On the other side, in April this year I submitted another quite fast and
supposedly correct algorithm on this mailing list and I referred to an
accompanying paper by myself that gives full explanations on that
variant. Except for a couple of persons in private, nobody sent
me any observation or comment, either on the code or on the paper.

I'm sorry I didn't see that post. I would have been very happy to review
the paper as well as the code. Unfortunately, none of us have time to
catch everything and we certainly don't always see every contribution.


I understand that most people are busy, so I was not really surprised not to get feedback on a rather tiny issue in the overall huge codebase.



The present algorithm is superior. I have the theory in notes, in my
head, on napkins, on paper sheets all over my desk and floors. But
rather than spending time on the paper itself, like I did almost in vain
for the April variant, I preferred investing it in coding, for several
reasons:
* Only code executes, not a paper.
* Only code gives results that can be compared against.
* Only code can give indications on performance enhancements.
* Only code is interesting to be submitted to the OpenJDK.
* Having a paper without having tried the ideas in code is half the fun
and half as useful.

I think this only presents one side of the argument here, at least for
code of anything but the most basic complexity, and assuming that by
paper you mean anything that goes beyond executable statements,
including comments, list discussions and reviews like this one, design
notes and documents, specifications et al:


My point is about priorities and my past experience of getting almost zero feedback on the former implementation, not that a paper is unnecessary or useless. On the contrary, I'm the first who would not trust my own code without an explanation.



Only a paper tells you what an executing piece of code is actually doing

Only paper tells you what the results produced by that code need to be
compared against to determine correctness, accuracy, etc

Only paper can tell you whether the achieved performance is worse or
better than can be expected (or where in between it lies)

Only paper can explain what OpenJDK is supposed to be doing, why, and
how the specific elements of the implementation achieve that what/why;
without that audit trail OpenJDK will be dead in the water in no time
at all

Having code without the paper to tell you what ideas it implements is no
fun and no use at all.


Why "no use at all"? That's unfair.

It might not be fun currently, but it is quite useful anyway in its present form, having produced, as of today, some 400 billion correct results in 1/13 of the time.
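
Purely for illustration (this is not the actual harness behind those figures), a minimal correctness check of that kind, exercising random bit patterns and verifying that each rendering round-trips back to the same double, might look like the sketch below; the conversion under test is stood in for by Double.toString and would be swapped for the new one:

    import java.util.SplittableRandom;

    public class RoundTripCheck {
        public static void main(String[] args) {
            SplittableRandom rnd = new SplittableRandom();
            long mismatches = 0;
            for (long i = 0; i < 100_000_000L; i++) {
                // Random bit pattern: covers all finite doubles, infinities and NaNs.
                double v = Double.longBitsToDouble(rnd.nextLong());
                String s = Double.toString(v);  // replace with the conversion under test
                // A correct rendering must parse back to the very same double
                // (doubleToLongBits canonicalizes NaNs and keeps -0.0 distinct from 0.0).
                if (Double.doubleToLongBits(Double.parseDouble(s))
                        != Double.doubleToLongBits(v)) {
                    mismatches++;
                }
            }
            System.out.println("mismatches: " + mismatches);
        }
    }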



I think that last one exemplifies a key asymmetry that always needs to
be borne in mind. If your last contribution did not get any significant
review on this or some other list then I think we really messed up.


Except where noted above, I agree with these observations. Currently, however, I'm in the lucky position of having both the explanation and the code, so they don't apply to me in this particular case.

I'm thinking about how to present the ideas in some sketched-out form, just to share the fun and mathematically convince those so inclined, before the paper is ready.



Greetings
Raffaello
