Rather than distract from the questions posed in the previous thread, I've created a new one to discuss these concerns.


As I’ve said before, my concern is not theoretical vs experimental.
It’s “what is this change, why was it made, what are the effects, and
are its effects justifiable?”

Example 1: Startable

What is this change?
An interface for service implementations to implement, so that exporting and starting threads can be deferred until after construction.
Why was it made?
So service implementations with final fields can freeze their final fields before the service's reference is shared.
What are the effects?
Service implementations can delay exporting and starting threads until after their constructor completes. This ensures that other threads see the fully constructed state of final fields and their referents, not the default initialisation values.
Are its effects justifiable?
Considering the minimal effort required to implement, and that it allows correct and safe use of final fields in services, this is a big win in my opinion.
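
To make this concrete, here's a minimal sketch of the pattern, assuming a single start() method on the interface; the service class, its fields and the thread name are illustrative, not code from qa_refactor:

import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative only; the real Startable in qa_refactor may differ.
interface Startable {
    /** Invoked by the starter after the constructor has returned. */
    void start() throws Exception;
}

final class EchoService implements Startable {
    // Final fields are frozen when the constructor returns; any thread
    // that obtains the reference afterwards sees them fully initialised
    // (JMM final field semantics, JLS 17.5).
    private final Thread worker;
    private final AtomicBoolean running = new AtomicBoolean();

    EchoService() {
        // Construct everything, but don't start threads or publish 'this'.
        worker = new Thread(new Runnable() {
            public void run() {
                while (running.get()) {
                    // ... service work ...
                    Thread.yield();
                }
            }
        }, "echo-worker");
    }

    public void start() {
        // Called only after construction completes, so the worker thread
        // is guaranteed to observe the fully constructed service state.
        running.set(true);
        worker.start();
    }
}

Compare that with starting the worker inside the constructor: the worker could then observe 'this' before its final fields are frozen, which is exactly the unsafe publication that Startable avoids.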


Example 2: Replace TaskManager with ExecutorService

What is this change?
    Replace TaskManager with ExecutorService
Why was it made?
    Task.runAfter(List tasks, int position) is fundamentally broken.
    ExecutorService implementations have far superior throughput.
    ExecutorService implementations are maintained by others, reducing our maintenance burden.
What are the effects?
    RandomStressTest uses only 188 threads instead of 1000, and throughput APPEARS to increase by a factor of 270 (although other changes to the codebase may also be contributing). Customers can fine-tune or swap the ExecutorService implementation to best suit their environment if they so choose; see the sketch below.
Are its effects justifiable?
    Ooh yeah.
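
For those who haven't looked at the code yet, here's a minimal sketch of what the migration looks like in principle; the pool choice, task bodies and class name are placeholder assumptions, not what qa_refactor actually configures:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class MigrationSketch {
    public static void main(String[] args) throws Exception {
        // Before: tasks went to a TaskManager, with ordering constraints
        // expressed through Task.runAfter(List tasks, int position).
        // After: a standard pool; an ordering dependency is expressed by
        // waiting on the prerequisite's Future before submitting.
        ExecutorService pool = Executors.newCachedThreadPool();
        try {
            Future<?> prerequisite = pool.submit(new Runnable() {
                public void run() {
                    System.out.println("prerequisite task"); // placeholder
                }
            });
            prerequisite.get(); // happens-before the dependent task below
            pool.submit(new Runnable() {
                public void run() {
                    System.out.println("dependent task"); // placeholder
                }
            });
        } finally {
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);
        }
    }
}

Because ExecutorService is a standard interface, swapping Executors.newCachedThreadPool() for a customer-tuned ThreadPoolExecutor is a one-line change, which is where the configurability mentioned above comes from.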

There are many more changes of course, and I understand that's your main concern.

Code has been committed to skunk for people to review in public. They might misinterpret my suggestions on the mailing list, but they will have a better understanding after reading the code. I welcome their participation.

There are 1557 files modified between 2.2.2 and qa_refactor.   62
deleted.   214 added.   How practical is it to review?

Hmm, four years' work; when was the last time you reviewed Linux before you used it? Don't get me wrong, I want you to review the code. If we divide the review among 5 people, it's only about 300 files each.

The problem with incremental change and release is that we would have to release with known failures, rather than with the most stable build possible. This process is not something I want; it is something I'm forced into until the build stabilises. Releasing with known failures risks exposing more bugs in deployment than we can handle.


I am open to other solutions, but I don't like the suggestion that non-compliant code is preferred until a test case that proves failure can be found. What happens when the test case fails on only one platform and is not easily repeatable? What if the fix just changes timing to mask the original problem? Testing can't prove the absence of errors; reducing errors further requires additional tools like FindBugs and public, open peer review.

Trunk development was halted because of unrelated test failures.
It was halted because the community got nervous and started asking for
review-then-commit, so you moved over to qa_refactor.

No, it was halted because River is riddled with concurrency bugs (which some won't acknowledge), and the build was starting to experience failures because of those bugs. Prior to that, the majority of changes were made in skunk and merged with trunk after successful testing in skunk. Until the most minor of changes started causing test failures, the community was working together, developing. Once these bugs are fixed, we won't face the development destabilisation challenges they present.

Those bugs are obvious if you diff qa_refactor's files while browsing svn.

The 2.2 branch has numerous bugs: its test suite just experienced 47 failures on Windows, I haven't seen any jtreg test results, and it wasn't tested properly on Java 7 prior to release. Don't get me wrong, your release efforts are definitely appreciated, but I think you need to cut me some slack and show me the same patience I've shown you.

The qa_refactor build has had thousands of test runs and has been tested on as many different OSes and CPU architectures as possible, to deliberately expose bugs and fix them. There isn't even a beta release candidate yet.

Please start new threads for your issues, as I have done here. Yes, your views and opinions are valid and must be given due consideration, but try to make any corporate-procedure requests achievable for volunteers; we are not a corporation with deep pockets and endless resources.

    From 2.2.2 to
trunk - 1225 files modified, 32 deleted, 186 added.   Same problem, which
we discussed last year.

Related email threads:

http://mail-archives.apache.org/mod_mbox/river-dev/201211.mbox/%3C50B49395.30408%40qcg.nl%3E
http://mail-archives.apache.org/mod_mbox/river-dev/201211.mbox/%3CCAMPUXz9z%2Bk1XBgzfCONmRXh8y5yBtMjJNeUeTR03YEepe-60Zg%40mail.gmail.com%3E

In the end, you’re right - qa_refactor is a skunk branch, so you can go
ahead and make all the changes you want. I just think that if your
intention is to get at least two other people to sign off on it as a
release, you’d do well to make it easy to review, and possibly even
include the community in the decision-making along the way.

The possibility of failure exists in every project; it's how we overcome challenges that makes the difference. If customers see an improvement in stability and scalability, adoption will increase. I won't be making a beta release until the build has stabilised, so there's plenty of time for people to review the code in the meantime.

Get off your bums, people: let's review the code, find the mistakes I've made, and help me fix them. You can't eat an elephant in one bite; pick off something you know and review it.

Regards,

Peter
