Re: Next steps

2020-11-21 Thread Dan Rollo
Hi Peter,

Sounds good to me.

If you have any idiot proof tasks, let me know, I’d be happy to help.

Dan


> On Nov 21, 2020, at 6:16 PM, Peter Firmstone  
> wrote:
> 
> Hello River folk,
> 
> What I had in mind next was to SVN move the existing trunk out of the way, 
> then SVN move the modular branch to trunk, then SVN move other relevant code 
> branches we wanted to keep into trunk as well.  Then finally we can ask INFRA 
> to migrate us to git.
> 
> Please feel free to discuss or mention any ideas you have, or if you think it 
> should be done a little differently.  I'd like to retain our SVN history when 
> making the git transition, so we can track everything back to the original 
> Sun Microsystems contribution in 2007.
> 
> For the next fortnight, I'll be in remote Queensland, so hoping to get some 
> time over Christmas to get this done, and all help will be gladly welcomed.
> 
> -- 
> Regards,
> Peter
> 
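The sequence Peter outlines might look roughly like the following command sketch. The repository URL and branch names here are assumptions, shown only to make the plan concrete; the real paths would need to be confirmed against the River repository before running anything:

```
# Sketch only -- URL and branch names are assumptions.
SVN=https://svn.apache.org/repos/asf/river/jtsk

# 1. Move the existing trunk out of the way.
svn move -m "Archive old trunk" $SVN/trunk $SVN/branches/old-trunk

# 2. Promote the modular branch to trunk.
svn move -m "Promote modular branch to trunk" $SVN/branches/modular $SVN/trunk

# 3. Move any other branches worth keeping in the same way, then ask
#    INFRA for a git migration that preserves the full SVN history,
#    back to the original 2007 Sun contribution.
```

Because `svn move` is recorded as a copy-plus-delete, history across the moves survives an SVN-to-git conversion, which is what makes the "move first, migrate second" ordering attractive.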



Re: Next steps

2020-11-21 Thread Peter Firmstone

If you feel confident enough, have a go at the SVN move. :-)


--
Regards,
 
Peter Firmstone

0498 286 363
Zeus Project Services Pty Ltd.



Re: Next steps

2020-11-22 Thread Peter Firmstone
No pressure if you're not comfortable, I should have some time in a 
fortnight.






Re: Next steps

2020-11-26 Thread Bishnu Gautam
Hello River Folk

Is anyone here working with gRPC (a web RPC framework)? It seems an interesting 
project from which we can learn a lot while building applications in River. Let 
me know whether we could integrate service discovery using the gRPC framework 
on the web. If anyone can show developers a prototype application integrating 
River with gRPC, River will get wider attention among developers. Please let me 
know if someone is working, or has worked, in this area.

Bishnu Prasad Gautam

From: Peter Firmstone 
Sent: Sunday, November 22, 2020 11:08 AM
To: dev@river.apache.org 
Subject: Re: Next steps




Re: Next steps

2020-12-07 Thread Phillip Rhodes

Sounds interesting. I'd love to see this idea fleshed out more, and
possibly worked up as a prototype. I haven't exactly been working on
anything similar, but the thought has at least crossed my mind to look
at how it might be possible to use River with protocols that aren't
quite so Java centric.  IIOP or something would possibly be of
interest as well. Or maybe Thrift.


Phil


Re: Next steps after 2.2.1 release

2013-04-06 Thread Patricia Shanahan

On 4/6/2013 7:26 PM, Greg Trasuk wrote:
...

Once we have a stable set of regression tests, then OK, we could
think about improving performance or using Maven repositories as the
codebase server.

...

I think there is something else you need before it would be a good idea 
to release any changes for the sake of performance - some examples of 
workloads whose performance you want to improve, and that are in fact 
improved by the changes.


I've worked on many performance campaigns, at several different 
companies including Cray Research and Sun Microsystems. Each campaign 
has been based on one or more benchmarks. The benchmarks could be 
industry-standard benchmarks, or could be sanitized versions of user 
workloads.


I've had many opportunities to compare measurements to expert 
predictions, including my own predictions, about what needs to be 
changed to improve performance. In no case have the expert predictions 
matched measurement.


Based on that experience I had serious concerns about working on River 
performance with no benchmarks. Are there any River users who care 
enough about performance to help with creating benchmarks, and running 
them in highly parallel environments? If not, maybe performance changes 
should not be a priority.


Without benchmarks, changes intended to improve performance may carry 
functional risk for no gain, or even for a performance regression.


Patricia


Re: Next steps after 2.2.1 release

2013-04-06 Thread Peter Firmstone
Performance is very difficult to determine with simple tests, so for my 
benchmarks I've been using qa suite stress tests that run for several minutes. 
Because I test on different hardware, the individual results are only 
comparable on that hardware.


I run jvisualvm to identify hotspots, then manually inspect the code for 
possible improvements. The modification might be as simple as replacing a call 
to array.length inside a loop with a local variable or instance field. I don't 
make any performance improvements that make the code harder to read; just 
simple things like replacing string concatenation in loops.
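The kind of mechanical, readability-neutral change described here can be sketched as follows. This is illustrative only, not code from the River tree; the class and method names are made up:

```java
// Illustrative sketch of a hotspot fix: hoist array.length out of the
// loop and replace repeated String concatenation with a StringBuilder.
public class HotspotSketch {

    // Before: reads values.length on every iteration and rebuilds the
    // result String each time round the loop (quadratic copying).
    static String joinSlow(int[] values) {
        String result = "";
        for (int i = 0; i < values.length; i++) {
            result = result + values[i] + ",";
        }
        return result;
    }

    // After: length read once, StringBuilder accumulates the result.
    // Behaviour is identical; only the cost model changes.
    static String joinFast(int[] values) {
        int n = values.length;              // read the field once
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(values[i]).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};
        System.out.println(joinSlow(data)); // -> 1,2,3,
        System.out.println(joinFast(data)); // -> 1,2,3,
    }
}
```

Whether a change like this matters in practice is exactly what the stress-test timings are for; the rewrite is only worth keeping if the measured numbers move.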


Generally though my focus isn't on performance, it's on correctness and 
long standing Jini issues users have identified.


Despite how good people think our code is, we have inherited a lot of 
legacy code that is very hard to reason about.









Re: Next steps after 2.2.1 release

2013-04-06 Thread Peter Firmstone

Greg,

You need to spend some more time figuring out why those tests are 
failing; that isn't normal. When 280 tests fail, there's usually 
something wrong with the configuration. The qa test suite just isn't that 
brittle ;)   The tests that fail due to concurrency errors don't fail 
often; we're talking maybe 3 runs in 10 where you might get a failure (if 
you're lucky), and you likely won't see any failures on single-core 
hardware. The Jenkins tests tend to be more likely to fail because they run 
concurrently with other builds, in a busy network environment (sockets in use, 
etc.), with lots of superfluous discovery processes going on in the background.


If you need some help with the source tree, I'm quite familiar with it, 
don't be afraid to ask questions.


This is a big distribution, there is a lot of code, it is initially 
difficult to find your way around, but patience will be rewarded.


There are 3 test suites:

1. trunk/test
2. trunk/qa/src
3. trunk/qa/jtreg

The first is the JUnit test suite; the second is the qa integration test 
suite and TCK, which simulates a network environment with all the 
services; the last is the jtreg regression test suite.


When you've done further investigation and better understand the code, 
we'll be in a better position for discussion; right now the 
immediate focus should be on the 2.2.1 release.


I'll switch to your branch and run the tests if you like.

I'd also like to encourage you to look at the code in 
skunk/qa-refactoring, it's well documented and should be easy to read, 
I've followed Kent Beck's recommendations for code readability.


I hope this helps.

Regards,

Peter.


Greg Trasuk wrote:

OK, so in my last message I talked about how (speaking only for myself) I'm a 
little nervous about the state of the trunk.

So what now?  


Problems we need to avoid in this discussion:
-

- Conflation of source tree structure issues with build tool selection.
- Conflation of Maven build, Maven as codebase provider (artifact urls), and 
posting artifacts to Maven Central
- Wish lists of pet features
- Bruised egos and personal criticisms.

Issues I see, in no particular order:
--
- We've made changes both to the test framework and to the code, and lots of 
them.  We should change one or the other, or allow small amounts of 
coevolution only if absolutely necessary.
- Really, I'd like to see a completely separate integration test, and have the 
TCK tests separated out again.
- The source tree is incomprehensible
- The tests appear to be awfully sensitive to their environment, insofar as 
when I run them locally on an untouched source tree, I get 280 failures.
- There have been changes to class loading and security subsystems.  These subsystems are 
core to Jini, and the changes were made to the existing source, so there's no way to 
"opt-out" of the changes.  I'd like to see radical changes be optional until 
proven in the field, where possible.  In the case of policy providers and class loaders, 
that should be easy to do.
- Similarly, it seems there have been some changes to the JERI framework.
- There are ".jar" files in our repository.  I'll stipulate that the licensing 
has been checked, but it smells bad.

Discussion
-
I guess the biggest thing I'd like to see is stability in the test framework.  
Perhaps it needs refactoring or reorganization, but if so, we need to be very 
careful to separate it from changes to the core functionality.

Next, I'd like for it to be easier to comprehend the source tree.  I think a good way to 
do that is to separate out (carefully) the core Jini package (basically the contents of 
jsk-platform.jar) and the service implementations.  There's no reason that we have to 
have one huge everything-but-the-kitchen-sink distribution.  That's just a holdover from 
how Sun structured the JTSK - It was literally a "starter kit".  To me it would 
be fine to have separate deliverables for the platform and the services.

While we're separating out the services, it might also be a decent time to implement 
Maven-based builds if we think that's a good idea.  I'd start with Reggie.  It would also 
be a good time to get rid of the "com.sun.jini" packages.

Aside:  I'm personally ambivalent on Maven (which is to say I'm nowhere near as 
negative on it as I once was).  I do agree with Dennis, though, that the jars 
and appropriate poms need to be published to Maven Central.  There's no doubt 
that users will appreciate that.

Once we have a stable set of regression tests, then OK, we could think about 
improving performance or using Maven repositories as the codebase server.

I realize this won't be popular, but my gut feel is that we need to step back 
to the 2.2 branch and retrace our steps a little, and go through the evolution 
again in a more measured fashion.

Proposal


1 - Release version 2.2.1 from the 2.2 branch.
2 - Create a separate source tree for the test framework.  This could come 
from the "qa_refactor" branch, but the goal should be to successfully test 
the 2.2.1 release.  Plus it should be a no-brainer to pull it down and run it 
on a local machine.
3 - Release 2.2.2 from the pruned jtsk tree.  Release 1.0.0 of the test 
framework.
4 - Pull out the infrastructure service implementations (Reggie, Outrigger, 
Norm, etc) from the core into separate products.  Release 1.0.0 on each of 
them.  Release 2.2.3 from the pruned jtsk tree.

Re: Next steps after 2.2.1 release

2013-04-07 Thread Dan Creswell
On 7 April 2013 05:24, Patricia Shanahan  wrote:

> On 4/6/2013 7:26 PM, Greg Trasuk wrote:
> ...
>
>  Once we have a stable set of regression tests, then OK, we could
>> think about improving performance or using Maven repositories as the
>> codebase server.
>>
> ...
>
> I think there is something else you need before it would be a good idea to
> release any changes for the sake of performance - some examples of
> workloads whose performance you want to improve, and that are in fact
> improved by the changes.
>

Indeed. And if we do these changes in a confined, one small step at a time
type fashion, we can build micro-benchmarks as we go along.

Later, if it makes sense we can look at more macro tests such as e.g.
Lookup or JavaSpace or Transaction etc.


Re: Next steps after 2.2.1 release

2013-04-07 Thread Patricia Shanahan




Programmer-constructed micro-benchmarks are good at answering the
question "How am I doing at optimizing X?". They are useless for
answering the question that needs to be asked first: "What, if anything,
should I be optimizing?".

To find out what, if anything, needs optimization one has to start from
real workloads that are running slower than desired, and measure them.
Or, in some cases, from industry benchmarks that are known to be
correlated with real workloads in some field.
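Patricia's point can be made concrete with a trivial, stdlib-only harness that times a whole workload end to end, rather than a hand-picked method. The workload below is an illustrative stand-in (sorting a large array), not a River benchmark, and the names are made up:

```java
import java.util.Arrays;
import java.util.Random;

// Illustrative sketch: measure the real workload first, then let the
// numbers decide what, if anything, is worth optimizing.
public class WorkloadTimer {

    // Times one end-to-end run of a workload, after a warmup pass so
    // JIT compilation doesn't dominate the measurement.
    static long timeMillis(Runnable workload) {
        workload.run();                          // warmup run (discarded)
        long start = System.nanoTime();
        workload.run();                          // measured run
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // Stand-in "real workload": generate and sort two million ints.
        Runnable workload = () -> {
            int[] data = new Random(42).ints(2_000_000).toArray();
            Arrays.sort(data);
        };
        long ms = timeMillis(workload);
        System.out.println("workload took " + ms + " ms");
        System.out.println(ms >= 0);
    }
}
```

The harness says nothing about *where* the time goes; that's the profiler's job once a measured workload shows it is actually too slow.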

Patricia



Re: Next steps after 2.2.1 release

2013-04-07 Thread Dan Creswell
True, for a definition of "micro-benchmark" you have decided for yourself 
rather than asked me to clarify.



Re: Next steps after 2.2.1 release

2013-04-07 Thread Peter Firmstone
Oh, hang on, are you developing on Windows?  If so, it's not supported 
in 2.2.0.



Re: Next steps after 2.2.1 release

2013-04-07 Thread Greg Trasuk

On Sun, 2013-04-07 at 06:34, Peter Firmstone wrote:
> Oh, hang on, are you developing on Windows?  If so, it's not supported 
> in 2.2.0.
> 

OSX 10.8 in this case, although I also plan to run the test suite on
Windows XP32 and Win7-64.  Possibly also Solaris 10.  As I recall the
"Windows not supported" was primarily about spaces in the file path.  If
that's the case, then IBM's Websphere is not supported on Windows
either. I was planning to arrange for a space-free directory path. 
People have been running Jini on Windows since the dawn of time.

Greg.


Re: Next steps after 2.2.1 release

2013-04-07 Thread Dennis Reedy

On Apr 6, 2013, at 10:26 PM, Greg Trasuk wrote:
> 
> Proposal
> 
> 
> 1 - Release version 2.2.1 from the 2.2 branch.

+1 for this, we really need to get this done, and get this done before the end 
of the month.


> 2 - Create a separate source tree for the test framework.  This could come 
> from the "qa_refactor" branch, but the goal should be to successfully test 
> the 2.2.1 release.  Plus it should be a no-brainer to pull it down and run it 
> on a local machine.

I would actually like to have the (at least) unit tests part of the same source 
tree. I can see integration tests being in a separate module, but it should not 
be a separate source tree. When River gets built we should automatically run 
unit tests. Integration testing can happen as a separate step. 

Additionally, I also don't see why we need a "test framework" per se. Why not 
just use JUnit or TestNG? 

> 3 - Release 2.2.2 from the pruned jtsk tree.  Release 1.0.0 of the test 
> framework.
> 4 - Pull out the infrastructure service implementations (Reggie, Outrigger, 
> Norm, etc) from the core into separate products.  Release 1.0.0 on each of 
> them.  Release 2.2.3 from the pruned jtsk tree.

Yes, we need to trim up the project. Once 2.2.1 is released I had planned on 
branching and starting a modular (maven-ized) project. I really do not know 
what the state of trunk, qa-trunk (or any other branch) is, so starting with 
2.2 as the baseline seems more stable to me.

Regards

Dennis

Re: Next steps after 2.2.1 release

2013-04-07 Thread Dan Creswell
On 7 April 2013 15:18, Dennis Reedy  wrote:

>
> On Apr 6, 2013, at 10:26 PM, Greg Trasuk wrote:
> >
> > Proposal
> > 
> >
> > 1 - Release version 2.2.1 from the 2.2 branch.
>
> +1 for this, we really need to get this done, and get this done before the
> end of the month.
>
>
> > 2 - Create a separate source tree for the test framework.  This could
> come from the "qa_refactor" branch, but the goal should be to successfully
> test the 2.2.1 release.  Plus it should be a no-brainer to pull it down and
> run it on a local machine.
>
> I would actually like to have the (at least) unit tests part of the same
> source tree. I can see integration tests being in a separate module, but it
> should not be a separate source tree. When River gets built we should
> automatically run unit tests. Integration testing can happen as a separate
> step.
>
> Additionally, I also don't see why we need a "test framework" per se. Why
> not just use JUnit or TestNG?
>

No objection to using a "standard" test framework; equally, we've got a lot
of code piled up for various releases. Thus I'd say we add the framework
replacement work to the "future, once we've got the pile down to a
manageable size".




Re: Next steps after 2.2.1 release

2013-04-07 Thread Greg Trasuk

On Sun, 2013-04-07 at 10:18, Dennis Reedy wrote:
> On Apr 6, 2013, at 10:26 PM, Greg Trasuk wrote:
> > 
> > Proposal
> > 
> > 
> > 1 - Release version 2.2.1 from the 2.2 branch.
> 
> +1 for this, we really need to get this done, and get this done before
> the end of the month.

Agreed.  I was sincerely hoping for this last month.

> 
> 
> > 2 - Create a separate source tree for the test framework.  This
> could come from the "qa_refactor" branch, but the goal should be to
> successfully test the 2.2.1 release.  Plus it should be a no-brainer
> to pull it down and run it on a local machine.
> 
> I would actually like to have the (at least) unit tests part of the
> same source tree. I can see integration tests being in a separate
> module, but it should not be a separate source tree. When River gets
> built we should automatically run unit tests. Integration testing can
> happen as a separate step. 
> 

Unit tests in the main source tree are fine.  But I feel like we've
reached the point we're at because we don't have a clean separation
between the code and the integration/regression tests.  Hence I'd like
to make sure that there's a test package separate from the development
branch, so that we can have confidence in the core functionality.

> Additionally, I also dont see why we need a "test framework" per se.
> Why not just use JUnit or TestNG? 
I don't see a problem with streamlining the tests, once the integration
and regression tests are split out.  If we're going to develop/modify
tests, I want to make sure we can run them against a known-good release
build.  Only then can we apply them to a "probably good" release
candidate.  Seems to me that splitting out the acceptance testing is a
convenient way to allow for separate governance on the tests and core
functionality.

By the way, I think a part of me just died when I used the "g-word"
above :-)

> 
> > 3 - Release 2.2.2 from the pruned jtsk tree.  Release 1.0.0 of the
> test framework.
> > 4 - Pull out the infrastructure service implementations (Reggie,
> Outrigger, Norm, etc) from the core into separate products.  Release
> 1.0.0 on each of them.  Release 2.2.3 from the pruned jtsk tree.
> 
> Yes, we need to trim up project. Once 2.2.1 is released I had planned
> on branching and starting a modular (maven-ized) project. I really do
> not know what the state of trunk, qa-trunk (or any other branch) so
> starting with 2.2 as the baseline seems more stable to me.

That's about where I'm coming from too.

> 
> Regards
> 
> Dennis



Re: Next steps after 2.2.1 release

2013-04-07 Thread Peter
+1 for 2.2.1 release.
+1 to investigate modular build options.

-1 for broad brush proposal based on incorrect assumptions.


I'm concerned people are jumping to conclusions and making incorrect 
assumptions; understanding of the current build is clearly lacking:

1.  The qa suite can be run against any other build by changing the RIVER_HOME 
property.  It's already a separate build; it's not coupled.  It is where it is 
for convenience and ease of use for new developers.
2.  Windows passes all tests with trunk and qa-refactoring.  We have RFC 3986 
compliant URI paths that can handle spaces and illegal characters.  Yes, these 
builds are more Jini-standards-compliant than the so-called clean 2.2 branch.
3.  The quality of the code in trunk and qa-refactoring is far superior to the 
2.2 branch.
4.  Branching a new trunk off 2.2 because it's "clean" is insufficient 
justification and discards three years of hard work.
5.  The integration tests cannot be replaced by JUnit; all test frameworks in 
use are of equal importance.
6.  I have not seen one question regarding the code; it has been developed in 
collaboration, with multiple discussions on trunk and with standards-compliance 
testing over the last 3 years.

Furthermore:

I reiterate: I am confident the latest code is superior to 2.2 and is API 
compatible.

My next task is to refactor or remove TaskManager.  Once this is done I'll 
release 2.3.0.
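[Editor's note: for context, TaskManager is one of River's internal task schedulers, and RetryTask is discussed later in this thread as a possible concurrency trouble spot. The sketch below is purely hypothetical: it shows what a RetryTask-style "retry until success" loop can look like on plain java.util.concurrent. The names `RetrySketch` and `retry` are illustrative and are not River's actual API.]

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch only: River's real TaskManager/RetryTask differ.
public class RetrySketch {

    // Run `attempt` on the pool; on failure, reschedule the same runnable
    // after `delayMillis`.  The caller bounds the retries with a timeout.
    static CompletableFuture<Integer> retry(final ScheduledExecutorService pool,
                                            final Callable<Integer> attempt,
                                            final long delayMillis) {
        final CompletableFuture<Integer> result = new CompletableFuture<>();
        pool.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    result.complete(attempt.call());
                } catch (Exception e) {
                    // Failed attempt: schedule this same runnable to try again.
                    pool.schedule(this, delayMillis, TimeUnit.MILLISECONDS);
                }
            }
        });
        return result;
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
        final AtomicInteger calls = new AtomicInteger();
        // A task that fails twice and succeeds on the third attempt.
        int value = retry(pool, new Callable<Integer>() {
            @Override
            public Integer call() throws Exception {
                if (calls.incrementAndGet() < 3) throw new Exception("not yet");
                return 42;
            }
        }, 10).get(5, TimeUnit.SECONDS);
        System.out.println(value + " after " + calls.get() + " attempts");
        pool.shutdown();
    }
}
```

The appeal of this direction is that scheduling, thread pooling, and completion are delegated to well-tested JDK classes rather than a hand-rolled manager.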

I refuse to participate in a new trunk branched off 2.2; it is doomed to fail.  
Discarding 3 years of synchronization fixes, concurrency improvements, and 
fixes for fundamental platform issues like over-reliance on DNS would be an 
act of project vandalism.

Developers are free to maintain a 2.2 branch; I'm not against that.  In fact it 
would allow a transition period.

People need to think very carefully before making rash decisions.  Remember, 
your actions are on public display: do more research, and choose your next 
move wisely.

Regards,

Peter Firmstone.

- Original message -
>
> On Sun, 2013-04-07 at 10:18, Dennis Reedy wrote:
> > On Apr 6, 2013, at 10:26 PM, Greg Trasuk wrote:
> > >
> > > Proposal
> > > 
> > >
> > > 1 - Release version 2.2.1 from the 2.2 branch.
> >
> > +1 for this, we really need to get this done, and get this done before
> > the end of the month.
>
> Agreed.  I was sincerely hoping for this last month.
>
> >
> >
> > > 2 - Create a separate source tree for the test framework.  This
> > could come from the "qa_refactor" branch, but the goal should be to
> > successfully test the 2.2.1 release.  Plus it should be a no-brainer
> > to pull it down and run it on a local machine.
> >
> > I would actually like to have the (at least) unit tests part of the
> > same source tree. I can see integration tests being in a separate
> > module, but it should not be a separate source tree. When River gets
> > built we should automatically run unit tests. Integration testing can
> > happen as a separate step.
> >
>
> Unit tests in the main source tree are fine.  But I feel like we've
> reached the point we're at because we don't have a clean separation
> between the code and the integration/regression tests.  Hence I'd like
> to make sure that there's a test package separate from the development
> branch, so that we can have confidence in the core functionality.
>
> > Additionally, I also dont see why we need a "test framework" per se.
> > Why not just use JUnit or TestNG?
> I don't see a problem with streamlining the tests, once the integration
> and regression tests are split out.  If we're going to develop/modify
> tests, I want to make sure we can run them against a known-good release
> build.  Only then can we apply them to a "probably good" release
> candidate.  Seems to me that splitting out the acceptance testing is a
> convenient way to allow for separate governance on the tests and core
> functionality.
>
> By the way, I think a part of me just died when I used the "g-word"
> above :-)
>
> >
> > > 3 - Release 2.2.2 from the pruned jtsk tree.  Release 1.0.0 of the
> > test framework.
> > > 4 - Pull out the infrastructure service implementations (Reggie,
> > Outrigger, Norm, etc) from the core into separate products.  Release
> > 1.0.0 on each of them.  Release 2.2.3 from the pruned jtsk tree.
> >
> > Yes, we need to trim up project. Once 2.2.1 is released I had planned
> > on branching and starting a modular (maven-ized) project. I really do
> > not know what the state of trunk, qa-trunk (or any other branch) so
> > starting with 2.2 as the baseline seems more stable to me.
>
> That's about where I'm coming from too.
>
> >
> > Regards
> >
> > Dennis
>



Re: Next steps after 2.2.1 release

2013-04-07 Thread Peter
Greg, why have you repeated this message?

I think this is a deliberate attack on the project because you haven't been 
following development in trunk and now you're scared because you see changes 
you don't understand.

I've been following your developments in surrogates, an impressive amount of 
productivity.  Although I think you should consider upgrading apache.commons 
vfs to version 2 before releasing.

Open your mind and ask questions, the code isn't set in stone, you have an 
obligation as project lead to encourage and nurture development, not stifle it.

You strike me as someone who's a very good programmer, but still learning 
leadership because you lack faith in others and must do everything yourself.  
Remember I offered to assist with Surrogates, but you wanted to work alone? 

You need to let go and give others a go too.

How you handle this matter will be a test for your own personal development and 
an opportunity to grow as a leader. 

You also hold the future of this project in your hands, so I hope you find 
strength to let go.

Regards,

Peter.

- Original message -
>
> OK, so in my last message I talked about how (speaking only for myself) I'm a
> little nervous about the state of the trunk.
>
> So what now? 
>
> Problems we need to avoid in this discussion:
> -
>
> - Conflation of source tree structure issues with build tool selection.
> - Conflation of Maven build, Maven as codebase provider (artifact urls), and
>   posting artifacts to Maven Central
> - Wish lists of pet features
> - Bruised egos and personal criticisms.
>
> Issues I see, in no particular order:
> --
> - We've done changes both to the test framework and the code, and lots of them.
>   We should do one or the other, or small amounts of coevolution, if absolutely
>   necessary.
> - Really, I'd like to see a completely separate integration test, and have the
>   TCK tests separated out again.
> - The source tree is incomprehensible.
> - The tests appear to be awfully sensitive to their environment.  Insofar as
>   when I run them locally on an untouched source tree, I get 280 failures.
> - There have been changes to class loading and security subsystems.  These
>   subsystems are core to Jini, and the changes were made to the existing
>   source, so there's no way to "opt-out" of the changes.  I'd like to see
>   radical changes be optional until proven in the field, where possible.  In
>   the case of policy providers and class loaders, that should be easy to do.
> - Similarly, it seems there have been some changes to the JERI framework.
> - There are ".jar" files in our repository.  I'll stipulate that the licensing
>   has been checked, but it smells bad.
>
> Discussion
> -
> I guess the biggest thing I'd like to see is stability in the test framework.
> Perhaps it needs refactoring or reorganization, but if so, we need to be very
> careful to separate it from changes to the core functionality.
>
> Next, I'd like for it to be easier to comprehend the source tree.  I think a
> good way to do that is to separate out (carefully) the core Jini package
> (basically the contents of jsk-platform.jar) and the service implementations.
> There's no reason that we have to have one huge 
> everything-but-the-kitchen-sink
> distribution.  That's just a holdover from how Sun structured the JTSK - It 
> was
> literally a "starter kit".  To me it would be fine to have separate 
> deliverables
> for the platform and the services.
>
> While we're separating out the services, it might also be a decent time to
> implement Maven-based builds if we think that's a good idea.  I'd start with
> Reggie.  It would also be a good time to get rid of the "com.sun.jini" 
> packages.
>
> Aside:  I'm personally ambivalent on Maven (which is to say I'm nowhere near 
> as
> negative on it as I once was).  I do agree with Dennis, though, that the jars
> and appropriate poms need to be published to Maven Central.  There's no doubt
> that users will appreciate that.
>
> Once we have a stable set of regression tests, then OK, we could think about
> improving performance or using Maven repositories as the codebase server.
>
> I realize this won't be popular, but my gut feel is that we need to step back 
> to
> the 2.2 branch and retrace our steps a little, and go through the evolution
> again in a more measured fashion.
>
> Proposal
> 
>
> 1 - Release version 2.2.1 from the 2.2 branch.
> 2 - Create a separate source tree for the test framework.  This could come
>     from the "qa_refactor" branch, but the goal should be to successfully test
>     the 2.2.1 release.  Plus it should be a no-brainer to pull it down and run
>     it on a local machine.
> 3 - Release 2.2.2 from the pruned jtsk tree.  Release 1.0.0 of the test
>     framework.
> 4 - Pull out the infrastructure service implementations (Reggie, Outrigger,
>     Norm, etc) from th

Re: Next steps after 2.2.1 release

2013-04-07 Thread Greg Trasuk

On Sun, 2013-04-07 at 17:54, Peter wrote:
> Greg, why have you repeated this message?
> 

First time I sent it was from the wrong email address, so it got hung up
in moderation.  I sent it again from my subscribed address.  I'm
guessing someone just moderated the original through.


Anyway, let's address one or two of your points...

I see you writing inflammatory statements about my leadership skills and
I think you're  upset because you think I was questioning the quality of
your work. I understand.  You've put a lot of effort into the codebase.

I feel sorry that you feel that way - it wasn't what I intended.

Apache doesn't recognize any kind of a "project leader" position, and I
don't pretend to hold any such influence over River.  I'm speaking as a
committer and PMC member.  I certainly don't think I "hold the future of
the project in my hands".  If anyone does hold individual control over
the future of the project, then it doesn't qualify as an Apache project,
and we need to remedy that.

Really, what I'm trying to do is answer this question for myself - "Can
I vote +1 on a release based on the trunk?".  There have been a lot of
changes to the trunk code.  Yes, many that I don't understand.  I've
done more management than you think.  I don't require that I understand
everything.  That leads me to ask "How can I be confident about a
release?"

The best answer I have is to ask "does it pass the regression tests?". 
But that implies another question - "Do I trust the tests?"  And the
answer to that is "currently, no, because from what I can see there have
also been changes to the tests".

I'm honestly and truly not passing judgement on the quality of the
code.  I honestly don't know if it's good or bad.  I have to confess
that, given that Jini was written as a top-level project at Sun,
sponsored by Bill Joy, when Sun was at the top of its game, and the Jini
project team was a "who's-who" of distributed computing pioneers, the
idea that it's riddled with concurrency bugs surprises me.  But mainly,
I'm still trying to answer that question - "How do I know if it's good?"

Here's what I'm doing:

- I'm attempting to run the tests from "tags/2.2.0" against the "2.2"
branch.  When I have confidence in the "2.2" branch, I'll publish the
results, ask anyone else who's interested to test it, and then call for
a release on "2.2.1"
- After that, the developers need to reach consensus about how to move
forward.

Cheers,

Greg.



> I think this is a deliberate attack on the project because you haven't
> been following development in trunk and now you're scared because you
> see changes you don't understand.
> 
> I've been following your developments in surrogates, an impressive
> amount of productivity.  Although I think you should consider
> upgrading apache.commons vfs to version 2 before releasing.
> 
> Open your mind and ask questions, the code isn't set in stone, you
> have an obligation as project lead to encourage and nurture
> development, not stifle it.
> 
> You strike me as someone who's a very good programmer, but still
> learning leadership because you lack faith in others and must do
> everything yourself.  Remember I offered to assist with Surrogates,
> but you wanted to work alone? 
> 
> You need to let go and give others a go too.
> 
> How you handle this matter will be a test for your own personal
> development and an opportunity to grow as a leader. 
> 
> You also hold the future of this project in your hands, so I hope you
> find strength to let go.
> 
> Regards,
> 
> Peter.
> 
> - Original message -
> >
> > OK, so in my last message I talked about how (speaking only for
> myself) I'm a
> > little nervous about the state of the trunk.
> >
> > So what now? 
> >
> > Problems we need to avoid in this discussion:
> > -
> >
> > - Conflation of source tree structure issues with build tool
> selection.
> > - Conflation of Maven build, Maven as codebase provider (artifact urls), and
> >   posting artifacts to Maven Central
> > - Wish lists of pet features
> > - Bruised egos and personal criticisms.
> >
> > Issues I see, in no particular order:
> > --
> > - We've done changes both to the test framework and the code, and lots of them.
> >   We should do one or the other, or small amounts of coevolution, if
> >   absolutely necessary.
> > - Really, I'd like to see a completely separate integration test, and have
> >   the TCK tests separated out again.
> > - The source tree is incomprehensible.
> > - The tests appear to be awfully sensitive to their environment.  Insofar as
> >   when I run them locally on an untouched source tree, I get 280 failures.
> > - There have been changes to class loading and security subsystems.  These
> >   subsystems are core to Jini, and the changes were made to the existing
> >   source, so there's no way to "opt-out" of the changes.  I'd like to see
> >   radical changes be opt

Re: Next steps after 2.2.1 release

2013-04-07 Thread Patricia Shanahan

On 4/7/2013 5:03 PM, Greg Trasuk wrote:
...

I'm honestly and truly not passing judgement on the quality of the
code.  I honestly don't know if it's good or bad.  I have to confess
that, given that Jini was written as a top-level project at Sun,
sponsored by Bill Joy, when Sun was at the top of its game, and the Jini
project team was a "who's-who" of distributed computing pioneers, the
idea that it's riddled with concurrency bugs surprises me.  But mainly,
I'm still trying to answer that question - "How do I know if it's good?"

...

I don't know whether it has concurrency bugs, and that is a problem in 
its own right. The theory of why it does not suffer from concurrency 
problems is nowhere near clear.


I have no faith in the infallibility of Sun developers, because I used 
to be one. Some of them were very, very smart, but those were not 
necessarily the ones writing every line of code. The issue is not the 
distributed system design, but details of coding that may be leading to 
local concurrency problems within a program.


I am a little worried that my doubts about RetryTask may lead to 
over-focus on that issue. It should be considered as a candidate, but I was 
never able to become certain there was a bug involving it. If I had, I 
would have created an issue for it and fixed it.


Patricia



Re: Next steps after 2.2.1 release

2013-04-08 Thread Dan Creswell
So

On 7 April 2013 22:16, Peter  wrote:

> +1 for 2.2.1 release.
> +1 to investigate modular build options.
>
> -1 for broad brush proposal based on incorrect assumptions.
>
>
> I'm concerned people are jumping to conclusions and making incorrect
> assumptions, the understanding of the current build is clearly lacking:
>
>
And where would one find information on "the current build"?

Heck, define "the current build". Which branch would that be?

Are you referring to something sitting in skunk? As opposed to trunk which
to quote what we have publicly documented is "mainline development"

http://river.apache.org/development-process.html

My two cents, the reason it's "clearly lacking" is because it clearly
"isn't shared knowledge". And why is that?

> 1. The qa suite can be run against any other build by changing the
> RIVER_HOME property.  It's already a separate build, it's not coupled.  It
> is where it is for convenience and ease of use for new developers.
> 2.  Windows passes all tests with trunk and qa-refactoring.  We have
> RFC3986 compliant URI paths that can handle spaces and illegal characters.
>  Yes these builds are more jini standards compliant than the so called
> clean 2.2 branch.
> 3.  The quality of the code in trunk and qa-refactoring is far superior to
> the 2.2 branch.
> 4.  Branching a new trunk off 2.2, because it's "clean" is insufficient
> justification and discards three years of hard work.
> 5.  The integration tests cannot be replaced by junit, all test frameworks
> in use are of equal importance.
> 6.  I have not seen one question regarding the code, this has been
> developed in collaboration, with multiple discussions on trunk and with
> standards compliance testing over the last 3 years.
>
>
"In collaboration"? Sorry, I'm going to beg to differ. There has been
discussion for sure but agreement that all the bits and pieces being worked
on should be worked on or which branch to put them in or how far to go
before making a release or what should be done to help out the community?
No. Don't think so.

Just look here and see: http://river.apache.org/roadmap.html


> Furthermore:
>
> I reiterate; I am confident the latest code is superior to 2.2 and is API
> compatible.
>

Except you still have at least one outstanding likely concurrency problem
and the nature of these means we can never be sure they're all gone.

You can claim superiority in your own opinion, which is lovely but it isn't
collaborative nor is it hard proven fact. The changes you've made are
important to you, are they for everyone else? Maybe that doesn't matter?


>
> My next task is to refactor or remove TaskManager.  Once this is done I'll
> release 2.3.0
>

Why? Because we have as yet unproven assumptions that it *might* be the
cause of problems in 2.3.0?

Evidence?

And, whilst it might be your next task, is it the community's?


>
> I refuse to participate in a new trunk branched off 2.2, it is doomed to
> fail, discarding 3 years of synchronization fixes, concurrency improvements
> as well as fixes for fundamental platform issues like over reliance on DNS
> is engaging in an act of project vandalism.
>

Over-reliance on DNS is not a fundamental platform issue and no one is
talking about discarding. People *are* talking about a more gradual change
process and multiple releases. Something like continuous integration and
iterative, micro-releases which is generally agreed to be a "good thing".


>
> Developers are free to maintain a 2.2 branch, I'm not against that.  In
> fact it would allow a transition period.
>
> People need to think very carefully before making rash decisions, remember
> your actions are on public display, do more research, choose your next move
> wisely.
>

Yes sir, they are indeed, yours too.

There are a bunch of you with good intentions, equally you all, rather
obviously, have your own individual agendas and they are showing through.
And you aren't discussing them openly or indeed acting much like a
community because of it.

The thing of it is, if you got your individual agendas organised and onto a
roadmap you'd have a pretty solid future and some great prospects for the
codebase. Alas, you're all binning it off. This has been a problem for
River way back in its Jini days and we're no further forward. Greg T (oh
wait, what's his role again? You elected him remember) is attempting to get
that discussion under way yet again. Looks like it's doomed yet again.

Frankly, it makes me sick because it's such a criminal waste of talent and
code. Sort it, for heaven's sake.



>
> Regards,
>
> Peter Firmstone.
>
> - Original message -
> >
> > On Sun, 2013-04-07 at 10:18, Dennis Reedy wrote:
> > > On Apr 6, 2013, at 10:26 PM, Greg Trasuk wrote:
> > > >
> > > > Proposal
> > > > 
> > > >
> > > > 1 - Release version 2.2.1 from the 2.2 branch.
> > >
> > > +1 for this, we really need to get this done, and get this done before
> > > the end of the month.
> >
> > Agreed.  I was sincer

Re: Next steps after 2.2.1 release

2013-04-08 Thread Dan Creswell
Peter,

I shall remind you of your statement elsewhere about behaviour in public.
Dude, I know you're a much better person than the below suggests.

Perhaps you wrote it in anger or frustration or fatigue or some
combination. Nevertheless it doesn't come off well and would point at you
needing to do just as much development of leadership skills as you assert
is required for Greg.

Trust has to be earned just as much as granted. It starts from respect and
quality dialogue.

On 7 April 2013 22:54, Peter  wrote:

> Greg, why have you repeated this message?
>
> I think this is a deliberate attack on the project because you haven't
> been following development in trunk and now you're scared because you see
> changes you don't understand.
>
> I've been following your developments in surrogates, an impressive amount
> of productivity.  Although I think you should consider upgrading
> apache.commons vfs to version 2 before releasing.
>
> Open your mind and ask questions, the code isn't set in stone, you have an
> obligation as project lead to encourage and nurture development, not stifle
> it.
>
> You strike me as someone who's a very good programmer, but still learning
> leadership because you lack faith in others and must do everything
> yourself.  Remember I offered to assist with Surrogates, but you wanted to
> work alone?
>
> You need to let go and give others a go too.
>
> How you handle this matter will be a test for your own personal
> development and an opportunity to grow as a leader.
>
> You also hold the future of this project in your hands, so I hope you find
> strength to let go.
>
> Regards,
>
> Peter.
>
> - Original message -
> >
> > OK, so in my last message I talked about how (speaking only for myself)
> I'm a
> > little nervous about the state of the trunk.
> >
> > So what now?
> >
> > Problems we need to avoid in this discussion:
> > -
> >
> > - Conflation of source tree structure issues with build tool selection.
> > - Conflation of Maven build, Maven as codebase provider (artifact urls), and
> >   posting artifacts to Maven Central
> > - Wish lists of pet features
> > - Bruised egos and personal criticisms.
> >
> > Issues I see, in no particular order:
> > --
> > - We've done changes both to the test framework and the code, and lots of
> >   them.  We should do one or the other, or small amounts of coevolution, if
> >   absolutely necessary.
> > - Really, I'd like to see a completely separate integration test, and have
> >   the TCK tests separated out again.
> > - The source tree is incomprehensible.
> > - The tests appear to be awfully sensitive to their environment.  Insofar as
> >   when I run them locally on an untouched source tree, I get 280 failures.
> > - There have been changes to class loading and security subsystems.  These
> >   subsystems are core to Jini, and the changes were made to the existing
> >   source, so there's no way to "opt-out" of the changes.  I'd like to see
> >   radical changes be optional until proven in the field, where possible.  In
> >   the case of policy providers and class loaders, that should be easy to do.
> > - Similarly, it seems there have been some changes to the JERI framework.
> > - There are ".jar" files in our repository.  I'll stipulate that the
> >   licensing has been checked, but it smells bad.
> >
> > Discussion
> > -
> > I guess the biggest thing I'd like to see is stability in the test
> framework.
> > Perhaps it needs refactoring or reorganization, but if so, we need to be
> very
> > careful to separate it from changes to the core functionality.
> >
> > Next, I'd like for it to be easier to comprehend the source tree.  I
> think a
> > good way to do that is to separate out (carefully) the core Jini package
> > (basically the contents of jsk-platform.jar) and the service
> implementations.
> > There's no reason that we have to have one huge
> everything-but-the-kitchen-sink
> > distribution.  That's just a holdover from how Sun structured the JTSK -
> It was
> > literally a "starter kit".  To me it would be fine to have separate
> deliverables
> > for the platform and the services.
> >
> > While we're separating out the services, it might also be a decent time
> to
> > implement Maven-based builds if we think that's a good idea.  I'd start
> with
> > Reggie.  It would also be a good time to get rid of the "com.sun.jini"
> packages.
> >
> > Aside:  I'm personally ambivalent on Maven (which is to say I'm nowhere
> near as
> > negative on it as I once was).  I do agree with Dennis, though, that the
> jars
> > and appropriate poms need to be published to Maven Central.  There's no
> doubt
> > that users will appreciate that.
> >
> > Once we have a stable set of regression tests, then OK, we could think
> about
> > improving performance or using Maven repositories as the codebase server.
> >
> > I realize this won't be popular, b

Re: Next steps after 2.2.1 release

2013-04-08 Thread Peter

- Original message -
>
> On Sun, 2013-04-07 at 17:54, Peter wrote:
> > Greg, why have you repeated this message?
> >
>
> First time I sent it was from the wrong email address, so it got hung up
> in moderation.  I sent it again from my subscribed address.  I'm
> guessing someone just moderated the original through.
>
>

My apologies, it gave me the impression you were escalating an argument to 
roll trunk back 3 years.

Unfortunately the tests can't prove the absence of errors; concurrency problems 
can lie dormant for years.  The tests passed previously with inadequate 
synchronization, so it's plausible that client code could also have inadequate 
synchronization and experience issues.
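[Editor's note: a generic Java illustration of this point, using assumed names rather than River's actual code. A lost-update race can hide behind green test runs indefinitely, because any individual run may interleave harmlessly; only the synchronized counter below is deterministic.]

```java
import java.util.concurrent.CountDownLatch;

// Generic illustration only -- this is NOT River code.  A lost-update race:
// the unsynchronized counter can (and usually does) drop increments under
// contention, yet nothing guarantees a given test run will catch it.
public class DormantRace {
    static int unsafeCount = 0;               // plain int, no synchronization
    static int safeCount = 0;
    static final Object lock = new Object();

    static void hammer(final int threads, final int perThread)
            throws InterruptedException {
        final CountDownLatch start = new CountDownLatch(1);
        final CountDownLatch done = new CountDownLatch(threads);
        for (int i = 0; i < threads; i++) {
            new Thread(new Runnable() {
                public void run() {
                    try { start.await(); } catch (InterruptedException ignored) {}
                    for (int j = 0; j < perThread; j++) {
                        unsafeCount++;                       // racy read-modify-write
                        synchronized (lock) { safeCount++; } // never loses an update
                    }
                    done.countDown();
                }
            }).start();
        }
        start.countDown();  // release all threads at once to maximise interleaving
        done.await();
    }

    public static void main(String[] args) throws InterruptedException {
        hammer(8, 100_000);
        System.out.println("expected 800000, unsafe=" + unsafeCount
                + ", synchronized=" + safeCount);
        // safeCount is always 800000; unsafeCount is usually lower, but a
        // lucky run can still hit 800000 -- which is why passing tests alone
        // prove nothing about the absence of such bugs.
    }
}
```

This is the sense in which a concurrency bug can "lie dormant": the failure depends on thread scheduling, not on the test inputs.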

There are a number of JIRA issues I need to follow up on; these known issues 
may be related to the random failures.  One in particular explains how 
unsynchronized access is used to avoid deadlock:

River-145
River-348
River-258
River-140
River-113
River-43
River-37
River-30 (includes patch)

The tests can be run against previous releases to simulate an environment where 
only the test code has changed.



> Anyway, let's address one or two of your points...
>
> I see you writing inflammatory statements about my leadership skills and
> I think you're  upset because you think I was questioning the quality of
> your work. I understand.  You've put a lot of effort into the codebase.
>
> I feel sorry that you feel that way - it wasn't what I intended.
>
> Apache doesn't recognize any kind of a "project leader" position, and I
> don't pretend to hold any such influence over River.  I'm speaking as a
> committer and PMC member.  I certainly don't think I "hold the future of
> the project in my hands".  If anyone does hold individual control over
> the future of the project, then it doesn't qualify as an Apache project,
> and we need to remedy that.
>
> Really, what I'm trying to do is answer this question for myself - "Can
> I vote +1 on a release based on the trunk?".  There have been a lot of
> changes to the trunk code.  Yes, many that I don't understand.  I've
> done more management than you think.  I don't require that I understand
> everything.  That leads me to ask "How can I be confident about a
> release?"
>
> The best answer I have is to ask "does it pass the regression tests?".
> But that implies another question - "Do I trust the tests?"  And the
> answer to that is "currently, no, because from what I can see there have
> also been changes to the tests".
>
> I'm honestly and truly not passing judgement on the quality of the
> code.  I honestly don't know if it's good or bad.  I have to confess
> that, given that Jini was written as a top-level project at Sun,
> sponsored by Bill Joy, when Sun was at the top of its game, and the Jini
> project team was a "who's-who" of distributed computing pioneers, the
> idea that it's riddled with concurrency bugs surprises me.  But mainly,
> I'm still trying to answer that question - "How do I know if it's good?"
>
> Here's what I'm doing:
>
> - I'm attempting to run the tests from "tags/2.2.0" against the "2.2"
> branch.  When I have confidence in the "2.2" branch, I'll publish the
> results, ask anyone else who's interested to test it, and then call for
> a release on "2.2.1"
> - After that, the developers need to reach consensus about how to move
> forward.
>
> Cheers,
>
> Greg.
>
>
>
> > I think this is a deliberate attack on the project because you haven't
> > been following development in trunk and now you're scared because you
> > see changes you don't understand.
> >
> > I've been following your developments in surrogates, an impressive
> > amount of productivity.  Although I think you should consider
> > upgrading apache.commons vfs to version 2 before releasing.
> >
> > Open your mind and ask questions, the code isn't set in stone, you
> > have an obligation as project lead to encourage and nurture
> > development, not stifle it.
> >
> > You strike me as someone who's a very good programmer, but still
> > learning leadership because you lack faith in others and must do
> > everything yourself.  Remember I offered to assist with Surrogates,
> > but you wanted to work alone?
> >
> > You need to let go and give others a go too.
> >
> > How you handle this matter will be a test for your own personal
> > development and an opportunity to grow as a leader.
> >
> > You also hold the future of this project in your hands, so I hope you
> > find strength to let go.
> >
> > Regards,
> >
> > Peter.
> >
> > - Original message -
> > >
> > > OK, so in my last message I talked about how (speaking only for
> > myself) I'm a
> > > little nervous about the state of the trunk.
> > >
> > > So what now?
> > >
> > > Problems we need to avoid in this discussion:
> > > -
> > >
> > > - Conflation of source tree structure issues with build tool
> > selection.
> > > - Conflation of Maven build, Maven as codebase provider (artif

Re: Next steps after 2.2.1 release

2013-04-08 Thread Gregg Wonderly

On 4/7/2013 7:03 PM, Greg Trasuk wrote:
I'm honestly and truly not passing judgement on the quality of the code. I 
honestly don't know if it's good or bad. I have to confess that, given that 
Jini was written as a top-level project at Sun, sponsored by Bill Joy, when 
Sun was at the top of its game, and the Jini project team was a "who's-who" of 
distributed computing pioneers, the idea that it's riddled with concurrency 
bugs surprises me. But mainly, I'm still trying to answer that question - "How 
do I know if it's good?" Here's what I'm doing: - I'm attempting to run the 
tests from "tags/2.2.0" against the "2.2" branch. When I have confidence in 
the "2.2" branch, I'll publish the results, ask anyone else who's interested 
to test it, and then call for a release on "2.2.1" - After that, the 
developers need to reach consensus about how to move forward. Cheers, Greg.


This is an important issue to address.  I know a lot of people here probably 
don't participate on the Concurrency-interest mailing list that has a wide range 
of discussion about the JLS vs the JMM and what the JIT compilers actually do to 
code these days.


The number one issue that you need to understand, is that the optimizer is 
working against you more and more these days if you don't have JMM details 
exactly right.  Statements are being reordered more and more, including actual 
"assignments" which can expose uninitialized data items in "racy" concurrent 
code.  The latest example is the  Thread.setName()/Thread.getName() pair.  They 
are most likely always to be accessed by "other threads", yet there is no 
synchronization on them, including no "visibility" control with volatile even.  
What this means, is that if setName() and getName() are being called in a racy 
environment, the setName, will assign the array that is created to copy the 
characters into, before the arraycopy of the data occurs, potentially exposing 
an uninitialized name to getName().
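As a minimal sketch of the hazard Gregg describes and its usual repair (the class and field names here are invented for illustration; this is not the JDK's actual Thread code):

```java
// Illustrative only: a name holder with the same shape as the
// setName()/getName() race described above. Declaring the field volatile
// forces the array to be fully copied before its reference becomes visible
// to other threads, so a racing reader cannot observe uninitialized chars.
public class SafeName {
    private volatile char[] name;   // without volatile, the store may be reordered

    public SafeName(String initial) {
        setName(initial);
    }

    public void setName(String s) {
        char[] tmp = new char[s.length()];
        s.getChars(0, s.length(), tmp, 0);  // fill the array first...
        name = tmp;                         // ...then publish via a volatile write
    }

    public String getName() {
        return String.valueOf(name);        // volatile read pairs with the write
    }
}
```

With a plain (non-volatile) field, the JIT is free to commit the `name = tmp` store before the copy completes; the volatile write forbids that reordering.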


There are literally hundreds of places in the JDK that still have these kinds of 
races going on, and no one at Oracle, based on how people are acting, appears to 
be responsible for dealing with it.  The Jini code has many of the same 
issues, which just randomly appear in stress cases on "slower" or "faster" 
hardware, depending on the issue.


When you haven't got sharing and visibility covered correctly, the JIT code 
rewrites can make execution order play a big part in conflating what you "see" 
happening versus what the "code" says should happen.


There are some very simple things to get the JIT out of the picture.  One of 
these, is to actually open the source up in an IDE and declare every field 
final.  If that doesn't work due to 'mutation' of values, change those fields to 
'volatile' so that it will compile again.   Then run your tests and you will now 
greatly diminish reordering and visibility issues so that you can just get to 
the simple "was it set correctly, before it was read" and "did we provide the 
correct atomicity for that update" kinds of questions that will help you 
understand things better when code is misbehaving.
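To make that recipe concrete, here is a hedged sketch of the "final first, volatile where mutation forces it" pass applied to an invented class (none of this is River code):

```java
// Invented example of the diagnostic pass described above: every field is
// declared final; the one field that must mutate after construction is made
// volatile instead, so the class still compiles and stays visible across
// threads with reordering largely taken out of the picture.
public class ConnectionState {
    private final String host;        // never mutated: final gives safe publication
    private final int port;           // likewise final
    private volatile boolean closed;  // mutated after construction: volatile for visibility

    public ConnectionState(String host, int port) {
        this.host = host;
        this.port = port;
    }

    public void close()          { closed = true; }
    public boolean isClosed()    { return closed; }
    public String address()      { return host + ":" + port; }
}
```

This does not fix atomicity bugs (a check-then-act on `closed` can still race), but it removes the reordering and visibility noise so those remaining questions are easier to answer.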


This is the kind of thing that Peter has been working through because the usage 
of the code in real life has not continued in the same way that it did when the 
code was written, and the JMM in JDK5 has literally broken so much software, all 
over the planet, that used to work quite well, because there wasn't a formal 
definition of "happens before".   Now that there is, the compiler optimizations 
are against you if you don't get it right.  The behaviors you will experience, 
because of reorderings that are targeted at all out performance (minimize 
traffic in and out of the CPU through memory subsystems), can create completely 
unexpected results.  Intra-thread semantics are kept correct, but inter-thread 
execution will just seem intangible because stuff will not be happening in the 
order the "code" says it should.


Gregg Wonderly



Re: Next steps after 2.2.1 release

2013-04-08 Thread Patricia Shanahan

On 4/8/2013 6:11 AM, Gregg Wonderly wrote:

On 4/7/2013 7:03 PM, Greg Trasuk wrote:

I'm honestly and truly not passing judgement on the quality of the
code. I honestly don't know if it's good or bad. I have to confess
that, given that Jini was written as a top-level project at Sun,
sponsored by Bill Joy, when Sun was at the top of its game, and the
Jini project team was a "who's-who" of distributed computing pioneers,
the idea that it's riddled with concurrency bugs surprises me. But
mainly, I'm still trying to answer that question - "How do I know if
it's good?" Here's what I'm doing: - I'm attempting to run the tests
from "tags/2.2.0" against the "2.2" branch. When I have confidence in
the "2.2" branch, I'll publish the results, ask anyone else who's
interested to test it, and then call for a release on "2.2.1" - After
that, the developers need to reach consensus about how to move
forward. Cheers, Greg.


This is an important issue to address.  I know a lot of people here
probably don't participate on the Concurrency-interest mailing list that
has a wide range of discussion about the JLS vs the JMM and what the JIT
compilers actually do to code these days.

...

I used to be a concurrency expert, but have not been following the topic
recently. For practical Java coding, I have tended to follow the ideas
in Java Concurrency in Practice. Do any of the changes invalidate that
approach?

Patricia




Re: Next steps after 2.2.1 release

2013-04-08 Thread Dan Creswell
> This is an important issue to address.  I know a lot of people here
>> probably don't participate on the Concurrency-interest mailing list that
>> has a wide range of discussion about the JLS vs the JMM and what the JIT
>> compilers actually do to code these days.
>>
> ...
>
> I used to be a concurrency expert, but have not been following the topic
> recently. For practical Java coding, I have tended to follow the ideas
> in Java Concurrency in Practice. Do any of the changes invalidate that
> approach?
>
>
No, they don't. The JMM hasn't really changed since the work Doug Lea did
for Java 5 and beyond. What has changed over time is the extent to which
typical JITs exploit the opportunities presented by the JMM for aggressive
instruction re-ordering etc.

If your code "does the right things" it'll be fine. It just potentially
runs better (it could actually run worse in some cases). If you've
misunderstood JMM or how it relates to JLS then you may have problems.


Re: Next steps after 2.2.1 release

2013-04-08 Thread Tom Hobbs
I'm not sure where I stand with regards to votes and what-not anymore, but
here's my opinion.  I think it's wise to do a quick release with the
minimum of changes in Right Now, particularly if those changes include
the JDK7 fix.  Releasing as many changes as there are without more
consideration just feels too risky.  The greater need right now is to get
the bare minimum release out asap to fix some real-life issues.

That's not a comment on what I think the quality of all the trunk changes
are, it's just my gut feel.

To counter some of Greg's comments, if we trusted the old tests for
previous releases, then digging our way out of the hole is not hugely
difficult.  We just run the old tests against the new code to verify the
new code.  Then we run the new tests against an old release.  There will be
some additional due diligence needed to tie up loose ends and un-grey some
areas, but I think we can then have a good degree of confidence in both the
new tests and new code.

When that's done, I would suggest that it sounds like another release would
be due.  Then the source tree can be straightened out with the right bits
merged to the right branches and the right branches being created for the
right work streams.

I don't think that there is much value in debating the motives people have
had for the code they've written/changed.  As has been said before, we've
all got our own itches to scratch.  If there is a technical reason for
blocking some change then fine, but I don't believe that a lack of
benchmarks detailing some pain point is a reason to throw it out.
 Questioning the Why is often useful because it can aid a discussion and
guide us to what the real What should be, but I'm not sure that it will in
this case - then again, I've been wrong before...  So I think that it is a
good approach to modify the code so it follows the advice given in what
many would consider the "Concurrency Bible" - even if I can't prove that
the previous implementation was flawed in some way.

Dan has mentioned the policy (or lack thereof) with regard to what gets put
onto trunk, and that is something that should really be discussed.  So some
questions:

- Is there a policy?
- Was it the right policy?
- Did we stick to the policy?
- What is a better policy?

Then everything else from Maven, to separate builds, to TaskManager
replacements (or not), to whatever else should just slot into place.

Lastly, I'm glad that the conversation has calmed down.  Thanks to both
Greg for not biting and Peter for responding in kind.  Your reactions to
what could have become a nasty situation speaks volumes to both of your
characters and that's A Good Thing.

Cheers,

Tom


On Mon, Apr 8, 2013 at 2:42 PM, Dan Creswell  wrote:

> > This is an important issue to address.  I know a lot of people here
> >> probably don't participate on the Concurrency-interest mailing list that
> >> has a wide range of discussion about the JLS vs the JMM and what the JIT
> >> compilers actually do to code these days.
> >>
> > ...
> >
> > I used to be a concurrency expert, but have not been following the topic
> > recently. For practical Java coding, I have tended to follow the ideas
> > in Java Concurrency in Practice. Do any of the changes invalidate that
> > approach?
> >
> >
> No, they don't. The JMM hasn't really changed since the work Doug Lea did
> for Java 5 and beyond. What has changed over time is the extent to which typical
> JITs exploit the opportunities presented by the JMM for aggressive
> instruction re-ordering etc.
>
> If your code "does the right things" it'll be fine. It just potentially
> runs better (it could actually run worse in some cases). If you've
> misunderstood JMM or how it relates to JLS then you may have problems.
>


Re: Next steps after 2.2.1 release

2013-04-08 Thread Peter
Greg,

I apologise again for my outburst, I was wrong about your leadership skills.

I couldn't help but notice the number of test failures, did they have an error 
message like the number of services started != number of services wanted?

If so, this is because port 4160 is in use.

See the latest arm test failure on jenkins for an example, exactly 280 tests 
failed there too.

Regards,

Peter.

- Original message -
> The tests appear to be awfully sensitive to their environment.  Insofar as 
> when
> I run them locally on an untouched source tree, I get 280 failures.



Re: Next steps after 2.2.1 release

2013-04-08 Thread Peter
Thanks Gregg,

You've hit the nail on the head, this is exactly the issue I'm having.

So I've been fixing safe publication in constructors by making fields final or 
volatile and ensuring "this" doesn't escape, fixing synchronisation on 
collections etc during method calls.

To fix deadlock, I investigate immutable non-blocking data structures with 
volatile publication if future state doesn't depend on previous state; if it 
does, a CAS on an atomic reference can be used instead of volatile.
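The CAS pattern Peter describes might be sketched like this (the class and its use of a listener set are invented for illustration, not taken from the River sources):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;

// Hedged sketch: state whose next value depends on its previous value is
// updated with a compareAndSet loop over immutable snapshots, instead of a
// plain volatile write, so concurrent updates are never lost and no lock is
// held while other threads read.
public class Listeners {
    private final AtomicReference<Set<String>> listeners =
            new AtomicReference<>(Collections.<String>emptySet());

    public void add(String id) {
        Set<String> prev;
        Set<String> next;
        do {
            prev = listeners.get();         // read the current snapshot
            next = new HashSet<>(prev);     // copy...
            next.add(id);                   // ...and modify the copy
        } while (!listeners.compareAndSet(prev, next)); // retry if another thread won
    }

    public Set<String> snapshot() {
        return listeners.get();             // non-blocking read
    }
}
```

If future state did not depend on previous state, the loop would be unnecessary and a single volatile write of the new snapshot would suffice.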

Often I find synchronization is quite acceptable if it is limited in scope; if 
you're synchronized or holding a lock while a thread is executing outside your 
object's scope of control, that's when deadlock is more likely to occur.

The policy providers were deadlock prone, which is why they're mostly immutable 
and non-blocking now; any synchronization or locking is limited.

I basically follow Doug Lea's Java Concurrency in Practice guidelines.

For debugging I follow Cliff Click's recommendations.

Unfortunately fixing concurrency bugs means finding a trace of execution, 
identifying all classes and inspecting the code visually.  FindBugs identifies 
cases of inadequate synchronization using static analysis.

Regards,

Peter.

- Original message -
> On 4/7/2013 7:03 PM, Greg Trasuk wrote:
> > I'm honestly and truly not passing judgement on the quality of the code. I
> > honestly don't know if it's good or bad. I have to confess that, given that
> > Jini was written as a top-level project at Sun, sponsored by Bill Joy, when
> > Sun was at the top of its game, and the Jini project team was a "who's-who" 
> > of
> > distributed computing pioneers, the idea that it's riddled with concurrency
> > bugs surprises me. But mainly, I'm still trying to answer that question - 
> > "How
> > do I know if it's good?" Here's what I'm doing: - I'm attempting to run the
> > tests from "tags/2.2.0" against the "2.2" branch. When I have confidence in
> > the "2.2" branch, I'll publish the results, ask anyone else who's interested
> > to test it, and then call for a release on "2.2.1" - After that, the
> > developers need to reach consensus about how to move forward. Cheers, Greg.
>
> This is an important issue to address.  I know a lot of people here probably
> don't participate on the Concurrency-interest mailing list that has a wide 
> range
> of discussion about the JLS vs the JMM and what the JIT compilers actually do 
> to
> code these days.
>
> The number one issue that you need to understand, is that the optimizer is
> working against you more and more these days if you don't have JMM details
> exactly right.  Statements are being reordered more and more, including actual
> "assignments" which can expose uninitialized data items in "racy" concurrent
> code.  The latest example is the  Thread.setName()/Thread.getName() pair.  
> They
> are most likely always to be accessed by "other threads", yet there is no
> synchronization on them, including no "visibility" control with volatile 
> even. 
> What this means, is that if setName() and getName() are being called in a racy
> environment, the setName, will assign the array that is created to copy the
> characters into, before the arraycopy of the data occurs, potentially exposing
> an uninitialized name to getName().
>
> There are literally hundreds of places in the JDK that still have these kinds 
> of
> races going on, and no one at Oracle, based on how people are acting, appears 
> to
> be responsible for dealing with it. The Jini code, has many many of the same
> issues that just randomly appear in stress cases on "slower" or "faster"
> hardware, depending on the issue.
>
> When you haven't got sharing and visibility covered correctly, the JIT code
> rewrites can make execution order play a big part in conflating what you "see"
> happening versus what the "code" says should happen.
>
> There are some very simple things to get the JIT out of the picture.  One of
> these, is to actually open the source up in an IDE and declare every field
> final.  If that doesn't work due to 'mutation' of values, change those fields 
> to
> 'volatile' so that it will compile again.    Then run your tests and you will 
> now
> greatly diminish reordering and visibility issues so that you can just get to
> the simple "was it set correctly, before it was read" and "did we provide the
> correct atomicity for that update" kinds of questions that will help you
> understand things better when code is misbehaving.
>
> This is the kind of thing that Peter has been working through because the 
> usage
> of the code in real life has not continued in the same way that it did when 
> the
> code was written, and the JMM in JDK5 has literally broken so much software, 
> all
> over the planet, that used to work quite well, because there wasn't a formal
> definition of "happens before".    Now that there is, the compiler 
> optimizations
> are against you if you don't get it right.  The behaviors you will experienc

Re: Next steps after 2.2.1 release

2013-04-08 Thread Greg Trasuk

On Mon, 2013-04-08 at 15:59, Peter wrote:
> Greg,
> 
> I apologise again for my outburst, I was wrong about your leadership skills.
> 
> I couldn't help but notice the number of test failures, did they have an 
> error message like the number of services started != number of services 
> wanted?
> 
> If so, this is because port 4160 is in use.
> 

Yes, that's it.  Definitely a configuration issue.  It's not that port
4160 is in use by anything else - the problem is that the test framework
is trying to start up multiple Reggies, which requires port overrides
that are currently not getting through.  Perhaps if someone recognizes
the issue they can clue me in and save some time.  If not, then it's
just a matter of slogging through the code to understand the test
configurations.

There's a file called 'reggie3_2Ports.properties' in
qa/src/com/sun/jini/test/share' that seems to call out these overrides,
but I haven't yet figured out how it's supposed to get included in the
test configuration.  I suspect it's something fairly trivial that needs
to be setup locally.

Anybody recognize the problem?  On the bright side, every failure I see
in the 2.2 branch that I'm testing locally appears to be caused by this
configuration problem, so everything looks very promising.

Cheers,

Greg.

> See the latest arm test failure on jenkins for an example, exactly 280 tests 
> failed there too.
> 
> Regards,
> 
> Peter.
> 
> - Original message -
> > The tests appear to be awfully sensitive to their environment.  Insofar as 
> > when
> > I run them locally on an untouched source tree, I get 280 failures.
> 



Re: Next steps after 2.2.1 release

2013-04-09 Thread Peter Firmstone
I've never experienced the issue locally (I see it on Jenkins quite a 
lot), but I suspect a stale registrar process left from another test may 
be stopping the socket from closing.  Note that registrars are also 
simulated for discovery tests, so it may not necessarily be Reggie.


The code is duplicated in two places in superclasses of the failing 
tests: the method portInUse(int port) is supposed to check whether the 
port is available, but it only selects from a list of LookupLocators 
known to have started, so it doesn't actually check that the port is free.
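A hedged sketch of what a portInUse(int) that actually probes the socket could look like (the class name and placement are invented here; this is not the qa framework's real code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Illustrative port probe: instead of consulting a list of known
// LookupLocators, try to bind the port and report whether the bind fails.
public final class Ports {
    private Ports() {}

    public static boolean portInUse(int port) {
        try (ServerSocket probe = new ServerSocket()) {
            probe.setReuseAddress(true);              // tolerate TIME_WAIT leftovers
            probe.bind(new InetSocketAddress(port));  // the actual availability check
            return false;                             // bind succeeded: port is free
        } catch (IOException e) {
            return true;                              // bind failed: something holds it
        }
    }
}
```

A probe like this is inherently racy (the port can be taken between the check and the real bind), so the framework would still want to handle bind failures, but it would at least catch a stale registrar squatting on 4160.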


Perhaps you can find the stale test process responsible?

Regards,

Peter.

Greg Trasuk wrote:

On Mon, 2013-04-08 at 15:59, Peter wrote:
  

Greg,

I apologise again for my outburst, I was wrong about your leadership skills.

I couldn't help but notice the number of test failures, did they have an error 
message like the number of services started != number of services wanted?

If so, this is because port 4160 is in use.




Yes, that's it.  Definitely a configuration issue.  It's not that port
4160 is in use by anything else - the problem is that the test framework
is trying to start up multiple Reggies, which requires port overrides
that are currently not getting through.  Perhaps if someone recognizes
the issue they can clue me in and save some time.  If not, then it's
just a matter of slogging through the code to understand the test
configurations.

There's a file called 'reggie3_2Ports.properties' in
qa/src/com/sun/jini/test/share' that seems to call out these overrides,
but I haven't yet figured out how it's supposed to get included in the
test configuration.  I suspect it's something fairly trivial that needs
to be setup locally.

Anybody recognize the problem?  On the bright side, every failure I see
in the 2.2 branch that I'm testing locally appears to be caused by this
configuration problem, so everything looks very promising.

Cheers,

Greg.

  

See the latest arm test failure on jenkins for an example, exactly 280 tests 
failed there too.

Regards,

Peter.

- Original message -


The tests appear to be awfully sensitive to their environment.  Insofar as when
I run them locally on an untouched source tree, I get 280 failures.
  



  




Re: Next steps after 2.2.1 release

2013-04-09 Thread Simon IJskes - QCG

On 09-04-13 11:44, Peter Firmstone wrote:

I've never experienced the issue locally (I see it on Jenkins quite a
lot), but I suspect a stale registrar process left from another test may
be stopping the socket from closing.  Note that registrars are also
simulated for discovery tests, so it may not necessarily be Reggie.

The code is duplicated in two places in superclasses of the tests that
are failing,  the method portInUse(int port) is supposed to check if the
ports available, but only selects from a list of LookupLocator's known
to have started, so doesn't actually check the port's available.

Perhaps you can find the stale test process responsible?


And other jobs get executed on the same machine at the same time. Maybe 
these consume ports?


The best way to find the problem looks to me like doing a 'lsof -i' in 
the test as soon as a port turns out to be used unexpectedly.


Gr. Simon

--
QCG, Software voor het MKB, 071-5890970, http://www.qcg.nl
Quality Consultancy Group b.v., Leiderdorp, Kvk Den Haag: 28088397


Re: Next steps after 2.2.1 release

2013-04-10 Thread Gregg Wonderly
I just want to extend this conversation a bit by saying that nearly everything 
about River is "concurrently accessed".  There are, of course, several places 
where work is done by one thread at a time, but new threads are created to do 
that work, and that means that "visibility" has to be considered.

I won't say that every single field in every class in River needs to be final 
or volatile, but that should not be considered an extreme.  Specifically, you 
might see code execute just fine without appropriate concurrency design, and 
then it will suddenly break when a new optimization appears on the scene, 
reordering something under the covers and creating an intangible behavior.  
Some "visibility bugs" might not ever manifest because of other "happens 
before" and "cache line sync" activities that happen implicitly based on the 
"current design" or "thread model".  We can "be happy" with "it ain't broke, so 
don't fix it", but I don't think that's very productive.

I personally, have been beating on various parts of Jini in my "fork" because 
of completely unpredictable results in discovery and discovery management.  
I've written, rewritten, debugged and stared at that code till I was blue in 
the face, because my ServiceUI desktop application just doesn't behave like it 
should.  Some of it is missing lifecycle management that was not in the 
original services, because Runtime.getRuntime().addShutdownHook() hasn't been used.  But 
other parts are these race conditions and thread scheduling overlaps (or 
underlaps) which keep discovery and notification from happening reliably.   
There are lots of different reasons why people might not be "complaining" about 
this stuff, but I would contend that the fact that there are many examples of 
people forking and extending Jini, which to me, reflects the fact that there 
are things that aren't correct, or functional in the wild, and this causes them 
to jump over the cliff and never look back.

We are at that point today, and Peter's continued slogging through the motions 
to track down and discover where the issues actually are, is an astronomical 
effort!  I have been very involved in several different, new work opportunities 
that have kept me from jumping in to participate in dealing with all of these 
issues, as I have really wanted to.  

Gregg Wonderly

On Apr 8, 2013, at 3:19 PM, Peter  wrote:

> Thanks Gregg,
> 
> You've hit the nail on the head, this is exactly the issue I'm having.
> 
> So I've been fixing safe publication in constructors by making fields final 
> or volatile and ensuring "this" doesn't escape, fixing synchronisation on 
> collections etc during method calls.
> 
> To fix deadlock, I investigate immutable non-blocking data structures with 
> volatile publication if future state doesn't depend on previous state; if it 
> does, a CAS on an atomic reference can be used instead of volatile.
> 
> Often I find synchronization is quite acceptable if it is limited in scope; 
> if you're synchronized or holding a lock while a thread is executing outside 
> your object's scope of control, that's when deadlock is more likely to occur.
> 
> The policy providers were deadlock prone, which is why they're mostly 
> immutable and non-blocking now; any synchronization or locking is limited.
> 
> I basically follow Doug Lea's Java Concurrency in Practice guidelines.
> 
> For debugging I follow Cliff Click's recommendations.
> 
> Unfortunately fixing concurrency bugs means finding a trace of execution, 
> identifying all classes and inspecting the code visually.  FindBugs 
> identifies cases of inadequate synchronization using static analysis.
> 
> Regards,
> 
> Peter.
> 
> - Original message -
>> On 4/7/2013 7:03 PM, Greg Trasuk wrote:
>>> I'm honestly and truly not passing judgement on the quality of the code. I
>>> honestly don't know if it's good or bad. I have to confess that, given that
>>> Jini was written as a top-level project at Sun, sponsored by Bill Joy, when
>>> Sun was at the top of its game, and the Jini project team was a "who's-who" 
>>> of
>>> distributed computing pioneers, the idea that it's riddled with concurrency
>>> bugs surprises me. But mainly, I'm still trying to answer that question - 
>>> "How
>>> do I know if it's good?" Here's what I'm doing: - I'm attempting to run the
>>> tests from "tags/2.2.0" against the "2.2" branch. When I have confidence in
>>> the "2.2" branch, I'll publish the results, ask anyone else who's interested
>>> to test it, and then call for a release on "2.2.1" - After that, the
>>> developers need to reach consensus about how to move forward. Cheers, Greg.
>> 
>> This is an important issue to address.  I know a lot of people here probably
>> don't participate on the Concurrency-interest mailing list that has a wide 
>> range
>> of discussion about the JLS vs the JMM and what the JIT compilers actually 
>> do to
>> code these days.
>> 
>> The number one issue that you need to understand, is that the optimizer is
>> working

Re: Next steps after 2.2.1 release

2013-04-11 Thread Peter Firmstone

Gregg,

Thanks again for your support.

I refactored LookupDiscovery and tidied up LookupLocatorDiscovery.

If you get some time, I could use a hand with other classes you've 
already fixed.


I'm working on MailboxImpl presently; there's some very dubious code: 
threads being started from inside their constructors, which are called 
from static init methods within MailboxImpl's constructor.
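The usual repair for that anti-pattern, sketched here with invented names rather than MailboxImpl's actual code, is a private constructor plus a static factory that only starts the thread once construction has completed:

```java
// Hedged sketch: never call Thread.start() inside a constructor, because the
// new thread can observe a partially constructed object ('this' escapes).
// Instead, finish construction, then start the worker from a factory method.
public class NotifierService {
    private final Thread worker;
    private volatile boolean started;

    private NotifierService() {
        // The Runnable captures 'this', but it is not executed until
        // start() is called below, after the constructor has returned.
        this.worker = new Thread(this::run, "notifier");
        this.worker.setDaemon(true);
    }

    public static NotifierService create() {
        NotifierService s = new NotifierService();
        s.worker.start();   // safe: construction is already complete
        s.started = true;
        return s;
    }

    private void run() {
        // deliver events; all final fields are guaranteed visible here
    }

    public boolean isStarted() { return started; }
}
```

The same shape works for the static-init variant: move the thread start out of the initializer and into whatever code publishes the fully built service.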


I know some would prefer me to prove something is broken before fixing 
it, providing tests that prove the failure, but this isn't an enterprise 
project and I lack the resources for such things; there's always the 
option of running a 2.2 maintenance branch for those who'd like to wait 
longer before upgrading.


Regards,

Peter.

On 11/04/2013 2:00 AM, Gregg Wonderly wrote:

I just want to extend this conversation a bit by saying that nearly everything about River is 
"concurrently accessed".  There are, of course several places, where work is done by one 
thread, at a time, but new threads are created to do that work, and that means that 
"visibility" has to be considered.

I won't say that every single field in every class in River needs to be final or volatile, but that should not be considered an extreme.  
Specifically, you might see code execute just fine without appropriate concurrency design, and then it will suddenly break when a new optimization 
appears on the scene, reordering something under the covers and creating an intangible behavior.  Some "visibility bugs" might not ever 
manifest because of other "happens before" and "cache line sync" activities that happen implicitly based on the "current 
design" or "thread model".  We can "be happy" with "it ain't broke, so don't fix it", but I don't think that's 
very productive.

I personally, have been beating on various parts of Jini in my "fork" because of 
completely unpredictable results in discovery and discovery management.  I've written, rewritten, 
debugged and stared at that code till I was blue in the face, because my ServiceUI desktop 
application just doesn't behave like it should.  Some of it is missing lifecycle management that 
was not in the original services, because Runtime.getRuntime().addShutdownHook() hasn't been used.  But 
other parts are these race conditions and thread scheduling overlaps (or underlaps) which keep 
discovery and notification from happening reliably.   There are lots of different reasons why 
people might not be "complaining" about this stuff, but I would contend that the fact 
that there are many examples of people forking and extending Jini, which to me, reflects the fact 
that there are things that aren't correct, or functional in the wild, and this causes them to jump 
over the cliff and never look back.

We are at that point today, and Peter's continued slogging through the motions 
to track down and discover where the issues actually are, is an astronomical 
effort!  I have been very involved in several different, new work opportunities 
that have kept me from jumping in to participate in dealing with all of these 
issues, as I have really wanted to.

Gregg Wonderly

On Apr 8, 2013, at 3:19 PM, Peter  wrote:


Thanks Gregg,

You've hit the nail on the head, this is exactly the issue I'm having.

So I've been fixing safe publication in constructors by making fields final or volatile 
and ensuring "this" doesn't escape, fixing synchronisation on collections etc 
during method calls.

To fix deadlock, I investigate immutable non-blocking data structures with 
volatile publication if future state doesn't depend on previous state; if it 
does, a CAS on an atomic reference can be used instead of volatile.

Often I find synchronization is quite acceptable if it is limited in scope; if 
you're synchronized or holding a lock while a thread is executing outside your 
object's scope of control, that's when deadlock is more likely to occur.

The policy providers were deadlock prone, which is why they're mostly immutable 
and non-blocking now; any synchronization or locking is limited.

I basically follow Doug Lea's Java Concurrency in Practice guidelines.

For debugging I follow Cliff Click's recommendations.

Unfortunately fixing concurrency bugs means finding a trace of execution, 
identifying all classes and inspecting the code visually.  FindBugs identifies 
cases of inadequate synchronization using static analysis.

Regards,

Peter.

- Original message -

On 4/7/2013 7:03 PM, Greg Trasuk wrote:

I'm honestly and truly not passing judgement on the quality of the code. I
honestly don't know if it's good or bad. I have to confess that, given that
Jini was written as a top-level project at Sun, sponsored by Bill Joy, when
Sun was at the top of its game, and the Jini project team was a "who's-who" of
distributed computing pioneers, the idea that it's riddled with concurrency
bugs surprises me. But mainly, I'm still trying to answer that question - "How
do I know if it's good?" Here's what I'm doing: - I'm attempting to run the
tests from "tags/2.2.0

Re: Next steps after 2.2.1 release

2013-04-11 Thread Patricia Shanahan

On 4/11/2013 4:15 AM, Peter Firmstone wrote:
...

I know some would prefer me to prove something is broken before fixing
it, providing tests that prove the failure, but this isn't an enterprise
project and I lack the resources for such things, there's always the
option of running a 2.2 maintenance branch for those who'd like to wait
longer before upgrading.

...

I'm still trying to remember/find a place where there seemed to me to be 
a possible race condition.


Back when I was actively working on this, I considered trying to set up 
a test, and all I could think of was to put a moderately long 
Thread.sleep() call in the code where I thought there was a window. I 
was not able to produce a failing test.
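For the record, the shape of such a sleep-widened probe might look like the following; the counter and the 200 ms window here are invented purely to illustrate the technique, not taken from the River code:

```java
// Illustrative only: a deliberately unsynchronized check-then-act, with a
// sleep inserted in the suspected window so a second thread can interleave
// and make the lost update reproducible rather than one-in-a-million.
public class RaceProbe {
    private int count;                    // intentionally not volatile/synchronized

    public void incrementWithWindow() throws InterruptedException {
        int read = count;                 // check
        Thread.sleep(200);                // artificially widened race window
        count = read + 1;                 // act: a lost update if two threads overlap
    }

    public int get() { return count; }
}
```

Run two threads through incrementWithWindow() and the final count is 1 rather than 2: both threads read 0 inside the widened window, and one update is lost. Removing the sleep makes the same bug merely rare, not absent.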


Patricia