Re: Unsafe.park/unpark, j.u.c.LockSupport and Thread

2015-02-10 Thread David Holmes

This time really adding Doug Lea :)

On 11/02/2015 11:43 AM, David Holmes wrote:

Adding Doug Lea.

On 11/02/2015 12:51 AM, Paul Sandoz wrote:

On Feb 10, 2015, at 3:39 PM, Chris Hegarty 
wrote:


Adding native methods to 166 classes will introduce another form of
dependency on the platform that i would prefer to avoid.


Right. And I suspect that the separate distributable of 166, outside
of the Java Platform, would not want to have to carry a separate
platform specific native library.


Yes.


Right, and for that reason the jsr166 distribution would not want to have to
know whether to use Thread or Unsafe.






But I don't see any reason why we couldn't move the implementation
from unsafe.cpp to jvm.cpp and invoke via a native method
implementation of LockSupport. It would still be just as "unsafe"
of course.



Can you think of any reasons, beyond that of "don't touch the core
classes", why we cannot copy this functionality over to
java.lang.{*, Thread} ?


Would you be thinking of making the changes public, i.e. new standard
API on java.lang.Thread ?



Yes, j.l.Thread seems like the ideal location :-)


I don't see any point in moving it to Thread now. The public supported
API already exists in LockSupport. As I said you'd have to deprecate it
in 9 with an intent to remove in 10 - assuming you are even allowed to
do that.

The issue is not LockSupport versus Thread, but the use of Unsafe.

David



Paul.



Re: Unsafe.park/unpark, j.u.c.LockSupport and Thread

2015-02-10 Thread David Holmes

On 10/02/2015 10:46 PM, Paul Sandoz wrote:

Hi David,

On Feb 10, 2015, at 1:02 PM, David Holmes  wrote:


Hi Paul,

When we added j.u.c there was reluctance to add these methods to Thread - this 
was the normal policy (for the time) of don't touch the core classes. So 
LockSupport is the public API, and Unsafe was chosen as the implementation as it 
is a de-facto interface to the VM from JDK code, rather than defining explicit 
native methods. (I must admit I'm unsure why we went this route rather than 
simply hooking into jvm.cpp with JVM_park/JVM_unpark. I think in many ways it 
was/is just easier to call an Unsafe method and define a new unsafe.cpp native 
call. Plus we had a bunch of other Unsafe APIs being used from j.u.c.)
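For illustration, a minimal sketch of the delegation described here - a LockSupport-style park/unpark implemented as a thin wrapper over Unsafe. This is not the actual j.u.c source; the reflective lookup of Unsafe and the class name are only for the sketch:

import java.lang.reflect.Field;

// Sketch only: LockSupport-style park/unpark as thin wrappers over Unsafe.
public class ParkSketch {
    private static final sun.misc.Unsafe U;
    static {
        try {
            Field f = sun.misc.Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            U = (sun.misc.Unsafe) f.get(null);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void park() {
        U.park(false, 0L);            // relative wait, no timeout: block until unparked or interrupted
    }

    public static void unpark(Thread t) {
        if (t != null) U.unpark(t);   // make the permit available; a parked thread returns from park()
    }

    public static void main(String[] args) {
        Thread main = Thread.currentThread();
        new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            unpark(main);
        }).start();
        park();                       // returns once the helper thread unparks us
        System.out.println("unparked");
    }
}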

So we can't just move things from LockSupport to Thread as we'd need to 
deprecate first etc etc.


I was thinking that the implementation of LockSupport could be updated to use 
any new stuff we could add to java.lang.{*, Thread}.

I am trying to tease apart the internal dependencies that 166 classes have on 
the platform. In the case of LockSupport there are two, Unsafe and Thread.

Adding native methods to 166 classes will introduce another form of dependency 
on the platform that i would prefer to avoid.



But I don't see any reason why we couldn't move the implementation from unsafe.cpp to 
jvm.cpp and invoke via a native method implementation of LockSupport. It would still be 
just as "unsafe" of course.



Can you think of any reasons, beyond that of "don't touch the core classes", 
why we cannot copy this functionality over to java.lang.{*, Thread} ?



Aside: this is where the infamous "spurious wakeup" from park() can arise. If you just 
randomly unpark() a Thread there is nothing to guarantee that the thread doesn't terminate and free 
its native resources while you are working on it. But the ParkEvents are type-stable-memory, so 
even if the event has been disassociated from the target thread you can still call unpark() as it 
is never freed. However if that ParkEvent has since been reused by another thread, then that thread 
will encounter a "spurious wakeup" if it calls park().
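For readers unfamiliar with the contract, a minimal sketch of how callers are expected to cope with such spurious returns - re-check the condition in a loop rather than assume one park() corresponds to one unpark():

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// Sketch only: park() may return spuriously, so the condition is re-checked in a loop.
public class SpuriousWakeupSketch {
    private final AtomicBoolean ready = new AtomicBoolean();
    private volatile Thread waiter;

    void await() {
        waiter = Thread.currentThread();
        while (!ready.get()) {                  // tolerate spurious returns from park()
            LockSupport.park(this);
            if (Thread.interrupted()) break;    // park() also returns on interrupt
        }
    }

    void signal() {
        ready.set(true);                        // publish the condition first...
        LockSupport.unpark(waiter);             // ...then wake the (possibly parked) waiter
    }
}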



I see, and can the same happen for Object.wait as well? I gather so from the 
documentation.


The spec allows for it, but it can't happen for the same reason as it can for 
unpark(), since Object.wait is based on Object monitors, which can't vanish while in use.


David





On Feb 10, 2015, at 1:04 PM, David Holmes  wrote:

Oh I just realized/remembered why we use Unsafe for this - it is because of the 
potential for intrinsics.


I can certainly imagine it's convenient place to put things in that regard.

Paul.



Re: Unsafe.park/unpark, j.u.c.LockSupport and Thread

2015-02-10 Thread David Holmes

Adding Doug Lea.

On 11/02/2015 12:51 AM, Paul Sandoz wrote:

On Feb 10, 2015, at 3:39 PM, Chris Hegarty  wrote:


Adding native methods to 166 classes will introduce another form of dependency 
on the platform that i would prefer to avoid.


Right. And I suspect that the separate distributable of 166, outside of the 
Java Platform, would not want to have to carry a separate platform specific 
native library.


Yes.


Right, and for that reason the jsr166 distribution would not want to have to 
know whether to use Thread or Unsafe.







But I don't see any reason why we couldn't move the implementation from unsafe.cpp to 
jvm.cpp and invoke via a native method implementation of LockSupport. It would still be 
just as "unsafe" of course.



Can you think of any reasons, beyond that of "don't touch the core classes", 
why we cannot copy this functionality over to java.lang.{*, Thread} ?


Would you be thinking of making the changes public, i.e. new standard API on 
java.lang.Thread ?



Yes, j.l.Thread seems like the ideal location :-)


I don't see any point in moving it to Thread now. The public supported 
API already exists in LockSupport. As I said you'd have to deprecate it 
in 9 with an intent to remove in 10 - assuming you are even allowed to 
do that.


The issue is not LockSupport versus Thread, but the use of Unsafe.

David



Paul.



Re: RFR: JDK-8072842 Add support for building native JTReg tests

2015-02-10 Thread David Holmes

Hi Magnus,

On 11/02/2015 12:23 AM, Magnus Ihse Bursie wrote:

Here is an addition to the build system, that will compile native
libraries and executables and make them available for JTReg tests
written in Java.


Sorry I'm missing the basics here: when does this compilation take 
place? As part of a normal build? Where will the build artifacts go?


Thanks,
David H.


This patch is the result of the work (in serial, mostly) of Staffan
Larsen, David Simms and me. (And possibly more people that I don't know about.)

In its current form, the patch only provides the framework on which to
attach real tests. To prove the concept, some initial dummy tests are
present. These are supposed to be removed as soon as real tests start
to propagate into the jdk and hotspot jtreg test suites.

The behavior is based on the following design: For directories with
native jtreg compilation enabled, the build system searches for native
files prefixed with either "lib" or "exe", and compiles a free-standing
library or an executable, respectively, for each such file. Since all
compiled files are placed in a single directory (this is currently a
requirement from the jtreg native support), there is the additional
requirement that all files have unique names.
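To make the convention concrete, a hypothetical example (file, library and test names are made up): a source file named libAddJNI.c in such a directory would be compiled into a standalone library, and a Java jtreg test would exercise it roughly like this, with jtreg pointing the native library path at the single output directory:

// Hypothetical jtreg test (names illustrative); run with the native test
// output directory on the library path so loadLibrary() can resolve it.
public class AddJNITest {
    static {
        System.loadLibrary("AddJNI");   // loads the library built from libAddJNI.c
    }

    private static native int add(int a, int b);

    public static void main(String[] args) {
        if (add(2, 3) != 5) {
            throw new RuntimeException("native add(2, 3) returned the wrong result");
        }
    }
}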

My personal opinion is that a better long-term approach is to compile
all tests into a single library, if possible. (I realize some tests need
to be separate to test library loading, etc.) For that to work, however,
JTReg needs to be changed. The build framework in the patch is designed
in such a way that it will be easy to add compilation to a common
library in the future, if that restriction is lifted from JTReg.

This patch also lays the foundation for building additional tests, not
only native jtreg tests, by the build system in the future. For
instance, it codifies the suggested correspondence between make targets,
intermediate test output and test image final build results. With the
on-going test co-location project, we expect a stream of tests to be
added to the build system in the future, and we hope this project can
serve as an example of a suitable way of integration.

The patch modifies hotspot code, but it's mostly due to the addition of
the new dummy tests. My preference would be to integrate this patch via
jdk9-dev, since it modifies the build system most of all, but I'm open
for discussion.

Bug: https://bugs.openjdk.java.net/browse/JDK-8072842
WebRev:
http://cr.openjdk.java.net/~ihse/JDK-8072842-build-native-jtreg-tests/webrev.01


/Magnus


Re: RFR 8071479: Stream and lamdification improvements to j.u.regex.Matcher

2015-02-10 Thread Stuart Marks

Hi Paul,

I spent some time looking at this API. Overall it seems to me that things work a 
bit more nicely when these methods are added to Pattern instead of Matcher. 
Unfortunately there are some odd things with the existing API that make this 
tradeoff not so obvious.


First, here's what a simple replacement operation looks like when replaceAll() 
is added to Matcher:


String input = "foobbfooabbbfoo";
Pattern p = Pattern.compile("a*b");
Matcher m = p.matcher(input);
String result = m.replaceAll(mr -> mr.group().toUpperCase());

But if replaceAll() is on Pattern, we can skip a step:

String input = "foobbfooabbbfoo";
Pattern p = Pattern.compile("a*b");
String result = p.replaceAll(input, mr -> mr.group().toUpperCase());

Getting a stream of match results is similar. So yes, I agree that it simplifies 
things to have these be on Pattern instead of Matcher.
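For comparison, a sketch of the streaming form under discussion, assuming the proposed results() method on Matcher returning Stream<MatchResult> (as in the webrev; not a final API, so treat the name and signature as provisional):

import java.util.List;
import java.util.regex.MatchResult;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Sketch only: assumes the proposed Matcher.results() returning Stream<MatchResult>.
public class ResultsSketch {
    public static void main(String[] args) {
        Matcher m = Pattern.compile("a*b").matcher("foobbfooabbbfoo");
        List<String> matches = m.results()          // proposed method under review
                                .map(MatchResult::group)
                                .collect(Collectors.toList());
        System.out.println(matches);                // [b, b, ab, b, b]
    }
}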


An advantage of having these on Pattern is that the matcher that gets created is 
encapsulated, and its state isn't exposed to being mucked about by the 
application. Thus you can avoid the additional concurrent modification checks 
that you have to do if replaceAll et al. are on Matcher.


Unfortunately, putting these on Pattern now creates some difficulties meshing 
with the existing API.


One issue is that Matcher already has replaceAll(String) and 
replaceFirst(String). It would be strange to have these here and to have 
replaceAll(replacer) and replaceFirst(replacer) on Pattern.


Another issue is that Matcher supports matching on a region (subrange) of its 
input. For example, today you can do this:


pattern.matcher(input).region(start, end)

The region will constrain the matching for certain operations, such as find() 
(but not replaceAll or replaceFirst). If something like results() were added to 
Matcher, I'd expect that it would respect the Matcher's region, but if results() 
(or matches() as you called it) were on Pattern, the region constraint would be 
lacking.


Also note that Pattern already has this:

static boolean matches(String regex, CharSequence input)

so I don't think an overload of matches() that returns a Stream would be a good 
idea. (Maybe findAll()?)


Another issue, not directly related to where the new lambda/streams methods get 
added, is that MatchResult allows references only to numbered capturing groups. 
Matcher, which implements MatchResult, also supports named capturing groups, 
with the new overloaded methods group(String), start(String), and end(String). 
These were added in Java 7. Logically these also belong on MatchResult, but they 
probably weren't added because of the compatibility issue of adding methods to 
interfaces. Maybe we should add these as default methods to MatchResult.


(But what would the supported implementation be? Just throw 
UnsupportedOperationException?)
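For illustration, a hypothetical sketch of what such default methods could look like; since a plain MatchResult has no access to the pattern's group names, the defaults can do little more than throw, and Matcher would override them (the interface name is made up):

import java.util.regex.MatchResult;

// Hypothetical sketch only; not an existing JDK interface.
interface MatchResultWithNames extends MatchResult {
    default String group(String name) {
        throw new UnsupportedOperationException("named groups not supported by this MatchResult");
    }
    default int start(String name) {
        throw new UnsupportedOperationException("named groups not supported by this MatchResult");
    }
    default int end(String name) {
        throw new UnsupportedOperationException("named groups not supported by this MatchResult");
    }
}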


--

I'm not entirely sure where this leaves things. It certainly seems more 
convenient to have the new methods on Pattern. But given the way the existing 
API is set up, it seems like it's a better fit to add them to Matcher.


s'marks



On 2/9/15 6:18 AM, Paul Sandoz wrote:

Here is an alternative that pushes the methods on to Pattern instead:

   
http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8071479--Matcher-stream-results/on-Pattern/webrev/

(Where the webrev reports some files as empty, please ignore those; I have this 
webrev stacked on the previous one.)

I have also included replaceFirst.

This simplifies things for streaming on the match results and also for 
replacing.

Note that the existing replace* methods on Matcher reset the matcher before 
matching and indicate that the matcher should be reset afterwards for reuse. 
Thus there is no loss in functionality moving such lambda-accepting methods 
from Matcher to Pattern. It comes down to the performance of reusing a matcher, 
which does not seem so compelling to me.

Paul.

On Feb 5, 2015, at 11:59 AM, Paul Sandoz  wrote:


Hi.

Please review these stream/lambda enhancements on Matcher:

  
http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8071479--Matcher-stream-results/webrev/

Two new methods are added to Matcher:

1) replaceAll(Function<MatchResult, String> replacer), which is more flexible than the 
existing replaceAll that accepts a single replacement string.

2) Stream<MatchResult> results(), which returns a stream of MatchResult for all 
matches.

The former does introduce a minor source incompatibility for a null argument, 
but then so did the new append methods accepting StringBuilder that were 
recently added (see JDK-8039124).

For the latter i opted to place the method on Matcher rather than Pattern as i 
think that is a better fit with current usages of Matcher and operating on a 
MatchResult. That marginally increases the complexity since co-modification 
checking is required.

I updated the test PatternStreamTest to derive the expected result.


I suppose I could add another method replaceFirst(Function<MatchResult, String> 
replacer) if anyone feels strongly about that. Consistency-wise it seems the righ

[9] 8064562: (doc) errors in java.io.PushbackInputStream API documentation

2015-02-10 Thread Brian Burkhalter
Please review at your convenience.

Issue:  https://bugs.openjdk.java.net/browse/JDK-8064562
Patch:  http://cr.openjdk.java.net/~bpb/8064562/webrev.00/

Thanks,

Brian


Re: Lexicographic array comparators

2015-02-10 Thread John Rose
On Feb 10, 2015, at 3:15 PM, Martin Buchholz  wrote:
> 
> I was surprised that System.arraycopy hasn't been specialized for every
> array type.
> I guess the JIT is good at specializing all callers.

Yes.  Usually the arguments have specific static types the JIT can see.
The Object formal type doesn't obscure this.

> I don't know whether we will someday regret the explosion of array
> comparison methods.

It's technical debt.  When we go to value types there will be an infinite 
number of array types.
That's why in Project Valhalla we are thinking hard, before that, about 
parametric polymorphism.
We need to be able to have true polymorphism over APIs in T[], where T can be 
ref, prim, or value.
At that point, hopefully, what we have won't be impossible to retrofit.

— John

Re: Lexicographic array comparators

2015-02-10 Thread Martin Buchholz
The addition of array comparison intrinsics makes sense - I've missed them
myself on occasion.

I was surprised that System.arraycopy hasn't been specialized for every
array type.
I guess the JIT is good at specializing all callers.

I don't know whether we will someday regret the explosion of array
comparison methods.
The case for the intrinsic seems more compelling to me.

(Hopefully someone else will do the real review)

On Tue, Feb 10, 2015 at 6:36 AM, Paul Sandoz  wrote:

> Hi,
>
> One common use of Unsafe is to boost the performance of comparing unsigned
> byte[] by viewing as long[] (for example as performed by Guava or
> Cassandra). To reduce the dependency on Unsafe we need a public and safe
> equivalent that works on all platforms while meeting the performance
> expectations.
>
> In the past we have discussed exposing something on ByteBuffer but I think
> the right thing to explore is comparators for all basic types on
> java.util.Arrays:
>
>  https://bugs.openjdk.java.net/browse/JDK-8033148
>
> This should be explored with the following issue in mind to expose an
> intrinsic that could be leveraged when implemented:
>
>  https://bugs.openjdk.java.net/browse/JDK-8044082
>
> Some preliminary work can be found here:
>
>
> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8033148-Arrays-lexico-compare/webrev/
>
> The "explosion" in methods is annoying. Out of the 8 primitives 4 can be
> signed or unsigned. Objects can be comparable or an explicit comparator can
> be used. There are implicit and explicit ranges. That makes a maximum of 28
> methods, but could be reduced to a minimum of 10, for all basic types.
>
> Comparator instances, for implicit ranges, can easily be created from
> method refs or lambdas.
>
> Thoughts?
>
> Paul.
>
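For readers, a minimal sketch of the operation under discussion - lexicographic comparison of byte[] treated as unsigned values - written with only safe, public APIs. The libraries mentioned above speed this up by reading eight bytes at a time via Unsafe, which is what the proposed Arrays comparators and intrinsic would make unnecessary; the class name here is illustrative:

// Sketch only: a safe lexicographic unsigned comparison of byte arrays.
public final class UnsignedBytesSketch {
    public static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = (a[i] & 0xFF) - (b[i] & 0xFF);   // compare as unsigned bytes
            if (cmp != 0) {
                return cmp;
            }
        }
        return a.length - b.length;                    // a shorter prefix sorts first
    }
}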


Re: RFR 8071600: Add a flat-mapping collector

2015-02-10 Thread Stuart Marks

Hi Paul,

On 2/3/15 5:48 AM, Paul Sandoz wrote:

http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8071600-Collector-flatMapping/webrev/

This patch adds a new flat mapping collector to Collectors. This can be useful 
if one needs to map 0 or more items into a downstream collector.
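A usage sketch, assuming the Collectors.flatMapping(mapper, downstream) shape from the webrev (signature taken from the proposal, so treat it as provisional): grouping orders by customer while flattening each order's zero-or-more line items into a single list per customer. The Order class and data are made up for the example.

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch only: each order contributes 0..n items to the per-customer list.
public class FlatMappingSketch {
    static final class Order {
        final String customer;
        final List<String> items;
        Order(String customer, List<String> items) { this.customer = customer; this.items = items; }
    }

    public static void main(String[] args) {
        Map<String, List<String>> itemsByCustomer =
            Stream.of(new Order("ann", Arrays.asList("a", "b")),
                      new Order("bob", Arrays.asList()),
                      new Order("ann", Arrays.asList("c")))
                  .collect(Collectors.groupingBy(
                      o -> o.customer,
                      Collectors.flatMapping(o -> o.items.stream(),   // 0..n items per order
                                             Collectors.toList())));
        System.out.println(itemsByCustomer);   // e.g. {ann=[a, b, c], bob=[]}
    }
}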


Mostly pretty good, just a couple minor nits.

Re the comment at TabulatorsTest.java line 513, this isn't a two-level groupBy 
anymore, it's a groupByWithMapping (as the test name indicates).


Given the new tests of closing in FlatMapOpTest.java, is it necessary to have 
testFlatMappingClose() in TabulatorsTest.java?



A following patch, which i plan to fold into the above patch, performs some 
renames on the collectors test to be consistent with the current naming:

http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8071600-Collector-flatMapping-test-rename/webrev/


Renaming looks good and makes it easy to correlate the tests, the assertion 
classes, and the collector implementations under test. Unfortunately, the word 
for the act of collecting is "collection" which is confusing since "collection" 
already has another meaning, but oh well.


s'marks



Re: JEP 102 Process Updates revised API draft

2015-02-10 Thread Roger Riggs

Hi Staffan,


On 2/10/2015 4:53 AM, Staffan Larsen wrote:

Happy to see this!

In ProcessHandle.Info would it be possible to include the environment variables 
of the process as well?
The API was intended to provide a degree of control over subprocesses, 
but not
to be a diagnostic tool; there are better OS-specific tools for those 
purposes.
On some systems getting current information requires reading the 
process's memory image,
and that's a fragile mechanism that I don't think is worth coding and 
maintaining.

There is also a risk of duplicating the functions of java.lang.management.
Could access to the environment be supported in that API?
It is easier and more reliable to retrieve that kind of information 
from inside the process.


How does ProcessHandle.allChildren() behave when process A launches B which 
launches C, and B terminates? Is C included in allChildren() of A?
Whatever the OS does. The implementation makes a best effort at a 
snapshot of the processes
that exist at the time the method is called. Depending on the OS the 
snapshot may or may not
get a consistent view. For example, on Unix, /proc/nnn is read and each 
process is opened to
get its parent.  The set of processes and the parentage can change 
during that process.
And the parent/child relationships can change after that; processes can 
be created or terminate at any time.


Roger



Thanks,
/Staffan


On 10 feb 2015, at 00:25, Roger Riggs  wrote:

Hi,

After a protracted absence from working on JEP 102, the updated API draft
provides access to process hierarchies and individual process information;
as permitted by the OS. The relationship between Process and ProcessHandle
is clarified and the security model validated.

Both Processes and ProcessHandles can be monitored using CompletableFuture
for termination and to trigger additional actions on Process exit.
Information about processes includes the total cputime, starttime, user,
executable, and arguments.

Please review and comment:
   http://cr.openjdk.java.net/~rriggs/ph-apidraft/

Thanks, Roger








Re: Lexicographic array comparators

2015-02-10 Thread Paul Sandoz
Hi Martin,

In this case i am trying to pick off one particularly common case, within the 9 
time-frame, used in a number of popular libraries. In that context (including 
that of the intrinsic referenced in the related issue) do you think this is 
reasonable?


On Feb 10, 2015, at 8:48 PM, Martin Buchholz  wrote:

> People will continue to want to access byte arrays (and direct byte buffers) 
> with C-like performance, and are currently using Unsafe to do so.
> Hard to fix for real.

Yes, that is a much larger problem that i hope both value types and panama will 
address more fully.


>   Endianness and unaligned access are both non-portable aspects.  People 
> don't want to pay for bounds checking and especially not for alignment 
> checking of indexes known to be aligned. 

Note that as part of the VarHandles work we will try to improve the strength 
reduction of bounds checks for array access. If we expose an intrinsic for 
unsigned integer comparison, that can be reused within the NIO buffers.


> Lexicographic comparison is only one use case (that happened to be important 
> performance wise for guava). I have long thought that arrays should acquire 
> more listesque methods (e.g. contains, indexOf), even though arrays are 
> unpopular.  

> 

Yes, I was wondering the same; there should be an interface. But I think that 
is an Arrays 2.0 and value types thing.

Paul.



Re: JEP 102 Process Updates revised API draft

2015-02-10 Thread Roger Riggs

Hi Anthony,

On 2/10/2015 2:12 PM, Anthony Vanelverdinghe wrote:

Hi Roger

This looks great already. My feedback is about process destruction:

Why isn't ProcessHandle::isDestroyForcible a static method?

The Process API is subclassable, and in that case the subclass
should be able to control that behavior.  When the specification says
"implementation dependent" it refers to the classes that implement Process.
The built-in ProcessBuilder, and Runtime.exec which uses ProcessBuilder,
define their implementation behaviors.


For ProcessHandle::destroy and Process::destroy, I'd like to propose 
replacing
"Whether the process represented by this Process object is forcibly 
terminated or not is implementation dependent."

with:
"The process represented by this Process object is forcibly terminated 
if and only if isDestroyForcible returns true."
There is some API history to contend with and multiple implementations 
are possible.
An application library can provide a factory of Process instances with 
other kinds of behavior.




Process::destroyForcibly contains the following phrase: "The default 
implementation of this method invokes destroy() and so may not 
forcibly terminate the process."
Why doesn't the default implementation throw 
UnsupportedOperationException if forcible termination is not supported 
on/implemented for the current platform? If I write code like: 
process.destroyForcibly().waitFor(), I'd assume it would finish in a 
timely manner, but due to the default implementation, this may 
actually not finish at all.
The evolution of the Process API has been long and uneven.  destroy() came 
first but was not predictable;
destroyForcibly() was added later, but destroy() remained backward compatible 
(and still must be backward compatible).
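Given that history - destroy() and even destroyForcibly() may not guarantee forcible termination for every Process implementation - a caller who wants bounded waiting can combine them with the timed waitFor. A minimal sketch (timeouts are arbitrary):

import java.util.concurrent.TimeUnit;

// Sketch only: defensive, bounded wait after requesting termination.
public class DestroySketch {
    static void stop(Process p) throws InterruptedException {
        p.destroy();                                  // polite request first
        if (!p.waitFor(5, TimeUnit.SECONDS)) {        // bounded wait, not the untimed waitFor()
            p.destroyForcibly();                      // escalate
            p.waitFor(5, TimeUnit.SECONDS);           // still bounded; may time out
        }
    }
}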


Roger



Kind regards, Anthony


On 10/02/2015 0:25, Roger Riggs wrote:

Hi,

After a protracted absence from working on JEP 102, the updated API 
draft
provides access to process hierarchies and individual process 
information;
as permitted by the OS. The relationship between Process and 
ProcessHandle

is clarified and the security model validated.

Both Processes and ProcessHandles can be monitored using 
CompletableFuture

for termination and to trigger additional actions on Process exit.
Information about processes includes the total cputime, starttime, user,
executable, and arguments.

Please review and comment:
   http://cr.openjdk.java.net/~rriggs/ph-apidraft/

Thanks, Roger











Re: JEP 102 Process Updates revised API draft

2015-02-10 Thread Roger Riggs

Hi Peter,

Yep, I hadn't gotten to improving that part of the implementation yet.
The threading needs to be revised; system threads with a limited stack 
are currently used
to monitor processes (on Linux) but are not suitable for evaluating 
CompletionStages.

Switching to some form of Executor as a source of threads is needed.
I was also trying to preserve an option to have a lightweight monitor
of process liveness that did not require a thread, to improve scalability.
That would allow the thread needed for the CompletionStage to be created 
only if and

when it is needed.

On 2/10/2015 1:17 PM, Peter Levart wrote:

On 02/10/2015 02:02 PM, Peter Levart wrote:

On 02/10/2015 12:35 PM, Peter Levart wrote:
ProcessHandle.completableFuture().cancel(true) forcibly destroys 
(destroyForcibly()) the process *and* vice versa: 
destroy[Forcibly]() cancels the CompletableFuture. I don't know if 
this is the best way - can't decide yet. In particular, in the 
implementation it would be hard to achieve the atomicity of both 
destroying the process and canceling the future. Races are 
inevitable. So it would be better to think of a process (and a 
ProcessHandle representing it) as the 1st stage in the processing 
pipeline, where ProcessHandle.completableFuture() is its dependent 
stage which tracks real changes of the process. Which means the 
behaviour would be something like the following:


- ProcessHandle.destroy[Forcibly]() triggers destruction of the 
process which in turn (when successful) triggers completion of 
CompletableFuture, exceptionally with CompletionException, wrapping 
the exception indicating the destruction of the process 
(ProcessDestroyedException?).


- ProcessHandle.completableFuture().cancel(true/false) just cancels 
the CompletableFuture and does not do anything to the process itself.


In that variant, then perhaps it would be more appropriate for 
ProcessHandle.completableFuture() to be a "factory" for 
CompletableFuture(s) so that each call would return new independent 
instance.


What do you think? 


Contemplating on this a little more, then perhaps the singleton-per 
pid CompletionStage could be OK if it was a "mirror" of real process 
state. For that purpose then, instead of .completableFuture() the 
method would be:


public CompletionStage completionStage()

Returns a CompletionStage for the process. The 
CompletionStage provides supporting dependent functions and actions 
that are run upon process completion.


Returns:
a CompletionStage for the ProcessHandle; the same 
instance is returned for each unique pid.



This would provide the most clean API I think, as CompletionStage 
does not have any cancel(), complete(), obtrudeXXX() or get() 
methods. One could still obtain a CompletableFuture by calling 
.toCompletableFuture() on the CompletionStage, but that future would 
be a 2nd stage future (like calling .thenApply(x -> x)) which would 
not propagate cancel(true) to the process destruction.


The implementation could still use CompletableFuture under the hood, 
but exposed wrapped in a delegating CompletionStage proxy.
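As an illustration of the non-propagating behaviour described above, a minimal sketch (class and method names are hypothetical): the stage handed to users is a dependent of an internal future, so cancelling its toCompletableFuture() view cannot reach back to the internal future or the process:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

// Sketch only (hypothetical names): the internal future completes when the
// process exits; users get a dependent stage, so cancel() on its
// toCompletableFuture() view cannot reach back upstream.
public class CompletionStageSketch {
    private final CompletableFuture<Long> internal = new CompletableFuture<>(); // completed by the reaper

    public CompletionStage<Long> completionStage() {
        return internal.thenApply(pid -> pid);   // 2nd stage: detached from the internal future
    }

    void onProcessExit(long pid) {
        internal.complete(pid);                  // normal completion when the process terminates
    }
}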


So the javadoc might be written as:


public abstract void destroy()

Kills the process. Whether the process represented by this Process 
object is forcibly terminated or not is implementation dependent. If 
the process is not alive, no action is taken.


If/when the process dies as the result of calling destroy(), the 
completionStage() completes exceptionally with CompletionException, 
wrapping ProcessDestroyedException.



public abstract ProcessHandle destroyForcibly()

Kills the process. The process represented by this ProcessHandle 
object is forcibly terminated. If the process is not alive, no action 
is taken.


If/when the process dies as the result of calling destroyForcibly(), 
the completionStage() completes exceptionally with 
CompletionException, wrapping ProcessDestroyedException.



But I'm still unsure of whether it would be better for the 
completionStage() to complete normally in any case. Unless the fact 
that the process died as a result of killing it could be reliably 
communicated regardless of who was responsible for the killing 
(either via ProcessHandle.destroy() or by a KILL/TERMINATE signal 
originating from outside the VM).


Peter




Hi Roger,

I checked out your branch in jdk9-sandbox and studied current 
implementation.


One problem with this approach (returning a singleton-per-pid 
CompletableFuture or CompletionStage) is that current 
processReaperExecutor is using threads with very small stack size (32 
K) and the returned CompletableFuture could be instructed to append a 
continuation that executes synchronously:


CompletionStage.thenApply(), CompletionStage.handle(), etc...

... so user code would execute on such a thread and probably get a 
StackOverflowError...

Yes, a normal stack size is needed.


Also, multiple ProcessHandle objects together with a Process object 
for the same pid each return a separate Comple

Re: Lexicographic array comparators

2015-02-10 Thread Martin Buchholz
People will continue to want to access byte arrays (and direct byte
buffers) with C-like performance, and are currently using Unsafe to do so.
Hard to fix for real.  Endianness and unaligned access are both
non-portable aspects.  People don't want to pay for bounds checking and
especially not for alignment checking of indexes known to be aligned.
Lexicographic comparison is only one use case (that happened to be
important performance-wise for Guava).

I have long thought that arrays should acquire more listesque methods (e.g.
contains, indexOf), even though arrays are unpopular.

On Tue, Feb 10, 2015 at 6:36 AM, Paul Sandoz  wrote:

> Hi,
>
> One common use of Unsafe is to boost the performance of comparing unsigned
> byte[] by viewing as long[] (for example as performed by Guava or
> Cassandra). To reduce the dependency on Unsafe we need a public and safe
> equivalent that works on all platforms while meeting the performance
> expectations.
>
> In the past we have discussed exposing something on ByteBuffer but I think
> the right thing to explore is comparators for all basic types on
> java.util.Arrays:
>
>  https://bugs.openjdk.java.net/browse/JDK-8033148
>
> This should be explored with the following issue in mind to expose an
> intrinsic that could be leveraged when implemented:
>
>  https://bugs.openjdk.java.net/browse/JDK-8044082
>
> Some preliminary work can be found here:
>
>
> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8033148-Arrays-lexico-compare/webrev/
>
> The "explosion" in methods is annoying. Out of the 8 primitives 4 can be
> signed or unsigned. Objects can be comparable or an explicit comparator can
> be used. There are implicit and explicit ranges. That makes a maximum of 28
> methods, but could be reduced to a minimum of 10, for all basic types.
>
> Comparator instances, for implicit ranges, can easily be created from
> method refs or lambdas.
>
> Thoughts?
>
> Paul.
>


Re: JEP 102 Process Updates revised API draft

2015-02-10 Thread Roger Riggs

Hi Peter,

On 2/10/2015 6:35 AM, Peter Levart wrote:

On 02/10/2015 12:25 AM, Roger Riggs wrote:

Hi,

After a protracted absence from working on JEP 102, the updated API 
draft
provides access to process hierarchies and individual process 
information;
as permitted by the OS. The relationship between Process and 
ProcessHandle

is clarified and the security model validated.

Both Processes and ProcessHandles can be monitored using 
CompletableFuture

for termination and to trigger additional actions on Process exit.
Information about processes includes the total cputime, starttime, user,
executable, and arguments.

Please review and comment:
   http://cr.openjdk.java.net/~rriggs/ph-apidraft/

Thanks, Roger





Hi Roger,

Great to see progress on this.

Here are my comments:

ProcessHandle.{allProcesses,allChildren,children} return 
Stream<ProcessHandle>. This is convenient if one wants to use the Stream 
API, but a little more inconvenient and perhaps with additional 
overhead if one doesn't. Since all those methods return a "snapshot", 
I can imagine that internally, there is a collection built from this 
snapshot. So it would be most appropriate to return a 
List<ProcessHandle>. One can always continue the expression by 
appending .stream()..., but on the other hand if one wants to 
.collect(toList()), additional copying is introduced.

I've been pulled both ways by various folks here also.
Both Streams and Collections are easily converted to the other.
The internal snapshot is an array of int pids; with the Stream, the 
creation of ProcessHandles can

be delayed until they are needed, if they are needed.
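A small sketch of the two styles, assuming the draft API's Stream<ProcessHandle> return type (the children() and isAlive() names follow the draft, so treat them as provisional):

import java.util.List;
import java.util.stream.Collectors;

// Sketch only, against the draft API: eager list snapshot vs. lazy stream use.
public class ChildrenSketch {
    static List<ProcessHandle> childrenAsList(ProcessHandle parent) {
        return parent.children().collect(Collectors.toList());            // materialize as a List
    }

    static long liveChildCount(ProcessHandle parent) {
        return parent.children().filter(ProcessHandle::isAlive).count();  // no list ever built
    }
}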


ProcessHandle.completableFuture().cancel(true) forcibly destroys 
(destroyForcibly()) the process *and* vice versa: destroy[Forcibly]() 
cancels the CompletableFuture. I don't know if this is the best way - 
can't decide yet. In particular, in the implementation it would be 
hard to achieve the atomicity of both destroying the process and 
canceling the future. Races are inevitable.
Atomicity is provided by CompletableFuture; it handles the atomicity of 
updating its state.
If cancelled, there is no useful information or synchronization 
needed from the (dying) Process;

the CompletableFuture state/status is cancelled.
So it would be better to think of a process (and a ProcessHandle 
representing it) as the 1st stage in the processing pipeline, where 
ProcessHandle.completableFuture() is its dependent stage which tracks 
real changes of the process. Which means the behaviour would be 
something like the following:


- ProcessHandle.destroy[Forcibly]() triggers destruction of the 
process which in turn (when successful) triggers completion of 
CompletableFuture, exceptionally with CompletionException, wrapping 
the exception indicating the destruction of the process 
(ProcessDestroyedException?).
Maybe thinking along similar lines; Process and Future are loosely 
coupled.
A process exiting, regardless of whether it was killed by some entity 
in the system

or via the Java API, should complete the Future.
For a Process, the exitStatus is available and could be used to modify 
the normal vs exceptional completion.
But for an arbitrary ProcessHandle, the process may not have been 
spawned by this process;
in that case the exit status is not available (see previous long 
discussions about reaping).
Also, the exit status may not be a reliable indication of success vs 
failure of the process.
So the best that can be reported is that the Process is terminated and 
the Future is completed (normally).




- ProcessHandle.completableFuture().cancel(true/false) just cancels 
the CompletableFuture and does not do anything to the process itself.


My model was that the CompletableFuture from ProcessHandle is a proxy 
for the process itself.
It is an abstraction for the Process that is independent of the 
implementation (as a Process).

It looks just like any other stage in the tree of CompletionStages.
The application that spawns the Process can control what it exposes as 
the CompletableFuture:
it can expose the root or create another CompletableFuture to hang 
other operations on.




In that variant, then perhaps it would be more appropriate for 
ProcessHandle.completableFuture() to be a "factory" for 
CompletableFuture(s) so that each call would return new independent 
instance.


What do you think?


I'm not sure which model is more useful to the application/library.
The implementation may be simpler if it only has to keep track of a 
single CompletableFuture

instead of a set, but decoupling may be simpler to model.

Thanks, Roger


Regards, Peter





Re: JDK 9 RFR of JDK-8072843: Typo in the description of the j.l.r.Executable.getAnnotatedReceiverType

2015-02-10 Thread Lance Andersen
+1
On Feb 10, 2015, at 2:25 PM, joe darcy  wrote:

> Hello,
> 
> Please review this typo fix for
> 
> JDK-8072843: Typo in the description of the 
> j.l.r.Executable.getAnnotatedReceiverType
> 
> Patch below; I spellchecked the rest of the comments and strings and didn't 
> find any other problems.
> 
> Thanks,
> 
> -Joe
> 
> diff -r 30f5fa716218 
> src/java.base/share/classes/java/lang/reflect/Executable.java
> --- a/src/java.base/share/classes/java/lang/reflect/Executable.java Mon Feb 
> 09 17:49:26 2015 -0800
> +++ b/src/java.base/share/classes/java/lang/reflect/Executable.java Tue Feb 
> 10 11:24:10 2015 -0800
> @@ -662,7 +662,7 @@
>  *
>  * If this {@code Executable} object represents a static method or
>  * represents a constructor of a top level, static member, local, or
> - * anoymous class, then the return value is null.
> + * anonymous class, then the return value is null.
>  *
>  * @return an object representing the receiver type of the method or
>  * constructor represented by this {@code Executable} or {@code null} if
> 



Lance Andersen| Principal Member of Technical Staff | +1.781.442.2037
Oracle Java Engineering 
1 Network Drive 
Burlington, MA 01803
lance.ander...@oracle.com





JDK 9 RFR of JDK-8072843: Typo in the description of the j.l.r.Executable.getAnnotatedReceiverType

2015-02-10 Thread joe darcy

Hello,

Please review this typo fix for

 JDK-8072843: Typo in the description of the 
j.l.r.Executable.getAnnotatedReceiverType


Patch below; I spellchecked the rest of the comments and strings and 
didn't find any other problems.


Thanks,

-Joe

diff -r 30f5fa716218 
src/java.base/share/classes/java/lang/reflect/Executable.java
--- a/src/java.base/share/classes/java/lang/reflect/Executable.java Mon 
Feb 09 17:49:26 2015 -0800
+++ b/src/java.base/share/classes/java/lang/reflect/Executable.java Tue 
Feb 10 11:24:10 2015 -0800

@@ -662,7 +662,7 @@
  *
  * If this {@code Executable} object represents a static method or
  * represents a constructor of a top level, static member, local, or
- * anoymous class, then the return value is null.
+ * anonymous class, then the return value is null.
  *
  * @return an object representing the receiver type of the method or
  * constructor represented by this {@code Executable} or {@code 
null} if




Re: JEP 102 Process Updates revised API draft

2015-02-10 Thread Martin Buchholz
(not enough time for real review)
I support Peter's approach.  I know of no way to be reliably notified of
process termination on unix without a dedicated thread per subprocess (unix
is broken!).  Given that, we want to keep its stack size small, which is
where the 32k comes from.  Maybe on 64-bit systems this becomes unimportant
- not sure - but we're not there yet this decade.  I agree that if the
reaper thread invokes CompletableFuture continuations, they should not be
run in the process reaper thread (stack too small), but run somewhere else,
probably the common pool.

On Tue, Feb 10, 2015 at 10:17 AM, Peter Levart 
wrote:

> On 02/10/2015 02:02 PM, Peter Levart wrote:
>
>> On 02/10/2015 12:35 PM, Peter Levart wrote:
>>
>>> ProcessHandle.completableFuture().cancel(true) forcibly destroys
>>> (destroyForcibly()) the process *and* vice versa: destroy[Forcibly]()
>>> cancels the CompletableFuture. I don't know if this is the best way - can't
>>> decide yet. In particular, in the implementation it would be hard to
>>> achieve the atomicity of both destroying the process and canceling the
>>> future. Races are inevitable. So it would be better to think of a process
>>> (and a ProcessHandle representing it) as the 1st stage in the processing
>>> pipeline, where ProcessHandle.completableFuture() is its dependent
>>> stage which tracks real changes of the process. Which means the behaviour
>>> would be something like the following:
>>>
>>> - ProcessHandle.destroy[Forcibly]() triggers destruction of the process
>>> which in turn (when successful) triggers completion of CompletableFuture,
>>> exceptionally with CompletionException, wrapping the exception indicating
>>> the destruction of the process (ProcessDestroyedException?).
>>>
>>> - ProcessHandle.completableFuture().cancel(true/false) just cancels the
>>> CompletableFuture and does not do anything to the process itself.
>>>
>>> In that variant, then perhaps it would be more appropriate for
>>> ProcessHandle.completableFuture() to be a "factory" for
>>> CompletableFuture(s) so that each call would return new independent
>>> instance.
>>>
>>> What do you think?
>>>
>>
>> Contemplating on this a little more, then perhaps the singleton-per pid
>> CompletionStage could be OK if it was a "mirror" of real process state. For
>> that purpose then, instead of .completableFuture() the method would be:
>>
>> public CompletionStage completionStage()
>>
>> Returns a CompletionStage for the process. The
>> CompletionStage provides supporting dependent functions and actions that
>> are run upon process completion.
>>
>> Returns:
>> a CompletionStage for the ProcessHandle; the same
>> instance is returned for each unique pid.
>>
>>
>> This would provide the most clean API I think, as CompletionStage does
>> not have any cancel(), complete(), obtrudeXXX() or get() methods. One could
>> still obtain a CompletableFuture by calling .toCompletableFuture() on the
>> CompletionStage, but that future would be a 2nd stage future (like calling
>> .thenApply(x -> x)) which would not propagate cancel(true) to the process
>> destruction.
>>
>> The implementation could still use CompletableFuture under the hood, but
>> exposed wrapped in a delegating CompletionStage proxy.
>>
>> So the javadoc might be written as:
>>
>>
>> public abstract void destroy()
>>
>> Kills the process. Whether the process represented by this Process object
>> is forcibly terminated or not is implementation dependent. If the process
>> is not alive, no action is taken.
>>
>> If/when the process dies as the result of calling destroy(), the
>> completionStage() completes exceptionally with CompletionException,
>> wrapping ProcessDestroyedException.
>>
>>
>> public abstract ProcessHandle destroyForcibly()
>>
>> Kills the process. The process represented by this ProcessHandle object
>> is forcibly terminated. If the process is not alive, no action is taken.
>>
>> If/when the process dies as the result of calling destroyForcibly(), the
>> completionStage() completes exceptionally with CompletionException,
>> wrapping ProcessDestroyedException.
>>
>>
>> But I'm still unsure of whether it would be better for the
>> completionStage() to complete normally in any case. Unless the fact that
>> the process died as a result of killing it could be reliably communicated
>> regardless of who was responsible for the killing (either via
>> ProcessHandle.destroy() or by a KILL/TERMINATE signal originating from
>> outside the VM).
>>
>> Peter
>>
>>
>>
> Hi Roger,
>
> I checked out your branch in jdk9-sandbox and studied current
> implementation.
>
> One problem with this approach (returning a singleton-per-pid
> CompletableFuture or CompletionStage) is that current processReaperExecutor
> is using threads with very small stack size (32 K) and the returned
> CompletableFuture could be instructed to append a continuation that
> executes synchronously:
>
> CompletionStage.thenApply(), CompletionStage.handle(), etc...
>
> .

Re: JEP 102 Process Updates revised API draft

2015-02-10 Thread Anthony Vanelverdinghe

Hi Roger

This looks great already. My feedback is about process destruction:

Why isn't ProcessHandle::isDestroyForcible a static method?

For ProcessHandle::destroy and Process::destroy, I'd like to propose 
replacing
"Whether the process represented by this Process object is forcibly 
terminated or not is implementation dependent."

with:
"The process represented by this Process object is forcibly terminated 
if and only if isDestroyForcible returns true."


Process::destroyForcibly contains the following phrase: "The default 
implementation of this method invokes destroy() and so may not forcibly 
terminate the process."
Why doesn't the default implementation throw 
UnsupportedOperationException if forcible termination is not supported 
on/implemented for the current platform? If I write code like: 
process.destroyForcibly().waitFor(), I'd assume it would finish in a 
timely manner, but due to the default implementation, this may actually 
not finish at all.


Kind regards, Anthony


On 10/02/2015 0:25, Roger Riggs wrote:

Hi,

After a protracted absence from working on JEP 102, the updated API draft
provides access to process hierarchies and individual process 
information;
as permitted by the OS. The relationship between Process and 
ProcessHandle

is clarified and the security model validated.

Both Processes and ProcessHandles can be monitored using 
CompletableFuture

for termination and to trigger additional actions on Process exit.
Information about processes includes the total cputime, starttime, user,
executable, and arguments.

Please review and comment:
   http://cr.openjdk.java.net/~rriggs/ph-apidraft/

Thanks, Roger









Re: JEP 102 Process Updates revised API draft

2015-02-10 Thread Peter Levart

On 02/10/2015 02:02 PM, Peter Levart wrote:

On 02/10/2015 12:35 PM, Peter Levart wrote:
ProcessHandle.completableFuture().cancel(true) forcibly destroys 
(destroyForcibly()) the process *and* vice versa: destroy[Forcibly]() 
cancels the CompletableFuture. I don't know if this is the best way - 
can't decide yet. In particular, in the implementation it would be 
hard to achieve the atomicity of both destroying the process and 
canceling the future. Races are inevitable. So it would be better to 
think of a process (and a ProcessHandle representing it) as the 1st 
stage in the processing pipeline, where 
ProcessHandle.completableFuture() is its dependent stage which 
tracks real changes of the process. Which means the behaviour would 
be something like the following:


- ProcessHandle.destroy[Forcibly]() triggers destruction of the 
process which in turn (when successful) triggers completion of 
CompletableFuture, exceptionally with CompletionException, wrapping 
the exception indicating the destruction of the process 
(ProcessDestroyedException?).


- ProcessHandle.completableFuture().cancel(true/false) just cancels 
the CompletableFuture and does not do anything to the process itself.


In that variant, then perhaps it would be more appropriate for 
ProcessHandle.completableFuture() to be a "factory" for 
CompletableFuture(s) so that each call would return new independent 
instance.


What do you think? 


Contemplating on this a little more, then perhaps the singleton-per 
pid CompletionStage could be OK if it was a "mirror" of real process 
state. For that purpose then, instead of .completableFuture() the 
method would be:


public CompletionStage completionStage()

Returns a CompletionStage for the process. The 
CompletionStage provides supporting dependent functions and actions 
that are run upon process completion.


Returns:
a CompletionStage for the ProcessHandle; the same 
instance is returned for each unique pid.



This would provide the most clean API I think, as CompletionStage does 
not have any cancel(), complete(), obtrudeXXX() or get() methods. One 
could still obtain a CompletableFuture by calling 
.toCompletableFuture() on the CompletionStage, but that future would 
be a 2nd stage future (like calling .thenApply(x -> x)) which would 
not propagate cancel(true) to the process destruction.


The implementation could still use CompletableFuture under the hood, 
but exposed wrapped in a delegating CompletionStage proxy.


So the javadoc might be written as:


public abstract void destroy()

Kills the process. Whether the process represented by this Process 
object is forcibly terminated or not is implementation dependent. If 
the process is not alive, no action is taken.


If/when the process dies as the result of calling destroy(), the 
completionStage() completes exceptionally with CompletionException, 
wrapping ProcessDestroyedException.



public abstract ProcessHandle destroyForcibly()

Kills the process. The process represented by this ProcessHandle 
object is forcibly terminated. If the process is not alive, no action 
is taken.


If/when the process dies as the result of calling destroyForcibly(), 
the completionStage() completes exceptionally with 
CompletionException, wrapping ProcessDestroyedException.



But I'm still unsure of whether it would be better for the 
completionStage() to complete normally in any case. Unless the fact 
that the process died as a result of killing it could be reliably 
communicated regardless of who was responsible for the killing (either 
via ProcessHandle.destroy() or by a KILL/TERMINATE signal originating 
from outside the VM).


Peter




Hi Roger,

I checked out your branch in jdk9-sandbox and studied current 
implementation.


One problem with this approach (returning a singleton-per-pid 
CompletableFuture or CompletionStage) is that current 
processReaperExecutor is using threads with very small stack size (32 K) 
and the returned CompletableFuture could be instructed to append a 
continuation that executes synchronously:


CompletionStage.thenApply(), CompletionStage.handle(), etc...

... so user code would execute on such a thread and probably get a 
StackOverflowError...


Also, multiple ProcessHandle objects together with a Process object for 
the same pid each return a separate CompletableFuture instance (not what 
the spec says). Each of them also spawns its own thread to wait for 
process termination.


Here's a better approach (a diff to your branch):

http://cr.openjdk.java.net/~plevart/jdk9-sandbox/JDK-8046092-branch/webrev.01/

...it contains a global registry of internal CompletionStage(s) - one 
per pid. They are not exposed to users, just used internally (in Unix 
ProcessImpl) to execute cleanup and in .completionStage() to append an 
asynchronous stage which is returned by the method on each call. So 
users appending synchronous continuations to the returned CompletionStage 
would execute them in the common FJ pool.
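A minimal sketch of that arrangement (names are hypothetical): one internal future per pid, completed by the reaper thread, with completionStage() handing out an async dependent stage so that user continuations run in the common ForkJoinPool rather than on the small reaper stack:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only (hypothetical names): one internal future per pid, completed by
// the small-stack reaper thread; completionStage() returns an async dependent
// stage, so user continuations run in the common ForkJoinPool instead.
public class PidCompletionRegistry {
    private static final ConcurrentHashMap<Long, CompletableFuture<Long>> FUTURES =
            new ConcurrentHashMap<>();

    static CompletableFuture<Long> internalFor(long pid) {
        return FUTURES.computeIfAbsent(pid, p -> new CompletableFuture<>());
    }

    public static CompletionStage<Long> completionStage(long pid) {
        return internalFor(pid).thenApplyAsync(p -> p);   // async hop off the reaper thread
    }

    // Called by the reaper thread when the process exits.
    static void processExited(long pid) {
        CompletableFuture<Long> f = FUTURES.remove(pid);
        if (f != null) f.complete(pid);   // dependents of the async stage run in the common pool
    }
}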


I haven'

Re: RFR 8064924: Update java.net.URL to work with modules

2015-02-10 Thread Chris Hegarty
On 10 Feb 2015, at 11:35, Alan Bateman  wrote:

> On 09/02/2015 14:57, Chris Hegarty wrote:
>> :
>> 
>> Updated webrev [ spec and implementation] :
>>  http://cr.openjdk.java.net/~chegar/8064924/04/
>> 
> The updated javadoc looks good. I haven't had a chance yet to look at the 
> implementation but I think you will need to update 
> /make/common/CORE_PKGS.gmk for the spi package.

Sorry, I have the change locally but forgot to include it. I updated the webrev with the 
changes to the top-level repo (below):

$ hg diff make/common/CORE_PKGS.gmk 
diff --git a/make/common/CORE_PKGS.gmk b/make/common/CORE_PKGS.gmk
--- a/make/common/CORE_PKGS.gmk
+++ b/make/common/CORE_PKGS.gmk
@@ -103,6 +103,7 @@
 java.lang.reflect \
 java.math \
 java.net \
+java.net.spi \
 java.nio \
 java.nio.channels \
 java.nio.channels.spi \

-Chris.

> -Alan.



Re: RFR: JDK-8068682 - Deprivilege/move java.corba to the ext class loader

2015-02-10 Thread Mandy Chung

The change without ORB.java looks okay to me.
Mandy

On 2/10/15 4:11 AM, Mark Sheppard wrote:

OK I'll remove it.

I thought that property files had been migrated from lib to conf, as 
per conf/security,  so I made the change


regards
Mark

On 10/02/2015 11:37, Alan Bateman wrote:

On 10/02/2015 11:20, Mark Sheppard wrote:

thanks Alan

the updated corba part is at

http://cr.openjdk.java.net/~msheppar/8068682/corba/webrev.02/

I assume ORB.java isn't meant to be in this webrev (the lib->conf 
issue is separate and I think will need an @implNote in addition to 
checking for orb.properties in both lib and conf).


-Alan.






Re: RFR: JDK-8072842 Add support for building native JTReg tests

2015-02-10 Thread Magnus Ihse Bursie

> 10 feb 2015 kl. 16:10 skrev Staffan Larsen :
> 
> 
>> On 10 feb 2015, at 15:23, Magnus Ihse Bursie  
>> wrote:
>> 
>> Here is an addition to the build system, that will compile native libraries 
>> and executables and make them available for JTReg tests written in Java.
>> 
>> This patch is the result of the work (in serial, mostly) of Staffan Larsen, 
>> David Simms and me. (And possible more that I don't know about.)
>> 
>> In it current form, the patch only provides the framework on which to attach 
>> real tests. To prove the concept, some initial dummy tests are present. 
>> These are supposed to be removed as soon as real tests starts to propagate 
>> into the jdk and hotspot jtreg test suites.
>> 
>> The behavior is based on the following design: For directories with native 
>> jtreg compilation enabled, the build system searches for native files 
>> prefixed with either "lib" or "exe", and compiles a free-standing library or 
>> an executable, respectively, for each such file. Since all compiled files 
>> are placed in a single directory (this is currently a requirement from the 
>> jtreg native support), there is the additional requirement that all files 
>> have unique names.
>> 
>> My personal opinion is that a better long-term approach is to compile all 
>> tests into a single library, if possible. (I realize some tests need to be 
>> separate to test library loading, etc.) For that to work, however, JTReg 
>> needs to be changed. The build framework in the patch is designed in such a 
>> way that it will be easy to add compilation to a common library in the 
>> future, if that restriction is lifted from JTReg.
> 
> To clarify: The restriction in jtreg is that all tests are loaded in separate 
> class loaders (when running in samevm mode). A native library can only be 
> loaded in one class loader at a time. So if two tests try to load the same 
> library we get errors. It would be possible to change this if samevm mode is 
> removed from jtreg.

I thought the problem was that we try to load the same library multiple times 
by different classloaders, and that the solution was to load the library only 
once, e.g. by the jtreg framework before launching samevm tests. 

However, if the library has to be loaded by the class loader in which the test 
executes, this will not work. :(

Nevertheless, this discussion is tangential to the current review. If it is 
possible to add a single-library approach to jtreg without sacrificing 
functionality, I believe that is a good thing, and this patch supports such an 
extension. If not, the current patch works anyway. 

/Magnus

> 
> /Staffan
> 
>> 
>> This patch also lays the foundation for building additional tests, not only 
>> native jtreg tests, by the build system in the future. For instance, it 
>> codifies the suggested correspondence between make targets, intermediate 
>> test output and test image final build results. With the on-going test 
>> co-location project, we expect a stream of tests to be added to the build 
>> system in the future, and we hope this project can serve as an example of a 
>> suitable way of integration.
>> 
>> The patch modifies hotspot code, but it's mostly due to the addition of the 
>> new dummy tests. My preference would be to integrate this patch via 
>> jdk9-dev, since it modifies the build system most of all, but I'm open for 
>> discussion.
>> 
>> Bug: https://bugs.openjdk.java.net/browse/JDK-8072842
>> WebRev: 
>> http://cr.openjdk.java.net/~ihse/JDK-8072842-build-native-jtreg-tests/webrev.01
>> 
>> /Magnus
> 


Re: RFR: JDK-8072842 Add support for building native JTReg tests

2015-02-10 Thread Staffan Larsen

> On 10 feb 2015, at 15:23, Magnus Ihse Bursie  
> wrote:
> 
> Here is an addition to the build system, that will compile native libraries 
> and executables and make them available for JTReg tests written in Java.
> 
> This patch is the result of the work (in serial, mostly) of Staffan Larsen, 
> David Simms and me. (And possible more that I don't know about.)
> 
> In it current form, the patch only provides the framework on which to attach 
> real tests. To prove the concept, some initial dummy tests are present. These 
> are supposed to be removed as soon as real tests starts to propagate into the 
> jdk and hotspot jtreg test suites.
> 
> The behavior is based on the following design: For directories with native 
> jtreg compilation enabled, the build system searches for native files 
> prefixed with either "lib" or "exe", and compiles a free-standing library or 
> an executable, respectively, for each such file. Since all compiled files are 
> placed in a single directory (this is currently a requirement from the jtreg 
> native support), there is the additional requirement that all files have 
> unique names.
> 
> My personal opinion is that a better long-term approach is to compile all 
> tests into a single library, if possible. (I realize some tests need to be 
> separate to test library loading, etc.) For that to work, however, JTReg 
> needs to be changed. The build framework in the patch is designed in such a 
> way that it will be easy to add compilation to a common library in the 
> future, if that restriction is lifted from JTReg.

To clarify: The restriction in jtreg is that all tests are loaded in separate 
class loaders (when running in samevm mode). A native library can only be 
loaded in one class loader at a time. So if two tests try to load the same 
library we get errors. It would be possible to change this if samevm mode is 
removed from jtreg.
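
To make the clash concrete, here is a minimal sketch (the library name "dummy" 
and the loaders are illustrative stand-ins for jtreg's per-test class loaders, 
not anything in jtreg itself):

import java.net.URL;
import java.net.URLClassLoader;

public class SharedLibraryClash {
    // A class that loads the native library in its static initializer, as a test would.
    public static class Loads {
        static { System.loadLibrary("dummy"); }
    }

    public static void main(String[] args) throws Exception {
        URL[] cp = { Loads.class.getProtectionDomain().getCodeSource().getLocation() };

        // Two isolated loaders (parent = null), mimicking two samevm tests.
        try (URLClassLoader first = new URLClassLoader(cp, null);
             URLClassLoader second = new URLClassLoader(cp, null)) {
            Class.forName(SharedLibraryClash.class.getName() + "$Loads", true, first);
            // The second load of the same library fails with UnsatisfiedLinkError:
            // "Native Library ... already loaded in another classloader"
            Class.forName(SharedLibraryClash.class.getName() + "$Loads", true, second);
        }
    }
}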

/Staffan

> 
> This patch also lays the foundation for building additional tests, not only 
> native jtreg tests, by the build system in the future. For instance, it 
> codifies the suggested correspondence between make targets, intermediate test 
> output and test image final build results. With the on-going test co-location 
> project, we expect a stream of tests to be added to the build system in the 
> future, and we hope this project can serve as an example of a suitable way of 
> integration.
> 
> The patch modifies hotspot code, but it's mostly due to the addition of the 
> new dummy tests. My preference would be to integrate this patch via jdk9-dev, 
> since it modifies the build system most of all, but I'm open for discussion.
> 
> Bug: https://bugs.openjdk.java.net/browse/JDK-8072842
> WebRev: 
> http://cr.openjdk.java.net/~ihse/JDK-8072842-build-native-jtreg-tests/webrev.01
> 
> /Magnus



Re: Unsafe.park/unpark, j.u.c.LockSupport and Thread

2015-02-10 Thread Paul Sandoz
On Feb 10, 2015, at 3:39 PM, Chris Hegarty  wrote:
>> 
>> Adding native methods to 166 classes will introduce another form of 
>> dependency on the platform that i would prefer to avoid.
> 
> Right. And I suspect that the separate distributable of 166, outside of the 
> Java Platform, would not want to have to carry a separate platform specific 
> native library.

Yes.


> 
>>> But I don't see any reason why we couldn't move the implementation from 
>>> unsafe.cpp to jvm.cpp and invoke via a native method implementation of 
>>> LockSupport. It would still be just as "unsafe" of course.
>>> 
>> 
>> Can you think of any reasons, beyond that of "don't touch the core classes", 
>> why we cannot copy this functionality over to java.lang.{*, Thread} ?
> 
> Would you be thinking of making the changes public, i.e. new standard API on 
> java.lang.Thread ?
> 

Yes, j.l.Thread seems like the ideal location :-)

Paul.


Re: Unsafe.park/unpark, j.u.c.LockSupport and Thread

2015-02-10 Thread Chris Hegarty
On 10 Feb 2015, at 12:46, Paul Sandoz  wrote:

> Hi David,
> 
> On Feb 10, 2015, at 1:02 PM, David Holmes  wrote:
> 
>> Hi Paul,
>> 
>> When we added j.u.c there was reluctance to add these methods to Thread - 
>> this was the normal policy (for the time) of don't touch the core classes. 
>> So LockSupport is the public API and Unsafe was chosen as the implementation 
>> as it is a de-facto interface to the VM from JDK code rather then defining 
>> explicit native methods (I must admit I'm unsure why we went this route 
>> rather than simply hooking into jvm.cpp with JVM_park/JVM_unpark. I think in 
>> many ways it was/is just easier to call an Unsafe method and define a new 
>> unsafe.cpp native call. Plus we had a bunch of other Unsafe API's being used 
>> from j.u.c.)
>> 
>> So we can't just move things from LockSupport to Thread as we'd need to 
>> deprecate first etc etc.
> 
> I was thinking that the implementation of LockSupport could be updated to use 
> any new stuff we could add to java.lang.{*, Thread}.
> 
> I am trying to tease apart the internal dependencies that 166 classes have on 
> the platform. In the case of LockSupport there are two, Unsafe and Thread.
> 
> Adding native methods to 166 classes will introduce another form of 
> dependency on the platform that i would prefer to avoid.

Right. And I suspect that the separate distributable of 166, outside of the 
Java Platform, would not want to have to carry a separate platform specific 
native library.

>> But I don't see any reason why we couldn't move the implementation from 
>> unsafe.cpp to jvm.cpp and invoke via a native method implementation of 
>> LockSupport. It would still be just as "unsafe" of course.
>> 
> 
> Can you think of any reasons, beyond that of "don't touch the core classes", 
> why we cannot copy this functionality over to java.lang.{*, Thread} ?

Would you be thinking of making the changes public, i.e. new standard API on 
java.lang.Thread ?

-Chris.

>> Aside: this is where the infamous "spurious wakeup" from park() can arise. 
>> If you just randomly unpark() a Thread there is nothing to guarantee that 
>> the thread doesn't terminate and free its native resources while you are 
>> working on it. But the ParkEvents are type-stable-memory, so even if the 
>> event has been disassociated from the target thread you can still call 
>> unpark() as it is never freed. However if that ParkEvent has since been 
>> reused by another thread, then that thread will encounter a "spurious 
>> wakeup" if it calls park().
>> 
> 
> I see, and can the same happen for Object.wait as well? I gather so from the 
> documentation.
> 
> 
> On Feb 10, 2015, at 1:04 PM, David Holmes  wrote:
>> Oh I just realized/remembered why we use Unsafe for this - it is because of 
>> the potential for intrinsics.
> 
> I can certainly imagine it's a convenient place to put things in that regard.
> 
> Paul.
> 



Lexicographic array comparators

2015-02-10 Thread Paul Sandoz
Hi,

One common use of Unsafe is to boost the performance of comparing unsigned byte[] 
by viewing them as long[] (for example as performed by Guava or Cassandra). To 
reduce the dependency on Unsafe we need a public and safe equivalent that works 
on all platforms while meeting the performance expectations.
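
As a baseline for what such an equivalent has to beat, here is the plain per-byte 
version written without Unsafe (a sketch only; the long-at-a-time trick reads 
eight bytes per step and adjusts for byte order on little-endian hardware):

import java.util.Comparator;

// Straightforward unsigned lexicographic comparison of byte arrays, byte by byte.
// This is the safe baseline that the Unsafe long[]-view implementations speed up.
public final class UnsignedByteArrays {
    public static final Comparator<byte[]> LEXICOGRAPHIC = UnsignedByteArrays::compare;

    public static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = Integer.compare(a[i] & 0xFF, b[i] & 0xFF);
            if (cmp != 0) {
                return cmp;
            }
        }
        // When one array is a prefix of the other, the shorter one sorts first.
        return Integer.compare(a.length, b.length);
    }

    private UnsignedByteArrays() { }
}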

In the past we have discussed exposing something on ByteBuffer, but I think the 
right thing to explore is comparators for all basic types on java.util.Arrays:

 https://bugs.openjdk.java.net/browse/JDK-8033148

This should be explored with the following issue in mind, which concerns exposing 
an intrinsic that could be leveraged by the implementation: 

 https://bugs.openjdk.java.net/browse/JDK-8044082

Some preliminary work can be found here:

  
http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8033148-Arrays-lexico-compare/webrev/

The "explosion" in methods is annoying. Out of the 8 primitives 4 can be signed 
or unsigned. Objects can be comparable or an explicit comparator can be used. 
There are implicit and explicit ranges. That makes a maximum of 28 methods, but 
could be reduced to a minimum of 10, for all basic types.

Comparator instances, for implicit ranges, can easily be created from method 
refs or lambdas.
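
A sketch of that, assuming methods with the shape proposed above (an 
Arrays.compare and Arrays.compareUnsigned over whole byte arrays; the exact 
names are an assumption here):

import java.util.Arrays;
import java.util.Comparator;

// Hypothetical usage: once implicit-range compare methods exist on Arrays,
// the corresponding Comparator instances are just method references.
public class LexComparators {
    static final Comparator<byte[]> SIGNED_LEX   = Arrays::compare;
    static final Comparator<byte[]> UNSIGNED_LEX = Arrays::compareUnsigned;
}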

Thoughts?

Paul.


RFR: JDK-8072842 Add support for building native JTReg tests

2015-02-10 Thread Magnus Ihse Bursie
Here is an addition to the build system that will compile native 
libraries and executables and make them available for JTReg tests 
written in Java.


This patch is the result of the work (in serial, mostly) of Staffan 
Larsen, David Simms and me. (And possibly more that I don't know about.)


In its current form, the patch only provides the framework on which to 
attach real tests. To prove the concept, some initial dummy tests are 
present. These are supposed to be removed as soon as real tests start 
to propagate into the jdk and hotspot jtreg test suites.


The behavior is based on the following design: For directories with 
native jtreg compilation enabled, the build system searches for native 
files prefixed with either "lib" or "exe", and compiles a free-standing 
library or an executable, respectively, for each such file. Since all 
compiled files are placed in a single directory (this is currently a 
requirement from the jtreg native support), there is the additional 
requirement that all files have unique names.
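
For concreteness, a minimal sketch of a test written against this convention 
(the file names, the "DummyNative" library and the jtreg tags are illustrative 
assumptions, not taken from the patch):

/*
 * Hypothetical jtreg test paired with a native source file named
 * "libDummyNative.c" (the "lib" prefix tells the build to produce a
 * shared library named DummyNative).
 *
 * @test
 * @run main/othervm/native DummyNativeTest
 */
public class DummyNativeTest {
    static {
        // The compiled library is expected to be on the test's library path.
        System.loadLibrary("DummyNative");
    }

    private static native int answer();

    public static void main(String[] args) {
        if (answer() != 42) {
            throw new RuntimeException("unexpected result from native code");
        }
    }
}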


My personal opinion is that a better long-term approach is to compile 
all tests into a single library, if possible. (I realize some tests need 
to be separate to test library loading, etc.) For that to work, however, 
JTReg needs to be changed. The build framework in the patch is designed 
in such a way that it will be easy to add compilation to a common 
library in the future, if that restriction is lifted from JTReg.


This patch also lays the foundation for building additional tests, not 
only native jtreg tests, by the build system in the future. For 
instance, it codifies the suggested correspondence between make targets, 
intermediate test output and test image final build results. With the 
on-going test co-location project, we expect a stream of tests to be 
added to the build system in the future, and we hope this project can 
serve as an example of a suitable way of integration.


The patch modifies hotspot code, but it's mostly due to the addition of 
the new dummy tests. My preference would be to integrate this patch via 
jdk9-dev, since it modifies the build system most of all, but I'm open 
for discussion.


Bug: https://bugs.openjdk.java.net/browse/JDK-8072842
WebRev: 
http://cr.openjdk.java.net/~ihse/JDK-8072842-build-native-jtreg-tests/webrev.01


/Magnus


Re: JEP 102 Process Updates revised API draft

2015-02-10 Thread David M. Lloyd

On 02/09/2015 07:52 PM, Roger Riggs wrote:

On 2/9/15 6:44 PM, David M. Lloyd wrote:

Also, as a general comment, there isn't really a good explanation as
to what the difference is between a normal destroy and a forcible
destroy.  Given that you've added an isDestroyForcible() method, I
think it might be a good idea to explain what it means when this
method returns true.  There must be some criteria in the
implementation to return true here, so at the least, that criteria
should be explained.  Also the destroy() method now has the odd
characteristic that its implementation *may* forcibly destroy a
process, but you can't really determine that from the API at all.


From an implementation perspective, for Unix it is the distinction between 
SIGTERM and SIGKILL; one is allowed/expected to be caught and handled by the 
application for a clean shutdown, the other is not interceptable. But the OS 
variations and caveats make it hard to write anything more than an informative 
statement.


Understood, but I'm thinking that such a statement should be added; 
something along the lines of "Forcible process destruction is defined as 
the immediate termination of a process, whereas regular destruction 
allows a process to shut down cleanly."  This gives a clear criterion as 
to what it means when isDestroyForcible returns true, since each of 
these behaviors (at least on Unix) is readily identified with SIGKILL 
and SIGTERM.
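
For a caller, the distinction typically plays out as in the following sketch 
(standard Process API; the timeout value is chosen arbitrarily):

import java.util.concurrent.TimeUnit;

public class Shutdown {
    // Ask for a clean shutdown first (SIGTERM-like); escalate to forcible
    // termination (SIGKILL-like) only if the process does not exit in time.
    static void stop(Process p) throws InterruptedException {
        p.destroy();                          // may or may not be forcible, per the spec
        if (!p.waitFor(10, TimeUnit.SECONDS)) {
            p.destroyForcibly();              // immediate, non-interceptable termination
            p.waitFor();
        }
    }
}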


Upon rereading the API I see that isDestroyForcible() actually reflects 
the behavior of destroy(), which is the opposite of my original reading of 
it, and that clarifies things a bit.  But there is still no way in the 
API to know if forcible process termination is supported; can it be 
assumed that it is supported on all platforms?



The descriptions are copied from Process, which previously did not offer
an explanation.


--
- DML


Re: JEP 102 Process Updates revised API draft

2015-02-10 Thread Peter Levart

On 02/10/2015 12:35 PM, Peter Levart wrote:
ProcessHandle.completableFuture().cancel(true) forcibly destroys 
(destroyForcibly()) the process *and* vice versa: destroy[Forcibly]() 
cancels the CompletableFuture. I don't know if this is the best way - 
can't decide yet. In particular, in the implementation it would be 
hard to achieve the atomicity of both destroying the process and 
canceling the future. Races are inevitable. So it would be better to 
think of a process (and a ProcessHandle representing it) as the 1st 
stage in the processing pipeline, where 
ProcessHandle.completableFuture() is its dependent stage which tracks 
real changes of the process. Which means the behaviour would be 
something like the following:


- ProcessHandle.destroy[Forcibly]() triggers destruction of the 
process which in turn (when successful) triggers completion of 
CompletableFuture, exceptionally with CompletionException, wrapping 
the exception indicating the destruction of the process 
(ProcessDestroyedException?).


- ProcessHandle.completableFuture().cancel(true/false) just cancels 
the CompletableFuture and does not do anything to the process itself.


In that variant, then perhaps it would be more appropriate for 
ProcessHandle.completableFuture() to be a "factory" for 
CompletableFuture(s) so that each call would return new independent 
instance.


What do you think? 


Contemplating this a little more, perhaps the singleton-per-pid 
CompletionStage could be OK if it were a "mirror" of the real process state. 
For that purpose, instead of .completableFuture() the method would be:


public CompletionStage<ProcessHandle> completionStage()

Returns a CompletionStage<ProcessHandle> for the process. The 
CompletionStage provides supporting dependent functions and actions that 
are run upon process completion.


Returns:
a CompletionStage<ProcessHandle> for the ProcessHandle; the same 
instance is returned for each unique pid.



This would provide the most clean API I think, as CompletionStage does 
not have any cancel(), complete(), obtrudeXXX() or get() methods. One 
could still obtain a CompletableFuture by calling .toCompletableFuture() 
on the CompletionStage, but that future would be a 2nd stage future 
(like calling .thenApply(x -> x)) which would not propagate cancel(true) 
to the process destruction.
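
A minimal sketch of that decoupling, with a plain CompletableFuture standing in 
for the hypothetical internal per-process future:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class StageDecoupling {
    public static void main(String[] args) {
        // Stand-in for the future the implementation would complete on process exit.
        CompletableFuture<String> internal = new CompletableFuture<>();

        // What completionStage() might hand out: a dependent (2nd) stage of the source.
        CompletionStage<String> exposed = internal.thenApply(x -> x);

        // Cancelling the caller's view does not cancel the source (and, by analogy,
        // would not destroy the process behind it).
        exposed.toCompletableFuture().cancel(true);

        System.out.println(internal.isCancelled());   // false
        internal.complete("process exited");          // the source still completes normally
    }
}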


The implementation could still use CompletableFuture under the hood, but 
exposed wrapped in a delegating CompletionStage proxy.


So the javadoc might be written as:


public abstract void destroy()

Kills the process. Whether the process represented by this Process 
object is forcibly terminated or not is implementation dependent. If the 
process is not alive, no action is taken.


If/when the process dies as the result of calling destroy(), the 
completionStage() completes exceptionally with CompletionException, 
wrapping ProcessDestroyedException.



public abstract ProcessHandle destroyForcibly()

Kills the process. The process represented by this ProcessHandle object 
is forcibly terminated. If the process is not alive, no action is taken.


If/when the process dies as the result of calling destroyForcibly(), the 
completionStage() completes exceptionally with CompletionException, 
wrapping ProcessDestroyedException.



But I'm still unsure of whether it would be better for the 
completionStage() to complete normally in any case. Unless the fact that 
the process died as a result of killing it could be reliably 
communicated regardless of who was responsible for the killing (either 
via ProcessHandle.destroy() or by a KILL/TERMINATE signal originating 
from outside the VM).


Peter



Re: Unsafe.park/unpark, j.u.c.LockSupport and Thread

2015-02-10 Thread Paul Sandoz
Hi David,

On Feb 10, 2015, at 1:02 PM, David Holmes  wrote:

> Hi Paul,
> 
> When we added j.u.c there was reluctance to add these methods to Thread - 
> this was the normal policy (for the time) of don't touch the core classes. So 
> LockSupport is the public API and Unsafe was chosen as the implementation as 
> it is a de-facto interface to the VM from JDK code rather then defining 
> explicit native methods (I must admit I'm unsure why we went this route 
> rather than simply hooking into jvm.cpp with JVM_park/JVM_unpark. I think in 
> many ways it was/is just easier to call an Unsafe method and define a new 
> unsafe.cpp native call. Plus we had a bunch of other Unsafe API's being used 
> from j.u.c.)
> 
> So we can't just move things from LockSupport to Thread as we'd need to 
> deprecate first etc etc.

I was thinking that the implementation of LockSupport could be updated to use 
any new stuff we could add to java.lang.{*, Thread}.

I am trying to tease apart the internal dependencies that 166 classes have on 
the platform. In the case of LockSupport there are two, Unsafe and Thread.

Adding native methods to 166 classes will introduce another form of dependency 
on the platform that i would prefer to avoid.


> But I don't see any reason why we couldn't move the implementation from 
> unsafe.cpp to jvm.cpp and invoke via a native method implementation of 
> LockSupport. It would still be just as "unsafe" of course.
> 

Can you think of any reasons, beyond that of "don't touch the core classes", 
why we cannot copy this functionality over to java.lang.{*, Thread} ?


> Aside: this is where the infamous "spurious wakeup" from park() can arise. If 
> you just randomly unpark() a Thread there is nothing to guarantee that the 
> thread doesn't terminate and free its native resources while you are working 
> on it. But the ParkEvents are type-stable-memory, so even if the event has 
> been disassociated from the target thread you can still call unpark() as it 
> is never freed. However if that ParkEvent has since been reused by another 
> thread, then that thread will encounter a "spurious wakeup" if it calls 
> park().
> 

I see, and can the same happen for Object.wait as well? I gather so from the 
documentation.


On Feb 10, 2015, at 1:04 PM, David Holmes  wrote:
> Oh I just realized/remembered why we use Unsafe for this - it is because of 
> the potential for intrinsics.

I can certainly imagine it's a convenient place to put things in that regard.

Paul.



Re: RFR: JDK-8068682 - Deprivilege/move java.corba to the ext class loader

2015-02-10 Thread Mark Sheppard

OK I'll remove it.

I thought that property files had been migrated from lib to conf, as per 
conf/security,  so I made the change


regards
Mark

On 10/02/2015 11:37, Alan Bateman wrote:

On 10/02/2015 11:20, Mark Sheppard wrote:

thanks Alan

the updated corba part is at

http://cr.openjdk.java.net/~msheppar/8068682/corba/webrev.02/

I assume ORB.java isn't meant to be in this webrev (the lib->conf 
issue is separate and I think will need an @implNote in addition to 
checking for orb.properties in both lib and conf).


-Alan.




Re: Unsafe.park/unpark, j.u.c.LockSupport and Thread

2015-02-10 Thread David Holmes
Oh I just realized/remembered why we use Unsafe for this - it is because 
of the potential for intrinsics.


David

On 10/02/2015 10:02 PM, David Holmes wrote:

Hi Paul,

When we added j.u.c there was reluctance to add these methods to Thread
- this was the normal policy (for the time) of don't touch the core
classes. So LockSupport is the public API and Unsafe was chosen as the
implementation as it is a de-facto interface to the VM from JDK code
rather than defining explicit native methods (I must admit I'm unsure
why we went this route rather than simply hooking into jvm.cpp with
JVM_park/JVM_unpark. I think in many ways it was/is just easier to call
an Unsafe method and define a new unsafe.cpp native call. Plus we had a
bunch of other Unsafe APIs being used from j.u.c.)

So we can't just move things from LockSupport to Thread as we'd need to
deprecate first etc etc. But I don't see any reason why we couldn't move
the implementation from unsafe.cpp to jvm.cpp and invoke via a native
method implementation of LockSupport. It would still be just as "unsafe"
of course.

Aside: this is where the infamous "spurious wakeup" from park() can
arise. If you just randomly unpark() a Thread there is nothing to
guarantee that the thread doesn't terminate and free its native
resources while you are working on it. But the ParkEvents are
type-stable-memory, so even if the event has been disassociated from the
target thread you can still call unpark() as it is never freed. However
if that ParkEvent has since been reused by another thread, then that
thread will encounter a "spurious wakeup" if it calls park().

Cheers,
David

On 10/02/2015 8:49 PM, Paul Sandoz wrote:

Hi,

As part of the effort around Unsafe (some may have noticed some
cleanup work) i have been recently looking at the park/unpark methods.

The class java.util.concurrent.locks.LockSupport [1] has some thin
public wrappers around these methods:

 public static void unpark(Thread thread) {
 if (thread != null)
 U.unpark(thread);
 }

 public static void park() {
 U.park(false, 0L);
 }

 public static void parkNanos(long nanos) {
 if (nanos > 0)
 U.park(false, nanos);
 }

 public static void parkUntil(long deadline) {
 U.park(true, deadline);
 }

It is not clear to me what exactly is unsafe about park/unpark and why
they were not originally placed on Thread itself given the above
wrapping.

There is mention of unpark being unsafe with regards to native code
and ensuring the thread has not been destroyed:

   /**
* Unblock the given thread blocked on park, or, if it is
* not blocked, cause the subsequent call to park not to
* block.  Note: this operation is "unsafe" solely because the
* caller must somehow ensure that the thread has not been
* destroyed. Nothing special is usually required to ensure this
* when called from Java (in which there will ordinarily be a live
* reference to the thread) but this is not nearly-automatically
* so when calling from native code.
* @param thread the thread to unpark.
*
*/
   public native void unpark(Object thread);

However, native code is anyway inherently unsafe.


In addition this class has a cosy relationship with Thread (it wants
to poke into Thread's non-public fields):

 // Hotspot implementation via intrinsics API
 private static final sun.misc.Unsafe U =
sun.misc.Unsafe.getUnsafe();
 private static final long PARKBLOCKER;
 private static final long SEED;
 private static final long PROBE;
 private static final long SECONDARY;
 static {
 try {
 PARKBLOCKER = U.objectFieldOffset
 (Thread.class.getDeclaredField("parkBlocker"));
 SEED = U.objectFieldOffset

(Thread.class.getDeclaredField("threadLocalRandomSeed"));
 PROBE = U.objectFieldOffset

(Thread.class.getDeclaredField("threadLocalRandomProbe"));
 SECONDARY = U.objectFieldOffset

(Thread.class.getDeclaredField("threadLocalRandomSecondarySeed"));
 } catch (ReflectiveOperationException e) {
 throw new Error(e);
 }
 }

Although only PARKBLOCKER and SECONDARY are used AFAICT.

I am sure there is some history behind all this... but in my ignorance
of the past perhaps it's time to reconsider?

We could reduce the coupling on Thread and dependency on Unsafe if we
consider moving park/unpark and LockSupport functionality to Thread
itself.

Thoughts?

Paul.

[1]
http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/main/java/util/concurrent/locks/LockSupport.java?view=co




Re: Unsafe.park/unpark, j.u.c.LockSupport and Thread

2015-02-10 Thread David Holmes

Hi Paul,

When we added j.u.c there was reluctance to add these methods to Thread 
- this was the normal policy (for the time) of don't touch the core 
classes. So LockSupport is the public API and Unsafe was chosen as the 
implementation as it is a de-facto interface to the VM from JDK code 
rather than defining explicit native methods (I must admit I'm unsure 
why we went this route rather than simply hooking into jvm.cpp with 
JVM_park/JVM_unpark. I think in many ways it was/is just easier to call 
an Unsafe method and define a new unsafe.cpp native call. Plus we had a 
bunch of other Unsafe APIs being used from j.u.c.)


So we can't just move things from LockSupport to Thread as we'd need to 
deprecate first etc etc. But I don't see any reason why we couldn't move 
the implementation from unsafe.cpp to jvm.cpp and invoke via a native 
method implementation of LockSupport. It would still be just as "unsafe" 
of course.


Aside: this is where the infamous "spurious wakeup" from park() can 
arise. If you just randomly unpark() a Thread there is nothing to 
guarantee that the thread doesn't terminate and free its native 
resources while you are working on it. But the ParkEvents are 
type-stable-memory, so even if the event has been disassociated from the 
target thread you can still call unpark() as it is never freed. However 
if that ParkEvent has since been reused by another thread, then that 
thread will encounter a "spurious wakeup" if it calls park().


Cheers,
David

On 10/02/2015 8:49 PM, Paul Sandoz wrote:

Hi,

As part of the effort around Unsafe (some may have noticed some cleanup work) i 
have been recently looking at the park/unpark methods.

The class java.util.concurrent.locks.LockSupport [1] has some thin public 
wrappers around these methods:

 public static void unpark(Thread thread) {
 if (thread != null)
 U.unpark(thread);
 }

 public static void park() {
 U.park(false, 0L);
 }

 public static void parkNanos(long nanos) {
 if (nanos > 0)
 U.park(false, nanos);
 }

 public static void parkUntil(long deadline) {
 U.park(true, deadline);
 }

It is not clear to me what exactly is unsafe about park/unpark and why they were 
not originally placed on Thread itself given the above wrapping.

There is mention of unpark being unsafe with regards to native code and 
ensuring the thread has not been destroyed:

   /**
* Unblock the given thread blocked on park, or, if it is
* not blocked, cause the subsequent call to park not to
* block.  Note: this operation is "unsafe" solely because the
* caller must somehow ensure that the thread has not been
* destroyed. Nothing special is usually required to ensure this
* when called from Java (in which there will ordinarily be a live
* reference to the thread) but this is not nearly-automatically
* so when calling from native code.
* @param thread the thread to unpark.
*
*/
   public native void unpark(Object thread);

However, native code is anyway inherently unsafe.


In addition this class has a cosy relationship with Thread (it wants to poke 
into Thread's non-public fields):

 // Hotspot implementation via intrinsics API
 private static final sun.misc.Unsafe U = sun.misc.Unsafe.getUnsafe();
 private static final long PARKBLOCKER;
 private static final long SEED;
 private static final long PROBE;
 private static final long SECONDARY;
 static {
 try {
 PARKBLOCKER = U.objectFieldOffset
 (Thread.class.getDeclaredField("parkBlocker"));
 SEED = U.objectFieldOffset
 (Thread.class.getDeclaredField("threadLocalRandomSeed"));
 PROBE = U.objectFieldOffset
 (Thread.class.getDeclaredField("threadLocalRandomProbe"));
 SECONDARY = U.objectFieldOffset
 
(Thread.class.getDeclaredField("threadLocalRandomSecondarySeed"));
 } catch (ReflectiveOperationException e) {
 throw new Error(e);
 }
 }

Although only PARKBLOCKER and SECONDARY are used AFAICT.

I am sure there is some history behind all this... but in my ignorance of the 
past perhaps it's time to reconsider?

We could reduce the coupling on Thread and dependency on Unsafe if we consider 
moving park/unpark and LockSupport functionality to Thread itself.

Thoughts?

Paul.

[1] 
http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/main/java/util/concurrent/locks/LockSupport.java?view=co



Re: RFR: JDK-8068682 - Deprivilege/move java.corba to the ext class loader

2015-02-10 Thread Alan Bateman

On 10/02/2015 11:20, Mark Sheppard wrote:

thanks Alan

the updated corba part is at

http://cr.openjdk.java.net/~msheppar/8068682/corba/webrev.02/

I assume ORB.java isn't meant to be in this webrev (the lib->conf issue 
is separate and I think will need an @implNote in addition to checking 
for orb.properties in both lib and conf).
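
A minimal sketch of that kind of lookup (class and method names here are 
illustrative, not from the webrev):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class OrbPropertiesLookup {
    // Prefer the new <java.home>/conf location, but keep honouring the old
    // <java.home>/lib location for compatibility.
    static Path orbProperties() {
        Path home = Paths.get(System.getProperty("java.home"));
        Path inConf = home.resolve("conf").resolve("orb.properties");
        Path inLib = home.resolve("lib").resolve("orb.properties");
        return Files.exists(inConf) ? inConf : inLib;
    }

    public static void main(String[] args) {
        System.out.println("would read: " + orbProperties());
    }
}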


-Alan.


Re: RFR 8064924: Update java.net.URL to work with modules

2015-02-10 Thread Alan Bateman

On 09/02/2015 14:57, Chris Hegarty wrote:

:

Updated webrev [ spec and implementation] :
  http://cr.openjdk.java.net/~chegar/8064924/04/

The updated javadoc looks good. I haven't had a chance yet to look at 
the implementation but I think you will need to update 
/make/common/CORE_PKGS.gmk for the spi package.


-Alan.


Re: JEP 102 Process Updates revised API draft

2015-02-10 Thread Peter Levart

On 02/10/2015 12:25 AM, Roger Riggs wrote:

Hi,

After a protracted absence from working on JEP 102, the updated API draft
provides access to process hierarchies and individual process 
information;
as permitted by the OS. The relationship between Process and 
ProcessHandle

is clarified and the security model validated.

Both Processes and ProcessHandles can be monitored using 
CompletableFuture

for termination and to trigger additional actions on Process exit.
Information about processes includes the total cputime, starttime, user,
executable, and arguments.

Please review and comment:
   http://cr.openjdk.java.net/~rriggs/ph-apidraft/

Thanks, Roger





Hi Roger,

Great to see progress on this.

Here are my comments:

ProcessHandle.{allProcesses,allChildren,children} return 
Stream<ProcessHandle>. This is convenient if one wants to use the Stream 
API, but a little more inconvenient and perhaps with additional overhead 
if one doesn't. Since all those methods return a "snapshot", I can 
imagine that internally, there is a collection built from this snapshot. 
So it would be most appropriate to return a List<ProcessHandle>. One can 
always continue the expression by appending .stream()..., but on the 
other hand if one wants to .collect(toList()), additional copying is 
introduced.


ProcessHandle.completableFuture().cancel(true) forcibly destroys 
(destroyForcibly()) the process *and* vice versa: destroy[Forcibly]() 
cancels the CompletableFuture. I don't know if this is the best way - 
can't decide yet. In particular, in the implementation it would be hard 
to achieve the atomicity of both destroying the process and canceling 
the future. Races are inevitable. So it would be better to think of a 
process (and a ProcessHandle representing it) as the 1st stage in the 
processing pipeline, where ProcessHandle.completableFuture() is its 
dependent stage which tracks real changes of the process. Which means 
the behaviour would be something like the following:


- ProcessHandle.destroy[Forcibly]() triggers destruction of the process 
which in turn (when successful) triggers completion of 
CompletableFuture, exceptionally with CompletionException, wrapping the 
exception indicating the destruction of the process 
(ProcessDestroyedException?).


- ProcessHandle.completableFuture().cancel(true/false) just cancels the 
CompletableFuture and does not do anything to the process itself.


In that variant, then perhaps it would be more appropriate for 
ProcessHandle.completableFuture() to be a "factory" for 
CompletableFuture(s) so that each call would return new independent 
instance.


What do you think?

Regards, Peter



Re: RFR: JDK-8068682 - Deprivilege/move java.corba to the ext class loader

2015-02-10 Thread Mark Sheppard

thanks Alan

the updated corba part is at

http://cr.openjdk.java.net/~msheppar/8068682/corba/webrev.02/

regards
Mark

On 10/02/2015 09:14, Alan Bateman wrote:

On 07/02/2015 00:22, Mark Sheppard wrote:

Hi Alan,
   I had meant to remove the commented lines prior to generating the 
patch


Okay, so ignoring that part then the rest looks good to me. Hopefully 
we have enough tests in this area that run with a security manager to 
help find any issues.


-Alan




Unsafe.park/unpark, j.u.c.LockSupport and Thread

2015-02-10 Thread Paul Sandoz
Hi,

As part of the effort around Unsafe (some may have noticed some cleanup work) I 
have recently been looking at the park/unpark methods.

The class java.util.concurrent.locks.LockSupport [1] has some thin public 
wrappers around these methods:

    public static void unpark(Thread thread) {
        if (thread != null)
            U.unpark(thread);
    }

    public static void park() {
        U.park(false, 0L);
    }

    public static void parkNanos(long nanos) {
        if (nanos > 0)
            U.park(false, nanos);
    }

    public static void parkUntil(long deadline) {
        U.park(true, deadline);
    }
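
(As a usage refresher, a minimal sketch of the permit semantics these wrappers 
expose; the sleep is only a crude way to make the worker park first:)

import java.util.concurrent.locks.LockSupport;

public class ParkUnparkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // Blocks until a permit is available (or returns spuriously,
            // which callers must tolerate).
            LockSupport.park();
            System.out.println("worker released");
        });
        worker.start();

        Thread.sleep(100);             // let the worker reach park() first
        LockSupport.unpark(worker);    // hand the worker its permit
        worker.join();
    }
}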

It is not clear to me what exactly is unsafe about park/unpark and why they were 
not originally placed on Thread itself, given the above wrapping.

There is mention of unpark being unsafe with regards to native code and 
ensuring the thread has not been destroyed: 

  /**
   * Unblock the given thread blocked on park, or, if it is
   * not blocked, cause the subsequent call to park not to
   * block.  Note: this operation is "unsafe" solely because the
   * caller must somehow ensure that the thread has not been
   * destroyed. Nothing special is usually required to ensure this
   * when called from Java (in which there will ordinarily be a live
   * reference to the thread) but this is not nearly-automatically
   * so when calling from native code.
   * @param thread the thread to unpark.
   *
   */
  public native void unpark(Object thread);

However, native code is anyway inherently unsafe.


In addition this class has a cosy relationship with Thread (it wants to poke 
into Thread's non-public fields):

    // Hotspot implementation via intrinsics API
    private static final sun.misc.Unsafe U = sun.misc.Unsafe.getUnsafe();
    private static final long PARKBLOCKER;
    private static final long SEED;
    private static final long PROBE;
    private static final long SECONDARY;
    static {
        try {
            PARKBLOCKER = U.objectFieldOffset
                (Thread.class.getDeclaredField("parkBlocker"));
            SEED = U.objectFieldOffset
                (Thread.class.getDeclaredField("threadLocalRandomSeed"));
            PROBE = U.objectFieldOffset
                (Thread.class.getDeclaredField("threadLocalRandomProbe"));
            SECONDARY = U.objectFieldOffset
                (Thread.class.getDeclaredField("threadLocalRandomSecondarySeed"));
        } catch (ReflectiveOperationException e) {
            throw new Error(e);
        }
    }

Although only PARKBLOCKER and SECONDARY are used AFAICT.

I am sure there is some history behind all this... but in my ignorance of the 
past perhaps it's time to reconsider?

We could reduce the coupling on Thread and dependency on Unsafe if we consider 
moving park/unpark and LockSupport functionality to Thread itself.

Thoughts?

Paul.

[1] 
http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/main/java/util/concurrent/locks/LockSupport.java?view=co



Re: JEP 102 Process Updates revised API draft

2015-02-10 Thread Paul Sandoz

On Feb 10, 2015, at 12:25 AM, Roger Riggs  wrote:

> Hi,
> 
> After a protracted absence from working on JEP 102, the updated API draft
> provides access to process hierarchies and individual process information;
> as permitted by the OS. The relationship between Process and ProcessHandle
> is clarified and the security model validated.
> 
> Both Processes and ProcessHandles can be monitored using CompletableFuture
> for termination and to trigger additional actions on Process exit.
> Information about processes includes the total cputime, starttime, user,
> executable, and arguments.
> 
> Please review and comment:
>   http://cr.openjdk.java.net/~rriggs/ph-apidraft/
> 

Since I have your source it's easier in some cases if I copy/paste with line 
numbers:
  57  *
  58  * A {@link java.util.concurrent.CompletableFuture} available from {@link #completableFuture}
  59  * can be used to wait for process termination and execute a task.
  60  * The threads used to monitor the process and execute the task are pooled
  61  * using a {@link java.util.concurrent.Executors#newCachedThreadPool}.
Is that too much implementation detail, mentioning the use of the cached thread 
pool? If so, it seems more of an implementation note than specification.

Paul.


Re: RFR (xs): JDK-8072611: (process) ProcessBuilder redirecting output to file should work with long file names (win)

2015-02-10 Thread Thomas Stüfe
Hi Roger, Volker,

thanks for reviewing!

I added all requested changes:

@Roger
- use now Files.createTempDirectory to get a unique directory
- wrapped test in try/finally to cleanup afterwards
@Volker
- moved include up and added dependency to comment in io_util_md.h
- Now I use hostname.exe, which I hope is part of every Windows :)

http://cr.openjdk.java.net/~stuefe/webrevs/8072611/webrev.02/webrev/

Regards, Thomas

On Mon, Feb 9, 2015 at 7:48 PM, Volker Simonis 
wrote:

> Hi Thomas,
>
> the change looks good and I can sponsor it once it is reviewed.
>
> Just some small notes:
>
> - it seems that "getPath()" isn't used anywhere else in
> ProcessImpl_md.c and because it is static it can't be used anywhere
> else. So can you please remove it completely.
>
> - io_util_md.h has a list of files which use the prototypes like
> "pathToNTPath" before the declaration of the prototypes. Could you
> please also add ProcessImpl_md.c to this list
>
> - can you please place the new include in ProcessImpl_md.c as follows:
>
>  #include "io_util.h"
> +#include "io_util_md.h"
>  #include 
>  #include 
>
> I saw that system and local headers are already intermixed in that
> file but at I think at least we shouldn't introduce more disorder.
>
> - is "robocopy" really available on all supported Windows systems? In
> http://en.wikipedia.org/wiki/Robocopy I read that it was introduced in
> Server 2008. I just verified that Java 8 only supports Server 2008 and
> above, but just think of our internal test systems :) Maybe we can use a
> program which is supported in every Windows version?
>
> Regards,
> Volker
>
>
> On Mon, Feb 9, 2015 at 2:30 PM, Thomas Stüfe 
> wrote:
> > Hi all,
> >
> > please review this small change at your convenience:
> >
> > http://cr.openjdk.java.net/~stuefe/webrevs/8072611/webrev.01/webrev/
> >
> > It fixes a small issue which causes ProcessBuilder not to be able to open
> > files to redirect into (when using Redirect.append) on windows, if that
> > file name is longer than 255 chars.
> >
> > Kind Regards, Thomas Stuefe
>
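
For context, the scenario being fixed looks roughly like the sketch below 
(directory depth, file names and the use of hostname are placeholders; only 
Redirect.appendTo and the long absolute path matter):

import java.io.File;
import java.lang.ProcessBuilder.Redirect;
import java.nio.file.Files;
import java.nio.file.Path;

public class LongPathRedirect {
    public static void main(String[] args) throws Exception {
        // Build a deeply nested directory so the absolute path exceeds 255 chars.
        Path dir = Files.createTempDirectory("longpath");
        for (int i = 0; i < 20; i++) {
            dir = Files.createDirectories(dir.resolve("aaaaaaaaaaaaaaaaaaaa"));
        }
        File out = dir.resolve("output.txt").toFile();

        // Without the fix, opening this file for appending fails on Windows.
        Process p = new ProcessBuilder("hostname")
                .redirectErrorStream(true)
                .redirectOutput(Redirect.appendTo(out))
                .start();
        p.waitFor();
        System.out.println("wrote " + out.length() + " bytes to " + out);
    }
}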


Re: JEP 102 Process Updates revised API draft

2015-02-10 Thread Staffan Larsen
Happy to see this!

In ProcessHandle.Info would it be possible to include the environment variables 
of the process as well?

How does ProcessHandle.allChildren() behave when process A launches B which 
launches C, and B terminates? Is C included in allChildren() of A?

Thanks,
/Staffan

> On 10 feb 2015, at 00:25, Roger Riggs  wrote:
> 
> Hi,
> 
> After a protracted absence from working on JEP 102, the updated API draft
> provides access to process hierarchies and individual process information;
> as permitted by the OS. The relationship between Process and ProcessHandle
> is clarified and the security model validated.
> 
> Both Processes and ProcessHandles can be monitored using CompletableFuture
> for termination and to trigger additional actions on Process exit.
> Information about processes includes the total cputime, starttime, user,
> executable, and arguments.
> 
> Please review and comment:
>   http://cr.openjdk.java.net/~rriggs/ph-apidraft/
> 
> Thanks, Roger
> 
> 
> 
> 



Re: RFR: JDK-8068682 - Deprivilege/move java.corba to the ext class loader

2015-02-10 Thread Alan Bateman

On 07/02/2015 00:22, Mark Sheppard wrote:

Hi Alan,
   I had meant to remove the commented lines prior to generating the 
patch


Okay, so ignoring that part then the rest looks good to me. Hopefully we 
have enough tests in this area that run with a security manager to help 
find any issues.


-Alan


Re: RFR 8068587: ScriptEngineFactory.getParameter() should specify NPE for a null key

2015-02-10 Thread Marcus Lagergren
Looks OK to me. +1

/M

> On 10 Feb 2015, at 08:21, A. Sundararajan 
>  wrote:
> 
> Please review http://cr.openjdk.java.net/~sundar/8068587/ for 
> https://bugs.openjdk.java.net/browse/JDK-8068587
> 
> Thanks,
> -Sundar



Re: RFR 8068587: ScriptEngineFactory.getParameter() should specify NPE for a null key

2015-02-10 Thread Alan Bateman

On 10/02/2015 07:21, A. Sundararajan wrote:
Please review http://cr.openjdk.java.net/~sundar/8068587/ for 
https://bugs.openjdk.java.net/browse/JDK-8068587
This looks okay to me although it makes me wonder if JSR-223 will need 
to be updated.


-Alan.