Re: RFR 9:8148775 : Spec for j.l.ProcessBuilder.Redirect.DISCARD need to be improved

2016-02-15 Thread Chris Hegarty



On 12/02/16 23:10, Martin Buchholz wrote:

Still looks good!


+1.

-Chris.


On Fri, Feb 12, 2016 at 3:08 PM, Roger Riggs  wrote:

Thanks,  Martin

Updated in place.




On 2/12/2016 5:48 PM, Martin Buchholz wrote:

looks good.
The previous line

Redirect.DISCARD.file() the filename appropriate for the operating system
  557  * and may be null &&

could use the word "is"

On Fri, Feb 12, 2016 at 2:42 PM, Roger Riggs  wrote:

Please review a minor javadoc cleanup to remove a reference to a method
(append)
of the implementation.

Webrev:
http://cr.openjdk.java.net/~rriggs/webrev-discard-8148775/

Issue:
   https://bugs.openjdk.java.net/browse/JDK-8148775

Thanks, Roger




Re: RFR 9: 8149750 Decouple sun.misc.Signal from the base module

2016-02-15 Thread Chris Hegarty

On 12/02/16 17:47, Roger Riggs wrote:

Please review moving the functionality of sun.misc.Signal to
jdk.internal.misc and
creating a wrapper sun.misc.Signal for existing external uses.

+++ This time including the hotspot changes to update the target of the
upcalls.

A new replacement API will be considered separately.

The update includes a test that passes with or without the changes.
(Except for an NPE instead of SEGV if null is passed).

Webrev:
   jdk: http://cr.openjdk.java.net/~rriggs/webrev-signal-9149750/


Overall looks ok, and satisfies the requirement of JEP 260.

It took me a while to satisfy myself that it is ok to "ignore"
the passed Signal in the wrapper's 'handle' methods. The assumption
is that this wrapper's signal field and the passed signal do, and MUST,
represent the same underlying signal. Maybe an assert to make this
explicit?
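
A purely schematic sketch of what I mean (the class shape and field names
below are made up for illustration, not the webrev code; the real wrapper
holds a jdk.internal.misc.Signal):

    class SignalWrapperSketch {
        private final String name;   // stands in for the wrapped internal Signal

        SignalWrapperSketch(String name) { this.name = name; }

        // The upcall passes a signal, but the wrapper dispatches on its own field,
        // so assert that the two really do denote the same underlying signal.
        void handle(String raisedSignal) {
            assert raisedSignal.equals(name)
                    : "upcall for " + raisedSignal + " delivered to wrapper for " + name;
            System.out.println("handling " + name);
        }

        public static void main(String[] args) {
            new SignalWrapperSketch("SIGINT").handle("SIGINT");
        }
    }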

Looking at the j.i.m.Signal constructor, I can see that you explicitly check
for the 'SIG' prefix and prepend it if not there, but toString() also
prepends it. Will getName also be impacted by this (it may now have
the name prepended with 'SIG', where previously it was not)?


hotspot: http://cr.openjdk.java.net/~rriggs/webrev-hs-signal-9149750/


Looks fine.

-Chris.


Re: RFR - 8132734: java.util.jar.* changes to support multi-release jar files

2016-02-15 Thread Alan Bateman


On 10/02/2016 01:04, Steve Drach wrote:

Hi,

Yet another webrev, 
http://cr.openjdk.java.net/~sdrach/8132734/webrev.06/ 
, with a 
change to JarEntryIterator to fix a problem discovered by performance 
tests — calling hasNext() twice as often as needed.  I also removed 
the @since 9 tags from the methods entries() and stream(), and added 
an additional sentence to the spec for each of those methods that 
clarifies what a base entry is (actually is not).



I went through the latest webrev and it looks quite good.

A few comments on the javadoc:

"... partitioned by the major version of Java platform releases" - this 
might be better as "... partitioned by the major version of the Java 
platform".


In JarFile.Release then it uses the phrase "top-most (base) directory". 
I thought we had purged "top-*" from the javadoc in previous iterations 
because it hints of classes or resources in the top most directory 
(which isn't the case with classes in a named package).


"... will not be accessible by this JarFile" hints of access control or 
even security manager. Would it be clearer to re-word this to something 
like "will not be located by methods such as getEntry" ?


"returned depends whether" -> "returned depends on whether".

In the javadoc for entries() and stream() then it mentions "the 
constructor" many times. I would be tempted to replace many of these - 
for example "all entries are returned, regardless of the constructor" 
might be better as "all entries are returned, regardless of how the 
JarFile is created".


A couple of nits on the implementation:

vze = JarFile.super.getEntry(META_INF_VERSIONS + i-- + sname);
- it would be more readable if you move the decrement to its own line. 
Also I assume that JarFile is not needed here.


L942-943 looks messy too, I assume that can be cleaned up.

JarFileFactory - "earl" will confuse readers, needs a comment or a 
better name.


I think this is all that I have for now.

-Alan.


Re: We don't need jdk.internal.ref.Cleaner any more

2016-02-15 Thread Chris Hegarty

Peter,

> 
http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.02/


Given the changes in 6857566, I think your proposal to remove
jdk.internal.ref.Cleaner should be good. ( Note: third party libraries
digging into internals of direct buffers will need to change again ).

-Chris.

On 09/02/16 13:35, Chris Hegarty wrote:

Peter,

On 08/02/16 10:58, Peter Levart wrote:


On 02/08/2016 10:33 AM, Chris Hegarty wrote:

On 8 Feb 2016, at 06:27, Alan Bateman  wrote:


On 07/02/2016 22:20, Peter Levart wrote:

:

If the decision to remove sun.misc.Cleaner was partly influenced by
the desire to maintain just 2 instead of 3 Cleaner(s), then my
proposal to migrate JDK code to the public API might enable Oracle
to reconsider keeping sun.misc.Cleaner.

The main issue driving this is the design principles that we have
listed in JEP 200. We don't want a standard module (java.base in
this case) exporting a non-standard API. This means surgery to
jettison the sun.misc package from java.base and move it to its own
module (jdk.unsupported is the proposal in JEP 260). This is painful
of course but we are at least in good shape for the most critical
internal API, sun.misc.Unsafe.

For sun.misc.Cleaner then the original proposal was for it to be a
critical internal API too but it became clear that changing the type
of internal/private fields in the NIO buffer and channel classes
would break libraries that have been hacking into those fields. If
they are changing away then there seems little motive to keep
sun.misc.Cleaner, so Chris moved it into jdk.internal.ref for now.
As to whether we even need to keep jdk.internal.ref.Cleaner then I
think the only remaining question was whether the special handling
of Cleaner in the Reference implementation was worth it or not. It
may not be, in which case your current proposal to just remove it seems
right.

Alan’s summary of the situation is spot on.

Before moving sun.misc.Cleaner to jdk.internal.ref, I did enquire if
it would be
possible to just remove it and use the new public Cleaner, but I got
feedback
that there were some issues with failing NIO tests ( it appeared that
Cleaners
were not running as quickly as possible ). I didn’t look further into
that at the
time, since the VM still had special knowledge of these cleaners, but
as you
say this is now removed.  It is probably a good time to revisit this.

-Chris.


Hi Chris,

Are you referring to the following test:

   test/java/nio/Buffer/DirectBufferAllocTest.java ?

This test was written specifically for the following issue:

https://bugs.openjdk.java.net/browse/JDK-6857566

...which exercises multi-threaded allocation of direct buffers. Even
sun.misc.Cleaner was not quick enough to promptly deallocate them, so the
allocation path was patched to help the ReferenceHandler thread while
re-attempting to allocate new direct buffers. Having transitioned to
java.lang.ref.Cleaner, which uses an additional thread to process
Cleanable(s), the direct-buffer allocation path must help the Cleaner thread
too. See:


Ah, I was not aware of this. So additional threads attempting to
allocate already help out with cleaning, and moving away from the
ReferenceHandler thread is not as big a deal as I was thinking.


http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.02/src/java.base/share/classes/java/nio/Bits.java.sdiff.html


...for the place where this is performed...


I did look through your changes and I think they are good. I'd like
to spend a little more time on the details.

-Chris.


Regards, Peter





Re: RFR - 8132734: java.util.jar.* changes to support multi-release jar files

2016-02-15 Thread Steve Drach
Thank you Alan.  I’ll address the issues you bring up before integration.

> On Feb 15, 2016, at 4:30 AM, Alan Bateman  wrote:
> 
> 
> On 10/02/2016 01:04, Steve Drach wrote:
>> Hi,
>> 
>> Yet another webrev, http://cr.openjdk.java.net/~sdrach/8132734/webrev.06/ 
>> , with a change to 
>> JarEntryIterator to fix a problem discovered by performance tests — calling 
>> hasNext() twice as often as needed.  I also removed the @since 9 tags from 
>> the methods entries() and stream(), and added an additional sentence to the 
>> spec for each of those methods that clarifies what a base entry is (actually 
>> is not).
>> 
> I went through the latest webrev and it looks quite good.
> 
> A few comments on the javadoc:
> 
> "... partitioned by the major version of Java platform releases" - this might 
> be better as "... partitioned by the major version of the Java platform".
> 
> In JarFile.Release then it uses the phrase "top-most (base) directory". I 
> thought we had purged "top-*" from the javadoc in previous iterations because 
> it hints of classes or resources in the top most directory (which isn't the 
> case with classes in a named package).
> 
> "... will not be accessible by this JarFile" hints of access control or even 
> security manager. Would it clearer to re-word this to something like "will 
> not be located by methods such as getEntry" ?
> 
> "returned depends whether" -> "returned depends on whether".
> 
> In the javadoc for entries() and stream() then it mentions "the constructor" 
> many times. I would be tempted to replace many of these - for example "all 
> entries are returned, regardless of the constructor" might be better as "all 
> entries of returned, regards of how the JarFile is created".
> 
> A couple of nits on the implementation:
> 
> vze = JarFile.super.getEntry(META_INF_VERSIONS + i-- + sname);
> - it would be more readable if you move the decrement to its own line. Also I 
> assume that JarFile is not needed here.
> 
> L942-943 looks messy too, I assume that can be cleaned up.
> 
> JarFileFactory - "earl" will confuse readers, needs a comment or a better 
> name.
> 
> I think this is all that I have for now.
> 
> -Alan.



Re: We don't need jdk.internal.ref.Cleaner any more

2016-02-15 Thread Alan Bateman



On 15/02/2016 14:57, Chris Hegarty wrote:

Peter,

> 
http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.02/


Given the changes in 6857566, I think your proposal to remove
jdk.internal.ref.Cleaner should be good. ( Note: third party libraries
digging into internals of direct buffers will need to change again ).

I think so too, the main thing is that the fast path for cleaners has 
been removed (and I see that it has).


-Alan


Re: RFR: 8072727 - add variation of Stream.iterate() that's finite

2016-02-15 Thread Brian Goetz
The model that Tagir implemented has a natural analogue: the for loop.  

for (T v = seed; predicate.test(v); v = op.apply(v)) { … }

Like a for-loop, this proposed version of iterate() can generate an empty 
sequence; if the predicate applied to the seed returns false, you’ll get an 
empty stream.  

There are lots of for-loops that might want to be converted to streams, but 
don’t fit into the existing factories: 

for (T p = root; p != null; p = p.next) { … }
converts naturally to
Stream.iterate(root, p -> p != null, p -> p.next)…

Providing an initial value is not at odds with an empty sequence; it still gets 
tested by the predicate.  
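
As a self-contained illustration (the Node type below is invented for the
example; the three-arg iterate is the factory proposed here), the linked-list
for-loop above maps directly onto the new method:

    import java.util.stream.Stream;

    class IterateForLoopExample {
        // Hypothetical node type, just to make the linked-list example concrete.
        static final class Node {
            final int value;
            final Node next;
            Node(int value, Node next) { this.value = value; this.next = next; }
        }

        public static void main(String[] args) {
            Node root = new Node(1, new Node(2, new Node(3, null)));

            // Same traversal as: for (Node p = root; p != null; p = p.next) { ... }
            Stream.iterate(root, p -> p != null, p -> p.next)
                  .map(n -> n.value)
                  .forEach(System.out::println);   // prints 1, 2, 3

            // If the predicate rejects the seed, the stream is empty.
            Stream.iterate((Node) null, p -> p != null, p -> p.next)
                  .forEach(n -> System.out.println(n.value));   // prints nothing
        }
    }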


On Feb 14, 2016, at 8:03 PM, Stuart Marks  wrote:

> Hi Tagir,
> 
> Thanks for picking this up.
> 
> I'll be at a conference this week and I won't have much time to discuss this 
> in detail until afterward. But here are some quick thoughts about the 
> proposal.
> 
> I'd suggest focusing on the API first before worrying about how to track the 
> stream state with booleans, etc. Is the API convenient to use, and how well 
> does it support the use cases we envision for it?
> 
> In particular, I can imagine a number of cases where it would be very helpful 
> to be able to support an empty stream, or where the computation to produce 
> the first element is the same as the computation to produce subsequent 
> elements. Requiring a value for the first stream element is at odds with that.
> 
> Here are some ideas for use cases to try out:
> 
> - a series of dice rolls representing a round of craps [1]
> - elements drawn from a queue until the queue is empty or until
>   a sentinel is reached
> - a sequence of numbers that (probably) terminates but whose length
>   isn't necessarily known in advance (e.g. Collatz sequence [2])
> 
> [1] https://en.wikipedia.org/wiki/Craps
> 
> [2] https://en.wikipedia.org/wiki/Collatz_conjecture
> 
> Note that in some cases the sentinel value that terminates the stream should 
> be part of the stream, and in other cases it's not.
> 
> I'm sure you can find more use cases by perusing Stack Overflow. :-)
> 
> I'm a bit skeptical of the use of "iterate" for producing a finite stream. 
> There are the usual issues with overloading, but there's also potential 
> confusion as some forms of iterate() are infinite and others finite. I'll 
> suggest the name "produce" instead, but there are surely better terms.
> 
> One thing to think about is where the state of the producer is stored. Is it 
> expected to be in an argument that's passed to each invocation of the 
> functional argument, or is it expected to be captured? I don't think there's 
> an answer in isolation; examining use cases would probably shed some light 
> here.
> 
> Here are a few API ideas (wildcards elided):
> 
> --
> 
>  Stream<T> iterate(T seed, Predicate<T> predicate, UnaryOperator<T> f)
> 
> The API from your proposal, for comparison purposes.
> 
> --
> 
>  Stream<T> produce(Supplier<Optional<T>>)
> 
> Produces elements until empty Optional is returned. This box/unboxes every 
> element, maybe(?) alleviated by Valhalla.
> 
> --
> 
>  Stream<T> produce(BooleanSupplier, Supplier<T>)
> 
> Calls the BooleanSupplier; if true the next stream element is what's returned 
> by calling the Supplier. If BooleanSupplier returns false, end of stream. If 
> you have an iterator already, this enables
> 
> produce(iterator::hasNext, iterator::next)
> 
> But if you don't have an iterator already, coming up with the functions to 
> satisfy the iterator-style protocol is sometimes painful.
> 
> --
> 
>  Stream<T> produce(Predicate<Consumer<T>> advancer)
> 
> This has an odd signature, but the function is like Spliterator.tryAdvance(). 
> It must either call the consumer once and return true, or return false 
> without calling the consumer.
> 
> --
> 
>  Stream<T> produce(Consumer<Consumer<T>> advancer)
> 
> A variation of the above, without a boolean return. The advancer calls the 
> consumer one or more times to add elements to the stream. End of stream 
> occurs when the advancer doesn't call the consumer.
> 
> --
> 
>  Stream<T> produce(Supplier<Stream<T>>)
> 
> A variation of Supplier<Optional<T>> where the supplier returns a stream 
> containing zero or more elements. The stream terminates if the supplier 
> returns an empty stream. There is "boxing" overhead here, but we don't seem to 
> be bothered by this with flatMap().
> 
> --
> 
> s'marks
> 
> 
> On 2/14/16 6:53 AM, Tagir F. Valeev wrote:
>> Hello!
>> 
>> I wanted to work on foldLeft, but Brian asked me to take this issue
>> instead. So here's webrev:
>> http://cr.openjdk.java.net/~tvaleev/webrev/8072727/r1/
>> 
>> I don't like iterator-based Stream source implementations, so I made
>> them AbstractSpliterator-based. I also implemented manually
>> forEachRemaining as, I believe, this improves the performance in
>> non-short-circuiting cases.
>> 
>> I also decided to keep two flags (started and finished) to track the
>> state. Currently existing implementation of infinite iterate() does
>> not use started flag,

RE: We don't need jdk.internal.ref.Cleaner any more

2016-02-15 Thread Uwe Schindler
Hi,

> On 15/02/2016 14:57, Chris Hegarty wrote:
> > Peter,
> >
> > >
> > http://cr.openjdk.java.net/~plevart/jdk9-
> dev/removeInternalCleaner/webrev.02/
> >
> > Given the changes in 6857566, I think your proposal to remove
> > jdk.internal.ref.Cleaner should be good. ( Note: third party libraries
> > digging into internals of direct buffers will need to change again ).
> >
> I think so too, the main think is that the fast path for cleaners has
> been removed (and I see that it has).

What do you mean by fast path? The usual code pattern for forceful unmapping 
did not change:
- Make java.nio.DirectByteBuffer#cleaner() accessible (this is the thing where 
you need to fight against the Java access checks).
- call clean() (Runnable#run() in previous commit) on the returned object (with 
correct casting to interface before calling, this is where existing legacy code 
must change).

In preparation for these changes, I made a patch for Lucene 
(Solr/Elasticsearch) to "compile" the whole unmapping logic into a MethodHandle 
(this allows me to do all needed accessibility/security checks early on startup 
and before actually invoking unmapping). This allows us to tell the user that 
forceful unmapping works *before* we actually need to call it: 
https://issues.apache.org/jira/secure/attachment/12787878/LUCENE-6989.patch
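
A rough sketch of that idea (this is not the Lucene patch; the type and method
names follow the JDK 9 changes discussed in this thread, and the class below is
purely illustrative): resolve the cleaner() accessor and Cleanable::clean once,
at startup, and compose them into a single MethodHandle so that any access
failure surfaces early.

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;
    import java.lang.reflect.Method;
    import java.nio.ByteBuffer;

    class UnmapHandleSketch {
        static final MethodHandle UNMAPPER = createUnmapper();   // fails fast at class init

        private static MethodHandle createUnmapper() {
            try {
                ByteBuffer probe = ByteBuffer.allocateDirect(1);
                Method cleanerMethod = probe.getClass().getMethod("cleaner");
                // The access-check fight happens here, once; under a security manager this
                // needs "suppressAccessChecks", and a strongly encapsulated runtime may also
                // need --add-opens java.base/java.nio=ALL-UNNAMED.
                cleanerMethod.setAccessible(true);
                MethodHandles.Lookup lookup = MethodHandles.lookup();
                MethodHandle cleaner = lookup.unreflect(cleanerMethod);
                MethodHandle clean = lookup.findVirtual(cleanerMethod.getReturnType(), "clean",
                        MethodType.methodType(void.class));
                // buffer -> cleaner(buffer) -> clean(cleanable), folded into one handle
                return MethodHandles.filterReturnValue(cleaner, clean)
                        .asType(MethodType.methodType(void.class, ByteBuffer.class));
            } catch (ReflectiveOperationException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        static void unmap(ByteBuffer directBuffer) throws Throwable {
            UNMAPPER.invokeExact(directBuffer);
        }
    }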

This will be part of Lucene 6.0, which will require Java 8 as a minimum. Earlier 
Lucene versions will not be compatible with Java 9 and can only be used in the 
"slower" non-mmap mode.

Thanks,
Uwe



Re: RFR: 8072727 - add variation of Stream.iterate() that's finite

2016-02-15 Thread Stefan Zobel
2016-02-15 7:48 GMT+01:00 Peter Levart :
>
>
> What about even more permissive:
>
> static <T, S extends T> Stream<T> iterate3(S seed, Predicate<? super S>
> predicate, Function<? super S, ? extends S> f)
>


Hi Peter,

yes, Function<? super S, ? extends S> instead of UnaryOperator<S> is
indeed tempting.

On the other hand, for my taste, I find the wildcard-ridden signature
of iterate3 oversteps a certain complexity budget a little (that's
just my personal opinion, of course).

But you may be right - we already have such complex signatures in a
couple of other places. The question is, whether it would be really useful.


Regards,
Stefan



>
> public class SignatureTest {
>
>     static <T> Stream<T> iterate1(T seed, Predicate<T> predicate,
>                                   UnaryOperator<T> f) {
>         ...
>     }
>
>     static <T, S extends T> Stream<T> iterate2(S seed, Predicate<S> predicate,
>                                                UnaryOperator<S> f) {
>         ...
>     }
>
>     static <T, S extends T> Stream<T> iterate3(S seed, Predicate<? super S> predicate,
>                                                Function<? super S, ? extends S> f) {
>         ...
>     }
>
>
>     public static void main(String[] args) {
>
>         Stream<Integer> si1 = iterate1(0, i -> i < 10, i -> i + 1); // OK
>         Stream<Integer> si2 = iterate2(0, i -> i < 10, i -> i + 1); // OK
>         Stream<Integer> si3 = iterate3(0, i -> i < 10, i -> i + 1); // OK
>
>         Stream<Number> sn1 = iterate1(0, i -> i < 10, i -> i + 1);
>
>         //SignatureTest.java:32: error: bad operand types for binary operator '<'
>         //        Stream<Number> sn1 = iterate1(0, i -> i < 10, i -> i + 1);
>         //                                                 ^
>         //SignatureTest.java:32: error: bad operand types for binary operator '+'
>         //        Stream<Number> sn1 = iterate1(0, i -> i < 10, i -> i + 1);
>         //                                                                ^
>
>
>         Stream<Number> sn2 = iterate2(0, i -> i < 10, i -> i + 1); // OK
>         Stream<Number> sn3 = iterate3(0, i -> i < 10, i -> i + 1); // OK
>
>
>         Predicate<CharSequence> csNotEmpty = cs -> cs.length() > 0;
>
>         Stream<CharSequence> css1 = iterate1("abcde", csNotEmpty,
>                                              (String s) -> s.substring(1));
>
>         //SignatureTest.java:44: error: method iterate1 in class SignatureTest
>         //  cannot be applied to given types;
>         //        Stream<CharSequence> css1 = iterate1("abcde", csNotEmpty,
>         //                                    ^
>         //  required: T,Predicate<T>,UnaryOperator<T>
>         //  found: String,Predicate<CharSequence>,(String s)[...]ng(1)
>         //  reason: inference variable T has incompatible equality
>         //    constraints String,CharSequence
>         //  where T is a type-variable:
>         //    T extends Object declared in method
>         //      <T>iterate1(T,Predicate<T>,UnaryOperator<T>)
>
>
>         Stream<CharSequence> css2 = iterate2("abcde", csNotEmpty,
>                                              (String s) -> s.substring(1));
>
>         //SignatureTest.java:54: error: method iterate2 in class SignatureTest
>         //  cannot be applied to given types;
>         //        Stream<CharSequence> css2 = iterate2("abcde", csNotEmpty,
>         //                                    ^
>         //  required: S,Predicate<S>,UnaryOperator<S>
>         //  found: String,Predicate<CharSequence>,(String s)[...]ng(1)
>         //  reason: inference variable S has incompatible equality
>         //    constraints String,CharSequence
>         //  where S,T are type-variables:
>         //    S extends T declared in method
>         //      <T,S>iterate2(S,Predicate<S>,UnaryOperator<S>)
>         //    T extends Object declared in method
>         //      <T,S>iterate2(S,Predicate<S>,UnaryOperator<S>)
>
>
>         Stream<CharSequence> css3 = iterate3("abcde", csNotEmpty,
>                                              (String s) -> s.substring(1)); // OK
>     }
> }
>
>
> Regards, Peter
>
>
>
> Regards,
> Stefan
>
>
> 2016-02-14 15:53 GMT+01:00 Tagir F. Valeev :
>
> Hello!
>
> I wanted to work on foldLeft, but Brian asked me to take this issue
> instead. So here's webrev:
> http://cr.openjdk.java.net/~tvaleev/webrev/8072727/r1/
>
> I don't like iterator-based Stream source implementations, so I made
> them AbstractSpliterator-based. I also implemented manually
> forEachRemaining as, I believe, this improves the performance in
> non-short-circuiting cases.
>
> I also decided to keep two flags (started and finished) to track the
> state. Currently existing implementation of infinite iterate() does
> not use started flag, but instead reads one element ahead for
> primitive streams. This seems wrong to me and may even lead to
> unexpected exceptions (*). I could get rid of "started" flag for
> Stream.iterate() using Streams.NONE, but this would make object
> implementation different from primitive implementations. It would also
> be possible to keep single three-state variable (byte or int,
> NOT_STARTED, STARTED, FINISHED), but I doubt that this would improve
> the performance or footprint. Having two flags looks more readable to
> me.
>
> Currently existing two-arg iterate methods can now be expressed as a
> partial case of the new method:
>
> public static <T> Stream<T> iterate(final T seed, final UnaryOperator<T> f) {
>     return iterate(seed, x -> true, f);
> }
> (same for primitive streams). I may do this if you think it's
> reasonable.
>
> I created new test class and ad

Re: RFR: 8072727 - add variation of Stream.iterate() that's finite

2016-02-15 Thread Brian Goetz
Overall the implementation choices are sound and the testing story is 
nicely integrated into the existing test framework.


For the spec, I'd like to make the connection with the for-loop, and I'd 
like to clarify that this may produce an empty stream (if 
predicate.test(seed) is false.)


/**
1208 * Returns a sequential ordered {@code Stream} produced by iterative
1209 * application of a function {@code f} to an initial element {@code seed},
1210 * producing a {@code Stream} consisting of {@code seed}, {@code f(seed)},
1211 * {@code f(f(seed))}, etc. The stream terminates when {@code predicate}
1212 * returns false.
1213 *
1214 * The first element (position {@code 0}) in the {@code Stream} will be
1215 * the provided {@code seed}. For {@code n > 0}, the element at position
1216 * {@code n}, will be the result of applying the function {@code f} to the
1217 * element at position {@code n - 1}.
1218 *
1219 * @param <T> the type of stream elements
1220 * @param seed the initial element
1221 * @param predicate a predicate to apply to elements to determine when the
1222 * stream must terminate.
1223 * @param f a function to be applied to the previous element to produce
1224 * a new element
1225 * @return a new sequential {@code Stream}
1226 * @since 9
1227 */


Something like:

Returns a sequential ordered Stream produced by iterative application of 
a function to an initial element, conditioned on satisfying the supplied 
predicate.  The stream terminates as soon as the predicate function 
returns false.  Stream.iterate should produce the same sequence of 
elements as produced by the corresponding for-loop:


for (T index=seed; predicate.test(index); index = f.apply(index)) { ... }


The resulting sequence may be empty if the predicate does not hold on 
the seed value.  Otherwise the first element will be the supplied seed 
value, the next element (if present) will be the result of applying the 
function f to the seed value, and so on iteratively until the predicate 
indicates that the stream should terminate.



On 2/14/2016 9:53 AM, Tagir F. Valeev wrote:

Hello!

I wanted to work on foldLeft, but Brian asked me to take this issue
instead. So here's webrev:
http://cr.openjdk.java.net/~tvaleev/webrev/8072727/r1/

I don't like iterator-based Stream source implementations, so I made
them AbstractSpliterator-based. I also implemented manually
forEachRemaining as, I believe, this improves the performance in
non-short-circuiting cases.

I also decided to keep two flags (started and finished) to track the
state. Currently existing implementation of infinite iterate() does
not use started flag, but instead reads one element ahead for
primitive streams. This seems wrong to me and may even lead to
unexpected exceptions (*). I could get rid of "started" flag for
Stream.iterate() using Streams.NONE, but this would make object
implementation different from primitive implementations. It would also
be possible to keep single three-state variable (byte or int,
NOT_STARTED, STARTED, FINISHED), but I doubt that this would improve
the performance or footprint. Having two flags looks more readable to
me.

Currently existing two-arg iterate methods can now be expressed as a
partial case of the new method:

public static <T> Stream<T> iterate(final T seed, final UnaryOperator<T> f) {
    return iterate(seed, x -> true, f);
}
(same for primitive streams). I may do this if you think it's
reasonable.

I created new test class and added new iterate sources to existing
data providers.

Please review and sponsor!

With best regards,
Tagir Valeev.

(*) Consider the following code:

int[] data = {1,2,3,4,-1};
IntStream.iterate(0, x -> data[x])
  .takeWhile(x -> x >= 0)
  .forEach(System.out::println);

Currently this unexpectedly throws an AIOOBE, because
IntStream.iterate unnecessarily tries to read one element ahead.





Re: RFR JDK-8148624: Test failure of ConstructInflaterOutput.java

2016-02-15 Thread joe darcy

Looks fine Sherman; thanks,

-Joe

On 2/12/2016 8:58 PM, Xueming Shen wrote:

Hi,

Please help review the change for JDK-8148624

Issue: https://bugs.openjdk.java.net/browse/JDK-8148624
Webrev: http://cr.openjdk.java.net/~sherman/8148624/webrev/

The tests are intended to verify that the Deflater/Inflater.end() method will
not be invoked when the Deflater/Inflater is passed in as a parameter to the
constructor of the DeflaterInputStream and InflaterOutputStream classes, when
the corresponding close()/finish() method is invoked.  The stacktrace suggests
that the end() method is in fact being invoked (hence the failure), not by
close()/finish(), but by finalize(). It appears the deflater/inflater is
somehow being finalized by gc

between [A] and [B] shown below

    public static void main(String[] args) throws Throwable {
        try {realMain(args);} catch (Throwable t) {unexpected(t);}
[A]-->
        System.out.println("\nPassed = " + passed + " failed = " + failed);
[B]-->
        if (failed > 0) throw new AssertionError("Some tests failed");}


the problem can be easily reproduced by inserting a System.gc() in 
between.


The easy fix is just to move the inflater/deflater out of realMain() to
be static variables, to prevent them from being gc-ed.
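
A minimal standalone sketch of that fix (not the actual test; the names below are
illustrative): holding the Deflater in a static field keeps it strongly reachable
for the whole run, so GC cannot finalize it (and thereby call end()) between the
end of realMain() and the final pass/fail check.

    import java.util.zip.Deflater;

    class KeepDeflaterReachable {
        // static field: the Deflater stays reachable until the JVM exits
        static final Deflater def = new Deflater();

        static void realMain() {
            def.setInput(new byte[] {1, 2, 3});
            def.finish();
        }

        public static void main(String[] args) {
            realMain();
            // With a local Deflater inside realMain(), a GC here could finalize it
            // and invoke end() before the result is reported.
            System.gc();
            System.out.println("Deflater still reachable; end() not yet called");
            def.end();   // explicit cleanup at the very end
        }
    }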

Thanks,
Sherman




Re: RFR: 8072727 - add variation of Stream.iterate() that's finite

2016-02-15 Thread Paul Sandoz

> On 15 Feb 2016, at 18:39, Brian Goetz  wrote:
> 
> Overall the implementation choices are sound and the testing story is nicely 
> integrated into the existing test framework.
> 
> For the spec, I'd like to make the connection with the for-loop, and I'd like 
> to clarify that this may produce an empty stream (if predicate.test(seed) is 
> false.)
> 

Yes, +1, that is the behaviour I expect, but it could be misconstrued without 
such clarification.

I will take a closer look at the webrev tomorrow. I am not sure we need such 
test integration within the data providers; we have to be careful about 
increasing the overall time of test execution.

—

Note that the example in (*) is somewhat concocted :-) The function needs to 
work correctly for an infinite sequence, and not be "cognisant" of what 
operations follow downstream; in that sense it is ok to be aggressive, and 
parallel execution may also consume more elements than is strictly necessary.

For the three arg case we need exact control; the emulation iterate(s, 
uo).takeWhile(p) is not always sufficient, nor necessarily efficient.

Paul.

> /**
> 1208 * Returns a sequential ordered {@code Stream} produced by iterative
> 1209 * application of a function {@code f} to an initial element {@code seed},
> 1210 * producing a {@code Stream} consisting of {@code seed}, {@code f(seed)},
> 1211 * {@code f(f(seed))}, etc. The stream terminates when {@code predicate}
> 1212 * returns false.
> 1213 *
> 1214 * The first element (position {@code 0}) in the {@code Stream} will be
> 1215 * the provided {@code seed}. For {@code n > 0}, the element at position
> 1216 * {@code n}, will be the result of applying the function {@code f} to the
> 1217 * element at position {@code n - 1}.
> 1218 *
> 1219 * @param <T> the type of stream elements
> 1220 * @param seed the initial element
> 1221 * @param predicate a predicate to apply to elements to determine when the
> 1222 * stream must terminate.
> 1223 * @param f a function to be applied to the previous element to produce
> 1224 * a new element
> 1225 * @return a new sequential {@code Stream}
> 1226 * @since 9
> 1227 */
> 
> 
> Something like:
> 
> Returns a sequential ordered Stream produced by iterative application of a 
> function to an initial element, conditioned on satisfying the supplied 
> predicate.  The stream terminates as soon as the predicate function returns 
> false.  Stream.iterate should produce the same sequence of elements as 
> produced by the corresponding for-loop:
> 
>for (T index=seed; predicate.test(index); index = f.apply(index)) { ... }
> 
> The resulting sequence may be empty if the predicate does not hold on the 
> seed value.  Otherwise the first element will be the supplied seed value, the 
> next element (if present) will be the result of applying the function f to 
> the seed value, and so on iteratively until the predicate indicates that the 
> stream should terminate.
> 
> 
> On 2/14/2016 9:53 AM, Tagir F. Valeev wrote:
>> Hello!
>> 
>> I wanted to work on foldLeft, but Brian asked me to take this issue
>> instead. So here's webrev:
>> http://cr.openjdk.java.net/~tvaleev/webrev/8072727/r1/
>> 
>> I don't like iterator-based Stream source implementations, so I made
>> them AbstractSpliterator-based. I also implemented manually
>> forEachRemaining as, I believe, this improves the performance in
>> non-short-circuiting cases.
>> 
>> I also decided to keep two flags (started and finished) to track the
>> state. Currently existing implementation of infinite iterate() does
>> not use started flag, but instead reads one element ahead for
>> primitive streams. This seems wrong to me and may even lead to
>> unexpected exceptions (*). I could get rid of "started" flag for
>> Stream.iterate() using Streams.NONE, but this would make object
>> implementation different from primitive implementations. It would also
>> be possible to keep single three-state variable (byte or int,
>> NOT_STARTED, STARTED, FINISHED), but I doubt that this would improve
>> the performance or footprint. Having two flags looks more readable to
>> me.
>> 
>> Currently existing two-arg iterate methods can now be expressed as a
>> partial case of the new method:
>> 
>> public static <T> Stream<T> iterate(final T seed, final UnaryOperator<T> f) {
>>     return iterate(seed, x -> true, f);
>> }
>> (same for primitive streams). I may do this if you think it's
>> reasonable.
>> 
>> I created new test class and added new iterate sources to existing
>> data providers.
>> 
>> Please review and sponsor!
>> 
>> With best regards,
>> Tagir Valeev.
>> 
>> (*) Consider the following code:
>> 
>> int[] data = {1,2,3,4,-1};
>> IntStream.iterate(0, x -> data[x])
>>  .takeWhile(x -> x >= 0)
>>  .forEach(System.out::println);
>> 
>> Currently this unexpectedly throws an AIOOBE, because
>> IntStream.iterate unnecessarily tries to read one element ahead.
>> 
> 



RE: RFR: JDK-8031767 Support system or alternative implementations of zlib

2016-02-15 Thread Viswanathan, Sandhya
I would like to reiterate the need for system zlib support by the JVM instead 
of the bundled zlib, from a performance point of view.

Compression is one of the hotspots in Big Data, Genomics and Middleware 
applications.   
There are many solutions in the market today which help improve compression 
performance either through software or hardware acceleration.
Intel has faster compression libraries as part of IPP as Sherman mentioned. 
Also there are hardware compression accelerators from Intel like QuickAssist.
Both of these provide the acceleration through the zlib interface. 
Support for system zlib by the JVM would make these acceleration capabilities 
available to Java applications almost seamlessly through java.util.zip.

Best Regards,
Sandhya

-Original Message-
From: core-libs-dev [mailto:core-libs-dev-boun...@openjdk.java.net] On Behalf 
Of Martin Buchholz
Sent: Thursday, February 11, 2016 10:49 AM
To: Seán Coffey
Cc: build-dev; core-libs-dev
Subject: Re: RFR: JDK-8031767 Support system or alternative implementations of 
zlib

Currently the pendulum is swinging away from multiple applications
sharing common libraries towards every application being
self-contained, perhaps because disk space is dirt cheap and because
of the rise of "containers".  It may be that much of the packaging of
jdks will be picked up by third parties who package up the JDK and any
of its dependencies - e.g. I already see
https://hub.docker.com/_/java/

On Thu, Feb 11, 2016 at 2:34 AM, Seán Coffey  wrote:
>
> On 11/02/2016 00:25, Xueming Shen wrote:
>>
>>
>> One of the benefits of moving to the system libz is actually for
>> better/easy
>> maintenance. Just replacing the offending version of libz with an
>> earlier/later
>> version that works, instead of waiting for a customized jdk/jre image with
>> a
>> working/bundled libz, or the next update release. Especially given the
>> fact
>> that we have decided not to touch the libz at source level. Having
>> dependency
>> on the system libz is really not that bad. The experience suggests those
>> binaries are quite stable. And it should always be easier to replace the
>> system libz than a java runtime, in case of problem.
>
>
> I think people overestimate people's ability to 'just replace' a zlib
> library. Adding a new system property is a struggle for some enterprises.
> We'll enjoy supporting a many to one matrix of zlib:JDK scenarios now rather
> than old style 1:1. It's great to have system library support - I'm just
> highlighting a possible risk. A fall back option solves that but there's no
> appetite for such a solution. Let's see how it goes.
>
> regards,
> Sean.
>
>>
>> Sherman
>>
>> On 02/10/2016 06:41 AM, Seán Coffey wrote:
>>>
>>>
>>> On 10/02/16 14:29, Alan Bateman wrote:


 On 10/02/2016 13:57, Seán Coffey wrote:
>
> I'm all for allowing one to introduce a new version of zlib to their
> JDK at runtime. I just don't think it's in the interests of enterprises 
> and
> stability to introduce a dependency to the JDK on the underlying OS zlib
> libraries. Could we at least consider a runtime property to allow linking 
> to
> the (currently bundled) zlib v.1.2.8 port in case issues arise ?

 Don't the LD_* environment variables serve this need already? Once the
 JDK is using the system zlib then this is the simplest way to get it to use
 a different libz library at runtime.
>>>
>>> No - I don't see that as a solution. You've still made the default JDK
>>> config become dependent on OS environment for all libzip operations. I don't
>>> think we even capture the zlib version that the JDK would be operating with
>>> in any diagnostics. An application wanting a tried/tested and stable libzip
>>> version has extra work to do now. Letting the default be system dependent
>>> has just increased risk for QA teams also. A system property just makes this
>>> all go away. In fact - I would say that for JDK9, the default should be the
>>> JDK bundled libzip library. For those looking for libzip experimenting and
>>> performance benefits, they could take the system property approach.
>>>
>>> regards,
>>> Sean.


 -Alan
>>>
>>>
>>>
>>> On 08/02/16 09:55, Alan Bateman wrote:


 On 08/02/2016 10:42, Seán Coffey wrote:
>
> Is there an option to fall back to the older v.1.2.8 implementation if
> necessary ? It would certainly alleviate issues for any applications that
> might run into issues with the latest and greatest zlib libraries.
> JDK-8133206 would be one such example.

 Just at build time
>>>
>>> so - we introduce a runtime dependency on the underlying zlib libraries
>>> on the OS by default. I would be very concerned with this approach. We've
>>> run JDK 6 for 10+ years with zlib v1.1.3. It was consistent and reliable for
>>> the most part. When we moved JDK7/8 to zlib v1.2.5, we encountered an
>>> inflation issue[1]. When JDK was upgraded to zlib v1.2.8, we receive

[9] RFR (XS): 8148518: Unsafe.getCharUnaligned() loads aren't folded in case of -XX:-UseUnalignedAccesses

2016-02-15 Thread Vladimir Ivanov

http://cr.openjdk.java.net/~vlivanov/8148518/webrev.00/
https://bugs.openjdk.java.net/browse/JDK-8148518

The current Unsafe::getCharUnaligned Java implementation isn't fully 
optimized by C2: it never constant folds such loads from char fields.


C2 matches the types of a load and a location and does constant folding 
only if they match. For getCharUnaligned it sees a load of a short 
value, but the field is of type char.


The fix is to call Unsafe::getChar when the offset is aligned. It doesn't 
matter what is used for the unaligned case, so I decided to keep makeShort().
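
Not the actual webrev change, but a standalone sketch of the approach described
above, using a ByteBuffer instead of Unsafe: the aligned case delegates to a
char-typed read (which C2 can then fold against a char field), while the
unaligned case assembles the 16-bit value from two byte reads, like makeShort().

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    class GetCharUnalignedSketch {
        static char getCharUnaligned(ByteBuffer buf, int offset) {
            if ((offset & 1) == 0) {
                return buf.getChar(offset);          // aligned: use the char-typed read
            }
            // unaligned: compose the value from two byte loads (the makeShort() path)
            int b0 = buf.get(offset) & 0xFF;
            int b1 = buf.get(offset + 1) & 0xFF;
            return (char) (buf.order() == ByteOrder.BIG_ENDIAN
                    ? (b0 << 8) | b1
                    : (b1 << 8) | b0);
        }

        public static void main(String[] args) {
            ByteBuffer buf = ByteBuffer.allocate(8);
            buf.putChar(2, 'Z');
            System.out.println(getCharUnaligned(buf, 2));   // prints Z
        }
    }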


Testing: failing test, JPRT.

Best regards,
Vladimir Ivanov

PS: I don't update the test because it was refactored in Jigsaw [1]. To 
avoid unnecessary conflicts during merges, I filed an RFE [2] to adjust 
the test once the fix and Jake are integrated.


[1] 
http://hg.openjdk.java.net/jigsaw/jake/hotspot/file/f6daf3633512/test/compiler/unsafe/UnsafeGetConstantField.java


[2] https://bugs.openjdk.java.net/browse/JDK-8149844


Re: [9] RFR (XS): 8148518: Unsafe.getCharUnaligned() loads aren't folded in case of -XX:-UseUnalignedAccesses

2016-02-15 Thread Vladimir Kozlov

Looks good to me.

Thanks,
Vladimir K

On 2/15/16 11:01 AM, Vladimir Ivanov wrote:

http://cr.openjdk.java.net/~vlivanov/8148518/webrev.00/
https://bugs.openjdk.java.net/browse/JDK-8148518

Current Unsafe::getCharUnaligned Java implementation isn't fully optimized by 
C2: it never constant folds such loads
from char fields.

C2 matches the types of a load and a location and does constant folding only if 
they match. For getCharUnaligned it sees
a load of a short value, but the field is of type char.

The fix is to call Unsafe::getChar when offset is aligned. It doesn't matter 
what is used for unaligned case, so I
decided to keep makeShort().

Testing: failing test, JPRT.

Best regards,
Vladimir Ivanov

PS: I don't update the test because it was refactored in Jigsaw [1]. To avoid 
unnecessary conflicts during merges, I
filed an RFE [2] to adjust the test once the fix and Jake are integrated.

[1] 
http://hg.openjdk.java.net/jigsaw/jake/hotspot/file/f6daf3633512/test/compiler/unsafe/UnsafeGetConstantField.java

[2] https://bugs.openjdk.java.net/browse/JDK-8149844


Re: We don't need jdk.internal.ref.Cleaner any more

2016-02-15 Thread Mandy Chung

> On Feb 15, 2016, at 9:06 AM, Uwe Schindler  wrote:
> 
> Hi,
> 
>> On 15/02/2016 14:57, Chris Hegarty wrote:
>>> Peter,
>>> 
>>> http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.02/
>>> 

This patch looks correct to me.  The innocuous thread created for the common 
cleaner is of Thread.MAX_PRIORITY - 2 priority whereas the reference handler 
thread is of MAX_PRIORITY priority.   I don’t know if this would impact any 
real-world application or whether it is worth having a dedicated thread with 
MAX_PRIORITY (David Holmes may have a recommendation on this).

Another subtle difference is that there is no longer a package access check when 
running with a security manager.  On the other hand, the “suppressAccessChecks” 
permission is still checked.  I don’t have an issue with this.

>>> Given the changes in 6857566, I think your proposal to remove
>>> jdk.internal.ref.Cleaner should be good. ( Note: third party libraries
>>> digging into internals of direct buffers will need to change again ).
>>> 
>> I think so too, the main think is that the fast path for cleaners has
>> been removed (and I see that it has).
> 
> What do you mean with fast path?

I believe Alan refers to the fast path handling jdk.internal.ref.Cleaner via 
the Reference Handler thread.

What this means to existing code is that the java.nio.DirectByteBuffer::cleaner 
method still exists but with a java.lang.ref.Cleaner.Cleanable return type.  You 
still need to call setAccessible(true) on the cleaner method.  You can then 
statically reference the java.lang.ref.Cleaner.Cleanable::clean method.
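
For illustration, a minimal sketch of that pattern (method and type names are
as described in this thread; the helper class below is hypothetical, not JDK or
Lucene code):

    import java.lang.reflect.Method;
    import java.nio.ByteBuffer;

    class DirectBufferUnmapSketch {
        // Reflectively obtain the buffer's cleaner and invoke Cleanable::clean on it.
        static void unmap(ByteBuffer directBuffer) throws ReflectiveOperationException {
            Method cleanerMethod = directBuffer.getClass().getMethod("cleaner");
            // Needs "suppressAccessChecks" under a security manager; a strongly
            // encapsulated runtime may also need --add-opens java.base/java.nio=ALL-UNNAMED.
            cleanerMethod.setAccessible(true);
            Object cleanable = cleanerMethod.invoke(directBuffer);
            // With this change the returned object is a java.lang.ref.Cleaner.Cleanable
            // rather than the old internal Cleaner type.
            ((java.lang.ref.Cleaner.Cleanable) cleanable).clean();
        }

        public static void main(String[] args) throws ReflectiveOperationException {
            unmap(ByteBuffer.allocateDirect(1024));
            System.out.println("direct buffer memory released");
        }
    }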


Mandy

> The usual code pattern for forceful unmapping did not change:
> - Make java.nio.DirectByteBuffer#cleaner() accessible (this is the thing 
> where you need to fight against the Java access checks).
> - call clean() (Runnable#run() in previous commit) on the returned object 
> (with correct casting to interface before calling, this is where existing 
> legacy code must change).
> 
> In preparation for these changes, I made a patch for Lucene 
> (Solr/Elasticsearch) to "compile" the whole unmapping logic into a 
> MethodHandle (this allows me to do all needed accesibility/security checks 
> early on startup and before actually invoking unmapping). This allows us to 
> tell user that forceful unmapping works *before* we actually need to call it: 
> https://issues.apache.org/jira/secure/attachment/12787878/LUCENE-6989.patch
> 
> This will be part of Lucene 6.0 which will require Java 8 as minimum. Earlier 
> Lucene version will not be compatible to Java 9 and can only be used in 
> "slower" non-mmap mode.
> 
> Thanks,
> Uwe
> 



Re: [9] RFR (XS): 8148518: Unsafe.getCharUnaligned() loads aren't folded in case of -XX:-UseUnalignedAccesses

2016-02-15 Thread Aleksey Shipilev
On 02/15/2016 10:01 PM, Vladimir Ivanov wrote:
> http://cr.openjdk.java.net/~vlivanov/8148518/webrev.00/
> https://bugs.openjdk.java.net/browse/JDK-8148518

+1

> Current Unsafe::getCharUnaligned Java implementation isn't fully
> optimized by C2: it never constant folds such loads from char fields.
> C2 matches the types of a load and a location and does constant folding
> only if they match. For getCharUnaligned it sees a load of a short
> value, but the field is of type char.

Does the same affect putCharUnaligned?

Cheers,
-Aleksey




RFR: 8135108: java/util/zip/TestLocalTime.java fails intermittently with Invalid value for NanoOfSecond

2016-02-15 Thread Xueming Shen

Hi,

Please help review the changes for JDK-8135108

issue: https://bugs.openjdk.java.net/browse/JDK-8135108
webrev: http://cr.openjdk.java.net/~sherman/8135108

It's an overflow of the low 32 bits of xdostime when the year is > 0x70
(which will be shifted left by 25). We have a similar patch in
ZipUtils.javaToDosTime(). Obviously ZipEntry.setTimeLocale() needs
one as well.

    /**
     * Converts Java time to DOS time.
     */
    private static long javaToDosTime(long time) {
        Instant instant = Instant.ofEpochMilli(time);
        LocalDateTime ldt = LocalDateTime.ofInstant(
            instant, ZoneId.systemDefault());
        int year = ldt.getYear() - 1980;
        if (year < 0) {
            return (1 << 21) | (1 << 16);
        }
        return (year << 25 |
            ldt.getMonthValue() << 21 |
            ldt.getDayOfMonth() << 16 |
            ldt.getHour() << 11 |
            ldt.getMinute() << 5 |
            ldt.getSecond() >> 1) & 0xffffffffL;
    }
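
A tiny standalone demonstration of the overflow being fixed (the year value is
chosen to match the description above; this is illustrative, not the webrev
change): without the 0xffffffffL mask, the int result of the shift sign-extends
when widened to long and corrupts the upper 32 bits of xdostime.

    class DosTimeOverflowDemo {
        public static void main(String[] args) {
            int year = 0x71;                           // i.e. calendar year 1980 + 0x71 = 2093
            long unmasked = year << 25;                // sign bit set -> widens to a negative long
            long masked = (year << 25) & 0xffffffffL;  // low 32 bits only, as in javaToDosTime
            System.out.println(Long.toHexString(unmasked)); // ffffffffe2000000
            System.out.println(Long.toHexString(masked));   // e2000000
        }
    }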

Thanks!
Sherman


Re: RFR: 8135108: java/util/zip/TestLocalTime.java fails intermittently with Invalid value for NanoOfSecond

2016-02-15 Thread Joseph D. Darcy

Looks fine Sherman; thanks,

-Joe

On 2/15/2016 2:45 PM, Xueming Shen wrote:

Hi,

Please help review the changes for JDK-8135108

issue: https://bugs.openjdk.java.net/browse/JDK-8135108
webrev: http://cr.openjdk.java.net/~sherman/8135108

It's a overflow of the low 32-bit of xdostime when the year is > 0x70
(which will be shifting << 25). We have a similar patch at
ZipUtils.javaToDosTime(). Obviously ZipEntry.setTimeLocale() needs
one as well.

   /**
 * Converts Java time to DOS time.
 */
private static long javaToDosTime(long time) {
Instant instant = Instant.ofEpochMilli(time);
LocalDateTime ldt = LocalDateTime.ofInstant(
instant, ZoneId.systemDefault());
int year = ldt.getYear() - 1980;
if (year < 0) {
return (1 << 21) | (1 << 16);
}
return (year << 25 |
ldt.getMonthValue() << 21 |
ldt.getDayOfMonth() << 16 |
ldt.getHour() << 11 |
ldt.getMinute() << 5 |
ldt.getSecond() >> 1) & 0xffffffffL;
}

Thanks!
Sherman




Re: We don't need jdk.internal.ref.Cleaner any more

2016-02-15 Thread David Holmes

Hi Mandy,

On 16/02/2016 6:05 AM, Mandy Chung wrote:



On Feb 15, 2016, at 9:06 AM, Uwe Schindler  wrote:

Hi,


On 15/02/2016 14:57, Chris Hegarty wrote:

Peter,

http://cr.openjdk.java.net/~plevart/jdk9-dev/removeInternalCleaner/webrev.02/



This patch looks correct to me.  The innocuous thread created for common 
cleaner is of Thread.MAX_PRIORITY - 2 priority whereas the reference handler 
thread is of MAX_PRIORITY priority.   I don’t know if this would impact any 
real-world application or whether worth having a dedicated thread with 
MAX_PRIORTY (David Holmes may have recommendation to this).


How did you know I would read this? :)

Thread priority has no meaning on Solaris, Linux or BSD/OSX by default.

Only Windows actually applies thread priorities under normal conditions. 
The difference between MAX_PRIORITY and MAX_PRIORITY-2 is the former 
uses THREAD_PRIORITY_HIGHEST and the latter 
THREAD_PRIORITY_ABOVE_NORMAL. Without any real/reasonable workloads for 
benchmarking it is all somewhat arbitrary. Arguably the Reference 
handler thread has more work to do in general so might be better given 
the higher priority.


Cheers,
David
-


Another subtle difference is no more package access check whether running with 
security manager.  On the other hand, “suppressAccessChecks” permission is 
still checked.  I don’t have issue with this.


Given the changes in 6857566, I think your proposal to remove
jdk.internal.ref.Cleaner should be good. ( Note: third party libraries
digging into internals of direct buffers will need to change again ).


I think so too, the main think is that the fast path for cleaners has
been removed (and I see that it has).


What do you mean with fast path?


I believe Alan refers to the fast path handling jdk.internal.ref.Cleaner via 
the Reference Handler thread.

What this means to existing code is that the java.nio.DirectByteBuffer::cleaner 
method still exists but with java.lang.ref.Cleaner.Cleanable return type.  You 
still need to call setAccessible(true) on the cleaner method.  You can then 
statically reference java.lang.ref.Cleaner.Cleanable::clean method


Mandy


The usual code pattern for forceful unmapping did not change:
- Make java.nio.DirectByteBuffer#cleaner() accessible (this is the thing where 
you need to fight against the Java access checks).
- call clean() (Runnable#run() in previous commit) on the returned object (with 
correct casting to interface before calling, this is where existing legacy code 
must change).

In preparation for these changes, I made a patch for Lucene (Solr/Elasticsearch) to 
"compile" the whole unmapping logic into a MethodHandle (this allows me to do 
all needed accesibility/security checks early on startup and before actually invoking 
unmapping). This allows us to tell user that forceful unmapping works *before* we 
actually need to call it: 
https://issues.apache.org/jira/secure/attachment/12787878/LUCENE-6989.patch

This will be part of Lucene 6.0 which will require Java 8 as minimum. Earlier Lucene 
version will not be compatible to Java 9 and can only be used in "slower" 
non-mmap mode.

Thanks,
Uwe





JDK 9 RFR of JDK-8149896: Remove unnecessary values in FloatConsts and DoubleConsts

2016-02-15 Thread joe darcy

Hello,

The FloatConsts and DoubleConsts classes, while recently moved to an internal 
package (JDK-8145990), contain constants that are now available via the 
public API. All uses of the redundant values should be removed, as 
well as the redundant constants themselves.


A quick note on the 2D changes: several constants (and a copy of a 
package-private method from java.lang) were used to initialize a double 
value POWER_2_TO_32; this can be accomplished more directly using a 
hexadecimal floating-point literal.
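
For example (a small illustration of that simplification; the constant name is
the one mentioned above, and the class is just a harness for the snippet):

    class Power2To32Example {
        // Hexadecimal floating-point literal: 0x1.0p32 == 2^32 == 4294967296.0
        private static final double POWER_2_TO_32 = 0x1.0p32;

        public static void main(String[] args) {
            System.out.println(POWER_2_TO_32 == Math.pow(2, 32));   // true
        }
    }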


Please review the webrev

http://cr.openjdk.java.net/~darcy/8149896.0/

Thanks,

-Joe