Re: JEP 269: Convenience Factory Methods for Collections

2015-10-07 Thread Stephen Colebourne
On 7 October 2015 at 01:13, Stuart Marks  wrote:
> My question is, is this enough of a problem that we should allow nulls in
> these collections? I would prefer not to do this, but if there is evidence
> that this would be a mistake, I'd like to hear it.
>
> And if disallowing nulls will cause developers to create things like
> Map, are we ok with that, and are developers ok with that?

Given what we know now vs. when the collections library was created, I
think it would be a mistake to allow nulls. Developers who desperately
want null in there have other mechanisms to achieve that, not just
Optional. I too would argue against Optional in collections, at least
until value types exist, but that's a social discussion, not one that
can be controlled.

Stephen


Re: RFR 9: 8138963 : java.lang.Objects new method to default to non-null

2015-10-06 Thread Stephen Colebourne
What the existing code does would seem to be irrelevant. What is
important is deciding what the correct semantics should be for a core
method on Objects.

If there is a desperate need for the webrev semantics, how about two methods?

firstNonNull(a, b) - result must not be null
firstOrElse(a, b) - result is null only if b is null

(although I struggle to see much point in the latter)
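A minimal sketch of the two semantics, with hypothetical bodies (not taken
from the webrev), using java.util.Objects.requireNonNull:

    static <T> T firstNonNull(T a, T b) {
        // result must not be null: fall back to b and reject a null fallback
        return a != null ? a : Objects.requireNonNull(b);
    }

    static <T> T firstOrElse(T a, T b) {
        // result is null only if b is null
        return a != null ? a : b;
    }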

Stephen




On 6 October 2015 at 15:09, Roger Riggs <roger.ri...@oracle.com> wrote:
> Hi Stephen,
>
> Guava's firstNonNull is a useful semantic, but would not be a drop-in
> replacement for existing code, which
> would need to be examined for behavioral changes due to possibly throwing an
> exception.
> So perhaps the nonNullOrElse name would be more accurate.
>
> Thanks, Roger
>
>
>
> On 10/6/2015 10:00 AM, Stephen Colebourne wrote:
>>
>> Guava uses firstNonNull(a, b). It ensures that the result is never
>> null, by checking that b is not null.
>> I think the semantics of Guava's method is correct. I tend to think
>> the method name isn't bad either.
>>
>> Calling it nonNull(Object,Object) clashes with the existing
>> nonNull(Object) method. Those two have nothing much to do with each
>> other.
>>
>> Stephen
>>
>>
>> On 6 October 2015 at 14:43, Roger Riggs <roger.ri...@oracle.com> wrote:
>>>
>>> Java.lang.Objects contains a number of convenience methods to make it
>>> easier
>>> to handle references that are null.
>>> For example, toString(obj, nullDefault),
>>>
>>> A new method is proposed to return the reference or a default value if
>>> the
>>> reference is null.
>>> static <T> T nonNull(T obj, T nullDefault);
>>>
>>> Alternatives to the method name include
>>> nonNullOrElse ( using the java.util.Optional name pattern) or
>>> nonNullOrDefault
>>>
>>> Please review and comment.
>>>
>>> Webrev:
>>>http://cr.openjdk.java.net/~rriggs/webrev-object-non-null/
>>>
>>> Issue:
>>>https://bugs.openjdk.java.net/browse/JDK-8138963
>>>
>>> Thanks, Roger
>>>
>


Re: Test for JDK-5108778 Too many instances of java.lang.Boolean created in Java application

2015-10-06 Thread Stephen Colebourne
Moving to core-libs:

On 3 October 2015 at 03:20, Stuart Marks  wrote:
> An interesting exercise would be to add the @Deprecated annotation to the
> Boolean constructor and build the JDK and see how many warnings result. That
> might be a quick way to find the code that needs to be fixed.

I'd really like to see this. Specifically, adding @Deprecated to _all_
constructors on Boolean, Short, Integer, Long, Character, Float,
Double. This may require some additional factory methods to be added.
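As a rough illustration (the exact annotations and factories would be for the
JDK patch to decide), the push is from the constructors towards the existing
caching factories:

    Boolean b1 = new Boolean(true);      // would trigger a deprecation warning
    Boolean b2 = Boolean.valueOf(true);  // preferred: returns the cached Boolean.TRUE
    Integer i1 = new Integer(42);        // would trigger a deprecation warning
    Integer i2 = Integer.valueOf(42);    // preferred: small values come from the cache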

Doing this for Java 9 would be good preparation for value types. While
the Valhalla team are not currently talking about converting these
classes to values, they are clearly very value-like. Pushing users to
treat them more as values seems like a Good Thing. It might even tip
the balance towards making them actual value types.

Stephen


On 3 October 2015 at 03:20, Stuart Marks  wrote:
> Hi Sebastian,
>
> Good to see you contributing to OpenJDK again!
>
> I'm not sure it's sensible to have a regression test for this sort of thing.
> This seems more like static code analysis to me. In fact, Findbugs already
> seems to have a pattern that detects calls to the Boolean constructor:
>
> http://findbugs.sourceforge.net/bugDescriptions.html#DM_BOOLEAN_CTOR
>
> (I believe that people are running Findbugs over the JDK regularly, but we
> don't really have a workflow for processing diagnostics that it generates.)
>
> Now, regression tests are there to prevent bugs from reappearing once
> they're fixed. So how would we do this? For cases like this, there's another
> approach, which is deprecation:
>
> https://bugs.openjdk.java.net/browse/JDK-4941777
>
> If the Boolean constructor is deprecated, then a warning will be issued
> wherever it is called. By default, the JDK build treats warnings as
> errors, so this will effectively prohibit any uses of Boolean constructors.
> (The two legitimate uses of the Boolean constructor can be annotated with
> @SuppressWarnings to prevent this.)
>
> An interesting exercise would be to add the @Deprecated annotation to the
> Boolean constructor and build the JDK and see how many warnings result. That
> might be a quick way to find the code that needs to be fixed.
>
> As for your other questions:
>
> 2. Boolean is a special case since there are exactly two boxed boolean
> values. Possibly Byte as well, since all Byte values are cached. (See spec
> for Byte.valueOf(byte).) There is a moderate preference for valueOf() over
> the constructors of Character, Integer, and Short, since those cache a
> subset of values. It's not clear to me these are worth worrying about
> though. (Long, Float, and Double don't have caches.)
>
> 3. I would say that it's more likely that future JDK enhancements would be
> able to optimize auto-boxing than explicit method calls that deal with boxed
> values.
>
> 4. Unfortunately, different portions of code are handled by different
> groups, and are thus reviewed on different mailing lists. I think asking on
> jdk9-dev, as you've been doing, about where things need to be reviewed, is
> the right thing to do.
>
> I'll reply on macosx-port-dev on your specific changes there.
>
> s'marks
>
>
>
> On 9/30/15 12:43 PM, Sebastian Sickelmann wrote:
>>
>> Hi,
>>
>> a few days ago I started to investigate JDK-5108778[1] and started
>> discussions for small parts of it in macosx-port-dev[2] and hotspot-dev[3].
>> As suggested by Alexandr, there should be a test that guards against
>> regressions for such changes. I would like to introduce a test like [4];
>> what do you think?
>>
>> It scans all jimage files in /lib/modules, opens every class file, and
>> scans every method for a NEW bytecode referencing a wrapper-type class name.
>> Every match that is not in the wrapper type itself is reported and counted.
>>
>>
>> I have some questions about this:
>> 1. Is there a good way to get rid of the "9.0" part when reading the
>> classes out of the jimage?
>> 2. What about the other wrapper types (Byte, Short, Integer, Long, Character):
>> is it a good idea to change their constructor uses as well? Would someone
>> raise an enhancement for those?
>> 3. How are value types related to this issue? Is it counterproductive
>> to change to XYZ.valueOf method calls, or should we change to autoboxing
>> where possible? I haven't changed to autoboxing where I thought it would
>> be much less readable.
>> 4. Should the changes be discussed in the group lists? Or is there a
>> good place for discussing such central changes?
>>
>>
>> -- Sebastian
>>
>>
>> [1] https://bugs.openjdk.java.net/browse/JDK-5108778
>> [2]
>>
>> http://mail.openjdk.java.net/pipermail/macosx-port-dev/2015-September/006970.html
>> [3]
>>
>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2015-September/020018.html
>> [4]
>>
>> https://dl.dropboxusercontent.com/u/43692695/oss-patches/openjdk/jdk-5108778/test_0/webrev/index.html
>>
>


Re: RFR 9: 8138963 : java.lang.Objects new method to default to non-null

2015-10-06 Thread Stephen Colebourne
Guava uses firstNonNull(a, b). It ensures that the result is never
null, by checking that b is not null.
I think the semantics of Guava's method is correct. I tend to think
the method name isn't bad either.
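In other words, the behaviour amounts to roughly this (a paraphrase of the
semantics, not Guava's source):

    firstNonNull("a", "b")    // returns "a"
    firstNonNull(null, "b")   // returns "b"
    firstNonNull(null, null)  // throws NullPointerException, so the result is never null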

Calling it nonNull(Object,Object) clashes with the existing
nonNull(Object) method. Those two have nothing much to do with each
other.

Stephen


On 6 October 2015 at 14:43, Roger Riggs  wrote:
> Java.lang.Objects contains a number of convenience methods to make it easier
> to handle references that are null.
> For example, toString(obj, nullDefault),
>
> A new method is proposed to return the reference or a default value if the
> reference is null.
>    static <T> T nonNull(T obj, T nullDefault);
>
> Alternatives to the method name include
> nonNullOrElse ( using the java.util.Optional name pattern) or
> nonNullOrDefault
>
> Please review and comment.
>
> Webrev:
>   http://cr.openjdk.java.net/~rriggs/webrev-object-non-null/
>
> Issue:
>   https://bugs.openjdk.java.net/browse/JDK-8138963
>
> Thanks, Roger
>


Re: JEP 269: Convenience Factory Methods for Collections

2015-09-30 Thread Stephen Colebourne
In my view, the proposal is pretty good. I too use Guava's immutable
classes as types, because of the extra value obtained. But that does
not mean these methods should not be in the JDK. (Not every project
uses Guava).

I'd argue for two changes to the JEP.

Map.fromEntries() -> Map.ofEntries()
In JSR-310, we reserve "from" for factories with a high chance of
failure due to performing some kind of conversion, and use "of" for
factories that only fail if doing something stupid. It also means that
IDE users will easily see both choices when auto-completing, and thus
easily learn how to go beyond the hard-coded 6-entry factory.

Map.Entry.entry() -> Map.entry()
This would allow static imports to be focussed just on Map, without
needing Map.Entry as well (helpful in Eclipse at least). If the method
were on Map.Entry, I'd expect it to be of(), whereas on Map itself,
entry() is a good name. There might be a case for having both
Map.entry() and Map.Entry.of().
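A quick sketch of the static-import ergonomics being argued for, using the
method names as proposed here (not necessarily the final JEP API):

    import static java.util.Map.entry;
    import static java.util.Map.ofEntries;

    Map<String, Integer> ports = ofEntries(
            entry("http", 80),
            entry("https", 443),
            entry("ssh", 22));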

Stephen



On 30 September 2015 at 03:35, Stuart Marks  wrote:
> Hi all, I've been on vacation for a few days. Did anything happen while I
> was away? :-)
>
> I see this JEP was posted as a Candidate, and Kevin and Remi had some
> comments. I'll reply to Kevin's comments here and to Remi's separately.
>
> Kevin,
>
> I'm glad you don't think the proposal is bad. :-)
>
> I definitely see the value in having immutability in the type system. If
> your application is using Guava's immutable types, then it would certainly
> be a step backwards to stop using them in favor of the factories proposed
> here.
>
> However, I don't see this JEP as being in opposition to immutable collection
> types. This JEP isn't specifically about immutable collections; it's about
> convenience APIs (some of which create collection instances that happen to
> be immutable). Immutable collection types could be added later without too
> much difficulty. I'd be interested in seeing and even possibly working on
> such a proposal in the future.
>
>> Note that without permitting nulls, Map.of(key, Optional.of(value)) will
>> become
>> reasonably common, and the fact that you can't serialize that will become even
>> more
>> strange than it already is.
>
>
> Interesting. Given that Guava's Maps disallow null values, do you see a lot
> of use of Optional for Map values in Guava? For the JDK, do you think it
> would be preferable to allow null values in Maps, or to make Optional
> serializable? (Note to readers: Guava's Optional is serializable but the
> JDK's is not.) Or is this one of those problems where all the solutions
> suck?
>
>> I think the example of "double-brace initialization" should be even more
>> clearly
>> labeled as pure evil. :-) You could also mention all the horrible
>> consequences
>> if anyone ever serializes such a collection.
>
>
> I'm not sure if one is allowed to say "evil" in a JEP, but I agree that the
> "double brace" "idiom" is definitely evil! I did mention the potential for
> memory leaks in the JEP, but you have a good point about serialization, not
> only of the enclosing instance, but also of all captured references.
>
> s'marks
>


Re: [9] RFR of 8023217: Additional floorDiv/floorMod/multiplyExact methods for java.lang.Math

2015-09-29 Thread Stephen Colebourne
Good to see this happen.

I agree a test would be good to demonstrate edge cases.

1)
I think the code for floorMod(long x, int y) cannot actually
overflow. As such, the result could just be cast, without the if and
throw.


2)
My preferred algorithm for floorMod is:

return ((a % b) + b) % b;

as it contains no Java-side branches, although tests would be needed
to prove performance.

This also allows an algorithm for floorDiv with no Java-side branches:

int mod = ((a % b) + b) % b;
return (a - mod) / b;


3)
While making changes, this code could be changed to avoid the local
variable (just return):
public static int floorMod(int x, int y) {
  int r = x - floorDiv(x, y) * y;
  return r;
}
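Checking the branch-free formulas from point 2 by hand on negative operands
(worked examples, not taken from the webrev tests):

    // a = -7, b = 3:   ((-7 % 3) + 3) % 3    ==  2   (Math.floorMod(-7, 3) ==  2)
    //                  (-7 - 2) / 3          == -3   (Math.floorDiv(-7, 3) == -3)
    // a =  7, b = -3:  ((7 % -3) + -3) % -3  == -2   (Math.floorMod(7, -3) == -2)
    //                  (7 - -2) / -3         == -3   (Math.floorDiv(7, -3) == -3)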

Stephen


On 29 September 2015 at 03:17, Joseph D. Darcy  wrote:
> Hi Brian,
>
> Do you think any tests are needed here, at least for a quick sanity check?
>
> Thanks,
>
> -Joe
>
>
> On 9/28/2015 6:06 PM, Brian Burkhalter wrote:
>>
>> Please review at your convenience.
>>
>> Issue:  https://bugs.openjdk.java.net/browse/JDK-8023217
>> Patch:  http://cr.openjdk.java.net/~bpb/8023217/webrev.00/
>>
>> Summary:
>> Add some additional methods of the same name as existing methods but with
>> the commonly used parameter signature “long,int”:
>>
>> long multiplyExact(long,int)
>> long floorDiv(long,int)
>> int floorMod(long,int)
>>
>> These methods also provide hooks for potential intrinsics. There might be
>> room for improvement in their Java implementations.
>>
>> The modifications to the java.time classes are to fix warnings about
>> “redundant cast to int.”
>>
>> Thanks,
>>
>> Brian
>
>


Re: RFR 8135248: Add utility methods to check indexes and ranges

2015-09-29 Thread Stephen Colebourne
Just to note that an ideal location for this would be on a new class,
one that has been discussed before, an "argument checker class".

See Guava Preconditions:
https://github.com/google/guava/blob/master/guava/src/com/google/common/base/Preconditions.java

See Commons Lang Validate:
https://commons.apache.org/proper/commons-lang/javadocs/api-release/org/apache/commons/lang3/Validate.html

See OpenGamma ArgChecker:
http://opengamma.github.io/StrataDocs/apidocs/com/opengamma/strata/collect/ArgChecker.html
https://github.com/OpenGamma/Strata/blob/master/modules/collect/src/main/java/com/opengamma/strata/collect/ArgChecker.java

This was discussed when Objects.requireNonNull was discussed IIRC. I
still think it would be a great addition to the JDK (as every project
has something similar). This issue could be the start, although it
would need a few more methods, guided by the examples above.
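A minimal sketch of the kind of class being suggested; the class name, methods
and message formats below are illustrative only, not a concrete proposal:

    public final class ArgChecker {
        private ArgChecker() {}

        public static <T> T notNull(T value, String name) {
            if (value == null) {
                throw new NullPointerException("Argument '" + name + "' must not be null");
            }
            return value;
        }

        public static int checkIndex(int index, int length, String name) {
            if (index < 0 || index >= length) {
                throw new IndexOutOfBoundsException(
                        name + ": index " + index + " not in [0, " + length + ")");
            }
            return index;
        }
    }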

Stephen



On 29 September 2015 at 09:44, Paul Sandoz  wrote:
>
>> On 29 Sep 2015, at 06:48, John Rose  wrote:
>>
>> On Sep 28, 2015, at 5:10 PM, Joseph D. Darcy wrote:
>>>
>>> Joining this thread late, I think the range checking methods would have a 
>>> happier life where they are more often found and used if they live
>>> somewhere other than the exception classes.
>>>
>>> The class java.util.Objects is not an ideal host for these methods, since 
>>> they don't deal directly with objects per se, but the scope of Objects 
>>> could be expanded to include these utilities as well.
>>
>> You have a point, that's probably a better bikeshed.
>>
>
> No objections :-) I think it’s a better location to find such methods. Moved, 
> and I also tweaked the class doc of Objects to include static methods for
> “checking certain conditions before operation”.
>
>
>> Since the key operation is an integer compare, putting it on Integer (near 
>> compare) would also be defensible.
>>
>> (The new factory methods for exceptions make sense on the exceptions 
>> themselves, of course.)
>>
>
> I clarified the exception constructors that access one to two out of band 
> values to say they are included in the exception detail message (without 
> actually specifying the presentation format of that message).
>
>   
> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8135248-ioobe-check-index-range/webrev/
>  
> 
>   
> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8135248-ioobe-check-index-range/specdiff/overview-summary.html
>  
> 
>
> Paul.


Re: [9] RFR of 8023217: Additional floorDiv/floorMod/multiplyExact methods for java.lang.Math

2015-09-29 Thread Stephen Colebourne
On 29 September 2015 at 16:02, Brian Burkhalter wrote:
> 2)
> My preferred algorithm for floorMod is:
>
> return ((a % b) + b) % b;
>
> as it contains no Java-side branches, although tests would be needed
> to prove performance.
>
> This also allows an algorithm for floorDiv with no Java-side branches:
>
> int mod = ((a % b) + b) % b;
> return (a - mod) / b;
>
>
> I tested the code which was in the original issue description and found some
> discrepancies. I’ll need to revisit this to see what happened.

Yes, the code in the issue for floorDiv() fails when the divisor is
negative. The one in my email today works though.
Stephen


Re: RFR 8135248: Add utility methods to check indexes and ranges

2015-09-29 Thread Stephen Colebourne
On 29 September 2015 at 21:01, Paul Sandoz  wrote:
> The concern i have is once Preconditions is let loose the scope expands with 
> proposals for “just one more method” (there is even the opportunity to bike 
> shed over the names checkNotNull or requiresNotNull etc. etc.)  I don’t want 
> to discuss such additional methods right now otherwise i will never make 
> progress with the current set.
>
> A way forward is to initially include Preconditions with *only* the check 
> index methods, and in subsequent proposals selectively add more. At the 
> moment i am still leaning towards Objects.

As you say, there is lots to discuss:
- the class name
- any IllegalStateException methods? Or two classes, one for args and
one for state?
- design for static import or not
- message formatting
- complete messages, or ones where only the argument name is specified
- nonNull() vs notNull()
- IllegalArgException vs NPE for notNull

The first three points are key and set the tone of the class. As such,
it is hard to just create it. (For example, Guava Preconditions is
stylistically different to Commons Validate or OpenGamma ArgChecker.)

If you need to move on, then add the methods to Objects, and see if
agreement can separately be reached on a new class, potentially moving
the methods before 9 is released.

Stephen


Re: RFR 8080418 Add Optional.or()

2015-09-25 Thread Stephen Colebourne
On 25 September 2015 at 11:58, Paul Sandoz  wrote:
> Please review this change to add a method Optional.or that allows one to 
> better compose optionals:
>
>   http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8080418-optional-or/webrev/

New method seems fine.

> Separately while we are on the topic of Optional i would be interested in 
> opinions on the following changes:
>
>   http://cr.openjdk.java.net/~psandoz/jdk9/optional-prims-filter-map/webrev/
>
> 1) add methods that were missing on the primitive specializations; and

Seems fine, although I think a good case can be made for mapToObj() -
while going via a stream is possible, it is non-intuitive. I don't
think mapToInt() or mapToLong() are necessary on OptionalDouble (and
so on), but not being able to reach Object will be restrictive.
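For illustration, the detour needed today versus the kind of direct method
being asked for (mapToObj below is hypothetical):

    OptionalDouble price = OptionalDouble.of(9.99);

    // reaching an object-typed result currently means unpacking by hand
    Optional<String> label = price.isPresent()
            ? Optional.of("$" + price.getAsDouble())
            : Optional.empty();

    // a mapToObj method would make that a one-liner:
    // Optional<String> label2 = price.mapToObj(d -> "$" + d);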

> 2) add to all variants a mapOrElseGet (otherwise known as a “fold”), which is 
> the equivalent of map(mapper).orElseGet(supplier). That is arguably less 
> mind-bending to use when transforming from Optional to Optional.

To me, this is pointless, and makes the API more confusing. Chaining
methods is a way of life in Java 8, and developers have to be able to
think that way.
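For comparison, the chaining that already covers the mapOrElseGet use case;
findName() here is a hypothetical method returning Optional<String>:

    Optional<String> name = findName();
    int length = name.map(String::length).orElseGet(() -> 0);
    // versus the proposed: name.mapOrElseGet(String::length, () -> 0)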

Stephen


Re: RFR 9: 8129744 : Documentation in Month refers to quarters

2015-09-24 Thread Stephen Colebourne
+1
Stephen

On 24 September 2015 at 14:23, Roger Riggs  wrote:
> Please review two java.time editorial changes:
>   - 8129744: a trivial editorial change in java.time.Month javadoc.
>   - 8129556: TemporalAdjusters dayOfWeekInMonth wrongly says "in the same
> month"
>
> Webrev:
>   http://cr.openjdk.java.net/~rriggs/webrev-months-8129744/
>
> Issue:
>   https://bugs.openjdk.java.net/browse/JDK-8129744
>
> Thanks, Roger
>


Re: RFR 8135248: Add utility methods to check indexes and ranges

2015-09-21 Thread Stephen Colebourne
While I think I understand the need for the lambda/exception interface
(backwards compatibility), it is definitely weird as a method
signature.

It would seem very worthwhile to add overloaded versions of each of
these methods that do not have the OutOfBoundsToException in the
argument list. Instead, these overloads would throw a "standard"
exception. Such methods would then be much more amenable to just being
dropped into existing application code, where the precise exception
that is thrown is less of a concern.
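A sketch of the kind of overload pair being suggested; the signatures are
illustrative, using java.util.function.BiFunction as a stand-in for the
webrev's functional interface:

    // caller-supplied exception mapping (paraphrasing the webrev style)
    public static <X extends RuntimeException>
    int checkIndex(int index, int length, BiFunction<Integer, Integer, X> oobe) {
        if (index < 0 || index >= length) {
            throw oobe.apply(index, length);
        }
        return index;
    }

    // convenience overload throwing a "standard" exception
    public static int checkIndex(int index, int length) {
        if (index < 0 || index >= length) {
            throw new IndexOutOfBoundsException(
                    "Index " + index + " out of bounds for length " + length);
        }
        return index;
    }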

Stephen



On 21 September 2015 at 14:42, Paul Sandoz  wrote:
> Hi,
>
> Please review the following which adds methods to Arrays to check indexes and 
> ranges:
>
>   https://bugs.openjdk.java.net/browse/JDK-8135248
>   
> http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8135248-array-check-index-range/webrev/
>
> The original motivation was an intrinsic method, Arrays.checkIndex, to check 
> if an index is within bounds. Such an intrinsic guides HotSpot towards better 
> optimisations for bounds checks using one unsigned comparison instead of two 
> signed comparisons, and better eliding of integer to long conversions when an 
> index is used to create an offset for Unsafe access. The end result is more 
> efficient array access especially so from within unrolled loops. The 
> VarHandles work will use Arrays.checkIndex for array access.
>
> A follow up issue [1] will track the intrinsification of Arrays.checkIndex.
>
> We thought it would be opportunistic to support two further common use-cases 
> for sub-range checks, Arrays.checkFromToIndex and Arrays.
> checkFromIndexSize. There is no current plan to intrinsify these methods.
>
> Bounds checking is not difficult but it can be easy to make trivial mistakes. 
> Thus it is advantageous to consolidate such checks not just from an 
> optimization perspective but from a correctness and security/integrity 
> perspective.
>
> There are many areas in the JDK where such checks are performed. A follow up 
> issue [2] will track updates to use the new methods.
>
> The main challenge for these new methods is to design in such a way that
>
> 1) existing use-cases can still report the same set of exceptions with the 
> same messages;
> 2) method byte code size is not unduly increased, thus perturbing inlining; 
> and
> 3) there is a reasonable path for any future support of long indexes.
>
> Paul.
>
> [1] https://bugs.openjdk.java.net/browse/JDK-8042997
> [2] https://bugs.openjdk.java.net/browse/JDK-8135250


Re: RFR 8135248: Add utility methods to check indexes and ranges

2015-09-21 Thread Stephen Colebourne
On 21 September 2015 at 15:28, Paul Sandoz  wrote:
>> It would seem very worthwhile to add overloaded versions of each of
>> these methods that do not have the OutOfBoundsToException in the
>> argument list. Instead, these overloads would throw a "standard"
>> exception.
>
> That seems reasonable e.g.:
>
> public static
> int checkIndex(int index, int length) throws IndexOutOfBoundsException {
> return checkIndex(index, length, null);
> }
>
>> Such methods would then be much more amenable to just being
>> dropped into existing application code, where the precise exception
>> that is thrown is less of a concern.
> Though is passing “null” such a big deal?

Best to try and avoid asking application developers to pass null in to
get a special behaviour. In fact, the method could even reject null
and throw an exception, which would make more sense.

I agree with Remi on naming at least - a functional interface ending
"Exception" is likely to be confusing.

Stephen


Re: RFR JDK-8133079 LocalDate/LocalTime ofInstant()

2015-09-10 Thread Stephen Colebourne
Shall I just open a number of these with things to be changed, so you
can deal with them in a batch then?
Stephen


On 10 September 2015 at 14:42, Roger Riggs <roger.ri...@oracle.com> wrote:
> Hi Stephen,
>
> I can take it up but probably won't get to it until after JavaOne. (Early
> Nov)
>
> Roger
>
>
>
> On 9/8/2015 2:53 PM, Stephen Colebourne wrote:
>>
>> Anyone like to take this on please?
>> Stephen
>> On 28 Aug 2015 00:07, "Stephen Colebourne" <scolebou...@joda.org> wrote:
>>
>>> External question sites indicate that users have difficulty converting
>>> between java.util.Date and LocalDate, and also between Instant and
>>> LocalDate/LocalTime. This user difficulty can be resolved with some
>>> additional methods.
>>>
>>> Currently, the following methods exist:
>>>
>>>   ZonedDateTime.ofInstant(Instant, ZoneId);
>>>   OffsetDateTime.ofInstant(Instant, ZoneId);
>>>   LocalDateTime.ofInstant(Instant, ZoneId);
>>>   OffsetTime.ofInstant(Instant, ZoneId);
>>>
>>> This enhancement proposes to add the same method to LocalDate and
>>> LocalTime. It can reasonably be argued this was a simple oversight in
>>> the original development.
>>>
>>> Diff attached to the JIRA:
>>>
>>> https://bugs.openjdk.java.net/browse/JDK-8133079
>>>
>>> Would also appreciate a reviewer to ensure that the code and tests
>>> pass OK (as opposed to my best efforts on a very dubious
>>> Windows/Eclipse/JDK8 setup)
>>>
>>> Stephen
>>>
>


Re: RFR JDK-8133079 LocalDate/LocalTime ofInstant()

2015-09-08 Thread Stephen Colebourne
Anyone like to take this on please?
Stephen
On 28 Aug 2015 00:07, "Stephen Colebourne" <scolebou...@joda.org> wrote:

> External question sites indicate that users have difficulty converting
> between java.util.Date and LocalDate, and also between Instant and
> LocalDate/LocalTime. This user difficulty can be resolved with some
> additional methods.
>
> Currently, the following methods exist:
>
>  ZonedDateTime.ofInstant(Instant, ZoneId);
>  OffsetDateTime.ofInstant(Instant, ZoneId);
>  LocalDateTime.ofInstant(Instant, ZoneId);
>  OffsetTime.ofInstant(Instant, ZoneId);
>
> This enhancement proposes to add the same method to LocalDate and
> LocalTime. It can reasonably be argued this was a simple oversight in
> the original development.
>
> Diff attached to the JIRA:
>
> https://bugs.openjdk.java.net/browse/JDK-8133079
>
> Would also appreciate a reviewer to ensure that the code and tests
> pass OK (as opposed to my best efforts on a very dubious
> Windows/Eclipse/JDK8 setup)
>
> Stephen
>


RFR JDK-8133079 LocalDate/LocalTime ofInstant()

2015-08-27 Thread Stephen Colebourne
External question sites indicate that users have difficulty converting
between java.util.Date and LocalDate, and also between Instant and
LocalDate/LocalTime. This user difficulty can be resolved with some
additional methods.

Currently, the following methods exist:

 ZonedDateTime.ofInstant(Instant, ZoneId);
 OffsetDateTime.ofInstant(Instant, ZoneId);
 LocalDateTime.ofInstant(Instant, ZoneId);
 OffsetTime.ofInstant(Instant, ZoneId);

This enhancement proposes to add the same method to LocalDate and
LocalTime. It can reasonably be argued this was a simple oversight in
the original development.
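In outline, the proposed methods would behave like the existing conversion
chain (a sketch of the behaviour, not the attached diff):

    Instant instant = Instant.now();
    ZoneId zone = ZoneId.systemDefault();

    // equivalent of the proposed LocalDate.ofInstant(instant, zone)
    LocalDate date = LocalDateTime.ofInstant(instant, zone).toLocalDate();

    // equivalent of the proposed LocalTime.ofInstant(instant, zone)
    LocalTime time = LocalDateTime.ofInstant(instant, zone).toLocalTime();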

Diff attached to the JIRA:

https://bugs.openjdk.java.net/browse/JDK-8133079

Would also appreciate a reviewer to ensure that the code and tests
pass OK (as opposed to my best efforts on a very dubious
Windows/Eclipse/JDK8 setup)

Stephen


Re: RFR 9: 8133022: Instant.toEpochMilli() silently overflows

2015-08-06 Thread Stephen Colebourne
Fine by me.
Stephen

On 6 Aug 2015 19:15, Chris Hegarty chris.hega...@oracle.com wrote:

 On 6 Aug 2015, at 19:07, Roger Riggs roger.ri...@oracle.com wrote:

  Hi,
 
  Please review the update to include the additional case and fix.
 
  http://cr.openjdk.java.net/~rriggs/webrev-overflow-8133022/

 The updated version looks good Roger.

 -Chris.

  Thanks, Roger
 
 
  On 8/6/2015 12:37 PM, Roger Riggs wrote:
  Hi Ivan,
 
  I looked at that but didn't find a reproducer.
  I'll add that test case and respin.
 
  Thanks, Roger
 
 
  On 8/6/2015 12:34 PM, Ivan Gerasimov wrote:
  Hi Roger!
 
   There seems to be another numeric overflow possibility at line
1235, when 'seconds' is a large negative value and 'nanos' is a small positive one.
 
  For example,
  
 Instant inst = Instant.ofEpochSecond(-9223372036854776L, 1);
 System.out.println(inst.toEpochMilli());
  
  prints out 9223372036854775616.
 
 
  Sincerely yours,
  Ivan
 
  On 06.08.2015 18:33, Roger Riggs wrote:
  Please review a small fix and test for Instant.toEpochMilli
ArithmeticOverflow.
 
  Webrev:
http://cr.openjdk.java.net/~rriggs/webrev-overflow-8133022/
 
  Issue:
 https://bugs.openjdk.java.net/browse/JDK-8133022
 
  Thanks, Roger
 
 
 
 
 
 
 
 



Re: ProcessBuilder support for pipelines

2015-07-29 Thread Stephen Colebourne
Seems like a useful addition to me.
Stephen

On 28 July 2015 at 09:28, Chris Hegarty chris.hega...@oracle.com wrote:
 I find this kinda cute. Seems like a reasonable addition to me.

 -Chris.

 On 27 Jul 2015, at 15:48, Roger Riggs roger.ri...@oracle.com wrote:

 On most operating systems, creating pipelines of processes is simple and 
 direct.
 That same function is missing from the Java Process support and can be 
 provided by
 java.lang.ProcessBuilder by enabling the pipes created for stdout to be
 used for standard input when the processes are created.
 Comments and feedback are appreciated on the prototype API and 
 implementation.

 Javadoc:  java.lang.ProcessBuilder.startPipe:
 http://cr.openjdk.java.net/~rriggs/pipedoc/java/lang/ProcessBuilder.html#startPipe-java.lang.ProcessBuilder...-

 webrev prototype:
  http://cr.openjdk.java.net/~rriggs//webrev-pipeline-8132394

 Thanks, Roger




Re: RFR 9: JDK-8075678 : java.time javadoc error in DateTimeFormatter::parsedLeapSecond

2015-05-28 Thread Stephen Colebourne
Fine by me
Stephen

On 28 May 2015 at 21:14, Roger Riggs roger.ri...@oracle.com wrote:
 Please review 3 editorial issues in the java.time package:

 Issues:
 8075678 https://bugs.openjdk.java.net/browse/JDK-8075678: java.time
 javadoc error in DateTimeFormatter::parsedLeapSecond
 8075676 https://bugs.openjdk.java.net/browse/JDK-8075676: java.time
 package javadoc typos
 8068276 https://bugs.openjdk.java.net/browse/JDK-8068276:
 java.time.chrono.HijrahChronology.eraOf() assertions may lead to
 misunderstanding

 Webrev:
 http://cr.openjdk.java.net/~rriggs/webrev-typo-8075678/

 Thanks, Roger



Re: Add Predicate.of(), Consumer.of(), etc.

2015-05-06 Thread Stephen Colebourne
On 6 May 2015 at 14:53, Paul Sandoz paul.san...@oracle.com wrote:
 In some respects i wonder if the default methods on the functional 
 interfaces are an attractive nuisance.

 Meaning, if .negate(Predicate) were a static method on the Predicate class 
 instead of a default method, then 
 stream.filter(Predicate.negate(String::isEmpty)) would be possible? Yeah…


 Yeah. We are now in the unfortunate position where to alleviate this problem 
 we might require duplicate static and default methods. I have been sitting on 
 the issue a bit and nearly closed it a few times :-)

I like Remi's solution. It may not be ideal, but it is very practical
and easy to explain on Stack Overflow. Duplicate static/default
methods would be much worse.

I also think that filterNot should be added to Stream. It's just too
common a case.
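In the meantime, a minimal sketch of the static-helper workaround discussed
above; the helper is hypothetical and 'names' is assumed to be an existing
List<String>:

    static <T> Predicate<T> not(Predicate<T> p) {
        return p.negate();
    }

    // a method reference cannot call .negate() directly, but the helper can wrap it
    List<String> nonEmpty = names.stream()
            .filter(not(String::isEmpty))
            .collect(Collectors.toList());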

Stephen


Re: RFR: JDK-8079063 ZoneOffsetTransition constructor should ignore nanoseconds

2015-05-06 Thread Stephen Colebourne
I am also happy with this change.

Stephen


On 6 May 2015 at 15:43, Roger Riggs roger.ri...@oracle.com wrote:
 Hi Peter,

 Thanks for the analysis and followup.

 Yes, I think thesimple check as you propose is the desired clarification.
 I agree that the change should not affect any existing code outside the JDK
 and does not raise a compatibility issue.

 Roger



 On 5/4/2015 6:22 AM, Peter Levart wrote:

 Hi,

 Now that JDK-8074003 is in, I'd like to discuss how to tackle JDK-8079063.

 Package-private ZoneOffsetTransition constructor that takes LocalDateTime
 transition is invoked from the following 4 places:

 1 - the public static factory method:

 /**
  * Obtains an instance defining a transition between two offsets.
  * <p>
  * Applications should normally obtain an instance from {@link
 ZoneRules}.
  * This factory is only intended for use when creating {@link
 ZoneRules}.
  *
  * @param transition  the transition date-time at the transition,
 which never
  *  actually occurs, expressed local to the before offset, not null
  * @param offsetBefore  the offset before the transition, not null
  * @param offsetAfter  the offset at and after the transition, not
 null
  * @return the transition, not null
  * @throws IllegalArgumentException if {@code offsetBefore} and {@code
 offsetAfter}
  * are equal, or {@code transition.getNano()} returns non-zero
 value
  */
 public static ZoneOffsetTransition of(LocalDateTime transition,
 ZoneOffset offsetBefore, ZoneOffset offsetAfter) {

 ...this one already disallows transition parameters that have
 transition.getNano() != 0.


 2 - Lines 498..500 of ZoneOffsetTransitionRule:

 LocalDateTime localDT = LocalDateTime.of(date, time);
 LocalDateTime transition = timeDefinition.createDateTime(localDT,
 standardOffset, offsetBefore);
 return new ZoneOffsetTransition(transition, offsetBefore,
 offsetAfter);

 ...where 'time' is an instance field of ZoneOffsetTransitionRule. The
 ZoneOffsetTransitionRule public static factory method has the following
 definition:

 /**
  * Obtains an instance defining the yearly rule to create transitions
 between two offsets.
  * <p>
  * Applications should normally obtain an instance from {@link
 ZoneRules}.
  * This factory is only intended for use when creating {@link
 ZoneRules}.
  *
  * @param month  the month of the month-day of the first day of the
 cutover week, not null
  * @param dayOfMonthIndicator  the day of the month-day of the cutover
 week, positive if the week is that
  *  day or later, negative if the week is that day or earlier,
 counting from the last day of the month,
  *  from -28 to 31 excluding 0
  * @param dayOfWeek  the required day-of-week, null if the month-day
 should not be changed
  * @param time  the cutover time in the 'before' offset, not null
  * @param timeEndOfDay  whether the time is midnight at the end of day
  * @param timeDefnition  how to interpret the cutover
  * @param standardOffset  the standard offset in force at the cutover,
 not null
  * @param offsetBefore  the offset before the cutover, not null
  * @param offsetAfter  the offset after the cutover, not null
  * @return the rule, not null
  * @throws IllegalArgumentException if the day of month indicator is
 invalid
  * @throws IllegalArgumentException if the end of day flag is true
 when the time is not midnight
  */
 public static ZoneOffsetTransitionRule of(
 Month month,
 int dayOfMonthIndicator,
 DayOfWeek dayOfWeek,
 LocalTime time,
 boolean timeEndOfDay,
 TimeDefinition timeDefnition,
 ZoneOffset standardOffset,
 ZoneOffset offsetBefore,
 ZoneOffset offsetAfter) {
 Objects.requireNonNull(month, "month");
 Objects.requireNonNull(time, "time");
 Objects.requireNonNull(timeDefnition, "timeDefnition");
 Objects.requireNonNull(standardOffset, "standardOffset");
 Objects.requireNonNull(offsetBefore, "offsetBefore");
 Objects.requireNonNull(offsetAfter, "offsetAfter");
 if (dayOfMonthIndicator < -28 || dayOfMonthIndicator > 31 ||
 dayOfMonthIndicator == 0) {
 throw new IllegalArgumentException("Day of month indicator
 must be between -28 and 31 inclusive excluding zero");
 }
 if (timeEndOfDay && time.equals(LocalTime.MIDNIGHT) == false) {
 throw new IllegalArgumentException("Time must be midnight when
 end of day flag is true");
 }
 return new ZoneOffsetTransitionRule(month, dayOfMonthIndicator,
 dayOfWeek, time, timeEndOfDay, timeDefnition, standardOffset, offsetBefore,
 offsetAfter);
 }

 ...which allows 'time' parameter (that becomes ZoneOffsetTransitionRule's
 'time' field) with no restriction as to whether it can contain nanos != 0 or
 not.


 3, 4 - Lines 

Re: RFR: JDK-8074003 java.time.zone.ZoneRules.getOffset(java.time.Instant) can be optimized

2015-04-30 Thread Stephen Colebourne
The approach works for me, and the patch is valid as is.
Stephen


On 30 April 2015 at 11:24, Peter Levart peter.lev...@gmail.com wrote:

 On 04/29/2015 05:35 PM, Roger Riggs wrote:

 Hi Peter,

 There should be two changesets; so pretend the truncation has been
 performed for this change.
 It maybe useful to backport the performance improvement to jdk 8 but the
 spec change
 will have to be in 9 (or wait for a maintenance release).


 Hi Roger,

 So perhaps it would be best to push what we have in webrev.03 now, so that
 it can be backported to 8u directly without modifications and simplify
 equals/compareTo/getInstant as part of the changeset for 8079063. I think
 this is more consistent. And I can prepare the change for 8079063 right away
 so the spec change process can be started.

 Do I have a go for webrev.03 for jdk9 ?

 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneOffsetTransition.epochSecond/webrev.03/

 Regards, Peter


 The simplification of toInstant can happen with the changeset for 8079063.

 Thanks, Roger


 On 4/29/2015 11:26 AM, Peter Levart wrote:



 On 04/29/2015 03:31 PM, Roger Riggs wrote:

 Hi Peter,

 Point taken about the constructor and that should have a spec
 clarification to ignore the nanoseconds.
 The issue is tracked with:
 JDK-8079063 https://bugs.openjdk.java.net/browse/JDK-8079063
 ZoneOffsetTransition constructor should ignore nanoseconds

 With that, the compareTo method can be simpler.  The rest looks fine.

 Roger


 Hi Roger,

 Should I prepare a patch for both issues in one changeset as the correct
 compareTo/equals depends on the truncation or should I just pretend that
 truncation has been performed and make this change accordingly or should I
 1st do the JDK-8079063 and then this one on top?

 Also, getInstant() can be much simpler if the truncation is performed:
 return Instant.of(epochSecond);

 Regards, Peter



 On 4/29/2015 5:33 AM, Peter Levart wrote:


 On 04/27/2015 06:51 PM, Stephen Colebourne wrote:

 One additional change is needed. The compareTo() method can rely on
 the new epochSecond field as well.
 Otherwise good!
 Stephen


 Hi Stephen,

 LocalDateTime (transition) has nanosecond precision. It may be that
 transitions loaded from file in ZoneRules only have second precisions, but
 ZoneOffsetTransition is a public class with public factory method that 
 takes
 a LocalDateTime transition parameter, so I think compareTo() can't rely on
 epochSecond alone. But epochSecond can be used as optimization in
 compareTo() as well as equals():


 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneOffsetTransition.epochSecond/webrev.03/

 An alternative to keeping epochSecond field in ZoneOffsetTransition
 would be to keep a reference to Instant instead. Instant contains an
 epochSecond field (as well as nanos) and could be used for both
 toEpochSecond() and getInstant() methods.

 What do you think?

 It also occurred to me that serialization format of
 ZoneOffsetTransition is not adequate currently as it loses nanosecond
 precision.

 Regards, Peter


 On 27 April 2015 at 17:24, Peter Levart peter.lev...@gmail.com
 wrote:

 Hi again,

 Here's another optimization to be reviewed that has been discussed a
 while
 ago (just rebased from webrev.01) and approved by Stephen:


 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneOffsetTransition.epochSecond/webrev.02/


 The discussion about it is intermingled with the
 ZoneId.systemDefault()
 discussion and starts about here:


 http://mail.openjdk.java.net/pipermail/core-libs-dev/2015-February/031873.html


 The rationale for the optimization is speeding-up the conversion from
 epoch
 time to LocalDateTime. This conversion uses
 ZoneRules.getOffset(Instant)
 where there is a loop over ZoneOffsetTransition[] array that searches
 for
 1st transition that has its toEpochSecond value less than the
 Instant's
 epochSecond. This calls ZoneOffsetTransition.toEpochSecond
 repeatedly,
 converting ZoneOffsetTransition.transition which is a LocalDateTime
 to
 epochSecond. This repeated conversion is unnecessary, as
 ZoneOffsetTransition[] array is part of ZoneRules which is cached.
 Optimizing the ZoneOffsetTransition implementation (keeping both
 LocalDateTime variant and eposhSecond variant of transition time as
 the
 object's state) speeds up this conversion.


 Regards, Peter








Re: RFR: JDK-8074003 java.time.zone.ZoneRules.getOffset(java.time.Instant) can be optimized

2015-04-29 Thread Stephen Colebourne
On the LocalDateTime being passed in with nanoseconds, that was an
unconsidered use case. The whole offset system relies on second-based
offsets, so it should really be validated/truncated to remove nanos.
That way the equals/compareTo could be simplified again. Seems like a
separate issue, but it could perhaps be tackled here. You need an Oracle
sponsor to tell you ;-)
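A sketch of the truncation being discussed; the public factory already rejects
non-zero nanos, so this would apply in the package-private constructor
(illustrative only, not the actual patch):

    // either reject, as the public factory of(...) already does:
    if (transition.getNano() != 0) {
        throw new IllegalArgumentException("Nano-of-second must be zero");
    }
    // ...or silently truncate, using java.time.temporal.ChronoUnit:
    LocalDateTime truncated = transition.truncatedTo(ChronoUnit.SECONDS);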

Stephen


On 29 April 2015 at 10:33, Peter Levart peter.lev...@gmail.com wrote:

 On 04/27/2015 06:51 PM, Stephen Colebourne wrote:

 One additional change is needed. The compareTo() method can rely on
 the new epochSecond field as well.
 Otherwise good!
 Stephen


 Hi Stephen,

 LocalDateTime (transition) has nanosecond precision. It may be that
 transitions loaded from file in ZoneRules only have second precisions, but
 ZoneOffsetTransition is a public class with public factory method that takes
 a LocalDateTime transition parameter, so I think compareTo() can't rely on
 epochSecond alone. But epochSecond can be used as optimization in
 compareTo() as well as equals():

 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneOffsetTransition.epochSecond/webrev.03/

 An alternative to keeping epochSecond field in ZoneOffsetTransition would be
 to keep a reference to Instant instead. Instant contains an epochSecond
 field (as well as nanos) and could be used for both toEpochSecond() and
 getInstant() methods.

 What do you think?

 It also occurred to me that serialization format of ZoneOffsetTransition is
 not adequate currently as it loses nanosecond precision.

 Regards, Peter



 On 27 April 2015 at 17:24, Peter Levart peter.lev...@gmail.com wrote:

 Hi again,

 Here's another optimization to be reviewed that has been discussed a
 while
 ago (just rebased from webrev.01) and approved by Stephen:


 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneOffsetTransition.epochSecond/webrev.02/


 The discussion about it is intermingled with the ZoneId.systemDefault()
 discussion and starts about here:


 http://mail.openjdk.java.net/pipermail/core-libs-dev/2015-February/031873.html


 The rationale for the optimization is speeding-up the conversion from
 epoch
 time to LocalDateTime. This conversion uses ZoneRules.getOffset(Instant)
 where there is a loop over ZoneOffsetTransition[] array that searches for
 1st transition that has its toEpochSecond value less than the Instant's
 epochSecond. This calls ZoneOffsetTransition.toEpochSecond repeatedly,
 converting ZoneOffsetTransition.transition which is a LocalDateTime to
 epochSecond. This repeated conversion is unnecessary, as
 ZoneOffsetTransition[] array is part of ZoneRules which is cached.
 Optimizing the ZoneOffsetTransition implementation (keeping both
 LocalDateTime variant and eposhSecond variant of transition time as the
 object's state) speeds up this conversion.


 Regards, Peter




Re: Additional method on Stream

2015-04-28 Thread Stephen Colebourne
On 27 April 2015 at 16:23, Paul Sandoz paul.san...@oracle.com wrote:
 One issue is there are zillions of possible more specific convenience 
 operations we could add. Everyone has their own favourite. Some static 
 methods were recently added to Stream and Optional in preference to such 
 operations.

 There has to be a really good reason to add new operations. I realize this 
 use-case might be more common than others but i am still yet to be convinced 
 that it has sufficient weight given flatMap + lambda + static method.

 BTW, I waited months before making this request to see if it really was
 common enough a pattern, but I'm confident that it is now.

 Were you aware of the pattern using flatMap during those months?

No, but if I had, I would not have used it. It's a rubbish workaround.
It creates extra objects for no value. It involves statically
importing another random utility, and it is less expressive - the
desire is to filter by type, so I want the filter method.
I do understand the desire to control methods, but API design isn't
just about minimalism, it is also about meeting common use cases in a
natural way. The parallel is of course a standard if (obj instanceof
Foo) statement in Java, where developers often curse that they have to
additionally cast obj after the check. Why can't the compiler just do
it? (Kotlin does for example). While changing the Java language to do
this is hard, changing the Stream API would be easy, and good value
given a year's coding with streams.
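A sketch of what the convenience could look like as a default method on
Stream<T>, building only on the existing filter and map (hypothetical, not a
worked proposal):

    default <R> Stream<R> filter(Class<R> type) {
        return filter(type::isInstance).map(type::cast);
    }

    // the call site then shrinks to:
    List<Foo> foos = input.stream()
            .filter(Foo.class)
            .collect(Collectors.toList());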

Stephen


Re: Additional method on Stream

2015-04-27 Thread Stephen Colebourne
Obviously, this is yet another possible workaround. But it is a
workaround. There really aren't that many rough edges with the set of
methods added with lambdas, but this is definitely one. That Guava
handled it specially is another good indication.

BTW, I waited months before making this request to see if it really was
common enough a pattern, but I'm confident that it is now.

Stephen


On 27 April 2015 at 14:41, Paul Sandoz paul.san...@oracle.com wrote:
 Hi Stephen,

 You can do this:

 static <T> Function<Object, Stream<T>> casting(Class<T> c) { // bike shed for 
 clearer name
 return o -> Stream.ofNullable(c.isInstance(o) ? c.cast(o) : null);
 }


 Object[] s = Stream.of(1, 2, 3, 4).toArray();
 Stream.of(s).flatMap(casting(Integer.class)).
 forEach(System.out::println);


 I am a bit reluctant to add such a specific kind of filter method to Stream 
 when one can do the above.

 In general my preference is to keep the stream operation methods as general 
 as possible.

 Paul.

 On Apr 27, 2015, at 3:22 PM, Stephen Colebourne scolebou...@joda.org wrote:

 This is a request for an additional method on java.util.stream.Stream.

 Having written quite a bit of code in Java 8 now, one annoyance keeps
 on coming up, filtering and casting the stream.

 Currently, it is necessary to do this:

return input.stream()
.filter(Foo.class::isInstance)
.map(Foo.class::cast)
.someTerminalOperation();

 or

return input.stream()
 .filter(t -> t instanceof Foo)
 .map(t -> (Foo) t)
.someTerminalOperation();

 Really, what is needed is this:

return input.stream()
.filter(Foo.class)
.someTerminalOperation();

 For info, Guava's FluentIterable has such a method.

 The new method signature would be something like:

  public <R> Stream<R> filter(Class<R> cls);

 As far as I can see, there is no problem in implementing this in both
 serial and parallel modes, as it is essentially just a convenience.

 Thoughts?
 Stephen



Re: RFR: JDK-8074003 java.time.zone.ZoneRules.getOffset(java.time.Instant) can be optimized

2015-04-27 Thread Stephen Colebourne
One additional change is needed. The compareTo() method can rely on
the new epochSecond field as well.
Otherwise good!
Stephen


On 27 April 2015 at 17:24, Peter Levart peter.lev...@gmail.com wrote:
 Hi again,

 Here's another optimization to be reviewed that has been discussed a while
 ago (just rebased from webrev.01) and approved by Stephen:

 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneOffsetTransition.epochSecond/webrev.02/


 The discussion about it is intermingled with the ZoneId.systemDefault()
 discussion and starts about here:

 http://mail.openjdk.java.net/pipermail/core-libs-dev/2015-February/031873.html


 The rationale for the optimization is speeding-up the conversion from epoch
 time to LocalDateTime. This conversion uses ZoneRules.getOffset(Instant)
 where there is a loop over ZoneOffsetTransition[] array that searches for
 1st transition that has its toEpochSecond value less than the Instant's
 epochSecond. This calls ZoneOffsetTransition.toEpochSecond repeatedly,
 converting ZoneOffsetTransition.transition which is a LocalDateTime to
 epochSecond. This repeated conversion is unnecessary, as
 ZoneOffsetTransition[] array is part of ZoneRules which is cached.
 Optimizing the ZoneOffsetTransition implementation (keeping both
 LocalDateTime variant and eposhSecond variant of transition time as the
 object's state) speeds up this conversion.


 Regards, Peter



Re: RFR 9: 8078369: [testbug] java/time/tck/java/time/TCKOffsetTime[now] fails on slow devices

2015-04-27 Thread Stephen Colebourne
Fine by me.
Stephen

On 27 April 2015 at 19:50, Roger Riggs roger.ri...@oracle.com wrote:
 Please review a timing relaxation so tests do not fail on very slow hardware
 with more expensive runtime options.  (-Xcomp)

 The change will also be backported to jdk 8.

 Issue:
8078369: [testbug] java/time/tck/java/time/TCKOffsetTime[now] fails on
 slow devices

 Webrev:
http://cr.openjdk.java.net/~rriggs/webrev-timeout-8078369/

 Thanks, Roger



Re: RFR: JDK-8074003 java.time.zone.ZoneRules.getOffset(java.time.Instant) can be optimized

2015-04-27 Thread Stephen Colebourne
TzdbZoneRulesProvider parses the byte[] of TZ data on demand when a
ZoneId is first requested. So the ZoneOffsetTransition instances will not be
created unless a ZoneId is requested during startup.
Stephen

On 27 April 2015 at 22:58, Roger Riggs roger.ri...@oracle.com wrote:
 Hi Peter,

 Caching the epochSecond moves the computation from a relatively lazy version
 to creation when it would be performed eagerly for every transition.
 Is there a quick way to see if it will have an impact on the startup time?

 Roger



 On 4/27/2015 12:24 PM, Peter Levart wrote:

 Hi again,

 Here's another optimization to be reviewed that has been discussed a while
 ago (just rebased from webrev.01) and approved by Stephen:


 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneOffsetTransition.epochSecond/webrev.02/


 The discussion about it is intermingled with the ZoneId.systemDefault()
 discussion and starts about here:


 http://mail.openjdk.java.net/pipermail/core-libs-dev/2015-February/031873.html


 The rationale for the optimization is speeding-up the conversion from
 epoch time to LocalDateTime. This conversion uses
 ZoneRules.getOffset(Instant) where there is a loop over
 ZoneOffsetTransition[] array that searches for 1st transition that has its
 toEpochSecond value less than the Instant's epochSecond. This calls
 ZoneOffsetTransition.toEpochSecond repeatedly, converting
 ZoneOffsetTransition.transition which is a LocalDateTime to epochSecond.
 This repeated conversion is unnecessary, as ZoneOffsetTransition[] array is
 part of ZoneRules which is cached. Optimizing the ZoneOffsetTransition
 implementation (keeping both LocalDateTime variant and eposhSecond variant
 of transition time as the object's state) speeds up this conversion.


 Regards, Peter




Re: RFR: JDK-8074002 java.time.ZoneId.systemDefault() should be faster

2015-04-27 Thread Stephen Colebourne
This seems like a good enhancement.
Stephen

On 27 April 2015 at 16:26, Peter Levart peter.lev...@gmail.com wrote:
 Hi,

 Please review the following improvement that caches default ZoneId object
 and makes the frequently executed ZoneId.systemDefault() method faster:

 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneId.systemDefault/webrev.04/

 The patch is just a rebased version of webrev.03 + some comments added about
 the importance of defensive cloning in TimeZone.setDefault(). There was
 already a discussion about this patch a while ago and Stephen basically
 approved it in this form:

 http://mail.openjdk.java.net/pipermail/core-libs-dev/2015-February/031714.html


 Thanks,

 Peter Levart


Re: JDK-8074023: Clock.system(ZoneId) could be optimized to always return the same clock for a given zone

2015-04-24 Thread Stephen Colebourne
No, the opposite. Those rules that do NOT have a fixed offset.

ZoneId.of("Europe/London") should be cached, but not
ZoneId.of("UTC+10:00"). Note that the latter is an offset-based
ZoneRegion, not a ZoneOffset.

Stephen


On 24 April 2015 at 14:58, Roger Riggs roger.ri...@oracle.com wrote:
 Hi Stephen,

 Just to clarify, by non-offset based ZoneRules do you mean it has a fixed
 offset
 meaning no transitions; aka ZoneRules.isFixedOffset() = true?

 Thanks, Roger



 On 4/24/2015 5:32 AM, Stephen Colebourne wrote:

 The code in the webrev changes the behaviour of java.time, so cannot go
 in.

 Run this code:
  TimeZone zone = TimeZone.getDefault();
  System.out.println(zone);
  ZoneId zoneId = ZoneId.systemDefault();
  System.out.println(zoneId);
  TimeZone.setDefault(TimeZone.getTimeZone("Europe/Paris"));
  TimeZone zone2 = TimeZone.getDefault();
  System.out.println(zone2);
  ZoneId zoneId2 = ZoneId.systemDefault();
  System.out.println(zoneId2);

 before and after this change, a difference will be seen. Specifically,
 ZoneId.systemDefault() responds to a change to TimeZone.setDefault().

 The correct fix to this issue was outlined in the bug report.

 1) add caching for those ZoneRegion instances that have a non-offset
 based underlying ZoneRules. This would be done in ZoneRegion.ofId().
 Those instances of ZoneRegion created by ZoneId.ofOffset() would not
 be cached.
 2) Add a Clock instance variable to ZoneId that is only non-null if
 the object is cached
 3) Change Clock.system(ZoneId) to check to see if the clock in the
 ZoneId is non-null before creating a new instance.

 Stephen



 On 23 April 2015 at 22:28, Roger Riggs roger.ri...@oracle.com wrote:

 Hi Peter,

 Setting the user.timezone property doesn't reset the value returned from
 getDefaultRef().
 You can see the new value through java.util.TimeZone but not through
 java.time.ZoneId.

 It's a bad idea to allow the default timezone to change, and in java.time it
 was purposefully handled as an initialize-once value.

 Roger



 On 4/23/2015 5:20 PM, Peter Levart wrote:



 On 04/23/2015 10:33 PM, Roger Riggs wrote:

 Hi Peter,

 The defaultTimeZone is cached in java.util.TimeZone the first time it
 is
 referenced
 and is not updated after that.  See java.util.TimeZone.getDefaultRef().

 Regards, Roger

 But any code (with enough privilege) can update it:

 abstract public class TimeZone  {

  public static void setDefault(TimeZone zone)
  {
  SecurityManager sm = System.getSecurityManager();
  if (sm != null) {
  sm.checkPermission(new PropertyPermission
 ("user.timezone", "write"));
  }
  defaultTimeZone = zone;
  }



 Peter

 On 4/23/2015 4:11 PM, Peter Levart wrote:



 On 04/23/2015 09:53 PM, nadeesh tv wrote:

 Hi all,

 Please review this minor change which  optimized
 Clock.systemDefaultZone() and Clock.system(ZoneId)  to avoid creating
 new
 instance of Clock.SystemClock every time they are called.

 Bug ID : https://bugs.openjdk.java.net/browse/JDK-8074023

 Webrev : http://cr.openjdk.java.net/~rriggs/nadeesh-tv-8074023/

 Hi Nadeesh,

 What happens if ZondeId.systemDefault() changes
 (TimeZone.setDefault())
 *after* Clock class has already been initialized?

 Regards, Peter




Re: RFR: JDK-8074406 DateTimeFormatter.appendZoneOrOffsetId() fails to resolve a ZoneOffset for OffsetDateTime

2015-03-05 Thread Stephen Colebourne
I think the fix looks good. I haven't had time to look through the
tests in detail (as there are lots).
Stephen

On 4 March 2015 at 19:48, Xueming Shen xueming.s...@oracle.com wrote:
 Stephen and Roger,

 This is the DTF.appendZoneOrOffsetId() issue we discussed last year.
 Somehow the
 ball was dropped somewhere :-)

 Here is the proposed change we discussed. The only difference is I leave the
 ZoneOffset
 to throw the DateTimeException, instead of inside the Parsed.query()
 directly.

 I also updated the TCKZoneIdPrinterParser to (1) add the
 appendZoneOrOffsetId() (it appears
 we don't have any corresponding test cases for it) and (2) return a
 ZoneOffset for offset
 if there is indeed a ZoneOffset inside the parsed.

 Issue: https://bugs.openjdk.java.net/browse/JDK-8074406
 webrev: http://cr.openjdk.java.net/~sherman/8074406/webrev

 Thanks!
 -Sherman


Re: JEP 238: Multi-Version JAR Files

2015-03-02 Thread Stephen Colebourne
On 1 March 2015 at 23:37, Remi Forax fo...@univ-mlv.fr wrote:
 You only maintain one module which depends on the latest version.
 When you ship the module, let's say as a jar, you also ship the way to
 downgrade it as executable code.
 At install time, when all the modules are known, before trying to create the
 images, the code that downgrades the module is executed if necessary.
 This code can rewrite the whole bytecode of all the classes of the module and
 thus ensure backward compatibility without providing binary backward
 compatibility.

This direction certainly has potential. In many ways, the biggest
potential benefit any language could bring is the ability to handle
change of code over time (i.e. it's a huge potential language feature
for any new language). While we've always had deprecation, there is no
actual mechanism in code/compiler/linker terms to ensure that the
client has followed the deprecation (callers just get an error if the
code is removed). Thus, what is really needed is a standard
compiler/language mechanism to map old code to new code.

It would allow a method to be renamed (e.g. Duration.getNano() to getNanos())
It would allow a class to be renamed, say java.util.Date to
DumbOldTimestampThing
It would allow an enum constant to be renamed
If done well, it might allow for much more complex refactoring
changes, but there would have to be limits

This comes under the modules banner in my eyes, because modules are
adding a linker step where this kind of stuff could be done. I hope
it's on the radar (maybe for JDK 10 rather than 9), as it's a lot more
useful than MVJARs alone (which seem to have a pretty limited use, one
which could be helped over time just by making reflection easier to
use).

Note that this is slightly different to Remi's suggestion, which is
downgrading new code to run on an old JDK. Here I'm discussing
upgrading old code to run against a newer JDK. It's likely that
both mechanisms make sense.

Stephen


Re: RFR: 8074032: Instant.ofEpochMilli(millis).toEpochMilli() can throw arithmetic overflow in toEpochMilli()

2015-02-27 Thread Stephen Colebourne
Looks good to me.
thanks
Stephen

On 27 February 2015 at 16:25, Daniel Fuchs daniel.fu...@oracle.com wrote:
 Hi,

 Please find below a patch for:

 8074032: Instant.ofEpochMilli(millis).toEpochMilli() can
  throw arithmetic overflow in toEpochMilli()

 http://cr.openjdk.java.net/~dfuchs/webrev_8074032/webrev.00/

 The issue is that when converting milliseconds to seconds + nanos
 Instant.ofEpochMilli() uses floorDiv and floorMod, as it should.
 This means that for negative values of 'millis' then seconds will
 be millis/1000 - 1,  and nanos will be positive.

 When converting back to epoch millis, if multiplyExact is used
 without precaution, then (millis/1000 -1) * 1000 may not fit in
 a long. The solution is thus to compute ((millis/1000 -1) +1) * 1000
 instead and then add (nanos - 1000) to the result.
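
 A hedged sketch of the recombination described above (the method name and
 shape are illustrative, not the actual webrev): for negative seconds with a
 positive nano-of-second, multiply (seconds + 1) so the intermediate value
 stays in range, then add back the millisecond-of-second minus 1000.

 static long toEpochMilliSafe(long seconds, int nanos) {
     if (seconds < 0 && nanos > 0) {
         long millis = Math.multiplyExact(seconds + 1, 1000);
         return Math.addExact(millis, nanos / 1_000_000 - 1000);
     }
     long millis = Math.multiplyExact(seconds, 1000);
     return Math.addExact(millis, nanos / 1_000_000);
 }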


 Note: this issue is causing some JCK tests that call
 LogRecord.setMillis(Long.MIN_VALUE) to fail.

 best regards,

 -- daniel


Re: java.time.ZoneId.systemDefault() overhead

2015-02-27 Thread Stephen Colebourne
On 27 February 2015 at 10:55, Daniel Fuchs daniel.fu...@oracle.com wrote:
 I don't think I saw an issue raised for caching the clock in ZoneId.
 That seems like a good plan to me (ZoneId instances are controlled
 singletons, so there is no big memory issue there)

 I'd been intending to log that. Thanks for reminding me!
 Here is the issue:
 https://bugs.openjdk.java.net/browse/JDK-8074023

I've commented on the issue. It's a little more complex than I thought,
but perfectly do-able. Just needs some ZoneId instances to be cached
as well.

Stephen


Re: java.time.ZoneId.systemDefault() overhead

2015-02-27 Thread Stephen Colebourne
On 26 February 2015 at 23:29, Peter Levart peter.lev...@gmail.com wrote:
 Here's another variant that doesn't use a back link to base TimeZone:
 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneId.systemDefault/webrev.03/

Yes, that's easier to follow. I think I'd be happy with that approach.

I don't think I saw an issue raised for caching the clock in ZoneId.
That seems like a good plan to me (ZoneId instances are controlled
singletons, so there is no big memory issue there)

Stephen


Re: java.time.ZoneId.systemDefault() overhead

2015-02-26 Thread Stephen Colebourne
Personally, I found the SharedSecrets version easier to follow. I also
suspect that the toZoneId0() method would be invoked most of the time
with this variant, resulting in no beneficial inlining effect.

The ZoneOffsetTransition patch looks like a no-brainer to me.

Stephen




On 26 February 2015 at 20:14, Peter Levart peter.lev...@gmail.com wrote:
 Hi,

 Using routines for converting DOS time to Java time and back
 (java.util.zip.ZipUtils.[dosToJavaTime,javaToDosTime]) as a base for
 comparing deprecated java.util.Date API with new java.time API, here's a JMH
 test with 3 distinct implementations of those routines using java.util.Date,
 java.time.LocalDateTime and java.util.ZonedDateTime respectively:

 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneId.systemDefault/ZipUtilsTest.java

 Running the test in unpatched JDK9 reveals these results:

 Benchmark                            Mode  Samples    Score  Score error  Units
 j.t.ZipUtilsTest.dosToJavaTime_Date  avgt       15   95.644        4.241  ns/op
 j.t.ZipUtilsTest.dosToJavaTime_LDT   avgt       15  118.155        2.980  ns/op
 j.t.ZipUtilsTest.dosToJavaTime_ZDT   avgt       15  131.456        3.586  ns/op
 j.t.ZipUtilsTest.javaToDosTime_Date  avgt       15   74.692        1.709  ns/op
 j.t.ZipUtilsTest.javaToDosTime_LDT   avgt       15  134.116        4.396  ns/op
 j.t.ZipUtilsTest.javaToDosTime_ZDT   avgt       15  141.987        8.697  ns/op

 Using LocalDateTime instead of ZonedDateTime is a little faster, but does
 not make it as fast as using java.util.Date.

 With a patch for caching of ZoneId on default TimeZone, which speeds up
 repeated calls to ZoneId.systemDefault(), the results are as follows:

 Benchmark                            Mode  Samples    Score  Score error  Units
 j.t.ZipUtilsTest.dosToJavaTime_Date  avgt       15   95.590        2.354  ns/op
 j.t.ZipUtilsTest.dosToJavaTime_LDT   avgt       15   79.682        3.572  ns/op
 j.t.ZipUtilsTest.dosToJavaTime_ZDT   avgt       15   86.801        2.081  ns/op
 j.t.ZipUtilsTest.javaToDosTime_Date  avgt       15   75.096        1.178  ns/op
 j.t.ZipUtilsTest.javaToDosTime_LDT   avgt       15   91.919        2.191  ns/op
 j.t.ZipUtilsTest.javaToDosTime_ZDT   avgt       15   92.037        2.054  ns/op

 dosToJavaTime routine using LocalDateTime outperforms java.util.Date based.
 But javaToDosTime still lags behind. Profiling shows another point that can
 be optimized. In ZoneRules.getOffset(Instant) there is a loop over
 ZoneOffsetTransition[] array that searches for 1st transition that has its
 toEpochSecond value less than the Instant's epochSecond. This calls
 ZoneOffsetTransition.toEpochSecond repeatedly, converting
 ZoneOffsetTransition.transition which is a LocalDateTime to epochSecond.
 This repeated conversion is unnecessary, as ZoneOffsetTransition[] array is
 part of ZoneRules which is cached. Optimizing the ZoneOffsetTransition
 implementation (keeping both LocalDateTime variant and epochSecond variant
 of transition time as the object's state) speeds up this conversion which
 makes the above test produce these results when combined with
 ZoneId.systemDefault() optimization:

 Benchmark                            Mode  Samples    Score  Score error  Units
 j.t.ZipUtilsTest.dosToJavaTime_Date  avgt       15   92.291        2.903  ns/op
 j.t.ZipUtilsTest.dosToJavaTime_LDT   avgt       15   79.765        3.106  ns/op
 j.t.ZipUtilsTest.dosToJavaTime_ZDT   avgt       15   87.282        2.967  ns/op
 j.t.ZipUtilsTest.javaToDosTime_Date  avgt       15   76.235        2.651  ns/op
 j.t.ZipUtilsTest.javaToDosTime_LDT   avgt       15   73.115        2.567  ns/op
 j.t.ZipUtilsTest.javaToDosTime_ZDT   avgt       15   75.701        2.226  ns/op

 For these tests I used another approach for speeding up
 ZoneId.systemDefault() which doesn't use SharedSecrets. It changes only
 TimeZone class so that base TimeZone instance is referenced from the clone
 returned by TimeZone.getDefault(). Although TimeZone object is unnecessarily
 cloned, it does not represent much overhead (probably the allocation is
 optimized away by JIT as the clone never escapes from
 ZoneId.systemDefault()):

 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneId.systemDefault/webrev.02/

 The patch to ZoneOffsetTransition class is straightforward:

 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneOffsetTransition.epochSecond/webrev.01/


 So is this a worthy improvement? What do you think about the new approach
 for optimizing ZoneId.systemDefault() compared to SharedSecrets based:

 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneId.systemDefault/webrev.01/

 (this one still has an NPE bug in TimeZone.setDefault())



 Regards, Peter



 On 02/23/2015 12:50 PM, Stephen Colebourne wrote:

 Having recently spent time on performance myself, I know that there
 are a number of things in the java.time package that need

Re: JEP 238: Multi-Version JAR Files

2015-02-25 Thread Stephen Colebourne
On 25 February 2015 at 13:30, Paul Sandoz paul.san...@oracle.com wrote:
 Even in the modular world I will expect class scanning will be used. While we
 can now iterate reliably over classes in the image I don't believe that
 functionality is made available for classes in modules on the module path.

To be honest, being able to reliably iterate over classes in a module
is the most interesting potential feature about modules in general.
It's such an unreliable pain at the moment that it really needs
tackling, and it is a very effective tool in application design in
languages that have it (like Fantom).

On 25 February 2015 at 16:27, Brian Goetz brian.go...@oracle.com wrote:
 On 2/12/2015 5:59 PM, Stephen Colebourne wrote:
 I would expect IDEs to have some considerable work to do.
 Agree on the work part, but I doubt it is considerable.

In Eclipse, there is a 1:1 mapping between a project and a JDK
compiler/version. Same in Maven. And no doubt elsewhere. I don't think
making that one to many instead is likely to be a particularly small
change.

Stephen


Re: RFR: 8073394: Clock.systemUTC() should return a constant

2015-02-24 Thread Stephen Colebourne
On 23 February 2015 at 22:27, Peter Levart peter.lev...@gmail.com wrote:
 What about the following idea:

 - create a (maybe still package-private) instance method
 ZoneId.getSystemClock() and cache the per-ZoneId Clock instance inside the
 ZoneId.
 - Clock.system(ZoneId) static method is then just delegating to
 ZoneId.getSystemClock().
 - Similarly Clock.systemDefaultZone() can just return
 ZoneId.defaultSystem().getSystemClock().
 - Similarly Clock.systemUTC() can just return
 ZoneOffset.UTC.getSystemClock().

 Or do we just want to cache systemUTC() Clock here?

I suspect that the above would be a good optimisation, though perhaps
a separate issue?

Stephen


Re: RFR: 8073394: Clock.systemUTC() should return a constant

2015-02-24 Thread Stephen Colebourne
Thanks for the change. This reviewer is happy.
Stephen

On 24 February 2015 at 10:59, Daniel Fuchs daniel.fu...@oracle.com wrote:
 On 23/02/15 21:40, Stephen Colebourne wrote:

 The rest of the java.time code tends to put the data provider method
 before the test, and mostly uses a naming convention of
 data_systemClocks. Neither of which are particularly important
 things.


 Thanks Stephen.

 I had a look at TestLocalDate.java
 http://hg.openjdk.java.net/jdk9/dev/jdk/file/tip/test/java/time/test/java/time/TestLocalDate.java#l177
 and tried to emulate what I saw there. All files in that
 directory seem to share the same convention.

 Here is the new webrev:
 http://cr.openjdk.java.net/~dfuchs/webrev_8073394/webrev.02/

 best regards,

 -- daniel



 Stephen


 On 23 February 2015 at 20:02, Daniel Fuchs daniel.fu...@oracle.com
 wrote:

 On 23/02/15 19:50, Roger Riggs wrote:


 Hi Daniel,

 Typically, a TestNG DataProvider would have been used to supply the case
 data
 instead of an internal Map.



 Thanks Roger. I should take some time to learn more about TestNG.

 Here is the new webrev:

 http://cr.openjdk.java.net/~dfuchs/webrev_8073394/webrev.01/

 -- daniel



 The rest looks fine.

 Thanks, Roger

 On 2/23/2015 12:41 PM, Daniel Fuchs wrote:


 Hi,

 Please find below a small patch for
 8073394: Clock.systemUTC() should return a constant

 http://cr.openjdk.java.net/~dfuchs/webrev_8073394/webrev.00/

 best regards,

 -- daniel







Re: java.time.ZoneId.systemDefault() overhead

2015-02-23 Thread Stephen Colebourne
Having recently spent time on performance myself, I know that there
are a number of things in the java.time package that need some work.

Improving ZoneId.systemDefault() is definitely an improvement I
welcome. The change is along the lines of what I would have made,
involving a secret location to allow data to be exchanged (this code
needs to avoid the clone). The idea of adding a public method
TimeZone.getDefaultZoneId() is also one I'd be interested in if it
works.

On the patch, in TimeZone::toZoneId() I would use two methods:

public ZoneId toZoneId() {
    ZoneId zId = zoneId;
    if (zId == null) {
        // lazily initialise and cache the ZoneId on first use
        zoneId = zId = toZoneId0();
    }
    return zId;
}

private ZoneId toZoneId0() {
    String id = getID();
    if (ZoneInfoFile.useOldMapping() && id.length() == 3) {
        if ("EST".equals(id)) {
            return ZoneId.of("America/New_York");
        } else if ("MST".equals(id)) {
            return ZoneId.of("America/Denver");
        } else if ("HST".equals(id)) {
            return ZoneId.of("America/Honolulu");
        }
    }
    return ZoneId.of(id, ZoneId.SHORT_IDS);
}

This should aid hotspot inlining (removing uncommon paths from main flow).

Does the code properly handle the static setDefault() method? I
suspect so, but will be worth a test.

 If there's no undesirable side effect then caching zoneId
 inside TimeZone looks reasonable to me - however I'm not an expert
 of the field, and I got to learn that time/date can be more complex
 than what I first thought ;-)

 One thing I noticed is that there are two kinds of ZoneRulesProvider(s):
 those that are static and allow their rules to be cached and the dynamic
 ones, that can provide dynamic view on rules and therefore don't allow
 caching of rules. But this aspect is encapsulated in ZoneRegion class (an
 implementation of ZoneId). So I think an instance to ZoneRegion (or any
 ZoneId) can be cached for indefinite time as long as it's id is the one that
 we are looking for.

Yes, ZoneId can be safely cached as an immutable object (probably
never going to be a value type BTW). All current ZoneRegion instances
have static, not dynamic, rules.


As a final possibility, it might be possible to add a new subclass of
TimeZone that works directly off ZoneId. (sourcing the offset rules
from ZoneId instead of direct from the underlying data). However, the
mutable API of TimeZone probably makes it quite hard to get right.

On the specific ZIP code, using LocalDateTime rather than
ZonedDateTime may be faster as there are fewer objects to create. Claes'
code looks OK at first glance.
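
As a hedged illustration of the LocalDateTime variant (assuming the DOS time
fields have already been unpacked into ints; the variable names are made up):

LocalDateTime ldt = LocalDateTime.of(year, month, day, hour, minute, second);
long javaTime = ldt.atZone(ZoneId.systemDefault()).toInstant().toEpochMilli();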

To get more performance, effort needs to be spent on LocalDate.of()
and LocalTime.of(). Both use very clean code, but they call out to a
general routine that has to look up boundary numbers. Because they are
general, they have quite deep call stacks, and inlining sometimes
fails to reach them due to inlining depth limits. For those two
critical pieces of code, the limits need to be inlined, something like
this:

public static LocalDate of(int year, int month, int dayOfMonth) {
    if (year < Year.MIN_VALUE || year > Year.MAX_VALUE) {
        throw new DateTimeException(genInvalidFieldMessage(YEAR, year));
    }
    if (month < 1 || month > 12) {
        throw new DateTimeException(genInvalidFieldMessage(MONTH_OF_YEAR, month));
    }
    if (dayOfMonth < 1 || dayOfMonth > 31) {
        throw new DateTimeException(genInvalidFieldMessage(DAY_OF_MONTH, dayOfMonth));
    }
    return create(year, month, dayOfMonth);
}
// kept out of the hot path so the of() method stays small enough to inline
private static String genInvalidFieldMessage(TemporalField field, long value) {
    return "Invalid value for " + field + " (valid values " + field.range() + "): " + value;
}

I've not tested if this is faster, but I'd be surprised if it wasn't.

Stephen



On 22 February 2015 at 20:21, Peter Levart peter.lev...@gmail.com wrote:
 Hi,

 I noticed internal JDK class java.util.zip.ZipUtils uses deprecated
 java.util.Date API for implementing two methods for converting DOS to Java
 time and back. I thought I'd try translating them to use new java.time API.
 Translation was straightforward, but I noticed that new implementations are
 not on par with speed to old java.util.Date versions. Here's a JMH benchmark
 implementing those two conversion methods with java.util.Date and
 java.time.ZonedDateTime APIs:

 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneId.systemDefault/ZipUtilsTest.java

 The results show the following:

 Benchmark                            Mode  Samples    Score  Score error  Units
 j.t.ZipUtilsTest.dosToJavaTime_Date  avgt        5  101.890       17.570  ns/op
 j.t.ZipUtilsTest.dosToJavaTime_ZDT   avgt        5  137.587       13.533  ns/op
 j.t.ZipUtilsTest.javaToDosTime_Date  avgt        5   67.114       10.382  ns/op
 j.t.ZipUtilsTest.javaToDosTime_ZDT   avgt        5  143.739       15.243  ns/op


 Quick sampling with jvisualvm shows that a substantial time is spent
 repeatedly obtaining ZoneId.systemDefault() instance. I checked the
 implementation and came up with the following optimization:

 http://cr.openjdk.java.net/~plevart/jdk9-dev/ZoneId.systemDefault/webrev.01/

 TimeZone 

Re: RFR: 8073394: Clock.systemUTC() should return a constant

2015-02-23 Thread Stephen Colebourne
The rest of the java.time code tends to put the data provider method
before the test, and mostly uses a naming convention of
data_systemClocks. Neither of which are particularly important
things.

Stephen


On 23 February 2015 at 20:02, Daniel Fuchs daniel.fu...@oracle.com wrote:
 On 23/02/15 19:50, Roger Riggs wrote:

 Hi Daniel,

 Typically, a TestNG DataProvider would have been used to supply the case
 data
 instead of an internal Map.


 Thanks Roger. I should take some time to learn more about TestNG.

 Here is the new webrev:

 http://cr.openjdk.java.net/~dfuchs/webrev_8073394/webrev.01/

 -- daniel



 The rest looks fine.

 Thanks, Roger

 On 2/23/2015 12:41 PM, Daniel Fuchs wrote:

 Hi,

 Please find below a small patch for
 8073394: Clock.systemUTC() should return a constant

 http://cr.openjdk.java.net/~dfuchs/webrev_8073394/webrev.00/

 best regards,

 -- daniel





Re: RFR 9 8055330: (process spec) ProcessBuilder.start and Runtime.exec should throw UnsupportedOperationException ...

2015-02-21 Thread Stephen Colebourne
FWIW, I think UOE is clearly the right exception for a method that is
unsupported. An IOException is very much associated with the actual IO
system.
Stephen

On 20 February 2015 at 21:06, Martin Buchholz marti...@google.com wrote:
 I totally disagree about recoverable condition.  Instead of thinking
 about recovering, think instead about whether it ever makes sense to
 catch the resulting exception.  See my sample code broken by the switch to
 UOE.

 On Fri, Feb 20, 2015 at 11:58 AM, Roger Riggs roger.ri...@oracle.com
 wrote:

 Hi Martin,

 As I said earlier, launching a Process when process is not implemented
 is not a recoverable condition; there are no conditions under which it will
 succeed.  If the application is probing to determine what
 is supported on a platform then it should be prepared to handle UOE
 or using a test for the specific capability it requires.

 Roger



 On 2/20/2015 1:34 PM, Martin Buchholz wrote:

 One reason I keep pouring salt on this tiny wound is that throwing
 (unchecked) UOE for system-dependent failures when normally IOE is thrown
 is a fundamental design mistake for java and its checked exception design.
 I think it violates Josh's Effective Java Item 58: Use checked exceptions
 for recoverable conditions and runtime exceptions for programming errors.
 I don't think it's worth fixing places in jdk8 where this small mistake
 was
 made, but we can at least stop the incompatible worsening of existing
 APIs.

 On Fri, Feb 20, 2015 at 3:49 AM, Alan Bateman alan.bate...@oracle.com
 wrote:

  On 19/02/2015 21:54, Jason Mehrens wrote:

  I'm assuming that compatibility is given more weight vs. correcting
 choices made in the original design.

   Yes, I think we've spent more than enough time on it. In this case
 it's

 for a major release only and the compatibility impact seems to be only
 platforms or implementations that don't support launching of processes
 today but are running applications that attempt to start processes
 anyway.
 So overall it doesn't seem to be something to be overly concerned with.

 -Alan





Re: RFR: 8072645: java.util.logging should use java.time to get more precise time stamps

2015-02-19 Thread Stephen Colebourne
A good improvement and ready to go from my perspective,
thanks
Stephen


On 19 February 2015 at 15:19, Daniel Fuchs daniel.fu...@oracle.com wrote:
 On 18/02/15 12:11, Stephen Colebourne wrote:

 In LogRecord, the Javadoc of getInstant():
 Get event time.
 could be
 Gets the instant that the event occurred.


 done.

 In LogRecord, the Javadoc of setInstant():
 Set event time.
 could be
 Sets the instant that the event occurred.
 (matching change to @param)


 done.


 I'd prefer to see the other methods in the class have their Javadoc
 changed from Get to Gets and from Set to Sets to conform with
 Javadoc norms, but understand if this causes CCC issues.


 Agreed - but I will need to obtain a CCC approval for this change,
 and I'd prefer to leave the specdiff uncluttered.
 I will log an RFE for a doc cleanup.


 In PlatformLogger.
 There should be no need to assign to a static constant:
   private static final Clock systemUTC = Clock.systemUTC();
 However, Clock.systemUTC() needs to be fixed:
 https://bugs.openjdk.java.net/browse/JDK-8073394


 Ahah. I hadn't seen this RFE. Thanks for pointing it out.
 This RFE (JDK-8073394) won't need any CCC so it should be an easy
 quick fix. Given that, I agree that it's better to remove the
 static systemUTC fields (removed from PlatformLogger and LogRecord).
 I will also take care of JDK-8073394 later on then.


 This
   ZonedDateTime zdt = ZonedDateTime.ofInstant(systemUTC.instant(),
 zoneId);
 can be replaced with
   ZonedDateTime zdt = ZonedDateTime.now(zoneId);


 done.

 Again, the implementation of the better method isn't perfectly optimised.

 The default format used in String.format does not need the time-zone,
 just the local date/time. As such, in theory a LocalDateTime could be
 used. But, since the format can be changed by the user, a
 ZonedDateTime is needed (probably not worth being clever here).


 Yes - that was my reasoning too. ZonedDateTime seems to be the
 more resilient to format changes.

 Where you assign ZoneId.systemDefault() to a final instance variable,
 it is being locked for the lifetime of that object. I cannot
 immediately tell what the lifetime of the objects are. I also suspect
 that few if any will want to change their time-zone half way through
 logging.


 I don't think we should bother with this possibility.

 The PlatformLogger's simple logger may either never be
 released (if JUL is not used - meaning, if there is no user
 specified logging.properties file and if nobody calls
 LogManager.getLogManager()) - or it will be replaced by a
 plain logger - in which case the j.u.l.Formatter will
 be used - and the j.u.l.Formatter will usually have the
 same life-time than the Handler on which it is set.
 (usually, until reset() or readConfiguration() is called).


 XMLFormatter has a reviewer's comment that will need to be removed
 before committing.


 Will do before pushing. At the moment I'm still hoping it will catch
 the eye of an XML/DTD expert :-)

 In LogRecordWithNanosAPI you check the instant contents rather than
 just the instant:
   assertEquals(record.getInstant().getEpochSecond(),
 record2.getInstant().getEpochSecond(),
 getInstant().getEpochSecond());
   assertEquals(record.getInstant().getNano(),
 record2.getInstant().getNano(), getInstant().getNano());
   assertEquals(record.getInstant().toEpochMilli(),
 record2.getInstant().toEpochMilli(), getInstant().toEpochMilli());
 could be
   assertEquals(record.getInstant(), record2.getInstant(), getInstant());
 There are probably other instances of this point.


 Right. I'm not very motivated to change the LogRecord* tests.
 What happened is that I wrote these tests based on the use of
 LogRecord.setMillis/getMillis and
 LogRecord.getNanoAdjustment/setNanoAdjustment - and then had to
 update them when we decided to drop
 LogRecord.getNanoAdjustment/setNanoAdjustment and changed the
 implementation of getMillis/setMillis to work on top of instant.

 As a consequence, the test have probably become a bit more complex
 and cluttered than they would have been if I had written them
 directly against the later version of the API.

 However I believe they still do the job - I have been careful
 to add some additional tests to verify the new implementation of
 getMillis/setMillis - I don't believe I left any cracks.

 The important point is to verify the consistency of getMillis/getInstant
 and setMillis/setInstant as well as checking that nothing is lost
 in the serialization - I believe the tests are good enough in that
 respect.

 Hope that helps


 It does! I am grateful :-)

 Here is the updated webrev.
 http://cr.openjdk.java.net/~dfuchs/webrev_8072645/webrev.02/

 best regards,

 -- daniel


 Stephen



 On 17 February 2015 at 19:33, Daniel Fuchs daniel.fu...@oracle.com
 wrote:

 Hi,

 Here is a new webrev - which should contain all the feedback received
 so far:

 #1 LogRecord uses serialPersistentFields for serialization, and
 contains an Instant instead of the two fields

Re: RFR: 8072645: java.util.logging should use java.time to get more precise time stamps

2015-02-18 Thread Stephen Colebourne
In LogRecord, the Javadoc of getInstant():
Get event time.
could be
Gets the instant that the event occurred.

In LogRecord, the Javadoc of setInstant():
Set event time.
could be
Sets the instant that the event occurred.
(matching change to @param)

I'd prefer to see the other methods in the class have their Javadoc
changed from Get to Gets and from Set to Sets to conform with
Javadoc norms, but understand if this causes CCC issues.


In PlatformLogger.
There should be no need to assign to a static constant:
 private static final Clock systemUTC = Clock.systemUTC();
However, Clock.systemUTC() needs to be fixed:
https://bugs.openjdk.java.net/browse/JDK-8073394

This
 ZonedDateTime zdt = ZonedDateTime.ofInstant(systemUTC.instant(), zoneId);
can be replaced with
 ZonedDateTime zdt = ZonedDateTime.now(zoneId);
Again, the implementation of the better method isn't perfectly optimised.

The default format used in String.format does not need the time-zone,
just the local date/time. As such, in theory a LocalDateTime could be
used. But, since the format can be changed by the user, a
ZonedDateTime is needed (probably not worth being clever here).

Where you assign ZoneId.systemDefault() to a final instance variable,
it is being locked for the lifetime of that object. I cannot
immediately tell what the lifetime of the objects are. I also suspect
that few if any will want to change their time-zone half way through
logging.

XMLFormatter has a reviewer's comment that will need to be removed
before committing.

In LogRecordWithNanosAPI you check the instant contents rather than
just the instant:
 assertEquals(record.getInstant().getEpochSecond(),
record2.getInstant().getEpochSecond(),
getInstant().getEpochSecond());
 assertEquals(record.getInstant().getNano(),
record2.getInstant().getNano(), getInstant().getNano());
 assertEquals(record.getInstant().toEpochMilli(),
record2.getInstant().toEpochMilli(), getInstant().toEpochMilli());
could be
 assertEquals(record.getInstant(), record2.getInstant(), getInstant());
There are probably other instances of this point.

Hope that helps
Stephen



On 17 February 2015 at 19:33, Daniel Fuchs daniel.fu...@oracle.com wrote:
 Hi,

 Here is a new webrev - which should contain all the feedback received
 so far:

 #1 LogRecord uses serialPersistentFields for serialization, and
contains an Instant instead of the two fields millis +
nanoAdjustment.
 #2 getMillis/setMillis are deprecated, replaced by getInstant/setInstant
getNanoAdjustment/setNanoAdjustment have been dropped.

 [Thanks Peter for prototyping these 2 first changes!]

 #3 XMLFormatter uses ISO_INSTANT to print the instant in the date field.
new constructor has been dropped. printNanos property is renamed into
useInstant.
 #4 SimpleFormatter still uses ZonedDateTime - for compatibility and
flexibility and because it proved to be faster than Date [1].
 #5 Tests have been updated WRT to the changes above.
 #6 various doc tweaks compared to last webrev

 new webrev:
 http://cr.openjdk.java.net/~dfuchs/webrev_8072645/webrev.01/

 specdiff updated in place (unfortunately the serial form does
 not show up):
 http://cr.openjdk.java.net/~dfuchs/webrev_8072645/specdiff-logging-time/overview-summary.html

 best regards,

 -- daniel

 [1] benchmarks related to formatting the date:
 http://cr.openjdk.java.net/~dfuchs/webrev_8072645/benchmarks.html



 On 16/02/15 20:24, Daniel Fuchs wrote:

 Hi Stephen,

 Thanks again for your support and suggestions!

 On 14/02/15 14:03, Daniel Fuchs wrote:

 If I'm not mistaken the previous SimpleFormatter used to use
 java.util.Date
 and printed the time in the local time zone. I have tried to keep this
 behavior.
 I'm not sure we would want to change it to print the time in the UTC
 time zone
 by default. A lot of developers use logging for debugging - and when
 reading
 debug messages on the console I usually prefer to see the time in my own
 time zone.

 Would there be a more efficient way to keep the default formatting of
 the time
 in the SimpleFormatter?


 I spent part of the day reading the documentation  trying out the
 various TemporalAccessors and DateTimeFormatters...

 I have also done some microbenchmark measurements (jmh) WRT
 the performance  of formatting a date/time.
 Results can be seen here [1]:
 http://cr.openjdk.java.net/~dfuchs/webrev_8072645/benchmarks.html

 What it shows is that when using String.format (as the SimpleFormatter
 is specified to do), then using ZonedDateTime is actually an improvement
 over using Date.
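
 For instance (a hedged example of the %t conversions with a TemporalAccessor;
 the pattern here is illustrative, not the SimpleFormatter default):

 ZonedDateTime zdt = ZonedDateTime.now();
 String line = String.format("%1$tF %1$tT.%1$tL", zdt);  // e.g. 2015-02-16 20:24:05.123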

 Now that I have read a bit more about LocalDateTime, ZonedDateTime,
 Date, and Instant, I do agree with you that for recording an event time,
 printing the Instant is the better alternative.
 However, since SimpleFormatter has always printed the local date,
 I would tend to want to keep it that way.

 So I'm going to propose to keep using ZonedDateTime in
 the SimpleFormatter, as I believe it's what gives it the maximum of
 flexibility - WRT 

Re: RFR: 8072645: java.util.logging should use java.time to get more precise time stamps

2015-02-14 Thread Stephen Colebourne
Elements of the date/time handling here don't really work.

Logging is essentially a UTC-only problem; using a time-zone is slow and
unnecessary. This indicates that all uses of ZonedDateTime should be
replaced with either Instant or an OffsetDateTime using ZoneOffset.UTC.

Any string format should have the ISO-8601 string end with Z, and not end
with -05:00 or any other zone offset.
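
For illustration (a hedged fragment; either form below ends with Z):

Instant now = Clock.systemUTC().instant();
String a = DateTimeFormatter.ISO_INSTANT.format(now);            // ...Z
String b = OffsetDateTime.ofInstant(now, ZoneOffset.UTC)
        .format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);         // ...Z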

(The webrev is broken wrt zones as it stores ZoneId.systemDefault() in a
static constant, but that method depends on the mutable
TimeZone.getDefault()).


LogRecord.getInstantUTC() should be getInstant().

(An Instant is fully defined as a concept, and it cannot be in or not in
UTC).


In SimpleFormatter, you have {@code java.util.loggin} (missing a g).


In XMLFormatter, instead of using DateTimeFormatter.ISO_LOCAL_DATE_TIME,
create a new instance of DateTimeFormatter that does not output the
fraction of a second. That way, there is no need to use
truncatedTo(SECONDS).
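
Something along these lines might do (a sketch, not the actual webrev change):

DateTimeFormatter secondsOnly = new DateTimeFormatterBuilder()
        .append(DateTimeFormatter.ISO_LOCAL_DATE)
        .appendLiteral('T')
        .appendValue(ChronoField.HOUR_OF_DAY, 2)
        .appendLiteral(':')
        .appendValue(ChronoField.MINUTE_OF_HOUR, 2)
        .appendLiteral(':')
        .appendValue(ChronoField.SECOND_OF_MINUTE, 2)
        .toFormatter();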

StringBuilder appends can be used directly with formatters:

sb.append(ZonedDateTime.ofInstant(time, zoneId).format(dtformatter));

replace with

dtformatter.formatTo(ZonedDateTime.ofInstant(time, zoneId), sb);

thanks
Stephen



On 13 Feb 2015 14:57, Daniel Fuchs daniel.fu...@oracle.com wrote:

 Hi,

 Please find below a patch for:

 8072645: java.util.logging should use java.time to get more
  precise time stamps


 http://cr.openjdk.java.net/~dfuchs/webrev_8072645/webrev.00/

 specdiff:
 http://cr.openjdk.java.net/~dfuchs/webrev_8072645/
 specdiff-logging-time/java/util/logging/package-summary.html

 Overview:
 -

 The patch is made of the following pieces:

  - LogRecord uses java.time.Clock's systemClock to get an
Instant in the best available resolution.

The instant is split into a number of milliseconds (a long)
and a nanosecond adjustment (an int).
The number of milliseconds is the same as what would have
been obtained by calling System.currentTimeMillis().

  - LogRecord acquires a new serializable int nanoAdjustment field,
which can be used together with the number of milliseconds
to reconstruct the instant.

  - SimpleFormatter is updated to pass a ZonedDateTime
instance to String.format, instead of a Date.

The effect of that is that the format string can now
be configured to print the full instant precision, if
needed.

  - XMLFormatter will add a new nanos element after the
millis element - if the value of the nanoAdjustment
field is not 0.

The date string will also contain the nano second
adjustment as well as the zone offset as formatted by
DateTimeFormatter.ISO_OFFSET_DATE_TIME

 Compatibility considerations:
 -

 - The serial form of log record is backward/forward compatible.
   I added a test to verify that.

 - XMLFormatter has acquired a new configurable property
   'FQCN.printNanos' which allows to revert to the old
   XML format, should the new format cause issues in
   existing applications.

 - The logger.dtd will need to be updated, to support the
   new optional nanos element. And for this matter,
   should we update the logger.dtd or rather define a
   logger-v2.dtd?
   See planned modification:

 http://cr.openjdk.java.net/~dfuchs/webrev_8072645/logger-
 dtd/logger.dtd.frames.html

 best regards,

 -- daniel



Re: JEP 238: Multi-Version JAR Files

2015-02-12 Thread Stephen Colebourne
Interesting direction.

Reading carefully, the goal is actually very limited in scope, by
preventing any public API changes. It doesn't help adoption of JSR-310
for example, but will be useful for Unsafe, which is clearly a
motivating factor.

I would expect IDEs to have some considerable work to do.

My biggest concern will be ensuring tools like Reflections (or other
classpath scanning tools) carry on working. Really, classpath scanning
should no longer be necessary with a module system, but since we've
heard nothing to indicate that issue is being tackled, those tools
will continue to be vital (such as to find all classes implementing an
annotation, or all implementations of an interface)

Stephen





On 12 February 2015 at 20:52, Paul Sandoz paul.san...@oracle.com wrote:
 Hi

 In connection with the JEP there is also a design document to help the 
 discussion:

   http://cr.openjdk.java.net/~psandoz/jdk9/MultiVersionJar-8u60-9-design.md

 We are especially interested in hearing feedback from library developers,
 tool/IDE developers, and anyone doing funky stuff with class loading and JAR 
 files.

 Paul.

 On Feb 12, 2015, at 9:41 PM, mark.reinh...@oracle.com wrote:

 New JEP Candidate: http://openjdk.java.net/jeps/238

 - Mark



Re: JEP 102 Process Updates revised API draft

2015-02-09 Thread Stephen Colebourne
On 9 February 2015 at 23:44, David M. Lloyd david.ll...@redhat.com wrote:
 ProcessHandle.Info provides a startTime and totalTime.  But it seems odd and
 inconsistent that startTime is an Instant, yet totalTime is a raw long as
 opposed to the arguably more consistent Duration.  Is there a reason you
 went with a raw long for this property?

I think using Duration would be OK here, even though the CPU time was
used up in a discontinuous manner. The returned value is, in essence,
a sum of many smaller durations.

I'd also suggest adjusting the two method names:
startTime() -> startInstant()
totalTime() -> totalCpuDuration()

Those communicate the intent more clearly IMO.
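
A hedged illustration of how the suggested names might read in client code
(the Optional wrappers are an assumption of this sketch, not necessarily part
of the draft under discussion):

ProcessHandle.Info info = ProcessHandle.current().info();
Optional<Instant>  start = info.startInstant();       // suggested rename of startTime()
Optional<Duration> cpu   = info.totalCpuDuration();   // suggested rename of totalTime()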

Stephen


Re: RFR 9: 8067800: Unexpected DateTimeException in the java.time.chrono.HijrahChronology.isLeapYear

2015-02-04 Thread Stephen Colebourne
The spec is pretty clear:
"the proleptic-year to check, not validated for range"

Adding an exception to the spec would cause requests to add exceptions
to all other chronologies. Doing so would be very negative to
performance (it would require having a non-public copy of the logic
elsewhere to avoid the range checking cost).

Adding an exception to Hijrah only means that the implementation is
doing something not permitted by the spec.

The only sane choice here is to return false from Hijrah when out of
range. As intended and as specced.

Note that I don't believe that returning false will cause hardship in
any actual user code, because other methods will constrain the year to
be valid.

If necessary, the following spec clarification could be added to the
bullet points on Chronology:

<li>if the year is out of range the chronology should return a best
effort guess, or false if there is no suitable guess

Stephen


On 4 February 2015 at 16:05, Roger Riggs roger.ri...@oracle.com wrote:
 Hi Stephen,

 On 2/4/15 10:43 AM, Stephen Colebourne wrote:

 The java.time code generally takes a lenient approach to methods that
 return a boolean. For example, they tend to accept null inputs without
 throwing an exception.

 Seems like an odd design provision leading to some behavior that
 would be inconsistent with non-boolean methods.


 Right now, this patch makes a subclass implementation incompatible
 with the superclass spec. That cannot happen. Thus there are only two
 options:

 - change the superclass spec
 - return false from the subclass

 I favour the latter as being in line with the rest of the package and
 less disruptive to existing users (would a CCC even pass?)

 For a 'young' API with limited uptake, issues can be fixed early and in a
 major
 release there is more flexibility to clarify the specification.

 DTE is a runtime exception and can occur in a majority of time computations.
 In the case of HijrahChronolog.isLeapYear, the implementation currently
 throws a different RuntimeException and this would be a correction
 to the conventional exception.

 Roger



 Stephen


 On 4 February 2015 at 15:10, Roger Riggs roger.ri...@oracle.com wrote:

 Hi Stephen,

 I also indicated in the Jira comments that it is misleading and incorrect
 to
 return
 false when it is not known that a year is or is not a leap year. All of
 the
 other
 HijrahChronology computations throw DTE for invalid dates and the same
 may
 be
 true for other Chronologies.

 The assertion in Chronology.isLeapYear about not checking the range is
 too
 broad
 and should be qualified by the Chronology.

 Perhaps the proposed change should include a caveat in that method.

 Roger




 On 2/3/15 8:05 PM, Stephen Colebourne wrote:

 -1

 As I indicated on JIRA, I don't believe that this change meets the
 spec or intent of the definition on Chronology. That is specified to
 not throw any exceptions and to handle all years, valid or not.

 I don't foresee any significant issue where a year is not validated by
 this method. Years out of range should simply return false, again
 something that is within the spirit of the spec a chronology that
 does not support the concept of a year must return false.

 Stephen



 On 3 February 2015 at 20:56, Lance Andersen lance.ander...@oracle.com
 wrote:

 +1
 On Feb 3, 2015, at 3:45 PM, Roger Riggs roger.ri...@oracle.com wrote:

 Please review this specification clarification of the range of Hijrah
 calendar variants.
 The issue was exposed by a bug in the HijrahChronology.isLeapYear
 method.

 Webrev:
 http://cr.openjdk.java.net/~rriggs/webrev-leap-year-8067800/

 Issue:
 https://bugs.openjdk.java.net/browse/JDK-8067800

 A CCC may be needed.

 Thanks, Roger


 Lance Andersen| Principal Member of Technical Staff | +1.781.442.2037
 Oracle Java Engineering
 1 Network Drive
 Burlington, MA 01803
 lance.ander...@oracle.com






Re: RFR 9: 8067800: Unexpected DateTimeException in the java.time.chrono.HijrahChronology.isLeapYear

2015-02-04 Thread Stephen Colebourne
I think this might be clearer:

* Checks if the specified year is a leap year.
* <p>
* A leap-year is a year of a longer length than normal.
* The exact meaning is determined by the chronology according to the
* following constraints.
* <ul>
* <li>a leap-year must imply a year-length longer than a non leap-year.
* <li>a chronology that does not support the concept of a year must
* return false.
* <li>the correct result must be returned for all years within the
* valid range of years for the chronology
* </ul>
* Outside the range of valid years an implementation is free to return
* either a best guess or false.
* An implementation must not throw an exception, even if the year is
* outside the range of valid years.

Stephen



On 4 February 2015 at 19:08, Roger Riggs roger.ri...@oracle.com wrote:
 Hi,

 Clarifying that Chronology.isLeapYear is specified only within the range of
 the chronology
 makes it possible to maintain the invariants with the calendar computations
 and methods.
 Best effort isn't testable except in an implementation specific way and
 can't be relied on.

 The other two constraints are testable and use 'must' in their definitions.
 The new phrasing should clearly be either an exception or reinforce the
 value is
 specified only within the range of the chronology.

 How about:

 <li>except if the year is out of range the chronology should return a best
 effort guess, or false if there is no suitable guess

 Roger



 On 2/4/2015 11:18 AM, Stephen Colebourne wrote:

 The spec is pretty clear:
 the proleptic-year to check, not validated for range

 Adding an exception to the spec would cause requests to add exceptions
 to all other chronologies. Doing so, would be very negative to
 performance (it would require having a non-public copy of the logic
 elsewhere to avoid the range checking cost.

 Adding an exception to Hijrah only means that the implementation is
 doing something not permitted by the spec.

 The only sane choice here is to return false from Hijrah when out of
 range. As intended and as specced.

 Note that I don't believe that returning false will cause hardship in
 any actual user code, because other methods will constrain the year to
 be valid.

 If necessary, the following spec clarification could be added to the
 bullet points on Chronology:

 <li>if the year is out of range the chronology should return a best
 effort guess, or false if there is no suitable guess

 Stephen


 On 4 February 2015 at 16:05, Roger Riggs roger.ri...@oracle.com wrote:

 Hi Stephen,

 On 2/4/15 10:43 AM, Stephen Colebourne wrote:

 The java.time code generally takes a lenient approach to methods that
 return a boolean. For example, they tend to accept null inputs without
 throwing an exception.

 Seems like an odd design provision leading to some behavior that
 would be inconsistent with non-boolean methods.


 Right now, this patch makes a subclass implementation incompatible
 with the superclass spec. That cannot happen. Thus there are only two
 options:

 - change the superclass spec
 - return false from the subclass

 I favour the latter as being in line with the rest of the package and
 less disruptive to existing users (would a CCC even pass?)

 For a 'young' API with limited uptake, issues can be fixed early and in a
 major
 release there is more flexibility to clarify the specification.

 DTE is a runtime exception and can occur in a majority of time
 computations.
 In the case of HijrahChronolog.isLeapYear, the implementation currently
 throws a different RuntimeException and this would be a correction
 to the conventional exception.

 Roger


 Stephen


 On 4 February 2015 at 15:10, Roger Riggs roger.ri...@oracle.com wrote:

 Hi Stephen,

 I also indicated in the Jira comments that it is misleading and
 incorrect
 to
 return
 false when it is not known that a year is or is not a leap year. All of
 the
 other
 HijrahChronology computations throw DTE for invalid dates and the same
 may
 be
 true for other Chronologies.

 The assertion in Chronology.isLeapYear about not checking the range is
 too
 broad
 and should be qualified by the Chronology.

 Perhaps the proposed change should include a caveat in that method.

 Roger




 On 2/3/15 8:05 PM, Stephen Colebourne wrote:

 -1

 As I indicated on JIRA, I don't believe that this change meets the
 spec or intent of the definition on Chronology. That is specified to
 not throw any exceptions and to handle all years, valid or not.

 I don't foresee any significant issue where a year is not validated by
 this method. Years out of range should simply return false, again
 something that is within the spirit of the spec a chronology that
 does not support the concept of a year must return false.

 Stephen



 On 3 February 2015 at 20:56, Lance Andersen
 lance.ander...@oracle.com
 wrote:

 +1
 On Feb 3, 2015, at 3:45 PM, Roger Riggs roger.ri...@oracle.com
 wrote:

 Please review this specification clarification of the range of
 Hijrah
 calendar variants.
 The issue

Re: RFR 9: 8067800: Unexpected DateTimeException in the java.time.chrono.HijrahChronology.isLeapYear

2015-02-04 Thread Stephen Colebourne
Looks good overall.
My only comment is whether there should be a <p> tag after the </ul> and
before "Outside the range". The resulting javadoc looks like it needs
it, although I don't know what the OpenJDK house rule is for that
case.

Stephen

On 4 February 2015 at 21:17, Roger Riggs roger.ri...@oracle.com wrote:
 Hi Stephen,

 That covers the cases better.

 The updated javadoc is:
 http://cr.openjdk.java.net/~rriggs/xx/java/time/chrono/Chronology.html
 http://cr.openjdk.java.net/~rriggs/xx/java/time/chrono/HijrahChronology.html

 Webrev:
http://cr.openjdk.java.net/~rriggs/webrev-leap-year-8067800/

 Roger




 On 2/4/2015 3:42 PM, Stephen Colebourne wrote:

 I think this might be clearer:

 * Checks if the specified year is a leap year.
 * <p>
 * A leap-year is a year of a longer length than normal.
 * The exact meaning is determined by the chronology according to the
 * following constraints.
 * <ul>
 * <li>a leap-year must imply a year-length longer than a non leap-year.
 * <li>a chronology that does not support the concept of a year must
 * return false.
 * <li>the correct result must be returned for all years within the
 * valid range of years for the chronology
 * </ul>
 * Outside the range of valid years an implementation is free to return
 * either a best guess or false.
 * An implementation must not throw an exception, even if the year is
 * outside the range of valid years.

 Stephen



 On 4 February 2015 at 19:08, Roger Riggs roger.ri...@oracle.com wrote:

 Hi,

 Clarifying that Chronology.isLeapYear is specified only within the range
 of
 the chronology
 makes it possible to maintain the invariants with the calendar
 computations
 and methods.
 Best effort isn't testable except in an implementation specific way and
 can't be relied on.

 The other two constraints are testable and use 'must' in their
 definitions.
 The new phrasing should clearly be either an exception or reinforce the
 value is
 specified only within the range of the chronology.

 How about:

  <li>except if the year is out of range the chronology should return a
  best effort guess, or false if there is no suitable guess

 Roger



 On 2/4/2015 11:18 AM, Stephen Colebourne wrote:

 The spec is pretty clear:
 the proleptic-year to check, not validated for range

 Adding an exception to the spec would cause requests to add exceptions
 to all other chronologies. Doing so, would be very negative to
 performance (it would require having a non-public copy of the logic
 elsewhere to avoid the range checking cost.

 Adding an exception to Hijrah only means that the implementation is
 doing something not permitted by the spec.

 The only sane choice here is to return false from Hijrah when out of
 range. As intended and as specced.

 Note that I don't believe that returning false will cause hardship in
 any actual user code, because other methods will constrain the year to
 be valid.

 If necessary, the following spec clarification could be added to the
 bullet points on Chronology:

  <li>if the year is out of range the chronology should return a best
  effort guess, or false if there is no suitable guess

 Stephen


 On 4 February 2015 at 16:05, Roger Riggs roger.ri...@oracle.com wrote:

 Hi Stephen,

 On 2/4/15 10:43 AM, Stephen Colebourne wrote:

 The java.time code generally takes a lenient approach to methods that
 return a boolean. For example, they tend to accept null inputs without
 throwing an exception.

 Seems like an odd design provision leading to some behavior that
 would be inconsistent with non-boolean methods.


 Right now, this patch makes a subclass implementation incompatible
 with the superclass spec. That cannot happen. Thus there are only two
 options:

 - change the superclass spec
 - return false from the subclass

 I favour the latter as being in line with the rest of the package and
 less disruptive to existing users (would a CCC even pass?)

 For a 'young' API with limited uptake, issues can be fixed early and in
 a
 major
 release there is more flexibility to clarify the specification.

 DTE is a runtime exception and can occur in a majority of time
 computations.
 In the case of HijrahChronolog.isLeapYear, the implementation currently
 throws a different RuntimeException and this would be a correction
 to the conventional exception.

 Roger


 Stephen


 On 4 February 2015 at 15:10, Roger Riggs roger.ri...@oracle.com
 wrote:

 Hi Stephen,

 I also indicated in the Jira comments that it is misleading and
 incorrect
 to
 return
 false when it is not known that a year is or is not a leap year. All
 of
 the
 other
 HijrahChronology computations throw DTE for invalid dates and the
 same
 may
 be
 true for other Chronologies.

 The assertion in Chronology.isLeapYear about not checking the range
 is
 too
 broad
 and should be qualified by the Chronology.

 Perhaps the proposed change should include a caveat in that method.

 Roger




 On 2/3/15 8:05 PM, Stephen Colebourne wrote:

 -1

 As I indicated on JIRA, I don't believe

Re: RFR 9: 8067800: Unexpected DateTimeException in the java.time.chrono.HijrahChronology.isLeapYear

2015-02-04 Thread Stephen Colebourne
The java.time code generally takes a lenient approach to methods that
return a boolean. For example, they tend to accept null inputs without
throwing an exception.
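
For example, LocalDate.isSupported is specified to treat a null field as
simply unsupported rather than throw:

boolean supported = LocalDate.now().isSupported((TemporalField) null);   // false, no NPE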

Right now, this patch makes a subclass implementation incompatible
with the superclass spec. That cannot happen. Thus there are only two
options:

- change the superclass spec
- return false from the subclass

I favour the latter as being in line with the rest of the package and
less disruptive to existing users (would a CCC even pass?)

Stephen


On 4 February 2015 at 15:10, Roger Riggs roger.ri...@oracle.com wrote:
 Hi Stephen,

 I also indicated in the Jira comments that it is misleading and incorrect to
 return
 false when it is not known that a year is or is not a leap year. All of the
 other
 HijrahChronology computations throw DTE for invalid dates and the same may
 be
 true for other Chronologies.

 The assertion in Chronology.isLeapYear about not checking the range is too
 broad
 and should be qualified by the Chronology.

 Perhaps the proposed change should include a caveat in that method.

 Roger




 On 2/3/15 8:05 PM, Stephen Colebourne wrote:

 -1

 As I indicated on JIRA, I don't believe that this change meets the
 spec or intent of the definition on Chronology. That is specified to
 not throw any exceptions and to handle all years, valid or not.

 I don't foresee any significant issue where a year is not validated by
 this method. Years out of range should simply return false, again
 something that is within the spirit of the spec a chronology that
 does not support the concept of a year must return false.

 Stephen



 On 3 February 2015 at 20:56, Lance Andersen lance.ander...@oracle.com
 wrote:

 +1
 On Feb 3, 2015, at 3:45 PM, Roger Riggs roger.ri...@oracle.com wrote:

 Please review this specification clarification of the range of Hijrah
 calendar variants.
 The issue was exposed by a bug in the HijrahChronology.isLeapYear
 method.

 Webrev:
http://cr.openjdk.java.net/~rriggs/webrev-leap-year-8067800/

 Issue:
https://bugs.openjdk.java.net/browse/JDK-8067800

 A CCC may be needed.

 Thanks, Roger



 Lance Andersen| Principal Member of Technical Staff | +1.781.442.2037
 Oracle Java Engineering
 1 Network Drive
 Burlington, MA 01803
 lance.ander...@oracle.com






Re: RFR 8071670: java.util.Optional: please add a way to specify if-else behavior

2015-02-03 Thread Stephen Colebourne
Can't say I've used isPresent() much, as map()/flatMap()/orElse() take
care of most use cases.

What is an issue is that the primitive optional classes do not have
ofNullable(), filter(), map() or flatMap(). It seems odd to be adding
an additional new method to the primitive optional classes without
rounding out the missing methods to reach feature parity. I've heard
people complaining about the missing methods on more than one
occasion.

Stephen



On 3 February 2015 at 15:38, Paul Sandoz paul.san...@oracle.com wrote:
 Hi,

 http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8071670-Optional-ifPresentOrElse/webrev/

 Here is another tweak to Optional (and primitives) that has some weight:

 /**
  * If a value is present, perform the given action with the value,
  * otherwise perform the given empty-based action.
  *
  * @param action the action to be performed if a value is present
  * @param emptyAction the empty-based action to be performed if a value is
  * not present
  * @throws NullPointerException if a value is present and {@code action} is
  * null, or a value is not present and {@code emptyAction} is null.
  * @since 1.9
  */
 public void ifPresentOrElse(Consumer<? super T> action, Runnable emptyAction)
 {
 if (value != null) {
 action.accept(value);
 } else {
 emptyAction.run();
 }
 }

 (In hindsight we should have been consistent and thrown NPEs regardless of 
 the optional state. The exception throwing behaviour is consistent with 
 ifPresent.)

 Previously it was kind of awkward if one had two lambdas or method refs 
 handy, one had to do:

   o.ifPresent(v -> ...);
   if (!o.isPresent()) {
 ...
   }

 Or just:

   if (o.isPresent()) {
 ...
   } else {
  ...
   }
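
 With the new method both branches read as a single call, for example:

   o.ifPresentOrElse(
       v -> System.out.println("value: " + v),
       () -> System.out.println("empty"));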


 I also updated the language of the ifPresent methods to be more consistent 
 with Collection/Stream.forEach.

 Paul.


Re: RFR 9: 8067800: Unexpected DateTimeException in the java.time.chrono.HijrahChronology.isLeapYear

2015-02-03 Thread Stephen Colebourne
-1

As I indicated on JIRA, I don't believe that this change meets the
spec or intent of the definition on Chronology. That is specified to
not throw any exceptions and to handle all years, valid or not.

I don't foresee any significant issue where a year is not validated by
this method. Years out of range should simply return false, again
something that is within the spirit of the spec: "a chronology that
does not support the concept of a year must return false".

Stephen



On 3 February 2015 at 20:56, Lance Andersen lance.ander...@oracle.com wrote:
 +1
 On Feb 3, 2015, at 3:45 PM, Roger Riggs roger.ri...@oracle.com wrote:

 Please review this specification clarification of the range of Hijrah 
 calendar variants.
 The issue was exposed by a bug in the HijrahChronology.isLeapYear method.

 Webrev:
   http://cr.openjdk.java.net/~rriggs/webrev-leap-year-8067800/

 Issue:
   https://bugs.openjdk.java.net/browse/JDK-8067800

 A CCC may be needed.

 Thanks, Roger




 Lance Andersen| Principal Member of Technical Staff | +1.781.442.2037
 Oracle Java Engineering
 1 Network Drive
 Burlington, MA 01803
 lance.ander...@oracle.com





Re: RFR 8050819: Please add java.util.Stream.ofNullable(T object)

2015-01-28 Thread Stephen Colebourne
On 28 January 2015 at 17:09, Paul Sandoz paul.san...@oracle.com wrote:
 I guess having a method like this makes it easier for people to deal with 
 their possibly-null-producing code in a new streams world. Such 
 null-producing code already exists. I guess you could say that providing 
 this method encourages people to continue producing nulls, but everybody is 
 already doing this today, so


 I had the same view as Remi (mostly still do) with regards to the wrong 
 signal, but there is stuff out there that returns null and when one finds 
 oneself on the wrong side of that null it's rather annoying.

Whether a team understands the benefits of avoiding null or not is
often orthogonal to whether the APIs a team interacts with avoid null.
A method like this is useful at those boundaries and painful when
missing. As such, I support the addition.
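
For example, at such a boundary (legacyLookup() standing in for any API that may
return null), the proposed method lets the value feed straight into a pipeline:

  String value = legacyLookup(key);                  // may be null
  List<String> result = Stream.ofNullable(value)     // empty stream when value is null
      .map(String::trim)
      .collect(Collectors.toList());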

Stephen


Re: Explicit Serialization API and Security

2015-01-15 Thread Stephen Colebourne
On 15 January 2015 at 13:01, Peter Firmstone
peter.firmst...@zeus.net.au wrote:
 A third option that hasn't yet been considered would be to investigate an
 api that various already existing frameworks can implement providers for, so
 they no longer need to find creative ways to construct instances of java
 platform classes when unmarshalling.

Which is part of what this project is trying to build (for the JDK)
https://github.com/jodastephen/property-alliance

Stephen


Re: RFR 8068730: Increase the precision of the implementation of java.time.Clock.systemUTC()

2015-01-12 Thread Stephen Colebourne
On 12 January 2015 at 11:36, Daniel Fuchs daniel.fu...@oracle.com wrote:
 In java.time.Instant, a readObject() has been added. However, I don't
 believe this will ever be called because Instant has a writeReplace()
 method and so is not deserialized. (There may be some security related
 evil serialization stream reason why readObject() should exist, I
 can't say).

 I have not touched java.time.Instant in this patch.

Oops, I misread the webrev!

 I do believe that this change means that a new method should be added
 to Clock however:

 Ah - I see. I didn't think of that. It looks like a sensible
 RFE. Agreed.

I look forward to that webrev!

Stephen


Re: RFR 8068730: Increase the precision of the implementation of java.time.Clock.systemUTC()

2015-01-12 Thread Stephen Colebourne
On 12 January 2015 at 12:00, Daniel Fuchs daniel.fu...@oracle.com wrote:
 I'm not a big fan of the current name either.
 I would gladly rename it. I did think of
 windows_to_java_time_micros, but it actually returns tenth of
 micros - so it would be lying...
 Is there a good name for 'tenth of micros'?

.NET refers to them as ticks.

Stephen


Re: Explicit Serialization API and Security

2015-01-12 Thread Stephen Colebourne
On 12 January 2015 at 11:37, Chris Hegarty chris.hega...@oracle.com wrote:
 For clarity, I would like to describe how things currently work.

  1) Allocate a new instance of the deserialized type.
  2) Call the first non-Serializable types no-arg constructor
 ( may be j.l.Object ).
  3) For each type in the deserialized types hierarchy, starting
 with the top most ( closest to j.l.Object ),
3a) create objects representing all fields values for the type
[this step is recursive and will go to 1 until all
 non-primitive types have been created ]
3b)  [ holder for invariant validation ]
3c) assign objects to their respective members of the
containing instance

Just to note that for me, the problem is that (1) happens at all. IMO,
serialization should be separated entirely. Objects should only be
created via standard constructors or factory methods. That is the
direction that the static readObjectAndResolve() was in. Talk of
dedicated serialization constructors or more use of unsafe doesn't
really strike me as much of a way forward, except perhaps where
compatibility is concerned.

Stephen


Re: RFR 8068730: Increase the precision of the implementation of java.time.Clock.systemUTC()

2015-01-09 Thread Stephen Colebourne
Well that is a nice surprise ;-) And I think the implementation via an
adjustment is very sensible.

In java.time.Instant, a readObject() has been added. However, I don't
believe this will ever be called because Instant has a writeReplace()
method and so is not deserialized. (There may be some security related
evil serialization stream reason why readObject() should exist, I
can't say).

In java.util.Formatter, there should be no reason to retain use of the
MILLI_OF_SECOND, as any TemporalAccessor that can return
NANO_OF_SECOND should also return MILLI_OF_SECOND, however the spec is
not quite tight enough to guarantee that sadly. As such, I think the
catch will have to be retained.

TestFormatter has some commented out System.out statements that should
probably be removed.


I suspect that this change will break some user code, but it certainly
doesn't break the spec. As such, I think the change should go in.

I do believe that this change means that a new method should be added
to Clock however:

public static Clock tickMillis(ZoneId zone) {
return new TickClock(system(zone), NANOS_PER_MILLI);
}

While this can be achieved without a new method, the API would feel
slightly strange without it now better-than-milli clocks exist. I
recommend raising a separate RFE to track this.
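
For reference, a sketch of what caller code would look like with the proposed
method, next to the equivalent that is already possible today:

  Clock clock = Clock.tickMillis(ZoneOffset.UTC);    // proposed convenience method
  Clock same  = Clock.tick(Clock.system(ZoneOffset.UTC), Duration.ofMillis(1));
  Instant now = clock.instant();                     // always truncated to milliseconds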

Stephen



On 9 January 2015 at 16:56, Daniel Fuchs daniel.fu...@oracle.com wrote:
 Hi,

 Please find below a webrev for:

 8068730: Increase the precision of the implementation
  of java.time.Clock.systemUTC()
 https://bugs.openjdk.java.net/browse/JDK-8068730

 The java.time.Clock.system() method (and variants thereof) are
 specified to obtain a clock that returns the current instant
 using best available system clock. However the current
 implementation of the clock returned is based on
 System.currentTimeMillis() whereas the underlying native clock
 used by System.currentTimeMillis() has often a greater precision
 than milliseconds (for instance, on Linux, System.currentTimeMillis()
 is based on gettimeofday, which offers microseconds precision).

 This patch enhances the implementation of the
 java.time.Clock.SystemClock, so that it offer at least the
 same precision than the underlying clock available on the system.

 There is no change in the public API.

 http://cr.openjdk.java.net/~dfuchs/webrev_8068730/webrev.00/

 Some more details on the patch:

 native (hotspot) side:

  - adds a new method to the os class:
os::javaTimeSystemUTC(jlong seconds, jlong nanos)
which allows to get the time in the form of a number
of seconds and number of nanoseconds
(see os.hpp and various os_os.cpp files)

  - adds a JVM_GetNanoTimeAdjustment method, which takes
an offset (in seconds) as parameter and returns a
nano time adjustment compared to the offset.
Calls os::javaTimeSystemUTC to compute the adjustment
(see jvm.cpp)

 java (jdk) side:

  - adds a native sun.misc.VM.getNanoTimeAdjustment method
(which is bound to JVM_GetNanoTimeAdjustment)
(see VM.java and VM.c)

  - modifies java.time.Clock.SystemClock to call the new
sun.misc.VM.getNanoTimeAdjustment instead of
System.currentTimeMillis.

  - fixes java.util.Formatter - which wasn't expecting
greater than millisecond precision.

 testing:

  - A new test is added to test sun.misc.VM.getNanoTimeAdjustment
In particular it checks the edge cases.
  - New tests are added to TestClock_System.java to check for
the increased precision.
There is also a test for the edge cases there.
  - Some java.time tests where tripped by the increased precision,
and had to be modified to take that into account

 Note: comparing the new os::javaTimeSystemUTC with os::javaTimeMillis
   can help in reviewing the changes.

 best regards,

 -- daniel


Re: Explicit Serialization API and Security

2015-01-06 Thread Stephen Colebourne
On 6 January 2015 at 07:27, Peter Levart peter.lev...@gmail.com wrote:
 readObject() can be used on both ends of the spectrum or anywhere
 in-between. It can be used for explicit deserialization or just for
 validation of default deserialized state. Would separate validation method
 give us anything more or simplify things? readObject() can be used just as:

 private void readObject(ObjectInputStream in) throws IOException,
 ClassNotFoundException {
 in.defaultReadObject();
 ...
 just validation
 ...
 }
 Explicit deserialization and final fields don't play nicely together
 currently, yes. Sometimes data-racey-publication thread-safety is sacrificed
 just because explicit deserialization would complicate code too much by
 using reflection.

I've thought on a number of occasions that what I wanted from
serializable was a merger of readObject and readResolve

private Object readObjectAndResolve(ObjectInputStream in) throws IOException

Using such a method, the input could be read into local variables, and
then a normal constructor used to create the object itself. It avoids
the final fields problem (as its just reading to local variables) and
it handles the validation (by calling a regular constructor/factory).
It's also more efficient in many cases, as a common pattern today is to
have one object instance created by readObject (or serialization
internals) and then another returned by readResolve. If
readObjectAndResolve() can't handle cyclic graphs, I can live with
that.

Stephen


Re: Explicit Serialization API and Security

2015-01-06 Thread Stephen Colebourne
On 6 January 2015 at 10:25, Chris Hegarty chris.hega...@oracle.com wrote:
 On 6 Jan 2015, at 08:31, Stephen Colebourne scolebou...@joda.org wrote:
 I've thought on a number of occasions that what I wanted from
 serializable was a merger of readObject and readResolve

 private Object readObjectAndResolve(ObjectInputStream in) throws IOException

 This is an interesting idea.

 Just so I understand, readObject is called down the inheritance hierarchy and 
 can read, into locals, its classes serializable fields ( of course if can 
 access its super types fields that are already set ), where as just a single 
 readResolve call is made, if it is defined by or accessible (via inheritance) 
 by the given class.

I tend to work with shallow/no hierarchies so I've not thought too
much about the detail. I'd imagine you'd want to have
readObjectAndResolve() be a static method called only on the class
being deserialized and not superclasses. The method itself would be
responsible for any superclass deserialization. (Static because there
is no instance to call it on and it should have no access to instance
variables). It may be that the input should not be ObjectInputStream,
but some simpler but related type.

Stephen


Re: Explicit Serialization API and Security

2015-01-06 Thread Stephen Colebourne
On 6 January 2015 at 11:46, Chris Hegarty chris.hega...@oracle.com wrote:
 With shallow/no hierarchies, and others, the serialization proxy pattern 
 works quite well. The overhead of course being the creation of the proxy 
 itself, but this is typically a minimal container with just the state needed 
 to create the “real” object, and no behaviour. Whatever the input would have 
 to be to the “static factory” method, readObjectAndWriteReplace, then it 
 would probably have the same overhead, albeit minimal, as the proxy. Though 
 one less serializable type. This could work out nice, but it could not 
 support cyclic graphs ( which I think I could live with ), or private 
 superclass state ( which I think would be limiting ).

I used a shared proxy on 310. Small serialized form (short class name,
shared overhead if more than one type from same package, but no
sharing). Its fine, but quite verbose.

Oh, and to be clear, with readObjectAndResolve() you'd ban
readObject() and readResolve() from being in the same class.

On private superclass state, that state must be settable via the
constructor or setters of the superclass. Providing you control the
superclass and can change your class if the superclass changes, then I
don't see private superclass state as a problem.

private static Object readObjectAndResolve(in) {
  String name = in.readStr("name");
  int age = in.readInt("age");
  Address address = in.readObj("address");
  return new Person(name, age, address);
}

works fine even if name is in the superclass AbstractPerson and
age/address is in Person.

Stephen


Re: JDK 9 RFR of 4026465: Provide more byte array constructors for BigInteger

2014-12-30 Thread Stephen Colebourne
Just to note that an IndexOutOfBoundsException is mentioned in the
text (line 274, 350) but there is no matching @throws clause. The
IndexOutOfBoundsException could have a message added.

In general, I would expect an offset/length overload for most methods
that take an array, so this seems like a sensible request.
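
For example, with the overload proposed in the webrev (treating the exact
signature as an assumption until reviewed), callers could avoid a defensive copy:

  byte[] buf = ...;                                       // e.g. bytes read from a stream
  BigInteger value = new BigInteger(buf, 4, 16);          // proposed: 16 bytes from offset 4
  BigInteger same  = new BigInteger(Arrays.copyOfRange(buf, 4, 20));  // today's workaround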

Stephen




On 30 December 2014 at 01:18, Brian Burkhalter
brian.burkhal...@oracle.com wrote:
 Please review at your convenience.

 Issue:  https://bugs.openjdk.java.net/browse/JDK-4026465
 Patch:  http://cr.openjdk.java.net/~bpb/4026465/webrev.00/

 Should this issue not look to be worth addressing after all, then I propose 
 that it should be resolved as “Won’t Fix” rather than be left dangling. If on 
 the other hand the change appears reasonable, then a CCC request will be in 
 order.

 Thanks,

 Brian


Re: JEP 187: Serialization 2.0

2014-12-30 Thread Stephen Colebourne
I've checked the bug database [1], JEPs list [2] and Google but this
JEP appears to have disappeared. Can this be confirmed?
Stephen

[1] https://bugs.openjdk.java.net/issues/?jql=issuetype%20%3D%20JEP
[2] http://openjdk.java.net/jeps/0

On 13 February 2014 at 11:05, Florian Weimer fwei...@redhat.com wrote:
 On 01/23/2014 03:30 PM, Chris Hegarty wrote:

 On 22 Jan 2014, at 15:14, Florian Weimer fwei...@redhat.com wrote:

 On 01/22/2014 03:47 PM, Chris Hegarty wrote:

 On 22/01/14 13:57, Florian Weimer wrote:

 On 01/14/2014 01:26 AM, mark.reinh...@oracle.com wrote:

 Posted: http://openjdk.java.net/jeps/187


 There's another aspect of the current approach to serialization that is
 not mentioned: the type information does not come from the calling
 context, but exclusively from the input stream.


 Have you overlooked resolveClass [1], or are you looking for additional
 context?


 I mean something slightly different, so here's a concrete example: Assume
 we are deserializing an instance of a class with a String field.  The
 current framework deserializes whatever is in the input stream, and will
 bail out with a ClassCastException if the deserialized object turns out
 unusable.  Contrast this with an alternative approach that uses the
 knowledge that the field String and will refuse to read anything else from
 the input stream.


 Sorry, but I may still be missing your point. From my reading of the code,
 CCE will only be thrown if the object being deserialized contains a field
 with a type that does not match the type of the similarly named field in the
 class loaded by the deserializing context.


 This is from the method
 ObjectStreamClass.FieldReflector::setObjFieldValues(Object, Object[]),
 called indirectly from ObjectInputStream::readObject0(boolean):

 Object val = vals[offsets[i]];
 if (val != null &&
 !types[i - numPrimFields].isInstance(val))
 {
 Field f = fields[i].getField();
 throw new ClassCastException(
 "cannot assign instance of " +
 val.getClass().getName() + " to field " +
 f.getDeclaringClass().getName() + "." +
 f.getName() + " of type " +
 f.getType().getName() + " in instance of " +
   obj.getClass().getName());
 }

 So the exception is thrown after the object val has been read from the
 stream, which involves its construction and deserialization.  This means
 that you always expose the full serialization functionality, no matter with
 which types you start.

 Are you thinking specifically about corrupt/malicious streams, or evolving
 serializable types? Or a performance optimisation?


 Mostly the former.  Sorry, I should have said so.

 There are serialization frameworks which restrict the types of
 deserializable objects based on the type of the top-level object being
 requested.  Or in Java serialization terms, they won't load and instantiate
 arbitrary classes even if the programmer did not provide a customer,
 restrictive resolveClass() implementation.  I get that type-erased generics
 make it complicated to achieve this in current Java.  (But so is writing a
 correct resolveClass() because it breaks encapsulation.)

 Knowing in advance which types to expect in the stream would allow for a
 performance optimization, but something like that could be added to the
 current framework as well.


 --
 Florian Weimer / Red Hat Product Security Team


Re: Explicit Serialization API and Security

2014-12-30 Thread Stephen Colebourne
Just to note that there is some overlap between improving
serialization and the meta-model for Java (Beans v2.0) work I'm
looking at [1][2]. The key tool that a meta-model provides is to
enable class authors to select which pieces of their state form the
published properties. In most cases, the properties are the same data
as that needed for effective serialization. The Joda-Beans project [3]
does this and provides XML, JSON and binary serialization via
properties.

thanks
Stephen

[1] https://groups.google.com/forum/#!forum/beans-2-meta-model
[2] https://github.com/jodastephen/property-alliance
[3] http://www.joda.org/joda-beans/



On 28 December 2014 at 01:03, Peter Firmstone
peter.firmst...@zeus.net.au wrote:
 Is there any interest in developing an explicit API for Serialization?:

   1. Use a public constructor signature with a single argument,
  ReadSerialParameters (read only, writable only by the
  serialization framework) to recreate objects, subclasses (when
  permitted) call this first from their own constructor, they have
  an identical constructor signature.  ReadSerialParameters that are
  null may contain a circular reference and will be available after
  construction, see #3 below.
   2. Use a factory method (defined by an interface) with one parameter,
  WriteSerialParameters (write only, readable only by the
  serialization framework), this method can be overridden by
  subclasses (when permitted)
   3. For circular links, a public method (defined by an interface) that
  accepts one argument, ReadSerialParameters, this method is called
  after the constructor completes, subclasses overriding this should
  call the superclass method.  If this method is not called, an
  implementation, if known to possibly contain circular links,
  should check it has been fully initialized in each object method
  called.
   4. Retains compatibility with current serialization stream format.
   5. Each serial field has a name, calling class and object reference,
  similar to explicitly declaring private static final
  ObjectStreamField[] serialPersistentFields .

 Benefits:

   1. An object's internal form is not publicised.
   2. Each class in an object's heirarchy can use a static method to
  check invarients and throw an exception, prior to
  java.lang.Object's constructor being called, preventing
  construction and avoiding finalizer attacks.
   3. Final field friendly.
   4. Compatible with existing serial form.
   5. Flexible serial form evolution.
   6. All methods are public and explicitly defined.
   7. All class ProtectionDomain's exist in the current execution
  context, allowing an object to throw a SecurityException before
  construction.
   8. Less susceptible to deserialization attacks.

 Problems:

   1. Implementations cannot be package private or private.  Implicit
  serialization publicises internal form, any thoughts?

 Recommendations:

   1. Create a security check in the serialization framework for
  implicit serialization, allowing administrators to reduce their
  deserialization attack surface.
   2. For improved security, disallow classes implementing explicit
  serialization from having static state and static initializer
  blocks, only allow static methods, this would require complier and
  verifier changes.
   3. Alternative to #2, allow final static fields, but don't allow
  static initializer blocks or mutable static fields, similar to
  interfaces.

 Penny for your thoughts?

 Regards,

 Peter Firmstone.


Re: RFR: JDK-8033662 DateTimeFormatter parsing ignores withZone()

2014-03-24 Thread Stephen Colebourne
I'm happy with this updated patch.
Stephen



On 24 March 2014 18:24, Xueming Shen xueming.s...@oracle.com wrote:

  Hi Stephen,

 The webrev has been updated accordingly.

 http://cr.openjdk.java.net/~sherman/8033662/webrev/

 Thanks!
 -Sherman


 On 03/24/2014 08:59 AM, Stephen Colebourne wrote:

 I don't think this email was delivered to me. (And I think another from
 you also wasn't). The email is in the core-libs-dev archive though.

 I think the approach you changed it to is fine in principle although
 whether the tidy up should be in jdk8u is a different matter.

 In your changes I think that toParsed() should be renamed to
 toUnresolved(). And there still seem to be instance variables for
 effectiveZone and effectiveChrono (which I thought you were trying to
 remove).

 thanks
 Stephen





 On 24 March 2014 15:00, Xueming Shen xueming.s...@oracle.com wrote:


 Stephen,

 I commented on this one last week. I did not see the response, is it
 something worth considering? or
 you believe the original approach is the right way to go.

 Thanks!
 -Sherman

  Original Message 
 From: Xueming Shen xueming.s...@oracle.com
 Date: Thu, 20 Mar 2014 15:45:27 -0700
 To: core-libs-dev@openjdk.java.net
 Subject: Re: RFR: JDK-8033662 DateTimeFormatter parsing ignores withZone()

 Hi Stephen,

 Given the fact that the parser context is no longer public, the parsed from 
 the
 formatter is either unresolved or resolved, just wonder if we really need 
 those
 effective chrono and zone fields in Parsed. It appears perfect for me to 
 simply
 keep these info inside the parser context and return the unresolved and 
 resolved
 based on the formatter's request, for example
 http://cr.openjdk.java.net/~sherman/8033662/webrev2/

 -Sherman

 On 03/20/2014 11:24 AM, Stephen Colebourne wrote:
  Hi there,
  It would be great if I could get a review please.
 
  The patch is viewable in plain text at JIRA (for IP reasons):
  https://bugs.openjdk.java.net/secure/attachment/19216/ParseWithZone.patch

 
  The same patch is viewable in a nice format at GitHub
  https://gist.github.com/jodastephen/9505761
 
  This really needs to make 8u20 IMO, so I need to get it into jdk9 first
  thanks
  Stephen
 
 
 
  On 12 March 2014 12:29, Stephen Colebournescolebou...@joda.org 
  scolebou...@joda.org  wrote:
  This is a request for review of this bug:
  https://bugs.openjdk.java.net/browse/JDK-8033662
  and the duplicate:
  https://bugs.openjdk.java.net/browse/JDK-8033659
 
  The javadoc of the method java.time.format.DateTimeFormatter::withZone 
  says:
  If no zone has been parsed, then this override zone will be included
  in the result of the parse where it can be used to build instants and
  date-times.
  However, the implementation doesn't obey this.
 
  This is a very unfortunate bug that makes some date-time parsing a lot 
  harder.
 
 A second related bug in an edge case - not properly handling a
  ChronoZonedDateTime from TemporalField.resolve - is also tested for
  and fixed.
 
 
  Proposed patch:
  https://gist.github.com/jodastephen/9505761
  The patch includes no spec changes.
  The patch includes new and refactored TCK tests. The new tests for
  withZone() and withChronology() are based on the spec.
 
  I need a reviewer and a committer please.
  thanks









Re: RFR: JDK-8033662 DateTimeFormatter parsing ignores withZone()

2014-03-20 Thread Stephen Colebourne
Hi there,
It would be great if I could get a review please.

The patch is viewable in plain text at JIRA (for IP reasons):
https://bugs.openjdk.java.net/secure/attachment/19216/ParseWithZone.patch

The same patch is viewable in a nice format at GitHub
https://gist.github.com/jodastephen/9505761

This really needs to make 8u20 IMO, so I need to get it into jdk9 first
thanks
Stephen



On 12 March 2014 12:29, Stephen Colebourne scolebou...@joda.org wrote:
 This is a request for review of this bug:
 https://bugs.openjdk.java.net/browse/JDK-8033662
 and the duplicate:
 https://bugs.openjdk.java.net/browse/JDK-8033659

 The javadoc of the method java.time.format.DateTimeFormatter::withZone says:
 If no zone has been parsed, then this override zone will be included
 in the result of the parse where it can be used to build instants and
 date-times.
 However, the implementation doesn't obey this.

 This is a very unfortunate bug that makes some date-time parsing a lot harder.

  A second related bug in an edge case - not properly handling a
 ChronoZonedDateTime from TemporalField.resolve - is also tested for
 and fixed.


 Proposed patch:
 https://gist.github.com/jodastephen/9505761
 The patch includes no spec changes.
 The patch includes new and refactored TCK tests. The new tests for
 withZone() and withChronology() are based on the spec.

 I need a reviewer and a committer please.
 thanks


Re: RFR: JDK-8036785 ChronoLocalDate refers to generics that have been removed

2014-03-19 Thread Stephen Colebourne
At the time it was originally written I don't think @apiNote existed.
I agree it would be good to get the separation in there. However my
current concern is getting the change back to jdk8u, and it seems that
the simplest solution might be the best to achieve that.

Perhaps later, I might then consider rechecking all of java.time for
non-normative stuff to go in a separate commit.

Stephen



On 19 March 2014 10:53, Alan Bateman alan.bate...@oracle.com wrote:
 On 12/03/2014 10:52, Stephen Colebourne wrote:

 This is a request for review of this bug:
 https://bugs.openjdk.java.net/browse/JDK-8036785

 During development, ChronoLocalDate had a generic type parameter. It
 was removed during the development of JSR-310. The Javadoc was not
 updated to reflect the removal.

 The necessary change is to text that is essentially non-normative, and
 as such this change should be non-controversial.

 Have you considered using @apiNote here?

 -Alan.


Re: RFR: JDK-8035099 LocalTime with(MILLI_OF_DAY/MICRO_OF_DAY) incorrect

2014-03-17 Thread Stephen Colebourne
ping

On 12 March 2014 10:48, Stephen Colebourne scolebou...@joda.org wrote:
 This is a request for review of this bug:
 https://bugs.openjdk.java.net/browse/JDK-8035099

 The implementation for LocalTime with(MILLI_OF_DAY, n) and LocalTime
 with(MICRO_OF_DAY, n) fails to match the specification.

 LocalTime base = LocalTime.of(12, 30, 40, 987654321);

 LocalTime result = base.with(MILLI_OF_DAY, 0);
 expected: 00:00:00.0
 was: 00:00:00.000654321

 LocalTime result = base.with(MICRO_OF_DAY, 0);
 expected: 00:00:00.0
 was: 00:00:00.000000321

 The spec is clear in both cases - This completely replaces the time
 and is equivalent to using {@link #ofNanoOfDay(long)}, thus this is
 clearly a bug.


 Proposed patch:
 https://gist.github.com/jodastephen/9057131
 The patch includes no spec changes.
 The patch includes new TCK tests that are derived explicitly from the spec.

 I need a reviewer and a committer please.
 thanks
 Stephen


Re: RFR: JDK-8036818: DateTimeFormatter withResolverFields() fails to accept null

2014-03-17 Thread Stephen Colebourne
To confirm, this counts as a review yes?
Stephen


On 12 March 2014 14:27, Chris Hegarty chris.hega...@oracle.com wrote:
 The change look ok to me too.

 There is a change in behavior here, but I don't expect it to be surprising (
 no NPE where there once was ), so I think it should be fine for 8u-dev also.

 The TCK test changes however, may not be suitable for 8u. Though I'm not
 sure how these tests feed from the OpenJDK repo into the actual TCK.

 -Chris.

 On 12/03/14 13:54, roger riggs wrote:

 Looks fine,  (not a reviewer).

 I can sponsor the fix when reviewed.

 Thanks, Roger


 On 3/12/2014 6:45 AM, Stephen Colebourne wrote:

 This is a request for review of this bug:
 https://bugs.openjdk.java.net/browse/JDK-8036818

 The method DateTimeFormatter withResolverFields() is supposed to
  accept null. This is to allow a copy of the formatter to be returned
 reset to the original state of having no resolver fields. The docs
 say:
 @param resolverFields the new set of resolver fields, null if no fields
 which was written to indicate that resetting to null is permitted.

 The fix is to check for null and return a copy of the formatter. Note
 that there are two variations of the method which need fixing.

 Proposed patch:
 https://gist.github.com/jodastephen/9395197
 The patch includes no spec changes.
 The patch fixes the broken TCK tests.

 I need a reviewer and a committer please.
 thanks
 Stephen





Re: RFR: JDK-8033662 DateTimeFormatter parsing ignores withZone()

2014-03-17 Thread Stephen Colebourne
ping

On 12 March 2014 12:29, Stephen Colebourne scolebou...@joda.org wrote:
 This is a request for review of this bug:
 https://bugs.openjdk.java.net/browse/JDK-8033662
 and the duplicate:
 https://bugs.openjdk.java.net/browse/JDK-8033659

 The javadoc of the method java.time.format.DateTimeFormatter::withZone says:
 If no zone has been parsed, then this override zone will be included
 in the result of the parse where it can be used to build instants and
 date-times.
 However, the implementation doesn't obey this.

 This is a very unfortunate bug that makes some date-time parsing a lot harder.

  A second related bug in an edge case - not properly handling a
 ChronoZonedDateTime from TemporalField.resolve - is also tested for
 and fixed.


 Proposed patch:
 https://gist.github.com/jodastephen/9505761
 The patch includes no spec changes.
 The patch includes new and refactored TCK tests. The new tests for
 withZone() and withChronology() are based on the spec.

 I need a reviewer and a committer please.
 thanks


RFR: JDK-8036818: DateTimeFormatter withResolverFields() fails to accept null

2014-03-12 Thread Stephen Colebourne
This is a request for review of this bug:
https://bugs.openjdk.java.net/browse/JDK-8036818

The method DateTimeFormatter withResolverFields() is supposed to
 accept null. This is to allow a copy of the formatter to be returned
reset to the original state of having no resolver fields. The docs
say:
@param resolverFields the new set of resolver fields, null if no fields
which was written to indicate that resetting to null is permitted.

The fix is to check for null and return a copy of the formatter. Note
that there are two variations of the method which need fixing.
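
A small sketch of the intended behaviour (illustration only, not the patch
itself; the cast selects the varargs overload):

  DateTimeFormatter f = DateTimeFormatter.ISO_LOCAL_DATE
      .withResolverFields(ChronoField.YEAR, ChronoField.MONTH_OF_YEAR);
  // should return a copy with no resolver fields, not throw NullPointerException
  DateTimeFormatter reset = f.withResolverFields((TemporalField[]) null);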

Proposed patch:
https://gist.github.com/jodastephen/9395197
The patch includes no spec changes.
The patch fixes the broken TCK tests.

I need a reviewer and a committer please.
thanks
Stephen


RFR: JDK-8035099 LocalTime with(MILLI_OF_DAY/MICRO_OF_DAY) incorrect

2014-03-12 Thread Stephen Colebourne
This is a request for review of this bug:
https://bugs.openjdk.java.net/browse/JDK-8035099

The implementation for LocalTime with(MILLI_OF_DAY, n) and LocalTime
with(MICRO_OF_DAY, n) fails to match the specification.

LocalTime base = LocalTime.of(12, 30, 40, 987654321);

LocalTime result = base.with(MILLI_OF_DAY, 0);
expected: 00:00:00.0
was: 00:00:00.000654321

LocalTime result = base.with(MICRO_OF_DAY, 0);
expected: 00:00:00.0
was: 00:00:00.000000321

The spec is clear in both cases - This completely replaces the time
and is equivalent to using {@link #ofNanoOfDay(long)}, thus this is
clearly a bug.
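
In other words, per the quoted spec the following should hold (and does once
the patch is applied):

  LocalTime base = LocalTime.of(12, 30, 40, 987654321);
  assert base.with(ChronoField.MILLI_OF_DAY, 0).equals(LocalTime.ofNanoOfDay(0));
  assert base.with(ChronoField.MICRO_OF_DAY, 0).equals(LocalTime.ofNanoOfDay(0));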


Proposed patch:
https://gist.github.com/jodastephen/9057131
The patch includes no spec changes.
The patch includes new TCK tests that are derived explicitly from the spec.

I need a reviewer and a committer please.
thanks
Stephen


RFR: JDK-8036785 ChronoLocalDate refers to generics that have been removed

2014-03-12 Thread Stephen Colebourne
This is a request for review of this bug:
https://bugs.openjdk.java.net/browse/JDK-8036785

During development, ChronoLocalDate had a generic type parameter. It
was removed during the development of JSR-310. The Javadoc was not
updated to reflect the removal.

The necessary change is to text that is essentially non-normative, and
as such this change should be non-controversial.


Proposed patch:
https://gist.github.com/jodastephen/9395412
The patch includes non-normative spec changes.
The patch includes no code changes.

I need a reviewer and a committer please.
thanks
Stephen


RFR: JDK-8033662 DateTimeFormatter parsing ignores withZone()

2014-03-12 Thread Stephen Colebourne
This is a request for review of this bug:
https://bugs.openjdk.java.net/browse/JDK-8033662
and the duplicate:
https://bugs.openjdk.java.net/browse/JDK-8033659

The javadoc of the method java.time.format.DateTimeFormatter::withZone says:
If no zone has been parsed, then this override zone will be included
in the result of the parse where it can be used to build instants and
date-times.
However, the implementation doesn't obey this.

This is a very unfortunate bug that makes some date-time parsing a lot harder.
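
A minimal illustration of the behaviour the javadoc promises, and which the
patch restores (pattern and zone chosen arbitrarily):

  DateTimeFormatter f = DateTimeFormatter.ofPattern("uuuu-MM-dd HH:mm")
      .withZone(ZoneId.of("Europe/Paris"));
  // the text carries no zone, so the override zone should be used to build the result
  ZonedDateTime zdt = ZonedDateTime.parse("2014-03-12 11:30", f);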

A second related bug in an edge case - not properly handling a
ChronoZonedDateTime from TemporalField.resolve - is also tested for
and fixed.


Proposed patch:
https://gist.github.com/jodastephen/9505761
The patch includes no spec changes.
The patch includes new and refactored TCK tests. The new tests for
withZone() and withChronology() are based on the spec.

I need a reviewer and a committer please.
thanks


Re: RFR: JDK-8036818: DateTimeFormatter withResolverFields() fails to accept null

2014-03-12 Thread Stephen Colebourne
I am happy for the tests to be located in the test.java.time package
rather than the tck.java.time package when backported. However, the
original (broken) TCK tests will need to be removed in that scenario.

I also do not know exactly how the TCK tests feed into the actual TCK
at this point.

Stephen



On 12 March 2014 14:27, Chris Hegarty chris.hega...@oracle.com wrote:
 The change look ok to me too.

 There is a change in behavior here, but I don't expect it to be surprising (
 no NPE where there once was ), so I think it should be fine for 8u-dev also.

 The TCK test changes however, may not be suitable for 8u. Though I'm not
 sure how these tests feed from the OpenJDK repo into the actual TCK.

 -Chris.

 On 12/03/14 13:54, roger riggs wrote:

 Looks fine,  (not a reviewer).

 I can sponsor the fix when reviewed.

 Thanks, Roger


 On 3/12/2014 6:45 AM, Stephen Colebourne wrote:

 This is a request for review of this bug:
 https://bugs.openjdk.java.net/browse/JDK-8036818

 The method DateTimeFormatter withResolverFields() is supposed to
  accept null. This is to allow a copy of the formatter to be returned
 reset to the original state of having no resolver fields. The docs
 say:
 @param resolverFields the new set of resolver fields, null if no fields
 which was written to indicate that resetting to null is permitted.

 The fix is to check for null and return a copy of the formatter. Note
 that there are two variations of the method which need fixing.

 Proposed patch:
 https://gist.github.com/jodastephen/9395197
 The patch includes no spec changes.
 The patch fixes the broken TCK tests.

 I need a reviewer and a committer please.
 thanks
 Stephen





Re: Unsafe: removing the monitorEnter/monitorExit/tryMonitorEnter methods

2014-02-27 Thread Stephen Colebourne
On 26 February 2014 20:54, Martin Buchholz marti...@google.com wrote:
 It does seem that being able to tell whether a java object monitor is
 currently locked is useful for debugging and monitoring - there should be a
 way to do that.

Perhaps it would be useful to be able to expose a java object monitor
as an instance of Lock?

Lock lk = Lock.ofMonitor(object)
if (lk.tryLock()) {
  ...
}

Such a method feels like it would be a useful missing link between
synchronized and locks.

Stephen


Re: Type of Class

2014-02-21 Thread Stephen Colebourne
On 21 February 2014 08:14, David Holmes david.hol...@oracle.com wrote:
 Would it be reasonable to add the following methods:
 - isNestedClass()
 This would be !isTopLevelClass() but otherwise
 isAnonymousClass() || isLocalClass() || isMemberClass()
 - isInnerClass()
  isAnonymousClass() || isLocalClass() || (isMemberClass() && !static)
 - isTopLevelClass()
 ! (isAnonymousClass() || isLocalClass() || isMemberClass())
 and for completeness:
  - isStaticNestedClass() == isMemberClass() && static

I think the next step is a JIRA unless anyone objects.
Stephen


Re: Type of Class

2014-02-21 Thread Stephen Colebourne
But is javax.lang.model a plan for JDK 9?

To give some idea of the pain in this area, here is the code I ended up with:

if (type.isInterface() || type.isAnnotation() || type.isPrimitive() ||
type.isArray() || type.isEnum() || type.isSynthetic() ||
Modifier.isAbstract(type.getModifiers()) || type.isAnonymousClass() ||
type.isLocalClass() || (type.isMemberClass() &&
Modifier.isStatic(type.getModifiers()) == false)) {
  // error
}

It's pretty laughable really. I just wanted to ask "is it a normal class?".

FWIW, the KindType does not distinguish between static and inner
member classes either, which was the part that took most of my time to
figure out.

Stephen



On 21 February 2014 15:40, Brian Goetz brian.go...@oracle.com wrote:
 I understand why you want this, though I think you'll find that there are 
 still thousands of other things missing from reflection.

 In the Java 1.0 days, the difference between the Java language and the class 
 file was pretty small.  So reflection served as both the class file (VM) 
 reflection and language reflection mechanism.  But, over time, the gap has 
 grown wider.  We've made the decision (though not always consistently 
 applied) that reflection is about serving up the class file information to 
 Java, not about answering questions about the Java language.  So, for 
 example, it can't tell that one method is a bridge for another, or easily 
 answer questions about inheritance or overriding.  Similarly, the issues 
 raised here are about gaps between the class file representation of a class 
 and the language level model.

 Historically we have added some things to reflection to fill in these gaps.  
 However, our current strategy is to expose this through javax.lang.model, 
 which is designed to reflect the langauge-level view of the world, and this 
 is what users really want anyway.  Currently the only implementation of 
 javax.lang.model that is available is in the compiler, exposed to annotation 
 processors, but we have a plan to expose one backed by core reflection which 
 is a more sensible way to express the information you are looking for.


 On Feb 21, 2014, at 2:27 AM, Stephen Colebourne scolebou...@joda.org wrote:

 On 21 February 2014 08:14, David Holmes david.hol...@oracle.com wrote:
 Would it be reasonable to add the following methods:
 - isNestedClass()
 This would be !isTopLevelClass() but otherwise
 isAnonymousClass() || isLocalClass() || isMemberClass()
 - isInnerClass()
  isAnonymousClass() || isLocalClass() || (isMemberClass() && !static)
 - isTopLevelClass()
 ! (isAnonymousClass() || isLocalClass() || isMemberClass())
 and for completeness:
  - isStaticNestedClass() == isMemberClass() && static

 I think the next step is a JIRA unless anyone objects.
 Stephen



Type of Class

2014-02-20 Thread Stephen Colebourne
In JDK 5, three methods were added to java.lang.Class [1]
- isAnonymousClass()
- isLocalClass()
- isMemberClass()

Unfortunately, these do not cover the complete range of possible types of class.

Would it be reasonable to add the following methods:
- isNestedClass()
- isInnerClass()
- isTopLevelClass()

While JVM/spec experts may know the difference, it is a surprisingly
tricky area. For example, the current isMemberClass() method could do
with much better Javadoc to explain what a member class actually is
(as it seemed hard to google).

FWIW, I was just trying to tell the difference between a nested class
and an inner class, in order to determine which classes can be
instantiated without reference to a surrounding object. The answer
seems to be (cls.isMemberClass() &&
Modifier.isStatic(cls.getModifiers())) which is not the most obvious
code.


In addition, I suffered from the absence of an isNormalClass() -
probably a better name for this.

Currently, you can determine if a class is an interface, annotation,
primitive or array, leaving the normal case as a quadruple negative.
This leaves such a user-written method vulnerable to any new type of
class that gets added in a future JDK:

boolean isNormalClass(Class<?> cls) {
  return !cls.isInterface() && !cls.isAnnotation() &&
!cls.isPrimitive() && !cls.isArray();
}
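
For what it is worth, the proposed checks can be sketched today as plain static
helpers over the existing Class/Modifier API (same semantics as the definitions
discussed above):

  static boolean isNestedClass(Class<?> cls) {
    return cls.isAnonymousClass() || cls.isLocalClass() || cls.isMemberClass();
  }
  static boolean isInnerClass(Class<?> cls) {
    return cls.isAnonymousClass() || cls.isLocalClass()
        || (cls.isMemberClass() && !Modifier.isStatic(cls.getModifiers()));
  }
  static boolean isTopLevelClass(Class<?> cls) {
    return !isNestedClass(cls);
  }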

Stephen
[1] https://blogs.oracle.com/darcy/entry/nested_inner_member_and_top


RFR: JDK-8034906 Fix typos, errors and Javadoc differences in java.time

2014-02-19 Thread Stephen Colebourne
A set of minor wording fixes in Javadoc:

https://gist.github.com/jodastephen/8984256

Comments welcome.
Stephen


Re: RFR: JDK-8034906 Fix typos, errors and Javadoc differences in java.time

2014-02-19 Thread Stephen Colebourne
On 19 February 2014 11:39, Alan Bateman alan.bate...@oracle.com wrote:
 On 19/02/2014 11:08, David Holmes wrote:

 Hi Stephen,

 This could be construed as a spec-change, even if a typo :(

 - * which are too large to fit in an {@code int} and throw a {@code
 DateTimeException}.
 + * which are too large to fit in an {@code int} and throw an {@code
 UnsupportedTemporalTypeException}.

 David

 From what I can tell, this is LocalDate.get(TemporalField) and the @throws
 already declares the specific exception for the out of range case. So I
 think this one is okay and seems to just adjusting the method description so
 that it matches declaration (is that right?).

The throws clause says "UnsupportedTemporalTypeException - if the
field is not supported or the range of values exceeds an int", so I
think this change can be justified under that. The code follows the
throws clause.

UnsupportedTemporalTypeException extends DateTimeException, so the
description is not broken, just less specific than it could be, but
the throws clause is correctly specific.

Stephen


Re: RFR: JDK-8034906 Fix typos, errors and Javadoc differences in java.time

2014-02-19 Thread Stephen Colebourne
Patch updated:
https://gist.github.com/jodastephen/8984256
with differences:
https://gist.github.com/jodastephen/8984256/revisions

thanks for the spots,
Stephen


On 19 February 2014 11:58, Ivan Gerasimov ivan.gerasi...@oracle.com wrote:
 Hi Stephen!

 May I ask you to add some more typo fixes to your patch?

 Sincerely yours,
 Ivan


 diff a/src/share/classes/java/time/OffsetTime.java
 b/src/share/classes/java/time/OffsetTime.java
 - * This will result in the old and new objects representing the same
 instant an an implied day.
 + * This will result in the old and new objects representing the same
 instant on an implied day.
 diff a/src/share/classes/java/time/ZonedDateTime.java
 b/src/share/classes/java/time/ZonedDateTime.java
 - * the the date-time and offset of the zoned date-time will differ from
 those specified.
 + * the date-time and offset of the zoned date-time will differ from
 those specified.
 diff a/src/share/classes/java/time/chrono/Chronology.java
 b/src/share/classes/java/time/chrono/Chronology.java
 - * The default implementation behaves as the the formatter was used to
 + * The default implementation behaves as the formatter was used to
 diff a/src/share/classes/java/time/format/DateTimeFormatterBuilder.java
 b/src/share/classes/java/time/format/DateTimeFormatterBuilder.java
 - * Note that this method is is identical to {@code appendZoneId()}
 except
 + * Note that this method is identical to {@code appendZoneId()} except
 - * Note that this method is is identical to {@code appendZoneId()}
 except
 + * Note that this method is identical to {@code appendZoneId()} except
 diff a/src/share/classes/java/time/temporal/TemporalField.java
 b/src/share/classes/java/time/temporal/TemporalField.java
 - * If this returns false, the the temporal cannot be queried for this
 field.
 + * If this returns false, the temporal cannot be queried for this
 field.
 diff a/src/share/classes/java/time/zone/ZoneOffsetTransition.java
 b/src/share/classes/java/time/zone/ZoneOffsetTransition.java
 - * Gaps occur where there are local date-times that simply do not not
 exist.
 + * Gaps occur where there are local date-times that simply do not exist.
 - * Gaps occur where there are local date-times that simply do not not
 exist.
 + * Gaps occur where there are local date-times that simply do not
 exist.



Re: RFR java.time cleanup of javadoc and messages

2014-02-03 Thread Stephen Colebourne
+1 by me
Stephen


On 3 February 2014 22:43, roger riggs roger.ri...@oracle.com wrote:
 Please review this group of java.time updates:

 8032749  https://bugs.openjdk.java.net/browse/JDK-8032749: Typo in
 java.time.Clock
 8032888  https://bugs.openjdk.java.net/browse/JDK-8032888: Error message
 typo in TemporalAccessor
 8032558  https://bugs.openjdk.java.net/browse/JDK-8032558: Instant spec
 includes incorrect assertion wrt valid range
 8032494  https://bugs.openjdk.java.net/browse/JDK-8032494:
 DateTimeFormatter spec includes irrelevant detail on parsing pattern

 Webrev:
   http://cr.openjdk.java.net/~rriggs/webrev-typos-8032749/

 Thanks, Roger



Re: RFR java.time serialization

2013-12-16 Thread Stephen Colebourne
Looks fine AFAICT
Stephen
On 17 Dec 2013 09:23, roger riggs roger.ri...@oracle.com wrote:

 Hi Sherman,

 Thanks for the comments, corrected and updated the webrev:
   http://cr.openjdk.java.net/~rriggs/webrev-time-serialization/

 On 12/16/2013 1:00 PM, Xueming Shen wrote:

 On 12/16/2013 09:02 AM, roger riggs wrote:

 Please review these changes to java.time serialization.
 The format of the serialized data is unchanged; deserialization
 uses readObject instead of readResolve to flag invalid values.

 Webrev:
http://cr.openjdk.java.net/~rriggs/webrev-time-serialization/

 Thanks, Roger

 (1) test/java/time/tck/java/time/serial/TCKLocalDateSerialization.java
-- import not necessary?

 removed extra imports


 (2) AbstractTCKTest.assertNotSerializable()

   -- the problem is that this test fails with current readResolve()
   implementation, as we chatted last Friday. So it might be hard to
   say it's a bulletproof test. But I guess it might not be worth to
   a proof there would be a loophole here if only define the readResolve()
   to throw the IOE.

 Thanks, Yes, the test(s) should be more liberal and catch Exception instead
 of only InvalidObjectException since any exception during deserialization
 terminates serialization.

 Thanks, Roger




Re: RFR java.time.chrono equals and hashCode

2013-12-16 Thread Stephen Colebourne
Looks good to me
Stephen
On 17 Dec 2013 06:48, roger riggs roger.ri...@oracle.com wrote:

 Please review this change to explicitly document the behavior of
 equals and hashCode for java.time.chrono concrete types.
 The behavior is not changed.

 (They had been documented but the javadoc was not inherited during an
 earlier refactoring).

 Webrev:
 http://cr.openjdk.java.net/~rriggs/webrev-time-chrono-equals-8029909/

 Thanks, Roger




Re: Poor Javadoc of Map default methods [Re: RFR: 8029055: Map.merge must refuse null values]

2013-12-06 Thread Stephen Colebourne
See https://bugs.openjdk.java.net/browse/JDK-8029676

Stephen

On 26 November 2013 13:20, Stephen Colebourne scolebou...@joda.org wrote:
 I took a quick look, but jumped back in horror at the start of the
 Javadoc for the new methods in Map. A Javadoc description should start
 with the positive, not the negative. I had to read the code to figure
 out what they are supposed to do. Here are the four poor Javadocs and
 some proposed enhacements:

 merge()
 If the specified key is not already associated with ...
 should be
 Updates the value for an existing key merging the existing value with
 the specified value.
 p
 more detailed user-level explanation...
 p
 now the more spec/edge case part...
("If" is a terrible way to start the docs. Say what it does!)


 computeIfAbsent()
 If the specified key is not already associated with a value (or is mapped...
 should be
 Associates the key with a value computed on demand if the key is not
 currently present.
 p
 more detailed user-level explanation...
 p
 now the more spec/edge case part...


 computeIfPresent()
 If the value for the specified key is present and non-null,
 should be
 Updates the value for an existing key.
 p
 more detailed user-level explanation...
 p
 now the more spec/edge case part...


 compute()
 Attempts to compute a mapping for the specified key...
 should be
 Updates the value for a key, adding a new mapping if necessary.
 p
 more detailed user-level explanation...
 p
 now the more spec/edge case part...
 (attempts is negative, not positive)

 Similar problems seem to exist for putIfAbsent.

 also...

 I have to say that I don't overly like the method name.
 map.merge(...) sounds like a bulk operation affecting the whole map
 (ie. like putAll). Assuming that the method cannot be renamed, it
 could be improved just by changing the method parameter names.:

 default V merge(K key, V valueToMerge,
 BiFunction<? super V, ? super V, ? extends V> mergeFunction) {

 valueToMerge implies that action will occur to the parameter.
 mergeFunction is much more meaningful than remappingFunction.

 Similar parameter name change, mappingFunction - valueCreationFunction
 default V computeIfAbsent(
   K key,
   Function<? super K, ? extends V> valueCreationFunction) {

 Similar parameter name change, remappingFunction - valueUpdateFunction
 default V computeIfPresent(
   K key,
   BiFunction<? super K, ? super V, ? extends V> valueUpdateFunction) {

 Similar parameter name change, remappingFunction - valueUpdateFunction
 default V compute(
   K key,
   BiFunction<? super K, ? super V, ? extends V> valueUpdateFunction) {


 also...
 Can you use HashSet::new in the example code?


 In general, all these new methods are written in a spec style, rather
 than a user friendly style, and I'm afraid I don't think thats
 sufficient.
 Stephen



 On 26 November 2013 04:32, Mike Duigou mike.dui...@oracle.com wrote:

 On Nov 24 2013, at 16:31 , David Holmes david.hol...@oracle.com wrote:

 Hi Mike,

 There is still no clear spec for what should happen if the param value is 
 null.

  I feel very uncomfortable with the status quo of null being ignored, used 
 for a sentinel and also as value. The relations between null and values in 
 this method are just too complicated.

 Currently:

 - The provided value may be either null or non-null. Is null a legal value? 
 It depends upon:
 - Is there an existing value?
 - Does the Map allow null values?
 - Does the function allow null values?
 - Existing null values are treated as absent.
 - If a null value is passed should we remove this mapping or add it 
 to the map?
   - null might not be accepted by the map
   - The associated value would still be regarded as absent for 
 future operations.
 - The function may return null to signal remove.

 In particular I dislike adding a null entry to the map if there is no 
 current mapping (or mapped to null). It seems like it should be invariant 
 whether we end up with an associated value. If null is used to signify 
 remove then map.contains(key) will return variable results depending upon 
 the initial state. Having the behaviour vary based upon whether the map 
 allows null values would be even worse.

 So I would like to suggest that we require value to be non-null. I have 
 provisionally updated the spec and impls accordingly.

 The parenthesized part is wrong.

 I think that's overzealous copying from compute(). I have removed it.


 Also you have changed the method implementation not just the implDoc so the 
 bug synopsis is somewhat misleading.

 I will correct this. More changes were made than I originally expected. New 
 synopsis will be Map.merge implementations should refuse null value param

 I have updated the webrev.

 http://cr.openjdk.java.net/~mduigou/JDK-8029055/1/webrev/

 Thanks,

 Mike


 Thanks,
 David

 On 23/11/2013 10:25 AM, Mike Duigou wrote:
 We'll be using https

Re: RFR: Add value-type notice to java.time classes

2013-12-05 Thread Stephen Colebourne
Pretty much looks good.

However, I notice that the equals, hashCode and toString methods of
the 4 calendar specific date classes are inherited rather than
written. Without additional spec, I'm not sure that they can claim to
be value types. Adding the 12 Javadocs should be relatively easy.

JapaneseEra should also be a value type, but it needs Javadoc and an
implementation of equals and hashCode that check state.

The chronology classes are potential value types, but as they are
generally singletons, perhaps it matters less.

Stephen



On 4 December 2013 21:08, roger riggs roger.ri...@oracle.com wrote:
 Corrected.

 Thanks, Roger


 On 12/4/2013 3:42 PM, Xueming Shen wrote:

 On 12/04/2013 12:00 PM, roger riggs wrote:

 Following the lead of the notices added to Optional, the java.time value
 based
 classes should include the same notice.

 Please review:
 http://cr.openjdk.java.net/~rriggs/webrev-time-valuetype-8029551/

 Thanks, Roger




 http://cr.openjdk.java.net/~rriggs/webrev-time-valuetype-8029551/src/share/classes/java/time/ZonedDateTime.java.sdiff.html

 LD - ZDT




Re: There needs to be support for java.time in java.text.MessageFormat

2013-12-02 Thread Stephen Colebourne
On 2 December 2013 19:47, Xueming Shen xueming.s...@oracle.com wrote:
 That said, it might be desired to add the support of the JSR310 date/time
 for the DateFormat, but it will not make jdk8.

FWIW, I think this should have been fixed in jdk 8.
https://bugs.openjdk.java.net/browse/JDK-8016743

Does the spec allow it to be fixed in a jdk 8 update release?

Stephen


Re: Poor Javadoc of Map default methods [Re: RFR: 8029055: Map.merge must refuse null values]

2013-11-27 Thread Stephen Colebourne
On 26 November 2013 17:35, Martin Buchholz marti...@google.com wrote:
 I haven't looked in depth, but I agree with Stephen's analysis.  This API
 and its javadoc needs work.
 E.g. It's not clear that the purpose of Map.compute is to *update* the
 mapping for key in the map.

I actually felt that the names of all four methods felt wrong. compute
and merge seem like unfortunate choices.

 Instead of The default implementation makes no guarantees about
 synchronization or atomicity properties of this method.  we should boldly
 say that the default implementation is *not* atomic, even if the underlying
 map is.

Saying that the default implementation is not atomic sounds uncontroversial.

Stephen


Poor Javadoc of Map default methods [Re: RFR: 8029055: Map.merge must refuse null values]

2013-11-26 Thread Stephen Colebourne
I took a quick look, but jumped back in horror at the start of the
Javadoc for the new methods in Map. A Javadoc description should start
with the positive, not the negative. I had to read the code to figure
out what they are supposed to do. Here are the four poor Javadocs and
some proposed enhacements:

merge()
If the specified key is not already associated with ...
should be
Updates the value for an existing key merging the existing value with
the specified value.
p
more detailed user-level explanation...
p
now the more spec/edge case part...
("If" is a terrible way to start the docs. Say what it does!)


computeIfAbsent()
If the specified key is not already associated with a value (or is mapped...
should be
Associates the key with a value computed on demand if the key is not
currently present.
p
more detailed user-level explanation...
p
now the more spec/edge case part...


computeIfPresent()
If the value for the specified key is present and non-null,
should be
Updates the value for an existing key.
p
more detailed user-level explanation...
p
now the more spec/edge case part...


compute()
Attempts to compute a mapping for the specified key...
should be
Updates the value for a key, adding a new mapping if necessary.
p
more detailed user-level explanation...
p
now the more spec/edge case part...
(attempts is negative, not positive)

Similar problems seem to exist for putIfAbsent.

also...

I have to say that I don't overly like the method name.
map.merge(...) sounds like a bulk operation affecting the whole map
(ie. like putAll). Assuming that the method cannot be renamed, it
could be improved just by changing the method parameter names.:

default V merge(K key, V valueToMerge,
BiFunction<? super V, ? super V, ? extends V> mergeFunction) {

valueToMerge implies that action will occur to the parameter.
mergeFunction is much more meaningful than remappingFunction.

Similar parameter name change, mappingFunction -> valueCreationFunction
default V computeIfAbsent(
  K key,
  Function<? super K, ? extends V> valueCreationFunction) {

Similar parameter name change, remappingFunction -> valueUpdateFunction
default V computeIfPresent(
  K key,
  BiFunction<? super K, ? super V, ? extends V> valueUpdateFunction) {

Similar parameter name change, remappingFunction -> valueUpdateFunction
default V compute(
  K key,
  BiFunction<? super K, ? super V, ? extends V> valueUpdateFunction) {


also...
Can you use HashSet::new in the example code?
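
For reference, the multimap-style usage I would expect the example to show
(a sketch assuming java.util.HashMap/HashSet; note that HashSet::new only
compiles here if the key type happens to match a HashSet constructor, which
a String key does not, so the lambda form may be unavoidable):

Map<String, Set<String>> index = new HashMap<>();
index.computeIfAbsent("animals", k -> new HashSet<>()).add("cat");
index.computeIfAbsent("animals", k -> new HashSet<>()).add("dog");
// index now maps "animals" to a set containing cat and dog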


In general, all these new methods are written in a spec style, rather
than a user-friendly style, and I'm afraid I don't think that's
sufficient.
Stephen



On 26 November 2013 04:32, Mike Duigou mike.dui...@oracle.com wrote:

 On Nov 24 2013, at 16:31 , David Holmes david.hol...@oracle.com wrote:

 Hi Mike,

 There is still no clear spec for what should happen if the param value is 
 null.

 I feel very uncomfortable with the status quo of null being ignored, used as 
 a sentinel and also as a value. The relations between null and values in this 
 method are just too complicated.

 Currently:

 - The provided value may be either null or non-null. Is null a legal value? 
 It depends upon:
 - Is there an existing value?
 - Does the Map allow null values?
 - Does the function allow null values?
 - Existing null values are treated as absent.
 - If a null value is passed should we remove this mapping or add it 
 to the map?
   - null might not be accepted by the map
   - The associated value would still be regarded as absent for 
 future operations.
 - The function may return null to signal remove.

 In particular I dislike adding a null entry to the map if there is no current 
 mapping (or mapped to null). It seems like it should be invariant whether we 
 end up with an associated value. If null is used to signify remove then 
 map.contains(key) will return variable results depending upon the initial 
 state. Having the behaviour vary based upon whether the map allows null 
 values would be even worse.

 So I would like to suggest that we require value to be non-null. I have 
 provisionally updated the spec and impls accordingly.

 The parenthesized part is wrong.

 I think that's overzealous copying from compute(). I have removed it.


 Also you have changed the method implementation not just the implDoc so the 
 bug synopsis is somewhat misleading.

 I will correct this. More changes were made than I originally expected. New 
 synopsis will be Map.merge implementations should refuse null value param

 I have updated the webrev.

 http://cr.openjdk.java.net/~mduigou/JDK-8029055/1/webrev/

 Thanks,

 Mike


 Thanks,
 David

 On 23/11/2013 10:25 AM, Mike Duigou wrote:
 We'll be using https://bugs.openjdk.java.net/browse/JDK-8029055 for this 
 issue.

 I've posted a webrev here:

 http://cr.openjdk.java.net/~mduigou/JDK-8029055/0/webrev/

 There is an identical change in ConcurrentMap's merge().

 Mike

 

Re: RFR: 8029055: Map.merge must refuse null values

2013-11-26 Thread Stephen Colebourne
See the new thread for the general Javadoc issues. I'll focus on null here.

Given a map {Key -> null} I find the notion that putIfAbsent() or
computeIfAbsent() ignore the existing mapping just plain wrong. While
I can rationalise it (just) it is a horrible reworking of the meaning
of absent.

Similarly, for computeIfPresent(), my expectation is that the
mapping is present, and that the null should be an input to the
function.

Yes, I get that nulls in collections are a pain, but trying to
redefine the meaning of absent is more of a pain. I'm aware that
ConcurrentMap makes things interesting, but these two just look wrong.


So, my rules would be
- if an existing mapping has a null value, then the mapping is
present and not absent (only affects methods with those words in
their name)
- none of these methods should try to add/update a mapping with a null value
- null returned from a function means remove (or is invalid)
- a null value parameter is invalid
- merge where the existing mapping has a null value should not pass
the null to the function
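
To make the computeIfPresent rule concrete, a rough sketch of the semantics
I am proposing, written as a standalone helper (this is an illustration of
the rules above, not the current default implementation):

import java.util.Map;
import java.util.function.BiFunction;

class ProposedMapSemantics {
    // A key mapped to null still counts as present; the null is passed to the function.
    static <K, V> V computeIfPresent(Map<K, V> map, K key,
            BiFunction<? super K, ? super V, ? extends V> fn) {
        if (!map.containsKey(key)) {
            return null;                              // truly absent: do nothing
        }
        V newValue = fn.apply(key, map.get(key));     // existing value, possibly null
        if (newValue == null) {
            map.remove(key);                          // null result means remove
        } else {
            map.put(key, newValue);                   // never stores a null value
        }
        return newValue;
    }
}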

Stephen



On 26 November 2013 04:32, Mike Duigou mike.dui...@oracle.com wrote:

 On Nov 24 2013, at 16:31 , David Holmes david.hol...@oracle.com wrote:

 Hi Mike,

 There is still no clear spec for what should happen if the param value is 
 null.

 I feel very uncomfortable with the status quo of null being ignored, used as 
 a sentinel and also as a value. The relations between null and values in this 
 method are just too complicated.

 Currently:

 - The provided value may be either null or non-null. Is null a legal value? 
 It depends upon:
 - Is there an existing value?
 - Does the Map allow null values?
 - Does the function allow null values?
 - Existing null values are treated as absent.
 - If a null value is passed should we remove this mapping or add it 
 to the map?
   - null might not be accepted by the map
   - The associated value would still be regarded as absent for 
 future operations.
 - The function may return null to signal remove.

 In particular I dislike adding a null entry to the map if there is no current 
 mapping (or mapped to null). It seems like it should be invariant whether we 
 end up with an associated value. If null is used to signify remove then 
 map.contains(key) will return variable results depending upon the initial 
 state. Having the behaviour vary based upon whether the map allows null 
 values would be even worse.

 So I would like to suggest that we require value to be non-null. I have 
 provisionally updated the spec and impls accordingly.

 The parenthesized part is wrong.

 I think that's overzealous copying from compute(). I have removed it.


 Also you have changed the method implementation not just the implDoc so the 
 bug synopsis is somewhat misleading.

 I will correct this. More changes were made than I originally expected. New 
 synopsis will be Map.merge implementations should refuse null value param

 I have updated the webrev.

 http://cr.openjdk.java.net/~mduigou/JDK-8029055/1/webrev/

 Thanks,

 Mike


 Thanks,
 David

 On 23/11/2013 10:25 AM, Mike Duigou wrote:
 We'll be using https://bugs.openjdk.java.net/browse/JDK-8029055 for this 
 issue.

 I've posted a webrev here:

 http://cr.openjdk.java.net/~mduigou/JDK-8029055/0/webrev/

 There is an identical change in ConcurrentMap's merge().

 Mike

 On Nov 22 2013, at 16:01 , Henry Jen henry@oracle.com wrote:


 On 11/21/2013 06:33 PM, David Holmes wrote:
 On 22/11/2013 5:02 AM, Louis Wasserman wrote:
 While I agree that case should be specified, I'm not certain I follow why
 that's what's going on here.

 The weird condition is that if oldValue is null, not value; oldValue
 is the
 old result of map.get(key).  The Javadoc seems not just unspecified, but
 actively wrong.  Here's a worked example:

 Map<String, Integer> map = new HashMap<>();
 map.merge("foo", 3, Integer::sum);
 Integer fooValue = map.get("foo");

 Going through the Javadoc-specified default implementation:

   V oldValue = map.get(key);                 // oldValue is null
   V newValue = (oldValue == null) ? value :
       remappingFunction.apply(oldValue, value);
                                              // newValue is set to value, which is 3
   if (newValue == null)                      // newValue is non-null, branch not taken
       map.remove(key);
   else if (oldValue == null)                 // oldValue is null, branch taken
       map.remove(key);                       // removes the entry from the map
   else                                       // else-if was already triggered, branch not taken
       map.put(key, newValue);


 Seems like a document bug to me, we should fix this @implSpec.

 Cheers,
 Henry





Re: RFR: 8027470: AnnotationSupport uses == rather than .equals to compare Class objects

2013-11-15 Thread Stephen Colebourne
On 15 November 2013 03:21, Joseph Darcy joe.da...@oracle.com wrote:
 Catching up on email, the specification of java.lang.Class does not
 explicitly promise that its notion of equality must be identity for all
 time. Therefore, while not required for today's implementations, I would
 prefer that new code we write in the JDK use equals rather than == when
 comparing classes.

I've used == for Class for many years. I expect many other developers
have done the same. I think that changing Class so identity was not
respected would break a *lot* of code.

Stephen


Re: 8026344: j.u.c.a *Adder and *Accumulator extend a package private class that is Serializable

2013-10-23 Thread Stephen Colebourne
On 23 October 2013 17:33, Alan Bateman alan.bate...@oracle.com wrote:
 I might have been tempted to conserve on classes and code duplication to
 share a serialization proxy between the classes and maybe even shorten the
 name.
 But it is hard to argue with the simplicity and directness of this design.

 I don't have a strong opinion on this, it could be something really short
 (if Doug is inclined).

JSR-310 has a shared serialization proxy named Ser (one per package).
It can save a lot of space if multiple types are sent.
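
For anyone unfamiliar with the pattern, a much-simplified sketch of the
shared-proxy idea (class and field names here are hypothetical; the real
java.time Ser is Externalizable and streams each type's own fields):

import java.io.InvalidObjectException;
import java.io.ObjectStreamException;
import java.io.Serializable;

// Hypothetical value class that serializes via the shared proxy below.
final class Money implements Serializable {
    private static final long serialVersionUID = 1L;
    final long pence;
    Money(long pence) { this.pence = pence; }
    // on write, substitute the compact shared proxy for this instance
    private Object writeReplace() { return new Ser(Ser.MONEY, pence); }
}

// One proxy per package: a type byte selects which class to rebuild on read.
final class Ser implements Serializable {
    private static final long serialVersionUID = 1L;
    static final byte MONEY = 1;
    private final byte type;
    private final long value;
    Ser(byte type, long value) { this.type = type; this.value = value; }
    // on read, rebuild the real object from the type byte and payload
    private Object readResolve() throws ObjectStreamException {
        if (type == MONEY) {
            return new Money(value);
        }
        throw new InvalidObjectException("Unknown serialized type: " + type);
    }
}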

Stephen


Re: Please Review Cleanup of java.time serialization source 8024686

2013-10-22 Thread Stephen Colebourne
Fine by me (not an official reviewer)
Stephen

On 22 October 2013 15:34, roger riggs roger.ri...@oracle.com wrote:
 Hi,

 Please review to address issues in the serialization source files [1]
  - The OffsetTime and OffsetDateTime classes have inconsistent
 serialization.
  - Order of modifiers

 Webrev:
   http://cr.openjdk.java.net/~rriggs/webrev-serial-cleanup-8024686/

 [1] https://bugs.openjdk.java.net/browse/JDK-8024686

 Thanks, Roger



Re: [JBS] (JDK-8026842) Remove Time-Zone IDs HST/EST/MST

2013-10-18 Thread Stephen Colebourne
I've emailed the ThreeTen lists to ensure everyone knows about this.

To balance the points below, it should be noted that
(1) they've been deprecated in TimeZone for a long time
(2) their meaning has changed over time
(3) they lead to user bugs, where someone finds EST exists and thus
assumes that EDT/CST/CDT/PST/PDT exist as well (which they do not)

The complete set of short IDs in ZoneId right now is:
[HST, GMT, UCT, PRC, MET, CET, WET, EET, UTC, EST, MST, ROK]

UTC/UCT/GMT are fine.

WET/CET/EET are fine because they are trans-national IDs in Europe,
and represent something that cannot be represented in any other way.

MET is an equivalent/translation of CET. This is undesirable, but less
bad than HST/MST/EST. However, ECT (another translation) has been
removed.

PRC/ROK are equivalent of Asia/Shanghai and Asia/Seoul. Undesirable,
but less bad than HST/MST/EST.

The fully aggressive approach would remove PRC/ROK/MET as they add no
value. And yes removing PRC/ROK/MET would be my preference. However
since they are less bad than HST/MST/EST and it's late, I can live with
them staying if necessary. (Removing them would require adding them to
SHORT_IDS)

The complete list in TimeZone is:
[HST, ACT, PRT, IST, ECT, CNT, PNT, EET, EAT, AET, NST, GMT, ART, BST,
PRC, AGT, UTC, CAT, MIT, ROK, MST, CST, NET, UCT, PST, MET, BET, CET,
VST, IET, AST, WET, JST, EST, PLT, SST, CTT]

The set in SHORT_IDS handles these TimeZone IDs.


On the webrev, I would have left the tzdb data file alone and just
patched it in ZoneId when reading the file. However, your approach
will result in a slightly smaller data file, so it's fine by me.

Stephen


On 18 October 2013 18:14, Xueming Shen xueming.s...@oracle.com wrote:
 Hi Stephen,

 As I said before, while I agree that these three short ids are kind of
 confusing and understand the desire to remove them, I'm also concerned
 about the facts that

 (1) these ids have been in the available timezone ids from the j.u.TZ for a
 long time
 (2) these ids are defined in the official tzdb database (together with
 couple others)
 (3) and arguably the EST and MST are popular ids used in the real world

 so it might cause usability complaints if they are missing from ZoneId's
 available list and the app will have to use ZoneId.of(name, SHORT_IDS) to
 gain access.

 That said, if this is the strong opinion from the EG :-) here is the webrev
 for it

 http://cr.openjdk.java.net/~sherman/8026842/webrev/

 Thanks!
 -Sherman

 On 10/17/13 3:16 PM, Stephen Colebourne wrote:

 On 17 October 2013 22:14, Xueming Shen xueming.s...@oracle.com wrote:

 On 10/17/2013 02:01 PM, Stephen Colebourne wrote:

 I'm happy with the contents of this webrev.

 As pointed out previously, it doesn't address the specific issue in
 8025971

 I will file a separate issue to address the issue. I would assume you are
 recommending
 to completely remove these three short ids from the JVM?

 I'm recommending removing these three IDs from ZoneId. For example:

 ZoneId.of(HST) should throw an exception
 ZoneId.of(HST, SHORT_IDS) should succeed and return the matching
 ZoneOffset UTC-10:00

 TimeZone.of(HST) should succeed and return an offset-based TimeZone
 with the HST ID
 TimeZone.of(HST).toZoneId() should succeed and return the matching
 ZoneOffset UTC-10:00
 TimeZone.of(HST).toZoneId() with old mappings flag equal to true
 should succeed and return the Honolulu zone.

 Thus, myZoneId.getId() will never return HST.

 Stephen




Re: RFR 8025971 and 8026197

2013-10-17 Thread Stephen Colebourne
I'm happy with the contents of this webrev.

As pointed out previously, it doesn't address the specific issue in 8025971

Stephen


On 17 October 2013 21:53, Xueming Shen xueming.s...@oracle.com wrote:
 Webrev has been updated accordingly based on your suggestion.

 http://cr.openjdk.java.net/~sherman/8025971_8026197/webrev

 thanks!
 -Sherman


 On 10/16/2013 01:09 AM, Stephen Colebourne wrote:

  On 15 October 2013 20:35, Xueming Shen xueming.s...@oracle.com wrote:

 Please help codereview the changes for

 8025971: Remove Time-Zone IDs HST/EST/MST

 The method ZoneId.systemDefault() now does not match its specification:

 /**
   * Gets the system default time-zone.
   * <p>
   * This queries {@link TimeZone#getDefault()} to find the default time-zone
   * and converts it to a {@code ZoneId}. If the system default time-zone is
   * changed, then the result of this method will also change.
   *
   * @return the zone ID, not null
   * @throws DateTimeException if the converted zone ID has an invalid format
   * @throws ZoneRulesException if the converted zone region ID cannot be found
   */
   public static ZoneId systemDefault() {
 return ZoneId.of(TimeZone.getDefault().getID(), SHORT_IDS);
   }

 This needs to be changed to:
   public static ZoneId systemDefault() {
 return TimeZone.getDefault().toZoneId();
   }

 This fix is for a different issue - not the one described in the
 8025971 bug report. I recommend opening a new issue. If you're not
 going to fix 8025971 (which I think you should) then that needs to be
 documented.

 Stephen




Re: RFR 8025971 and 8026197

2013-10-17 Thread Stephen Colebourne
On 17 October 2013 22:14, Xueming Shen xueming.s...@oracle.com wrote:
 On 10/17/2013 02:01 PM, Stephen Colebourne wrote:

 I'm happy with the contents of this webrev.

 As pointed out previously, it doesn't address the specific issue in
 8025971

 I will file a separate issue to address the issue. I would assume you are
 recommending
 to completely remove these three short ids from the JVM?

I'm recommending removing these three IDs from ZoneId. For example:

ZoneId.of(HST) should throw an exception
ZoneId.of(HST, SHORT_IDS) should succeed and return the matching
ZoneOffset UTC-10:00

TimeZone.of(HST) should succeed and return an offset-based TimeZone
with the HST ID
TimeZone.of(HST).toZoneId() should succeed and return the matching
ZoneOffset UTC-10:00
TimeZone.of(HST).toZoneId() with old mappings flag equal to true
should succeed and return the Honolulu zone.

Thus, myZoneId.getId() will never return HST.
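
In other words, roughly (a sketch of the proposed behaviour only, using
TimeZone.getTimeZone for the lookups written above as TimeZone.of, and
ZoneId.SHORT_IDS as the alias map):

ZoneId.of("HST");                        // proposed: throws, HST is not a valid region ID
ZoneId.of("HST", ZoneId.SHORT_IDS);      // proposed: returns the ZoneOffset for -10:00
TimeZone.getTimeZone("HST").toZoneId();  // proposed: returns the ZoneOffset for -10:00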

Stephen


Re: RFR 8025971 and 8026197

2013-10-16 Thread Stephen Colebourne
On 15 October 2013 20:35, Xueming Shen xueming.s...@oracle.com wrote:
 Please help codereview the changes for

 8025971: Remove Time-Zone IDs HST/EST/MST

The method ZoneId.systemDefault() now does not match its specification:

/**
 * Gets the system default time-zone.
 * <p>
 * This queries {@link TimeZone#getDefault()} to find the default time-zone
 * and converts it to a {@code ZoneId}. If the system default time-zone is
 * changed, then the result of this method will also change.
 *
 * @return the zone ID, not null
 * @throws DateTimeException if the converted zone ID has an invalid format
 * @throws ZoneRulesException if the converted zone region ID cannot be found
 */
 public static ZoneId systemDefault() {
   return ZoneId.of(TimeZone.getDefault().getID(), SHORT_IDS);
 }

This needs to be changed to:
 public static ZoneId systemDefault() {
   return TimeZone.getDefault().toZoneId();
 }

This fix is for a different issue - not the one described in the
8025971 bug report. I recommend opening a new issue. If you're not
going to fix 8025971 (which I think you should) then that needs to be
documented.

Stephen


Re: Please Review Late binding of Chronology with appendValueReduced

2013-10-15 Thread Stephen Colebourne
Fine by me (not a formal reviewer)
Stephen

On 15 October 2013 17:03, roger riggs roger.ri...@oracle.com wrote:
 Hi,

 Spot fix for a date time parsing case with a reduced value field when the
 chronology
 is parsed after the reduced value field.  The fix is to add a callback when
 the
 chronology is set to recompute the value of the field with the new
 chronology.

 Webrev:
 http://cr.openjdk.java.net/~rriggs/webrev-reduced-latechrono-8025828/

 Thanks, Roger

 [1]  https://bugs.openjdk.java.net/browse/JDK-8025828


<    1   2   3   4   5   >