Re: RFR: 8149790: NegativeArraySizeException with hprof

2016-05-03 Thread Andreas Eriksson

Hi,

Could I get a Reviewer to take a look at this please?

- Andreas


On 2016-04-21 19:35, Andreas Eriksson wrote:


On 2016-04-21 18:46, Erik Gahlin wrote:

On 2016-04-21 15:47, Andreas Eriksson wrote:

Hi,


On 2016-04-15 20:21, Erik Gahlin wrote:

Looks good, not a Reviewer.


Thanks.



Do you really need curly braces in the switch clauses?


In JavaValueArray.java they were needed before my change but not 
after. Do you want me to remove them?
In JavaObject.java they are needed because the 'value' variable is a 
different type in each clause, I could refactor so that I could 
remove the curly braces, but I don't think it is worth the effort.



I was thinking about JavaValueArray.java.

It would look better without them, but no need to create an updated 
webrev just for that.


Alright, I'll change that before I push.

Could a Reviewer take a look as well?

- Andreas



Erik


- Andreas



Erik

On 2016-04-15 16:40, Andreas Eriksson wrote:

Hi,

Please review this test fix for 8149790: 
NegativeArraySizeException with hprof


https://bugs.openjdk.java.net/browse/JDK-8149790
http://cr.openjdk.java.net/~aeriksso/8149790/webrev.00/

Changes are to the hprof verifier, which now will pass heap dump 
content around as JavaThing arrays instead of byte arrays, since 
the latter cannot be guaranteed to be able to hold all the 
elements of large arrays.


There is still a problem where the test will timeout on machines 
with lots of memory (seen on machines with 200+GB of memory) 
because the verification takes a long time. I'll file a new bug 
for that problem.


Regards,
Andreas


Re: RFR: 8149790: NegativeArraySizeException with hprof

2016-04-21 Thread Andreas Eriksson


On 2016-04-21 18:46, Erik Gahlin wrote:

On 2016-04-21 15:47, Andreas Eriksson wrote:

Hi,


On 2016-04-15 20:21, Erik Gahlin wrote:

Looks good, not a Reviewer.


Thanks.



Do you really need curly braces in the switch clauses?


In JavaValueArray.java they were needed before my change but not 
after. Do you want me to remove them?
In JavaObject.java they are needed because the 'value' variable is a 
different type in each clause, I could refactor so that I could 
remove the curly braces, but I don't think it is worth the effort.



I was thinking about JavaValueArray.java.

It would look better without them, but no need to create an updated 
webrev just for that.


Alright, I'll change that before I push.

Could a Reviewer take a look as well?

- Andreas



Erik


- Andreas



Erik

On 2016-04-15 16:40, Andreas Eriksson wrote:

Hi,

Please review this test fix for 8149790: NegativeArraySizeException 
with hprof


https://bugs.openjdk.java.net/browse/JDK-8149790
http://cr.openjdk.java.net/~aeriksso/8149790/webrev.00/

Changes are to the hprof verifier, which now will pass heap dump 
content around as JavaThing arrays instead of byte arrays, since 
the latter cannot be guaranteed to be able to hold all the elements 
of large arrays.


There is still a problem where the test will timeout on machines 
with lots of memory (seen on machines with 200+GB of memory) 
because the verification takes a long time. I'll file a new bug for 
that problem.


Regards,
Andreas


Re: RFR: 8149790: NegativeArraySizeException with hprof

2016-04-21 Thread Andreas Eriksson

Hi,


On 2016-04-15 20:21, Erik Gahlin wrote:

Looks good, not a Reviewer.


Thanks.



Do you really need curly braces in the switch clauses?


In JavaValueArray.java they were needed before my change but not after. 
Do you want me to remove them?
In JavaObject.java they are needed because the 'value' variable is a 
different type in each clause, I could refactor so that I could remove 
the curly braces, but I don't think it is worth the effort.


- Andreas



Erik

On 2016-04-15 16:40, Andreas Eriksson wrote:

Hi,

Please review this test fix for 8149790: NegativeArraySizeException 
with hprof


https://bugs.openjdk.java.net/browse/JDK-8149790
http://cr.openjdk.java.net/~aeriksso/8149790/webrev.00/

Changes are to the hprof verifier, which now will pass heap dump 
content around as JavaThing arrays instead of byte arrays, since the 
latter cannot be guaranteed to be able to hold all the elements of 
large arrays.


There is still a problem where the test will timeout on machines with 
lots of memory (seen on machines with 200+GB of memory) because the 
verification takes a long time. I'll file a new bug for that problem.


Regards,
Andreas






RFR: 8149790: NegativeArraySizeException with hprof

2016-04-15 Thread Andreas Eriksson

Hi,

Please review this test fix for 8149790: NegativeArraySizeException with 
hprof


https://bugs.openjdk.java.net/browse/JDK-8149790
http://cr.openjdk.java.net/~aeriksso/8149790/webrev.00/

Changes are to the hprof verifier, which now will pass heap dump content 
around as JavaThing arrays instead of byte arrays, since the latter 
cannot be guaranteed to be able to hold all the elements of large arrays.
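As a hedged illustration of why a byte[] falls short here (hypothetical code, not from the webrev): a Java array is indexed by int, so the byte count of a large primitive array does not fit in the int used to size a byte[], and the overflowed size is exactly the kind of value that surfaces as a NegativeArraySizeException.

```java
// Hypothetical sketch: why a byte[] cannot back the contents of a large
// array. A Java array holds at most Integer.MAX_VALUE elements, so the
// byte count of a big long[] overflows the int used to size a byte[].
public class ByteBufferOverflowDemo {
    // Correct 64-bit computation of the bytes needed for the elements.
    static long bytesNeeded(int elementCount, int elementSize) {
        return (long) elementCount * elementSize;
    }

    // Naive 32-bit computation, as a buggy dump reader might do it.
    static int naiveByteLength(int elementCount, int elementSize) {
        return elementCount * elementSize; // overflows int for large arrays
    }

    public static void main(String[] args) {
        int count = 1 << 28; // 268,435,456 long elements = 2 GiB of data
        System.out.println(bytesNeeded(count, 8));     // 2147483648
        System.out.println(naiveByteLength(count, 8)); // -2147483648
        // new byte[naiveByteLength(count, 8)] would throw
        // NegativeArraySizeException; a JavaThing[] sidesteps the byte count.
    }
}
```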


There is still a problem where the test will timeout on machines with 
lots of memory (seen on machines with 200+GB of memory) because the 
verification takes a long time. I'll file a new bug for that problem.


Regards,
Andreas


Re: [8u] RFR: 8153641: assert(thread_state == _thread_in_native) failed: Assumed thread_in_native while heap dump

2016-04-15 Thread Andreas Eriksson

Thanks Staffan!

- Andreas

On 2016-04-15 13:00, Staffan Larsen wrote:

Looks good!

Thanks,
/Staffan

On 12 apr. 2016, at 18:05, Andreas Eriksson 
<andreas.eriks...@oracle.com <mailto:andreas.eriks...@oracle.com>> wrote:


Hi,

Please review this fix for 8153641: assert(thread_state == 
_thread_in_native) failed: Assumed thread_in_native while heap dump


Bug: https://bugs.openjdk.java.net/browse/JDK-8153641
Webrev: http://cr.openjdk.java.net/~aeriksso/8153641/webrev.00/

The crash is because changes I did in JDK-8129419 
<https://bugs.openjdk.java.net/browse/JDK-8129419> do not work with 
solaris logic used for UseVMInterruptibleIO in some cases.
This change reverts the changes I did to the solaris version of 
os::write, and changes heapDumper.cpp to use ::write and ::close 
directly, as it did before.


Thanks,
Andreas






Re: [8u] RFR: 8153641: assert(thread_state == _thread_in_native) failed: Assumed thread_in_native while heap dump

2016-04-13 Thread Andreas Eriksson

Thanks Serguei!

- Andreas

On 2016-04-12 20:57, serguei.spit...@oracle.com wrote:

Hi Andreas,

The fix looks good.

Thanks,
Serguei


On 4/12/16 09:05, Andreas Eriksson wrote:

Hi,

Please review this fix for 8153641: assert(thread_state == 
_thread_in_native) failed: Assumed thread_in_native while heap dump


Bug: https://bugs.openjdk.java.net/browse/JDK-8153641
Webrev: http://cr.openjdk.java.net/~aeriksso/8153641/webrev.00/

The crash is because changes I did in JDK-8129419 
<https://bugs.openjdk.java.net/browse/JDK-8129419> do not work with 
solaris logic used for UseVMInterruptibleIO in some cases.
This change reverts the changes I did to the solaris version of 
os::write, and changes heapDumper.cpp to use ::write and ::close 
directly, as it did before.


Thanks,
Andreas






Re: [8u] RFR: 8153641: assert(thread_state == _thread_in_native) failed: Assumed thread_in_native while heap dump

2016-04-12 Thread Andreas Eriksson

Thanks Dmitry!

- Andreas

On 2016-04-12 18:07, Dmitry Samersoff wrote:

Andreas,

Looks good for me!

-Dmitry

On 2016-04-12 19:05, Andreas Eriksson wrote:

Hi,

Please review this fix for 8153641: assert(thread_state ==
_thread_in_native) failed: Assumed thread_in_native while heap dump

Bug: https://bugs.openjdk.java.net/browse/JDK-8153641
Webrev: http://cr.openjdk.java.net/~aeriksso/8153641/webrev.00/

The crash is because changes I did in JDK-8129419
<https://bugs.openjdk.java.net/browse/JDK-8129419> do not work with
solaris logic used for UseVMInterruptibleIO in some cases.
This change reverts the changes I did to the solaris version of
os::write, and changes heapDumper.cpp to use ::write and ::close
directly, as it did before.

Thanks,
Andreas






[8u] RFR: 8153641: assert(thread_state == _thread_in_native) failed: Assumed thread_in_native while heap dump

2016-04-12 Thread Andreas Eriksson

Hi,

Please review this fix for 8153641: assert(thread_state == 
_thread_in_native) failed: Assumed thread_in_native while heap dump


Bug: https://bugs.openjdk.java.net/browse/JDK-8153641
Webrev: http://cr.openjdk.java.net/~aeriksso/8153641/webrev.00/

The crash is because changes I did in JDK-8129419 
<https://bugs.openjdk.java.net/browse/JDK-8129419> do not work with 
Solaris logic used for UseVMInterruptibleIO in some cases.
This change reverts the changes I did to the solaris version of 
os::write, and changes heapDumper.cpp to use ::write and ::close 
directly, as it did before.


Thanks,
Andreas


Re: RFR (XS): 8151064: com/sun/jdi/RedefineAddPrivateMethod.sh fails intermittently

2016-03-03 Thread Andreas Eriksson



On 2016-03-03 18:29, Daniel D. Daugherty wrote:

> Webrev: http://cr.openjdk.java.net/~aeriksso/8151064/webrev/

test/com/sun/jdi/RedefineAddPrivateMethod.sh
L38: System.out.println("@1 breakpoint");
L39: System.out.println("@2 breakpoint");

With these tests, this is usually done like this:

System.out.println("stop here for breakpoint 1");  // @1 breakpoint
System.out.println("stop here for breakpoint 2");  // @2 breakpoint


I _think_ the scaffold stuff can handle the way you did
it on all platforms, but I'm not sure. Sorry, my memory
is a bit rusty on this stuff.

Thumbs up on the original version or you can change it to
the above. Your choice.


Sorry, saw this after I had already pushed with the original version.
But other tests did similar things:
RedefineException.sh - line 83: System.out.println("a3: @1 breakpoint here a3");

And it ran fine through RBT, so hopefully it should be no problem.



Dan

P.S.

It occurs to me that in the original code the main() method
is empty, i.e., just comment lines. I have a vague memory
about empty methods being treated differently in HotSpot,
but I don't remember the exact details...



What was weird is that it only failed sometimes. I'll take a look 
tomorrow to see if I can find out why, but I probably won't spend too 
much time on it.


Thanks,
Andreas




On 3/3/16 10:05 AM, Andreas Eriksson wrote:

Thanks Serguei.
I'll go ahead and push this now, since I believe this change is small 
enough.


- Andreas

On 2016-03-03 17:54, serguei.spit...@oracle.com wrote:

Hi Andreas,

Good++

Thanks,
Serguei


On 3/3/16 06:35, Dmitry Samersoff wrote:

Andreas,

Looks good for me.

-Dmitry

On 2016-03-03 17:05, Andreas Eriksson wrote:

Hi,

Can I please have a review of this fix for
8151064: com/sun/jdi/RedefineAddPrivateMethod.sh fails intermittently
https://bugs.openjdk.java.net/browse/JDK-8151064

Webrev: http://cr.openjdk.java.net/~aeriksso/8151064/webrev/

Still not sure why it only fails sometimes, but after this change the
test has not failed once after a couple of hours of testing. Before the
change it would fail after ~5 minutes of running it in a loop.

Thanks,
Andreas



Re: RFR (XS): 8151064: com/sun/jdi/RedefineAddPrivateMethod.sh fails intermittently

2016-03-03 Thread Andreas Eriksson

Thanks Serguei.
I'll go ahead and push this now, since I believe this change is small 
enough.


- Andreas

On 2016-03-03 17:54, serguei.spit...@oracle.com wrote:

Hi Andreas,

Good++

Thanks,
Serguei


On 3/3/16 06:35, Dmitry Samersoff wrote:

Andreas,

Looks good for me.

-Dmitry

On 2016-03-03 17:05, Andreas Eriksson wrote:

Hi,

Can I please have a review of this fix for
8151064: com/sun/jdi/RedefineAddPrivateMethod.sh fails intermittently
https://bugs.openjdk.java.net/browse/JDK-8151064

Webrev: http://cr.openjdk.java.net/~aeriksso/8151064/webrev/

Still not sure why it only fails sometimes, but after this change the
test has not failed once after a couple of hours of testing. Before the
change it would fail after ~5 minutes of running it in a loop.

Thanks,
Andreas


Re: RFR (XS): 8151064: com/sun/jdi/RedefineAddPrivateMethod.sh fails intermittently

2016-03-03 Thread Andreas Eriksson

Thanks.

- Andreas

On 2016-03-03 15:35, Dmitry Samersoff wrote:

Andreas,

Looks good for me.

-Dmitry

On 2016-03-03 17:05, Andreas Eriksson wrote:

Hi,

Can I please have a review of this fix for
8151064: com/sun/jdi/RedefineAddPrivateMethod.sh fails intermittently
https://bugs.openjdk.java.net/browse/JDK-8151064

Webrev: http://cr.openjdk.java.net/~aeriksso/8151064/webrev/

Still not sure why it only fails sometimes, but after this change the
test has not failed once after a couple of hours of testing. Before the
change it would fail after ~5 minutes of running it in a loop.

Thanks,
Andreas






RFR (XS): 8151064: com/sun/jdi/RedefineAddPrivateMethod.sh fails intermittently

2016-03-03 Thread Andreas Eriksson

Hi,

Can I please have a review of this fix for
8151064: com/sun/jdi/RedefineAddPrivateMethod.sh fails intermittently
https://bugs.openjdk.java.net/browse/JDK-8151064

Webrev: http://cr.openjdk.java.net/~aeriksso/8151064/webrev/

Still not sure why it only fails sometimes, but after this change the 
test has not failed once after a couple of hours of testing. Before the 
change it would fail after ~5 minutes of running it in a loop.


Thanks,
Andreas


Re: RFR (XS): 8150986: serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java failing because expects HPROF JAVA PROFILE 1.0.1 file format

2016-03-02 Thread Andreas Eriksson

Thanks Dmitry.

- Andreas

On 2016-03-02 19:26, Dmitry Samersoff wrote:

Looks good for me!

On 2016-03-02 19:56, Andreas Eriksson wrote:

Hi,

Can I please have a review for this small test fix.

Bug: 8150986: serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java
failing because expects HPROF JAVA PROFILE 1.0.1 file format
https://bugs.openjdk.java.net/browse/JDK-8150986
Webrev: http://cr.openjdk.java.net/~aeriksso/8150986/webrev/

As part of JDK-8144732
<https://bugs.openjdk.java.net/browse/JDK-8144732> support for dumping
hprof with format 1.0.1 was removed, and we use format 1.0.2 always.
I missed updating this test, which checks the header string in a heap dump.

Thanks,
Andreas






Re: RFR (XS): 8150986: serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java failing because expects HPROF JAVA PROFILE 1.0.1 file format

2016-03-02 Thread Andreas Eriksson

Hi,

On 2016-03-02 19:10, Daniel D. Daugherty wrote:

On 3/2/16 9:56 AM, Andreas Eriksson wrote:

Hi,

Can I please have a review for this small test fix.

Bug: 8150986: 
serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java failing 
because expects HPROF JAVA PROFILE 1.0.1 file format

https://bugs.openjdk.java.net/browse/JDK-8150986
Webrev: http://cr.openjdk.java.net/~aeriksso/8150986/webrev/


test/serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java
L57: private static final String HPROF_HEADER_1_0_1 = "JAVA 
PROFILE 1.0.1";


This variable isn't used anymore. Do you want to delete it?
Or do you want to leave it for documentation/historical purposes?


Yes, I'll delete it.
Uploaded a new webrev:
http://cr.openjdk.java.net/~aeriksso/8150986/webrev.02/

Also updated copyright year.



Thumbs up!

It would also be a good idea to search the various test suites for the
"JAVA PROFILE 1.0.1" pattern to see if there are any other tests that
might need to be updated.



I found no more tests with a quick search.

Thanks,
Andreas



Since this bug (8150986) is an integration_blocker, please feel free
to proceed with just this fix and do any follow-up work with a new bug.

Dan




As part of JDK-8144732 
<https://bugs.openjdk.java.net/browse/JDK-8144732> support for 
dumping hprof with format 1.0.1 was removed, and we use format 1.0.2 
always.
I missed updating this test, which checks the header string in a heap 
dump.


Thanks,
Andreas






RFR (XS): 8150986: serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java failing because expects HPROF JAVA PROFILE 1.0.1 file format

2016-03-02 Thread Andreas Eriksson

Hi,

Can I please have a review for this small test fix.

Bug: 8150986: serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java 
failing because expects HPROF JAVA PROFILE 1.0.1 file format

https://bugs.openjdk.java.net/browse/JDK-8150986
Webrev: http://cr.openjdk.java.net/~aeriksso/8150986/webrev/

As part of JDK-8144732 
<https://bugs.openjdk.java.net/browse/JDK-8144732> support for dumping 
hprof with format 1.0.1 was removed, and we use format 1.0.2 always.

I missed updating this test, which checks the header string in a heap dump.
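For reference, an hprof file begins with a NUL-terminated format-name string (now always "JAVA PROFILE 1.0.2"). A hedged sketch, with hypothetical names, of how a test could read that header:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical helper: read the NUL-terminated header string that opens
// an hprof dump file, as a test like JMapHProfLargeHeapTest might check it.
public class HprofHeaderReader {
    static String readHeader(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) > 0) { // stop at the NUL terminator (or EOF)
            sb.append((char) b);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // "\0" marks the end of the header; the binary records follow it.
        byte[] dumpStart = "JAVA PROFILE 1.0.2\0rest-of-dump".getBytes("US-ASCII");
        System.out.println(readHeader(new ByteArrayInputStream(dumpStart)));
        // prints: JAVA PROFILE 1.0.2
    }
}
```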

Thanks,
Andreas


Re: RFR: 8149743: JVM crash after debugger hotswap with lambdas

2016-02-23 Thread Andreas Eriksson



On 2016-02-23 16:54, Coleen Phillimore wrote:



On 2/23/16 10:47 AM, Andreas Eriksson wrote:


On 2016-02-23 10:22, Andreas Eriksson wrote:



On 2016-02-23 00:53, Coleen Phillimore wrote:



On 2/22/16 6:46 PM, Daniel D. Daugherty wrote:

Just noticed that this thread did not include the Serviceability
aliases... JVM/TI belongs to the Serviceability team so...

Dan


On 2/22/16 4:44 PM, Daniel D. Daugherty wrote:

On 2/22/16 1:28 PM, Coleen Phillimore wrote:


This fix looks good.   The test looks like it uses some 
framework I don't know, in a directory that I don't usually run 
tests in.


We have some redefinition tests in 
hotspot/test/runtime/RedefineTests - can you write one with 
source code there instead?


The redefine tests in the runtime directory are in the wrong place.


The tests in the runtime directory test runtime changes to support 
redefinition primarily, so are in the right place. They are run 
with JPRT -testset hotspot.  They don't use these other frameworks 
intentionally.




JVM/TI RedefineClasses() is a debugging API and jdk/test/com/sun/jdi
is the usual place for JDI tests that exercise RedefineClasses(). If
the test is using the Java entry points (JLI/JPLIS), then those 
tests
belong in jdk/test/java/lang/instrument. Of course, if the test 
needs

to use the JDWP entry point for JVM/TI RedefineClasses(), then those
tests live in yet another location.



Can you or someone from serviceability team review this test then? 
I have no idea what it does and would prefer a test more tailored 
to testing the JVM functionality than jdb commands.


Hi,

I'll try creating a hotspot only test, but I'm not sure how to 
reproduce this without using jdb.


- Andreas


Looks like creating a hotspot only test is difficult, the problem 
only causes obvious problems (a crash) on code paths taken when using 
JDI/JDB (or JVMTI directly).


Given that the serviceability team have reviewed this I'll go ahead 
and push this with the test as-is unless you object.


This is fine.  I'm glad someone reviewed the test.

Coleen


Great, thanks.
Also, I closed the bug you linked as a duplicate.

- Andreas





Thanks,
Andreas





Thanks,
Coleen

Dan




I linked an old bug to this, can you see if they are the same?

Thanks,
Coleen

On 2/18/16 11:11 AM, Andreas Eriksson wrote:

Hi,

Please review this fix for JDK-8149743: JVM crash after 
debugger hotswap with lambdas

https://bugs.openjdk.java.net/browse/JDK-8149743

Webrev: http://cr.openjdk.java.net/~aeriksso/8149743/webrev.00/

When redefining a class to add or delete methods an array 
that's tracking method ordering is not updated correctly.
This change swaps the method ordering array between the old 
class being redefined and the scratch class it is being 
redefined into at the same point where we swap the methods and 
constant pool between them.


Regards,
Andreas


Re: RFR: 8149743: JVM crash after debugger hotswap with lambdas

2016-02-23 Thread Andreas Eriksson


On 2016-02-23 10:22, Andreas Eriksson wrote:



On 2016-02-23 00:53, Coleen Phillimore wrote:



On 2/22/16 6:46 PM, Daniel D. Daugherty wrote:

Just noticed that this thread did not include the Serviceability
aliases... JVM/TI belongs to the Serviceability team so...

Dan


On 2/22/16 4:44 PM, Daniel D. Daugherty wrote:

On 2/22/16 1:28 PM, Coleen Phillimore wrote:


This fix looks good.   The test looks like it uses some framework 
I don't know, in a directory that I don't usually run tests in.


We have some redefinition tests in 
hotspot/test/runtime/RedefineTests - can you write one with source 
code there instead?


The redefine tests in the runtime directory are in the wrong place.


The tests in the runtime directory test runtime changes to support 
redefinition primarily, so are in the right place.  They are run with 
JPRT -testset hotspot.  They don't use these other frameworks 
intentionally.




JVM/TI RedefineClasses() is a debugging API and jdk/test/com/sun/jdi
is the usual place for JDI tests that exercise RedefineClasses(). If
the test is using the Java entry points (JLI/JPLIS), then those tests
belong in jdk/test/java/lang/instrument. Of course, if the test needs
to use the JDWP entry point for JVM/TI RedefineClasses(), then those
tests live in yet another location.



Can you or someone from serviceability team review this test then? I 
have no idea what it does and would prefer a test more tailored to 
testing the JVM functionality than jdb commands.


Hi,

I'll try creating a hotspot only test, but I'm not sure how to 
reproduce this without using jdb.


- Andreas


Looks like creating a hotspot only test is difficult, the problem only 
causes obvious problems (a crash) on code paths taken when using JDI/JDB 
(or JVMTI directly).


Given that the serviceability team have reviewed this I'll go ahead and 
push this with the test as-is unless you object.


Thanks,
Andreas





Thanks,
Coleen

Dan




I linked an old bug to this, can you see if they are the same?

Thanks,
Coleen

On 2/18/16 11:11 AM, Andreas Eriksson wrote:

Hi,

Please review this fix for JDK-8149743: JVM crash after debugger 
hotswap with lambdas

https://bugs.openjdk.java.net/browse/JDK-8149743

Webrev: http://cr.openjdk.java.net/~aeriksso/8149743/webrev.00/

When redefining a class to add or delete methods an array that's 
tracking method ordering is not updated correctly.
This change swaps the method ordering array between the old class 
being redefined and the scratch class it is being redefined into 
at the same point where we swap the methods and constant pool 
between them.


Regards,
Andreas


Re: RFR: 8149743: JVM crash after debugger hotswap with lambdas

2016-02-23 Thread Andreas Eriksson



On 2016-02-23 00:53, Coleen Phillimore wrote:



On 2/22/16 6:46 PM, Daniel D. Daugherty wrote:

Just noticed that this thread did not include the Serviceability
aliases... JVM/TI belongs to the Serviceability team so...

Dan


On 2/22/16 4:44 PM, Daniel D. Daugherty wrote:

On 2/22/16 1:28 PM, Coleen Phillimore wrote:


This fix looks good.   The test looks like it uses some framework I 
don't know, in a directory that I don't usually run tests in.


We have some redefinition tests in 
hotspot/test/runtime/RedefineTests - can you write one with source 
code there instead?


The redefine tests in the runtime directory are in the wrong place.


The tests in the runtime directory test runtime changes to support 
redefinition primarily, so are in the right place.  They are run with 
JPRT -testset hotspot.  They don't use these other frameworks 
intentionally.




JVM/TI RedefineClasses() is a debugging API and jdk/test/com/sun/jdi
is the usual place for JDI tests that exercise RedefineClasses(). If
the test is using the Java entry points (JLI/JPLIS), then those tests
belong in jdk/test/java/lang/instrument. Of course, if the test needs
to use the JDWP entry point for JVM/TI RedefineClasses(), then those
tests live in yet another location.



Can you or someone from serviceability team review this test then? I 
have no idea what it does and would prefer a test more tailored to 
testing the JVM functionality than jdb commands.


Hi,

I'll try creating a hotspot only test, but I'm not sure how to reproduce 
this without using jdb.


- Andreas



Thanks,
Coleen

Dan




I linked an old bug to this, can you see if they are the same?

Thanks,
Coleen

On 2/18/16 11:11 AM, Andreas Eriksson wrote:

Hi,

Please review this fix for JDK-8149743: JVM crash after debugger 
hotswap with lambdas

https://bugs.openjdk.java.net/browse/JDK-8149743

Webrev: http://cr.openjdk.java.net/~aeriksso/8149743/webrev.00/

When redefining a class to add or delete methods an array that's 
tracking method ordering is not updated correctly.
This change swaps the method ordering array between the old class 
being redefined and the scratch class it is being redefined into 
at the same point where we swap the methods and constant pool 
between them.


Regards,
Andreas


Re: RFR: 8149743: JVM crash after debugger hotswap with lambdas

2016-02-23 Thread Andreas Eriksson



On 2016-02-23 01:09, serguei.spit...@oracle.com wrote:

On 2/22/16 15:44, Daniel D. Daugherty wrote:

On 2/22/16 1:28 PM, Coleen Phillimore wrote:


This fix looks good.   The test looks like it uses some framework I 
don't know, in a directory that I don't usually run tests in.


We have some redefinition tests in 
hotspot/test/runtime/RedefineTests - can you write one with source 
code there instead?


The redefine tests in the runtime directory are in the wrong place.

JVM/TI RedefineClasses() is a debugging API and jdk/test/com/sun/jdi
is the usual place for JDI tests that exercise RedefineClasses(). If
the test is using the Java entry points (JLI/JPLIS), then those tests
belong in jdk/test/java/lang/instrument. Of course, if the test needs
to use the JDWP entry point for JVM/TI RedefineClasses(), then those
tests live in yet another location.


Andreas,

I'm not sure which tests you ran.
It'd be nice to run both the com/sun/jdi and java/lang/instrument tests
to make sure there are no regressions with your fix.



Hi,

I ran the jdk_jdi group (com/sun/jdi) and the hotspot_runtime group. 
I'll run the java/lang/instrument as well.


- Andreas



Thanks,
Serguei




Dan




I linked an old bug to this, can you see if they are the same?

Thanks,
Coleen

On 2/18/16 11:11 AM, Andreas Eriksson wrote:

Hi,

Please review this fix for JDK-8149743: JVM crash after debugger 
hotswap with lambdas

https://bugs.openjdk.java.net/browse/JDK-8149743

Webrev: http://cr.openjdk.java.net/~aeriksso/8149743/webrev.00/

When redefining a class to add or delete methods an array that's 
tracking method ordering is not updated correctly.
This change swaps the method ordering array between the old class 
being redefined and the scratch class it is being redefined into at 
the same point where we swap the methods and constant pool between 
them.
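A hedged Java analogy of the invariant being fixed (the real code is HotSpot C++; these names are hypothetical): the method-ordering array is a permutation tied to one specific methods array, so the two must be swapped together or lookups go through the wrong indices.

```java
import java.util.Arrays;

// Hypothetical model: ordering[i] gives the original declaration index of
// methods[i]; swapping methods without swapping ordering breaks the mapping.
public class KlassSwapDemo {
    String[] methods;   // sorted by name/signature
    int[] ordering;     // permutation back to declaration order

    KlassSwapDemo(String[] methods, int[] ordering) {
        this.methods = methods;
        this.ordering = ordering;
    }

    // The fix, in miniature: swap both fields at the same point.
    static void swap(KlassSwapDemo a, KlassSwapDemo b) {
        String[] m = a.methods;  a.methods = b.methods;   b.methods = m;
        int[] o = a.ordering;    a.ordering = b.ordering; b.ordering = o;
    }

    public static void main(String[] args) {
        KlassSwapDemo oldKlass = new KlassSwapDemo(
                new String[]{"a", "b"}, new int[]{1, 0});
        KlassSwapDemo scratch = new KlassSwapDemo(
                new String[]{"a", "b", "c"}, new int[]{2, 0, 1});
        swap(oldKlass, scratch);
        // ordering still matches its methods array after the swap
        System.out.println(oldKlass.methods.length == oldKlass.ordering.length); // true
        System.out.println(Arrays.toString(oldKlass.ordering)); // [2, 0, 1]
    }
}
```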


Regards,
Andreas


Re: RFR: 8144732: VM_HeapDumper hits assert with bad dump_len

2016-02-22 Thread Andreas Eriksson

Thanks.

On 2016-02-22 08:45, Dmitry Samersoff wrote:

Andreas,

56 header 1.0.2 only now

Besides that - Looks good for me!

-Dmitry

On 2016-02-16 16:55, Andreas Eriksson wrote:

Hi,

New webrev with support for non-segmented dumps removed:
http://cr.openjdk.java.net/~aeriksso/8144732/webrev.01/

I changed calculate_array_max_length a bit (in addition to removing the
is_segmented check), to avoid a type underflow that could happen.

On 2016-02-08 15:40, Dmitry Samersoff wrote:

Andreas,

On 2016-02-08 17:18, Andreas Eriksson wrote:

Hi,

On 2016-02-08 13:28, Dmitry Samersoff wrote:

Andreas,

Sorry for delay.

Code changes looks good for me.

But the behavior of the non-segmented dump is not clear to me (1074: if the dump is
not segmented, then the size of the dump is always less than max_bytes,
and the code below (1086) will not be executed).

I think today we may always write a segmented heap dump and
significantly simplify logic.

Do you mean remove support for dumping non-segmented entirely?

Correct!

I think it brings more problems than it solves (but that's my personal opinion).


Could I do that as part of this fix? And would it require a CCC request?

I guess not, it doesn't change anything visible for outside world but
let me re-check it.

Nothing has changed for heaps larger than 2 GB; a dump of a small heap will
contain one more header.

All existing tools have to support both segmented and non-segmented
dumps, so it shouldn't break anything.

PS: Please also update:

./share/vm/runtime/globals.hpp
1058:   develop(size_t, SegmentedHeapDumpThreshold, 2*G,

./serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java

-Dmitry


Also, I think that current_record_length() doesn't need as many asserts;
one assert(dump_end == (size_t)current_offset(), "checking"); is enough.

I'd like to keep the assert(dump_len <= max_juint, "bad dump length");
in case there is ever a similar problem on some other code path if
that's ok with you. I have no problem with removing the dump_start
assert though.

I removed this assert after all, since at a later point there is a
warning("record is too large"); that checks the same thing.

The hprof parser also has more problems with long arrays (see for
example JDK-8149790 <https://bugs.openjdk.java.net/browse/JDK-8149790>)
but I think that will require larger changes so I won't do it as part of
this fix.

Regards,
Andreas


OK.

-Dmitry



Regards,
Andreas


-Dmitry

On 2016-02-01 19:20, Andreas Eriksson wrote:

Hi,

Please review this fix for dumping of long arrays.

Bug:
8144732: VM_HeapDumper hits assert with bad dump_len
https://bugs.openjdk.java.net/browse/JDK-8144732

Webrev:
http://cr.openjdk.java.net/~aeriksso/8144732/webrev.00/

Problem:
The hprof format uses an u4 as a record length field, but arrays can be
longer than that (counted in bytes).

Fix:
Truncate the dump length of the array using a new function,
calculate_array_max_length. For a given array it returns the number of
elements we can dump. That length is then used to truncate arrays that
are too long.
Whenever an array is truncated a warning is printed:
Java HotSpot(TM) 64-Bit Server VM warning: cannot dump array of type
object[] with length 1,073,741,823; truncating to length 536,870,908

Much of the change is moving functions needed by
calculate_array_max_length to the DumpWriter or DumperSupport class so
that they can be accessed.
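A hedged sketch of the truncation arithmetic implied by the warning above. The 25-byte record-header overhead is an assumption chosen so the numbers in the example work out; the real calculate_array_max_length derives the overhead from the record layout.

```java
// Hypothetical model of truncating an array so its dump record fits in the
// u4 length field of the hprof format.
public class ArrayTruncation {
    static final long MAX_U4 = 0xFFFFFFFFL; // hprof record length is a u4

    // Max elements that fit once headerBytes of record overhead are counted.
    static long maxDumpableLength(long arrayLength, int elementSize, int headerBytes) {
        long maxElements = (MAX_U4 - headerBytes) / elementSize;
        return Math.min(arrayLength, maxElements);
    }

    public static void main(String[] args) {
        // Reproduces the numbers from the warning message quoted above:
        // object[] of length 1,073,741,823, 8-byte ids, ~25 header bytes.
        System.out.println(maxDumpableLength(1_073_741_823L, 8, 25)); // 536870908
    }
}
```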

Regards,
Andreas


Re: Fwd: Re: [8u] RFR: 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy

2016-02-16 Thread Andreas Eriksson

Hi,

Anyone with 8u Reviewer status who can take a look?

- Andreas


On 2016-02-10 19:01, Andreas Eriksson wrote:

[Adding jdk8u-dev list]

Hi,

I believe I need an 8u Reviewer as well, since Dmitry is only 
committer for 8u?

Could someone please take a look?

This is type cleanup for dumping a heap with long arrays.

Hotspot webrev: 
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.jdk8.00/hotspot/
JDK webrev: 
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.jdk8.00/jdk/


See below for changes compared to the JDK 9 version.

Regards,
Andreas


 Forwarded Message 
Subject: Re: [8u] RFR: 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy
Date: Wed, 10 Feb 2016 20:38:21 +0300
From: Dmitry Samersoff <dmitry.samers...@oracle.com>
To: Andreas Eriksson <andreas.eriks...@oracle.com>, serviceability-dev@openjdk.java.net




Andreas,

Looks good for me!

-Dmitry

On 2016-01-28 20:36, Andreas Eriksson wrote:
> Hi,
>
> Please review this backport of JDK-8129419
><https://bugs.openjdk.java.net/browse/JDK-8129419>.
>
> Hotspot webrev:
>http://cr.openjdk.java.net/~aeriksso/8129419/webrev.jdk8.00/hotspot/
> JDK webrev:http://cr.openjdk.java.net/~aeriksso/8129419/webrev.jdk8.00/jdk/
>
> There are three changes compared to JDK 9:
>
> 1) The hprof parser is in the JDK repo instead of top repo.
> Otherwise these changes are the same.
>
> 2) On Solaris os::write has some UseVMInterruptibleIO logic that failed
> because it assumed it was called by a JavaThread.
> I changed it to skip UseVMInterruptibleIO logic if called by a
> non-JavaThread.
>
> 3) The "heap dump file created" message was using sprintf and pragmas
> because of legacy reasons.
> Code after the change is the same as JDK 9 code.
>
> Regards,
> Andreas


--
Dmitry Samersoff
Oracle Java development team, Saint Petersburg, Russia
* I would love to change the world, but they won't give me the sources.






Re: RFR: 8144732: VM_HeapDumper hits assert with bad dump_len

2016-02-16 Thread Andreas Eriksson

Hi,

New webrev with support for non-segmented dumps removed:
http://cr.openjdk.java.net/~aeriksso/8144732/webrev.01/

I changed calculate_array_max_length a bit (in addition to removing the 
is_segmented check), to avoid a type underflow that could happen.


On 2016-02-08 15:40, Dmitry Samersoff wrote:

Andreas,

On 2016-02-08 17:18, Andreas Eriksson wrote:

Hi,

On 2016-02-08 13:28, Dmitry Samersoff wrote:

Andreas,

Sorry for delay.

Code changes look good for me.

But the behavior of the non-segmented dump is not clear to me (1074): if the dump is
not segmented, then the size of the dump is always less than max_bytes,
and the code below (1086) will not be executed.

I think today we may always write a segmented heap dump and
significantly simplify logic.

Do you mean remove support for dumping non-segmented entirely?

Correct!

I think it brings more problems than it solves (but that's my personal opinion).


Could I do that as part of this fix? And would it require a CCC request?

I guess not, it doesn't change anything visible to the outside world, but
let me re-check it.

Nothing has changed for heaps larger than 2 GB; a dump of a small heap will
contain one more header.

All existing tools have to support both segmented and non-segmented
dumps, so it shouldn't break anything.

PS: Please also update:

./share/vm/runtime/globals.hpp
1058:   develop(size_t, SegmentedHeapDumpThreshold, 2*G,

./serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java

-Dmitry


Also, I think that current_record_length() doesn't need as many asserts;
one assert(dump_end == (size_t)current_offset(), "checking"); is enough.

I'd like to keep the assert(dump_len <= max_juint, "bad dump length");
in case there is ever a similar problem on some other code path if
that's ok with you. I have no problem with removing the dump_start
assert though.


I removed this assert after all, since at a later point there is a 
warning("record is too large"); that checks the same thing.


The hprof parser also has more problems with long arrays (see for 
example JDK-8149790 <https://bugs.openjdk.java.net/browse/JDK-8149790>) 
but I think that will require larger changes so I won't do it as part of 
this fix.


Regards,
Andreas


OK.

-Dmitry



Regards,
Andreas


-Dmitry

On 2016-02-01 19:20, Andreas Eriksson wrote:

Hi,

Please review this fix for dumping of long arrays.

Bug:
8144732: VM_HeapDumper hits assert with bad dump_len
https://bugs.openjdk.java.net/browse/JDK-8144732

Webrev:
http://cr.openjdk.java.net/~aeriksso/8144732/webrev.00/

Problem:
The hprof format uses a u4 as a record length field, but arrays can be
longer than that (counted in bytes).

Fix:
Truncate the dump length of the array using a new function,
calculate_array_max_length. For a given array it returns the number of
elements we can dump. That length is then used to truncate arrays that
are too long.
Whenever an array is truncated a warning is printed:
Java HotSpot(TM) 64-Bit Server VM warning: cannot dump array of type
object[] with length 1,073,741,823; truncating to length 536,870,908

Much of the change is moving functions needed by
calculate_array_max_length to the DumpWriter or DumperSupport class so
that they can be accessed.

Regards,
Andreas






Re: [8u] RFR: 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy

2016-02-10 Thread Andreas Eriksson

Thanks!

- Andreas

On 2016-02-10 18:38, Dmitry Samersoff wrote:

Andreas,

Looks good for me!

-Dmitry

On 2016-01-28 20:36, Andreas Eriksson wrote:

Hi,

Please review this backport of JDK-8129419
<https://bugs.openjdk.java.net/browse/JDK-8129419>.

Hotspot webrev:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.jdk8.00/hotspot/
JDK webrev: http://cr.openjdk.java.net/~aeriksso/8129419/webrev.jdk8.00/jdk/

There are three changes compared to JDK 9:

1) The hprof parser is in the JDK repo instead of top repo.
Otherwise these changes are the same.

2) On Solaris os::write has some UseVMInterruptibleIO logic that failed
because it assumed it was called by a JavaThread.
I changed it to skip UseVMInterruptibleIO logic if called by a
non-JavaThread.

3) The "heap dump file created" message was using sprintf and pragmas
for legacy reasons.
Code after the change is the same as JDK 9 code.

Regards,
Andreas






Fwd: Re: [8u] RFR: 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy

2016-02-10 Thread Andreas Eriksson

[Adding jdk8u-dev list]

Hi,

I believe I need an 8u Reviewer as well, since Dmitry is only a 
committer for 8u?

Could someone please take a look?

This is type cleanup for dumping a heap with long arrays.

Hotspot webrev: 
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.jdk8.00/hotspot/

JDK webrev: http://cr.openjdk.java.net/~aeriksso/8129419/webrev.jdk8.00/jdk/

See below for changes compared to the JDK 9 version.

Regards,
Andreas


 Forwarded Message 
Subject: 	Re: [8u] RFR: 8129419: heapDumper.cpp: assert(length_in_bytes 
> 0) failed: nothing to copy

Date:   Wed, 10 Feb 2016 20:38:21 +0300
From:   Dmitry Samersoff <dmitry.samers...@oracle.com>
To: 	Andreas Eriksson <andreas.eriks...@oracle.com>, 
serviceability-dev@openjdk.java.net




Andreas,

Looks good for me!

-Dmitry

On 2016-01-28 20:36, Andreas Eriksson wrote:

Hi,

Please review this backport of JDK-8129419
<https://bugs.openjdk.java.net/browse/JDK-8129419>.

Hotspot webrev:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.jdk8.00/hotspot/
JDK webrev: http://cr.openjdk.java.net/~aeriksso/8129419/webrev.jdk8.00/jdk/

There are three changes compared to JDK 9:

1) The hprof parser is in the JDK repo instead of top repo.
Otherwise these changes are the same.

2) On Solaris os::write has some UseVMInterruptibleIO logic that failed
because it assumed it was called by a JavaThread.
I changed it to skip UseVMInterruptibleIO logic if called by a
non-JavaThread.

3) The "heap dump file created" message was using sprintf and pragmas
for legacy reasons.
Code after the change is the same as JDK 9 code.

Regards,
Andreas



--
Dmitry Samersoff
Oracle Java development team, Saint Petersburg, Russia
* I would love to change the world, but they won't give me the sources.





Re: RFR: 8144732: VM_HeapDumper hits assert with bad dump_len

2016-02-08 Thread Andreas Eriksson

Hi,

On 2016-02-08 13:28, Dmitry Samersoff wrote:

Andreas,

Sorry for delay.

Code changes look good for me.

But the behavior of the non-segmented dump is not clear to me (1074): if the dump is
not segmented, then the size of the dump is always less than max_bytes,
and the code below (1086) will not be executed.

I think today we may always write a segmented heap dump and
significantly simplify logic.


Do you mean remove support for dumping non-segmented entirely?
Could I do that as part of this fix? And would it require a CCC request?



Also, I think that current_record_length() doesn't need as many asserts;
one assert(dump_end == (size_t)current_offset(), "checking"); is enough.


I'd like to keep the assert(dump_len <= max_juint, "bad dump length"); 
in case there is ever a similar problem on some other code path if 
that's ok with you. I have no problem with removing the dump_start 
assert though.


Regards,
Andreas



-Dmitry

On 2016-02-01 19:20, Andreas Eriksson wrote:

Hi,

Please review this fix for dumping of long arrays.

Bug:
8144732: VM_HeapDumper hits assert with bad dump_len
https://bugs.openjdk.java.net/browse/JDK-8144732

Webrev:
http://cr.openjdk.java.net/~aeriksso/8144732/webrev.00/

Problem:
The hprof format uses a u4 as a record length field, but arrays can be
longer than that (counted in bytes).

Fix:
Truncate the dump length of the array using a new function,
calculate_array_max_length. For a given array it returns the number of
elements we can dump. That length is then used to truncate arrays that
are too long.
Whenever an array is truncated a warning is printed:
Java HotSpot(TM) 64-Bit Server VM warning: cannot dump array of type
object[] with length 1,073,741,823; truncating to length 536,870,908

Much of the change is moving functions needed by
calculate_array_max_length to the DumpWriter or DumperSupport class so
that they can be accessed.

Regards,
Andreas






RFR: 8144732: VM_HeapDumper hits assert with bad dump_len

2016-02-01 Thread Andreas Eriksson

Hi,

Please review this fix for dumping of long arrays.

Bug:
8144732: VM_HeapDumper hits assert with bad dump_len
https://bugs.openjdk.java.net/browse/JDK-8144732

Webrev:
http://cr.openjdk.java.net/~aeriksso/8144732/webrev.00/

Problem:
The hprof format uses a u4 as a record length field, but arrays can be 
longer than that (counted in bytes).


Fix:
Truncate the dump length of the array using a new function, 
calculate_array_max_length. For a given array it returns the number of 
elements we can dump. That length is then used to truncate arrays that 
are too long.

Whenever an array is truncated a warning is printed:
Java HotSpot(TM) 64-Bit Server VM warning: cannot dump array of type 
object[] with length 1,073,741,823; truncating to length 536,870,908


Much of the change is moving functions needed by 
calculate_array_max_length to the DumpWriter or DumperSupport class so 
that they can be accessed.


Regards,
Andreas
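
The truncation arithmetic described in the fix above can be sketched as follows. This is a simplified illustration with assumed names and an assumed 24-byte record header, not the actual HotSpot calculate_array_max_length; with those assumptions it happens to reproduce the 536,870,908 figure from the example warning:

```cpp
#include <cstdint>

// Hypothetical sketch: the hprof record length field is a u4, so a single
// record can describe at most UINT32_MAX bytes.  Subtract the per-record
// header and divide by the element size to get the largest element count
// that still fits; guard the subtraction so it cannot underflow.
static uint64_t max_dumpable_elements(uint64_t header_bytes,
                                      uint64_t elem_bytes,
                                      uint64_t array_length) {
  const uint64_t max_bytes = UINT32_MAX;  // u4 record length field
  if (header_bytes >= max_bytes || elem_bytes == 0) return 0;
  uint64_t max_elems = (max_bytes - header_bytes) / elem_bytes;
  return array_length < max_elems ? array_length : max_elems;
}
```

With the assumed 24-byte header and 8-byte object references, an object[] of length 1,073,741,823 would be truncated to (2^32 - 1 - 24) / 8 = 536,870,908 elements.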


[8u] RFR: 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy

2016-01-28 Thread Andreas Eriksson

Hi,

Please review this backport of JDK-8129419 
<https://bugs.openjdk.java.net/browse/JDK-8129419>.


Hotspot webrev: 
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.jdk8.00/hotspot/

JDK webrev: http://cr.openjdk.java.net/~aeriksso/8129419/webrev.jdk8.00/jdk/

There are three changes compared to JDK 9:

1) The hprof parser is in the JDK repo instead of top repo.
Otherwise these changes are the same.

2) On Solaris os::write has some UseVMInterruptibleIO logic that failed 
because it assumed it was called by a JavaThread.
I changed it to skip UseVMInterruptibleIO logic if called by a 
non-JavaThread.


3) The "heap dump file created" message was using sprintf and pragmas 
for legacy reasons.

Code after the change is the same as JDK 9 code.

Regards,
Andreas


Re: RFR (M): 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy

2016-01-18 Thread Andreas Eriksson



On 2016-01-18 13:58, Dmitry Samersoff wrote:

Andreas,

On 2016-01-18 15:26, Andreas Eriksson wrote:

I believe I then have to cast it to size_t at 496, 497 and 498?

I hope not.


I'm not sure how write works with regard to returning ssize_t vs a size_t
length.

It's a tricky part.

The maximum number of bytes that could be actually written by write() in
one shot is implementation dependent and limited by fs (or VFS) driver.
Typically it is less than MAX_INT.

If len overflows it, write just returns the number of bytes actually
written without setting an error.


Maybe I should change it back to checking for return value < 0 (instead
of only checking for OS_ERR), and then keep 'n' as ssize_t?

I would prefer this way.


Done.
Uploaded new webrev if you want to take a look:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.03/hotspot/

- Andreas



-Dmitry


- Andreas

On 2016-01-18 13:11, Dmitry Samersoff wrote:

Andreas,

481: It's better to declare n as ssize_t and remove cast at 488

Looks good for me!

-Dmitry


On 2016-01-18 14:42, Andreas Eriksson wrote:

Hi,

New webrev:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.02/hotspot/

Write_internal handles writes less than len, and EINTR.

EINTR is taken care of by os::write, but to use it I had to remove an
assert from the solaris os::write, which checked that a JavaThread was
in_native. Since heap dumping is done from the vm_thread (not a
JavaThread) the assert crashed the VM.

Regards,
Andreas

On 2016-01-13 13:39, Andreas Eriksson wrote:

Hi,

On 2015-12-29 21:27, Dmitry Samersoff wrote:

Andreas,

Great work. All but write_internal looks good for me (see below).


HprofReader.java:

Nit -  length -= skipped; should be after if skipped == 0

heapDumper.cpp:480

1. For windows you can do :
   assert(len < (size_t)UINT_MAX, ... );
   ::write( ..., (uint) (len & UINT_MAX));

2. You should check for n < len and if it is true,
deal with errno. n = 0 is legitimate case,
so assert is not necessary.

I only think n = 0 is valid if write is called with length 0, which
write_internal will never do.
Otherwise, according to posix, n will be -1 if an error occurred, or
greater than zero if some bytes were written. [1]
If some bytes but not all were written then the while(len > 0) loop
will make sure we try to write the rest of the bytes.
It looks like windows write should never return 0 when len != 0
either. [2]

I should however handle the -1 result with errno EINTR, working on a
new webrev.

Thanks,
Andreas

[1]
http://pubs.opengroup.org/onlinepubs/9699919799/functions/write.html
If write() is interrupted by a signal before it writes any data, it
shall return -1 with errno set to [EINTR].
If write() is interrupted by a signal after it successfully writes
some data, it shall return the number of bytes written.

[2] https://msdn.microsoft.com/en-us/library/1570wh78(v=vs.80).aspx


-Dmitry

On 2015-12-28 17:02, Andreas Eriksson wrote:

Hi,

Here is the webrev for type changes.
Top-repo:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.01/jdk9-hs-rt/
Hotspot:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.01/hotspot/

The windows_x64 native write function uses a length of type uint, not
size_t. I added an ifdef for that case and handled it, but better
suggestions are welcome.
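
One possible shape for that ifdef-ed case (a hypothetical sketch, not the code from the webrev): clamp each request to what the 32-bit length parameter can express, and let the surrounding retry loop issue the remainder as further calls.

```cpp
#include <climits>
#include <cstddef>

// Hypothetical helper: a write API whose length parameter is a 32-bit
// unsigned int cannot take a full size_t, so each individual call is
// clamped to UINT_MAX bytes; the caller loops for the remaining bytes.
static unsigned int clamp_write_len(size_t len) {
  return len > (size_t)UINT_MAX ? UINT_MAX : (unsigned int)len;
}
```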

Also found and fixed another problem in the hprof parser when
skipping
over bytes.

I've not included the tests, they have OOM issues on several JPRT
hosts,
and until I've figured out a way to fix that I won't be pushing them.

Thanks,
Andreas

On 2015-12-14 19:34, Andreas Eriksson wrote:

Hi Dmitry, thanks for looking at this.


On 2015-12-14 18:30, Dmitry Samersoff wrote:

Andreas,

Nice cleanup.

Some generic comments below.

1. It would be excellent if you could split the webrev into two parts, one
part with the type cleanup and the other part with the array-truncation-related
changes, or even push these changes separately, as they address different
problems.

Type cleanup could be reviewed quickly, but review of array
truncation
requires some time.

(We already have two different CRs: JDK-8129419 for type cleanup
and
JDK-8144732 for array truncation)

Yes, that's a good point.


2.
it might be better to use size_t and jlong (or julong) across all
file
and get rid of all other types with a few exclusion.

I'll see what I can do, and we can look at it closer once I've split
the type changes into a separate webrev.


3.
Each assert costs some time in nightly testing, so we probably
don't
need as many asserts.

Alright.


4. write_internal:

  There are a couple of cases where ::write writes fewer bytes than
requested and doesn't return -1.
  So we should check if ::write returns a value less than len, and if that
happens, deal with errno - either repeat the entire write
attempt (EINTR/EAGAIN) or abort writing.

Actually, I meant to ask about that, but forgot:
I tried using os::write, which handles EINTR/EAGAIN if necessary
(depending on platform), but Solaris has an assert 

Re: RFR (M): 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy

2016-01-18 Thread Andreas Eriksson

Thanks!

On 2016-01-18 14:10, Dmitry Samersoff wrote:

Andreas,

Thumbs up! (Reviewed)

-Dmitry

On 2016-01-18 16:07, Andreas Eriksson wrote:


On 2016-01-18 13:58, Dmitry Samersoff wrote:

Andreas,

On 2016-01-18 15:26, Andreas Eriksson wrote:

I believe I then have to cast it to size_t at 496, 497 and 498?

I hope not.


I'm not sure how write works with regard to returning ssize_t vs a size_t
length.

It's a tricky part.

The maximum number of bytes that could be actually written by write() in
one shot is implementation dependent and limited by fs (or VFS) driver.
Typically it is less than MAX_INT.

If len overflows it, write just returns the number of bytes actually
written without setting an error.


Maybe I should change it back to checking for return value < 0 (instead
of only checking for OS_ERR), and then keep 'n' as ssize_t?

I would prefer this way.

Done.
Uploaded new webrev if you want to take a look:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.03/hotspot/

- Andreas


-Dmitry


- Andreas

On 2016-01-18 13:11, Dmitry Samersoff wrote:

Andreas,

481: It's better to declare n as ssize_t and remove cast at 488

Looks good for me!

-Dmitry


On 2016-01-18 14:42, Andreas Eriksson wrote:

Hi,

New webrev:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.02/hotspot/

Write_internal handles writes less than len, and EINTR.

EINTR is taken care of by os::write, but to use it I had to remove an
assert from the solaris os::write, which checked that a JavaThread was
in_native. Since heap dumping is done from the vm_thread (not a
JavaThread) the assert crashed the VM.

Regards,
Andreas

On 2016-01-13 13:39, Andreas Eriksson wrote:

Hi,

On 2015-12-29 21:27, Dmitry Samersoff wrote:

Andreas,

Great work. All but write_internal looks good for me (see below).


HprofReader.java:

Nit -  length -= skipped; should be after if skipped == 0

heapDumper.cpp:480

1. For windows you can do :
assert(len < (size_t)UINT_MAX, ... );
::write( ..., (uint) (len & UINT_MAX));

2. You should check for n < len and if it is true,
deal with errno. n = 0 is legitimate case,
so assert is not necessary.

I only think n = 0 is valid if write is called with length 0, which
write_internal will never do.
Otherwise, according to posix, n will be -1 if an error occurred, or
greater than zero if some bytes were written. [1]
If some bytes but not all were written then the while(len > 0) loop
will make sure we try to write the rest of the bytes.
It looks like windows write should never return 0 when len != 0
either. [2]

I should however handle the -1 result with errno EINTR, working on a
new webrev.

Thanks,
Andreas

[1]
http://pubs.opengroup.org/onlinepubs/9699919799/functions/write.html
If write() is interrupted by a signal before it writes any data, it
shall return -1 with errno set to [EINTR].
If write() is interrupted by a signal after it successfully writes
some data, it shall return the number of bytes written.

[2] https://msdn.microsoft.com/en-us/library/1570wh78(v=vs.80).aspx


-Dmitry

On 2015-12-28 17:02, Andreas Eriksson wrote:

Hi,

Here is the webrev for type changes.
Top-repo:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.01/jdk9-hs-rt/
Hotspot:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.01/hotspot/

The windows_x64 native write function uses a length of type
uint, not
size_t. I added an ifdef for that case and handled it, but better
suggestions are welcome.

Also found and fixed another problem in the hprof parser when
skipping
over bytes.

I've not included the tests, they have OOM issues on several JPRT
hosts,
and until I've figured out a way to fix that I won't be pushing
them.

Thanks,
Andreas

On 2015-12-14 19:34, Andreas Eriksson wrote:

Hi Dmitry, thanks for looking at this.


On 2015-12-14 18:30, Dmitry Samersoff wrote:

Andreas,

Nice cleanup.

Some generic comments below.

1. It would be excellent if you could split the webrev into two parts, one
part with the type cleanup and the other part with the array-truncation-related
changes, or even push these changes separately, as they address different
problems.

Type cleanup could be reviewed quickly, but review of array
truncation
requires some time.

(We already have two different CRs: JDK-8129419 for type cleanup
and
JDK-8144732 for array truncation)

Yes, that's a good point.


2.
it might be better to use size_t and jlong (or julong) across all
file
and get rid of all other types with a few exclusion.

I'll see what I can do, and we can look at it closer once I've
split
the type changes into a separate webrev.


3.
Each assert costs some time in nightly testing, so we probably
don't
need as many asserts.

Alright.


4. write_internal:

   There are a couple of cases where ::write writes fewer bytes than
requested and doesn't return -1.
   So we should check if ::write returns a value less than len, and if that
happens, deal with errno - either repeat the entire write
attempt (EINTR/EAGAIN) or abort writing.

Actually, I me

Re: RFR (M): 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy

2016-01-18 Thread Andreas Eriksson

I believe I then have to cast it to size_t at 496, 497 and 498?
I'm not sure how write works with regard to returning ssize_t vs a size_t 
length.
Maybe I should change it back to checking for return value < 0 (instead 
of only checking for OS_ERR), and then keep 'n' as ssize_t?


- Andreas

On 2016-01-18 13:11, Dmitry Samersoff wrote:

Andreas,

481: It's better to declare n as ssize_t and remove cast at 488

Looks good for me!

-Dmitry


On 2016-01-18 14:42, Andreas Eriksson wrote:

Hi,

New webrev: http://cr.openjdk.java.net/~aeriksso/8129419/webrev.02/hotspot/

Write_internal handles writes less than len, and EINTR.

EINTR is taken care of by os::write, but to use it I had to remove an
assert from the solaris os::write, which checked that a JavaThread was
in_native. Since heap dumping is done from the vm_thread (not a
JavaThread) the assert crashed the VM.

Regards,
Andreas

On 2016-01-13 13:39, Andreas Eriksson wrote:

Hi,

On 2015-12-29 21:27, Dmitry Samersoff wrote:

Andreas,

Great work. All but write_internal looks good for me (see below).


HprofReader.java:

Nit -  length -= skipped; should be after if skipped == 0

heapDumper.cpp:480

1. For windows you can do :
  assert(len < (size_t)UINT_MAX, ... );
  ::write( ..., (uint) (len & UINT_MAX));

2. You should check for n < len and if it is true,
deal with errno. n = 0 is legitimate case,
so assert is not necessary.

I only think n = 0 is valid if write is called with length 0, which
write_internal will never do.
Otherwise, according to posix, n will be -1 if an error occurred, or
greater than zero if some bytes were written. [1]
If some bytes but not all were written then the while(len > 0) loop
will make sure we try to write the rest of the bytes.
It looks like windows write should never return 0 when len != 0
either. [2]

I should however handle the -1 result with errno EINTR, working on a
new webrev.

Thanks,
Andreas

[1] http://pubs.opengroup.org/onlinepubs/9699919799/functions/write.html
If write() is interrupted by a signal before it writes any data, it
shall return -1 with errno set to [EINTR].
If write() is interrupted by a signal after it successfully writes
some data, it shall return the number of bytes written.

[2] https://msdn.microsoft.com/en-us/library/1570wh78(v=vs.80).aspx


-Dmitry

On 2015-12-28 17:02, Andreas Eriksson wrote:

Hi,

Here is the webrev for type changes.
Top-repo:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.01/jdk9-hs-rt/
Hotspot:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.01/hotspot/

The windows_x64 native write function uses a length of type uint, not
size_t. I added an ifdef for that case and handled it, but better
suggestions are welcome.

Also found and fixed another problem in the hprof parser when skipping
over bytes.

I've not included the tests, they have OOM issues on several JPRT
hosts,
and until I've figured out a way to fix that I won't be pushing them.

Thanks,
Andreas

On 2015-12-14 19:34, Andreas Eriksson wrote:

Hi Dmitry, thanks for looking at this.


On 2015-12-14 18:30, Dmitry Samersoff wrote:

Andreas,

Nice cleanup.

Some generic comments below.

1. It would be excellent if you could split the webrev into two parts, one
part with the type cleanup and the other part with the array-truncation-related
changes, or even push these changes separately, as they address different
problems.

Type cleanup could be reviewed quickly, but review of array
truncation
requires some time.

(We already have two different CRs: JDK-8129419 for type cleanup and
JDK-8144732 for array truncation)

Yes, that's a good point.


2.
it might be better to use size_t and jlong (or julong) across all
file
and get rid of all other types with a few exclusion.

I'll see what I can do, and we can look at it closer once I've split
the type changes into a separate webrev.


3.
Each assert costs some time in nightly testing, so we probably don't
need as many asserts.

Alright.


4. write_internal:

 There are a couple of cases where ::write writes fewer bytes than
requested and doesn't return -1.
 So we should check if ::write returns a value less than len, and if that
happens, deal with errno - either repeat the entire write
attempt (EINTR/EAGAIN) or abort writing.

Actually, I meant to ask about that, but forgot:
I tried using os::write, which handles EINTR/EAGAIN if necessary
(depending on platform), but Solaris has an assert that fails:
http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/file/5a42c1dde332/src/os/solaris/vm/os_solaris.cpp#l5692


It checks that os::write is called by a JavaThread that is in_native,
which we are not, since we are in the VM_thread doing a safepoint_op.
Does anyone know if that assert is really needed? It's only done for
Solaris.


5. We should handle zero-length arrays in calculate_array_max_length -
it's a legitimate case

Not sure what you mean here, I believe it does handle zero-length
arrays.
It should just fall through without taking any of the if-clauses.
(Or do you mean t

Re: RFR (M): 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy

2016-01-18 Thread Andreas Eriksson

Hi,

New webrev: http://cr.openjdk.java.net/~aeriksso/8129419/webrev.02/hotspot/

Write_internal handles writes less than len, and EINTR.

EINTR is taken care of by os::write, but to use it I had to remove an 
assert from the solaris os::write, which checked that a JavaThread was 
in_native. Since heap dumping is done from the vm_thread (not a 
JavaThread) the assert crashed the VM.


Regards,
Andreas

On 2016-01-13 13:39, Andreas Eriksson wrote:

Hi,

On 2015-12-29 21:27, Dmitry Samersoff wrote:

Andreas,

Great work. All but write_internal looks good for me (see below).


HprofReader.java:

Nit -  length -= skipped; should be after if skipped == 0

heapDumper.cpp:480

1. For windows you can do :
 assert(len < (size_t)UINT_MAX, ... );
 ::write( ..., (uint) (len & UINT_MAX));

2. You should check for n < len and if it is true,
deal with errno. n = 0 is legitimate case,
so assert is not necessary.


I only think n = 0 is valid if write is called with length 0, which 
write_internal will never do.
Otherwise, according to posix, n will be -1 if an error occurred, or 
greater than zero if some bytes were written. [1]
If some bytes but not all were written then the while(len > 0) loop 
will make sure we try to write the rest of the bytes.
It looks like windows write should never return 0 when len != 0 
either. [2]


I should however handle the -1 result with errno EINTR, working on a 
new webrev.


Thanks,
Andreas

[1] http://pubs.opengroup.org/onlinepubs/9699919799/functions/write.html
If write() is interrupted by a signal before it writes any data, it 
shall return -1 with errno set to [EINTR].
If write() is interrupted by a signal after it successfully writes 
some data, it shall return the number of bytes written.


[2] https://msdn.microsoft.com/en-us/library/1570wh78(v=vs.80).aspx


-Dmitry

On 2015-12-28 17:02, Andreas Eriksson wrote:

Hi,

Here is the webrev for type changes.
Top-repo:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.01/jdk9-hs-rt/
Hotspot: 
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.01/hotspot/


The windows_x64 native write function uses a length of type uint, not
size_t. I added an ifdef for that case and handled it, but better
suggestions are welcome.

Also found and fixed another problem in the hprof parser when skipping
over bytes.

I've not included the tests, they have OOM issues on several JPRT 
hosts,

and until I've figured out a way to fix that I won't be pushing them.

Thanks,
Andreas

On 2015-12-14 19:34, Andreas Eriksson wrote:

Hi Dmitry, thanks for looking at this.


On 2015-12-14 18:30, Dmitry Samersoff wrote:

Andreas,

Nice cleanup.

Some generic comments below.

1. It would be excellent if you could split the webrev into two parts, one
part with the type cleanup and the other part with the array-truncation-related
changes, or even push these changes separately, as they address different
problems.

Type cleanup could be reviewed quickly, but review of array 
truncation

requires some time.

(We already have two different CRs: JDK-8129419 for type cleanup and
JDK-8144732 for array truncation)

Yes, that's a good point.



2.
it might be better to use size_t and jlong (or julong) across all 
file

and get rid of all other types with a few exclusion.

I'll see what I can do, and we can look at it closer once I've split
the type changes into a separate webrev.


3.
Each assert costs some time in nightly testing, so we probably don't
need as many asserts.

Alright.


4. write_internal:

There are a couple of cases where ::write writes fewer bytes than
requested and doesn't return -1.
So we should check if ::write returns a value less than len, and if that
happens, deal with errno - either repeat the entire write
attempt (EINTR/EAGAIN) or abort writing.

Actually, I meant to ask about that, but forgot:
I tried using os::write, which handles EINTR/EAGAIN if necessary
(depending on platform), but Solaris has an assert that fails:
http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/file/5a42c1dde332/src/os/solaris/vm/os_solaris.cpp#l5692 



It checks that os::write is called by a JavaThread that is in_native,
which we are not, since we are in the VM_thread doing a safepoint_op.
Does anyone know if that assert is really needed? It's only done for
Solaris.


5. We should handle zero-length arrays in calculate_array_max_length -
it's a legitimate case
Not sure what you mean here, I believe it does handle zero-length 
arrays.

It should just fall through without taking any of the if-clauses.
(Or do you mean that I should add a return immediately at the top if
length is zero?)


6. max_juint is less than SegmentedHeapDumpThreshold, so a non-segmented
heap dump can't contain a huge array, and it's better to check for a segmented
dump before any other checks.

Correct me if I'm wrong here, but SegmentedHeapDumpThreshold could in
theory be set so that we have a non-segmented heap dump while still
having huge arrays.
Not sure if this is worth taking into account, since it most like

Re: RFR (M): 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy

2016-01-13 Thread Andreas Eriksson

Hi,

On 2015-12-29 21:27, Dmitry Samersoff wrote:

Andreas,

Great work. All but write_internal looks good for me (see below).


HprofReader.java:

Nit -  length -= skipped; should be after if skipped == 0

heapDumper.cpp:480

1. For windows you can do :
 assert(len < (size_t)UINT_MAX, ... );
 ::write( ..., (uint) (len & UINT_MAX));

2. You should check for n < len and if it is true,
deal with errno. n = 0 is legitimate case,
so assert is not necessary.


I only think n = 0 is valid if write is called with length 0, which 
write_internal will never do.
Otherwise, according to posix, n will be -1 if an error occurred, or 
greater than zero if some bytes were written. [1]
If some bytes but not all were written then the while(len > 0) loop will 
make sure we try to write the rest of the bytes.

It looks like windows write should never return 0 when len != 0 either. [2]

I should however handle the -1 result with errno EINTR, working on a new 
webrev.


Thanks,
Andreas

[1] http://pubs.opengroup.org/onlinepubs/9699919799/functions/write.html
If write() is interrupted by a signal before it writes any data, it 
shall return -1 with errno set to [EINTR].
If write() is interrupted by a signal after it successfully writes some 
data, it shall return the number of bytes written.


[2] https://msdn.microsoft.com/en-us/library/1570wh78(v=vs.80).aspx
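
The retry behavior discussed above can be sketched as a POSIX write loop.
This is a hedged illustration only, not HotSpot's actual write_internal:
the function name and structure are assumptions for the sake of the example.

```cpp
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <unistd.h>

// Hypothetical sketch (not HotSpot's write_internal): keep calling
// ::write until all bytes are out, retrying on EINTR.
static bool write_fully(int fd, const char* buf, size_t len) {
  while (len > 0) {
    ssize_t n = ::write(fd, buf, len);
    if (n < 0) {
      if (errno == EINTR) {
        continue;       // interrupted before any data was written: retry
      }
      return false;     // real error: give up and let the caller abort
    }
    // Per POSIX, with len > 0 a successful ::write returns at least one
    // byte, so n == 0 needs no special case here; a short write simply
    // goes around the loop again for the remaining bytes.
    buf += n;
    len -= (size_t)n;
  }
  return true;
}
```

On windows_x64, where the native write function takes a uint length, such
a loop would additionally need to clamp each chunk to UINT_MAX, as
discussed elsewhere in the thread.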


-Dmitry

On 2015-12-28 17:02, Andreas Eriksson wrote:

Hi,

Here is the webrev for type changes.
Top-repo:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.01/jdk9-hs-rt/
Hotspot: http://cr.openjdk.java.net/~aeriksso/8129419/webrev.01/hotspot/

The windows_x64 native write function uses a length of type uint, not
size_t. I added an ifdef for that case and handled it, but better
suggestions are welcome.

Also found and fixed another problem in the hprof parser when skipping
over bytes.

I've not included the tests, they have OOM issues on several JPRT hosts,
and until I've figured out a way to fix that I won't be pushing them.

Thanks,
Andreas

On 2015-12-14 19:34, Andreas Eriksson wrote:

Hi Dmitry, thanks for looking at this.


On 2015-12-14 18:30, Dmitry Samersoff wrote:

Andreas,

Nice cleanup.

Some generic comments below.

1. It would be excellent if you could split the webrev into two parts, one
part with the type cleanup and the other with the array-truncation-related
changes, or even push these changes separately, as they address different
problems.

Type cleanup could be reviewed quickly, but review of array truncation
requires some time.

(We already have two different CRs: JDK-8129419 for type cleanup and
JDK-8144732 for array truncation)

Yes, that's a good point.



2.
it might be better to use size_t and jlong (or julong) across the whole
file and get rid of all other types, with a few exclusions.

I'll see what I can do, and we can look at it closer once I've split
the type changes into a separate webrev.


3.
Each assert costs some time in nightly testing, so we probably don't
need so many asserts.

Alright.


4. write_internal:

There are a couple of cases where ::write writes fewer bytes than
requested and doesn't return -1.
So we should check whether ::write returns a value less than len and,
if it does, deal with errno - either repeat the entire write
attempt (EINTR/EAGAIN) or abort writing.

Actually, I meant to ask about that, but forgot:
I tried using os::write, which handles EINTR/EAGAIN if necessary
(depending on platform), but Solaris has an assert that fails:
http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/file/5a42c1dde332/src/os/solaris/vm/os_solaris.cpp#l5692

It checks that os::write is called by a JavaThread that is in_native,
which we are not, since we are in the VM_thread doing a safepoint_op.
Does anyone know if that assert is really needed? It's only done for
Solaris.


5. we should handle zero-length array in calculate_array_max_length -
it's a legitimate case

Not sure what you mean here, I believe it does handle zero-length arrays.
It should just fall through without taking any of the if-clauses.
(Or do you mean that I should add a return immediately at the top if
length is zero?)


6. max_juint is less than SegmentedHeapDumpThreshold so non-segmented
heapdump can't contain huge array and it's better to check for segmented
dump before any other checks.

Correct me if I'm wrong here, but SegmentedHeapDumpThreshold could in
theory be set so that we have a non-segmented heap dump while still
having huge arrays.
Not sure if this is worth taking into account, since it most likely
would lead to other problems as well, and the flag is develop only, so
it can't happen in product builds.
What do you think would be best to do here?

- Andreas


-Dmitry


On 2015-12-14 18:38, Andreas Eriksson wrote:

Hi,

Please review this fix for dumping of long arrays, and general cleanup
of types in heapDumper.cpp.

Problem:
At several places in heapDumper.cpp overflows could happen when dumping
long arrays.
Also the hprof format uses an u4 as a record

Re: RFR (M): 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy

2015-12-28 Thread Andreas Eriksson

Hi,

Here is the webrev for type changes.
Top-repo: http://cr.openjdk.java.net/~aeriksso/8129419/webrev.01/jdk9-hs-rt/
Hotspot: http://cr.openjdk.java.net/~aeriksso/8129419/webrev.01/hotspot/

The windows_x64 native write function uses a length of type uint, not 
size_t. I added an ifdef for that case and handled it, but better 
suggestions are welcome.


Also found and fixed another problem in the hprof parser when skipping 
over bytes.


I've not included the tests, they have OOM issues on several JPRT hosts, 
and until I've figured out a way to fix that I won't be pushing them.


Thanks,
Andreas

On 2015-12-14 19:34, Andreas Eriksson wrote:

Hi Dmitry, thanks for looking at this.


On 2015-12-14 18:30, Dmitry Samersoff wrote:

Andreas,

Nice cleanup.

Some generic comments below.

1. It would be excellent if you could split the webrev into two parts, one
part with the type cleanup and the other with the array-truncation-related
changes, or even push these changes separately, as they address different
problems.

Type cleanup could be reviewed quickly, but review of array truncation
requires some time.

(We already have two different CRs: JDK-8129419 for type cleanup and
JDK-8144732 for array truncation)


Yes, that's a good point.




2.
it might be better to use size_t and jlong (or julong) across the whole
file and get rid of all other types, with a few exclusions.


I'll see what I can do, and we can look at it closer once I've split 
the type changes into a separate webrev.



3.
Each assert costs some time in nightly testing, so we probably don't
need so many asserts.


Alright.



4. write_internal:

   There are a couple of cases where ::write writes fewer bytes than
requested and doesn't return -1.
   So we should check whether ::write returns a value less than len and,
if it does, deal with errno - either repeat the entire write
attempt (EINTR/EAGAIN) or abort writing.


Actually, I meant to ask about that, but forgot:
I tried using os::write, which handles EINTR/EAGAIN if necessary 
(depending on platform), but Solaris has an assert that fails:
http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/file/5a42c1dde332/src/os/solaris/vm/os_solaris.cpp#l5692 

It checks that os::write is called by a JavaThread that is in_native, 
which we are not, since we are in the VM_thread doing a safepoint_op.
Does anyone know if that assert is really needed? It's only done for 
Solaris.




5. we should handle zero-length array in calculate_array_max_length -
it's a legitimate case


Not sure what you mean here, I believe it does handle zero-length arrays.
It should just fall through without taking any of the if-clauses.
(Or do you mean that I should add a return immediately at the top if 
length is zero?)



6. max_juint is less than SegmentedHeapDumpThreshold so non-segmented
heapdump can't contain huge array and it's better to check for segmented
dump before any other checks.


Correct me if I'm wrong here, but SegmentedHeapDumpThreshold could in 
theory be set so that we have a non-segmented heap dump while still 
having huge arrays.
Not sure if this is worth taking into account, since it most likely 
would lead to other problems as well, and the flag is develop only, so 
it can't happen in product builds.

What do you think would be best to do here?

- Andreas



-Dmitry


On 2015-12-14 18:38, Andreas Eriksson wrote:

Hi,

Please review this fix for dumping of long arrays, and general cleanup
of types in heapDumper.cpp.

Problem:
At several places in heapDumper.cpp overflows could happen when dumping
long arrays.
Also the hprof format uses a u4 as a record length field, but arrays
can be longer than that (counted in bytes).

Fix:
Many types that were previously signed are changed to equivalent
unsigned types and/or to a larger type to prevent overflow.
The bulk of the change though is the addition of
calculate_array_max_length, which for a given array returns the number
of elements we can dump. That length is then used to truncate arrays
that are too long.
Whenever an array is truncated a warning is printed:
Java HotSpot(TM) 64-Bit Server VM warning: cannot dump array of type
object[] with length 1,073,741,823; truncating to length 536,870,908

Much of the rest of the change is moving functions needed by
calculate_array_max_length to the DumpWriter or DumperSupport class so
that they can be accessed.

Added a test that relies on the hprof parser, which also had a 
couple of

overflow problems (top repo changes).
I've also run this change against a couple of other tests, but they are
problematic in JPRT because they are using large heaps and lots of 
disk.


Bug:
8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to
copy
https://bugs.openjdk.java.net/browse/JDK-8129419

Also fixed in this change are the problems seen in these two bugs:
8133317: Integer overflow in heapDumper.cpp leads to corrupt HPROF 
dumps

https://bugs.openjdk.java.net/browse/JDK-8133317

8144732: VM_HeapDumper hits assert with bad dump_len
ht

RFR (M): 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy

2015-12-14 Thread Andreas Eriksson

Hi,

Please review this fix for dumping of long arrays, and general cleanup 
of types in heapDumper.cpp.


Problem:
At several places in heapDumper.cpp overflows could happen when dumping 
long arrays.
Also the hprof format uses a u4 as a record length field, but arrays 
can be longer than that (counted in bytes).


Fix:
Many types that were previously signed are changed to equivalent 
unsigned types and/or to a larger type to prevent overflow.
The bulk of the change though is the addition of 
calculate_array_max_length, which for a given array returns the number 
of elements we can dump. That length is then used to truncate arrays 
that are too long.

Whenever an array is truncated a warning is printed:
Java HotSpot(TM) 64-Bit Server VM warning: cannot dump array of type 
object[] with length 1,073,741,823; truncating to length 536,870,908
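
The truncation described above can be sketched roughly as follows. This is
a hedged approximation, not the actual calculate_array_max_length from the
webrev: the function name, the header_size parameter, and the exact
rounding are illustrative assumptions. The key constraint is that the
hprof record length is an unsigned 32-bit byte count.

```cpp
#include <algorithm>
#include <cstdint>

// Sketch: how many elements of an array fit in one hprof record, given
// that the record length field is a u4 counted in bytes. The name and
// header_size parameter are illustrative assumptions.
static uint32_t array_max_length(uint64_t length,       // element count
                                 uint32_t elem_size,    // bytes per element
                                 uint32_t header_size)  // record header bytes
{
  if (length == 0) {
    return 0;  // zero-length arrays are a legitimate case: nothing to do
  }
  // Bytes available for array elements after the record header.
  const uint64_t max_payload = (uint64_t)UINT32_MAX - header_size;
  const uint64_t max_elems = max_payload / elem_size;
  // Truncate only when the array would overflow the record.
  return (uint32_t)std::min(length, max_elems);
}
```

A short array passes through unchanged, while an array whose byte size
would overflow the u4 record length is truncated, which is when the
warning quoted above would be printed.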


Much of the rest of the change is moving functions needed by 
calculate_array_max_length to the DumpWriter or DumperSupport class so 
that they can be accessed.


Added a test that relies on the hprof parser, which also had a couple of 
overflow problems (top repo changes).
I've also run this change against a couple of other tests, but they are 
problematic in JPRT because they are using large heaps and lots of disk.


Bug:
8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy
https://bugs.openjdk.java.net/browse/JDK-8129419

Also fixed in this change are the problems seen in these two bugs:
8133317: Integer overflow in heapDumper.cpp leads to corrupt HPROF dumps
https://bugs.openjdk.java.net/browse/JDK-8133317

8144732: VM_HeapDumper hits assert with bad dump_len
https://bugs.openjdk.java.net/browse/JDK-8144732

Webrev:
Top repo: http://cr.openjdk.java.net/~aeriksso/8129419/webrev.00/jdk9-hs-rt/
Hotspot: http://cr.openjdk.java.net/~aeriksso/8129419/webrev.00/hotspot/

Regards,
Andreas


Re: RFR (M): 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy

2015-12-14 Thread Andreas Eriksson

Hi Dmitry, thanks for looking at this.


On 2015-12-14 18:30, Dmitry Samersoff wrote:

Andreas,

Nice cleanup.

Some generic comments below.

1. It would be excellent if you could split the webrev into two parts, one
part with the type cleanup and the other with the array-truncation-related
changes, or even push these changes separately, as they address different
problems.

Type cleanup could be reviewed quickly, but review of array truncation
requires some time.

(We already have two different CRs: JDK-8129419 for type cleanup and
JDK-8144732 for array truncation)


Yes, that's a good point.




2.
it might be better to use size_t and jlong (or julong) across the whole
file and get rid of all other types, with a few exclusions.


I'll see what I can do, and we can look at it closer once I've split the 
type changes into a separate webrev.



3.
Each assert costs some time in nightly testing, so we probably don't
need so many asserts.


Alright.



4. write_internal:

   There are a couple of cases where ::write writes fewer bytes than
requested and doesn't return -1.
   So we should check whether ::write returns a value less than len and,
if it does, deal with errno - either repeat the entire write
attempt (EINTR/EAGAIN) or abort writing.


Actually, I meant to ask about that, but forgot:
I tried using os::write, which handles EINTR/EAGAIN if necessary 
(depending on platform), but Solaris has an assert that fails:

http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/file/5a42c1dde332/src/os/solaris/vm/os_solaris.cpp#l5692
It checks that os::write is called by a JavaThread that is in_native, 
which we are not, since we are in the VM_thread doing a safepoint_op.
Does anyone know if that assert is really needed? It's only done for 
Solaris.




5. we should handle zero-length array in calculate_array_max_length -
it's a legitimate case


Not sure what you mean here, I believe it does handle zero-length arrays.
It should just fall through without taking any of the if-clauses.
(Or do you mean that I should add a return immediately at the top if 
length is zero?)



6. max_juint is less than SegmentedHeapDumpThreshold so non-segmented
heapdump can't contain huge array and it's better to check for segmented
dump before any other checks.


Correct me if I'm wrong here, but SegmentedHeapDumpThreshold could in 
theory be set so that we have a non-segmented heap dump while still 
having huge arrays.
Not sure if this is worth taking into account, since it most likely 
would lead to other problems as well, and the flag is develop only, so 
it can't happen in product builds.

What do you think would be best to do here?

- Andreas



-Dmitry


On 2015-12-14 18:38, Andreas Eriksson wrote:

Hi,

Please review this fix for dumping of long arrays, and general cleanup
of types in heapDumper.cpp.

Problem:
At several places in heapDumper.cpp overflows could happen when dumping
long arrays.
Also the hprof format uses a u4 as a record length field, but arrays
can be longer than that (counted in bytes).

Fix:
Many types that were previously signed are changed to equivalent
unsigned types and/or to a larger type to prevent overflow.
The bulk of the change though is the addition of
calculate_array_max_length, which for a given array returns the number
of elements we can dump. That length is then used to truncate arrays
that are too long.
Whenever an array is truncated a warning is printed:
Java HotSpot(TM) 64-Bit Server VM warning: cannot dump array of type
object[] with length 1,073,741,823; truncating to length 536,870,908

Much of the rest of the change is moving functions needed by
calculate_array_max_length to the DumpWriter or DumperSupport class so
that they can be accessed.

Added a test that relies on the hprof parser, which also had a couple of
overflow problems (top repo changes).
I've also run this change against a couple of other tests, but they are
problematic in JPRT because they are using large heaps and lots of disk.

Bug:
8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to
copy
https://bugs.openjdk.java.net/browse/JDK-8129419

Also fixed in this change are the problems seen in these two bugs:
8133317: Integer overflow in heapDumper.cpp leads to corrupt HPROF dumps
https://bugs.openjdk.java.net/browse/JDK-8133317

8144732: VM_HeapDumper hits assert with bad dump_len
https://bugs.openjdk.java.net/browse/JDK-8144732

Webrev:
Top repo:
http://cr.openjdk.java.net/~aeriksso/8129419/webrev.00/jdk9-hs-rt/
Hotspot: http://cr.openjdk.java.net/~aeriksso/8129419/webrev.00/hotspot/

Regards,
Andreas






Re: RFR: JDK-8134030: test/serviceability/dcmd/gc/HeapDumpTest fails to verify the dump

2015-11-03 Thread Andreas Eriksson



On 2015-11-03 07:27, David Holmes wrote:

Hi Andreas,

On 31/10/2015 12:03 AM, Andreas Eriksson wrote:

Hi,

Please review this change to heap dump logic.
Including runtime list, since this is related to class loading.

Bug:
JDK-8134030: test/serviceability/dcmd/gc/HeapDumpTest fails to verify
the dump
https://bugs.openjdk.java.net/browse/JDK-8134030

Webrev: http://cr.openjdk.java.net/~aeriksso/8134030/webrev.00/

A heap dump can occur at a point where an InstanceKlass doesn't have a
Java class mirror yet.
To avoid a crash because of this JDK-8131600 [1] excluded classes that
are not linked yet from the dump.

The problem here is that since a class that has been loaded is already
exposed to the Java side, another class can have a reference to the
excluded class.
So, if it has been *loaded* but not *linked* it will be excluded, but
can still be referenced by other classes in the dump.


I'm having trouble seeing exactly when the class is marked as "loaded" 
but it certainly seems to be after the mirror has been set. However 
linking happens even later than that - so if you still see a crash 
excluding linked classes then I fear you will also see problems 
excluding loaded classes!


David
-



Hi,

The crash does not happen when non-linked classes are excluded, sorry if 
that was unclear.


Let me try to explain it again:
The crash was fixed by excluding non-linked classes (in JDK-8131600).
But that excludes some classes that are already referenced in the heap 
dump by other classes.
By changing the exclusion to non-loaded classes, the crash still cannot 
happen, and all classes that are referenced in the dump are included.


Not dumping these classes is only a problem when we try to look at the 
heap dump at a later point (and even then it is only a problem of 
missing information, not a crash).
For example, HeapDumpTest gives the following warning when it tries to 
follow a reference to a class that was excluded from the dump: WARNING: 
Failed to resolve object id 0x6c003d348 [...].
VisualVM on the other hand will ignore the value, and tell the user the 
field was null (which is very misleading).


I hope that made sense.

Thanks,
Andreas




This gives the following warning in HeapDumpTest:
WARNING: Failed to resolve object id 0x6c003d348 [...].

This fix changes heapDumper.cpp to only exclude classes that are not yet
loaded instead.

I'd like confirmation on this from someone who knows more about loaded
vs linked, but I think that this is correct:
When a class has been loaded, we are guaranteed that a Java class mirror
is setup, so the crash seen in JDK-8131600 still cannot happen.
This seems to be confirmed by the manual reproducer from JDK-8131600,
which still succeeds with this fix.

Also in this change is the re-addition of the actual heap dump
verification call in HeapDumpTest, which was accidentally removed when
the test was re-factored.

Regards,
Andreas

[1]: https://bugs.openjdk.java.net/browse/JDK-8131600




RFR: JDK-8134030: test/serviceability/dcmd/gc/HeapDumpTest fails to verify the dump

2015-10-30 Thread Andreas Eriksson

Hi,

Please review this change to heap dump logic.
Including runtime list, since this is related to class loading.

Bug:
JDK-8134030: test/serviceability/dcmd/gc/HeapDumpTest fails to verify 
the dump

https://bugs.openjdk.java.net/browse/JDK-8134030

Webrev: http://cr.openjdk.java.net/~aeriksso/8134030/webrev.00/

A heap dump can occur at a point where an InstanceKlass doesn't have a 
Java class mirror yet.
To avoid a crash because of this JDK-8131600 [1] excluded classes that 
are not linked yet from the dump.


The problem here is that since a class that has been loaded is already 
exposed to the Java side, another class can have a reference to the 
excluded class.
So, if it has been *loaded* but not *linked* it will be excluded, but 
can still be referenced by other classes in the dump.

This gives the following warning in HeapDumpTest:
WARNING: Failed to resolve object id 0x6c003d348 [...].

This fix changes heapDumper.cpp to only exclude classes that are not yet 
loaded instead.


I'd like confirmation on this from someone who knows more about loaded 
vs linked, but I think that this is correct:
When a class has been loaded, we are guaranteed that a Java class mirror 
is setup, so the crash seen in JDK-8131600 still cannot happen.
This seems to be confirmed by the manual reproducer from JDK-8131600, 
which still succeeds with this fix.


Also in this change is the re-addition of the actual heap dump 
verification call in HeapDumpTest, which was accidentally removed when 
the test was re-factored.


Regards,
Andreas

[1]: https://bugs.openjdk.java.net/browse/JDK-8131600


RFR: JDK-8074696: Remote debugging session hangs for several minutes when calling findBootType

2015-10-28 Thread Andreas Eriksson

Hi,

Please review this change to JDI to address a performance bottleneck in 
findBootType for high delay networks.


Bug:
8074696: Remote debugging session hangs for several minutes when calling 
findBootType

https://bugs.openjdk.java.net/browse/JDK-8074696

Webrev:
http://cr.openjdk.java.net/~aeriksso/8074696/webrev.01/

Description:
VirtualMachineImpl.findBootType loops over all loaded classes and does a 
remote call to check if the signature matches.
It will wait for the server response for each class before moving on to 
the next class, thus for many classes and high delay this will take a 
long time.


Solution:
Since we have a signature that should match, use the 
retrieveClassesBySignature command, which only returns matching classes.
At worst we have to loop over the number of active classloaders, since we 
want the class loaded by the boot classloader (the null class loader).


Note:
This bug was split in two, and this is only for the findBootType part.
The more problematic visibleClasses change that was part of a previous 
review will be handled as a separate bug (JDK-8140515).


Regards,
Andreas


RFR: JDK-8074696: Remote debugging session hangs for several minutes

2015-10-21 Thread Andreas Eriksson

Hi,

Please review this change to JDI/JDWP to address two performance 
bottlenecks for high delay networks.


Bug:
8074696: Remote debugging session hangs for several minutes
https://bugs.openjdk.java.net/browse/JDK-8074696

Webrev:
http://cr.openjdk.java.net/~aeriksso/8074696/webrev.00/

This change fixes two problems:

1) VirtualMachineImpl.findBootType loops over all loaded classes and 
does a remote call to check if the signature matches.
It will wait for the server response for each class before moving on to 
the next class, thus for many classes and high delay this will take a 
long time.


Solution:
Since we have a signature that should match, use the 
retrieveClassesBySignature command, which only returns matching classes.
At worst we have to loop over the number of active classloaders, since we 
want the class loaded by the boot classloader (the null class loader).

This is the changes to VirtualMachineImpl.java.

2) ClassLoaderReferenceImpl.visibleClasses requests all the classes 
visible from a specific classloader, and then proceeds to add them to a 
lookup table based on signatures.
The jdwp command for visibleClasses doesn't include signature 
information however, and if the signatures are not cached the client 
will again have to do sequential remote calls to lookup signatures.
Usually, signatures for all classes are loaded and cached at the 
beginning of the debugging session (which can be done by a single 
command), but visibleClasses includes unprepared classes that are not 
part of the initial caching.
So if there are many unprepared classes visibleClasses can spend a lot 
of time doing remote calls to load signatures.


Solution:
Stop including classes that are not prepared. No other commands include 
non-prepared classes (see JDK-4368402), and I think it is an oversight 
that visibleClasses includes them.
This is the changes to ClassLoaderReferenceImpl.c, and they make 
visibleClasses behave like allClasses1 and classesForSignature in 
VirtualMachineImpl.c.


Regards,
Andreas


Jdk8 backports of 6904403 and 8044364

2015-07-10 Thread Andreas Eriksson

Hi,

Just a heads up that I'm backporting these fixes to jdk8:
https://bugs.openjdk.java.net/browse/JDK-6904403: assert(f == 
k->has_finalizer(), "inconsistent has_finalizer") with debug VM
https://bugs.openjdk.java.net/browse/JDK-8044364: [TESTBUG] 
runtime/RedefineFinalizer test fails on windows


Both of them apply cleanly from the jdk9 changesets (so no review 
required).

http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/c597dc3eb862
http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/29d15865d20f

Unless someone objects I'll get them pushed later today.

- Andreas