Hi

Please find the new webrev here:
http://cr.openjdk.java.net/~lmesnik/8203491/webrev.01/ 
and the incremental diff here:
http://cr.openjdk.java.net/~lmesnik/8203491/webrev.01-00/ 

I don't know whether the existing 64m limit is enough to produce a lot of 
classes. However, this size was used in the original test, so I kept the same 
value in the new test TestJmapCoreMetaspace.java.
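
Just to make the intent concrete, the rough shape of the class-producing loop 
is something like the sketch below. This is a hypothetical, self-contained 
example only; the real test relies on GeneratedClassProducer, and the class 
and names here are made up:

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: run with -XX:MaxMetaspaceSize=64m. Each fresh
    // class loader re-defines Dummy, so every iteration adds new class
    // metadata until metaspace is exhausted.
    public class MetaspaceFiller {
        public static class Dummy { }

        public static void main(String[] args) throws Exception {
            URL cp = MetaspaceFiller.class.getProtectionDomain()
                                          .getCodeSource().getLocation();
            List<Class<?>> keepAlive = new ArrayList<>();
            try {
                while (true) {
                    // Parent is null, so Dummy is loaded by this loader
                    // instead of being delegated to the application loader.
                    URLClassLoader loader = new URLClassLoader(new URL[] { cp }, null);
                    keepAlive.add(loader.loadClass("MetaspaceFiller$Dummy"));
                }
            } catch (OutOfMemoryError oome) {
                System.out.println("Metaspace full after " + keepAlive.size() + " classes");
                // This is the point where the real test triggers the core dump.
            }
        }
    }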

I also fixed the comments and the import, and set the timeout to 240.
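
(The timeout goes into the jtreg @run tag; the exact line and options in the 
webrev may differ, but it is of this form:

    /*
     * @run main/othervm/timeout=240 TestJmapCoreMetaspace
     */
)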

Leonid

> On May 24, 2018, at 9:16 AM, Jini George <jini.geo...@oracle.com> wrote:
> 
> Hi Leonid,
> 
> My comments inline.
> 
> On 5/24/2018 12:09 AM, Leonid Mesnik wrote:
> 
>> I am not sure that JMapMetaspaceCore provides any additional coverage. The 
>> test just fills 64M of metaspace and then sends a signal to dump core, so I 
>> don't see how this test could improve coverage.
>> I think the idea of the original test was to fill PermGen, like the heap, to 
>> expand it as much as possible, or it was just an analog of the test 
>> OnOOMToFileMetaspace. The current test only fills a tightly limited metaspace, 
>> so the number of classes does not seem to be significantly larger than for the 
>> current TestJmapCore.java test. From my point of view it would make sense to 
>> generate a dump containing a lot of loaded classes or some very large classes, 
>> while the current test looks pretty similar to the existing TestJmapCore.java test.  
>> Please let me know if you see a test scenario where such a test could be 
>> useful.
> 
> From what I can make out, EatMemory with -metaspace would create a lot of 
> loaded classes with GeneratedClassProducer. And this could provide some good 
> testing for writeClassDumpRecords() of HeapHprofBinWriter. Let me know if you 
> think I have overlooked something.
> 
> 
>>> * You might want to increase the timeout factor for this test. The test 
>>> times out on my machine.
>>> 
>>> 
>> I see that the test finishes within 1 minute in our lab; it takes about 
>> 30 seconds on a 2-CPU Oracle Linux VM with a 2GB Java heap. The test just fails 
>> with JDK-8176557 when I increase the heap.
>> How much time does it take on your machine? The timeoutFactor might be used for 
>> atypical environments/command-line options.
> 
> It took about 130 secs a couple of times. Don't know if it was an anomaly.
> 
>>> * You might want to consider removing the corefile and the heapdump files 
>>> after the test execution (in the cases where the test passes).
>>> 
>> The default jtreg retain policy in the make files just removes all files in the test 
>> directory for passed tests. The jtreg default retain policy says:
>>  "If -retain is not specified, only the files from the last test executed 
>> will be retained".
>>  So it should not be a problem in most cases. On the other hand, if the test deleted 
>> them, there would be no way for the user to retain the core/heapdump files even if 
>> they wanted to keep them.
> 
> Ok.
> 
>> However, if it is the common practice for SA tests to delete such artifacts, then 
>> I could remove the heap/core dumps.
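
If we do decide that SA tests should clean up on success, the change would be 
tiny; roughly the following (a hypothetical sketch only, and the actual file 
names depend on how the test and the JVM name the core file and the heap dump):

    // Hypothetical cleanup for the passing case; "core" and "heap.bin"
    // are placeholders for the files the test actually produces.
    java.nio.file.Files.deleteIfExists(java.nio.file.Paths.get("core"));
    java.nio.file.Files.deleteIfExists(java.nio.file.Paths.get("heap.bin"));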
>>> 
>>> One suggestion is to check if /proc/sys/kernel/core_pattern has a line 
>>> starting with '|' and print a warning that a crash reporting tool might be 
>>> used (Something like in ClhsdbCDSCore.java). But it is just a suggestion 
>>> and you are free to ignore it. In due course, we could include this test 
>>> also as a part of the consolidation of SA's corefile testing effort 
>>> (https://bugs.openjdk.java.net/browse/JDK-8202297).
>>> 
>> I would prefer to leave this improvement for JDK-8202297. I think good core 
>> dump processing/ulimit handling requires more effort and testing on different 
>> versions of Linux (maybe even on non-Oracle platforms).
>> Also, the logic in the test ClhsdbCDSCore.java is slightly different: it tries to get 
>> the possible core location from the hs_err file and prints this hint only when the 
>> core file from hs_err does not exist. Printing this hint when core dumps are simply 
>> disabled altogether might just confuse users.
> Sounds fine.
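
For reference, the check suggested above would look roughly like this (a 
sketch only, using plain java.nio.file; ClhsdbCDSCore.java itself does 
something more involved with the hs_err hint):

    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Sketch: warn when cores are piped to a crash-reporting tool.
    String corePattern = new String(
            Files.readAllBytes(Paths.get("/proc/sys/kernel/core_pattern"))).trim();
    if (corePattern.startsWith("|")) {
        System.out.println("Warning: core_pattern pipes cores to '" + corePattern
            + "'; a crash reporting tool may intercept the core file.");
    }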
> 
> Thanks,
> Jini.
