> This is part of changes that try to reduce the size of `nmethod` and `codeblob`
> data vs code in the CodeCache.
> These changes reduced the size of the `nmethod` header from 288 to 232 bytes
> (from 304 to 248 in an optimized VM):
>
> Statistics for 1282 bytecoded nmethods for C2:
> total in heap = 5560352
On Tue, 16 Apr 2024 02:28:14 GMT, Dean Long wrote:
>> Vladimir Kozlov has updated the pull request incrementally with one
>> additional commit since the last revision:
>>
>> Union fields which usages do not overlap
>
> src/hotspot/share/code/nmethod.hpp line 282:
>
>> 280:
On Tue, 16 Apr 2024 02:34:29 GMT, Dean Long wrote:
>> Vladimir Kozlov has updated the pull request incrementally with one
>> additional commit since the last revision:
>>
>> Union fields which usages do not overlap
>
> src/hotspot/share/code/nmethod.hpp line 205:
>
>> 203: // offsets
On Mon, 15 Apr 2024 03:24:07 GMT, Vladimir Kozlov wrote:
>> This is part of changes that try to reduce the size of `nmethod` and `codeblob`
>> data vs code in the CodeCache.
>> These changes reduced the size of the `nmethod` header from 288 to 232 bytes
>> (from 304 to 248 in an optimized VM):
>>
>> Statistics
On Tue, 16 Apr 2024 01:30:50 GMT, Dean Long wrote:
>> Vladimir Kozlov has updated the pull request incrementally with one
>> additional commit since the last revision:
>>
>> Union fields which usages do not overlap
>
> src/hotspot/share/code/codeBlob.cpp line 88:
>
>> 86:
On Mon, 15 Apr 2024 23:18:54 GMT, Alex Menkov wrote:
>> The fix makes VM heap dumping parallel by default.
>> `jcmd GC.heap_dump` and `jmap -dump` had parallel dumping by default; the
>> fix affects `HotSpotDiagnosticMXBean.dumpHeap()`, `-XX:+HeapDumpBeforeFullGC`,
>> `-XX:+HeapDumpAfterFullGC` and `-XX:+HeapDumpOnOutOfMemoryError`.
On Tue, 9 Apr 2024 11:08:31 GMT, Kevin Walls wrote:
> This test incorrectly fails, although rarely, believing that its "thread 2"
> has deadlocked.
> Changing the sleep will likely fix this, but there are other issues, so I am
> cleaning up the test a little.
>
> Remove the probe for the
On Mon, 15 Apr 2024 06:47:24 GMT, Serguei Spitsyn wrote:
> This is the test issue. The `WaitingPT3` thread posted the `MonitorWait`
> event but has not released the `lockCheck` monitor yet. It has been fixed to
>> wait for each `WaitingTask` thread to really reach the `WAITING` state. The
>> same approach is used for `EnteringTask` threads.
>
On Mon, 15 Apr 2024 08:33:25 GMT, Bernhard Urban-Forster wrote:
> do you have numbers on how many transitions are done with your PR vs. the
> current state when running the same program?
With just a simple **java -version** it is ~180 vs ~9500 (new vs old); for
**java -help**, ~1120 vs ~86300.
This is the test issue. The `WaitingPT3` thread posted the `MonitorWait` event
but has not released the `lockCheck` monitor yet. It has been fixed to wait for
each `WaitingTask` thread to really reach the `WAITING` state. The same
approach is used for `EnteringTask` threads. It has been fixed
On Fri, 12 Apr 2024 14:40:05 GMT, Sergey Nazarkin wrote:
> An alternative to preemptively switching the W^X thread mode on macOS with
> an AArch64 CPU. This implementation triggers the switch in response to the
> SIGBUS signal if the *si_addr* belongs to the CodeCache area. With this
>