Under certain circumstances a safepoint could happen between a JavaThread object being created and that object being added to the Java threads list. This could cause the active field of that thread's SATB queue to get out of sync with respect to the other Java threads. The solution is to activate the SATB queue, when necessary, just before adding the thread to the Java threads list rather than when the JavaThread object is created. The changeset also includes a small fix to rename the surrogate locker thread from "Surrogate Locker Thread (CMS)" to "Surrogate Locker Thread (Concurrent GC)", since it is also used by G1.
Reviewed-by: iveresov, ysr, johnc, jcoomes
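For the SATB ordering fix above, here is a minimal standalone sketch using simplified stand-in types (JavaThreadModel, ThreadsListModel, satb_active are illustrative names, not the real HotSpot classes). The point it shows is that the queue's active flag is synchronized with the global SATB state under the same lock that publishes the thread, so no safepoint can interleave between the two steps.

```cpp
// Sketch only: models why the SATB "active" flag must be set at publication
// time, not at thread-construction time.
#include <mutex>
#include <vector>

struct JavaThreadModel {
  bool satb_active = false;   // stands in for the SATB queue's active field
};

struct ThreadsListModel {
  std::mutex lock;                       // stands in for the threads-list lock
  std::vector<JavaThreadModel*> threads;
  bool satb_globally_active = false;     // flipped at concurrent-mark start/end

  // Old, racy ordering (conceptually): the flag was copied from the global
  // state when the thread object was constructed; a safepoint between
  // construction and add() could flip the global state, leaving the new
  // thread out of sync with every other Java thread.

  // Fixed ordering: read the global state and publish the thread under the
  // same lock, so the two steps cannot be separated by a safepoint.
  void add(JavaThreadModel* t) {
    std::lock_guard<std::mutex> g(lock);
    t->satb_active = satb_globally_active;  // activate the queue "when necessary"
    threads.push_back(t);                   // thread becomes visible to iterators
  }
};
```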
G1 was not handling explicit GCs correctly in many ways. It does now. See the CR for the list of improvements contained in this changeset.
Reviewed-by: iveresov, ysr, johnc
Autonomic per-worker free block cache sizing, tunable coalition policies, fixes to per-size block statistics, retuned gain and bandwidth of some feedback loop filters to allow quicker reactivity to abrupt changes in ambient demand, and other heuristics to reduce fragmentation of the CMS old gen. Also tightened some assertions, including those related to locking.
Reviewed-by: jmasa
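The "gain and bandwidth of some feedback loop filters" wording above refers to exponentially-weighted estimators of demand. The sketch below is an illustrative filter of that general kind (ExpSmoothedDemand is a made-up name, not the CMS code), showing how a larger gain makes the estimate track abrupt changes in demand more quickly.

```cpp
// Illustrative only: a first-order exponentially-weighted filter. The gain
// controls how quickly the estimate follows abrupt changes in observed
// demand; a higher gain reacts faster at the cost of passing more noise.
struct ExpSmoothedDemand {
  double estimate;
  double gain;    // in (0, 1]; higher == more reactive, lower == smoother
  explicit ExpSmoothedDemand(double g, double initial = 0.0)
      : estimate(initial), gain(g) {}
  double sample(double observed) {
    estimate += gain * (observed - estimate);   // move a fraction of the error
    return estimate;
  }
};
```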
Treat ProfileData in MDOs as a source of weak, not strong, roots. This fixes the bug for stop-world collection; the concurrent collection case will be fixed separately.
Reviewed-by: jcoomes, jmasa, kvn, never
Remove the concurrent overhead tracker from G1, along with the GC overhead reporter and G1AccountConcurrentOverhead (both of which rely on the concurrent overhead tracker).
Reviewed-by: iveresov, johnc
Call the newly created CollectedHeap::dump_{pre,post}_full_gc before and after every stop-world full collection cycle on GenCollectedHeap and ParallelScavengeHeap. (Support for G1CollectedHeap is forthcoming under CR 6810861.) Small modifications to the existing heap dumping and class histogram implementation, especially to allow multiple on-the-fly histograms/dumps by the VM thread during a single safepoint.
Reviewed-by: jmasa, alanb, mchung
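A hedged sketch of the hook pattern described above: the method names follow the summary's dump_{pre,post}_full_gc wording, but the surrounding class and the bodies are illustrative rather than the actual HotSpot implementation.

```cpp
#include <cstdio>

// Illustrative model of a collected heap that brackets every stop-world full
// collection with pre/post dump hooks (a heap dump and/or class histogram in
// the real change, driven by the corresponding -XX flags).
class CollectedHeapModel {
 public:
  void dump_pre_full_gc()  { std::puts("heap dump / class histogram before full GC"); }
  void dump_post_full_gc() { std::puts("heap dump / class histogram after full GC"); }

  void do_full_collection() {
    dump_pre_full_gc();    // hook runs before the collection, at the safepoint
    collect();             // the actual full collection
    dump_post_full_gc();   // hook runs after the collection, same safepoint
  }

 private:
  void collect() { /* collection work elided */ }
};
```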
In os::Linux::rebuild_cpu_to_node_map(), fix the size of the CPU bitmap. Also fix arithmetic in MutableNUMASpace::adaptive_chunk_size() that could cause overflow and underflow of the chunk_size variable.
Reviewed-by: apetrusenko
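The exact expression in MutableNUMASpace::adaptive_chunk_size() is not reproduced here; the standalone sketch below (with made-up parameter names) only shows the class of bug, size_t arithmetic that silently wraps, and the usual clamp-before-subtracting fix.

```cpp
#include <cstddef>

// Buggy pattern: if pages_needed > pages_free the unsigned subtraction wraps
// around, yielding an enormous chunk size instead of a small (or zero) one.
size_t chunk_size_buggy(size_t pages_free, size_t pages_needed, size_t page_size) {
  return (pages_free - pages_needed) * page_size;
}

// Fixed pattern: clamp before subtracting so the subtraction can never wrap.
size_t chunk_size_fixed(size_t pages_free, size_t pages_needed, size_t page_size) {
  size_t surplus = (pages_free > pages_needed) ? (pages_free - pages_needed) : 0;
  return surplus * page_size;
}
```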
The per-lgrp chunk size can be computed incorrectly (causing an assertion failure) because of the non-associativity of floating-point operations. The fix is to rearrange the operations.
Reviewed-by: ysr
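A self-contained demonstration of the root cause named above: floating-point addition is not associative, so the same values summed in a different grouping can differ by one ulp and trip an exact-equality assertion. The chunk-size expression itself is not reproduced here.

```cpp
#include <cstdio>

int main() {
  double a = 0.1, b = 0.2, c = 0.3;
  double left  = (a + b) + c;   // one grouping of the same three terms
  double right = a + (b + c);   // the other grouping
  // On IEEE-754 doubles the two results differ by one ulp, so an exact
  // comparison (as in an assert) fails even though both print as roughly 0.6.
  std::printf("(a+b)+c = %.17g\na+(b+c) = %.17g\nequal? %d\n",
              left, right, left == right);
  return 0;
}
```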
6723229: NUMA allocator: assert(lgrp_num > 0, "There should be at least one locality group")
The fix takes care of the assertion triggered during TLAB resizing after a reconfiguration. It also handles a defect in the topology graph in which a single leaf node has no memory.
Reviewed-by: jmasa
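A hypothetical sketch of the topology-defect handling described above: when collecting locality groups, skip leaf nodes that report no memory, so the resulting group count (the lgrp_num the assertion checks) reflects only groups that allocation can actually use. LgrpModel and usable_lgrps are illustrative names, not HotSpot code.

```cpp
#include <cstddef>
#include <vector>

struct LgrpModel {
  int id;
  size_t memory_bytes;   // 0 for a defective leaf with no memory behind it
};

// Keep only locality groups that actually have memory; a defective topology in
// which a leaf node has no memory must not contribute an unusable group.
std::vector<int> usable_lgrps(const std::vector<LgrpModel>& leaves) {
  std::vector<int> ids;
  for (const LgrpModel& l : leaves) {
    if (l.memory_bytes > 0) {
      ids.push_back(l.id);
    }
  }
  return ids;
}
```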