Split out a G1 allocator class to simplify specialized allocators that associate each allocation with a given context.
Reviewed-by: mgerdin, brutisso
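As a rough illustration of the entry above, the sketch below routes every allocation through an allocator object that carries the current context; all names (SketchG1Allocator, AllocationContext_t, record_allocation) are made up for the example and are not the actual HotSpot code.

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>

    // Illustrative only: route every allocation through an allocator object
    // that remembers which context requested it.
    typedef std::uint8_t AllocationContext_t;

    class SketchG1Allocator {
      AllocationContext_t _current_context;
    public:
      explicit SketchG1Allocator(AllocationContext_t ctx) : _current_context(ctx) {}

      void set_context(AllocationContext_t ctx) { _current_context = ctx; }

      // All allocation requests flow through here, so each one can be
      // associated with the current context (e.g. for per-context regions).
      void* allocate(std::size_t bytes) {
        void* mem = std::malloc(bytes);              // stand-in for region allocation
        record_allocation(mem, bytes, _current_context);
        return mem;
      }
    private:
      void record_allocation(void*, std::size_t, AllocationContext_t) {
        // per-context bookkeeping would go here
      }
    };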
Replace G1CodeRootSet with a hashtable-based implementation and merge the Code Root Migration phase into Code Root Scanning
Reviewed-by: jmasa, brutisso, tschatzl
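A minimal sketch of the hashtable idea for the entry above, assuming a simple chained table keyed on the nmethod pointer; the class name, bucket count, and hash are illustrative, not the real G1CodeRootSet.

    #include <cstddef>
    #include <cstdint>

    struct nmethod;  // stand-in for the compiled-method type

    // Illustrative chained hash table keyed on the nmethod pointer.
    class SketchCodeRootSet {
      struct Node { nmethod* nm; Node* next; };
      static const std::size_t BUCKETS = 256;
      Node* _table[BUCKETS];

      static std::size_t index_for(nmethod* nm) {
        return (reinterpret_cast<std::uintptr_t>(nm) >> 3) % BUCKETS;
      }
    public:
      SketchCodeRootSet() {
        for (std::size_t i = 0; i < BUCKETS; i++) _table[i] = NULL;
      }

      void add(nmethod* nm) {
        std::size_t i = index_for(nm);
        for (Node* n = _table[i]; n != NULL; n = n->next) {
          if (n->nm == nm) return;                   // already present
        }
        Node* node = new Node;                       // prepend a new entry
        node->nm = nm;
        node->next = _table[i];
        _table[i] = node;
      }

      bool contains(nmethod* nm) const {
        for (Node* n = _table[index_for(nm)]; n != NULL; n = n->next) {
          if (n->nm == nm) return true;
        }
        return false;
      }
    };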
The verification code that checked whether no bitmap mark had been found re-read HeapRegion::end() after the check on the bitmap. Concurrent humongous object allocation could have changed HeapRegion::end() in the meantime. Fix this by using the actual end of the region instead of HeapRegion::end() for comparison.
Reviewed-by: brutisso, jmasa
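The pattern of the fix above, in simplified form with stand-in types (Region, MarkBits, and the next_marked_addr signature are assumptions): read end() once and use that snapshot both for the bitmap scan and for the later comparison.

    #include <cassert>

    typedef char* HeapWord;

    // Minimal stand-ins, just enough to show the pattern.
    struct Region {
      HeapWord _bottom, _end;
      HeapWord bottom() const { return _bottom; }
      HeapWord end()    const { return _end; }
    };
    struct MarkBits {
      // Returns the first marked address in [from, limit), or limit if there
      // is none (this signature is an assumption for the sketch).
      HeapWord next_marked_addr(HeapWord from, HeapWord limit) const {
        (void)from;
        return limit;
      }
    };

    void verify_no_marks(const Region* hr, const MarkBits* bm) {
      // Snapshot end() once; a concurrent humongous allocation may move it.
      HeapWord const scan_end = hr->end();
      HeapWord found = bm->next_marked_addr(hr->bottom(), scan_end);
      // Compare against the same snapshot, never against a fresh hr->end() read.
      assert(found == scan_end && "no mark expected below the scanned end");
    }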
Let HeapRegionSeq manage the heap regions and their auxiliary data, reducing the responsibilities of G1CollectedHeap and encapsulating this work away from other code.
Reviewed-by: jwilhelm, jmasa, mgerdin, brutisso
The free list is maintained in sorted order; old and humongous regions are allocated from the bottom of the heap, while young regions are allocated from the top.
Co-authored-by: Staffan Friberg <staffan.friberg@oracle.com>
Reviewed-by: tschatzl, mgerdin
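A minimal sketch of that ordering, assuming the free list is simply kept sorted by region index; the class and method names are illustrative, not the real free-list code.

    #include <set>

    // "Bottom of the heap" is the smallest region index, "top" the largest.
    class SketchFreeRegionList {
      std::set<unsigned> _free;          // ordered set of free region indexes
    public:
      void add(unsigned idx) { _free.insert(idx); }

      // Old and humongous regions come from the bottom of the heap.
      bool remove_from_bottom(unsigned* idx) {
        if (_free.empty()) return false;
        *idx = *_free.begin();
        _free.erase(_free.begin());
        return true;
      }

      // Young regions come from the top of the heap.
      bool remove_from_top(unsigned* idx) {
        if (_free.empty()) return false;
        std::set<unsigned>::iterator it = _free.end();
        --it;
        *idx = *it;
        _free.erase(it);
        return true;
      }
    };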
Take conservative assumptions about the alignment required by the various garbage collector components into account when determining the maximum heap size that supports compressed oops. Using this conservative value avoids several circular dependencies in the calculation.
Reviewed-by: stefank, dholmes
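Roughly, the calculation looks like the sketch below; the 32 MB "conservative alignment" and the variable names are assumptions for illustration, not the real ergonomics code.

    #include <cstdio>

    int main() {
      const unsigned long long object_alignment = 8;                      // bytes
      const unsigned long long oop_encoding_max =
          (4ULL * 1024 * 1024 * 1024) * object_alignment;                 // 32 GB
      // Conservative upper bound on any alignment a collector component
      // might require (e.g. a large heap-region or generation alignment).
      const unsigned long long max_gc_alignment = 32ULL * 1024 * 1024;    // assumed 32 MB

      // Round the limit down to the conservative alignment, so the result is
      // valid no matter which collector and alignment are chosen later.
      unsigned long long max_heap_for_compressed_oops =
          (oop_encoding_max / max_gc_alignment) * max_gc_alignment;

      std::printf("max heap with compressed oops: %llu bytes\n",
                  max_heap_for_compressed_oops);
      return 0;
    }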
Add to each region's RSet a list of the nmethods that contain references into the region. Skip scanning the code cache during root scanning and scan these nmethod lists during RSet scanning instead.
Reviewed-by: tschatzl, brutisso, mgerdin, twisti, kvn
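A small sketch of the shape of the change above, with made-up names: the per-region remembered set carries a list of "strong code roots" (nmethods with references into the region), and RSet scanning walks that list instead of the whole code cache.

    #include <cstddef>
    #include <vector>

    struct nmethod;                        // stand-in for a compiled method

    struct SketchRegionRSet {
      std::vector<nmethod*> strong_code_roots;

      void add_strong_code_root(nmethod* nm) { strong_code_roots.push_back(nm); }

      // Called while scanning the RSet of a collection-set region, instead of
      // walking the whole code cache during external root scanning.
      template <typename NMethodClosure>
      void scan_strong_code_roots(NMethodClosure& cl) const {
        for (std::size_t i = 0; i < strong_code_roots.size(); i++) {
          cl.do_nmethod(strong_code_roots[i]);
        }
      }
    };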
Remove PermGen, allocate metadata in metaspace linked to class loaders, rewrite GC walking, rewrite and rename metadata to be C++ classes
Co-authored-by: Stefan Karlsson <stefan.karlsson@oracle.com>
Co-authored-by: Mikael Gerdin <mikael.gerdin@oracle.com>
Co-authored-by: Tom Rodriguez <tom.rodriguez@oracle.com>
Reviewed-by: jmasa, stefank, never, coleenp, kvn, brutisso, mgerdin, dholmes, jrose, twisti, roland
Removed the HeapRegion::_top_at_conc_mark_count field; it is no longer needed as a result of the changes for 6888336 and 7127706. Refactored the closures that finalize and verify the liveness counting data so that the common functionality is placed in a base class.
Reviewed-by: brutisso, tonyp
Cleanup of the CSet chooser class: standardize on uint for region counts and indexes (instead of int, jint, etc.), make the method / field naming style more consistent, and remove a lot of dead code.
Reviewed-by: johnc, brutisso
Change the type of fields / variables / etc. that represent region counts and indexes from size_t to uint.
Reviewed-by: iveresov, brutisso, jmasa, jwilhelm
Revamp of the mechanism that chooses old regions for inclusion in the CSet. It simplifies the code and introduces min and max bounds on the number of old regions added to the CSet at each mixed GC to avoid pathological cases. It also ensures that when we do a mixed GC we'll always find old regions to add to the CSet (i.e., it eliminates the case where a mixed GC collects no old regions, which can happen today).
Reviewed-by: johnc, brutisso
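A rough sketch of such min/max clamping, with assumed flag values (the real G1 ergonomics and flag names differ): the minimum guarantees progress through the candidate old regions, the maximum caps the old-region work per pause.

    #include <algorithm>

    unsigned old_regions_for_this_mixed_gc(unsigned candidate_old_regions,
                                           unsigned desired_old_regions,
                                           unsigned heap_regions) {
      const unsigned mixed_gc_count_target   = 8;   // drain candidates within ~8 mixed GCs
      const unsigned old_cset_region_percent = 10;  // cap per-GC old work at 10% of the heap

      // Lower bound: always make progress through the candidate list.
      unsigned min_old = (candidate_old_regions + mixed_gc_count_target - 1)
                         / mixed_gc_count_target;
      // Upper bound: keep a single mixed pause from taking on too much old work.
      unsigned max_old = heap_regions * old_cset_region_percent / 100;
      if (max_old < 1) max_old = 1;

      // Clamp the desired count between the two bounds; never return zero
      // while candidates remain, so a mixed GC always collects old regions.
      unsigned chosen = std::max(min_old, std::min(desired_old_regions, max_old));
      return std::min(chosen, candidate_old_regions);
    }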
Remove the separate counting phase of concurrent marking by tracking the number of marked bytes and the cards spanned by marked objects in marking task/worker thread local data structures, which are updated as individual objects are marked.
Reviewed-by: brutisso, tonyp
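A simplified sketch of per-worker counting data for the entry above, with an assumed 512-byte card size and made-up names: each mark updates a local byte count and a card bitmap, so no separate counting pass over the heap is needed.

    #include <cstddef>
    #include <vector>

    struct SketchTaskCountingData {
      static const std::size_t CARD_SHIFT = 9;   // 512-byte cards, assumed
      std::size_t marked_bytes;
      std::vector<bool> card_bm;                 // one bit per card in the heap

      explicit SketchTaskCountingData(std::size_t heap_bytes)
        : marked_bytes(0), card_bm(heap_bytes >> CARD_SHIFT, false) {}

      // Called whenever this worker marks an object: account its size and
      // the cards it spans, instead of doing a separate counting phase later.
      void count_object(std::size_t obj_offset_in_heap, std::size_t obj_size_bytes) {
        marked_bytes += obj_size_bytes;
        std::size_t first_card = obj_offset_in_heap >> CARD_SHIFT;
        std::size_t last_card  = (obj_offset_in_heap + obj_size_bytes - 1) >> CARD_SHIFT;
        for (std::size_t c = first_card; c <= last_card; c++) {
          card_bm[c] = true;
        }
      }
    };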
This change simplifies the interaction between GC and concurrent marking. By disabling survivor spaces during the initial-mark pause we don't need to propagate marks of objects we copy during each GC (since we never need to copy an explicitly marked object).
Reviewed-by: johnc, brutisso
Parallelize the removal of self-forwarding pointers etc. by wrapping the work in a HeapRegion closure, which is then wrapped inside an AbstractGangTask.
Reviewed-by: tonyp, iveresov
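The parallelization pattern above, reduced to a sketch with stand-in names (not the real HeapRegionClosure/AbstractGangTask API): workers claim regions via an atomic counter and apply the per-region closure to whatever they claim.

    #include <atomic>
    #include <cstddef>
    #include <vector>

    struct Region;                                   // stand-in for HeapRegion

    struct RegionClosure {
      virtual void do_region(Region* r) = 0;
      virtual ~RegionClosure() {}
    };

    class ParallelRegionTask {
      std::vector<Region*>& _regions;
      std::atomic<std::size_t> _next;                // next unclaimed region
    public:
      explicit ParallelRegionTask(std::vector<Region*>& regions)
        : _regions(regions), _next(0) {}

      // Each worker thread calls work() with its own closure instance,
      // e.g. one that removes self-forwarding pointers in a region.
      void work(RegionClosure& cl) {
        for (;;) {
          std::size_t i = _next.fetch_add(1);
          if (i >= _regions.size()) break;           // nothing left to claim
          cl.do_region(_regions[i]);
        }
      }
    };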
Parallelize the serial code that was used to mark objects reachable from survivor objects in the collection set. Some minor improvements in the timers used to track the freeing of the collection set, along with some tweaks to PrintGCDetails.
Reviewed-by: tonyp, brutisso
Various cleanups and formatting changes identified during a code walk through of the changes for 6484965 ("G1: piggy-back liveness accounting phase on marking").
Reviewed-by: brutisso, tonyp
7102445: G1: Unnecessary Resource allocations during RSet scanning
Add a new per-worker-thread line to the PrintGCDetails output. GC Worker Other is the difference between the elapsed time of the parallel phase of the evacuation pause and the sum of the times of the sub-phases (external root scanning, mark stack scanning, RSet updating, RSet scanning, object copying, and termination) for that worker. During RSet scanning, stack-allocate DirtyCardToOopClosure objects; allocating them in a resource area caused abnormally high GC Worker Other times while the worker thread freed ResourceArea chunks.
Reviewed-by: tonyp, jwilhelm, brutisso
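The arithmetic behind the new line, as a sketch with assumed field names:

    // "GC Worker Other" is whatever elapsed time the recorded sub-phases
    // do not account for (e.g. freeing ResourceArea chunks).
    struct WorkerPhaseTimes {
      double elapsed_ms;        // whole parallel phase for this worker
      double ext_root_scan_ms;
      double mark_stack_scan_ms;
      double update_rs_ms;
      double scan_rs_ms;
      double obj_copy_ms;
      double termination_ms;

      double worker_other_ms() const {
        double accounted = ext_root_scan_ms + mark_stack_scan_ms + update_rs_ms
                         + scan_rs_ms + obj_copy_ms + termination_ms;
        return elapsed_ms - accounted;
      }
    };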
G1 now uses two reference processors - one is used by concurrent marking and the other is used by STW GCs (both full and incremental evacuation pauses). In an evacuation pause, the reference processor is embedded into the closures used to scan objects. Doing so causes reference objects to be 'discovered' by the reference processor. At the end of the evacuation pause, these discovered reference objects are processed - preserving (and copying) referent objects (and their reachable graphs) as appropriate.
Reviewed-by: ysr, jwilhelm, brutisso, stefank, tonyp
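A very reduced sketch of that flow, with made-up names and a far simpler interface than the real ReferenceProcessor: discovery happens from the scan/copy closures, processing happens once at the end of the pause.

    #include <cstddef>
    #include <vector>

    struct RefObject;                              // stand-in for a java.lang.ref.Reference

    class SketchRefProcessor {
      std::vector<RefObject*> _discovered;
    public:
      // Called from the copy/scan closures during the evacuation pause,
      // instead of treating the referent like a normal strong field.
      void discover_reference(RefObject* ref) { _discovered.push_back(ref); }

      // Called at the end of the pause: decide which referents stay alive
      // and copy them (and what they reach) with the usual copy closure.
      template <typename KeepAliveClosure>
      void process_discovered_references(KeepAliveClosure& keep_alive) {
        for (std::size_t i = 0; i < _discovered.size(); i++) {
          keep_alive.do_reference(_discovered[i]);
        }
        _discovered.clear();
      }
    };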
Refactor the allocation code during GC to use the G1AllocRegion abstraction. Use separate subclasses of G1AllocRegion for survivor and old regions. For the former, avoid BOT updates and dirty the survivor cards incrementally.
Reviewed-by: brutisso, johnc, ysr
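A sketch of the subclass split, purely illustrative (the real G1AllocRegion API is different): the survivor variant skips BOT maintenance and dirties cards incrementally, while the old variant keeps the BOT up to date.

    #include <cstddef>

    class SketchAllocRegion {
    public:
      virtual ~SketchAllocRegion() {}
      // Hook invoked whenever a newly allocated chunk is handed out.
      virtual void note_allocation(void* start, std::size_t bytes) = 0;
    };

    class SurvivorSketchAllocRegion : public SketchAllocRegion {
    public:
      // Survivor regions are fully processed during the next GC anyway, so
      // the sketch skips BOT updates and just dirties the cards incrementally.
      virtual void note_allocation(void* start, std::size_t bytes) {
        dirty_cards_incrementally(start, bytes);
      }
    private:
      void dirty_cards_incrementally(void*, std::size_t) { /* ... */ }
    };

    class OldSketchAllocRegion : public SketchAllocRegion {
    public:
      // Old regions keep the block-offset table up to date eagerly.
      virtual void note_allocation(void* start, std::size_t bytes) {
        update_block_offset_table(start, bytes);
      }
    private:
      void update_block_offset_table(void*, std::size_t) { /* ... */ }
    };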
Perform a heap verification after the first phase of G1's full GC using objects' mark words to determine liveness. The third parameter of the heap verification routines, which was used in G1 to determine which marking bitmap to use in liveness calculations, has been changed from a boolean to an enum with values defined for using the mark word, and the 'prev' and 'next' bitmaps.
Reviewed-by: tonyp, ysr
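Schematically, the change replaces a boolean parameter with something like the enum below; the identifiers here are illustrative rather than the actual HotSpot names.

    enum SketchVerifyOption {
      VerifyUsingMarkWord,     // after the first full-GC phase: liveness is in the mark word
      VerifyUsingPrevBitmap,   // liveness from the 'prev' marking bitmap
      VerifyUsingNextBitmap    // liveness from the 'next' marking bitmap
    };

    struct SketchHeapVerifier {
      void verify(bool silent, bool allow_dirty, SketchVerifyOption vo);
    };

    // Example call after the marking phase of a full GC:
    //   verifier.verify(/* silent */ false, /* allow_dirty */ true, VerifyUsingMarkWord);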
7042285: G1: native memory leak during humongous object allocation
6804436: G1: heap region indices should be size_t
A series of fixes and improvements to the HeapRegionSeq class:
a) replace the _regions growable array with a standard C array,
b) avoid de-allocating / re-allocating HeapRegion instances when the heap shrinks / grows (fix for 7042285),
c) introduce a fast method to map an address to its HeapRegion via a "biased" array pointer,
d) embed the _hrs object in G1CollectedHeap instead of pointing to it via an indirection,
e) assume that all the regions added to the HeapRegionSeq instance are contiguous,
f) replace ints with size_t for indexes (and extend that to HeapRegion as part of 6804436),
g) remove unnecessary / unused methods,
h) rename a couple of fields (_alloc_search_start and _seq_bottom),
i) fix iterate_from() so that it does not always start from index 0 irrespective of the region passed to it,
j) add a verification method to check the HeapRegionSeq assumptions,
k) always call the wrappers for _hrs.iterate(), _hrs.length(), and _hrs.at() from G1CollectedHeap rather than those methods directly, and
l) unify the code that expands the sequence (by either re-using or creating a new HeapRegion) and make it robust with respect to a HeapRegion allocation failing.
Reviewed-by: stefank, johnc, brutisso
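Item c) above is the "biased array" trick; a sketch with assumed names: pre-offsetting the array pointer by the index of the heap's bottom turns address-to-region lookup into a shift plus one array access.

    #include <cstdint>

    struct Region;                                  // stand-in for HeapRegion

    // _regions[i] holds the region with index i, where
    // index = (addr - heap_bottom) >> region_shift. Biasing the array
    // pointer by (heap_bottom >> region_shift) removes the subtraction.
    class SketchRegionMap {
      Region**  _regions;          // one slot per region, starting at heap bottom
      Region**  _regions_biased;   // _regions - (heap_bottom >> _region_shift)
      unsigned  _region_shift;     // log2 of the region size
    public:
      SketchRegionMap(Region** regions, std::uintptr_t heap_bottom, unsigned region_shift)
        : _regions(regions),
          _regions_biased(regions - (heap_bottom >> region_shift)),
          _region_shift(region_shift) {}

      Region* addr_to_region(const void* addr) const {
        return _regions_biased[reinterpret_cast<std::uintptr_t>(addr) >> _region_shift];
      }
    };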