7023069: G1: Introduce symmetric locking in the slow allocation path

7023151: G1: refactor the code that operates on _cur_alloc_region to be re-used for allocs by the GC threads
7018286: G1: humongous allocation attempts should take the GC locker into account

First, this change replaces the asymmetric locking scheme in the G1 slow alloc path with a symmetric one. Second, it factors out the code that operates on _cur_alloc_region so that it can be re-used for allocations by the GC threads in the future.

Reviewed-by: stefank, brutisso, johnc
Antonios Printezis 2011-03-30 10:26:59 -04:00
parent 349d820dd1
commit 3e9fe24ddd
11 changed files with 920 additions and 747 deletions
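For illustration only, the sketch below shows what a "symmetric" slow allocation path looks like in general: every allocating thread that misses the fast path takes the same lock and runs the same retry code, rather than some threads going through a different path or lock. All names here (SymmetricAllocator, _heap_lock, expand) are hypothetical and do not come from the G1 sources.

#include <cstddef>
#include <mutex>

// Hypothetical sketch of a symmetric slow allocation path; not HotSpot code.
class SymmetricAllocator {
  std::mutex _heap_lock;   // the one lock every slow-path caller takes
  char*      _top;
  char*      _end;

  // Plain bump-pointer attempt; only called with _heap_lock held.
  char* try_alloc_locked(size_t bytes) {
    if (_top + bytes > _end) return nullptr;
    char* res = _top;
    _top += bytes;
    return res;
  }

  // Stand-in for growing the space or scheduling a collection.
  bool expand(size_t /*bytes*/) { return false; }

public:
  SymmetricAllocator(char* start, char* end) : _top(start), _end(end) {}

  char* allocate_slow(size_t bytes) {
    // "Symmetric": no designated leader thread and no alternate locking
    // order; every allocating thread serializes here and retries.
    std::lock_guard<std::mutex> x(_heap_lock);
    char* res = try_alloc_locked(bytes);
    if (res == nullptr && expand(bytes)) {
      res = try_alloc_locked(bytes);
    }
    return res;
  }
};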


@@ -38,15 +38,8 @@ inline HeapWord* G1OffsetTableContigSpace::allocate(size_t size) {
 // this is used for larger LAB allocations only.
 inline HeapWord* G1OffsetTableContigSpace::par_allocate(size_t size) {
   MutexLocker x(&_par_alloc_lock);
-  // This ought to be just "allocate", because of the lock above, but that
-  // ContiguousSpace::allocate asserts that either the allocating thread
-  // holds the heap lock or it is the VM thread and we're at a safepoint.
-  // The best I (dld) could figure was to put a field in ContiguousSpace
-  // meaning "locking at safepoint taken care of", and set/reset that
-  // here. But this will do for now, especially in light of the comment
-  // above. Perhaps in the future some lock-free manner of keeping the
-  // coordination.
-  HeapWord* res = ContiguousSpace::par_allocate(size);
+  // Given that we take the lock no need to use par_allocate() here.
+  HeapWord* res = ContiguousSpace::allocate(size);
   if (res != NULL) {
     _offsets.alloc_block(res, size);
   }