Commit graph

90 commits

Author SHA1 Message Date
kojix2
550ac2f2ed [DOC] Fix typos 2024-10-31 12:44:50 +09:00
Peter Zhu
990a53825e Remove useless freelist unlock/lock in gc_ref_update 2024-10-25 15:36:35 -04:00
Peter Zhu
5460d5b119 Move error handling for GC.stat_heap to gc.c 2024-10-23 13:18:09 -04:00
Peter Zhu
d3aaca9785 Make rb_gc_impl_stat_heap return a VALUE instead of size_t 2024-10-23 13:18:09 -04:00
Peter Zhu
c0b50d05c7 Move error handling for GC.stat to gc.c 2024-10-23 13:18:09 -04:00
Peter Zhu
9dea0fae25 Make rb_gc_impl_stat return a VALUE instead of size_t 2024-10-23 13:18:09 -04:00
Peter Zhu
c2af84b244 Move error handling for GC.latest_gc_info to gc.c 2024-10-23 13:18:09 -04:00
Peter Zhu
5131fb5dbe Don't clear out flags in rb_gc_obj_free
If there's a crash after rb_gc_obj_free, it's hard to debug because the
flags have been cleared out already.
2024-10-21 12:48:53 -04:00
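A hedged C sketch of the behaviour described in the commit above; mock_obj and mock_obj_free are invented stand-ins for Ruby's object slots and rb_gc_obj_free, not the actual diff:

    #include <stddef.h>
    #include <stdint.h>

    struct mock_obj {
        uintptr_t flags;   /* type and GC bits; valuable when debugging a crash */
        void     *payload;
    };

    static void
    mock_obj_free(struct mock_obj *obj)
    {
        /* ... release resources held by the object ... */
        obj->payload = NULL;
        /* Previously the equivalent of `obj->flags = 0;` ran here; keeping the
           flags intact makes a crash that happens after the free easier to
           diagnose from a core dump. */
    }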
Peter Zhu
3ddaf24cd2 Move object processing in Process.warmup to gc.c 2024-10-18 09:06:46 -04:00
Peter Zhu
3d8fe462df Move return value of rb_gc_impl_config_set to gc.c 2024-10-10 14:34:54 -04:00
Peter Zhu
2bb7cbff30 Directly call rb_gc_impl_writebarrier_unprotect 2024-10-10 09:41:09 -04:00
Peter Zhu
fc40bd7cbd Directly call rb_gc_impl_copy_finalizer 2024-10-10 09:41:09 -04:00
Matt Valentine-House
8e7df4b7c6 Rename size_pool -> heap
Now that we've inlined the eden_heap into the size_pool, we should
rename the size_pool to heap, so that Ruby contains multiple heaps
holding different sized objects.

The term heap, meaning a collection of memory pages, is more standard in
memory management nomenclature, whereas size_pool was a name chosen out
of necessity during the development of the Variable Width Allocation
features of Ruby.

The concept of size pools was introduced to support objects of sizes
other than the default 40 bytes. Each size pool wrapped an eden heap and
a tomb heap, plus some related state, and provided a reasonably simple
way of duplicating all related concerns, giving multiple pools that
shared the same structure but held different objects.

Since then, various changes have happened in Ruby's memory layout:

* The concept of tomb heaps has been replaced by a global free pages list,
  with each page having its slot size reconfigured at the point when it
  is resurrected (see the sketch below)
* The eden heap has been inlined into the size pool itself, so the size
  pool now directly controls the free_pages list, the sweeping page, the
  compaction cursor, and the other state that was previously managed by
  the eden heap.

Now that there is no need for a heap wrapper, we should refer to the
collection of pages containing Ruby objects as a heap again, rather than
a size pool.
2024-10-03 21:20:09 +01:00
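A minimal C sketch of the global free-pages scheme mentioned in the bullet list above: an empty page sits on one global list and has its slot size set for whichever heap resurrects it. All names here are invented for illustration and are not taken from gc/default.c:

    #include <stddef.h>

    struct mock_empty_page {
        struct mock_empty_page *next;
        size_t slot_size;    /* reconfigured when the page is resurrected */
    };

    static struct mock_empty_page *global_empty_pages;  /* single global free list */

    static struct mock_empty_page *
    mock_resurrect_empty_page(size_t wanted_slot_size)
    {
        struct mock_empty_page *page = global_empty_pages;
        if (page == NULL) return NULL;          /* no empty pages available */
        global_empty_pages = page->next;
        page->next = NULL;
        page->slot_size = wanted_slot_size;     /* set for the requesting heap */
        return page;
    }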
Matt Valentine-House
b58a364522 Inline eden_heap into size_pool
After the individual tomb_heaps were removed in favour of a global list
of empty pages, the only instance of rb_heap_t left is the eden_heap
within each size pool.

This PR inlines the heap fields directly into rb_size_pool_t to remove
the indirection and the SIZE_POOL_EDEN_HEAP macro.
2024-10-03 21:20:09 +01:00
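A hedged before/after sketch of the struct change described above, using simplified mock field names rather than the real layout in gc/default.c:

    #include <stddef.h>

    struct mock_heap_page;   /* stand-in for struct heap_page */

    /* Before: the size pool owned an embedded eden heap, reached through a
       macro such as SIZE_POOL_EDEN_HEAP(). */
    struct mock_heap {
        struct mock_heap_page *free_pages;
        struct mock_heap_page *sweeping_page;
        size_t total_slots;
    };

    struct mock_size_pool_before {
        size_t slot_size;
        struct mock_heap eden_heap;
    };

    /* After: the heap fields live directly in the size pool, removing one
       level of indirection and the accessor macro. */
    struct mock_size_pool_after {
        size_t slot_size;
        struct mock_heap_page *free_pages;
        struct mock_heap_page *sweeping_page;
        size_t total_slots;
    };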
Matt Valentine-House
b421964b9d Remove unused macros 2024-10-03 12:49:24 +01:00
Matt Valentine-House
d3e2d23c60 Fix compilation when RGENGC_CHECK_MODE >= 4
The mark_function_data callback was moved from the ractor to the VM.
2024-10-02 20:43:35 +01:00
Peter Zhu
bf8a8820ba Deduplicate RGENGC_CHECK_MODE into gc/gc.h 2024-10-02 11:47:45 -04:00
Peter Zhu
3932d8a87a Replace heap_eden_total_slots with objspace_available_slots 2024-10-01 08:48:51 -04:00
Peter Zhu
30507a4aed Move RUBY_INTERNAL_EVENT_FREEOBJ into GC implementation
Instead of calling rb_gc_event_hook inside rb_gc_obj_free, it should be
up to the GC implementation to fire the event.
2024-09-30 14:23:32 -04:00
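A hedged sketch of the division of responsibility the commit above describes. The mock_* functions are invented stand-ins for the FREEOBJ event hook and rb_gc_obj_free, not Ruby's real signatures:

    typedef unsigned long mock_value;

    /* Stand-in for firing RUBY_INTERNAL_EVENT_FREEOBJ. */
    static void
    mock_freeobj_event_hook(mock_value obj)
    {
        (void)obj;
    }

    /* Stand-in for rb_gc_obj_free: only releases the object's resources. */
    static void
    mock_free_object_slot(mock_value obj)
    {
        (void)obj;
    }

    /* GC-implementation side: it now decides when the event fires, instead
       of the hook being invoked unconditionally in the shared free path. */
    static void
    mock_impl_free(mock_value obj)
    {
        mock_freeobj_event_hook(obj);
        mock_free_object_slot(obj);
    }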
Peter Zhu
2a58092360 Remove unneeded prototype for objspace_available_slots 2024-09-30 12:58:43 -04:00
Peter Zhu
f6dcab5f50 Assert that objects in write barrier are not dead 2024-09-23 10:36:48 -04:00
KJ Tsanaktsidis
02b36f7572 Unpoison page->freelist before trying to assert on it
Otherwise, dereferencing the pointer can cause an ASAN crash, even
though the only reason we're dereferencing it is so that we can assert
on it.
2024-09-23 10:11:54 +10:00
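A small C sketch of the ASAN pattern involved, with mock names in place of Ruby's page and freelist types: unpoison a region just long enough to run an assertion against it, then poison it again:

    #include <assert.h>
    #include <stddef.h>

    #if defined(__SANITIZE_ADDRESS__)
    # include <sanitizer/asan_interface.h>
    # define MOCK_UNPOISON(p, n) __asan_unpoison_memory_region((p), (n))
    # define MOCK_POISON(p, n)   __asan_poison_memory_region((p), (n))
    #else
    # define MOCK_UNPOISON(p, n) ((void)0)
    # define MOCK_POISON(p, n)   ((void)0)
    #endif

    struct mock_free_slot { struct mock_free_slot *next; };

    static void
    mock_assert_freelist_head(struct mock_free_slot *head)
    {
        if (head == NULL) return;
        /* The slot may be poisoned; unpoison it only for the assertion,
           then poison it again so ASAN keeps guarding it. */
        MOCK_UNPOISON(head, sizeof(*head));
        assert(head->next != head);    /* e.g. guard against a trivial cycle */
        MOCK_POISON(head, sizeof(*head));
    }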
Peter Zhu
2882408dcb Remove unneeded function prototype for rb_gc_impl_mark 2024-09-20 10:58:19 -04:00
Peter Zhu
167fba52f0 Remove rb_gc_impl_initial_stress_set 2024-09-19 08:21:10 -04:00
Peter Zhu
5df5eba465 Change rb_gc_impl_get_measure_total_time to return a bool 2024-09-18 10:18:47 -04:00
Peter Zhu
5307c65c76 Make rb_gc_impl_set_measure_total_time return void 2024-09-17 16:35:52 -04:00
Peter Zhu
dc61c7fc7d Rename rb_gc_impl_get_profile_total_time to rb_gc_impl_get_total_time 2024-09-17 15:22:43 -04:00
Peter Zhu
2af080bd30 Change rb_gc_impl_get_profile_total_time to return unsigned long long 2024-09-17 15:22:43 -04:00
Peter Zhu
5de7517bcb Use unsigned long long for marking and sweeping time 2024-09-17 15:22:43 -04:00
Peter Zhu
50d4840bd9 Move desired_compaction_pages_i inside of GC_CAN_COMPILE_COMPACTION
Fixes the following warning on WebAssembly:

    gc/default.c:7306:1: warning: unused function 'desired_compaction_pages_i' [-Wunused-function]
    desired_compaction_pages_i(struct heap_page *page, void *data)
2024-09-16 15:58:27 -04:00
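A sketch of the fix pattern: a static helper only referenced from compaction code is itself compiled only when compaction can be compiled, so platforms such as WebAssembly no longer see an unused function. GC_CAN_COMPILE_COMPACTION is the macro named in the commit; the placeholder default and the mock body are illustrative only:

    #ifndef GC_CAN_COMPILE_COMPACTION
    # define GC_CAN_COMPILE_COMPACTION 1   /* placeholder default */
    #endif

    #if GC_CAN_COMPILE_COMPACTION
    struct mock_heap_page;

    /* Only referenced from compaction code, so it is only compiled when
       compaction itself is; otherwise -Wunused-function would fire. */
    static int
    mock_desired_compaction_pages_i(struct mock_heap_page *page, void *data)
    {
        (void)page;
        (void)data;
        return 0;   /* continue iterating */
    }
    #endif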
Peter Zhu
50564f8882 ASAN unpoison whole heap page after adding to size pool 2024-09-16 09:27:29 -04:00
Peter Zhu
46ba3752c2 Don't return inside of asan_unpoisoning_object 2024-09-16 09:27:29 -04:00
Peter Zhu
c5a782dfb0 Replace with asan_unpoisoning_object 2024-09-16 09:27:29 -04:00
Peter Zhu
0fc8422a05 Move checks for heap traversal to rb_gc_mark_weak
If we are in the middle of a heap traversal, we don't want to call
rb_gc_impl_mark_weak. This commit moves that check from
rb_gc_impl_mark_weak to rb_gc_mark_weak.
2024-09-12 16:03:28 -04:00
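A hedged sketch of moving the check to the caller side, as the commit above describes. The mock_* names stand in for rb_gc_mark_weak and rb_gc_impl_mark_weak; the flag is purely illustrative:

    #include <stdbool.h>

    typedef unsigned long mock_value;

    static bool mock_during_heap_traversal;   /* illustrative flag */

    /* GC-implementation side: can now assume weak marking is valid. */
    static void
    mock_impl_mark_weak(mock_value *ptr)
    {
        (void)ptr;   /* ... record *ptr as a weak reference ... */
    }

    /* gc.c-side wrapper: the traversal check now lives here. */
    static void
    mock_mark_weak(mock_value *ptr)
    {
        if (mock_during_heap_traversal) return;
        mock_impl_mark_weak(ptr);
    }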
Peter Zhu
606db2c423 Move special const checks to rb_gc_mark_weak 2024-09-12 16:03:28 -04:00
Peter Zhu
1205f17125 ASAN unlock freelist in size_pool_add_page 2024-09-09 10:55:18 -04:00
Peter Zhu
f2057277ea ASAN unlock freelist in gc_sweep_step 2024-09-09 10:23:25 -04:00
Peter Zhu
5a502c1873 Add keys to GC.stat and fix tests
This adds the keys heap_empty_pages and heap_allocatable_slots to GC.stat.
2024-09-09 10:15:21 -04:00
Peter Zhu
079ef92b5e Implement global allocatable slots and empty pages
[Bug #20710]

This commit moves allocatable slots and empty pages from per-size-pool
state to global state. This allows size pools to grow globally and
allows empty pages to move between size pools.

For the benchmark in [Bug #20710], this significantly improves performance:

    Before:
        new_env      2.563 (± 0.0%) i/s -     26.000 in  10.226703s
        new_rails_env      0.293 (± 0.0%) i/s -      3.000 in  10.318960s

    After:
        new_env      3.781 (±26.4%) i/s -     37.000 in  10.302374s
        new_rails_env      0.911 (± 0.0%) i/s -      9.000 in  10.049337s

In the headline benchmarks on yjit-bench, we see that performance is
basically on par with before, with ruby-lsp being significantly faster
and activerecord and erubi-rails being slightly slower:

    --------------  -----------  ----------  -----------  ----------  --------------  -------------
    bench           master (ms)  stddev (%)  branch (ms)  stddev (%)  branch 1st itr  master/branch
    activerecord    452.2        0.3         479.4        0.4         0.96            0.94
    chunky-png      1157.0       0.4         1172.8       0.1         0.99            0.99
    erubi-rails     905.4        0.3         967.2        0.4         0.94            0.94
    hexapdf         3566.6       0.6         3553.2       0.3         1.03            1.00
    liquid-c        88.9         0.9         89.0         1.3         0.98            1.00
    liquid-compile  93.4         0.9         89.9         3.5         1.01            1.04
    liquid-render   224.1        0.7         227.1        0.5         1.00            0.99
    lobsters        1052.0       3.5         1067.4       2.1         0.99            0.99
    mail            197.1        0.4         196.5        0.5         0.98            1.00
    psych-load      2960.3       0.1         2988.4       0.8         1.00            0.99
    railsbench      2252.6       0.4         2255.9       0.5         0.99            1.00
    rubocop         262.7        1.4         270.1        1.8         1.02            0.97
    ruby-lsp        275.4        0.5         242.0        0.3         0.97            1.14
    sequel          98.4         0.7         98.3         0.6         1.01            1.00
    --------------  -----------  ----------  -----------  ----------  --------------  -------------
2024-09-09 10:15:21 -04:00
Peter Zhu
de7ac11a09 Replace heap_allocated_pages with rb_darray_size 2024-09-09 10:15:21 -04:00
Peter Zhu
b66d6e48c8 Switch sorted list of pages in the GC to a darray 2024-09-09 10:15:21 -04:00
Peter Zhu
ae84c017d6 Remove unused allocatable_pages field in objspace 2024-09-04 09:29:18 -04:00
Peter Zhu
e7fbdf8187 Fix indentation broken in 53eaa67 [ci skip] 2024-09-03 13:45:54 -04:00
Peter Zhu
53eaa67305 Unpoison the object in rb_gc_impl_garbage_object_p 2024-09-03 13:43:33 -04:00
Peter Zhu
3c63a01295 Move responsibility of heap walking into Ruby
This commit removes the need for the GC implementation to implement
heap walking; instead, Ruby implements it.
2024-09-03 10:05:38 -04:00
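A rough sketch of shared heap walking, with invented types in place of Ruby's page layout: the shared code iterates every slot of a page and invokes a callback, so individual GC implementations no longer need their own walking code. This is an assumption about the shape of the change, not the actual API split:

    #include <stddef.h>

    struct mock_page_span {
        char  *start;        /* first slot in the page */
        size_t slot_size;
        size_t total_slots;
    };

    typedef void (*mock_each_obj_cb)(void *obj, void *data);

    /* Shared side (think gc.c): walk every slot and hand it to the callback. */
    static void
    mock_walk_page(struct mock_page_span *page, mock_each_obj_cb cb, void *data)
    {
        for (size_t i = 0; i < page->total_slots; i++) {
            cb(page->start + i * page->slot_size, data);
        }
    }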
Peter Zhu
6b08a50a62 Move checks for special const for marking
This commit moves the checks for RB_SPECIAL_CONST_P out of the GC
implementation and into gc.c.
2024-08-29 09:11:40 -04:00
Peter Zhu
8c01dec827 Skip assertion in gc/default.c when multi-Ractor
The counter for total allocated objects may not be accurate when there
are multiple Ractors, since it is not incremented atomically and is
therefore subject to race conditions.
2024-08-26 13:25:12 -04:00
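A minimal sketch of the guard described above, with a mock predicate standing in for rb_gc_multi_ractor_p(); the counter check itself is illustrative:

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Stand-in for rb_gc_multi_ractor_p(). */
    static bool
    mock_multi_ractor_p(void)
    {
        return false;
    }

    static void
    mock_check_allocation_counter(size_t counted, size_t recorded)
    {
        /* The counter is incremented without atomics, so it is only
           reliable when a single Ractor is running; skip the check otherwise. */
        if (!mock_multi_ractor_p()) {
            assert(counted == recorded);
        }
    }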
Peter Zhu
1cafc9d51d Use rb_gc_multi_ractor_p in gc/default.c 2024-08-26 13:25:12 -04:00
Peter Zhu
80d457b4b4 Fix object allocation counters in compaction
When we move an object in compaction, we do not decrement the total_freed_objects
of the original size pool or increment the total_allocated_objects of the
new size pool. This means that when this object dies, it will appear as
if the object was never freed from the original size pool, and the new
size pool will record one more free than expected, so the new size pool
could appear to have a negative number of live objects.
2024-08-26 09:40:07 -04:00
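As a hedged illustration of the bookkeeping problem above, here is one way to keep the per-pool counters consistent when an object moves between pools. The field names echo the commit message, but the adjustment shown is an assumption about the shape of the fix, not the actual patch:

    #include <stddef.h>

    struct mock_size_pool {
        size_t total_allocated_objects;
        size_t total_freed_objects;
    };

    /* Account for a moved object as freed in the source pool and allocated
       in the destination pool, so each pool's live-object arithmetic
       (allocated minus freed) stays non-negative. */
    static void
    mock_account_moved_object(struct mock_size_pool *src, struct mock_size_pool *dst)
    {
        if (src != dst) {
            src->total_freed_objects++;
            dst->total_allocated_objects++;
        }
    }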
Peter Zhu
c3dc1322ba Move final_slots_count to per size pool 2024-08-26 09:40:07 -04:00