Commit graph

2250 commits

Yusuke Endoh
ef64ab917e gc.c: Increase STACKFRAME_FOR_CALL_CFUNC
On macOS Mojave, the child process invoked in TestFiber#test_stack_size
gets stuck because the stack overflow detection is too late.
(ko1 figured out the mechanism of the failure.)

This change attempts to detect stack overflow earlier.
2019-08-09 17:31:19 +09:00
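The enlarged margin matters because Ruby's stack-overflow detection must fire before the OS stack is actually exhausted, so the interpreter can raise a catchable SystemStackError instead of crashing. A minimal sketch of the observable behavior (the method name `deep_recursion` is hypothetical):

```ruby
# Ruby converts native stack exhaustion into a catchable SystemStackError,
# provided the overflow check fires early enough -- the larger
# STACKFRAME_FOR_CALL_CFUNC margin is what buys that headroom.
def deep_recursion(n = 0)
  deep_recursion(n + 1)
end

overflowed =
  begin
    deep_recursion
    false
  rescue SystemStackError
    true
  end
```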
Nobuyoshi Nakada
a04e3585d3
Extracted wmap_live_p 2019-08-06 23:00:29 +09:00
Aaron Patterson
81252c5ccd
Let prev EP move again
The last time we committed this, we were asking the VM to write to the
ep.  But VM assertions check that the ENV data is the correct type, and
a T_MOVED pointer is not the correct type, so the VM assertions would
fail.  This time we write to it directly from the GC, which bypasses
the VM assertion checks.
2019-08-05 13:31:58 -07:00
git
c9192ef2e8 * expand tabs. 2019-08-06 00:56:05 +09:00
Aaron Patterson
33d7a58ffb
add compaction support to weak maps 2019-08-05 08:55:34 -07:00
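ObjectSpace::WeakMap is the structure gaining compaction support here: after this commit its internal key/value tables are updated when GC.compact relocates entries. A hedged usage sketch:

```ruby
# A WeakMap holds references weakly: the map alone does not keep an entry
# alive.  With compaction support, GC.compact may relocate key or val and
# the map's internal tables are rewritten to the new addresses.
map = ObjectSpace::WeakMap.new
key = Object.new
val = Object.new
map[key] = val
fetched = map[key]   # entry is live while key/val are referenced here
```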
Koichi Sasada
e03b3b4ae0 add debug_counters to check details.
add debug_counters to check the Hash object statistics.
2019-08-02 15:59:47 +09:00
Nobuyoshi Nakada
8b162ce9d1
Fix assertion failure when VM_CHECK_MODE
Some VM frames (dummy and top pushed by `rb_vm_call_cfunc`) has
iseq but has no pc.
2019-08-01 20:55:03 +09:00
卜部昌平
5d33f78716 fix tracepoint + backtrace SEGV
PC modification in gc_event_hook_body was careless.  There are (so
to say) abnormal iseqs stored in the cfp.  We have to check sanity
before we touch the PC.

This has not been fixed before because there was no way to (ab)use the
setup from pure Ruby.  However, by using our official C APIs it is
possible to touch such frames, resulting in SEGV.

Fixes [Bug #14834].
2019-08-01 16:00:59 +09:00
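The crash required C-level access (the GC internal events are not exposed to Ruby's TracePoint), but the general tracepoint-plus-backtrace combination it hardens looks like this from pure Ruby (`sample_method` is a hypothetical name):

```ruby
calls = []
tp = TracePoint.new(:call) do |t|
  # Taking a backtrace inside a hook is exactly the kind of PC-sensitive
  # operation the fix makes safe at the C level.
  calls << [t.method_id, caller.length]
end

def sample_method; end

tp.enable { sample_method }
```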
Aaron Patterson
5ad2dfd8dc
Revert "Let prev EP move"
This reverts commit e352445863.

This is breaking CI and I'm not sure why yet, so I'll revert for now.
2019-07-31 10:34:23 -07:00
Aaron Patterson
e352445863
Let prev EP move
This commit allows the previous EP pointer to move, then updates its
location.
2019-07-31 09:42:43 -07:00
Koichi Sasada
82b02c131e pass to obj_info().
obj_info() has a routine to show SPECIAL_CONST_P() objects so
we don't need to check it here.
2019-07-26 11:45:25 +09:00
Lourens Naudé
90c4bd2d2b
Let memory sizes of the various IMEMO object types be reflected correctly
[Feature #15805]

Closes: https://github.com/ruby/ruby/pull/2140
2019-07-23 16:22:34 +09:00
Jeremy Evans
01995df645 Document BasicObject does not implement #object_id and #send [ci skip]
Fixes [Bug #10422]
2019-07-22 15:07:22 -07:00
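The documented gap is easy to verify: BasicObject ships only the underscored aliases `__send__` and `__id__`, while `send` and `object_id` come from Kernel/Object. A quick check:

```ruby
# BasicObject's own instance methods omit #object_id and #send;
# only the underscored aliases are guaranteed.
methods = BasicObject.instance_methods(false)

has_object_id = methods.include?(:object_id)                              # false
has_send      = methods.include?(:send)                                   # false
has_aliases   = methods.include?(:__send__) && methods.include?(:__id__)  # true
```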
Koichi Sasada
f75561b8d4 constify RHash::ifnone.
RHash::ifnone should be protected by write barriers, so this field
should be const. However, to introduce GC.compact, the const was
removed. This commit reverts that removal of `const` and modifies
gc.c's `TYPED_UPDATE_IF_MOVED` to cast the `const` away forcibly.
2019-07-22 17:01:31 +09:00
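`RHash::ifnone` is where a Hash stores the default value passed to `Hash.new`; from Ruby it surfaces as `Hash#default`. A minimal illustration:

```ruby
h = Hash.new(0)        # 0 lands in the hash's ifnone slot
missing  = h[:absent]  # served from ifnone instead of the table
explicit = h.default
```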
Aaron Patterson
d304f77c58
Only disable GC around reference updating
This is the only place that can change the size of the object id tables
and cause a GC.
2019-07-19 15:12:50 -07:00
Koichi Sasada
fba3e76e3f fix debug counter for Hash counts.
Change debug_counters for Hash object counts:

* obj_hash_under4 (1-3) -> obj_hash_1_4 (1-4)
* obj_hash_ge4 (4-7)    -> obj_hash_5_8 (5-8)
* obj_hash_ge8 (>=8)    -> obj_hash_g8  (> 8)

For example on rdoc benchmark:

[RUBY_DEBUG_COUNTER]    obj_hash_empty                         554,900
[RUBY_DEBUG_COUNTER]    obj_hash_under4                        572,998
[RUBY_DEBUG_COUNTER]    obj_hash_ge4                             1,825
[RUBY_DEBUG_COUNTER]    obj_hash_ge8                             2,344
[RUBY_DEBUG_COUNTER]    obj_hash_empty                         553,097
[RUBY_DEBUG_COUNTER]    obj_hash_1_4                           571,880
[RUBY_DEBUG_COUNTER]    obj_hash_5_8                               982
[RUBY_DEBUG_COUNTER]    obj_hash_g8                              2,189
2019-07-19 16:24:14 +09:00
Koichi Sasada
182ae1407b fix shared array terminology.
Shared arrays created by Array#dup and so on point to a shared_root
object that manages the lifetime of the Array buffer.
However, the shared_root is sometimes called just "shared", which
is confusing. So I fixed this wording from "shared" to "shared_root".

* RArray::heap::aux::shared -> RArray::heap::aux::shared_root
* ARY_SHARED() -> ARY_SHARED_ROOT()
* ARY_SHARED_NUM() -> ARY_SHARED_ROOT_REFCNT()

Also, add some debug_counters to count shared array objects.

* ary_shared_create: shared ary created by Array#dup and so on.
* ary_shared: finished while shared.
* ary_shared_root_occupied: shared_root but has only 1 refcnt.
  The number (ary_shared - ary_shared_root_occupied) is meaningful.
2019-07-19 13:07:59 +09:00
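The copy-on-write behavior behind shared_root is invisible from Ruby except through its semantics: a dup of a long array initially shares the buffer, and the first write un-shares it. A sketch (the buffer sharing itself is a CRuby implementation detail):

```ruby
a = Array.new(1000) { |i| i }
b = a.dup       # for a long array, b initially points at a's buffer
                # through a shared_root that refcounts the storage
b << :extra     # first write copies: b gets a private buffer
a_untouched = (a.length == 1000 && !a.include?(:extra))
```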
Koichi Sasada
f326b4f4af simplify around GC_ASSERT() 2019-07-15 10:39:57 +09:00
Samuel Williams
5c8061a9e2
Make stack_check slightly easier to use in debugger. 2019-07-12 11:56:51 +12:00
Nobuyoshi Nakada
26d674fdc7
Suppress warning on x64-mingw 2019-07-11 11:58:35 +09:00
Aaron Patterson
12762b76cb
Don't manipulate GC flags directly
We need to disable the GC around compaction (for now) because object id
bookkeeping can cause malloc to happen, and that can trigger GC.
2019-07-10 11:12:28 -05:00
Nobuyoshi Nakada
23c92b6f82
Revert self-referencing finalizer warning [Feature #15974]
It has caused CI failures.

* d0cd0866d8

  Disable GC during rb_objspace_reachable_object_p

* 89cef1c56b

  Version guard for [Feature #15974]

* 796eeb6339.

  Fix up [Feature #15974]

* 928260c2a6.

  Warn in verbose mode on defining a finalizer that captures the object
2019-07-04 04:01:06 +09:00
git
c62aac1086 * expand tabs. 2019-07-04 01:04:44 +09:00
Nobuyoshi Nakada
d0cd0866d8
Disable GC during rb_objspace_reachable_object_p
Try to fix CI breakage by [Feature #15974].
2019-07-04 00:58:52 +09:00
Nobuyoshi Nakada
9f1d67a68f
Renamed to rb_objspace_reachable_object_p 2019-07-03 23:52:52 +09:00
Aaron Patterson
6bd49b33c8
Ensure that GC is disabled during compaction
Various things can cause GC to occur when compaction is running, for
example resizing the object identity map:

```
    frame #24: 0x000000010c784a10 ruby`gc_grey [inlined] push_mark_stack(stack=<unavailable>, data=<unavailable>) at gc.c:4311:42
    frame #25: 0x000000010c7849ff ruby`gc_grey(objspace=0x00007fc56c804400, obj=140485906037400) at gc.c:4907
    frame #26: 0x000000010c78f881 ruby`gc_start at gc.c:6464:8
    frame #27: 0x000000010c78f5d1 ruby`gc_start [inlined] gc_marks_start(objspace=0x00007fc56c804400, full_mark=<unavailable>) at gc.c:6009
    frame #28: 0x000000010c78f3c0 ruby`gc_start at gc.c:6291
    frame #29: 0x000000010c78f399 ruby`gc_start(objspace=0x00007fc56c804400, reason=<unavailable>) at gc.c:7104
    frame #30: 0x000000010c78930c ruby`objspace_xmalloc0 [inlined] objspace_malloc_fixup(objspace=<unavailable>, mem=0x000000011372a000, size=<unavailable>) at gc.c:9665:5
    frame #31: 0x000000010c7892f5 ruby`objspace_xmalloc0(objspace=0x00007fc56c804400, size=12582912) at gc.c:9707
    frame #32: 0x000000010c89bc13 ruby`st_init_table_with_size(type=<unavailable>, size=<unavailable>) at st.c:605:39
    frame #33: 0x000000010c89c5e2 ruby`rebuild_table_if_necessary [inlined] rebuild_table(tab=0x00007fc56c40b250) at st.c:780:19
    frame #34: 0x000000010c89c5ac ruby`rebuild_table_if_necessary(tab=0x00007fc56c40b250) at st.c:1142
    frame #35: 0x000000010c89c379 ruby`st_insert(tab=0x00007fc56c40b250, key=140486132605040, value=140485922918920) at st.c:1161:5
    frame #36: 0x000000010c794a16 ruby`gc_compact_heap [inlined] gc_move(objspace=0x00007fc56c804400, scan=<unavailable>, free=<unavailable>, moved_list=140485922918960) at gc.c:7441:9
    frame #37: 0x000000010c794917 ruby`gc_compact_heap(objspace=0x00007fc56c804400, comparator=<unavailable>) at gc.c:7695
    frame #38: 0x000000010c79410d ruby`gc_compact [inlined] gc_compact_after_gc(objspace=0x00007fc56c804400, use_toward_empty=1, use_double_pages=<unavailable>, use_verifier=1) at gc.c:0:22
```

We *definitely* need the heap to be in a consistent state during
compaction, so this commit sets the current state to "during_gc" so that
nothing will trigger a GC until the heap finishes compacting.

This fixes the bug we saw when running the tests for https://github.com/ruby/ruby/pull/2264
2019-07-03 14:45:50 +01:00
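Setting "during_gc" internally is the C-level analogue of the GC.disable/GC.enable dance available from Ruby, where allocation during the critical section grows the heap rather than triggering a collection. A sketch:

```ruby
was_disabled = GC.disable   # returns the previous disabled state
begin
  # Work that must not be interrupted by GC; allocations still succeed,
  # they just grow the heap instead of collecting.
  scratch = Array.new(10) { Object.new }
ensure
  GC.enable unless was_disabled
end
```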
git
9f26242411 * expand tabs. 2019-07-03 04:26:53 +09:00
Nobuyoshi Nakada
796eeb6339
Fix up [Feature #15974]
* Fixed warning condition
* Fixed function signature
* Use ident hash
2019-07-03 04:22:41 +09:00
Chris Seaton
928260c2a6
Warn in verbose mode on defining a finalizer that captures the object
[Feature #15974]

Closes: https://github.com/ruby/ruby/pull/2264
2019-07-03 04:05:22 +09:00
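The pattern the warning targets: a finalizer proc that closes over the object it is attached to keeps that object reachable, so it can never be collected and the finalizer never runs. The conventional fix is to build the proc in a scope that cannot see the object (class and method names here are hypothetical):

```ruby
class Resource
  def self.finalizer(name)
    # Built in a scope with no reference to the Resource instance,
    # so attaching it does not pin the object.
    proc { |object_id| "released #{name}" }
  end
end

r = Resource.new
# Bad (what the commit warned about): proc { r } captures r itself.
# Good:
ObjectSpace.define_finalizer(r, Resource.finalizer("demo"))
```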
Nobuyoshi Nakada
f3c81b4e90
Frozen objects in WeakMap
* gc.c (wmap_aset): bypass the frozen check and allow frozen
  objects in WeakMap.  [Bug #13498]
2019-06-23 00:31:16 +09:00
Nobuyoshi Nakada
ab6d8d0b65
Adjust indent 2019-06-19 20:40:49 +09:00
Samuel Williams
d17344cfc5 Remove IA64 support. 2019-06-19 23:30:04 +12:00
Samuel Williams
3e5b885cd2 Rework debug conditional. 2019-06-19 20:39:10 +12:00
Samuel Williams
b24603adff Move vm stack init into thread. 2019-06-19 20:39:10 +12:00
Nobuyoshi Nakada
09a2189c1b
Adjust indent 2019-06-07 01:56:31 +09:00
Aaron Patterson
c9b74f9fd9
Pin keys in "compare by identity" hashes
Hashes that compare by identity care about the location of the object in
memory.  Since they care about the memory location, we can't let them
move.
2019-06-03 15:15:48 -07:00
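From Ruby, an identity hash's dependence on object identity (and hence, internally, on memory location) is easy to observe; `String.new` is used below to force distinct objects regardless of frozen-string-literal settings:

```ruby
h = {}.compare_by_identity
a = String.new("key")
b = String.new("key")          # equal content, different object
h[a] = 1
h[b] = 2

size   = h.size                # a and b count as distinct keys
lookup = h[String.new("key")]  # a third identity, never inserted
```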
Aaron Patterson
790a1b1790
object id is stable now for all objects, so we can let hash keys move 2019-06-03 13:38:47 -07:00
Aaron Patterson
2de3d92844
allow objects in imemo envs to move 2019-06-03 13:38:47 -07:00
NAKAMURA Usaku
ca22cccc14
get rid of a warning of VC++ 2019-06-04 03:52:53 +09:00
Koichi Sasada
c280519256 remove rb_objspace_pinned_object_p()
Nobody uses this function other than gc.c. We only need
RVALUE_PINNED().
2019-06-03 15:40:38 +09:00
git
106843d839 * expand tabs. 2019-05-30 17:12:53 +09:00
Koichi Sasada
5fc9f0008f reorder bitmap clearing. 2019-05-30 17:12:26 +09:00
Koichi Sasada
dd63d7da61 move pinned_bits[] position in struct heap_page.
pinned_bits are not used frequently (only GC.compact uses them), so
move them to the end of struct heap_page.
2019-05-30 09:10:17 +01:00
Koichi Sasada
e15de86583 introduce during_compacting flag.
Usually PINNED_BITS are not needed (only GC.compact needs them),
so skip updating PINNED_BITS if the marking is not for GC.compact.
2019-05-30 16:52:42 +09:00
Takashi Kokubun
797d7efde1
Prevent MJIT compilation from running while moving
pointers.

Instead of 4fe908c164, just locking the MJIT
worker may be fine for this case. Also, we might have the same issue
in all `gc_compact_after_gc` calls.
2019-05-29 08:56:27 +09:00
Takashi Kokubun
462a63c39e
Drop MJIT debug code from GC.compact
As ko1 added some improvements on GC.compact, I want to check if it
solved the problem too.
2019-05-29 05:10:12 +09:00
Koichi Sasada
8a2b497e3b remove obsolete rb_gc_finalize_deferred().
rb_gc_finalize_deferred() was retained for compatibility with
C extensions. However, this function has not worked since
Ruby 2.4 (it crashes immediately with SEGV).
So remove it completely.
2019-05-28 15:57:20 +09:00
Koichi Sasada
f3bddc103d use malloc() instead of calloc().
Here malloc() is enough because all elements of the page_list
will be overwritten.
2019-05-28 11:44:08 +09:00
Koichi Sasada
f9401d5d44 should skip T_ZOMBIE here. 2019-05-28 11:44:08 +09:00
Koichi Sasada
2229acaa54 should use heap_eden->total_pages.
The size of page_list is heap_eden->total_pages, but
init_cursors() assumes it is `heap_allocated_pages`.
This patch fixes it.
2019-05-28 11:44:08 +09:00