Previously, YJIT returned truthy for the block given query at the top
level. That's incorrect because the top level script never receives a
block, and `yield` is a syntax error there.
Inside methods, the number of hops to get from `iseq` to
`iseq->body->local_iseq` is the same as the number of
`VM_ENV_PREV_EP(ep)` hops to get to an environment with
`VM_ENV_FLAG_LOCAL`. YJIT and the interpreter both rely on this as can
be seen in get_lvar_level(). However, this identity does not hold for
the top level frame because of vm_set_eval_stack(), which sets up
`TOPLEVEL_BINDING`.
Since only methods can take a block that `yield` goes to, have ISEQs
that are children of a non-method ISEQ return falsy for the block given
query. This fixes the issue for the top level script and is an
optimization for non-method contexts such as inside `ISEQ_TYPE_CLASS`.
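As a minimal sketch of that rule, here is a hypothetical Rust walk over a
simplified ISEQ chain; the types and the `block_possibly_given` helper are
illustrative, not YJIT's actual representation:

```rust
// Hypothetical types, not YJIT's actual ISEQ representation.
enum IseqKind {
    Method,
    Block,
    Class,
    Top,
}

struct Iseq {
    kind: IseqKind,
    parent: Option<Box<Iseq>>,
}

/// Returns false whenever the querying ISEQ is (transitively) the child of a
/// non-method ISEQ, e.g. the top level script or a class body, where there is
/// never a block for `yield` to go to.
fn block_possibly_given(iseq: &Iseq) -> bool {
    let mut cur = iseq;
    loop {
        match cur.kind {
            IseqKind::Method => return true,
            IseqKind::Class | IseqKind::Top => return false,
            IseqKind::Block => match cur.parent.as_deref() {
                Some(parent) => cur = parent,
                None => return false,
            },
        }
    }
}
```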
* ZJIT: Implement SingleRactorMode invalidation
* ZJIT: Add macro for compiling jumps
* ZJIT: Fix typo in comment
* YJIT: Fix typo in comment
* ZJIT: Avoid using unexported types in zjit.h
`enum ruby_vminsn_type` is declared in `insns.inc` and is not exported.
Using it in `zjit.h` would cause build errors when the file including it
doesn't include `insns.inc`.
This is the second part of making YJIT work with parallel GC.
During GC, `rb_yjit_iseq_mark` and `rb_yjit_iseq_update_references` need
to resolve offsets in `Block::gc_obj_offsets` into absolute addresses
before reading or updating the fields. This requires the base address
stored in `VirtualMemory::region_start`, which was previously behind a
`RefCell`. When multiple GC threads scan multiple iseqs simultaneously
(which is possible for some GC modules such as MMTk), this panics
because the `RefCell` is already borrowed.
We notice that some fields of `VirtualMemory`, such as `region_start`,
are never modified once `VirtualMemory` is constructed. We change the
type of the field `CodeBlock::mem_block` from `Rc<RefCell<T>>` to
`Rc<T>`, and push the `RefCell` into `VirtualMemory`. We extract
mutable fields of `VirtualMemory` into a dedicated struct
`VirtualMemoryMut`, and store them in a field `VirtualMemory::mutable`
which is a `RefCell<VirtualMemoryMut>`. After this change, methods that
access immutable fields in `VirtualMemory`, particularly `base_ptr()`
which reads `region_start`, will no longer need to borrow any `RefCell`.
Methods that access mutable fields still need to borrow
`VirtualMemory::mutable`, but the number of borrow operations is
strictly smaller than before, because borrows previously done in callers
(such as `CodeBlock::write_mem`) have moved into methods of
`VirtualMemory` (such as `VirtualMemory::write_bytes`).
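A rough sketch of the resulting layout, with simplified stand-in fields and
method bodies rather than the actual YJIT definitions:

```rust
use std::cell::RefCell;
use std::rc::Rc;

/// Fields that are still mutated after construction live behind a RefCell.
struct VirtualMemoryMut {
    mapped_bytes: usize, // placeholder for the real mutable bookkeeping
}

/// Fields that never change after construction live directly in the struct,
/// so reading them requires no RefCell borrow.
struct VirtualMemory {
    region_start: *mut u8,
    mutable: RefCell<VirtualMemoryMut>,
}

impl VirtualMemory {
    /// Used by GC to resolve `Block::gc_obj_offsets` into absolute addresses.
    /// Multiple GC threads can call this concurrently: no borrow involved.
    fn base_ptr(&self) -> *mut u8 {
        self.region_start
    }

    /// Mutation borrows `mutable` here, instead of callers (such as
    /// `CodeBlock::write_mem`) borrowing a RefCell around the whole struct.
    fn write_bytes(&self, offset: usize, bytes: &[u8]) {
        let mut inner = self.mutable.borrow_mut();
        inner.mapped_bytes = inner.mapped_bytes.max(offset + bytes.len());
        // ... the raw write to region_start + offset is elided ...
    }
}

struct CodeBlock {
    // Previously Rc<RefCell<VirtualMemory>>; the RefCell now sits inside.
    mem_block: Rc<VirtualMemory>,
}
```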
Some GC modules, notably MMTk, support parallel GC, i.e. multiple GC
threads working in parallel during a GC. Currently, if two GC threads
scan two iseq objects simultaneously while YJIT is enabled, both attempt
to borrow `CodeBlock::mem_block`, which results in a panic.
This commit makes one part of the change.
We now set the YJIT code memory to writable in bulk before the
reference-updating phase, and reset it to executable in bulk after that
phase. Previously, YJIT lazily set memory pages writable while updating
object references embedded in JIT-compiled machine code, and set the
memory back to executable by calling `mark_all_executable`. That
approach is inherently unfriendly to parallel GC because (1) it borrows
`CodeBlock::mem_block`, and (2) it sets the whole `CodeBlock` as
executable, which races with other GC threads that are updating other
iseq objects. It also has performance overhead due to the frequent
system calls. With the permissions of all the code memory now changed in
bulk before and after the reference-updating phase, multiple GC threads
can perform raw memory writes in parallel. We should also see a
performance improvement during moving GC because of the reduced number
of `mprotect` system calls.
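As an illustration of the bulk toggle, here is a sketch using raw `mprotect`
via the `libc` crate; YJIT goes through its `CodeBlock`/`VirtualMemory`
wrappers rather than calling libc directly like this:

```rust
use std::io::Error;

/// Flip the protection of the whole JIT code region with a single syscall.
fn protect_region(start: *mut u8, len: usize, prot: i32) {
    // SAFETY: the caller passes a page-aligned region it owns.
    if unsafe { libc::mprotect(start.cast(), len, prot) } != 0 {
        panic!("mprotect failed: {}", Error::last_os_error());
    }
}

/// Before the reference-updating phase: make all code pages writable at once,
/// so GC threads can patch embedded VALUEs with plain memory writes.
fn before_reference_updating(start: *mut u8, len: usize) {
    protect_region(start, len, libc::PROT_READ | libc::PROT_WRITE);
}

/// After the reference-updating phase: flip everything back to executable in
/// one go, instead of lazily toggling pages while patching and then calling
/// `mark_all_executable` at the end.
fn after_reference_updating(start: *mut u8, len: usize) {
    protect_region(start, len, libc::PROT_READ | libc::PROT_EXEC);
}
```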
This behaves almost exactly like a T_OBJECT; the layout is entirely
compatible.
This aims to solve two problems.
First, it solves the problem of namespaced classes having
a single `shape_id`. Now each namespaced classext
has an object that can hold the namespace-specific
shape.
Second, it opens the door to later making class instance variable
writes atomic, and hence being able to read class variables
without locking the VM.
In the future, in multi-ractor mode, we can do the write
on a copy of the `fields_obj` and then atomically swap it.
Considerations:
- Right now the `RClass` shape_id is always synchronized,
but with namespaces we should likely mark classes that have
multiple namespaces with a specific shape flag.
Previously, `asm.mov(m32, imm32)` panicked when `imm32 > 0x80000000`. It
attempted to split imm32 into a register before doing the store, but
then the register size didn't match the destination size.
Instead of splitting, use the `MOV r/m32, imm32` form which works for
all 32-bit values. Adjust asserts that assumed that all forms undergo
sign extension, which is not true for this case.
See: 54edc930f9
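A tiny illustration of the case split, using a hypothetical helper rather
than the backend's real API:

```rust
/// Hypothetical helper, not YJIT's backend API: pick the store strategy for
/// `asm.mov(m32, imm)` depending on whether the immediate fits in 32 bits.
fn mov_m32_strategy(imm: i64) -> &'static str {
    if u32::try_from(imm).is_ok() || i32::try_from(imm).is_ok() {
        // Values in 0x8000_0000..=0xFFFF_FFFF land here too: `MOV r/m32, imm32`
        // stores all 32 bits as-is, so no register-sized split is needed and
        // no sign extension takes place.
        "MOV r/m32, imm32"
    } else {
        // Immediates wider than 32 bits still go through a register first.
        "load into a register, then MOV r/m32, r32"
    }
}
```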
Followup: https://github.com/ruby/ruby/pull/13341 / [Feature #21353]
Even though `shape_id_t` has been made 32 bits, we were still limited
to using only the lower 16 bits because they had to fit alongside
`attr_index_t` inside a `uintptr_t` in inline caches.
By enlarging inline caches we can unlock the full 32 bits on all
platforms, allowing these extra bits to be used for tagging.
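Schematically (illustrative Rust structs, not the actual C cache layout):

```rust
// Illustrative Rust structs only; the real inline caches are C structs in the
// VM, and the exact packing shown here is schematic.

// Before: the shape id had to share one pointer-sized slot with the attr
// index, which limited it to the lower 16 bits.
struct PackedIvarCache {
    packed: usize, // attr_index and shape_id packed together
}

// After: the enlarged cache gives the shape id a full 32-bit slot on every
// platform, leaving its upper bits free for tagging.
struct WideIvarCache {
    shape_id: u32,
    attr_index: u32,
}
```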
Whenever we run into an inline cache miss when trying to set
an ivar, we may need to take the global lock, just to be able to
look up inside `shape->edges`.
To solve that, when we're in multi-ractor mode, we can treat
`shape->edges` as immutable. When we need to add a new
edge, we first copy the table and then replace it with
a CAS.
This increases memory allocations; however, we expect
creating new transitions to become increasingly rare over time.
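A sketch of the copy-then-CAS pattern, written in Rust with placeholder types
for brevity; the real change lives in CRuby's C shape code, where the old
table is reclaimed safely rather than leaked:

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicPtr, Ordering};

type Edges = HashMap<u32, u32>; // placeholder: edge key -> child shape id

/// Insert an edge by copying the current table and publishing the copy with
/// a compare-and-swap, so readers never need a lock.
fn insert_edge(edges: &AtomicPtr<Edges>, key: u32, child_shape_id: u32) {
    loop {
        let current = edges.load(Ordering::Acquire);
        // Treat the published table as immutable: clone it, then mutate the clone.
        let mut copy = unsafe { (*current).clone() };
        copy.insert(key, child_shape_id);
        let fresh = Box::into_raw(Box::new(copy));

        match edges.compare_exchange(current, fresh, Ordering::AcqRel, Ordering::Acquire) {
            // Published: the previous table is leaked here for simplicity;
            // safe reclamation needs GC/RCU-style deferral.
            Ok(_) => return,
            // Another thread won the race: discard our copy and retry.
            Err(_) => drop(unsafe { Box::from_raw(fresh) }),
        }
    }
}
```

The benchmark below (from the original change) exercises concurrent instance
variable writes across ractors: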
```ruby
class A
  def initialize(bool)
    @a = 1
    if bool
      @b = 2
    else
      @c = 3
    end
  end

  def test
    @d = 4
  end
end

def bench(iterations)
  i = iterations
  while i > 0
    A.new(true).test
    A.new(false).test
    i -= 1
  end
end

if ARGV.first == "ractor"
  ractors = 8.times.map do
    Ractor.new do
      bench(20_000_000 / 8)
    end
  end
  ractors.each(&:take)
else
  bench(20_000_000)
end
```
The above benchmark takes 27 seconds in Ractor mode on Ruby 3.4,
and only 1.7s with this branch.
Co-Authored-By: Étienne Barrié <etienne.barrie@gmail.com>
Previously we used a flag to set whether a module was uninitialized.
When checking whether a class was initialized, we first had to check that
it had a non-zero superclass, as well as that it wasn't BasicObject.
With the advent of namespaces, RCLASS_SUPER is now an expensive
operation, and though we could just check for the prime superclass, we
might as well take this opportunity to use a flag so that we can perform
the initialized check with as few instructions as possible.
It's possible that in the future we could prevent uninitialized classes
from being available to the user, but currently there are a few ways to
end up with one.
This commit allows building YJIT and ZJIT simultaneously, a "combo
build". Previously, `./configure --enable-yjit --enable-zjit` failed. At
runtime, though, only one of the two can be enabled at a time.
Add a root Cargo workspace that contains both the yjit and zjit crates.
The common Rust build integration mechanisms are factored out into
defs/jit.mk.
Combo YJIT+ZJIT dev builds are supported; if either JIT uses
`--enable-*=dev`, both of them are built in dev mode.
The combo build requires Cargo, but building one JIT at a time with only
rustc in a release build remains supported.
As well as `RB_OBJ_SHAPE_ID` -> `rb_obj_shape_id`,
and `RSHAPE` is now a simple alias for `rb_shape_lookup`.
I tried to turn all these into `static inline` but I'm having
trouble with `RUBY_EXTERN rb_shape_tree_t *rb_shape_tree_ptr;`
not being exposed as I'd expect.
And get rid of the `obj_to_id_tbl`
It's no longer needed: the `object_id` is now stored inline
in the object alongside instance variables.
We still need the inverse table in case `_id2ref` is invoked, but
we lazily build it by walking the heap if that happens.
Handling `object_id` is also no longer a GC implementation
concern, but part of the generic implementation.
Co-Authored-By: Matt Valentine-House <matt@eightbitraptor.com>
Ivars will no longer be the only thing stored inline
via shapes, so keeping the `iv_index` and `ivptr` names
would be confusing.
Instance variables won't be the only thing stored inline
via shapes, so keeping the `ivptr` name would be confusing.
`field` encompasses anything that can be stored in a VALUE array.
Similarly, `gen_ivtbl` becomes `gen_fields_tbl`.
Working towards having YJIT and ZJIT in the same build, we need to
deduplicate some glue code that would otherwise cause name collision.
Add jit.c for this and build it for both YJIT and ZJIT builds. Update bindgen
to look at jit.c; this shuffles some functions around in the output, but the
set of functions shouldn't have changed.
* ZJIT: Disable ZJIT instructions when USE_ZJIT is 0
* Test the order of ZJIT instructions
* Add more jobs that disable JITs
* Show instruction names in the message
Avoid generating an infinite loop in the case where:
1. Block `first` is adjacent to block `second`, and the branch from `first` to
`second` is a fallthrough, and
2. Block `second` immediately exits to the interpreter, and
3. Block `second` is invalidated and YJIT is OOM
While pondering how to fix this, I think I've stumbled on another related edge case:
1. Blocks `incoming_one` and `incoming_two` both branch to block `second`. Block
`incoming_one` has a fallthrough
2. Block `second` immediately exits to the interpreter (so it starts with its exit)
3. When Block `second` is invalidated, the incoming fallthrough branch from
`incoming_one` might be rewritten first, which overwrites the start of block
`second` with a jump to a new branch stub.
4. YJIT runs out of memory
5. The incoming branch from `incoming_two` is then rewritten, but because we're
OOM we can't generate a new stub, so we use `second`'s exit as the branch
target. However `second`'s exit was already overwritten with a jump to the
branch stub for `incoming_one`, so `incoming_two` will end up jumping to
`incoming_one`'s branch stub.
Fixes [Bug #21257]