Previously, checktype only supported heap objects; however, it's not
uncommon to receive an immediate, for example when string interpolating
a Symbol or Integer.
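For example (the method name here is made up), the interpolated value
is what checktype inspects, and it can be a special constant:

def fmt(x)
  "got #{x}"  # the value of x reaches a checktype T_STRING check
end

fmt("str")  # heap object
fmt(:sym)   # immediate Symbol
fmt(42)     # immediate Integer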
This change fixes some cases where YJIT fails to fire tracing events.
Most of the situations YJIT did not handle correctly involve enabling
tracing while running inside generated code.
A new operation to invalidate all generated code is added, which uses
patching to make generated code exit at the next VM instruction
boundary. A new routine called `jit_prepare_routine_call()` is
introduced to facilitate this and should be used when generating code
that could allocate, or could otherwise use `RB_VM_LOCK_ENTER()`.
The `c_return` event is fired in the middle of an instruction as opposed
to at an instruction boundary, so it requires special handling. C method
call return points are patched to go to a function which does everything
the interpreter does, including firing the `c_return` event. The
generated code for C method calls normally does not fire the event.
Invalidated code should not change after patching so the exits are not
clobbered. A new variable is introduced to track the region of code that
should not change.
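A small Ruby sketch of the main scenario (names and the warm-up count
are illustrative; this assumes YJIT compiles the method during warm-up):

tp = TracePoint.new(:c_return) { |t| p t.method_id }

def traced(tp, enable)
  tp.enable if enable  # enabling tracing while running inside generated code
  :sym.to_s            # C method call; its c_return must still fire
end

10.times { traced(tp, false) }  # warm up so YJIT compiles the method
traced(tp, true)                # enable tracing from inside the jitted code
tp.disable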
RUBY_DEBUG has a very significant performance overhead, enough that
YJIT with RUBY_DEBUG is noticeably slower than the interpreter without
RUBY_DEBUG.
This makes it hard to collect yjit-stats in production environments.
By allowing JIT statistics to be collected without the RUBY_DEBUG
overhead, I hope to make such use cases smoother.
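A hedged usage sketch; the exact accessor name is an assumption here
(shown as YJIT.runtime_stats) and the guard keeps it harmless if it
differs:

# Run with: ruby --yjit --yjit-stats script.rb
if defined?(YJIT) && YJIT.respond_to?(:runtime_stats)
  p YJIT.runtime_stats  # JIT counters, without needing RUBY_DEBUG
end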
The FIXME is there so we remember to investigate why insns clears the
temporary array. Is this necessary? If it's not, we can remove it from
both.
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
Adds yjit support for setting global variables.
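For example (the global and method names are made up):

def set_flag
  $yjit_example_flag = true  # setglobal is now compiled instead of exiting to the interpreter
end

set_flag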
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
Co-authored-by: John Hawthorn <john@hawthorn.email>
Methods with optional parameters don't always start executing at the
first PC, but we compile all methods assuming that they do. This commit
adds a guard to ensure that we're actually starting at the first PC for
methods with optional params.
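A small illustration (names are made up):

def opt(a, b = 1)
  a + b
end

opt(1)     # optional arg omitted: entry at the first PC, default evaluated
opt(1, 2)  # optional arg supplied: entry at a later PC, so the guard side-exits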
Always using `ret` to return to the interpreter means that we never have
to check the VM_FRAME_FLAG_FINISH flag.
In the case that we return `Qundef`, the interpreter will execute the
cfp. We can take advantage of this by setting the PC to the instruction
we can't handle and letting the interpreter pick up the ball from there.
If we return a value other than `Qundef`, the interpreter will take that
value as the "return value" from the JIT and push it to the SP of the
caller.
The leave instruction puts the return value on the top of the calling
frame's stack. YJIT does the same thing for leave instructions.
However, when we're returning to the interpreter, the leave instruction
_should not_ put the return value on the top of the stack; instead it
should put it in RAX and use `ret`. This commit pops the last value off
the stack and puts it in RAX so that the interpreter is happy with the
SP.
In jit_guard_known_klass, whenever we encounter a new class we should
recompile the current instruction.
However, previously once jit_guard_known_klass had guarded for a heap
object it would not recompile for any immediate (special const) objects
arriving afterwards and would take a plain side-exit instead of a chain
guard.
This commit uses jit_chain_guard inside jit_guard_known_klass instead of
the plain side exit, so that we can recompile for any special constants
arriving afterwards.
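A minimal Ruby example of such a call site (names are made up):

def stringify(x)
  x.to_s
end

stringify("heap")  # first guards for a heap class (String)
stringify(:sym)    # immediate arriving later now goes through jit_chain_guard
stringify(42)      # another special constant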
We recently added the ability to optimize a known cfunc with custom code
generation for it.
Previously we performed this lookup with a switch statement on the
address of the c function being called. This commit swaps that out for a
hash lookup on the method definition. For now I've kept this limited
to cfuncs, but it wouldn't take significant changes to make this
work for other method types.
This implementation is similar to how the interpreter keeps track of
which BOPs (basic operations) are redefined.
This has a few advantages:
- Doesn't require the C function's symbol to be exported (they're often static)
- This could support VM_METHOD_TYPE_OPTIMIZED in the future.
- This could support VM_METHOD_TYPE_ISEQ in the future. Kernel#class
would be a good candidate for this since to yjit it will just be a
constant push as we already know the class through BBV.
- Slightly cleaner to declare
- Less tightly coupled to each method's implementation
And a couple minor trade-offs:
- The looser coupling could be seen as a disadvantage (I don't think so)
- If a cfunc is defined multiple times we would need to declare it on
each definition. ex. BasicObject#== and BasicObject#equal?. This is
rare compared to using an alias.
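For illustration only, a rough Ruby analogue of the lookup; YJIT's
actual table is C code keyed by the method definition, and the names
below (including keying by owner and name as a stand-in) are invented:

# Map resolved methods to codegen callbacks instead of switching on the
# C function's address.
KNOWN_CFUNC_CODEGEN = {
  [Kernel, :nil?]    => :gen_nil_p,
  [BasicObject, :==] => :gen_equality,
}

def codegen_for(klass, mid)
  owner = klass.instance_method(mid).owner  # where the cfunc is actually defined
  KNOWN_CFUNC_CODEGEN[[owner, mid]]
end

p codegen_for(Object, :nil?)  # => :gen_nil_p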
As an optimization, multiple objects could share the same singleton
class. The optimization introduced in 6698e433934d810b16ee3636b63974c0a75c07f0
wasn't handling this correctly, so it was generating guards that never
pass for the inputs we defer compilation to wait for. After generating
identical code multiple times and failing, the call site is falsely
recognized as megamorphic and it side exits. See disassembly for the
following before this commit:
def foo(obj)
  obj.itself
end

o = Object.new.singleton_class
foo(o)
puts YJIT.disasm(method(:foo))
See also: comment in rb_singleton_class_clone_and_attach().